no-problem/9910/astro-ph9910038.html
# Evolution of Dust Extinction and Supernova Cosmology

## 1. Introduction

Type Ia supernovae (SNe Ia) are known as a representative standard candle in the universe, and are used in measurements of cosmological parameters such as the Hubble constant ($H_0$), the density parameter ($\Omega_M$), and the cosmological constant ($\Omega_\Lambda$). Recently, two independent groups obtained the same result, that a $\Lambda$-dominated flat universe is strongly favored, by using a few tens of SNe Ia at redshift $z \sim 0.5$ (Riess et al. 1998 [R98]; Perlmutter et al. 1999 [P99]). Extinction by dust could affect these analyses, and both groups have made a considerable effort to assess the systematic uncertainty due to extinction. Both groups reported that there is no significant color difference between the high-$z$ and local samples. In the sample of P99, the average reddening $E(B-V)$ is $0.033 \pm 0.014$ mag for the local sample and $0.035 \pm 0.022$ for the high-$z$ sample (P99). The mean $B-V$ color of the high-$z$ R98 sample is $-0.13 \pm 0.05$ or $-0.07 \pm 0.05$, depending on which of two analysis methods is used, while the expected unreddened color is $-0.10$ to $-0.05$. However, there is a statistical uncertainty of $\gtrsim 0.03$ mag in the color difference, and in addition there is a systematic uncertainty in the K-correction of about 0.03 mag (P99, and probably a similar number for R98). Therefore, there is an observational uncertainty of at least $\sim 0.05$ mag in reddening evolution for both groups. Furthermore, these error estimates have been achieved by statistical averaging over many supernovae; the typical color uncertainty for each supernova is much larger ($\sim 0.1$–0.2 mag, see Fig. 6 of P99). Therefore, it is difficult to clearly check a systematic evolution of the average $B$-band extinction of $\Delta A_B \sim R_B \Delta E(B-V) \sim 0.2$ mag with a reddening of $\Delta E(B-V) \sim 0.05$ with the current observational data, where $R_B$ is the total-to-selective extinction ratio for the $B$ band, and $R_B \sim 4$ for the standard extinction curve. On the other hand, the difference in apparent magnitude between an open universe and a $\Lambda$-dominated flat universe is only $\sim 0.2$ mag at $z \sim 0.5$, and hence it is still possible that unchecked systematic evolution of extinction has affected the measurements of cosmological parameters. Aguirre (1999a, b) considered the effect of intergalactic dust ejected from galaxies. Such dust could have significantly grey extinction, and Aguirre has shown that this kind of dust may affect measurements of cosmological parameters while producing smaller reddening. This is an interesting possibility, but the existence of such intergalactic dust has not yet been confirmed. Here we consider the dust existing in host galaxies, with the standard extinction curve. Although it is uncertain whether supernovae themselves evolve towards high redshifts, their host galaxies should undoubtedly evolve, as shown by various observations of galaxies at high redshifts. Both the gas column density and the gas metallicity, which are the important physical quantities for dust opacity, change with time through the various star formation histories of the different morphological types of galaxies.
Therefore, the average dust extinction in host galaxies should evolve systematically, and the aim of this paper is to make a quantitative estimate of this evolution by using a realistic model of the photometric and chemical evolution of galaxies and of the supernova rate histories in the various types of galaxies. We find that the typical evolution in the average $A_B$ is $\sim$ 0.1–0.2 mag from $z=0$ to $\sim 0.5$, which is significant for measurements of cosmological parameters but may have escaped the reddening check. Therefore, this effect should not be ignored in measurements of cosmological parameters with high-$z$ SNe Ia.

## 2. Evolution of Average Extinction in Host Galaxies

In this letter we consider only the average extinction of a supernova in a host galaxy, and do not consider the variation within a galaxy depending on the supernova location. Although the variation within a host galaxy can be washed out by statistical averaging over many supernovae, the evolution of galaxies will cause a systematic evolution of the average extinction which cannot be removed by statistical averaging. It is physically natural to assume that the dust-to-metal ratio is constant, and hence that the dust opacity is proportional to the gas column density and gas metallicity of a host galaxy. In fact, it is well known that the extinction in our Galaxy is well correlated with the HI gas column density (e.g., Burstein & Heiles 1982; Pei 1992). It is also known that the dust opacity is correlated with the metallicity among the Galaxy and the Large and Small Magellanic Clouds when the gas column density is fixed (e.g., Pei 1992). Hence, in the following we assume that the dust opacity is proportional to the gas column density and gas metallicity, which evolve according to the star formation history in a galaxy. The star formation history can be inferred from the present-day properties of observed galaxies by using the well-known technique of stellar population synthesis. We can estimate the time evolution of the gas fraction and metallicity in a galaxy by using photometric and chemical evolution models for the various galaxy types. Since the observed extinction of high-$z$ SNe Ia is an average over various types of galaxies, we also need the evolution of the SN Ia rate in the various galaxy types. In the next section, we describe the model of galaxy evolution and SN Ia rate evolution used in this letter, which is constructed to reproduce various properties of present-day galaxies.

### 2.1. Evolution of galaxies and Type Ia supernova rate

We use photometric and chemical evolution models for the five morphological types E/S0, S0a-Sa, Sab-Sb, Sbc-Sc, and Scd-Sd. The basic framework of the model is the same as that of the elliptical galaxy models of Arimoto & Yoshii (1987) and the spiral galaxy models of Arimoto, Yoshii, & Takahara (1992), but the model parameters are updated to match the latest observations (Kobayashi et al. 1999), using the updated stellar population database of Kodama & Arimoto (1997) and the supernova nucleosynthesis yields of Tsujimoto et al. (1995). The model parameters for spiral galaxies are determined to reproduce the present-day gas fractions and $B-V$ colors of the various galaxy types at 15 Gyr after formation. The model of elliptical galaxies is the so-called galactic wind model, in which star formation stops at about 1 Gyr after formation because of a supernova-driven galactic wind (Larson 1974; Arimoto & Yoshii 1987).
We assume that the gas fraction in an elliptical galaxy decreases exponentially after the galactic wind time ($\sim$ 1 Gyr), with a time scale equal to the galactic wind time. These models give the evolution of the gas fraction and metallicity in each galaxy type, depending on the star formation history. The SN Ia rate history in each type of galaxy is calculated with the metallicity-dependent SN Ia model introduced by Kobayashi et al. (1998). In their SN Ia progenitor model, an accreting white dwarf (WD) blows a strong wind to reach the Chandrasekhar mass limit. If the iron abundance of the progenitors is as low as $[\mathrm{Fe/H}] \lesssim -1$, the wind is too weak for SNe Ia to occur. Their SN Ia scenario has two progenitor systems: one is a red-giant (RG) companion with an initial mass of $M_{\mathrm{RG},0} \sim 1 M_\odot$ and an orbital period of tens to hundreds of days (Hachisu, Kato, & Nomoto 1996, 1999). The other is a near-main-sequence (MS) companion with an initial mass of $M_{\mathrm{MS},0} \sim 2$–$3 M_\odot$ and a period of several tenths of a day to several days (Li & van den Heuvel 1997; Hachisu et al. 1999). The occurrence of SNe Ia is determined by two factors: the lifetime of the companion (i.e., the mass of the companion) and the iron abundance of the progenitor. (See Kobayashi et al. 1998, 1999 for details.) This model successfully reproduces the observed chemical evolution in the solar neighborhood, such as the evolution of the oxygen-to-iron ratio and the abundance distribution function of disk stars (Kobayashi et al. 1998), the present SN II and SN Ia rates in spirals and ellipticals, and the cosmic SN Ia rate at $z \sim 0.5$ (Kobayashi et al. 1999).

### 2.2. Average extinction evolution towards high redshifts

We have modeled the evolution of the gas fraction ($f_\mathrm{g}$), metallicity ($Z$), and SN Ia rate per unit baryon mass of a galaxy ($\mathcal{R}_{\mathrm{Ia}}$) in the various types of galaxies, from which we calculate the evolution of the average extinction in the universe. We assume that these quantities do not depend on the mass of the galaxy. The basic assumption is that the dust opacity, and hence the average $A_B$ in a galaxy, is proportional to the gas column density and gas metallicity. The average extinction at redshift $z$ in an $i$-th type galaxy with present-day $B$ luminosity $L_B$ is given by

$$A_{B,i}(z, L_B) = \kappa\, f_{\mathrm{g},i}(t_z)\, Z_i(t_z) \left[ r_{\mathrm{e},i}(L_B) \right]^{-2} \left( M_\mathrm{b}/L_B \right)_i L_B,$$

where $t_z$ is the time from the formation of galaxies, $r_\mathrm{e}$ is the effective radius of a galaxy, and $(M_\mathrm{b}/L_B)$ is the baryon-mass-to-light ratio, which is determined by the evolution model. For simplicity, we assume a single formation epoch $z_F$ for all galaxy types.<sup>1</sup>

<sup>1</sup> In reality, there should be some dispersion in galaxy ages. However, the systematic evolution of dust extinction is due to the fact that all galaxies should become younger on average towards high redshifts, and the present-day age dispersion cannot remove this systematic effect.

The proportionality constant $\kappa$ will be determined later. We do not consider the size evolution of galaxies, and determine $r_\mathrm{e}(L_B)$ from empirical relations observed in local galaxies (Bender et al. 1992 for ellipticals, and Mao & Mo 1998 for disk galaxies). It should be noted that the extinction depends on the absolute luminosity of the galaxy.
From the empirical $L_B$–$r_\mathrm{e}$ relation, the surface brightness becomes brighter with increasing luminosity for disk galaxies, and hence massive galaxies should be more dusty than smaller ones. This trend is consistent with observations (van den Bergh & Pierce 1990; Wang 1991). The average extinction of SNe Ia over all galaxy types at a given redshift is then

$$\langle A_B \rangle(z) = \frac{\sum_i \int dL_B\, A_{B,i}(z, L_B)\, \mathcal{R}_{\mathrm{Ia},i}(t_z) \left( M_\mathrm{b}/L_B \right)_i L_B\, \varphi_i(L_B)}{\sum_i \int dL_B\, \mathcal{R}_{\mathrm{Ia},i}(t_z) \left( M_\mathrm{b}/L_B \right)_i L_B\, \varphi_i(L_B)},$$

where $\varphi_i$ is the type-dependent galaxy luminosity function at $z=0$, for which we adopted the Schechter parameters derived by Efstathiou, Ellis, & Peterson (1988) using the catalog of the Center for Astrophysics (CfA) Redshift Survey (Huchra et al. 1983). We have to determine the overall normalization of the extinction, $\kappa$, for which we use the average $V$ extinction of the Milky Way (Sbc type, $L_B = 1.4 \times 10^{10} L_{B\odot}$) at $z=0$: $\langle A_V \rangle_{\mathrm{MW}}$. This is the average extinction of SNe Ia occurring in our Galaxy as seen by an extragalactic observer, and hence it is different, in a strict sense, from the average of the Galactic extinction, which is the extinction of extragalactic objects observed by us. However, if the location of the Sun is typical in the Milky Way, we may infer this quantity from the average of the Galactic extinction. The average Galactic extinction of the 42 SNe Ia observed by P99 is $\sim$ 0.1 mag in $A_R$ or $A_I$ (see Table 1 of P99). This suggests $\langle A_V \rangle_{\mathrm{MW}} \sim 0.1$–0.2 with the standard Galactic extinction law (e.g., Pei 1992). The average reddening of the Galaxy then becomes $\langle E(B-V) \rangle_{\mathrm{MW}} \sim 0.03$–0.06 mag, which is a typical reddening at Galactic latitudes of $\sim$ 40–50° in the Galactic extinction map (Burstein & Heiles 1982; Schlegel, Finkbeiner, & Davis 1998). This estimate is consistent with a model of the dust distribution in our Galaxy, which suggests that the average extinction of SNe Ia in the Galaxy seen by an extragalactic observer is typically $\langle A_V \rangle_{\mathrm{MW}} \sim$ 0.1–0.2 mag (Hatano, Branch, & Deaton 1998).<sup>2</sup>

<sup>2</sup> Hatano et al. suggested that most supernovae are only mildly obscured but that the extinction distribution has a long tail towards stronger extinctions. In actual observations, such a tail will be cut off by the magnitude limit of a survey. Hence, we have used here the mean extinction of the “extinction-limited subset” in Table 1 of Hatano et al., in which strongly obscured supernovae with $A_B > 0.6$ are removed.

Therefore we use $\langle A_V \rangle_{\mathrm{MW}} \sim$ 0.1–0.2 mag as a plausible range for the average extinction of our Galaxy. Figure 1 shows the evolution of the $B$ extinction for each galaxy type, as well as the average over all galaxy types, normalized by $\langle A_V \rangle_{\mathrm{MW}}$, i.e., $\langle A_B \rangle(z) / \langle A_V \rangle_{\mathrm{MW}}$. Here we used a cosmological model with $(h, \Omega_M, \Omega_\Lambda) = (0.5, 0.2, 0)$, and set $z_F = 4.5$ so that the age of galaxies is 15 Gyr, as assumed in the evolution model. The thick solid line is the average over all galaxy types, and the thin lines are for the individual galaxy types as indicated. Since we have used a galactic wind model for elliptical galaxies, they have no interstellar gas and hence there is no extinction in elliptical galaxies at $z < 1$.
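The two expressions above lend themselves to a compact numerical implementation. The following Python sketch illustrates the SN Ia rate-weighted averaging; it is our own schematic, not code from the paper, and the tabulated histories $f_\mathrm{g}$, $Z$, $\mathcal{R}_{\mathrm{Ia}}$, the Schechter parameters, and the `lookback_to_formation` helper are all placeholders that would have to come from the evolution models cited in the text.

```python
import numpy as np

def schechter(L, L_star, alpha):
    """Schechter luminosity function phi(L), arbitrary normalization."""
    x = L / L_star
    return x**alpha * np.exp(-x)

def mean_extinction(z, galaxy_types, L_grid):
    """SN Ia rate weighted mean of A_B,i(z, L_B) over galaxy types and L_B.

    `galaxy_types` is a list of dicts of callables/constants standing in for
    the model ingredients (placeholder names, not from the paper):
      f_g(t_z), Z(t_z), r_e(L), M_over_L, R_Ia(t_z), L_star, alpha.
    The overall constant kappa is fixed separately by <A_V>_MW and is dropped.
    """
    t_z = lookback_to_formation(z)   # hypothetical cosmology helper
    num = den = 0.0
    for g in galaxy_types:
        # Per-galaxy extinction: gas fraction * metallicity * column ~ r_e^-2
        A_B = (g["f_g"](t_z) * g["Z"](t_z) * g["r_e"](L_grid)**-2
               * g["M_over_L"] * L_grid)
        # Weight: SN Ia rate per baryon mass * baryon mass * luminosity function
        w = (g["R_Ia"](t_z) * g["M_over_L"] * L_grid
             * schechter(L_grid, g["L_star"], g["alpha"]))
        num += np.trapz(A_B * w, L_grid)
        den += np.trapz(w, L_grid)
    return num / den   # in the units fixed by kappa
```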
The evolution of the extinction is driven by the spiral galaxies, but the behavior of the evolution depends considerably on galaxy type. Early-type spiral galaxies become more dusty towards $z \sim 1$, while the opposite trend is seen for late types. These behaviors can be understood as a competition between two effects: gas fraction evolution and metallicity evolution. The gas fraction in early spiral galaxies is much smaller than in late types at present, but it increases rapidly towards high redshifts. This increase is responsible for the increase of the gas column density and hence of the dust opacity. On the other hand, the gas fraction does not increase as much in late-type galaxies, and the decrease of metallicity towards high redshifts is responsible for the decrease of the dust opacity. At redshifts above 1, the extinction decreases towards higher redshifts in all spiral galaxies because the metallicity evolution becomes dominant. The average over all types is weighted by the SN Ia rate in each type. Because the star formation rate increases more rapidly towards high redshifts in early-type spiral galaxies than in late types, the average extinction is weighted more towards early types at higher redshifts. Hence $\langle A_B \rangle / \langle A_V \rangle_{\mathrm{MW}}$ increases towards high redshifts, by $\sim 1$ from $z=0$ to 0.5. This result suggests that, with $\langle A_V \rangle_{\mathrm{MW}} \sim$ 0.1–0.2, the average extinction $A_B$ of SNe Ia at $z \sim 0.5$ is larger than that of the local sample by about 0.1–0.2 mag. This systematic evolution of the average extinction is comparable to the difference between an open and a $\Lambda$-dominated universe in the Hubble diagram, and hence this effect significantly affects measurements of cosmological parameters. In the next section we apply the above model to the estimation of cosmological parameters using the sample of P99.

## 3. Effect on the Cosmological Parameters

Figure 2 shows the Hubble diagram for the SNe Ia of the primary fit C of P99, plotted as restframe $B$ magnitude residuals from a $\Lambda$-dominated flat cosmology [$(h, \Omega_M, \Omega_\Lambda) = (0.65, 0.2, 0.8)$] without the dust effect (thin solid line). The thin long- and short-dashed lines are the predictions of the dust-free case for an open universe (0.5, 0.2, 0.0) and the Einstein-de Sitter (EdS) universe (0.5, 1.0, 0.0), respectively. As reported by P99, the $\Lambda$-dominated flat universe gives the best fit to the data. Next, the thick lines show the predictions when the model of extinction evolution is taken into account, where the cosmological parameters are the same as for the dust-free curves with the same line markings. Here we adopt $\langle A_V \rangle_{\mathrm{MW}}$ = 0.2 mag. In the open and $\Lambda$-dominated models, the galaxy formation epoch is set to $z_F = 4.5$ and 5.0, respectively, so that the age of galaxies becomes 15 Gyr. In the EdS model, the age of the universe is shorter than 15 Gyr, and hence we set $z_F = 5$, which gives an age of galaxies of $\sim$ 12 Gyr. Although this age is a little shorter than that assumed in the evolution model, the evolutionary effect during 12–15 Gyr is small and hence this inconsistency is not serious. As expected, the model curves with the dust effect are typically 0.1–0.2 mag fainter than those without dust. As a result, the open universe becomes the most favored cosmology among the three when the extinction evolution is taken into account.
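As a back-of-the-envelope check of the magnitude differences at play here, the short Python sketch below (our own illustration, not code from the paper) computes the difference in distance modulus at $z = 0.5$ between the $\Lambda$-dominated flat and open cosmologies above, ignoring the overall $H_0$ normalization that a real fit would marginalize over.

```python
import numpy as np
from scipy.integrate import quad

def dimensionless_lum_distance(z, omega_m, omega_l):
    """H0 * d_L / c for an FRW cosmology (flat or open); our own helper."""
    omega_k = 1.0 - omega_m - omega_l
    E = lambda zp: np.sqrt(omega_m * (1 + zp)**3
                           + omega_k * (1 + zp)**2 + omega_l)
    dc, _ = quad(lambda zp: 1.0 / E(zp), 0.0, z)   # comoving distance
    if omega_k > 0:                                 # open geometry
        dc = np.sinh(np.sqrt(omega_k) * dc) / np.sqrt(omega_k)
    return (1 + z) * dc

# Magnitude difference at z = 0.5 between Lambda-flat (0.2, 0.8)
# and open (0.2, 0.0), independent of H0:
dmu = 5 * np.log10(dimensionless_lum_distance(0.5, 0.2, 0.8)
                   / dimensionless_lum_distance(0.5, 0.2, 0.0))
print(f"{dmu:.2f} mag")  # roughly a quarter magnitude, the same order as the
                         # ~0.1-0.2 mag extinction evolution found above
```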
We avoid a more detailed statistical analysis to derive any decisive conclusion about the cosmological parameters, because the result would be highly dependent on the extinction evolution model. However, the evolution model presented here is quite a natural and standard one, without any exotic assumption. Hence, our conclusion is that the systematic evolution of the average extinction in host galaxies should be taken into account more carefully when one uses SNe Ia to constrain the cosmological parameters.

## 4. Discussion

P99 estimated the systematic uncertainty due to extinction to be less than 0.025 mag ($1\sigma$) in $A_B$, based on an analysis performed after removing the nine reddest supernovae. Aguirre (1999b) argued that this limit does not apply if the dispersion in the brightness and/or colors of high-$z$ supernovae is dominated by factors other than extinction. As noted above, the observational uncertainty in $E(B-V)$ for each supernova is typically $\sim$ 0.1–0.2 mag, which is larger than the systematic reddening of $E(B-V) \sim$ 0.025–0.05 considered in this letter. Therefore, it is doubtful that the P99 analysis successfully removed the high-$z$ supernovae reddened by the systematic extinction evolution. Rather, the supernovae removed by P99 might have been strongly reddened depending on their locations within their host galaxies, or for other reasons, as argued by Aguirre (1999b). Hence, we consider that the systematic evolution of dust considered here has not yet been checked by the observations. In the analysis of R98, the reddening correction is systematically included in the light-curve-shape fitting, and one might consider that our result does not apply to the R98 analysis. However, as noted in the Introduction, one must correct a reddening of $E(B-V) \sim$ 0.05 to discriminate between open and $\Lambda$ cosmologies, and this is comparable to the observational error of the color estimates. In principle, it is difficult to correct for the reddening effect when the reddening is as small as the observational error in the colors, because the reddening correction is based on the observed colors. The reddening correction of R98 will be effective for supernovae reddened strongly beyond the color uncertainty, but it is not clear to us whether this correction has successfully removed the systematic reddening evolution discussed in this letter. The two groups (R98, P99) argued that the observed dispersion of apparent magnitudes, which shows no significant evolution to high redshifts, gives further support that their results are not affected by extinction. Their argument based on the dispersion test is valid if the observed dispersion is dominated by the dispersion of extinction, and R98 claimed that the observed dispersion is smaller than that expected from the dust distribution model of Hatano et al. (1998) if the mean extinction is $\sim$ 0.2 mag. However, there is large uncertainty in dust distribution models, and the selection effect might have reduced the apparent dispersion (see footnote 2). We should also be careful about the uncertainty in the observed ‘intrinsic’ dispersion, which was derived by subtracting the measurement error from the actually observed dispersion (P99). Hence, this test does not favor our scenario, but it cannot strictly reject it either.
We suggest that the best way to constrain the cosmological parameters with high-$z$ SNe Ia would be an analysis using only SNe Ia in elliptical galaxies, in which the dust evolution effect is much smaller than in spiral galaxies at $z < 1$. In fact, P99 tried to analyze their supernovae with known host-galaxy types, and found no significant change in the best-fit cosmology. However, the host-galaxy classification is based only on spectra of the host galaxies, without high-resolution images. The uncertainty of a fit with a specified host-galaxy type is still large owing to the limited number of supernovae, and P99 concluded that this test will have to await the host-galaxy classification of the full set of high-$z$ supernovae and a larger low-$z$ supernova sample. Our calculation leads us to expect that the result of such an analysis in the future will depend on host-galaxy type, giving important information on the chemical evolution of galaxies. High-$z$ supernovae beyond $z \sim 1$ are also desirable, for the study of galaxy evolution as well as of the cosmological parameters.
no-problem/9910/hep-th9910225.html
INFNCA-TH9909
October 1999

# Reply Comment on “Entropy of 2D black holes from counting microstates”

Mariano Cadoni<sup>a,c,∗</sup> and Salvatore Mignemi<sup>b,c,∗∗</sup>

<sup>a</sup> Dipartimento di Fisica, Università di Cagliari, Cittadella Universitaria, 09042 Monserrato, Italy.
<sup>b</sup> Dipartimento di Matematica, Università di Cagliari, viale Merello 92, 09123 Cagliari, Italy.
<sup>c</sup> INFN, Sezione di Cagliari.

## Abstract

We show that the arguments proposed by Park and Yee against our recent derivation of the statistical entropy of 2D black holes do not apply to the case under consideration.

PACS: 04.70.Dy, 04.50.+h

<sup>∗</sup> E-Mail: CADONI@CA.INFN.IT
<sup>∗∗</sup> E-Mail: MIGNEMI@CA.INFN.IT

In a recent Comment, Park and Yee claimed that the derivation of the statistical entropy of 2D (two-dimensional) black holes proposed by us in Ref. is plagued by an error. In this Reply Comment we show that the arguments used by Park and Yee in Ref. against our derivation do not apply to our case. Before going into the details of the confutation of the claim of Ref. , let us briefly explain the arguments of Park and Yee. In our attempt to calculate the statistical entropy of the 2D anti-de Sitter (AdS) black hole along the lines of Ref. , we encountered a major difficulty: owing to the dimension of the boundary, the charges $J[\chi]$ (Eq. (18) of Ref. ) do not support a realization of the Virasoro algebra (the asymptotic symmetries of 2D AdS space). This problem is not a peculiarity of the 2D case, but shows up also in higher dimensions. To solve the problem we proposed to define the new, time-integrated, generators $\widehat{J}[\chi]$ (Eq. (22) of Ref. ). Moreover, we were able to show that the Dirac bracket algebra of the charges $\widehat{J}[\chi]$ gives a central extension of the Virasoro algebra, and to calculate its central charge. To compute the central charge of the algebra we used the equation

$$\widehat{\delta_\omega J[\chi]} = \widehat{J}[[\chi,\omega]] + c(\chi,\omega), \qquad (1)$$

where the hat denotes an overall time integration. Park and Yee claim that the left-hand side of Eq. (1) cannot be written as

$$\{\widehat{J}[\chi], \widehat{J}[\omega]\}_{DB}, \qquad (2)$$

thus invalidating our result that the charges $\widehat{J}$ span a representation of the Virasoro algebra. The demonstration of Park and Yee relies on two assumptions that need to be generalized if one wants to interpret the time-integrated charges $\widehat{J}$ consistently as generators of an algebra. First, the generators of the asymptotic symmetry can no longer be identified with the phase space functionals $H[\chi]$ (Eq. (17) of Ref. ), but rather with the time-integrated ones $\widehat{H}[\chi]$. Second, the usual definition of the Poisson brackets, as brackets evaluated at equal times, has to be generalized in order to allow for general brackets

$$\{\widehat{H}[\chi], \widehat{H}[\omega]\}_{PB}, \qquad (3)$$

where $\widehat{H}[\chi]$ is a time-integrated functional. This generalization of the objects of the canonical formalism is implicitly contained in the definition of the charges $\widehat{J}[\chi]$. Moreover, this is what is needed in order to recognize Eq. (1) as a canonical realization of the asymptotic symmetries. We do not know whether in this framework the charges $\widehat{J}[\chi]$ have a sensible interpretation as Noether charges.
This is irrelevant for our purposes, since we are just looking for a canonical realization of the asymptotic symmetries that allows us to compute the central charge of the algebra. Using the generalized notions of canonical generators and Poisson brackets defined above, we can easily prove that the left-hand side of Eq. (1) can be written as a Dirac bracket algebra. One just needs to compute explicitly the brackets $\{\widehat{H}[\chi], \widehat{H}[\omega]\}_{PB}$. One finds:

$$\{\widehat{H}[\chi], \widehat{H}[\omega]\} = \widehat{H}[[\chi,\omega]] + c(\chi,\omega), \qquad (4)$$

where the central charge has exactly the same value found in Ref. . Fixing the gauge so that the constraints hold strongly, and using Eq. (17) of Ref. , the previous equation implies

$$\{\widehat{J}[\chi], \widehat{J}[\omega]\}_{DB} = \widehat{J}[[\chi,\omega]] + c(\chi,\omega). \qquad (5)$$

Comparing this equation with Eq. (1), it follows immediately that the left-hand side of the latter can be written as the Dirac bracket in Eq. (2). Let us now show explicitly that the calculations used by Park and Yee in Ref. to support their claim are inconsistent with our definitions of generators and Poisson brackets. From Eq. (4) it follows immediately that the canonical generators of the Virasoro algebra are the functionals $\widehat{H}[\chi]$ rather than $H[\chi]$. Therefore Eq. (6) of Ref. , which is the starting point of the demonstration of Park and Yee, does not apply. The right equation to be used here is instead:

$$\{J[\chi], \widehat{H}[\omega]\}_{DB} = \{J[\chi], \widehat{J}[\omega]\}_{DB} = \delta_\omega J[\chi]. \qquad (6)$$

Following Park and Yee, we now perform the time integration of Eq. (6). The left-hand side becomes

$$\frac{\lambda}{2\pi} \int_0^{\frac{2\pi}{\lambda}} dt'\, \{J[\chi(t')], \widehat{J}[\omega]\}_{DB} = \{\widehat{J}[\chi], \widehat{J}[\omega]\}_{DB}, \qquad (7)$$

from which it follows immediately that the left-hand side of Eq. (1) can be written as a Dirac bracket.
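The only properties used in the last step are the definition of the hat as a time average and the linearity of the Dirac bracket in its first argument; written out under those two assumptions (our reading of the definitions above, not a statement from the original Comment):

```latex
% Sketch of the step behind Eq. (7), assuming
% \widehat{J}[\chi] = (\lambda/2\pi)\int_0^{2\pi/\lambda} dt'\, J[\chi(t')]
% and linearity of the Dirac bracket in each argument.
\begin{aligned}
\frac{\lambda}{2\pi}\int_0^{2\pi/\lambda} dt'\,
   \bigl\{ J[\chi(t')],\, \widehat{J}[\omega] \bigr\}_{DB}
&= \Bigl\{ \tfrac{\lambda}{2\pi}\int_0^{2\pi/\lambda} dt'\, J[\chi(t')],\,
           \widehat{J}[\omega] \Bigr\}_{DB} \\
&= \bigl\{ \widehat{J}[\chi],\, \widehat{J}[\omega] \bigr\}_{DB}.
\end{aligned}
```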
no-problem/9910/cond-mat9910429.html
# Spin dynamics simulations of the magnetic dynamics of RbMnF₃ and direct comparison with experiment

## I Introduction

Heisenberg models are examples of magnetic systems that have true dynamics, with the real-time dynamics governed by coupled equations of motion. According to the classification of the different dynamic universality classes proposed by Hohenberg and Halperin in their work on the theory of dynamic critical phenomena, the classical Heisenberg ferromagnet and antiferromagnet are of class J and class G, respectively. Although both classes have true dynamics, in the former class the order parameter (the uniform magnetization) is conserved, whereas in the latter the order parameter (the staggered magnetization) is not a conserved quantity. The difference in the dynamic behavior of the Heisenberg ferromagnet and antiferromagnet can already be seen from linear spin-wave theory, which predicts different low-temperature dispersion curves for the two models. Dynamic critical behavior is describable in terms of a dynamic critical exponent $z$ which depends on conservation laws, the lattice dimension, and the static critical exponents. The static critical behavior of three-dimensional Heisenberg models has been studied using a variety of approaches, including a high-resolution Monte Carlo simulation which determined the critical temperature and the static critical exponents for simple cubic and body-centered-cubic lattices. In contrast, the theory of the dynamic behavior of Heisenberg models is not so well understood. A very close realization of an isotropic three-dimensional Heisenberg antiferromagnet is RbMnF₃. Early experimental studies have shown that in RbMnF₃ the Mn²⁺ ions, with spin $S = 5/2$, form a simple cubic lattice structure with a nearest-neighbor exchange constant of $J^{exp} = (0.58 \pm 0.06)$ meV and a second-neighbor constant of less than $0.04$ meV [both defined using our convention for the exchange constant, shown in Eq. (14)]. Magnetic ordering with antiferromagnetic alignment of spins occurs below 83 K, which we denote as the critical temperature $T_c$. In this material, the magnetic anisotropy was found to be only about $6 \times 10^{-6}$ of the exchange field, and no deviation from cubic symmetry was seen at $T_c$. Both the static properties and the dynamic response of RbMnF₃ have been examined through neutron scattering experiments. The early work of Tucciarone et al. found that in the critical region the neutron scattering function has a central peak (a peak at zero frequency transfer) and a spin-wave peak. Later, the experimental study by Cox et al. observed a small central peak below $T_c$ as well. The more recent experiments by Coldea et al. have also found central peaks for $T \le T_c$, in agreement with the previous experiments. On the theoretical side, renormalization-group (RNG) theory below $T_c$ predicts spin-wave peaks and a central peak in the longitudinal component of the neutron scattering function. However, at the critical temperature both renormalization-group and mode-coupling theories predict only the spin-wave peak, i.e., the central peak has not been predicted by these theories. The central peak is thought to be due to spin diffusion resulting from nonlinearities in the dynamical equations. In their recent experiment, Coldea et al. obtained the most precise experimental estimate of the dynamic critical exponent, $z = (1.43 \pm 0.04)$.
The theoretical prediction is $z = 1.5$ for class G models in three dimensions. Large-scale computer simulations using spin-dynamics techniques to study the dynamic behavior of the Heisenberg ferromagnet and antiferromagnet have been carried out by Chen and Landau and by Bunker et al., respectively. So far, however, there are no direct comparisons of the dispersion curve and the dynamic structure factor lineshapes obtained from simulations with the corresponding experimental results. In the present work we have carried out large-scale simulations of the dynamic behavior of the Heisenberg antiferromagnet on a simple cubic lattice, and we make direct comparisons with experimental data for RbMnF₃. Sec. II of this paper contains the definition of the model and introduces some simulation background. In Sec. III we present and discuss our simulation results and compare them with experiment. Sec. IV contains some concluding remarks.

## II Model and Methods

### A Model

The classical Heisenberg antiferromagnetic model is defined by the Hamiltonian

$$\mathcal{H} = J \sum_{\langle \mathbf{r}\mathbf{r}' \rangle} \mathbf{S}_\mathbf{r} \cdot \mathbf{S}_{\mathbf{r}'}, \qquad (1)$$

where $\mathbf{S}_\mathbf{r} = (S_\mathbf{r}^x, S_\mathbf{r}^y, S_\mathbf{r}^z)$ is a three-dimensional classical spin of unit length at site $\mathbf{r}$ and $J > 0$ is the antiferromagnetic coupling constant between nearest-neighbor pairs of spins. We consider $L \times L \times L$ simple cubic lattices with periodic boundary conditions. The time dependence of each spin can be determined by integrating the equations of motion [Eq. (2) in Ref.], and the dynamic structure factor $S(\mathbf{q},\omega)$ for momentum transfer $\mathbf{q}$ and frequency transfer $\omega$, observable in neutron scattering experiments, is given by

$$S^k(\mathbf{q},\omega) = \sum_{\mathbf{r},\mathbf{r}'} \exp[i\mathbf{q}\cdot(\mathbf{r}-\mathbf{r}')] \int_{-\infty}^{+\infty} \exp(i\omega t)\, C^k(\mathbf{r}-\mathbf{r}',t)\, \frac{dt}{\sqrt{2\pi}}, \qquad (2)$$

where $C^k(\mathbf{r}-\mathbf{r}',t)$ is the space-displaced, time-displaced spin-spin correlation function defined, with $k = x, y,$ or $z$, as

$$C^k(\mathbf{r}-\mathbf{r}',t) = \langle S_\mathbf{r}^k(t)\, S_{\mathbf{r}'}^k(0) \rangle - \langle S_\mathbf{r}^k(t) \rangle \langle S_{\mathbf{r}'}^k(0) \rangle. \qquad (3)$$

The displacement $\mathbf{r}$ is in units of the lattice unit cell length $a$. In the case of antiferromagnets, the wave-vectors are measured with respect to the $(\pi,\pi,\pi)$ point, which corresponds to the Brillouin zone center.

### B Simulation method

Using a combination of Monte Carlo and spin-dynamics methods, we simulated the behavior of the simple-cubic classical Heisenberg antiferromagnet with $12 \le L \le 60$ at the critical temperature $T_c = 1.442929(77)J$ and below $T_c$. We have chosen units such that the Boltzmann constant $k_B = 1$. Equilibrium configurations were generated using a hybrid Monte Carlo method in which a single hybrid Monte Carlo step consisted of two Metropolis steps and eight overrelaxation steps. Typically 1000 hybrid Monte Carlo steps were used to generate each equilibrium configuration, and the coupled equations of motion were then integrated numerically, using these states as initial spin configurations. The numerical integrations were performed to a maximum time $t_{max}$, using a time step of $\Delta t$. The space-displaced, time-displaced spin-spin correlation function $C^k(\mathbf{r}-\mathbf{r}',t)$ was computed for time displacements ranging from $0$ to $t_{cutoff}$. Each such correlation was calculated as an average over between 40 and 80 different starting times, evenly spaced by $10\Delta t$.
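As a concrete illustration of how Eqs. (2)–(3) can be evaluated from stored spin configurations, here is a minimal NumPy sketch for the $(q,0,0)$ direction; the array layout, function name, and the periodogram estimate of the time Fourier transform (averaging over time origins, as done in the paper) are our own choices, and overall normalization constants are omitted.

```python
import numpy as np

def structure_factor_x_direction(spins):
    """Estimate S^k(q, omega) of Eq. (2) along (q,0,0) from a time series.

    spins: array of shape (n_t, L, L, L, 3) holding S_r^k(t) for one run;
    the names and layout are illustrative assumptions, not from the paper.
    """
    n_t = spins.shape[0]
    # Spatial Fourier transform along x only, i.e. q = (q,0,0); summing
    # freely over y and z is done here as a mean over those axes.
    s_q_t = np.fft.fft(spins, axis=1).mean(axis=(2, 3))    # (n_t, L, 3)
    # Subtract the static part, cf. the second term of Eq. (3).
    s_q_t -= s_q_t.mean(axis=0, keepdims=True)
    # Temporal Fourier transform; |...|^2 is the periodogram estimate of
    # the time integral in Eq. (2), averaged over time origins.
    s_q_w = np.fft.fft(s_q_t, axis=0)                      # (n_t, L, 3)
    # Average the three spin components, cf. Eq. (5) below.
    return (np.abs(s_q_w) ** 2).mean(axis=-1) / n_t

# Usage: spins from the spin-dynamics integration,
# e.g. an array of shape (2000, 12, 12, 12, 3) for L = 12.
```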
Our unit of time is set by the interaction constant $J$ defined in Eq. (1). The parameters used in this work were as follows. At $T = 0.9T_c$ we used lattice sizes $L = 12, 24, 36, 48$, and $60$, for which the respective time parameters were $(t_{max}J, t_{cutoff}J) = (440,400), (440,400), (480,400), (680,600)$, and $(1080,1000)$. The respective numbers of initial configurations used were $7020$, $2850$, $1110$, $400$, and $810$. At $T_c$, for lattice sizes $L = 18, 24, 30, 36, 48$ we used $t_{max}J = 480$ and $t_{cutoff}J = 400$, and for $L = 60$ we used $t_{max}J = 1080$ and $t_{cutoff}J = 1000$. The number of initial configurations used at $T_c$ was $1000$ for $L = 18, 30, 36$, whereas for $L = 24, 48$, and $60$ the respective numbers were $1125$, $715$, and $510$. The other temperatures considered, chosen to coincide with experimental values, were $T = 0.774T_c$, $0.846T_c$, and $0.936T_c$, for which $t_{max}J = 480$, $t_{cutoff}J = 400$, and the number of initial configurations used was $120$. For lattice size $L = 24$ at $T = 0.9T_c$ the integration was carried out with a time step $\Delta t = 0.01J^{-1}$ using a fourth-order predictor-corrector method, as in Refs. . For the other lattice sizes and temperatures, we used a new algorithm based on fourth-order Suzuki-Trotter decompositions of exponential operators, with a time step $\Delta t = 0.2J^{-1}$ (except for $L = 12$ at $T = 0.9T_c$, for which $\Delta t = 0.1J^{-1}$). The latter method allowed us to use larger integration time steps and thus to carry out the integration to larger $t_{max}$ than in previous work. In the original study of Chen and Landau $t_{max} = 120J^{-1}$, and in the work of Bunker et al. $t_{max} = 200J^{-1}$. In comparison, in the present work we have used a larger $t_{max} \ge 440J^{-1}$, more initial equilibrium spin configurations, and a larger lattice size, namely $L = 60$. As is clear from the above, we have concentrated our efforts on two cases: one temperature below $T_c$, namely $T = 0.9T_c$, and $T_c$ itself. We also used the technique of calculating partial spin sums “on the fly”, which limits us to data in the $(q,0,0)$, $(q,q,0)$, and $(q,q,q)$ directions, with $q$ determined by the periodic boundary conditions,

$$q = \frac{2\pi n}{L}, \qquad n = \pm 1, \pm 2, \ldots, \pm(L-1), L. \qquad (4)$$

Since all three Cartesian spatial directions are equivalent by symmetry, the same operation carried out for the $(q,0,0)$ direction was also carried out for the other two reciprocal lattice directions $(0,q,0)$ and $(0,0,q)$, and the results were averaged. Similarly, the operations carried out for the $(q,q,0)$ and $(q,q,q)$ directions were also carried out for the equivalent reciprocal lattice directions, and the results were averaged in each case. For the ferromagnetic case the total magnetization is conserved, and the dynamic structure factor $S(\mathbf{q},\omega)$ can be separated into a component along the axis of the total magnetization (the longitudinal component) and a transverse component. However, for the antiferromagnet the order parameter is not conserved, and this separation is not possible. We thus refer to the average

$$S(\mathbf{q},\omega) = \frac{1}{3}\left[ S^x(\mathbf{q},\omega) + S^y(\mathbf{q},\omega) + S^z(\mathbf{q},\omega) \right] \qquad (5)$$

as the dynamic structure factor.

### C Dynamic finite-size scaling

Two practical limitations on spin-dynamics techniques imposed by limited computer resources are the finite lattice size and the finite evolution time.
The finite time cutoff can introduce oscillations in $S(\mathbf{q},\omega)$, which can be smoothed out by convoluting the spin-spin correlation function with a resolution function in frequency. Finite-size scaling can be used to extract the dynamic critical exponent $z$ using

$$\frac{\omega\, \overline{S}_L^k(\mathbf{q},\omega)}{\overline{\chi}_L^k(\mathbf{q})} = G(\omega L^z, qL, \delta_\omega L^z) \qquad (6)$$

and

$$\overline{\omega}_m = L^{-z}\, \overline{\Omega}(qL, \delta_\omega L^z), \qquad (7)$$

where $\overline{S}_L^k(\mathbf{q},\omega)$ is the dynamic structure factor convoluted with a Gaussian resolution function with characteristic parameter $\delta_\omega$, and $\overline{\omega}_m$ is a characteristic frequency, defined by

$$\int_{-\overline{\omega}_m}^{\overline{\omega}_m} \overline{S}_L^k(\mathbf{q},\omega)\, \frac{d\omega}{2\pi} = \frac{1}{2} \overline{\chi}_L^k(\mathbf{q}), \qquad (8)$$

where $\overline{\chi}_L^k(\mathbf{q})$ is the total integrated intensity. For the time cutoffs of at least $400J^{-1}$ used in the present simulations, the oscillations in the dynamic structure factor due to the finite $t_{cutoff}$ were not very significant. Thus, we first estimate the dynamic critical exponent $z$ without using a resolution function, or equivalently, taking $\delta_\omega = 0$. In this case the analysis is simpler, and a value for $z$ can be obtained from the slope of a graph of $\log(\omega_m)$ vs $\log(L)$ (where $\omega_m$ is the characteristic frequency in the absence of a resolution function) if the value of $qL$ is fixed. It is important to note that the lattice sizes included in this calculation should be large enough to be in the asymptotic-size regime. The approximate lattice size at the onset of the asymptotic regime can be estimated by looking at the behavior of $\omega L^z$ for different lattice sizes using trial values of $z$. The effect of the small oscillations in the dynamic structure factor on the dynamic critical exponent $z$ can be evaluated by repeating the analysis using a resolution function. For this purpose, we chose

$$\delta_\omega = 0.02 \left[ \frac{60}{L} \right]^z \qquad (9)$$

so that the function $\overline{\Omega}(qL, \delta_\omega L^z)$ in Eq. (7) is a constant if $qL$ is fixed, yielding

$$\overline{\omega}_m \propto L^{-z}. \qquad (10)$$

Because $\delta_\omega$ depends on $z$, this exponent had to be determined iteratively. We used $n = 1, 2$ and several initial values of $z$ in the iterations, in order to check how stable the converged value of $z$ is.

## III Results

### A Numerical data for $S(\mathbf{q},\omega)$

For $T \le T_c$ our results for the dynamic structure factor, as defined in Eq. (5), show a spin-wave peak and a central peak. In Fig. 1 we show lineshapes for lattice size $L = 60$ at $T = 0.9T_c$ and wave-vectors $q = \pi/10$, $\pi/6$, and $\pi/3$ in the direction. We see that as $q$ increases, the central peak broadens and its relative amplitude increases. Fig. 2 shows lineshapes for $L = 60$ at $T_c$ and wave-vectors $q = \pi/10$ and $\pi/6$ in the direction. It is clear from these lineshapes that the oscillations due to the finite $t_{cutoff}$ are indeed negligible; therefore, in our analysis of the lineshapes we have not convoluted our results with a resolution function. (As explained in the following section, we later convoluted our results with a Gaussian resolution function in order to compare our lineshapes directly with the experiments.
The reason for this is that there is an intrinsic finite resolution in the experimental data due to the finite divergence of the neutron beams.) The width of any structure in the lineshapes discussed here is much larger than our resolution in frequency, $\Delta\omega = 1.2\pi/t_{cutoff}$. Below $T_c$, previous theoretical and experimental studies motivated us to extract the position and half-width of the spin-wave and central peaks by fitting the lineshape to a Lorentzian form,

$$S(\mathbf{q},\omega) = \frac{A\,\Gamma_1^2}{\Gamma_1^2 + \omega^2} + \frac{B\,\Gamma_2^2}{\Gamma_2^2 + (\omega + \omega_s)^2} + \frac{B\,\Gamma_2^2}{\Gamma_2^2 + (\omega - \omega_s)^2}, \qquad (11)$$

where the first term corresponds to the central peak and the last two terms are the contributions from the spin-wave creation and annihilation peaks at $\omega = \pm\omega_s$. For $T = 0.9T_c$ we find that Lorentzian lineshapes fit our results reasonably well for small values of $q$, except for the smallest value, namely $q = 2\pi/L$, in the direction. The reason is that, for each lattice size $L$, the dynamic structure factor for the smallest value of $q$ corresponds to correlations between spins displaced in space by a distance $L/2$, and the effect of the finite lattice size is particularly prominent in these cases, causing the lineshapes to depart significantly from a Lorentzian form. For large values of $q$ (approximately $q > 2\pi(L/4 - 2)/L$) the Lorentzian form given in Eq. (11) does not fit the data, especially at large frequency transfer. In general, the fitted parameters varied when different frequency ranges were used in the fit. Although this variation was small, it was often larger than the statistical error in the fitted parameters obtained from a fit using a single frequency range. Therefore, for $T = 0.9T_c$ we estimated the error in the fitted parameters by fitting the lineshapes using three different ranges of frequency. The values of the parameters were then averaged and the error bars estimated from the fluctuations. At $T_c$, renormalization-group (RNG) theory predicts a non-Lorentzian functional form for the spin-wave lineshape, which has been used along with a Lorentzian central peak to analyze experimental data. Since it is more complicated to perform fits to this RNG functional form, and since the spin-wave peaks obtained from the simulations are more pronounced than in the experiment, and thus less dependent on the fitted functional form, we have fitted the lineshapes at $T_c$ to Lorentzians, as given in Eq. (11). Although obtaining a good fit to our data at $T_c$ was more difficult than below $T_c$, the resulting fits at $T_c$ are still reasonable. Unlike for $T = 0.9T_c$, at $T_c$ the lineshape parameters used in the analysis below are the values obtained from a fit to only one frequency range, the one that gave the best fit. One should therefore expect the actual error in the fitted parameters at $T_c$ to be larger (by up to a factor of 5) than the error bars shown in the figures below. In addition to the spin-wave and central peaks, we observed some other peaks in the large-frequency tails of the spin-wave peaks. Although these large-frequency peaks had very small amplitudes, they could be discerned from the background fluctuations. Using the spin-wave frequencies in the $(q,0,0)$, $(q,q,0)$, and $(q,q,q)$ directions, we could check that the positions of these extra peaks corresponded to the frequencies of two-spin-wave addition peaks.
These extra structures in the lineshapes were particularly visible for the smallest values of the wave-vector. Fig. 3 shows how the dispersion curve varies as the temperature increases from $T = 0.774T_c$ to $T_c$. The dispersion curves illustrated here are for the direction and are plotted up to $q = \pi/2$, which corresponds to one half of the Brillouin zone. As mentioned before, for larger values of $q$ the Lorentzian in Eq. (11) did not yield good fits to the lineshapes; however, the spin-wave positions could still be read directly off the graphs, although with larger error bars. For our present purpose of observing the approach to the critical region as the temperature is raised from below $T_c$, it suffices to consider the dispersion curve for wave-vectors up to $q = \pi/2$. Well below $T_c$, the dispersion relation is linear for small $q$; as $T \to T_c$, it changes gradually from linear to a power-law behavior of the form

$$\omega_s = A_s q^x. \qquad (12)$$

For $T = 0.9T_c$ and $L = 60$, a fit of the smallest five $q$ values of the dispersion curve to Eq. (12) yielded $x = 1.017 \pm 0.003$. As we probed further away from the Brillouin zone center by including larger values of $q$ in the fit, the exponent decreased slightly. In order to check how sensitive the fitted exponent is to the particular form of the fitted function, we performed new fits to a function which includes a quadratic term, i.e.,

$$\omega_s = A_s q^x + B_s q^2. \qquad (13)$$

Fitting the smallest five $q$ values of the dispersion curve to a function of the form given by Eq. (13) yielded an exponent $x = 1.020 \pm 0.003$, in good agreement with the value obtained from the previous fit. When larger values of $q$ in the dispersion curve were included in the fits, Eq. (13) tended to yield smaller $\chi^2$ per degree of freedom than Eq. (12). The dispersion curve for $T = T_c$ and $L = 60$ fitted to Eq. (12) yielded an exponent of $x = 1.38 \pm 0.01$ when the smallest 12 values of $q$ were included in the fit. As the larger $q$ were excluded from the fit, the exponent increased slightly, tending towards $x \approx 1.40$. When only the smallest few values of $q$ were included in the fit, the exponent decreased again, reflecting the fact that as we probed correlations between spins separated by larger distances (or equivalently, smaller $q$), the finite size of the lattice (and thus of the correlation length) is revealed, showing that the system is not at criticality. Hence the exponent $x$ decreases towards unity. On the other hand, large values of $q$ correspond to short-distance (in direct lattice space) spin-spin correlations, and the correlation length is much larger than the distance probed. One would thus expect the critical behavior of the system to be manifest, and this is indeed consistent with what we obtained. Our results at $T_c$ are in agreement with the recent experiment, which obtained an exponent $x = 1.43 \pm 0.04$ when the dispersion curve at $T_c$ was fitted to a power-law relation of the form given in Eq. (12). The solid lines in Fig. 3 are fits to Eq. (13); in general, these fits gave lower values of $\chi^2$ per degree of freedom than fits to Eq. (12). In the critical region, dynamic scaling theory predicts that the half-width of the spin-wave peaks behaves as a power law, $\Gamma_2 \propto q^{1.5}$, whereas in the hydrodynamic regime the prediction from hydrodynamic theory is $\Gamma_2 \propto q^2$.
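All of the peak positions and widths quoted in this section come from least-squares fits of Eq. (11) and power-law fits of the form of Eq. (12). A minimal version of that fitting pipeline, assuming SciPy is available and with function names of our own choosing, could look like this:

```python
import numpy as np
from scipy.optimize import curve_fit

def lorentzian_lineshape(w, A, G1, B, G2, ws):
    """Eq. (11): central peak plus spin-wave creation/annihilation peaks."""
    return (A * G1**2 / (G1**2 + w**2)
            + B * G2**2 / (G2**2 + (w + ws)**2)
            + B * G2**2 / (G2**2 + (w - ws)**2))

def fit_lineshape(w, s, p0):
    """Least-squares fit of a simulated lineshape; p0 = (A, G1, B, G2, ws)."""
    popt, pcov = curve_fit(lorentzian_lineshape, w, s, p0=p0)
    return popt, np.sqrt(np.diag(pcov))   # parameters and 1-sigma errors

def fit_dispersion(q, ws, x0=1.4):
    """Eq. (12): power-law fit of the spin-wave positions, omega_s = A_s q^x."""
    popt, _ = curve_fit(lambda q, A, x: A * q**x, q, ws, p0=(1.0, x0))
    return popt   # (A_s, x)
```

Repeating `fit_lineshape` over three frequency windows and averaging the results, as described above, gives the error estimates quoted below $T_c$.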
The half-widths of the spin-wave peaks at $T = 0.9T_c$ and $L = 60$ from our simulations are shown in Fig. 4. We observed a crossover from $\Gamma_2 = (0.401 \pm 0.004)\, q^{1.46 \pm 0.06}$ for larger values of the wave-vector to $\Gamma_2 = (0.48 \pm 0.02)\, q^{1.86 \pm 0.05}$ for small values of $q$. The behavior for relatively large wave-vectors is in agreement with dynamic scaling theory and with the recent experiment. The exponent we obtained by fitting only small values of $q$ is close to the hydrodynamic prediction. Thus, for the spin-wave half-width we have observed a crossover between the exponents associated with two different regimes, namely the critical and hydrodynamic regions. This crossover is similar to the one observed in the dispersion curve at $T_c$, discussed above. For $T = T_c$ and $L = 60$ the spin-wave half-width also showed power-law behavior, which varied from approximately $q^{1.2}$ when the 12 smallest values of $q$ were included to approximately $q^{1.4}$ when only the smallest five wave-vectors were included in the fit. In their recent experiment, Coldea et al. obtained $\Gamma_2 = D q^{1.41 \pm 0.05}$ for the temperature range $0.77T_c \le T < T_c$, with the coefficient $D$ increasing with increasing temperature. As in the experiments, the dynamic structure factors from our simulations had central peaks (zero-frequency-transfer peaks) for $T \le T_c$. In contrast, renormalization-group theory predicts a central peak in the longitudinal component of the dynamic structure factor only below $T_c$, and none of the theories predict a central peak at $T_c$. For $T = 0.9T_c$ and $L = 60$, fitting the central peak half-width to the form $\Gamma_1 \propto q^x$ yielded a very large $\chi^2$ per degree of freedom. A much improved fit was obtained by using the function $\Gamma_1 = A_1 + B_1 q^{C_1}$, which allows for a non-zero central peak width as $q$ vanishes. In these fits, the data for the smallest value of $q$ observable for $L = 60$, i.e. $n = 1$, were not included, because of the large finite-size effects in them. The fit including data for $q$ corresponding to $n = 2$ up to $n = 7$ yielded $A_1 \simeq 0.013 \pm 0.001$, $B_1 \simeq 0.120 \pm 0.005$, and $C_1 \simeq 2.4 \pm 0.2$. As we systematically included larger values of $q$ in the fits, these parameters decreased slightly. At $T_c$ we also fitted the central peaks to Lorentzians, according to Eq. (11); however, these fits tended to yield curves with smaller amplitudes than the data, as can be seen in Fig. 2. Since there is no theoretical prediction for the central peak, we also tried to fit it with a Gaussian form. These latter fits were, nevertheless, much worse than the fits with Lorentzians. The lattice sizes that we used, namely $L = 12$, $24$, $36$, $48$, and $60$, are all multiples of $12$; thus, there are certain wave-vectors which are common to all lattice sizes. This is an advantage in the study of effects due to finite lattices, because it allows us to compare lineshapes and spin-wave dispersion relations for different lattice sizes at a fixed value of the wave-vector. At $T = 0.9T_c$ we did not see significant finite-size effects for $L \ge 24$; however, when we superimposed lineshapes at $T_c$ for a fixed value of $q$ and different values of $L$, finite-size effects were noticeable for $L = 24$. For the larger values of $L$ that we used, the lineshapes were the same within the error bars.
The dynamic critical exponent $z$ was extracted from the finite-size scaling of $\overline{\omega}_m$, as described in the previous section. We started the analysis using no resolution function, or equivalently $\delta_\omega = 0$, and $n = 1, 2$. As in previous work, we estimated the lattice size $L = 30$ to be approximately the onset of the asymptotic-size regime. From the slope of a $\log(\overline{\omega}_m)$ vs $\log(L)$ graph fitted with four data points, namely lattice sizes $L = 30, 36, 48$, and $60$, we obtained $z = 1.45 \pm 0.01$ for $n = 1$ and $z = 1.42 \pm 0.01$ for $n = 2$. In order to check the effects of the very small oscillations in $S(\mathbf{q},\omega)$ due to the finite cutoff time, we proceeded to estimate the value of $z$ using a resolution function with $\delta_\omega$ given by Eq. (9). For the iterations, we used several initial values $z^{(0)}$, ranging from $z^{(0)} = 1.31$ to $1.59$, and in all cases the exponent $z$ converged rapidly to its final value, with at most three iterations being necessary. For all the initial values $z^{(0)}$ that we used, we obtained an exponent $z = 1.43 \pm 0.01$ for $n = 1$ and $z = 1.42 \pm 0.01$ for $n = 2$. In general, the $\chi^2$ per degree of freedom of the fits for $n = 2$ was slightly lower than for $n = 1$, although it was reasonable in all cases. Our final estimate for the dynamic critical exponent is $z = 1.43 \pm 0.03$, where the error bar reflects the fluctuations in the different estimates of $z$. A comparison of the characteristic frequency $\overline{\omega}_m$ as a function of the lattice size $L$ for the analysis with and without a resolution function is shown in Fig. 5. For the former case, the graph shown corresponds to the converged values of $z$ for both $n = 1$ and $2$.

### B Comparison with experiment

In this section we compare our results with the recent neutron scattering experiment by Coldea et al. Before proceeding with the direct comparison, it is necessary to clarify the units and possible normalization factors between simulation and experiment. The neutron scattering experiment was done on RbMnF₃, which can be described by a quantum Heisenberg Hamiltonian of the form

$$\mathcal{H} = J^{exp} \sum_{\langle \mathbf{r}\mathbf{r}' \rangle} \mathbf{S}_\mathbf{r}^Q \cdot \mathbf{S}_{\mathbf{r}'}^Q, \qquad (14)$$

where the $\mathbf{S}_\mathbf{r}^Q$ are spin operators with magnitude $|\mathbf{S}_\mathbf{r}^Q|^2 = S(S+1)$, and the interaction strength between pairs of nearest neighbors was determined experimentally to be $J^{exp} = (0.58 \pm 0.06)$ meV. In contrast, our simulations were performed for the classical Heisenberg Hamiltonian given in Eq. (1). However, quantum Heisenberg systems with large spin values ($S \ge 2$) have been shown to behave as classical Heisenberg systems, in which the spins are taken to be vectors of magnitude $\sqrt{S(S+1)}$ with the same interaction strength between pairs of nearest neighbors as in the quantum system. In our simulations the spins were taken to be vectors of unit length. Hence, to preserve the Hamiltonian, the interaction strength $J$ of our simulation has to be normalized according to

$$J = J^{exp} S(S+1). \qquad (15)$$

Although this choice of normalization for the spin vectors and the interaction strength leaves the Hamiltonian unchanged, it does modify the equations of motion. The dynamics of the classical system so defined is the same as that of the quantum system defined by the Hamiltonian in Eq. (14) if one rescales the time, or equivalently, the frequency transfer.
We obtain

$$\omega^{exp} = J^{exp} \sqrt{S(S+1)}\, \frac{\omega}{J}, \qquad (16)$$

where $\omega^{exp}$ is the frequency transfer in the quantum system, measured experimentally, and $\omega/J$ is the frequency transfer in units of $J$ from our simulations. Parenthetically, we note that the critical temperature of the classical Hamiltonian in Eq. (1) has been determined from simulations to be $T_c = 1.442929(77)J$. Using the normalization of the interaction strength $J$ given in Eq. (15) and the experimental value $J^{exp} = (6.8 \pm 0.6)$ K, we get $T_c = (85.9 \pm 7.6)$ K, where the 9 percent error comes from the uncertainty in $J^{exp}$. The experimental value of the critical temperature is around $83$ K, which is well within the error bars. Due to detailed balance, neutron scattering experiments measure the dynamic structure factor multiplied by a temperature- and frequency-dependent population factor. This factor does not appear in the simulations of the classical system, for which the dynamic structure factor is computed directly. For the comparison, we removed the population factor from the experimental data. The finite divergence of the neutron beam gives rise to a resolution function which is usually approximated by a Gaussian in the 4-dimensional energy and wave-vector space. In the experiment, the measured resolution width along the energy axis was $0.25$ meV (full width at half maximum) for incoherent elastic scattering. In order to compare our results directly with the experiment, we convoluted the lineshapes from our simulation with a Gaussian resolution function in energy with the experimental value of the full width at half maximum, normalized according to Eq. (16). The standard deviation $\delta_\omega$ thus obtained for the Gaussian resolution function was $0.0619$ in units of $J$. As a test of the effects of the resolution function in wave-vector space, we also convoluted our lineshapes with a 3-dimensional Gaussian function where, for simplicity, we took the resolution widths in the three wave-vector components to be the same, equal to the average of the experimental resolution in the longitudinal and transverse directions. The effect of this convolution was found to be negligible; thus the lineshapes used in the comparisons shown below do not include the resolution in wave-vector. The experiment performed constant wave-vector scans with both positive and negative energy transfer. The wave-vector transfer $\mathbf{Q}$ was measured along the $(q,q,q)$ direction, around the antiferromagnetic zone center, which in our notation is the $(\pi,\pi,\pi)$ point. Note that Ref. defines the wave-vector transfer $\mathbf{Q}$ in units such that the antiferromagnetic zone center is $(0.5, 0.5, 0.5)$; hence, to express $\mathbf{Q}$ of Ref. in units of Å⁻¹ one has to multiply it by $2\pi/a$, where $a$ is the cubic lattice parameter expressed in Å. However, in the simulation, direct lattice positions are defined in units of the lattice constant $a$; thus we obtain wave-vectors multiplied by the constant $a$. Let us emphasize that one has to divide the wave-vector $\mathbf{Q}$ [and also $q$, see Eq. (4)] defined in this paper by $2\pi$ in order to express it in the units used in Ref. . In the experiment, measurements were taken for wave-vectors $\mathbf{Q} = (\pi+q, \pi+q, \pi+q)$, with $q = 2\pi(0.02), 2\pi(0.04), \ldots, 2\pi(0.12)$, but unfortunately these values of $q$ are not all observable in our simulations with the particular lattice sizes that we used.
With a lattice size $`L=60`$, for instance, we observe wave-vectors with $`q=2\pi (0.01666\mathrm{}),`$ $`2\pi (0.03333\mathrm{})`$, and so on, according to Eq. (4). Thus, in order to compare the lineshapes from the simulation directly with the experimental ones, it was necessary to interpolate our results to obtain the same $`q`$ values as in the experiment. This was done by first fitting our lineshapes with a Lorentzian form, as given in Eq. (11). Since the parameters $`B`$, $`\mathrm{\Gamma }_2`$ and $`\omega _s`$ obtained from these fits behave as power laws of $`q`$, we linearly interpolated the logarithm of these parameters as a function of the logarithm of $`q`$ to obtain new parameters for the lineshapes corresponding to those values of $`q`$ observed in the experiment. We estimated the uncertainties from this procedure to be less than five percent for the parameter $`B`$, less than three percent for the spin-wave half-width $`\mathrm{\Gamma }_2`$ and the spin-wave position $`\omega _s`$ at $`T_c`$, and less than one percent for the spin-wave position $`\omega _s`$ at $`T=0.9T_c`$. Below $`T_c`$, the parameters $`A`$ and $`\mathrm{\Gamma }_1`$ associated with the central peak were linearly interpolated, yielding new parameters with uncertainties of approximately five percent. At $`T_c`$, the parameter $`A`$ was interpolated in the log-log plane (as for $`B`$, $`\mathrm{\Gamma }_2`$ and $`\omega _s`$ discussed above), whereas $`\mathrm{\Gamma }_1`$ was simply linearly interpolated. The uncertainties in $`A`$ and $`\mathrm{\Gamma }_1`$ at $`T_c`$ were estimated to be less than ten percent. For $`L=60`$, there is one value of $`q`$, namely $`q=2\pi (0.10)`$, which is accessible to both simulation and experiment. This was the only case for which we did not have to interpolate in $`q`$. Below $`T_c`$, the simulations are mainly for $`T=0.9T_c`$, which unfortunately does not coincide with any temperature used in the experiment; however, it is very close to $`T=0.894T_c`$, which is one of the temperatures for which experimental results are available. To correct for the slight difference, we made a linear interpolation in temperature, using our results for $`L=24`$ at $`T=0.846T_c`$ and at $`T=0.9T_c`$. We first fitted the lineshapes at these two temperatures to a Lorentzian of the form given by Eq. (11); we then linearly interpolated the position and the amplitude of the spin-wave peak at these temperatures to obtain the spin-wave position and amplitude corresponding to $`T=0.894T_c`$. For small values of $`q`$ we found that the frequency of the spin-wave peak at $`T=0.894T_c`$ was approximately $`1.5`$ percent larger than at $`T=0.9T_c`$, and this difference decreased for larger values of $`q`$. The spin-wave amplitude at $`T=0.894T_c`$ was found to be approximately $`5`$ percent larger than at $`T=0.9T_c`$ for small values of $`q`$. As with the spin-wave position, the difference in the amplitudes decreased for larger values of the wave-vector. The intensity of the lineshapes in the neutron scattering experiment was measured in counts per 15 seconds. For both temperatures, $`T=0.894T_c`$ and $`T=T_c`$, the measurements for the several wave-vectors were done with the same experimental set-up and conditions. Therefore, the relative intensities of the lineshapes for the different wave-vectors are fixed, and equal for both temperatures. 
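The log-log interpolation of the fit parameters amounts to a few lines of code; the sketch below (ours, with purely illustrative parameter values standing in for the actual Lorentzian fits of Eq. (11)) shows the procedure applied to the spin-wave position $`\omega _s`$; $`B`$ and $`\mathrm{\Gamma }_2`$ are treated identically:

```python
import numpy as np

# q values available on the L = 60 lattice: q = 2*pi*n/L
q_sim = 2.0 * np.pi * np.arange(1, 7) / 60.0

# illustrative spin-wave positions; the real values come from the
# Lorentzian fits and behave approximately as a power law in q
omega_s_sim = 1.3 * q_sim**1.05

def loglog_interp(q_new, q, param):
    """Linear interpolation of log(param) versus log(q)."""
    return np.exp(np.interp(np.log(q_new), np.log(q), np.log(param)))

# interpolate to the q values measured in the experiment
q_exp = 2.0 * np.pi * np.array([0.02, 0.04, 0.06, 0.08])
omega_s_exp = loglog_interp(q_exp, q_sim, omega_s_sim)
print(omega_s_exp)
```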
The intensity of the lineshapes obtained in the simulation had to be normalized to the experimental value; however, because the relative intensities for different wave-vectors are fixed, we have only one independent normalization factor for all the wave-vectors at both temperatures. The normalization of the intensity was chosen so that the spin-wave peaks for $`T=0.894T_c`$ and $`q=2\pi (0.08)`$ from the experiment and the simulation matched. This same factor was used to normalize the intensities of the lineshapes corresponding to the remaining values of the wave-vector at $`T=0.894T_c`$, and for all cases at $`T_c`$. The factor used was $`70`$ counts/15 secs, by which we multiplied the simulated lineshapes for all values of $`q`$ at $`T=0.894T_c`$ and at $`T_c`$. The final lineshapes for $`T=0.894T_c`$, $`L=60`$, and several wave-vectors are shown in Fig. 6, together with the experimental lineshapes for each case. Figs. 7(a) and 7(b) show, respectively, the comparisons of the dispersion curve and the spin-wave half-width from the simulation and the experiment at $`T=0.894T_c`$. The good agreement between our results and experiment can be seen from either the direct comparison of the lineshapes or the comparisons of the dispersion curve and the spin-wave half-width. There is agreement between the lineshape intensities from simulation and experiment over two orders of magnitude, from $`q=2\pi (0.02)`$ to $`q=2\pi (0.10)`$. Fig. 8 shows the comparison of lineshapes from the simulation and the experiment for $`T=T_c`$, $`L=60`$, and several values of $`q`$. The dispersion curve obtained from the simulations at $`T=T_c`$, shown in Fig. 9, is systematically higher than the experimental values. We would like to emphasize that the error bars shown for the dispersion curve obtained from our simulations at $`T_c`$ reflect only the statistical errors of a best fit of the lineshapes with Eq. (11). For each wave-vector, this fit was done with only one range of frequency; hence errors associated with the choice of frequency range and the quality of the fit were not taken into account. It is reasonable to expect that such sources of error would increase the error bars by a factor of 5. From the direct comparison of the simulated and experimental lineshapes at $`T_c`$ it is difficult to determine the difference in the spin-wave frequencies, because the spin-wave peaks from the experiment are not very pronounced, and their positions have to be extracted from the fits of the lineshapes. As we mentioned before, the experimental data at $`T_c`$ were fitted to a functional form predicted by RNG theory plus a Lorentzian central peak. As an illustration, one such fit is shown in Fig. 8(c) for $`q=2\pi (0.08)`$, along with the RNG component of the fit and the prediction of mode-coupling theory. Finally, even though at $`T_c`$ the lineshape intensities from the simulations for small frequency transfer tended to be lower than in the experiment, the agreement is still reasonably good, considering the variation of the intensities over almost two orders of magnitude from $`q=2\pi (0.02)`$ to $`q=2\pi (0.12)`$. ## IV Conclusion We have studied the dynamic critical properties of the classical Heisenberg antiferromagnet on a simple cubic lattice, using large-scale computer simulations. A new time integration technique implemented by Krech et al. allowed us to use a larger time integration step, and we were thus able to extend the maximum integration time to much larger values than in previous work. 
Below $`T_c`$, the dispersion curves were approximately linear for wave-vectors well within the first Brillouin zone. As the temperature was increased towards the critical temperature, the dispersion curve became a power law, reflecting the crossover from hydrodynamic to critical behavior. The power-law behavior of the spin-wave half-width at $`T=0.9T_c`$ also showed a crossover, from critical behavior at large values of $`q`$ to hydrodynamic behavior at small values of $`q`$. The dynamic critical exponent was estimated to be $`z=(1.43\pm 0.03)`$, which is in agreement with the experimental value of Coldea et al., and slightly lower than the dynamic scaling prediction. We made a direct, quantitative comparison of both the dispersion curve and the lineshapes obtained from our simulations with the recent experimental results by Coldea et al. At $`T=0.894T_c`$ the agreement was very good. The major difference was at $`T_c`$, where the spin-wave peaks from our simulations tended to be at slightly larger frequencies than the experimental results. Both at $`T=0.894T_c`$ and at $`T_c`$ the lineshape intensities varied over almost two orders of magnitude from $`q=2\pi (0.02)`$ to $`q=2\pi (0.10)`$, and there was good agreement between the intensities from simulation and experiment over the whole range. Thus, the simple isotropic nearest-neighbor classical Heisenberg model describes the dynamic behavior of this real magnetic system very well, except for small differences in the spin-wave frequencies at the critical temperature. ACKNOWLEDGMENTS We are indebted to Professor R. A. Cowley and Dr. R. Coldea for very helpful discussions and for sending us ASCII files of their data. We would also like to thank Dr. M. Krech and Professor H. B. Schüttler for valuable discussions. Computer simulations were carried out on the Cray T90 at the San Diego Supercomputing Center, and on Silicon Graphics Origin2000 and IBM R6000 machines at the University of Georgia. This research was supported in part by NSF Grant No. DMR-9727714. FIGURE CAPTIONS * Fig. 1: Dynamic structure factor $`S(𝐪,\omega )`$ from our simulations for $`L=60`$ at $`T=0.9T_c`$ and wave-vectors (a) $`q=\pi /10`$, (b) $`\pi /6`$ and (c) $`\pi /3`$ in the [111] direction. The symbols represent spin dynamics data and the solid line is a fit with the Lorentzian function given in Eq. (11). For clarity, error bars are only shown for a few typical points, i.e. error bars for the data in the neighborhood of each of these points are similar. At high frequencies the error bars are comparable to the size of the fluctuations in the data. * Fig. 2: Dynamic structure factor $`S(𝐪,\omega )`$ from our simulations for $`L=60`$ at $`T=T_c`$ and wave-vectors (a) $`q=\pi /10`$ and (b) $`\pi /6`$ in the [111] direction. The symbols represent spin dynamics data and the solid line is a fit with the Lorentzian function given in Eq. (11). As in Fig. 1, error bars are only shown for a few typical points. * Fig. 3: Spin-wave dispersion relations for $`T≤T_c`$, in the [111] direction. The symbols represent spin-wave positions extracted from Lorentzian fits to the lineshapes from the simulations, and the solid curves are fits of the dispersion relations at different temperatures to Eq. (13). * Fig. 4: Log-log graph of the half-width of the spin-wave peak, extracted from Lorentzian fits to the lineshapes obtained from simulations for $`L=60`$ and $`T=0.9T_c`$ in the [111] direction, as a function of $`q`$. 
* Fig. 5: Finite-size scaling plot for $`\overline{\omega }_m`$ (with $`qL=`$ const, $`\delta _\omega L^z=`$ const) for the analysis with and without a resolution function. For the former case, the data used correspond to the converged values of $`z`$, for $`n=1,2`$. The error bars were smaller than the symbol sizes. * Fig. 6: Comparison of lineshapes obtained from fits to simulation data for $`L=60`$ (solid line) and experiment (open circles) at $`T=0.894T_c`$ in the [111] direction: (a) $`q=2\pi (0.04)`$, (b) $`q=2\pi (0.06)`$, (c) $`q=2\pi (0.08)`$, and (d) $`q=2\pi (0.10)`$. The horizontal line segment in each graph represents the resolution in energy (full-width at half-maximum). * Fig. 7: Comparison of (a) the dispersion curve and (b) the spin-wave half-width, obtained from simulations for $`L=60`$ (open circles) and the experiment (open triangles) at $`T=0.894T_c`$, in the [111] direction. The simulation data shown here correspond to values of $`q`$ accessible with $`L=60`$, without interpolation to match the $`q`$ values from the experiment. * Fig. 8: Comparison of lineshapes obtained from fits to simulation data for $`L=60`$ (solid line) and experiment (open circles) at $`T=T_c`$ in the [111] direction: (a) $`q=2\pi (0.04)`$, (b) $`q=2\pi (0.06)`$, (c) $`q=2\pi (0.08)`$, and (d) $`q=2\pi (0.10)`$. The dot-dashed line in (c) is a fit of the experimental data to the functional form predicted by the RNG theory plus a Lorentzian central peak, and the RNG component of the fit is shown by the long-dashed line. The prediction of mode-coupling (MC) theory for $`q=2\pi (0.08)`$ is shown by the dotted line in (c). The horizontal line segment in each graph represents the resolution in energy (full-width at half-maximum). * Fig. 9: Comparison of the dispersion curve obtained from our simulation for $`L=60`$ (open circles) and the experiment (open triangles) at $`T=T_c`$, in the [111] direction. In the notation used here, the first Brillouin zone edge is at $`|(q,q,q)|≈2.72`$. The simulation data shown here correspond to values of $`q`$ accessible with $`L=60`$, without interpolation to match the $`q`$ values from the experiment.
# Stellar kinematics of barred galaxies ## 1. Introduction The advent of large telescopes (e.g. VLT) equipped with very sensitive spectrographs will make obtaining absorption-line spectra a routine task. This instrumental progress will increase our knowledge of stellar kinematics. Among the problems that will be addressed by such instruments, stellar bar kinematics deserves particular attention, since counter-rotating motion is now often observed in such galaxies. This paper is specifically devoted to the interpretation of the stellar counter-rotating component observed in the double-barred galaxy NGC 5728 (Prada & Gutiérrez 1999). ## 2. The model The generic 2D dynamical self-consistent model is taken from the set constructed by Wozniak & Pfenniger (1997). The mass model consists of a Ferrers ellipsoid (a/b/c=6/1.5/0.6 kpc, n=2) superposed on a Miyamoto-Nagai disk. The mass inside corotation is $`0.32\times 10^{11}M_{}`$. A set of orbits compatible with the mass distribution is numerically selected from a wide library using the Schwarzschild method. This technique gives the weights of the selected orbits. The distribution function is thus fully determined. This allows us to compute the velocity field on a grid by averaging the velocities of each selected orbit weighted by its mass fraction. The distribution function of this model is very similar to those of N-body models. The mass on retrograde orbits inside corotation amounts to 19%. ## 3. Discussion Although this model is not fitted for a detailed modeling of NGC 5728, the theoretical velocity field has been projected onto the plane of the sky using the projection angles of this object ($`i`$=48° and PA$`_{\text{bar/line of nodes}}`$=35°). A cut along the bar major axis simulates the slit of a spectrograph. In Fig. 1, we display the line-of-sight velocity (LOSV) curves obtained separately for direct and retrograde orbits. The two components are separated by ∼20 km s<sup>-1</sup>, which is not easily observable with current spectrographs. Moreover, the theoretical velocities are obviously lower than those of NGC 5728 because the mass model does not match the real mass distribution. A more realistic model for this galaxy should be more massive, especially in the nucleus ($`4\times 10^9M_{}`$ inside 300 pc), so that the gap between direct and retrograde velocities will increase. Thus, except for a scaling factor on the velocities, the counter-rotating core found in the model is very likely of the same nature as the one in NGC 5728. The internal dynamics of the large-scale bar could thus explain the observations without the need to invoke any external origin. Moreover, Prada & Gutiérrez (1999) suggested that the counter-rotating component is associated with the nuclear bar. As shown by $`N`$-body simulations (Friedli 1996) and discussed by Wozniak & Pfenniger (1997), this likely happens if a critical mass ratio is reached above which the counter-rotating bar dynamics is decoupled from the direct bar. However, this model with only one large-scale bar plainly shows that a counter-rotating structure can be kinematically detected in barred galaxies even when it is not photometrically observable. Thanks to improvements in spectrographs and in algorithms for Gaussian decomposition (e.g. Kuijken & Merrifield 1993), it will become easy to detect such retrograde motions in barred galaxies in the near future. ## References Friedli, D. 1996, A&A 312, 761 Kuijken, K., Merrifield, M.R. 1993, MNRAS 264, 712 Prada, F., Gutiérrez, C.M. 1999, ApJ 517, 123 Wozniak, H., Pfenniger, D. 
1997, A&A 317, 14
# Density Matrix Renormalization ## 1 Introduction The basics of the Density Matrix Renormalization Group were developed by S. White in 1992 and since then DMRG has proved to be a very powerful method for low-dimensional interacting systems. Its remarkable accuracy can be seen, for example, in the spin-1 Heisenberg chain: for a system of hundreds of sites, a precision of $`10^{-10}`$ for the ground state energy can be achieved. Since then it has been applied to a great variety of systems and problems (principally in one dimension) including, among others, spin chains, fermionic and bosonic systems, disordered models, impurities, etc. It has also been improved substantially in several directions like two-dimensional (2D) classical systems, phonons, molecules, the inclusion of temperature and the calculation of dynamical properties. Some calculations have also been performed in 2D quantum systems. All these topics are treated in detail and in a pedagogical way in a book published recently, where the reader can find an extensive review on DMRG. In this article we will attempt to cover the different areas where it has been applied. Regrettably, however, we will not be able to review the large number of papers that have been written using different aspects of this very efficient method. We have chosen what, in our opinion, are the most representative contributions, and we suggest that the interested reader look for further information in these references. Our aim here is to give the reader a general overview of the subject. One of the most important limitations of numerical calculations in finite systems is the large number of states that have to be considered and its exponential growth with system size. Several methods have been introduced in order to reduce the size of the Hilbert space so as to reach larger systems, such as Monte Carlo, the renormalization group (RG) and DMRG. Each method considers a particular criterion for keeping the relevant information. The DMRG was originally developed to overcome the problems that arise in interacting 1D systems when standard RG procedures are applied. Consider a block B (a block is a collection of sites) where the Hamiltonian $`H_B`$ and end-operators are defined. These traditional methods consist in putting together two or more blocks (e.g. B-B', which we will call the superblock), connected using end-operators, in a basis that is a direct product of the bases of each block, forming $`H_{BB^{}}`$. This Hamiltonian is then diagonalized, the superblock is replaced by a new effective block $`B_{new}`$ formed by a certain number $`m`$ of lowest-lying eigenstates of $`H_{BB^{}}`$, and the iteration is continued (see Ref. ). Although it has been used successfully in certain cases, this procedure, or similar versions of it, has been applied to several interacting systems with poor performance. For example, it has been applied to the 1D Hubbard model keeping $`m`$ ∼ 1000 states. For 16 sites, an error of 5-10% was obtained. Other results were also discouraging. A better performance was obtained by adding a single site at a time rather than doubling the block size. However, there is one case where a similar version of this method applies very well: the Kondo model. Wilson mapped the one-impurity problem onto a one-dimensional lattice with exponentially decreasing hoppings. 
The difference from the method explained above is that in this case one site (equivalent to an “onion shell”) is added at each step and, due to the exponential decrease of the hopping, very accurate results can be obtained. Returning to the problem of putting several blocks together, the main source of error comes from the choice of eigenstates of $`H_{BB^{}}`$ as representative states of a superblock. Since $`H_{BB^{}}`$ has no connection to the rest of the lattice, its eigenstates may have unwanted features (like nodes) at the ends of the block, and this cannot be improved by increasing the number of states kept. Based on this consideration, Noack and White tried including different boundary conditions and boundary strengths. This turned out to work well for single-particle and Anderson localization problems, but it did not significantly improve results for interacting systems. These considerations led to the idea of taking a larger superblock that includes the blocks $`BB^{}`$, diagonalizing the Hamiltonian in this large superblock, and then somehow projecting the most favorable states onto $`BB^{}`$. Then $`BB^{}`$ is replaced by $`B_{new}`$. In this way, awkward features at the boundary would vanish and a better representation of the states in the infinite system would be achieved. White proposed the density matrix as the optimal way of projecting the best states onto part of the system, and this will be discussed in the next section. The justification of using the density matrix is given in detail in Ref. . A very easy and pedagogical way of understanding the basic functioning of DMRG is to apply it to the calculation of simple quantum problems like one particle in a tight-binding chain. In the following Section we will briefly describe the standard method; in Sect. 3 we will mention some of the most important applications; in Sect. 4 we review the most relevant extensions to the method; and finally in Sect. 5 we concentrate on the way dynamical calculations can be performed within DMRG. ## 2 The Method The DMRG allows for a systematic truncation of the Hilbert space by keeping the most probable states describing a wave function (e.g. the ground state) instead of the lowest-energy states usually kept in previous real-space renormalization techniques. The basic idea consists of starting from a small system (e.g with $`N`$ sites) and then gradually increasing its size (to $`N+2`$, $`N+4`$,…) until the desired length is reached. Let us call the collection of $`N`$ sites the universe and divide it into two parts: the system and the environment (see Fig. 1). The Hamiltonian is constructed in the universe and its ground state $`|\psi _0>`$ is obtained. This is considered as the state of the universe and called the target state. It has components on the system and the environment. We want to obtain the most relevant states of the system, i.e., the states of the system that have the largest weight in $`|\psi _0`$. To achieve this, the environment is considered as a statistical bath and the density matrix is used to obtain the desired information on the system. So instead of keeping eigenstates of the Hamiltonian in the block (system), we keep eigenstates of the density matrix. We will be more explicit below. 
Let us define block \[B\] as a finite chain with $`l`$ sites, having an associated Hilbert space with $`m`$ states in which operators are defined (in particular the Hamiltonian of this finite chain, $`H_B`$, and the operators at the ends of the block, useful for linking it to other chains or added sites). Except for the first iteration, the basis in this block is not explicitly known, due to previous basis rotations and reductions. The operators in this basis are matrices and the basis states are characterized by quantum numbers (like $`S^z`$, charge or number of particles, etc). We also define an added block or site as \[a\], having $`n`$ states. A general iteration of the method consists of: i) Define the Hamiltonian $`H_{BB^{}}`$ for the superblock (the universe) formed by putting together two blocks \[B\] and \[B'\] and two added sites \[a\] and \[a'\] in this way: \[B a a' B' \] (the primes are only to indicate additional blocks, but the primed blocks have the same structure as the non-primed ones; this can vary, see the finite-system algorithm below). In general, blocks \[B\] and \[B'\] come from the previous iteration. The total Hilbert space of this superblock is the direct product of the individual spaces corresponding to each block and the added sites. In practice a quantum number of the superblock can be fixed (in a spin chain, for example, one can look at the total $`S^z=0`$ subspace), so the total number of states in the superblock is much smaller than $`(mn)^2`$. As, in some cases, the quantum number of the superblock is the sum of the quantum numbers of the individual blocks, each one must contain several subspaces (several values of $`S^z`$, for example). Here periodic boundary conditions can be attached to the ends, and a different block layout should be considered (e.g. \[B a B' a' \]) to avoid connecting blocks \[B\] and \[B'\], which takes longer to converge. The boundary conditions are between \[a'\] and \[B\]. For closed chains the performance is poorer than for open boundary conditions. ii) Diagonalize the Hamiltonian $`H_{BB^{}}`$ to obtain the ground state $`|\psi _0`$ (target state) using the Lanczos or Davidson algorithms. Other states could also be kept, such as the first excited ones: they are all called target states. iii) Construct the density matrix: $$\rho _{ii^{}}=\sum _j\psi _{0,ij}\psi _{0,i^{}j}$$ (1) on block \[B a\], where $`\psi _{0,ij}=ij|\psi _0`$, the states $`|i`$ belonging to the Hilbert space of the block \[B a\] and the states $`|j`$ to the block \[B' a'\]. The density matrix considers the part \[B a\] as a system and \[B' a'\] as a statistical bath. The eigenstates of $`\rho `$ with the highest eigenvalues correspond to the most probable states (or equivalently the states with the highest weight) of block \[B a\] in the ground state of the whole superblock. These states are kept up to a certain cutoff, keeping a total of $`m`$ states per block. The density matrix eigenvalues sum to unity, and the truncation error, defined as the sum of the density matrix eigenvalues corresponding to discarded eigenvectors, gives a qualitative indication of the accuracy of the calculation. iv) With these $`m`$ states a rectangular matrix $`O`$ is formed, which is used to change basis and reduce all operators defined in \[B a\]. This block \[B a\] is then renamed block \[B<sub>new</sub>\] or simply \[B\] (for example, the Hamiltonian in block \[B a\], $`H_{Ba}`$, is transformed into $`H_B`$ as $`H_B=O^{}H_{Ba}O`$); a minimal numerical sketch of steps ii)-iv) is given below. 
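For concreteness, the sketch below (our illustration; the single-site blocks and the choice $`m=2`$ are ours, chosen only to keep the example tiny) carries out steps ii)-iv) for the very first iteration of a spin-1/2 Heisenberg chain, where \[B\], \[a\], \[a'\] and \[B'\] are each a single site, so the superblock is an open 4-site chain:

```python
import numpy as np

# spin-1/2 operators
Sz = np.diag([0.5, -0.5])
Sp = np.array([[0.0, 1.0], [0.0, 0.0]])
Sm = Sp.T

def embed(op, site, n):
    """Embed a single-site operator at position `site` of an n-site chain."""
    mats = [np.eye(2)] * n
    mats[site] = op
    out = mats[0]
    for mat in mats[1:]:
        out = np.kron(out, mat)
    return out

def bond(i, j, n):
    """Heisenberg coupling S_i . S_j on an n-site chain."""
    return (embed(Sz, i, n) @ embed(Sz, j, n)
            + 0.5 * (embed(Sp, i, n) @ embed(Sm, j, n)
                     + embed(Sm, i, n) @ embed(Sp, j, n)))

# superblock [B a a' B'] = open 4-site chain
n = 4
H_super = sum(bond(i, i + 1, n) for i in range(n - 1))

# step ii): ground (target) state of the superblock
energies, vectors = np.linalg.eigh(H_super)
psi = vectors[:, 0].reshape(4, 4)  # rows: states i of [B a]; columns: states j of [a' B']

# step iii): density matrix of Eq. (1); its eigenvalues are the state weights
rho = psi @ psi.T
weights, states = np.linalg.eigh(rho)
m = 2
keep = np.argsort(weights)[::-1][:m]
print("truncation error:", 1.0 - weights[keep].sum())

# step iv): the rectangular matrix O rotates and truncates operators on [B a]
O = states[:, keep]
H_Ba = bond(0, 1, 2)          # Hamiltonian of the two left sites
H_B_new = O.T @ H_Ba @ O      # H_B = O^dagger H_Ba O
```

In a real implementation the loop then continues with steps v) and vi) below: the truncated block operators are stored, a new site is attached, and the superblock is rebuilt at the next length.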
v) A new block \[a\] is added (one site in our case) and the new superblock \[B a a' B'\] is formed as the direct product of the states of all the blocks. vi) This iteration continues until the desired length is achieved. At each step the length is $`N=2l+2`$ (if \[a\] consists of one site). When more than one target state is used, i.e. more than one state is to be described well, the density matrix is defined as: $$\rho _{ii^{}}=\sum _lp_l\sum _j\varphi _{l,ij}\varphi _{l,i^{}j}$$ (2) where $`p_l`$ defines the probability of finding the system in the target state $`|\varphi _l`$ (not necessarily eigenstates of the Hamiltonian). The method described above is usually called the infinite-system algorithm, since the system size increases by two lattice sites (if the added block \[a\] has one site) at each iteration. There is a way to increase the precision at each length $`N`$, called the finite-system algorithm. It consists of fixing the lattice size and sweeping back and forth (“zipping”) a couple of times until convergence is reached. In this case, and for the block configuration \[B a a' B' \], $`N=l+1+1+l^{}`$, where $`l`$ and $`l^{}`$ are the numbers of sites in $`B`$ and $`B^{}`$ respectively. In this step the density matrix is used to project onto the left $`l+1`$ sites. In order to keep $`N`$ fixed, in the next block configuration the right block \[B'\] should be defined on $`l^{}-1`$ sites, such that $`N=(l+1)+1+1+(l^{}-1)`$. The operators in this smaller block should be kept from previous iterations (in some cases from the iterations for the system size $`N-2`$). The calculation of static properties like correlation functions is easily done by keeping the operators in question at each step and performing the corresponding basis change and reduction, in a similar manner as for the Hamiltonian in each block. The energy and measurements are calculated in the superblock. Faster convergence of the Lanczos or Davidson algorithm is achieved by choosing a good trial vector. An interesting analysis of DMRG accuracy is done in Ref. . Fixed points of the DMRG and their relation to matrix product wave functions were studied in , and an analytic formulation combining the block renormalization group with variational and Fokker-Planck methods in . The connection of the method with quantum groups and conformal field theory is treated in . There are also interesting connections between the density matrix spectra and integrable models via corner transfer matrices. These articles give a deep insight into the essence of the DMRG method. ## 3 Applications Since its development, the number of papers using DMRG has grown enormously, and other improvements to the method have been performed. We would like to mention some applications where this method has proved to be useful. Other applications related to further developments of the DMRG will be mentioned in Sect. 4. A very impressive result with unprecedented accuracy was obtained by White and Huse when calculating the spin gap in an $`S=1`$ Heisenberg chain, obtaining $`\mathrm{\Delta }=0.41050J`$. They also calculated very accurate spin correlation functions and excitation energies for one- and several-magnon states and performed a very detailed analysis of the excitations for different momenta. They obtained a spin correlation length of 6.03 lattice spacings. Simultaneously, Sørensen and Affleck also calculated the structure factor and spin gap for this system, up to length 100, with very high accuracy, comparing their results with the nonlinear $`\sigma `$ model. 
In a subsequent paper they applied the DMRG to the anisotropic $`S=1`$ chain, obtaining values for the Haldane gap. They also performed a detailed study of the $`S=1/2`$ end excitations in an open chain. Thermodynamic properties of open $`S=1`$ chains, such as the specific heat, electron paramagnetic resonance (EPR) and magnetic susceptibility, calculated using DMRG gave an excellent fit to experimental data, confirming the existence of free spins 1/2 at the boundaries. A related problem, i.e. the effect of non-magnetic impurities in spin systems (dimerized, ladders and 2D), was studied in . For larger integer spins there have also been some studies. Nishiyama and coworkers calculated the low-energy spectrum and correlation functions of the $`S=2`$ antiferromagnetic Heisenberg open chain. They found $`S=1`$ end excitations (in agreement with the valence bond theory). Edge excitations for other values of $`S`$ have been studied in Ref. . At almost the same time, Schollwöck and Jolicoeur calculated the spin gap in the same system, up to 350 sites ($`\mathrm{\Delta }=0.085J`$), together with correlation functions that showed topological order, and a spin correlation length of 49 lattice spacings. More recent accurate studies of $`S=2`$ chains are found in . In Ref. the dispersion of the single-magnon band and other properties of $`S=2`$ antiferromagnetic Heisenberg chains were calculated. Concerning $`S=1/2`$ systems, DMRG has been crucial for obtaining the logarithmic corrections to the $`1/r`$ dependence of the spin-spin correlation functions in the isotropic Heisenberg model. For this, very accurate values for the energy and correlation functions were needed. For $`N=100`$ sites an error of $`10^{-5}`$ was achieved keeping $`m=150`$ states per block, as checked by comparing with the exact finite-size Bethe Ansatz results. For this model it was found that the data for the correlation function show a very accurate scaling behavior, and advantage of this was taken to obtain the logarithmic corrections in the thermodynamic limit. Other calculations of the spin correlations have been performed for the anisotropic case . Similar calculations have been performed for the $`S=3/2`$ Heisenberg chain . In this case a stronger logarithmic correction to the spin correlation function was found. For this model there was interest in obtaining the central charge $`c`$, to elucidate whether this model belongs to the same universality class as the $`S=1/2`$ case; the central charge can be obtained from the finite-size scaling of the energy. Although there had been previous attempts, these calculations presented difficulties since they also involved a term $`1/\mathrm{ln}^3N`$. With the DMRG the value $`c=1`$ was clearly obtained. In Ref. , DMRG was applied to an effective spin Hamiltonian obtained from an SU(4) spin-orbit critical state in 1D. Another application to enlarged symmetry cases (SU(4)) was the study of coherence in arrays of quantum dots. Dimerization and frustration have been considered in Refs. , and alternating spin chains in . The case of several coupled spin chains (ladder models), magnetization properties and plateaus of quantum spin ladder systems, and finite 2D systems (like an application to $`CaV_4O_9`$ reaching 24×11 square lattices) have also been studied. There have been a great number of applications to fermionic systems such as the 1D Hubbard and t-J models . Also several coupled chains at different dopings have been considered . 
Quite large systems can be reached; for example, in , a 4×20 lattice was considered to study ferromagnetism in the infinite-U Hubbard model; the ground state of a 4-leg t-J ladder in ; the one- and two-hole ground states in a 10×7 t-J lattice; a doped 3-leg t-J ladder; and the study of striped phases and domain walls in 19×8 t-J systems. Impurity problems have been studied, for example, in one- and two-impurity Kondo systems, Kondo and Anderson lattices, Kondo lattices with localized $`f^2`$ configurations, a t-J chain coupled to localized Kondo spins, and ferromagnetic Kondo models for manganites. ## 4 Other extensions to DMRG There have been several extensions to DMRG, like the inclusion of symmetries such as spin and parity. Total spin conservation and continuous symmetries have been treated in , and in interaction-round-a-face Hamiltonians, a formulation that can be applied to rotationally invariant systems like $`S=1`$ and 2 chains. A momentum representation of this technique, which allows for diagonalization in a fixed momentum subspace, has been developed, as well as applications in dimensions higher than one and on Bethe lattices. The inclusion of symmetries is essential to the method since it allows one to consider a smaller number of states, enhance precision and obtain eigenstates with a definite quantum number. Other recent applications have been in nuclear shell-model calculations, where a two-level pairing model has been considered, and in the study of ultrasmall superconducting grains, in this case using the particle (hole) states around the Fermi level as the system (environment) block. A very interesting and successful application is a recent work in High Energy Physics. Here the DMRG is used in an asymptotically free model with bound states, a toy model for quantum chromodynamics, namely the two-dimensional delta-function potential. For this case an algorithm similar to the momentum-space DMRG was used, where the block and environment consist of low- and high-energy states respectively. The results obtained here are much more accurate than those of the similarity renormalization group, and a generalization to field-theoretical models is proposed, based on the discrete light-cone quantization in momentum space. Below we briefly mention other important extensions, leaving the calculation of dynamical properties for the next Section. ### 4.1 Classical systems The DMRG has been very successfully extended to study classical systems. For a detailed description we refer the reader to Ref. . Since 1D quantum systems are related to 2D classical systems, it is natural to adapt DMRG to the classical 2D case. This method is based on the renormalization group transformation for the transfer matrix $`T`$. It is a variational method that maximizes the partition function using a limited number of degrees of freedom, where the variational state is written as a product of local matrices. For 2D classical systems, this algorithm is superior to the classical Monte Carlo method in accuracy and speed, and in the possibility of treating much larger systems. A further improvement to this method is based on the corner transfer matrix (the CTMRG) and can be generalized to any dimension. It was first applied to the Ising model and also to the Potts model, where very accurate density profiles and critical indices were calculated. Further applications have included non-Hermitian problems in equilibrium and non-equilibrium physics. 
In the first case, transfer matrices may be non-Hermitian, and several situations have been considered: a model for the Quantum Hall effect, and the $`q`$-symmetric Heisenberg chain related to the conformal series of critical models. In the second case, the adaptation of the DMRG to non-equilibrium physics, like the asymmetric exclusion problem and reaction-diffusion problems, has proved to be very successful. ### 4.2 Finite temperature DMRG The adaptation of the DMRG method to classical systems paved the way for the study of 1D quantum systems at nonzero temperature, by using the Trotter-Suzuki method . In this case the system is infinite and the finiteness is at the level of the Trotter approximation. Standard DMRG usually produces its best results for the ground state energy and less accurate results for higher excitations. A different situation occurs here: the lower the temperature, the less accurate the results. Very nice results have been obtained for the dimerized $`S=1/2`$ XY model, where the specific heat was calculated involving an extremely small basis set ($`m=16`$), the agreement with the exact solution being much better in the case where the system has a substantial gap. It has also been used to calculate thermodynamic properties of the anisotropic $`S=1/2`$ Heisenberg model, with relative errors for the spin susceptibility of less than $`10^{-3}`$ down to temperatures of the order of $`0.01J`$, keeping $`m=80`$ states. A complete study of thermodynamic properties like magnetization, susceptibility, specific heat and temperature-dependent correlation functions for the $`S=1/2`$ and 3/2 Heisenberg models was done in . Other applications have been the calculation of the temperature dependence of the charge and spin gaps in the Kondo insulator, the calculation of thermodynamic properties of ferrimagnetic chains, the study of impurity properties in spin chains, frustrated quantum spin chains, t-J ladders and dimerized frustrated Heisenberg chains. An alternative way of incorporating temperature into the DMRG procedure was developed by Moukouri and Caron. They considered the standard DMRG, taking into account several low-lying target states (see Eq. 2) to construct the density matrix, weighted with the Boltzmann factor ($`\beta `$ is the inverse temperature): $$\rho _{ii^{}}=\sum _le^{-\beta E_l}\sum _j\varphi _{l,ij}\varphi _{l,i^{}j}$$ (3) With this method they performed reliable calculations of the magnetic susceptibility of quantum spin chains with $`S=1/2`$ and $`3/2`$, showing excellent agreement with exact Bethe Ansatz results. They also calculated low-temperature thermodynamic properties of the 1D Kondo Lattice Model, and Zhang et al. applied the same method to the study of a magnetic impurity embedded in a quantum spin chain. ### 4.3 Phonons, bosons and disorder A significant limitation of the DMRG method is that it requires a finite basis, and calculations in problems with infinite degrees of freedom per site require a large truncation of the basis states. However, Jeckelmann and White developed a way of including phonons in DMRG calculations by transforming each boson site into several artificial interacting two-state pseudo-sites and then applying DMRG to this interacting system (called the “pseudo-site system”). The idea is based on the fact that DMRG is much better able to handle several few-state sites than a few many-state sites. The key idea is to replace each boson site having $`2^N`$ states by $`N`$ pseudo-sites with 2 states each (a small numerical illustration is given below). 
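A tiny numerical illustration of this mapping (ours, not from the original work): with $`N=3`$ pseudo-sites, the boson occupations $`n=0,\mathrm{},7`$ are read off in binary, and the boson number operator becomes a simple sum of pseudo-site occupations. The creation operator $`b^{}`$ is less simple, since it mixes different pseudo-sites, but it too can be written in terms of pseudo-site operators.

```python
import numpy as np

N = 3                        # pseudo-sites -> boson levels n = 0, ..., 2**N - 1
occ = np.diag([0.0, 1.0])    # occupation of one two-state pseudo-site

def embed(op, site):
    """Embed a pseudo-site operator in the full 2**N-dimensional space."""
    mats = [np.eye(2)] * N
    mats[site] = op
    out = mats[0]
    for mat in mats[1:]:
        out = np.kron(out, mat)
    return out

# the boson number operator: n = sum_j 2**j * n_j (binary digits)
n_boson = sum(2 ** (N - 1 - j) * embed(occ, j) for j in range(N))
print(np.diag(n_boson))      # -> [0. 1. 2. 3. 4. 5. 6. 7.]
```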
Jeckelmann and White applied this method to the Holstein model for several hundred sites (keeping more than a hundred states per phonon mode), obtaining negligible error. In addition, to date this method is the most accurate one for determining the ground state energy of the polaron problem (the Holstein model with a single electron). An alternative method (the “optimal phonon basis”) is a procedure for generating a controlled truncation of a large Hilbert space, which allows the use of a very small optimal basis without significant loss of accuracy. The system here consists of only one site, and the environment has several sites, both having electronic and phononic degrees of freedom. The density matrix is used to trace out the degrees of freedom of the environment and extract the most relevant states of the site in question. In subsequent steps, more bare phonons are included in the optimal basis obtained in this way. A variant of this scheme is the “four block method”, as described in . With it, the Luttinger liquid-CDW insulator transition in the 1D Holstein model for spinless fermions was obtained very accurately. The method has also been applied to pure bosonic systems such as the disordered bosonic Hubbard model, where gaps, correlation functions and the superfluid density are obtained. The phase diagram for the non-disordered Bose-Hubbard model, showing a reentrance of the superfluid phase into the insulating phase, was calculated in Ref. . The DMRG has also been generalized to 1D random systems, and applied to the random antiferromagnetic and ferromagnetic Heisenberg chains, including quasiperiodic exchange modulation and a detailed study of the Haldane phase in these systems. It has also been used in disordered Fermi systems such as the spinless fermion model. In particular, the transition from the Fermi glass to the Mott insulator and the strong enhancement of persistent currents at the transition were studied in correlated one-dimensional disordered rings. ### 4.4 Molecules There have been several applications to molecules and polymers, such as the Pariser-Parr-Pople (PPP) Hamiltonian for a cyclic polyene (where long-range interactions are included). It has also been applied to conjugated organic systems (polymers), adapting the DMRG to take into account the most important symmetries in order to obtain the desired excited states. Conjugated one-dimensional semiconductors have also been studied, in which the standard approach can be extended to complex 1D oligomers where the fundamental repeat unit is not just one or two atoms, but a complex molecular building block. Recent attempts to apply DMRG to the ab initio calculation of electronic states in molecules have been successful. Here, DMRG is applied within the conventional quantum-chemical framework of a finite basis set with non-orthogonal basis functions centered on each atom. After the standard Hartree-Fock (HF) calculation, in which a Hamiltonian is produced within the orthogonal HF basis, DMRG is used to include correlations beyond HF, where each orbital is treated as a “site” in a 1D lattice. One important difference from standard DMRG is that, as the interactions are long-ranged, several operators must be kept, making the calculation somewhat cumbersome. However, very accurate results have been obtained in a check performed on a water molecule (keeping up to 25 orbitals and $`m`$ ∼ 200 states per block), obtaining an offset of 0.00024 Hartrees with respect to the exact ground state energy, a better performance than any other approximate method. 
In order to avoid the non-locality introduced in the treatment explained above, White introduced the concept of orthlets: local, orthogonal and compact wave functions that allow prior knowledge about singularities to be incorporated into the basis, together with an adequate resolution for the cores. The most relevant functions in this basis are chosen via the density matrix. ## 5 Dynamical correlation functions The DMRG was originally developed to calculate static ground-state properties and low-lying energies. However, it can also be useful for calculating dynamical response functions. These are of great interest in condensed matter physics in connection with experiments such as nuclear magnetic resonance (NMR), neutron scattering, optical absorption, photoemission, etc. ### 5.1 Lanczos and correction vector techniques An effective way of extending the basic ideas of this method to the calculation of dynamical quantities is described in Ref. . It is important to notice here that, due to the particular real-space construction, it is not possible to fix the momentum as a quantum number. However, we will show that by keeping the appropriate target states, a good value of the momentum can be obtained. We want to calculate the following dynamical correlation function at $`T=0`$: $$C_A(t-t^{})=\psi _0|A^{}(t)A(t^{})|\psi _0,$$ (4) where $`A^{}`$ is the Hermitian conjugate of the operator $`A`$, $`A(t)`$ is the Heisenberg representation of $`A`$, and $`|\psi _0`$ is the ground state of the system. Its Fourier transform is: $$C_A(\omega )=\sum _n|\psi _n|A|\psi _0|^2\delta (\omega -(E_nE_0)),$$ (5) where the summation is taken over all the eigenstates $`|\psi _n`$ of the Hamiltonian $`H`$ with energy $`E_n`$, and $`E_0`$ is the ground state energy. Defining the Green's function $$G_A(z)=\psi _0|A^{}(z-H)^{-1}A|\psi _0,$$ (6) the correlation function $`C_A(\omega )`$ can be obtained as $$C_A(\omega )=-\frac{1}{\pi }\underset{\eta \to 0^+}{\mathrm{lim}}\mathrm{Im}G_A(\omega +i\eta +E_0).$$ (7) The function $`G_A`$ can be written in the form of a continued fraction: $$G_A(z)=\frac{\psi _0|A^{}A|\psi _0}{z-a_0-\frac{b_1^2}{z-a_1-\frac{b_2^2}{z-\mathrm{}}}}$$ (8) The coefficients $`a_n`$ and $`b_n`$ can be obtained using the following recursion equations : $$|f_{n+1}=H|f_n-a_n|f_n-b_n^2|f_{n-1}$$ (9) where $`|f_0=A|\psi _0`$, $`a_n=f_n|H|f_n/f_n|f_n`$, $`b_n^2=f_n|f_n/f_{n-1}|f_{n-1}`$ and $`b_0=0`$. (10) For finite systems the Green's function $`G_A(z)`$ has a finite number of poles, so only a certain number of coefficients $`a_n`$ and $`b_n`$ have to be calculated. The DMRG technique presents a good framework for calculating such quantities. With it, the ground state, the Hamiltonian and the operator $`A`$ required for the evaluation of $`C_A(\omega )`$ are obtained. An important requirement is that the reduced Hilbert space should also describe with great precision the relevant excited states $`|\psi _n`$. This is achieved by choosing the appropriate target states. For most systems it is enough to consider as target states the ground state $`|\psi _0`$ and the first few $`|f_n`$ with $`n=0,1\mathrm{}`$ and $`|f_0=A|\psi _0`$ as described above. In doing so, states in the reduced Hilbert space relevant to the excited states connected to the ground state via the operator of interest $`A`$ are included. The fact that $`|f_0`$ is an excellent trial state, in particular for the lowest triplet excitations of the two-dimensional antiferromagnet, was shown in Ref. . 
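To make the recursion concrete, here is a small self-contained sketch (ours; the dense random matrix and the starting vector are stand-ins for $`H`$ and $`A|\psi _0`$, which in a DMRG run live in the truncated basis) that generates $`a_n`$ and $`b_n^2`$ and evaluates the continued fraction of Eq. (8):

```python
import numpy as np

def lanczos_coeffs(H, f0, nmax):
    """a_n and b_n^2 from the recursion of Eqs. (9)-(10), with unnormalized |f_n>."""
    a, b2 = [], [0.0]                        # b_0 = 0
    f_prev, f = np.zeros_like(f0), f0.copy()
    for n in range(nmax):
        norm = f @ f
        a.append((f @ (H @ f)) / norm)
        f_next = H @ f - a[n] * f - b2[n] * f_prev
        b2.append((f_next @ f_next) / norm)  # b_{n+1}^2
        f_prev, f = f, f_next
    return a, b2

def green(z, a, b2, norm0):
    """Continued fraction of Eq. (8); norm0 = <psi_0|A^dag A|psi_0>."""
    g = 0.0
    for n in range(len(a) - 1, 0, -1):
        g = b2[n] / (z - a[n] - g)
    return norm0 / (z - a[0] - g)

rng = np.random.default_rng(1)
M = rng.standard_normal((60, 60))
H = (M + M.T) / 2.0                  # toy Hermitian "Hamiltonian"
E0 = np.linalg.eigvalsh(H)[0]        # ground state energy
f0 = rng.standard_normal(60)         # stand-in for |f_0> = A|psi_0>

a, b2 = lanczos_coeffs(H, f0, 25)
omega = np.linspace(0.0, 20.0, 400)
eta = 0.1                            # Lorentzian broadening
# Eq. (7): C_A(omega) = -(1/pi) Im G_A(omega + i*eta + E_0)
C = [-green(w + 1j * eta + E0, a, b2, f0 @ f0).imag / np.pi for w in omega]
```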
Of course, if the number $`m`$ of states kept per block is fixed, the more target states considered, the less precisely each one of them is described. An optimal number of target states and an optimal $`m`$ have to be found for each case. Due to this reduction, the algorithm can be applied up to certain lengths, depending on the states involved. For longer chains, the higher-energy excitations become inaccurate. Proper sum rules have to be calculated to determine the errors in each case. As an application of the method we calculate $$S^{zz}(q,\omega )=\sum _n|\psi _n|S_q^z|\psi _0|^2\delta (\omega -(E_nE_0)),$$ (11) for the 1D isotropic Heisenberg model with spin $`S=1/2`$. The spin dynamics of this model has been extensively studied. The lowest excited states in the thermodynamic limit are the des Cloizeaux-Pearson triplets , having total spin $`S^T=1`$. The dispersion of this spin-wave branch is $`\omega _q^l=\frac{J\pi }{2}|\mathrm{sin}(q)|`$. Above this lower boundary there exists a two-parameter continuum of excited triplet states, calculated using the Bethe Ansatz approach, with an upper boundary given by $`\omega _q^u=J\pi |\mathrm{sin}(q/2)|`$. It has been shown, however, that there are excitations above this upper boundary due to higher-order scattering processes, with a weight that is at least one order of magnitude lower than that of the spin-wave continuum. In Fig. 2 we show the spectrum for $`q=\pi `$ and $`N=24`$ for different values of $`m`$, where exact results are available for comparison. The delta peaks of Eq. (11) are broadened by a Lorentzian for visualization purposes. As expected, increasing $`m`$ gives more precise results for the higher excitations. This spectrum has been obtained using the infinite-system method; more precise results are expected using the finite-system method, as described later. In Fig. 3 we show the spectrum for two system lengths and $`q=\pi `$ and $`q=\pi /2`$, keeping $`m=200`$ states and periodic boundary conditions. For this case it was enough to take 3 target states, i.e. $`|\psi _0`$, $`|f_0=S_\pi ^z|\psi _0`$ and $`|f_1`$. Here we have used $`40`$ pairs of coefficients $`a_n`$ and $`b_n`$, but we noticed that if we considered only the first ∼10 coefficients $`a_n`$ and $`b_n`$, the spectrum at low energies remained essentially unchanged. Minor differences arise at $`\omega /J≳2`$. This is another indication that only the first $`|f_n`$ are relevant for the low-energy dynamical properties of finite systems. In the inset of Fig. 3 the spectrum for $`q=\pi /2`$ and $`N=28`$ is shown. For this case we considered 5 target states, i.e. $`|\psi _0`$, $`|f_0=S_{\pi /2}^z|\psi _0`$ and $`|f_n`$ with $`n=1,\mathrm{},3`$, and $`m=200`$. Here, and for all the cases considered, we have verified that the results depend only very weakly on the weights $`p_l`$ of the target states (see Eq. (2)), as long as the appropriate target states are chosen. For lengths where this value of $`q`$ is not defined we took the nearest value. Even though we are including states with a given momentum as target states, due to the particular real-space construction of the reduced Hilbert space this translational symmetry is not fulfilled and the momentum is not fixed. To check how the reduction of the Hilbert space influences the momentum $`q`$ of the target state $`|f_0=S_q^z|\psi _0`$, we calculated the expectation values $`\psi _0|S_{q^{}}^{z}S_q^z|\psi _0`$ for all $`q^{}`$. 
If the momenta of the states were well defined, this value would be proportional to $`\delta _{qq^{}}`$ for $`q≠0`$. For $`q=0`$, $`\sum _rS_r^z=0`$. The momentum distribution for $`q=\pi `$ is shown in Fig. 4 on a semilogarithmic scale, where the $`y`$-axis has been shifted by 0.003 so as to have well-defined logarithms. We can see here that the momentum is reasonably well defined, even for much larger systems, but, as expected, more weight on other $`q^{}`$ values arises for larger $`N`$. As a check of the approximation we calculated the sum rule $$\frac{1}{4\pi ^2}\int _0^{\mathrm{\infty }}d\omega \int _{q=0}^{2\pi }dqS^{zz}(q,\omega )=\psi _0|(S_{r=0}^z)^2|\psi _0=\frac{1}{4}$$ (12) for $`N=28`$, 5 target states and $`m=200`$. We obtain a relative error of 0.86%. Recently, important improvements to this method have been published : by considering the finite-system method in open chains, Kühner and White obtained a higher precision in the dynamical responses of spin chains. In order to define a momentum in an open chain and to avoid end effects, they introduce a filter function with weight centered in the middle of the chain and zero at the boundaries. In this section we presented a method for calculating dynamical responses with DMRG. Although the basis truncation is large, this method keeps only the most relevant states and, for example, even considering 0.1% of the total Hilbert space (for $`N=28`$ only ∼40000 states are kept), a reasonable description of the low-energy excitations is obtained. We showed that it is also possible to obtain states with well-defined momenta if the appropriate target states are used. #### 5.1.1 Correction vector technique Introduced in Ref. in the DMRG context and improved in Ref. , this method focuses on a particular energy or energy window, allowing a more precise description in that range and the possibility of calculating spectra at higher energies. Instead of using the tridiagonalization of the Hamiltonian, but in a similar spirit regarding the important target states to be kept, the spectrum can be calculated for a given $`z=w+i\eta `$ by using a correction vector (related to the operator $`A`$, which can depend on the momentum $`q`$). Following (6), the (complex) correction vector $`|x(z)`$ can be defined as $$|x(z)=\frac{1}{z-H}A|\psi _0$$ (13) so the Green's function can be calculated as $$G(z)=\psi _0|A^{}|x(z)$$ (14) Separating the correction vector into real and imaginary parts, $`|x(z)=|x^r(z)+i|x^i(z)`$, we obtain $$((H-w)^2+\eta ^2)|x^i(z)=-\eta A|\psi _0$$ (15) and $$|x^r(z)=\frac{1}{\eta }(H-w)|x^i(z)$$ (16) The former equation is solved using the conjugate gradient method. In order to keep the information on the excitations at this particular energy, the following states are targeted in the DMRG iterations: the ground state $`|\psi _0`$, the first Lanczos vector $`A|\psi _0`$, and the correction vector $`|x(z)`$. Even though only a certain energy is focused on, DMRG gives the correct excitations for an energy range surrounding this particular point, so that by running several times for nearby frequencies, an approximate spectrum can be obtained for a wider region . ### 5.2 Moment expansion This method relies on a moment expansion of the dynamical correlations, using sum rules that depend only on static correlation functions, which can be calculated with DMRG. With these moments, the Green's functions can be calculated using the maximum entropy method. The first step is the calculation of sum rules. 
As an example, and following , the spin-spin correlation function $`S^z(q,w)`$ of the Heisenberg model is calculated, where the operator $`A`$ of Eq. (4) is $`S^z(q)=N^{-1/2}\sum _lS^z(l)\mathrm{exp}(iql)`$ and the sum rules are: $$m_1(q)=\int _0^{\mathrm{\infty }}\frac{dw}{\pi }\frac{S^z(q,w)}{w}=\frac{1}{2}\chi (q,w=0)$$ $$m_2(q)=\int _0^{\mathrm{\infty }}\frac{dw}{\pi }w\frac{S^z(q,w)}{w}=\frac{1}{2}S^z(q,t=0)$$ $$m_3(q)=\int _0^{\mathrm{\infty }}\frac{dw}{\pi }w^2\frac{S^z(q,w)}{w}=\frac{1}{2}[[H,S^z(q)],S^z(-q)]=2[1-\mathrm{cos}(q)]\sum _iS_i^+S_{i+1}^{-}+S_i^{-}S_{i+1}^+$$ (17) where $`\chi (q,w=0)`$ is the static susceptibility. These sum rules can easily be generalized to higher moments: $$m_l(q)=\int _0^{\mathrm{\infty }}\frac{dw}{\pi }w^{l-1}\frac{S^z(q,w)}{w}=\frac{1}{2}[[H,\mathrm{},[H,S^z(q)]\mathrm{}],S^z(-q)]$$ (18) for $`l`$ odd. A similar expression is obtained for $`l`$ even, where the outer square bracket is replaced by an anticommutator and the total sign is changed. Here $`H`$ appears in the commutator $`l-2`$ times. Apart from the first moment, which is given by the static susceptibility, all the other moments can be expressed as equal-time correlations (using a symbolic manipulator). The static susceptibility $`\chi `$ is calculated by applying a small field $`h_q\sum _in_i\mathrm{cos}(qi)`$ and calculating the density response $`n_q=(1/N)\sum _in_i\mathrm{cos}(qi)`$ with DMRG. Then $`\chi =n_q/h_q`$ for $`h_q\to 0`$. These moments are calculated for several chain lengths and extrapolated to the infinite system. Once the moments are calculated, the final spectrum is constructed via the Maximum Entropy method (ME), which has become a standard way to extract maximum information from incomplete data (for details see Ref. and references therein). Reasonable spectra are obtained for the XY and isotropic models, although information about the exact position of the gaps has to be included; otherwise, the spectra are only qualitatively correct. This method requires the calculation of a large number of moments in order to get good results: the more information given to the ME equations, the better the result. ### 5.3 Finite temperature dynamics In order to include temperature in the calculation of dynamical quantities, the Transfer Matrix RG described above (TMRG) was extended to obtain imaginary-time correlation functions. After Fourier transformation along the imaginary-time axis, analytic continuation from imaginary to real frequencies is done using maximum entropy (ME). The combination of the TMRG and ME is free from statistical errors and from the negative sign problem of Monte Carlo methods. Since we are dealing with the transfer matrix, the thermodynamic limit can be discussed directly, without extrapolations. However, in the present scheme, only local quantities can be calculated. A systematic investigation of local spectral functions is done in Ref. for the anisotropic Heisenberg antiferromagnetic chain. They obtain good qualitative results, especially at high temperatures, but a quantitative description of peaks and gaps is beyond the method, due to the severe intrinsic limitation of the analytic continuation. This method was also applied with great success to the 1D Kondo insulator. 
### 5.3 Finite temperature dynamics In order to include temperature in the calculation of dynamical quantities, the Transfer Matrix RG described above (TMRG) was extended to obtain imaginary-time correlation functions. After Fourier transformation along the imaginary-time axis, analytic continuation from imaginary to real frequencies is done using maximum entropy (ME). The combination of the TMRG and ME is free from statistical errors and the negative sign problem of Monte Carlo methods. Since we are dealing with the transfer matrix, the thermodynamic limit can be discussed directly without extrapolations. However, in the present scheme, only local quantities can be calculated. A systematic investigation of local spectral functions is done in Ref. for the anisotropic Heisenberg antiferromagnetic chain. They obtain good qualitative results, especially for high temperatures, but a quantitative description of peaks and gaps is beyond the method, due to the severe intrinsic limitation of the analytic continuation. This method was also applied with great success to the 1D Kondo insulator. The temperature dependence of the local density of states and local dynamic spin and charge correlation functions was calculated. ## 6 Conclusions We have presented here a very brief description of the Density Matrix Renormalization Group technique, its applications and extensions. The aim of this article is to give the inexperienced reader an idea of the possibilities and scope of this powerful, though relatively simple, method. The experienced reader can find here an extensive (though incomplete) list of references covering most applications to date using DMRG, in a great variety of fields such as Condensed Matter, Statistical Mechanics and High Energy Physics. ## Acknowledgments The author acknowledges hospitality at the Centre de Recherches Mathematiques, University of Montreal and at the Physics Department of the University of Buenos Aires, Argentina, where this work has been performed. We thank S. White for a critical reading of the manuscript and all those authors that have updated references and sent instructive comments. K. H. is a fellow of CONICET, Argentina. Grants: PICT 03-00121-02153 and PICT 03-00000-00651.
# Galaxy Clustering Evolution in the CNOC2 High-Luminosity Sample ## 1 Introduction The measurement of the evolution of galaxy clustering is a direct test of theories of the evolution of structure and galaxy formation in the universe. Clustering is predicted to change with increasing redshift in a manner that depends on the background cosmology, the spectrum of primordial density fluctuations out of which clustering grows, and the relation of galaxies to dark matter halos. Specific predictions of clustering growth are available for a range of CDM-style cosmological models and galaxy identification algorithms. As a result of biasing (Kaiser, 1984) there is a physically important generic prediction that the clustering of normal galaxies should not be identical to dark matter clustering and their clustering should evolve slowly at low redshifts. More generally, clustering evolution is of interest for its impact on galaxies, since clustering leads to galaxy-galaxy merging and creates the groups and clusters in which galaxies are subject to high gas densities and temperatures not found in the general field. The theoretical groundwork to interpret the quantitative evolution of dark matter clustering and the trends of galaxy clustering evolution is largely in place for hierarchical structure models. Although the details of the mass buildup of galaxies and the evolution of their emitted light are far from certain at this time, clustering of galaxies depends primarily on the distribution of initial density fluctuations on the mass scale of galaxies. N-body simulations of ever-growing precision along with their theoretical analysis (e.g., Davis et al., 1985; Pearce et al., 1999) have led to a good semi-analytic understanding of dark matter clustering into the nonlinear regime. One result is a remarkable, convenient, theoretically motivated, empirical equation that relates the linear power spectrum and its nonlinear outcome (Efstathiou, Davis, White & Frenk, 1985; Hamilton et al., 1991; Peacock & Dodds, 1996). This allows an analytic prediction of the clustering evolution of the dark matter density field. Normal galaxies, which are known to exist near the centers of dark matter halos with velocity dispersions in the approximate range of 50 to 250 $`\mathrm{km}\mathrm{s}^{-1}`$, cannot have a clustering evolution identical to the full dark matter density field. Kaiser (1984) showed that the dense “peaks” in the initial density field that ultimately collapse to form halos are usually more correlated than the full density field. For high peaks, the peak-peak correlation, $`\xi _{\nu \nu }(r|z)`$, is approximately $`[\nu /\sigma (z)]^2`$ times the correlation of the full dark matter density field, $`\xi _{\rho \rho }(r|z)`$, (see Mo & White 1996 for a more general expression) where $`\nu `$ measures the minimal “peak height” for formation of a halo, in units of the variance on that mass scale, $`\sigma (M,z)`$ (Bardeen et al., 1986). Both $`\sigma ^2(M,z)`$ and $`\xi _{\rho \rho }(z,r)`$ (in co-moving co-ordinates) grow approximately as $`D^2(z,\mathrm{\Omega })`$ (exactly so in the linear regime), where $`D(z,\mathrm{\Omega })`$ is the growth factor for density perturbations in the cosmology of interest. The result is that $`\xi _{\nu \nu }(r|z)`$ stays nearly constant in co-moving co-ordinates.
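The near-constancy follows from a one-line cancellation: writing $`\sigma (z)=\sigma (M,0)D(z)`$ and, in co-moving co-ordinates, $`\xi _{\rho \rho }(r|z)=D^2(z)\xi _{\rho \rho }(r|0)`$, one has $$\xi _{\nu \nu }(r|z)\simeq \left[\frac{\nu }{\sigma (z)}\right]^2\xi _{\rho \rho }(r|z)=\frac{\nu ^2}{\sigma ^2(M,0)D^2(z)}D^2(z)\xi _{\rho \rho }(r|0)=\left[\frac{\nu }{\sigma (M,0)}\right]^2\xi _{\rho \rho }(r|0),$$ which is independent of redshift; the identification of galaxies with fixed-$`\nu `$ peaks is of course the schematic assumption of this illustration.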
This result is approximately verified in n-body experiments (Carlberg & Couchman, 1989; Carlberg, 1991; Colin, Carlberg & Couchman, 1997; Jenkins et al., 1998) which can be accurately modeled with a theoretically motivated analytic function (Mo & White, 1996). An implication is that dark matter halo clustering evolution will have little sensitivity to the background cosmology (Governato et al., 1998; Pearce et al., 1999). These results formally apply to low density “just virialized” halos, whereas galaxies are found in the dense central regions of dark matter halos. Therefore the clustering of galaxies needs to take into account their dissipation relative to the (presumed) dissipationless dark matter. Dissipationless n-body simulations which resolve sub-halos within larger virialized units provide the basic dynamical information but still require a theory of galaxy formation to associate them with luminous objects, as guided by observations such as we report here. At low redshift there have been several substantial clustering surveys, deriving information from both angular correlations, which can be de-projected with non-evolving luminosity functions, and redshift surveys, where the kinematics of the galaxies provides additional information about clustering dynamics. The observational measurement of clustering at higher redshifts has yet to reach the size, scale coverage, or redshift precision of the pioneering low redshift CfA survey (Davis & Peebles, 1983). Angular correlations of galaxies at higher redshifts provide some insights, but inevitably mix different galaxy populations at different redshifts and require an accurate $`n(z)`$ to break the degeneracy between an evolving luminosity function and an evolving clustering amplitude (Infante & Pritchet, 1995; Postman et al., 1998; Connolly, Szalay & Brunner, 1998). There are two redshift surveys extending out to $`z\sim 1`$, the Canada France Redshift Survey (Lilly et al., 1995) and the Hawaii K survey (Cowie et al., 1996). Measurement of the correlation evolution of the galaxies in these surveys found a fairly rapid decline in clustering with redshift (LeFèvre et al., 1996; Carlberg et al., 1997). Neither analysis took into account the evolution of the luminosity function or was able to quantify the effects of the small sky areas containing the samples. Recently two relatively large sky area samples, a preliminary analysis of the survey reported in this paper (Carlberg et al., 1998) and a shallower limiting magnitude survey (Small et al., 1998), have indicated much stronger correlations at about redshift $`z\sim 0.3`$ than the earlier small area surveys. This is the first in a series of papers which discusses the clustering and kinematic properties of the Canadian Network for Observational Cosmology field galaxy redshift survey (Yee et al., 2000). The CNOC2 survey is designed to be comparable in size and precision to the first CfA redshift survey (Davis & Peebles, 1983) but covering galaxies out to redshift 0.7. In this paper we restrict our analysis to measurements of the correlation amplitude of the high-luminosity galaxies in the CNOC2 redshift survey, which are a particularly simple and interesting subsample. The CNOC2 luminosity function is known in considerable detail (Lin et al., 1999), which allows us to define an evolution-compensated, volume-limited sample, thereby creating a straightforward sample to analyze. The next section of the paper briefly discusses the CNOC2 sample and the volume limited subsample of high-luminosity galaxies.
Section 3 describes in detail our estimate of the projected correlation function and its errors, along with its sensitivity to various possible systematic errors. Section 4 reports our best estimates of the evolution of high-luminosity galaxy clustering in a variety of cosmologies. The results are discussed and conclusions drawn in Section 5. Throughout this paper we use $`H_0=100h\mathrm{km}\mathrm{s}^{-1}\mathrm{Mpc}^{-1}`$ and various cosmologies as specified. ## 2 The CNOC2 Sample The Canadian Network for Observational Cosmology (CNOC) field galaxy redshift survey is designed to investigate nonlinear clustering dynamics and its relation to galaxy evolution on scales smaller than approximately 20$`h^{-1}`$ Mpc over the $`0\le z\le 0.7`$ range. There is substantial galaxy evolution over this redshift range (Broadhurst, Ellis & Shanks, 1988; Ellis et al., 1996; Lilly et al., 1995; Cowie et al., 1996; Lin et al., 1999) for which the physical cause is unclear. The issues in designing a dynamically useful redshift survey are centered around efficiently and economically obtaining sufficient data to conclusively answer the questions posed about the evolution of clustering in galaxies and the dark matter. At these relatively modest redshifts the galaxies have spectra whose lines are within the range where efficient multi-object spectrographs allow velocities to be obtained with a precision comparable to local surveys. The observational procedures build on those for the CNOC1 cluster redshift survey (Yee, Ellingson & Carlberg, 1996), although there are considerable differences of detail. The strategy and procedures are discussed in greater detail in the CNOC2 methods paper (Yee et al., 2000). A representative volume in the universe must contain a reasonable number of low richness clusters and a good sampling of 50 $`h^{-1}`$ Mpc voids, or equivalently the “cosmic” variance from one sample to another of equivalent design, but different sky positions, should not be too large. Taking a CDM spectrum (with $`\sigma _8=1`$ and $`\mathrm{\Gamma }_s=0.2`$; see Efstathiou, Bond & White 1992) as a guide we find that even a $`50h^{-1}\text{Mpc}`$ sphere, with enclosed volume of about $`0.5\times 10^6h^{-3}\mathrm{Mpc}^3`$, has an expected dispersion in galaxy numbers of approximately 17%. However, a spherical geometry is an ideal not available to relatively narrow angle surveys. The best available option is to spread the area of the survey over several independent patches on the sky. Each patch should subtend an angle exceeding the correlation length. At $`z=0.4`$ a patch 0.5 degree across subtends 8.8 $`h^{-1}`$ Mpc (co-moving, for $`\mathrm{\Omega }_M=0.2,\mathrm{\Omega }_\mathrm{\Lambda }=0`$) which turns out to be about two correlation lengths. We can crudely approximate the survey as a set of cylinders or spheres arranged in a row. A sphere of radius the correlation length (whatever that happens to be) has an expected variance of about $`3/(3-\gamma )`$ in the numbers of galaxies (Peebles, 1980), where $`\gamma =1.8`$ is the approximate slope of the power law portion of the correlation function. Over our redshift range there will be about 200 of these spheres in our sample, which if we divide into bins of 50, statistically reduces the variance in binned counts to about 30%.
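The $`3/(3-\gamma )`$ figure follows from averaging a power-law correlation function over a sphere; the following is a quick numerical check of that arithmetic (not of the CDM-spectrum calculation behind the 17% figure above).

```python
# Fractional variance of counts in a sphere of radius R for xi(r) = (r0/r)^gamma:
# <(dN/N)^2> ~ xi_bar(R) = (3/R^3) Int_0^R xi(r) r^2 dr = [3/(3-gamma)] (r0/R)^gamma,
# which evaluates to 3/(3-gamma) = 2.5 at R = r0 for gamma = 1.8.
from scipy.integrate import quad

gamma, r0 = 1.8, 1.0
R = r0
xi_bar, _ = quad(lambda r: (r0 / r) ** gamma * r ** 2, 0.0, R)
xi_bar *= 3.0 / R ** 3
print(xi_bar, 3.0 / (3.0 - gamma) * (r0 / R) ** gamma)   # both ~ 2.5
```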
The observational practicalities of always having a field accessible at modest airmass, control over patch to patch variations, and the constraint of not unduly fragmenting the survey area, indicate that we need a minimum of four and no more than about eight of these patches. We choose four. The resulting survey is well suited to measuring the evolution of correlations. The CNOC2 survey is contained in four patches on the sky. Each patch consists of a central block, roughly 30 arcminutes on a side, with two “legs”, 10 arcminutes wide and about 40 arcminutes in extent, to provide an estimate of the effects of structure on larger scales. The resulting total sky area is about 1.55 square degrees. The sampled volume is about $`0.5\times 10^6h^{-3}\mathrm{Mpc}^3`$, roughly comparable to the CfA survey used for similar measurements at low redshift (Davis & Peebles, 1983), which had 1230 galaxies in the “semi-volume limited” Northern sample from which the correlation length was derived. The redshift range targeted, 0.1 to about 0.7, suggests we set the limiting magnitude at $`R=21.5`$ mag, which gives a median redshift of about 0.4. At this limit the sky density is about 6000 galaxies per square degree, yielding a photometric sample of about 10,000 galaxies to the spectroscopic limit. Photometry is obtained in the UBVRI bands, with the R-band fixing the sample limit at 21.5 mag. The R filter has the important feature of always being redward of the 4000Å break over our redshift range. The other bands provide information useful for determining appropriate k-corrections and separating galaxies into types of different evolutionary state (an issue not considered in this paper). The spectra are band-limited with a filter that restricts the range to 4390-6293Å. The S/N and spectral resolution give an observed-frame velocity error of about 100 $`\mathrm{km}\mathrm{s}^{-1}`$, as determined by comparison of independent spectra of the same objects. In total there are about 6000 galaxies with redshifts in our sample. At low redshift we will use the Las Campanas Redshift Survey (LCRS) to provide a directly comparable sample. The LCRS is an R-band selected survey (Shectman et al., 1996) that covers the redshift range 0.033 to 0.15, with R magnitudes restricted to 15.0 and 17.7 mag. The bright magnitude limit leads to higher luminosity galaxies being depleted at low redshifts. The LCRS’s selection function varies from field to field and undersamples galaxies with separations less than about 100 $`h^{-1}`$ kpc. We compute both magnitude and geometric weights using the approach adopted in the CNOC cluster redshift survey (Yee, Ellingson & Carlberg 1996), but smoothing over a circle of 0.4 degrees. The same procedures are used for the CNOC2 data, using a smoothing radius of 2 arcminutes. The same correlation analysis programs were used for the LCRS and CNOC2 data. The only differences are that the LCRS data are not k-corrected and are analyzed for a single cosmological model, $`\mathrm{\Omega }_M=0.2,\mathrm{\Omega }_\mathrm{\Lambda }=0`$. ### 2.1 The High-Luminosity Subsample In order to determine how clustering evolves we must measure the correlations of a statistically identical population of galaxies at increasing redshifts, which demands a secure statistical knowledge of the evolution of the galaxy luminosity function. Individual galaxy luminosities change through stellar evolution, new star formation and merging with other galaxies.
All of this would have no effect if galaxy correlations were independent of mass and luminosity. However, there is a strong theoretical expectation that galaxy clustering will increase with mass and hence luminosity (Kaiser, 1984) and there is growing evidence that the effect is observationally present at both low (Loveday et al., 1995) and intermediate redshifts (Carlberg et al., 1998). The practical issue is to define samples at different redshifts which can be sensibly compared. If the evolution is purely in luminosity, then we would want to compensate for the luminosity evolution so that the sample limit brings in galaxies of the same intrinsic luminosity at all redshifts. This would identify the same galaxies at all times. If evolution were pure merging, with no star formation, then it is of particular interest to compare the clustering of galaxies of the same total stellar mass at different redshifts. As a currently practical stand-in for stellar mass we use the k-corrected and evolution compensated R-band absolute luminosity. Our study of the CNOC2 luminosity function evolution (Lin et al., 1999) found that the R-band evolution of the k-corrected galaxy luminosity function could be approximated as pure luminosity evolution at a rate, $$M_R^k(z)=M_R^{k,e}-Qz,$$ (1) with a mean $`Q\simeq 1`$ (Lin et al., 1999). Equation 1 defines the k-corrected and evolution-compensated R absolute magnitudes that we use to select the sample for analysis. Figure plots $`M_R^{k,e}`$ as a function of redshift for the entire flux limited, $`m_R\le 21.5`$ mag, CNOC2 sample. It is not necessary to know the fractional completeness for a correlation measurement. Our correlation analysis will use galaxies with $`M_R^{k,e}\le -20`$ mag, which defines a volume limited sample over the $`0.1\le z\le 0.65`$ range. Our resulting subsample (for $`\mathrm{\Omega }_M=0.2,\mathrm{\Omega }_\mathrm{\Lambda }=0`$) contains 2285 galaxies. The alternate cosmological models considered below lead to slightly different absolute magnitudes and sample sizes. Beyond $`z\simeq 0.55`$ the limited CNOC2 spectral band pass leads to a lower probability that a redshift will be obtained for redder galaxies (Lin et al., 1999; Yee et al., 2000). This may lead to an erroneously low correlation in this redshift range. However, as far as we can tell from the correlation statistics, the high-luminosity galaxies for which we do have redshifts in this range have correlations statistically consistent with a smooth continuation of those at lower redshift. The LCRS data are evolution-compensated with the same $`Q`$ as the CNOC2 data, although at a mean redshift of about 0.1, this makes very little difference. The resulting low-redshift subsample derived from LCRS contains 12467 galaxies for the correlation analysis.
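A schematic of the sample selection implied by Equation (1): compute the k-corrected absolute magnitude from the apparent magnitude and luminosity distance, add the $`Qz`$ compensation, and keep galaxies brighter than the $`-20`$ mag cut. The cosmology helper and the k-correction value below are illustrative stand-ins, not the survey's actual k-corrections.

```python
import numpy as np
from scipy.integrate import quad

H0, c = 100.0, 2.998e5          # h = 1 units: km/s/Mpc and km/s
Om, OL = 0.2, 0.0               # the open cosmology used in the text

def d_lum(z):
    """Luminosity distance in Mpc for an open (curvature > 0) cosmology."""
    Ok = 1.0 - Om - OL
    E = lambda zz: np.sqrt(Om * (1 + zz) ** 3 + Ok * (1 + zz) ** 2 + OL)
    dc, _ = quad(lambda zz: 1.0 / E(zz), 0.0, z)
    dc *= c / H0
    rk = np.sqrt(Ok) * H0 / c
    return (1 + z) * np.sinh(rk * dc) / rk

def M_ke(m_R, z, k_corr, Q=1.0):
    """k-corrected, evolution-compensated absolute magnitude, from Eq. (1)."""
    M_k = m_R - 5.0 * np.log10(d_lum(z) * 1e6 / 10.0) - k_corr
    return M_k + Q * z            # compensate the ~1 mag per unit z evolution

# e.g. a galaxy at the survey flux limit at z = 0.4 with an assumed k(z) = 0.3 mag;
# it is kept in the high-luminosity subsample only if the result is <= -20.
print(M_ke(21.5, 0.4, k_corr=0.3))
```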
## 3 Real Space Correlations The goal of this paper is to estimate the evolution of the clustering of a well defined population of galaxies. The CNOC2 sample is designed to measure nonlinear clustering on scales of 10$`h^{-1}`$ Mpc and less, where clustering is quite naturally measured in terms of the two-point correlation function, $`\xi (r)`$. This function measures the galaxy density excess above the mean background density, $`n_0`$, at distance $`r`$ from a galaxy, $`n(r)=n_0[1+\xi (r)]`$ (Peebles, 1980). Measurement of the real space correlation function $`\xi (r)`$ is not straightforward with redshift space data. The projected real space correlation function, $`w_p`$, removes the peculiar velocities of redshift space at the cost of making a choice for the length of the redshift column over which the integration is done. The correlation function is a measure of the variation in galaxy numbers from one volume to another. The measurement technique can easily and fairly subtly either artificially increase or decrease the variation around the estimated mean. Our survey was designed to have enough redundancy to explicitly test for a number of these effects. We will represent the evolving correlations with a double power law model. That is, over the range of scales that we are investigating, the two point correlation is well represented as a power law $`\xi (r|z)=[r_0(z)/r]^\gamma `$. Furthermore, the evolution of $`\xi `$ is accurately described, over the redshift range we investigate, with the “$`ϵ`$” model, $`\xi (r|z)=\xi (r|0)(1+z)^{-(3+ϵ)}`$, if lengths are measured in physical co-ordinates (Groth & Peebles, 1977; Koo & Szalay, 1984). In this equation the factor of $`(1+z)^3`$ allows for the change in the mean density of galaxies due to expansion. Consequently, if the universe consists of a set of physically invariant clusters in a smooth background, then $`ϵ=0`$. A clustering pattern that is fixed in co-moving co-ordinates would have $`ϵ=\gamma -3`$, approximately $`-1.2`$ for our mean $`\gamma `$. A positive $`ϵ`$ indicates a decline of the physical clustered density with increasing redshift, as might be expected if clustering is growing. Combining these two power law equations gives the double power law $`ϵ`$ model, $`\xi (r|z)=(r_{00}/r)^\gamma (1+z)^{-(3+ϵ)}`$, in physical co-ordinates, or, rewritten in co-moving co-ordinates, $$r_0(z)=r_{00}(1+z)^{-(3+ϵ-\gamma )/\gamma }.$$ (2) The following sections discuss the steps leading up to our measurement and modeling of the co-moving $`r_0(z)`$. First we measure the unclustered distribution $`n(z)`$, then the projected real space correlation function $`w_p(r_p)`$ (after choosing the appropriate redshift space integration limit $`R_p`$), fit those values to the projection of $`\xi (r)=(r_0/r)^\gamma `$, estimate random errors, do a $`\chi ^2`$ fit to determine the evolution parameter $`ϵ`$ and finally discuss the systematic errors.
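Equation (2) is compact enough to code directly; a minimal sketch, with the two reference behaviours just described plus a positive-$`ϵ`$ case (the $`r_{00}`$ value is illustrative):

```python
def r0_comoving(z, r00, gamma=1.8, eps=0.0):
    """Co-moving correlation length in the epsilon model, Eq. (2)."""
    return r00 * (1.0 + z) ** (-(3.0 + eps - gamma) / gamma)

r00 = 5.0   # h^-1 Mpc, illustrative
for eps, label in [(0.0, "fixed physical clustering"),
                   (1.8 - 3.0, "fixed co-moving clustering"),
                   (0.8, "declining physical clustered density")]:
    print(label, [round(r0_comoving(z, r00, eps=eps), 2) for z in (0.0, 0.3, 0.6)])
```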
### 3.1 The Unclustered Distribution A crucial operational detail of correlation measurements is to accurately assess the mean unclustered density as a function of redshift. Once the smooth $`n(z)`$ is known, then we follow the usual procedure and generate a random sample which follows the redshift distribution of the data as if they were unclustered. We generate uniform random positions in the sky area occupied by the galaxies, as approximated by a series of rectangles. Correlations are then readily measured as the ratio of the number of galaxy-galaxy pairs to galaxy-random pairs. The use of a sample uniformly distributed on the sky assumes that there are no angular selection effects. For instance, crowding of spectrographic fibres causes the LCRS to be strongly under-sampled for pairs closer than 100$`h^{-1}`$ kpc. The CNOC2 spectroscopic sample was double masked to try to fairly sample all pair separations. Slit crowding still leads to some under-sampling in CNOC2 (Yee et al., 2000). Consequently we leave these small scales out of all our fits. To minimize other geometric effects, we apply to both samples an explicit geometric weight, calculated following the procedures of Yee, Ellingson & Carlberg (1996). Comparison with unweighted correlations shows that this makes little practical difference to the resulting CNOC2 correlations, since the corrections are 10% or so in the mean. One approach to generating an unclustered distribution in redshift is to use the luminosity function, which can be estimated using maximum-likelihood techniques that are insensitive to the clustering. The complication here is that the luminosity functions need to be generated taking into account the magnitude selection function, in which only about half of the galaxies within the photometric limit have redshifts. Moreover, the selection function is magnitude dependent. A more direct approach is to model directly the redshift distribution of the selected subsample. We chose a model function having sufficiently few parameters that it is not very sensitive to the details of the clustering that are present. As our fitting function we adopt a Maxwellian form, $$n(z|\sigma _z,z_p)=n_0z^2\mathrm{exp}\left[-\frac{1}{2}\left(\frac{z-z_p}{\sigma _z}\right)^2\right],$$ (3) where $`\sigma _z`$ and $`z_p`$ are fitting parameters. This function was arrived at after trying various combinations of exponential cutoffs and power law rises at low redshifts, including log-normal types of distributions. The resulting form is both simple and adequately describes the data. We use a maximum likelihood approach to find the parameters of this function. The logarithm of the likelihood is $`\mathrm{log}(L)=\sum _i\mathrm{log}(L_i)`$, where the individual likelihoods are, $$L_i(z|\sigma _z,z_p)=\frac{n(z|\sigma _z,z_p)}{\int _{z_b}^{z_t}n(z|\sigma _z,z_p)dz}.$$ (4) The redshift range $`z_b`$ to $`z_t`$ for the fit is taken as 0.05 to 0.70 for the CNOC2 sample, although we only use the data between redshift 0.10 and 0.65. The redshift limits are 0.033 and 0.15 for the LCRS. The CNOC2 redshift distribution for all four patches and the resulting fit are shown in Figure in bins of $`\mathrm{\Delta }z=0.01`$. The best overall fit has $`\sigma _z\simeq 0.18`$ and $`z_p\simeq 0.230`$. If the strong clustering feature at $`z\simeq 0.15`$ in the 2148-05 patch is not included, then the fits to the individual fields are all consistent with the global fit. Inserted into the Figure is a panel showing the 68, 90 and 99% confidence contours from the maximum likelihood fit. The error is about 10% in the $`z_p`$ parameter and 5% in $`\sigma _z`$. Given these well defined parameters we find that the 68% confidence $`n(z)`$ range is about 10% at any redshift we use in the analysis. This is sufficient to measure correlations to 20% precision with respect to the background out to about 2 correlation lengths. A small systematic effect visible in a $`\mathrm{\Delta }z=0.001`$ version of the $`n(z)`$ plot is that there are small redshift “notches” at 0.496 and 0.581 when the \[OII\] line falls on the 5577Å or 5892Å night sky line, respectively. This leads to an underestimate of the true mean density in the 0.45-0.55 and 0.55-0.65 redshift bins. This will bias the derived $`r_0`$ upwards in these two redshift bins by about 2% and 5% respectively, which is within our random errors.
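A minimal sketch of the maximum-likelihood fit of Equations (3) and (4); the redshifts generated below are placeholders standing in for the survey sample:

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize

zb, zt = 0.05, 0.70                      # fit range used for CNOC2

def neg_log_like(params, zs):
    sig, zp = params
    shape = lambda z: z ** 2 * np.exp(-0.5 * ((z - zp) / sig) ** 2)
    norm, _ = quad(shape, zb, zt)        # denominator of Eq. (4)
    return -np.sum(np.log(shape(zs) / norm))

rng = np.random.default_rng(0)
zs = rng.uniform(0.10, 0.65, 2000)       # placeholder redshifts
fit = minimize(neg_log_like, x0=[0.2, 0.25], args=(zs,), method="Nelder-Mead")
sigma_z, z_p = fit.x
print(sigma_z, z_p)
```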
### 3.2 The Projected Real Space Correlation Function The correlation function is a real space quantity, whereas the redshift space separation of two galaxies depends on their peculiar velocities as well as the physical separation. Although the peculiar velocities contain much useful information about clustering dynamics, they are an unwanted complication for the study of configuration space correlations. The peculiar velocities are eliminated by integrating over the redshift direction to give the projected correlation function, $$w_p(r_p)=\int _{-R_p}^{R_p}\xi (\sqrt{r_p^2+r_z^2})dr_z$$ (5) (Davis & Peebles, 1983). If we take a power law correlation $`\xi (r)=(r_0/r)^\gamma `$ and integrate to $`R_p=\infty `$ we find, $`w_p(r_p)/r_p=\mathrm{\Gamma }(1/2)\mathrm{\Gamma }((\gamma -1)/2)/\mathrm{\Gamma }(\gamma /2)(r_0/r_p)^\gamma `$ (Peebles, 1980). However, in a practical survey, summing over ever increasing distances leads to little increase in the signal and growing noise from fluctuations in the field density. The signal-to-noise considerations in the choice of $`R_p`$ are straightforward. To capture the bulk of the correlation signal, $`R_p`$ should be significantly larger than the local $`r_0`$ and the length corresponding to the pairwise velocity dispersion, $`\sigma _{12}/H(z)`$. These are both about 3 or 4 $`h^{-1}`$ Mpc. Large values, say $`R_p\sim 100h^{-1}\text{Mpc}`$, might more completely integrate the correlation signal but they do so at the considerable cost of increased noise. Exactly where to terminate the integration depends greatly on the range of correlations of interest. Here we are focused on the non-linear correlations, $`\xi >1`$. Before we evaluate an appropriate choice for $`R_p`$ we must choose a correlation function estimator. ### 3.3 Galaxy-Galaxy Clustering The optimal choice of a statistical estimator of the correlation function depends on the application. With point data the basic procedure is to determine the average number of neighboring galaxies within some projected radius, $`r_p`$, and redshift distance $`R_p`$. The $`ij`$ pair is weighted as $`w_iw_j`$ and the sum over all sample pairs is $`DD`$ (Peebles, 1980). A random sample of redshifts following the fitted $`n(z)`$ is generated along with $`xy`$ co-ordinates in the visible sky area of the catalogue. We then compute the average number of random sample galaxies within precisely the same volume, assigning the random points unit weight. This average is known as $`DR`$. Then, we estimate $`w_p(r_p)`$ using the simplest and computationally inexpensive estimator, $$w_p=\frac{DD}{DR}-1,$$ (6) which is accurate for the nonlinear clustering examined here and faster than methods which include $`RR`$, i.e. random-random pairs. We have verified that the $`DD/RR-1`$ and $`(DD-2DR+RR)/RR`$ estimators give virtually identical results over the range of pairwise separations that we use in the fits, $`0.16\le r_p\le 5.0h^{-1}\text{Mpc}`$. These alternate estimators are known to be superior when $`\xi \lesssim 1`$, which only occurs at the outer separation limit of our measurements. We use 100,000 random objects per patch, distributed over the redshift range 0.10 to 0.65 using the fitted $`n(z)`$. The DD and DR sums extend over all four patches, so that patch to patch variations in the mean volume density become part of the correlation signal. This procedure assumes that there are no significant patch-to-patch variations in the mean photometric selection function, which is supported by the absence of any significant differences in the number-magnitude relation from patch to patch. We use geometric weights alone for the results presented here.
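Putting this subsection together, a brute-force sketch of the estimator of Equation (6); it is illustrative and $`O(N^2)`$ (a real analysis would use the survey geometry and a tree or grid for pair finding), and the $`DR`$ normalization assumes the weights average to unity:

```python
# Projected correlation function via weighted pair counts, w_p = DD/DR - 1.
# Positions are (x, y, r_z): x, y transverse and r_z the line-of-sight
# co-moving coordinate. gal_w are geometric weights; randoms have unit weight.
import numpy as np

def pair_counts(pos_a, w_a, pos_b, w_b, rp_bins, R_p):
    counts = np.zeros(len(rp_bins) - 1)
    for (xa, ya, za), wa in zip(pos_a, w_a):
        sel = np.abs(pos_b[:, 2] - za) < R_p             # |r_z| < R_p cut
        rp = np.hypot(pos_b[sel, 0] - xa, pos_b[sel, 1] - ya)
        counts += wa * np.histogram(rp, bins=rp_bins, weights=w_b[sel])[0]
    return counts
    # (self-pairs land at r_p = 0, below any sensible first bin edge)

def w_p(gal_pos, gal_w, ran_pos, rp_bins, R_p=10.0):
    DD = pair_counts(gal_pos, gal_w, gal_pos, gal_w, rp_bins, R_p)
    DR = pair_counts(gal_pos, gal_w, ran_pos, np.ones(len(ran_pos)), rp_bins, R_p)
    DR *= len(gal_pos) / len(ran_pos)                    # normalize random counts
    return DD / DR - 1.0                                 # Eq. (6)
```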
Magnitude weights give statistically identical results for the same sample of galaxies, but, as might be anticipated, the errors in the resulting determination of the correlation evolution are nearly a factor of two larger. Estimated projected correlation functions, in co-moving co-ordinates using $`R_p=10h^{-1}\text{Mpc}`$, are displayed in Figure for the LCRS galaxies bounded by redshifts \[0.033, 0.15\] and seven somewhat arbitrary redshift bins for the CNOC2 data, \[0.10, 0.20, 0.26, 0.35, 0.40, 0.45, 0.55, 0.65\]. These boundaries make the end bins bigger than the middle ones to reduce the variation in numbers between the bins. For the open model we found the summed geometric weights in the bins to be \[151.8, 264.4, 267.3, 471.6, 301.2, 496.7, 286.2\], and \[172.2, 305.0, 321.1, 595.6, 368.3, 600.3, 315.4\] for the $`\mathrm{\Lambda }`$ cosmology, showing that the sample differences due to cosmology are quite small. Extensive testing found that, provided the bins are not made significantly narrower than the adopted limits, the bin sizes make no significant difference to the results. Adjusting the bins to have nearly constant numbers makes no difference to our result. Of course the LCRS data are very important for providing a solid measurement at low redshift. The measured $`w_p`$ are fit to the projection of the power law correlation function, $`\xi (r)=(\widehat{r}_0/r)^{\widehat{\gamma }}`$, estimating both $`\widehat{r}_0`$ and $`\widehat{\gamma }`$. The errors at each $`r_p`$ are taken as $`1/\sqrt{DD}`$. The fits are restricted to the $`0.16\le r_p\le 5.0h^{-1}\text{Mpc}`$ range where there are minimal complications from geometric selection effects and the correlation signal is strong. Figure displays these fits as solid lines. Also shown in Figure as dashed lines are fits where we have converted to standardized $`\gamma =1.8`$ correlation lengths, $`r_0`$, using $`r_0=\widehat{r}_0^{\widehat{\gamma }/1.8}`$. All results here are derived using co-moving co-ordinates, and normalized to a Hubble constant $`H_0=100h\mathrm{km}\mathrm{s}^{-1}\mathrm{Mpc}^{-1}`$. The results displayed in Figure are derived assuming a background cosmology of $`\mathrm{\Omega }_M=0.2,\mathrm{\Omega }_\mathrm{\Lambda }=0`$. ### 3.4 Random Errors of the Correlations The problem of error estimates for correlation measurements remains a topic of active research. The shot noise estimate of the fractional error as $`1/\sqrt{DD}`$ is appropriate for weak clustering, but a substantial underestimate for strongly nonlinear clustering, where the clustering itself reduces the effective number of independent pairs. A formal error expression in terms of the three- and four-point correlation functions is available (Peebles, 1980) but cumbersome and computationally expensive. Resampling techniques, such as the Bootstrap and Jackknife (Efron & Tibshirani, 1986), produce substantial over-estimates of the error. A straightforward approach to error estimates is to take advantage of our sample being distributed over a number of separate patches. We separately fit each of the four CNOC2 and six LCRS patches, to obtain an $`r_0`$ for each patch or slice.
The estimated error in any correlation length, $`r_0`$, is simply, $$\sigma _{r0}^2=\frac{1}{n-1}\underset{i=1}{\overset{n}{\sum }}[r_0(i)-\langle r_0\rangle ]^2,$$ (7) where the sum extends over the $`n=4`$ CNOC2 patches and $`n=6`$ LCRS slices at the redshift of interest. The average correlation length $`\langle r_0\rangle `$ in Eq. 7 is computed from the individual patches and is not equal to the correlation of the four fields combined, which is generally larger than the average since it includes the patch to patch variation in mean counts as part of the signal. Because we have only four CNOC2 patches and six LCRS strips the estimated errors will themselves have substantial fluctuations. The resulting co-moving correlation lengths for a power law model are displayed for a range of $`R_p`$ in Figure . The open circles are the results for the four individual CNOC2 patches (the individual LCRS slices are so similar that they are not displayed). The solid points give the result from the combined data, along with the estimated error. The mean of the fitted slopes is $`\gamma =1.87\pm 0.07`$. In principle our smaller values of $`R_p`$ could cause the measured $`w_p`$ to miss real correlation at large $`r_p`$, leading to $`\gamma `$ values that are systematically too large. However, for $`R_p\ge 10h^{-1}\text{Mpc}`$ our fitted $`\gamma `$ values have no significant dependence on $`R_p`$. ### 3.5 Observational Estimates of $`ϵ`$ The co-moving $`r_0(z_j)`$, derived from fits to the measured $`w_p`$, are in turn fit to the $`ϵ`$ model by minimizing, $$\chi ^2=\underset{j}{\sum }\left[\frac{r_0^\gamma (z_j)(1+z_j)^{3-\gamma }-r_{00}^\gamma (1+z_j)^{-ϵ}}{\sigma _\xi (z_j)}\right]^2,$$ (8) over the $`j`$ redshift bins by varying $`r_{00}`$ and $`ϵ`$. The quantity $`r_0^\gamma (z_j)(1+z_j)^{3-\gamma }`$ is proportional to the mean clustered physical density. The $`r_0(z_j)`$ are the results of the fits to the correlation measurement of the four patches combined. The $`\sigma _\xi (z_j)`$ are the variances estimated from the standard deviations of the $`r_0`$ values, $`\sigma _{r0}`$, re-expressed as a variance of the correlation amplitude, $`\sigma _\xi (z)\simeq [(r_0(z)+\sigma _{r0}(z))^\gamma -r_0^\gamma (z)](1+z)^{3-\gamma }`$. The $`\sigma _{r0}`$ are evaluated using Equation 7. The $`\chi ^2`$ statistic allows us to evaluate absolute goodness of fit, as well as determine parameter confidence intervals.
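A sketch of the error and fitting chain of Equations (7) and (8); the input arrays are placeholders, not the measured values:

```python
import numpy as np
from scipy.optimize import minimize

gamma = 1.8

def sigma_r0(r0_patches):
    """Patch-to-patch scatter of the correlation length, Eq. (7)."""
    r0_patches = np.asarray(r0_patches)
    return np.sqrt(np.sum((r0_patches - r0_patches.mean()) ** 2)
                   / (len(r0_patches) - 1))

def chi2(params, z, r0, sig_r0):
    r00, eps = params
    data = r0 ** gamma * (1 + z) ** (3 - gamma)   # mean clustered physical density
    model = r00 ** gamma * (1 + z) ** (-eps)
    sig = ((r0 + sig_r0) ** gamma - r0 ** gamma) * (1 + z) ** (3 - gamma)
    return np.sum(((data - model) / sig) ** 2)    # Eq. (8)

print(sigma_r0([4.6, 5.4, 4.9, 5.2]))             # e.g. four patch values
z = np.array([0.1, 0.25, 0.45, 0.6])              # placeholder bin centers
r0 = np.array([4.9, 4.6, 4.4, 4.2])               # placeholder r_0(z), h^-1 Mpc
sig = np.array([0.4, 0.3, 0.35, 0.5])
fit = minimize(chi2, x0=[5.0, 0.0], args=(z, r0, sig), method="Nelder-Mead")
print(fit.x)                                      # best-fit (r00, eps)
```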
### 3.6 Systematic Errors of the Correlations We can assess the effect of varying $`R_p`$ in the $`w_p`$ integration using the results of the fits to the $`ϵ`$ model, Eq. 2. Fitted $`r_{00}`$ and $`ϵ`$ are displayed as a function of $`R_p`$ in Figures and . The errors are the 90% confidence intervals. No 90% confidence fits were found at $`R_p`$ of 20 and 100 $`h^{-1}`$ Mpc, which likely reflects variations in the estimated errors more than a true failure of the model. From these two figures we conclude that $`R_p=`$ 10 or 30 $`h^{-1}`$ Mpc converge to give statistically identical values of $`r_{00}`$ and $`ϵ`$. Smaller $`R_p`$ values fail to include the full signal and larger $`R_p`$ values give huge patch to patch variations as large voids come and go. The most conservative choice for $`R_p`$ is 10$`h^{-1}`$ Mpc, the one with the largest error in the stable range. Figure weakly suggests that a somewhat larger $`R_p`$ would lead to a small increase in the correlation length, to $`r_{00}\simeq 5.2h^{-1}\text{Mpc}`$. We will adopt the $`R_p=10h^{-1}\text{Mpc}`$ fits as our standard results, noting that the inferred $`ϵ`$ values have essentially no dependence on $`R_p`$. For small survey volumes the derived correlation length tends to systematically underestimate the result from a very large area. That is, clustering is known to be significant on scales of at least 100$`h^{-1}`$ Mpc, hence surveys smaller than that in any dimension are likely to be measuring the range of clustering about either a local valley or plateau, and not seeing the full range of clustered density. The effect of increasing survey size on correlations can be seen in Figure , in that the combined analysis (filled circles) generally gives correlations higher than the mean of the individual patches (other symbols). Quantitatively, the straight mean of the CNOC2 $`r_0`$ is 3.2$`h^{-1}`$ Mpc; the median is 3.4$`h^{-1}`$ Mpc. It is more appropriate to average together the pair counts, which is equivalent to taking the average $`\langle r_0^\gamma \rangle ^{1/\gamma }`$, leading to an average correlation length of 3.5$`h^{-1}`$ Mpc. Performing a joint correlation analysis of all four patches together gives an $`r_0`$ of 4.0$`h^{-1}`$ Mpc. This raises the question as to whether the correlations have converged within the current survey. The expected variation from patch to patch for the given volumes with narrow redshift bins is about 45%, which is consistent with the difference between a correlation length of 3.5 and 4.3$`h^{-1}`$ Mpc. In the combined sample with larger bins we expect that there could be as much as about 10% of the variance missing, which would boost the correlation lengths by another 5%. ## 4 The Evolution of Galaxy Clustering The correlation lengths for CNOC2 and LCRS, derived from fitting $`w_p(r_p)`$ as discussed in §3.3 and analyzed in precisely the same way for our standard $`R_p=10h^{-1}\text{Mpc}`$ and $`Q=1`$, are shown in Figure and reported in Table 1. It is immediately clear that there is relatively little correlation evolution for high-luminosity galaxies. It must be borne in mind that the sample is defined to be a similar set of galaxies with $`L\gtrsim L_{*}`$, with luminosity evolution compensated, which approximates a sample of fixed stellar mass with redshift. Samples which admit lower luminosity galaxies, or do not correct for evolution, or are selected in bluer pass-bands where evolutionary effects are larger and less certainly corrected, will all tend to have lower correlation amplitudes. The $`\chi ^2`$ contours of the $`r_{00}`$-$`ϵ`$ model fits to the measured correlations of §3 are shown in Figure . At redshifts beyond 0.1 or so, the choice of cosmological model has a substantial effect on the correlation estimates. Relative to a high matter density cosmological model, low density and $`\mathrm{\Lambda }`$ models have larger distances and volumes, which cause the correlations to be enhanced. The LCRS data are analyzed only within the $`\mathrm{\Omega }_M=0.2,\mathrm{\Omega }_\mathrm{\Lambda }=0`$ model.
The correlations for three cosmologies, flat matter dominated, open, and low-density $`\mathrm{\Lambda }`$, are shown in Figure . The $`\chi ^2`$ contours at the 68%, 90% and 99% levels are shown in Figure . The best fit $`ϵ`$ value is $`-0.17\pm 0.18`$ for $`\mathrm{\Omega }_M=0.2,\mathrm{\Omega }_\mathrm{\Lambda }=0`$ with $`r_{00}=5.03\pm 0.08h^{-1}\text{Mpc}`$. The evolution rates for the flat matter dominated and flat low-density models are $`ϵ=+0.8\pm 0.22`$ and $`ϵ=-0.8\pm 0.19`$, respectively, with $`r_{00}`$ of $`5.30\pm 0.1h^{-1}\text{Mpc}`$ and $`4.85\pm 0.1h^{-1}\text{Mpc}`$, respectively. These are marked with plus signs in Figure . The effects of alternate values for the luminosity evolution are shown in Figure with crosses indicating the results for $`Q=0`$ and $`Q=2`$, the adopted value being $`Q=1`$. The absolute magnitude limit remains $`M_R=-20`$ mag in all cases and we use the $`\mathrm{\Omega }_M=0.2,\mathrm{\Omega }_\mathrm{\Lambda }=0`$ cosmology. For $`Q=0`$ the summed bin weights are \[176.1,329.2,347.7,659.7,417.7,673.6,318.4\] and for $`Q=2`$ they are \[128.7,203.7,190.2,317.6,191.2,296.0,159.3\]. The effect is that less (more) evolution compensation gives rise to a more (less) rapid decline in the correlations with increasing redshift. In the absence of any allowance for luminosity evolution, $`Q=0`$, galaxies of lower luminosity are included in increasing numbers at higher redshift. Galaxy correlations tend to decline slightly with decreasing luminosity, with the strongest decline at high luminosity (Loveday et al., 1995), although the details of this important effect remain controversial. A preliminary investigation finds a small effect in the CNOC2 sample (Carlberg et al., 1998). If we do not correct for luminosity evolution, then intrinsically lower luminosity galaxies are included in increasing numbers at higher redshifts, which leads to an increased rate of decline of correlations with redshift, as we have empirically demonstrated here. The observed effect over the 0.0 to 0.65 redshift range is approximately $`\mathrm{\Delta }ϵ\simeq 0.3\mathrm{\Delta }Q`$. This effect is partially responsible for the difference between the results here and those of LeFèvre et al. (1996). ## 5 Comparison of Observations and Theory We now can compare our measurements of clustering evolution to the various simple theoretical models and analytic fits to n-body simulation results. We will cast these predictions into the form of an equivalent theoretical $`ϵ_T`$ using, $$\xi (r,z_2)=\xi (r,z_1)\left(\frac{1+z_2}{1+z_1}\right)^{-(3+ϵ_T-\gamma )},$$ (9) where we have assumed that the correlation function will always be of the form $`r^{-\gamma }`$. Linear growth is the simplest case, $$\xi (r,z)=D^2(z,\mathrm{\Omega })\left(\frac{r_{00}}{r}\right)^\gamma ,$$ (10) where $`D(z,\mathrm{\Omega })`$ is the linear perturbation growth factor (Peebles, 1980, 1993). For $`\mathrm{\Omega }_M=1`$, $`\mathrm{\Omega }_\mathrm{\Lambda }=0`$, $`D(z)`$ is simply the expansion factor, $`a(z)=(1+z)^{-1}`$, which gives the result that $`ϵ_T=0.8`$. For other $`\mathrm{\Omega }`$ values we approximate the redshift dependence as a power law in $`1+z`$ by evaluating the growth factor at $`z=0`$ and $`z=0.5`$, as is appropriate for this survey.
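The conversion from growth factor to $`ϵ_T`$ can be scripted directly; a sketch using the standard integral expression for $`D(z)`$, with the comparison redshift $`z=0.5`$ as described in the text:

```python
# Effective epsilon_T for linear growth, Eqs. (9)-(10): co-moving xi ~ D^2(z)
# is matched to (1+z)^{-(3+eps_T-gamma)} between z = 0 and z = 0.5.
import numpy as np
from scipy.integrate import quad

def growth_D(z, Om, OL):
    """Linear growth factor (unnormalized): D ~ (5/2) Om E(a) Int da/(a E)^3."""
    Ok = 1.0 - Om - OL
    E = lambda a: np.sqrt(Om / a ** 3 + Ok / a ** 2 + OL)
    a = 1.0 / (1.0 + z)
    integral, _ = quad(lambda x: 1.0 / (x * E(x)) ** 3, 1e-8, a)
    return 2.5 * Om * E(a) * integral

gamma = 1.8
for Om, OL, label in [(1.0, 0.0, "flat matter"),
                      (0.2, 0.0, "open"),
                      (0.2, 0.8, "flat Lambda")]:
    ratio = growth_D(0.5, Om, OL) / growth_D(0.0, Om, OL)
    eps_T = gamma - 3.0 + 2.0 * np.log(1.0 / ratio) / np.log(1.5)
    print(label, round(eps_T, 2))   # reproduces eps_T = 0.8 for Omega_M = 1
```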
The results are presented in Table 2. An alternate clustering model is to allow for some bias, $`b(z)`$, of galaxies clustering with respect to the dark matter, $`\xi _{gg}=b^2\xi _{\rho \rho }`$. There are two simple forms which describe the bias, $`b(z)`$ (Mo & White, 1996), $$b_{MW}(z)=1-\frac{1}{\delta _c}+\frac{\delta _c}{D^2(z)\sigma ^2(M)},$$ (11) where $`\sigma (M)`$ is the tophat mass variance (Bardeen et al., 1986) in a sphere of radius $`R=\mathrm{\Omega }_M^{-1/3}h^{-1}\text{Mpc}`$ and the critical linear overdensity is $`\delta _c\simeq 1.68`$. A refinement to this formula based on fitting to n-body results is (Jing, 1998), $$b_J(z)=b_{MW}(z)\left(1+\frac{0.5D^4(z)\sigma ^4(M)}{\delta _c^4}\right)^{0.06-0.02n},$$ (12) where $`n=-d\mathrm{ln}\sigma ^2(R)/d\mathrm{ln}R-3`$ is the effective index of the perturbation spectrum. We use the fit to the CDM spectrum of Efstathiou, Bond & White (1992) to evaluate our tophat variances.
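A sketch of Equations (11) and (12); here $`\sigma (M)`$ today and the effective index $`n`$ are treated as inputs with placeholder values, whereas in the paper they come from the CDM spectrum fit:

```python
# Mo & White (1996) and Jing (1998) bias factors as coded from Eqs. (11)-(12).
delta_c = 1.68

def b_MW(D, sigma0):
    """Eq. (11); D is the linear growth factor normalized to D = 1 today."""
    s2 = (D * sigma0) ** 2
    return 1.0 - 1.0 / delta_c + delta_c / s2

def b_J(D, sigma0, n_eff):
    """Eq. (12), Jing's n-body-calibrated refinement."""
    s4 = (D * sigma0) ** 4
    return b_MW(D, sigma0) * (1.0 + 0.5 * s4 / delta_c ** 4) ** (0.06 - 0.02 * n_eff)

sigma0, n_eff = 1.0, -2.1          # placeholder values on the smoothing scale
for D in (1.0, 0.8, 0.6):          # growth factor at increasing redshift
    print(D, round(b_MW(D, sigma0), 2), round(b_J(D, sigma0, n_eff), 2))
```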
One possibility, of relevance only in an $`\mathrm{\Omega }_M=1`$ cosmology, is that clustering obeys a scaling law (Peebles, 1980; Efstathiou, Davis, White & Frenk, 1985), $`\xi (r,t)=\xi (s)`$, with $$s\propto r(1+z)^{2/(n+3)}.$$ (13) CDM has such a large negative effective index, $`n\simeq -2.1`$ on galaxy scales, that it gives rise to a very large theoretical value of $`ϵ_T\simeq 7`$, as was seen in the early CDM simulations (e.g., Davis et al., 1985). An $`ϵ_T`$ this large is completely excluded by clustering evolution studies. The observations and theoretical predictions in Table 2 allow us to draw a number of conclusions. Linear theory is not a very good model because of both biasing and nonlinearities, given that the range of scales fit in the power law spans overdensities ranging from $`10^3`$ to $`0.2`$, but it is a useful reference point. The open and $`\mathrm{\Omega }_M=1`$ cosmologies are consistent with linear theory growth, but the low density $`\mathrm{\Lambda }`$ model is marginally excluded at 3.4 standard deviations (s.d.). A comparison to n-body experiments (Colin, Carlberg & Couchman, 1997; Colin et al., 1999; Kravtsov & Klypin, 1999) shows the same approximate consistency with low density mass-traces-light cosmologies. Mo & White biasing in an open cosmology is marginally excluded at 3.8 s.d. and Jing biasing more conclusively at 4.7 s.d. Biased clustering in an $`\mathrm{\Omega }_M=1`$ cosmology is excluded at more than 6 s.d. The low density flat model is acceptable under all biasing models. Bearing in mind that the evolution of correlations is a test of both a galaxy formation and evolution model and the cosmology, these results mainly exclude models where galaxies are closely identified via the biasing mechanism with dark matter halos in an $`\mathrm{\Omega }_M=0.2,\mathrm{\Omega }_\mathrm{\Lambda }=0`$ cosmology or in an $`\mathrm{\Omega }_M=1`$ cosmology. The problem in both cases is that they predict almost no correlation evolution, whereas we observed a small but significant decrease of the co-moving correlation length with redshift. ## 6 Conclusions The CNOC2 redshift survey has measured precision velocities for more than 6000 galaxies in the redshift 0.1 to 0.7 range. The sky area of about 1.55 square degrees therefore covers a volume of about $`0.5\times 10^6h^{-3}\mathrm{Mpc}^3`$. We have defined a volume limited subsample of those galaxies with k-corrected and evolution corrected R-band absolute magnitudes of $`M_R^{k,e}\le -20`$ mag, where $`M_{*}\simeq -20.3`$ mag in the R-band. This subset contains about 2300 galaxies in the 0.1 to 0.65 redshift range. At low redshift we add about 13000 identically selected galaxies from the LCRS. The correlation measurements from this paper are contained in Figure and the associated Table 1. Over the redshift range examined, the correlation evolution can be described with the double power law model, $`\xi (r|z)=(r_{00}/r)^\gamma (1+z)^{-(3+ϵ-\gamma )}`$, in co-moving co-ordinates. We measure $`\gamma =1.87\pm 0.07`$ and set $`\gamma =1.8`$ for fitting purposes. Our results for various cosmologies and evolution corrections are shown in Figure . The primary conclusion is that correlations show a weak decline with redshift. Furthermore, there is no evidence in the current data for a change in the slope of the correlation function with redshift. These observations test both the amplitude and redshift evolution of clustering predictions. They jointly constrain the cosmology and the galaxy formation history, and do not provide any strong constraints on the background cosmology by themselves. The comparison of our measurements and analytic fits is presented in Table 2. The correlation amplitude and slope are in quite good agreement with appropriately selected dark matter halos in a CDM simulation (Pearce et al., 1999); however, the agreement depends fairly sensitively on the mass range selected (Kauffmann et al., 1999). The rate of decline of clustering with redshift is slow but significant in all cosmologies examined here. That is, $`ϵ`$ is always greater than $`-1.2`$, the value for a fixed co-moving clustering length. The prediction of a slower clustering evolution in a biased $`\mathrm{\Omega }_M=1`$ cosmology is completely excluded by the more rapid decline measured here, and somewhat marginally excluded in an $`\mathrm{\Omega }_M=0.2,\mathrm{\Omega }_\mathrm{\Lambda }=0`$ cosmology. The models of biased clustering in an $`\mathrm{\Omega }_M=0.2,\mathrm{\Omega }_\mathrm{\Lambda }=0.8`$ cosmology are statistically consistent with our measurements. This research was supported by NSERC and NRC of Canada. HL acknowledges support provided by NASA through Hubble Fellowship grant #HF-01110.01-98A awarded by the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., for NASA under contract NAS 5-26555. We thank the CFHT Corporation for support, and the operators for their efficient control of the telescope.
# Simulations Applied to the Bright SHARC XCLF: Results and Implications ## 1. Introduction The interest in the X-ray Cluster Luminosity Function (XCLF) has been heightened in recent years with the realization that a measure of its evolution with redshift could be used to derive the value of the cosmological parameter $`\mathrm{\Omega }_o`$ (cf. Nichol et al., 1999, Oukbir & Blanchard, 1997). The conclusions drawn from this type of analysis are, however, extremely model dependent, and observational difficulties aside, it may be a long time before a consensus is reached as to the value of $`\mathrm{\Omega }_o`$ (cf. Reichart et al. 1999, Bahcall et al. 1997). Simply understanding the origin and evolution of clusters of galaxies is interesting in itself, however, and a measure of the evolution of the XCLF provides insights into the process of cluster formation. As we will describe below, it is interesting to note that, although there is no statistically significant evidence in our Bright SHARC survey for evolution of the XCLF out to z = 0.8, there is a great deal of evidence that the X-ray emission (and other properties of clusters) are continually evolving. Therefore, somehow clusters evolve, but manage to keep the total luminosity function relatively constant. The process that could do this is hierarchical clustering (cf. Doroshkevich et al. 1998), in which smaller, X-ray fainter clusters evolve into larger, X-ray brighter clusters, in just such a way as to keep the XCLF approximately constant with redshift. The conclusion then is that the most likely portion of the XCLF in which to detect evolution is the bright end, where hierarchical coalescence requires the most time to form such large structures. Below, we begin by giving just one example of cluster evolution. Then we describe the results of our simulations and their impact on the SHARC derivation of the luminosity function for $`0.3<z<0.8`$. Next, we discuss how the assumptions included in the simulations affect the conclusions that can be drawn about evolution. Finally, we demonstrate that not only is better sky coverage needed to improve statistics, but knowledge of the evolution of the shape of the emission profiles is also going to be necessary before the true completeness of a survey can be accurately determined. ## 2. Cluster Evolution There are many pieces of evidence that clusters are continually forming and evolving, but one of the earliest was the surprising discovery that the intra-cluster medium in the Perseus cluster is cooler in the core region (Ulmer & Jernigan, 1978). In fact, one of the explanations that Ulmer and Jernigan gave for this result was that the denser material in the core had cooled faster, due to the cooling time being related to the density of the material. This first discovery is often ignored, yet this was the discovery of what now seems to be the nearly ubiquitous phenomenon called “cooling flows” (cf. Peres et al., 1998). ## 3. The Simulations A key to any survey is to understand how effective the survey was at detecting objects. Without this understanding, the detected number of objects cannot be converted into a “true” number. The so-called “Bright SHARC” survey has been described in detail by Romer et al. (1999). A preliminary measure of the XCLF derived from the results of Romer et al. was reported by Nichol et al. (1999). Full details of our simulations can be found in Adami et al. (1999).
### 3.1. The Simulation Procedure The simulation program produced fake clusters which were placed on real ROSAT data sets that were used for analysis. The fake clusters were systematically placed within different annuli (but randomly otherwise) in the ROSAT PSPC field of view. The data set we used was a statistically complete sampling of the data set that was actually used for analysis, which included all ROSAT pointings with galactic latitude greater than $`|20^{\circ }|`$ and data within the region between $`2.^{\prime }5`$ and $`19^{\prime }`$ of the center of the field of view (FOV) of the ROSAT PSPC. These data sets were then put through the standard SHARC processing pipeline and extended sources were identified in the standard manner. If a fake cluster was found as an extended X-ray source, then the cluster was counted as “detected.” We also kept track of the apparent luminosity of the cluster to determine how much the cluster luminosity differed from the true (input) luminosity.
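A schematic of the bookkeeping implied by this procedure: bin the fake clusters by redshift and off-axis angle and take the detected fraction, with a binomial error. The bin edges and input arrays below are placeholders, not the values used in the actual simulations:

```python
# Detection efficiency per (redshift, off-axis angle) bin from fake-cluster runs.
import numpy as np

z_edges = np.arange(0.3, 0.9, 0.1)          # placeholder redshift bins
r_edges = np.array([2.5, 8.0, 13.0, 19.0])  # placeholder off-axis annuli, arcmin

def efficiency_map(z, offaxis, detected):
    """z, offaxis: input cluster properties; detected: boolean pipeline flag."""
    eff = np.zeros((len(z_edges) - 1, len(r_edges) - 1))
    err = np.zeros_like(eff)
    for i in range(len(z_edges) - 1):
        for j in range(len(r_edges) - 1):
            sel = ((z >= z_edges[i]) & (z < z_edges[i + 1]) &
                   (offaxis >= r_edges[j]) & (offaxis < r_edges[j + 1]))
            n = sel.sum()
            if n:
                p = detected[sel].mean()
                eff[i, j] = p
                err[i, j] = np.sqrt(p * (1 - p) / n)   # binomial error
    return eff, err
```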
### 3.2. The Results In Figure 1, we show two results. In the left-most panel, we show the detection efficiency as a function of redshift and radial distance from the center of the ROSAT PSPC FOV. We can see the effects of the degradation of the angular resolution in that clusters are less easily found in the outer portions of the FOV. We also see that the detection efficiency falls off with increasing redshift, as expected. At redshifts $`>0.55`$, we see some apparent detections at low luminosity, but a comparison with the right-most figure reveals the cause: confusion. The right-most panel shows that the apparent luminosity of these objects is much higher than the true luminosity, which can only mean that the fake cluster in our simulation landed close enough to a real object to have been mistakenly classified as a “detection.” This turns out to be a negligible effect, however, as the overall detection efficiency rate is small (usually less than 5%). This shows, however, the importance of optical follow-up and rigorous identification before an “extended source” can be truly identified as an X-ray bright cluster. We also see in Figure 1 that, generally, the luminosity was well determined by our process, except in cases where the detection rate is so small that in real life (as opposed to our simulations) they were easily removed from our sample by optical follow-up. These results are all for a standard set of parameters for the cluster profile, temperature, and cosmology (see Nichol et al. 1999). We next show in Figure 2 one effect of modifying the shape of the cluster emission profile. We see that it is much easier to detect clusters if their X-ray emission profile is the Navarro, Frenk and White (1997; NFW) model and that cooling flows do not have much effect. Also, at the highest redshifts, but not shown in a figure here, clusters which are more elliptical than the average are more easily detected. These two points demonstrate: (1) that knowledge of the shapes of the cluster emission at high redshift is extremely important for correcting for survey incompleteness; and, (2) that highly elliptical clusters are more easily detected than the average, so that this may explain in part why only clusters that appear to be highly elliptical have been detected at high redshift. Even $`L_x\sim 10^{45}`$ erg s<sup>-1</sup> clusters at $`z\sim 0.8`$ are apparently quite faint. Keeping the total number of photons emitted constant, but confining the emission region to a more highly elliptical shape, means the cluster has an intrinsically higher surface brightness and is, hence, easier to detect. For brevity, we do not show many of the other cases we tested, but the effect of the assumed cosmology on the derived detection efficiency is shown in Figure 3, which is discussed below. ## 4. The Simulations Applied To The Bright SHARC Sample In Figure 3 we show the results of applying the derived efficiency of detection to the Bright SHARC sample. There are several comparisons to be made within this figure. First there is the comparison between the standard model result and the preliminary result of Nichol et al. Here we see that the new, refined result does not differ in a statistically significant manner from the Nichol et al. work. Second, we see that the effect of assuming different cosmologies is negligible within the statistical uncertainty. Third, we see that the statistical uncertainty in the number of clusters in the highest luminosity bin is so large because these clusters are so rare, and that a much larger survey than the Bright SHARC is needed to determine whether there is any discernible evolution in the highest luminosity bin. Fourth, when we compare with the local XCLF, we see there is no statistically significant evidence for evolution of the XCLF as a whole, and that these results are qualitatively consistent with the scenario for hierarchical cluster formation discussed in the introduction. Follow-up work requires both much larger and deeper surveys. #### Acknowledgments. We thank the IGRAP meeting organizers, and acknowledge support in part from NASA grant NAG5-2432 and the NASA Illinois space grant. ## References Adami C., et al., 1999, ApJ, submitted Bahcall N.A., et al., 1997, ApJ, 485, L53 Doroshkevich A.G., Fong R., & Makarova O., 1998, A&A, 329, 14 Ebeling H., et al., 1997, ApJ, 479, L101 de Grandi S., et al., 1999, ApJ, 513, L17 Navarro J.F., Frenk C.S., & White S.D.M., 1997, ApJ, 490, 493 Nichol R.C., et al., 1999, ApJ, 521, L21 Oukbir J., & Blanchard A., 1997, A&A, 317, 1 Peres C.B., et al., 1998, MNRAS, 298, 416 Reichart D.E., et al., 1999, ApJ, 518, 521 Romer A.K., et al., 1999, ApJS, in press, astro-ph/9907401 (R99) Ulmer M.P., & Jernigan J.G., 1978, ApJ, 222, L85
no-problem/9910/quant-ph9910115.html
ar5iv
text
# IFT-UAM/CSIC-99-43 A Comment on Fisher Information and Quantum Algorithms J.J. Alvarez<sup>1</sup><sup>1</sup>1E-mail: juanjose.alvarez@uam.es and C. Gómez<sup>⋄‡</sup><sup>2</sup><sup>2</sup>2E-mail: cesar.gomez@uam.es Instituto de Física Teórica, C-XVI, Universidad Autónoma de Madrid E-28049-Madrid, Spain I.M.A.F.F., C.S.I.C., Serrano 113bis E-28006-Madrid, Spain Abstract > We show that Grover’s algorithm defines a geodesic in quantum Hilbert space with the Fubini-Study metric. From a statistical point of view Grover’s algorithm is characterized by a constant Fisher function. Quantum algorithms that change the complexity class, as Shor’s factorization algorithm does, do not preserve a constant Fisher information. An adiabatic quantum factorization algorithm in non-polynomial time is presented to exemplify the result. 1.- Recently a lot of attention has been paid to the problem of defining quantum algorithms . Generically a quantum algorithm defines a discrete path in a quantum Hilbert space, with the end point of the path corresponding to a quantum state that, after an appropriate measurement, will eventually provide, with high probability, the answer to a given problem. In this note we will work out some geometrical aspects of quantum algorithms. We will consider first the example of Grover’s algorithm . In this case it can be shown that the path - in the quantum Hilbert space - associated with the algorithm is a geodesic in the Fubini-Study metric . Geodesics in quantum Hilbert space are intimately connected with the Fisher information function . Using Fisher’s function we will define a formal Lagrangian on probability space such that its trajectories coincide with the quantum algorithm path. Next we will work out Shor’s factorization algorithm. We will show that in this case, as in any other involving a change in complexity class (from an NP to a P problem), the Fisher function does not remain constant. However it is possible to design a factorization algorithm using Grover’s scheme. In this factorization algorithm the Fisher function remains constant but the “computing time” is non-polynomial (the process is adiabatic with respect to Fisher’s “entropy”). This strongly indicates that changes in complexity class require non-conservation of the Fisher function - i.e. non-unitary quantum state projection. 2.- Grover’s algorithm provides a way to find, by means of a quantum computer , one particular item - in a set of N items randomly ordered - after approximately $`\frac{\pi }{4}\sqrt{N}`$ iterations. This algorithm is known to be optimal . In order to define the algorithm let us introduce the quantum state: $$|\psi \rangle =\frac{1}{\sqrt{N}}\sum _{i=0}^{N-1}|i\rangle $$ (1) The algorithm is determined by a set of states $`|\psi _j\rangle `$: $$|\psi _j\rangle =k_j|0\rangle +\sum _{i=1}^{N-1}l_j|i\rangle $$ (2) with: $$k_{j+1}=\frac{N-2}{N}k_j+2\frac{N-1}{N}l_j,\qquad l_{j+1}=-\frac{2}{N}k_j+\frac{N-2}{N}l_j$$ (3) where the state $`|0\rangle `$ represents the item we are looking for . We will think of (2) as the discrete path defining Grover’s algorithm. Let us approximate the discrete path (2) by a path: $$|\psi (\varphi )\rangle =\sum _{j=0}^{N-1}c_j(\varphi )|j\rangle $$ (4) depending on a continuous parameter $`\varphi `$.
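As a quick sanity check of the recursion (3), the following sketch (with an arbitrary illustrative value of N) iterates it from the uniform state (1) and confirms that after roughly $`\frac{\pi }{4}\sqrt{N}`$ steps the probability of the marked item is close to one.

```python
import math

def grover_amplitudes(N, steps):
    """Iterate the recursion (3): k_j is the amplitude on the marked item,
    l_j the common amplitude on the other N-1 items."""
    k = l = 1 / math.sqrt(N)          # uniform initial state (1)
    out = [(k, l)]
    for _ in range(steps):
        k, l = ((N - 2) / N) * k + 2 * ((N - 1) / N) * l, \
               -(2 / N) * k + ((N - 2) / N) * l
        out.append((k, l))
    return out

N = 1024
steps = int(round(math.pi / 4 * math.sqrt(N)))   # ~ (pi/4) sqrt(N) iterations
amps = grover_amplitudes(N, steps)
print(steps, amps[-1][0] ** 2)        # success probability k^2 close to 1
```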
The probabilities $`p_j(\varphi )=|c_j(\varphi )|^2`$ to find item $`j`$ at ”computer time” $`\varphi `$ are - for Grover’s algorithm - given by: $$p_0(\varphi )=\mathrm{sin}^2\varphi ,\qquad p_j(\varphi )=\frac{\mathrm{cos}^2\varphi }{N-1},j\ne 0$$ (5) These probabilities define a path on ”probability space”. Transitions from $`|\psi (\varphi )\rangle `$ to $`|\psi (\varphi +\delta \varphi )\rangle `$ are associated with the quantum computer operations performed by means of quantum gates. If these transformations are unitary we get: $$\langle \dot{\psi }|\dot{\psi }\rangle =\frac{1}{4}\sum _{j=1}^{N}\frac{\dot{p_j}^2}{p_j}=1$$ (6) provided we normalize the state $`|\psi \rangle `$, and where $`\dot{p_j}=\frac{dp_j}{d\varphi }`$. Equation (6) is our first contact with Fisher’s information function. In fact, defining: $$\mathcal{I}(\varphi )=\sum _{i=1}^{N}\frac{\dot{p_i}^2}{p_i}$$ (7) we notice that a path of states generated by unitary transformations is associated with a one-parameter family of probability distributions with a constant Fisher function of value equal to four. 3.- Introducing quantum phases by $`c_j=\sqrt{p_j}e^{i\phi _j}`$ the Fubini-Study metric on Hilbert space is given by: $$ds_{FS}^2=\frac{1}{4}\sum _{j=1}^{N}\frac{dp_j^2}{p_j}+\left[\sum _{j=1}^{N}p_jd\phi _j^2-\left(\sum _{j=1}^{N}p_jd\phi _j\right)^2\right].$$ (8) The induced metric on a path $`(p_j(\varphi ),\phi _j(\varphi ))`$ is given by: $$ds_{Ind.}^2=\frac{1}{4}\left(\mathcal{I}(\varphi )+4\sigma _{\dot{\phi }}^2\right)d\varphi ^2$$ (9) with $`\dot{\phi }=\frac{d\phi }{d\varphi }`$, and $`\mathcal{I}(\varphi )`$ the Fisher function defined in (7). For a path with $`\dot{\phi }=0`$ (this in particular means that the entanglement remains constant), as the one defined by Grover’s algorithm, the geodesic is given by minimizing (the use of Fisher’s function to define variational problems has also been considered in ) $$𝒮=\frac{1}{2}\int _A^B\left(\mathcal{I}_\varphi \right)^{1/2}𝑑\varphi $$ (10) with the constraint: $$\sum _{i=1}^{N}p_i=1.$$ (11) Defining new variables $`p_i=x_i^2`$ the equations of motion for the ”Lagrangian” $`\frac{1}{2}\left(\mathcal{I}_\varphi \right)^{1/2}`$ are: $$\ddot{x_i}-\left(\frac{\dot{\mathcal{I}}_\varphi }{\mathcal{I}_\varphi }\right)\dot{x_i}+\frac{\mathcal{I}_\varphi }{4}x_i=0.$$ (12) For any quantum algorithm performed by successive unitary transformations, we know $`\mathcal{I}(\varphi )=\mathrm{const}.`$, reducing (12) to the harmonic oscillator equation: $$\ddot{x_i}+\frac{\mathcal{I}_\varphi }{4}x_i=0$$ (13) where the natural frequency is given by $`\omega ^2=\frac{\mathcal{I}_\varphi }{4}=1`$. It is now easy to check that Grover’s path (5) is in fact a solution to (13). Thus, we conclude that Grover’s algorithm defines a geodesic path in quantum Hilbert space. 4.- Obviously we can always transform a quantum algorithm of Grover’s type into a one-parameter family of probability distributions $`p_i(\varphi )`$ with $`i`$ running over the Hilbert space basis. What we have pointed out in this note is that this family of probability distributions is completely determined by unitarity and the condition of minima for the ”Fisher information action” (10). Notice that in our definition of the Fisher information function the ”computing time” $`\varphi `$ is playing the role of a statistical estimator.
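The constancy of (7) along Grover's path can be checked numerically. The sketch below assumes the probabilities (5); the finite-difference step `eps` and the sample values of N and $`\varphi `$ are arbitrary choices.

```python
import math

def fisher(phi, N, eps=1e-6):
    """Numerical check that the Fisher sum (7) is constant (= 4)
    along Grover's path (5)."""
    def probs(x):
        p0 = math.sin(x) ** 2
        return [p0] + [(1 - p0) / (N - 1)] * (N - 1)
    p_plus, p_minus, p0 = probs(phi + eps), probs(phi - eps), probs(phi)
    return sum(((pp - pm) / (2 * eps)) ** 2 / p
               for pp, pm, p in zip(p_plus, p_minus, p0))

for phi in (0.2, 0.7, 1.3):
    print(round(fisher(phi, N=64), 6))   # ~ 4.0 at every phi
```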
In particular, with respect to this ”computing time”, $`\mathcal{I}(\varphi )`$ is constant as a consequence of unitarity. Hence, in Grover’s algorithm, the ”input information” at the starting point $`\varphi =\varphi _0`$ of the computation is given by: $$\sum _{i=0}^{N-1}\frac{\left(\partial p_i(\varphi )/\partial \varphi \right)^2}{p_i(\varphi )}\Big|_{\varphi =\varphi _0}$$ (14) and it is this quantity that remains constant in the process. This is in contrast to the evolution of the standard Fisher information - contained in $`\{p_i(\varphi )\}`$ \- concerning where the item we are looking for is located. Obviously this second form of information increases in the process until reaching its maximum, corresponding to the point where we find the desired solution. It is interesting to observe that the ”input information” (14) is determined by quantum unitarity and can not be smaller or bigger. In summary, we conclude that from the information theory point of view quantum computations of Grover’s type appear as equivalent to classical statistical processes governed by a minimum Fisher action. 5.- Next let us consider Shor’s factorization algorithm . As is well known, classical algorithms for number factorization require exponential time $`\mathrm{exp}(c(\mathrm{log}N)^{1/3}(\mathrm{log}\mathrm{log}N)^{2/3})`$, where N is the integer we want to factorize and c is some constant. From complexity theory, number factorization is considered an NP problem. Given a number N we can reduce the problem of factorizing N to finding the period r of the function $`f(a)=y^a\mathrm{mod}N`$ for a random number $`y`$ smaller than N and coprime with N. In Shor’s quantum algorithm the period of $`f(a)`$ is obtained in two steps. First we define the quantum register state: $$|\mathrm{\Psi }\rangle =\frac{1}{\sqrt{q}}\sum _{a=0}^{q-1}|a\rangle |y^a\mathrm{mod}N\rangle $$ (15) with $`N^2<q<2N^2`$. Then, we measure the value of $`y^a\mathrm{mod}N`$. For each eigenvalue $`l`$ we get the state: $$|\chi _l\rangle =\sqrt{\frac{r}{q}}\sum _{n=0}^{q/r-1}|l+nr\rangle .$$ (16) The next step is to proceed by a discrete Fourier transform to wash out the dependence on $`l`$. At the end of the process we get the desired period r in polynomial time: $`𝒪((\mathrm{log}N)^2(\mathrm{log}\mathrm{log}N)(\mathrm{log}\mathrm{log}\mathrm{log}N))`$. In this algorithm there is a series of unitary transformations which we can model in terms of standard quantum gates, and a typically non-unitary process consisting in the measurement projecting from the register state (15) to the state (16). As is clear from our previous discussion, the Fisher information will be conserved during the unitary discrete Fourier transform but will generically change in the non-unitary measurement process. This change is, as we will see in a moment, related to the change in complexity class achieved by Shor’s quantum algorithm. In order to visualize this more clearly let us design a way to find the period of $`f(a)`$ using a Grover-type algorithm. We start with the quantum register state (15). Let us define the following transformation: $$𝒞[|a\rangle |f(a)\rangle ]=1\mathrm{if}f(a)=f(1),\qquad 𝒞[|a\rangle |f(a)\rangle ]=0\mathrm{otherwise}.$$ (17) Grover’s loop of transformations is then defined by rotating by an angle $`\pi `$ the state $`|a\rangle `$ if $`𝒞[|a\rangle |y^a\mathrm{mod}N\rangle ]=1`$ and doing nothing otherwise. Once we have done that, we perform the inversion about the average as in Grover’s algorithm.
At the end of $`𝒪(N)`$ steps we will get: $$|\eta \rangle =\frac{1}{\sqrt{\tau }}\sum _{j=0}^{\tau -1}|1+jr\rangle $$ (18) where $`\tau `$ is the greatest integer less than $`\frac{q-1}{r}`$. So, just doing three measurement operations on the state $`|\eta \rangle `$ one finds, with high probability, the period r. As discussed in the first part of this note, the whole Grover process is unitary, preserving a constant Fisher information function. In terms of time it takes an exponential time of $`𝒪(2^{\mathrm{log}N})`$. The difference between the fast projection from (15) to (16) performed in Shor’s algorithm and the adiabatic slow one using the Grover loop defined above is that in the adiabatic one the complexity class is not changed and the Fisher function remains constant, playing the classical role of entropy. The quantum adiabatic algorithm using the Grover loop is certainly more efficient than the classical one and very likely more robust with respect to quantum decoherence problems than the faster Shor algorithm. Technologically it is more feasible, using for instance the recent implementations of Grover’s algorithm , . 6.- To finish we would like to suggest the following general conjecture: changes in complexity class should involve non-conservation of the Fisher information function and, reciprocally, constant Fisher information will not change the complexity class. Our exercise also shows that the typical non-unitary quantum projection from (15) to (16) used by Shor’s algorithm can be, for the practical purposes of quantum computation, done using only unitary transformations. The price you have to pay for adiabaticity is a longer time and no change of complexity class.
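For orientation, here is a classical sketch of the period-finding step that both Shor's algorithm and the Grover-type loop above address: it finds the order r of y modulo N by brute force (the exponential-time bookkeeping that the quantum routines speed up or, in the adiabatic case, reproduce unitarily) and then extracts the factors. The example values are illustrative; y must be coprime with N.

```python
from math import gcd

def period(y, N):
    """Smallest r > 0 with y^r = 1 (mod N); brute-force classical search."""
    r, x = 1, y % N
    while x != 1:
        x = (x * y) % N
        r += 1
    return r

def factor(N, y):
    """Shor's classical post-processing: factors from the period of y mod N."""
    r = period(y, N)
    if r % 2 == 0 and pow(y, r // 2, N) != N - 1:
        z = pow(y, r // 2, N)
        return gcd(z - 1, N), gcd(z + 1, N)
    return None          # unlucky y: try another random base

print(factor(15, 7))     # period of 7 mod 15 is 4 -> factors (3, 5)
```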
no-problem/9910/cond-mat9910122.html
ar5iv
text
# Deconstruction of the Trap Model for the New Conducting State in 2D In a series of recent papers, Altshuler, Maslov, and Pudalov (AMP) proposed that the recent experimental finding by Kravchenko et al., Popović et al., Simmons et al., and Hanein et al. of a new conducting state in a dilute 2D electron gas is really much ado about not very much. Namely, no new conducting state exists in a dilute 2D electron gas, and all experiments observing a downturn in the resistivity will eventually observe an upturn at sufficiently low temperatures. In defense of this view, they offer a trap model coupled with arguments from weak localization in which temperature-dependent traps are superimposed on a temperature-independent background potential. Within this model they predict that for a given concentration and strength of the trap potential, a downturn of the resistivity occurs but eventually the resistivity turns around and increases at some characteristic temperature, $`T_{\mathrm{min}}`$. They argue that $`T_{min}`$ should increase as the electron density increases. Consequently, saturation and eventual upturn of the resistivity should be easiest to observe in the high electron density samples. In fact, such an upturn has been observed, thus far, only in the highest density samples, in apparent agreement with the prediction of the trap model. While general criticisms have been levied at the AMP model, which actually relies on four parameters to fit the experimental data, their calculation of $`T_{\mathrm{min}}`$ has not been addressed. I show here that within the AMP model 1) $`T_{\mathrm{min}}`$ in fact decreases with increasing electron density and 2) $`T_{\mathrm{min}}`$ is on the order of $`1K`$, both of which are inconsistent with the experimental observations. Consequently, the lack of any upturn in the electrical resistivity in this temperature regime in the low electron density samples rules out the trapping model as a viable interpretation of the experiments on the new conducting state. Within a model that has both temperature-dependent and temperature-independent disorder, AMP write the resistivity accordingly as $`\rho _d(T)=\rho _1+\rho _0(T).`$ (1) In fact, a form of this type was first proposed by Pudalov for Si samples and later adopted in the context of the GaAs samples as a saturation of the resistivity was observed at low temperatures. Within the AMP model, the resistivity exhibits a minimum at $`T_{\mathrm{min}}={\displaystyle \frac{pa}{2}}{\displaystyle \frac{\rho _1^2}{d\rho _0/dT|_{T=T_{\mathrm{min}}}}}`$ (2) where $`p`$ and $`a`$ are numerical constants. In reaching the conclusion that $`T_{\mathrm{min}}`$ increases with increasing electron density, AMP used the experimental fact that the denominator, $`d\rho _0/dT|_{T=T_{\mathrm{min}}}`$, decreases as the carrier density increases. It is unfortunate, however, that AMP did not consider the density dependence of $`\rho _1`$, because to determine definitively the density dependence of a function, both the denominator and the numerator, rather than only the denominator, must be considered. The experiments clearly show that $`\rho _1`$, the resistivity from the residual scattering, is strongly dependent on the carrier density, $`n_s`$. For example, Hanein et al. have shown that $`\rho _1`$ is inversely proportional to $`n_s-n_c`$ in GaAs heterostructures. Nearly exponential density dependence of $`\rho _1^{-1}(n_s)`$ was also reported for Si in Ref. at $`n_s\gtrsim n_c`$.
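A small sketch may make the density dependence of Eq. (2) concrete. The numbers below are purely illustrative, not the measured values: $`\rho _1`$ is taken inversely proportional to $`n_s-n_c`$ as quoted above, the slope $`d\rho _0/dT`$ is taken to fall roughly as $`1/n_s`$, and the product of the constants p and a is folded into one factor. With these inputs $`T_{\mathrm{min}}`$ from (2) decreases with density because $`\rho _1^2`$ falls faster than the slope.

```python
def t_min(rho1, slope, pa=1.0):
    """Eq. (2): T_min = (p*a/2) * rho1^2 / (d rho0/dT at T_min)."""
    return 0.5 * pa * rho1 ** 2 / slope

n_c = 0.8                         # critical density, hypothetical units
for n_s in (1.0, 1.5, 2.0, 3.0):
    rho1 = 1.0 / (n_s - n_c)      # rho1 ~ 1/(n_s - n_c), schematic
    slope = 5.0 / n_s             # slope decreasing with density, schematic
    print(n_s, t_min(rho1, slope))   # T_min falls as n_s grows
```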
Inclusion of this effect leads to precisely the opposite conclusion regarding the density dependence of $`T_{\mathrm{min}}`$. To show this, I analyse the beautiful data of Pudalov et al. of Ref. 3b on Si-MOSFET’s. Specifically, I focus on the data shown in the inset of Fig. 3. Shown there is a plot of the resistivity as a function of temperature for 11 electron densities. To consider the most favourable case for the AMP scenario, I determined the slope of the resistivity from its largest value. Because $`T_{\mathrm{min}}`$ is inversely proportional to $`d\rho _0/dT`$, my estimate will then be a lower bound to $`T_{\mathrm{min}}`$. Using the digitization feature of Ghostview, I simply chose two points on the steepest part of $`\rho (T)`$ and then determined the slope. Consequently, my analysis does not fall prey to the ambiguity suggested in the response by AMP. In addition, $`\rho _1`$ was obtained from the extrapolated leveled value of $`\rho (T)`$ at zero temperature. I display in Figure 1 a plot of $`T_{\mathrm{min}}`$ versus the electron density obtained by analysing each of the eleven curves shown in the inset of Figure 3 in Ref. 3b. Further, to remove any ambiguity, I have provided the data points used in the analysis in the figure caption. As the figure clearly shows, $`T_{\mathrm{min}}`$ predicted by the AMP model decreases with increasing electron density (roughly as $`1/n_s`$), in contrast to their claim. Hence, rather than corroborating the AMP scenario, the upturn at high electron density now stands in stark contrast to what their model actually predicts. Further, the $`T_{\mathrm{min}}`$’s determined here represent a lower bound to the turnaround temperature. As these temperatures are all on the order of 1 K, they are certainly well accessible experimentally. However, no such turnaround has been observed in the experiments in the low density samples on the conducting side. In fact, the recent finding by Kravchenko and Klapwijk that the resistivity in a low density Si sample does not exhibit an upturn down to 35 mK further points to the incorrectness of the AMP model. We close by pointing out that $`\rho _0(T)\mathrm{exp}(-T_0/T)`$. Exponential decrease of the resistivity is an indication that some sort of charge gap exists in the single particle spectrum. Fermi liquids by definition cannot have a gap of any sort in the single particle spectrum. In fact, no traditional metal has a charge gap in the single particle spectrum. The only phase we know of that has a charge gap in the single particle spectrum and still conducts at zero temperature is a superconductor. Hence, this would suggest that experiments sensitive to pair formation should be of utmost importance to resolving the nature of the charge carriers in the new conducting state in 2D. ###### Acknowledgements. I thank Ladir Da Silva for help with the graphics and the DMR division of the NSF for funding this work.
no-problem/9910/patt-sol9910003.html
ar5iv
text
# Dynamics of a Single Peak of the Rosensweig Instability in a Magnetic Fluid ## 1 Introduction Magnetic fluids (MF) are stable colloidal suspensions of ferromagnetic nanoparticles (typically magnetite or cobalt) dispersed in a carrier liquid (typically oil or water). The nanoparticles are coated with a layer of chemically adsorbed surfactants to avoid agglomeration. The behaviour of MF is characterized by the complex interaction of their hydrodynamic and magnetic properties with external forces. Magnetic fluids have a wide range of applications and show many fascinating effects, such as the labyrinthine instability or the Rosensweig instability. The latter instability occurs when a layer of MF with a free surface is subjected to a uniform and vertically oriented magnetic field. Above a certain threshold of the magnetic field the surface becomes unstable, giving rise to a hexagonal pattern of peaks. Superimposing the static magnetic field with oscillating external forces leads to nonlinear surface oscillations. Experimentally, either vertical vibrations or alternating magnetic fields have been investigated as oscillating external forces. For free-surface phenomena the fluid motion strongly depends on the shape of the surface and vice versa. Additionally for MF, the shape of the surface is determined by the magnetic field configuration, which contributes via the Kelvin force to the Navier-Stokes equation, the solution of which gives the flow field. Thus the dynamics of MF is inherently governed by the nonlinear interaction between the flow field, the surface shape, and the magnetic field configuration. For that reason one attempts to study simple systems of MF which nevertheless show the essential features. The nonlinear dynamics of a single peak of magnetic fluid, i.e., the dynamics of a zero-dimensional system in a vertically oscillating magnetic field, was studied exemplarily in . By varying the amplitude and the frequency of the alternating field and the strength of the static field, the peak response can be harmonic, subharmonic (twice the driving period), or at higher multiples of the driving period. For suitable choices of the parameters, non-periodic chaotic peak oscillations were observed. Taking the above described circumstances into account for a theoretical approach, a sound model should be analytically tractable as well as capable of showing all essential features. Beyond these primary demands, the model may also predict new phenomena of peak oscillations. The aspiration to confirm such new phenomena experimentally motivates a simple and robust model to guide the design of the experimental setup. Such a model is proposed here for the dynamics of a single peak of MF. It is based on the approximation of the peak by a half-ellipsoid with the same height and radius as the peak. The resulting equation giving the dependence of the height of the peak on the applied induction is derived in the following section. The character of the bifurcation is analysed in Sec. III for the case of a static induction. In Sec. IV the dynamics of the peak is studied and the results are compared with the experimental behaviour for different frequencies of a time-dependent induction. The final section summarizes the results and outlines two aspects for further experiments. ## 2 Model The complex and nonlinear interactions in MF with a free surface formed by a peak (see Fig. 1) present a formidable problem, since the form of the peak is not known analytically.
The aim of our model is an analytical equation for the height of the peak at its centre $`𝐫=(x,y)=0`$. The shape of the peak, in particular the form at the tip, is beyond the scope of this model. The equation will thus neglect the influence of the surface regions away from the peak tip and of the boundaries. A layer of an incompressible, nonconducting, and inviscid MF of half-infinite thickness between $`z=0`$ and $`z\to -\infty `$ is considered, with a free surface described by $`z=\zeta (x,y,t)`$. It is assumed that the magnetization $`𝐌`$ of the MF depends linearly on the applied magnetic field $`𝐇`$, $`𝐌=\chi 𝐇`$, where $`\chi `$ is the susceptibility of the MF. The system is governed by the equation of continuity, $`\mathrm{div}𝐯=0`$, and the Euler equation for MF in the presence of gravity $$\rho \left[\partial _t𝐯+\left(𝐯\mathrm{grad}\right)𝐯\right]=-\mathrm{grad}p+\mu _0M\mathrm{grad}H+\rho 𝐠,$$ (1) where the magnetostriction is neglected and the co-linearity of the magnetization and the field is exploited for the magnetic force term. In Eq. (1) the velocity field is denoted by $`𝐯`$, the density of the MF by $`\rho `$, the pressure by $`p`$, the permeability of free space by $`\mu _0`$ and the acceleration due to gravity by $`𝐠`$. $`M`$, $`H`$, and $`B`$ are the absolute values of the magnetization, the magnetic field and the induction $`𝐁`$ in the fluid. In the static case, $`𝐯=0`$, the integral of the equation of motion (1) may be calculated to give $$p=-\rho gz+\mu _0\int _0^HM𝑑H^{}+\mathrm{const}.$$ (2) The remaining boundary condition in the static case, the continuity of the normal stress across the free surface, gives $$p=\sigma K-\frac{\mu _0}{2}\left(𝐌𝐧\right)^2\mathrm{at}z=\zeta ,$$ (3) where the pressure in the non-magnetic medium above the MF was set to zero. The surface tension between the magnetic and non-magnetic medium is denoted by $`\sigma `$, the curvature of the surface by $`K=\mathrm{div}𝐧`$, and the unit vector normal to the surface by $`𝐧`$. By inserting Eq. (2) at $`z=\zeta `$ into Eq. (3), the balance of pressure results in an equation for the surface $`\zeta `$ $$\rho g\zeta -\mu _0\left(\frac{M_n^2}{2}+\int _0^{H(\zeta )}M𝑑H^{}\right)+\sigma K=\mathrm{const}$$ (4) with $`M_n=𝐌𝐧`$. After the peak is formed, the equilibrium is characterized by the equality of the pressure along the surface. Motivated by our aim of an analytically tractable model, we choose the two reference points $`𝐫=0`$, the centre of the peak, and $`|𝐫|\gg 0`$, the flat interface far away from the peak, where the pressure equality is evaluated. The magnetization is related to the induction by $$M=\frac{\chi }{\mu _0(\chi +1)}B(𝐫).$$ (5) Applying Eq. (4) at ($`𝐫=0`$, $`\zeta =h`$) and ($`|𝐫|\gg 0`$, $`\zeta =0`$) leads to $$-\rho gh-\sigma K(h)+\frac{\chi }{2\mu _0(\chi +1)}B_{ext}^2\left\{\left[\frac{B(h)}{B_{ext}}\right]^2-1\right\}=0,$$ (6) where $`B_{ext}`$ is the externally applied induction. The remaining two unknown quantities, the curvature $`K(h)`$ and the induction $`B(h)`$ at the tip of the peak, are determined by an approximation. We model the peak, which is assumed to be rotationally symmetric, by a half-ellipsoid with the same height and radius as the peak (see Fig. 1).
Thus one can make use of the analytical results for a rotational ellipsoid with the vertical (horizontal) semiprincipal axis $`h`$ ($`R`$), with the curvature given by $$K|_{z=h}=\frac{h}{R^2}$$ (7) and the induction $$B|_{z=h}=B_{ell}=\frac{\chi +1}{1+\chi \beta }B_{ext}\mathrm{with}$$ (8) $$\beta =\{\begin{array}{cc}\frac{1+ϵ^2}{ϵ^3}(ϵ-\mathrm{arctan}ϵ)& ϵ=\sqrt{(R/h)^2-1}\mathrm{for}R>h\\ & \\ & \\ \frac{1-ϵ^2}{ϵ^3}(\mathrm{artanh}ϵ-ϵ)& ϵ=\sqrt{1-(R/h)^2}\mathrm{for}R<h.\end{array}$$ (9) It is emphasized that an applied induction $`B_{ext}`$ results in a uniform induction $`B_{ell}`$ within the ellipsoid. The demagnetization factor $`\beta `$ is a purely geometrical quantity because it relates the dimensions of the major and minor semiprincipal axes by means of the eccentricity $`ϵ`$. Whereas (7) can be substituted directly into Eq. (6), the result (8) has to be modified for the case of a half-ellipsoid atop the layer of MF. The proposed modification is $$B|_{z=h}=\frac{1+\chi (1+\lambda \beta )}{1+\chi \beta (1+\lambda \beta )}B_{ext},$$ (10) where a parameter $`\lambda `$ is introduced, which mimics the influence of the magnetic field of the layer on the field at the tip of the peak. The form of (10) ensures that in the limits of a magnetically impermeable material ($`\chi =0`$), of a ‘magnetic conductor’ ($`\chi \to \infty `$), of a very oblate ellipsoid ($`\beta \to 1`$), and of a very prolate ellipsoid ($`\beta \to 0`$) the results are the same as in Eq. (8). As long as the height of the half-ellipsoid is large compared to its diameter, the influence of the magnetic layer on the magnetic field at the tip of the peak is small. This is obviously not the case if the half-ellipsoid becomes disk-shaped, i.e. $`h<R`$. For this case (10) is expanded up to first order in $`h/R`$, $$B|_{z=h,h\ll R}\approx \left[1+\frac{\chi (1+\lambda )\pi h}{[1+\chi (1+\lambda )]2R}\right]B_{ext}$$ (11) and compared to the analytical result for $`B`$ at the crest of a sinusoidal surface wave (SW) (pp. 178 in ) with the wavelength $`4R`$ (Fig. 2) $$B|_{z=h,SW}=\left[1+\frac{\chi \pi h}{(\chi +2)2R}\right]B_{ext}.$$ (12) The condition that both values of $`B`$ should coincide leads to an equation for the parameter $`\lambda `$ $$\lambda =-\frac{1}{2}.$$ (13) The determination of $`\lambda `$ adjusts the radius, since the critical wave number for surface waves is the capillary wave number, $`k_c=(2\pi /\lambda _c)=\sqrt{\rho g/\sigma }`$. Therefore the radius of the half-ellipsoid is fixed to $`R=(\lambda _c/4)=\pi /(2k_c)`$. By inserting (7) into (6) and introducing dimensionless quantities for all lengths and the induction, $$\overline{h}=\sqrt{\frac{\rho g}{\sigma }}h,\qquad \overline{B}=\frac{\chi }{\sqrt{2\mu _0(\chi +1)(\chi +2)\sqrt{\rho \sigma g}}}B,$$ (14) we obtain a nonlinear equation for the dependence of the peak height $`\overline{h}`$ on the applied induction $`\overline{B}_{ext}`$ (the bars are omitted for the rest of the paper) $$B_{ext}^2\left[\left(\frac{B(h)}{B_{ext}}\right)^2-1\right]-h\left[1+\frac{1}{R^2}\right]\frac{\chi }{\chi +2}=0.$$ (15) The nonlinear behaviour enters the equation through $`B(h)/B_{ext}`$, which is determined by (9, 10, 13). Eq. (15) is the fundamental equation of the model, in which the height of the peak depends on the properties of the applied induction only. The quality of the approximation is tested in the static case, for which (15) was derived.
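For reference, here is a short numerical sketch of the ingredients just assembled: the demagnetization factor (9) and the modified tip induction (10) with the matching value λ = −1/2 from (13). The sample heights and the susceptibility are illustrative choices.

```python
import math

def beta(h, R):
    """Demagnetization factor (9) of a rotational ellipsoid (semiaxes R, R, h)."""
    if abs(h - R) < 1e-12:
        return 1.0 / 3.0                       # sphere limit
    if R > h:                                  # oblate case
        e = math.sqrt((R / h) ** 2 - 1)
        return (1 + e ** 2) / e ** 3 * (e - math.atan(e))
    e = math.sqrt(1 - (R / h) ** 2)            # prolate case
    return (1 - e ** 2) / e ** 3 * (math.atanh(e) - e)

def b_tip_over_bext(h, R, chi, lam=-0.5):
    """Tip induction (10), normalized to B_ext, with lambda = -1/2 from (13)."""
    b = beta(h, R)
    return (1 + chi * (1 + lam * b)) / (1 + chi * b * (1 + lam * b))

R = math.pi / 2                                # radius fixed by the matching
for h in (0.5, 1.0, 2.0, 5.0):                 # dimensionless heights
    print(h, b_tip_over_bext(h, R, chi=1.15))  # field concentration at the tip
```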
Eq. (15) also forms the starting point for the description of the peak dynamics, where the effects of inertia and damping have to be taken into account. ## 3 Static Peak For a layer of MF with a free surface subjected to a vertical magnetic field there are three different energies which contribute to the total energy $`E_{tot}`$. The potential energy and the surface energy increase $`E_{tot}`$ with increasing $`h`$, whereas the magnetic field energy decreases $`E_{tot}`$ with increasing $`h`$. The plane surface corresponds to a minimum of the total energy at $`h=0`$. If the surface is perturbed, the magnetic flux is concentrated in the peaks of the disturbances. The resulting force tends to increase the modulations, while surface tension and gravitational forces tend to decrease the disturbances. When the increasing field passes a certain strength $`H_c`$, the destabilizing force will win over the stabilizing ones. The resulting peaks are energetically favourable because for $`H>H_c`$ the total energy now has a second minimum at $`h>0`$ which is deeper than the first one at $`h=0`$. The transition from the first to the second minimum corresponds to the sudden jump from $`h=0`$ to $`h>0`$. Once the peaks are established, a decreasing field results in smaller heights of the peak down to a second critical field $`H_s`$, the saddle-node field, where the peaks suddenly break down. With respect to the total energy this means a transition back to the first minimum at $`h=0`$ because it is now energetically more favourable. Such a dependence of the height of the peak on the variation of the magnetic field is typified as a hysteresis. The difference between the two critical fields defines the width of the hysteresis. For a MF with $`\chi =1.15`$ the width was measured to be 6% of the critical field, and the critical height at $`H_c`$ is given by $`2.1`$ mm . The corresponding critical inductions are $`B_c=\mu _0[H_c+M(H_c)]`$ and $`B_s=\mu _0[H_s+M(H_s)]`$. For a static induction, $`B_{ext}=B_0`$, the solution of Eq. (15) is determined for two susceptibilities, $`\chi =1.15`$ and $`\chi =2.5`$. The former value is given in for a mixture of EMG 901 and EMG 909 (both Ferrofluidics Corporation) in a ratio of 7 to 3, whereas the latter value was measured in a recent experiment for the same mixture . For both susceptibilities a distinct hysteresis appears, whose width increases with increasing susceptibility. Correspondingly, in the limit $`\chi \to 0`$ the hysteresis disappears (Fig. 3a). For $`\chi =1.15`$ the width is 5% of the critical induction $`B_c`$ and the critical height of the peak is $`h_c\approx 2.0/k_c\approx 2.9`$ mm. The material parameters $`\rho =1377`$ kg m<sup>-3</sup> and $`\sigma =2.86\times 10^{-2}`$ kg s<sup>-2</sup> as given in were used. For the other chosen susceptibility, $`\chi =2.5`$, the width is 13% of the critical induction $`B_c`$ and the critical height is $`h_c\approx 6.9/k_c\approx 10.0`$ mm. Figure 3b shows the dependence of the width of the hysteresis on the susceptibility of the magnetic fluid. Whereas for small susceptibilities a fair increase of the width can be detected, a tendency towards saturation of the growth can be seen for larger susceptibilities. No systematic measurements of the width of the hysteresis have yet been undertaken. Therefore any experimental test which would determine the range of validity of the model awaits subsequent measurements.
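The hysteresis just described can be reproduced directly from (15). The sketch below reuses the functions from the previous sketch; the grid resolution and scan range are arbitrary choices. It locates the nonzero equilibrium heights as sign changes of the residual: in the hysteresis window between B_s and B_c two nonzero roots (a stable upper branch and an unstable lower one) are expected to coexist with h = 0, while above B_c only the upper branch survives.

```python
import math
# relies on b_tip_over_bext() from the previous sketch

def residual(h, B, chi, R=math.pi / 2):
    """Left-hand side of the dimensionless balance (15); zeros are equilibria."""
    if h <= 0:
        return 0.0
    ratio = b_tip_over_bext(h, R, chi)
    return B ** 2 * (ratio ** 2 - 1) - h * (1 + 1 / R ** 2) * chi / (chi + 2)

def branches(B, chi, hmax=15.0, n=3000):
    """Nonzero equilibrium heights at induction B (bisection on a grid)."""
    hs = [hmax * i / n for i in range(1, n + 1)]
    return [0.5 * (h0 + h1) for h0, h1 in zip(hs, hs[1:])
            if residual(h0, B, chi) * residual(h1, B, chi) < 0]

for B in (0.80, 0.82, 0.84, 0.86):      # B_c ~ 0.84 for this model
    print(B, branches(B, chi=1.15))
```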
Despite the simplicity of the proposed approximation, the model describes the generic static behaviour of the height of the peak very well, i.e. the appearance of a hysteresis for increasing and decreasing induction at nonzero susceptibilities, as observed in experiments . Note in this connection that for $`\lambda =0`$, i.e. when neglecting the difference in the magnetic field of an ellipsoid and a half-ellipsoid, no hysteresis is found. Note also that in a one-dimensional system one finds a supercritical bifurcation for $`\chi <2.53`$, whereas our simplified two-dimensional model yields a subcritical bifurcation for all values of $`\chi `$, in accordance with experiment. Beyond the qualitative agreement, the quantitative values for the width of the hysteresis and the critical height are in satisfying agreement with the measurements in for $`\chi =1.15`$. This agreement is achieved without any fit parameter, since the fixed value of the parameter $`\lambda `$ applies for any MF. The fact that the critical induction is not equal to one (cf. 15) is a consequence of the evaluation of the introduced parameter $`\lambda `$, which determines the radius. The imposed value of the radius ensures the equality of the magnetic induction at the top of the oblate ellipsoid and the crest of the surface wave. But the curvature is different: $`h/R^2`$ at the top of the ellipsoid is smaller than the value $`\pi ^2h/(4R^2)`$ it takes at the crest of the surface wave. Thus the expansion of (15) for small $`h`$ with $`R=\pi /2`$ $$h\left[B_0^2\frac{\pi }{R}-1-\frac{1}{R^2}\right]=0$$ (16) leads to a critical induction smaller than $`1`$, $`B_c=\sqrt{1/2+2/\pi ^2}\approx 0.84`$. The half-ellipsoid approximation (10) with $`\lambda =-1/2`$ was quantitatively compared to a numerically exact determination of the magnetic field of a rotational half-ellipsoid atop a horizontal layer by solving the Laplace equation for the magnetic potential (Fig. 4). For $`0.5\le h/R\le 6.5`$ the magnetic induction at the tip of the peak is approximated with an accuracy of 1.7%. The comparison shows that the modification of the magnetic field at the tip of the peak through the magnetic field of the layer is rather weak, even for small heights. This supports our assumption that the field at the tip of the peak is the essential feature needed to describe its behaviour. Therefore Eq. (10) describes $`B`$ directly at the height of the peak fairly accurately. Furthermore, equation (15) leads to the correct character of the bifurcation, i.e. a subcritical instability, and gives the right width of the hysteresis compared with the experimental results . With this level of confirmation, the dynamics of a single peak of MF is studied in the following section. ## 4 Oscillating Peak ### 4.1 Inertia and Damping The induction is chosen to be a superposition of a static part, $`B_0`$, and a time-dependent part, $`\mathrm{\Delta }B\mathrm{cos}(\omega t)`$. The amplitude of the oscillating part is denoted by $`\mathrm{\Delta }B`$ and the frequency by $`\omega =2\pi f=2\pi /T`$. In correspondence with the experiments , the response period of the peak is studied in dependence on the three parameters: the strength of the static part, the amplitude of the alternating part, and the driving frequency. If the last two parameters are kept constant, one distinguishes between three different regimes for the behaviour of $`h(t)`$ with increasing $`B_0`$. For small $`B_0`$ the surface remains flat, i.e. $`h(t)\equiv 0`$.
Beyond a first, lower threshold $`h(t)`$ oscillates between zero and a maximum $`h_{max}`$, whereas beyond a second, higher threshold it alternates between two positive extrema, $`0<h_{min}<h_{max}`$ (see Fig. 5). The behaviour in the second regime will be the focus of our study since it was analysed experimentally in detail in . In order to formulate a differential equation for the peak dynamics, the effects of inertia and damping have to be incorporated into Eq. (15). Since each term in (15) stems from the equation of pressure balance, the inertial term may be written as $$\frac{\mathrm{force}}{\mathrm{area}}=\frac{m}{A}\frac{d^2h}{dt^2}\propto \frac{\rho |h|A}{A}\frac{d^2h}{dt^2}=\rho |h|\frac{d^2h}{dt^2}.$$ (17) The sign of proportionality indicates that in the frame of our model the mass and the area of the peak cannot be precisely determined. For these quantities the knowledge of the complete surface and the flow field would be necessary. For this reason we choose the simple relation of a linear dependence of the mass of the peak on its height. The implementation of the damping is difficult. In the experiment one observes that the peak periodically rises up to a maximal height and collapses to zero height. This behaviour leads to the assumption that the system is endowed with a dissipation mechanism which acts particularly strongly when the collapsing peak approaches $`z=0`$. Since such a mechanism cannot be derived in the frame of the present model, the idea of an impact oscillator is used. The impact oscillator is an externally excited oscillator where the oscillating mass impacts on a fixed boundary. From this boundary the mass is reflected with a velocity $$\frac{dh}{dt}|_{t_{0^+}}=-\tau \frac{dh}{dt}|_{t_{0^{}}},$$ (18) where $`\tau `$ is the coefficient of restitution and $`t_0`$ is the time of the impact, $`h(t_0)=0`$. Consequently, there are oscillations between $`0\le h<\infty `$ only. For a weakly damped impact oscillator it is known that infinite series of transitions from period 1 to period $`N`$ ($`N=3`$, $`4`$, $`\mathrm{}`$) can appear. This phenomenological resemblance to the observations in also motivates the use of the idea of an impact oscillator. It is emphasized that the chosen special form of damping applies only to the second regime, where $`h(t)`$ oscillates between zero and a maximum $`h_{max}`$. In our model an impact with $`z=0`$ occurs whenever $`h(t)`$ reaches zero. The height and the velocity after the impact are fixed and independent of the behaviour before the impact. We choose $$h=10^{-6}\mathrm{and}\frac{dh}{dt}=0\mathrm{at}t=t_{0^+},$$ (19) which corresponds to a nearly complete dissipation of the energy at every impact. A similar choice was made for a model proposed in . The choice of fixed values is obviously an oversimplification because it does not make any difference whether a large peak with a high velocity rushes towards $`z=0`$ or whether a small peak slowly approaches $`z=0`$. The resulting differential equation for this cut-off mechanism in dimensionless quantities is (the time is scaled by $`g^{3/4}\rho ^{1/4}\sigma ^{-1/4}`$) $$\frac{d^2h}{dt^2}=\frac{B_{ext}^2}{h}\left[\left(\frac{B(h)}{B_{ext}}\right)^2-1\right]\frac{\chi +2}{\chi }-1-\frac{1}{R^2}$$ (20) with $`B_{ext}=B_0+\mathrm{\Delta }B\mathrm{cos}(\omega t)`$. Eq. (20) is solved by means of the fourth-order Runge-Kutta integration method with a standard time step of $`T/200000`$.
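A minimal sketch of this integration scheme, including the impact rule (19) and the stroboscopic sampling used for the Poincaré analysis described next, might look as follows. It reuses `b_tip_over_bext` from the static sketch; the time step is chosen much coarser than the standard T/200000 quoted above, purely for speed.

```python
import math
# relies on b_tip_over_bext() from the static sketch above

def accel(h, t, B0, dB, omega, chi, R=math.pi / 2):
    """Right-hand side of (20); h is clipped to stay positive for beta()."""
    hh = max(h, 1e-9)
    Bext = B0 + dB * math.cos(omega * t)
    ratio = b_tip_over_bext(hh, R, chi)
    return (Bext ** 2 / hh) * (ratio ** 2 - 1) * (chi + 2) / chi - 1 - 1 / R ** 2

def run(B0, dB, f, chi, periods=200, steps_per_T=5000):
    """RK4 integration of (20) with the impact rule (19)."""
    omega, T = 2 * math.pi * f, 1 / f
    dt = T / steps_per_T
    h, v, t = 1e-6, 0.0, 0.0
    poincare = []                                # h sampled at t = mT
    for m in range(periods * steps_per_T):
        k1h, k1v = v, accel(h, t, B0, dB, omega, chi)
        k2h, k2v = v + dt / 2 * k1v, accel(h + dt / 2 * k1h, t + dt / 2, B0, dB, omega, chi)
        k3h, k3v = v + dt / 2 * k2v, accel(h + dt / 2 * k2h, t + dt / 2, B0, dB, omega, chi)
        k4h, k4v = v + dt * k3v, accel(h + dt * k3h, t + dt, B0, dB, omega, chi)
        h += dt / 6 * (k1h + 2 * k2h + 2 * k3h + k4h)
        v += dt / 6 * (k1v + 2 * k2v + 2 * k3v + k4v)
        t += dt
        if h <= 0:                               # impact: rule (19)
            h, v = 1e-6, 0.0
        if (m + 1) % steps_per_T == 0:
            poincare.append(h)
    return poincare

def response_period(poincare, n_transient=100, tol=1e-3, n_max=30):
    """Poincare test (21): smallest N with h(t_m) = h(t_m + N T) for all m."""
    tail = poincare[n_transient:]
    for N in range(1, n_max + 1):
        if all(abs(a - b) < tol for a, b in zip(tail, tail[N:])):
            return N
    return None                                  # aperiodic or period > n_max
```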
The other standard parameters for the integration are $`h(0)=10^{-6}`$ and $`d_th(0)=0`$ as initial conditions and a total time of $`200T`$ over which the solution is calculated. The first 100 periods are considered as transient time for the system to relax to a response behaviour independent of the initial conditions. The last 100 periods are analysed with respect to a periodic behaviour of $`h(t)`$ by means of a Poincaré section. For our one-dimensional dynamics a Poincaré section means to compare $`h`$ at a certain time, say $`t_m=mT`$, with $`h`$ at times which are $`N`$ periods ($`N=1`$, $`2`$, $`\mathrm{}`$) later than $`t_m`$: $$h(t_m)\stackrel{\mathrm{?}}{=}h(t_m+T),h(t_m)\stackrel{\mathrm{?}}{=}h(t_m+2T),\mathrm{},h(t_m)\stackrel{\mathrm{?}}{=}h(t_m+NT).$$ (21) Those equations which are fulfilled give the period $`N`$ (and any higher multiples of $`N`$) of the peak response. The chosen $`100`$ periods of analysis ensure a good reliability of the estimated periods up to 30. ### 4.2 Results and Discussion The results of the Poincaré sections are plotted as period diagrams in the $`B_0`$–$`\mathrm{\Delta }B`$ plane at a fixed frequency $`f`$ and for two different susceptibilities (see Figs. 6, 7, 9). The constant part, $`B_0`$, is sampled in steps of $`0.01`$. The amplitude of the alternating part, $`\mathrm{\Delta }B`$, is increased in steps of $`0.025`$ with an initial value of $`0.05`$. The different periods in the interesting second regime, $`0\le h(t)\le h_{max}`$, are coded by colours. The periods 1 to 10 are encoded by a chart of distinctive colours starting with green, red, blue, and ending with orange. The higher periods from 11 to 30 are encoded by a continuous colour chart. Periods above 30 or non-periodic behaviour of $`h(t)`$ are denoted by grey. This selection of colours is guided by the choice of colours in . White areas inside and to the right of the coloured horizontal stripes indicate regions where $`h(t)`$ oscillates between two positive extrema. White areas to the left of the coloured stripes denote the regime $`h(t)\equiv 0`$. The period diagram for a low frequency of $`f=0.03`$ ($`2.5`$ Hz) is displayed in Fig. 6. In accordance with the experimental results for $`2.5`$ Hz (see Fig. 5a in ) the response of the peak is harmonic almost everywhere in the $`B_0`$–$`\mathrm{\Delta }B`$ plane. Responses with higher periods are detected only at the edge towards the third regime. The area of harmonic response is cone-like shaped, where the limit to the left is given by $`\mathrm{\Delta }B=B_c-B_0`$ for $`B_0\le B_c`$ (solid line) and the limit to the right is given by $`\mathrm{\Delta }B=B_0-B_c`$ for $`B_0\ge B_c`$ (dashed line). These strict limits apply particularly to the MF with $`\chi =2.5`$ (Fig. 6b), whereas the right limit is more frayed for the MF with $`\chi =1.15`$ (Fig. 6a). The feature of a cone-like shape is also found in the experiment, but with a slight asymmetry at very small amplitudes of the alternating field. An asymmetry could not be found with our simple model. Another difference is that our right limit is too large compared to the experimental data. The appearance of only harmonic responses is caused by the low frequency. The corresponding characteristic time of the excitation, $`T`$, is large enough for the peak to follow the slow modulations of the external field. Therefore the peak oscillates with the same frequency as the external excitation. By considering the quasi-static limit $`f\to 0`$ (see Fig.
3), the boundaries of the second regime can be understood as follows: As long as $`B_0`$ is smaller than $`B_c-\mathrm{\Delta }B`$, the resulting external induction varies over a range where at its lower bound $`h_1=0`$ is stable. At its upper bound either $`h_2=0`$ is stable if $`B_0+\mathrm{\Delta }B<B_s`$, or $`h_2=0`$ and $`h_2>0`$ are stable if $`B_0+\mathrm{\Delta }B>B_s`$. Because of the greater attraction of zero height in the bistable area, due to the strong damping at $`h=0`$, the dynamics of $`h(t)`$ is bounded by zero in both cases. If $`B_0`$ is larger than $`B_c+\mathrm{\Delta }B`$, the resulting external induction varies over a range where at its lower bound $`h_1>0`$ is stable and at its upper bound $`h_2>h_1`$ is stable. Thus the dynamics of $`h(t)`$ is bounded between $`h_1`$ and $`h_2`$. Consequently, for low frequencies the peak alternates in the second regime as long as $`B_c-\mathrm{\Delta }B\le B_0\le B_c+\mathrm{\Delta }B`$. The fact that $`h(t)`$ remains at zero even when for a certain time a nonzero height is stable (but not attractive enough to win over $`h=0`$) was observed in the experiment, too. In our dynamics the zero height is always more attractive than the nonzero height. This is not the case in the experiment, which explains the observed lower limit to the right. Fig. 7 shows the results for a medium frequency of $`f=0.1`$ ($`8.2`$ Hz). For the MF with the low susceptibility of $`\chi =1.15`$, the second regime splits into two disjoint parts. For smaller values of $`B_0`$ we find only harmonic responses, whereas for higher values of $`B_0`$ we observe the period $`N=2`$. The second regime is separated from the first regime by $`\mathrm{\Delta }B=B_c-B_0`$ for $`B_0\le B_c`$ (solid line). The limit to the right is given by $`\mathrm{\Delta }B=B_0-B_c`$ for $`B_0\ge B_c`$ (dashed line) only for amplitudes $`\mathrm{\Delta }B`$ above $`0.15`$ (Fig. 7a). For the MF with the high susceptibility of $`\chi =2.5`$, the second regime forms a compact region, which is separated from the first regime by $`\mathrm{\Delta }B=B_c-B_0`$ for $`B_0\le B_c`$ (solid line). In contrast to the low-frequency behaviour, the whole structure of periods shows a specific composition. For a fixed amplitude $`\mathrm{\Delta }B`$ the peak starts to oscillate harmonically. For $`\mathrm{\Delta }B>0.25`$ and increasing $`B_0`$ the period-1 state is replaced by the period-2 state, which lasts up to the right limit of the second regime. This clear two-state picture changes if $`\mathrm{\Delta }B`$ is decreased. For $`\mathrm{\Delta }B\le 0.25`$ a tongue of high-periodic ($`N>2`$) and non-periodic oscillations appears (Fig. 7b). For $`0.175\le \mathrm{\Delta }B\le 0.25`$ the tongue is embedded in the period-2 state. For $`\mathrm{\Delta }B<0.175`$ the tongue directly follows the harmonic oscillations. In this tongue we find odd-number periods of $`3`$, $`5`$, and $`9`$ and even-number periods of $`4`$, $`8`$, $`14`$, $`16`$, and $`18`$ (see Fig. 8). The structure of periods in Fig. 7b displays generic features which are also observed in the experiment for $`12.5`$ Hz (see Fig. 6a in ). Besides the agreement in the generic features, there are three major quantitative differences. The period-2 state area between the harmonic response and the tongue is much thinner than in the experiment. An extended area of period $`N=3`$ could not be found, and the right limit of the period-2 state is too low compared to the experimental results. For a frequency of $`f=0.2`$ ($`16.4`$ Hz) the results are shown in Fig. 9.
For a low susceptibility of $`\chi =1.15`$ the peak starts to oscillate harmonically independently of the strength of $`\mathrm{\Delta }B`$. With increasing $`B_0`$ the period-1 state is followed by a period of $`N=4`$ for $`\mathrm{\Delta }B\le 0.125`$. For $`\mathrm{\Delta }B>0.125`$ the harmonic response is mainly replaced by the period-2 state, which is then replaced by the period $`3`$. One notes the appearance of oscillations between two positive extrema inside the second regime for the low susceptibility case (Fig. 9a). For a high susceptibility of $`\chi =2.5`$ the whole period diagram displays a band-like structure (Fig. 9b). For a fixed amplitude $`\mathrm{\Delta }B`$ and increasing $`B_0`$, the period $`N=1`$ appears first. Then either the period $`N=6`$ follows for $`\mathrm{\Delta }B\le 0.15`$ or the periods $`N=2`$ and $`N=6`$ follow for $`\mathrm{\Delta }B>0.15`$. The whole structure of periodic orbits ends with a broad band of period $`N=5`$. This last novel feature is remarkable because no similar phenomenon has been observed in the experiment. For all tested frequencies in , the second regime gives way to the third regime by a period of $`N=1`$ or $`N=2`$. The comparison between the experimental and theoretical data generally shows a qualitative and partly a quantitative agreement with the dynamics of the peak. This agreement is achieved with a certain choice for the mass of the peak (17) and for the strength of the impact (19). The shown results are robust against modifications of (17, 19) by a constant of $`O(1)`$. It is not necessary to fit parameters such as the damping constant, the driving period, the critical field, and the resolution limit of the height, in contrast to the minimal model in . The other improvements are a more realistic nonlinear force term and the multiplicative character of the driving. Our results at low and medium frequencies for $`\chi =2.5`$ support the presumption that the MF used for the measurements of the dynamical behaviour has a higher susceptibility than given in . The experimental results at a high frequency of $`23.5`$ Hz (see Fig. 7a in ) could not be found in our tested range of frequencies, $`0.01\le f\le 0.5`$. ## 5 Summary In order to describe the complex and nonlinear dynamics of a single peak of the Rosensweig instability in an oscillatory magnetic field, we propose a model aiming at an analytical equation for the height of the peak at its centre. Our model approximates the peak by a half-ellipsoid atop a layer of magnetic fluid. By exploiting the Euler equation for magnetic fluids and the analytical results for a rotational ellipsoid, we obtain a nonlinear equation for the dependence of the peak height on the applied induction (15). For static induction the quality of our proposed model is tested. It leads to the correct subcritical character of the bifurcation and gives the right width of the hysteresis compared with experimental results. For a time-dependent induction the effects of inertia and damping are incorporated into equation (15). In correspondence with the experiments the dynamics is studied in a region where the peak alternates between zero and a maximal height $`h_{max}`$. Our model shows not only qualitative agreement with the experimental results, as in the appearance of period doubling, trebling, and higher multiples of the driving period. Also a quantitative agreement is found for the parameter ranges of frequency and induction in which these phenomena occur.
For low frequencies the response of the peak is harmonic for nearly any strength of the external excitation, which is a superposition of a static part and an oscillatory part. The whole area of harmonic response is cone-like shaped, in accordance with the experiment. For a medium frequency a structure of periods is found where a tongue of high-periodic and non-periodic oscillations appears. For low values of the amplitude of the alternating induction, the tongue directly follows the period-1 state. For higher values of the amplitude the tongue is embedded in the period-2 state. The appearance and the location inside the parameter plane of an area of high-periodic and non-periodic oscillations agree with the experimental data in the same frequency range. Besides the agreement with the generic features observed in the experiment at low and medium frequencies, the model predicts a novel phenomenon. For a frequency of about $`16.4`$ Hz the peak oscillates with the period $`N=5`$ as the final period before the oscillations between zero and $`h_{max}`$ end (see Fig. 9b). It would be challenging to seek a final period greater than $`2`$ in the experiment, because for the studied frequencies in , the final oscillations have only periods of $`N=1`$ or $`N=2`$. In the dynamics of a magnetic fluid with a low susceptibility a mixing of areas with different types of oscillations is found. For frequencies which are not too low, areas with oscillations between two positive extrema appear regularly inside areas with oscillations between zero and $`h_{max}`$. It would be interesting to test in experiments whether such a mixing can be observed for MF with low susceptibilities. ## Acknowledgment The authors are grateful to Johannes Berg and René Friedrichs for helpful discussions. It is a pleasure to thank Andreas Tiefenau for providing the data of the shape of the magnetic fluid peak. This work was supported by the Deutsche Forschungsgemeinschaft under Grant EN 278/2.
no-problem/9910/hep-ph9910433.html
ar5iv
text
Figure 1: Parametric oscillations in a medium with “castle wall” density profile for the case $`X_3=0`$ (parametric resonance). Solid curve: transition probability for neutrino flavour oscillations as a function of the coordinate along the neutrino path for the case of total conversion over 5 periods of density modulation (10 layers). Dashed curve: the same for the case of total conversion over 3 layers. The kinks correspond to the borders of the layers of different densities. The curves were plotted for the realization (7) ($`c_1=c_2=0`$) of the parametric resonance condition. The neutrino energy is between the MSW resonance energies corresponding to the densities $`N_1`$ and $`N_2`$. 1. In the present Comment we show that the conditions for total neutrino conversion studied by Chizhov and Petcov are just the conditions of the parametric resonance of neutrino oscillations supplemented by the requirement that the parametric enhancement be complete. Therefore the “new effect of total neutrino conversion” is nothing but a particular case of the parametric enhancement of neutrino oscillations, suggested in and widely discussed in the literature . The parametric resonance occurs when the oscillation frequency changes in a certain correlation with the frequency itself and with the amplitude of the oscillations, leading to specific phase relationships. A classical example is a pendulum with a vertically oscillating suspension point . This situation can, in particular, be realized for oscillating neutrinos crossing layers of medium of different densities . Indeed, the oscillation parameters depend on matter density, and crossing the layers of different density means changing the frequency and amplitude of the neutrino oscillations. 2. The propagation of neutrinos through a medium with periodic density modulations leads to parametric oscillations , see figs. 1 and 2. Let us consider neutrino propagation in a medium with the periodic “castle wall” density profile: a system of alternating layers of matter with constant densities $`N_1`$ and $`N_2`$ and widths $`L_1`$ and $`L_2`$. Let $`\theta _{1,2}`$ be the mixing angles in matter at densities $`N_1`$ and $`N_2`$. We denote by $`2\varphi _i`$ ($`i=1,2`$) the oscillation phase acquired by neutrinos in the layer of density $`N_i`$ and width $`L_i`$. We will use the notation $`s_i\equiv \mathrm{sin}\varphi _i`$, $`c_i\equiv \mathrm{cos}\varphi _i`$. The evolution matrix over one period of density modulation $`L=L_1+L_2`$ is $$U_2=Y-i𝝈𝐗=\mathrm{exp}[-i(𝝈\widehat{𝐗})\mathrm{\Phi }],$$ (1) $$Y=c_1c_2-\mathrm{cos}(2\theta _1-2\theta _2)s_1s_2,\mathrm{\Phi }=\mathrm{arccos}Y=\mathrm{arcsin}|𝐗|,\widehat{𝐗}=𝐗/|𝐗|.$$ (2) The vector $`𝐗`$ can be written in components as $$𝐗=(s_1c_2\mathrm{sin}2\theta _1+s_2c_1\mathrm{sin}2\theta _2,-s_1s_2\mathrm{sin}(2\theta _1-2\theta _2),-(s_1c_2\mathrm{cos}2\theta _1+s_2c_1\mathrm{cos}2\theta _2)).$$ (3) Notice that $`Y^2+𝐗^2=1`$ as a consequence of unitarity of $`U_2`$. From Eq. (1) one easily finds the transition probability after passing $`n`$ periods $$P(\nu _a\to \nu _b;r=nL)=\left(1-\frac{X_3^2}{𝐗^2}\right)\mathrm{sin}^2\mathrm{\Phi }_p,\mathrm{\Phi }_p=n\mathrm{\Phi }.$$ (4) The transition probability after passing an odd number of alternating layers, which can be considered as $`n`$ periods plus one additional layer of density $`N_1`$ (the corresponding distance $`r=nL+L_1`$), is also given by Eq. (4), the only difference being that the phase is now $$\mathrm{\Phi }_p=n\mathrm{\Phi }+\phi ,\phi =\mathrm{arcsin}\left(s_1\mathrm{sin}2\theta _1/\sqrt{1-X_3^2/|𝐗|^2}\right).$$ (5)
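The one-period algebra (1)-(4) is easy to check numerically. The sketch below builds Y and X for given phases and mixing angles (the values used are illustrative, not fitted to any density profile) and evaluates the transition probability after n full periods; at the realization (7), $`c_1=c_2=0`$, the condition $`X_3=0`$ holds and the pre-sine factor equals unity.

```python
import math

def one_period(phi1, phi2, th1, th2):
    """Y and X of the one-period evolution matrix, Eqs. (1)-(3)."""
    s1, c1 = math.sin(phi1), math.cos(phi1)
    s2, c2 = math.sin(phi2), math.cos(phi2)
    Y = c1 * c2 - s1 * s2 * math.cos(2 * th1 - 2 * th2)
    X = (s1 * c2 * math.sin(2 * th1) + s2 * c1 * math.sin(2 * th2),
         -s1 * s2 * math.sin(2 * th1 - 2 * th2),
         -(s1 * c2 * math.cos(2 * th1) + s2 * c1 * math.cos(2 * th2)))
    return Y, X

def prob_after_n_periods(n, phi1, phi2, th1, th2):
    """Transition probability (4) after n periods of the castle-wall profile."""
    Y, X = one_period(phi1, phi2, th1, th2)
    X2 = sum(x * x for x in X)
    Phi = math.acos(max(-1.0, min(1.0, Y)))
    return (1 - X[2] ** 2 / X2) * math.sin(n * Phi) ** 2

# phi1 = phi2 = pi/2 realizes (7): X3 = 0, maximal oscillation depth
for n in range(1, 6):
    print(n, prob_after_n_periods(n, math.pi / 2, math.pi / 2, 0.1, 1.4))
```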
Eqs. (4) and (5) give the transition probability at the borders of the layers. The pre-sine factor in (4) and $`\mathrm{\Phi }_p`$ are the depth and the phase of the parametric oscillations. The phase $`\mathrm{\Phi }_p`$ determines the length of the parametric oscillations (see figs. 1 and 2). The parametric resonance occurs when the pre-sine factor in (4) becomes equal to unity, i.e. the depth of the parametric oscillations is maximal. The resonance condition is therefore (see Eq. (26) in ) $$X_3=-(s_1c_2\mathrm{cos}2\theta _1+s_2c_1\mathrm{cos}2\theta _2)=0.$$ (6) The parametric resonance condition (6) can be realized in various ways (we do not consider the trivial cases of the MSW resonance for which $`X_3=0`$ because $`\mathrm{cos}2\theta _i=0`$ and $`s_i=\pm 1`$, $`i=1`$ or 2, or $`\mathrm{cos}2\theta _1=\mathrm{cos}2\theta _2=0`$). One well known realization is $`c_1=c_2=0`$, or $$2\varphi _1=\pi +2\pi k^{},2\varphi _2=\pi +2\pi k^{\prime \prime },$$ (7) independently of the mixing angles (it was renamed into “the oscillation length resonance” in ). (This was reproduced as solution III in , see Eq. (18) there.) If, however, $`c_1`$ and $`c_2`$ are non-zero, a cancellation between the two terms in (6) can occur, which implies a certain correlation between the phases and mixing angles in the layers. (This covers solution IV in .) For an example, see fig. 2. In general, the parametric resonance in neutrino oscillations does not require a periodic matter density profile (although the periodicity may make it easier to meet the resonance conditions), and can occur even in stochastic media . 3. As follows immediately from (4), the conditions for total neutrino conversion $`P(\nu _a\to \nu _b)=1`$ are $$X_3=0,\mathrm{\Phi }_p=\frac{\pi }{2}+2\pi k$$ (8) for evolution over any number of layers, including the two- and three-layer cases considered in . Thus maximal transition probability implies the fulfillment of the parametric resonance condition. According to Eq. (4), for two layers the conditions (8) reduce to $$X_3=0,Y=0.$$ (9) It is easy to see that Eqs. (9) are equivalent to the two conditions (22) in (solution IV). Notice that conditions (9) can be obtained directly from (1): the survival probability is $`P(\nu _a\to \nu _a)=Y^2+X_3^2`$ and therefore the condition of total neutrino conversion gives $`Y^2+X_3^2=0`$. Consider now the three-layer case (it gives a good approximation for the case of neutrinos crossing the earth, where the layers correspond to the mantle, the core and then again the mantle). Using the expression for the phase (5), one can write the conditions of total transition (8) as $`X_3=0`$, $`Y=\pm s_1\mathrm{sin}2\theta _1`$, or equivalently as $$X_3=0,2c_1Y-c_2=0.$$ (10) Conditions (10) can also be obtained directly from the evolution matrix, which in this case is $$U_3=Z-i𝝈𝐖.$$ (11) Here $$Z=2c_1Y-c_2,$$ (12) $`Y`$ has been defined in (2), and the vector $`𝐖`$ can be written in components as $$𝐖=(2s_1Y\mathrm{sin}2\theta _1+s_2\mathrm{sin}2\theta _2,0,-(2s_1Y\mathrm{cos}2\theta _1+s_2\mathrm{cos}2\theta _2)).$$ (13) The neutrino flavour transition probability in this case is $`P(\nu _a\to \nu _b)=W_1^2`$. Total neutrino conversion corresponds to zero survival probability: $`Z^2+W_3^2=0`$, or $$Z=0,W_3=0.$$ (14) There are two possible realizations of these conditions, depending on the value of $`c_1`$.
If $`c_1=0`$ then from (12) and (14) it follows that $`c_2`$ must vanish, too. So, we arrive at the realization (7) of the parametric resonance condition (6). The second condition in (14) is then the one for the “totality” of transition. It can be written as $`\mathrm{cos}(2\theta _1-2\theta _2)=\mathrm{cos}2\theta _2/(2\mathrm{cos}2\theta _1)`$, which is equivalent to the requirement that the transition probability
$$P=\mathrm{sin}^2(2\theta _2-4\theta _1)$$ (15)
takes the value 1. If $`c_1\ne 0`$ then the first equality in (14) implies $`Y=c_2/(2c_1)`$. Inserting this into the expression for $`W_3`$ in (13) one obtains $`W_3=X_3/c_1`$. The condition $`W_3=0`$ thus means $`X_3=0`$. Therefore in this case, too, total neutrino conversion implies parametric resonance. Conditions (10) are equivalent at $`c_1\ne 0`$ to the conditions of the total neutrino conversion given in Eq. (26) of the paper being commented upon. Similarly, one can analyze the case of $`\nu _2\to \nu _e`$ transitions which is relevant for oscillations of solar and supernova neutrinos in the earth. In particular, it is easy to show that the parametric resonance condition for the probability $`P_{2e}`$ of $`\nu _2\to \nu _e`$ oscillations is
$$X_3^{\prime}\equiv X_3\mathrm{cos}\theta _0-X_1\mathrm{sin}\theta _0=0.$$ (16)
The conditions of total $`\nu _2\to \nu _e`$ conversion found there imply equality (16).

4. The existence of strong enhancement peaks in the transition probability $`P`$, rather than the condition $`P=1`$, is of physical relevance. For sufficiently large vacuum mixing angles, the transition probability has a series of peaks of comparable height and the total conversion peak is just one of them. In fact, peaks with $`P_{max}<1`$ can contribute to observable effects even more than the ones with $`P_{max}=1`$. For some applications, e.g., for oscillations of solar neutrinos in the earth, even partial (or relative) enhancement can be important. Let us comment on various realizations of the parametric enhancement of neutrino oscillations. Large oscillation effects can be due to large mixing in matter and therefore to large-amplitude oscillations, or due to specific properties of the density profile. In general, both mechanisms are present. Depending on neutrino parameters, either of the mechanisms can dominate, or they can give comparable contributions to the observable effects.

(i) The most interesting case is the one when neutrino mixing in matter of both densities $`N_1`$ and $`N_2`$ is small: $`\mathrm{sin}^22\theta _1,\mathrm{sin}^22\theta _2\ll 1`$, and a strong enhancement of transition probability is due to the specific shape of the matter density distribution. Let us consider the three-layer case (neutrino oscillations in the earth) with densities $`N_1`$ - $`N_2`$ - $`N_1`$ ($`N_1<N_2`$) and concentrate on peaks of the transition probability with $`P_{max}<1`$ relevant for solar neutrinos. Suppose that the neutrino energy is between the MSW resonance energies corresponding to the densities $`N_1`$ and $`N_2`$, which means that $`2\theta _1<\pi /2`$ and $`2\theta _2>\pi /2`$. In this case $`\mathrm{sin}^22\theta _{1,2}\ll 1`$ implies that $`2\theta _1`$ is small and $`2\theta _2`$ is close to $`\pi `$. If $`4\theta _1+(\pi -2\theta _2)<\pi /2`$ (a condition equivalent to $`\mathrm{cos}(2\theta _2-4\theta _1)<0`$), the maximal enhancement of the transition probability takes place for the values of the oscillation phases $`2\varphi _i=\pi +2\pi k_i`$, i.e. for the realization (7) of the parametric resonance condition (6). In this case the transition probability given in (15) can be significantly larger than that in a single layer of matter of constant density with the largest of the two $`\mathrm{sin}^22\theta _i`$. If the neutrino energy is above the MSW resonance energies, which means $`2\theta _1,2\theta _2>\pi /2`$, the smallness of $`\mathrm{sin}^22\theta _{1,2}`$ (even for large or maximal vacuum mixing) is due to the matter suppression effects. Again for $`2(\pi -2\theta _1)-(\pi -2\theta _2)<\pi /2`$ the maximal enhancement of the transition probability corresponds to the realization (7) of the parametric resonance with probability given in (15). Notice that for neutrinos traversing the earth the phases $`2\varphi _i`$ are not arbitrary and the condition $`2\varphi _1,2\varphi _2=(\text{odd integer})\times \pi `$ can be satisfied only approximately. For $`2\varphi _1,2\varphi _2\ne (\text{odd integer})\times \pi `$ the transition probability is smaller than (15). In this case, for $`\mathrm{sin}^22\theta _0<0.03`$, the maximum of $`P`$ is achieved for relatively small but non-vanishing values of $`X_3`$, which corresponds to parametric oscillations with non-maximal depth.

(ii) For vacuum mixing close to the maximal one, $`\mathrm{sin}^22\theta _0\gtrsim 0.9`$, the MSW resonances in the core and mantle are very wide and therefore the mixing angles in medium in the resonance energy interval are also large: $`\mathrm{sin}^22\theta _1\approx \mathrm{sin}^22\theta _2\approx 0.9`$–$`1`$. The change of the mixing angle in passing from the mantle to the core or vice versa is small and one can consider the earth matter as a single layer with a density close to the MSW resonance one. The effect of the matter density profile on the transition probability is small, and what matters is the total oscillation phase acquired when neutrinos traverse the earth. The complete conversion requires this phase to be an odd integer multiple of $`\pi `$. In particular, for three layers this implies $`2(2\varphi _1+\varphi _2)=\pi (2k+1)`$. Indeed, the large-$`\mathrm{sin}^22\theta _0`$ solutions found in the paper being commented upon satisfy this equality with a high precision.

(iii) There are several peaks of transition probabilities with $`P(\nu _a\to \nu _b)=1`$ which correspond to intermediate values of the vacuum mixing angle, $`\mathrm{sin}^22\theta _0\approx 0.15`$–$`0.6`$. These peaks are due to an interplay of the effects of large-amplitude oscillations and the specific matter density profile. However, none of the known neutrino anomalies can be explained through neutrino oscillations with mixing angles in this range. The oscillation solutions of the solar neutrino problem require the vacuum mixing angle to be either very small or close to the maximal one; the dominant mode of the atmospheric neutrino oscillations requires maximal or almost maximal mixing. The mixing angle $`\theta _{13}`$ governing the subdominant $`\nu _e\to \nu _\mu `$ and $`\nu _e\to \nu _\tau `$ oscillations of atmospheric neutrinos is severely restricted by the CHOOZ experiment and the solar and atmospheric neutrino observations. Nevertheless, the solution with $`\mathrm{sin}^22\theta _0\approx 0.15`$, though on the verge of being ruled out by CHOOZ for the range of $`\mathrm{\Delta }m^2`$ allowed by the Super-Kamiokande atmospheric neutrino data, is at present not excluded. It can lead to a significant up-down asymmetry of the e-like events in the Super-Kamiokande atmospheric neutrino data.

We have shown that the effects discussed in the paper being commented upon are those of the parametric enhancement of neutrino oscillations, contrary to the claim of the authors that they have found completely new effects which have nothing to do with the parametric resonance. Written in the form (9) or (10), the conditions for total neutrino conversion have a clear physical meaning: they are the conditions of the parametric resonance supplemented by the requirement that the phase of the parametric oscillations equal $`\pi /2+\pi k`$. This work was supported in part by the TMR network grant ERBFMRX-CT960090 of the European Union. The work of E.A. was supported by Fundação para a Ciência e a Tecnologia through the grant PRAXIS XXI/BCC/16414/98.
# Density-matrix functional theory of the Hubbard model: An exact numerical study ## I Introduction Density functional theory (DFT) has been the subject of remarkable developments since its original formulation by Hohenberg and Kohn (HK) . After formal improvements, extensions, and an uncountable number of applications to a wide variety of physical problems, this theoretical approach has become the most efficient, albeit not infallible, method of determining the electronic properties of matter from first principles . The most important innovation of DFT, which is actually at the origin of its breakthrough, is to replace the wave function by the electronic density $`\rho (\vec{r})`$ as the fundamental variable of the many-body problem. In practice, density functional (DF) calculations are largely based on the Kohn-Sham (KS) scheme that reduces the many-body $`N`$-particle problem to the solution of a set of self-consistent single-particle equations . Although this transformation is formally exact, the implementations always require approximations, since the KS equations involve functional derivatives of the unknown interaction energy $`W[\rho (\vec{r})]`$, usually expressed in terms of the exchange and correlation (XC) energy $`E_{\mathrm{XC}}[\rho (\vec{r})]`$. Therefore, understanding the functional dependence of $`E_{\mathrm{XC}}[\rho (\vec{r})]`$ and improving its approximations are central to the development of DF methods. The currently most widespread Ansätze for $`E_{\mathrm{XC}}[\rho (\vec{r})]`$ —the local density approximation (LDA) with spin polarized and gradient corrected extensions — were originally derived from exact results for the homogeneous electron gas. It is one of the purposes of this paper to investigate the properties of the interaction-energy functional from an intrinsically inhomogeneous point of view, namely, by considering exactly solvable many-body lattice models. Despite the remarkable success of the local spin density approximation, present DFT fails systematically in accounting for phenomena where strong electron correlations play a central role, for example, in heavy-fermion materials or high-$`T_c`$ superconductors. These systems are usually described by simplifying the low-energy electron dynamics using parameterized lattice models such as Pariser-Parr-Pople, Hubbard, or Anderson models and related Hamiltonians . Being in principle an exact theory, the limitations of the DF approach have to be ascribed to the approximations used for exchange and correlation and not to the underlying HKS formalism. It would therefore be very interesting to extend the range of applicability of DFT to strongly correlated systems and to characterize the properties of $`E_{\mathrm{XC}}`$ in the limit of strong correlations. Studies of the XC functional on simple models should provide useful insights for future extensions to realistic Hamiltonians. Moreover, taking into account the demonstrated power of the DF approach in ab initio calculations, one may also expect that a DFT with an appropriate $`E_{\mathrm{XC}}`$ could become an efficient tool for studying many-body models, a subject of theoretical interest on its own. Several properties of DFT on lattice models have already been studied in previous works. Gunnarsson and Schönhammer were, to our knowledge, the first to propose a DF approach on a semiconductor model in order to study the band-gap problem. In this case the local site occupancies were treated as the basic variables.
Some years later Schindlmayr and Godby provided a different formulation of DFT on a lattice by considering as basic variables both diagonal elements $`\gamma _{ii}`$ and off-diagonal elements $`\gamma _{ij}`$ of the single-particle density matrix. Schönhammer et al. then derived a more general framework that unifies the two previous approaches . Using Levy’s constrained search method they showed that different basic variables and different $`W`$ functionals can be considered depending on the type of model or perturbation under study. Site occupations alone may be used as basic variables, if only the orbital energies are varied (i.e., if all hopping integrals $`t_{ij}`$ are kept constant for $`i\ne j`$). However, off-diagonal elements of the single-particle density matrix must be included explicitly if the functional $`W`$ is intended to be applied to more general situations involving different values of $`t_{ij}`$, for example, the Hubbard model on various lattice structures or for different interaction regimes, i.e., different $`U/t`$. In this paper we investigate the properties of Levy’s interaction-energy functional $`W`$ as a function of $`\gamma _{ij}`$ by solving the constrained search minimization problem exactly. In Sec. II the basic formalism of density-matrix functional theory (DMFT) on lattice models is recalled and the equations for determining $`W[\gamma _{ij}]`$ are derived. Sec. III presents and discusses exact results for the correlation energy $`E_\mathrm{C}`$ of the Hubbard model, which is given by the difference between $`W`$ and the Hartree-Fock energy $`E_{\mathrm{HF}}`$. These are obtained either numerically, for finite clusters with different lattice structures, or from the Bethe-Ansatz solution for the one-dimensional chain. Finally, Sec. IV summarizes our conclusions and points out some relevant extensions. ## II Theory In Sec. II A the main results of Levy’s formulation of DMFT are presented in a form that is appropriate for the study of model Hamiltonians such as the Hubbard model. Here, the hopping integrals $`t_{ij}`$ between sites (or orbitals) $`i`$ and $`j`$ play the role given in conventional DFT to the external potential $`V_{ext}(\vec{r})`$. Consequently, the single-particle density matrix $`\gamma _{ij}`$ replaces the density $`\rho (\vec{r})`$ as basic variable . In Sec. II B, we derive equations that allow us to determine Levy’s interaction-energy functional $`W[\gamma _{ij}]`$ in terms of the ground-state energy of a many-body Hamiltonian with effective hopping integrals $`\lambda _{ij}`$ that depend implicitly on $`\gamma _{ij}`$. ### A DMFT of lattice models We consider the many-body Hamiltonian
$$H=-\sum_{ij\sigma }t_{ij}\widehat{c}_{i\sigma }^{\dagger }\widehat{c}_{j\sigma }+\frac{1}{2}\sum_{\genfrac{}{}{0pt}{}{ijkl}{\sigma \sigma ^{\prime}}}V_{ijkl}\widehat{c}_{i\sigma }^{\dagger }\widehat{c}_{k\sigma ^{\prime}}^{\dagger }\widehat{c}_{l\sigma ^{\prime}}\widehat{c}_{j\sigma },$$ (1)
where $`\widehat{c}_{i\sigma }^{\dagger }`$ ($`\widehat{c}_{i\sigma }`$) is the usual creation (annihilation) operator for an electron with spin $`\sigma `$ at site (or orbital) $`i`$. $`H`$ can be regarded as the second quantization of Schrödinger’s equation on a basis . However, in the present paper, the hopping integrals $`t_{ij}`$ and the interaction matrix elements $`V_{ijkl}`$ are taken as parameters to be varied independently. The matrix $`t_{ij}`$ defines the lattice (e.g., one-dimensional chains, square or triangular two-dimensional lattices) and the range of single-particle interactions (e.g., up to first or second neighbors). From the ab initio perspective $`t_{ij}`$ is given by the external potential and by the choice of the basis . $`V_{ijkl}`$ defines the type of many-body interactions, which may be repulsive (Coulomb like) or attractive (in order to simulate electronic pairing) and which are usually approximated as short ranged (e.g., intra-atomic). Eq. (1) is mainly used in this section to derive general results which can then be applied to various specific models by simplifying the interactions. A particularly relevant example, to be considered in some detail in Sec. III, is the single-band Hubbard model with nearest neighbor (NN) hoppings , which can be obtained from Eq. (1) by setting $`t_{ij}=t`$ for $`i`$ and $`j`$ NN’s, $`t_{ij}=0`$ otherwise, and $`V_{ijkl}=U\delta _{ij}\delta _{kl}\delta _{ik}`$ . In order to apply DMFT to model Hamiltonians of the form (1) we follow Levy’s constrained search procedure as proposed by Schindlmayr and Godby . The ground-state energy is determined by minimizing the functional
$$E[\gamma _{ij}]=E_K[\gamma _{ij}]+W[\gamma _{ij}]$$ (2)
with respect to the single-particle density matrix $`\gamma _{ij}`$. $`E[\gamma _{ij}]`$ is physically defined for all density matrices that can be written as
$$\gamma _{ij}=\sum_{\sigma }\gamma _{ij\sigma }=\sum_{\sigma }\langle \mathrm{\Psi }|\widehat{c}_{i\sigma }^{\dagger }\widehat{c}_{j\sigma }|\mathrm{\Psi }\rangle $$ (3)
for all $`i`$ and $`j`$, where $`|\mathrm{\Psi }\rangle `$ is an $`N`$-particle state. In other words, $`\gamma _{ij}`$ must derive from a physical state. It is then said to be pure-state $`N`$-representable . The first term in Eq. (2) is given by
$$E_K=-\sum_{ij}t_{ij}\gamma _{ij}.$$ (4)
It includes all single-particle contributions and is usually regarded as the kinetic energy associated with the electronic motion in the lattice. Notice that Eq. (4) yields the exact kinetic energy for a given $`\gamma _{ij}`$. There are no corrections on $`E_K`$ to be included in other parts of the functional as in the KS approach. The second term in Eq. (2) is the interaction-energy functional given by
$$W[\gamma _{ij}]=\mathrm{min}\left[\frac{1}{2}\sum_{\genfrac{}{}{0pt}{}{nmkl}{\sigma \sigma ^{\prime}}}V_{nmkl}\langle \mathrm{\Psi }[\gamma _{ij}]|\widehat{c}_{n\sigma }^{\dagger }\widehat{c}_{k\sigma ^{\prime}}^{\dagger }\widehat{c}_{l\sigma ^{\prime}}\widehat{c}_{m\sigma }|\mathrm{\Psi }[\gamma _{ij}]\rangle \right].$$ (5)
The minimization in Eq. (5) implies a search over all $`N`$-particle states $`|\mathrm{\Psi }[\gamma _{ij}]\rangle `$ that satisfy $`\langle \mathrm{\Psi }[\gamma _{ij}]|\sum_\sigma \widehat{c}_{i\sigma }^{\dagger }\widehat{c}_{j\sigma }|\mathrm{\Psi }[\gamma _{ij}]\rangle =\gamma _{ij}`$ for all $`i`$ and $`j`$. Therefore, $`W[\gamma _{ij}]`$ represents the minimum value of the interaction energy compatible with a given density matrix $`\gamma _{ij}`$.
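As an illustration of the constrained search (5), consider the smallest nontrivial case: a Hubbard dimer with two electrons. The Python sketch below is our own illustration (the value of $`U`$ and the restriction to real amplitudes are simplifying assumptions); it minimizes the Coulomb energy at fixed NN bond order by direct constrained optimization. For this two-site problem the result can be checked against the closed form $`W=(U/2)(1-\sqrt{1-\gamma _{12}^2})`$ at half filling, which follows from maximizing $`(a-b)(c+d)`$ at fixed double occupancy.

```python
import numpy as np
from scipy.optimize import minimize

U = 4.0  # illustrative interaction strength (assumption)

def levy_W(gamma, U=U):
    """Levy constrained search on the Hubbard dimer (N_e = 2, S_z = 0).
    Real amplitudes (a,b,c,d) over the basis |up,dn>, |dn,up>, |updn,0>, |0,updn>;
    gamma_12 = <sum_s c1s^+ c2s> = (a-b)(c+d), double occupancy = c^2 + d^2."""
    coul = lambda p: U * (p[2]**2 + p[3]**2)
    cons = [{'type': 'eq', 'fun': lambda p: p @ p - 1.0},
            {'type': 'eq', 'fun': lambda p: (p[0] - p[1]) * (p[2] + p[3]) - gamma}]
    best = np.inf
    for seed in range(20):                       # random restarts: local minima exist
        rng = np.random.default_rng(seed)
        p0 = rng.normal(size=4); p0 /= np.linalg.norm(p0)
        r = minimize(coul, p0, constraints=cons, method='SLSQP')
        if r.success:
            best = min(best, r.fun)
    return best

for g in (0.0, 0.2, 0.4, 0.6, 0.8, 0.95):
    print(f"gamma12={g:.2f}  W_num={levy_W(g):.4f}"
          f"  W_closed={U/2*(1-np.sqrt(1-g**2)):.4f}")
```

The numerical minimum reproduces the closed form: full localization ($`\gamma _{12}=0`$) allows $`W=0`$, while approaching the uncorrelated limit $`\gamma _{12}\to 1`$ forces double occupations and drives $`W`$ up to $`U/2`$.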
$`W`$ is usually expressed in terms of the Hartree-Fock energy
$$E_{\mathrm{HF}}[\gamma _{ij}]=\frac{1}{2}\sum_{\genfrac{}{}{0pt}{}{ijkl}{\sigma \sigma ^{\prime}}}V_{ijkl}\left(\gamma _{ij\sigma }\gamma _{kl\sigma ^{\prime}}-\delta _{\sigma \sigma ^{\prime}}\gamma _{il\sigma }\gamma _{kj\sigma }\right)$$ (6)
and the correlation energy $`E_\mathrm{C}[\gamma _{ij}]`$ as
$$W[\gamma _{ij}]=E_{\mathrm{HF}}[\gamma _{ij}]+E_\mathrm{C}[\gamma _{ij}].$$ (7)
$`W`$ and $`E_\mathrm{C}`$ are universal functionals of $`\gamma _{ij}`$ in the sense that they are independent of $`t_{ij}`$, i.e., of the system under study. They depend on the considered interactions or model, as defined by $`V_{ijkl}`$, on the number of electrons $`N_e`$, and on the structure of the many-body Hilbert space, as given by $`N_e`$ and the number of orbitals or sites $`N_a`$. Notice that $`E_\mathrm{C}`$ in Eq. (7) does not include any exchange contributions. Given $`\gamma _{ij}`$ ($`\gamma _{ij\sigma }=\gamma _{ij}/2`$ in nonmagnetic cases) there is no need to approximate the exchange term, which is taken into account exactly by $`E_{\mathrm{HF}}`$ [Eq. (6)]. Nevertheless, if useful in practice, it is of course possible to split $`W`$ into the Hartree energy $`E_\mathrm{H}`$ and the exchange and correlation energy $`E_{\mathrm{XC}}`$ in a similar way as in the KS approach. The variational principle results from the following two relations :
$$E_{gs}\le -\sum_{ij}t_{ij}\gamma _{ij}+W[\gamma _{ij}]$$ (8)
for all pure-state $`N`$-representable $`\gamma _{ij}`$ , and
$$E_{gs}=-\sum_{ij}t_{ij}\gamma _{ij}^{gs}+W[\gamma _{ij}^{gs}],$$ (9)
where $`E_{gs}=\langle \mathrm{\Psi }_{gs}|H|\mathrm{\Psi }_{gs}\rangle `$ refers to the ground-state energy and $`\gamma _{ij}^{gs}=\langle \mathrm{\Psi }_{gs}|\sum_\sigma \widehat{c}_{i\sigma }^{\dagger }\widehat{c}_{j\sigma }|\mathrm{\Psi }_{gs}\rangle `$ to the ground-state single-particle density matrix. As already pointed out in previous works , $`W`$ and $`E_\mathrm{C}`$ depend in general on both diagonal elements $`\gamma _{ii}`$ and off-diagonal elements $`\gamma _{ij}`$ of the density matrix, since the hopping integrals $`t_{ij}`$ are non-local in the sites. The situation is similar to the DF approach proposed by Gilbert for the study of non-local potentials $`V_{ext}(\vec{r},\vec{r}^{\prime})`$ as those appearing in the theory of pseudo-potentials . A formulation of DFT on a lattice only in terms of $`\gamma _{ii}`$ would be possible if one would restrict oneself to a family of models with constant $`t_{ij}`$ for $`i\ne j`$. However, in this case the functional $`W[\gamma _{ii}]`$ would depend on the actual value of $`t_{ij}`$ for $`i\ne j`$ . The functional $`W[\gamma _{ij}]`$, valid for all lattice structures and for all types of hybridizations, can be simplified at the expense of universality if the hopping integrals are short ranged. For example, if only NN hoppings are considered, the kinetic energy $`E_K`$ is independent of the density-matrix elements between sites that are not NN’s. Therefore, the constrained search in Eq. (5) may be restricted to the $`|\mathrm{\Psi }[\gamma _{ij}]\rangle `$ that satisfy $`\langle \mathrm{\Psi }[\gamma _{ij}]|\sum_\sigma \widehat{c}_{i\sigma }^{\dagger }\widehat{c}_{j\sigma }|\mathrm{\Psi }[\gamma _{ij}]\rangle =\gamma _{ij}`$ only for $`i=j`$ and for NN $`ij`$. In this way the number of variables in $`W[\gamma _{ij}]`$ is reduced significantly, rendering the interpretation of the functional dependence far simpler.
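For the Hubbard interaction $`V_{ijkl}=U\delta _{ij}\delta _{kl}\delta _{ik}`$, Eq. (6) reduces to a purely local expression. The short specialization below is our own intermediate step, not spelled out in the text; it shows where the value $`E_{\mathrm{HF}}=(U/4)N_a`$ quoted in Sec. III comes from:
$$E_{\mathrm{HF}}=\frac{U}{2}\sum_i\left[\Big(\sum_\sigma \gamma _{ii\sigma }\Big)^2-\sum_\sigma \gamma _{ii\sigma }^2\right]=U\sum_i\gamma _{ii\uparrow }\gamma _{ii\downarrow },$$
so that in a nonmagnetic state at half-band filling ($`\gamma _{ii\sigma }=1/2`$) one obtains $`E_{\mathrm{HF}}=UN_a/4`$, independently of the lattice structure.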
While this restriction to NN bonds is a great practical advantage, it also implies that $`W`$ and $`E_\mathrm{C}`$ lose their universal character, since the dependence on the NN $`\gamma _{ij}`$ is now different for different lattices. In Sec. III results for one-, two-, and three-dimensional lattices with NN hoppings are compared in order to quantify this effect. For the applications in Sec. III we shall consider the single-band Hubbard model with NN hoppings, which in the usual notation is given by
$$H=-t\sum_{\langle i,j\rangle \sigma }\widehat{c}_{i\sigma }^{\dagger }\widehat{c}_{j\sigma }+U\sum_i\widehat{n}_{i\uparrow }\widehat{n}_{i\downarrow }.$$ (10)
In this case the interaction energy functional reads
$$W[\gamma _{ij}]=\mathrm{min}\left[U\sum_l\langle \mathrm{\Psi }[\gamma _{ij}]|\widehat{n}_{l\uparrow }\widehat{n}_{l\downarrow }|\mathrm{\Psi }[\gamma _{ij}]\rangle \right],$$ (11)
where the minimization is performed with respect to all $`N`$-particle $`|\mathrm{\Psi }[\gamma _{ij}]\rangle `$ satisfying $`\langle \mathrm{\Psi }[\gamma _{ij}]|\sum_\sigma \widehat{c}_{i\sigma }^{\dagger }\widehat{c}_{j\sigma }|\mathrm{\Psi }[\gamma _{ij}]\rangle =\gamma _{ij}`$ for $`i`$ and $`j`$ NN’s. If the interactions are repulsive ($`U>0`$), $`W[\gamma _{ij}]`$ represents the minimum average number of double occupations which can be obtained for a given degree of electron delocalization, i.e., for a given value of $`\gamma _{ij}`$. For attractive interactions ($`U<0`$) double occupations are favored and $`W[\gamma _{ij}]`$ corresponds to the maximum of $`\sum_l\langle \widehat{n}_{l\uparrow }\widehat{n}_{l\downarrow }\rangle `$ for a given $`\gamma _{ij}`$. ### B Exact XC energy functional In order to determine $`E_\mathrm{C}[\gamma _{ij}]`$ and $`W[\gamma _{ij}]`$ we look for the extremes of
$$F=\frac{1}{2}\sum_{\genfrac{}{}{0pt}{}{ijkl}{\sigma \sigma ^{\prime}}}V_{ijkl}\langle \mathrm{\Psi }|\widehat{c}_{i\sigma }^{\dagger }\widehat{c}_{k\sigma ^{\prime}}^{\dagger }\widehat{c}_{l\sigma ^{\prime}}\widehat{c}_{j\sigma }|\mathrm{\Psi }\rangle +\epsilon \left(1-\langle \mathrm{\Psi }|\mathrm{\Psi }\rangle \right)$$ (12)
$$+\sum_{i,j}\lambda _{ij}\left(\langle \mathrm{\Psi }|\sum_{\sigma }\widehat{c}_{i\sigma }^{\dagger }\widehat{c}_{j\sigma }|\mathrm{\Psi }\rangle -\gamma _{ij}\right)$$ (13)
with respect to $`|\mathrm{\Psi }\rangle `$. Lagrange multipliers $`\epsilon `$ and $`\lambda _{ij}`$ have been introduced to enforce the normalization of $`|\mathrm{\Psi }\rangle `$ and the conditions on the representability of $`\gamma _{ij}`$. Derivation with respect to $`\langle \mathrm{\Psi }|`$, $`\epsilon `$ and $`\lambda _{ij}`$ yields the eigenvalue equations
$$\sum_{ij\sigma }\lambda _{ij}\widehat{c}_{i\sigma }^{\dagger }\widehat{c}_{j\sigma }|\mathrm{\Psi }\rangle +\frac{1}{2}\sum_{\genfrac{}{}{0pt}{}{ijkl}{\sigma \sigma ^{\prime}}}V_{ijkl}\widehat{c}_{i\sigma }^{\dagger }\widehat{c}_{k\sigma ^{\prime}}^{\dagger }\widehat{c}_{l\sigma ^{\prime}}\widehat{c}_{j\sigma }|\mathrm{\Psi }\rangle =\epsilon |\mathrm{\Psi }\rangle ,$$ (14)
and the auxiliary conditions $`\langle \mathrm{\Psi }|\mathrm{\Psi }\rangle =1`$ and $`\gamma _{ij}=\langle \mathrm{\Psi }|\sum_\sigma \widehat{c}_{i\sigma }^{\dagger }\widehat{c}_{j\sigma }|\mathrm{\Psi }\rangle `$. The Lagrange multipliers $`\lambda _{ij}`$ play the role of hopping integrals to be chosen in order that $`|\mathrm{\Psi }\rangle `$ yields the given $`\gamma _{ij}`$. The pure-state representability of $`\gamma _{ij}`$ ensures that there is always a solution . In practice, however, one usually varies $`\lambda _{ij}`$ in order to scan the domain of representability of $`\gamma _{ij}`$. For given $`\lambda _{ij}`$, the eigenstate $`|\mathrm{\Psi }_0\rangle `$ corresponding to the lowest eigenvalue of Eq. (14) yields the minimum $`W[\gamma _{ij}]`$ for $`\gamma _{ij}=\langle \mathrm{\Psi }_0|\sum_\sigma \widehat{c}_{i\sigma }^{\dagger }\widehat{c}_{j\sigma }|\mathrm{\Psi }_0\rangle `$. Any other $`|\mathrm{\Psi }\rangle `$ satisfying $`\gamma _{ij}=\langle \mathrm{\Psi }|\sum_\sigma \widehat{c}_{i\sigma }^{\dagger }\widehat{c}_{j\sigma }|\mathrm{\Psi }\rangle `$ would have higher $`\epsilon `$ and thus higher $`W`$. The subset of $`\gamma _{ij}`$ which are representable by a ground state of Eq. (14) is the physically relevant one, since it necessarily includes the absolute minimum $`\gamma _{ij}^{gs}`$ of $`E[\gamma _{ij}]`$. Nevertheless, it should be noted that pure-state representable $`\gamma _{ij}`$ may be considered that can only be represented by excited states or by linear combinations of eigenstates of Eq. (14). In the latter case, $`\lambda _{ij}=0`$ for all $`i\ne j`$, and $`|\mathrm{\Psi }_0\rangle `$ is an eigenstate of the interaction term with lowest eigenvalue. Examples shall be discussed in Sec. III. For the Hubbard model Eq. (14) reduces to
$$\sum_{ij\sigma }\lambda _{ij}\widehat{c}_{i\sigma }^{\dagger }\widehat{c}_{j\sigma }|\mathrm{\Psi }\rangle +U\sum_i\widehat{n}_{i\uparrow }\widehat{n}_{i\downarrow }|\mathrm{\Psi }\rangle =\epsilon |\mathrm{\Psi }\rangle .$$ (15)
This eigenvalue problem can be solved numerically for clusters with different lattice structures and periodic boundary conditions. In this case we expand $`|\mathrm{\Psi }[\gamma _{ij}]\rangle `$ in a complete set of basis states $`|\mathrm{\Phi }_m\rangle `$ which have definite occupation numbers $`\nu _{i\sigma }^m`$ at all orbitals $`i\sigma `$ ($`\widehat{n}_{i\sigma }|\mathrm{\Phi }_m\rangle =\nu _{i\sigma }^m|\mathrm{\Phi }_m\rangle `$ with $`\nu _{i\sigma }^m=0`$ or $`1`$). The values of $`\nu _{i\sigma }^m`$ satisfy the usual conservation of the number of electrons $`N_e=N_{e\uparrow }+N_{e\downarrow }`$ and of the $`z`$ component of the total spin $`S_z=(N_{e\uparrow }-N_{e\downarrow })/2`$, where $`N_{e\sigma }=\sum_i\nu _{i\sigma }^m`$. For not too large clusters, the lowest energy $`|\mathrm{\Psi }_0[\gamma _{ij}]\rangle `$ —the ground state of Eq. (15)— can be determined by sparse-matrix diagonalization procedures, for example, by using the Lanczos iterative method . $`|\mathrm{\Psi }_0[\gamma _{ij}]\rangle `$ is calculated in the subspace of minimal $`S_z`$ since this ensures that there are no a priori restrictions on the total spin $`S`$. In addition, spin-projector operators may be used to study the dependence of $`E_\mathrm{C}(\gamma _{12})`$ on $`S`$. For a one-dimensional (1D) chain with NN hoppings $`t_{ij}=t`$, translational symmetry implies equal density-matrix elements $`\gamma _{ij}`$ between NN’s. Therefore, one may set $`\lambda _{ij}=\lambda `$ for all NN $`ij`$, and then Eq. (15) has the same form as the 1D Hubbard model for which Lieb and Wu’s exact solution is available . In this case the lowest eigenvalue $`\epsilon `$ is determined following the work by Shiba . The coupled Bethe-Ansatz equations are solved as a function of $`\lambda `$, band-filling $`n=N_e/N_a`$, and for positive and negative $`U`$, by means of a simple iterative procedure. ## III Results and Discussion In this section we present and discuss exact results for the correlation energy functional $`E_\mathrm{C}(\gamma _{ij})`$ of the single-band Hubbard Hamiltonian with nearest neighbor hoppings . Given the lattice structure, $`N_a`$ and $`N_e`$, the model is characterized by the dimensionless parameter $`U/t`$ which measures the competition between kinetic and interaction energies [see Eq. (10)].
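Before turning to the results, the $`\lambda `$-scan of Sec. II B can be condensed into a short sketch. The following Python code is an illustration with assumed ring size, filling, $`U`$, and $`\lambda `$ grid (a production implementation would use the sparse Lanczos solver mentioned above); it diagonalizes Eq. (15) for a 6-site ring and maps out $`W(\gamma _{12})`$ parametrically in $`\lambda `$.

```python
import numpy as np
from itertools import combinations

Na, Nup, Ndn, U = 6, 3, 3, 1.0          # illustrative 6-site ring at half filling
bonds = [(i, (i + 1) % Na) for i in range(Na)]

def basis(n):                            # bit-encoded occupations of one spin species
    return [sum(1 << i for i in c) for c in combinations(range(Na), n)]

def sign_below(s, i):                    # (-1)**(number of occupied orbitals below i)
    return 1 - 2 * (bin(s & ((1 << i) - 1)).count("1") & 1)

def hops(s):                             # all c_i^+ c_j |s> over NN bonds, with fermion signs
    out = []
    for a, b in bonds:
        for i, j in ((a, b), (b, a)):
            if (s >> j) & 1 and not (s >> i) & 1:
                t = s ^ (1 << j)
                out.append((t | (1 << i), sign_below(s, j) * sign_below(t, i)))
    return out

up, dn = basis(Nup), basis(Ndn)
iu = {s: k for k, s in enumerate(up)}
idn = {s: k for k, s in enumerate(dn)}
dim = len(up) * len(dn)

def solve(lam):                          # ground state of Eq. (15) for given lambda
    H = np.zeros((dim, dim))
    for ku, su in enumerate(up):
        for kd, sd in enumerate(dn):
            r = ku * len(dn) + kd
            H[r, r] += U * bin(su & sd).count("1")       # U * sum_l n_l_up n_l_dn
            for t, sg in hops(su):
                H[iu[t] * len(dn) + kd, r] += lam * sg   # up-spin hopping
            for t, sg in hops(sd):
                H[ku * len(dn) + idn[t], r] += lam * sg  # down-spin hopping
    eps, v = np.linalg.eigh(H)
    psi = v[:, 0]
    D = sum(psi[k]**2 * bin(up[k // len(dn)] & dn[k % len(dn)]).count("1")
            for k in range(dim))                          # <sum_l n_up n_dn>
    gamma12 = (eps[0] - U * D) / (lam * 2 * Na)           # 2*Na ordered NN pairs on the ring
    return gamma12, U * D

for lam in (-4.0, -1.0, -0.25, -0.05):   # sweeping lambda maps out W(gamma_12)
    g, W = solve(lam)
    print(f"lambda={lam:6.2f}  gamma_12={g:.4f}  W={W:.4f}")
```

Each $`\lambda <0`$ produces one point $`(\gamma _{12}(\lambda ),W(\lambda ))`$; sweeping $`\lambda `$ from large negative values toward zero traces the functional from the uncorrelated limit $`\gamma _{12}\to \gamma _{12}^0`$ toward the strongly correlated regime.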
$`U>0`$ corresponds to the usual intra-atomic repulsive Coulomb interaction, while the attractive case ($`U<0`$) simulates intra-atomic pairing of electrons. ### A Repulsive interaction $`U>0`$ In Fig. 1 the correlation energy $`E_\mathrm{C}`$ of the one-dimensional (1D) Hubbard model is shown for half-band filling ($`N_e=N_a`$) as a function of the density-matrix element or bond order $`\gamma _{12}`$ between NN’s. $`\gamma _{ij}=\gamma _{12}`$ for all NN’s $`i`$ and $`j`$. Results are given for rings of finite length $`N_a`$ as well as for the infinite chain. Several general qualitative features may be identified. First of all we observe that on bipartite lattices $`E_\mathrm{C}(\gamma _{12})=E_\mathrm{C}(-\gamma _{12})`$, since the sign of the NN bond order can be changed without affecting the interaction energy $`W(\gamma _{12})`$ by changing the phase of the local orbitals at one of the sublattices ($`c_{i\sigma }\to -c_{i\sigma }`$ for $`i\in A`$ and $`c_{j\sigma }`$ unchanged for $`j\in B`$, where $`A`$ and $`B`$ refer to the sublattices). Let us recall that the domain of definition of $`E_\mathrm{C}(\gamma _{12})`$ is limited by the pure-state representability of $`\gamma _{ij}`$. The upper bound $`\gamma _{12}^{0+}`$ and the lower bound $`\gamma _{12}^{0-}`$ for $`\gamma _{12}`$ ($`\gamma _{12}^{0+}=-\gamma _{12}^{0-}=\gamma _{12}^0`$ on bipartite lattices) are the extreme values of the bond order between NN’s on a given lattice and for given $`N_a`$ and $`N_e`$ ($`\gamma _{ij}=\gamma _{12}`$ for all NN $`ij`$). They represent the maximum degree of electron delocalization. $`\gamma _{12}^{0+}`$ and $`\gamma _{12}^{0-}`$ correspond to the extremes of the kinetic energy $`E_K`$ [$`E_K=-\sum_{ij}t_{ij}\gamma _{ij}=-(zN_a/2)t\gamma _{12}`$, where $`z`$ is the coordination number] and thus to the ground state of the Hubbard model for $`U=0`$ [$`\gamma _{12}^{0+}`$ for $`t>0`$ and $`\gamma _{12}^{0-}`$ for $`t<0`$, see Eq. (10)]. For $`\gamma _{12}=\gamma _{12}^0`$ the underlying electronic state $`|\mathrm{\Psi }_0\rangle `$ is usually a single Slater determinant and therefore $`E_\mathrm{C}(\gamma _{12}^0)=0`$. In other words, the correlation energy vanishes as expected in the fully delocalized limit . As $`|\gamma _{12}|`$ decreases $`E_\mathrm{C}`$ decreases ($`E_\mathrm{C}<0`$), since correlations can reduce the Coulomb energy more and more efficiently as the electrons localize. $`E_\mathrm{C}`$ is minimum in the strongly correlated limit $`\gamma _{12}=\gamma _{12}^{\mathrm{\infty }}`$. For half-band filling this corresponds to a fully localized electronic state ($`\gamma _{12}^{\mathrm{\infty }}=0`$). Here, $`E_\mathrm{C}`$ cancels out the Hartree-Fock energy $`E_{\mathrm{HF}}`$ and the Coulomb energy $`W`$ vanishes ($`E_\mathrm{C}^{\mathrm{\infty }}=-E_{\mathrm{HF}}`$) . The ground-state values of $`\gamma _{12}^{gs}`$ and $`E_{gs}`$ for a given $`U/t`$ result from the competition between lowering $`E_\mathrm{C}`$ by decreasing $`\gamma _{12}`$ and lowering $`E_K`$ by increasing it ($`t>0`$). The divergence of $`\partial E_\mathrm{C}/\partial \gamma _{12}`$ for $`\gamma _{12}=\gamma _{12}^0`$ is a necessary condition in order that $`\gamma _{12}^{gs}<\gamma _{12}^0`$ for arbitrary small $`U>0`$. On the other side, for small $`\gamma _{12}`$, we observe that $`(E_\mathrm{C}+E_{\mathrm{HF}})\propto \gamma _{12}^2`$. This implies that for $`U/t\gg 1`$, $`\gamma _{12}^{gs}\propto t/U`$ and $`E_{gs}\propto -t^2/U`$, a well known result in the Heisenberg limit of the Hubbard model ($`N_e=N_a`$) . A more quantitative analysis of $`E_\mathrm{C}(\gamma _{12})`$, and in particular the comparison of results for different $`N_a`$, is complicated by the size dependence of $`\gamma _{12}^0`$ and $`E_{\mathrm{HF}}`$. It is therefore useful to measure $`E_\mathrm{C}`$ in units of the Hartree-Fock energy and to bring the domains of representability to a common range by considering $`\epsilon _\mathrm{C}=E_\mathrm{C}/E_{\mathrm{HF}}`$ as a function of $`g_{12}=\gamma _{12}/\gamma _{12}^0`$. Fig. 2 shows that $`\epsilon _\mathrm{C}(g_{12})`$ has approximately the same behavior for all considered $`N_a`$. Finite size effects are small except for the very small sizes. The largest deviations from the common trend are found for $`N_a=N_e=4`$. Here we observe a discontinuous drop of $`\epsilon _\mathrm{C}`$ for $`g_{12}\to 1`$ ($`g_{12}<1`$) which is due to the degeneracy of the single-particle spectrum. In fact in this case two of the four electrons occupy a doubly degenerate state in the uncorrelated limit and the minimum interaction energy $`W(\gamma _{12})`$ does not correspond to a single-Slater-determinant state even for $`\gamma _{12}=\gamma _{12}^0`$ . As $`N_a`$ increases $`\epsilon _\mathrm{C}(g_{12})`$ approaches the infinite-length limit with alternations around the $`N_a=\mathrm{\infty }`$ curve. The strong similarity between $`\epsilon _\mathrm{C}(g_{12})`$ for small $`N_a`$ and for $`N_a=\mathrm{\infty }`$ is a remarkable result. It suggests that good approximations for $`E_\mathrm{C}(\gamma _{12})`$ in extended systems could be derived from finite cluster calculations. Fig. 3 shows the band-filling dependence of $`E_\mathrm{C}(\gamma _{12})`$ in a 10-site 1D Hubbard ring. Results are given for $`N_e\le N_a`$, since for $`N_e\ge N_a`$, $`E_\mathrm{C}(\gamma _{ij},N_e)=E_\mathrm{C}(\gamma _{ij},2N_a-N_e)`$ as a result of electron-hole symmetry . Although $`E_\mathrm{C}(\gamma _{12})`$ depends strongly on $`N_e`$, several qualitative properties are shared by all band fillings: (i) As in the half-filled band case, the domain of representability of $`\gamma _{12}`$ is bound by the bond orders in the uncorrelated limits. In fact, $`\gamma _{12}^{0-}\le \gamma _{12}\le \gamma _{12}^{0+}`$, where $`\gamma _{12}^{0+}`$ ($`\gamma _{12}^{0-}`$) corresponds to the ground state of the $`U=0`$ tight-binding model for $`t>0`$ ($`t<0`$). On bipartite lattices $`\gamma _{12}^{0+}=-\gamma _{12}^{0-}=\gamma _{12}^0`$. Notice that $`\gamma _{12}^0`$ increases monotonously with $`N_e`$ as the single-particle band is filled up. This is an important contribution to the band-filling dependence of $`E_\mathrm{C}`$ (see Fig. 3). (ii) In the delocalized limit, $`E_\mathrm{C}(\gamma _{12}^0)=0`$ for all the $`N_e`$ for which $`W(\gamma _{12}^0)`$ derives from a single Slater determinant . Moreover, the divergence of $`\partial E_\mathrm{C}/\partial \gamma _{12}`$ for $`\gamma _{12}=\gamma _{12}^0`$ indicates that $`\gamma _{12}^{gs}<\gamma _{12}^0`$ for arbitrary small $`U>0`$, as expected from perturbation theory. (iii) Starting from $`\gamma _{12}=\gamma _{12}^0`$, $`E_\mathrm{C}(\gamma _{12})`$ decreases with decreasing $`\gamma _{12}`$, reaching its lowest possible value $`E_\mathrm{C}^{\mathrm{\infty }}=-E_{\mathrm{HF}}`$ for $`\gamma _{12}=\gamma _{12}^{\mathrm{\infty }+}`$ ($`N_e\le N_a`$). The same behavior is of course observed for $`\gamma _{12}<0`$. In particular, $`E_\mathrm{C}=-E_{\mathrm{HF}}`$ also for $`\gamma _{12}=\gamma _{12}^{\mathrm{\infty }-}`$.
As shown in Fig. 3, $`E_\mathrm{C}^{\mathrm{\infty }}`$ decreases rapidly with increasing $`N_e`$, since $`E_{\mathrm{HF}}`$ increases quadratically with the electron density . (iv) On bipartite lattices $`\gamma _{12}^{\mathrm{\infty }+}=-\gamma _{12}^{\mathrm{\infty }-}=\gamma _{12}^{\mathrm{\infty }}`$, while on non-bipartite structures one generally has $`|\gamma _{12}^{\mathrm{\infty }+}|\ne |\gamma _{12}^{\mathrm{\infty }-}|`$, since the single-particle spectrum is different for positive and negative energies. The decrease of $`E_\mathrm{C}`$ with decreasing $`|\gamma _{12}|`$ shows that the reduction of the Coulomb energy due to correlations is done at the expense of kinetic energy or electron delocalization, as already discussed for $`N_e=N_a`$ (Fig. 1). (v) $`\gamma _{12}^{\mathrm{\infty }}>0`$ for all $`N_e<N_a`$ ($`\gamma _{12}^{\mathrm{\infty }}=0`$ for $`N_e=N_a`$). $`\gamma _{12}^{\mathrm{\infty }}`$ represents the largest NN bond order that can be constructed under the constraint of vanishing Coulomb repulsion energy. A lower bound for $`\gamma _{12}^{\mathrm{\infty }}`$ is given by the bond order $`\gamma _{12}^{FM}`$ in the fully-polarized ferromagnetic state ($`\gamma _{12}^{\mathrm{\infty }}\ge \gamma _{12}^{FM}`$). This is obtained by occupying the lowest single-particle states with all electrons of the same spin ($`N_e\le N_a`$). Therefore, $`\gamma _{12}^{FM}`$ increases with $`N_e`$ for $`N_e\le N_a/2`$ and then decreases for $`N_a/2<N_e\le N_a`$, reaching $`\gamma _{12}^{FM}=0`$ at half-band filling ($`\gamma _{12}^{FM}>0`$ for $`N_e<N_a`$). In this way the non-monotonous dependence of $`\gamma _{12}^{\mathrm{\infty }}`$ on $`N_e`$ can be explained (see Fig. 3). (vi) The correlation energy is constant and equal to $`-E_{\mathrm{HF}}`$ for $`\gamma _{12}^{\mathrm{\infty }-}\le \gamma _{12}\le \gamma _{12}^{\mathrm{\infty }+}`$. These values of $`\gamma _{12}`$ can never correspond to the ground-state energy of the Hubbard model, since in this range increasing $`\gamma _{12}`$ always lowers the kinetic energy ($`t>0`$) without increasing the Coulomb repulsion ($`\gamma _{12}^{\mathrm{\infty }}\le \gamma _{12}^{gs}\le \gamma _{12}^0`$). For $`\gamma _{12}^{\mathrm{\infty }-}<\gamma _{12}<\gamma _{12}^{\mathrm{\infty }+}`$, $`\gamma _{12}`$ cannot be represented by a ground state of Eq. (15). In this range $`\gamma _{12}`$ can be derived from a linear combination of states having minimal Coulomb repulsion . In order to compare the functional dependences of the correlation energy for different band fillings, it is useful to scale $`E_\mathrm{C}`$ in units of the Hartree-Fock energy and to bring the relevant domains $`\gamma _{12}^{\mathrm{\infty }}\le \gamma _{12}\le \gamma _{12}^0`$ of different $`N_e`$ to a common range. In Fig. 4, $`\epsilon _\mathrm{C}=E_\mathrm{C}/E_{\mathrm{HF}}`$ is shown as a function of $`g_{12}=(\gamma _{12}-\gamma _{12}^{\mathrm{\infty }})/(\gamma _{12}^0-\gamma _{12}^{\mathrm{\infty }})`$. We observe that the results for $`\epsilon _\mathrm{C}(g_{12})`$ are remarkably similar for all band-fillings. The largest deviations from the common trend are found for $`N_e=4`$. As already discussed for $`N_a=N_e=4`$, this anomalous behavior is related to the degeneracy of the single-particle spectrum and to the finite size of the system. Fig. 4 shows that for the Hubbard model the largest part of the dependence of $`E_\mathrm{C}(\gamma _{12})`$ on band filling comes from $`E_{\mathrm{HF}}`$, $`\gamma _{12}^0`$ and $`\gamma _{12}^{\mathrm{\infty }}`$. Similar conclusions are derived from the results for the infinite 1D chain presented in Fig. 5. For a given $`g_{12}`$, $`\epsilon _\mathrm{C}(g_{12})`$ depends weakly on $`N_e/N_a`$ if the carrier density is low ($`N_e/N_a\lesssim 0.4`$), and tends to increase as we approach half-band filling [see Fig. 5(b)]. For high carrier densities it becomes comparatively more difficult to minimize the Coulomb energy for a given degree of delocalization $`g_{12}`$. The effect is most pronounced for $`g_{12}\approx 0.8`$–$`0.9`$, i.e., close to the uncorrelated limit. As we approach the strongly correlated limit ($`g_{12}\lesssim 0.4`$) the dependence of $`\epsilon _\mathrm{C}`$ on $`N_e/N_a`$ is very weak, even for $`N_e/N_a`$ close to 1. One concludes that $`\epsilon _\mathrm{C}(g_{12})`$ is a useful basis for introducing practical approximations on more complex systems. The correlation energy $`E_\mathrm{C}`$ is a universal functional of the complete single-particle density matrix $`\gamma _{ij}`$. $`E_\mathrm{C}[\gamma _{ij}]`$ and $`W[\gamma _{ij}]`$ may depend on $`N_a`$ and $`N_e`$ but are independent of $`t_{ij}`$ and in particular of the lattice structure. The functional $`E_\mathrm{C}(\gamma _{12})`$ considered in this paper depends by definition on the type of lattice, since the constraints imposed in the minimization only apply to NN bonds. In order to investigate this problem we have determined $`E_\mathrm{C}(\gamma _{12})`$ for 2D and 3D finite clusters having $`N_a\le 12`$ sites and periodic boundary conditions. In Fig. 6 we compare these results with those of the 1D $`12`$-site periodic ring. As shown in the inset figure, the qualitative behavior is in all cases very similar. The main quantitative differences come from the domain of representability of $`\gamma _{12}`$, i.e., from the values of $`\gamma _{12}^{0+}`$ and $`\gamma _{12}^{0-}`$ ($`\gamma _{12}^{0-}\le \gamma _{12}\le \gamma _{12}^{0+}`$). Once scaled as a function of $`\gamma _{12}/\gamma _{12}^0`$, $`E_\mathrm{C}`$ depends rather weakly on the lattice structure. Notice that the Hartree-Fock energy $`E_{\mathrm{HF}}=(U/4)N_a`$ is the same for all structures. However, for the BCC structure we obtain $`W(\gamma _{12}^0)<E_{\mathrm{HF}}`$, i.e., $`E_\mathrm{C}(\gamma _{12}^0)<0`$, due to degeneracies in the single-particle spectrum of the considered finite cluster [see inset Fig. 6(b)]. In order to correct for this finite size effect it is here more appropriate to consider $`\epsilon _\mathrm{C}=[E_\mathrm{C}(\gamma _{12})-E_\mathrm{C}(\gamma _{12}^0)]/W(\gamma _{12}^0)`$. Still, the differences in $`\epsilon _\mathrm{C}`$ between BCC and FCC structures appear to be more important than between square and triangular 2D lattices. This is probably related to the degeneracies in the spectrum of the BCC cluster, as already observed for rings with $`N_e=4m`$ [Figs. 2 and 4(a)]. The largest changes in $`\epsilon _\mathrm{C}`$ for different lattice structures are observed for an intermediate degree of delocalization ($`g_{12}\approx 0.7`$–$`0.9`$, see Fig. 6). Note that there is no monotonic trend as a function of the lattice dimension. For example, for $`g_{12}=0.7`$–$`0.9`$, $`\epsilon _\mathrm{C}`$ first increases somewhat as we go from 1D to 2D lattices, but it then decreases, coming close to the 1D curve for the 3D FCC lattice [$`\epsilon _\mathrm{C}(2\mathrm{D})>\epsilon _\mathrm{C}(\mathrm{FCC})>\epsilon _\mathrm{C}(1\mathrm{D})>\epsilon _\mathrm{C}(\mathrm{BCC})`$ for $`0.7\lesssim g_{12}\lesssim 0.9`$]. Finally, it is worth noting that in the strongly correlated limit ($`g_{12}\lesssim 0.3`$) the results for $`\epsilon _\mathrm{C}(g_{12})`$ are nearly the same for all considered lattice structures (see Fig. 6).
This should be useful in order to develop simple general approximations to $`E_\mathrm{C}(\gamma _{12})`$ in this limit. ### B Attractive interaction $`U<0`$ The attractive Hubbard model describes itinerant electrons with local intra-atomic pairing ($`U<0`$). The electronic correlations are very different from those found in the repulsive case discussed so far. In particular, Levy’s interaction energy functional $`W(\gamma _{ij})`$ now corresponds to the maximum average number of double occupations for a given $`\gamma _{ij}`$ [see Eq. (11)]. Therefore, it is very interesting to investigate the properties of the correlation energy functional $`E_\mathrm{C}(\gamma _{ij})`$ also for $`U<0`$ and to contrast them with the results of the previous section. In Fig. 7 the correlation energy $`E_\mathrm{C}(\gamma _{12})`$ of the attractive Hubbard model is given at half-band filling for various finite rings ($`N_a\le 12`$) and for the infinite 1D chain ($`N_e=N_a`$). The band-filling dependence of $`E_\mathrm{C}(\gamma _{12})`$ is shown in Fig. 8 for a finite $`12`$-site ring ($`N_e\le N_a=12`$). As in the repulsive case, $`\gamma _{12}^{0-}\le \gamma _{12}\le \gamma _{12}^{0+}`$, since the domain of representability of $`\gamma _{12}`$ is independent of the form or type of the interaction. Moreover, $`E_\mathrm{C}(\gamma _{12})=E_\mathrm{C}(-\gamma _{12})`$ due to the electron-hole symmetry of bipartite lattices . Starting from $`\gamma _{12}^{0+}`$ or $`\gamma _{12}^{0-}`$ ($`\gamma _{12}^{0+}=-\gamma _{12}^{0-}=\gamma _{12}^0`$ on bipartite lattices), $`E_\mathrm{C}(\gamma _{12})`$ decreases with decreasing $`|\gamma _{12}|`$, reaching the minimum $`E_\mathrm{C}^{\mathrm{\infty }}`$ for $`\gamma _{12}=\gamma _{12}^{\mathrm{\infty }}`$ and for $`\gamma _{12}=-\gamma _{12}^{\mathrm{\infty }}`$ ($`\gamma _{12}^{\mathrm{\infty }+}=-\gamma _{12}^{\mathrm{\infty }-}=\gamma _{12}^{\mathrm{\infty }}`$ in this case). For $`N_e`$ even, $`W(\gamma _{12}^{\mathrm{\infty }})=N_eU/2`$, and for $`N_e`$ odd, $`W(\gamma _{12}^{\mathrm{\infty }})=(N_e-1)U/2`$, which correspond to the maximum number of electron pairs that can be formed. For $`N_e`$ even, the minimum $`E_\mathrm{C}^{\mathrm{\infty }}=U(N_e/2)[1-N_e/(2N_a)]`$ is achieved only for a complete electron localization (i.e., $`\gamma _{12}^{\mathrm{\infty }}=0`$). In contrast, for odd $`N_e`$ a finite-size effect is observed. In this case, one of the electrons remains unpaired even in the limit of strong electron correlations and the minimum of $`E_\mathrm{C}`$ is $`E_\mathrm{C}^{\mathrm{\infty }}=U[(N_e-1)/2][1-(N_e+1)/(2N_a)]`$. Moreover, non-vanishing $`\gamma _{12}^{\mathrm{\infty }}`$ are obtained as a result of the delocalization of the unpaired electron. $`\gamma _{12}^{\mathrm{\infty }}`$ represents the maximum bond order that can be obtained when $`(N_e-1)/2`$ electron pairs are formed ($`\gamma _{12}^{\mathrm{\infty }}\to 0`$ for $`N_a\to \mathrm{\infty }`$, $`N_e`$ odd). Notice that in all cases the ground state $`\gamma _{12}^{gs}`$ is found in the interval $`\gamma _{12}^{\mathrm{\infty }}\le \gamma _{12}^{gs}\le \gamma _{12}^0`$. It is interesting to observe that $`E_\mathrm{C}(\gamma _{12})`$ can be appropriately scaled in a similar way as for $`U>0`$. In Fig. 8(b), $`\epsilon _\mathrm{C}(g_{12})=E_\mathrm{C}/|E_\mathrm{C}^{\mathrm{\infty }}|`$ is shown as a function of the degree of delocalization $`g_{12}=(\gamma _{12}-\gamma _{12}^{\mathrm{\infty }})/(\gamma _{12}^0-\gamma _{12}^{\mathrm{\infty }})`$. $`\epsilon _\mathrm{C}(g_{12})`$ presents a pseudo-universal behavior in the sense that it depends weakly on $`N_a`$ and $`N_e`$. The main deviations from the common trend are found for $`N_e=N_a=4`$. As already discussed for $`U>0`$, this is a consequence of degeneracies in the single-particle spectrum. In this case, the wave function corresponding to the minimum in Levy’s functional for $`\gamma _{12}\approx \gamma _{12}^0`$ [Eq. (11)] cannot be described by a single Slater determinant and $`W(\gamma _{12}\approx \gamma _{12}^0)<E_{\mathrm{HF}}`$. ## IV Conclusion Density-matrix functional theory has been applied to lattice Hamiltonians taking the Hubbard model as a particularly relevant example. In this framework the basic variable is the single-particle density matrix $`\gamma _{ij}`$ and the key unknown is the correlation energy functional $`E_\mathrm{C}[\gamma _{ij}]`$. The challenge is therefore to determine $`E_\mathrm{C}[\gamma _{ij}]`$ or to provide useful, accurate approximations for it. In this paper we presented a systematic study of the functional dependence of $`E_\mathrm{C}(\gamma _{12})`$ on periodic lattices, where $`\gamma _{12}`$ is the density-matrix element between nearest neighbors ($`\gamma _{ij}=\gamma _{12}`$ for all NN $`ij`$). Based on finite-cluster exact diagonalizations and on the Bethe-Ansatz solution of the 1D chain, we derived rigorous results for $`E_\mathrm{C}(\gamma _{12})`$ of the Hubbard model as a function of the number of sites $`N_a`$, band filling $`N_e/N_a`$ and lattice structure. A basis for applications of density-matrix functional theory to many-body lattice models is thereby provided. The observed pseudo-universal behavior of $`\epsilon _\mathrm{C}(g_{12})=E_\mathrm{C}/E_{\mathrm{HF}}`$ as a function of $`g_{12}=(\gamma _{12}-\gamma _{12}^{\mathrm{\infty }})/(\gamma _{12}^0-\gamma _{12}^{\mathrm{\infty }})`$ encourages transferring $`\epsilon _\mathrm{C}(g_{12})`$ from finite-size systems to infinite lattices or even to different lattice geometries. In fact, the exact $`E_\mathrm{C}(\gamma _{12})`$ of the Hubbard dimer has been recently used to infer a simple general Ansatz for $`E_\mathrm{C}(\gamma _{12})`$ . With this approximation to $`E_\mathrm{C}(\gamma _{12})`$ the ground-state energies and charge-excitation gaps of 1D and 2D lattices have been determined successfully in the whole range of $`U/t`$. Further investigations, for example, by considering magnetic impurity models or more complex multiband Hamiltonians, are certainly worthwhile. ###### Acknowledgements. The authors gratefully acknowledge the financial support provided by CONACyT-Mexico (RLS) and by the Alexander von Humboldt Foundation (GMP).
Fig. 1: $`r(N,B,0.15)/\sqrt{N}`$ versus $`B`$.

Houdayer and Martin reply: Marinari, Parisi, and Zuliani have studied the Edwards-Anderson spin glass model at a field $`B=0.4`$. They had previously studied the four-dimensional case, and in their comment on our paper they present results for the $`d=3`$ case using the same techniques. The cornerstone of their approach is an out-of-equilibrium estimate of $`q_{\text{min}}`$. They find this quantity to be different from the (equilibrium) mean value of $`q`$, giving evidence for replica symmetry breaking (RSB). However, we see a danger in relying on out-of-equilibrium measurements: metastable states that do not contribute to the (equilibrium) $`P(q)`$ (because they have excess free energies diverging with system size) may very well contribute to out-of-equilibrium overlaps. Since we have some doubts about the validity of out-of-equilibrium measurements of $`q_{\text{min}}`$, let us consider the evidence for RSB in the presence of a field using equilibrium measurements. Most work has been performed in $`d=4`$, where there is a clear signal of a growing spin glass susceptibility. Less clear is whether this quantity actually diverges at $`T>0`$ and $`B>0`$: fits are compatible with such a divergence, but it is difficult to conclude that there is a finite temperature transition. Measurements of higher order cumulants of $`P(q)`$ are very disappointing: the cumulants do not cross as they do in zero field, there is no clear pattern in the data, and in fact there is no sensible way to extrapolate the data to larger sizes. Furthermore, $`P(q)`$ has a long tail at $`q<0`$ that cannot be there in the thermodynamic limit, and there is no hint yet of a delta function peak at $`q=q_{\text{min}}`$. The situation in $`d=3`$ is even more ambiguous because the evidence for a diverging spin glass susceptibility is much weaker. Nevertheless, since simulations of the Sherrington-Kirkpatrick model run into similar difficulties, one need not conclude that such results disfavor a RSB scenario in finite dimensions. But it is also fair to say that there is today no substantial evidence via equilibrium measurements for an Almeida-Thouless transition line in $`d=3`$. Part of the difficulty stems from the nearby critical point ($`T_c`$, $`B=0`$) that leads to severe finite size effects; this may explain the non-crossing of the Binder cumulants for different size lattices. Staying away from that critical point requires lowering the temperature, leading to insurmountable difficulties for thermalizing the lattices. Our approach bypasses this problem by taking the zero temperature limit and finding ground states. Doing so, we found that the finite size effects for the mean field model were very small, as shown in figure 3 of our paper. A comparison to what occurs in simulations at finite temperature of the Sherrington-Kirkpatrick model suggests that the ($`T_c`$, $`B=0`$) critical point is quite far away from where we work. Extrapolating this to the Edwards-Anderson model case, we are led to conclude that the crossing points of the curves of figure 1 in our paper simply converge to $`B=B_c=0`$. We tried to substantiate this hypothesis by finite size scaling (figure 2 in our paper). If on the other hand one insists on having a critical value $`B_c>0`$ of the field, we are led to ask whether the curves $`r(N,B,0.15)/\sqrt{N}`$ vs $`B`$ superpose at large $`N`$ on a curve extending to $`B>0`$. Our data is displayed in figure 1. Judging from this figure, we would probably need to work with lattices larger than $`20^3`$ before such behavior could be seen. This may be feasible in the not so distant future, but much remains to be done.

J. Houdayer and O. C. Martin
LPTMS, Université Paris-Sud, F-91405 Orsay, France

July 22, 1999

PACS numbers: 75.10.Nr, 64.60.Cn
# Longitudinal magnetic excitations in classical spin systems

## Abstract

Using spin dynamics simulations we predict the splitting of the longitudinal spin wave peak in all antiferromagnets with single site anisotropy into two peaks separated by twice the energy gap at the Brillouin zone center. This phenomenon has yet to be observed experimentally but can be easily investigated through neutron scattering experiments on MnF<sub>2</sub> and FeF<sub>2</sub>. We have also determined that for all classical Heisenberg models the longitudinal propagative excitations are entirely multiple spin-wave in nature.

The mechanism for longitudinal excitations in high spin magnets with weak to moderate anisotropy, like MnF<sub>2</sub>, FeF<sub>2</sub>, RbMnF<sub>3</sub>, EuO, and EuS, is not completely understood. There are conflicting theoretical predictions and experimental results of limited resolution; however, the spin dynamics simulation technique is able to analyze both the transverse and longitudinal components of the dynamic structure factor for simple classical Heisenberg models. This is true in both the hydrodynamic and critical temperature regimes and, unlike mode coupling theory, the accuracy of our results can be improved continuously through the use of more computer time. Indeed, using high speed supercomputers we have already achieved considerably higher precision than existing experimental results. The above materials all have spin values ($`S\ge 2`$) which are large enough to be effectively described by the classical limit, $`S\to \mathrm{\infty }`$, and bi-linear exchange interactions between nearest, and in some cases second neighbor atoms on simple lattice structures. RbMnF<sub>3</sub> is an SC antiferromagnet, MnF<sub>2</sub> and FeF<sub>2</sub> are BCC anisotropic antiferromagnets with weak and moderate anisotropy respectively, and EuO and EuS are ferromagnets. The degree of anisotropy in EuO, EuS, and RbMnF<sub>3</sub> is negligible. While for EuO and EuS, and to a lesser extent MnF<sub>2</sub> and FeF<sub>2</sub>, the second neighbor interactions are not negligible, a qualitative understanding of the magnetic dynamics can still be obtained through a model with only nearest neighbor interactions. We performed our simulation on the isotropic SC Heisenberg magnet with both ferro- and antiferromagnetic bi-linear interactions and the anisotropic BCC Heisenberg antiferromagnet. The Hamiltonian for our model is given by
$$\mathcal{H}=-J\sum_{\langle \mathbf{r}\mathbf{r}^{\prime}\rangle }\mathbf{S}_{\mathbf{r}}\cdot \mathbf{S}_{\mathbf{r}^{\prime}}-D\sum_{\mathbf{r}}(S_{\mathbf{r}}^z)^2$$ (1)
where $`\mathbf{S}_{\mathbf{r}}`$ is a three-dimensional classical spin of unit length, $`\langle \mathbf{r}\mathbf{r}^{\prime}\rangle `$ is a nearest neighbor pair, and $`D`$ is the uniaxial single-site anisotropy term. We determined the dynamic structure factor in three high-symmetry reciprocal lattice directions. For the antiferromagnetic case we have made the transformation $`q\to q+Q`$, where $`Q`$ is the Brillouin zone boundary in the $`[111]`$ direction. We have used the spin dynamics simulation technique, which has been developed in previous work , to calculate the dynamic structure factor. The spin dynamics simulation technique involves the creation of an equilibrium distribution of initial configurations using the Monte Carlo (MC) technique, which are then precessed through constant energy dynamics to yield the space-time correlation function from each configuration. These are averaged together and Fourier transformed to obtain a result for the dynamic structure factor. By including more initial configurations the accuracy can be improved indefinitely.
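The correlation-function pipeline just described can be summarized in a few lines. The snippet below is a minimal sketch (the trajectory array, its sampling interval, and the normalization convention are our own assumptions); it obtains $`S(q,\omega )`$ for one spin component from a stored trajectory via space and time Fourier transforms, using the Wiener-Khinchin relation between the power spectrum and the time-displaced correlation function.

```python
import numpy as np

def dynamic_structure_factor(S_traj, q_index, comp):
    """S_traj: array of shape (n_t, L, 3) holding one row/chain of spins,
    sampled at uniform time intervals from the constant-energy dynamics.
    Returns S(q, omega) for one momentum index and spin component."""
    L = S_traj.shape[1]
    # spatial Fourier transform of the chosen component at every time step
    Sq_t = np.fft.fft(S_traj[:, :, comp], axis=1)[:, q_index] / np.sqrt(L)
    # subtract the static part (relevant mainly for the longitudinal component)
    Sq_t = Sq_t - Sq_t.mean()
    # Wiener-Khinchin: the power spectrum of S_q(t) equals the Fourier
    # transform of the time-displaced correlation <S_q(t) S_-q(0)>
    return np.abs(np.fft.fft(Sq_t))**2 / len(Sq_t)
```

In practice this estimate is averaged over the ensemble of independent MC-generated initial configurations, exactly as described above; the frequency resolution is set by the total integration time, which is why the finite time cutoff mentioned below matters.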
We have used periodic boundary conditions and studied lattices of linear sizes of $`L=12`$ and $`L=24`$. The critical temperatures were determined using the fourth order cumulant crossing technique for the anisotropic systems, and $`T_c`$ is already accurately known for the isotropic Heisenberg model . Anisotropy value $`D=0.0591`$ was used for MnF<sub>2</sub> to match the experimentally determined degree of anisotropy. For all these models we performed the simulation at temperature $`T=0.5T_c`$, low enough to be completely outside the critical regime but not too low for the MC simulation to produce an equilibrium distribution of configurations. We also studied $`T=0.8T_c`$ and $`T=0.9T_c`$ for the isotropic ferromagnet to investigate the approach to the critical regime. The results from the higher temperature simulations were convoluted with a Gaussian resolution function of width $`\delta _\omega `$ to minimize effects due to the finite time cutoff. For the isotropic case we used a distribution of $`1000`$ initial configurations for $`L=12`$ and $`200`$ configurations for $`L=24`$, and for the anisotropic case we used $`6000`$ configurations for $`L=12`$ and $`400`$ configurations for $`L=24`$. A smaller number of configurations was used for $`L=24`$ due to limits of computer time. For the longitudinal component of the dynamic structure factor in the ferromagnetic case we observed many excitation peaks, as seen in Fig. 1, however a different set is present for the $`L=12`$ and $`L=24`$ lattice sizes at the same q value. We conjecture that these excitations are two-spin-wave peaks since due to finite lattice size effects the frequencies of the peaks will be limited to certain values which will be different for different lattice sizes. In order to test this hypothesis we need to be able to predict the expected positions of the two-spin-wave peaks. This requires that we obtain an approximate estimate for the general form of the dispersion curve at the temperature at which the simulation is performed, i. e. $`\omega (𝐪)`$ at all points on the reciprocal lattice. The spin wave stiffness coefficient, $`D(T)`$, is determined from the low $`q`$, low temperature limit of the dispersion curve where $`\omega =D(T)q^2`$ for the ferromagnetic case and $`\omega =D(T)q`$ for the antiferromagnetic case. We assume that this holds above the low temperature limit, and the dispersion curve will be given by the linear spin-wave dispersion curve, $`\omega (𝐪)`$ at $`T=0`$, multiplied by the factor $`D(T)/D(0)`$. At $`T=0.5T_c`$ the prediction diverges from the real result only slightly in the high $`q`$ region where the prediction is at a lower $`\omega `$ value than the actual result. With increasing temperature our approximation of the form of the dispersion curve becomes less accurate. Now that we have determined that the linear spin-wave result multiplied by the factor $`D(T)/D(0)`$ is a good approximation to the dispersion curve at $`T=0.5T_c`$, we can estimate the spin-wave frequency over all $`𝐪`$ values, not just those in the reciprocal lattice directions we have measured. All two-spin-wave creation peaks will thus be at frequency $$\omega _{ij}^+(𝐪_𝐢\pm 𝐪_𝐣)=\omega (𝐪_𝐢)+\omega (𝐪_𝐣)$$ (2) and the spin-wave annihilation peaks will thus be at frequency $$\omega _{ij}^{}(𝐪_𝐢\pm 𝐪_𝐣)=\omega (𝐪_𝐢)\omega (𝐪_𝐣)$$ (3) where $`𝐪_𝐢`$ and $`𝐪_𝐣`$ are the wave vectors of the two spin waves which comprise the two-spin-wave excitation. As shown in Fig. 
1, for the isotropic ferromagnet we see a near-perfect match of peaks in $`S(𝐪,\omega )`$ and the predicted positions of the two spin-wave annihilation peaks, clearly indicating that the excitations in the longitudinal component are due to two-spin-wave annihilation. For the anisotropic antiferromagnet the approximation of the dispersion curve from a measurement of $`D(T)`$ is inaccurate, since the actual dispersion curve does not follow the functional form of the zero temperature dispersion curve. Instead we identified a set of two-spin-wave annihilation and creation peaks which involve spin-waves exclusively in the reciprocal lattice directions which we measured. Plotting the expected frequencies of these two-spin-wave peaks, we see a good match between the longitudinal spin-waves and the expected values for both the annihilation and creation peaks. This result for weak anisotropy (MnF<sub>2</sub>) is shown in Fig. 2. Note that no trace of a peak is seen at the location of the single spin-wave peak in the transverse component. We see, as expected, the presence of both creation and annihilation spin-wave peaks for the isotropic antiferromagnet as well. The most intense two-spin-wave peaks are those for which one of the constituent single spin-waves has the lowest $`𝐪`$. This is to be expected since the lower-$`𝐪`$ single spin-waves are more intense themselves. As the temperature rises the two-spin-wave peaks broaden. For the antiferromagnetic case, where the longitudinal and transverse components cannot be separated, as $`T`$ approaches $`T_c`$ the two-spin-wave peaks disappear into the tails of the single spin-wave peak and the diffusive central peak. For the ferromagnetic case the two-spin-wave peaks do not disappear entirely: the two-spin-wave peak corresponding to the lowest-$`𝐪`$ spin-waves remains, and the other two-spin-wave peaks broaden to disappear into its tail.

A previous study by Chen et al. misidentified the peak in the longitudinal component as a single spin-wave excitation. This happened because they were only able to look in the \[100\] lattice direction, for which the dominant two-spin-wave is at the same frequency as the single spin-wave peak for any given $`𝐪`$. Since for ferromagnets only spin-wave annihilation peaks are present, the dominant two-spin-wave process found in the \[100\] direction is \[q,0,1\]-\[0,0,-1\]. Since in the low-$`𝐪`$ limit $`\omega \propto q^2`$, $`\omega ([q,0,0])\approx \omega ([q,0,1])-\omega ([0,0,1])`$. This is, however, not the case for the other measured directions. In Fig. 3 we show the longitudinal and transverse components in two of the measured lattice directions at temperatures approaching $`T_c`$. Note that the longitudinal and transverse spin wave peaks are at the same frequency in the \[100\] direction but not in the other. The existence of a discrete set of two-spin-wave peaks is an artifact of the finite size of the lattice we use in our simulation. If one were to measure the longitudinal component of the dynamic structure factor for a real crystal, i.e., effectively an infinite lattice of magnetic moments, the spectrum of possible two-spin-wave peaks would be continuous. The longitudinal component of the dynamic structure factor for the isotropic antiferromagnet has been measured experimentally by Cox et al. Taking measurements along a single lattice direction, they found a diffusive central peak and a propagative spin-wave peak at the same frequency as the transverse single spin-wave peak.
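The two-spin-wave bookkeeping of Eqs. (2) and (3) is easy to reproduce. The sketch below (ours, with an assumed stiffness ratio and a convention-dependent prefactor in the dispersion) enumerates the discrete, lattice-size-dependent set of predicted peak frequencies for the SC ferromagnet — exactly the property used above to identify the peaks as two-spin-wave excitations.

```python
import itertools
import numpy as np

# On a finite L^3 lattice the allowed wavevectors are q_n = (2*pi/L)*n, so
# the possible two-spin-wave frequencies form a discrete, L-dependent set,
# which is why the peak positions differ between L = 12 and L = 24. The
# dispersion is the linear spin-wave form for the SC ferromagnet (prefactor
# convention dependent), scaled by an assumed ratio D(T)/D(0).
L = 12
scale = 0.7                 # placeholder for D(T)/D(0) at the simulation T

def w(q):
    """Approximate finite-T dispersion, in units of the exchange J."""
    return scale * 2.0 * (3.0 - np.cos(q[0]) - np.cos(q[1]) - np.cos(q[2]))

def two_spin_wave_freqs(q_target, n_max=2):
    """Frequencies w(q_i) +/- w(q_j) for all pairs with q_i +/- q_j = q_target
    (mod 2*pi); n_max < L/2 truncates the enumeration for brevity."""
    qs = [2.0 * np.pi * np.array(n) / L
          for n in itertools.product(range(-n_max, n_max + 1), repeat=3)]
    creation, annihilation = set(), set()
    for qi, qj in itertools.product(qs, repeat=2):
        for sgn in (1.0, -1.0):
            d = (qi + sgn * qj - q_target) / (2.0 * np.pi)
            if np.allclose(d, np.round(d)):              # equality mod 2*pi
                creation.add(round(w(qi) + w(qj), 6))         # Eq. (2)
                annihilation.add(round(abs(w(qi) - w(qj)), 6))  # Eq. (3)
    return sorted(creation), sorted(annihilation)

q = 2.0 * np.pi * np.array([1, 0, 0]) / L       # lowest [100] wavevector
cre, ann = two_spin_wave_freqs(q)
print("lowest predicted annihilation peaks:", ann[:6])
print("lowest predicted creation peaks:   ", cre[:6])
```

Per the text, for the ferromagnet only the difference (annihilation) combinations actually appear in $`S(𝐪,\omega )`$; rerunning with `L = 24` shows how the predicted set shifts with lattice size.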
The longitudinal component of the dynamic structure factor for the isotropic ferromagnet EuO has been found experimentally to have a propagative spin-wave and no diffusive central peak. For both the ferro- and antiferromagnetic cases, the longitudinal excitations at the single spin-wave peak frequency can be explained in terms of two-spin-wave peaks as follows. Since spin-wave intensity increases with decreasing $`q`$, the most intense two-spin-waves will be those which are comprised of one spin-wave with a very small $`q`$ and another spin-wave with a $`𝐪`$ very close to the resultant two-spin-wave $`𝐪`$. If we look at the limit of infinite lattice size, i.e., the real system, for the isotropic case a spin-wave with extremely small $`q`$ will have a negligible frequency. Thus the intensity of the two-spin-wave creation and annihilation peaks will have a maximum at the single spin-wave frequency. Even though only annihilation two-spin-wave peaks are present for the ferromagnetic case, one should still see this effect in both the ferro- and antiferromagnetic cases, and this is what is seen in the experiments. We see no evidence of a diffusive central spin-wave peak in the longitudinal component of the dynamic structure factor for the isotropic ferromagnet, in agreement with the experimental results of Dietrich et al. and the theoretical result of Villain, but in disagreement with the theoretical predictions of Vaks et al.

When we apply this same reasoning to the anisotropic antiferromagnet, which as shown in Fig. 2 also displays the same two-spin-wave peak behavior, we are left with an extremely intriguing result which can be measured experimentally. A spin-wave at very small $`𝐪`$ will no longer have a negligible frequency, but instead the frequency of the energy gap at the Brillouin zone center. If our hypothesis about the origin of excitations in the longitudinal component of the dynamic structure factor is correct, then for an anisotropic antiferromagnet one will observe an apparent splitting in the spin-wave peak in the longitudinal component. In an infinite lattice there will be a peak due to two-spin-wave creation which is shifted upwards from the single spin-wave frequency by an amount equal to the energy gap, and a spin-wave annihilation peak shifted downwards by the same frequency.

The theoretical explanation as to why the longitudinal component of the dynamic structure factor is made up of two-spin-wave peaks, and why only two-spin-wave annihilation peaks are present for the ferromagnetic case while both two-spin-wave annihilation and creation peaks are present for the antiferromagnetic case, is as follows. If we express the dynamics in the quantum mechanical formalism, the spins, as they appear in the Hamiltonian, are expressed in terms of the operators $`𝐒_i^{+}`$, $`𝐒_i^{-}`$, and $`𝐒_i^z`$, where

$$𝐒_i^{+}=𝐒_i^x+i𝐒_i^y$$ (4)

$$𝐒_i^{-}=𝐒_i^x-i𝐒_i^y.$$ (5)

$`𝐒_i^{+}`$ and $`𝐒_i^{-}`$ can be expressed in terms of ladder operators, $`a_i`$ and $`a_i^{\dagger }`$, which raise or lower $`𝐒_i^z`$ by one quantum. In the linear approximation

$$𝐒_i^{+}=(2𝐒)^{1/2}a_i^{\dagger }$$ (6)

$$𝐒_i^{-}=(2𝐒)^{1/2}a_i.$$ (7)

If we take the approximation for $`𝐒_i^z`$ to one order higher than $`𝐒_i^z=𝐒`$, then

$$𝐒_i^z=𝐒-a_i^{\dagger }a_i=𝐒-\frac{1}{2𝐒}𝐒_i^{+}𝐒_i^{-}.$$ (8)

For the classical limit this corresponds to an infinitesimal raising and lowering of the $`z`$ component of the spin, which is a longitudinal spin-wave.
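To spell out the step from Eq. (8) to the two-spin-wave frequencies (our paraphrase of the standard argument, with normalization factors suppressed): expanding the $`a_i`$ in spin-wave modes and using the harmonic time dependence $`a_𝐪(t)=a_𝐪e^{-i\omega (𝐪)t}`$, the spatial Fourier transform of $`𝐒_i^z`$ becomes

$$S_{\mathbf{q}}^{z}(t)\propto \sum_{\mathbf{q}_1-\mathbf{q}_2=\mathbf{q}}a_{\mathbf{q}_1}^{\dagger }\,a_{\mathbf{q}_2}\,e^{\,i[\omega (\mathbf{q}_1)-\omega (\mathbf{q}_2)]t},$$

so the longitudinal correlation function $`S_𝐪^z(t)S_{-𝐪}^z(0)`$ oscillates at the difference frequencies $`\omega (𝐪_\mathrm{1})-\omega (𝐪_\mathrm{2})`$ of Eq. (3).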
These spin-waves will be at the frequencies corresponding to the difference between the frequencies of single transverse spin-waves, since they result from creation followed by annihilation of a spin-wave. Thus for the ferromagnetic case all two-spin-wave excitations will result from these annihilation processes. Unlike the ferromagnetic ground state, the antiferromagnetic ground state is not an eigenstate of the Heisenberg Hamiltonian. As a result, the $`a_i`$ and $`a_i^{\dagger }`$ are replaced through a Bogoliubov transformation by new operators which are a linear combination of the creation and annihilation operators but which only connect excitations on the same sublattice. Consequently, both creation and annihilation two-spin-wave excitations exist.

In conclusion, for magnets which can be described by a classical spin model, the longitudinal propagative excitations are made up of two-spin-wave peaks. In agreement with our theoretical picture, only annihilation two-spin-wave peaks were present for the ferromagnetic case, but both annihilation and creation two-spin-wave peaks are present for both the isotropic and anisotropic antiferromagnets. As the temperature approached the critical regime, the longitudinal two-spin-wave peaks broadened and converged into a single peak at the frequency of the dominant lowest-$`q`$ two-spin-wave peak. In an isotropic lattice of infinite size, i.e., a real crystal, the two-spin-waves will result in a peak at the single spin-wave frequency for either ferromagnetic or antiferromagnetic isotropic systems, in agreement with experimental results. For the anisotropic antiferromagnet both creation and annihilation two-spin-wave peaks exist, and the longitudinal component of the anisotropic antiferromagnet should show two peaks: a two-spin-wave annihilation peak at the single spin-wave peak frequency minus the energy gap frequency, and a two-spin-wave creation peak at the single spin-wave peak frequency plus the energy gap frequency. This result indicates the presence of a new form of excitation behavior in magnetic materials which can be directly tested experimentally.

We thank Shan-Ho Tsai, Roderich Moessner, Werner Schweika, and Michael Krech for helpful suggestions and stimulating discussions. This research was supported in part by NSF grant #DMR9727714.
# The Narrow-Line Regions of LINERs as Resolved with the Hubble Space Telescope<sup>1</sup>

<sup>1</sup>Based on observations with the Hubble Space Telescope, which is operated by AURA, Inc., under NASA contract NAS 5-26555.

## 1 Introduction

Nuclear activity in galaxies, which finds its most dramatic expression in quasars, also appears in systems with much lower luminosities. Many galactic nuclei exhibit broad H$`\alpha `$ emission lines which, while much weaker, are nonetheless qualitatively similar to those observed in quasars (Stauffer 1982; Keel 1983b; Filippenko & Sargent 1985; Ho et al. 1997b). A significant fraction of emission-line objects, which may be physically related to AGNs, are galaxies containing low-ionization nuclear emission-line regions (LINERs; Heckman 1980; see the reviews included in Eracleous et al. 1996). LINERs, present in over 30% of all galaxies and in 60% of Sa–Sab spirals with $`B\le 12.5`$ mag (Ho, Filippenko, & Sargent 1997a), could thus represent the low-luminosity end of the AGN phenomenon. In fact, about 15%–25% of LINERs have a broad component in the H$`\alpha `$ line — the “type 1.9” LINERs — similar to the fraction in Seyferts (Ho et al. 1997b). Recently, Barth, Filippenko & Moran (1999a, b) have shown that some LINERs have weakly polarized broad emission lines, analogous to the polarized broad lines from the “hidden broad-line region” of some Seyfert 2 galaxies (e.g., Antonucci & Miller 1985). However, unlike Seyfert nuclei and QSOs, whose enormous luminosities and rapid variability argue for a nonstellar energy source, the luminosities of LINERs are sufficiently low that one cannot unambiguously associate them with AGNs of higher luminosities. For example, stellar energy sources are plausible both on energetic and spectroscopic grounds (e.g., Terlevich & Melnick 1985; Filippenko & Terlevich 1992; Shields 1992; Maoz et al. 1998). The potential role of LINERs in constituting the faint end of the AGN luminosity function is important for understanding the nature of AGNs, their evolution, and their contribution to the X-ray background.

To address some of the above issues, Maoz et al. (1995) obtained ultraviolet (UV; 2300 Å) images of an unbiased selection from a complete sample of nearby galaxies with the Hubble Space Telescope (HST) Faint Object Camera (FOC). They discovered that 6 out of 25 LINERs in the sample contain unresolved ($`<0\stackrel{}{\mathrm{.}}1`$, or $`<12`$ pc) nuclear UV emission sources. A similar result was found by Barth et al. (1998), using UV images taken with the Wide-Field Planetary Camera 2 (WFPC2) on HST. The extreme-UV emission from such sources may provide some or all of the energy required to produce the nuclear emission lines by photoionization. More specifically, Maoz et al. (1998) showed that in three out of seven UV-bright LINERs, the extreme-UV flux, based on a reasonable extrapolation from the UV, is sufficient to account for the observed H$`\alpha `$ flux. In the other four objects, the extreme-UV flux is deficient by a factor of a few, but these four objects have X-ray/UV flux ratios 100 times larger than the previous three, which suggests that there is much more flux in the extreme-UV than a simple extrapolation from the UV would indicate. This suggestion is also supported by the spectral energy distributions of LINERs and low-luminosity AGNs presented by Ho (1999). Any mild foreground extinction would alleviate the deficit even further.
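The kind of photon-budget comparison invoked here can be made concrete with a short script. The sketch below is ours, with entirely hypothetical input fluxes and an assumed power-law slope — it is not a reproduction of the Maoz et al. (1998) calculation. It uses the standard case-B result that roughly 45% of hydrogen recombinations yield an H$`\alpha `$ photon; since both fluxes are measured at Earth, the (unknown) distance cancels in the ratio.

```python
import numpy as np

H_PLANCK = 6.626e-27           # erg s
C_LIGHT = 2.998e10             # cm s^-1
nu_lyman = 3.29e15             # Hz (13.6 eV, the Lyman limit)
nu_2300 = C_LIGHT / 2300e-8    # Hz at 2300 A
nu_halpha = C_LIGHT / 6563e-8  # Hz at 6563 A

# Hypothetical inputs -- NOT measurements from the papers cited:
f_nu_2300 = 1e-26   # erg s^-1 cm^-2 Hz^-1 observed at 2300 A
f_halpha = 1e-13    # erg s^-1 cm^-2 in the H-alpha line
alpha = -1.0        # assumed slope of the extrapolation, f_nu ~ nu**alpha

# Ionizing photon flux from extrapolating f_nu above the Lyman limit:
# integral of f_nu/(h nu) dnu from nu_lyman to infinity (requires alpha < 0).
q_avail = f_nu_2300 * (nu_lyman / nu_2300) ** alpha / (H_PLANCK * -alpha)

# Case B: ~45% of recombinations produce an H-alpha photon, so powering the
# observed line requires an ionizing photon flux of
q_needed = f_halpha / (H_PLANCK * nu_halpha) / 0.45

print(f"available/required ionizing photons: {q_avail / q_needed:.1f}")
```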
It is thus plausible to conclude that the line-emitting gas of UV-bright LINERs is powered by photoionization. The 6 UV-bright and 19 UV-dark LINERs studied by Maoz et al. (1995) are otherwise similar in terms of spectral line ratios and overall emission-line luminosities. A nuclear UV source may therefore exist in all LINERs, but may be obscured by dust in 75% of the objects. Alternatively, Eracleous, Livio, & Binette (1995) have suggested that the emission lines are produced in response to a variable continuum that is in its “off” state with a 25% duty cycle (due, perhaps, to sporadic tidal disruption and accretion of individual stars by a central black hole). Another possibility is that the emission lines in UV-dark LINERs are produced in shocked, rather than photoionized, gas (Koski & Osterbrock 1976; Fosbury et al. 1978; Heckman 1980; Dopita & Sutherland 1995), thus accounting for the absence of a central, point-like UV source. Moreover, the UV-bright LINERs are not necessarily AGNs, as the UV sources could be hot star clusters. Indeed, UV spectroscopy with the HST has shown that, while some LINERs may be AGNs (Ho et al. 1996; Barth et al. 1996), the UV emission in other UV-bright LINERs is clearly dominated by massive stars (Maoz et al. 1998). Interestingly though, there is not a clear correspondence between the existence of a point-like nuclear UV source and the detection of broad H$`\alpha `$ wings in the spectrum, as is the case in most Seyfert 1s.

An independent source of information comes from the X-ray band, where the morphologies and spectra of LINERs suggest that some of them could harbor low-luminosity AGNs. Published and archival X-ray images of LINERs with high angular resolution (5″–8″), taken with the Einstein and ROSAT HRIs (e.g., Fabbiano, Kim, & Trinchieri 1992; Koratkar et al. 1995), show a wide variety of X-ray morphologies: point sources, with or without a surrounding halo, and diffuse sources, which do not seem to be related to the UV morphology. The 0.5–10 keV spectra of LINERs obtained with ASCA can generally be fitted by a linear combination of a Raymond-Smith plasma model ($`kT\approx `$ 0.6–0.8 keV) and an absorbed (column densities in excess of $`10^{21}`$ cm<sup>-2</sup>) hard component. The soft, thermal plasma emission is usually attributed to circumnuclear hot gas. In the case of LINER 1.9s, the hard component is well fitted by a power law with photon indices $`\mathrm{\Gamma }\approx `$ 1.7–2.0 (e.g., Serlemitsos, Ptak, & Yaqoob 1996; Ptak et al. 1999; Awaki 1999; Terashima 1999; Ho et al. 1999), as seen in luminous Seyfert 1s (Nandra et al. 1997), and the emission has a compact, spatially unresolved morphology within the coarse angular resolution of ASCA (FWHM $`\approx 3^{\prime }`$). Where higher resolution ROSAT HRI images are available, a central compact core is seen in the soft X-rays as well. These characteristics strengthen the case that LINER 1.9s are genuine AGNs. The situation for LINER 2s is more complicated. Terashima et al. (1999) have recently analyzed ASCA observations of a small sample of LINER 2s, and they find that the hard component, while consistent with a power law with $`\mathrm{\Gamma }\approx 2`$, can also be represented by a thermal bremsstrahlung model with a temperature of several keV. Moreover, the emission in the hard band is seen to be extended on scales of several kpc, consistent with a population of discrete sources such as low-mass X-ray binaries. Terashima et al.
also show that, based on an extrapolation of their absorption-corrected X-ray fluxes into the UV, there is perhaps insufficient power to drive the luminosities of the optical emission lines. These findings suggest that either LINER 2s do not contain an AGN or that the AGN component, if present, must be heavily obscured by matter with a column density much greater than $`10^{23}`$ cm<sup>-2</sup>.

Another important tool for studying AGNs, which we employ here, is narrowband, emission-line imaging of the nuclear regions. Narrowband imaging of Seyfert 1 and 2 nuclei has revealed, in some cases, striking ionization cones emerging from the active nuclei and well aligned with the axes of the radio jets (Haniff, Wilson, & Ward 1988; Pogge 1989a; Wilson & Tsvetanov 1995). This technique produces spectacular results when combined with the angular resolution of HST (e.g., NGC 5728, Wilson et al. 1993). The ionization structure of the narrow-line region gas, as revealed by such studies, gives complementary information to that provided by single-aperture spectra. Ground-based narrowband imaging of LINERs by Keel (1983a) and by Pogge (1989b) has shown that they are distinct from Seyferts in their circumnuclear emission, at least when probed on the same ($`\sim 1`$″) angular scale. At these scales, some LINERs have faint diffuse emission, as opposed to the linear structures in many Seyferts, and the emission is usually dominated by a compact, marginally resolved nuclear region. Resolving the nuclear structures of LINERs can provide further clues to their relation to AGNs. The small scales and faintness of these structures relative to the bright host-galaxy background mean that the capabilities of HST are needed for this task. To this end we have carried out a study of the narrow-line regions of LINERs using narrowband \[O III\] $`\lambda 5007`$ and H$`\alpha `$+\[N II\] WFPC2 images of 14 objects. The results of our study are the subject of this paper. In §2 we describe the observations and the data reduction. In §3 we present the final images and measurements and we discuss them in §4. Finally, in §5 we summarize the results and present our conclusions. Throughout this paper we assume a Hubble constant of $`H_0=75`$ km s<sup>-1</sup> Mpc<sup>-1</sup>.

## 2 Observations and Reduction

The galaxies we have chosen for narrowband imaging with HST represent the two classes of LINERs, UV-bright and UV-dark, that have been identified in previous HST UV observations (Maoz et al. 1995; Barth et al. 1998). They were selected on the basis of their bright, high-contrast H$`\alpha `$ lines, as determined from spectra and narrowband images obtained from the ground. We have supplemented these data with archival images of eight other galaxies, classified as LINERs by Ho et al. (1997a), and observed with WFPC2 using the same narrowband filters. In Table 1 we list the galaxies included in our collection, and we summarize their basic properties. We emphasize that these galaxies do not constitute a statistically well-defined sample, but rather a random selection of LINERs with relatively strong emission lines. At any rate, this is the first time that the spatial structure of the narrow-line region is studied at HST resolution for a sizable number of such objects. The galaxies were imaged with WFPC2, generally with the nucleus positioned on the PC CCD, which has a scale of $`0\stackrel{}{\mathrm{.}}0455`$ pixel<sup>-1</sup>.
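For reference, the conversion between this angular scale and the projected physical scales quoted throughout the paper is just the small-angle relation; a trivial sketch (the 17 Mpc distance below is purely illustrative, not a value from Table 1):

```python
import numpy as np

ARCSEC = np.pi / (180.0 * 3600.0)   # radians per arcsecond

def pc_per_arcsec(d_mpc):
    """Projected scale in pc per arcsec at distance d, small-angle limit."""
    return d_mpc * 1e6 * ARCSEC

# PC CCD: 0.0455 arcsec per pixel; at an illustrative distance of 17 Mpc:
print(pc_per_arcsec(17.0) * 0.0455)   # ~3.7 pc per PC pixel
```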
In the case of NGC 3031 the nucleus was positioned on the WF3 CCD, whose scale is $`0\stackrel{}{\mathrm{.}}10`$ pixel<sup>-1</sup>. Table 2 summarizes the observations of each galaxy, gives the filters used, their corresponding exposure times, and the observing programs under which the observations were carried out. All the galaxies were observed through either the F656N or the F658N filters, in order to sample the H$`\alpha `$+\[N II\] complex at the proper redshift. Images through the F502N filter, which covers the \[O III\]$`\lambda 5007`$ line, exist for only five of these galaxies with sufficient integration time to be useful. Broad- and medium-band images of each galaxy were also obtained, as detailed in Table 2, and used for continuum subtraction and derivation of color maps. For NGC 1052, whose narrowband image was obtained from the HST archive, no broad-band images are available. However, the extended line emission in this object is strong enough that it can be seen even without continuum subtraction. In fact, this is the only object in which, after some additional image processing, we find an unambiguous ionization cone, analogous to those seen in Seyferts (see §3).

All of the WFPC2 images used in this study were processed by the standard STScI OPUS pipeline (described by Biretta et al. 1996), and required only minimal post-processing to combine multiple images, correct for saturated pixels, and remove cosmic rays. We present only the WFPC2 PC1 detector images since the nuclei of the galaxies were centered on this CCD. The exception is NGC 3031 (M81), for which the archival images had the nucleus centered on the WF3 detector. For the targets in our own observing program (GO-6436), continuum images were acquired as two pairs of short and long integrations. If the object was particularly bright, an additional 6 s integration was also obtained. The short-exposure images were used to correct for saturated pixels in the long-exposure images. Our narrowband images were acquired as two or three long integrations, since saturation was not expected to be a problem. For pairs or triplets of images of the same integration time, we combined the images using a statistical differencing technique implemented as an XVista command script (Pogge & Martini 1999). This technique is as follows. The difference image, formed by subtracting one image in a pair from the other, consists primarily of positive and negative cosmic-ray hits, as the galaxy, foreground stars, and background all cancel to within the noise. All pixels within $`\pm 5\sigma `$ of the mean residual background level on the difference image are then set to zero (tagging them as unaffected by cosmic rays), and a pair of cosmic-ray templates are derived by separating the remaining positive and negative pixels. These templates are then subtracted from the original images, and the two cosmic-ray subtracted images are added together to form the final galaxy image. When three images are available, all pair-wise combinations are used to generate the templates. In all cases, the statistical differencing method produced superior cosmic-ray rejection compared to standard tasks (e.g., CRREJ in STSDAS), and it is computationally much faster. Archival data sets with pairs of images were processed in the same way. In a few cases, however, only single integrations were available, and the cosmic ray hits were removed manually, using the interactive TVZAP routine in XVista.
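The statistical differencing step just described translates almost directly into code. The following is our NumPy sketch of the procedure, not the original XVista script (whose exact background and sigma estimators may differ in detail):

```python
import numpy as np

def clean_pair(img_a, img_b, nsigma=5.0):
    """Statistical-differencing cosmic-ray rejection for an equal-exposure
    image pair, per the description above. Returns the co-added,
    cosmic-ray-subtracted image."""
    diff = img_a - img_b
    resid = diff - np.median(diff)          # residual about the background level
    sigma = resid.std()
    hits = np.abs(resid) > nsigma * sigma   # pixels flagged as cosmic-ray hits
    tmpl_a = np.where(hits & (resid > 0), diff, 0.0)    # hits in image A
    tmpl_b = np.where(hits & (resid < 0), -diff, 0.0)   # hits in image B
    return (img_a - tmpl_a) + (img_b - tmpl_b)

# Toy demonstration with synthetic cosmic-ray hits:
rng = np.random.default_rng(1)
a = rng.normal(100.0, 3.0, size=(64, 64)); a[10, 12] += 500.0
b = rng.normal(100.0, 3.0, size=(64, 64)); b[40, 7] += 300.0
cleaned = clean_pair(a, b)
print(cleaned[10, 12], cleaned[40, 7])      # both back near 2x the sky level
```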
When the archival image pairs had unequal integration times (e.g., for NGC 404, 500 s and 1200 s for the F656N filter), we scaled the long integration to the shorter one and applied the differencing method, followed by additional manual cleaning. The resulting image cleaning is not as thorough as with well-matched integration times, but it is still better than the other algorithms we tried. In all of our images, the mean intensity level of the background sky is negligible (a few counts at most). This was estimated by examining the outskirts of one of the WF frames without much galaxy light in it, and computing a modal sky level in reasonably clear regions. The combined on-band emission-line and off-band continuum images were converted to units of flux density per pixel, based on the May 1997 updated photometry values for each filter for the PC detector. Continuum-subtracted emission-line images were created by subtracting the associated continuum-band images. In a few cases, it was clear that our background estimate was in error (it left either positive or negative fields of pixels), and so we refined the continuum estimation and iterated. The final continuum-subtracted images were left in units of flux density per pixel. For several archival data sets, the continuum images had to be registered and/or rotated to match the narrowband images. This presented no problem, and standard XVista tools were used (the procedure is analogous to the one described in Pogge 1992).

Color maps were generated for all six of our GO program images by converting flux density per pixel into standard Johnson/Cousins magnitudes using the transformations derived by Holtzman et al. (1995), and then dividing the two images. For NGC 4192 and NGC 4569, we had both F547M and F791W image pairs from our own program as well as archival F555W and F814W images, so we could verify the conversion between these bands and estimates of the ($`V-I`$) colors. We were careful to register the original on-band and off-band images so that we could later directly compare our emission-line and color maps. We use these below to study the associations between the emission-line regions and the patches of dust and star clusters in the galaxies. For the LINERs for which we have only archival images, we could create ($`V-I`$) color maps for three galaxies (NGC 3998, NGC 4374, and NGC 4594). For four galaxies with F547M images and no corresponding red broad-band image (NGC 3031, 4486, 4036, and 4258), we were able to map the distribution of dust using an “unsharp masking” technique described by Pogge & Martini (2000). In brief, an unsharp mask for an image was created by smoothing the original F547M image with a model PSF image computed using TinyTim (Krist & Hook 1997). The (Image $`\otimes `$ PSF) convolution was carried out in the Fourier domain using an XVista command script. The original image was then divided by the smoothed image to form a normalized residual image in which dusty features appear as negative residuals, and emission or stars appear as positive residuals. Using this technique on F547M images of galaxies for which we have $`V-I`$ color maps shows that the normalized unsharp residual images can retrieve all of the dust structures seen in the color maps.

## 3 Results

### 3.1 Images and Measurements

Figures 1a–e show our reduced narrowband images and dust maps.
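Before examining the individual images, the unsharp-masking construction described in §2 can be sketched in a few lines of Python (ours, not the XVista script; a Gaussian kernel stands in for the TinyTim model PSF, and the Fourier-domain convolution used in the paper is equivalent for a kernel of this form):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def unsharp_residual(image, psf_sigma=2.0):
    """Normalized unsharp residual map: original divided by a PSF-smoothed
    copy. Dust lanes -> values < 1; stars and emission -> values > 1."""
    smoothed = gaussian_filter(image, sigma=psf_sigma)
    return image / smoothed

# Example: a flat field with one absorbing patch
img = np.full((64, 64), 100.0)
img[30:34, 30:34] = 60.0
resid = unsharp_residual(img)
print(resid.min() < 1.0)   # True: the "dusty" patch is a negative residual
```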
For each galaxy in Figure 1a–d we show the continuum-subtracted H$`\alpha `$+\[N II\] PC1 image on the left, and on the right show either the ($`V-I`$) color map, or an unsharp residual map of the F547M image if there was no second broadband filter image. In both the $`V-I`$ and the F547M unsharp mask images darker shades denote the regions of dust absorption. Each panel of these figures shows a 10″$`\times `$10″ segment of the image centered on the nucleus (oriented with North up and East to the left), and with a scale bar in the lower left corner of each emission-line image showing 100 pc projected at the galaxy’s distance (see Table 1). The contrast of the emission-line images is chosen to emphasize the faint circumnuclear emission regions.

Figure 1e shows our images of NGC 3031 which, unlike the others, is on the WF3 detector. Here we show H$`\alpha `$+\[N II\] emission on the left, and the unsharp residual map of the F547M filter image on the right for the central 30″ of this galaxy. The scale bar on the lower left shows 100 pc at the distance of NGC 3031 (Table 1). Figure 2 shows on the left the original F658N image (i.e., without continuum subtraction) of NGC 1052, with a normalized unsharp residual map of the same on the right. This residual map shows emission as bright and absorption (presumably dust) as dark. Each panel shows the inner 15″ of NGC 1052, and the scale bar indicates 100 pc. The axis of the VLA radio jet (Wrobel 1984) is shown as a dashed line.

Figures 3a and 3b show the continuum-subtracted \[O III\] $`\lambda 5007`$ images for the 5 galaxies for which these data are available. In Figure 3a, we pair \[O III\] $`\lambda 5007`$ emission-line images with “excitation maps” of the H$`\alpha `$+\[N II\]/\[O III\] $`\lambda 5007`$ ratio for NGC 4258, NGC 4579, and NGC 5005. Although noisy, these maps do not reveal any clear high-excitation knots with the exception of NGC 4258. Here we see relatively highly excited gas in a segment of the braided jet that lies to the north of the nucleus in our images (Cecil, Wilson, & Tully 1992; Cecil, Wilson, & DePree 1995; Cecil, Morse, & Veilleux 1995). Figure 3b shows only the continuum-subtracted \[O III\] images for the remaining two galaxies, NGC 4192 and NGC 4569. The excitation maps constructed for these galaxies are extremely noisy due to the low signal-to-noise ratio in the \[O III\] images, and only total fluxes in synthetic apertures can be measured with any confidence (see Table 3). Overall, the entire resolved emission-line regions of these LINERs seem to be in a low-ionization state.

We have measured the integrated emission-line fluxes through various apertures, separating the nuclear and circumnuclear contributions. These measurements are summarized in Table 3. Circular apertures were used, except in the cases of NGC 4192 and NGC 5005, where rectangular apertures were used to avoid strong dust lanes (in both) and H II regions (in NGC 4192). Since NGC 4192 and NGC 5005 have no discernible nuclei in their narrowband images, the nuclear fluxes were estimated in apertures centered using the brightness peaks in their F791W continuum images. Table 3 gives “band” fluxes, without an attempt to convert to emission in a particular line by correcting for the filter transmission of other lines in the bandpasses (i.e., \[O III\] $`\lambda `$4959 and \[N II\] $`\lambda \lambda `$6548, 6583). In the next section we describe the main features of the images of individual galaxies.
We refer to an object as being “UV-bright” if space-UV observations (generally the HST/FOC F220W images of Maoz et al. 1995) have revealed a bright compact nuclear UV source in the galaxy. We will base the optical spectral classification of these objects on Ho et al. (1997a), and follow their terminology, where a “LINER 2” is a LINER without detected broad H$`\alpha `$ wings, a “LINER 1.9” is a LINER that does have such weak broad wings, and a “transition object” is one whose optical narrow emission-line ratios are intermediate between those of a LINER and an H II nucleus. Data on these properties are also summarized in Table 1.

### 3.2 Individual Objects

### NGC 404

This is a UV-bright LINER 2, whose UV spectrum has a significant contribution from massive stars (Maoz et al. 1998). In the HST emission-line images, much of the nuclear emission appears as a hollow one-sided fan extending into filamentary wisps at distances of 5″ or more from the nucleus. These wisps are reminiscent of gaseous structures blown out by supernovae, which are expected, given that the spectrum of this object is dominated by hot stars. There is also one bright point source 0$`\stackrel{}{\mathrm{.}}`$16 north of the nucleus, possibly a planetary nebula or a compact H II region. It is not obviously associated with a secondary UV source seen in the FOC image of this galaxy. The nuclear region is dusty, but has a blue nucleus, suggesting the nucleus itself is unobscured. The distance to this galaxy is controversial, as discussed in detail by Wiklind & Henkel (1990). The distance we have adopted in Table 1 is Tully’s (1988) value of 2.4 Mpc (for $`H_0=75`$ km s<sup>-1</sup> Mpc<sup>-1</sup>), which was assigned based on NGC 404’s probable membership in his so-called “14+12” group. On the other hand, the CO observations of Wiklind & Henkel (1990) can only be reconciled with other observational data if the distance is 10 Mpc. The $`I`$-band (F814W) image shows the galaxy beginning to be resolved into stars (the brightest giants are apparently visible). If so, then our measurements favor the shorter distance.

### NGC 1052

This galaxy is often considered to be the prototypical LINER, with weak broad H$`\alpha `$ wings that give it a LINER 1.9 classification (Ho et al. 1997a). The broad wings have recently been shown by Barth et al. (1999a) to be preferentially polarized relative to the narrow lines and the continuum, suggesting the presence of a hidden broad-line region that is seen in scattered light. Recent HST observations (Allen, Koratkar, & Dopita 1999; Gabel et al. 1999) show the UV-bright nucleus has a UV–optical spectrum consisting of narrow lines on top of a featureless continuum. The archival H$`\alpha `$+\[N II\] F658N WFPC2 image presented here shows a collimated, conical structure emerging from a compact core. The high surface-brightness line emission is evident in the original image even though we do not have a broad-band image to perform continuum subtraction. The biconical nature of the structure is most clearly brought out in the normalized unsharp residual map of the image (Figure 2b), which reveals the rear (west) side of the cone. Together with M84 (see below), these are the only objects among the LINERs imaged which show a clear indication of a Seyfert-like emission-line cone. The cone’s axis is at position angle 96° with a full opening angle of about 70°.
This corresponds roughly to the axis of the radio lobes observed in this galaxy (Wrobel 1984), and is similar to the alignment generally found in Seyferts. Our result thus adds another AGN characteristic to this LINER. We also note that there are two faint knots of emission straddling the nucleus, about 5″ from it, along a position angle of 81°.

### NGC 3031 (M81)

Detailed modeling of the narrow- and broad-line spectrum of this object (Ho et al. 1996) clearly shows that the line-emitting gas has the low-ionization state expected of LINERs, even if the measured \[O III\]/H$`\beta `$ ratio technically places it in the Seyfert class (Ho et al. 1997a). The UV spectrum (Ho et al. 1996; Maoz et al. 1998) consists of broad, AGN-like, emission lines superposed on a featureless continuum. Devereux, Ford, & Jacoby (1997) have already presented the H$`\alpha `$+\[N II\] data shown here, and have also shown that the galaxy possesses a UV-bright nucleus. The H$`\alpha `$+\[N II\] emission comes mostly from a bright compact source, surrounded by symmetric, disk-like, diffuse emission (Figure 1e, left). The unsharp-residual processed image (Figure 1e, right) shows a spiral-like dust lane extending $`\sim `$12″ north of the nucleus. The major axis of the disk is at a position angle of 18° and has a minor-to-major axis ratio of 0.78. It extends up to about 5″ from the center, while faint, filamentary structure is visible out to 8″.

### NGC 3718

This is a UV-dark LINER 1.9. The emission in the HST narrowband images is dominated by a strong point source, surrounded by some diffuse circumnuclear H$`\alpha `$ emission. The diffuse emission is brighter on one side. The $`V-I`$ image shows the nucleus is clearly very dusty and likely obscured. This is not surprising given its very red optical spectrum (Ho et al. 1995).

### NGC 3998

This LINER 1.9 has been shown to be UV-bright by Fabbiano, Fassnacht, & Trinchieri (1994). Ultraviolet spectra from HST do not exist to date. The H$`\alpha `$+\[N II\] image shows a 100-pc disk-like structure surrounding a compact nucleus. The major axis of this disk is oriented along a position angle of 90° with a length of 3″, while the minor axis length is 2″. The $`V-I`$ map shows little indication of dust in the nuclear region.

### NGC 4036

This LINER 1.9 is UV-dark, based on WFPC2 F218W images (Barth et al. 1998). Its H$`\alpha `$+\[N II\] image has a complex filamentary and clumpy structure, with several “tentacles” extending up to 4″ northeast of the nucleus along a position angle of 70°. The nucleus proper resembles an ellipse with a major axis of 0$`\stackrel{}{\mathrm{.}}`$6 along a position angle of 45°. The unsharp-masked F547M image reveals wisps of dust in a disk-like configuration surrounding the nucleus on all scales probed. This is one of the few LINERs in our sample whose emission-line morphology can possibly be termed “linear” in some sense, but it seems that this morphology is in the plane of the inclined dusty disk, rather than perpendicular to it.

### NGC 4192 (M98)

This object has been classified by Ho et al. (1997a) as a “transition object,” one whose optical spectrum is intermediate between that of a LINER and an H II nucleus. It appears dark in UV images. In the HST emission-line images, the nucleus is resolved into knots spanning 0$`\stackrel{}{\mathrm{.}}`$5 in an east-west direction. In the continuum images the nucleus appears “soft”, rather than having a sharp point source like NGC 4569.
The $`V-I`$ map shows the nuclear region is dusty and the nucleus probably obscured. On larger scales, there is a ring of H II regions, partially obscured by dust, and already seen in the ground-based images of Pogge (1989b).

### NGC 4258 (M106)

This galaxy contains the famous masing disk (Watson & Wallin 1994; Miyoshi et al. 1995) whose Keplerian rotation provides some of the best evidence for a massive black hole in a galactic nucleus. It has variously been classified as a LINER or a Seyfert 1.9, and is another example of a borderline case. Wilkes et al. (1995) and Barth et al. (1999c) have shown that the spectrum in polarized light has emission lines that are broader than the lines in the total flux spectrum. However, this is seen not only in the Balmer lines but in most of the forbidden lines as well, with the width of the lines in the polarized spectrum depending on the critical density of the transition. The phenomenon is thus different from that of the hidden broad-line regions revealed in polarized light in some Seyfert 2 galaxies. A WFPC2 F218W image taken by Ho et al. (2000a) shows no conspicuous UV nucleus. We therefore aligned the brighter O/B star knots on the F218W image with those in an archival F300W image of this galaxy. We detect 2180 Å flux from all the blue stars easily visible in the F300W and F547M images (see Figure 4). We then find that a “nucleus” per se is visible in the F218W image, but it is weak and its contrast low compared to its surroundings. Translation of the nuclear count rate to a UV flux is not straightforward, because of the large time fluctuations in the UV sensitivity of WFPC2, plus the proneness of the F218W filter to red leaks when observing such obviously red sources. The UV flux for this nucleus, which we list in Table 1, accounts for neither effect and must therefore be treated as uncertain. In any case, it is clear the flux is quite low compared with the UV-bright objects in our sample. It is reasonable to treat this nucleus as intermediate between UV-bright and UV-dark. The emission-line images show a compact core and a spiral feature emerging to the north (extending up to 5″ from the nucleus), which could be the base of the helical emission-line jet seen on larger scales by Cecil, Wilson, & Tully (1992). Thus, this may be considered another LINER with collimated (or at least organized) narrow-line emission. Although there is ample evidence for circumnuclear dust in the images, there is no dust that obviously covers the nucleus in the unsharp-masked F547M image. This is also confirmed in the “$`U-V`$” image we have formed using the F300W and F547M images. Since the masing molecular gas disk is viewed nearly edge-on (Miyoshi et al. 1995), with significant optical depth along the line of sight to the nucleus, perhaps it is the dust in this disk itself that is partially obscuring the nucleus in the UV, and thus making it appear so weak.

### NGC 4374 (M84)

The LINER 2 nucleus of this galaxy is UV-dark, based on FOC F220W imaging by Zirbel & Baum (1998). M84 has a nonthermal, flat-spectrum radio core and compact X-ray emission (see discussion in Ho 1999), and its nucleus has recently been found to contain a massive compact dark object, presumably a supermassive black hole (Bower et al. 1998). The H$`\alpha `$+\[N II\] data have been previously presented by Bower et al. (1997). The images show an inclined gas disk surrounding the nucleus. Our $`V-I`$ map clearly shows that the nucleus is covered by a thick dust lane. Bower et al.
(1997) also argued for the possible presence of an ionization cone that is roughly aligned with the radio structure in this object (Birkinshaw & Davies 1985), but we find the case for such a cone is not clear. At the very least, it is not an obvious morphological structure in the extended H$`\alpha `$ emission-line gas (Figure 1c, top left panel). This structure takes the form of filaments that extend roughly east-west and north-south, along position angles 85° and 0°. The east-west complex extends 5″ east and 3″ west of the nucleus, while the north-south complex extends 2″ north and south of the nucleus.

### NGC 4486 (M87)

This LINER 2, a giant elliptical galaxy in the Virgo cluster, is well known for its collimated jet seen at radio, optical, and UV wavelengths. Both the jet and the dynamical evidence for a supermassive black hole (Sargent et al. 1978; Harms et al. 1994; Macchetto et al. 1997) testify to the existence of an AGN. The nucleus is UV-bright (Boksenberg et al. 1992; Maoz et al. 1996). Recent UV spectroscopy of the nucleus with HST/FOS (Sankrit, Sembach, & Canizares 1999) and HST/STIS (Ho et al. 2000b) reveals emission lines of width $`\sim 3000`$ km s<sup>-1</sup> on top of a featureless continuum. The H$`\alpha `$+\[N II\] image was previously published by Ford et al. (1994). It shows a compact disk with a major axis of length 0$`\stackrel{}{\mathrm{.}}`$77 along position angle 0°, and a minor axis of length 0$`\stackrel{}{\mathrm{.}}`$59. The disk is surrounded by wispy filaments extending in various directions up to 10″ from the nucleus. It is noteworthy that the optical jet, which is conspicuous in the raw data (and also visible in the unsharp residual map in Figure 1c), disappears completely in the continuum-subtracted image, indicating very little line emission from the jet itself. The unsharp residual map also shows very little evidence of nuclear dust.

### NGC 4569 (M90)

This galaxy has a bright, point-like nucleus at optical and UV bands, with a LINER 2 optical spectrum. Maoz et al. (1998) have shown that the UV spectrum is dominated by massive stars. The new HST images show an unresolved nucleus in both continuum and emission lines. The nucleus dominates the emission. On larger scales, there is a disk or spiral-arm-like structure in the H$`\alpha `$ image, extending up to 2″ from the nucleus in the north-south direction (position angle 4°). Similar structures are seen in \[O III\] although they are not as well defined. The $`V-I`$ map shows that, while the circumnuclear region is dusty, the nucleus itself is apparently unobscured by dust.

### NGC 4579 (M58)

This is a LINER 1.9 galaxy with many AGN characteristics (Filippenko & Sargent 1985; Barth et al. 1996; Ho et al. 1997b; Maoz et al. 1998; Terashima et al. 1998). The H$`\alpha `$ emission is dominated by a nuclear point source, but is surrounded by complex clumpy and filamentary emission. The overall complex has an elliptical shape with a major axis of length 2″ along position angle 120° and a minor axis of length 1″. The filamentary emission may be likened to a shell or a ring (perhaps part of a disk) with a dark lane going across it. A similar structure is seen in \[O III\], although the signal-to-noise ratio is lower. The $`V-I`$ image shows that, while the filaments are associated with circumnuclear dust, the nucleus appears to be unobscured.

### NGC 4594 (M104)

The “Sombrero” galaxy has a LINER 2 nucleus which may be, like NGC 4258, borderline between UV-bright and UV-dark. Crane et al.
(1993) have shown that the nucleus appears unresolved and isolated in HST images at 3400 Å. However, the HST/FOS UV spectrum of this galaxy, analyzed by Nicholson et al. (1998) and Maoz et al. (1998), shows shortward of 3200 Å a red continuum falling with decreasing wavelength, and becoming dominated by scattered light within the spectrograph below around 2500 Å. In Table 1 we quote the flux density measured by Maoz et al. (1998) from this spectrum, but because of the scattered light contamination and the lack of a UV image, we regard the quoted flux density as an upper limit to the true value. Due to the low signal-to-noise ratio of the UV spectrum, the nature of the UV light source (stars or AGN) is ambiguous. Fabbiano & Juda (1997) observed this galaxy with the ROSAT/HRI and detected a point-like soft X-ray source coincident with the nucleus but noted that the source could be highly absorbed. The H$`\alpha `$+\[N II\] image shows “S”-shaped wisps emerging from a bright, compact, possibly disky H$`\alpha `$ core. The two wisps extend up to 4″ east and west and up to 1″ south of the nucleus. The $`V-I`$ image shows that the dust generally follows the H$`\alpha `$ morphology, but with the nucleus behind a dust lane.

### NGC 5005

This is a LINER 1.9, which is dark in UV images. In the new HST images, the line emission is distributed in a number of compact clumps within 1″ of the nucleus. These are surrounded by fan-shaped filaments and diffuse emission extending up to 3″ southeast of the nucleus. The emission-line and $`V-I`$ images both show clearly that the nucleus is obscured. In an attempt to identify whether the emission-line clumps are associated with individual stars or star clusters, we have tried to align the H$`\alpha `$+\[N II\] image with the FOC 2200 Å image of Maoz et al. (1996). We find no unique registration that will align all the major H II regions and the UV knots in a region 10″ south of the nucleus, and no registration that can align the nuclear UV and H$`\alpha `$ knots. It thus appears that here, as in the other galaxies, the line-emitting gas is dusty, causing the UV and H$`\alpha `$ emission to be mutually exclusive. As a consequence, we cannot answer conclusively the question of whether, in this galaxy, there is direct evidence for the excitation of the emission-line gas by hot stars.

## 4 Discussion

With the information given above, we are in a position to address some of the following questions.

1. Do any of the LINERs, when observed at HST resolution, show ionization cones or linear structures analogous to those seen in Seyferts? If so, what are their general characteristics (e.g., opening angles, linear extent, excitation level)? Ionization cones (or lobes) are probably the best evidence for obscuration of the nucleus by a toroidal structure, which would account for the absence of a nuclear UV source in UV-dark LINERs.

2. Is there a difference in the morphology of the ionized gas in the circumnuclear regions of UV-bright and UV-dark LINERs? Differences in morphology can afford direct tests of competing scenarios, as follows:

1. Obscuration: The nuclear UV source could be hidden by a toroidal structure, as detailed above, or by patchy foreground obscuration by circumnuclear dust (e.g., van Dokkum & Franx 1996), not necessarily associated with the nucleus itself.

2. An ionizing continuum source temporarily in its “off” state: The duty-cycle hypothesis of Eracleous et al.
(1995) predicts a spatial gap between the nucleus and the ionization front in the \[O III\]-emitting region because of rapid recombination of the O<sup>+2</sup> ion. In contrast, the long recombination time scale of the ionized zone implies that its corresponding gap should be unobservable across the narrow-line region. The recurrence time of active phases of the nuclear source in this scenario is of order a century. In view of the distances of these galaxies, the implied angular size of a typical \[O III\] ring would be around 0$`\stackrel{}{\mathrm{.}}`$6, well within the resolution of these HST images.

3. Shock excitation of the emission-line gas: This could manifest itself as filamentary and bow-shaped structures indicative of shock fronts.

The emission-line images can be particularly informative at scales of a few arcseconds where the high angular resolution of the HST and the often-seen clumpiness of line-emitting gas can reveal faint line-emitting structures that are undetectable from the ground. First, we find that only one of the LINER nuclei observed, NGC 1052, shows an unambiguous ionization cone of the kind often seen in Seyfert galaxies. M84 may also exhibit a biconical structure, but the evidence in that object is less clear. Two other galaxies, NGC 4036 and NGC 4258, have structures that could plausibly be termed “linear.” None of the remaining 10 LINERs show this kind of morphology. Our attempt to find a link between LINERs and AGNs through this avenue has therefore given a positive result in only one, or at most four, cases. In NGC 1052, which already has various known AGN features, the cones are indeed aligned with the radio structure, as in Seyferts. Similarly, the possible biconical gas structure in M84, if real, would be roughly aligned with the axis of its radio jets (Birkinshaw & Davies 1985). Obviously, in the other objects we cannot search for alignment of the complex emission line structures with radio structures. Nonetheless, it will be interesting to see in the future whether or not elongated radio structures are common in LINERs.

Second, there is no clear difference in emission-line morphology between the UV-dark and UV-bright LINERs, but rather, there is a large variety from object to object. On the other hand, there is clear evidence for obscuration of the nucleus by clumps and lanes of dust in all of the clearly UV-dark objects, but not in the UV-bright ones. We conclude that foreground obscuration by nuclear dust is the cause of the non-detection of a central UV point source in these LINERs, if such a source is present. In the one possible exception, NGC 4258, the detected but weak central UV source may be attenuated by dust mixed with the molecular gas in the masing disk that is known to exist on the line of sight to the nucleus. Although our sample is small and statistically incomplete, one may speculate that this is the reason that 75% of LINERs are UV-dark (Maoz et al. 1995; Barth et al. 1998) — that is, that all LINERs are photoionized by a central UV source, whose nature is as yet unknown, but that this source is obscured by circumnuclear dust in 75% of the cases. In the same vein, we have found no evidence for obscuration by toroidal structures on smaller scales (which would produce the ionization cones we have generally failed to find), nor signs of a central source with “gaps” in the gas morphology hypothesized by Eracleous et al. (1995) in their duty-cycle picture.
Nor do we find clear signs of outflows and shock-like morphologies, although there are hints of structures that may turn out to be related to such phenomena, if studied with deeper images at higher resolution. If, as the above results suggest, all LINERs have a central UV source with a photon flux of the right order of magnitude to power the observed emission line spectrum, then shocks are not needed to explain the excitation of the emission-line gas.

Our sample contains similar numbers of so-called LINER 1.9s, i.e., LINERs with weak broad H$`\alpha `$ emission, and LINER 2s, in which such broad lines have not been detected. The relative numbers of these two types among the LINER population are similar to the relative numbers of Seyfert 1 and Seyfert 2 galaxies (Ho et al. 1997b), and this may be another clue to a relation between LINERs and higher-luminosity AGNs. We find, however, no obvious differences in the emission line morphologies of the two LINER types. This is contrary to Seyferts, where the line morphologies of Seyfert 1s are more compact (Pogge 1989b; Schmitt & Kinney 1996), suggestive of a geometry in which the central engine and broad-line region are viewed unobscured along the axis of an obscuring torus. A caveat to this point is that the above study has compared Seyfert 1s and 2s, rather than 1.9s and 2s, and this distinction may be important.

One can imagine a number of physical reasons for the differences in the morphologies of LINERs and Seyferts. LINERs may, as a general rule, lack the toroidal collimating structures postulated in Seyferts. Alternatively, they may generally lack the relativistic jets that are often coaligned with extended emission structures in Seyferts. The jet/emission-line region alignment in Seyferts is thought to arise because both jets and ionizing radiation are collimated by related structures, or because the jet opens a path through the interstellar medium for ionizing photons to follow, or because the jet itself excites the line emission. Among our sample, this explanation cannot apply to M87, which has a conspicuous jet, yet no linear emission-line structure, either coincident with the jet or elsewhere. Another possible explanation for the difference between LINERs and Seyferts is a deficit of circumnuclear gas or of ionizing photons in LINERs on the larger scales where linear structures appear in Seyferts. However, all Seyferts with extended narrow-line regions that have been imaged at HST resolution to date show that the collimated linear structures and cones persist all the way to the smallest angular scales probed (NGC 1068: Axon et al. 1998; NGC 4151: Evans et al. 1993; NGC 5252: Tsvetanov et al. 1996), and this is also what we have found in the biconical emission of the LINER NGC 1052. On the other hand, one might argue that these objects were preselected to have the narrowest and brightest extended narrow-line regions, and do not represent the Seyfert population as a whole. Finally, we note that the absence of linear emission-line structures in LINERs does not preclude them from being AGNs. Indeed, many of the LINERs in our sample have radio jets and/or broad-line regions, features that are considered characteristic of nuclear activity in more powerful objects. While linear emission-line features are found in many powerful AGNs, they are by no means a defining characteristic of the class.
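As an aside, the angular scale invoked in the duty-cycle discussion above is easy to verify with a light-travel argument. The arithmetic below is ours, with an assumed fiducial distance (the text does not state which distance the 0$`\stackrel{}{\mathrm{.}}`$6 figure corresponds to):

```python
import numpy as np

C_PC_PER_YR = 0.3066                 # light-travel speed in pc per year
ARCSEC = np.pi / (180.0 * 3600.0)    # radians per arcsecond

t_off = 100.0     # yr: duty-cycle recurrence time quoted in the discussion
d_mpc = 10.0      # assumed fiducial distance (illustrative)

r_pc = C_PC_PER_YR * t_off                # light-travel radius of the gap
theta = r_pc / (d_mpc * 1e6) / ARCSEC     # angular size in arcsec
print(f"gap radius ~{r_pc:.0f} pc -> ~{theta:.1f} arcsec at {d_mpc:.0f} Mpc")
# ~31 pc -> ~0.6 arcsec, consistent with the figure quoted above
```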
A further point that has interesting physical and practical implications is that, when imaged at HST resolution, LINERs do not reveal simple disk-like gas structures, but rather more complex geometries. This implies that the kinematics of the circumnuclear gas are also likely to be quite complicated and could lead astray the interpretation of kinematic measurements aimed at determining the central black hole masses. Of special interest is the morphology of the line-emitting gas in the innermost regions close to the nucleus. In most of the cases in our sample, there is no indication of a small-scale disk, even if such a disk exists on larger scales. The kinematics of the gas at small radii, therefore, is unlikely to be governed predominantly by rotation. Indeed, recent HST spectroscopy of several galactic nuclei shows that the ionized gas has velocity dispersions that are large even in the innermost regions, as opposed to the circular velocity field expected from a cold gas disk. For example, the HST FOS emission-line spectra of the nuclear ionized gas disk in M87 (Harms et al. 1994; Macchetto et al. 1997) show line widths of $`\sigma \approx 500`$ km s<sup>-1</sup> at projected radii of 0$`\stackrel{}{\mathrm{.}}`$2 to 0$`\stackrel{}{\mathrm{.}}`$6 where the rotational velocity is 500–600 km s<sup>-1</sup> (see Figure 5 of Macchetto et al. 1997). A similar trend is seen in NGC 4261 (Ferrarese, Ford, & Jaffe 1996): at a deprojected distance of $`0\stackrel{}{\mathrm{.}}2`$ from the nucleus, the gas disk shows $`v/\sigma \approx 1`$. Finally, the nuclear ionized gas disk of M84 observed by Bower et al. (1998) also displays large nonrotational motions near the center. It is puzzling how gas in such a disturbed kinematic state within such a small ($`10^3`$ pc<sup>3</sup>) volume can avoid settling into a cool, rotationally-dominated disk. The clumpy gas filaments will collide with each other at supersonic velocities of order 100–200 km s<sup>-1</sup> on a dynamical time scale, which at a distance of 5 pc from a $`10^8M_{\odot }`$ central mass is $`\sim 10^5`$ yr. This is much shorter than the expected lifetime of the AGN or the nuclear starburst, but much longer than the cooling time, which, for free-free emission, is of order 100 yr for gas with a density $`10^5`$ cm<sup>-3</sup> that has been heated to $`10^6`$ K by collisions.

## 5 Summary

We have presented narrowband (\[O III\]$`\lambda 5007`$ and H$`\alpha `$+\[N II\]) emission-line images of 14 galaxies with LINER nuclei. Most of these data have not been previously published, and this is the first time that the narrow-line regions of a significant number of LINERs are studied at HST resolution. The objects in our sample include representatives of the various subclasses of LINERs that have emerged in recent years: “type 1.9” and “type 2,” UV-bright and UV-dark, objects with starburst-dominated or AGN-dominated UV spectra. Our main observational findings are as follows.

1. The narrow-line regions of nearby LINERs are resolved by HST, with much of the line emission coming from regions with sizes of 10–100 pc.

2. In general, the emission-line morphology is complex and disordered, with varying contributions from a compact core, a disk, clumps, and filaments. We find no obvious distinctions in morphologies among the various LINER subclasses.

3. In only one object, NGC 1052, possibly two if we include M84, have we found clear evidence for an ionization cone analogous to those seen in Seyfert galaxies.
The ionization cone of NGC 1052 is well-aligned with its radio structure. Two or three other objects have morphologies that can perhaps be termed “linear.”

4. Obscuration of the nucleus by circumnuclear clumps of dust is fairly ubiquitous in the UV-dark LINERs but absent in the UV-bright ones.

These findings lead us to the following conclusions. First, the data are consistent with a picture in which most or all LINERs are objects that are photoionized by a central UV source, even when the central source is not visible directly. As discussed in §1, Maoz et al. (1998) showed that in UV-bright LINERs, the extreme-UV flux, based on a reasonable extrapolation from the UV, is of the right magnitude to account for the observed H$\alpha$ in a photoionization scenario. Any mild foreground extinction, which appears to be common based on the images presented here, would only strengthen this conclusion. Hence, the line emission in UV-bright LINERs is likely to be powered by photoionization. Our results suggest that the UV visibility of the nucleus is determined simply by the circumnuclear dust morphology along our line of sight. This suggestion is reinforced by the anti-correlation between UV brightness on the one hand, and galaxy inclination and Balmer decrement on the other, found by Barth et al. (1998). A similar inclination effect has been seen in Seyfert galaxies at visible (Keel 1980) and X-ray wavelengths (Lawrence & Elvis 1982). Thus, the fact that the majority of the LINERs in optically-selected samples are UV dark (Maoz et al. 1995; Barth et al. 1998) does not necessarily imply that these objects are excited by processes other than photoionization (e.g., shocks; Dopita & Sutherland 1995) or that they are in an “off” state (Eracleous et al. 1995). There is also no correspondence between UV darkness and the absence of broad lines in LINERs, which one might expect in a duty-cycle scenario when the continuum source is turned off. Moreover, the UV spectra of the nuclei of individual LINERs have so far failed to reveal the emission-line signatures predicted by shock models. This result argues against these alternative explanations for the UV-bright/dark dichotomy. If, as our results suggest, UV-dark LINERs appear as such only because of foreground extinction, then it is plausible to conclude that all LINERs harbor a source of ionizing radiation, and hence that their line-emitting gas is powered by photoionization. In passing, we note that in the non-elliptical galaxies in our sample, the circumnuclear dust, though patchy and sometimes chaotic in appearance, generally lies in a preferred plane. The position angle of this plane coincides remarkably closely with the direction of the major axis of the large-scale galactic disk (data compiled in Ho et al. 1997a). This explains why the UV visibility of the nuclei correlates with the inclinations of the host galaxies (Barth et al. 1998) despite the fact that the obscuration, as seen in our images, actually occurs on much smaller scales. Second, whatever the nature of the central source in a LINER, be it an accretion flow onto a black hole, a compact star cluster, or a combination of the two, it is not generally revealed by the narrowband images we have obtained. The one LINER that shows a clear Seyfert-like ionization cone, NGC 1052, does have additional AGN characteristics: weak broad wings in its H$\alpha$ emission profile (Ho et al. 1997b), a hidden broad-line region (Barth et al.
1999a), a radio jet and compact, flat-spectrum core (Wrobel 1984), and a nonthermal hard X-ray spectrum (Guainazzi & Antonelli 1999; Weaver et al. 1999). On the other hand, many of the other LINERs which have AGN features, such as NGC 4579, M81, and M87, show no evidence for ionization cones in our images. Finally, we have pointed out that the complex gas morphologies revealed by our images suggest caution in interpreting the gas kinematics in the innermost regions of these objects, for example, in searches for, and mass measurements of, central black holes.

This work was supported by grant GO-06436.01-95A from the Space Telescope Science Institute, which is operated by AURA, Inc., under NASA contract NAS 5-26555. Undergraduate research assistant S. Benfer (Ohio Wesleyan) helped with the initial reductions of our GO imaging data. D.M. acknowledges support by a grant from the Israel Science Foundation. L.C.H. is grateful to Sandra Faber for bringing to his attention the issue concerning the kinematics of the compact narrow-line regions in LINERs that we discussed at the end of §3.2.

Figure 1a-d: Narrowband PC1 images and dust maps. For each galaxy we show the continuum-subtracted H$\alpha$+[N II] image on the left, and on the right either the $(V-I)$ color map or the unsharp mask of the F547M frame if no $I$-band image is available. Darker shades denote regions of dust absorption. Each panel shows a 10″$\times$10″ segment of the image centered on the nucleus (oriented with North up, East to the left), with a scale bar in the lower left corner of each emission-line image showing 100 pc projected at the galaxy’s distance (see Table 1). The contrast of the emission-line images is chosen to emphasize the faint circumnuclear emission regions.

Figure 1e: Narrowband WF3 images of the central 30″ of NGC 3031, shown like the others in Figure 1a-d. H$\alpha$+[N II] emission is on the left, and the unsharp residual map is on the right. The scale bar in the lower left shows 100 pc at the distance of NGC 3031 (Table 1).

Figure 2: Narrowband PC1 images of the central 15″ of NGC 1052. Left panel: F658N image (without continuum subtraction); right panel: normalized unsharp residual map of the same. The residual map shows emission as bright and absorption (presumably dust) as dark. Each panel shows the central 15″ of NGC 1052, and the scale bar indicates 100 pc. The axis of the VLA radio jet (Wrobel 1984) is shown with the dashed line.

Figure 3a: Continuum-subtracted PC1 [O III]$\lambda 5007$ emission-line images (left) of NGC 4258, NGC 4579, and NGC 5005, shown alongside “excitation maps” of the H$\alpha$+[N II]/[O III]$\lambda 5007$ ratio (right). The scaling and orientation follow those in Figures 1a-d.

Figure 3b: Continuum-subtracted PC1 [O III]$\lambda 5007$ images of NGC 4192 (left) and NGC 4569 (right). For these galaxies the “excitation maps” are extremely noisy and contain no useful information. The scaling and orientation are as in Figure 3a.

Figure 4: Montage of PC1 images of NGC 4258 showing (left-to-right, top-to-bottom) F547M, [O III]$\lambda 5007$ emission, F300W, and F218W. All maps show the central 10″ of the galaxy centered on the active nucleus. The scaling and orientation are as in Figures 1–3.
# A 1400-MHz pilot search for young pulsars

## 1 Introduction

Although the majority of pulsar radio flux density spectra peak at frequencies around 400 MHz, during the 1980s it was realised that the sensitivity of pulsar surveys conducted at such frequencies becomes seriously compromised when searching along the Galactic plane. The reasons for this are twofold: (1) The system temperature becomes dominated by the sky background radiation. Typical 408-MHz sky background temperatures are $\sim 900$ K in the direction of the Galactic centre, and $\sim 300$ K along the Galactic plane (Haslam et al. 1982). (2) The observed pulse width can become much larger than the intrinsic width due to multi-path scattering and/or dispersion by free electrons in the interstellar medium. Both these effects lead to a net reduction in signal-to-noise ratio. In extreme cases of scattering and dispersion, the observed pulse width becomes comparable to the pulse period and the pulsar is no longer visible as a periodic radio source. Fortunately, all these effects diminish strongly at a higher observing frequency: the brightness temperature of the radio continuum emission $T_\nu$ at a given observing frequency $\nu$ has a power-law dependence $T_\nu \propto \nu^{\beta}$ with a spectral index $\beta \simeq -3$ (Lawson et al. 1987; Reich & Reich 1988). This means that the 408-MHz sky background temperatures quoted above are reduced by more than an order of magnitude for high-frequency ($\gtrsim 1$ GHz) surveys. Furthermore, pulse dispersion and scattering scale as $\Delta\nu/\nu^{3}$ and $\nu^{-4}$ respectively (e.g. Manchester & Taylor 1977), for an observing frequency $\nu$ and bandwidth $\Delta\nu$. Clifton & Lyne (1986) (see also Clifton et al. 1992) were the first to really demonstrate the worth of surveying at high frequencies. In a 1400-MHz survey of a thin strip of 200 deg$^2$ along the Galactic plane, Clifton et al. found 40 new pulsars. All of these sources were missed by a previous 390-MHz survey (Stokes et al. 1985) which overlapped the same region. This was in spite of the fact that, after scaling the sensitivity limits for typical pulsar spectral indices, the Stokes et al. survey had twice the nominal sensitivity of the Clifton et al. survey. Johnston et al. (1992a) carried out a complementary survey of the southern Galactic plane using the Parkes radio telescope at 1520 MHz, finding 46 pulsars missed by previous lower-frequency searches covering this region (Manchester et al. 1978). The pulsars discovered in these two high-frequency surveys are primarily young neutron stars that have not had time to move far from their birth places close to the Galactic plane. A large sample of such objects is desirable for studies of the birth and evolution of neutron stars and of the size of the neutron star population in the inner Galaxy (Johnston 1994). In addition, these surveys discovered several interesting binary pulsars including PSR B1259$-$63, a 48-ms pulsar in a 3.4-yr orbit around a $10\,M_{\odot}$ Be star (Johnston et al. 1992b). Significant improvements in sensitivity have led to renewed interest in Galactic plane searches. In particular Camilo et al. (2000a) report the discovery of over 400 pulsars in the first half of a new survey of the southern Galactic plane using the recently commissioned Parkes $\lambda$ 21-cm multibeam system. Their survey is some seven times more sensitive than the Clifton et al. and Johnston et al.
surveys, and the new discoveries already include several binary pulsars (e.g. Lyne et al. 1999), as well as a large number of very distant, high-dispersion-measure sources. A preliminary account of the exciting results from the Parkes multibeam survey (Camilo et al. 1997) prompted us to utilise the large collecting area of the 100–m Effelsberg radio telescope to perform a new search along the northern Galactic plane. In this paper we report on a small survey carried out during 1998 to test the feasibility of future observations with a wide-bandwidth search system currently under development. This pilot search proved successful, discovering four new pulsars, the first ever found with this telescope. In Sect. 2 we describe in some detail the survey observations and data reduction techniques. In Sect. 3 we estimate the sensitivity of the survey. The results are presented in Sect. 4. These results, along with follow-up timing observations, are discussed in Sect. 5. Finally, in Sect. 6, we summarise the main conclusions from this work and their implications for future pulsar search experiments at Effelsberg.

## 2 Survey observations and data reduction

All observations reported in this paper were carried out at a centre frequency of 1402 MHz during a number of separate sessions between 1998 June and 1999 April using the 100–m Effelsberg radio telescope operated by the Max-Planck-Institut für Radioastronomie. Although 1400-MHz timing observations at Effelsberg are routinely made with typical bandwidths of 40 MHz or more (see e.g. Kramer et al. 1998), the search hardware available to us has a maximum bandwidth of 16 MHz in each of the two orthogonal, circular polarisation channels. Nonetheless, the large forward gain of the telescope at 1400 MHz (1.5 K Jy$^{-1}$), the relatively low system temperature of the receiver (35 K) and the long integration times employed in the survey (35 min per pointing) mean that the system achieves a sensitivity which represents a threefold improvement over that achieved by Clifton et al. (1992) during their survey. The main aim of the observations reported here was to test the feasibility of a larger search with a 100-MHz bandwidth system which is presently being commissioned. Given the limited amount of telescope time available for this pilot project, we chose to restrict our search area to a 2 deg$^2$ patch of the Galactic plane defined by $28^{\circ}<l<30^{\circ}$ and $|b|<0.5^{\circ}$. The rationale for this choice is simple: this line of sight passes close to the Scutum spiral arm and, as a result, through one of the most pulsar-rich parts of the northern Galactic plane. In addition, since this part of the sky is not visible from Arecibo, Effelsberg is presently the largest radio telescope in the world capable of surveying it. The survey region was divided up into a grid of 126 positions consisting of 9 strips of 14 positions along lines of constant Galactic latitude ($b=0.0^{\circ},\pm 0.12^{\circ},\pm 0.24^{\circ},\pm 0.36^{\circ},\pm 0.48^{\circ}$). This spacing ensured some overlap between adjacent pointings, given the 3-dB width of the telescope beam (9′). The $b=0.0^{\circ}$ strip was centred on $l=29^{\circ}$. Beam centres on adjacent strips were alternately offset by half a beam width to ensure the most efficient coverage on the sky. At the start of each observing run, we carried out a 5-min observation of PSR B2011+38.
This relatively luminous 230-ms pulsar has a dispersion measure of 239 cm$^{-3}$ pc and is known not to be prone to significant intensity variations due to interstellar scintillation (Lorimer et al. 1995). The fact that the search code detected this pulsar with consistently high signal-to-noise ratios (consistent with its 1400-MHz flux density of $6.4\pm 0.5$ mJy; Lorimer et al. 1995) gave us confidence that the individual filterbank channels were functioning normally, and that the nominal system sensitivity was being achieved. The search field is visible from Effelsberg for about 8.5 hr per day. Since each grid position in the field was observed for 35 min, we typically observed up to 14 separate positions on the sky during a given transit. In search mode, the incoming signals of each polarisation are fed into a pair of $4\times 4$-MHz filterbanks. The outputs from the filterbanks are subsequently detected and digitised every 500 $\mu$s using 2-MHz voltage-to-frequency converters, resulting in an effective 10-bit quantisation of the signals. This is the fastest sustainable data rate using this system. Signals from the orthogonal polarisations are combined to form a total-power time series for each 4-MHz frequency channel over the band. These four frequency channels are then passed to the standard Effelsberg Pulsar Observing System (see Kramer 1995) which stores contiguous blocks of data to disk every 1024 samples (0.512 s). The four-channel search system used for this survey results in refreshingly low data rates compared to most other searches, where the backends routinely sample 256 channels or more (see e.g. Manchester et al. 1996). The main advantage of such a simple system is that, as soon as each 35-min integration was complete, a preliminary analysis of the data could be carried out well within the time that the telescope was observing the next grid position. This quasi on-the-fly processing scheme allowed rapid re-observation of any pulsar candidates found during the search. The data analysis procedure was optimised to search Fourier spectra of each 35-min time series for dispersed periodic signals. The software for this purpose was developed largely from scratch, taking advantage of ideas used to process our on-going search of the Galactic centre from Effelsberg (Kramer et al. 1996; Kramer et al. 2000), as well as previous experience gained by one of us (DRL) during the Parkes $\lambda$ 70-cm Southern Sky Survey (Manchester et al. 1996; Lyne et al. 1998). We also made use of several “standard” pulsar search techniques described in detail by a number of authors (Hankins & Rickett 1975; Lyne 1988; Nice 1992). In what follows we give a brief overview of our analysis procedure. We adopted a two-stage data analysis procedure whereby the data were quickly analysed after the observation at Effelsberg and then stored on magnetic tape for more detailed off-line analyses at Bonn. The three differences between the analyses are the range of dispersion measures searched, the signal-to-noise thresholds, and the method used to excise radio frequency interference (see below). Both analyses begin by computing the Fast Fourier Transform of the $2^{22}$-point time series in each of the four frequency channels. Since the dispersion measure (DM) of any pulsar is a priori unknown, we need to de-disperse the data for a number of trial DM values before the periodicity search begins. For our purposes this is most readily achieved by applying the shift theorem (see e.g.
Bracewell 1965) to the Fourier components of each channel before summing appropriately to produce a number of de-dispersed amplitude spectra. Our on-line analysis in Effelsberg produced 18 amplitude spectra for each beam, corresponding to a DM range between zero and 1,500 cm$^{-3}$ pc. Subsequent analyses in Bonn produced, in addition to this, a further 18 spectra per beam which increased the range of DMs out to 10,000 cm$^{-3}$ pc. Each amplitude spectrum was then searched for harmonically related spikes in the Fourier domain, the characteristic signature of any periodic signal. Since pulsar signals generally have short duty cycles, and therefore many harmonics, we summed the spectra over 2, 4, 8, and 16 harmonics using an algorithm described by Lyne (1988) and repeated the search for significant spectral features. Having completed the search of all the amplitude spectra for a given beam, we then compiled a list of all non-harmonically-related spectral features with a signal-to-noise ratio greater than 8 in the Effelsberg analyses and 7 in the Bonn analyses. Typically, depending on the amount of interference present in the data, there are of order five to ten such “pulsar candidates” in each beam. For each candidate, the analysis described so far results in a period $P$ and dispersion measure DM; the latter quantity is based upon the maximum spectral signal-to-noise ratio found as a function of all the DM trials. Working now in the time domain, we fold the filterbank channels at the nominal period of each candidate to produce one pulse profile per channel. These profiles are then de-dispersed at the nominal DM to produce an integrated profile over the 16-MHz band. The results of this analysis are summarised in a plot of the form shown in Fig. 1, which is the output from the discovery observation of PSR J1842–0415. This plot serves as a good example showing the characteristics of a strong pulsar candidate. The high signal-to-noise integrated profile (top left panel) can be seen as a function of time and radio frequency in the grey scales (lower left and right panels). In addition, the dispersed nature of the signal is immediately evident in the upper right hand panel, which shows the signal-to-noise ratio as a function of trial DM. This combination of diagnostics proved extremely useful in differentiating between a good pulsar candidate and spurious interference. The most significant difference between our two data reduction strategies concerns the methods employed to eliminate radio frequency interference. Since the radio frequency environment in Effelsberg is pervaded by a number of man-made signals with fluctuation frequencies predominantly between 10 and 2000 Hz, both modes of data reduction required some means of excising these unwanted signals. The “on-line” data reduction mode in Effelsberg achieved this by simply clipping all spectral features above 10 Hz whose amplitudes exceeded five times the spectral rms. Whilst this simple-minded approach was sufficient to detect and confirm all the pulsars finally discovered in the survey, we were aware that it significantly compromised our sensitivity to pulsars with periods below 0.1 s. To address this important issue, our data analysis procedure in Bonn made use of the fact that the vast majority of man-made interfering signals are not dispersed and occur predominantly at a constant fluctuation frequency at any given epoch.
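To make the spectral-domain steps concrete, the following is a minimal Python sketch, under our own simplifying assumptions rather than the survey's actual Fortran pipeline, of the two operations just described: de-dispersion of the channels via the shift theorem, followed by incoherent harmonic summing. The function names, dispersion constant, and reference-frequency choice are ours for illustration.

```python
import numpy as np

K_DM = 4.1488e3  # dispersion delay constant, s MHz^2 / (pc cm^-3)

def dedispersed_spectrum(channels, freqs_mhz, dm, dt):
    """Shift each filterbank channel in the Fourier domain to remove the
    dispersion delay at trial DM `dm`, sum the channels, and return the
    amplitude spectrum; `channels` is an (n_chan, n_samp) array."""
    n_samp = channels.shape[1]
    fluct = np.fft.rfftfreq(n_samp, d=dt)            # fluctuation freq, Hz
    f_ref = freqs_mhz.max()                          # delays relative to top
    total = np.zeros(fluct.size, dtype=complex)
    for series, f in zip(channels, freqs_mhz):
        delay = K_DM * dm * (f ** -2 - f_ref ** -2)  # seconds
        total += np.fft.rfft(series) * np.exp(2j * np.pi * fluct * delay)
    return np.abs(total)

def harmonic_sums(spec, max_fold=16):
    """Incoherent harmonic summing (cf. Lyne 1988): add the spectrum
    compressed by each harmonic number so that power at f0, 2*f0, ...
    accumulates in the bin of the fundamental f0."""
    acc, sums, nharm = spec.copy(), {1: spec.copy()}, 1
    while 2 * nharm <= max_fold:
        nharm *= 2
        for h in range(nharm // 2 + 1, nharm + 1):   # new harmonics only
            n = spec.size // h
            acc[:n] += spec[::h][:n]                 # spec[h*i] -> bin i
        sums[nharm] = acc.copy()
    return sums
```

An undispersed terrestrial signal, by contrast, contributes at the same fluctuation frequency in every channel and every trial DM.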
These signals are immediately apparent in a compilation of a large number of zero-DM amplitude spectra for different beam positions. Based on the statistics of over 60 individual spectra, we constructed a “spectral mask” which contains the frequencies of those spectral features that occur more than 5 times above a signal-to-noise threshold of 7. We found 611 such frequencies between 30 and 2000 Hz, corresponding to 0.06% of the total number of spectral bins. Most of these are in fact related to the 50-Hz mains power line. By masking (i.e. ignoring) just these frequencies in our analysis, it was then possible to detect short-period pulsars with fundamental frequencies outside the masked frequency bins in our data. We verified the validity of this approach by analysing a number of test observations of millisecond and short-period pulsars which were essentially undetectable without the use of the spectral mask, simply because of the dominating effect of the interfering signals. Thus, although we did not detect any short-period pulsars in this survey, we are confident that no potentially detectable pulsars with fundamental frequencies outside the masked frequency bins were missed because of radio frequency interference.

## 3 Search sensitivity

To estimate the sensitivity of this survey, we make use of the following expression, similar to that derived by Dewey et al. (1984), for the minimum flux density a pulsar must have in order to be detectable:

$$S_{\mathrm{min}}=\frac{\eta T_{\mathrm{sys}}}{G\sqrt{2\Delta\nu\,\tau}}\left(\frac{W}{P-W}\right)^{1/2}.$$ (1)

Here the constant factor $\eta$ takes into account losses in the hardware and the threshold signal-to-noise ratio above which a detection is considered significant ($\eta \simeq 10$ in our case), $T_{\mathrm{sys}}$ is the system temperature (see below), $G$ is the gain of the telescope (1.5 K Jy$^{-1}$ for Effelsberg operating at 21 cm), $\Delta\nu$ is the observing bandwidth (16 MHz for this survey), the factor of $\sqrt{2}$ indicates that two polarisation channels were summed, $\tau$ is the integration time per telescope pointing (35 min), $P$ is the period of the pulsar and $W$ is the observed width of the pulse. The system temperature $T_{\mathrm{sys}}$ is essentially the sum of the noise temperature of the receiver $T_{\mathrm{rec}}$, the spillover noise into the beam side-lobes from the ground $T_{\mathrm{spill}}$, and the excess background temperature $T_{\mathrm{sky}}$ caused largely by synchrotron-radiating electrons in the Galactic plane itself. From regular calibration measurements we found $T_{\mathrm{rec}}$ to be 35 K. The spillover contribution $T_{\mathrm{spill}}$ was estimated to be 5 K for typical telescope elevations during survey observations. We estimate $T_{\mathrm{sky}}$ by scaling the 408-MHz all-sky survey of Haslam et al. (1982) to 1400 MHz assuming a spectral index of $-2.7$ (Lawson et al. 1987), finding a typical value in the direction $l=29^{\circ}$, $b=0^{\circ}$ to be 15 K. With these values in Eq. (1), we find the minimum flux density for detecting a 0.5-s pulsar with a duty cycle of 4% to be about 0.3 mJy. We caution that this sensitivity estimate should be viewed as a “best case scenario”, valid for relatively long-period pulsars with low dispersion measures and narrow pulses observed at the beam centre. The effects of sampling, dispersion, and pulse scattering significantly degrade the search sensitivity at short periods.
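As a cross-check on these numbers, here is a short sketch (an illustrative helper of our own, not part of the survey software) that evaluates Eq. (1) with the system parameters quoted above:

```python
import math

def s_min_mjy(eta=10.0, t_sys=55.0, gain=1.5, bw_hz=16e6, n_pol=2,
              tau_s=35 * 60.0, period_s=0.5, duty=0.04):
    """Radiometer-equation sensitivity, Eq. (1); t_sys (K) is the sum
    T_rec + T_spill + T_sky = 35 + 5 + 15 K, gain in K/Jy."""
    w = duty * period_s                                  # pulse width, s
    rms = eta * t_sys / (gain * math.sqrt(n_pol * bw_hz * tau_s))
    return 1e3 * rms * math.sqrt(w / (period_s - w))     # Jy -> mJy

print(f"{s_min_mjy():.2f} mJy")   # prints 0.29, i.e. about 0.3 mJy
```

Here $W$ was taken at its intrinsic value; the pulse broadening discussed below degrades this figure at short periods and high DMs.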
Specifically, the observed pulse width $W$ in Eq. (1) is often likely to be greater than the intrinsic width $W_{\mathrm{int}}$ emitted at the pulsar, because of the scattering and dispersion of pulses by free electrons in the interstellar medium, and because of the post-detection integration performed in the receiver. The sampled pulse profile is the convolution of the intrinsic pulse shape with the broadening functions due to dispersion, scattering and integration, and its width is estimated from the following quadrature sum:

$$W^2=W_{\mathrm{int}}^2+t_{\mathrm{samp}}^2+t_{\mathrm{DM}}^2+t_{\mathrm{scatt}}^2,$$ (2)

where $t_{\mathrm{samp}}$ is the data sampling interval, $t_{\mathrm{DM}}$ is the dispersion broadening across one filterbank channel and $t_{\mathrm{scatt}}$ is the interstellar scatter broadening. To highlight the effects of pulse broadening on sensitivity, in Fig. 2 we present the effective sensitivity as a function of period for a hypothetical pulsar with an intrinsic duty cycle of 5%, for assumed DMs of 0, 128 and 512 cm$^{-3}$ pc. The scallops in the curves at short periods reflect the reduction in sensitivity due to the loss of higher-order harmonics in the Fourier spectrum (see e.g. Nice 1992). The severe degradation in sensitivity at short periods and high dispersion measures is clearly seen in this diagram. In particular, we note that, due to the dispersion across individual filterbank channels, the present observing system is essentially insensitive to pulsars with periods less than 30 ms and DMs larger than 500 cm$^{-3}$ pc. In the discussion hitherto we have implicitly assumed that the apparent pulse period remains constant during the observation. Given the necessarily long integration times employed to achieve good sensitivity, this assumption is only valid for solitary pulsars, or those in binary systems whose orbital periods are longer than about a day. For shorter-period binary systems, as noted by a number of authors (see e.g. Johnston & Kulkarni 1992), the Doppler shifting of the pulse period results in a spreading of the total signal power over a number of frequency bins in the Fourier domain. Thus, a narrow harmonic becomes smeared over several spectral bins. As an example of this effect, as seen in the time domain, Fig. 3 shows a 35-min search-mode observation of PSR B1744–24A, the 11.56-ms eclipsing binary pulsar in the globular cluster Terzan 5 (Lyne et al. 1990). Given the short orbital period of this system (1.8 hr), the observation covers about one third of the orbit! Although the search code nominally detects the pulsar with a signal-to-noise ratio of 9.5 for this observation, the Doppler shifting of the pulse period seen in the individual sub-integrations clearly results in a significant reduction in sensitivity. The analysis reported in this paper makes no attempt to recover the loss in sensitivity due to this effect. To date, the only pulsar searches in which this issue has been tackled are searches for binary pulsars in globular clusters (e.g. Anderson et al. 1990; Camilo et al. 2000b). These searches applied a technique whereby the time series is compensated for first-order Doppler accelerations. Although these searches have been very successful, the technique adds significantly to the computational effort required to reduce the data, and has therefore only been applied to globular clusters where the DM is known a priori from observations of solitary pulsars.
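To sketch the idea behind such acceleration searches, the following minimal resampling routine (our own illustration, assuming a constant line-of-sight acceleration and linear interpolation, not the code used in the cited searches) makes an accelerated pulsar strictly periodic to first order:

```python
import numpy as np

C_LIGHT = 299_792_458.0   # speed of light, m/s

def resample(x, dt, accel):
    """For constant line-of-sight acceleration a, the emitted phase goes
    as t - a*t**2/(2c) to first order in v/c, so we evaluate the input
    series at telescope time t = tau + a*tau**2/(2c) for uniform tau."""
    n = len(x)
    tau = np.arange(n) * dt                       # pulsar-frame time grid
    t = tau + accel * tau ** 2 / (2.0 * C_LIGHT)  # telescope time
    pos = t / dt                                  # fractional sample index
    i0 = np.clip(np.floor(pos).astype(int), 0, n - 2)
    frac = pos - i0
    return (1.0 - frac) * x[i0] + frac * x[i0 + 1]

# One FFT is then performed per trial acceleration, with the trial step
# chosen so the residual period drift stays below one Fourier bin.
```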
For our data, where the DM is a priori unknown, we are presently developing computationally-efficient algorithms which will permit us to greatly improve the sensitivity to binary pulsars by re-analysing these data in future. We note that the present analysis results in significantly reduced sensitivity to binary pulsars with orbital periods less than one day. We conclude this discussion with some remarks on the search sensitivity to very long-period ($P>5$ s) pulsars. The existence of radio pulsars with such periods is of great relevance to theories of pulsar emission, many of which predict that the emission ceases when the period crosses a critical value (see e.g. Chen & Ruderman 1993). Young et al. (1999) have recently demonstrated that the period of PSR J2144$-$3933, originally discovered in the Parkes Southern Sky Survey, is 8.5 s, three times that previously thought. This is presently the longest period known for a radio pulsar. Young et al. make the valid point that such pulsars could be very numerous in the Galaxy since they have very narrow emission beams and therefore radiate to only a small fraction of the celestial sphere. An additional factor here is that the number of pulses emitted by e.g. a 10-s pulsar during typical pulsar survey integration times is $\lesssim 30$. If the pulsar undergoes significant periods in the null state, as might be expected (Ritchings 1976), it will be harder to detect in an FFT-based search (Nice 1999). One way to tackle this problem is to employ longer integration times, such as we do here. The FFT-based periodicity search we use is, however, not an ideal means to find long-period signals, since the sensitivity is degraded by a strong “red noise” component in the amplitude spectrum. The noise itself is a result of DC-level fluctuations (e.g. in the receiver) during an observation. In the above analysis of the survey data, we minimised the effects of this red noise component by subtracting a baseline off the spectrum before normalising it. However, because of the rapid increase of the red noise below about 0.1 Hz, we chose to ignore all spectral signals with frequencies below this value. Whilst this is common practice in pulsar search codes, it obviously reduces our sensitivity to $P>10$ s pulsars! In recognition of this selection effect, we are currently re-analysing our data using a so-called “fast folding” algorithm (e.g. Staelin 1969) to search for periodic signals in the period range 3–20 s. The results of this analysis, and a detailed discussion of the algorithm, will be presented elsewhere (Müller et al. in preparation).

## 4 Survey results

A total of seven pulsars were detected during the course of the survey, four of which were previously unknown. Follow-up observations carried out to confirm the existence of each of the new pulsars were used to check that the true period had been correctly identified by the search code. The basic properties and detection statistics of all seven pulsars are summarised in Table 1. Flux values for the previously known pulsars are taken from Lorimer et al. (1995). Flux values for the newly discovered pulsars are averages of a number of independent measurements based on the timing observations described in Sect. 5, and have fractional uncertainties of about 30% in each case. The relative positions of all these pulsars are shown on our sensitivity curve in Fig. 2.
The astute reader will, by now, have noticed a striking similarity between the periods of PSRs J1842–0415 and J1844–0310 and, to a lesser extent, PSRs J1845–0316 and J1841–0345. This unexpected result initially gave us some cause for concern as to whether the signals we had detected were indeed pulsars! However, having thoroughly investigated each new pulsar, we are now confident that this is nothing more than a bizarre coincidence. A number of independent facts confirm this. Firstly, all the new pulsars are separated by a significant number of telescope pointings on the sky. Secondly, the periods are detected only at the nominal position of each pulsar, and therefore cannot be put down to terrestrial interference. Furthermore, all the dispersion measures are significantly different. Finally, our timing measurements show that each pulsar has a distinct set of spin-down parameters. We note in passing that this survey places an upper limit on the pulsed radio emission from the 6.97-s anomalous X-ray pulsar J1845.0$-$0300 discovered by Torii et al. (1998), which lies in the search region. No radio pulsations were seen at the grid position closest to this pulsar, setting a 1400-MHz pulsed flux limit of $0.3(\delta/4)^{1/2}$ mJy, where $\delta$ is the pulse duty cycle in percent. This limit assumes (possibly incorrectly) that the effects of interstellar scattering are negligible along this line of sight at this observing frequency. Deeper radio searches for this object, and also for the 11.8-s pulsar in Kes 73 (Vasisht & Gotthelf 1997), should be carried out in future at different observing frequencies.

## 5 Follow-up observations

In order to obtain more detailed spin and astrometric parameters of the newly-discovered pulsars, following confirmation, each was included in our monthly $\lambda$ 21-cm timing observations of millisecond pulsars using the Effelsberg-Berkeley Pulsar Processor. Full details of the observing procedures are described by Kramer et al. (1999). In brief, during each observing session a pulse time-of-arrival (TOA) measurement is obtained for each pulsar by cross-correlating the observed pulse profile with a high signal-to-noise “template” profile constructed from the addition of many observations. The template profiles obtained in this way are presented in Fig. 4. For each pulsar, the TOAs obtained from all the sessions were referred to the equivalent time at the solar system barycentre and fitted in a bootstrap fashion to a simple spin-down model using the tempo software package (available from http://pulsar.princeton.edu/tempo). In Fig. 5 we present the resulting TOA residuals (the differences between the observed and model TOAs) from this analysis. The phase-coherent timing solutions we obtain for each pulsar indicate that they are all solitary objects. The fitted parameters are summarised in Table 2. A sub-arcsecond position has been determined for PSR J1842–0415, for which the baseline of timing observations already spans over a year. The remaining pulsars have timing baselines spanning just over 6 months. This is, however, sufficient to decouple the covariant effects of position error and spin-down and, as a result, we have determined accurate period derivatives for each pulsar. Table 2 also lists the characteristic ages ($\tau_c$) and surface magnetic field strengths ($B$) inferred from the measured periods and period derivatives (see e.g. Manchester & Taylor 1977 for definitions of these parameters).
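Both quantities follow directly from the measured $P$ and $\dot{P}$ through the standard definitions $\tau_c = P/(2\dot{P})$ and $B \simeq 3.2\times 10^{19}\,(P\dot{P})^{1/2}$ G; a minimal sketch, with purely illustrative input values rather than the entries of Table 2:

```python
SEC_PER_YEAR = 3.156e7   # approximate number of seconds in a year

def characteristic_age_yr(p, p_dot):
    """Characteristic age tau_c = P / (2 Pdot), converted to years."""
    return p / (2.0 * p_dot) / SEC_PER_YEAR

def surface_field_gauss(p, p_dot):
    """Magnetic-dipole surface field estimate B ~ 3.2e19 sqrt(P Pdot) G."""
    return 3.2e19 * (p * p_dot) ** 0.5

# Illustrative values only: a pulsar with P = 1 s and Pdot = 1e-14 s/s
# has tau_c ~ 1.6 Myr and B ~ 3.2e12 G.
print(characteristic_age_yr(1.0, 1e-14), surface_field_gauss(1.0, 1e-14))
```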
In addition, we list the distance ($D$) to each pulsar inferred from its DM, its Galactic coordinates and the Taylor & Cordes (1993) electron density model, as well as the 1400-MHz luminosities $L_{1400}=S_{1400}D^{2}$ inferred from these distances and the observed flux densities. It is significant that five of the seven pulsars detected in this survey (including all the newly-discovered pulsars) have characteristic ages below 0.5 Myr, over an order of magnitude younger than the median age of the normal pulsars detected by the Parkes 70-cm Southern Sky Survey (Manchester et al. 1996; Lyne et al. 1998). This result should not be surprising when it is realised that we have preferentially selected a sample of objects located close to their birth sites along the Galactic plane (Clifton et al. 1992; Johnston et al. 1992a). By far the youngest of the new discoveries is PSR J1841–0345, which has a characteristic age of only 55 kyr. Since this is within the mean lifetime of supernova remnants (60 kyr; Frail et al. 1994), we checked the positions of this and the other newly discovered pulsars against the most recent catalogue of supernova remnants (Green 1998) for spatial coincidences. No supernova remnants in the catalogue lie within 0.3 degrees of J1841–0345, or indeed of any of the other new pulsars. In their study of pulsar-supernova remnant associations, Frail et al. (1994) undertook a programme of deep radio imaging to search for previously undetected supernova remnants around several of the young pulsars from the Johnston et al. (1992a) survey. Using the accurate positions we obtained from the timing analysis, we examined the NRAO VLA Sky Survey (NVSS; Condon et al. 1998) images of the fields surrounding each pulsar for evidence of diffuse $\lambda$ 20-cm emission which could be attributed to uncatalogued supernova remnants. The only pulsar for which any diffuse emission is evident in the NVSS survey (down to its 1-mJy sensitivity limit) is J1845–0316, shown in Fig. 6. It is presently not at all obvious whether this emission is attributable to a supernova remnant associated with this pulsar, simply because there is such a high density of similar radio sources in this region of the sky. As a result, the chance probability of finding unrelated diffuse radio emission, particularly in deeper images of this region, will be rather high, making it difficult to unambiguously identify any associated supernova remnant without additional information (e.g. independent distance estimates to the pulsar and the candidate remnant).

## 6 Conclusions

The discovery of four sub-mJy pulsars in the limited pilot search observations reported here clearly demonstrates the potential for future pulsar surveys with the Effelsberg radio telescope. As mentioned earlier, the main aim of this survey was to test the feasibility of finding pulsars with a new wide-band search system currently under development. This new system employs narrower channel bandwidths and much faster sampling rates than presently available; it will therefore have significantly improved sensitivity to short-period, highly dispersed pulsars. Now that the Parkes multibeam survey is extending its coverage out to $l=50^{\circ}$ (Lyne et al. 2000), there is little to be gained in using the new system at Effelsberg to initiate a large-scale $\lambda$ 21-cm search of the Galactic plane.
A targeted $\lambda$ 21-cm search of globular clusters, however, is a worthy scientific goal, since deep (several hour) integrations would achieve a substantially improved sensitivity over previous searches (see e.g. Biggs & Lyne 1996). Such a search would be particularly timely given the flurry of binary pulsar discoveries in a recent $\lambda$ 21-cm search of 47 Tucanae (Camilo et al. 2000b). Another excellent use of the new system would be a $\lambda$ 11-cm search for heavily scattered pulsars close to the plane. Such a search would open up an entirely new area of parameter space in Galactic plane searches, since it is known that many pulsars discovered at $\lambda$ 21-cm are still strongly affected by interstellar scattering. The strong inverse dependence of scattering on observing frequency means that the effects of scattering on a $\lambda$ 11-cm search would be an order of magnitude smaller than at $\lambda$ 21 cm. In the vicinity of the Galactic centre, where scattering is expected to be greatest (Cordes & Lazio 1997), the best prospects for finding pulsars still seem to be in searches carried out at 5 GHz ($\lambda$ 6-cm), or even higher frequencies (see e.g. Kramer et al. 1996; Kramer et al. 2000).

###### Acknowledgements.

We wish to thank Jiannis Seiradakis for help and encouragement during the survey observations, and Oleg Doroshenko for assistance with the timing observations. The constant support of the skilled and dedicated operators at Effelsberg, viz. Herren Koch, Marschner, Seidel, Georgi, Schlich and Bartel, was instrumental in our quest to finally find pulsars with the Effelsberg radio telescope. DRL would also like to thank Chris Salter for useful comments on an earlier version of the manuscript, and Bryan Gaensler for helpful discussions.
# Program Summary

Title of the program: phi3
Computer: Sun 19
Operating system: Unix
Programming language used: FORTRAN 77
Peripherals used: Laser printer
Number of lines in distributed program: 1510
Keywords: Nonperturbative, bound states, Feynman-Schwinger representation, Monte-Carlo, numerical evaluation.
Nature of physical problem: The program provided here evaluates the mass and distribution probabilities of the fully interacting n-body propagator ($n\le 3$) for the scalar $\chi^2\varphi$ interaction in 3+1 dimensions. The evaluation takes into account all self energy, vertex dressing, and exchange interaction contributions except those involving matter loops (the quenched approximation).
Method of solution: The Feynman-Schwinger representation approach is used to express the field theoretical Green’s function in terms of a quantum mechanical path integral. The resultant expression is evaluated using a Monte-Carlo simulation.
Restrictions of the program: Only the $n=1,2$, and $3$ body propagators are considered. The extension to $n\ge 4$ requires straightforward modifications of the program.
Typical running time: About 1 day for the 1-body propagator with the self energy.

## LONG WRITE-UP

## 1 Introduction

In nuclear physics one is often faced with problems that require nonperturbative methods. The best known example is the problem of bound states. Even if the underlying theory has a small coupling constant (as in QED), and therefore allows the use of perturbation theory in general, the treatment of bound states is inherently nonperturbative. The n-body bound state is defined by the pole of the interacting n-body propagator. A perturbative approximation of the n-body propagator does not reproduce the bound state pole location. Therefore it is essential that reliable nonperturbative methods that take all orders of interaction into account be developed. For this reason, numerous nonperturbative methods have been developed and successfully used in the literature. Some of the best known examples are lattice gauge theory (LGT) and relativistic bound state equations . In a recent paper we (along with J. Tjon and F. Gross) discussed yet another method, known as the Feynman-Schwinger representation (FSR). The basic idea in the FSR approach is to integrate out all fields at the expense of introducing quantum mechanical path integrals over the trajectories of particles. Replacing the path integrals over fields with path integrals over trajectories has an enormous computational advantage: the path integration over trajectories involves a variation of lines rather than of fields defined throughout a volume, and therefore the number of degrees of freedom is considerably smaller. The FSR approach differs from LGT in that it works in the space-time continuum, thereby maintaining Poincaré symmetry. On the other hand, it should be pointed out that the FSR approach is not without its drawbacks. In particular, how to extend the FSR approach to include fermions is not known. In the past, researchers have attacked the fermion problem using various approximations. An exact result involving fermions is an important problem and requires further study. While being able to calculate a nonperturbative result is by itself interesting, an additional motivation in studying the FSR approach is to determine which subsets of diagrams give the dominant contribution to the n-body propagator.
This is particularly important in determining what kinds of approximations are reasonable within the context of bound state equations. An example of how the FSR results can be used to compare different nonperturbative approximation schemes was presented in Refs. . In those works the emphasis was on the development of the formalism and its application to the 1- and 2-body propagators. In Ref. , the FSR prediction for the 1-body mass in scalar QED was compared with the rainbow Dyson-Schwinger equation prediction. It was found that while the FSR approach provides a real mass pole for all coupling strengths, the Dyson-Schwinger equation provides a complex mass pole beyond a critical coupling strength. Furthermore, it was found that, for scalar QED in 0+1 dimensions, the vertex corrections to the exchange interaction do not contribute to the 2-body binding energy . These examples demonstrate the potential usefulness of the FSR approach. The knowledge of nonperturbative propagators and vertices provided by the FSR is valuable as input for testing and improving the modeling of other nonperturbative approaches such as Dyson-Schwinger equations. The two- and three-body bound state sectors provide further possibilities for the application of the FSR formalism. In particular, it is important to see how various bound state equations (such as the Bethe-Salpeter, Gross (spectator), Blankenbecler-Sugar, equal-time, and nonrelativistic equations) compare with the quenched FSR results. Applications of the FSR approach to 2- and 3-body states, with comparisons to various bound state equation results in $\chi^2\varphi$ theory, are currently under study and will be presented in a separate article. In this paper we present a complete numerical algorithm for the evaluation of $n$-body masses ($n\le 3$) and distribution probabilities for the scalar $\chi^2\varphi$ interaction in 3+1 dimensions. By providing this algorithm we intend to facilitate the comparison of various nonperturbative methods with the exact quenched results in $\chi^2\varphi$ theory. The organization of the paper is as follows: in the next section we present a brief summary of the FSR formalism; in the third section, using various 1-, 2-, and 3-body cases as examples, we discuss how results are obtained; and in the fourth section we explain the components of the program.

## 2 The Feynman-Schwinger representation for scalar fields

We consider the theory of charged scalar particles $\chi$ of mass $m$ interacting through the exchange of a neutral scalar particle $\varphi$ of mass $\mu$.
The Euclidean Lagrangian for this theory is given by

$$\mathcal{L}_E=\chi^{*}\left[m^2-\partial^2+g\varphi\right]\chi+\frac{1}{2}\varphi\left(\mu^2-\partial^2\right)\varphi.$$ (1)

The two-body Green’s function for the transition from the initial state $\Phi_i=\chi^{*}(x)\chi(\bar{x})$ to the final state $\Phi_f=\chi^{*}(y)\chi(\bar{y})$ is given by

$$G(y,\bar{y}\,|\,x,\bar{x})=N\int\mathcal{D}\chi^{*}\,\mathcal{D}\chi\,\mathcal{D}\varphi\;\Phi_f^{*}\,\Phi_i\,e^{-S_E}.$$ (2)

The final result for the two-body propagator involves a quantum mechanical path integral that sums up contributions coming from all possible trajectories of the particles,

$$G=\int_0^{\infty}ds\int_0^{\infty}d\bar{s}\int(\mathcal{D}z)_{xy}(\mathcal{D}\bar{z})_{\bar{x}\bar{y}}\;e^{-S[Z]},$$ (3)

where $S[Z]$ is given by

$$S[Z]\equiv -iK[z,s]-iK[\bar{z},\bar{s}]+V_0[z,s]+2V_{12}[z,\bar{z},s,\bar{s}]+V_0[\bar{z},\bar{s}],$$ (4)

with

$$K[z,s]=(m^2+i\epsilon)\,s-\frac{1}{4s}\int_0^1 d\tau\,\frac{dz_\mu(\tau)}{d\tau}\frac{dz^\mu(\tau)}{d\tau},$$ (5)

$$V_0[z,s]=\frac{g^2}{2}\,s^2\int_0^1 d\tau\int_0^1 d\tau'\,\Delta\big(z(\tau)-z(\tau'),\mu\big),$$ (6)

$$V_{12}[z,\bar{z},s,\bar{s}]=\frac{g^2}{2}\,s\bar{s}\int_0^1 d\tau\int_0^1 d\bar{\tau}\,\Delta\big(z(\tau)-\bar{z}(\bar{\tau}),\mu\big).$$ (7)

Here the $V_0[z,s]$ term represents the self energy contribution, while the $V_{12}[z,\bar{z},s,\bar{s}]$ term represents the exchange interaction (Fig. 1). The interaction kernel $\Delta(x,\mu)$ is defined by

$$\Delta(x,\mu)=\int\frac{d^4p}{(2\pi)^4}\,\frac{e^{ipx}}{p^2+\mu^2}=\frac{\mu}{4\pi^2|x|}\,K_1(\mu|x|).$$ (8)

While we present expressions for the 2-body case, the generalization to an arbitrary n-body system is trivial. The bound state spectrum can be determined from the spectral decomposition of the two-body Green’s function

$$G(T)=\sum_{n=0}^{\infty}c_n\,e^{-m_nT},$$ (9)

where $T$ is defined as the average time between the initial and final states,

$$T\equiv\frac{1}{2}\left(y_4+\bar{y}_4-x_4-\bar{x}_4\right).$$ (10)

In the limit of large $T$, the ground state mass is given by

$$m_0=-\lim_{T\to\infty}\frac{d}{dT}\ln[G(T)]=\frac{\int\mathcal{D}Z\;S'[Z]\,e^{-S[Z]}}{\int\mathcal{D}Z\;e^{-S[Z]}},$$ (11)

where $S'[Z]\equiv dS[Z]/dT$. While this result is in principle correct, the convergence to the asymptotic mass is slow due to the continuum contribution. The spectrum of the particle involves the mass pole and a cut beyond this pole representing the continuum contribution. Assuming that the constituents of the bound state are restricted to be at equal times in the initial and final states, the Green’s function can be written as a function of the time $T$, the total displacement $\mathbf{R}$, and the relative coordinate $\mathbf{r}$: $G(T,\mathbf{R},\mathbf{r})$ (the dependence on the initial relative coordinate $\mathbf{r}_0$ is implicit). In order to eliminate the contribution of continuum states, introduce the Fourier transform

$$\tilde{G}(T,\mathbf{P},\mathbf{p})=\int d^3\mathbf{R}\;d^3\mathbf{r}\;e^{i\mathbf{P}\cdot\mathbf{R}+i\mathbf{p}\cdot\mathbf{r}}\,G(T,\mathbf{R},\mathbf{r}),$$ (12)

where $\mathbf{P}$ is the CM momentum and $\mathbf{p}$ is the relative momentum between the particles. Setting both $\mathbf{P}=\mathbf{p}=0$ one has

$$\tilde{G}(T,0,0)=\int d^3\mathbf{R}\;d^3\mathbf{r}\;G(T,\mathbf{R},\mathbf{r}),$$ (13)

which eliminates the contribution of the continuum and projects out the s-wave state.
While an integration over $\mathbf{r}$ is not necessary for the elimination of the continuum contribution, it is useful in eliminating the contribution of states with nonzero orbital angular momentum. While the result for the Green’s function, Eq. (3), is exact in the quenched approximation, due to its oscillatory behavior it is not appropriate for Monte-Carlo simulation. In Ref. it was shown that one can perform a Wick rotation in the variable $s$ to avoid these oscillations. In the limit $g^2\to 0$ the dominant contribution to the integral in Eq. (3) can be shown, by using the saddle point method, to come from

$$s=is_0=i\frac{T}{2m}.$$ (14)

Since large $s$ values do not contribute to the integral even without the interaction term, it is a good approximation to suppress the $g^2$ term at large $s$ values. While this suppression is done, it is important that the integrand is not modified in the region of dominant contribution, $s\approx is_0$. This can be achieved by scaling the $s$ variable, in the interaction term only, by

$$s\to\frac{s}{R(s,s_0)},$$ (15)

where

$$R(s,s_0)\equiv 1-\frac{(s-is_0)^2}{\Gamma^2}.$$ (16)

In the free case ($g^2=0$), the width $W$ of the region of dominant $s$ contribution goes as

$$W=\sqrt{\frac{T}{2m^3}}.$$ (17)

Therefore, in the free case the dominant contribution to the $s$ integral comes from $i(s_0-W)<s<i(s_0+W)$. This claim is supported by the Monte-Carlo simulation results. In Fig. 2 we present the results for the $s$-distributions for two different coupling strengths, for time $T=40$ and $m=1$ GeV. According to the estimate given above, Eq. (17), for $g=0$ the dominant contribution to the $s$-distribution comes from the region $15.53<s<24.47$, which is in agreement with the result presented in Fig. 2. In order to ensure that the scaling given in Eq. (15) does not make a significant change in the region of dominant contribution, $\Gamma$ should be chosen such that

$$\Gamma\gtrsim W.$$ (18)

As one increases the coupling strength, the value of $s_0$ deviates from its free value. Therefore, in general, $s_0$ has to be determined self-consistently by monitoring the peak of the $s$ distribution. In Figure 2 we display the $s$-distribution for two different coupling strengths. It is seen that as the coupling strength is increased, the peak of the $s$-distribution moves towards higher $s$ values. In general the peak of the distribution can be parameterized as

$$s_0=C\frac{T}{2m}.$$ (19)

The dependence of $C$ on the coupling strength $g^2$ is determined self-consistently. In Fig. 3, $C$, which gives the location of the stationary point through Eq. (19), is plotted as a function of the coupling strength $g^2$. According to Fig. 3, it is not possible to find a self-consistent stationary point beyond the critical coupling strength of $g^2=31$ GeV$^2$. A similar critical behavior was also observed in Refs. within the context of a variational approach. The insensitivity of the dressed mass to the width $\Gamma$ has been investigated, and it was found that the choice $\Gamma^2=2W^2$ is satisfactorily large. The results presented here employ this value of $\Gamma$. The prescription given by Eqs.
(15) and (16) enables one to perform a Wick rotation in the variable $s$ and obtain a finite and non-oscillatory expression for the fully interacting two-body propagator:

$$G=\int_0^{\infty}ds\int_0^{\infty}d\bar{s}\int(\mathcal{D}z)_{xy}(\mathcal{D}\bar{z})_{\bar{x}\bar{y}}\,\exp\left[-K[z,s]-K[\bar{z},\bar{s}]+V_0[z,s_r]+V_0[\bar{z},\bar{s}_r]+2V_{12}[z,\bar{z},s_r,\bar{s}_r]\right],$$ (20)

where

$$s_r\equiv\frac{s}{R(s,s_0)}.$$ (21)

The path integral is discretized using

$$(\mathcal{D}z)_{xy}\to\left(\frac{N}{4\pi s}\right)^{2N}\prod_{i=1}^{N-1}d^4z_i,$$ (22)

where the $s$-dependence is critical in obtaining the correct normalization. Note that the integration over the final coordinates is not included in this expression. The discretized versions of the kinetic and interaction terms are given by

$$K[z,s]\to(m^2+i\epsilon)\,s-\frac{N}{4s}\sum_{i=1}^{N}(z_i-z_{i-1})^2,$$ (23)

$$V_0[z,s]\to\frac{g^2s^2}{2N^2}\sum_{i,j=1}^{N}\Delta\Big(\tfrac{1}{2}(z_i+z_{i-1}-z_j-z_{j-1}),\mu\Big),$$ (24)

$$V_{12}[z,\bar{z},s,\bar{s}]\to\frac{g^2s\bar{s}}{2N^2}\sum_{i,j=1}^{N}\Delta\Big(\tfrac{1}{2}(z_i+z_{i-1}-\bar{z}_j-\bar{z}_{j-1}),\mu\Big).$$ (25)

We next address the regularization of the ultraviolet (short distance) singularities. The ultraviolet singularity in the kernel $\Delta(x,\mu)$ of Eq. (8) can be regularized using a Pauli-Villars prescription. In order to do this one replaces the kernel by

$$\Delta(x,\mu)\to\Delta(x,\mu)-\Delta(x,\alpha\mu),$$ (26)

where $\alpha$ is in principle a large constant. The ultraviolet singularity in the interaction is of the type

$$\int dz\,z\,\Delta(z,\mu).$$ (27)

At short distances the kernel $\Delta(z,\mu)$ goes as $1/z^2$. Therefore, the self energy calculation involves a logarithmic-type singularity, which the Pauli-Villars subtraction removes. The Pauli-Villars regularization is particularly convenient for Monte-Carlo simulations since it only involves a modification of the kernel. In order to achieve efficient convergence in the numerical simulations we use a rather small cut-off parameter, $\alpha=3$. This choice leads to a less singular kernel. The value of $\alpha$ can be increased arbitrarily at the cost of additional computational time. It should be noted that while we employ a Pauli-Villars regularization throughout this paper, in general the bound state problem without self energies does not have any UV singularities; UV singularities are only associated with the self energies of the particles. After this brief summary of the formalism, in the next section we present some of the results and discuss the details of the algorithm.

## 3 Running the code: applications

The Monte-Carlo simulation starts by choosing an initial configuration for the trajectories of the particles. The choice of the initial trajectory is arbitrary (except at the end points). In Fig. 4 the evolution of the action as a function of the number of updates is displayed for two different initial conditions. Starting with a random initial trajectory is analogous to starting from a high-temperature system, and the thermalization (reaching the ground state) takes a long time (about 1000 updates).
However, if one starts with a classical free trajectory, the initial configuration is analogous to a frozen system without any fluctuations, and the thermalization results in an increase of the action. Asymptotically both results should converge, as shown in Fig. 4. We usually start with an orderly system and disregard the first 1000 updates. The step sizes of the random walker in configuration space and in $s$ space should be chosen such that the average acceptance rate of the Monte-Carlo updates is about 50%. In each run we typically make 500000 updates of the trajectories. In order to reduce the statistical errors, runs must be repeated (usually more than 10 times) using different random number seeds. The correlation function $X(n)$ of the sampled configurations in each run is defined as

$$X(n)\equiv\frac{\langle m(i)\,m(i+n)\rangle-\langle m\rangle^2}{\langle m^2\rangle},$$ (28)

where $m(i)$ is the mass measurement at the $i$’th update. The correlation function $X(n)$ measures how the information about a given configuration is lost as a function of the number of updates $n$. In Figure 5 we show the correlation function for the 1-body problem. According to Fig. 5, the number of updates necessary for the correlation between sampled trajectories to vanish is around $n=1000$. The number of configurations sampled in our simulations, which is around 500000, is well above the correlation length of 1000, thereby ensuring that uncorrelated trajectories are sampled during the Monte-Carlo integration. The time $T$ required to reach the asymptotic limit increases as $g^2$ increases. For $g^2=25$ the asymptotic value of the mass, given by Eq. (11), is obtained around $mT=40$. In particular, for the result shown in Fig. 6, as $g^2$ ranged from $g^2=0$ GeV$^2$ to $g^2=31$ GeV$^2$ the asymptotic time values used were increased from $mT=35$ to $mT=45$. As the time $T$ is increased, the number of steps $N$ should also be proportionally increased so that the step size of the particle trajectory remains the same. As one increases the coupling strength $g^2$, the trajectories of the particles deviate from the classical trajectory to a greater degree. This increase in fluctuations requires that one use a higher $N$ value. The number of steps $N$ that particles take between the initial and final coordinates is typically chosen to be $35<N<45$. In Fig. 6 the $g^2$ dependence of the one-body dressed mass is presented. The 1-body masses presented here are lower than those in Ref. . This is due to the fact that in that work $C$ was assumed to be approximately 1. However, it was subsequently realized that $C$ deviates from 1 significantly (see Fig. 3) as the coupling strength is increased, and a self-consistent determination of $C$ is important to pin down the critical point. According to Fig. 3 there is no self-consistent stationary point, Eq. (19), beyond the critical coupling strength $g^2=31$ GeV$^2$, and the 1-body dressed mass becomes unstable for $g^2>31$ GeV$^2$ (Fig. 6). The next two applications involve the two- and three-body bound states of equal-mass particles. While the algorithm provided is capable of taking into account all self energy and vertex dressing corrections to the bound state, in the bound state applications presented here the self energy contributions of the particles were not taken into account. This is controlled by a switch in the input file (see Table 4.1 for the input options). In Fig. 7 the $g^2$ dependence of the two-body bound state mass is presented.
During the Monte-Carlo simulation the radial distributions of the particles are stored in histograms. In general, for an $`n`$-body bound state there are $`n(n-1)/2`$ relative distances. Since the particles are assumed to have equal time coordinates in the final state, the relative distance is essentially the spatial distance. Let $`r_{ij}=|r_i-r_j|`$ be the distance between particles $`i`$ and $`j`$. The histogram $`P_{ij}(r_{ij})`$ then stores the number of final-state configurations sampled in the interval $`(r_{ij}-\delta/2,r_{ij}+\delta/2)`$, where $`\delta`$ is the bin size. The range of $`r_{ij}`$ values is specified in the program using
$$0<r_{ij}<\frac{20}{m_i+m_j}\;\mathrm{Fermi}.$$ (29)
The choice of range is arbitrary as long as a reasonably smooth histogram is produced. The number of bins in each histogram is chosen to be 100. Therefore, for two particles of mass 1 GeV the maximum radius to be histogrammed is 10 Fermi, and the size of each bin is $`\delta=0.1`$ Fermi. In Fig. 8 we present the projection of the radial probability distribution onto a two-dimensional surface. The two-body probability distribution presented in Fig. 8 shows the result for $`g^2=25`$ GeV<sup>2</sup>. Notice that the probability distribution vanishes at the origin. This is due to the phase-space factor $`4\pi r^2`$. In order to be able to compare the probability distribution with a wave function, the phase-space factor must be divided out. In general, the radial wave function amplitude $`\Psi(r)`$ is given by
$$|\Psi(r)|=\sqrt{\frac{P(r)}{4\pi r^2}}.$$ (30)
The normalization of the probability distribution histogram presented in Fig. 9 is arbitrarily fixed such that the maximum entry is equal to 1. In generating the surface plot (Fig. 8), it is assumed that one of the particles is fixed at the origin. The amplitude of the surface gives the probability of finding the second particle at a distance $`r`$. While a surface plot is not necessary for a visual understanding of a two-body bound state, the three-body bound state demands a three-dimensional plot. In the three-body case there are three relative coordinates. In order to be able to plot the three-body probability distribution, we fix the locations of two of the particles along the $`y`$ axis and calculate the probability of finding the third particle at an arbitrary distance from the two fixed particles. In Fig. 10 we present the probability distribution of the third particle for a given fixed configuration of the first and second particles. Assume that the fixed particles are particle 1 and particle 2. The probability distribution of the third particle is then given by
$$P_3(|\mathbf{r}_3|)\propto P_{13}(|\mathbf{r}_3-\mathbf{r}_1|)\,P_{23}(|\mathbf{r}_3-\mathbf{r}_2|).$$ (31)
The probability distribution $`P_3(|\mathbf{r}_3|)`$ includes the phase-space contribution. Therefore $`P_3(|\mathbf{r}_3|)`$ represents the probability of finding the third particle in a ring, as shown in Fig. 11. For example, in the first plot of Fig. 10 the two fixed particles are so close to each other that the third particle sees them as a single point particle. Therefore the probability distribution in the first plot of Fig. 10 is very similar to the two-body distribution given in Fig. 8. However, as the fixed particles are separated from each other, the third particle develops a nonzero probability of being between the two fixed particles (second and third plots of Fig. 10). Eventually, when the two fixed particles are far apart, the third particle has a nonzero probability distribution only near the origin (the last plot shown in Fig. 10). In Fig. 12 the two-body distribution function $`P_{12}`$ in a three-body system is shown. All of these plots were produced with three equal-mass particles of mass $`1`$ GeV and a coupling strength of $`g^2=64`$ GeV<sup>2</sup>.
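The conversion from a stored histogram to a wave-function amplitude via Eq. (30) is simple enough to sketch explicitly (Python; an illustration, not part of the distributed code; the bin count of 100 and the 10 Fermi range for two 1 GeV particles are the defaults quoted above):

```python
import numpy as np

def radial_wavefunction(P, rmax=10.0):
    """Turn a radial histogram P(r) into |Psi(r)| via Eq. (30),
    |Psi| = sqrt(P / (4 pi r^2)), evaluated at bin centers."""
    nbins = len(P)                        # program default: 100 bins
    delta = rmax / nbins                  # bin size, e.g. 0.1 Fermi
    r = (np.arange(nbins) + 0.5) * delta  # bin centers avoid r = 0
    psi = np.sqrt(np.asarray(P, float) / (4.0 * np.pi * r**2))
    return r, psi / psi.max()             # normalization is arbitrary (cf. Fig. 9)
```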
## 4 Program details

In this section we discuss the details of the program. We start by providing the tables of input parameters (Table 4.1) and arrays (Table 4.2), and then summarize the components of the program.

### 4.1 Description of input parameters

| Parameter | Description |
| --- | --- |
| SIG = 1 GeV | Arbitrary momentum scale |
| IPAR(I), I=1,2,3 | Particles present (0=n, 1=y) |
| IEXCH(I,I), I=1,2,3 | Self interactions (0=n, 1=y) |
| IEXCH(I,J), $`(I,J)=(1,2),(1,3),(2,3)`$ | Exchange interactions (0=n, 1=y) |
| QM(I) | Mass of particle I |
| G | The coupling strength |
| XMU | The exchange mass $`\mu`$ |
| ALPHA | The Pauli-Villars mass ratio: $`m_{pv}=\alpha\mu`$ |
| BETA = 1 | The width of $`R(s,s_0)\equiv 1-\beta(s-iC\hat{s}_0)^2/T^{\gamma}`$ |
| GAMMA = 1 | The $`T`$ dependence of the width |
| C | The peak location of the $`s`$ distribution, $`s=C\hat{s}_0`$; C is determined self-consistently |
| N | Number of steps along the trajectory |
| NSMPL | Number of sampled trajectories in the MC integration |
| NVOID | Number of uncounted samples used for the initial thermalization |
| ZSTEP | Maximum step size of the random walker in coordinate space |
| SSTEP | Maximum step size of the random walker in $`s`$ space |
| EPSA | Short-distance (UV) cut-off preventing an explicit occurrence of a 1/0.0 type of singularity; this is not a regulator, however: the UV regularization is done using the Pauli-Villars subtraction |
| IDUM | The seed of the random number generator |
| Z(I,0,J) | Initial coordinates of particle I=1,2,3 |
| Z(I,N,J) | Final coordinates of particle I=1,2,3 |
| IWRTA, IWRTM | Action, mass to be stored in a file? (0=n, 1=y) |
| INTOUT | Integrate over final coordinates? |

### 4.2 Arrays

| Array | Description |
| --- | --- |
| QM(3) | Particle masses |
| IPAR(3) | Determines which particles are present |
| Z(3,0:400,4) | The trajectories of the particles |
| ZNEW(3,0:400,4) | The updated trajectories of the particles |
| SMAX(3) | The maximum $`s`$ value to be histogrammed (not an integration cut-off) |
| RMAX(3,3) | The maximum relative distance between particles to be histogrammed (not an integration cut-off) |
| SHIST(3,200) | Histogram of $`s`$ values |
| RHIST(3,3,200) | Histogram of relative distances between particles |
| SUMK(3) | Kinetic energies of particles |
| SUMV(3,3) | Potential energies of particles: SUMV(I,I) is the self energy of the I'th particle, SUMV(I,J) the exchange energy between the I'th and J'th particles |
| SUMKN(3) | Updated kinetic energies |
| SUMVN(3,3) | Updated potential energies |

### 4.3 Main Program

The main program starts by calling the INPUT subroutine, which reads the input parameters described in Table 4.1. Next, subroutine XINTGR is called to perform the Monte-Carlo simulation. In the following, the role of each subroutine and function is explained.

### 4.4 Subroutine INPUT

In this subroutine the input parameters of Table 4.1 are read. In addition to reading these parameters, the histograms for the radial and $`s`$ distributions and the initial trajectories of the particles are initialized. Initialization of the trajectories is done using the classical free trajectories of the particles.
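The subsections that follow (4.5 and 4.6) describe how XINTGR and UPDATE drive the sampling; the control flow they implement amounts to the minimal sketch below (Python, for orientation only: the actual program is the Fortran code described here, updates one coordinate and one $`s`$ parameter at a time, and keeps the trajectory end points fixed).

```python
import numpy as np
rng = np.random.default_rng(1)   # plays the role of the IDUM seed

def run(action, z, nvoid=5000, nsmpl=500000, zstep=2.1):
    """Thermalize for NVOID updates, then accumulate NSMPL samples
    drawn from the weight exp(-S[z]) via Metropolis accept/reject."""
    S, samples = action(z), []
    for sweep in range(nvoid + nsmpl):
        znew = z + rng.uniform(-zstep, zstep, size=z.shape)
        znew[0], znew[-1] = z[0], z[-1]          # end points stay fixed
        Snew = action(znew)
        # accept if r = exp(-(S'-S)) >= 1, otherwise with probability r
        if Snew <= S or rng.random() < np.exp(-(Snew - S)):
            z, S = znew, Snew
        if sweep >= nvoid:                        # first NVOID updates discarded
            samples.append(z.copy())
    return samples

# illustrative usage with a free discrete kinetic action on a (N+1) x 4 lattice:
# run(lambda z: 0.5 * np.sum((z[1:] - z[:-1])**2), np.zeros((36, 4)), nsmpl=1000)
```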
### 4.5 Subroutine XINTGR

This subroutine controls the basic steps of the Monte-Carlo sampling process. First, the kinetic and potential sums corresponding to the initial trajectories of the particles are calculated by calling the XSUMS subroutine. The next step is the thermalization process: the configuration of trajectories is updated NVOID times by calling the UPDATE subroutine, and these first NVOID updates are not used in the actual calculation of the bound-state mass or the probability distribution. After thermalization, the sampling is done NSMPL times. The results for the $`s`$ values and the relative distances of the particles are histogrammed. Finally, the bound-state mass is calculated.

### 4.6 Subroutine UPDATE

This subroutine is responsible for updating the current configuration of trajectories. Each coordinate and each $`s`$ parameter is updated once. Sampling is done according to the distribution
$$e^{-S[z]}.$$ (32)
If the ratio $`r`$,
$$r=e^{-(S[z^{\prime}]-S[z])},$$ (33)
is larger than 1, the update is always accepted. When $`r<1`$, the update is accepted with probability $`r`$.

### 4.7 Subroutine XSUMS

This subroutine calculates the kinetic and potential sums before the configuration updates start.

### 4.8 Subroutine ACTION

This subroutine calculates the action using the known values of the kinetic and potential sums. The action is stored in a file to monitor the thermalization (see Fig. 4).

### 4.9 Subroutine UPDSUM

Since the calculation of the kinetic and potential sums is a costly operation, after every update we calculate only the shift in the sums. Using this shift, the sums are updated.

### 4.10 Function XDERIV

This function calculates the derivative operator $`S^{\prime}[Z]`$ of Eq. (11), which gives the mass of the bound state.

### 4.11 Functions DELTA, DELTAP

These functions calculate, respectively, the interaction kernel of Eq. (8) and its time derivative.

### 4.12 Function DLARAN

This is a random number generator obtained from the LAPACK package at Netlib. DLARAN returns a random real number from a uniform (0,1) distribution. In the actual implementation of the program we used the Numerical Recipes random number generator ran2; for copyright reasons we provide this alternative random number generator here.

### 4.13 Functions BESK0, BESK1, KZEONE

BESK0 and BESK1 are the modified Bessel functions $`K_0(x)`$ and $`K_1(x)`$, which are needed in the calculation of the interaction kernel and its derivative. BESK0 and BESK1 call the KZEONE subroutine , which returns the real and imaginary parts of $`e^xK_0(x)`$ and $`e^xK_1(x)`$. The KZEONE subroutine is considerably slower than the corresponding Numerical Recipes routines; for copyright reasons we provide KZEONE rather than the Numerical Recipes routines for the modified Bessel functions.

## 5 Conclusions

In this paper, using the Feynman-Schwinger representation, we have presented an algorithm that calculates 1-, 2-, and 3-body masses and probability distributions in the quenched approximation for $`\chi^2\varphi`$ theory in 3+1 dimensions. The FSR approach provides an efficient method for calculating nonperturbative propagators. In this work we have presented results of applications to the 1-, 2- and 3-body states. A detailed comparison of the quenched bound-state results of the FSR approach with bound-state equation predictions is under study and will be presented in a separate physics article .
It is hoped that, through comparison, this simple and rigorous nonperturbative method will enhance our understanding of various nonperturbative bound state models and approximations. Acknowledgements We are grateful to F. Gross, and J. Tjon for discussions. The support of the DOE through grant No. DE-FG02-97ER41032 is gratefully acknowledged. The Thomas Jefferson National Accelerator Facility is gratefully acknowledged for warm hospitality and for providing computer resources. ## 6 Test run INPUT | 1.0 | SIG (GeV) | | --- | --- | | 1 0 0 | IPAR(1), (2), (3) | | 1 0 0 | IEXCH(1,1)(2,2)(3,3) | | 0 0 0 | IEXCH(1,2)(1,3)(2,3) | | 1.0 | QM(1) | | 1.0 | QM(2) | | 1.0 | QM(3) | | 2.0 | G | | 0.15 | XMU | | 3.0 | ALPHA | | 0.5 | BETA | | 1.0 | GAMMA | | 1.0 | C | | 35 | N | | 500000 | NSMPL | | 5000 | NVOID | | 2.1 | ZSTEP | | 4.5 | SSTEP | | 1.0D-4 | EPSA | | 1 | IDUM | | 0. 0. 0. 0. | Initial coordinates | | 1. 1. 0. 0. | | | -1. 1. 0. 0. | | | 0. 0. 0. 35. | Final coordinates | | 1. 1. 0. 35. | | | -1. 1. 0. 35. | | | 0 0 | IWRTA, IWRTM | | 1 | INTOUT | ## 7 OUTPUT | SIG | 1.0 GeV | | --- | --- | | Particles present | 1 0 0 | | Self interactions | 1 0 0 | | Exchange ” 1-2 | 0 | | ” ” 1-3 | 0 | | ” ” 2-3 | 0 | | QM(1) | 1. GeV | | QM(2) | 1. GeV | | QM(3) | 1. GeV | | G | 2. GeV | | XMU | 0.15 GeV | | ALPHA | 3. | | BETA | 0.5 | | GAMMA | 1. | | C | 1. | | N | 35 | | NSMPL | 500000 | | NVOID | 5000 | | ZSTEP | 2.1 1/GeV | | SSTEP | 4.5 1/GeV<sup>2</sup> | | EPSA | 0.10E-03 GeV<sup>-1</sup> | | IDUM | 1 | | Initial coordinates: | 0 0 0 0 GeV<sup>-1</sup> | | | 1 1 0 0 GeV<sup>-1</sup> | | | -1 1 0 0 GeV<sup>-1</sup> | | Final coordinates: | 0 0 0 35 GeV<sup>-1</sup> | | | 1 1 0 35 GeV<sup>-1</sup> | | | -1 1 0 35 GeV<sup>-1</sup> | | INTOUT | 1 | | z update % | 50.33 | | s update % | 53.63 | | Bound state mass | 0.97$`\pm `$0.001 GeV |
no-problem/9910/quant-ph9910058.html
ar5iv
text
# Strengthening the Bell Theorem: conditions to falsify local realism in an experiment

## Abstract

The two-particle correlation obtained from the quantum state used in the Bell inequality is sinusoidal, but the standard Bell inequality uses only two pairs of settings and not the whole sinusoidal curve. The highest visibility to date of an explicit model reproducing sinusoidal fringes is $`2/\pi`$. We conjecture from the numerical approach presented in this paper that the highest possible visibility for a local hidden variable model reproducing the sinusoidal character of the quantum prediction for two-particle Bell-type interference phenomena is $`\frac{1}{\sqrt{2}}`$. In addition, the approach can be applied directly to experimental data.

It is common wisdom in the quantum-optics community that the threshold visibility of the sinusoidal two-particle interference pattern beyond which the Bell inequalities are violated is (for the case of perfect detectors) $`\frac{1}{\sqrt{2}}`$ (see, e.g., ). Most of the experiments exceed that limit (with the usual "fair sampling assumption") . Some difficulties in reaching this threshold were observed in the very early experiments , as well as in some recent ones involving novel techniques. Thus far, in atomic-interferometry EPR experiments and for the phenomenon of entanglement swapping , the resulting visibility is less than the magic $`71\%`$. It is also well known that the Clauser-Horne inequality and the CHSH inequality are not only necessary conditions for the existence of local realistic models but also sufficient ones (in the case of the CHSH inequality, this requires some simplifying assumptions ). However, the sufficiency proofs involve only two pairs of settings of the local macroscopic parameters (e.g. orientations of the polarizers) that define the measured local observables. Thus, the constructions are valid for precisely those settings and nothing more, and there is no guarantee that the models can be extended to more settings. Consequently, one may ask what is the maximal visibility of sinusoidal two-particle interference fringes that can be returned by a model applicable to all possible settings of the measuring apparata. It is already known that for perfect detectors this value cannot be higher than $`\frac{1}{\sqrt{2}}`$ or lower than $`2/\pi`$ (the latter is the visibility of the recent ad hoc model by Larsson ; for earlier models returning visibilities of $`50\%`$ see e.g. ). Knowledge of the maximal visibility of sinusoidal two-particle fringes in a Bell-type experiment that can still be fully modeled in a local realistic way may help us to distinguish better between "local" and "nonlocal" density matrices. For two two-state systems one can find precise conditions which have to be satisfied by density matrices describing the general state, pure or mixed, of the full system, that enable violation of the CHSH inequalities . States fulfilling such conditions are often called "nonlocal". However, since the CHSH inequality is necessary and sufficient only for two pairs of settings, it is not excluded a priori that some states that satisfy such inequalities for all possible sets of two pairs of local dichotomic observables nevertheless give predictions that in their entirety cannot be modeled by local hidden variables. Such models must first of all reproduce the full continuous sinusoidal variation of the two-particle interference fringes, as well as the other predictions.
It is clear that a full solution of the question would require a construction, or a proof of existence, of local hidden variable models which return sinusoidal fringes of the maximal possible visibility and are applicable to all possible settings of the measuring apparata. Since this seems to be very difficult, we chose a numerical method of pointwise approximation at a finite number of settings on each side of the experiment. Due to the exponential growth of the computation time when the number of settings increases, we managed to reach up to 9 settings on each side, i.e. up to 81 measurement points (which, due to a certain symmetry about which we will say more later, can effectively be transformed into $`18\times 18=324`$ points). The exponential growth hinders any substantial increase in this number. Such numerical models cannot give a definite answer concerning the critical visibility of sinusoidal fringes; however, our calculations enable us to put forward a strong conjecture that this value must indeed be $`\frac{1}{\sqrt{2}}`$ (see below). Experimentally our problem can be formulated in the following way: the two-particle state produced by the source does not allow for single-particle interference, and in the experiment less-than-perfect two-particle fringes are obtained, due to some fundamental limitations (like those present in the case of entanglement swapping, e.g. ) or due to imperfections of the devices. What is the critical two-particle interference visibility beyond which the observed process falsifies local realism? We shall ask these questions assuming, for simplicity, perfect detection efficiency; this is possible theoretically, and experimentally it thus far amounts to the usual "fair sampling assumption". Furthermore, the problem may be investigated without the assumption that the observed fringes are of a sinusoidal nature, even though the observed two-particle fringes in experiments with high photon counts follow the sinusoidal curves almost exactly . In experiments with lower count rates the recorded data still have, with a relatively good level of confidence, approximately the same character, and it is customary to fit them with sinusoidal curves. It is now a standard procedure to perform two-particle interference experiments by recording many points of the interference pattern, rather than stabilizing the devices at the measurement settings appropriate for the best violation of some Bell inequality. Further, in some of the experiments, e.g. those involving optical fiber interferometers , it is currently not possible to stabilize the phase differences, and what is observed is just the interference pattern changing in time; the visibility of the sinusoidal two-particle fringes is used as the critical parameter . Even though the numerical calculations presented here reach only $`18\times 18`$ points, this is more than enough in comparison with the experimental data: the usual experimental scans rarely involve more than 20 points. Further, our algorithm can be applied directly to the measurement data, and in that way one can even avoid the auxiliary hypothesis that the fringes follow a sinusoidal pattern. The algorithm can directly answer the question: are the data compatible with local realism or not? Since physics is an experimental science, questions about Nature get their final answers solely in this way. Let us now turn to a formal treatment of the problem and our numerical solution of it.
In a standard Bell-type experiment one has a source emitting two particles, each of which propagates towards one of two spatially separated measuring devices. The particles are described by a maximally entangled state, e.g.,
$$|\Psi\rangle=\frac{1}{\sqrt{2}}\left(|+\rangle_1|-\rangle_2-|-\rangle_1|+\rangle_2\right),$$ (1)
where $`|+\rangle_1`$ is the state of the first particle with its spin directed along the vector $`\mathbf{z}`$ of a certain frame of reference ($`|-\rangle`$ denotes the opposite direction), etc. We now assume that the measuring devices are Stern-Gerlach apparata measuring the observable $`\mathbf{n}\cdot\boldsymbol{\sigma}`$, where $`\mathbf{n}=\mathbf{a},\mathbf{b}`$ ($`\mathbf{a}`$ for the first observer, $`\mathbf{b}`$ for the second one) is a unit vector representing the direction along which observer $`n`$ makes a measurement, and $`\boldsymbol{\sigma}`$ is a vector whose components are the standard Pauli matrices. The family of observables $`\mathbf{n}\cdot\boldsymbol{\sigma}`$ covers all possible dichotomic observables for a spin-$`\frac{1}{2}`$ system, endowed with a spectrum consisting of $`\pm 1`$. In each run of the experiment every observer obtains one of the two possible results of measurement, $`\pm 1`$. The probability of obtaining the result $`m=\pm 1`$ at observer $`a`$, when measuring the projection of the spin of the incoming particle along the direction $`\mathbf{a}`$, and the result $`l=\pm 1`$ at observer $`b`$, when measuring the projection of the spin of the incoming particle along the direction $`\mathbf{b}`$, is equal to
$$P_{QM}(m,l;\mathbf{a},\mathbf{b})=\frac{1}{4}(1-ml\,\mathbf{a}\cdot\mathbf{b}),$$ (2)
while the probabilities of obtaining one of the results in the local stations reveal no dependence on the local parameters, $`P_{QM}(m;\mathbf{a})=\frac{1}{2}`$ and $`P_{QM}(l;\mathbf{b})=\frac{1}{2}`$. In a real experiment, however, one cannot expect that the observed probabilities will follow (2). Therefore, we will allow the interference pattern to be of reduced visibility. In such a case (2) should be replaced by
$$P_{QM}(m,l;\mathbf{a},\mathbf{b})=\frac{1}{4}(1-mlV\,\mathbf{a}\cdot\mathbf{b}),$$ (3)
where $`0\le V\le 1`$ stands for the visibility. Now for the pointwise approximation. For $`V=1`$ the quantum prediction for the two-particle correlation function reads
$$E_{QM}(\mathbf{a},\mathbf{b})=\sum_{m,l}ml\,P(m,l;\mathbf{a},\mathbf{b})=-\mathbf{a}\cdot\mathbf{b},$$ (4)
and there is no single-particle interference (i.e. the local results are absolutely random). If one assumes that the unit vectors which define the measured observables are always coplanar, the correlation function simplifies to $`E_{QM}(\alpha,\beta)=-\cos(\alpha-\beta)`$ (with the obvious definition of $`\alpha`$ and $`\beta`$). Let us assume that in the experiment the observer at side $`a`$ chooses between, say, $`N`$ settings of the local apparatus, denoted here by $`\mathbf{a}_i`$ with $`i=1,2,3,\ldots,N`$, and the other observer, at side $`b`$, chooses between $`M`$ settings $`\mathbf{b}_j`$, with $`j=1,2,\ldots,M`$. The quantum correlation function for $`V=1`$ has at these settings the following fixed numerical values: $`E_{QM}(\mathbf{a}_i,\mathbf{b}_j)=-\mathbf{a}_i\cdot\mathbf{b}_j`$. Thus, we have a certain matrix of quantum predictions. We denote this matrix by $`\widehat{\mathbf{Q}}`$, with $`Q_{ij}=E_{QM}(\mathbf{a}_i,\mathbf{b}_j)`$. For $`V\neq 1`$ the correlation function reduces to $`VQ_{ij}`$. The Bell theorem is a statement on the impossibility of modeling certain quantum predictions by local and realistic theories. To make matters simpler, let us use the local hidden variable (LHV) formalism.
Within such a formalism the correlation function must have the following structure:
$$E_{LHV}(\mathbf{a}_i,\mathbf{b}_j)=\int d\lambda\,\rho(\lambda)A(\mathbf{a}_i,\lambda)B(\mathbf{b}_j,\lambda),$$ (5)
where for dichotomic measurements $`A(\mathbf{a}_i,\lambda)=\pm 1`$ and $`B(\mathbf{b}_j,\lambda)=\pm 1`$, and they represent the values of the local measurements predetermined by the LHVs, denoted by $`\lambda`$, for the specified local settings. This expression is an average, over a certain LHV distribution $`\rho(\lambda)`$, of factorizable (rank-1) matrices, namely those with elements given by $`M(\lambda)_{ij}=A(\mathbf{a}_i,\lambda)B(\mathbf{b}_j,\lambda)`$. The symbol $`\lambda`$ may hide very many parameters. However, since the only possible values of $`A(\mathbf{a}_i,\lambda)`$ and $`B(\mathbf{b}_j,\lambda)`$ are $`\pm 1`$, there are only $`2^N`$ different sequences of values $`(A(\mathbf{a}_1,\lambda),A(\mathbf{a}_2,\lambda),\ldots,A(\mathbf{a}_N,\lambda))`$ and only $`2^M`$ different sequences $`(B(\mathbf{b}_1,\lambda),B(\mathbf{b}_2,\lambda),\ldots,B(\mathbf{b}_M,\lambda))`$, and consequently they form only $`2^{N+M}`$ matrices $`\widehat{M}(\lambda)`$. Therefore the structure of LHV models of $`E_{LHV}(\mathbf{a}_i,\mathbf{b}_j)`$ reduces to discrete probabilistic models involving the average of all the $`2^{N+M}`$ matrices $`\widehat{M}(\lambda)`$. In other words, the LHVs can be replaced, without any loss of generality, by a pair of variables $`k`$ and $`l`$ that take integer values from $`1`$ to $`2^N`$ and from $`1`$ to $`2^M`$, respectively. To each $`k`$ we ascribe one possible sequence of the values of $`A(\mathbf{a}_i,\lambda)`$, denoted from now on by $`A(\mathbf{a}_i,k)`$; similarly we replace $`B(\mathbf{b}_j,\lambda)`$ by $`B(\mathbf{b}_j,l)`$. With this notation the possible LHV models of the correlation function $`E_{LHV}(\mathbf{a}_i,\mathbf{b}_j)`$ acquire the simple form
$$E_{LHV}(\mathbf{a}_i,\mathbf{b}_j)=\sum_{k=1}^{2^N}\sum_{l=1}^{2^M}p_{kl}A(\mathbf{a}_i,k)B(\mathbf{b}_j,l),$$ (6)
with, of course, the probabilities satisfying $`p_{kl}\ge 0`$ and $`\sum_{kl}p_{kl}=1`$. The special case that we study here enables us to simplify the description further. To satisfy the additional requirement that the LHV model return the quantum prediction of equal probabilities of the results at the local observation stations, $`P(m;\mathbf{a})=P(l;\mathbf{b})=\frac{1}{2}`$, one can use the following observation. For each $`k`$ there must exist a $`k^{\prime}\neq k`$ with the property that $`A(\mathbf{a}_i,k^{\prime})=-A(\mathbf{a}_i,k)`$, and similarly for each $`l`$ there must exist an $`l^{\prime}\neq l`$ for which $`B(\mathbf{b}_j,l^{\prime})=-B(\mathbf{b}_j,l)`$. Then $`A(\mathbf{a}_i,k)B(\mathbf{b}_j,l)=A(\mathbf{a}_i,k^{\prime})B(\mathbf{b}_j,l^{\prime})`$, and thus the two give exactly the same matrix of LHV predictions. By assuming $`p_{kl}=p_{k^{\prime}l^{\prime}}`$, the property of total randomness of the local results will always be reproduced by the LHV models, and the generality is not reduced, since the contributions of $`p_{kl}`$ and $`p_{k^{\prime}l^{\prime}}`$ to (6) cannot be distinguished. Of course, in the actual computer calculations of the correlation function we take only one representative of the two pairs, reducing in this way the number of probabilities and matrices of LHV predictions in (6) by a factor of two. Another reduction, by a factor of four, is given by the fact that in the coplanar case the choice of the settings may be limited on each side to ranges not greater than $`\pi`$ (i.e. $`\varphi\le\alpha_i\le\varphi+\pi`$, $`\psi\le\beta_j\le\psi+\pi`$).
This is due to the simple observation that a model of the type (6), once established for such settings, can easily be extended to the settings $`\alpha_i^{\prime}=\alpha_i+\pi`$, $`\beta_j^{\prime}=\beta_j+\pi`$ by putting $`A(\alpha_i^{\prime},k)=-A(\alpha_i,k)`$ and $`B(\beta_j^{\prime},l)=-B(\beta_j,l)`$ (nevertheless, some scans with wider ranges were performed). The conditions for LHVs to reproduce the quantum prediction with a final visibility $`V`$ thus reduce to the problem of maximizing the parameter $`V`$ for which there exists a set of $`2^{N+M}`$ probabilities $`p_{kl}`$ such that
$$\sum_{k=1}^{2^N}\sum_{l=1}^{2^M}p_{kl}A(\mathbf{a}_i,k)B(\mathbf{b}_j,l)=VQ_{ij}.$$ (7)
This is a typical linear optimization problem, in which we have more unknowns than conditions and for which many good algorithms exist; the simplest way to solve it is the standard method of linear programming. The core of the algorithm is a procedure which finds the maximum of a linear function within the given constraints. In our case the constraints are the $`N\times M`$ equations (7) and the condition $`\sum_{kl}p_{kl}=1`$. Our unknowns are all the $`p_{kl}`$ and $`V`$ (all nonnegative). We can treat them as points in a $`2^{N+M}+1`$ dimensional space. The constraints given by (7) define some subset of this space. On this subset we use the trivial linear function $`f(p_{kl},V)=V`$ as our goal function (for clarity, our function depends only on the variable $`V`$), and we search for its maximum. In order to apply this method to experimental results, one replaces $`Q_{ij}`$ in (7) by the measured values $`E_{ij}^{\text{exp}}`$ and performs the same task. If the critical $`V`$ returned by the program is less than $`1`$, the data cannot be reproduced by any LHV model. Note that one does not even have to know what the settings are(!). This type of approach may be especially useful when one is not able to stabilize the interferometers at the settings required for some Bell inequality, but has data at many other settings. For the case of coplanar settings we have checked many "interesting" combinations of the settings of the apparata on each side (e.g. $`N\times N`$ problems, $`2\le N\le 9`$, with evenly spaced settings, etc.). The main generic feature of our results is the following. Whenever among the coplanar settings $`\alpha_i`$ and $`\beta_j`$ there is a subset $`\alpha_{i_1}`$, $`\alpha_{i_2}`$ and $`\beta_{j_1}`$, $`\beta_{j_2}`$ such that for these settings the CHSH inequality (equivalently, the CH inequality) is maximally violated by the ideal quantum prediction, our optimum, the maximal visibility reproduced by LHVs, is $`V=\frac{1}{\sqrt{2}}`$. In all other cases, namely for settings without any such subset, we have obtained maximal visibilities describable by LHVs (usually slightly) higher than $`\frac{1}{\sqrt{2}}`$. However, with an increasing number of more or less evenly spaced settings this difference decreases. For general measurement directions (i.e. those including non-coplanar settings) all numerical scans follow the same pattern as in the coplanar case. Additionally, we have checked 200000 randomly chosen sets of $`5\times 5`$ settings, and a visibility lower than $`\frac{1}{\sqrt{2}}`$ has never been returned. All this strongly suggests that an LHV model returning the correlation function $`-\frac{1}{\sqrt{2}}\,\mathbf{a}\cdot\mathbf{b}`$ in its entirety indeed exists.
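For concreteness, a minimal sketch of the optimization (7) is given below (Python with scipy.optimize.linprog; this illustrates the method and is not the dedicated code actually used). The angles are an illustrative 2x2 choice containing a CHSH-optimal subset; replacing Q by a measured matrix E^exp tests raw data in exactly the same way, and the p_kl = p_k'l' symmetry discussed above, which would halve the number of columns, is not exploited in this sketch.

```python
import itertools
import numpy as np
from scipy.optimize import linprog

# Illustrative coplanar angles containing a CHSH-optimal subset.
alphas = np.array([0.0, np.pi / 2])
betas = np.array([np.pi / 4, 3 * np.pi / 4])
Q = -np.cos(alphas[:, None] - betas[None, :])   # Q_ij = -cos(alpha_i - beta_j), Eq. (4)
N, M = Q.shape

# Columns of the LHV polytope: every deterministic assignment A(k), B(l) in Eq. (6).
cols = [np.outer(A, B).ravel()
        for A in itertools.product((-1, 1), repeat=N)
        for B in itertools.product((-1, 1), repeat=M)]
S = np.array(cols).T                            # shape (N*M, 2**(N+M))
K = S.shape[1]

# Unknowns x = (p_11, ..., p_KL, V).  Equality constraints, Eq. (7):
#   sum_kl p_kl A_i(k) B_j(l) - V Q_ij = 0,  and  sum_kl p_kl = 1.
A_eq = np.vstack([np.hstack([S, -Q.reshape(-1, 1)]),
                  np.hstack([np.ones((1, K)), np.zeros((1, 1))])])
b_eq = np.concatenate([np.zeros(N * M), [1.0]])
c = np.zeros(K + 1)
c[-1] = -1.0                                    # maximize V = minimize -V
res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * (K + 1))
print("critical visibility V =", res.x[-1])     # 1/sqrt(2) ~ 0.7071 here
```

For these CHSH-optimal angles the program returns V = 1/sqrt(2); since the number of columns grows as 2^(N+M), this brute-force version is practical only for small numbers of settings, which is the exponential wall mentioned earlier.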
Furthermore, in the coplanar case, using carefully chosen, equally spaced settings, more structure is introduced into the model in the form of two additional symmetries of the matrix $`\widehat{Q}`$ (in addition to the ones described above): ordinary matrix-transpose symmetry, and constant diagonals. This reduces the rate of the exponential growth of the calculation to the point where the problem is computable on a standard PC in reasonable time, even for $`13\times 13`$ (extendible to $`26\times 26`$) settings. Even in this case the returned visibilities exhibit the behavior discussed above and always satisfy $`V\ge\frac{1}{\sqrt{2}}`$. Finally, let us present an application of our method to raw experimental data. In a recent Bell-type experiment Weinfurter and Michler have obtained the following matrix of results:
$$\widehat{E^{\text{exp}}}=\left[\begin{array}{ccc}0.894& 0.061& 0.761\\ 0.851& 0.343& 0.765\\ 0.625& 0.688& 0.516\\ 0.251& 0.860& 0.103\\ 0.226& 0.921& 0.389\\ 0.530& 0.651& 0.648\\ 0.855& 0.323& 0.832\\ 0.852& 0.092& 0.843\\ 0.785& 0.539& 0.638\\ 0.397& 0.795& 0.253\end{array}\right].$$ (8)
Our program gives the verdict that the values of all entries of the matrix of results have to be reduced by a factor of $`0.796`$ to be describable by local hidden variables. In the recent long-distance EPR-Bell experiment the following set of values of the correlation function was obtained :
$$\widehat{E^{\text{exp}}}=\left[\begin{array}{cc}0.960& 0.102\\ 0.903& 0.375\\ 0.733& 0.660\\ 0.479& 0.809\\ 0.191& 0.903\\ 0.120& 0.923\\ 0.429& 0.807\\ 0.666& 0.656\\ 0.842& 0.395\\ 0.951& 0.152\\ 0.953& 0.171\end{array}\right].$$ (9)
This matrix has to be reduced by a factor of $`0.7366`$ to have a local realistic description. To conclude, the performed calculations enable us to put forward the following conjecture, which is the main result of this letter: sinusoidal two-particle fringes of visibility up to $`\frac{1}{\sqrt{2}}`$ are describable by local realistic theories. At this stage we are not able to give an analytic proof of the above. However, for finite sets of measurement points the results of data analysis with the use of our program fully concur with this hypothesis. This implies, e.g., that one needs a re-run of the entanglement-swapping experiment in order to show that this phenomenon can lead to observable violations of local realism. MŻ was supported by the University of Gdansk Grant No. BW/5400-5-0264-9. DK was supported by KBN Grant 2 P03B 096 15. MŻ thanks A. Zeilinger, H. Weinfurter and N. Gisin for discussions on the subject.
no-problem/9910/hep-ph9910410.html
ar5iv
text
## 1 Introduction

Inflationary cosmology has become one of the cornerstones of modern cosmology. Inflation was the first theory to make predictions about the structure of the Universe on large scales based on causal physics. The development of the inflationary Universe scenario has opened up a new and extremely promising avenue for connecting fundamental physics with experiment. These lectures form a short pedagogical introduction to inflation, focusing more on the basic principles than on detailed particle-physics model building. Section 2 outlines some of the basic problems of standard cosmology which served as a motivation for the development of inflationary cosmology, especially the apparent impossibility of having a causal theory of structure formation within the context of standard cosmology. In Section 3, it is shown how the basic idea of inflation can solve the horizon and flatness problems and can lead to a causal theory of structure formation. It is shown that when trying to implement the idea of inflation, one is automatically driven to consider the interplay between particle physics / field theory and cosmology. The section ends with a brief survey of some models of inflation. Section 4 reviews two areas in which there has been major progress since the early days of inflation. The first topic is reheating: it is shown that parametric resonance effects may play a crucial role in reheating the Universe at the end of inflation. The second topic is the quantum theory of the generation and evolution of cosmological perturbations, which has become a cornerstone for precision calculations of observable quantities. In spite of the remarkable success of the inflationary Universe paradigm, there are several serious problems of principle for current models of inflation, specifically potential-driven models. These problems are discussed in Section 5. Section 6 is a summary of some new approaches to solving the problems of potential-driven inflation: an attempt to obtain inflation from condensates is discussed, a nonsingular Universe construction making use of higher derivative terms in the gravitational action is explained, and a framework for calculating the back-reaction of cosmological perturbations is summarized. As indicated before, this review focuses on the principles and problems of inflationary cosmology. Readers interested in comprehensive reviews of inflation are referred to . A recent review focusing on inflationary model building in the context of supersymmetric models can be found in . For a review at a similar level to this one but with a different bias see .

## 2 Successes and Problems of Standard Cosmology

### 2.1 Framework of Standard Cosmology

The standard big bang cosmology rests on three theoretical pillars: the cosmological principle, Einstein's general theory of relativity, and a classical perfect fluid description of matter. The cosmological principle states that on large distance scales the Universe is homogeneous. This implies that the metric of space-time can be written in Friedmann-Robertson-Walker (FRW) form:
$$ds^2=dt^2-a(t)^2\left[\frac{dr^2}{1-kr^2}+r^2(d\vartheta^2+\sin^2\vartheta\,d\phi^2)\right],$$ (1)
where the constant $`k`$ determines the topology of the spatial sections. In the following, we shall usually set $`k=0`$, i.e. consider a spatially flat Universe. In this case, we can set the scale factor $`a(t)`$ equal to $`1`$ at the present time $`t_0`$, i.e. $`a(t_0)=1`$, without loss of generality.
The coordinates $`r,\vartheta`$ and $`\phi`$ are comoving spherical coordinates. World lines with constant comoving coordinates are geodesics corresponding to particles at rest. If the Universe is expanding, i.e. $`a(t)`$ is increasing, then the physical distance $`\Delta x_p(t)`$ between two points at rest with fixed comoving distance $`\Delta x_c`$ grows:
$$\Delta x_p=a(t)\Delta x_c.$$ (2)
The dynamics of an expanding Universe is determined by the Einstein equations, which relate the expansion rate to the matter content, specifically to the energy density $`\rho`$ and pressure $`p`$. For a homogeneous and isotropic Universe, and setting the cosmological constant to zero, they reduce to the Friedmann-Robertson-Walker (FRW) equations
$$\left(\frac{\dot{a}}{a}\right)^2+\frac{k}{a^2}=\frac{8\pi G}{3}\rho,$$ (3)
$$\frac{\ddot{a}}{a}=-\frac{4\pi G}{3}(\rho+3p).$$ (4)
These equations can be combined to yield the continuity equation (with Hubble constant $`H=\dot{a}/a`$)
$$\dot{\rho}=-3H(\rho+p).$$ (5)
The third key assumption of standard cosmology is that matter is described by a classical ideal gas with an equation of state
$$p=w\rho.$$ (6)
For cold matter (dust), pressure is negligible and hence $`w=0`$. From (5) it follows that
$$\rho_m(t)\propto a^{-3}(t),$$ (7)
where $`\rho_m`$ is the energy density in cold matter. For radiation we have $`w=1/3`$, and hence it follows from (5) that
$$\rho_r(t)\propto a^{-4}(t),$$ (8)
$`\rho_r(t)`$ being the energy density in radiation.

### 2.2 Successes of Standard Cosmology

The three classic observational pillars of standard cosmology are Hubble's law, the existence and black-body nature of the nearly isotropic cosmic microwave background (CMB), and the abundances of light elements (nucleosynthesis). These successes are discussed in detail in many textbooks on cosmology (see e.g. for some recent ones), and also in the lectures by Blanchard and Sarkar at this school, and will therefore not be reviewed here. Let us just recall two important aspects of the thermal history of the early Universe. Since the energy density in radiation redshifts faster than the matter energy density, it follows that although the energy density of the Universe is now mostly in cold matter, it was initially dominated by radiation. The transition occurred at a time denoted by $`t_{eq}`$, the "time of equal matter and radiation". As discussed in the lectures by Padmanabhan at this school, $`t_{eq}`$ is the time when perturbations can start to grow by gravitational clustering. The second important time is $`t_{rec}`$, the "time of recombination", when photons fell out of equilibrium (since ions and electrons had by then combined to form electrically neutral atoms). The photons of the CMB have travelled without scattering from $`t_{rec}`$ to the present. Their spectral distribution is predicted to be a black body, since the cosmological redshift preserves the black-body nature of the initial spectrum (simply redshifting the temperature), which was in turn determined by thermal equilibrium. CMB anisotropies probe the density fluctuations at $`t_{rec}`$ (see the lectures by Zaldarriaga at this school for a detailed analysis). Note that for the usual values of the cosmological parameters, $`t_{eq}<t_{rec}`$.
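Since $`t_{eq}`$ plays such a central role in what follows, it is instructive to see how it comes out of Eqs. (3), (7) and (8). The following sketch (Python; the density parameters are illustrative round numbers, not values taken from these lectures) finds the scale factor and time of equal matter and radiation for a flat Universe:

```python
import numpy as np
from scipy.integrate import quad

# Illustrative present-day density parameters (units with H0 = 1, a(t0) = 1).
Om_m, Om_r = 0.3, 8.0e-5

# rho_m a^-3 = rho_r a^-4 (Eqs. (7),(8))  =>  a_eq = Om_r / Om_m
a_eq = Om_r / Om_m

# Eq. (3) with k = 0: H(a) = sqrt(Om_m a^-3 + Om_r a^-4), and t = int da / (a H)
H = lambda a: np.sqrt(Om_m * a**-3 + Om_r * a**-4)
t_eq, _ = quad(lambda a: 1.0 / (a * H(a)), 0.0, a_eq)
print(f"a_eq = {a_eq:.2e}, t_eq = {t_eq:.2e} in units of 1/H0")
```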
### 2.3 Problems of Standard Cosmology

Standard Big Bang cosmology is faced with several important problems, although none represents an actual conflict with observations. The problems I will focus on here – the homogeneity, flatness and formation-of-structure problems (see e.g. ) – are questions which have no answer within the theory and are therefore the main motivation for inflationary cosmology. The "horizon problem" is illustrated in Fig. 1. As sketched there, the comoving region $`\ell_p(t_{rec})`$ over which the CMB is observed to be homogeneous to better than one part in $`10^4`$ is much larger than the comoving forward light cone $`\ell_f(t_{rec})`$ at $`t_{rec}`$, which is the maximal distance over which microphysical forces could have caused the homogeneity:
$$\ell_p(t_{rec})=\int_{t_{rec}}^{t_0}dt\,a^{-1}(t)\simeq 3t_0\left(1-\left(\frac{t_{rec}}{t_0}\right)^{1/3}\right),$$ (9)
$$\ell_f(t_{rec})=\int_0^{t_{rec}}dt\,a^{-1}(t)\simeq 3t_0^{2/3}t_{rec}^{1/3}.$$ (10)
From the above equations it is obvious that $`\ell_p(t_{rec})\gg\ell_f(t_{rec})`$. Hence, standard cosmology cannot explain the observed isotropy of the CMB. In standard cosmology and in an expanding Universe with conserved total entropy, $`\Omega=1`$ is an unstable fixed point. This can be seen as follows. For a spatially flat Universe $`(\Omega=1)`$
$$H^2=\frac{8\pi G}{3}\rho_c,$$ (11)
whereas for a nonflat Universe
$$H^2+\epsilon T^2=\frac{8\pi G}{3}\rho,$$ (12)
with
$$\epsilon=\frac{k}{(aT)^2}.$$ (13)
The quantity $`\epsilon`$ is proportional to $`s^{-2/3}`$, where $`s`$ is the comoving entropy density. Hence, in standard cosmology, $`\epsilon`$ is constant. Combining (11) and (12) gives
$$\frac{\rho-\rho_c}{\rho_c}=\frac{3}{8\pi G}\frac{\epsilon T^2}{\rho_c}\propto T^{-2}.$$ (14)
Thus, as the temperature decreases, $`|\Omega-1|`$ increases. In fact, in order to explain the present small value of $`\Omega-1`$, the initial energy density had to be extremely close to the critical density. For example, at $`T=10^{15}`$ GeV, (14) implies
$$\frac{\rho-\rho_c}{\rho_c}\sim 10^{-50}.$$ (15)
What is the origin of these fine-tuned initial conditions? This is the "flatness problem" of standard cosmology. The third of the classic problems of the standard cosmological model is the "formation of structure problem". Observations indicate that galaxies and even clusters of galaxies have nonrandom correlations on scales larger than 50 Mpc (see e.g. ). This scale is comparable to the comoving horizon at $`t_{eq}`$. Thus, if the initial density perturbations were produced much before $`t_{eq}`$, the correlations cannot be explained by a causal mechanism. Gravity alone is, in general, too weak to build up correlations on the scale of clusters after $`t_{eq}`$ (see, however, the explosion scenario of and the topological defect models discussed in these proceedings in ). Hence, the two questions of what generates the primordial density perturbations and what causes the observed correlations do not have an answer in the context of standard cosmology. This problem is illustrated in Fig. 2. Standard cosmology extrapolated all the way back to the big bang cannot be taken as a self-consistent theory: the theory predicts that as the big bang is approached the temperature of matter diverges, which implies that the classical ideal gas description of matter, one of the pillars of the theory, breaks down. This comment serves as a guide to which of the key assumptions of standard cosmology will have to be replaced in order to obtain an improved theory: the improved theory will have to be based on the best available theory describing matter at high temperatures and energies. Currently the best available matter theory is quantum field theory. In the near future, however, quantum field theory may have to be replaced by the theory which extends it to even higher energies, most likely string theory.
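A quick numerical check of Eqs. (9) and (10) makes the severity of the horizon problem explicit (a minimal sketch; the values of $`t_0`$ and $`t_{rec}`$ are rough illustrative numbers, not taken from these lectures):

```python
t0, trec = 4.3e17, 1.2e13   # rough present age and recombination time, in seconds
# From Eqs. (9)-(10): l_p / l_f = (t0/trec)**(1/3) - 1 in a matter-dominated model
ratio = (t0 / trec)**(1.0 / 3.0) - 1.0
print(f"l_p / l_f at t_rec ~ {ratio:.0f}")   # ~ 32
```

The observed CMB patch is thus dozens of forward light cones across, which is the quantitative content of Fig. 1.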
## 3 Overview of Inflationary Cosmology

### 3.1 The Inflationary Scenario

The idea of inflation is very simple (for some early reviews of inflation see e.g. ). We assume there is a time interval beginning at $`t_i`$ and ending at $`t_R`$ (the "reheating time") during which the Universe is exponentially expanding, i.e.,
$$a(t)\propto e^{Ht},\quad t\in[t_i,t_R],$$ (16)
with constant Hubble expansion parameter $`H`$. Such a period is called "de Sitter" or "inflationary". The success of Big Bang nucleosynthesis sets an upper limit on the time $`t_R`$ of reheating:
$$t_R<t_{NS},$$ (17)
$`t_{NS}`$ being the time of nucleosynthesis. The phases of an inflationary Universe are sketched in Fig. 3. Before the onset of inflation there are no constraints on the state of the Universe. In some models a classical space-time emerges immediately in an inflationary state; in others there is an initial radiation-dominated FRW period. Our sketch applies to the second case. After $`t_R`$, the Universe is very hot and dense, and the subsequent evolution is as in standard cosmology. During the inflationary phase, the number density of any particles initially in thermal equilibrium at $`t=t_i`$ decays exponentially. Hence, the matter temperature $`T_m(t)`$ also decays exponentially. At $`t=t_R`$, all of the energy which is responsible for inflation (see later) is released as thermal energy. This is a nonadiabatic process during which the entropy increases by a large factor. Fig. 4 is a sketch of how a period of inflation can solve the homogeneity problem. $`\Delta t=t_R-t_i`$ is the period of inflation. During inflation, the forward light cone increases exponentially compared to a model without inflation, whereas the past light cone is not affected for $`t\ge t_R`$. Hence, provided $`\Delta t`$ is sufficiently large, $`\ell_f(t_R)`$ will be greater than $`\ell_p(t_R)`$. Inflation also can solve the flatness problem . The key point is that the entropy density $`s`$ is no longer constant. As will be explained later, the temperatures at $`t_i`$ and $`t_R`$ are essentially equal. Hence, the entropy increases during inflation by a factor $`\exp(3H\Delta t)`$, and thus $`\epsilon`$ decreases by a factor of $`\exp(-2H\Delta t)`$. Hence, $`\rho`$ and $`\rho_c`$ can be of comparable magnitude at both $`t_i`$ and the present time. In fact, if inflation occurs at all, then rather generically the theory predicts that at the present time $`\Omega=1`$ to high accuracy (now $`\Omega<1`$ requires special initial conditions or rather special models ). Most importantly, inflation provides a causal mechanism for generating the primordial perturbations required for galaxies, clusters and even larger objects. In inflationary Universe models, the Hubble radius ("apparent" horizon), $`3t`$, and the ("actual") horizon (the forward light cone) do not coincide at late times. Provided that the duration of inflation is sufficiently long, then (as sketched in Fig. 5) all scales within our present apparent horizon were inside the horizon since $`t_i`$. Thus, in principle it is possible to have a causal generation mechanism for perturbations . The generation of perturbations is supposed to be due to a causal microphysical process. Such processes can act coherently only on length scales smaller than the Hubble radius $`\ell_H(t)`$, where
$$\ell_H(t)=H^{-1}(t).$$ (18)
A heuristic way to understand $`\ell_H(t)`$ is to realize that it is the distance which light (and hence the maximal distance any causal effects) can propagate in one expansion time. As will be discussed in Section 4, the density perturbations produced during inflation are due to quantum fluctuations in the matter and gravitational fields . The amplitude of these inhomogeneities corresponds to a temperature $`T_H`$,
$$T_H\sim H,$$ (19)
the Hawking temperature of the de Sitter phase. This leads one to expect that at all times during inflation, perturbations will be produced with a fixed physical wavelength $`\sim H^{-1}`$. Subsequently, the length of the waves is stretched with the expansion of space, and soon becomes larger than the Hubble radius. The phases of the inhomogeneities are random. Thus, the inflationary Universe scenario predicts perturbations on all scales ranging from the comoving Hubble radius at the beginning of inflation to the corresponding quantity at the time of reheating. In particular, provided that inflation lasts sufficiently long, perturbations on scales of galaxies and beyond will be generated. Note, however, that it is very dangerous to interpret de Sitter Hawking radiation as thermal radiation; in fact, the equation of state of this "radiation" is not thermal .

### 3.2 How to Obtain Inflation

Obviously, the key question is how to obtain inflation. From the FRW equations it follows that in order to get an exponential increase of the scale factor, the equation of state of matter must be
$$p=-\rho,$$ (20)
which is not compatible with the description of matter as a classical ideal gas used in the standard cosmological model. As mentioned earlier, the ideal gas description of matter breaks down in the very early Universe, where matter must instead be described in terms of quantum field theory (QFT). In the resulting framework (classical general relativity as a description of space and time, and QFT as a description of the matter content) it is possible to obtain inflation. More important than the quantum nature of matter is its field nature. Note, however, that quantum-field-driven inflation is not the only way to obtain inflation. In fact, before the seminal paper by Guth , Starobinsky proposed a model with exponential expansion of the scale factor based on higher-derivative curvature terms in the gravitational action. Current quantum field theories of matter contain three types of fields: spin-1/2 fermions (the matter fields) $`\psi`$, spin-1 bosons $`A_\mu`$ (the gauge bosons), and spin-0 bosons, the scalar fields $`\phi`$ (the Higgs fields used to spontaneously break internal gauge symmetries). The Lagrangian of the field theory is constrained by gauge invariance, minimal coupling and renormalizability.
The Lagrangian of the bosonic sector of the theory is thus constrained to have the form
$$\mathcal{L}_m(\phi,A_\mu)=\frac{1}{2}D_\mu\phi D^\mu\phi-V(\phi)+\frac{1}{4}F_{\mu\nu}F^{\mu\nu},$$ (21)
where in Minkowski space-time $`D_\mu=\partial_\mu-igA_\mu`$ denotes the (gauge) covariant derivative, $`g`$ being the gauge coupling constant, $`F_{\mu\nu}`$ is the field strength tensor, and $`V(\phi)`$ is the Higgs potential. Renormalizability, plus assuming symmetry under $`\phi\rightarrow-\phi`$, constrains $`V(\phi)`$ to have the form
$$V(\phi)=\frac{1}{2}m^2\phi^2+\frac{1}{4}\lambda\phi^4,$$ (22)
where $`m`$ is the mass of the excitations of $`\phi`$ about $`\phi=0`$, and $`\lambda`$ is a self-coupling constant. For spontaneous symmetry breaking, $`m^2<0`$ is required. Given the Lagrangian (21), the action for matter is
$$S_m=\int d^4x\sqrt{-g}\,\mathcal{L}_m,$$ (23)
where $`g`$ here denotes the determinant of the metric tensor, and now the covariant derivative $`D_\mu`$ in (21) is a gauge and metric covariant derivative. The energy-momentum tensor is obtained by varying this action with respect to the metric. The contributions of the scalar fields to the energy density $`\rho`$ and pressure $`p`$ are
$$\rho(\phi)=\frac{1}{2}\dot{\phi}^2+\frac{1}{2}a^{-2}(\nabla\phi)^2+V(\phi),$$ (24)
$$p(\phi)=\frac{1}{2}\dot{\phi}^2-\frac{1}{6}a^{-2}(\nabla\phi)^2-V(\phi).$$ (25)
It thus follows that if the scalar field is homogeneous and static, but the potential energy positive, then the equation of state $`p=-\rho`$ necessary for exponential inflation results. This is the idea behind potential-driven inflation. Note that, given the restrictions imposed by minimal coupling, gauge invariance and renormalizability, scalar fields with nonvanishing potentials are required in order to obtain inflation: mass terms for fermionic and gauge fields are not compatible with gauge invariance, and renormalizability forbids nontrivial potentials for fermionic fields. The initial hope of the inflationary Universe scenario was that the Higgs field required for gauge symmetry breaking in "grand unified" (GUT) models would serve the role of the inflaton, the field generating inflation. As will be seen in the following subsection, this hope cannot be realized. Most of the current realizations of potential-driven inflation are based on satisfying the conditions
$$\dot{\phi}^2,\;a^{-2}(\nabla\phi)^2\ll V(\phi),$$ (26)
via the idea of slow rolling . Consider the equation of motion of the scalar field $`\phi`$, which can be obtained by varying the action $`S_m`$ with respect to $`\phi`$:
$$\ddot{\phi}+3H\dot{\phi}-a^{-2}\nabla^2\phi=-V^{\prime}(\phi).$$ (27)
If the scalar field starts out almost homogeneous and at rest, if the Hubble damping term (the second term on the l.h.s. of (27)) is large, and if the potential is quite flat (so that the term on the r.h.s. of (27) is small), then $`\dot{\phi}^2`$ may remain small compared to $`V(\phi)`$, in which case the slow-rolling conditions (26) are satisfied and exponential inflation will result. If the spatial gradient terms are initially negligible, they will remain negligible, since they redshift. To illustrate the slow-roll inflationary scenario, consider the simplest model, a toy model with quadratic potential
$$V(\phi)=\frac{1}{2}m^2\phi^2.$$ (28)
Consider initial conditions for which $`\phi\gg m_{pl}`$ and $`\dot{\phi}=0`$. At the beginning of the evolution, $`\phi`$ will be rolling slowly, and one finds approximate solutions by neglecting the $`\ddot{\phi}`$ term (the self-consistency of this approximation needs to be checked in every model independently!). In this simple model, the system of approximate equations
$$3H\dot{\phi}=-V^{\prime}(\phi)$$ (29)
and
$$H^2=\frac{8\pi}{3}GV(\phi)$$ (30)
can be solved exactly, yielding
$$\dot{\phi}=-\frac{1}{\sqrt{12\pi}}mm_{pl}.$$ (31)
Since $`\dot{\phi}`$ is constant, neglecting the $`\ddot{\phi}`$ term in the equation of motion (27) is a self-consistent approximation. From (31) it also follows that inflation will occur until the slow-rolling condition (26) breaks down, i.e. until
$$\phi=\frac{1}{\sqrt{12\pi}}m_{pl}.$$ (32)
When $`\phi`$ falls below the above value, it starts oscillating about its minimum with an amplitude that decays due to Hubble friction (the damping term $`3H\dot{\phi}`$ in the field equation of motion (27)) and microscopic friction (see Section 4.1). Microscopic friction leads to rapid heating of the Universe. For historic reasons, the time $`t_R`$ corresponding to the end of inflation and the onset of microscopic friction is called the reheating time. The evolution of the scalar field $`\phi`$ and of the temperature $`T`$ as a function of time is sketched in Figure 6.
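The self-consistency of the slow-roll solution (29)-(31) is easy to verify numerically. The following sketch (Python, in units with $`G=m_{pl}=1`$; the values of $`m`$ and the initial field are illustrative, not taken from the text) integrates the full equation of motion (27) for a homogeneous field and checks that $`\dot{\phi}`$ indeed locks onto the attractor value (31):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Planck units (G = m_pl = 1); m and phi0 are illustrative choices.
m, phi0 = 1.0e-6, 4.0

def eom(t, y):
    """phi'' + 3 H phi' = -V'(phi) for V = m^2 phi^2 / 2, cf. Eqs. (27)-(30)."""
    phi, dphi = y
    H = np.sqrt(8.0 * np.pi / 3.0 * (0.5 * dphi**2 + 0.5 * m**2 * phi**2))
    return [dphi, -3.0 * H * dphi - m**2 * phi]

t_end = 1.2 * np.sqrt(12.0 * np.pi) * phi0 / m      # roll time estimated from Eq. (31)
sol = solve_ivp(eom, [0.0, t_end], [phi0, 0.0], rtol=1e-8, max_step=t_end / 2e4)
dphi_mid = sol.y[1][len(sol.t) // 2]                # well inside the slow-roll phase
print("phi_dot * sqrt(12 pi) / m =", dphi_mid * np.sqrt(12.0 * np.pi) / m)  # ~ -1
```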
### 3.3 Some Models of Inflation

Old Inflation

The first potential-driven model of inflation was, however, based not on slow rolling but on false-vacuum decay. It is the "Old Inflationary Universe" , which was formulated in the context of a scalar field theory which undergoes a first-order phase transition. As a toy model, consider a scalar field theory with the potential $`V(\phi)`$ of Figure 7. This potential has a metastable "false" vacuum at $`\phi=0`$, whereas the lowest-energy state (the "true" vacuum) is $`\phi=a`$. Finite-temperature effects lead to extra terms in the effective potential which are proportional to $`\phi^2T^2`$ (the resulting finite-temperature effective potential is also depicted in Figure 7). Thus, at high temperatures, the energetically preferred state is the false vacuum state. Note that this is only true if $`\phi`$ is in thermal equilibrium with the other fields in the system. The origin of the finite-temperature corrections to the effective potential can be understood qualitatively as follows. Consider a theory with potential (22). If it is in thermal equilibrium, then the expectation value of $`\phi^2`$ is given by
$$\langle\phi^2\rangle\sim T^2.$$ (33)
In the Hartree-Fock approximation, the interaction term $`\lambda\phi^3`$ in the scalar field equation of motion (27) can be replaced by $`3\lambda\phi\langle\phi^2\rangle`$. Making use of (33) (with the constant of proportionality designated by $`\alpha`$) to substitute for the expectation value of $`\phi^2`$, we then get the same equation of motion as would follow for a scalar field with potential
$$V_T(\phi)=V(\phi)+\frac{3}{2}\alpha\lambda T^2\phi^2.$$ (34)
For a rigorous derivation, the reader is referred to the original articles or the review article . However, from the heuristic analysis given above it is already clear that the finite-temperature corrections to the potential can only be applied if $`\phi`$ is in thermal equilibrium. For fairly general initial conditions, $`\phi(x)`$ is trapped in the metastable state $`\phi=0`$ as the Universe cools below the critical temperature $`T_c`$.
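As a worked illustration of (34) (a sketch under the stated Hartree-Fock assumptions, not a result quoted from the original papers), the curvature of the effective potential (22) at the origin becomes
$$V_T^{\prime\prime}(0)=m^2+3\alpha\lambda T^2,$$
so that for $`m^2<0`$ the symmetric point $`\phi=0`$ is stabilized at high temperature and destabilized once $`T`$ drops below
$$T_c=\sqrt{\frac{-m^2}{3\alpha\lambda}},$$
which is precisely the characterization of the critical temperature (the vanishing of the second derivative of $`V_T(\phi)`$ at the origin) used in the discussion of new inflation below.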
As the Universe expands further, all contributions to the energy-momentum tensor $`T_{\mu\nu}`$ redshift, except for the contribution
$$T_{\mu\nu}\simeq V(\phi)g_{\mu\nu}.$$ (35)
Hence, provided that the potential $`V(\phi)`$ is shifted upwards such that $`V(a)=0`$, the equation of state in the false vacuum approaches $`p=-\rho`$, and inflation sets in. After a period of order $`\Gamma^{-1}`$, where $`\Gamma`$ is the tunnelling rate, bubbles of $`\phi=a`$ begin to nucleate in a sea of false vacuum $`\phi=0`$. Inflation lasts until the false vacuum decays. During inflation, the Hubble constant is given by
$$H^2=\frac{8\pi G}{3}V(0).$$ (36)
The condition $`V(a)=0`$, which looks rather unnatural, is required to avoid a large cosmological constant today (none of the present inflationary Universe models manages to circumvent or solve the cosmological constant problem). It was immediately realized that old inflation has a serious "graceful exit" problem : the bubbles nucleate after inflation with radius $`r\ll 2t_R`$, so even if the bubble walls expand with the speed of light, the bubbles would at the present time be much smaller than our apparent horizon. Thus, unless bubbles percolate, the model predicts extremely large inhomogeneities inside the Hubble radius, in contradiction with the observed isotropy of the microwave background radiation. For bubbles to percolate, a sufficiently large number must be produced so that they collide and homogenize over a scale larger than the present Hubble radius. However, because of the exponential expansion of the regions still in the false-vacuum phase, the volume between bubbles expands exponentially, whereas the volume inside bubbles expands only with a low power. This prevents percolation. One way to overcome this problem is to realize old inflation in the context of Brans-Dicke gravity .

New Inflation

Because of the graceful exit problem, old inflation was never considered to be a viable cosmological model. However, soon after the seminal paper by Guth, Linde and Albrecht and Steinhardt independently put forward a modified scenario, the "New Inflationary Universe". The starting point is a scalar field theory with a double-well potential which undergoes a second-order phase transition (Fig. 8). $`V(\phi)`$ is symmetric, and $`\phi=0`$ is a local maximum of the zero-temperature potential. Once again, it was argued that finite-temperature effects confine $`\phi(x)`$ to values near $`\phi=0`$ at temperatures $`T\ge T_c`$, where the critical temperature $`T_c`$ is characterized by the vanishing of the second derivative of $`V_T(\phi)`$ at the origin. For $`T<T_c`$, thermal fluctuations trigger the instability of $`\phi(x)=0`$, and $`\phi(x)`$ evolves towards either of the global minima at $`\phi=\pm\sigma`$ by the classical equation of motion (27). Within a fluctuation region, $`\phi(x)`$ will be homogeneous. In such a region, we can neglect the spatial gradient terms in Eq. (27). Then, from (24) and (25) we can read off the induced equation of state. The slow-rolling condition required to obtain inflation is given by (26). There is no graceful exit problem in the new inflationary Universe: since the fluctuation domains are established before the onset of inflation, any boundary walls will be inflated outside the present Hubble radius. In order to obtain inflation, the potential $`V(\phi)`$ must be very flat near the false vacuum at $`\phi=0`$. This can only be the case if all of the coupling constants appearing in the potential are small.
However, this implies that $`\phi `$ cannot be in thermal equilibrium at early times, which would be required to localize $`\phi `$ in the false vacuum. In the absence of thermal equilibrium, the initial conditions for $`\phi `$ are only constrained by requiring that the total energy density in $`\phi `$ not exceed the total energy density of the Universe. Most of the phase space of these initial conditions lies at values of $`|\phi |>>\sigma `$. This leads to the “chaotic” inflation scenario .

**Chaotic Inflation**

Consider a region in space where at the initial time $`\phi (x)`$ is very large, homogeneous and static. In this case, the energy-momentum tensor will be immediately dominated by the large potential energy term and induce an equation of state $`p\simeq -\rho `$ which leads to inflation. Due to the large Hubble damping term in the scalar field equation of motion, $`\phi (x)`$ will only roll very slowly towards $`\phi =0`$. The kinetic energy contribution to $`T_{\mu \nu }`$ will remain small, the spatial gradient contribution will be exponentially suppressed due to the expansion of the Universe, and thus inflation persists. Note that in contrast to old and new inflation, no initial thermal bath is required. Note also that the precise form of $`V(\phi )`$ is irrelevant to the mechanism. In particular, $`V(\phi )`$ need not be a double well potential. This is a significant advantage, since for scalar fields other than GUT Higgs fields used for spontaneous symmetry breaking, there is no particle physics motivation for assuming a double well potential, and the inflaton (the field which gives rise to inflation) cannot be a conventional Higgs field, due to the severe fine tuning constraints. The field and temperature evolution in a chaotic inflation model is similar to what is depicted in Figure 8, except that $`\phi `$ is rolling towards the true vacuum at $`\phi =\sigma `$ from the direction of large field values.

Chaotic inflation is a much more radical departure from standard cosmology than old and new inflation. In the latter, the inflationary phase can be viewed as a short phase of exponential expansion bounded at both ends by phases of radiation domination. In chaotic inflation, a piece of the Universe emerges with an inflationary equation of state immediately after the quantum gravity (or string) epoch.

The chaotic inflationary Universe scenario has been developed in great detail (see e.g. for a recent review). One important addition is the inclusion of stochastic noise in the equation of motion for $`\phi `$ in order to account for the effects of quantum fluctuations. In fact, it can be shown that for sufficiently large values of $`|\phi |`$, the stochastic force terms are more important than the classical relaxation force $`V^{\prime }(\phi )`$. There is thus equal probability for the quantum fluctuations to lead to an increase or decrease of $`|\phi |`$. Hence, in a substantial fraction of the comoving volume, the field $`\phi `$ climbs up the potential. This leads to the conclusion that chaotic inflation is eternal. At all times, a large fraction of the physical space will be inflating. Another consequence of including stochastic terms is that on large scales (much larger than the present Hubble radius), the Universe will look extremely inhomogeneous.
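The statement that stochastic forces dominate at large $`|\phi |`$ can be checked by comparing the two competing step sizes per Hubble time: the classical roll $`\mathrm{\Delta }\phi _{cl}\simeq V^{\prime }/(3H^2)`$ and the quantum kick $`\mathrm{\Delta }\phi _q\simeq H/(2\pi )`$. The sketch below assumes $`V=\frac{1}{2}m^2\phi ^2`$ with an arbitrary toy mass; the crossover scale printed at the end follows from this assumption and is not a number quoted in the text.

```python
import numpy as np

m_pl, m = 1.0, 1e-6   # Planck units; toy inflaton mass

def hubble(phi):
    # slow-roll Hubble rate for V = m^2 phi^2 / 2
    return np.sqrt(4.0 * np.pi / 3.0) * m * phi / m_pl

for phi in (3.0, 100.0, 1000.0):
    dphi_cl = m**2 * phi / (3.0 * hubble(phi)**2)   # classical roll per Hubble time
    dphi_q = hubble(phi) / (2.0 * np.pi)            # quantum kick per Hubble time
    regime = "quantum-dominated (eternal)" if dphi_q > dphi_cl else "classical roll"
    print(f"phi = {phi:7.1f} m_pl: cl = {dphi_cl:.2e}, q = {dphi_q:.2e} -> {regime}")

# field value where the two step sizes are equal, for this toy potential
phi_star = m_pl * np.sqrt(np.sqrt(3.0 * np.pi) * m_pl / (4.0 * np.pi * m))
print(f"crossover near phi ~ {phi_star:.0f} m_pl")
```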
It is difficult to realize chaotic inflation in conventional supergravity models since gravitational corrections to the potential of scalar fields typically render the potential steep for values of $`|\phi |`$ of the order of $`m_{pl}`$ and larger. This prevents the slow rolling condition (26) from being realized. Even if this condition can be met, there are constraints from the amplitude of produced density fluctuations which are much harder to satisfy (see Section 5). Note that it is not impossible to obtain single field potential-driven inflation in supergravity models. For examples which show that this is possible see e.g. .

**Hybrid Inflation**

Hybrid inflation is a solution to the above-mentioned problem of chaotic inflation. Hybrid inflation requires at least two scalar fields to play an important role in the dynamics of the Universe. As a toy model, consider the potential of a theory with two scalar fields $`\phi `$ and $`\psi `$: $$V(\phi ,\psi )=\frac{1}{4}\lambda (M^2-\psi ^2)^2+\frac{1}{2}m^2\phi ^2+\frac{1}{2}\lambda ^{\prime }\psi ^2\phi ^2.$$ (37) For values of $`|\phi |`$ larger than $`\phi _c`$ $$\phi _c=\left(\frac{\lambda }{\lambda ^{\prime }}M^2\right)^{1/2}$$ (38) the minimum of $`\psi `$ is $`\psi =0`$, whereas for smaller values of $`\phi `$ the symmetry $`\psi \to -\psi `$ is broken and the ground state value of $`|\psi |`$ tends to $`M`$. The idea of hybrid inflation is that $`\phi `$ is slowly rolling, like the inflaton field in chaotic inflation, but that the energy density of the Universe is dominated by $`\psi `$, i.e. by the contribution $$V_0=\frac{1}{4}\lambda M^4$$ (39) to the potential. Inflation terminates once $`|\phi |`$ drops below the critical value $`\phi _c`$, at which point $`\psi `$ starts to move (and is not required to move slowly); a short numerical illustration of this critical point is given below. Note that in hybrid inflation $`\phi _c`$ can be much smaller than $`m_{pl}`$ and hence inflation without super-Planck scale values of the fields is possible. It is possible to implement hybrid inflation in the context of supergravity (see e.g. ). For a detailed discussion of inflation in the context of supersymmetric and supergravity models, the reader is referred to .

**Comments**

At the present time there are many realizations of potential-driven inflation, but there is no canonical theory. A lot of attention is being devoted to implementing inflation in the context of unified theories, the prime candidate being superstring theory or M-theory. String theory or M-theory live in 10 or 11 space-time dimensions, respectively. When compactified to 4 space-time dimensions, there exist many moduli fields, scalar fields which describe flat directions in the complicated vacuum manifold of the theory. A lot of attention is now devoted to attempts at implementing inflation using moduli fields (see e.g. and references therein).

Recently, it has been suggested that our space-time is a brane in a higher-dimensional space-time (see for the basic construction). Ways of obtaining inflation on the brane are also under active investigation (see e.g. ). It should also not be forgotten that inflation can arise from the purely gravitational sector of the theory, as in the original model of Starobinsky (see also Section 5), or that it may arise from kinetic terms in an effective action as in pre-big-bang cosmology or in k-inflation .

### 3.4 First Predictions of Inflation

Theories with (almost) exponential inflation generically predict an (almost) scale-invariant spectrum of density fluctuations, as was first realized in and then studied more quantitatively in .
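As promised above, here is a minimal numerical illustration of the hybrid-inflation critical point: the curvature of the potential (37) along $`\psi `$ at $`\psi =0`$ changes sign at the $`\phi _c`$ of (38). All parameter values are arbitrary toy choices.

```python
import numpy as np

lam, lam_p, M = 0.1, 0.1, 1e-3   # toy couplings and scale M (Planck units)

def m2_psi(phi):
    # d^2 V / d psi^2 at psi = 0, from the potential (37)
    return -lam * M**2 + lam_p * phi**2

phi_c = np.sqrt(lam / lam_p) * M   # Eq. (38)
print(f"phi_c = {phi_c:.1e} m_pl (far below the Planck scale)")
for phi in (2.0 * phi_c, 0.9 * phi_c, 0.5 * phi_c):
    m2 = m2_psi(phi)
    print(f"phi = {phi:.2e}: m_psi^2 = {m2:+.2e} -> "
          + ("psi held at 0" if m2 > 0 else "instability: psi rolls to |psi| ~ M"))
```

We now return to the (almost) scale-invariant density fluctuations introduced in the last sentence above.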
Via the Sachs-Wolfe effect , these density perturbations induce CMB anisotropies with a spectrum which is also scale-invariant on large angular scales.

The heuristic picture is as follows (see Fig. 5). If the inflationary period which lasts from $`t_i`$ to $`t_R`$ is almost exponential, then the physical effects which are independent of the small deviations from exponential expansion are time-translation-invariant. This implies, for example, that quantum fluctuations at all times have the same strength when measured on the same physical length scale. If the inhomogeneities are small, they can be described by linear theory, which implies that all Fourier modes $`k`$ evolve independently. The exponential expansion inflates the wavelength of any perturbation. Thus, the wavelength of perturbations generated early in the inflationary phase on length scales smaller than the Hubble radius soon becomes equal to (“exits”) the Hubble radius (this happens at the time $`t_i(k)`$) and continues to increase exponentially. After inflation, the Hubble radius increases as $`t`$ while the physical wavelength of a fluctuation increases only as $`a(t)`$. Thus, eventually the wavelength will cross the Hubble radius again (it will “enter” the Hubble radius) at time $`t_f(k)`$ (see the short numerical sketch below). Thus, it is possible for inflation to generate fluctuations on cosmological scales by causal physics.

Any physical process which obeys the symmetry of the inflationary phase and which generates perturbations will generate fluctuations of equal strength when measured at the time they cross the Hubble radius: $$\frac{\delta M}{M}(k,t_i(k))=\mathrm{const}$$ (40) (independent of $`k`$). Here, $`\delta M(k,t)`$ denotes the r.m.s. mass fluctuation on a length scale $`k^{-1}`$ at time $`t`$. It is generally assumed that causal physics cannot affect the amplitude of fluctuations on super-Hubble scales (see, however, the comments at the end of Section 4.1). Therefore, the magnitude of $`\frac{\delta M}{M}`$ can change only by a factor independent of $`k`$, and hence it follows that $$\frac{\delta M}{M}(k,t_f(k))=\mathrm{const},$$ (41) which is the definition of a scale-invariant spectrum . In terms of quantities usually used by astronomers, (41) corresponds to a power spectrum $$P(k)\sim k.$$ (42)

Analyses from galaxy redshift surveys (see e.g. ) give a power spectrum of density fluctuations which is consistent with a scale-invariant primordial spectrum as given by (41). The COBE observations of CMB anisotropies are also in good agreement with the scale-invariant predictions from exponential inflation models, and in fact already give some bounds on possible deviations from scale-invariance. This agreement between the inflationary paradigm and observations is without doubt a major success of inflationary cosmology. However, it is worth pointing out (see for recent reviews) that topological defect models also generically predict a scale-invariant spectrum of density fluctuations and CMB anisotropies. Luckily, the predictions of defect models and of inflationary theories differ in important ways: the small-scale CMB anisotropies are very different (see e.g. the lectures by Magueijo in these proceedings ), and the relative normalization of density and CMB fluctuations also differs. Observations will in the near future be able to discriminate between the predictions of inflationary cosmology and those of defect models.

## 4 Progress in Inflationary Cosmology

### 4.1 Parametric Resonance and Reheating

Reheating is an important stage in inflationary cosmology.
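The exit/re-entry bookkeeping sketched above is easy to make quantitative in a toy background: exactly exponential expansion for $`N`$ e-folds followed by an instantaneous transition to radiation domination with $`a(t)t^{1/2}`$ and $`H=1/(2t)`$. All numbers below follow from that assumption alone.

```python
import numpy as np

H, N = 1.0, 60.0            # inflationary Hubble rate (toy units) and e-folds
t_R = N / H                 # end of inflation: a(t) = exp(H*t) for t < t_R
a_R = np.exp(H * t_R)       # afterwards (toy): a(t) = a_R*(t/t_R)**0.5, H = 1/(2t)

for k in (10.0, 100.0, 1000.0):             # comoving wavenumbers, a(0) = 1
    t_exit = np.log(k / H) / H              # solves k/a(t) = H during inflation
    t_entry = a_R**2 / (4.0 * k**2 * t_R)   # solves k/a(t) = 1/(2t) afterwards
    print(f"k = {k:6.1f}: exits at H t = {H*t_exit:5.2f}, "
          f"re-enters at H t = {H*t_entry:.2e}")
# longer-wavelength (smaller k) modes exit earlier and re-enter later
```

With this bookkeeping in hand, we now return to the reheating stage.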
It determines the state of the Universe after inflation and has consequences for baryogenesis, defect formation and other aspects of cosmology.

After slow rolling, the inflaton field begins to oscillate uniformly in space about the true vacuum state. Quantum mechanically, this corresponds to a coherent state of $`k=0`$ inflaton particles. Due to interactions of the inflaton with itself and with other fields, the coherent state will decay into quanta of elementary particles. This corresponds to post-inflationary particle production.

Reheating is usually studied using simple scalar field toy models. The one we will adopt here consists of two real scalar fields, the inflaton $`\phi `$ with Lagrangian $$\mathcal{L}_o=\frac{1}{2}\partial _\mu \phi \partial ^\mu \phi -\frac{1}{4}\lambda (\phi ^2-\sigma ^2)^2$$ (43) interacting with a massless scalar field $`\chi `$ representing ordinary matter. The interaction Lagrangian is taken to be $$\mathcal{L}_I=-\frac{1}{2}g^2\phi ^2\chi ^2.$$ (44) Self interactions of $`\chi `$ are neglected. By a change of variables $$\phi =\stackrel{~}{\phi }+\sigma ,$$ (45) the interaction Lagrangian can be written as $$\mathcal{L}_I=-g^2\sigma \stackrel{~}{\phi }\chi ^2-\frac{1}{2}g^2\stackrel{~}{\phi }^2\chi ^2.$$ (46) During the phase of coherent oscillations, the field $`\stackrel{~}{\phi }`$ oscillates with a frequency $$\omega =m_\phi =\lambda ^{1/2}\sigma ,$$ (47) neglecting the expansion of the Universe, although this can be taken into account (see e.g. ).

In the elementary theory of reheating (see e.g. and ), the decay of the inflaton is calculated using first order perturbation theory. According to the Feynman rules, the decay rate $`\mathrm{\Gamma }_B`$ of $`\phi `$ (calculated assuming that the cubic coupling term dominates) is given by $$\mathrm{\Gamma }_B=\frac{g^4\sigma ^2}{8\pi m_\phi }.$$ (48) The decay leads to a decrease in the amplitude of $`\phi `$ (from now on we will drop the tilde sign) which can be approximated by adding an extra damping term to the equation of motion for $`\phi `$: $$\ddot{\phi }+3H\dot{\phi }+\mathrm{\Gamma }_B\dot{\phi }=-V^{\prime }(\phi ).$$ (49) From the above equation it follows that as long as $`H>\mathrm{\Gamma }_B`$, particle production is negligible. During the phase of coherent oscillation of $`\phi `$, the energy density and hence $`H`$ are decreasing. Thus, eventually $`H=\mathrm{\Gamma }_B`$, and at that point reheating occurs (the remaining energy density in $`\phi `$ is very quickly transferred to $`\chi `$ particles). The temperature $`T_R`$ at the completion of reheating can be estimated by computing the temperature of radiation corresponding to the value of $`H`$ at which $`H=\mathrm{\Gamma }_B`$. From the FRW equations it follows that $$T_R\sim (\mathrm{\Gamma }_Bm_{pl})^{1/2}.$$ (50) If we now use the “naturalness” constraint (at one loop order, the cubic interaction term will contribute to $`\lambda `$ by an amount $`\mathrm{\Delta }\lambda \sim g^2`$; a renormalized value of $`\lambda `$ smaller than $`g^2`$ needs to be finely tuned at each order in perturbation theory, which is “unnatural”) $$g^2\lesssim \lambda $$ (51) in conjunction with the constraint on the value of $`\lambda `$ from (85), it follows that for $`\sigma <m_{pl}`$, $$T_R<10^{10}\mathrm{GeV}.$$ (52) This would imply no GUT baryogenesis, no GUT-scale defect production, and no gravitino problems in supersymmetric models with $`m_{3/2}>T_R`$, where $`m_{3/2}`$ is the gravitino mass. As we shall see, these conclusions change radically if we adopt an improved analysis of reheating.
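A quick numerical consistency check of the elementary estimate (48)-(52), using the naturalness value $`g^2=\lambda `$ and the fluctuation bound $`\lambda \sim 10^{-12}`$ from (85). The choice $`\sigma =m_{pl}`$ saturates the stated range and is made here only for illustration.

```python
import numpy as np

m_pl = 1.2e19           # GeV
lam = 1e-12             # quartic coupling, bounded by Eq. (85)
g2 = lam                # "naturalness" constraint (51), saturated
sigma = m_pl            # illustrative value at the top of the range sigma < m_pl

m_phi = np.sqrt(lam) * sigma                        # Eq. (47)
Gamma_B = g2**2 * sigma**2 / (8.0 * np.pi * m_phi)  # Eq. (48)
T_R = np.sqrt(Gamma_B * m_pl)                       # Eq. (50), up to O(1) factors
print(f"m_phi   ~ {m_phi:.1e} GeV")
print(f"Gamma_B ~ {Gamma_B:.1e} GeV")
print(f"T_R     ~ {T_R:.1e} GeV   (below 10^10 GeV, cf. Eq. (52))")
```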
As was first realized in , the above analysis misses an essential point. To see this, we focus on the equation of motion for the matter field $`\chi `$ coupled to the inflaton $`\phi `$ via the interaction Lagrangian $`\mathcal{L}_I`$ of (46). Considering only the cubic interaction term, the equation of motion becomes $$\ddot{\chi }+3H\dot{\chi }-\left(\left(\frac{\nabla }{a}\right)^2-m_\chi ^2-2g^2\sigma \phi \right)\chi =0.$$ (53) Since the equation is linear in $`\chi `$, the equations for the Fourier modes $`\chi _k`$ decouple: $$\ddot{\chi }_k+3H\dot{\chi }_k+(k_p^2+m_\chi ^2+2g^2\sigma \phi )\chi _k=0,$$ (54) where $`k_p=k/a`$ is the time-dependent physical wavenumber.

Let us for the moment neglect the expansion of the Universe. In this case, the friction term in (54) drops out and $`k_p`$ is time-independent, and Equation (54) becomes a harmonic oscillator equation with a time-dependent mass determined by the dynamics of $`\phi `$. In the reheating phase, $`\phi `$ is undergoing oscillations. Thus, the mass in (54) is varying periodically. In the mathematics literature, this equation is called the Mathieu equation. It is well known that there is an instability. In physics, the effect is known as parametric resonance (see e.g. ). At frequencies $`\omega _n`$ corresponding to half integer multiples of the frequency $`\omega `$ of the variation of the mass, i.e. $$\omega _k^2=k_p^2+m_\chi ^2=\left(\frac{n}{2}\omega \right)^2,\quad n=1,2,\mathrm{},$$ (55) there are instability bands with widths $`\mathrm{\Delta }\omega _n`$. For values of $`\omega _k`$ within the instability band, the value of $`\chi _k`$ increases exponentially: $$\chi _k\sim e^{\mu t}\quad \mathrm{with}\quad \mu \sim \frac{g^2\sigma \phi _0}{\omega },$$ (56) with $`\phi _0`$ being the amplitude of the oscillation of $`\phi `$. Since the widths of the instability bands decrease as a power of the (small) coupling constant $`g^2`$ with increasing $`n`$, for practical purposes only the lowest instability band is important. Its width is $$\mathrm{\Delta }\omega _k\sim g\sigma ^{1/2}\phi _0^{1/2}.$$ (57) Note, in particular, that there is no ultraviolet divergence in computing the total energy transfer from the $`\phi `$ to the $`\chi `$ field due to parametric resonance.

It is easy to include the effects of the expansion of the Universe (see e.g. ). The main effect is that the value of $`\omega _k`$ can become time-dependent. Thus, a mode may enter and leave the resonance bands. In this case, any mode will lie in a resonance band for only a finite time. This can reduce the efficiency of parametric resonance, but the amount of reduction is quite dependent on the specific model. This behavior of the modes, however, also has positive aspects: it implies that the calculation of energy transfer is perfectly well-behaved and no infinite time divergences arise. It is now possible to estimate the rate of energy transfer, whose order of magnitude is given by the phase space volume of the lowest instability band multiplied by the rate of growth of the mode function $`\chi _k`$.
Using as an initial condition for $`\chi _k`$ the value $`\chi _k\sim H`$ given by the magnitude of the expected quantum fluctuations, we obtain $$\dot{\rho }\sim \mu \left(\frac{\omega }{2}\right)^2\mathrm{\Delta }\omega _kHe^{\mu t}.$$ (58) From (58) it follows that provided that the condition $$\mu \mathrm{\Delta }t>>1$$ (59) is satisfied, where $`\mathrm{\Delta }t<H^{-1}`$ is the time a mode spends in the instability band, then the energy transfer will proceed fast on the time scale of the expansion of the Universe. In this case, there will be explosive particle production, and the energy density in matter at the end of reheating will be approximately equal to the energy density at the end of inflation.

The above is a summary of the main physics of the modern theory of reheating. The actual analysis can be refined in many ways (see e.g. ). First of all, it is easy to take the expansion of the Universe into account explicitly (by means of a transformation of variables), to employ an exact solution of the background model and to reduce the mode equation for $`\chi _k`$ to a Hill equation, an equation similar to the Mathieu equation which also admits exponential instabilities.

The next improvement consists of treating the $`\chi `$ field quantum mechanically (keeping $`\phi `$ as a classical background field). At this point, the techniques of quantum field theory in a curved background can be applied. There is no need to impose artificial classical initial conditions for $`\chi _k`$. Instead, we may assume that $`\chi `$ starts in its initial vacuum state (excitation of an initial thermal state has been studied in ), and the Bogoliubov mode mixing technique (see e.g. ) can be used to compute the number of particles at late times. Using this improved analysis, we recover the result (58) . Thus, provided that the condition (59) is satisfied, reheating will be explosive.

Working out the time $`\mathrm{\Delta }t`$ that a mode remains in the instability band for our model, expressing $`H`$ in terms of $`\phi _0`$ and $`m_{pl}`$, and $`\omega `$ in terms of $`\sigma `$, and using the naturalness relation $`g^2\sim \lambda `$, the condition for explosive particle production becomes $$\frac{\phi _0m_{pl}}{\sigma ^2}>>1,$$ (60) which is satisfied for all chaotic inflation models with $`\sigma <m_{pl}`$ (recall that slow rolling ends when $`\phi \sim m_{pl}`$ and that therefore the initial amplitude $`\phi _0`$ of oscillation is of the order $`m_{pl}`$). We conclude that rather generically, reheating in chaotic inflation models will be explosive. This implies that the energy density after reheating will be approximately equal to the energy density at the end of the slow rolling period. Therefore, as suggested in and , respectively, GUT scale defects may be produced after reheating and GUT-scale baryogenesis scenarios may be realized, provided that the GUT energy scale is lower than the energy scale at the end of slow rolling.

Note that the state of $`\chi `$ after parametric resonance is not a thermal state. The spectrum consists of high peaks in distinct wave bands. An important question which remains to be studied is how this state thermalizes. For some interesting work on this issue see . As emphasized in and , the large peaks in the spectrum may lead to symmetry restoration and to the efficient production of topological defects (for a differing view on this issue see ).
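The exponential growth (56) in the first instability band is easy to see directly. The sketch below integrates the flat-space version of (54) with $`\phi =\phi _0\mathrm{cos}(\omega t)`$, i.e. a Mathieu equation with $`q=g^2\sigma \phi _0`$; the parameter values are arbitrary toy numbers chosen to lie in the narrow-resonance regime.

```python
import numpy as np
from scipy.integrate import solve_ivp

omega = 1.0
q = 0.05                   # q = g^2 sigma phi_0 (toy value, narrow resonance)
omega_k = omega / 2.0      # center of the n = 1 instability band, Eq. (55)

def rhs(t, y):
    chi, chidot = y
    return [chidot, -(omega_k**2 + 2.0 * q * np.cos(omega * t)) * chi]

sol = solve_ivp(rhs, [0.0, 400.0], [1.0, 0.0], rtol=1e-9, atol=1e-12,
                dense_output=True)
for t in (0.0, 100.0, 200.0, 300.0, 400.0):
    chi, chidot = sol.sol(t)
    env = np.hypot(chi, chidot / omega_k)    # oscillation envelope
    print(f"t = {t:5.0f}: envelope ~ {env:.3e}")
print(f"compare exp(mu*t) with mu ~ q/omega = {q/omega:.2f}")
```

The envelope grows roughly like $`e^{0.05t}`$, matching (56), and only modes inside the narrow band grow at all, which is why the resulting $`\chi `$ spectrum consists of the high, distinct peaks just described.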
Since the state after explosive particle production is not a thermal state, it is useful to follow and call this process “preheating” instead of reheating. Note that the details of the analysis of preheating are quite model-dependent. In fact [54, 60], in most models one does not get the kind of “narrow-band” resonance discussed here, but “wide-band” resonance. In this case, the energy transfer is even more efficient.

Many important questions, e.g. concerning thermalization and back-reaction effects during and after preheating (or parametric resonance), remain to be fully analyzed. Recently it has been argued that parametric resonance may lead to resonant amplification of super-Hubble-scale cosmological perturbations and might possibly even modify some of the first predictions of inflation mentioned in Section 3.4. The point is that in the presence of an oscillating inflaton field, the equation of motion for the cosmological perturbations takes on a similar form to the Mathieu equation discussed above (54). In some models of inflation, the first resonance band includes modes with wavelength larger than the Hubble radius, leading to the apparent amplification of super-Hubble-scale modes which would destroy the scale-invariance of the fluctuations. Such a process would not violate causality since it is driven by the inflaton field which is coherent on super-Hubble scales at the end of inflation as a consequence of the causal dynamics of an inflationary Universe. However, careful analyses for simple single-field and double-field models demonstrated that there is no net growth of the physical amplitude of gravitational fluctuations beyond what the usual theory of cosmological perturbations (see the following subsection) predicts. It is still possible, however, that in more complicated models a net physical effect of parametric resonance of gravitational fluctuations persists .

### 4.2 Quantum Theory of Cosmological Perturbations

On scales larger than the Hubble radius $`(\lambda >t)`$ the Newtonian theory of cosmological perturbations obviously is inapplicable, and a general relativistic analysis is needed. On these scales, matter is essentially frozen in comoving coordinates. However, space-time fluctuations can still increase in amplitude.

In principle, it is straightforward to work out the general relativistic theory of linear fluctuations . We linearize the Einstein equations $$G_{\mu \nu }=8\pi GT_{\mu \nu }$$ (61) (where $`G_{\mu \nu }`$ is the Einstein tensor associated with the space-time metric $`g_{\mu \nu }`$, and $`T_{\mu \nu }`$ is the energy-momentum tensor of matter) about an expanding FRW background $`(g_{\mu \nu }^{(0)},\phi ^{(0)})`$: $$g_{\mu \nu }(\underset{¯}{x},t)=g_{\mu \nu }^{(0)}(t)+h_{\mu \nu }(\underset{¯}{x},t)$$ (62) $$\phi (\underset{¯}{x},t)=\phi ^{(0)}(t)+\delta \phi (\underset{¯}{x},t)$$ (63) and pick out the terms linear in $`h_{\mu \nu }`$ and $`\delta \phi `$ to obtain $$\delta G_{\mu \nu }=8\pi G\delta T_{\mu \nu }.$$ (64) In the above, $`h_{\mu \nu }`$ is the perturbation in the metric and $`\delta \phi `$ is the fluctuation of the matter field $`\phi `$. We have denoted all matter fields collectively by $`\phi `$.

In practice, there are many complications which make this analysis highly nontrivial. The first problem is “gauge invariance”. Imagine starting with a homogeneous FRW cosmology and introducing new coordinates which mix $`\underset{¯}{x}`$ and $`t`$.
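The thought experiment just posed can be carried out explicitly. The sketch below (a hypothetical example, with one spatial dimension for brevity) takes the homogeneous metric $`ds^2=dt^2-a(t)^2dx^2`$ and substitutes a new time coordinate through $`t_{old}=t+ϵf(x)`$; the first-order change of the line element is pure gauge, yet looks inhomogeneous.

```python
import sympy as sp

t, x, eps, dt, dx = sp.symbols('t x epsilon dt dx')
a, f = sp.Function('a'), sp.Function('f')

# old coordinates expressed through the new ones: t_old = t + eps*f(x)
t_old = t + eps * f(x)
dt_old = dt + eps * sp.diff(f(x), x) * dx          # differential of t_old
ds2 = dt_old**2 - a(t_old)**2 * dx**2              # same geometry, new coordinates

# first-order change of the metric coefficients
delta_g = sp.expand(sp.diff(ds2, eps).subs(eps, 0).doit())
print(delta_g)
# expected (up to ordering):
#   2*dt*dx*Derivative(f(x), x) - 2*dx**2*f(x)*a(t)*Derivative(a(t), t)
```

An $`x`$-dependent $`dt\,dx`$ cross term and an $`x`$-dependent correction to the $`dx^2`$ coefficient appear, although the space-time is still exactly FRW.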
In terms of the new coordinates, the metric now looks inhomogeneous. The inhomogeneous piece of the metric, however, must be a pure coordinate (or ”gauge”) artefact. Thus, when analyzing relativistic perturbations, care must be taken to factor out effects due to coordinate transformations.

There are various methods of dealing with gauge artefacts. The simplest and most physical approach is to focus on gauge invariant variables, i.e., combinations of the metric and matter perturbations which are invariant under linear coordinate transformations.

The gauge invariant theory of cosmological perturbations is in principle straightforward, although technically rather tedious. In the following I will summarize the main steps and refer the reader to for the details and further references (see also for a pedagogical introduction and for other approaches). We consider perturbations about a spatially flat Friedmann-Robertson-Walker metric $$ds^2=a^2(\eta )(d\eta ^2-d\underset{¯}{x}^2)$$ (65) where $`\eta `$ is conformal time (related to cosmic time $`t`$ by $`a(\eta )d\eta =dt`$). At the linear level, metric perturbations can be decomposed into scalar modes, vector modes and tensor modes (gravitational waves). In the following, we will focus on the scalar modes since they are the only ones which couple to energy density and pressure. A scalar metric perturbation (see for a precise definition) can be written in terms of four free functions of space and time: $$\delta g_{\mu \nu }=a^2(\eta )\left(\begin{array}{cc}2\varphi & -B_{,i}\\ -B_{,i}& 2(\psi \delta _{ij}-E_{,ij})\end{array}\right).$$ (66)

The next step is to consider infinitesimal coordinate transformations which preserve the scalar nature of $`\delta g_{\mu \nu }`$, and to calculate the induced transformations of $`\varphi ,\psi ,B`$ and $`E`$. Then we find invariant combinations to linear order. (Note that there are in general no combinations which are invariant to all orders .) After some algebra, it follows that $$\mathrm{\Phi }=\varphi +a^{-1}[(B-E^{\prime })a]^{\prime }$$ (67) $$\mathrm{\Psi }=\psi -\frac{a^{\prime }}{a}(B-E^{\prime })$$ (68) are two invariant combinations (a prime denotes differentiation with respect to $`\eta `$).

Perhaps the simplest way to derive the equations of motion for gauge invariant variables is to consider the linearized Einstein equations (64) and to write them out in the longitudinal gauge defined by $`B=E=0`$, in which $`\mathrm{\Phi }=\varphi `$ and $`\mathrm{\Psi }=\psi `$, to directly obtain gauge invariant equations.

For several types of matter, in particular for scalar field matter, $`\delta T_j^i\propto \delta _j^i`$ which implies $`\mathrm{\Phi }=\mathrm{\Psi }`$. Hence, in this case the scalar-type cosmological perturbations can be described by a single gauge invariant variable. The equation of motion takes the form $$\dot{\xi }=O\left(\frac{k}{aH}\right)^2H\xi $$ (69) where $$\xi =\frac{2}{3}\frac{H^{-1}\dot{\mathrm{\Phi }}+\mathrm{\Phi }}{1+w}+\mathrm{\Phi }.$$ (70) The variable $`w=p/\rho `$ (with $`p`$ and $`\rho `$ background pressure and energy density respectively) is a measure of the background equation of state. In particular, on scales larger than the Hubble radius, the right hand side of (69) is negligible, and hence $`\xi `$ is constant. If the equation of state of matter is constant, i.e., $`w=\mathrm{const}`$, then $`\dot{\xi }=0`$ implies that the relativistic potential is time-independent on scales larger than the Hubble radius, i.e. $`\mathrm{\Phi }(t)=\mathrm{const}`$.
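That (67) and (68) are indeed invariant can be verified symbolically. The sketch below assumes the standard transformation rules of $`\varphi ,\psi ,B,E`$ under a linear scalar coordinate change generated by $`(\xi ^0,\zeta ^{,i})`$; these rules are taken from the standard literature, not from the text above.

```python
import sympy as sp

eta = sp.symbols('eta')
a, phi, psi, B, E, xi0, zeta = [sp.Function(s)(eta) for s in
                                ('a', 'phi', 'psi', 'B', 'E', 'xi0', 'zeta')]

Phi = lambda p, q, b, e: p + ((b - e.diff(eta)) * a).diff(eta) / a     # Eq. (67)
Psi = lambda p, q, b, e: q - (a.diff(eta) / a) * (b - e.diff(eta))     # Eq. (68)

# assumed linear gauge transformation (standard scalar-perturbation rules)
phi_t = phi - (a.diff(eta) / a) * xi0 - xi0.diff(eta)
psi_t = psi + (a.diff(eta) / a) * xi0
B_t = B + xi0 - zeta.diff(eta)
E_t = E - zeta

print(sp.simplify(Phi(phi_t, psi_t, B_t, E_t) - Phi(phi, psi, B, E)))   # -> 0
print(sp.simplify(Psi(phi_t, psi_t, B_t, E_t) - Psi(phi, psi, B, E)))   # -> 0
```

Both differences vanish identically. With gauge invariance secured, consider next what happens when the equation of state changes.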
During a transition from an initial phase with $`w=w_i`$ to a phase with $`w=w_f`$, $`\mathrm{\Phi }`$ changes. In many cases, a good approximation to the dynamics given by (69) is $$\frac{\mathrm{\Phi }}{1+w}(t_i)=\frac{\mathrm{\Phi }}{1+w}(t_f).$$ (71) In order to make contact with matter perturbations and Newtonian intuition, it is important to remark that, as a consequence of the Einstein constraint equations, at Hubble radius crossing $`\mathrm{\Phi }`$ is a measure of the fractional density fluctuations: $$\mathrm{\Phi }(k,t_H(k))\sim \frac{\delta \rho }{\rho }(k,t_H(k)).$$ (72)

As mentioned earlier, the primordial fluctuations in an inflationary cosmology are generated by quantum fluctuations. What follows is a very brief description of the unified analysis of the quantum generation and evolution of perturbations in an inflationary Universe (for a detailed review see ). The basic point is that at the linearized level, the equations describing both gravitational and matter perturbations can be quantized in a consistent way. The use of gauge invariant variables makes the analysis both physically clear and computationally simple.

The first step of this analysis is to consider the action for the linear perturbations in a background homogeneous and isotropic Universe, i.e. to expand the gravitational and matter action $`S(g_{\mu \nu },\phi )`$ to quadratic order in the fluctuation variables $`h_{\mu \nu },\delta \phi `$: $$S(g_{\mu \nu },\phi )=S_0(g_{\mu \nu }^{(0)},\phi ^{(0)})+S_2(h_{\mu \nu },\delta \phi ;g_{\mu \nu }^{(0)},\phi ^{(0)}),$$ (73) where $`S_2`$ is quadratic in the perturbation variables. Focusing on the scalar perturbations, it turns out that one can express the resulting $`S_2`$ in terms of the joint metric and matter gauge invariant variable $$v=a\left(\delta \phi +\frac{\phi ^{(0)\prime }}{\mathcal{H}}\mathrm{\Phi }\right)$$ (74) describing the fluctuations. In the above, a prime denotes the derivative with respect to conformal time, and $`\mathcal{H}=a^{\prime }/a`$. It turns out that, after a lot of algebra, the action $`S_2`$ reduces to the action of a single gauge invariant free scalar field (namely $`v`$) with a time dependent mass (the time dependence reflects the expansion of the background space-time) $$S_2=\frac{1}{2}\int d\eta \,d^3x\left(v^{\prime 2}-(\nabla v)^2+\frac{z^{\prime \prime }}{z}v^2\right),$$ (75) with $$z=\frac{a\phi _0^{\prime }}{\mathcal{H}}.$$ (76)

This result is not surprising. Based on the study of classical cosmological perturbations, we know that there is only one field degree of freedom for the scalar perturbations. Since at the linearized level there are no mode interactions, the action for this field must be that of a free scalar field. The action thus has the same form as the action for a free scalar matter field in a time dependent gravitational or electromagnetic background, and we can use standard methods to quantize this theory (see e.g. ). If we employ canonical quantization, then the mode functions of the field operator obey the same classical equations as we derived in the gauge-invariant analysis of relativistic perturbations. The time dependence of the mass is reflected in the nontrivial form of the solutions of the mode equations. The mode equations have growing modes which correspond to particle production or equivalently to the generation and amplification of fluctuations. We can start the system off (e.g. at the beginning of inflation) in the vacuum state (defined as a state with no particles with respect to a local comoving observer).
The state defined this way will not be the vacuum state from the point of view of an observer at a later time. The Bogoliubov mode mixing technique (see e.g. for a detailed exposition) can be used to calculate the number density of particles at a later time. In particular, expectation values of field operators such as the power spectrum can be computed.

The resulting power spectrum gives the following result for the mass perturbations at time $`t_i(k)`$: $$\left(\frac{\delta M}{M}\right)^2(k,t_i(k))\sim k^3\left(\frac{V^{\prime }(\phi _0)\delta \stackrel{~}{\phi }(k,t_i(k))}{\rho _0}\right)^2\sim \left(\frac{V^{\prime }(\phi _0)H}{\rho _0}\right)^2.$$ (77) If the background scalar field is rolling slowly, then $$V^{\prime }(\phi _0(t_i(k)))=3H|\dot{\phi }_0(t_i(k))|$$ (78) and $$(1+p/\rho )(t_i(k))\simeq \rho _0^{-1}\dot{\phi }_0^2(t_i(k)).$$ (79) Combining (71), (77), (78) and (79), we get $$\frac{\delta M}{M}(k,t_f(k))\sim \frac{3H^2|\dot{\phi }_0(t_i(k))|}{\dot{\phi }_0^2(t_i(k))}=\frac{3H^2}{|\dot{\phi }_0(t_i(k))|}.$$ (80) This result can now be evaluated for specific models of inflation to find the conditions on the particle physics parameters which give a value $$\frac{\delta M}{M}(k,t_f(k))\sim 10^{-5},$$ (81) which is required if quantum fluctuations from inflation are to provide the seeds for galaxy formation and agree with the CMB anisotropy limits.

For chaotic inflation with a potential $$V(\phi )=\frac{1}{2}m^2\phi ^2,$$ (82) we can solve the slow rolling equations for the inflaton to obtain $$\frac{\delta M}{M}(k,t_f(k))\sim 10^2\frac{m}{m_{pl}},$$ (83) which implies that $`m\sim 10^{13}\mathrm{GeV}`$ to agree with (81). Similarly, for a quartic potential $$V(\phi )=\frac{1}{4}\lambda \phi ^4$$ (84) we obtain $$\frac{\delta M}{M}(k,t_f(k))\sim 10^2\lambda ^{1/2},$$ (85) which requires $`\lambda \lesssim 10^{-12}`$ in order not to conflict with observations.

Demanding that (83) and (85) yield the observed amplitude of the density perturbations requires the presence of small parameters in the particle physics models. It has been shown that, quite generally, small parameters are required in any particle physics model if potential-driven inflation is to solve the fluctuation problem.

To summarize the main results of the analysis of density fluctuations in inflationary cosmology:

1. Quantum vacuum fluctuations in the de Sitter phase of an inflationary Universe are the source of perturbations.

2. As a consequence of the change in the background equation of state, the evolution outside the Hubble radius produces a large amplification of the perturbations. In fact, unless the particle physics model contains very small coupling constants, the predicted fluctuations are in excess of those allowed by the bounds on cosmic microwave anisotropies.

3. The quantum generation and classical evolution of fluctuations can be treated in a unified manner. The formalism is no more complicated than the study of a free scalar field in a time dependent background.

4. Inflationary Universe models generically produce an approximately scale invariant Harrison-Zel’dovich spectrum $$\frac{\delta M}{M}(k,t_f(k))\sim \mathrm{const}.$$ (86)

It is not hard to construct models which give a different spectrum. All that is required is a significant change in $`H`$ during the period of inflation.

## 5 Problems of Inflationary Cosmology

### 5.1 Fluctuation Problem

A generic problem for all realizations of potential-driven inflation studied up to now concerns the amplitude of the density perturbations which are induced by quantum fluctuations during the period of exponential expansion .
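The numbers behind this problem follow directly from (83) and (85). A minimal evaluation, taking $`\delta M/M\sim 10^{-4}`$ at Hubble re-entry as the normalization (the value quoted in the next paragraph):

```python
target = 1e-4     # delta M / M at Hubble re-entry (COBE/cluster normalization)
m_pl_GeV = 1.2e19

m = (target / 1e2) * m_pl_GeV   # invert Eq. (83): delta M/M ~ 10^2 m/m_pl
lam = (target / 1e2)**2         # invert Eq. (85): delta M/M ~ 10^2 lambda^(1/2)
print(f"quadratic potential (82): m      ~ {m:.1e} GeV")
print(f"quartic potential   (84): lambda ~ {lam:.1e}")
```

This reproduces the values quoted above and below, $`m\sim 10^{13}\mathrm{GeV}`$ and $`\lambda \sim 10^{-12}`$, up to the order-one factors suppressed in (80).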
From the amplitude of CMB anisotropies measured by COBE, and from the present amplitude of density inhomogeneities on length scales of clusters of galaxies, it follows that the amplitude of the mass fluctuations $`\delta M/M`$ on a length scale given by the comoving wavenumber $`k`$ at the time $`t_H(k)`$ when that scale crosses the Hubble radius in the FRW period is of the order $`10^{-4}`$. However, as was discussed in detail in the previous section, the present realizations of inflation based on scalar quantum field matter generically predict a much larger value of these fluctuations, unless a parameter in the scalar field potential takes on a very small value. For example, in a single field chaotic inflationary model with potential given by (84) the mass fluctuations generated are of the order $`10^2\lambda ^{1/2}`$ (see (85)). Thus, in order not to conflict with observations, a value of $`\lambda `$ smaller than $`10^{-12}`$ is required. There have been many attempts to justify such small parameters based on specific particle physics models, but no single convincing model has emerged.

### 5.2 Super-Planck-Scale Physics Problem

In many models of inflation, in particular in chaotic inflation, the period of inflation is so long that comoving scales of cosmological interest today corresponded to a physical wavelength much smaller than the Planck length at the beginning of inflation. In extrapolating the evolution of cosmological perturbations according to linear theory to these very early times, we are implicitly making the assumption that the theory remains perturbative to arbitrarily high energies. If there were completely new physics at the Planck scale, the predictions might change. For example, if there were a sharp ultraviolet cutoff in the theory, then, if inflation lasts many e-foldings, the modes which represent fluctuations on galactic scales today would not be present in the theory since their wavelength would have been smaller than the cutoff length at the beginning of inflation. A similar concern about black hole Hawking radiation has been raised in .

As an example of how Planck-scale physics may dramatically alter the usual predictions of inflation, consider “Pre-big-bang Cosmology” which can be viewed as a toy model for how to include some effects of string theory in cosmological considerations. The pre-big-bang scenario is based on a dilaton-dominated super-exponentially expanding Universe smoothly connecting to an expanding FRW Universe dominated by matter and radiation. In this model of the early Universe, scalar metric perturbations on large scales are highly suppressed in the absence of excited axionic degrees of freedom .

### 5.3 Singularity Problem

Scalar field-driven inflation does not eliminate singularities from cosmology. Although the standard assumptions of the Penrose-Hawking theorems break down if matter has an equation of state with negative pressure, as is the case during inflation, nevertheless it can be shown that an initial singularity persists in inflationary cosmology . This implies that the theory is incomplete. In particular, the physical initial value problem is not defined.

### 5.4 Cosmological Constant Problem

Since the cosmological constant acts as an effective energy density, its value is bounded from above by the present energy density of the Universe. In Planck units, the constraint on the effective cosmological constant $`\mathrm{\Lambda }_{eff}`$ is (see e.g.
) $$\frac{\mathrm{\Lambda }_{eff}}{m_{pl}^4}\lesssim 10^{-122}.$$ (87) This constraint applies both to the bare cosmological constant and to any matter contribution which acts as an effective cosmological constant. The true vacuum value of the potential $`V(\phi )`$ acts as an effective cosmological constant. Its value is not constrained by any particle physics requirements (in the absence of special symmetries). The cosmological constant problem is thus even more acute in inflationary cosmology than it usually is. The same unknown mechanism which must act to shift the potential such that inflation occurs in the false vacuum must also adjust the potential to vanish in the true vacuum. Supersymmetric theories may provide a resolution of this problem, since unbroken supersymmetry forces $`V(\phi )=0`$ in the supersymmetric vacuum. However, supersymmetry breaking will induce a nonvanishing $`V(\phi )`$ in the true vacuum.

## 6 New Avenues

In the light of the problems of potential-driven inflation discussed in the previous sections, many cosmologists have begun thinking about new avenues towards early Universe cosmology which, while maintaining (some of) the successes of inflation, address and resolve some of its difficulties.

One approach which has received a lot of recent attention is pre-big-bang cosmology . A nice feature of this theory is that the mechanism of inflation is completely independent of a potential and thus independent of the cosmological constant issue. The scenario, however, is confronted with a graceful exit problem , and the initial conditions need to be very special (see, however, the discussion in ).

String theory may lead to a natural resolution of some of the puzzles of inflationary cosmology. This is an area of active research. The reader is referred to for a review of recent studies of obtaining inflation with moduli fields, and to for attempts to obtain inflation with branes. Below, three more conventional approaches to addressing some of the problems of inflation will be summarized.

### 6.1 Inflation from Condensates

At the present time there is no direct observational evidence for the existence of fundamental scalar fields in nature (in spite of the fact that most attractive unified theories of nature require the existence of scalar fields in the low energy effective Lagrangian). Scalar fields were initially introduced in particle physics to yield an order parameter for the symmetry breaking phase transition. Many phase transitions exist in nature; however, in all cases, the order parameter is a condensate. Hence, it is useful to consider the possibility of obtaining inflation using condensates, and in particular to ask if this would yield a different inflationary scenario.

The analysis of a theory with condensates is intrinsically non-perturbative. The expectation value of the Hamiltonian $`H`$ of the theory contains terms with arbitrarily high powers of the expectation value $`\phi `$ of the condensate. A recent study of the possibility of obtaining inflation in a theory with condensates was undertaken in (see also for some earlier work). Instead of truncating the expansion of $`H`$ at some arbitrary order, the assumption was made that the expansion of $`H`$ in powers of $`\phi `$ is asymptotic and, specifically, Borel summable (on general grounds one expects that the expansion will be asymptotic - see e.g.
) $$H=\sum _{n=0}^{\infty }n!\,(-1)^na_n\phi ^n=\int _0^{\infty }ds\,\frac{f(s)}{s(sm_{pl}+\phi )}e^{-1/s}.$$ (88)

The cosmological scenario is as follows: the expectation value $`\phi `$ vanishes at times before the phase transition when the condensate forms. Afterwards, $`\phi `$ evolves according to the classical equations of motion with the potential given by (88) (we have no information about the form of the kinetic term but will assume that it takes the standard form). Hence, the initial conditions for the evolution of $`\phi `$ are like those of new inflation. It can be easily checked that the slow rolling conditions are satisfied. However, the slow roll conditions remain satisfied for all values of $`\phi `$, thus leading to a graceful exit problem - inflation will never terminate. However, we have neglected the fact that correlation functions, in particular $`<\phi ^2>`$, are in general infrared divergent in the de Sitter phase of an expanding Universe. It is natural to introduce a phenomenological cutoff parameter $`ϵ(t)`$ into the vacuum expectation value (VEV), and to replace $`\phi `$ by $`\phi /ϵ`$. It is natural to expect that $`ϵ(t)\sim H(t)`$ (see e.g. ). Hence, the dynamical system consists of two coupled functions of time $`\phi `$ and $`ϵ`$. A careful analysis shows that a graceful exit from inflation occurs precisely if $`H`$ tends to zero when $`\phi `$ tends to large values. As is evident, the scenario for inflation in this composite field model is very different from the standard potential-driven inflationary scenario. It is particularly interesting that the graceful exit problem from inflation is linked to the cosmological constant problem.

### 6.2 Nonsingular Universe Construction

A natural approach to resolving the singularity problem of general relativity is to consider an effective theory of gravity which contains higher order terms, in addition to the Ricci scalar of the Einstein action. This approach is well motivated, since any effective action for classical gravity obtained from string theory, quantum gravity, or by integrating out matter fields, will contain higher derivative terms. Thus, it is quite natural to consider higher derivative effective gravity theories when studying the properties of space-time at large curvatures. Most higher derivative gravity theories have much worse singularity problems than Einstein’s theory. However, it is not unreasonable to expect that in the fundamental theory of nature, be it string theory or some other theory, the curvature of space-time is limited. In Ref. the hypothesis was made that when the limiting curvature is reached, the geometry must approach that of a maximally symmetric space-time, namely de Sitter space. The question now becomes whether it is possible to find a class of higher derivative effective actions for gravity which have the property that at large curvatures the solutions approach de Sitter space. A nonsingular Universe construction which achieves this goal was proposed in Refs. . It is based on adding to the Einstein action a particular combination of quadratic invariants of the Riemann tensor chosen such that the invariant vanishes only in de Sitter space-times. This invariant is coupled to the Einstein action via a Lagrange multiplier field in a way that the Lagrange multiplier constraint equation forces the invariant to zero at high curvatures. Thus, the metric becomes de Sitter and hence explicitly nonsingular.
If successful, the above construction will have some very appealing consequences. Consider, for example, a collapsing spatially homogeneous Universe. According to Einstein’s theory, this Universe will collapse in a finite proper time to a final “big crunch” singularity. In the new theory, however, the Universe will approach a de Sitter model as the curvature increases. If the Universe is closed, there will be a de Sitter bounce followed by re-expansion. Similarly, spherically symmetric vacuum solutions of the new equations of motion will presumably be nonsingular, i.e., black holes would have no singularities in their centers. This would have interesting consequences for the black hole information loss problem. In two dimensions, this construction has been successfully realized .

The nonsingular Universe construction of and its applications to dilaton cosmology are reviewed in an accompanying article in these proceedings . Here is just a very brief summary of the points relevant to the problems listed in Section 5. The procedure for obtaining a nonsingular Universe theory is based on a Lagrange multiplier construction. Starting from the Einstein action, one can introduce a Lagrange multiplier $`\phi _1`$ coupled to the Ricci scalar $`R`$ to obtain a theory with bounded $`R`$: $$S=\int d^4x\sqrt{-g}(R+\phi _1R+V_1(\phi _1)),$$ (89) where the potential $`V_1(\phi _1)`$ satisfies the asymptotic conditions coming from demanding that at small values of $`\phi _1`$ (small curvature), the Einstein theory is recovered, and that at large values of $`\phi _1`$ the Ricci scalar tends to a constant. However, this action is insufficient to obtain a nonsingular gravity theory. For example, singular solutions of the Einstein equations with $`R=0`$ are not affected at all.

The minimal requirements for a nonsingular theory are that all curvature invariants remain bounded and the space-time manifold is geodesically complete. It is possible to achieve this by a two-step procedure. First, we choose one curvature invariant $`I_1(g_{\mu \nu })`$ (e.g. $`I_1=R`$ in (89)) and demand that it be explicitly bounded by the construction of (89). In a second step, we demand that as $`I_1(g_{\mu \nu })`$ approaches its limiting value, the metric $`g_{\mu \nu }`$ approach the de Sitter metric $`g_{\mu \nu }^{DS}`$, a definite nonsingular metric with maximal symmetry. In this case, all curvature invariants are automatically bounded (they approach their de Sitter values), and the space-time can be extended to be geodesically complete.

The second step can be implemented by another Lagrange multiplier construction . Consider a curvature invariant $`I_2(g_{\mu \nu })`$ with the property that $$I_2(g_{\mu \nu })=0\Leftrightarrow g_{\mu \nu }=g_{\mu \nu }^{DS}.$$ (90) Next, introduce a second Lagrange multiplier field $`\phi _2`$ which couples to $`I_2`$ and choose a potential $`V_2(\phi _2)`$ which forces $`I_2`$ to zero at large $`|\phi _2|`$: $$S=\int d^4x\sqrt{-g}[R+\phi _1I_1+V_1(\phi _1)+\phi _2I_2+V_2(\phi _2)],$$ (91) with asymptotic conditions $$V_2(\phi _2)\to \mathrm{const}\quad \mathrm{as}\quad |\phi _2|\to \infty $$ (92) $$V_2(\phi _2)\sim \phi _2^2\quad \mathrm{as}\quad |\phi _2|\to 0,$$ (93) for $`V_2(\phi _2)`$. The first constraint forces $`I_2`$ to zero, the second is required in order to obtain the correct low curvature limit.
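The asymptotic conditions (92) and (93) are easy to satisfy simultaneously. A minimal toy profile, chosen here purely for illustration (Eq. (97) below has exactly this shape):

```python
def V2_shape(p):
    # toy potential profile: quadratic near 0, saturating at large |phi_2|
    return p**2 / (1.0 + p**2)

for p in (1e-3, 1e-1, 1.0, 1e1, 1e3):
    print(f"|phi_2| = {p:7.1e}: shape = {V2_shape(p):.6f}")
# -> ~ phi_2^2 for small |phi_2| (Eq. 93), -> constant for large |phi_2| (Eq. 92)
```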
The invariant $$I_2=(4R_{\mu \nu }R^{\mu \nu }-R^2+C^2)^{1/2},$$ (94) singles out the de Sitter metric among all homogeneous and isotropic metrics (in which case adding $`C^2`$, the Weyl tensor square, is superfluous), all homogeneous and anisotropic metrics, and all radially symmetric metrics. As a specific example one can consider the action $$S=\int d^4x\sqrt{-g}\left[R+\phi _1R-(\phi _2+\frac{3}{\sqrt{2}}\phi _1)I_2^{1/2}+V_1(\phi _1)+V_2(\phi _2)\right]$$ (95) with $$V_1(\phi _1)=12H_0^2\frac{\phi _1^2}{1+\phi _1}\left(1-\frac{\mathrm{ln}(1+\phi _1)}{1+\phi _1}\right)$$ (96) $$V_2(\phi _2)=2\sqrt{3}H_0^2\frac{\phi _2^2}{1+\phi _2^2}.$$ (97) It can be shown that all solutions of the equations of motion which follow from this action are nonsingular. They are either periodic about Minkowski space-time $`(\phi _1,\phi _2)=(0,0)`$ or else asymptotically approach de Sitter space ($`|\phi _2|\to \infty `$).

One of the most interesting properties of this theory is asymptotic freedom , i.e., the coupling between matter and gravity goes to zero at high curvatures. It is easy to add matter (e.g., dust, radiation or a scalar field) to the gravitational action in the standard way. One finds that in the asymptotic de Sitter regions, the trajectories of the solutions projected onto the $`(\phi _1,\phi _2)`$ plane are unchanged by adding matter. This applies, for example, in a phase of de Sitter contraction when the matter energy density is increasing exponentially but does not affect the metric. The physical reason for asymptotic freedom is obvious: in the asymptotic regions of phase space, the space-time curvature approaches its maximal value and thus cannot be changed even by adding an arbitrarily high matter energy density. Hence, there is the possibility that this theory will admit a natural suppression mechanism for cosmological fluctuations. If this were the case, then the solution of the singularity problem would simultaneously help resolve the fluctuation problem of potential-driven inflationary cosmology.

### 6.3 Back-Reaction of Cosmological Perturbations

The linear theory of cosmological perturbations in inflationary cosmology is well studied. However, effects beyond linear order have received very little attention. Beyond linear order, perturbations can affect the background in which they propagate, an effect well known from early studies of gravitational waves. As will be summarized below, the back-reaction of cosmological perturbations in an exponentially expanding Universe acts like a negative cosmological constant, as first realized in the context of studies of gravitational waves in de Sitter space in .

Gravitational back-reaction of cosmological perturbations concerns itself with the evolution of space-times which consist of small fluctuations about a symmetric Friedmann-Robertson-Walker space-time with metric $`g_{\mu \nu }^{(0)}`$. The goal is to study the evolution of spatial averages of observables in the perturbed space-time. In linear theory, such averaged quantities evolve like the corresponding variables in the background space-time. However, beyond linear theory perturbations have an effect on the averaged quantities. In the case of gravitational waves, this effect is well known : gravitational waves carry energy and momentum which affect the background in which they propagate. Here, we shall focus on scalar metric perturbations.
The analysis of gravitational back-reaction is related to early work by Brill, Hartle and Isaacson , among others. The idea is to expand the Einstein equations to second order in the perturbations, to assume that the first order terms satisfy the equations of motion for linearized cosmological perturbations (hence these terms cancel), to take the spatial average of the remaining terms, and to regard the resulting equations as equations for a new homogeneous metric $`g_{\mu \nu }^{(0,br)}`$ which includes the effect of the perturbations to quadratic order: $$G_{\mu \nu }(g_{\alpha \beta }^{(0,br)})=8\pi G\left[T_{\mu \nu }^{(0)}+\tau _{\mu \nu }\right]$$ (98) where the effective energy-momentum tensor $`\tau _{\mu \nu }`$ of gravitational back-reaction contains the terms resulting from spatial averaging of the second order metric and matter perturbations: $$\tau _{\mu \nu }=<T_{\mu \nu }^{(2)}-\frac{1}{8\pi G}G_{\mu \nu }^{(2)}>,$$ (99) where pointed brackets stand for spatial averaging, and the superscripts indicate the order in perturbations.

As formulated in (98) and (99), the back-reaction problem is not independent of the choice of coordinates in space-time and hence is not well defined. It is possible to take a homogeneous and isotropic space-time, choose different coordinates, and obtain a nonvanishing $`\tau _{\mu \nu }`$. This “gauge” problem is related to the fact that in the above prescription, the hypersurface over which the average is taken depends on the choice of coordinates.

The key to resolving the gauge problem is to realize that to second order in perturbations, the background variables change. A gauge independent form of the back-reaction equation (98) can hence be derived by defining background and perturbation variables $`Q=Q^{(0)}+\delta Q`$ which do not change under linear coordinate transformations. Here, $`Q`$ represents collectively both metric and matter variables. The gauge-invariant form of the back-reaction equation then looks formally identical to (98), except that all variables are replaced by the corresponding gauge-invariant ones. We will follow the notation of , and use as gauge-invariant perturbation variables the Bardeen potentials $`\mathrm{\Phi }`$ and $`\mathrm{\Psi }`$ which in longitudinal gauge coincide with the actual metric perturbations $`\delta g_{\mu \nu }`$. Calculations hence simplify greatly if we work directly in longitudinal gauge. Recently, these calculations have been confirmed by working in a completely different gauge, making use of the covariant approach.

In , the effective energy-momentum tensor $`\tau _{\mu \nu }`$ of gravitational back-reaction was evaluated for long wavelength fluctuations in an inflationary Universe in which the matter responsible for inflation is a scalar field $`\phi `$ with the potential $$V(\phi )=\frac{1}{2}m^2\phi ^2.$$ (100) Since there is no anisotropic stress in this model, in longitudinal gauge the perturbed metric can be written in terms of a single gravitational potential $`\varphi `$: $$ds^2=(1+2\varphi )dt^2-a(t)^2(1-2\varphi )\delta _{ij}dx^idx^j,$$ (101) where $`a(t)`$ is the cosmological scale factor. It is now straightforward to compute $`G_{\mu \nu }^{(2)}`$ and $`T_{\mu \nu }^{(2)}`$ in terms of the background fields and the metric and matter fluctuations $`\varphi `$ and $`\delta \phi `$. By taking averages and making use of (99), the effective energy-momentum tensor $`\tau _{\mu \nu }`$ can be computed .
The general expressions for the effective energy density $`\rho ^{(2)}=\tau _0^0`$ and effective pressure $`p^{(2)}=-\frac{1}{3}\tau _i^i`$ involve many terms. However, they greatly simplify if we consider perturbations with wavelength greater than the Hubble radius. In this case, all terms involving spatial gradients are negligible. From the theory of linear cosmological perturbations (see e.g. ) it follows that on scales larger than the Hubble radius the time derivative of $`\varphi `$ is also negligible as long as the equation of state of the background does not change.

The Einstein constraint equations relate the two perturbation variables $`\varphi `$ and $`\delta \phi `$, enabling scalar metric and matter fluctuations to be described in terms of a single gauge-invariant potential $`\varphi `$. During the slow-rolling period of the inflationary Universe, the constraint equation takes on a very simple form and implies that $`\varphi `$ and $`\delta \phi `$ are proportional. The upshot of these considerations is that $`\tau _{\mu \nu }`$ is proportional to the two point function $`<\varphi ^2>`$, with a coefficient tensor which depends on the background dynamics. In the slow-rolling approximation we obtain $$\rho ^{(2)}\simeq -4V<\varphi ^2>$$ (102) and $$p^{(2)}=-\rho ^{(2)}.$$ (103) This demonstrates that the effective energy-momentum tensor of long-wavelength cosmological perturbations has the same form as a negative cosmological constant. This back-reaction mechanism may thus relate closely to the cosmological constant problem .

## 7 Conclusions

Inflationary cosmology is an attractive scenario. It solves some problems of standard cosmology and leads to the possibility of a causal theory of structure formation. The specific predictions of an inflationary model of structure formation, however, depend on the specific realization of inflation, which makes the idea of inflation hard to verify or falsify. Many models of inflation have been suggested, but at the present time none are sufficiently distinguished to form a “standard” inflationary theory.

There has been a lot of recent progress in inflationary cosmology. As explained in Section 4.1, a new theory of inflationary reheating (preheating) has been developed based on parametric resonance. Preheating has dramatic consequences for baryogenesis and for the production of particles and solitons at the end of inflation. A consistent quantum theory of the generation and evolution of linear cosmological perturbations has been developed (see Section 4.2). At the present time, a lot of work is being devoted to extend the analysis of cosmological perturbations beyond linear order. A third area of dramatic progress in inflationary cosmology has been the development of precision calculations of the power spectrum of density fluctuations and of CMB anisotropies which will allow detailed comparisons between current and upcoming observations and inflationary models.

However, there are important unsolved problems of principle in inflationary cosmology. Four such problems discussed in these lectures (in Section 5) are the fluctuation problem, the super-Planck-scale physics problem, the singularity problem and the cosmological constant problem, the last of which is probably the Achilles heel of inflationary cosmology. It may be that a convincing realization of inflation will have to wait for an improvement in our understanding of fundamental physics.
In Section 6, we described some promising but incomplete avenues which address some of the above problems, while still yielding an inflationary epoch.
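As a rough numerical illustration of the estimates (102)-(103), the sketch below evaluates the effective back-reaction energy density in Planck units. All input values (inflaton mass, background field value, variance of the Bardeen potential) are hypothetical placeholders chosen for illustration, not numbers taken from the text.

```python
# Back-reaction estimate of Eqs. (102)-(103) in Planck units (G = c = hbar = 1).
# All numbers below are hypothetical illustrations, not values from the text.
m = 1e-6          # inflaton mass, a typical chaotic-inflation assumption
phi0 = 16.0       # assumed background field value during slow roll
var = 1e-8        # assumed variance <varphi^2> of the Bardeen potential

V = 0.5 * m**2 * phi0**2      # potential (100)
rho2 = -4.0 * V * var         # Eq. (102): negative, like a negative Lambda
p2 = -rho2                    # Eq. (103): cosmological-constant equation of state
print(rho2, p2, abs(rho2) / V)
```

The last printed number, $`|\rho ^{(2)}|/V=4<\varphi ^2>`$, shows directly how the relative importance of the back-reaction term tracks the variance of the long-wavelength fluctuations.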
# Distribution of lipids in non-lamellar phases of their mixtures ## I Introduction The lipid bilayer which provides the basic structure of biological membranes is composed of a large number of different lipids, of which many, on their own, form non-lamellar phases. Just what role these non-lamellar forming lipids play in the properties of membranes has been the subject of much speculation. That they serve important functions is indicated by the fact that cells with common lipids regulate their composition to maintain a proper balance between those that form lamellar phases and those that do not. At least two major roles for non-lamellar forming lipids have been proposed. One is that, because such lipids are characterized by tails which tend to splay outward, their presence alters the pressure profile within a bilayer permitting embedded proteins to function. The second, noting that the lipid balance referred to above is close to a lamellar non-lamellar phase transition , posits that these lipids with their splaying tails may serve to facilitate the formation of structures not unlike the non-lamellar phases, structures which are characterized by volumes which are difficult for other lipid tails to fill, or regions of non-zero curvature. In this way they could stabilize transient fusion intermediates. For example, some scenarios of membrane fusion involve the formation of a thin cylindrical stalk created from the lipids of the opposing leaves of the bilayers. Such a structure has a non-zero curvature, and creates regions surrounding it which are difficult for the tails to fill. By concentrating in such regions, non-lamellar forming lipids could make such structures, and the processes they bring about, less costly. In a series of experiments, Gruner and co-workers have shown that inverted hexagonal, $`H_{II}`$, phases which are characterized by periodic regions which are difficult to fill can be stabilized by the addition of alkanes. Presumably the alkanes are found preferentially in these regions and lower the free energy to form these structures. Theoretical confirmation of this idea in the somewhat analogous system of block copolymer and homopolymer mixtures was provided by Matsen. Tate and Gruner further showed that the addition of small amounts of long chain (two tails of 22 or 24 carbons) phosphatidylcholine to dioleoylphosphatidylethanolamine (two tails of 18 carbons) stabilized the $`H_{II}`$ phase relative to that produced by the addition of phosphatidylcholine with two shorter tails of 18 carbons. This provided further indirect evidence for the common hypothesis that difficult to fill, “frustrated” volumes and/or regions of high curvature should be correlated with a density difference between lamellar- and non lamellar-forming lipids. However there appears to be neither experimental evidence nor theoretical calculations that directly bear on this hypothesis, or provide an indication of the magnitude of the variation in density of the different lipids. It is the purpose of this paper to demonstrate and to quantify, in a model lipid mixture, this variation of density of the lamellar- and non lamellar-forming lipids in inverted hexagonal, $`H_{II}`$, and gyroid ($`Ia3d`$) phases. Our paper is organized as follows. In section II, we present the model of the lipids and develop the self consistent field theory in real space. In section III we develop the self consistent field theory in Fourier space. 
In section IV(A) we present our results for the phase diagrams of the single lipid as a function of its architecture, and for the anhydrous mixture of a lamellar forming and a non-lamellar forming lipid. We also show that there exists an “effective single lipid approximation” by which the latter phase diagram can be obtained from the former. In section IV(B) we present our results for the variation of the densities of the two lipids in the lamellar, inverted hexagonal, and gyroid phases. This variation is on the order of 1 to 10%. We also compare qualitatively this variation with that of the mean curvature of the structures. ## II Theory: Real Space The model which we employ has been presented elsewhere, so we will be brief here. We consider an anhydrous mixture of $`n_1`$ lipids of type 1 and $`n_2`$ lipids of type 2. Below we shall choose their architecture so that type 1 lipids form lamellar phases while type 2 lipids form $`H_{II}`$ phases. All lipids consist of the same head group of volume $`v_h`$ and two equal-length tails. Thus we model mixtures of lipids drawn from a homologous series, such as the phosphatidylethanolamines studied by Seddon et al. Each tail of lipid 1 consists of $`N_1`$ segments of volume $`v_t`$, while those of lipid 2 consist of $`N_2`$ such segments. For convenience, we denote $`N_L=N\alpha _L`$ for $`L=1,2`$. The tails are treated as being completely flexible, with radii of gyration $`R_{g,L}=(N_La^2/6)^{1/2}`$ for each tail. The statistical segment length is $`a`$. The configuration of the $`l`$’th lipid of type $`L`$ is described by a space curve $`𝐫_{l,L}(s)`$ where $`s`$ ranges from 0 at one end of one tail, through $`s=\alpha _L/2`$ at which the head is located, to $`s=\alpha _L`$, the end of the other tail. The system is completely described by the dimensionless densities of the head groups $`\widehat{\mathrm{\Phi }}_h^{(L)}(𝐫)`$ and of the tail segments, $`\widehat{\mathrm{\Phi }}_t^{(L)}(𝐫)`$, which can be written $`\widehat{\mathrm{\Phi }}_h^{(L)}(𝐫)`$ $`=`$ $`v_h{\displaystyle \underset{l=1}{\overset{n_L}{}}}\delta \left(𝐫-𝐫_{l,L}(\alpha _L/2)\right),`$ (1) $`\widehat{\mathrm{\Phi }}_t^{(L)}(𝐫)`$ $`=`$ $`v_h{\displaystyle \underset{l=1}{\overset{n_L}{}}}{\displaystyle \int _0^{\alpha _L}}\delta \left(𝐫-𝐫_{l,L}(s)\right)𝑑s,`$ (2) where $`v_h`$ has been chosen as a convenient volume to make all densities dimensionless. The number density of tail segments of type $`L`$ is $`(2N/v_h)\widehat{\mathrm{\Phi }}_t^{(L)}`$, and their volume fraction is $`(2Nv_t/v_h)\mathrm{\Phi }_t^{(L)}\equiv \gamma _t\mathrm{\Phi }_t^{(L)}`$. The sole explicit interaction in the system is between head and tail segments, and this interaction energy $`E`$ takes the form $$\frac{1}{kT}E[\widehat{\mathrm{\Phi }}_h^{(1)}+\widehat{\mathrm{\Phi }}_h^{(2)},\widehat{\mathrm{\Phi }}_t^{(1)}+\widehat{\mathrm{\Phi }}_t^{(2)}]=\frac{2N\chi }{v_h}\int \left[\widehat{\mathrm{\Phi }}_h^{(1)}(𝐫)+\widehat{\mathrm{\Phi }}_h^{(2)}(𝐫)\right]\left[\widehat{\mathrm{\Phi }}_t^{(1)}(𝐫)+\widehat{\mathrm{\Phi }}_t^{(2)}(𝐫)\right]𝑑𝐫,$$ (3) where $`\chi `$ is the strength of the interaction, and $`T`$ is the temperature. The effect of a hard-core repulsion between all elements of the system is accounted for approximately by requiring that the system be incompressible.
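The parameter bookkeeping above is easy to mechanize. The sketch below computes the relative head-group volumes $`f_L=1/(1+\alpha _L\gamma _t)`$ for the values $`\gamma _t=2.5`$ and $`\alpha _2=1.5`$ used later in Section IV, and evaluates the interaction energy of Eq. (3) per unit volume (with $`v_h=1`$) on a hypothetical one-dimensional density profile that obeys the incompressibility constraint; the profile and the value of $`2\chi N`$ are assumptions for illustration only.

```python
import numpy as np

# Bookkeeping for the model parameters above. gamma_t = 2.5 and alpha_2 = 1.5
# are the choices quoted later in Section IV; everything else is illustrative.
gamma_t = 2.5                       # 2 N_1 v_t / v_h
alpha = {1: 1.0, 2: 1.5}            # tail lengths alpha_L = N_L / N
f = {L: 1.0/(1.0 + alpha[L]*gamma_t) for L in (1, 2)}
print(f)                            # {1: 0.2857..., 2: 0.2105...}

# Interaction energy per kT and per unit volume, Eq. (3), with v_h = 1,
# on a toy 1D profile obeying incompressibility, Eq. (16).
two_chi_N = 22.0                    # 2*chi*N, assumed (T* ~ 0.045 corresponds roughly to this)
x = np.linspace(0.0, 1.0, 512, endpoint=False)
phi_h = 0.3 + 0.1*np.cos(2*np.pi*x)            # hypothetical head density
phi_t = (1.0 - phi_h)/gamma_t                  # tail segment density from incompressibility
print(two_chi_N * np.mean(phi_h * phi_t))      # (1/kT) E / V
```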
The grand canonical partition function of the system is $$𝒵=\underset{n_1,n_2}{}\frac{z_1^{n_1}z_2^{n_2}}{n_1!n_2!}\int \underset{l=1}{\overset{n_1}{}}\stackrel{~}{𝒟}𝐫_{l,1}\underset{m=1}{\overset{n_2}{}}\stackrel{~}{𝒟}𝐫_{m,2}\mathrm{exp}(-E/kT)\delta (\widehat{\mathrm{\Phi }}_h^{(1)}+\gamma _t\widehat{\mathrm{\Phi }}_t^{(1)}+\widehat{\mathrm{\Phi }}_h^{(2)}+\gamma _t\widehat{\mathrm{\Phi }}_t^{(2)}-1).$$ (4) The delta function in the above enforces the constraint of incompressibility. The notation $`\stackrel{~}{𝒟}𝐫_{l,L}`$ denotes a functional integral over the possible configurations of the $`l`$’th lipid of type $`L`$ and in which, in addition to the Boltzmann weight, the path is weighted by the factor $`𝒫[𝐫_{l,L}(s);0,\alpha _L]`$, with $$𝒫[𝐫,s_1,s_2]=𝒩\mathrm{exp}\left[-\frac{1}{8R_g^2}\int _{s_1}^{s_2}𝑑s|\frac{d𝐫(s)}{ds}|^2\right],$$ (5) where $`𝒩`$ is an unimportant normalization constant and $`R_g\equiv (Na^2/6)^{1/2}`$ is the radius of gyration of a tail of length $`N`$. We note that because of the choice of this weight function, $`𝒫`$, and the lack of any explicit interaction between chain segments to prevent their intersection, the behavior of the chains is Gaussian. This is appropriate because we view the chains as forming an incompressible melt. Under such conditions, a flexible, interacting, polymer chain behaves as an ideal, and therefore Gaussian, one. It is not clear, of course, that an ideal lipid chain, which is certainly not flexible, can be treated as Gaussian. This approximation must overestimate the entropy of the tails, whose fewer thermally accessible configurations are presumably modelled more accurately by the Rotational Isomeric State model. How serious this overestimation is, however, is also unclear. Ultimately the efficacy of our model in capturing the behavior of lipids can be judged only by a comparison with experiment. This was done for the lipid phase behavior in reference, and the comparison was very good. Even the variation with solvent concentration and with temperature of the characteristic period of the lamellar and hexagonal phases agreed very well. It is this agreement with experiment of the calculated phase behavior which provides the support for applying the model to the calculation of other properties, such as the distribution of lipids in mixtures investigated here. To proceed, we make the partition function of Eq. 4 more tractable by introducing into it the identity $`1`$ $`=`$ $`{\displaystyle \int 𝒟\mathrm{\Phi }_h^{(L)}\delta (\mathrm{\Phi }_h^{(L)}-\widehat{\mathrm{\Phi }}_h^{(L)})},`$ (6) $`=`$ $`{\displaystyle \int 𝒟\mathrm{\Phi }_h^{(L)}𝒟W_h^{(L)}\mathrm{exp}\left\{\frac{1}{v_h}\int W_h^{(L)}(𝐫)[\mathrm{\Phi }_h^{(L)}(𝐫)-\widehat{\mathrm{\Phi }}_h^{(L)}(𝐫)]𝑑𝐫\right\}},`$ (7) where the integration on $`W_h^{(L)}`$ extends up the imaginary axis. We also insert such an identity for the density $`\widehat{\mathrm{\Phi }}_t^{(L)}(𝐫)`$, and a similar representation for the delta function which enforces the incompressibility constraint.
The partition function becomes $$𝒵=\int 𝒟\mathrm{\Xi }\underset{L=1}{\overset{2}{}}𝒟\mathrm{\Phi }_h^{(L)}𝒟W_h^{(L)}𝒟\mathrm{\Phi }_t^{(L)}𝒟W_t^{(L)}\mathrm{exp}[-\mathrm{\Omega }/kT],$$ (8) where the grand potential $`\mathrm{\Omega }`$ is given by $`\mathrm{\Omega }`$ $`=`$ $`-{\displaystyle \frac{kT}{v_h}}{\displaystyle \underset{L=1}{\overset{2}{}}}\left\{z_L𝒬_L[W_h^{(L)},W_t^{(L)}]+{\displaystyle \int 𝑑𝐫\left[W_h^{(L)}(𝐫)\mathrm{\Phi }_h^{(L)}(𝐫)+W_t^{(L)}(𝐫)\mathrm{\Phi }_t^{(L)}(𝐫)\right]}\right\}`$ (9) $`+`$ $`E[\mathrm{\Phi }_h^{(1)}+\mathrm{\Phi }_h^{(2)},\mathrm{\Phi }_t^{(1)}+\mathrm{\Phi }_t^{(2)}]-{\displaystyle \frac{kT}{v_h}}{\displaystyle \int 𝑑𝐫\mathrm{\Xi }(𝐫)(\mathrm{\Phi }_h^{(1)}(𝐫)+\gamma _t\mathrm{\Phi }_t^{(1)}(𝐫)+\mathrm{\Phi }_h^{(2)}(𝐫)+\gamma _t\mathrm{\Phi }_t^{(2)}(𝐫)-1)},`$ (10) and $`𝒬_L`$ is the partition function of a single lipid of type $`L`$ in the external fields $`W_h^{(L)}`$ and $`W_t^{(L)}`$; $$𝒬_L=\int \stackrel{~}{𝒟}𝐫_L\mathrm{exp}\left\{-W_h^{(L)}(𝐫_L(\alpha _L/2))-\int _0^{\alpha _L}𝑑sW_t^{(L)}(𝐫_L(s))\right\},L=1,2.$$ (11) To this point, no approximations in the evaluation of the partition function of the model have been made, but it has been put in a form in which the self-consistent field (SCF) approximation appears naturally. The need for an approximation arises from the fact that the functional integrals in Eq. 8 over $`W_h^{(L)}`$ and $`W_t^{(L)}`$ cannot be carried out. The SCF consists in replacing the exact free energy, $`-kT\mathrm{ln}𝒵`$, by the extremum of $`\mathrm{\Omega }`$. We denote the values of the $`\mathrm{\Phi }_h^{(L)}`$, $`W_h^{(L)}`$, $`\mathrm{\Phi }_t^{(L)}`$, $`W_t^{(L)}`$ and $`\mathrm{\Xi }`$ which extremize $`\mathrm{\Omega }`$ by lower case letters. They are obtained from the following set of self consistent equations: $`\varphi _h^{(L)}(𝐫)`$ $`=`$ $`-z_L{\displaystyle \frac{\delta 𝒬_L}{\delta w_h^{(L)}(𝐫)}},L=1,2,`$ (12) $`\varphi _t^{(L)}(𝐫)`$ $`=`$ $`-z_L{\displaystyle \frac{\delta 𝒬_L}{\delta w_t^{(L)}(𝐫)}},L=1,2,`$ (13) $`w_h^{(L)}(𝐫)`$ $`=`$ $`2\chi N{\displaystyle \underset{L^{}}{}}\varphi _t^{(L^{})}(𝐫)-\xi (𝐫),L=1,2,`$ (14) $`w_t^{(L)}(𝐫)`$ $`=`$ $`2\chi N{\displaystyle \underset{L^{}}{}}\varphi _h^{(L^{})}(𝐫)-\gamma _t\xi (𝐫),L=1,2`$ (15) $`1`$ $`=`$ $`{\displaystyle \underset{L}{}}\varphi _h^{(L)}(𝐫)+\gamma _t{\displaystyle \underset{L}{}}\varphi _t^{(L)}(𝐫).`$ (16) Note that $`w_h^{(1)}=w_h^{(2)}`$ and $`w_t^{(1)}=w_t^{(2)},`$ so that henceforth we shall drop the superscripts on the fields $`w_h`$ and $`w_t`$. It is convenient, further, to introduce the total headgroup and total tail densities $`\varphi _h(𝐫)`$ $`=`$ $`\varphi _h^{(1)}(𝐫)+\varphi _h^{(2)}(𝐫)`$ (17) $`\varphi _t(𝐫)`$ $`=`$ $`\varphi _t^{(1)}(𝐫)+\varphi _t^{(2)}(𝐫),`$ (18) and to measure chemical potentials relative to that of lipid 1. This has the effect that $`z_1=1`$, and $`z_2=z`$, the chemical potential of lipid 2 relative to that of 1. The free energy in this approximation, $`\mathrm{\Omega }_{scf}`$, is $`\mathrm{\Omega }_{scf}`$ $`=`$ $`-{\displaystyle \frac{kT}{v_h}}\left[{\displaystyle \underset{L=1}{\overset{2}{}}}z_L𝒬_L[w_h,w_t]+{\displaystyle \int 𝑑𝐫[w_h(𝐫)\varphi _h(𝐫)+w_t(𝐫)\varphi _t(𝐫)]}\right]+E[\varphi _h,\varphi _t],`$ (19) $`=`$ $`-{\displaystyle \frac{kT}{v_h}}{\displaystyle \underset{L=1}{\overset{2}{}}}z_L𝒬_L[w_h,w_t]+E[\varphi _h,\varphi _t],`$ (20) $`=`$ $`-kT(n_1+n_2)+E[\varphi _h,\varphi _t],`$ (21) with $`E`$ given by Eq. 3.
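In practice, a set of coupled equations such as (12)-(16) is solved by treating it as a fixed-point problem $`w=F(w)`$ and iterating with simple mixing. The skeleton below shows only that numerical strategy; the map F is a stand-in toy assumed for illustration, whereas the real F would return the fields recomputed from the densities as above.

```python
import numpy as np

# Schematic of the self-consistent loop: iterate w = F(w) with simple mixing.
# F below is a hypothetical stand-in, not the actual density functional.
def F(w):
    return np.tanh(1.5 * w) + 0.1   # toy self-consistency map

w = np.zeros(8)                     # initial guess for the fields
lam = 0.2                           # mixing parameter, small enough to converge
for it in range(500):
    w_new = F(w)
    if np.max(np.abs(w_new - w)) < 1e-10:
        break
    w = (1.0 - lam) * w + lam * w_new
print(it, w[0])                     # iterations used, converged field value
```

Simple mixing trades speed for robustness; the small mixing parameter damps the oscillations that a direct substitution of new fields for old would otherwise produce.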
There remains only the calculation of the partition function of the single lipid $`l`$ of type $`L`$ in the external fields $`w_h`$ and $`w_t`$. One defines the end-segment distribution function $$q^{(L)}(𝐫,s)=\int 𝒟𝐫_l(s)\delta (𝐫-𝐫_l(s))\mathrm{exp}\left\{-\int _0^s𝑑t\left(\left[\frac{1}{8R_g^2}|\frac{d𝐫_l(t)}{dt}|^2\right]+w_h(𝐫_l(t))\delta (t-\alpha _L/2)+w_t(𝐫_l(t))\right)\right\},$$ (22) which satisfies the equation $$\frac{\partial q^{(L)}(𝐫,s)}{\partial s}=2R_g^2\nabla ^2q^{(L)}(𝐫,s)-[w_h(𝐫)\delta (s-\alpha _L/2)+w_t(𝐫)]q^{(L)}(𝐫,s),$$ (23) with initial condition $$q^{(L)}(𝐫,0)=1.$$ (24) The partition functions of the two types of lipid are, then, $$𝒬_L=\int 𝑑𝐫q^{(L)}(𝐫,\alpha _L).$$ (25) It then follows from Eqs. 12 and 13 that $`\varphi _h(𝐫)`$ $`=`$ $`\mathrm{exp}[w_h(𝐫)]{\displaystyle \underset{L=1}{\overset{2}{}}}z_Lq^{(L)}(𝐫,{\displaystyle \frac{\alpha _L}{2}})q^{(L)}(𝐫,{\displaystyle \frac{\alpha _L}{2}}),`$ (26) $`\varphi _t(𝐫)`$ $`=`$ $`{\displaystyle \underset{L=1}{\overset{2}{}}}z_L{\displaystyle \int _0^{\alpha _L}}𝑑sq^{(L)}(𝐫,s)q^{(L)}(𝐫,\alpha _L-s),`$ (27) $`=`$ $`2{\displaystyle \underset{L=1}{\overset{2}{}}}z_L{\displaystyle \int _0^{\alpha _L/2}}𝑑sq^{(L)}(𝐫,s)q^{(L)}(𝐫,\alpha _L-s),`$ (28) where $`z_1=1`$ and $`z_2=z`$. The self-consistent equations 12 to 16 can now be solved in real space. For the periodic phases in which we are interested, such as the lamellar, inverted hexagonal, gyroid, etc., it is more convenient to do so in Fourier space. ## III Theory: Fourier Space Because the densities, fields, and end point distribution function depend only on a single coordinate, they reflect the space group symmetry of the ordered phase they describe. To make that symmetry manifest, we expand all functions of position in a complete, orthonormal, set of functions $`f_i(𝐫),i=1,2,3\mathrm{},`$ which have the desired space group symmetry; e.g. $`\varphi _h(𝐫)`$ $`=`$ $`{\displaystyle \underset{i}{}}\varphi _{h,i}f_i(𝐫),`$ (29) $`\delta _{i,j}`$ $`=`$ $`{\displaystyle \frac{1}{V}}{\displaystyle \int 𝑑𝐫f_i(𝐫)f_j(𝐫)},`$ (30) where $`V`$ is the volume of the system. Furthermore we choose the $`f_i(𝐫)`$ to be eigenfunctions of the Laplacian $$\nabla ^2f_i(𝐫)=-\frac{\lambda _i}{D^2}f_i(𝐫),$$ (31) where $`D`$ is a length scale for the phase. We set $`f_1=1`$. Expressions for the unnormalized basis functions for all space-group symmetries can be found in X-ray tables. The self-consistent equations in Fourier space become $`w_{h,i}`$ $`=`$ $`2\chi N\varphi _{t,i}-\xi _i,`$ (32) $`w_{t,i}`$ $`=`$ $`2\chi N\varphi _{h,i}-\gamma _t\xi _i,`$ (33) $`\delta _{i,1}`$ $`=`$ $`\varphi _{h,i}+\gamma _t\varphi _{t,i}.`$ (34) To obtain the partition functions and densities, we proceed as follows. For any function $`G(𝐫)`$, we can define a symmetric matrix $$(G)_{ij}\equiv \frac{1}{V}\int f_i(𝐫)G(𝐫)f_j(𝐫)𝑑𝐫$$ (35) Note that $`(G)_{1i}=(G)_{i1}=G_i`$, the coefficient of $`f_i(𝐫)`$ in the expansion of $`G(𝐫)`$. Matrices corresponding to functions of $`G(𝐫)`$, such as $$\left(e^G\right)_{ij}\equiv \frac{1}{V}\int f_i(𝐫)e^{G(𝐫)}f_j(𝐫)𝑑𝐫,$$ (36) are evaluated by making an orthogonal transformation which diagonalizes $`(G)_{ij}`$. The densities are obtained from the end point distribution function. After Fourier transforming the diffusion equation, Eq.
23, one readily obtains the solution $`q_i^{(L)}(s)`$ $`=`$ $`\left(e^{-As}\right)_{i,1},\mathrm{if}s<\alpha _L/2`$ (37) $`=`$ $`{\displaystyle \underset{j}{}}\left(e^{-w_h}\right)_{ij}\left(e^{-\alpha _LA/2}\right)_{j,1},s=\alpha _L/2`$ (38) $`=`$ $`{\displaystyle \underset{j,k}{}}\left(e^{-A(s-\alpha _L/2)}\right)_{i,j}\left(e^{-w_h}\right)_{j,k}\left(e^{-\alpha _LA/2}\right)_{k,1},s>\alpha _L/2,`$ (39) where the elements of the matrix $`A`$ are given by $$A_{i,j}=\frac{2R_g^2}{D^2}\lambda _i\delta _{ij}+(w_t)_{ij}.$$ (40) From this, the Fourier amplitudes of the densities follow from eqs. 26 and 28; $`\varphi _{h;i}`$ $`=`$ $`{\displaystyle \underset{L=1}{\overset{2}{}}}z_L{\displaystyle \underset{jkl}{}}\left(e^{w_h}\right)_{ij}\mathrm{\Gamma }_{jkl}q_k^{(L)}({\displaystyle \frac{\alpha _L}{2}})q_l^{(L)}({\displaystyle \frac{\alpha _L}{2}}),`$ (41) $`\varphi _{t;i}`$ $`=`$ $`2{\displaystyle \underset{L=1}{\overset{2}{}}}z_L{\displaystyle \int _0^{\alpha _L/2}}𝑑s{\displaystyle \underset{jk}{}}\mathrm{\Gamma }_{ijk}q_j^{(L)}(s)q_k^{(L)}(\alpha _L-s),`$ (42) with $$\mathrm{\Gamma }_{ijk}\equiv \frac{1}{V}\int f_i(𝐫)f_j(𝐫)f_k(𝐫)𝑑𝐫.$$ (44) The grand potential within the self-consistent field approximation, Eq. 21, becomes $$\mathrm{\Omega }_{scf}=\frac{kTV}{v_h}\left[-\varphi _{h,1}+2\chi N\underset{i}{}\varphi _{h,i}\varphi _{t,i}\right]$$ (45) This free energy still depends on $`D`$, the length scale of the periodic phase, which must be determined by minimization of the free energy. After this is done, we compare the free energies obtained for phases of different space group symmetries, and thus obtain the phase diagram of the system. The infinite set of self-consistent equations 32 to 42 must be truncated to be solved numerically, and we have employed up to 50 basis functions in our calculations for the lamellar, hexagonal, and b.c.c. phases, and 100 for the gyroid phases. ## IV Results ### A Phase Diagrams We begin by presenting, in Fig. 1, our results for the phase diagram of a single lipid with tails each of $`N`$ units as a function of its architecture. We have defined an effective temperature, $`T_N^{*}\equiv 1/2\chi N`$. Note that $`T_N^{*}`$ is length-dependent, i.e., the value of $`T_N^{*}`$ differs for lipids of different length $`N`$ even at the same physical temperature $`T`$. In our previous work, we found that for the system of dioleoylphosphatidylethanolamine a $`T_N^{*}`$ of 0.06 corresponded to a physical temperature of approximately 20°C. The architecture is characterized by the volume of the head group relative to that of the entire lipid, $`f\equiv v_h/(v_h+2Nv_t)`$. We have examined the common lipid phases, including the bi-continuous double-diamond phase (Pn3m), for stability, and found in addition to the lamellar phase only the normal and inverted versions of the body-centered cubic ($`Im3m`$), hexagonal, and gyroid ($`Ia3d`$) phases to be stable. One sees that the diagram is quite reasonable and illustrates quantitatively the qualitative ideas of structure being driven by packing considerations. We now turn to a mixture of two lipids. We have chosen $`N_1=N`$ and $`N_2=1.5N`$ (i.e., $`\alpha _1=1`$, and $`\alpha _2=1.5`$) so that the extended length of the tails of lipid 2 is 1.5 times that of lipid 1. Further we have taken $`\gamma _t\equiv 2N_1v_t/v_h=2.5`$. With these parameters, the volume of the head group relative to that of the entire lipid is, for lipid 1, $`f_1\equiv 1/(1+\alpha _1\gamma _t)=0.2857`$, while that of lipid 2 is $`f_2\equiv 1/(1+\alpha _2\gamma _t)=0.2105`$.
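As an aside, the Fourier-space machinery of Eqs. (37)-(40) is compact enough to sketch. The snippet below propagates the end-segment coefficients with matrix exponentials for one lipid; the basis size, the eigenvalues $`\lambda _i`$ and the field matrices are placeholder assumptions rather than converged SCF output, and the signs in the exponentials follow the diffusion equation (23).

```python
import numpy as np
from scipy.linalg import expm

# Sketch of the propagator of Eqs. (37)-(40) for one lipid; all inputs
# are hypothetical placeholders, not converged self-consistent fields.
nb, Rg2_D2, alpha_L = 6, 0.04, 1.0
lam = (2*np.pi*np.arange(nb))**2        # toy Laplacian eigenvalues lambda_i
rng = np.random.default_rng(1)
wt = rng.normal(scale=0.2, size=(nb, nb)); wt = 0.5*(wt + wt.T)  # (w_t)_ij
wh = rng.normal(scale=0.2, size=(nb, nb)); wh = 0.5*(wh + wh.T)  # (w_h)_ij

A = 2*Rg2_D2*np.diag(lam) + wt          # Eq. (40)

def q(s):
    """Coefficients q_i(s); head-group weight applied once at s = alpha_L/2."""
    if s < alpha_L/2:
        return expm(-A*s)[:, 0]                        # Eq. (37)
    mid = expm(-wh) @ expm(-A*alpha_L/2)[:, 0]         # Eq. (38)
    return expm(-A*(s - alpha_L/2)) @ mid              # Eq. (39)

print(q(alpha_L)[0])   # Q_L/V: by Eq. (25) and f_1 = 1, the i = 1 coefficient
```

In the full calculation these propagators feed the density amplitudes (41)-(42) and the free energy (45), which are then iterated to self-consistency.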
For comparison, the relative head group volume of dioleoylphosphatidylethanolamine calculated from volumes given in the literature is $`f=0.254`$. From the phase diagram of Fig. 1, one sees that lipid 1 forms a lamellar phase, while lipid 2 forms an inverted hexagonal, $`H_{II}`$, phase. The phase diagram of the lipid mixture is shown in the solid lines of Fig. 2 as a function of the volume fraction of lipid 1, $`\mathrm{\Theta }\equiv \varphi _h^{(1)}+\gamma _t\varphi _t^{(1)}`$ and the reduced temperature $`T^{*}\equiv 1/2\chi N_1`$. Here $`N_1`$ is a constant, the length of the tails of lipid 1, so that the definition of the temperature $`T^{*}`$ is independent of the composition of the mixture as it should be. Small regions of body-centered cubic phase near the transition to the disordered phase have been ignored in the phase diagram. We are unaware of experimental phase diagrams of anhydrous mixtures of lamellar- and hexagonal-forming lipids with the same headgroup, as we have calculated here. However phase diagrams have been obtained for mixtures of lamellar-forming phosphatidylcholine and hexagonal-forming phosphatidylethanolamine. Those obtained for anhydrous mixtures of dilinoleoylphosphatidylethanolamine and palmitoyloleoylphosphatidylcholine, and for mixtures of dioleoylphosphatidylcholine and dioleoylphosphatidylethanolamine and 10% water by weight are quite similar to Fig. 2 as they each show a significant region of cubic phase between the inverted hexagonal phase, which dominates at low concentrations of the lamellar-forming lipid, and the lamellar phase, which dominates at high concentration. We also observe that about 20% of one lipid added to the other is sufficient to bring about a change of phase, which is in accord with experiment. Lastly, if the temperature is not too low, a decrease of the volume fraction of the lamellar-forming lipid causes the gyroid phase to be stable at increasing temperatures. Therefore the addition of longer non-lamellar lipids to the mixture stabilizes the non-lamellar phase, as in the experiment of Tate and Gruner. We now show that there is a simple “effective single lipid” approximation by which one can obtain rather well the phase diagram of the mixture, Fig. 2, utilizing only the information from the phase diagram of the single lipid, Fig. 1. For this purpose, we must find a relationship between the coordinates $`(f,T_N^{*})`$ in Fig. 1 and $`(\mathrm{\Theta },T^{*})`$ in Fig. 2. Note that whereas the temperature scale $`T^{*}`$ is independent of the composition of the mixture, the appropriate scale of Fig. 1 would vary from $`T_{N_2}^{*}=T^{*}/1.5`$ when the mixture contained lipid 2 only to $`T_{N_1}^{*}=T^{*}`$ at the other extreme. To obtain the desired relationship, we first note that from the definitions of $`f`$ and $`\gamma _t`$, it follows that $`2Nv_t/v_h=(1-f)/f`$ and $`2N_1v_t/v_h=\gamma _t`$, from which one obtains $`T_N^{*}/T^{*}=N_1/N=f\gamma _t/(1-f)`$. Second, as $`f`$ is defined as the volume fraction of the head group for a single lipid, it is natural to assume $`f=\mathrm{\Theta }f_1+(1-\mathrm{\Theta })f_2`$, the volume fraction of the head groups, in the mixture.
Hence, in the “effective single lipid” approximation the coordinates of a point on the phase diagram of the mixture may be obtained from those of the single lipid according to $`\mathrm{\Theta }`$ $`=`$ $`{\displaystyle \frac{f-f_2}{f_1-f_2}},`$ (46) $`T^{*}`$ $`=`$ $`{\displaystyle \frac{1-f}{f\gamma _t}}T_N^{*}.`$ (47) The results of this “single effective lipid” approximation are shown by the open diamonds in Fig. 2. The approximation is obviously very good. Thus we can obtain very easily the phase diagram of any anhydrous mixture of our model lipids from the results for a single lipid, given in Fig. 1. ### B Distribution of Lipids We now turn to the central result of this paper, which is the distribution of the different lipids in the various phases. It is convenient at the outset to define two local order parameters, $`\psi (𝐫)`$ and $`\zeta (𝐫)`$: $`\psi (𝐫)`$ $`=`$ $`\psi ^{(1)}(𝐫)+\psi ^{(2)}(𝐫),`$ (48) $`\psi ^{(L)}(𝐫)`$ $`\equiv `$ $`\varphi _h^{(L)}(𝐫)-{\displaystyle \frac{\varphi _t^{(L)}(𝐫)}{\alpha _L}},L=1,2,`$ (49) $`\zeta (𝐫)`$ $`\equiv `$ $`{\displaystyle \frac{\varphi _t^{(1)}(𝐫)}{<\varphi _t^{(1)}(𝐫)>}}-{\displaystyle \frac{\varphi _t^{(2)}(𝐫)}{<\varphi _t^{(2)}(𝐫)>}}.`$ (50) Each order parameter $`\psi ^{(L)}(𝐫)`$ measures the local difference in head and tail segments of type $`L`$, normalized such that the integral of the order parameter over the unit cell vanishes. Thus $`\psi (𝐫)`$ measures the local difference of all head and tail segments. It provides information on the separation of the lipid heads and tails. On the other hand, the order parameter $`\zeta (𝐫)`$ measures the difference in local fractions of the tail segments belonging to the two different lipids. The brackets in its definition denote an average over some suitably defined tail region in which the order parameter $`\psi (𝐫)`$ takes on negative values. For clarity, we employ somewhat different definitions of the tail region for the different phases. The average value of $`\zeta (𝐫)`$ is zero over the defined tail region. We begin with the lamellar phase and show in Fig. 3 the density profiles at a temperature $`T^{*}=0.0454`$ and volume fraction of lipid 1, $`\mathrm{\Theta }=0.6588`$, a point very close to the phase boundary between the lamellar phase and the inverted gyroid, $`G_{II}`$, phase. In Fig. 3(a) we have plotted the volume fractions of the heads of lipids 1 and 2, $`\varphi _h^{(1)}`$ and $`\varphi _h^{(2)}`$, and the volume fractions of the tails, $`\gamma _t\varphi _t^{(1)}`$, and $`\gamma _t\varphi _t^{(2)}`$. Although the lamellar forming lipid 1 dominates, one can easily see that the density of lipid 1 tails decreases near the center of the lamellae by the order of 5 per cent from its maximum value of about 0.6. As the system is incompressible, this implies that the volume fraction of the tails of the non-lamellar forming lipid increases by about 8 per cent over the same range. To illustrate the relative difference in their volume fractions, we plot in Fig. 3(b) the order parameter $`\zeta (x)`$, where the tail region is defined here as $`\psi (x)\le 0`$. Again one sees that the relative change in volume fractions is less than, but of the order of, 10 per cent, with the longer lipid 2 predominating in the center of the tail region. This comes about for two reasons. First, the tails of lipid 1 are shorter than those of lipid 2, so one expects there to be less lipid 1 near the center.
Second, the effect of the temperature, which is to shorten the average end to end distance of the tails, is greater on the tails of lipid 1 than on the tails of lipid 2 because $`T_{N_1}^{*}>T_{N_2}^{*}`$. We have not plotted the relative variation of the head group volume fractions because it is so small, of the order of 0.5 per cent. Recall that all the head groups are identical. We now turn to the inverted hexagonal, $`H_{II}`$, phase. We consider the system again at $`T^{*}=0.0454`$ but now at volume fraction of lipid 1 of $`\mathrm{\Theta }=0.27`$ where the $`H_{II}`$ and gyroid phases are almost in coexistence. In Fig. 4, we plot the order parameter $`\psi (𝐫)`$. The dark regions correspond to positive values of the order parameter where the head groups dominate and the lighter regions where the tails dominate. The maximum value is 0.719 and the minimum value $`-0.286`$. Each gradation represents a change of 10 per cent. To make manifest the difference in densities of the tails from each lipid, we again look at the order parameter $`\zeta (𝐫)`$ which is shown in Fig. 5. Here the tail region we have averaged over is the locus of points for which the order parameter $`\psi (𝐫)\le -0.282`$. Regions with $`\psi (𝐫)>-0.282`$ are simply shown in white. Again the darker regions correspond to positive values of this order parameter where the tails of the lamellar forming lipid are relatively more probable to be found, while the lighter regions denote negative values where the non-lamellar forming lipid is more probable. One sees that the latter lipid is more likely to be found in the next-nearest neighbor direction, in the region which is most difficult for the tails to fill. The maximum value of $`\zeta (𝐫)`$ is 0.0352, and the minimum value is $`-0.0377`$, indicating that the relative volume fractions vary by the order of 3 per cent over the tail region. To make the variation in densities even clearer, we plot in Figs. 6(a) and 6(b) the volume fractions of the tails of the lipids as measured along the boundary of the Brillouin zone, with the angle $`\theta =0`$ corresponding to the direction of the nearest neighbors, and $`\theta =\pi /6`$ corresponding to the direction of the next nearest neighbors. One sees that the density of the minority lamellar forming lipid is reduced by about 2 per cent in the next-nearest neighbor direction, while that of the majority non-lamellar forming lipid is increased by about 0.6 per cent. Again, the variation with direction of the density of head groups is negligible. In order to compare these density changes with the curvature of the inverted hexagonal phase, we plot in Fig. 6(c) the mean curvature along the locus of points defined by $`\psi (𝐫)=0`$, i.e., on the curve on which the difference in volume fractions of all heads and tails vanishes. The mean curvature as a function of angle is well fit by $$H(\theta )=H_0+\underset{n=1}{\overset{5}{}}H_n\mathrm{cos}(6n\theta ),$$ (51) with $`H_0=1.55800`$, $`H_1=3.33\times 10^{-3}`$, $`H_2=3.2\times 10^{-4}`$, $`H_3=5.3\times 10^{-4}`$, $`H_4=3.59\times 10^{-3}`$, and $`H_5=8.2\times 10^{-4}`$. One sees that the variation in the mean curvature is an order of magnitude smaller than the variation of the tails of the minority lipid, and also smaller than that of the majority lipid. There is also much more structure in the mean curvature. It is not surprising that this structure near the region where the head groups and tails meet is washed out in the region of the other end of the tails. We now consider the inverted gyroid phase.
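Before doing so, we note that the curvature fit of Eq. (51) is easy to evaluate; a minimal sketch is given below, with the coefficient signs as printed above and the exponents read as $`10^{-3}`$ and $`10^{-4}`$.

```python
import numpy as np

# Evaluation of the curvature fit, Eq. (51), with the coefficients quoted
# above (signs as printed; exponents assumed to be 10^-3 and 10^-4).
H = [1.55800, 3.33e-3, 3.2e-4, 5.3e-4, 3.59e-3, 8.2e-4]   # H_0 ... H_5

def H_theta(theta):
    return H[0] + sum(H[n]*np.cos(6*n*theta) for n in range(1, 6))

theta = np.linspace(0.0, np.pi/6, 200)      # one irreducible wedge
vals = H_theta(theta)
print((vals.max() - vals.min())/abs(H[0]))  # relative variation, well under 1%
```

The printed relative variation of a few tenths of a per cent is indeed an order of magnitude below the 2-3 per cent variation of the tail densities, as stated in the text.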
We show in Figs. 7 and 8 two views of this phase at $`T^{*}=0.045`$ and $`\mathrm{\Theta }=0.4921`$. The phase consists of two sublattices of tubes filled for the most part with head groups. The sublattices are related by mirror reflection. We have chosen to plot the surfaces defined by the order parameter $`\psi (𝐫)=0.5`$ rather than $`\psi (𝐫)=0`$ for in the latter case the tubes are much more difficult to recognize. Fig. 7 shows a view along the $`[001]`$ direction, while Fig. 8 is viewed from point (0.9,-2.4,2). These figures define the coordinate system which we use and show a cubic cell of side unity. Were the tubes shrunk to lines, three such lines would meet at nodes, and there would be 16 such nodes in the unit cell shown. We wish to show what the relative densities of the two lipids are at the positions which are the easiest for the tails to fill, and those which are the most difficult for the tails to fill. These correspond to the points midway between nearest neighbors and next-nearest neighbors in the $`H_{II}`$ phase. These positions are not obvious in the gyroid phase, but can be calculated. An example of a longest distance is shown in Fig. 8 by a solid line. The coordinates of the point shown, which is furthest from any tube, are $`(1/2,3/16,5/8)`$ while the points on the tube centers closest to it are $`(15/32,7/32,7/8)`$ and $`(17/32,7/32,3/8)`$. Thus this longest distance is $`d_{long}=(66)^{1/2}/32`$. The shortest distances link two three-fold coordinated sites on the two different sublattices, and one such shortest distance is shown in Fig. 8 by a dotted line. Two particular points on the tube centers connected by this distance and which are shown in the figure have the coordinates $`(5/8,3/8,7/8)`$ and $`(7/8,1/8,5/8)`$. The distance between these points is twice the shortest distance between a point midway between tubes and the tube centers themselves, and is therefore $`d_{short}=3^{1/2}/8`$. Interestingly, the ratio of longest to shortest distances in the gyroid phase, $`d_{long}/d_{short}=(33/8)^{1/2}/(3)^{1/2}`$, is only about 2 per cent larger than the corresponding ratio in the hexagonal and b.c.c. phases, which is $`2/(3)^{1/2}`$ in both cases. At the temperature and composition selected, we find that at the point nearest to neighboring tubes, which requires the least stretching of the lipid tails, the volume fraction of the tails of the lamellar forming lipid 1 is $`\gamma _t\varphi _t^{(1)}=0.448`$ and that of the non-lamellar forming lipid 2 is $`\gamma _t\varphi _t^{(2)}=0.537`$. At the position requiring the most stretching of the tails, we find the volume fraction of the tails of lipid 1 has decreased to 0.431, while that of the longer lipid 2 has increased to 0.561. In Fig. 9 the order parameter $`\zeta (𝐫)`$, which shows the relative difference between the volume fractions of the tails of lipid 1 and lipid 2, is shown in a cut through the gyroid taken in the \[$`1\overline{1}0`$\] direction and which passes through two points, $`(1/8,1/8,1/8)`$ and $`(-1/8,-1/8,-1/8)`$, which require the least stretch. Only the portion of the tail region defined by $`\psi (𝐫)\le -0.2`$ is shown with any gray scale variation. The solid line connects the tube centers which lie closest to one another. The point midway along this line is that most easily reached by the lipid tails. The fact that the region around this point is dark indicates that the shorter lipid 1 is relatively more probable here.
The maximum (black) and minimum values (white) of $`\zeta (𝐫)`$ are $`9.26\times 10^{-2}`$ and $`-6.64\times 10^{-2}`$, indicating a variation of relative volume fraction in the tail region shown which is on the order of 8 per cent. Figure 10 shows a cut in the direction which passes through the point which is furthest from the center of any tube. Again the region shown in gray scale variation is the tail region defined by $`\psi (𝐫)\le -0.2`$. The maximum and minimum values of $`\zeta (𝐫)`$ are $`10.35\times 10^{-2}`$ and $`-6.99\times 10^{-2}`$, indicating a similar variation in volume fraction. The point at the center of the bent dark line is furthest from the center of any tube. The black line ends on the centers of the two tubes closest to it. The lightest part of the generally dark tail region is very close to the point furthest from any tube center and indicates that the relative probability of finding the tails of the non-lamellar forming lipid is large there. That the lightest part does not correspond exactly to the point furthest from the tube center is probably due to the fact that this point is not necessarily the furthest from the head-tail interface, and thus not necessarily the most difficult for the tails to reach, although it is probably quite close to being so. In Fig. 11 we show the surface defined by the order parameter $`\psi (𝐫)=-0.2`$ in the region of the three-fold connectors. On that surface the order parameter $`\zeta (𝐫)`$ is plotted. The largest value, which is black and at the center of this region, is $`3.25\times 10^{-2}`$, showing that the lamellar-forming lipids are more probable there. The smallest value, shown in the lightest gray, is $`-1.82\times 10^{-2}`$, indicating a few per cent variation in this region. For comparison, the mean curvature on the surface defined by $`\psi (𝐫)=0`$ is plotted in Fig. 12. The smallest value of the mean curvature, $`0.20`$, occurs at the center, while the largest values, $`0.52`$, occur away from it. As in the inverted hexagonal phase, one sees that the preferential location of the lamellar-forming lipids is at low curvature sites. Further, there is more structure in the plot of the curvature on a surface near where the heads and tails meet than in the plot of the distribution of tail volume fractions deep within the tail region. That the region near the center of these three-fold connectors is characterized by small mean curvature is analogous to the results obtained by Matsen for the gyroid phase of block copolymers. We used two hundred basis functions for this plot. In sum, we have employed a lipid model which has given excellent results previously for the phase diagram of dioleoylphosphatidylethanolamine in order to examine the distribution of tails in a mixture of lamellar forming and non-lamellar forming lipids. The two lipids are characterized by the same head group, but tails of different length. We have shown, both in the inverted hexagonal and gyroid phases, that the tails of the non-lamellar forming lipid are found preferentially in regions of the unit cell which are difficult to fill, and those of the lamellar forming lipid are found in the regions most easily filled. The variation in volume fraction is of the order of 1 to 10 per cent. The difference in lipid tail density is also correlated with the curvature of the surface which, loosely, separates the head groups from the tails, although the former shows much less structure than the latter.
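The gyroid geometry quoted above can be checked with a few lines (coordinates in units of the cubic cell edge, as given for Fig. 8):

```python
import numpy as np

# Check of d_long, d_short and their ratio for the gyroid geometry above.
far = np.array([1/2, 3/16, 5/8])                  # point furthest from any tube
axes = [np.array([15/32, 7/32, 7/8]), np.array([17/32, 7/32, 3/8])]
d_long = min(np.linalg.norm(far - p) for p in axes)
print(d_long, np.sqrt(66)/32)                     # both 0.2539...

p1, p2 = np.array([5/8, 3/8, 7/8]), np.array([7/8, 1/8, 5/8])
d_short = 0.5*np.linalg.norm(p1 - p2)             # midpoint-to-axis distance
print(d_short, np.sqrt(3)/8)                      # both 0.2165...

# Ratio, and comparison with 2/sqrt(3) of the hexagonal and bcc phases:
print(d_long/d_short, (d_long/d_short)/(2/np.sqrt(3)) - 1)   # ~0.0155, i.e. ~2%
```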
It will be most interesting to apply this model to stalk-like structures thought to be of importance in membrane fusion to determine by what amount the presence of non-lamellar forming lipids can lower the free energy barrier to their formation. This work was supported in part by the National Science Foundation under grant number DMR9876864.

Fig. 1: Phase diagram of an anhydrous system of lipids as a function of the temperature $`T_N^{*}`$ and the volume of the head group relative to that of the entire lipid, $`f\equiv v_h/(v_h+2Nv_t)=1/(1+\gamma _t)`$. In addition to the disordered ($`D`$) and lamellar $`L_\alpha `$ phases, there are body-centered cubic ($`bcc`$), hexagonal ($`H`$), and gyroid $`G`$ phases. The subscripts $`I`$ and $`II`$ denote normal and inverted phases respectively. The dashed lines indicate extrapolated boundaries.

Fig. 2: Phase diagram of a mixture of two lipids with the same head group, but one with tails 1.5 times the length of the other. Lipid 1, a lamellar-forming lipid, is characterized by $`1/(1+\gamma _t)=0.2857`$ while the non-lamellar forming lipid 2, with the longer tails, is characterized by $`1/(1+1.5\gamma _t)=0.2105`$. $`T^{*}=1/2\chi N_1`$ is a measure of the temperature, and $`\mathrm{\Theta }`$ is the volume fraction of lipid 1. Solid lines result from the full self-consistent field calculation of the mixture, while open diamonds result from the single effective lipid approximation. Very small regions of b.c.c. phases have been ignored.

Fig. 3: (a) Volume fractions of the head groups $`\varphi _h^{(1)}`$ and $`\varphi _h^{(2)}`$ of lipids 1 and 2 and of the tail groups $`\gamma _t\varphi _t^{(1)}`$ and $`\gamma _t\varphi _t^{(2)}`$ in the lamellar phase at $`T^{*}=0.045`$ and volume fraction of lipid 1 $`\mathrm{\Theta }=0.6588`$. (b) The order parameter $`\zeta (x)\equiv \varphi _t^{(1)}(x)/<\varphi _t^{(1)}(x)>-\varphi _t^{(2)}(x)/<\varphi _t^{(2)}(x)>`$, where the averages are taken over the region $`\psi (x)\le 0`$.

Fig. 4: The order parameter $`\psi (𝐫)=\varphi _h^{(1)}(𝐫)-\varphi _t^{(1)}(𝐫)+\varphi _h^{(2)}(𝐫)-(1/\alpha _2)\varphi _t^{(2)}(𝐫)`$ in the $`H_{II}`$ phase at $`T^{*}=0.045`$ and $`\mathrm{\Theta }=0.27`$. The maximum value is 0.719 and the minimum value $`-0.286`$. Each gradation represents a change of 10 per cent.

Fig. 5: The order parameter $`\zeta (𝐫)`$ in the hexagonal phase of Fig. 4. The averages are taken over the region $`\psi (𝐫)\le -0.282`$. Regions with $`\psi (𝐫)>-0.282`$ are shown in white. Light parts of the tail region indicate an excess of non-lamellar forming lipids, while darker areas represent excess of lamellar-forming lipids.

Fig. 6: (a) Volume fraction of non-lamellar forming lipid 2 evaluated on the Brillouin zone edge from nearest-neighbor direction, $`\theta =0`$, to next-nearest-neighbor direction, $`\theta =\pi /6`$. (b) Volume fraction of lamellar forming lipid 1 evaluated on the Brillouin zone edge. (c) Mean curvature of the surface defined by $`\psi (𝐫)=0`$.

Fig. 7: A view of the gyroid taken along the $`[001]`$ direction. The surface is defined by the value of the order parameter $`\psi (𝐫)=0.5`$, which is well within the region in which the head groups dominate.

Fig. 8: A second view of the gyroid surface, defined as in Fig. 7. The longest distance between a point in the tail region and the axis of any tube is shown by two intersecting solid lines. The shortest distance between the axes of two tubes is shown by a dotted line. Such a line connects nearby regions of three-fold symmetry on the two different sublattices.
Fig. 9: Order parameter $`\zeta (𝐫)`$ shown in a cut which passes through the point requiring the least stretch of the lipid tails. The points closest together are shown connected by a solid line. The volume fractions of the tails are averaged over the region defined by $`\psi (𝐫)\le -0.2`$. The region in which $`\psi (𝐫)>-0.2`$ is shown in white. The maximum value of $`\zeta (𝐫)`$, in black, is $`9.26\times 10^{-2}`$ and the minimum value is $`-6.64\times 10^{-2}`$.

Fig. 10: Order parameter $`\zeta (𝐫)`$ shown in a cut which passes through the point furthest from the axis of any tube. The volume fractions of the tails are averaged over the region defined by $`\psi (𝐫)\le -0.2`$. The maximum value of $`\zeta `$ is $`10.35\times 10^{-2}`$ and the minimum is $`-6.99\times 10^{-2}`$.

Fig. 11: The order parameter $`\zeta (𝐫)`$ is plotted on the surface defined by the order parameter $`\psi (𝐫)=-0.2`$ in the region of the three-fold connectors. The largest value, which is black and at the center of this region, is $`3.25\times 10^{-2}`$, showing that the lamellar-forming lipids are more probable there. The smallest value, shown in the lightest gray, is $`-1.82\times 10^{-2}`$, indicating a few per cent variation in this region.

Fig. 12: The mean curvature on the surface defined by $`\psi (𝐫)=0`$ is shown. The smallest value of the mean curvature, $`0.20`$, occurs at the center, while the largest values, $`0.52`$, occur away from it.
# Confirming EIS Clusters. Multi-object Spectroscopy
DSF-T-99/33 # Testing quark mass matrices with right-handed mixings D. Falcone<sup>1</sup> and F. Tramontano<sup>1,2</sup> <sup>1</sup>Dipartimento di Scienze Fisiche, Università di Napoli, Mostra d’Oltremare, Pad. 19, I-80125, Napoli, Italy; <sup>2</sup>INFN, Sezione di Napoli, Napoli, Italy e-mail: falcone@na.infn.it e-mail: tramontano@na.infn.it ## Abstract In the standard model, several forms of quark mass matrices which correspond to the choice of weak bases lead to the same left-handed mixings $`V_L=V_{CKM}`$, while the right-handed mixings $`V_R`$ are not observable quantities. Instead, in a left-right extension of the standard model, such forms are ansatze and give different right-handed mixings, which are now observable quantities. We partially select the reliable forms of quark mass matrices by means of constraints on right-handed mixings in some left-right models, in particular on $`V_{cb}^R`$. Hermitian matrices are easily excluded. PACS numbers: 12.15.Ff, 12.10.Dm In the framework of the Standard Model (SM), based on the gauge group $`SU(3)_c\times SU(2)_L\times U(1)_Y`$, the right-handed mixings are not observable quantities, but they become observable in extensions of the SM such as the Left-Right Model (LRM) $`SU(3)_c\times SU(2)_L\times SU(2)_R\times U(1)_{B-L}`$ , the Pati-Salam model $`SU(4)_{PS}\times SU(2)_L\times SU(2)_R`$ , and the grand unified model $`SO(10)`$ . Right-handed mixings are the most direct tool to test models of quark mass matrices. Let us explain how this may happen. In the LRM the quark mass and charged current terms are $$\overline{u}_LM_uu_R+\overline{d}_LM_dd_R+g_L\overline{u}_Ld_LW_L+g_R\overline{u}_Rd_RW_R.$$ (1) Diagonalization of $`M_u`$, $`M_d`$ by means of the biunitary transformations $$U_u^{\dagger }M_uV_u=D_u,U_d^{\dagger }M_dV_d=D_d$$ gives (renaming the quark fields) $$\overline{u}_LD_uu_R+\overline{d}_LD_dd_R+g_L\overline{u}_LV_Ld_LW_L+g_R\overline{u}_RV_Rd_RW_R,$$ (2) where $$V_L=U_u^{\dagger }U_d=V_{CKM},V_R=V_u^{\dagger }V_d$$ are the left- and right-handed mixing matrices of quarks, and $`D_u`$, $`D_d`$ have non-negative matrix elements. In the SM the last term in Eqns.(1),(2) is absent and it is possible to perform, without physical consequences, that is without changing the observable quantities appearing in Eqn.(2), the following unitary transformations on the quark fields: $$u_L\to 𝒰u_L,d_L\to 𝒰d_L,$$ (3) $$u_R\to 𝒱_uu_R,d_R\to 𝒱_dd_R.$$ (4) In the LRM Eqn.(4) must be replaced by $$u_R\to 𝒱u_R,d_R\to 𝒱d_R,$$ (5) that is, also $`u_R`$ and $`d_R`$ must transform in the same way, because of the last term in Eqns.(1),(2). From the point of view of quark mass matrices, the consequences of replacing Eqn.(4) with Eqn.(5), keeping Eqn.(3), are the following. In the SM we can use the freedom in $`𝒰`$ and $`𝒱_u`$ to choose $`M_u=D_u`$. Further we can use the freedom in $`𝒱_d`$ to choose $`M_d`$ to be hermitian or to have three zeros . In the LRM, the second freedom is not there because both the diagonalizing matrices of $`M_d`$ are physical observables. This fact means that bases in the SM become ansatze in the LRM, giving the same $`V_L`$ but different $`V_R`$. The aim of this Letter is to begin a selection of quark mass matrices in the LRM by using information on right-handed mixings.
In fact, if without loss of generality one sets $`M_u=D_u`$, then $`V_L^{\dagger }M_dV_R=D_d`$, and thus $$V_R^{\dagger }=D_d^{-1}V_L^{\dagger }M_d$$ (6) permits one to calculate the right-handed mixing matrix $`V_R`$ (values of quark masses at the scale $`M_Z`$ and of the mixing $`V_L`$ are extracted from refs. and ). It is well-known that if $`M_d`$ is hermitian or symmetric then $`|V_R|=|V_L|`$. These conditions correspond to manifest and pseudomanifest left-right symmetry, respectively. In the general case, however, $`V_R`$ is not related to $`V_L`$ . Notice that different quark mass and mixing matrices, which correspond to bases in the SM, are connected by a suitable unitary $`U_R`$, because $$V_L^{\dagger }M_dV_R=V_L^{\dagger }M_dU_RU_R^{\dagger }V_R=V_L^{\dagger }M_d^{}V_R^{},$$ (7) where $`M_d^{}=M_dU_R`$ and $`V_R^{}=U_R^{\dagger }V_R`$. Therefore, they give different right-handed mixings in the LRM. For example, let us consider the simple case of the first two generations with real mass matrices (in the LRM mass matrices are complex in general, even for only two generations). The left-handed mixings are given by $$V_L\simeq \left(\begin{array}{cc}1& \lambda \\ -\lambda & 1\end{array}\right),\lambda =0.22.$$ In the SM, using a right-handed rotation $`𝒱_d`$ it is possible to put one zero in $`M_d`$ in any position. In the LRM such four forms give different $`V_R`$. We can get all forms from just one, for example from $$M_d\simeq \left(\begin{array}{cc}0& \sqrt{m_dm_s}\\ \sqrt{m_dm_s}& m_s\end{array}\right)\to V_R\simeq \left(\begin{array}{cc}1& \lambda \\ -\lambda & 1\end{array}\right),$$ by using in Eqn.(7) $`U_R`$ like $$\left(\begin{array}{cc}0& 1\\ 1& 0\end{array}\right),\left(\begin{array}{cc}c& s\\ -s& c\end{array}\right).$$ In fact $$M_dU_R=\left(\begin{array}{cc}0& \sqrt{m_dm_s}\\ \sqrt{m_dm_s}& m_s\end{array}\right)\left(\begin{array}{cc}0& 1\\ 1& 0\end{array}\right)=\left(\begin{array}{cc}\sqrt{m_dm_s}& 0\\ m_s& \sqrt{m_dm_s}\end{array}\right)=M_d^{},$$ $$U_R^{\dagger }V_R=\left(\begin{array}{cc}0& 1\\ 1& 0\end{array}\right)\left(\begin{array}{cc}1& \lambda \\ -\lambda & 1\end{array}\right)=\left(\begin{array}{cc}-\lambda & 1\\ 1& \lambda \end{array}\right)=V_R^{}.$$ The mixing $`V_{us}^R`$ is small on the first basis and large on the second.
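A minimal numerical version of this two-generation example, built directly on Eq. (6), is sketched below. The masses are illustrative round numbers (m_d = 5, m_s = 100, in MeV, so that $`\lambda \simeq \sqrt{m_d/m_s}\simeq 0.22`$), not fitted values.

```python
import numpy as np

# Numerical check of the two-generation example, using Eq. (6) in the real
# form V_R^T = D_d^{-1} V_L^T M_d. Masses are illustrative, not fitted.
md, ms = 5.0, 100.0
a = np.sqrt(md * ms)
Md = np.array([[0.0, a], [a, ms]])

w, VL = np.linalg.eigh(Md @ Md.T)      # Eq. (8); ascending, light state first
Dd = np.diag(np.sqrt(w))

def VR(M):                             # Eq. (6)
    return (np.linalg.inv(Dd) @ VL.T @ M).T

print(np.round(abs(VR(Md)), 3))        # ~[[0.98, 0.21], [0.21, 0.98]]: |V_R| = |V_L|

# Rotate the zero from entry (1,1) to entry (2,1): another basis/ansatz.
c, s = np.sqrt(ms/(ms + md)), np.sqrt(md/(ms + md))
UR = np.array([[c, s], [-s, c]])
print(np.round(Md @ UR, 2))            # ~[[-m_d, sqrt(m_d m_s)], [0, m_s]]
print(np.round(abs(VR(Md @ UR)), 3))   # ~identity: same V_L, different V_R
```

The first matrix confirms the symmetric-texture statement $`|V_R|=|V_L|`$, while the rotated basis shows the right-handed mixing migrating into the basis choice, exactly as argued in the text.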
Moreover, $$\left(\begin{array}{cc}0& \sqrt{m_dm_s}\\ \sqrt{m_dm_s}& m_s\end{array}\right)\left(\begin{array}{cc}c& s\\ -s& c\end{array}\right)=\left(\begin{array}{cc}-\sqrt{m_dm_s}s& \sqrt{m_dm_s}c\\ \sqrt{m_dm_s}c-m_ss& \sqrt{m_dm_s}s+m_sc\end{array}\right),$$ and imposing the element 2-1 to vanish, we have $$c=\sqrt{\frac{m_s}{m_s+m_d}}\simeq 1,s=\sqrt{\frac{m_d}{m_s+m_d}}\simeq \lambda ,$$ and the third basis $$\left(\begin{array}{cc}0& \sqrt{m_dm_s}\\ \sqrt{m_dm_s}& m_s\end{array}\right)\left(\begin{array}{cc}\sqrt{\frac{m_s}{m_s+m_d}}& \sqrt{\frac{m_d}{m_s+m_d}}\\ -\sqrt{\frac{m_d}{m_s+m_d}}& \sqrt{\frac{m_s}{m_s+m_d}}\end{array}\right)\simeq \left(\begin{array}{cc}-m_d& \sqrt{m_dm_s}\\ 0& m_s\end{array}\right),$$ $$\left(\begin{array}{cc}1& -\lambda \\ \lambda & 1\end{array}\right)\left(\begin{array}{cc}1& \lambda \\ -\lambda & 1\end{array}\right)\simeq \left(\begin{array}{cc}1& 0\\ 0& 1\end{array}\right).$$ And again we can get the fourth basis, from the third, through $$\left(\begin{array}{cc}-m_d& \sqrt{m_dm_s}\\ 0& m_s\end{array}\right)\left(\begin{array}{cc}0& 1\\ 1& 0\end{array}\right)=\left(\begin{array}{cc}\sqrt{m_dm_s}& -m_d\\ m_s& 0\end{array}\right),$$ $$\left(\begin{array}{cc}0& 1\\ 1& 0\end{array}\right)\left(\begin{array}{cc}1& 0\\ 0& 1\end{array}\right)=\left(\begin{array}{cc}0& 1\\ 1& 0\end{array}\right).$$ Mixing is nearly zero on the third basis and nearly one on the fourth. In this way, also for more than two generations, one can construct different bases in the SM, which are different ansatze in the LRM, from just a few of them. Therefore, let us consider now three generations and label elements in $`M_d`$ as $$\left(\begin{array}{ccc}1& 2& 3\\ 4& 5& 6\\ 7& 8& 9\end{array}\right).$$ There are several SM bases with three zeros in $`M_d`$ . For example zeros can be put in positions 137 and 236 , 478 , 124. The last form can be obtained from 137 by just relabeling the family indices 2,3. From bases 124, 137, 478 we can calculate fifty-four bases by means of the six special rotations $$\left(\begin{array}{ccc}1& 0& 0\\ 0& 1& 0\\ 0& 0& 1\end{array}\right),\left(\begin{array}{ccc}1& 0& 0\\ 0& 0& 1\\ 0& 1& 0\end{array}\right),$$ $$\left(\begin{array}{ccc}0& 1& 0\\ 1& 0& 0\\ 0& 0& 1\end{array}\right),\left(\begin{array}{ccc}0& 1& 0\\ 0& 0& 1\\ 1& 0& 0\end{array}\right),$$ $$\left(\begin{array}{ccc}0& 0& 1\\ 1& 0& 0\\ 0& 1& 0\end{array}\right),\left(\begin{array}{ccc}0& 0& 1\\ 0& 1& 0\\ 1& 0& 0\end{array}\right),$$ which produce permutations of columns in $`M_d`$ and of rows in $`V_R`$, and suitable unitary transformations of the type $$\left(\begin{array}{ccc}1& 0& 0\\ 0& e^{i\alpha }c& e^{i\beta }s\\ 0& -e^{-i\gamma }s& e^{-i\delta }c\end{array}\right),\alpha +\gamma =\beta +\delta .$$ For each of the three starting forms under examination we have calculated the matrix $`M_d`$ by the relation $$M_dM_d^{\dagger }=V_LD_d^2V_L^{\dagger }$$ (8) and $`V_R`$ by Eqn.(6). For $`V_L`$ we use the standard parametrization . Moreover, to keep an arbitrary representation of $`V_L`$ one must put three phases (not just one SM observable) in $`M_d`$ . Their positions for our starting bases are 356, 256, 236, respectively. Putting three phases and their position is part of the ansatz in the LRM, because for three generations of quarks there are seven observable phases, one can be inserted in $`V_L`$ and six in $`V_R`$ . However, due to the position of the three zeros, putting the three phases in another position, or more than three phases, up to six, does not change the moduli of $`V_R`$.
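Parts of the counting in the paragraph that follows can be verified mechanically. The sketch below checks that there are C(9,3) = 84 ways to place three zeros in $`M_d`$, and that column permutations generate six distinct patterns from each of the starting textures 124, 137 and 478 (labels as in the 1-9 matrix above).

```python
from itertools import combinations, permutations

# 84 ways to place three zeros in a 3x3 matrix:
all_patterns = list(combinations(range(9), 3))
print(len(all_patterns))                            # 84

def column_orbit(zeros):
    """Distinct zero patterns reachable by permuting the three columns."""
    pats = set()
    for perm in permutations(range(3)):
        pats.add(tuple(sorted(3*(z // 3) + perm[z % 3] for z in zeros)))
    return pats

# Positions 1-9 map to zero-based indices 0-8 (row = i//3, column = i%3).
start = {"124": (0, 1, 3), "137": (0, 2, 6), "478": (3, 6, 7)}
orbits = {k: column_orbit(v) for k, v in start.items()}
print({k: len(v) for k, v in orbits.items()})       # 6 patterns each
print(len(set().union(*orbits.values())))           # 18 distinct patterns in total
```

The remaining thirty-six bases of the text arise from the extra unitary transformations, which move a zero within the second column before the column permutations are applied again.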
From the three starting bases, by means of the six rotations we get eighteen bases, and with the help of the unitary transformation we get another six bases (imposing one element in the second column to vanish) which become thirty-six by using again the six rotations, making a total of fifty-four. These are all the SM bases with $`M_u=D_u`$ and $`M_d`$ containing three zeros, out of eighty-four possibilities. We try to understand if some of the fifty-four bases satisfy constraints coming from $`B`$ decay, $`K_L-K_S`$ mass difference and $`B\overline{B}`$ mixing, within the LRM. As an example of a successful ansatz we report $`|M_d|`$ and $`|V_R|`$ for model 124: $$|M_d|=\left(\begin{array}{ccc}0& 0& 0.023\\ 0& 0.106& 0.104\\ 0.541& 2.687& 1.213\end{array}\right),|V_R|=\left(\begin{array}{ccc}0.958& 0.221& 0.180\\ 0.205& 0.393& 0.896\\ 0.199& 0.892& 0.405\end{array}\right).$$ The near equality of elements 5 and 6 in $`|M_d|`$ is due to the specific value $`\delta \simeq 75^{\circ }`$ . The matrix $`|V_R|`$ has an approximate symmetric expression $$|V_R|\simeq \left(\begin{array}{ccc}1& \lambda & \lambda \\ \lambda & 2\lambda & 1\\ \lambda & 1& 2\lambda \end{array}\right).$$ Further constraints on the form of $`V_R`$ come from the $`K_L-K_S`$ mass difference and $`B\overline{B}`$ mixing. Assuming that each row and column of $`V_R`$ contains only one large element and forbidding fine-tuning, these constraints give $`|V_{us}^R|\lesssim \lambda ^2`$, $`|V_{ub}^R|\lesssim \lambda `$, $`|V_{tb}^R|\lesssim \lambda `$, when $`|V_{ts}^R|`$ is large, and $`|V_{ud}^R|\lesssim \lambda `$, $`|V_{ub}^R|\lesssim \lambda `$, $`|V_{tb}^R|\lesssim \lambda ^3`$, when $`|V_{td}^R|`$ is large. Out of the sixteen models in Table 1, model 128 satisfies quite well the second set of constraints: $$|M_d|=\left(\begin{array}{ccc}0& 0& 0.023\\ 0.103& 0.021& 0.104\\ 2.741& 0& 1.213\end{array}\right),|V_R|=\left(\begin{array}{ccc}0.199& 0.892& 0.405\\ 0.088& 0.395& 0.914\\ 0.976& 0.217& 0.0003\end{array}\right).$$ The approximate expression for $`|V_R|`$ is $$|V_R|\simeq \left(\begin{array}{ccc}\lambda & 1& 2\lambda \\ 2\lambda ^2& 2\lambda & 1\\ 1& \lambda & \lambda ^5\end{array}\right).$$ Model 124 gives $`|V_{us}^R|\sim \lambda `$ rather than $`|V_{us}^R|\lesssim \lambda ^2`$. Also models 479 and 589 are reliable, with very small mixings. To better explain the physical content of the foregoing calculation we present some comments. We have considered fifty-four forms of quark mass matrices with three zeros and three phases inside $`M_d`$ and a diagonal $`M_u`$. As we said, $`|V_R|`$ does not change if we put more than three phases. On the other hand the existence of three zeros in $`M_d`$ is a strong restriction, because in the LRM just one or two zeros already define an ansatz. Nevertheless, we have found a few models that satisfy the constraints from $`K`$ and $`B`$ physics. Other ansatze can be obtained starting from a diagonal $`M_d`$. We stress the simple result that, if $`|V_{cb}^R|`$ is large, hermitian or symmetric quark mass matrices are not reliable . Non-symmetric mass matrices have important applications in the leptonic sector, mainly in connection with the large mixing of neutrinos . Using Eqns.(3),(5) it is possible to change the structure of both $`M_u`$ and $`M_d`$. For example, starting from model 128, multiplying on the right by a simple unitary transformation in the 2-3 sector yields $`M_d`$ with one zero in position 1 and $`M_u`$ with four zeros in positions 2347.
Although such forms can be more interesting for uncovering an underlying theory of fermion masses and mixings, they lead to the same parameters of Eqn.(2), and we need other observable quantities to make a selection among such models with non-diagonal mass matrices. Such new physical parameters exist in extensions of the LRM. Actually, in the SM one can get $`M_u`$ diagonal and $`M_d`$ with three zeros; in the LRM $`M_u`$ can be diagonalized but $`M_d`$ is fixed. In the Pati-Salam model, due to the relation between quarks and leptons, we cannot choose $`M_u`$ diagonal in the general case. In $`SO(10)`$, $`u_L`$ and $`u_R`$ also transform in the same way, so it is never possible to choose $`M_u`$ diagonal. Non-symmetric mass matrices can be obtained by using also the antisymmetric Higgs 120 in the Yukawa couplings with fermions, or if one allows for effective non-renormalizable couplings of the light generations . In conclusion, for the first time, we have performed, within the LRM, a systematic study of quark mass matrices which have a general structure in the SM. Constraints on right-handed mixings, coming from various experimental and theoretical sources, permit us to select three reliable forms, apart from phases. The authors thank F. Buccella for comments on the manuscript and T. Rizzo for communications.
# Coexistence of Weak Localization and a Metallic Phase in Si/SiGe Quantum Wells ## Abstract Magnetoresistivity measurements on p-type Si/SiGe quantum wells reveal the coexistence of a metallic behavior and weak localization. Deep in the metallic regime, pronounced weak localization reduces the metallic behavior around zero magnetic field without destroying it. In the insulating phase, a positive magnetoresistivity emerges close to B=0, possibly related to spin-orbit interactions. The recently discovered metal-insulator transition (MIT) in Si-MOSFETs has meanwhile been observed in a variety of material systems, such as p-type and n-type GaAs heterostructures, Si/SiGe and AlAs quantum wells. These experiments challenge the scaling theory of localization for non-interacting electrons in two dimensions (2D) in the weakly disordered ($`k_Fl\gg 1`$) regime. Since then an increasing number of experiments have investigated more details of this MIT. In spite of considerable theoretical research the origin of the metallic phase is still controversially discussed. In high density 2D carrier systems, which can be treated as non-interacting, the scaling theory of localization fits the experimental data well, yielding insulating behavior as one approaches the zero temperature limit. But in all systems showing a MIT (with the possible exception of Refs. and ), the ratio $`r_s`$ between carrier-carrier interaction energy and kinetic energy is of the order of 10, suggesting that interactions are driving the formation of the metallic phase and cannot be neglected when calculating corrections to the conductivity. This path is followed in the majority of the theoretical models, although several ideas not relying on strong interactions have been developed as well. Weak localization (WL) can only describe one part of the total conductivity correction, and additional contributions, such as particle-particle interactions, spin-orbit interactions or multi-subband transport, must be included. Experimentally only the superposition of all contributions at B=0 can be detected. In total, a complex conductivity behavior $`\sigma (T,B)`$ is expected. Recent studies on the low-field magnetoresistance in the metallic phase have been done in Si-MOSFETs and p-type Si/SiGe quantum wells. In this publication we investigate WL effects as a function of magnetic field and temperature in the regime where the system shows metallic behavior. The samples used in this study are p-type Si/SiGe quantum wells exhibiting the MIT as a function of hole density. We find that (1) the shape of the low-field magnetoresistance in the metallic phase can be well described by standard WL theory, (2) there is no indication for a novel dephasing mechanism in the metallic regime, (3) the magnitude and even the sign of the temperature dependence of the resistivity can depend on the applied magnetic field, and (4) a broad negative magnetoresistance develops in the insulating phase, with a small positive magnetoresistance superimposed around zero magnetic field. These observations indicate that the resistivity in the metallic phase is determined by different, similarly important contributions. The samples investigated in this study were grown by molecular beam epitaxy, and consist of a 200Å $`\mathrm{Si}_{0.85}\mathrm{Ge}_{0.15}`$ layer surrounded by undoped Si layers, a 150Å B-doped Si layer with a setback of 180Å from the well, and a 200Å undoped Si cap. The SiGe layer forms a triangular potential well for the two-dimensional hole gas.
Due to the lattice mismatch between Si and SiGe as well as due to size quantization, the heavy hole ($`m_J`$ = $`\pm `$3/2) potential is split from the light hole ($`m_J`$ = $`\pm `$1/2) potential, and ensures that the lowest occupied bound state has heavy hole character. The transport effective mass of this state is $`m^{*}\approx 0.25m_0`$, as extracted from the temperature dependence of Shubnikov-de Haas oscillations. Conventional Hall bar structures were fabricated with a source-drain length of 0.6mm and a width of 0.2mm. The distance between the voltage probes was 0.3mm. The hole density $`p`$ could be tuned in the range $`1.1\times 10^{11}\,cm^{-2}\le p\le 2.6\times 10^{11}\,cm^{-2}`$ using a Ti/Al Schottky gate. Transport measurements using standard four-terminal lock-in techniques were performed in a pumped liquid He cryostat, as well as in the mixing chamber of a $`{}^{3}\mathrm{He}/{}^{4}\mathrm{He}`$ dilution refrigerator. The mobility in these structures was found to increase strongly with carrier concentration, from 1000 $`cm^2/Vs`$ (for $`p=1.1\times 10^{11}\,cm^{-2}`$) to 7800 $`cm^2/Vs`$ ($`p=2.6\times 10^{11}\,cm^{-2}`$). Figure 1 shows a series of magnetoresistance measurements for several carrier densities and temperatures. From top to bottom, the carrier density decreases and the sample undergoes a transition from metallic to insulating behavior at B=0 as well as for small magnetic fields. For large hole densities (Figs. 1a, b), the resistivity at B=0 clearly decreases with decreasing temperature, indicating metallic behavior. Similar results have been obtained in the metallic regime in Si MOSFETs, and in SiGe quantum wells with fixed carrier density, where the authors also discuss the broad background in terms of interactions. In the present paper, we focus on the evolution of $`\rho (B)`$ as a function of $`p`$. As the hole density is decreased by suitable gate voltages, the metallic behavior becomes weaker (Fig. 1b). The sample becomes insulating as the carrier density is further reduced (Figs. 1d, e). At intermediate hole densities (Fig. 1c), $`d\rho /dT<0`$ at low temperatures, but $`d\rho /dT>0`$ at higher temperatures. Magnetoresistivity measurements allow one to distinguish different contributions to the total resistivity. While the WL effect leads to a negative magnetoresistivity $`\rho (B)`$, spin-orbit coupling results in a positive magnetoresistivity. Interactions produce a complex magnetoresistivity, which depends on the sample parameters. From the magnetic field dependence of the resistivity one can clearly discern a negative magnetoresistance in the metallic phase (Figs. 1a, b). Fig. 2a shows the longitudinal magnetoconductivity $`\sigma (B)`$ for $`p=2.6\times 10^{15}\,m^{-2}`$ around B=0 in the metallic phase. In addition, theoretical curves for the WL correction of $`\sigma (B)`$, i.e. $$\delta \sigma (B,T)=\alpha \frac{e^2}{2\pi ^2\hbar }\left[\mathrm{\Psi }\left(\frac{1}{2}+\frac{\tau _B}{2\tau _\varphi }\right)-\mathrm{\Psi }\left(\frac{1}{2}+\frac{\tau _B}{2\tau _e}\right)\right]$$ (1) are fitted to the data with the temperature-dependent phase coherence time $`\tau _\varphi (T)`$ and $`\alpha `$ as parameters. Here, $`\tau _B`$ denotes the magnetic time, $`\tau _e`$ the elastic scattering time, and $`\mathrm{\Psi }`$ is the digamma function. The constant $`\alpha `$ is a phenomenological parameter that describes additional mechanisms, for example scattering by the Maki-Thompson process, or anisotropic scattering. If no such additional scattering mechanisms exist, $`\alpha `$ is expected to be 1. In n-type Si MOSFETs, intervalley scattering is supposed to determine $`\alpha `$. 
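In practice Eq. (1) can be fitted as in the following minimal sketch, where the diffusion constant, the elastic time, and the synthetic trace are placeholders rather than our sample parameters, and where the magnetic time is assumed to be $`\tau _B=\hbar /(2eDB)`$ (consistent with the characteristic field $`B_\tau =\hbar /(4eD\tau )`$ quoted below).

```python
# Minimal sketch of fitting Eq. (1) to a magnetoconductivity trace.
# D, tau_e and the synthetic "data" are illustrative placeholders, not
# the parameters of our samples; tau_B = hbar/(2 e D B) is assumed for
# the magnetic time.
import numpy as np
from scipy.special import digamma
from scipy.optimize import curve_fit

hbar = 1.0545718e-34      # J s
e = 1.6021766e-19         # C
D = 2.0e-3                # diffusion constant, m^2/s (assumed)
tau_e = 1.1e-12           # elastic scattering time, s (assumed)

def wl_correction(B, alpha, tau_phi):
    """Eq. (1), in units of e^2/(2 pi^2 hbar)."""
    tau_B = hbar / (2.0 * e * D * np.abs(B))
    return alpha * (digamma(0.5 + tau_B / (2.0 * tau_phi))
                    - digamma(0.5 + tau_B / (2.0 * tau_e)))

B = np.linspace(0.005, 0.3, 60)            # field axis, tesla
data = wl_correction(B, 0.61, 3.0e-11)     # synthetic stand-in for a measured trace
popt, _ = curve_fit(wl_correction, B, data, p0=(1.0, 1.0e-11))
print("alpha = %.2f, tau_phi = %.2e s" % tuple(popt))
# Repeating the fit at each temperature yields tau_phi(T); the exponent
# gamma in tau_phi ~ T^(-gamma) then follows from a log-log fit, e.g.
#   gamma = -np.polyfit(np.log(T), np.log(tau_phi_T), 1)[0]
```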
Our data are fitted best for $`\alpha =0.61`$, similar to the results of Ref. . The mechanism that leads to this reduction of $`\alpha `$ remains an open question. It cannot, however, be explained by spin-orbit scattering between the light hole and the heavy hole band, since their energy separation is more than $`24`$ meV in our system and therefore much larger than the Fermi energy. For the temperature dependence of $`\tau _\varphi `$, we find $`\tau _\varphi \propto T^{-\gamma }`$, with $`\gamma =1.09\pm 0.2`$ for $`\alpha =1`$, and $`\gamma =1.29\pm 0.2`$ for $`\alpha =0.61`$. For dephasing by quasi-elastic electron-electron collisions (i.e. Nyquist noise), $`\gamma `$=1 is expected. Similar agreement between experiment and theory has also been found in insulating 2D systems. Hence, from the temperature dependence of $`\tau _\varphi `$, there is no indication of a novel dephasing mechanism due to the presence of the metallic phase. Furthermore, neither $`\alpha `$ nor $`\gamma `$ depend significantly on $`p`$ in the metallic phase. Assuming that Nyquist noise causes the dephasing, we find that $`\tau _\varphi `$ is smaller than expected from theory, which, according to Ref. , states $$\frac{1}{\tau _\varphi T}=\frac{k_Be^2}{2\pi \hbar ^2}\rho \mathrm{ln}\frac{\pi \hbar }{e^2\rho }$$ (2) From our fits, we find $`(\tau _\varphi T)^{-1}=3.0\times 10^{11}\,s^{-1}K^{-1}`$ (using $`\alpha `$=0.61), which is a factor of $`3.2`$ below the value expected from theory. Similar discrepancies between experiment and theory are found for insulating 2D carrier systems. These results indicate that even in the metallic regime, a significant amount of carriers still contributes to WL. We do not find clear evidence for a different dephasing mechanism than in other 2D systems. Furthermore, we conclude from the existence of the WL peak that in our system a spontaneous flux state at $`B=0`$, which would break the time reversal symmetry, is of minor importance. At $`B=0`$ and in the metallic phase, the resistance drops faster with decreasing temperature than the WL peak increases. In order to distinguish the temperature dependence of WL from the background resistance, we compare the resistivity at $`B=0`$ with the one at $`B=0.3\,T`$. This field is larger than the characteristic field $`B_\tau =\hbar /(4eD\tau )=0.11\,T`$, and therefore the WL is quenched (Fig. 3). Especially at low temperatures the metallic behavior becomes more pronounced as one moves out of the WL peak. This suggests that two different contributions to the conductivity (or two conducting systems) may exist, one with a metallic temperature behavior and another one with a standard, insulating WL behavior. A possible theoretical description could be the two-phase model proposed recently in Ref. . As one enters the insulating regime at B=0 (Fig. 1d), a very broad negative magnetoresistivity develops that determines the overall temperature dependence. In this situation (i.e. for $`k_Fl\lesssim 1`$, where $`l`$ is the elastic mean free path), $`\tau _\varphi `$ cannot be extracted from fitting Eq. (1) to the data. In this regime, the sample looks rather like a conventional two-dimensional carrier gas with low mobility. We would like to report another finding occurring in the insulating phase. For very low temperatures $`T\lesssim 200`$ mK and small carrier densities, an additional minimum occurs in the magnetoresistance around B=0. Similar features have been observed on n-type Ga\[Al\]As heterostructures and explained by spin-orbit coupling. 
Also, recent data on p-type GaAs heterostructures show a dip in the magnetoresistance around $`B=0`$ which, however, is superimposed on a rather flat background. Spin-orbit coupling effects are expected to be important in p-type SiGe heterostructures and could be the reason for this low-temperature feature. Note, however, that in contrast to Ref. , we observe this feature only deep in the insulating phase. In summary, we have investigated the influence of perpendicular magnetic fields on the resistance in the metallic regime of a two-dimensional hole gas in Si/SiGe quantum wells. A dip in the magnetoresistivity at B=0, possibly due to spin-orbit coupling, is found deep in the insulating phase. We have observed the coexistence of WL and metallic behavior. Time reversal symmetry does not appear to be spontaneously broken at B=0 in our samples. The temperature dependence of the dephasing time $`\tau _\varphi `$ suggests that Nyquist noise determines the dephasing even when the sample is in the metallic phase. We find no significant indication that $`\tau _\varphi `$ behaves differently than in insulating 2D systems. Our data are consistent with a model based on (at least) two different conductivity contributions for the metallic phase. We have enjoyed fruitful discussions with P.T. Coleridge, S.V. Kravchenko, D. Popovic, and F. C. Zhang. Financial support from ETH Zürich and the Schweizerischer Nationalfonds is gratefully acknowledged.
# HeXLN: A 2-Dimensional nonlinear photonic crystal ## Abstract We report on the fabrication of what we believe is the first example of a two dimensional nonlinear periodic crystal, where the refractive index is constant but in which the 2nd order nonlinear susceptibility is spatially periodic. Such a crystal allows for efficient quasi-phase matched 2nd harmonic generation using multiple reciprocal lattice vectors of the crystal lattice. External 2nd harmonic conversion efficiencies $`>60\%`$ were measured with picosecond pulses. The 2nd harmonic light can be simultaneously phase matched by multiple reciprocal lattice vectors, resulting in the generation of multiple coherent beams. The fabrication technique is extremely versatile and allows for the fabrication of a broad range of 2-D crystals including quasi-crystals. 42.65.K, 42.65.-k, 42.70.Qs, 42.70.M The interaction of light with periodic media is an area of intense interest both theoretically and experimentally. A central theme of this work is the idea of a linear photonic crystal in which the linear susceptibility is spatially periodic. Photonic crystals can have a complete photonic bandgap over some frequency range, and this bandgap can be exploited for a wide variety of processes such as zero-threshold lasers, inhibited spontaneous emission, or novel waveguiding schemes such as photonic bandgap fibres. In one dimension, photonic crystals, or Bragg gratings, have been well studied for many years. In three dimensions a complete photonic bandgap at long wavelengths has already been demonstrated, and work on extending this to the visible region is rapidly progressing. Recently V. Berger proposed extending the idea of photonic crystals to include nonlinear photonic crystals. In a nonlinear photonic crystal (NPC) there is a periodic spatial variation of a nonlinear susceptibility tensor while the refractive index is constant. This is in contrast with other work done on nonlinear interactions in photonic crystals, where the nonlinearity is assumed constant throughout the material and the photonic properties derive from the variation of the linear susceptibility. The simplest type of NPCs are the 1-D quasi-phase-matched materials, first proposed by Armstrong et al., in which the second order susceptibility undergoes a periodic change of sign. This type of 1-D structure has attracted much interest since the successful development of periodically-poled lithium niobate based devices. Generalisation to two and three dimensions, in analogy with linear photonic crystals, was recently proposed by Berger, and here we report its experimental realisation as a 2-D periodic structure with hexagonal symmetry in lithium niobate (HeXLN). First we briefly summarise the well known 1-D quasi-phase matching (QPM) concept before treating the 2-D case. To this end consider the case of 2nd harmonic generation in a $`\chi ^{(2)}`$ material, where light at a frequency $`\omega `$ is converted to a signal at $`2\omega `$. In general the refractive indices at $`\omega `$ and $`2\omega `$ are different, and hence after a length $`L_c`$ (the coherence length) the fundamental and the generated 2nd harmonic will be $`\pi `$ out of phase. Then in the next coherence length all of the 2nd harmonic is back-converted to the fundamental - resulting in poor overall conversion efficiency. The idea of quasi-phase matching is to change the sign of the nonlinearity periodically with a period of $`L_c`$, thus periodically reversing the phase of the generated 2nd harmonic. 
This ensures that the 2nd harmonic continues to add up in phase along the entire length of the crystal, resulting in a large overall conversion efficiency. An alternative way to understand the physics of quasi-phase matching is through conservation of momentum. 2nd harmonic generation is a three photon process in which two photons with momentum $`\hbar k^\omega `$ are converted into a photon of momentum $`\hbar k^{2\omega }`$, and if $`k^{2\omega }=2k^\omega `$ (ideal phase matching) then the momentum is conserved and the interaction is efficient. However in general, due to dispersion, ideal phase matching is not possible and different techniques must be used to ensure conservation of momentum. In the quasi-phase matched case conservation of momentum becomes $`k^{2\omega }=2k^\omega +G`$, where $`G`$ is the crystal momentum corresponding to one of the reciprocal lattice vectors (RLV) of the macroscopic periodic structure of the NPC. Clearly this technique allows one to phase-match any desired nonlinear interaction, assuming that one can fabricate an appropriate NPC. In 1-D quasi-phase matching can occur in either the co- or counter-propagating direction. For a strictly periodic lattice quasi-phase matching can only occur over limited wavelength ranges since the RLVs are discrete and periodically spaced in momentum space. In order to obtain broader bandwidths one approach is to use aperiodic structures which have densely spaced RLVs. An alternative approach, which is taken here, is to move to a two dimensional NPC, which brings added functionality compared to a 1-D crystal. Clearly in a 2-D NPC the possibility of non-collinear phase matching exists due to the structure of the reciprocal lattice. Once again we restrict ourselves to the case of 2nd harmonic generation and linearly polarised light such that we can use the scalar wave equation. Then making the usual slowly varying envelope approximation and assuming a plane wave fundamental incident upon the crystal, the evolution equation for the 2nd harmonic in the undepleted pump regime can be written as: $$\mathbf{k}^{2\omega }\cdot \nabla E^{2\omega }(\mathbf{r})=2i\frac{\omega ^2}{c^2}\chi ^{(2)}(\mathbf{r})(E^\omega )^2e^{-i(\mathbf{k}^{2\omega }-2\mathbf{k}^\omega )\cdot \mathbf{r}}.$$ (1) Since $`\chi ^{(2)}`$ is periodic we can write it as a Fourier series using the RLVs $`\mathbf{G}_{n,m}`$ $$\chi ^{(2)}(\mathbf{r})=\underset{n,m}{\sum }\kappa _{n,m}e^{i\mathbf{G}_{n,m}\cdot \mathbf{r}},\qquad n,m\in \mathbb{Z}.$$ (2) The phase matching condition, $$\mathbf{k}^{2\omega }-2\mathbf{k}^\omega -\mathbf{G}_{n,m}=0,$$ (3) arises from requiring that the exponent in Eq. (1) be set equal to zero, ensuring growth of the 2nd harmonic along the entire length of the crystal. Eq. (3) is a statement of conservation of momentum as discussed earlier. For each RLV $`\mathbf{G}_{n,m}`$ and a prescribed $`\mathbf{k}^\omega `$ there is at most a unique angle of propagation for the 2nd harmonic such that Eq. (3) is satisfied. The coupling strength of a phase matching process using $`\mathbf{G}_{n,m}`$ is proportional to $`\kappa _{n,m}`$. If a particular Fourier coefficient is zero then no 2nd harmonic generation will be observed in the corresponding direction. In order to demonstrate the idea of a 2-D NPC we poled a wafer of lithium niobate with a hexagonal pattern. Fig. 1 shows an expanded view of the resulting structure, which was revealed by lightly etching the sample in acid. Each hexagon is a region of domain inverted material - the total inverted area comprises $`30\%`$ of the overall sample area. The fabrication procedure was as follows. 
A thin layer of photoresist was first deposited onto the -z face of a 0.3mm thick, z-cut wafer of LiNb$`\mathrm{O}_3`$, and then photolithographically patterned with the hexagonal array. The x-y orientation of the hexagonal structure was carefully aligned to coincide with the crystal's natural preferred domain wall orientation: LiNb$`\mathrm{O}_3`$ itself has trigonal atomic symmetry (crystal class 3m) and shows a tendency for domain walls to form parallel to the y-axis and at $`\pm 60^{\circ }`$, as seen in Fig. 1. Poling was accomplished by applying an electric field via liquid electrodes on the +/-z faces at room temperature. Our HeXLN crystal has a period of $`18.05\mu `$m, suitable for non-collinear frequency doubling of 1536nm at 150$`{}^{\circ }`$C (an elevated temperature was chosen to eliminate photorefractive effects). The hexagonal pattern was found to be uniform across the sample dimensions of 14 $`\times `$ 7mm (x-y) and was faithfully reproduced on the +z face. Lastly we polished the $`\pm x`$ faces of the HeXLN crystal, allowing a propagation length of 14mm through the crystal in the $`\mathrm{\Gamma }\mathrm{K}`$ direction (see Fig. 1). In Fig. 2 we show the reciprocal lattice (RL) for our HeXLN crystal. In contrast with the 1-D case there are RLVs at numerous angles, each of which allows phase matching in a different direction (given by Eq. 3). Note that for a real space lattice period of $`d`$ the RL has a period of $`4\pi /(\sqrt{3}d)`$, as compared with $`2\pi /d`$ for a 1-D crystal, allowing us to compensate for a greater phase mismatch in a 2-D geometry than in a 1-D geometry with the same spatial period. From Eq. (3) and using simple trigonometry it is possible to show that $$\frac{\lambda ^{2\omega }}{n^{2\omega }}=\frac{2\pi }{|\mathbf{G}|}\sqrt{\left(1-\frac{n^\omega }{n^{2\omega }}\right)^2+4\frac{n^\omega }{n^{2\omega }}\mathrm{sin}^2\theta }$$ (4) where $`\lambda ^{2\omega }`$ is the vacuum wavelength of the second harmonic and $`2\theta `$ is the walk-off angle between the fundamental and 2nd harmonic wavevectors. To investigate the properties of the HeXLN crystal we proceeded as follows. The HeXLN crystal was placed in an oven and mounted on a rotation stage which could be rotated by $`\pm 15^{\circ }`$ around the z-axis while still allowing light to enter through the $`+x`$ face of the crystal. The fundamental consisted of 4ps, 300kW pulses obtained from a high power all-fibre chirped pulse amplification system (CPA) operating at a pulse repetition rate of 20kHz. The output from the CPA system was focussed into the HeXLN crystal using a 10cm focal length lens, giving a focal spot diameter of $`150\mu \mathrm{m}`$ and a corresponding peak intensity of $`1.8`$GW/$`\mathrm{cm}^2`$. The initial experiments were done at zero angle of incidence, corresponding to propagation in the $`\mathrm{\Gamma }\mathrm{K}`$ direction. At low input intensities ($`0.2`$GW/$`\mathrm{cm}^2`$) the output was as shown in Fig. 2(b) and consisted of multiple output beams of different colours emerging from the crystal at different angles. In particular two 2nd harmonic beams emerged from the crystal at symmetrical angles of $`\pm (1.1\pm 0.1)^{\circ }`$ from the remaining undeflected fundamental. At slightly wider angles were two green beams (third harmonic of the pump) and at even wider angles were two blue beams (the fourth harmonic, not shown here). There was also a third green beam copropagating with the fundamental. The output was symmetrical since the input direction corresponded to a symmetry axis of the NPC. 
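Before turning to the power dependence, the geometry just described can be made concrete with a short numerical sketch of the reciprocal lattice and of Eq. (4); the refractive indices below are rough placeholder values for LiNb$`\mathrm{O}_3`$ at the operating temperature, not measured material data.

```python
# Sketch of the hexagonal reciprocal lattice and the tuning relation,
# Eq. (4).  n_w and n_2w are rough placeholder indices, not measured data.
import numpy as np

d = 18.05e-6                                    # poling period, m
g = 4.0 * np.pi / (np.sqrt(3.0) * d)            # RL period 4 pi / (sqrt(3) d)
b1 = g * np.array([np.sqrt(3.0) / 2.0,  0.5])   # primitive reciprocal vectors,
b2 = g * np.array([np.sqrt(3.0) / 2.0, -0.5])   # 60 degrees apart

def G(n, m):
    """Reciprocal lattice vector G_{n,m} = n*b1 + m*b2."""
    return n * b1 + m * b2

def lambda_2w(n_w, n_2w, G_nm, theta):
    """Phase-matched 2nd-harmonic vacuum wavelength from Eq. (4);
    2*theta is the walk-off angle between fundamental and harmonic."""
    r = n_w / n_2w
    return n_2w * (2.0 * np.pi / np.linalg.norm(G_nm)) * np.sqrt(
        (1.0 - r) ** 2 + 4.0 * r * np.sin(theta) ** 2)

n_w, n_2w = 2.21, 2.26                          # assumed indices
for theta_deg in (0.0, 0.25, 0.5):              # walk-off half-angles to try
    print(theta_deg, lambda_2w(n_w, n_2w, G(1, 0), np.radians(theta_deg)))
```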
As the input power increased the 2nd harmonic spots remained in the same positions, while the green light appeared to be emitted over an almost continuous range of angles rather than the discrete angles observed at low powers. The two 2nd harmonic beams can be understood by referring to the reciprocal lattice of our structure (Fig. 2). From Fig. 2 it can be seen that for propagation in the $`\mathrm{\Gamma }K`$ direction the closest RLVs are in the $`\mathrm{\Gamma }M`$ directions, and it is these RLVs that account for the 2nd harmonic light. After filtering out the other wavelengths the 2nd harmonic (from both beams) was directed onto a power meter and the efficiency and temperature tuning characteristics for zero input angle were measured. These results are shown in Fig. 3 and Fig. 5. Note that the maximum external conversion efficiency is greater than $`60\%`$, and this is constant over a wide range of input powers. Taking into account the Fresnel reflections from the front and rear faces of the crystal, this implies a maximum internal conversion efficiency of $`82\%`$, i.e. $`41\%`$ in each beam. As the 2nd harmonic power increases the amount of back conversion increases, which we believe is the main reason for the observed limiting of the conversion efficiency at high powers. Evidence of the strong back conversion can be seen in Fig. 4, which shows the spectrum of the remaining fundamental for both vertically (dashed line), i.e. in the z-direction, and horizontally (solid line) polarised input light. As the phase matching only works for vertically polarised light, the horizontally polarised spectrum is identical to that of the input beam and, when compared with the other trace (dashed line), shows the effect of pump depletion and back-conversion. Note that for vertically polarised light the amount of back-converted light is significant compared to the residual pump, which is as expected given the large conversion efficiency. Fig. 4 shows $`8`$ dB ($`85\%`$) of pump depletion, which agrees well with the measured value for the internal efficiency calculated using the average power. In the 1-D case, for an undepleted pump, the temperature tuning curve of a 14mm long length of periodically poled material is expected to have a $`\mathrm{sinc}^2(T)`$ shape and to be quite narrow – $`4.7^{\circ }`$C for a 1-D PPLN crystal with the same length and period as the HeXLN crystal used here. However, as can be seen from Fig. 5, the temperature tuning curve (obtained in a similar manner to the power characteristic) is much broader, with a FWHM of $`25^{\circ }`$C, and it exhibits considerable structure. The input power was 300kW. We believe that the increased temperature bandwidth may be due to the multiple reciprocal lattice vectors that are available for quasi-phase matching, with each RLV producing a beam in a slightly different direction. Thus the angle of emission of the 2nd harmonic should vary slightly with temperature if this is the case. Due to the limitations of the oven we were not able to raise the temperature above $`205^{\circ }`$C and hence could not completely measure the high temperature tail of the temperature tuning curve. Note that temperature tuning is equivalent to wavelength tuning of the pump pulse, and hence it should be possible to obtain efficient phase-matching over a wide wavelength range at a fixed temperature. After characterising the HeXLN crystal at normal incidence we next measured the angular dependence of the 2nd harmonic beams. As the crystal was rotated, phase-matching via different RLVs could be observed. 
For a particular input angle (which determined the angle between the fundamental and the RLVs) quasi-phase matched 2nd harmonic generation occurred, via a RLV, and produced a 2nd harmonic beam in a direction given by Eq. (4). These results are shown in Fig. 6, where the solid circles indicate the measured angles of emission for the 2nd harmonic while the open squares are the predicted values. In the figure zero degrees corresponds to propagation in the $`\mathrm{\Gamma }K`$ direction. Also indicated on the figure are the RLVs used for phase-matching, where $`[n,m]`$ refers to the RLV $`\mathbf{G}_{n,m}`$. Note that there is good overall agreement between the theoretical and experimental results, even for higher order Fourier coefficients, which indicates the high quality of the crystal. The inversion symmetry of Fig. 6 results from the hexagonal symmetry of the crystal. To further highlight this symmetry we have labeled the negative output angles with the corresponding positive RLVs. The only obvious discrepancy comes from the $`[1,1]`$ RLVs, where two closely separated spots are observed rather than a single one. This may be due to a small amount of linear diffraction from the periodic array. At the domain boundaries of the HeXLN crystal there are likely to be small stress-induced refractive index changes, giving a periodic variation in the refractive index. If this indeed proves to be the case then it should be possible to eliminate this by annealing the crystal at high temperatures. For applications where collinear propagation of the fundamental and 2nd harmonic is desirable, propagation along the $`\mathrm{\Gamma }M`$ axis of the HeXLN crystal could be used (since the smallest RLV is in that direction). For the parameters of our crystal collinear 2nd harmonic generation of $`1.446\mu \mathrm{m}`$ in the $`\mathrm{\Gamma }M`$ direction is expected. Visually the output of the HeXLN crystal is quite striking, with different colours (red, green and blue) being emitted in different directions (see Fig. 7). For a range of input angles and low powers distinct green and red spots can be seen, each emitted in a different direction, often with the green light emitted at a wider angle than the 2nd harmonic. The presence of the green light implies sum frequency generation between the fundamental and the 2nd harmonic. For this to occur efficiently it must also be quasi-phase-matched using a RLV of the lattice. In certain regimes (of angle and temperature) simultaneous quasi-phase-matching of both 2nd harmonic generation and sum frequency mixing occurs, with as much as 20% of the 2nd harmonic converted to the green (in multiple beams). As mentioned earlier, at higher powers the green light appears to be emitted over a continuous range of angles. We believe that this might be due to an effect similar to that observed in fibres, where phase-matching becomes less critical at high intensities. If this were the case then the green light would have a broader spectrum in the non-phase-matched case than for the quasi-phase-matched case, but we have not yet been able to verify this. Lastly, we believe that the 4th harmonic results from quasi-phase matching of two 2nd harmonic photons by a higher order RLV, since it is observed at quite wide angles. It should be noted that although lithium niobate preferentially forms domain walls along the $`y`$ axis and at $`\pm 60^{\circ }`$, we are not limited to hexagonal lattices. 
In fact essentially any two dimensional lattice can be fabricated; however, the patterned region of the unit cell will always consist of either a hexagon or a triangle. The shape of the poled region will determine the strength of each of the Fourier coefficients for the RLVs, while the lattice structure will determine their position. One can envisage creating more complicated structures such as a 2-D quasi-crystal in which a small poled hexagon is situated at every vertex. Such a 2-D quasi-crystal could give improved performance for simultaneously phase matching multiple nonlinear processes, as demonstrated recently with a 1-D poled quasi-crystal. Alternatively a HeXLN crystal could be used as an efficient monolithic optical parametric oscillator. Lastly we note that NPCs are a specific example of more general nonlinear holograms, which would convert a beam profile at one wavelength into an arbitrary profile at a second wavelength. For example Imeshev et al. converted a gaussian profile beam at the fundamental to a square-top 2nd harmonic using transversely patterned periodically poled lithium niobate. In conclusion we have fabricated what we believe to be the first example of a two dimensional nonlinear photonic crystal in Lithium Niobate. Due to the periodic structure of the crystal, quasi-phase matching is obtained for multiple directions of propagation, with internal conversion efficiencies of $`>80\%`$. Such HeXLN crystals could find many applications in optics where simultaneous conversion of multiple wavelengths is required.
# Doping dependence of the Néel temperature in Mott-Hubbard antiferromagnets: Effect of vortices ## Abstract The rapid destruction of long-range antiferromagnetic order upon doping of Mott-Hubbard antiferromagnetic insulators is studied within a generalized Berezinskii-Kosterlitz-Thouless renormalization group theory, in accordance with recent calculations suggesting that holes dress with vortices. We calculate the doping-dependent Néel temperature in good agreement with experiments for high-$`T_c`$ cuprates. Interestingly, the critical doping where long-range order vanishes at zero temperature is predicted to be $`x_c\approx 0.02`$, independently of any energy scales of the system. The study of lightly doped Mott-Hubbard (MH) antiferromagnetic insulators is of great current interest, since the insulating parent compounds of cuprate high-$`T_c`$ superconductors are of this type. The various parts of the phase diagram of these compounds are believed to be intimately related. Therefore it is important to understand the properties of the antiferromagnetic phase, in particular the rapid destruction of magnetic order upon doping and the anomalously small critical doping of $`x_c\approx 0.02`$ holes per copper ion . In the present letter we derive the Néel temperature $`T_N`$ as a function of doping using a generalized Berezinskii-Kosterlitz-Thouless (BKT) renormalization group theory for the vortices in the antiferromagnetic state. There are two types of vortices, thermally created electrically neutral ones and electrically charged ones, which are centered at the holes. Both types are nucleated separately (as vortex-antivortex pairs), but additively screen the vortex interaction, with a common unbinding temperature $`T_N(x)`$. This temperature is indeed strongly reduced upon doping and vanishes at $`x_c\approx 0.02`$ independently of the energy scales of the system. Our approach is independent of any particular microscopic model and can thus serve as a guide for electronic theories. Our physical picture is the following: The holes introduced by doping are mainly located at the planar oxygen sites, where they frustrate the antiferromagnetic exchange interaction between copper spins due to their tendency to form copper-oxygen spin singlets . This may lead to ferromagnetic coupling between the two spins . Since the system is approximately two-dimensional, the staggered magnetization can form a vortex as sketched in Fig. 1 to evenly distribute the frustration induced by the hole. On the other hand, neutral vortices without a hole in their core can be thermally created. Due to the easy-plane Dzyaloshinskii-Moriya anisotropy , the staggered magnetization can be described by a two-component order parameter at low energies, leading to a logarithmic size dependence of the single-vortex energy . This implies a logarithmic vortex interaction, making BKT scaling ideas applicable. To describe the interplay of charged and neutral vortices, which determines the Néel temperature $`T_N`$, we have to extend the BKT theory to a system with two kinds of topological defects. One important point is that the density of charged vortices is given by the doping $`x`$. The other is that we have to describe the screening of the vortex interaction due to both types. Electronic theory supports our picture: Within unrestricted Hartree-Fock theory Vergés et al. found several competing low-energy configurations for a lightly hole-doped Hubbard model, including spin polarons, domain walls, and holes dressed with vortices and antivortices. 
In a spin polaron the staggered local moments in the vicinity of the hole are reduced but still collinear, while in a vortex (antivortex) they rotate through $`2\pi `$ ($`-2\pi `$). Seibold using a slave-boson approach and Berciu and John within a self-consistent Hartree-Fock theory found that an even number of holes dress with vortices and antivortices (or merons ) in the ground state for appropriate parameters. Since the energy of a single vortex diverges (logarithmically ) with system size, whereas that of a vortex-antivortex pair remains finite, only pairs are created in infinite systems. An advantage of BKT-type theories is that they include fluctuations on all length scales, in particular on large ones, which are crucial close to the phase transition. Previous studies that only included fluctuating local moments without correlations between them overestimated the critical doping $`x_c`$ . Similarly, the decrease of $`T_N`$ with $`x`$ obtained from the fluctuation exchange approximation is also too slow . By including correlations between neighboring spins into a slave-boson theory for the three-band Hubbard model, Schmalian et al. obtained a critical doping of $`x_c\approx 0.025`$ in better agreement with experiment. However, this approach takes only fluctuations on the length scale of the lattice constant into account, but neglects fluctuations on larger length scales. In BKT theory the interaction of vortices is screened by the polarization of vortex pairs lying between them . As the temperature is increased more pairs are thermally created, leading to increased screening. At the Néel temperature the screening becomes strong enough for the largest pairs to break up. The resulting free vortices destroy the magnetic order. The situation here is more complex: First, we have to take account of the screening due to both charged and neutral vortices, and second, the density of charged vortices is fixed by the doping level $`x`$. Only neutral pairs are thermally created, whereas charged vortices enter the system only upon doping. In principle one could imagine a single hole doped into the system to form a pair consisting of a charged vortex and a neutral antivortex or vice versa, but microscopic calculations do not find this configuration at $`T=0`$. Rather, two holes are needed to produce a vortex-antivortex pair. For simplicity we assume this to hold also at finite temperatures. However, even if mixed pairs are not created upon doping, they are formed when vortex pairs exchange partners. The density of neutral vortices is controlled by their chemical potential $`\mu _{\text{neu}}`$, or equivalently the vortex core energy $`E_{\text{core}}=-\mu _{\text{neu}}`$, which depends on details of the copper-oxygen and copper-copper interactions and is treated here as a parameter. The energy of a vortex-antivortex pair is $`2E_{\text{core}}+V`$ with the interaction $`V(r)=q^2\mathrm{ln}(r/r_0)`$, where $`q^2=2\pi JS^2`$ is the strength of the interaction, $`J`$ is the exchange interaction between nearest neighbors, $`S=1/2`$ is the spin, and $`r_0`$ is the small-distance cutoff of BKT theory. $`r_0`$ can be interpreted as the smallest possible vortex-antivortex separation . Two charged vortices additionally experience a Coulomb interaction, which, however, is irrelevant in the renormalization-group sense, since it falls off faster than $`V(r)\propto \mathrm{ln}r`$. 
The probability of creating a neutral or charged vortex in an area $`r_0^2`$ is given by its fugacity $`y_{\text{neu}}`$ and $`y_{\text{ch}}`$, respectively. Since we have assumed that vortices are only created in neutral or charged pairs, we consider the pair fugacities $`y_{\text{neu}}^2`$ and $`y_{\text{ch}}^2`$. For the smallest possible neutral pairs of size $`r_0`$, $$y_{\text{neu}}^2(r_0)=C_{\text{neu}}^2e^{2\beta \mu _{\text{neu}}},$$ (1) where $`C_{\text{neu}}`$ is a constant of order unity and $`\beta `$ is the inverse temperature. The constraint on the density of charged vortices is implemented by choosing $`y_{\text{ch}}^2(r_0)`$ in such a way that their total density equals the hole density, see below. The vortex interaction is screened by the polarization of smaller vortex pairs, $`V(r)=q^2/ϵ(r)\mathrm{ln}(r/r_0)`$. The screening is described by the spin-wave stiffness $`K(r)=\beta q^2/2\pi ϵ(r)`$. In the renormalization group, small pairs of sizes between $`r`$ and $`r+dr`$ are integrated out and their effect is approximately incorporated into renormalized quantities $`K(r)`$, $`y_{\text{neu}}^2(r)`$, and $`y_{\text{ch}}^2(r)`$. Starting from $`r=r_0`$, this operation is repeated for larger and larger pairs, leading to the recursion relations $`dy_\eta ^2/dl`$ $`=`$ $`2(2-\pi K)y_\eta ^2,`$ (2) $`dK/dl`$ $`=`$ $`-4\pi ^3(y_{\text{neu}}^2+y_{\text{ch}}^2)K^2.`$ (3) The initial conditions are Eq. (1) and $`K(r_0)=\beta q^2/2\pi `$. Equation (2) determines the fugacities of neutral ($`\eta =\text{neu}`$) and charged ($`\eta =\text{ch}`$) pairs of size $`r=r_0e^l`$. Two separate equations for $`y_{\text{neu}}^2`$ and $`y_{\text{ch}}^2`$ are present, since we assume that vortices are created either as neutral pairs or as charged pairs with two holes. Both types feel the same screened interaction $`V`$ at large distances, so that the same stiffness $`K`$ appears. Differences at smaller separation are incorporated into the core energies. Equation (3) describes the additional screening due to pairs of size $`r_0e^l`$. Their total density is proportional to $`y_{\text{neu}}^2+y_{\text{ch}}^2`$. If the stiffness $`K`$ vanishes for $`l\to \infty `$, the interaction is fully screened for large pairs ($`ϵ\to \infty `$), which thus become unbound, destroying the magnetic order. Since the interaction on large length scales is the same for neutral and charged vortices, this unbinding happens at a single transition for both types. While solving Eqs. (2) and (3) we have to simultaneously satisfy the constraint on the density $`n_{\text{ch}}`$ of charged vortex pairs. As shown in Ref. , this density can be expressed in terms of the fugacity, $$n_{\text{ch}}=\int _{r_0}^{\infty }dr\,2\pi r\frac{y_{\text{ch}}^2(r)}{r^4}=\frac{2\pi }{r_0^2}\int _0^{\infty }dl\,e^{-2l}y_{\text{ch}}^2(l).$$ (4) The pair density has to equal half the density of holes, $`n_{\text{ch}}=x/2a^2`$, where $`a`$ is the lattice constant. In practice, the recursion relations are integrated numerically to find $`y_{\text{ch}}^2(l)`$, from which we calculate $`n_{\text{ch}}`$. The initial value $`y_{\text{ch}}^2(0)`$ is varied until the constraint is satisfied. The resulting phase diagram is shown in Fig. 2. We used $`C_{\text{neu}}=1`$, $`J=1800\text{ K}`$, and $`r_0=2a`$, and varied the core energy $`E_{\text{core}}`$. The phase below the Néel temperature $`T_N`$ shows quasi-long-range antiferromagnetic order, which is made long range by the weak interlayer exchange. 
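For concreteness, the numerical procedure just described - integrate the flow, then bisect on $`y_{\text{ch}}^2(0)`$ - can be sketched as follows; the temperature, doping, core energy, step size, and cutoff below are illustrative choices, not the values behind Fig. 2.

```python
# Sketch of the procedure described above: Euler-integrate the recursion
# relations (2) and (3) and bisect on y_ch^2(0) until the pair density
# of Eq. (4) equals x/(2 a^2).  T, x, E_core, dl and l_max are
# illustrative choices; k_B = 1, so energies and temperatures are in K.
import numpy as np

J, S = 1800.0, 0.5                     # exchange (K) and spin
q2 = 2.0 * np.pi * J * S ** 2          # interaction strength q^2 (K)
E_core = 0.5 * q2                      # neutral core energy (assumed)

def flow(T, ych2_0, l_max=30.0, dl=1.0e-3):
    """Return final stiffness K and charged-pair density (units 1/r_0^2)."""
    K = q2 / (2.0 * np.pi * T)         # K(r_0) = beta q^2 / (2 pi)
    yn2 = np.exp(-2.0 * E_core / T)    # Eq. (1) with C_neu = 1
    yc2, dens, l = ych2_0, 0.0, 0.0
    while l < l_max:
        dens += 2.0 * np.pi * np.exp(-2.0 * l) * yc2 * dl
        grow = 2.0 * (2.0 - np.pi * K) * dl
        yn2 += grow * yn2
        yc2 += grow * yc2
        K -= 4.0 * np.pi ** 3 * (yn2 + yc2) * K ** 2 * dl
        if K <= 0.0 or yn2 + yc2 > 1.0e6:
            return 0.0, dens           # stiffness fully screened: disordered
        l += dl
    return K, dens

def solve(T, x, r0_over_a=2.0):
    """Bisect on y_ch^2(0) so the pair density equals (x/2)(r_0/a)^2 per r_0^2."""
    target = 0.5 * x * r0_over_a ** 2
    lo, hi = 0.0, 1.0
    for _ in range(40):
        mid = 0.5 * (lo + hi)
        if flow(T, mid)[1] < target:
            lo = mid
        else:
            hi = mid
    return flow(T, 0.5 * (lo + hi))

K_end, dens = solve(T=100.0, x=0.01)
print("Neel order survives" if K_end > 0.0 else "disordered")
# As T -> 0 the critical pair density approaches n_ch^c = 0.04273/r_0^2
# (quoted below), i.e. x_c = 2 n_ch^c a^2 = 0.0855 (a/r_0)^2 = 0.021 for r_0 = 2a.
```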
The phase for $`T>T_N`$ or $`x>x_c`$ is characterized by free vortices, which destroy the long-range order but leave short-range order on the length scale of the separation between free vortices intact. Short range correlations have indeed been observed in cuprates up to much larger dopings. For $`T\to 0`$ we find the critical pair density, $$n_{\text{ch}}^c\approx 0.04273\,r_0^{-2}.$$ (5) The numerical factor is universal: Since neutral vortices do not exist for $`T\to 0`$, it cannot depend on $`E_{\text{core}}`$ and $`C_{\text{neu}}`$. The remaining energy scale $`q^2`$ does not enter the result, since it is multiplied by the diverging $`\beta =1/k_BT`$. While $`n_{\text{ch}}^c`$ and thus the critical doping $`x_c=2n_{\text{ch}}^ca^2`$ are independent of energy scales, they do depend on the non-universal minimal separation $`r_0`$ of two vortices, which is of the order of twice the core radius or twice the correlation length. Slave-boson and Hartree-Fock calculations show that the core radius is not significantly larger than a lattice spacing $`a`$. In Fig. 2 we have taken the core radius to be $`a`$, so that $`r_0=2a`$, which results in $`x_c\approx 0.021`$ in very good agreement with experiments on $`\mathrm{La}_{2-x}\mathrm{Sr}_x\mathrm{CuO}_4`$ . For $`\mathrm{Y}_{1-z}\mathrm{Ca}_z\mathrm{Ba}_2\mathrm{Cu}_3\mathrm{O}_6`$ experiments find $`z_c/2\approx 0.03`$ , of the same order of magnitude as our value of $`0.021`$ (the number of holes per copper atom is $`z/2`$ in this double-layer compound). For $`\mathrm{YBa}_2\mathrm{Cu}_3\mathrm{O}_{7-\delta }`$ a critical hole concentration of $`0.021`$ corresponds to $`\delta _c\approx 0.68`$, following Tallon et al. , in good agreement with experiments . In the high-temperature region of the phase diagram the overall temperature scale is set by the maximal possible Néel temperature $`T_N^{\text{max}}=q^2/4=\pi JS^2/2`$. For $`S=1/2`$ this gives $`T_N^{\text{max}}\approx 0.393J`$, compared to the mean-field result $`T_N^{\text{mf}}=0.5J`$ for a Heisenberg antiferromagnet on a cubic lattice with interlayer exchange $`J_{\perp }\ll J`$. The reduction of $`T_N`$ is due to fluctuations, which are strong for quasi-two-dimensional systems. The actual value of $`T_N(x=0)`$ and the shape of the curve $`T_N(x)`$ at small doping are determined by the core energy $`E_{\text{core}}`$. Note that we obtain the correct temperature scale under the reasonable assumption that $`E_{\text{core}}`$ is not very much smaller than the interaction strength $`q^2`$. Experimentally, $`T_N`$ is found to depend only weakly on doping for small $`x`$ , which requires a small core energy. Then many neutral vortices are created at a given temperature, so that charged vortices only become relevant at higher doping. Conversely, for large core energy only a few thermal vortices are present even at $`T_N^{\text{max}}`$. For $`E_{\text{core}}\gtrsim 2q^2`$ the curve $`T_N(x)`$ in Fig. 2 does not change appreciably with $`E_{\text{core}}`$, so that the charged vortices would determine the magnetic properties even at very small doping. Quantitative agreement with $`\mu `$SR and NQR experiments on $`\mathrm{La}_{2-x}\mathrm{Sr}_x\mathrm{CuO}_4`$ by Borsa et al. can be obtained by appropriate choices of the exchange $`J`$, the core energy $`E_{\text{core}}`$, and the core radius $`r_0`$, see Fig. 3. For this plot, $`J=2410\text{ K}`$, $`E_{\text{core}}=0.1303q^2=493\text{ K}`$, and $`r_0=2.052a`$. Typical experimental values are $`J\approx 1400\text{ K}`$ . 
This discrepancy may be due to the simplified description of the anisotropic antiferromagnet by two-component spins or to the neglect of the interlayer exchange $`J_{\perp }`$ and the doping dependence of $`J`$. We now briefly comment on electron-doped cuprates. While we reproduce the correct critical doping for hole-doped compounds, our approach would not yield the much larger critical doping $`x_{c,e}\approx 0.14`$ in electron-doped cuprates . The reason is that the additional electrons mainly fill up the copper $`3d`$ orbitals and destroy the magnetic moments at the copper sites. Thus, the main effect of electron doping is to dilute the antiferromagnet. There is no spin-singlet formation involved and hence no tendency towards vortex formation. Thus, in contrast to the case of hole doping, $`T_N`$ decreases due to spin dilution. The validity of our approach is questionable if the charged vortices become immobile. There is experimental evidence that this happens below a crossover temperature $`T_f`$, which increases with doping and reaches about $`16\text{ K}`$ in $`\mathrm{La}_{2-x}\mathrm{Sr}_x\mathrm{CuO}_4`$ for $`x\approx x_c`$ and falls off again for larger $`x`$ . $`T_f`$ is sketched in Fig. 3. Below this temperature the holes (charged vortices) form a glass and their dynamics strongly slows down. The experimental Néel temperature in this region should depend strongly on the time scale of the experiment. On the other hand, the true phase transition is governed by the behavior in the limit of infinite time, so that the formation of a glass below $`T_f`$ should affect it only weakly. The doping dependence of $`T_f`$ can be qualitatively understood in our picture: In the magnetically disordered phase the logarithmic part of the interaction of charged vortices is screened on the length scale of the correlation length , which is still large close to $`T_N`$. Thus the interaction of charged vortices (holes) changes smoothly at the transition and decreases for larger doping, leading to similar behavior of the freezing temperature $`T_f`$. There are theoretical indications that the holes may form one-dimensional stripes at low temperatures . Modeling stripes by a phenomenological anisotropic Heisenberg model, Castro Neto and Hone calculated the Néel temperature $`T_N(x)`$ within a renormalization-group scheme and found good agreement with experiment. However, in this theory the critical concentration $`x_c`$ is basically a free parameter. Note that stripes formed by charged vortices consist of alternating vortices and antivortices, in order to lower the interaction energy. These stripes are automatically anti-phase domain walls , which are observed experimentally. In conclusion, by starting from the assumption that holes doped into the Mott-Hubbard antiferromagnet dress with vortices and using independently obtained values for the exchange interaction and the antiferromagnetic correlation length in the ordered phase, we obtain a doping-temperature phase diagram for the antiferromagnetic phase in qualitative agreement with experiment. In particular, the predictions for the critical doping at zero temperature and the Néel temperature at zero doping are of the observed order of magnitude. Our approach uses a generalized BKT theory, which does not depend on any particular microscopic model. With an appropriate choice of the core energy of thermally created vortices we can obtain quantitative agreement with experiment. 
The core energy controls the shape of the phase boundary at small doping, but does not affect the region of higher doping, where the Néel temperature approaches zero at a critical hole density that is universal in natural units. Our results show that stripes are not required to understand the data. The success of this theory based on vortex fluctuations emphasizes the importance of two-dimensionality for understanding the cuprates. This should also hold in the more strongly doped superconducting region. It would be desirable to include the spin rearrangement around holes, which is induced by frustration due to singlet formation, into an electronic theory of underdoped cuprates. For the antiferromagnetic phase such a theory should yield results similar to the ones shown here.
# A realist view of the electron: recent advances and unsolved problems ## I Introduction As long as the size of elementary particles remained somewhat insignificant, the theoretical efforts could be limited to a description of point-like entities with qualities like mass, charge, or spin. The theoretical framework gradually evolving in quantum electrodynamics (QED) or quantum field theory consequently knew little more than these point-particles and their interactions. But experimental methods have developed rapidly. On the one hand, the precision of measurements already reaches far into a regime comparable to the size of particles. The Compton wavelength of an electron, for example, is not smaller than current resolutions in surface science. On the other hand, quantum effects play a major role also in experiments where the length scale is in the range of centimeters or even meters. This situation makes the development of consistent models of particles, taking into account the relationship between classical and quantum systems, an even more important issue. Following the traditional method of development, one starts with the definition of e.g. an electron, while the relation between electron properties and electromagnetic fields is determined in a second step. On this basis the treatment of the problem could be commenced with Barut's assessment of the main problem in electron theory : If a spinning particle is not quite a point-particle, nor a solid three dimensional top, what can it be? According to Bunge or Recami there are in general only three possibilities: A particle either is (i) strictly point-like, (ii) actually extended, or (iii) a point-like structure in motion within the actual volume of the particle. And the only possible solution for an electron is of type (iii). The thesis elaborated in this paper uses a different method, which can be described by four assertions: (i) An electron is determined by its intrinsic properties. (ii) The exact numerical value of these properties, and consequently the actual size of the electron, remains undefined. (iii) The statistical ensembles in quantum theory (QT) are based on this indefiniteness. (iv) Quantized properties like mass, charge, or spin are due to a change of intrinsic properties in the presence of external fields. Although the picture is far from complete, it suggests a modification of current concepts in the following sense: even if there are intrinsic properties of single entities called electrons, there may not be single and well defined objects called electrons. Since the ensembles in QT are related to electrons with a defined range of intrinsic properties, the interaction of a member of the ensemble with exactly determined intrinsic properties is exactly determined. We use the term elementary process in such a case, and the statistical results of measurements in QT are thought to contain an arbitrary number of such processes. This definition of elementary processes implicitly contains the assumption of hidden variables, although the present approach is substantially different from Bohm's theory: (i) The notion of a particle is not fundamental. (ii) We start with a description of intrinsic properties and relate them only later to the formulations in QT and classical ED. (iii) In this way we regain a statistical (and non-local) interpretation of QT, where the statistical ensembles result from an unknown phase (similar to Bohm's picture), but also, due to the uncertainty relations, from an unknown energy component. 
Especially the latter quality of the ensembles in QT accounts for a fair share of the paradoxes which have been irritating - or exciting - the physical community for quite some time. And it is exactly this quality of the quantum ensembles which makes QT an incomplete theory. The results are described in view of a non-specialized readership; specialist readers are referred to existing publications, describing all the necessary steps in great detail . ## II The electron: intrinsic properties There can be no doubt that the electron, of all the elementary particles, is by far the best researched, both experimentally and theoretically. Is it then mere exaggeration that a book by MacGregor, appearing in 1992, is entitled The Enigmatic Electron ? Or is there substance to this claim? The main problem with all existing models of the electron (see the introduction) is that none of them can explain all the observed experimental features. An extended particle, for example, is claimed to contradict scattering experiments; a point-particle, on the other hand, leads to the well known infinity problems in QED. Both of these problems have, in the model about to be described, a common solution, which can only be systematically displayed if we focus our attention on the intrinsic properties of an electron, while we shall recover the solutions to the above problems only afterwards. We start with a non-relativistic frame of reference, assuming that the electron has a finite volume $`V`$, which is not specified, and that it moves with constant velocity $`\stackrel{}{u}=u\stackrel{}{e}_u`$. Its intrinsic structure is described by a wave equation for its density of mass $`\rho (\stackrel{}{r},t)`$, and a wave equation for an additional field energy $`\varphi (\stackrel{}{r},t)`$, which was called the intrinsic potential for two reasons: First, the same model can be used for photons (in this case the particles propagate with $`c`$), the intrinsic field energy $`\varphi (\stackrel{}{r},t)`$ then being equal in magnitude to the electromagnetic potential $`\stackrel{}{A}`$. And second, due to electron propagation the energy is shifted from the propagating mass to the correlating intrinsic field energy and vice versa; the field energy therefore behaves like a periodic potential. The impact of this concept on the fundamental statements in QT is quite substantial: (i) The energy of the electron is double the kinetic energy, or equal to $`m_eu^2`$. (The energy is computed by integrating intrinsic properties over the volume $`V`$. The exact value of $`V`$ is not required for the integration.) $$W_{kin}=\frac{1}{2}m_eu^2\qquad W_{pot}=\frac{1}{2}m_eu^2$$ (1) $$W_{tot}=W_{kin}+W_{pot}=m_eu^2=\hbar \omega $$ (2) If the total energy is used to satisfy Planck's relation, the dispersion relations for monochromatic plane waves are recovered; then (ii) regarding its intrinsic properties, an electron can be described as a monochromatic plane wave. But this means that (iii) the Schrödinger equation , which neglects the intrinsic energy components, is no longer an exact equation . And on this basis it can be deduced that (iv) the Heisenberg uncertainty relations actually describe the errors due to the omission of intrinsic energy. The latter point is interesting for three separate reasons. First, it is well known that the uncertainty relations are responsible for the spreading of a wave packet (see e.g. ).
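For reference, the textbook result behind this first point: a free Gaussian packet of initial width $`\mathrm{\Delta }x(0)`$, evolving under the usual quadratic dispersion of the Schrödinger equation, spreads as $$\mathrm{\Delta }x(t)=\mathrm{\Delta }x(0)\sqrt{1+\left(\frac{\hbar t}{2m_e\mathrm{\Delta }x(0)^2}\right)^2},$$ so that the narrower the initial packet - and hence, by $`\mathrm{\Delta }x\mathrm{\Delta }p\ge \hbar /2`$, the broader its momentum distribution - the faster it spreads: the spreading is tied directly to the uncertainty relations.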
If they are not interpreted as physical causes - as they are thought to be in the standard framework, even if the more innocent term principle is used - then the spreading of a wave packet is not a physical effect, but only a consequence of the logical structure of QT. Second, the result seems to settle the long-standing controversy between the axiomatic and the empirical interpretation of Heisenberg's relation. It is not empirical, since it does not depend on any measurement process; but it is also not a physical principle, because it is due to the fundamental assumptions of QT. Third, if the uncertainty relations are not a universal physical principle, then experiments can be described with unlimited precision also at the microlevel: only in this case does the definition of fundamental processes at all make sense. So far the proposition of intrinsic structures and intrinsic properties is a mere speculation, which is not very different from other speculations including a more or less substantial part of the known qualities of electrons into a single picture. In particular it is yet unclear how magnetic properties come into play. This is done in two steps: (i) The intrinsic potential $`\varphi `$ is interpreted as an electromagnetic property, related to intrinsic electromagnetic fields $`\stackrel{}{E}`$ and $`\stackrel{}{B}`$ of transversal orientation, which comply with a wave equation. The direction of intrinsic fields is shown in Fig. 1. And (ii) it is proved that these conditions are generally sufficient to derive Maxwell's equations . Therefore, the intrinsic structures lie at the bottom of two hitherto separate concepts: * They are the origin of wave-like qualities of the electrons described by Schrödinger's equation. * And they are the origin of electromagnetic qualities of electrons (and photons), since they lead to our - to date - best theory of electromagnetic fields. The peculiar features of spin in QT can only be fully understood from interactions of electrons with external fields. This will be done in the following sections. Here we wish to add a few remarks on the measurements of Bell type inequalities, performed with ever increasing precision since the Eighties , and which seem to indicate what is usually called action at a distance. It is a common error, especially among experimenters, to assume that these measurements demonstrate that nature allows action at a distance, that nature is non-local. Without claiming that this is impossible, it can nevertheless be said that it cannot be proven by these - EPR type - measurements, due to the fact that a valid measurement of spin correlations violates the uncertainty relations. Why is this decisive? In order to understand the argument, let us analyze the theoretical basis of these measurements. To interpret a measurement of spin, spin has to be defined, and it can only be defined within QT, but not in classical electrodynamics. For EPR like experiments we also require a conservation principle, since the total spin of two particles must be known. It could be analyzed, furthermore, what theoretical basis is required for the deduction of the Bell inequalities, and it can be argued that some of the assumptions going into these deductions are a lot less important than locality in physics (see e.g. ). This is not needed in the present context, since it can be shown that no valid interpretation of spin correlation measurements within QT is possible. 
To this aim we define the spin of a particle by the magnetic moment $`\stackrel{}{\mu }`$ and a magnetic field $`\stackrel{}{B}`$, which shall be determined from intrinsic properties ($`\stackrel{}{u}`$ is the velocity, $`\rho `$ the density of the electron as previously defined): $$W=\stackrel{}{\mu }\cdot \stackrel{}{B}=\frac{1}{2}\hbar \omega \qquad \stackrel{}{\mu }=g\frac{e}{2m}\stackrel{}{s}$$ (3) $$\stackrel{}{B}=\frac{1}{2\overline{\sigma }}\nabla \times \rho \stackrel{}{u}$$ (4) $`\overline{\sigma }`$ in these relations is a dimensional constant to make mechanical units compatible with electromagnetic variables. Assuming that the kinetic energy $`\frac{1}{2}\hbar \omega `$ is due to the interaction of a constant magnetic moment with the intrinsic magnetic field of the electron, we arrive at $`g_e=2`$ and $`s_e=\hbar /2`$, while the direction of spin is equal to the direction of the intrinsic magnetic field . For photons the same calculation yields $`g_{ph}=1`$ and $`s_{ph}=\hbar `$. These results will be clarified in the calculation of interactions with external fields further down. If we now consider a correlation measurement of photon spin we are confronted with the problem that spin is parallel to the intrinsic magnetic field, which is a periodic variable: spin therefore cannot be constant but will oscillate from $`+s`$ to $`-s`$ with a period of half the particle's wavelength. For a valid correlation measurement the local precision therefore must be higher than $`\lambda /2`$. But it has been demonstrated, in the deduction of the uncertainty relations via the omission of intrinsic potentials , that this is the highest limit of precision possible in QT; thus a valid measurement exceeds the level of precision provided for in QT, thus it is incompatible with the axioms of QT: it can therefore not be consistently interpreted within this same framework. Independently of any other consideration. And since EPR measurements rely on QT for the definition and conservation of spin, they are generally inconclusive. Returning to Barut's dilemma quoted in the introduction, it can be said that in this model the electron is neither a spinning top nor any modified point particle: it is an extended structure, and so far it is not clear which of the intrinsic properties of the electron is actually related to its charge. ## III Interactions with external fields To elucidate the problem, let us consider the interaction of an electron (density amplitude $`\rho _0`$, charge density amplitude $`\sigma _0`$) with a photon (density amplitude $`\rho _{ph}`$) under the presence of an external electrostatic field $`\varphi _{ext}`$. The procedure used for the calculation is fairly standard: we define the Lagrange density of an electron in motion, including an external potential and a presumed photon. $$\mathcal{L}:=T-V=\rho _0\dot{x}_i^2+\rho _{ph}c^2-\sigma _0\varphi _{ext}$$ (5) A variation with fixed endpoints, using the principle of least action, allows one to calculate, by way of a Legendre transformation and in a first order approximation of a Taylor series, the Hamiltonian of the system as : $$H=\frac{\partial \mathcal{L}}{\partial \dot{x}_i}\dot{x}_i-\mathcal{L}=\sigma _0\varphi _{ext}$$ (6) The result seems paradoxical in view of the kinetic energy of the moving electron, which does not enter into the Hamiltonian. Assuming that an electron is accelerated in an external field, its energy density after interaction with this field would only be altered according to the change of location. 
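For clarity, the intermediate steps of this Legendre transformation can be spelled out (a sketch; the last equality anticipates the first-order identification $`\rho _{ph}c^2=\rho _0\dot{x}_i^2`$ of Eq. (8) below): $$\frac{\partial \mathcal{L}}{\partial \dot{x}_i}\dot{x}_i=2\rho _0\dot{x}_i^2,\qquad H=2\rho _0\dot{x}_i^2-\mathcal{L}=\rho _0\dot{x}_i^2-\rho _{ph}c^2+\sigma _0\varphi _{ext}=\sigma _0\varphi _{ext},$$ so that the kinetic term of the electron cancels against the photon term and only the electrostatic contribution survives.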
The contradiction with the energy principle is only superficial, though. Since the electron will have been accelerated, its energy density must be changed. If this change does not affect its Hamiltonian, the only possible conclusion is that the photon energy has equally changed, and that the energy acquired by acceleration has simultaneously been carried away by photon emission. Therefore, the initial system was over-determined, and the simultaneous existence of an external field and interaction photons is no physical solution to the interaction problem. It should be noted that the conclusion is only valid in the first order expansion used, an approach which was necessary due to the unknown relation between photon density and electron velocity. In this case the process of electron acceleration must be interpreted as a process of simultaneous photon emission: the acquired kinetic energy is balanced by photon radiation. A different way to describe the same result would be saying that electrostatic interactions are accomplished by an exchange of photons: the potential of electrostatic fields then is not so much a function of location as a history of interactions. This can be shown by calculating the interaction Hamiltonian $`H_w`$: $$H_0=\rho _0\dot{x}_i^2+\sigma _0\varphi \qquad H=\sigma _0\varphi $$ (7) $$H_w=H-H_0=-\rho _0\dot{x}_i^2=-\rho _{ph}c^2$$ (8) But if electrostatic interactions can be referred to an exchange of photons, and if these interactions apply to accelerated electrons, then an electron in constant motion does not possess an intrinsic energy component due to its electric charge: electrons in constant motion are therefore stable structures. The photon-interaction model of electrostatic fields allows a further extension of the current framework. Since what might be called charge finds its expression in the properties of the emitted and absorbed photons, and since these photons can either lead - by way of their intrinsic properties - to attraction or repulsion of other charges, the sign of the charge is no longer a quantity fixed for all time and under every condition. Although there is, currently, no comprehensive way of describing the origin of a specific charge/anticharge in a specific situation, it seems that the model should also be suitable for questions of this type, which are well beyond the rather phenomenological (Heisenberg) description used in the current standard models of elementary particles. As a second example we calculate the interaction of an electron with an external magnetic field. The field in this case changes the intrinsic energy components of the electron. The local and deterministic calculation of these interactions is based on the field equations of the intrinsic fields. The units of these fields are due to the derivation of the Maxwell equations from intrinsic properties; an analysis of electromagnetic units has been given elsewhere: $$\frac{1}{u^2}\frac{\partial \stackrel{}{E}}{\partial t}=\nabla \times \stackrel{}{B},\qquad \frac{\partial \stackrel{}{B}}{\partial t}=-\nabla \times \stackrel{}{E}$$ (9) $$\varphi _{em}=\frac{1}{2}\left(\frac{1}{u^2}\stackrel{}{E}^2+\stackrel{}{B}^2\right)$$ (10) We use a dynamic model by assuming that the external magnetic field is switched on in a finite interval $`[0,\tau ]`$.
Then the intrinsic energy component in the magnetic field has changed and will be: $$\varphi (\stackrel{}{B}_{ext})=\varphi _{em}+|\stackrel{}{B}_{ext}|^2$$ (11) The crucial feature of the magnetic interaction is that the acquired energy is independent of the angle $`\vartheta `$ between the intrinsic magnetic field $`\stackrel{}{B}`$ and the external magnetic field $`\stackrel{}{B}_{ext}`$ (see Fig. 2). It can therefore not be formalized as the scalar product of an intrinsic (and constant) magnetic moment $`\stackrel{}{\mu }`$ and an external field $`\stackrel{}{B}_{ext}`$: $$W\ne \stackrel{}{\mu }\cdot \stackrel{}{B}_{ext}\qquad \forall \;\stackrel{}{\mu },\stackrel{}{B}_{ext}\in R^3$$ (12) or only if the magnetic moment is a non-local variable: the non-local definition of particle spin in quantum theory, or the impossibility of describing spin as a vector in $`R^3`$, can only be understood on the basis of interactions. More specifically, it is the - failed - attempt to describe the changes due to magnetic interactions with a formulation inherited from classical electrodynamics. Therefore, spin in quantum theory cannot be a vector, because interactions do not depend on the direction of field polarization. This result, which only applies to free particles, also illustrates the importance of dynamic models of interactions. As the intrinsic component of particle energy is increased due to external fields, the total energy of the particle can either be increased, which should be the case for charged particles like electrons, or it can remain constant, which we, tentatively, assume for neutral particles like neutrons. In any case the kinetic components of particle energy change, and this change corresponds to a changed wavelength of its wavefunction $`\psi `$. If the change of the wavelength is calculated and the phase difference to a beam not subjected to this magnetic field is estimated, we arrive at the following result for the phase difference $`\alpha `$: $$\alpha =2\pi \left(\frac{l}{\lambda }\frac{|\stackrel{}{B}_{ext}|}{\sqrt{\overline{\rho }u^2}}-n\right)\qquad n\in N$$ (13) where $`l`$ is the path length in the field of the magnet and $`\lambda `$ the original wavelength of the beam. The result indicates that the phase difference is linear in the intensity of the magnetic field: a result confirmed by the neutron interference measurements of Rauch and Zeilinger. A short remark is in order concerning the relation of the present concept to the concept of quantization in QT, which seems to be used in various, and sometimes incompatible, meanings. Provided the structure of matter consists of atoms, every detector and consequently every measurement only yields discrete results: there must, necessarily, exist a threshold which is required to trigger a reaction. On the level of measurements quantization is consequently a necessity (this also applies, for example, to Millikan’s experiments to determine the “elementary charge”). But while this is trivial in the atomic domain, it does not mean that we have to encounter discrete quantities in a point-like volume when electrons are separated from atoms: given the infinity problems connected with such an idea, it seems amazing that it prevailed for so long, even in de Broglie’s “double solution”. The only real argument which suggests such an approach originates from scattering experiments: if electrons were extended structures (three-dimensional aggregations of mass) like atomic nuclei, then the scattering cross section would be affected.
This has never been observed, and it was therefore concluded that electrons cannot be extended structures (see e.g. Bender et al.). Comparing with the particle models introduced previously (see the introduction), these experiments only exclude electrons of type (ii): three-dimensional tops. They do not exclude any other type, especially not the extended electrons introduced in this paper, where interactions apply to every single point of the internal structure. It shall not be hidden, though, that a mathematical model for scattering processes based on photon interactions has yet to be developed: not an easy task, it seems, since the reduction to a one-body problem in a potential, as in standard solutions, is not generally applicable. ## IV Ensembles in quantum theory It was already noted by David Bohm that QT does not distinguish between elementary processes (or physical interactions) and statistical results: Yet it is not immediately clear how the ensembles, to which … probabilities [in QT] refer, are formed and what their individual elements are. For the very terminology of quantum mechanics contains an unusual and significant feature, in that what is called the physical state of an individual quantum mechanical system is assumed to manifest itself only in an ensemble of systems. The Copenhagen interpretation seeks to make up for this conceptual deficiency by asserting that nature itself is the origin of this feature. However, we shall try in this section to determine the exact borderline between the elementary processes and the statistical picture in QT. As will be seen presently, the (unusual) statistics of quantum systems have two separate origins: (i) the unknown intrinsic energy components, and (ii) the normalization of the wave function. The first accounts for the change of the ensemble structure in measurements, since it affects the range of allowed intrinsic energies; the latter introduces non-locality into the framework of QT, because normalization requires an integral of the wavefunction over the whole system considered: after normalization the amplitude of the wavefunction in one region of the system depends on the potentials and amplitudes of the wavefunction in all the other regions of the system. QT therefore cannot be a local theory; this does not mean, as demonstrated above, that nature itself must be non-local. Starting with the ensemble structure in QT pertaining to the omission of intrinsic energy components, let us first consider the situation of a free particle. In this case the external potential $`V(\stackrel{}{r})`$ is zero, and the maximum of $`k`$, in a plane wave basis of possible solutions of the Schrödinger equation for fixed total energy $`E_T`$, is described by $`k^2=mE_T/\hbar ^2`$. Since the phase of the wave-like intrinsic components is unknown, the total energy can be distributed in an unknown manner between the kinetic components, described in QT, and the electromagnetic components, not considered in QT. At a specific point $`\stackrel{}{r}`$ of our system this means that we are dealing with a Fourier integral over an allowed range of states, which we called the quantum ensemble of free electrons: $$\psi (\stackrel{}{r})=\frac{1}{(2\pi )^{3/2}}\int _0^{k_0}d^3k\,\psi _0(\stackrel{}{k})\mathrm{exp}\,i\stackrel{}{k}\stackrel{}{r}$$ (14) $$k_0=\sqrt{\frac{m}{\hbar ^2}E_T},\qquad E_T=mu^2$$ (15) where $`\psi _0(\stackrel{}{k})`$ is the $`k`$-dependent amplitude.
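As an illustration of Eq. (14), the radial profile of this quantum ensemble can be evaluated numerically. The following sketch (Python) assumes a constant amplitude $`\psi _0(k)=1`$, an assumption not made in the text, and uses the fact that the angular part of the $`d^3k`$ integral gives $`4\pi \mathrm{sin}(kr)/(kr)`$; the cutoff $`k_0`$ and the sample points are arbitrary choices.

```python
import numpy as np
from scipy.integrate import quad

def psi(r, k0):
    """Radial profile of the free-electron quantum ensemble, Eq. (14),
    for a constant amplitude psi_0(k) = 1 (illustrative assumption).
    np.sinc(x) = sin(pi x)/(pi x), so sinc(k r / pi) = sin(kr)/(kr)."""
    integrand = lambda k: k**2 * np.sinc(k * r / np.pi)
    val, _ = quad(integrand, 0.0, k0)
    return 4.0 * np.pi * val / (2.0 * np.pi) ** 1.5

k0 = 1.0
for r in (0.1, 1.0, 5.0, 10.0):
    print(f"r = {r:5.1f}   psi(r) = {psi(r, k0):+.5f}")
```

The sharp cutoff at $`k_0`$ produces the oscillatory, spatially extended profile that the text associates with a defined range of energies at every point.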
This quantum ensemble, which is defined according to the omission of intrinsic energy and thus according to the uncertainty relations (see above), describes a range of allowed kinetic energies, and it applies to every single point of a given system. In this sense the wavefunction $`\psi (\stackrel{}{r})`$, given by Eq. (14), is a statistical measure. The unusual feature Bohm refers to is thus, on closer scrutiny, removed: although the ensemble is an integral part of QT, this does not mean that we cannot go beyond the purely statistical picture of e.g. the Copenhagen interpretation to an analysis of the underlying fundamental processes. This can be done in two steps: (i) The physical environment determines, by way of the potential $`V(\stackrel{}{r})`$ and the local boundary conditions, the structure of the ensemble, i.e. the range of intrinsic properties in a given environment. A potential $`V(\stackrel{}{r})`$, for example, changes the structure of the quantum ensemble, since it affects the range of allowed $`k`$-values. For a negative potential the range is enhanced, for a positive one diminished (see Fig. 3). (ii) Once the range of intrinsic properties is determined, the problem for a single member of the ensemble can be treated. At this level we are dealing with a classical physical problem, where the interactions and boundary conditions can be included by field theory, e.g. the wave picture of classical electrodynamics. This analysis of the ensemble structure in QT also explains, in a quite natural way, how the statistical picture of QT is related to electrodynamics: the ensembles in classical electrodynamics are in fact quantum ensembles where the allowed energy range is vanishing; the energy of the ensemble members is thus exactly determined, although neither their phase nor their exact location is. This is one half of the notorious wave/particle problem which has haunted physics since the establishment of QT: particles, in QT, are an ensemble of wave-like structures of finite volume $`V`$ and a defined range of energies. It also provides a reason for the validity of von Neumann’s proof that QT cannot contain a theory of hidden variables, although in quite a different sense than expected by von Neumann: not because quantum theory is complete, but because quantum theory contains a - in von Neumann’s words - normal ensemble, an ensemble which cannot be described as a sum of members of exactly determined properties (e.g. exact location and exact energy), since the range of allowed energy values pertains to every single point of our system. If we consider a measurement of energy on the quantum ensemble of free electrons, e.g. by a positive potential as in low energy electron diffraction (LEED) experiments, it is immediately clear that the ensemble after the potential, assumed rectangular for simplicity, is diminished compared to the ensemble before it. The wavefunction $`\psi (\stackrel{}{r})`$ has collapsed in $`k`$-space (see Fig. 3). This process, which cannot be consistently described in the conventional formulation of QT, has led to a host of proposed modifications, among the more daring the many-worlds interpretation of Everett, where every result of a measurement occurs in a different universe: since its publication a continuous source of inspiration for quite a few science fiction authors.
The main point here is that if the wavefunction is interpreted as the wavefunction of one single particle, it must remain a mystery how - to put it a little sloppily - most of the particle can vanish in the measurement, although the potential, seemingly, is not affected. The effect is only understandable if the ensembles underneath a specific $`\psi `$ are considered. A similar consideration applies to the notorious interaction-free measurements, where the wavefunction of a system changes even if no interaction occurs. A paradoxical consequence of this type of measurement would be that the energy of a system could change, even if that system does not experience any interaction. Within the present theory this behavior is completely understandable, although it points to a statistical rather than a physical effect: since no interaction with a particle means, in the region where the particle is appreciable, that the wavefunction must necessarily vanish, it excludes the existence of single members of the ensemble in that region. Compared to the case where no measurement has been performed, the knowledge about the ensemble has been changed. And since energy in QT is computed via the wavefunction, thus the ensemble, the energy in the latter case can be different - without any spooky physical events, and also without assuming that the apparent lack of interaction … is only illusionary. As a last example we mention the quantum eraser measurements, where the existence of interference patterns between two orthogonally polarized photons in a double-slit system depends on the insertion of a polarizer with a diagonal plane of polarization. In the conventional framework this behavior is attributed to the path information, which, after the diagonal polarizer, is said to have been “erased”. In the new framework an identical result can be computed by estimating the effect of polarizers on the orientation of the intrinsic field components. Currently the main focus in developments is on interference measurements, since it has been found that a local and causal description of this type of measurement can be given neither in QT nor in classical electrodynamics. The main problem in electrodynamics is the result that, if the extension of wave structures is limited, the scattering amplitude, in a Kirchhoff approximation, contains the final result of measurements already at the moment when the structure passes the slit environment. It seems, therefore, that the mathematical formulation by way of Green’s functions and the scalar theory is more or less algorithmic, while the actual physical processes - the interactions with the atoms of the slit environment - are not described. This suggests a new theory of interference including these interactions. ## V A sideview of Special Relativity (STR) It will have been noted that the total energy of an electron, equal to $`m_eu^2`$, bears a slight resemblance to Einstein’s energy expression, a resemblance which becomes especially obvious if photons, which possess a total energy of $`m_{ph}c^2`$, are considered in this model. It can be shown that these expressions are more than mere coincidences; they lead, in fact, to one of the most interesting consequences of the theory, touching a problem known for more than fifty years, one which incited the late Dirac to qualify QED, in its present form, as a very wrong theory.
Since the model starts from a non-relativistic frame of reference, a Lorentz transformation of the fundamental equations into a moving reference frame changes the physical state of the system, because in this case the intrinsic potentials increase with the electron velocity. In view of consistency this result seemed, initially, questionable, since it is incompatible with the relativity principle. As further research revealed, this behavior is closely related to the process of interaction in electrostatic fields. If electrostatic interactions are accomplished by photons, the absorption of a photon by an electron depends on the proper time in the electron system, and the acceleration is then a function of the electron’s velocity. This effect has been known for some time. Adler remarks on that subject that the time kept by the rapidly moving particle is dilated and hence, as the particle’s speed increases, apparently greater intervals are taken to produce the same effect, hence the apparent increase in resistance. But if this is the case, then the energy of the electron, in the limit of $`n\to \infty `$ absorptions, $$E_n=\hbar \omega _0\left[1+\sum _{i=0}^{n-1}\sqrt{1-\left(\frac{u_i}{c}\right)^2}\right]=mu_n^2$$ (16) will not be infinite, but converges to a finite limit, where the total energy of the electron is, incidentally, equal to Einstein’s rest energy term $`m_ec^2`$. By comparing the classical interactions due to electrostatic fields with the interactions pertaining to photon absorption with dilated proper time, it can be established that the electron mass seems enhanced, and that this enhancement is equal to the mass effect in STR. To prove the equivalence we have calculated the (virtual) mass enhancement due to time dilation, described by a variable $`\alpha (u)`$, $$m(u)=\alpha m_e,\qquad \alpha =\sqrt{\frac{\hbar \omega _0}{m}}\frac{\sqrt{n}-\sqrt{n-1}}{u_n-u_{n-1}}\frac{u_n^c}{u_n^r}$$ (17) where $`u_n^c`$ and $`u_n^r`$ denote the classical and the interaction-model velocities of the electron in an electrostatic field, and compared $`\alpha `$ to Einstein’s $`\gamma `$. The results of these (numerical) calculations are displayed in Fig. 4. It should be noted that for reasons of comparison we have taken only the kinetic energy of the electron and added the rest energy. As can be seen, the mass effect in STR coincides nicely with the virtual mass enhancement due to time dilation. From a physical point of view the result means that the relativistic mass formulas - in STR artifacts of the kinematical transformation of space and time - are an expression of changed interaction characteristics, described by the time dilation in moving frames. That the result is consistent with measurements has been shown elsewhere; in addition, it sheds a new light on the so-called renormalization procedures in QED, which were the reason for Dirac’s uneasiness. As Weisskopf showed in his treatment of the free electron, the infinite contributions to the self energy of the electron have two origins: (i) the electrostatic energy, diverging with the radius $`a`$ of the electron, and (ii) the energy due to vacuum fluctuations of the electromagnetic fields. For these two energies $`W_{st}`$ and $`W_{fluct}`$ he found: $$W_{st}=\underset{a\to 0}{lim}\frac{e^2}{a},\qquad W_{fluct}=\underset{a\to 0}{lim}\frac{e^2h}{\pi mca^2}$$ (18) The electrostatic contribution vanishes if an electron in constant motion is considered, since in this case no emission or absorption of photons will occur.
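The convergence claimed for Eq. (16) is easy to check numerically. The sketch below iterates the absorption law in units where $`m=c=1`$; the energy quantum $`\hbar \omega _0`$ and the stopping tolerance are arbitrary illustrative choices, not values taken from the text.

```python
# Numerical check of Eq. (16): repeated photon absorptions with dilated
# proper time, in units where m = c = 1 (hbar*omega_0 chosen arbitrarily).
m, c = 1.0, 1.0
hw0 = 1e-4                  # energy quantum per absorption (assumed value)
E, n = hw0, 0               # E_0 = hbar*omega_0 = m u_0^2
while True:
    u = (E / m) ** 0.5      # u_n from E_n = m u_n^2
    # max() guards against a tiny floating-point overshoot past E = m c^2
    dE = hw0 * max(0.0, 1.0 - (u / c) ** 2) ** 0.5
    if dE < 1e-12:
        break
    E += dE
    n += 1
print(f"after {n} absorptions: E = {E:.8f}  (compare m c^2 = {m * c * c})")
```

Each increment shrinks by the dilation factor, so the sum converges; the printed total energy approaches $`mc^2`$, as stated in the text.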
The second contribution, the vacuum fluctuations, sums up the energies due to the interaction of the electron with its own created and annihilated photons in a statistical picture which considers all possible events. In the first calculation to master the infinity problems of quantum electrodynamics, Bethe derived the following expression for the Lamb shift of the hydrogen electron in an s-state: $$W_{ns}^{\prime }=C\,\mathrm{ln}\frac{K}{\langle E_n-E_m\rangle _{AV}}$$ (19) where $`C`$ is a constant, $`\langle E_n-E_m\rangle _{AV}`$ the average energy difference between states $`m`$ and $`n`$, and $`K`$ is determined by the cutoff of the electromagnetic field energy. The prime refers to mass renormalization, since the - infinite - contribution to the electron energy due to the electrostatic mass has already been subtracted. The second infinity, the infinity of vacuum fluctuations, is discarded by defining the cutoff $`K`$, which in Bethe’s calculation is equal to $`mc^2`$. But while the energy of the field could have any value if the actual energy of the electron had a singularity at $`u=c`$ (and $`K`$ could therefore be infinite), this is not the case if the energy remains finite in this limit: in this case the total energy difference between a relativistic electron and an electron at rest is $`mc^2`$ according to our calculations. This is, incidentally, equal to the rest energy of the electron. It seems, therefore, that the renormalization procedures may have their ultimate justification in the finite electron energy. It should be noted that it is not yet sufficiently clear, from the viewpoint of this new theory, how the more subtle theoretical developments of QED shall be put into the new framework. In addition, it has been seen, by reanalyzing experiments and their description in the standard theory, that progress is not to be expected from an equation for everything, but rather from careful revision of the experimental evidence and subtle speculation within the new framework: a tedious task, it seems, but one which is the price paid for the insight gained into fundamental processes. ## VI Conclusions We have shown in this paper that a new theoretical framework, based on the intrinsic properties of the electron, is suitable to remove the notorious infinity problems in QED and to describe a realistic electron in accordance with experimental data. Electrostatic and magnetic interactions have been treated in this framework, and the origin of the non-local properties of particle spin has been determined. We have also described the borderline between the usual statistical interpretation of the wavefunction and the physically relevant intrinsic wave properties. In this context a novel structure of the ensembles in quantum theory was proposed, which is due to the omission of intrinsic energy components. Finally, we have described how the theory treats photon absorption processes in a relativistic context, which led to the conclusion that the mass enhancement in the electron system is only virtual and an effect of time dilation. It was shown how this result lies underneath the hitherto unexplained renormalization procedures in relativistic quantum field theory. ## VII Acknowledgements I’d like to thank Prof. Dvoeglazov for his kind invitation to contribute to this volume.
no-problem/9910/astro-ph9910462.html
ar5iv
text
# An exact Riemann Solver for multidimensional special relativistic hydrodynamics ## 1 Introduction The decay of a discontinuity separating two constant initial states (Riemann problem) has played a very important role in the development of numerical hydrodynamic codes in classical (Newtonian) hydrodynamics after the pioneering work of Godunov. Nowadays, most modern high-resolution shock-capturing methods are based on the exact or approximate solution of Riemann problems between adjacent numerical cells, and the development of efficient Riemann solvers has become a research field in numerical analysis in its own right. Riemann solvers began to be introduced in numerical relativistic hydrodynamics at the beginning of the nineties and, presently, the use of high-resolution shock-capturing methods based on Riemann solvers is considered the best strategy to solve the equations of relativistic hydrodynamics in nuclear physics (heavy ion collisions) and astrophysics (stellar core collapse, supernova explosions, extragalactic jets, gamma-ray bursts). This fact has caused a rapid development of Riemann solvers for both special and general relativistic hydrodynamics. The main idea behind the solution of a Riemann problem (defined by two constant initial states, $`L`$ and $`R`$, at left and right of their common contact surface) is that the self-similarity of the flow through rarefaction waves and the Rankine-Hugoniot relations across shocks allow one to connect the intermediate states $`I_{*}`$ ($`I=L,R`$) with their corresponding initial states, $`I`$. The analytical solution of the Riemann problem in classical hydrodynamics rests on the fact that the normal velocity in the intermediate states, $`v_{I_{*}}^n`$, can be written as a function of the pressure $`p_{I_{*}}`$ in that state (and the flow conditions in state $`I`$). Thus, once $`p_{I_{*}}`$ is known, $`v_{I_{*}}^n`$ and all other unknown state quantities of $`I_{*}`$ can be calculated. In the case of relativistic hydrodynamics the same procedure can be followed, the major difference with classical hydrodynamics stemming from the role of tangential velocities. While in the classical case the decay of the initial discontinuity does not depend on the tangential velocity (which is constant across shock waves and rarefactions), in relativistic calculations the components of the flow velocity are coupled through the presence of the Lorentz factor. ## 2 The equations of relativistic hydrodynamics The equations of relativistic hydrodynamics admit a conservative formulation which has been exploited in the last decade to implement high-resolution shock-capturing methods. In Minkowski space-time the equations in this formulation read $$\partial _t𝐔+\partial _i𝐅^{(i)}=0$$ (1) where $`𝐔`$ and $`𝐅^{(i)}(𝐔)`$ ($`i=1,2,3`$) are, respectively, the vectors of conserved variables and fluxes $$𝐔=(D,S^1,S^2,S^3,\tau )^T$$ (2) $$𝐅^{(i)}=(Dv^i,S^1v^i+p\delta ^{1i},S^2v^i+p\delta ^{2i},S^3v^i+p\delta ^{3i},S^i-Dv^i)^T.$$ (3) The conserved variables (the rest-mass density, $`D`$, the momentum density, $`S^i`$, and the energy density $`\tau `$) are defined in terms of the primitive variables $`(\rho ,v^i,\epsilon )`$ according to $$D=\rho W,\qquad S^i=\rho hW^2v^i,\qquad \tau =\rho hW^2-p-D$$ (4) where $`W=(1-v^2)^{-1/2}`$ is the Lorentz factor and $`h=1+\epsilon +p/\rho `$ the specific enthalpy.
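The mapping of Eq. (4) and its inverse can be sketched as follows (Python/NumPy/SciPy), assuming the ideal-gas equation of state $`p=(\gamma -1)\rho \epsilon `$ introduced in the next section. The pressure recovery by one-dimensional root bracketing is a common strategy rather than a prescription from this paper, and the bracketing interval is an ad-hoc choice.

```python
import numpy as np
from scipy.optimize import brentq

GAMMA = 5.0 / 3.0  # adiabatic exponent (the value used in the test of Sec. 6)

def prim2cons(rho, vx, vt, eps):
    """Conserved variables (D, S^x, S^t, tau) from primitives, Eq. (4)."""
    p = (GAMMA - 1.0) * rho * eps
    h = 1.0 + eps + p / rho
    W = 1.0 / np.sqrt(1.0 - vx**2 - vt**2)
    D = rho * W
    return D, rho * h * W**2 * vx, rho * h * W**2 * vt, rho * h * W**2 - p - D

def cons2prim(D, Sx, St, tau):
    """Recover p and |v| from (D, S, tau) via a root solve, using the
    identity S = (tau + p + D) v.  Bracket [1e-12, 1e6] is arbitrary."""
    S = np.hypot(Sx, St)
    def f(p):
        v = S / (tau + D + p)
        W = 1.0 / np.sqrt(1.0 - v**2)
        rho = D / W
        eps = (tau + D + p) / (D * W) - 1.0 - p / rho   # eps = h - 1 - p/rho
        return (GAMMA - 1.0) * rho * eps - p            # EOS consistency
    p = brentq(f, 1e-12, 1e6)
    return p, S / (tau + D + p)

U = prim2cons(rho=1.0, vx=0.5, vt=0.3, eps=0.1)
print(cons2prim(*U))   # recovers p = (GAMMA-1)*rho*eps = 0.0667, |v| = 0.583
```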
In the following we shall restrict our discussion to an ideal gas equation of state with constant adiabatic exponent, $`\gamma `$, for which the specific internal energy is given by $$\epsilon =\frac{p}{(\gamma -1)\rho }.$$ (5) ## 3 Relation between the normal flow velocity and pressure behind relativistic rarefaction waves Choosing the surface of discontinuity to be normal to the $`x`$-axis, rarefaction waves are self-similar solutions of the flow equations depending only on the combination $`\xi =x/t`$. Getting rid of all the terms with $`y`$ and $`z`$ derivatives in equations (1) and substituting the derivatives with respect to $`x`$ and $`t`$ in terms of derivatives with respect to $`\xi `$, the system of equations can be reduced to just one ordinary differential equation (ODE) and two algebraic conditions $$\rho hW^2(v^x-\xi )dv^x+(1-\xi v^x)dp=0$$ (6) $$hWv^y=\text{constant}$$ (7) $$hWv^z=\text{constant},$$ (8) with $`\xi `$ constrained by $$\xi =\frac{v^x(1-c_s^2)\pm c_s\sqrt{(1-v^2)[1-v^2c_s^2-(v^x)^2(1-c_s^2)]}}{1-v^2c_s^2},$$ (9) because non-trivial similarity solutions exist only if the Wronskian of the original system vanishes. We have denoted by $`c_s`$ the speed of sound, provided by the equation of state. The plus and minus signs correspond to rarefaction waves propagating to the right ($`\mathcal{R}_{\rightarrow }`$) and left ($`\mathcal{R}_{\leftarrow }`$), respectively. The two solutions for $`\xi `$ correspond to the maximum and minimum eigenvalues of the Jacobian matrix associated with $`𝐅^{(x)}(𝐔)`$, generalizing the result found for vanishing tangential velocity. From equations (7) and (8) it follows that $`v^y/v^z=`$ constant, i.e., the tangential velocity does not change direction across rarefaction waves. Notice that, in a kinematical sense, the Newtonian limit ($`v^i\ll 1`$) leads to $`W=1`$, but equations (7) and (8) do not reduce to the classical limit $`v^{y,z}=`$ constant, because the specific enthalpy still couples the tangential velocities. Thus, even for slow flows, the Riemann solution presented in this paper must be employed for thermodynamically relativistic situations ($`h\gg 1`$). Using (9) and the definition of the sound speed, the ODE (6) can be written as $$\frac{dv^x}{dp}=\pm \frac{c_s}{W^2\gamma p}\frac{1}{\sqrt{1+g(\xi _\pm ,v^x,v^t)}}$$ (10) where $`v^t=\sqrt{(v^y)^2+(v^z)^2}`$ is the absolute value of the tangential velocity and $$g(\xi _\pm ,v^x,v^t)=\frac{(v^t)^2(\xi _\pm ^2-1)}{(1-\xi _\pm v^x)^2}.$$ (11) Considering that in a Riemann problem the state ahead of the rarefaction wave is known, equation (10) can be integrated with the constraint $`hWv^t=`$ constant, allowing one to connect the states ahead ($`a`$) and behind ($`b`$) the rarefaction wave. The solution is only a function of $`p_b`$ and can be stated in compact form as $$v_b^x=\mathcal{R}^a(p_b).$$ (12) ## 4 Relation between post-shock flow velocities and pressure for relativistic shock waves. The Rankine-Hugoniot conditions relate the states on both sides of a shock and are based on the continuity of the mass flux and the energy-momentum flux across shocks. Their relativistic version was first obtained by Taub.
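A direct way to realize $`\mathcal{R}^a(p_b)`$ numerically is to integrate the ODE (10) subject to the tangential constraint of Eqs. (7)-(8). The sketch below assumes the isentropic relation $`\rho =\rho _a(p/p_a)^{1/\gamma }`$ along the wave and solves $`hWv^t=A`$ for $`v^t`$ in closed form; the function name `rarefaction_vx` is a placeholder, and the sign convention (minus for a left-propagating wave) follows the text.

```python
import numpy as np
from scipy.integrate import solve_ivp

GAMMA = 5.0 / 3.0

def rarefaction_vx(pa, rhoa, vxa, vta, pb, sign=-1):
    """Sketch of R^a(p_b), Eq. (12): integrate Eq. (10) from p_a to p_b.
    sign = -1 (+1) selects a left- (right-)propagating wave."""
    ha = 1.0 + GAMMA * pa / ((GAMMA - 1.0) * rhoa)   # h = 1 + eps + p/rho
    Wa = 1.0 / np.sqrt(1.0 - vxa**2 - vta**2)
    A = ha * Wa * vta                                # invariant of Eqs. (7)-(8)

    def rhs(p, y):
        vx = y[0]
        rho = rhoa * (p / pa) ** (1.0 / GAMMA)       # isentropic flow
        h = 1.0 + GAMMA * p / ((GAMMA - 1.0) * rho)
        cs = np.sqrt(GAMMA * p / (rho * h))
        vt2 = A**2 * (1.0 - vx**2) / (h**2 + A**2)   # from h W v^t = A
        v2 = vx**2 + vt2
        disc = (1.0 - v2) * (1.0 - v2 * cs**2 - vx**2 * (1.0 - cs**2))
        xi = (vx * (1.0 - cs**2) + sign * cs * np.sqrt(disc)) / (1.0 - v2 * cs**2)
        g = vt2 * (xi**2 - 1.0) / (1.0 - xi * vx) ** 2        # Eq. (11)
        W2 = 1.0 / (1.0 - v2)
        return [sign * cs / (W2 * GAMMA * p * np.sqrt(1.0 + g))]  # Eq. (10)

    sol = solve_ivp(rhs, (pa, pb), [vxa], rtol=1e-10, atol=1e-12)
    return sol.y[0, -1]
```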
Considering the surface of discontinuity as normal to the $`x`$-axis, the invariant mass flux across the shock can be written as $$j\equiv W_sD_a(V_s-v_a^x)=W_sD_b(V_s-v_b^x)$$ (13) where $`V_s`$ is the coordinate velocity of the hyper-surface that defines the position of the shock wave and $`W_s`$ the corresponding Lorentz factor, $`W_s=(1-V_s^2)^{-1/2}`$. According to our definition, $`j`$ is positive for shocks propagating to the right. In terms of the mass flux, $`j`$, the Rankine-Hugoniot conditions are $$[v^x]=-\frac{j}{W_s}\left[\frac{1}{D}\right],$$ (14) $$[p]=\frac{j}{W_s}\left[\frac{S^x}{D}\right],$$ (15) $$\left[hWv^y\right]=0,$$ (16) $$\left[hWv^z\right]=0,$$ (17) $$[pv^x]=\frac{j}{W_s}\left[\frac{\tau }{D}\right].$$ (18) Equations (16) and (17) imply that the quantity $`hWv^{y,z}`$ is constant across a shock wave and, hence, that the orientation of the tangential velocity does not change. The latter result also holds for rarefaction waves (see §3). Equations (14), (15) and (18) can be manipulated to obtain $`v_b^x`$ as a function of $`p_b`$, $`j`$ and $`V_s`$. Using the relation $`S^x=(\tau +p+D)v^x`$ and after some algebra, one finds $$v_b^x=\left(h_aW_av_a^x+\frac{W_s(p_b-p_a)}{j}\right)\left(h_aW_a+(p_b-p_a)\left(\frac{W_sv_a^x}{j}+\frac{1}{\rho _aW_a}\right)\right)^{-1}.$$ (19) The final step is to express $`j`$ and $`V_s`$ as functions of the post-shock pressure. First, from the definition of the mass flux we obtain $$V_s^\pm =\frac{\rho _a^2W_a^2v_a^x\pm |j|\sqrt{j^2+\rho _a^2W_a^2(1-(v_a^x)^2)}}{\rho _a^2W_a^2+j^2}$$ (20) where $`V_s^+`$ ($`V_s^{-}`$) corresponds to shocks propagating to the right (left). Second, from the Rankine-Hugoniot relations and the physical solution for $`h_b`$ obtained from the Taub adiabat (the relativistic version of the Hugoniot adiabat), which relates only thermodynamic quantities on both sides of the shock, the square of the mass flux, $`j^2`$, can be obtained as a function of $`p_b`$. Using the positive (negative) root of $`j^2`$ for shock waves propagating towards the right (left), the desired relation between the post-shock normal velocity $`v_b^x`$ and the post-shock pressure $`p_b`$ is obtained. In compact form the relation reads $$v_b^x=𝒮^a(p_b).$$ (21) We refer the interested reader to the references for further details. ## 5 The solution of the Riemann problem with arbitrary tangential velocities. The time evolution of a Riemann problem can be represented as: $$I:\;L\;𝒲_{\leftarrow }\;L_{*}\;𝒞\;R_{*}\;𝒲_{\rightarrow }\;R$$ (22) where $`𝒲`$ denotes a simple wave (shock or rarefaction), moving towards the initial left ($`\leftarrow `$) or right ($`\rightarrow `$) states. Between them, two new states appear, namely $`L_{*}`$ and $`R_{*}`$, separated from each other through the third wave $`𝒞`$, which is a contact discontinuity. Across the contact discontinuity pressure and normal velocity are constant, while the density and the tangential velocity exhibit a jump. As in the Newtonian case, the compressive character of shock waves (density and pressure rise across the shock) allows us to discriminate between shocks ($`𝒮`$) and rarefaction waves ($`\mathcal{R}`$): $$𝒲_{\leftarrow (\rightarrow )}=\{\begin{array}{ccc}\hfill \mathcal{R}_{\leftarrow (\rightarrow )}& ,& p_b\le p_a\hfill \\ \hfill 𝒮_{\leftarrow (\rightarrow )}& ,& p_b>p_a\hfill \end{array}$$ (23) where $`p`$ is the pressure and subscripts $`a`$ and $`b`$ denote quantities ahead of and behind the wave. For the Riemann problem $`a\equiv L(R)`$ and $`b\equiv L_{*}(R_{*})`$ for $`𝒲_{\leftarrow }`$ ($`𝒲_{\rightarrow }`$), respectively.
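The shock branch $`𝒮^a(p_b)`$ can be sketched in the same spirit. The quadratic form of the Taub adiabat for an ideal-gas EOS and the relation $`j^2=-(p_b-p_a)/(h_b/\rho _b-h_a/\rho _a)`$ are standard results assumed here rather than quoted from this paper, and `shock_vx` is a placeholder name.

```python
import numpy as np

GAMMA = 5.0 / 3.0

def shock_vx(pa, rhoa, vxa, vta, pb, left=True):
    """Sketch of S^a(p_b), Eq. (21), for p_b > p_a."""
    ha = 1.0 + GAMMA * pa / ((GAMMA - 1.0) * rhoa)
    Wa = 1.0 / np.sqrt(1.0 - vxa**2 - vta**2)
    # Taub adiabat for an ideal gas, written as a2*h_b^2 + a1*h_b + a0 = 0
    # (assumed standard form); take the physical (positive) root.
    dp = pa - pb
    a2 = 1.0 + (GAMMA - 1.0) * dp / (GAMMA * pb)
    a1 = -(GAMMA - 1.0) * dp / (GAMMA * pb)
    a0 = ha * dp / rhoa - ha**2
    hb = (-a1 + np.sqrt(a1**2 - 4.0 * a2 * a0)) / (2.0 * a2)
    rhob = GAMMA * pb / ((GAMMA - 1.0) * (hb - 1.0))      # invert the EOS
    j2 = -(pb - pa) / (hb / rhob - ha / rhoa)             # mass flux squared
    j = -np.sqrt(j2) if left else np.sqrt(j2)             # j < 0: left shock
    sgn = -1.0 if left else 1.0                           # V_s^- (V_s^+), Eq. (20)
    root = np.sqrt(j2 + rhoa**2 * Wa**2 * (1.0 - vxa**2))
    Vs = (rhoa**2 * Wa**2 * vxa + sgn * abs(j) * root) / (rhoa**2 * Wa**2 + j2)
    Ws = 1.0 / np.sqrt(1.0 - Vs**2)
    num = ha * Wa * vxa + Ws * (pb - pa) / j              # Eq. (19)
    den = ha * Wa + (pb - pa) * (Ws * vxa / j + 1.0 / (rhoa * Wa))
    return num / den
```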
The solution of the Riemann problem consists in finding the positions of the waves separating the four states and the intermediate states, $`L_{*}`$ and $`R_{*}`$. The functions $`𝒲_{\rightarrow }`$ and $`𝒲_{\leftarrow }`$ allow one to determine the functions $`v_R^x(p)`$ and $`v_L^x(p)`$, respectively. The pressure $`p_{*}`$ and the flow velocity $`v_{*}^x`$ in the intermediate states are then given by the condition $$v_R^x(p_{*})=v_L^x(p_{*})=v_{*}^x,$$ (24) which is an implicit algebraic equation in $`p_{*}`$ and can be solved by means of an iterative method. When $`p_{*}`$ and $`v_{*}^x`$ have been obtained, the equation of state gives the specific internal energy, and the remaining state variables of the intermediate state $`I_{*}`$ can be calculated using the relations between $`I_{*}`$ and the respective initial state $`I`$ given through the corresponding wave. Notice that the solution of the Riemann problem depends on the modulus of $`v^t`$, but not on the direction of the tangential velocity. Figure 1 shows the solution of a particular Riemann problem for different values of the tangential velocity $`v^y=0,0.5,0.9,0.99`$. The crossing point of any two lines in the upper panel gives the pressure and the normal velocity in the intermediate states. Whereas the pressure in the intermediate state can take any value between $`p_L`$ and $`p_R`$, the normal flow velocity can be arbitrarily close to zero in the case of an extremely relativistic tangential flow. To study the influence of tangential velocities on the solution of a Riemann problem, we have calculated the solution of a standard test involving the propagation of a relativistic blast wave produced by a large jump in the initial pressure distribution for different combinations of tangential velocities. Although the structure of the solution remains unchanged for different tangential velocities, the values in the constant state may change by a large amount. ## 6 Discussion and conclusions. We have obtained the exact solution of the Riemann problem in special relativistic hydrodynamics with arbitrary tangential velocities. Unlike in Newtonian hydrodynamics, the tangential velocities are coupled with the rest of the variables through the Lorentz factor, present in all terms of all equations. This strongly affects the solution, especially for ultra-relativistic tangential flows. In addition, the specific enthalpy also acts as a coupling factor and modifies the solution for the tangential velocities in thermodynamically relativistic situations (energy density and pressure comparable to or larger than the proper rest-mass density), rendering the classical solution incorrect for slow flows with very large internal energies. Our solution has interesting practical applications. First, it can be used to test the different approximate relativistic Riemann solvers and the multi-dimensional hydrodynamic codes based on directional splitting. Second, it can be used to construct multi-dimensional relativistic Godunov-type hydro codes. As an example, we have simulated a relativistic shock tube in a $`100\times 100`$ Cartesian grid, where the initial discontinuity was located along a main diagonal. The initial states were $`\rho _L=10`$, $`\rho _R=1`$, $`p_L=13.3`$, $`p_R=0.66\times 10^{-3}`$, $`v_L=0`$, $`v_R=0`$, and the adiabatic index is $`\gamma =5/3`$. Spatial accuracy was set to second order by means of a monotonic piecewise-linear reconstruction procedure, and second order in time was obtained by using a Runge-Kutta method for time advancing.
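Putting the two branches together, the iterative solution of Eq. (24) reduces to a one-dimensional root find. A minimal sketch, reusing the hypothetical `rarefaction_vx` and `shock_vx` helpers from the previous sketches; the bracket values are arbitrary and assumed to straddle the root.

```python
from scipy.optimize import brentq

def wave_vx(state, p, left=True):
    """v^x behind the wave emanating from state = (p, rho, v^x, v^t),
    choosing the rarefaction or shock branch according to Eq. (23)."""
    pa, rhoa, vxa, vta = state
    if p <= pa:
        return rarefaction_vx(pa, rhoa, vxa, vta, p, sign=-1 if left else +1)
    return shock_vx(pa, rhoa, vxa, vta, p, left=left)

def solve_pstar(left_state, right_state, plo=1e-10, phi=1e3):
    """Pressure p_* in the intermediate states, from Eq. (24)."""
    f = lambda p: wave_vx(left_state, p, True) - wave_vx(right_state, p, False)
    return brentq(f, plo, phi)
```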
The exact solution of the Riemann problem is used at every interface to calculate the numerical fluxes. The results are shown in Figure 2, and are comparable to those obtained with other HRSC methods. Profiles of all variables are stable and discontinuities are well resolved without excessive smearing. An efficient implementation of this exact Riemann solver in the context of multidimensional relativistic PPM is in progress and will be reported elsewhere.
no-problem/9910/cond-mat9910368.html
ar5iv
text
# Weak localisation, hole-hole interactions and the “metal”-insulator transition in two dimensions ## Abstract A detailed investigation of the metallic behaviour in high quality GaAs-AlGaAs two dimensional hole systems reveals the presence of quantum corrections to the resistivity at low temperatures. Despite the low density ($`r_s>10`$) and high quality of these systems, both weak localisation (observed via negative magnetoresistance) and weak hole-hole interactions (giving a correction to the Hall constant) are present in the so-called metallic phase, where the resistivity decreases with decreasing temperature. The results suggest that even at high $`r_s`$ there is no metallic phase at T=0 in two dimensions. Since the claimed observation of metallic behaviour in strongly interacting two-dimensional (2D) systems over 5 years ago, experimentalists have tried to provide data from which an understanding of the conduction processes in high quality 2D systems can be obtained. Initial studies of these new strongly interacting systems revealed that the resistivity data can be “scaled” over a wide range of temperatures, indicating the presence of a true phase transition between insulating and metallic states. Following this, an empirical formula for $`\rho `$(T) has been put forward which fits all the available experimental data of the metallic state. This formula describes a saturation of the resistivity as the temperature is reduced, giving a finite resistance at T=0, further testifying to the existence of a 2D metallic state. Despite these results the nature of the metallic state and whether it really persists to the zero of temperature remains unclear. Early theoretical and experimental studies of weakly interacting systems (low $`r_s`$) revealed that the presence of any disorder would give rise to logarithmic corrections to the conductivity. Since these corrections become increasingly important as $`T`$ is reduced, the question of what happens to the metallic behaviour as $`T\rightarrow 0`$ in 2D systems remains. This paper reports the observation of both weak localisation and weak hole-hole interactions in the “metallic” phase of a high quality 2D GaAs hole system. First we demonstrate that the system studied here exhibits all of the characteristics previously associated with the 2D “metal”-insulator transition. Magnetoresistance measurements are then used to extract the logarithmic corrections to the Drude conductivity at low temperatures. The data show that: (1) the anomalous exponential decrease of resistivity with decreasing temperature in the metallic phase is not due to quantum interference or strong interaction effects, (2) phase coherence is preserved in the metallic regime, with evidence for normal Fermi liquid behaviour, and (3) hole-hole interactions provide a localising correction to the conductivity. The sample used here is a gated, modulation-doped GaAs quantum well grown on a (311)A substrate. Four-terminal magnetoresistance measurements were performed at temperatures down to 100mK using low frequency (4 Hz) ac lock-in techniques and currents of 0.1-5 nA to avoid electron heating. The hole density could be varied in the range $`0`$–$`3.5\times 10^{11}\text{cm}^{\text{-2}}`$, with a peak mobility of $`2.5\times 10^5\text{cm}^\text{2}\text{V}^{\text{-1}}\text{s}^{\text{-1}}`$. Only the heavy hole ($`|M_J|`$=3/2) subband is occupied, although there is some mixing between light and heavy hole bands for $`|k|>0`$.
Figure 1(a) shows the temperature dependence of the $`B`$=0 resistivity $`\rho `$ plotted for carrier densities close to the transition, from $`p_s=3.2`$–$`5.6\times 10^{10}\text{cm}^{\text{-2}}`$. Strongly localised behaviour is observed at the lowest carrier densities, with $`\rho `$ taking the familiar form for variable range hopping: $`\rho (T)=\rho _{\text{VRH}}\mathrm{exp}[(T_{\text{VRH}}/T)^m]`$, with m=1/2 far from the transition and m=1/3 close to the transition. A transition from insulating to metallic behaviour occurs as the carrier density is increased, with a critical density of $`p_c=4.6\times 10^{10}\text{cm}^{\text{-2}}`$ ($`r_s=12`$) at the transition. Above this critical density the resistivity drops markedly as the temperature is reduced, although it is difficult to see this drop on the logarithmic axis of Fig. 1(a). The metallic behaviour can be seen more clearly in the “scaled” data shown in Fig. 1(b). Each $`\rho `$(T) trace was individually scaled along the T-axis in order to collapse all the data onto one of two separate branches. It has been suggested that the ability to scale the data both in the strongly localised and metallic branches is evidence for a phase transition between insulating and metallic states in a 2D system. More recently Pudalov et al. have shown that $`\rho (T)`$ in the metallic regime is well described by: $$\rho (T)=\rho _0+\rho _1\mathrm{exp}[-T_a/T]$$ (1) Fig. 1(c-e) shows the temperature dependent resistivity for three different carrier densities in the metallic regime, with fits to Eqn. (1) shown as dashed lines. In Fig. 1(c), at a density close to the transition, saturation of the resistivity is just visible at the lowest temperatures (100mK). As the density is increased and we move further into the metallic regime, this saturation becomes visible at higher temperatures, until at the highest density $`\rho (T)`$ saturates below 350mK. The empirical formula (1), which characterises the metallic behaviour observed in all 2D systems, therefore dictates a saturation of $`\rho (T)`$ as $`T\rightarrow 0`$. Although different from the scaling analysis of Ref. shown in Fig. 1(b), it is still consistent with the existence of a 2D metal-insulator transition because $`\rho (T)`$ remains finite as $`T\rightarrow 0`$. Early studies of weakly interacting, disordered 2D systems ($`r_s\sim 4`$) demonstrated that both weak localisation and weak electron-electron interactions caused a logarithmic reduction of the conductivity as $`T\rightarrow 0`$. More recently it has been shown that the same interaction effects occur in slightly less disordered samples ($`r_s\sim 6`$) that exhibit “metallic behaviour” at high carrier densities, far from the metal-insulator transition. However, neither the scaling analysis in Fig. 1(b) nor the empirical Eqn. (1) address what has happened to these logarithmic corrections near the metal-insulator transition, and whether the conductivity remains finite as $`T\rightarrow 0`$. We now turn to the first of the two main results of this paper. Figure 2 shows the temperature dependence of the $`B`$=0 resistivity (left hand panel) and magnetoresistance (right hand panel) at different densities on both sides of the “metal”-insulator transition. In Fig. 2(a) we are just on the insulating side of the transition. The left hand panel shows that $`\rho (T)`$ is essentially $`T`$-independent down to 300mK and then increases by 2.5% as the temperature is further reduced.
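Fitting Eq. (1) to a measured $`\rho (T)`$ trace is a standard least-squares exercise. The sketch below (Python/SciPy) uses synthetic data in arbitrary units, since the measured traces are not reproduced here; all parameter values are illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

def rho_metallic(T, rho0, rho1, Ta):
    """Empirical form of Eq. (1): rho(T) = rho0 + rho1 * exp(-Ta / T)."""
    return rho0 + rho1 * np.exp(-Ta / T)

# Synthetic stand-in for a measured rho(T) trace (arbitrary units).
T = np.linspace(0.1, 1.0, 40)               # temperature in K
rho = rho_metallic(T, 1.0, 2.0, 0.6) \
      + 0.01 * np.random.default_rng(0).normal(size=T.size)

popt, pcov = curve_fit(rho_metallic, T, rho, p0=[1.0, 1.0, 0.5])
print("rho0, rho1, Ta =", popt)   # rho0 is the saturation value as T -> 0
```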
This weak increase in the resistivity has previously been taken as evidence for weak localisation and weak electron-electron interaction effects. It is however not possible to determine the precise origin of this weak increase in resistivity solely from the $`B`$=0 data, and we therefore look at the magnetoresistance shown in the right hand panel of Fig. 2(a). A characteristic signature of weak localisation is a strong temperature dependent negative magnetoresistance, since the perpendicular magnetic field breaks time reversal symmetry, removing the phase coherent back-scattering. As observed previously, there is no evidence of weak localisation for temperatures down to 300mK in these high quality samples. However, as $`T`$ is lowered below 300mK a strong negative magnetoresistance peak develops as phase coherent effects become important, mirroring the small increase in the resistivity at $`B`$=0. Increasing the carrier density brings us into the metallic regime (Fig. 2(b)), where the exponential drop in the resistivity with decreasing temperature predicted from Eqn. (1) starts to become visible. The upturn in $`\rho (T)`$ marked by the arrow has moved to lower temperatures and the negative magnetoresistance in the right hand panel has become less pronounced. Further increasing the density (Fig. 2(c)) causes the metallic behaviour to become stronger, with the upturn in $`\rho (T)`$ moving to even lower temperatures, until at $`p_s=5\times 10^{10}\text{cm}^{\text{-2}}`$ the upturn is no longer visible within the accessible temperature measurement range. However, the magnetoresistance still exhibits remnants of the weak localisation temperature dependent peak at $`B`$=0. The weak localisation is therefore always present and is neither destroyed in the metallic regime, nor is it “swamped” by the exponential decrease in resistivity with decreasing temperature. Instead, what can clearly be seen in the left hand panel of Fig. 3 is that the upturn in $`\rho (T)`$ due to weak localisation, marked by the arrows, moves to lower $`T`$ as the carrier density is increased. This is not surprising since, as we move further into the metallic regime, both the conductivity and therefore the mean free path ($`l\propto \sigma /\sqrt{p_s}`$) increase, such that the weak localisation corrections are only visible at lower temperatures (larger $`l_\varphi `$). In contrast to experimental studies of high carrier density hole gas quantum well samples, there are no signs of weak anti-localisation in these low density samples. This is perhaps to be expected, since recent theoretical work has predicted that the magnetoresistance behaviour is determined by the degree of heavy-hole/light-hole mixing at the Fermi energy, which is characterised by the parameter $`k_Fa/\pi `$, where $`a`$ is the width of the quantum well. In our sample the carrier concentration is small, such that $`k_Fa/\pi \ll 1`$, and only negative magnetoresistance is expected. In Fig. 3(a) we fit the temperature dependent magnetoconductance data to the formula of Hikami et al.: $$\mathrm{\Delta }\sigma (B)=\frac{e^2}{\pi h}\left[\mathrm{\Psi }\left(\frac{1}{2}+\frac{B_\varphi }{B}\right)-\mathrm{\Psi }\left(\frac{1}{2}+\frac{B_0}{B}\right)\right]$$ (2) where $`\mathrm{\Psi }(x)`$ is the digamma function, and $`B_0`$ and $`B_\varphi `$ are characteristic magnetic fields related to the transport scattering rate and the phase relaxation rate. We obtain $`\sigma _{xx}`$ by matrix inversion of $`\rho _{xx}`$ and $`\rho _{xy}`$. Using Eqn.
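Extracting $`B_\varphi `$ from Eq. (2) is a two-parameter fit involving the digamma function. A minimal sketch (Python/SciPy) on synthetic data; the value of $`e^2/h`$ and all field scales are chosen purely for illustration, not taken from the measurements.

```python
import numpy as np
from scipy.special import digamma
from scipy.optimize import curve_fit

e2_h = 3.874e-5   # e^2/h in siemens

def hikami(B, Bphi, B0):
    """Weak-localisation magnetoconductance, Eq. (2)."""
    return (e2_h / np.pi) * (digamma(0.5 + Bphi / B) - digamma(0.5 + B0 / B))

# B in tesla; dsigma stands in for measured sigma_xx(B) - sigma_xx(0).
B = np.linspace(1e-4, 0.05, 60)
dsigma = hikami(B, 2e-3, 0.5) \
         + 1e-8 * np.random.default_rng(1).normal(size=B.size)

(Bphi, B0), _ = curve_fit(hikami, B, dsigma, p0=[1e-3, 0.1])
print(f"B_phi = {Bphi:.2e} T, B_0 = {B0:.2e} T")
```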
(2) we fit the experimental data just on the metallic side of the transition ($`p_s=4.7\times 10^{10}\text{cm}^{\text{-2}}`$) at different temperatures, as shown in Fig. 3(a). These fits are in good agreement with the experimental data, and from them it is possible to extract the fitting parameter $`B_\varphi `$ and thus the phase relaxation time $`\tau _\varphi `$. Figure 3(b) shows the temperature dependence of the phase breaking rate, $`1/\tau _\varphi `$, for three different densities on both sides of and close to the “metal”-insulator transition. The phase breaking rate falls approximately linearly with decreasing temperature for all three traces. The linear dependence agrees well with that predicted for disorder enhanced hole-hole scattering, where $`1/\tau _\varphi \approx 2k_BT/(\hbar k_Fl)`$. This phase breaking mechanism should depend only on $`k_Fl`$ and not on the carrier density, mobility or interaction strength. It is therefore particularly noteworthy that the phase breaking rates in these low density p-GaAs samples, with $`2.5<k_Fl<5`$, are almost identical to those found in n-type silicon MOSFETs with $`k_Fl\sim 1`$, despite a factor of 20 difference in the carrier densities (see data in Fig. 3(b)). This agreement with the scattering limited electron lifetime suggests that the electron states are only mildly perturbed by the strong interactions and remain essentially Fermi liquid like. Another important feature of these results is that there is little variation in $`\tau _\varphi `$ with density, and in particular there is no dramatic change in $`\tau _\varphi `$ as we cross from insulating ($`p_s=4.5\times 10^{10}\text{cm}^{\text{-2}}`$) to metallic behaviour ($`p_s=5.2\times 10^{10}\text{cm}^{\text{-2}}`$). There is therefore no reflection of the exponential decrease of $`\rho (T)`$ with decreasing temperature in the phase breaking rate. This implies that whatever mechanism is causing the metallic behaviour does not suppress weak localisation as originally believed, and is further evidence that the system is behaving as a Fermi liquid. Since all models of the resistivity in the metallic phase predict that the exponential drop saturates at low temperatures, our data show that localisation effects will again take over as $`T\rightarrow 0`$. Finally we address the role of electron-electron (hole-hole) interactions in the 2D metallic phase - the second important result of this paper. Unlike weak localisation, interactions not only affect the $`B=0`$ resistivity, but also cause a correction to the Hall resistance: $$\frac{\mathrm{\Delta }R_H}{R_H}=-2\frac{\mathrm{\Delta }\sigma _I}{\sigma }$$ (3) By measuring the low field Hall effect it is thus possible to distinguish between weak localisation and interaction effects. Figure 4(a) shows the Hall resistivity $`\rho _{xy}`$ measured on the metallic side of the transition (i.e. where the zero field resistivity shows an exponential drop with decreasing temperature, as shown in Fig. 2(c)). The data reveal a small, but significant, decrease of the Hall slope with increasing temperature. Whilst a series of traces at different temperatures was taken from 100-700mK, only three of these traces are presented for clarity. Upon closer investigation this small decrease of the Hall slope is found to vary as $`\mathrm{log}(T)`$. We extract the interaction correction to the zero field conductivity, $`\mathrm{\Delta }\sigma _I`$, from the temperature dependent Hall data using equation (3).
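The two derived quantities used in this analysis follow directly from the fits. The relation $`B_\varphi =\hbar /(4eD\tau _\varphi )`$ is the standard weak-localisation expression and is assumed here (the text only states that $`B_\varphi `$ is related to the phase relaxation rate); the second helper simply inverts Eq. (3).

```python
import numpy as np

hbar, e = 1.0546e-34, 1.602e-19   # SI units

def tau_phi(Bphi, D):
    """Phase relaxation time from the fitted field scale B_phi (tesla),
    assuming B_phi = hbar / (4 e D tau_phi), with D the diffusion
    constant in m^2/s (assumed relation, not quoted from the paper)."""
    return hbar / (4.0 * e * D * Bphi)

def dsigma_interaction(RH, dRH, sigma):
    """Interaction correction from Hall data, inverting Eq. (3):
    Delta sigma_I = -(sigma / 2) * (Delta R_H / R_H)."""
    return -0.5 * sigma * dRH / RH

print(tau_phi(Bphi=2e-3, D=1e-2))   # illustrative numbers: ~8 ps
```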
Figure 4(b) shows a plot of the interaction correction for different carrier densities on both sides of the transition. All the data collapse onto a single line, clearly demonstrating a $`\mathrm{log}(T)`$ dependence of $`\mathrm{\Delta }\sigma _I`$, which reduces the conductivity to zero as $`T\rightarrow 0`$. Logarithmic corrections to the Hall resistivity have previously been observed in studies of interaction effects in high density electron systems. It is perhaps surprising that results observed in, and derived from, weakly interacting systems apply to our system, where interactions are strong and $`r_s>10`$. Nevertheless, we find reasonable agreement between the magnitude of the logarithmic corrections due to interactions in our system and those predicted by Altshuler et al. (within a factor of 2). As with the phase coherent effects, this logarithmic correction due to hole-hole interactions is independent of whether we are in the insulating or metallic phase and is present despite the exponential drop in resistivity. This is the first proof that electron-electron interactions are not responsible for the 2D “metal”-insulator transition observed in high mobility (low $`E_F`$) systems. In summary we have presented a comprehensive study of localisation and interaction effects in a high mobility two-dimensional hole gas sample that shows all the signatures of a $`B=0`$ “metal”-insulator transition. The results clearly demonstrate that neither phase coherent effects nor electron-electron interactions are responsible for the apparent 2D metal-insulator transition. Both of these effects are present in the metallic regime and both give rise to localising corrections to the conductivity at low temperatures. The importance of phase coherent effects in studies of the metallic behaviour in high mobility 2D systems has not previously been recognised because the mean free path in these systems is large, so that weak localisation is only observable at very low, often inaccessible, temperatures. Nevertheless we clearly observe negative magnetoresistance characteristic of weak localisation in the so-called metallic phase. There is no suppression of phase coherent effects in the metallic regime. Instead the weak localisation simply moves to lower temperatures as we go further into the metallic regime. Most importantly, we demonstrate that the metallic behaviour is not due to hole-hole interactions, because these also cause a logarithmic localising correction to the conductivity. These results strongly suggest that the metallic behaviour must be a finite temperature effect, and that as $`T\rightarrow 0`$ the old results of scaling theory and weak electron-electron interactions remain valid - there is no genuine 2D metallic phase. We are indebted to D.E. Khmel’nitskii, D. Neilson, and B. Altshuler for many useful discussions. This work was funded by EPSRC (U.K.). MYS acknowledges a QEII Fellowship from the Australian Research Council.
no-problem/9910/cond-mat9910236.html
ar5iv
text
# Dependence of Conductance on Percolation Backbone Mass ## I Introduction There has been considerable study of the bond percolation cluster considered as a random-resistor network, with each occupied bond having unit resistance and non-occupied bonds having infinite resistance. In two dimensions, the configuration studied is typically an $`L\times L`$ lattice, and the conductance is measured between two opposite sides which are assumed to have infinite conductance. The backbone of the cluster is then defined as the set of bonds that are connected to the two sides having infinite conductance through paths that have no common bond. At the percolation threshold, the backbone mass scales as $`M_B\sim L^{d_B}`$ with $`d_B=1.6432\pm 0.0008`$ and in this “bus bar” geometry is strongly correlated with $`L`$. The total conductance of the backbone as a function of $`L`$ has been studied extensively and has been found to scale as $`\sigma \sim L^{-\stackrel{~}{\mu }}`$ with $`\stackrel{~}{\mu }=0.9826\pm 0.0008`$. Recently, the distribution of masses of backbones defined by two points, i.e., backbones defined as the set of those bonds that are connected by paths having no common bonds to two points separated by distance $`r`$ within an $`L\times L`$ lattice, has been studied. One finds that when $`r\ll L`$, there is a very broad distribution of backbone masses for a given $`r`$. Figure 1 illustrates some typical percolation clusters and their backbones defined in this configuration. Because of the broad distribution of backbone masses, we have the opportunity to study the conductance between these two points, separated by a fixed distance $`r`$, as a function of the mass of the backbone defined by these points. One might expect that, for fixed $`r`$, the average conductance would increase with increasing backbone mass because there could be more paths through which current can flow. In fact, we find that the average conductance decreases monotonically with increasing backbone size, in contrast with the behavior of homogeneous systems and non-random fractals, in which conductance increases with increasing $`M_B`$. We explain our finding by noting that the conductance is strongly correlated with the shortest path between the two points, and then studying the distribution of shortest paths between the two points for a given $`M_B`$. This analysis extends recent studies of the distribution of shortest paths where no restriction on $`M_B`$ is placed. ## II Simulations Our system is a two-dimensional square lattice of side $`L=1000`$ with points $`A`$ and $`B`$ defined as $`A=((L-r)/2,500)`$, $`B=((L+r)/2,500)`$. For each realization of bond percolation on this lattice, if there is a path of connected bonds between $`A`$ and $`B`$, we calculate (i) the length of the shortest path between $`A`$ and $`B`$, (ii) the size of the backbone defined by $`A`$ and $`B`$ and (iii) the total conductance between $`A`$ and $`B`$. We obtain data from 100,000 realizations for each of 8 values of $`r`$ (1, 2, 4, 8, 16, 32, 64, and 128) at the percolation threshold, $`p_c=0.5`$. We bin these results based on the value of the backbone mass, $`M_B`$, by combining results for all realizations with $`2^n<M_B<2^{n+1}`$ and taking the center of each bin as the value of $`M_B`$. In Fig. 2(a), we plot the simulation results for the average conductance $`\sigma (M_B,r)`$ and find that the conductance, in fact, decreases with increasing $`M_B`$. The decrease is seen more clearly in Fig. 2(b), in which we plot scaled values as discussed below.
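The two-point conductance measurement described above can be reproduced in miniature with a graph-Laplacian calculation: the effective resistance between $`A`$ and $`B`$ is $`R_{AB}=(e_Ae_B)^TK^+(e_Ae_B)`$, where $`K^+`$ is the pseudo-inverse of the Laplacian. The sketch below (Python/NumPy/SciPy) uses a small lattice for speed, whereas the actual simulations use $`L=1000`$ together with a backbone identification that is not reproduced here.

```python
import numpy as np
from scipy.sparse import coo_matrix
from scipy.sparse.csgraph import connected_components

def two_point_conductance(L, p, A, B, rng):
    """Conductance between sites A and B of an L x L bond-percolation
    lattice in which each bond (unit resistance) is occupied with
    probability p.  Returns 0 if A and B are not connected."""
    N = L * L
    idx = lambda x, y: x * L + y
    rows, cols = [], []
    for x in range(L):
        for y in range(L):
            for dx, dy in ((1, 0), (0, 1)):
                if x + dx < L and y + dy < L and rng.random() < p:
                    rows.append(idx(x, y)); cols.append(idx(x + dx, y + dy))
    adj = coo_matrix((np.ones(len(rows)), (rows, cols)), shape=(N, N))
    _, labels = connected_components(adj, directed=False)
    a, b = idx(*A), idx(*B)
    if labels[a] != labels[b]:
        return 0.0
    K = np.zeros((N, N))                       # graph Laplacian
    for i, j in zip(rows, cols):
        K[i, i] += 1; K[j, j] += 1; K[i, j] -= 1; K[j, i] -= 1
    u = np.zeros(N); u[a] = 1.0; u[b] = -1.0
    return 1.0 / (u @ np.linalg.pinv(K) @ u)   # 1 / R_AB

rng = np.random.default_rng(42)
print(two_point_conductance(20, 0.5, (9, 8), (9, 12), rng))  # r = 4 at p_c
```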
## III Sierpinski Gasket

In non-fractal systems, the conductance increases as the mass of the conductor increases. We next consider the average conductance on the Sierpinski gasket, a non-random fractal, the first three generations of which are illustrated in Fig. 3(a)–(c). Because the Sierpinski gasket is not translationally invariant, the analogue of the average conductivity between two points in the percolation cluster is the conductivity averaged over all pairs of points separated by distance $`r`$. At each successive generation, there are two types of pairs: (i) pairs which correspond to pairs in the previous generation (e.g., A and B) and (ii) pairs which do not correspond to pairs in the previous generation (e.g., D and E). It is obvious that as we move from one generation to the next, the conductance between pairs of type (i) increases because there are more paths between the points than in the previous generation. On the other hand, the conductance between pairs of type (ii) is lower on average than between the pairs present in the previous generation because, on average, the shortest path between the two points is longer than between the pairs in the previous generation. However, for any given $`r`$, the shortest path between any two points has a fixed upper bound independent of the generation. Due to this bound on the shortest path, the net result is that the average conductivity increases with succeeding generations. This is shown in Fig. 3(d), in which we plot the average conductivity calculated exactly for generations 1 to 6 for $`r=1`$, 2 and 4.

## IV Shortest Path Distribution

In order to understand why the average conductance of the percolation backbone decreases with increasing $`M_B`$, we must (i) recognize that the conductance is strongly correlated with the shortest path between the two points and (ii) study $`P(\ell |M_B,r)`$, the distribution of shortest paths between the two points for a given backbone mass. Hence we next create the $`P(\ell |M_B,r)`$ probability distribution, binning the data logarithmically by taking the average over samples centered at $`\log_2\ell `$. Figure 4(a) shows the results of the simulations for $`P(\ell |M_B,r)`$ for $`r=1`$ for various backbone masses. The plots collapse, the only difference between the plots being the values of the upper cut-offs due to the finite size of the backbone. Figure 1 illustrates how the size of the backbone constrains the possible values of the shortest path. For all values of $`M_B`$, a section of each plot in Fig. 4(a) exhibits power law behavior. In Fig. 4(b), we show the distributions $`P(\ell |M_B,r)`$ for different $`r`$ and a given $`M_B`$. In Fig. 4(c) we see that when scaled with $`r^{d_{\text{min}}}`$ the plots collapse, so we can write $`P(\ell |M_B,r)`$ in the scaling form $$P(\ell |M_B,r)\sim \frac{1}{r^{d_{\text{min}}}}\left(\frac{\ell }{r^{d_{\text{min}}}}\right)^{-\psi }.$$ (1) An expression for $`\psi `$ can be found by recognizing that we can write the well-studied distribution $`P(\ell |r)`$, the probability that the shortest path between two points separated by Euclidean distance $`r`$ is $`\ell `$, independent of $`M_B`$, as $$P(\ell |r)=\int_{c_{\ell }}^{\infty }P(\ell |M_B,r)P(M_B|r)\,dM_B,$$ (2) where (i) $`P(M_B|r)`$ is the distribution of backbone masses given distance $`r`$ between the points which determine the backbone and (ii) $`c_{\ell }`$ is the lower cutoff on $`M_B`$ given $`\ell `$.
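Exact gasket conductances of the kind plotted in Fig. 3(d) can be obtained from the graph Laplacian: the two-point resistance is $`R_{AB}=L_{AA}^{+}+L_{BB}^{+}-2L_{AB}^{+}`$, where $`L^{+}`$ is the Laplacian pseudoinverse. The following sketch is our own construction (skewed integer coordinates stand in for the equilateral geometry, so nearest-neighbour displacements are $`(1,0)`$, $`(0,1)`$ and $`(1,-1)`$); it averages the conductance over all $`r=1`$ pairs, which, per the discussion above, should grow with the generation.

```python
import numpy as np

def gasket(gen):
    """Unit-resistor edges and vertex list of a generation-`gen` gasket."""
    def build(a, b, c, g):
        if g == 0:
            return [(a, b), (b, c), (c, a)]
        m = lambda p, q: ((p[0] + q[0]) // 2, (p[1] + q[1]) // 2)
        ab, bc, ca = m(a, b), m(b, c), m(c, a)
        return (build(a, ab, ca, g - 1) + build(ab, b, bc, g - 1)
                + build(ca, bc, c, g - 1))
    s = 2 ** gen                      # integer side keeps midpoints exact
    edges = build((0, 0), (s, 0), (0, s), gen)
    verts = sorted({v for e in edges for v in e})
    return edges, verts

def conductance(Lplus, a, b):
    """sigma_ab = 1 / R_ab, with R_ab from the Laplacian pseudoinverse."""
    return 1.0 / (Lplus[a, a] + Lplus[b, b] - 2 * Lplus[a, b])

for gen in (1, 2, 3, 4):
    edges, verts = gasket(gen)
    idx = {v: i for i, v in enumerate(verts)}
    n = len(verts)
    Lap = np.zeros((n, n))
    for u, v in edges:
        Lap[idx[u], idx[u]] += 1; Lap[idx[v], idx[v]] += 1
        Lap[idx[u], idx[v]] -= 1; Lap[idx[v], idx[u]] -= 1
    Lplus = np.linalg.pinv(Lap)
    steps = [(1, 0), (0, 1), (1, -1)]          # r = 1 displacements
    pairs = [(idx[v], idx[(v[0] + dx, v[1] + dy)])
             for v in verts for dx, dy in steps
             if (v[0] + dx, v[1] + dy) in idx]
    avg = sum(conductance(Lplus, a, b) for a, b in pairs) / len(pairs)
    print(f"generation {gen}: {len(pairs)} pairs at r=1, <sigma> = {avg:.4f}")
```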
$`P(M_B|r)`$ has the form $$P(M_B|r)\sim \frac{1}{r^{d_B}}\left(\frac{M_B}{r^{d_B}}\right)^{-\tau _B},\qquad [r\ll L]$$ (3) where $`d_B`$ is the fractal dimension of the backbone and $`\tau _B=d/d_B`$ is the exponent for the blob size distribution. From Ref. , $$P(\ell |r)\sim \frac{1}{r^{d_{\text{min}}}}\left(\frac{\ell }{r^{d_{\text{min}}}}\right)^{-g_{\ell }},$$ (4) where $`d_{\text{min}}`$ is the fractal dimension of the shortest path. Since $`\ell \sim r^{d_{\text{min}}}`$ and $`M_B\sim r^{d_B}`$, implying $`\ell \sim M_B^{d_{\text{min}}/d_B}`$, the lower cutoff $`c_{\ell }`$ in Eq. (2) scales as $$c_{\ell }\sim \ell ^{d_B/d_{\text{min}}}.$$ (5) As $`L\to \infty `$, the upper cutoff is $`\infty `$ because the maximum backbone mass is not constrained by the length of the shortest path. Substituting Eqs. (3), (4), and (5) into Eq. (2), and equating powers of $`r`$ (or powers of $`\ell `$) on the left and right hand sides of the resulting equation, we find $$\psi =g_{\ell }-\frac{d_B}{d_{\text{min}}}(\tau _B-1).$$ (6) Using $`\tau _B=d/d_B`$ and the values $`g_{\ell }=2.04`$ and $`d_{\text{min}}=1.13`$, we find $`\psi =1.72`$, in good agreement with our simulation result $$\psi =1.7\pm 0.05.$$ (7)

## V Average Conductance

We can now calculate the average conductance. Since $`\sigma `$ is strongly correlated with $`\ell `$, and since $`\sigma `$ scales with $`r`$ as $`r^{-\tilde{\mu }}`$ and $`\ell `$ scales with $`r`$ as $`r^{d_{\text{min}}}`$, we have $$\sigma \sim \ell ^{-\tilde{\mu }/d_{\text{min}}}.$$ (8) Then using the fact that $`P(\ell |M_B,r)\sim \ell ^{-\psi }`$ we have $$P(\sigma |M_B,r)\sim P(\ell =\sigma ^{-d_{\text{min}}/\tilde{\mu }}|M_B,r)\frac{d\ell }{d\sigma }$$ (9) $$\sim \sigma ^{(\psi -1)(d_{\text{min}}/\tilde{\mu })-1}=\sigma ^z,$$ (10) where $$z\equiv (\psi -1)(d_{\text{min}}/\tilde{\mu })-1=-0.17.$$ (11) Now $`P(\ell |M_B,r)`$ is nonzero only for $$(ar)^{d_{\text{min}}}\lesssim \ell \lesssim (bM_B)^{d_{\text{min}}/d_B},$$ (12) where $`a`$ and $`b`$ are constants. Hence using $`\sigma \sim \ell ^{-\tilde{\mu }/d_{\text{min}}}`$, we find $`P(\sigma |M_B,r)`$ is nonzero for $$(bM_B)^{-(d_{\text{min}}/d_B)(\tilde{\mu }/d_{\text{min}})}=(bM_B)^{-\tilde{\mu }/d_B}\lesssim \sigma \lesssim (ar)^{-\tilde{\mu }}.$$ (13) Using these bounds to normalize the distribution, we find $$P(\sigma |M_B,r)=\frac{(z+1)\sigma ^z}{(ar)^{-\tilde{\mu }(z+1)}-(bM_B)^{-(\tilde{\mu }/d_B)(z+1)}}.$$ (14) Then $$\sigma (M_B,r)=\int_{(bM_B)^{-\tilde{\mu }/d_B}}^{(ar)^{-\tilde{\mu }}}\sigma P(\sigma |M_B,r)\,d\sigma $$ (15) $$=\frac{z+1}{z+2}(ar)^{-\tilde{\mu }}\frac{1-\left[\frac{(bM_B)^{1/d_B}}{ar}\right]^{-\tilde{\mu }(z+2)}}{1-\left[\frac{(bM_B)^{1/d_B}}{ar}\right]^{-\tilde{\mu }(z+1)}}.$$ (17) Thus as $`M_B`$ goes to infinity, $`\sigma (M_B,r)`$ decreases asymptotically to a constant as $$\sigma (M_B,r)\sim \frac{z+1}{z+2}(ar)^{-\tilde{\mu }}\left[1+\left[\frac{(bM_B)^{1/d_B}}{ar}\right]^{-\tilde{\mu }(z+1)}\right].$$ (18) By considering the asymptotic dependence of $`\sigma (M_B,r)`$ on $`M_B`$, we can reasonably fit the simulation results by choosing the parameters $`a`$ and $`b`$ in Eq. (15) to be 0.9 and 6, respectively. Using these values for $`a`$ and $`b`$, in Fig. 2(a) we plot $`\sigma `$ from Eq.
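As a quick arithmetic check of Eqs. (6) and (11), the following sketch (ours, using only the exponent values quoted in the text) reproduces the quoted $`\psi `$ and $`z`$:

```python
# Numerical check of psi = g_ell - (d_B/d_min)(tau_B - 1)
# and z = (psi - 1)(d_min/mu) - 1, with the quoted 2D values.
d = 2
d_B, d_min = 1.6432, 1.13
g_ell, mu = 2.04, 0.9826
tau_B = d / d_B                      # blob-size-distribution exponent
psi = g_ell - (d_B / d_min) * (tau_B - 1.0)
z = (psi - 1.0) * (d_min / mu) - 1.0
print(f"psi = {psi:.3f}")            # ~1.72, cf. Eq. (7)
print(f"z   = {z:.3f}")              # ~-0.17, cf. Eq. (11)
```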
(15) for multiple values of $`r`$ and find that the agreement with the simulation results improves with increasing $`r`$. For large $`r`$, the curves for the simulation results and the curves for the theoretical results are coincident at large $`M_B`$. The poor results for small $`r`$ are due to corrections-to-scaling not included in our derivation (e.g., for small $`r`$, there are significant corrections-to-scaling for the relations $`\sigma \sim r^{-\tilde{\mu }}`$ and $`M_B\sim r^{d_B}`$). Equation (15) can be re-cast in terms of the scaled variable $`x\equiv M_B/r^{d_B}`$ as $$\sigma (x,r)=\frac{z+1}{z+2}(ar)^{-\tilde{\mu }}f(x),$$ (19) where $$f(x)=\frac{1-\left(\frac{b}{a^{d_B}}x\right)^{-(\tilde{\mu }/d_B)(z+2)}}{1-\left(\frac{b}{a^{d_B}}x\right)^{-(\tilde{\mu }/d_B)(z+1)}}.$$ (20) In Fig. 2(b), we plot the average conductance scaled in accordance with Eqs. (19) and (20). The expected collapse improves with increasing $`r`$ for the same reason as noted above. Above the percolation threshold, for backbones of size larger than the correlation length, the strong correlation between the conductance and the shortest path breaks down and we expect the conductance to increase with the mass of the backbone, as is the case in non-random systems. This is seen in Fig. 2(c), where we plot conductance versus backbone mass for the bond occupation probabilities $`p=0.56`$ and $`p=0.60`$, which are above the percolation threshold, and, for comparison, the conductance at the percolation threshold, $`p=0.50`$. Figure 2(c) shows that for $`p=0.60`$, all backbone masses sampled are of size greater than the correlation length and the conductance increases monotonically. For $`p=0.56`$, the smaller backbone masses are of size less than the correlation length and Fig. 2(c) shows that the conductance initially decreases; for larger backbone masses, however, the sizes of the backbones are greater than the correlation length and Fig. 2(c) shows that the conductance then increases.

## VI Discussion

The derivation of Eq. (15) and its agreement with the results of our simulations confirm our understanding of why the average conductance decreases with increasing backbone mass: the smaller contributions to the average conductance from the longer minimal paths possible in the clusters with larger backbone size cause the average conductance to be smaller. Our derivation was not specific to two dimensions, and should also hold in higher dimensions.

## Acknowledgments

We thank J. Andrade for helpful discussions, and BP Amoco for financial support.
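The closed form of Eq. (15) is easy to evaluate directly. The sketch below (ours; the constants $`a=0.9`$ and $`b=6`$ are the fitted values quoted above) confirms that the predicted $`\sigma (M_B,r)`$ decreases monotonically toward its $`M_B\to \infty `$ plateau $`\frac{z+1}{z+2}(ar)^{-\tilde{\mu }}`$:

```python
d_B, mu = 1.6432, 0.9826
z = -0.17
a, b = 0.9, 6.0        # fitted constants quoted in the text

def sigma_avg(M_B, r):
    """Average conductance from Eq. (15)/(17)."""
    x = (b * M_B) ** (1.0 / d_B) / (a * r)   # (b M_B)^{1/d_B} / (a r)
    num = 1.0 - x ** (-mu * (z + 2.0))
    den = 1.0 - x ** (-mu * (z + 1.0))
    return (z + 1.0) / (z + 2.0) * (a * r) ** (-mu) * num / den

r = 16
for M_B in (2 ** k for k in range(6, 14)):
    print(M_B, round(sigma_avg(M_B, r), 5))   # decreases toward the plateau
```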
# Exactly Solvable Model of Inergodic Spin Glass

## 1 Introduction

The possibility of the existence of spin glass phases as specific thermodynamic states in solid solutions of ferromagnets and antiferromagnets was first suggested in the pioneering work of Edwards and Anderson. According to this work, these phases are characterized by the appearance of local spontaneous magnetic moments with chaotic orientations, which are determined by a random distribution of ferromagnetic and antiferromagnetic interactions throughout the crystal. The first attempt to give a quantitative description of the spin glass transition was made by Sherrington and Kirkpatrick (SK), who considered a mean-field model with infinite-range random interaction. But the solution they obtained appeared to be unstable in the glass phase for $`T<T_{sg}`$, $`H<H_{AT}\propto \left(T_{sg}-T\right)^{3/2}`$. Further attempts to describe the thermodynamics of the spin glass phase below the Almeida-Thouless (AT) line, $`H<H_{AT}`$, resulted in the construction of the 'replica symmetry breaking' scheme by Parisi, a procedure of analytical continuation in the replica method used in the studies of the SK model. It is now a common belief that the Parisi solution is stable below the AT line, and it is the basic result of spin glass theory. With some reservations concerning the mathematical foundations of the replica method and the suggested procedure of analytical continuation, one may consider Parisi's solution as the first exact description of the thermodynamics of the inergodic spin glass phase. Later, other exactly solvable spin glass models were studied with the use of the replica method (the vector models, $`p`$-spin spherical models) and without it (the ordinary spherical spin glass, the Bethe lattice spin glass). Let us also mention the study of the mean-field equations for the local magnetic moments of the SK model. In these studies the equilibrium thermodynamic parameters averaged over random interactions were obtained in the framework of standard statistical mechanics. Still, these results appear to be insufficient for the description of the properties of real spin glasses. The reason lies in the known inergodicity of the spin glass phases, i.e., the existence of a large number of metastable states in these phases. Experiments thus probe the physical quantities proper to one of these states, into which the crystal settles depending on the regime and the sequence of cooling and application of the magnetic field. Meanwhile, the equilibrium quantities refer only to the state with the lowest thermodynamic potential and could be obtained only after a sufficiently long observation time, such that the crystal could come to the lowest equilibrium state when the field and/or temperature are changed. As the barriers between metastable states are macroscopic (divergent as $`N\to \infty `$), the corresponding relaxation times are generally astronomically large. Thus the crystal can stay in the initial state even when its potential becomes larger than that of some other state. The situation can be elucidated by considering the uniaxial (Ising) ferromagnet, the simplest inergodic system below $`T_c`$. In fields smaller than the coercive one, it has two stable states: the equilibrium state with magnetization $`m`$ parallel to $`H`$, and a metastable one having $`m`$ antiparallel to $`H`$ and a greater potential.
The standard result of equilibrium statistical mechanics for the dependence of $`m`$ on $`H`$ in this case is a function with a jump at $`H=0`$ corresponding to the equilibrium states, while in real experiments a hysteresis loop is observed, in which metastable states are also present. Thus the description of real experiments also requires a description of the properties of metastable states. In this example this is trivial, at least in the mean-field approximation, but in spin glasses the description of the properties of a large number of metastable states is a rather difficult task. In particular, the description of the magnetic properties of a spin glass phase must include a set of functions $`m(H)`$ corresponding to the various metastable states, and their lines on the $`m-H`$ plane would fill some region around the origin. In a rather simplified form, the theoretical problem is to determine the boundaries of this inergodic region and the values of various thermodynamic quantities for all points inside it. Such theoretical results could describe a number of slow nonequilibrium processes in the inergodic spin glass phases. Meanwhile, all data about metastable states obtained via Parisi's ansatz reduce to the probability distribution of the 'overlaps' of magnetizations in various metastable states. It is presently not clear how this distribution can describe real experiments. Also, no information on the properties of the metastable states of the SK model has been obtained from the TAP equations; it was only established that their number is exponentially large. Studies of the other spin glass models have likewise not elucidated the physical characteristics of metastable states. The only exception is the Ising spin glass on the Bethe lattice, for which a numerical study of the internal field distribution has explicitly shown the existence of a number of metastable states at $`T=0`$. Generally speaking, the study of the properties of metastable states is not necessary for a description of experiments in inergodic systems, as such a description could be obtained from the study of nonequilibrium dynamics, which automatically incorporates the effects of their existence. Thus one can get the hysteresis loop as a response to a large, slowly varying field in a dynamic treatment of the uniaxial ferromagnet. But in doing this one must eliminate the large unobservable times of relaxation between metastable states. Such an elimination procedure was developed in the study of the Langevin dynamics of the SK model in zero field, which helped to establish the difference between the unobservable equilibrium susceptibility and the susceptibilities in the local metastable states, measured at real times smaller than that of relaxation over macroscopic barriers. It is still not clear how this procedure could be generalized to describe, in the SK model, the response to an alternating field of finite amplitude. So far, inergodic effects in finite fields have been described only in the simplest spherical spin glass model having two stable states. To summarize, we may say that the spin glass models considered until now are not sufficiently simple to obtain the thermodynamic properties of metastable states needed for the description of the inevitably nonequilibrium processes in spin glasses. Hence it seems worthwhile to find and to study simpler models allowing a more complete description of the physical properties in the inergodic glass phases.
Here we present one such model, which allows an analytical description of the thermodynamic characteristics of all metastable states at $`T=0`$ and near the transition point.

## 2 Hamiltonian of the model and its properties at $`T=0`$

The simplest treatment of metastable states is possible in the framework of mean-field models with infinite-range interactions, when it reduces to the determination of the local minima of the thermodynamic potential. Also, the random spin glass interaction should be sufficiently simple in all its realizations to make an analytical treatment possible. In Ising spin glasses the glass transition results from the competition between ferromagnetic and antiferromagnetic interactions, so the task is to imitate this competition using simple infinite-range interactions. Let us consider a system having $`N`$ Ising spins $`S_{i\alpha }=\pm 1`$, divided into $`N_b`$ blocks of $`N_s`$ spins each, $`N=N_bN_s`$. Here the index $`\alpha `$ is the block number and the index $`i`$ is the number of the spin inside the block. The magnetization of a block is $$m_\alpha =N_s^{-1}\sum_{i=1}^{N_s}S_{\alpha i}$$ (1) and the total magnetization is $$m=N^{-1}\sum_{\alpha =1}^{N_b}\sum_{i=1}^{N_s}S_{\alpha i}=N_b^{-1}\sum_{\alpha =1}^{N_b}m_\alpha $$ (2) Let us also introduce the 'antiferromagnetic' order parameter $$\mu _\alpha =m_\alpha -m$$ (3) The model Hamiltonian is a sum of ferromagnetic and 'antiferromagnetic' interactions and the external field term $$\mathcal{H}=-\frac{N}{2}Jm^2-\frac{N_s}{2}J_1\sum_{\alpha =1}^{N_b}\mu _\alpha ^2-NHm$$ (4) Here $`H`$ is the homogeneous external field, $`J>0`$ is the ferromagnetic exchange integral with a fixed value, and $`J_1>0`$ is the random 'antiferromagnetic' exchange integral. The term proportional to $`J_1`$ imitates the antiferromagnetic bonds distributed throughout a crystal, so we assume that $`N_b\gg 1`$. We also assume that $`N_s`$ diverges in the thermodynamic limit ($`N\to \infty `$) in order for the mean-field approximation to be valid. In general, for large $`N_b`$ the 'antiferromagnetic' term in Eq. 4 gives rise to a number of various types of ordering with $`m=0`$, so it would be more correct to call it the 'glass' term, all the more so as the block analog of the Edwards-Anderson order parameter $$q=N_b^{-1}\sum_{\alpha =1}^{N_b}m_\alpha ^2-m^2$$ can be represented as $$q=N_b^{-1}\sum_{\alpha =1}^{N_b}\mu _\alpha ^2$$ (5) Thus we may say that for $`N_b\gg 1`$ the model Hamiltonian, Eq. 4, describes the competition of ferromagnetic and 'glass' ordering. For $`N_s\to \infty `$ it is easy to find the (nonequilibrium) thermodynamic potential depending on $`m_\alpha `$: $$F\left(\mathbf{m}\right)=\mathcal{H}\left(\mathbf{m}\right)/N-TS\left(\mathbf{m}\right)$$ where $`S\left(\mathbf{m}\right)`$ is the entropy per spin: $$S\left(\mathbf{m}\right)=N^{-1}\ln \left[\mathrm{Tr}\prod_{\alpha =1}^{N_b}\delta _{N_sm_\alpha ,\sum_iS_{i\alpha }}\right]\simeq N_b^{-1}\sum_{\alpha =1}^{N_b}\left[\ln 2-\frac{1+m_\alpha }{2}\ln \left(1+m_\alpha \right)-\frac{1-m_\alpha }{2}\ln \left(1-m_\alpha \right)\right]$$ (6) For $`N_s\to \infty `$ the description of the equilibrium thermodynamics reduces to finding the lowest minimum of $`F\left(\mathbf{m}\right)`$, while the shallower minima correspond to metastable states.
The equations defining the extrema of $`F\left(\mathbf{m}\right)`$ are: $$T\,\mathrm{arctanh}\,m_\alpha +\left(J_1-J\right)m-J_1m_\alpha =H$$ (7) The minima correspond to the solutions of Eqs. 7 with a positive-definite Hessian $$\frac{\partial ^2F\left(\mathbf{m}\right)}{\partial m_\alpha \partial m_\beta }=\left[T/\left(1-m_\alpha ^2\right)-J_1\right]\delta _{\alpha \beta }+\left(J_1-J\right)/N$$ (8) It is easy to show that this simple model is inergodic at $`T=0`$. In this case Eqs. 7 become $$m_\alpha =\mathrm{sign}\left[H+\left(J-J_1\right)m+J_1m_\alpha \right]$$ (9) When $`\left|H\right|>\mathrm{max}(J,2J_1-J)`$ Eqs. 7 have the unique solution $`m_\alpha =\mathrm{sign}H`$, while at $`\left|H\right|<\mathrm{max}(J,2J_1-J)`$ they also have a number of solutions with arbitrary signs of $`m_\alpha `$, limited only by the condition $$\left[H+\left(J-J_1\right)m\right]^2<J_1^2$$ (10) All these solutions are stable, thus corresponding to the metastable states of the model. The total magnetization in these states acquires a set of discrete values $$m=\frac{2n}{N_b}-1$$ Here $`n`$ is the number of blocks with magnetization $`m_\alpha =1`$, $`n=0,\ldots ,N_b`$. There are $`\binom{N_b}{n}`$ states with a given $`m`$ which differ by permutations of $`m_\alpha `$. The total number of inhomogeneous metastable states could be up to $`2^{N_b}-2`$ for a given $`H`$. The energy per spin in these states is determined by their magnetization $$E=-\left(J-J_1\right)m^2/2-mH-J_1/2$$ and the entropy, Eq. 6, is zero. The equilibrium magnetization corresponding to the states with minimal energy is $`m_{eq}\left(H\right)=\mathrm{sign}H`$ for $`J>J_1`$ and $$m_{eq}\left(H\right)=2\sum_{n=1}^{N_b-1}\frac{n}{N_b}\theta \left(N_b^{-2}-\epsilon _n^2\right)+\mathrm{sign}\left(\epsilon _{N_b-1}-N_b^{-1}\right)$$ (11) when $`J<J_1`$. Here $`\epsilon _n\equiv \frac{H}{J_1-J}-\frac{2n}{N_b}+1`$, and $`\theta `$ is the Heaviside step function. The field dependences of the magnetization in the equilibrium and metastable states are shown in Fig. 1. The steps of the function $`m_{eq}\left(H\right)`$ for $`J<J_1`$ demonstrate the existence of first order phase transitions at the field values $$H_n=\left(\frac{2n+1}{N_b}-1\right)\left(J_1-J\right)$$ $`n=0,\ldots ,N_b`$, at which the upturns of the block magnetizations take place. Qualitatively, just this behavior of $`m_{eq}\left(H\right)`$ may be expected in the glass phase of a magnet with equal concentrations of ferromagnetic and antiferromagnetic bonds, while the case $`J>J_1`$ gives a picture proper to a magnet dominated by ferromagnetic bonds. Further on we will consider the most interesting case, in which the model Hamiltonian imitates a spin glass, so we assume that the probability distribution for $`J_1`$, $`P(J_1)`$, is zero for $`J_1<J`$ and has the form $$P(J_1)=\theta \left(J_1-J\right)W\left(J_1-J\right)$$ (12) In general, one may notice that averaging over the random interaction is superfluous when all thermodynamic parameters can be obtained for every random realization, as experimental data are not usually averaged over a number of samples. Still, in more complex models it is often possible to find only average equilibrium quantities, so it is interesting to compare them with the corresponding quantities in the metastable states. Thus, in the present model the magnetization in every metastable state does not depend on the field, and the magnetic susceptibility in each of them is zero at $`T=0`$.
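At $`T=0`$ the metastable states can be enumerated directly from Eqs. (9)-(10). The following minimal sketch is our illustration (the parameter values $`J=1`$, $`J_1=2`$, $`N_b=10`$ and all names are hypothetical choices): it lists the stable block magnetizations at a given field and picks the equilibrium one by energy.

```python
J, J1, N_b = 1.0, 2.0, 10      # J1 > J imitates the spin-glass regime

def metastable_magnetizations(H):
    """Stable values of m at T = 0 and the energy per spin of each."""
    states = []
    for n in range(N_b + 1):           # n = number of blocks with m_alpha = +1
        m = 2.0 * n / N_b - 1.0
        if n == N_b:                   # homogeneous up: sign(H + J) must be +1
            stable = H + J > 0
        elif n == 0:                   # homogeneous down: sign(H - J) must be -1
            stable = H - J < 0
        else:                          # mixed states: condition of Eq. (10)
            stable = (H + (J - J1) * m) ** 2 < J1 ** 2
        if stable:
            E = -(J - J1) * m * m / 2.0 - m * H - J1 / 2.0
            states.append((m, E))
    return states

H = 0.25
states = metastable_magnetizations(H)
m_eq = min(states, key=lambda s: s[1])[0]   # lowest-energy (equilibrium) state
print(f"{len(states)} metastable m values at H = {H}; m_eq = {m_eq:+.1f}")
```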
At the same time, averaging of Eq. 11 over $`J_1`$ gives for $`N_b\gg 1`$ $$m_{eq}\left(H\right)=H\int_{\left|H\right|}^{\infty }\frac{dJ^{\prime }}{J^{\prime }}W\left(J^{\prime }\right)+\mathrm{sign}H\int_0^{\left|H\right|}dJ^{\prime }W\left(J^{\prime }\right)$$ (13) $$\chi _{eq}=\frac{\partial m_{eq}\left(H\right)}{\partial H}=\int_{\left|H\right|}^{\infty }\frac{dJ^{\prime }}{J^{\prime }}W\left(J^{\prime }\right)$$ (14) As one can see from Fig. 1(a), the non-zero $`\chi _{eq}`$ appears due to the series of transitions between metastable states. Indeed, differentiation of Eq. 11 gives $$\chi _{eq}=\frac{\partial m_{eq}\left(H\right)}{\partial H}=\frac{2}{N_b\left(J_1-J\right)}\sum_{n=1}^{N_b-1}\delta \left(N_b^{-1}-\epsilon _n\right)$$ (15) When $`N_b\to \infty `$ Eq. 15 becomes, in the sense of distributions, $$\chi _{eq}=\frac{1}{\left(J_1-J\right)}\theta \left[\left(J_1-J\right)^2-H^2\right]$$ (16) The average value of this expression coincides with Eq. 14. Thus $`\chi _{eq}`$ is generally an unobservable quantity, as it describes the changes of $`m`$ in the series of transitions which take place only on the astronomically large time scale of overbarrier relaxation. What is more, there are no traces of the transitions in the limiting expression in Eq. 16, in which the number of metastable states goes to infinity ($`N_b\to \infty `$) in the thermodynamic limit. The indication of their presence via delta functions in $`\chi _{eq}`$ exists only when this number stays finite in the limit $`N\to \infty `$. Meanwhile, in the framework of the present model, $`\chi _{eq}`$ from Eq. 16 contains some information on the response of the inergodic system to a slowly varying external field. Thus the application of a slow AC field with amplitude greater than $`2J_1-J`$ would give a hysteresis loop, and $`\chi _{eq}`$ defines its slope. At the same time, an ordinary measurement of the susceptibility in small fields would give zero. Possibly $`\chi _{eq}`$ has a qualitatively similar meaning in real systems. Yet we must note that $`\chi _{eq}`$ in the present model is a non-self-averaging quantity, in the sense that, being constant before averaging, it becomes a function of $`H`$ after it, see Eq. 14. A still more dramatic effect is caused by the averaging of the nonlinear susceptibilities. It follows from Eq. 16 ($`k>1`$) $$\chi _{eq}^{\left(k\right)}\equiv \frac{\partial ^km_{eq}}{\partial H^k}=\frac{1}{J_1-J}\frac{\partial ^{k-2}}{\partial H^{k-2}}\left[\delta \left(J_1-J+H\right)-\delta \left(J_1-J-H\right)\right]$$ and the averaging of this equation gives $$\chi _{eq}^{\left(k\right)}\equiv \frac{\partial ^km_{eq}}{\partial H^k}=-\frac{\partial ^{k-2}}{\partial H^{k-2}}\left[\frac{W\left(\left|H\right|\right)}{H}\right]$$ (17) The absence of self-averaging of the magnetic susceptibilities seems to be a specific property of the present model, in which small fluctuations of $`J_1`$ can cause large deviations in $`m_{eq}`$, and could be absent in more realistic spin glass models. Let us note that the singularities of the non-averaged susceptibilities at $`H=\pm \left(J_1-J\right)`$ correspond to the points of the transitions from the inhomogeneous phase into the homogeneous one; thus $`J_1-J`$ has the meaning of the (non-averaged) Almeida-Thouless field. The corresponding anomalies of the averaged quantities would exist at finite $`H=\pm H_{AT}`$ if the function $`W\left(J^{\prime }\right)`$ in Eq. 12 has a bounded support, i.e., when $`W\left(J^{\prime }\right)=0`$ for $`J^{\prime }`$ greater than some $`\overline{J}>0`$ and $`W\left(J^{\prime }\right)>0`$ otherwise.
Then $`H_{AT}=\overline{J}`$ and the anomalies of $`\chi _{eq}^{\left(k\right)}`$ for $`H\to \pm H_{AT}`$ will be determined by the behavior of $`W\left(J^{\prime }\right)`$ at $`J^{\prime }\to \overline{J}`$. This behavior also determines how the block Edwards-Anderson order parameter, Eq. 5, goes to zero when $`H\to \pm H_{AT}`$. For $`N_b\to \infty `$ we get from Eq. 5 $$q_{eq}=1-m_{eq}^2=\theta \left(\overline{J}-\left|H\right|\right)\int_{\left|H\right|}^{\overline{J}}dJ^{\prime }W\left(J^{\prime }\right)\left(1-\frac{H^2}{J^{\prime 2}}\right)$$ In the general case ($`W\left(\overline{J}\right)<\infty `$) the transition into the spin glass phase with $`q_{eq}\neq 0`$ at $`\left|H\right|<H_{AT}`$ is not accompanied by divergencies of $`\chi _{eq}^{\left(k\right)}`$, in contrast with the SK model. This is because the upturn of the last block along the field is a first order transition, so the instabilities which could cause such divergencies do not occur in the present model. Let us also note that there are two ferromagnetic phases: an inergodic phase with a number of inhomogeneous metastable states and an ergodic one having the unique ferromagnetic state. The average value of the field corresponding to the transition point between these phases can be obtained by considering the (upper and lower) boundaries of the region where metastable states exist, which also represent the upper and lower branches of the hysteresis loop. They are (see Fig. 1(a)): $$m_\pm \left(H\right)=\frac{H\pm J_1}{J_1-J}\theta \left[\left(J_1-J\right)^2-\left(H\pm J_1\right)^2\right]+\mathrm{sign}\left(H\pm J_1\right)\theta \left[\left(H\pm J_1\right)^2-\left(J_1-J\right)^2\right]$$ The averaging of this expression over $`J_1`$ gives $$m_\pm \left(H\right)=\mathrm{sign}\left(H\pm J\right)+\theta \left(\mp H-J\right)\int_{\left(\mp H-J\right)/2}^{\infty }dJ^{\prime }W\left(J^{\prime }\right)\left(\frac{H\pm J}{J^{\prime }}\pm 2\right)$$ (18) For $`\left|H\right|`$ greater than some $`H_e`$ these branches coincide, thus indicating the transition into the ergodic phase. The condition defining $`H_e`$ is the vanishing of the integral in Eq. 18, so $`H_e`$ will be finite when $`W\left(J^{\prime }\right)`$ has a bounded support. In this case $$H_e=2\overline{J}+J=2H_{AT}+J$$ There exists a functional relation between $`m_\pm \left(H\right)`$ and $`m_{eq}\left(H\right)`$, Eq. 13, of the following form: $$m_\pm \left(H\right)=\mp \left[2\theta \left(\mp H-J\right)m_{eq}\left(\frac{\mp H-J}{2}\right)-1\right]$$ (19) From Eq. 19 it also follows that $$\frac{\partial m_\pm \left(H\right)}{\partial H}=\theta \left(\mp H-J\right)\chi _{eq}\left(\frac{\mp H-J}{2}\right)$$ (20) These relations are specific to the model under consideration but, probably, some relations between the field dependence of the average equilibrium magnetization and the hysteresis loop contour exist also in other spin glass models at $`T=0`$. For the simple 'rectangular' function $`W`$ $$W\left(J^{\prime }\right)=\theta \left(\overline{J}-J^{\prime }\right)/\overline{J}$$ we get $$q_{eq}=\theta \left(\overline{J}^2-H^2\right)\left(1-\frac{\left|H\right|}{\overline{J}}\right)^2$$ $$m_{eq}\left(H\right)=\theta \left(\overline{J}^2-H^2\right)\frac{H}{\overline{J}}\ln \left(\frac{e\overline{J}}{\left|H\right|}\right)+\theta \left(H^2-\overline{J}^2\right)\mathrm{sign}H$$ $$\overline{J}\chi _{eq}=\theta \left(\overline{J}^2-H^2\right)\ln \left(\frac{\overline{J}}{\left|H\right|}\right)$$ $$\chi _{eq}^{\left(k\right)}=\left(k-2\right)!\left(-H\right)^{1-k}/\overline{J}$$ The divergency of the magnetic susceptibilities at $`H=0`$ is a consequence of $`W\left(0\right)\neq 0`$.
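The rectangular-$`W`$ results can be verified by direct numerical averaging of Eq. (13). A small sketch (ours, assuming SciPy is available; $`\overline{J}=1`$ and $`H=0.3`$ are arbitrary test values):

```python
import numpy as np
from scipy.integrate import quad

Jbar = 1.0
W = lambda Jp: 1.0 / Jbar if 0 < Jp < Jbar else 0.0   # rectangular W

def m_eq(H):
    """Disorder-averaged equilibrium magnetization, Eq. (13)."""
    tail = quad(lambda Jp: W(Jp) / Jp, abs(H), Jbar)[0]
    head = quad(W, 0, abs(H))[0]
    return H * tail + np.sign(H) * head

H = 0.3
closed = (H / Jbar) * np.log(np.e * Jbar / abs(H))    # quoted closed form
print(m_eq(H), closed)                                # the two should agree
```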
They would be finite at zero field if $`W`$ goes to zero as some power of $`J^{\prime }`$ or faster when $`J^{\prime }\to 0`$. For example, averaging with the 'triangle' function $$W\left(J^{\prime }\right)=2J^{\prime }\theta \left(\overline{J}-J^{\prime }\right)/\overline{J}^2$$ (21) gives the following results $$q_{eq}=\theta \left(\overline{J}^2-H^2\right)\left(1+\frac{H^2}{\overline{J}^2}\ln \frac{H^2}{e\overline{J}^2}\right)$$ $$m_{eq}\left(H\right)=\theta \left(\overline{J}^2-H^2\right)\frac{H}{\overline{J}}\left(2-\frac{\left|H\right|}{\overline{J}}\right)+\theta \left(H^2-\overline{J}^2\right)\mathrm{sign}H$$ $$\overline{J}\chi _{eq}=\theta \left(\overline{J}^2-H^2\right)2\left(1-\frac{\left|H\right|}{\overline{J}}\right)$$ $$\chi _{eq}^{\left(k\right)}=0,\qquad k>1$$ The averaging of the thermodynamic parameters of the metastable states existing inside the hysteresis loop is trivial: their magnetizations do not depend on $`H`$ and their susceptibilities are zero. Also $`q=1-m^2`$ and $`S=0`$ in all states. But we must note that the equilibrium entropy is not strictly zero, as for a given $`m`$ there are $`\binom{N_b}{N_b\left(1-m\right)/2}`$ states with equal potentials, so the configurational entropy term $$S_{conf}\left(m\right)=N^{-1}\ln \binom{N_b}{N_b\left(1-m\right)/2}$$ is added to the expression in Eq. 6. But in the thermodynamic limit $`S_{conf}\left(m\right)`$ goes to zero as $`N_s^{-1}`$.

## 3 Thermodynamics near the transition

Stable inhomogeneous solutions of the equations of state, Eq. 7, appear at $`T<J_1`$. So in the case considered here ($`J_1>J`$) there is a second order phase transition from the homogeneous paramagnetic phase into the inhomogeneous inergodic spin glass one at $`T=J_1`$, $`H=0`$. Let us consider the thermodynamics of the model in the vicinity of this transition assuming $$m_\alpha \ll 1$$ (22) In this case Eqs. 7 acquire the form: $$\tau _1m_\alpha +\left(\tau -\tau _1\right)m+m_\alpha ^3/3=h$$ (23) Here $`h=H/T`$, $`\tau _1=1-J_1/T`$, $`\tau =1-J/T`$, $`\tau _1<\tau `$. The Hessian, Eq. 8, becomes in this region $$T^{-1}\frac{\partial ^2F\left(\mathbf{m}\right)}{\partial m_\alpha \partial m_\beta }=\left(\tau _1+m_\alpha ^2\right)\delta _{\alpha \beta }+\left(\tau -\tau _1\right)/N$$ (24) It follows from Eq. 22 and Eq. 23 that $$h\ll 1,\qquad \tau _1\ll 1,\qquad \tau \ll 1$$ For these conditions to be fulfilled for every random $`J_1`$ we must assume that $`W\left(J^{\prime }\right)`$ in Eq. 12 has a sufficiently narrow bounded support, that is, the possible values of $`J_1-J`$ must be less than some $`\overline{J}>0`$ obeying the condition $$\overline{J}\ll J$$ When $`\tau _1>0`$ Eqs. 23 have a unique homogeneous solution. Let us denote it as $`m_0`$. It does not depend on $`\tau _1`$ and obeys the equation $$\tau m_0+m_0^3/3=h$$ (25) When $`\tau _1<0`$ Eqs. 23 have $`3^{N_b}-3`$ inhomogeneous solutions besides $`m_0`$. As all $`m_\alpha `$ obey the same cubic equation, they can acquire only three different values, which we denote as $`\tilde{m}_s`$, $`s=1,2,3`$. Then all inhomogeneous solutions can be characterized by the three numbers $`n_s\le N_b`$, $$\sum_sn_s=N_b$$ which show how many $`m_\alpha `$ have the value $`\tilde{m}_s`$. There are $`\frac{N_b!}{n_1!n_2!n_3!}`$ solutions which differ by permutations of $`m_\alpha `$, and the total number of solutions is $$\sum_{n_s\le N_b}\frac{N_b!}{n_1!n_2!n_3!}=3^{N_b}-3$$ But only $`2^{N_b}-2`$ of them can be stable. Indeed, the Hessian, Eq. 24, has three eigenvalues equal to $`\tau _1+\tilde{m}_s^2`$ with degeneracy $`n_s-1`$, which correspond to the eigenvectors having a zero sum of components.
There are also three non-degenerate eigenvalues, which are the solutions of the equation $$1+\frac{\tau -\tau _1}{N_b}\sum_s\frac{n_s}{\tau _1+\tilde{m}_s^2-\lambda }=0$$ (26) Using Viete's theorem for Eq. 23, according to which $$\sum_s\tilde{m}_s=0$$ (27) $$\sum_{s<s^{\prime }}\tilde{m}_s\tilde{m}_{s^{\prime }}=3\tau _1$$ (28) we can get the relation $$\sum_s\frac{1}{\tau _1+\tilde{m}_s^2}=0$$ It shows that all three eigenvalues $`\tau _1+\tilde{m}_s^2`$ cannot be positive simultaneously, so the stable solutions must have at least one of the numbers $`n_s`$ equal to 0 or 1. But if all $`n_s>0`$, then one of the solutions of Eq. 26 becomes negative for large $`N_b\gg 1`$. Thus the stable solutions must have one of the $`n_s`$ equal to zero. Further on we will consider just these solutions, putting $`n_3=0`$. The stability condition for them reduces to the single inequality $$\tau _1+\tilde{m}_3^2<0$$ (29) It follows from Eq. 27 and Eq. 28 that $`\tilde{m}_s`$ can be represented in the following form $$\tilde{m}_1=2\left(-\tau _1\right)^{1/2}\cos \left(\phi -\frac{\pi }{6}\right)$$ $$\tilde{m}_2=-2\left(-\tau _1\right)^{1/2}\cos \left(\phi +\frac{\pi }{6}\right)$$ $$\tilde{m}_3=-2\left(-\tau _1\right)^{1/2}\sin \phi $$ (30) so the stability condition, Eq. 29, is equivalent to the inequality $$\left|\phi \right|<\pi /6$$ (31) It follows from the definition of $`m`$: $$m=\nu _1\tilde{m}_1+\nu _2\tilde{m}_2$$ Here $$\nu _s=\frac{n_s}{N_b}$$ (32) $$\nu _1+\nu _2=1$$ so $$m=\left(-\tau _1\right)^{1/2}\left(\sqrt{3}\mathrm{\Delta }\cos \phi +\sin \phi \right)$$ (33) $$\mathrm{\Delta }=\nu _1-\nu _2$$ Inserting Eq. 30 into Eq. 23 we get $$3\left(\tau -\tau _1\right)m=3h-2\left(-\tau _1\right)^{3/2}\sin 3\phi $$ (34) Excluding $`m`$ from Eq. 33 and Eq. 34 we obtain the equation for $`\phi `$: $$2\left(-\tau _1\right)^{3/2}\sin 3\phi +3\left(\tau -\tau _1\right)\left(-\tau _1\right)^{1/2}\left(\sqrt{3}\mathrm{\Delta }\cos \phi +\sin \phi \right)=3h$$ (35) For all $`\tau >\tau _1`$ the left side of Eq. 35 is a monotonously growing function of $`\phi `$ for $`\left|\phi \right|<\pi /6`$. Hence, there is only one stable solution for $`\tilde{m}_s`$ at a given $`\mathrm{\Delta }`$. There are $`\binom{N_b}{n_1}`$ metastable states corresponding to this solution, which differ by $`m_\alpha `$ permutations. The explicit solution of Eq. 35 can be found for $`\mathrm{\Delta }=0`$, when it becomes cubic. In the limiting cases $`\mathrm{\Delta }=\pm 1`$ Eq. 35 also reduces to a cubic one for $`\tilde{m}_1`$ or $`\tilde{m}_2`$, which coincides with Eq. 25. In the general case Eq. 35 and Eq. 33 (or Eq. 34) give a parametric representation of the dependence of the homogeneous magnetization in the metastable states with a given $`\mathrm{\Delta }`$ on $`\tau `$, $`\tau _1`$ and $`h`$.
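The trigonometric parametrization is easy to check numerically. The sketch below (ours, with arbitrary test values of $`\tau `$, $`\tau _1`$, $`\mathrm{\Delta }`$ and $`\phi `$) verifies that the three roots in Eq. (30) satisfy Viete's relations (27)-(28) and that each of them solves the cubic Eq. (23), with $`m`$ and $`h`$ taken from Eqs. (33)-(34):

```python
import numpy as np

tau, tau1, Delta, phi = -0.02, -0.05, 0.4, 0.3   # test values, |phi| < pi/6
a = np.sqrt(-tau1)
m1 = 2 * a * np.cos(phi - np.pi / 6)
m2 = -2 * a * np.cos(phi + np.pi / 6)
m3 = -2 * a * np.sin(phi)
print("sum of roots:", m1 + m2 + m3)                           # ~0, Eq. (27)
print("pair products:", m1*m2 + m1*m3 + m2*m3, "vs", 3 * tau1)  # Eq. (28)
m = a * (np.sqrt(3) * Delta * np.cos(phi) + np.sin(phi))        # Eq. (33)
h = (tau - tau1) * m + (2.0 / 3.0) * a ** 3 * np.sin(3 * phi)   # Eq. (34)
# each root must solve tau1*x + (tau - tau1)*m + x**3/3 = h, Eq. (23)
for x in (m1, m2, m3):
    print("residual:", tau1 * x + (tau - tau1) * m + x ** 3 / 3 - h)
```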
The parameter $`\phi `$ can be excluded from these equations, with the result $$\left[\left(3\mathrm{\Delta }^2+1\right)^2\tau -3\left(1-3\mathrm{\Delta }^2\right)\left(1-\mathrm{\Delta }^2\right)\tau _1\right]m+\frac{8\left(9\mathrm{\Delta }^2-1\right)}{3\left(3\mathrm{\Delta }^2+1\right)}m^3+2\sqrt{3}\frac{\mathrm{\Delta }\left(1-\mathrm{\Delta }^2\right)}{3\mathrm{\Delta }^2+1}\left[\left(3\mathrm{\Delta }^2+1\right)\tau _1+4m^2\right]\left[-\left(3\mathrm{\Delta }^2+1\right)\tau _1-m^2\right]^{1/2}=\left(3\mathrm{\Delta }^2+1\right)^2h$$ (36) From the stability condition, Eq. 31, and Eq. 34 it follows that a solution of Eq. 36 is stable in the region $$9\left[\left(\tau -\tau _1\right)m-h\right]^2<-4\tau _1^3$$ (37) which is a band on the $`m-h`$ plane. The magnetization is a monotonously growing function of $`h`$ and $`\mathrm{\Delta }`$ inside this band, so the field dependences of the magnetization can be represented as a set of non-crossing lines bounded from above and below by the $`m_0(h)`$ line, as shown in Fig. 2. The other thermodynamic parameters of the metastable states can be obtained by differentiation of the thermodynamic potential, which near the transition has the form $$12F/T=6\left(\tau -\tau _1\right)m^2+6\tau _1\sum_s\nu _s\tilde{m}_s^2+\sum_s\nu _s\tilde{m}_s^4-12hm-12\ln 2$$ (38) Expressed via $`\phi `$ these parameters are $$q=3\tau _1\left(\mathrm{\Delta }^2-1\right)\cos ^2\phi $$ $$T^{-1}\chi ^{-1}=\tau -\tau _1\left[1+2\cos 3\phi /\left(\cos \phi -\sqrt{3}\mathrm{\Delta }\sin \phi \right)\right]$$ $$S=\ln 2+\tau _1\left(2+\cos 2\phi +\sqrt{3}\mathrm{\Delta }\sin 2\phi \right)/2$$ For the heat capacity we get a rather more cumbersome expression $$C=\frac{2\tau _1\left(\sqrt{3}\mathrm{\Delta }\sin \phi \cos \phi -2\cos 3\phi \right)+3\left(\tau -\tau _1\right)\left(1-\mathrm{\Delta }^2\right)\cos \phi }{2\left(\cos \phi -\sqrt{3}\mathrm{\Delta }\sin \phi \right)-4\tau _1\cos \phi }$$ In spite of the absence of an explicit expression for $`\phi `$ as a function of $`h`$, $`\tau `$ and $`\tau _1`$, the above formulae allow one to get some notion about the field and temperature dependences of these quantities. Thus at the boundaries of the stability region, $`\phi =\pm \pi /6`$, or at $$h=\left(-\tau _1\right)^{1/2}\left(\tau -\tau _1\right)\left(3\mathrm{\Delta }\pm 1\right)/2\pm 2\left(-\tau _1\right)^{3/2}/3$$ (39) $`q`$ and $`\chi ^{-1}`$ have their lowest values $$\chi ^{-1}=J_1-J$$ $$q=9\tau _1\left(\mathrm{\Delta }^2-1\right)/4$$ (40) while the entropy and heat capacity are $$S=\ln 2+3\tau _1\left(1\pm \mathrm{\Delta }\right)/2$$ $$C=3\left(1\pm \mathrm{\Delta }\right)/2-\tau _1/\left(\tau -\tau _1\right)$$ (41) It follows from Eq. 39 that metastable states exist when $$\left|h\right|<\tilde{h}_e\equiv 2\left(-\tau _1\right)^{1/2}\left(\tau -\frac{4}{3}\tau _1\right)$$ (42) When $`h`$ goes to $`\pm \tilde{h}_e`$ the more homogeneous states with $`\mathrm{\Delta }\to \pm 1`$ stay stable and their magnetization tends to $`m_0\left(\pm \tilde{h}_e\right)=\pm 2\left(-\tau _1\right)^{1/2}`$. However, the limiting values of the magnetic susceptibility, entropy and heat capacity differ from those in the homogeneous state: $`\chi _0^{-1}=J_1\left(\tau +m_0^2\right)`$, $`S_0=\ln 2-m_0^2/2`$, $`C=m_0^2/\left(\tau +m_0^2\right)`$.
In the middle of the stability band (at $`\phi =0`$, or $`h=\mathrm{\Delta }\left(-3\tau _1\right)^{1/2}\left(\tau -\tau _1\right)`$) we get: $`q=3\tau _1\left(\mathrm{\Delta }^2-1\right)`$, $`\chi ^{-1}=J_1\left(1-2\tau _1\right)-J`$, $`S=\ln 2+3\tau _1/2`$, $`C=\frac{3}{2}\left(1-\mathrm{\Delta }^2\frac{\tau -\tau _1}{\tau -3\tau _1}\right)`$. In this case, owing to the diminishing of the inhomogeneity, $`m`$, $`\chi `$, $`S`$ and $`C`$ tend to the corresponding values of the homogeneous phase when $`h\to \pm \tilde{h}_{AT}`$, where $$\tilde{h}_{AT}=\left(-3\tau _1\right)^{1/2}\left(\tau -\tau _1\right)$$ (43) The Almeida-Thouless field $`\tilde{h}_{AT}`$, Eq. 43, determines (to the order $`N_b^{-1}`$) the point of the transition into the homogeneous phase. To show this, let us find the values $`\mathrm{\Delta }_{eq}`$ corresponding to the states with the lowest potential. Differentiating $`F`$, Eq. 38, over $`\mathrm{\Delta }`$ and using Eqs. 23, 27 and Eq. 28 we get $$\frac{\partial F}{\partial \mathrm{\Delta }}=T\tilde{m}_3\left(\tilde{m}_1-\tilde{m}_2\right)^3/24$$ $$\frac{\partial ^2F}{\partial \mathrm{\Delta }^2}=\frac{T\left(\tau -\tau _1\right)\left(\tilde{m}_1-\tilde{m}_2\right)^2}{8\left[1+\sum_s\frac{\nu _s}{\tau +\tilde{m}_s^2}\right]}$$ Thus the states with $`\tilde{m}_3=0`$ or, equivalently, $`\phi =0`$ (cf. Eq. 30) have the lowest potential. One can see that Eq. 35 has the solution $`\phi =0`$ when $`\mathrm{\Delta }=h/\tilde{h}_{AT}`$, which is possible at $`h^2<\tilde{h}_{AT}^2`$. When $`h^2>\tilde{h}_{AT}^2`$, $`F\left(\mathrm{\Delta }\right)`$ has no minima inside the region $`\mathrm{\Delta }^2<1`$ in which it is defined, and the minimal values occur at its boundaries, $`\mathrm{\Delta }_{eq}=\mathrm{sign}H`$. So the transition into the homogeneous state takes place at $`h=\pm \tilde{h}_{AT}`$. As $`\mathrm{\Delta }`$ is a rational number of the form $`2n/N_b-1`$ (cf. Eq. 32), it cannot be exactly equal to $`h/\tilde{h}_{AT}`$ at all $`h^2<\tilde{h}_{AT}^2`$.
Hence $`\mathrm{\Delta }_{eq}`$ is defined so that $`\left|\mathrm{\Delta }-h/\tilde{h}_{AT}\right|`$ is minimal, and it can be represented as $$\mathrm{\Delta }_{eq}=\sum_{n=1}^{N_b-1}\left(\frac{2n}{N_b}-1\right)\theta \left(N_b^{-2}-\tilde{\epsilon }_n^2\right)+\mathrm{sign}H\,\theta \left[h^2-\left(\frac{N_b-1}{N_b}\right)\tilde{h}_{AT}^2\right]$$ $$\tilde{\epsilon }_n\equiv \frac{h}{\tilde{h}_{AT}}-\frac{2n}{N_b}+1$$ Inserting this $`\mathrm{\Delta }_{eq}`$ into Eq. 35 we get the corresponding values of $`\phi _{eq}`$ at $`h^2<\tilde{h}_{AT}^2`$: $$\phi _{eq}=\sqrt{3}\frac{\tau -\tau _1}{\tau -3\tau _1}\sum_{n=1}^{N_b-1}\tilde{\epsilon }_n\theta \left(N_b^{-2}-\tilde{\epsilon }_n^2\right)$$ Inserting $`\mathrm{\Delta }_{eq}`$ and $`\phi _{eq}`$ into the parametric representations of $`q`$ and $`m`$ we obtain the equilibrium values of these quantities $$q_{eq}=-3\tau _1\left(1-h^2/\tilde{h}_{AT}^2\right)$$ $$m_{eq}=\frac{h}{\tau -\tau _1}\theta \left[\left(\frac{N_b-1}{N_b}\right)\tilde{h}_{AT}^2-h^2\right]-\frac{2\sqrt{3}\left(-\tau _1\right)^{3/2}}{\tau -3\tau _1}\sum_{n=1}^{N_b-1}\tilde{\epsilon }_n\theta \left(N_b^{-2}-\tilde{\epsilon }_n^2\right)+m_0\theta \left[h^2-\left(\frac{N_b-1}{N_b}\right)\tilde{h}_{AT}^2\right]$$ Differentiating $`m_{eq}`$ over $`h`$ we get the equilibrium susceptibility $$\chi _{eq}=\frac{1}{\tau -3\tau _1}\theta \left[\left(\frac{N_b-1}{N_b}\right)\tilde{h}_{AT}^2-h^2\right]-\frac{4\tau _1}{N_b\left(\tau -\tau _1\right)\left(\tau -3\tau _1\right)}\sum_{n=1}^{N_b-1}\delta \left(N_b^{-1}-\tilde{\epsilon }_n\right)+\frac{1}{\tau +m_0^2}\theta \left[h^2-\left(\frac{N_b-1}{N_b}\right)\tilde{h}_{AT}^2\right]$$ The equilibrium entropy can be obtained by differentiation of the equilibrium potential, which to order $`\tilde{\epsilon }_n^2`$ is $$F_{eq}=F\left(\mathrm{\Delta }=h/\tilde{h}_{AT}\right)-TS_{conf}$$ where the configurational entropy $`S_{conf}`$ is determined by the logarithm of the number of states with the same potential $`F`$, $$S_{conf}=N^{-1}\ln \binom{N_b}{N_b\left(1-\mathrm{\Delta }_{eq}\right)/2}$$ As at $`T=0`$, $`S_{conf}`$ is of the order $`N_s^{-1}`$ and can be neglected, so $$S_{eq}=\ln 2+\frac{3}{2}\tau _1\theta \left[\left(\frac{N_b-1}{N_b}\right)\tilde{h}_{AT}^2-h^2\right]-\frac{2\sqrt{3}\left(-\tau _1\right)^{1/2}h}{\tau -3\tau _1}\sum_{n=1}^{N_b-1}\tilde{\epsilon }_n\theta \left(N_b^{-2}-\tilde{\epsilon }_n^2\right)-\frac{m_0^2}{2}\theta \left[h^2-\left(\frac{N_b-1}{N_b}\right)\tilde{h}_{AT}^2\right]$$ For the equilibrium heat capacity we get $$C_{eq}=\left(\frac{3}{2}+\frac{h^2}{\tau _1\left(\tau -\tau _1\right)\left(\tau -3\tau _1\right)}\right)\theta \left[\left(\frac{N_b-1}{N_b}\right)\tilde{h}_{AT}^2-h^2\right]-\frac{2h^2}{N_b\tau _1\left(\tau -\tau _1\right)\left(\tau -3\tau _1\right)}\sum_{n=1}^{N_b-1}\delta \left(N_b^{-1}-\tilde{\epsilon }_n\right)+\frac{m_0^2}{\tau +m_0^2}\theta \left[h^2-\left(\frac{N_b-1}{N_b}\right)\tilde{h}_{AT}^2\right]$$ The averaging of these expressions over $`J_1`$ gives at large $`N_b`$ $$q_{eq}=\int_{H/m_0}^{\overline{J}}dJ^{\prime }W\left(J^{\prime }\right)\left(3\frac{J^{\prime }}{J}-3\tau -\frac{H^2}{J^{\prime 2}}\right)$$ $$m_{eq}=H\int_{H/m_0}^{\overline{J}}\frac{dJ^{\prime }}{J^{\prime }}W\left(J^{\prime }\right)+m_0\int_0^{H/m_0}dJ^{\prime }W\left(J^{\prime }\right)$$
$$\chi _{eq}=\int_{H/m_0}^{\overline{J}}\frac{dJ^{\prime }}{J^{\prime }}W\left(J^{\prime }\right)+\frac{1}{J\left(\tau +m_0^2\right)}\int_0^{H/m_0}dJ^{\prime }W\left(J^{\prime }\right)$$ $$S_{eq}=\ln 2+\frac{3}{2}\int_{H/m_0}^{\overline{J}}dJ^{\prime }W\left(J^{\prime }\right)\left(\tau -\frac{J^{\prime }}{J}\right)-\frac{m_0^2}{2}\int_0^{H/m_0}dJ^{\prime }W\left(J^{\prime }\right)$$ $$C_{eq}=\frac{3}{2}\int_{H/m_0}^{\overline{J}}dJ^{\prime }W\left(J^{\prime }\right)+\frac{m_0^2}{\tau +m_0^2}\int_0^{H/m_0}dJ^{\prime }W\left(J^{\prime }\right)$$ In the derivation of these expressions we have used the equivalence of the condition $`h^2<\tilde{h}_{AT}^2`$ and the inequality $`m_0^2\left(h\right)<m_0^2\left(h_{AT}\right)=-3\tau _1`$, where $`m_0\left(h\right)`$ is the solution of Eq. 25 such that $`m_0\left(h\right)h>0`$. In its turn, from Eq. 25 it follows that the last inequality is equivalent to $`J_1-J>H/m_0`$. Evidently, the average equilibrium parameters transfer continuously into the corresponding values of the homogeneous phase at $`H>\overline{J}m_0`$. Thus the average Almeida-Thouless field $`H_{AT}`$ is defined as the solution of the equation $`H_{AT}=\overline{J}m_0\left(H_{AT}\right)`$ or its equivalent $$m_0^2\left(H_{AT}\right)=3\sigma $$ (44) where $`\sigma =\overline{J}/T-\tau =\left(J+\overline{J}\right)/T-1`$. Let us recall that from Eq. 22 it follows that $`J\gg \overline{J}`$. The solution of Eq. 44 exists when $`\sigma >0`$, i.e., $`T<T_{sg}=J+\overline{J}`$. For $`T\to T_{sg}`$, $`\sigma \ll \overline{J}/J`$, we get $$H_{AT}\approx \overline{J}\left(3\sigma \right)^{1/2}$$ and for $`\tau <0`$, $`-\tau \gg \overline{J}/J`$, $$H_{AT}\approx \overline{J}\left(-3\tau \right)^{1/2}$$ Let us further consider the average boundaries of the inergodic region on the $`m-h`$ plane, i.e., the branches of the average hysteresis loop $$m_\pm \left(H\right)=m_0\theta \left(m_0^2-4\sigma \right)+m_0^\pm \theta \left(-\tau \right)\theta \left(\pm h_\pm \right)\theta \left[4\sigma -\left(m_0^\pm \right)^2\right]\theta \left[\left(m_0^\pm \right)^2-\sigma \right]+\theta \left(\pm h_\pm \right)\theta \left[\sigma -\left(m_0^\pm \right)^2\right]A_\pm \left[\tau +\left(m_0^\pm \right)^2\right]+\theta \left(\mp h_\pm \right)\theta \left[4\sigma -\left(m_0^\pm \right)^2\right]A_\pm \left(\tau +m_0^2/4\right)$$ (45) Here $`h_\pm =h\pm \theta \left(-\tau \right)2\left(-\tau \right)^{3/2}/3`$, $$A_\pm \left(z\right)=J\int_z^{\overline{J}/T}\frac{dx}{x}W\left(Jx\right)\left[h\pm \frac{2}{3}\left(x-\tau \right)^{3/2}\right]+Jm_0^\pm \int_0^zdxW\left(Jx\right)$$ and $`m_0^+`$ and $`m_0^{-}`$ are the maximal and minimal solutions of Eq. 25, respectively, which exist at $`\tau <0`$, $`4\tau +m_0^2<0`$. When $`\tau >0`$, or $`\tau <0`$ and $`4\tau +m_0^2>0`$, then $`m_0^\pm =m_0`$. Eq. 45 shows that the branches of the average hysteresis loop coincide when $`H^2>H_e^2`$, $`H_e`$ being the solution of the equation $$m_0^2\left(H_e\right)=4\sigma $$ For $`T\to T_{sg}`$, $`\sigma \ll \overline{J}/J`$, we get $$H_e\approx 2\overline{J}\sigma ^{1/2}$$ and for $`\tau <0`$, $`-\tau \gg \overline{J}/J`$, $`H_e`$ almost coincides with the coercive field for the homogeneous solution $$H_e\approx 2J\left(-\tau \right)^{3/2}/3$$ so the hysteresis loop becomes similar to that of an ordinary ferromagnet.
Let us present the explicit expressions for the functions $`A_\pm `$ in Eq. 45 for the 'triangular' function $`W`$, Eq. 21: $$A_\pm \left[\tau +\left(m_0^\pm \right)^2\right]=\frac{J^2}{\overline{J}^2}\left[2h\sigma \pm \frac{8}{15}\sigma ^{5/2}+m_0^\pm \left(\tau ^2-\frac{1}{5}\left(m_0^\pm \right)^4\right)\right]$$ $$A_\pm \left[\tau +m_0^2/4\right]=\frac{J^2}{\overline{J}^2}\left[2h\sigma \pm \frac{8}{15}\sigma ^{5/2}+m_0\left(\tau ^2-\frac{7}{80}m_0^4\right)\right]$$ For the same $`W`$ the average equilibrium parameters at $`H^2<H_{AT}^2`$ are $$q_{eq}=2\sigma \left[1-\left(\frac{H}{m_0\overline{J}}\right)^3\right]-\tau \left[1-3\left(\frac{H}{m_0\overline{J}}\right)^2+2\left(\frac{H}{m_0\overline{J}}\right)^3\right]+2\left(\frac{H}{\overline{J}}\right)^2\ln \left(\frac{H}{m_0\overline{J}}\right)$$ $$m_{eq}=2\frac{H}{\overline{J}}-\frac{H^2}{m_0\overline{J}^2}$$ $$\chi _{eq}=\frac{2}{\overline{J}}-\frac{2H}{m_0\overline{J}^2}+\frac{H^2}{m_0^2\overline{J}^2J\left(\tau +m_0^2\right)}$$ $$S_{eq}=\ln 2-\sigma \left[1-\left(\frac{H}{m_0\overline{J}}\right)^3\right]+\frac{\tau }{2}\left[1-3\left(\frac{H}{m_0\overline{J}}\right)^2+2\left(\frac{H}{m_0\overline{J}}\right)^3\right]-\frac{1}{2}\left(\frac{H}{\overline{J}}\right)^2$$ $$C_{eq}=\frac{3}{2}\left[1-\left(\frac{H}{m_0\overline{J}}\right)^2\right]+\frac{H^2}{\overline{J}^2\left(\tau +m_0^2\right)}$$ Let us note once more that the average equilibrium parameters are generally unobservable quantities. Probably, experimental values rather close to them are obtained after cooling in small external fields (the field-cooled (FC) regime) for $`T`$ near $`T_{sg}`$, when the barriers between metastable states are relatively small and the system can relax into the lowest state (or one close to it) under sufficiently slow cooling. In the zero-field-cooled (ZFC) regime, when the field is applied after cooling below $`T_{sg}`$ in zero field, the observed thermodynamic parameters would differ from the equilibrium ones, as the system would at first be trapped in the state with $`\mathrm{\Delta }=0`$ and will stay in it if the applied field does not exceed $`h_c=\left(-\tau _1\right)^{1/2}\left(\tau -4\tau _1/3\right)/2`$, cf. Eq. 39. Thus at $`h<h_c`$ the ZFC parameters are those of the $`\mathrm{\Delta }=0`$ metastable states. When the applied field $`h>h_c`$, the system relaxes into the metastable state at the boundary of the stability region (on the lower branch of the hysteresis loop) having some $`\mathrm{\Delta }>0`$ which is a solution of Eq. 39. Inserting this $`\mathrm{\Delta }`$ into Eq. 40 and Eq. 41, we get the values of the thermodynamic parameters to which the observed quantities would relax in the ZFC regime at $`h>h_c`$. Similarly, the parameters of the metastable states define the other quantities which are determined in slow nonequilibrium processes in the spin glass phase, such as the thermo-remanent magnetization, $`m_{TRM}`$, which remains after the FC process and subsequent switching off of the field, and the isothermal remanent magnetization, $`m_{IRM}`$, remaining after the ZFC process followed by the application, for some time (longer than the intravalley relaxation time), of an external field. Thus $`m_{IRM}`$ is apparently nonzero only at $`h>h_c`$, and an equation defining it can be obtained by putting $`h=0`$ in Eq. 36 and inserting in this equation the value of $`\mathrm{\Delta }`$ obtained from Eq. 39. The equation for $`m_{TRM}`$ can also be obtained from Eq. 36 by putting in it $`h=0`$ and $`\mathrm{\Delta }=\mathrm{min}(1,h/h_{AT})`$.
## 4 Conclusions

The most remarkable feature of the model considered is the possibility to imitate the properties of such complex systems as spin glasses with the aid of a very simple Hamiltonian. It is commonly believed that the existence of a number of metastable states in spin glasses is caused by the frustration of the random Hamiltonian, that is, by the absence of a unique spin configuration providing the minimal energy. In the present model the degeneracy of the ground state results from the permutational symmetry of the Hamiltonian instead of frustration. Nevertheless, there also exists a transition into the inergodic phase, and its magnetic properties appear to be very similar to those of real spin glasses, including a set of transitions between metastable states and an inclined hysteresis loop resulting from their presence. One may suppose that more realistic random Hamiltonians can also have some approximate permutational symmetry and a corresponding quasidegeneracy of the ground state, perhaps more essential than that caused by frustration. We may note that the present model implies a definite mechanism for the transitions between metastable states in a field, namely, spin-flop transitions caused by the antiferromagnetic interaction between macroscopically large spin blocks. It seems rather probable that in some short-range Ising models of spin glasses it is possible to distinguish many clusters with mostly antiferromagnetic interactions at the boundaries and relatively weak interactions inside them. Still, it is rather evident that there can also be many other bond configurations in which degenerate spin configurations can exist at some field values. Thus it is hard to say to what extent spin-flop transitions are characteristic of real spin glasses, and whether the Hamiltonian, Eq. 4, is a reasonable approximation for some random short-range Hamiltonian with a specific type of disorder. Still, such a possibility seems rather probable in view of the similarity of the properties of the model to those of some real disordered magnets. Finally, we may state that, in spite of the qualitative nature of the model, it allows one to get some notion about the character of the theoretical results relevant for the description of real experiments in inergodic systems. It shows what these results may look like and how the thermodynamic parameters of metastable states are related to the characteristics of nonequilibrium processes in spin glass phases. This work was supported by the Russian Foundation for Basic Research, Grants No. 98-02-18069 and 97-02-17878.
# Electron-hole liquid in the hexaborides

## Abstract

We investigate the energetics of the electron-hole liquid in stoichiometric divalent metal hexaborides. The ground state energy of an electron-hole plasma is calculated using RPA and Hubbard schemes and compared to the binding energy of a single exciton. Intervalley scattering processes play an important role in increasing this binding energy and stabilizing a dilute Bose gas of excitons.

The remarkable discovery of the high-temperature weak ferromagnetism in La-doped CaB<sub>6</sub>, SrB<sub>6</sub>, and BaB<sub>6</sub> has opened a new page in the physics of magnetism. In our previous work we attributed this effect to an unusual ground state of undoped divalent hexaborides. This so-called excitonic insulator is characterized by a condensation of bound electron-hole pairs (excitons). An excitonic instability in narrow gap semiconductors or semimetals was predicted and studied theoretically in the mid-sixties. However, its occurrence in any real compound is still controversial. Band structure calculations predict a small direct overlap in divalent-metal hexaborides between a boron-derived valence band and a cation-derived conduction band at three equivalent $`X`$-points in the cubic Brillouin zone. This feature, together with the absence of direct electric-dipole transitions between the two bands, is extremely favorable for electron-hole pairing and leads to the excitonic instability. Weak ferromagnetism then develops in a triplet excitonic insulator due to spontaneous time-reversal symmetry breaking under doping. There are many open questions in the physics of excitonic ferromagnetism. Some of them were raised recently in Refs. . In the present work we, however, want to shift emphasis from the (un)doped excitonic state to general properties of electron-hole ($`e`$-$`h`$) liquids in the hexaborides. Charge conservation does not fix the number of $`e`$-$`h`$ pairs, which in thermal equilibrium depends on the $`e`$-$`h`$ chemical potential (band overlap). In the original theory of the excitonic insulator the band gap (overlap) $`E_G>0`$ ($`E_G<0`$) was considered to be a free parameter that changes continuously through the value $`E_G=0`$. Subsequently, investigation of optically pumped electrons and holes in semiconductors showed that a first order transition between two states, one with a substantial band gap and the other with a substantial band overlap, will occur, and that smaller values of $`|E_G|`$ lie in the unphysical intermediate region. The details of such a transition are very sensitive to the actual band degeneracies and anisotropies. Here, we study possible scenarios for the transition from a semiconducting state into a metallic $`e`$-$`h`$ liquid in CaB<sub>6</sub>, including the appearance of a free exciton gas. We find that a novel mechanism of intervalley scattering of excitons is important in stabilizing an intermediate gas phase. Electrons and holes in CaB<sub>6</sub> have the following values of the effective masses, measured in units of the bare electron mass: $`m_e^{\parallel }=0.504`$, $`m_e^{\perp }=0.212`$ (conduction band) and $`m_h^{\parallel }=2.17`$, $`m_h^{\perp }=0.206`$ (valence band). These values agree with the early results using the muffin-tin approximation. First, we retain only the dominant intravalley scattering processes shown in Fig. 1a, when an electron (hole) scatters between states near a single $`X`$-point.
In the small-$`q`$ limit the scattering matrix element is given by a screened Coulomb potential $`V_q=4\pi e^2/\kappa q^2`$ ($`\kappa `$ is a static dielectric constant). We define an $`e`$-$`h`$ chemical potential as a sum of the two individual potentials, $`\mu =\mu _e+\mu _h`$. The relation to the band model is made by $`\mu =-E_G`$, i.e. minus the band gap plays the role of the chemical potential. Natural units for energies and lengths are the effective Rydberg $`E_x`$ and Bohr radius $`a_x`$: $`E_x=m^{*}e^4/2\kappa ^2=e^2/2\kappa a_x`$, where the reduced mass $`m^{*}=m_{oe}m_{oh}/(m_{oe}+m_{oh})`$ is determined by the optical masses, $`3/m_o=2/m^{\perp }+1/m^{\parallel }`$. The $`e`$-$`h`$ pair density $`n`$ is characterized by a dimensionless parameter, $`r_s=(3/4\pi na_x^3)^{1/3}`$. At high densities, small $`r_s`$, we use the random phase approximation (RPA). Strictly speaking, a dense metallic $`e`$-$`h`$ liquid can transform into an excitonic insulator. This assumption was a key starting point in our explanation of the high-temperature weak ferromagnetism in the hexaborides. However, the ground state energy correction from the excitonic instability in a semimetal is of order $`\mathrm{\Delta }^2/\epsilon _F`$ and is small ($`\mathrm{\Delta }\ll \epsilon _F`$), so we neglect it. The kinetic energy per $`e`$-$`h`$ pair is given by

$$E_K=\frac{3m^{*}}{5\alpha r_s^2\nu ^{2/3}}\left[\frac{1}{(m_e^{\parallel }(m_e^{\perp })^2)^{1/3}}+\frac{1}{(m_h^{\parallel }(m_h^{\perp })^2)^{1/3}}\right],$$ (1)

where $`\alpha =(4/9\pi )^{1/3}`$ and $`\nu `$ ($`=3`$) is the number of valleys. The exchange energy for anisotropic bands is

$$E_{\mathrm{exch}}=-\frac{3}{2\pi \alpha r_s\nu ^{1/3}}\left[\varphi (m_e^{\perp }/m_e^{\parallel })+\varphi (m_h^{\perp }/m_h^{\parallel })\right],\qquad \varphi (x)=x^{1/6}\frac{\mathrm{arcsin}\sqrt{1-x}}{\sqrt{1-x}}.$$ (3)

The correlation energy describes the remaining contribution to $`E_{\mathrm{g}.\mathrm{s}.}`$:

$$E_c=\frac{i}{2}\int \frac{dq\,d\omega }{(2\pi )^4}\int _0^1d\lambda \left[\frac{V_q\mathrm{\Pi }^{*}(q,\omega )}{1-\lambda V_q\mathrm{\Pi }^{*}(q,\omega )}-V_q\mathrm{\Pi }^0(q,\omega )\right],$$ (4)

where $`\mathrm{\Pi }^{*}(q,\omega )`$ is the irreducible polarization operator and $`\mathrm{\Pi }^0(q,\omega )=\sum _i^{2\nu }\mathrm{\Pi }_i^0(q,\omega )`$ is a sum of (anisotropic) RPA polarizabilities for each species of electrons or holes. The substitution $`\mathrm{\Pi }^{*}(q,\omega )=\mathrm{\Pi }^0(q,\omega )`$ in Eq. (4) gives the RPA expression for $`E_c`$. An approximate way of treating the higher-order exchange corrections was considered by Hubbard. His expression generalized to the multicomponent plasma is

$$\mathrm{\Pi }^{*}(q,\omega )=\underset{i}{\overset{2\nu }{\sum }}\frac{\mathrm{\Pi }_i^0(q,\omega )}{1+f(q)V_q\mathrm{\Pi }_i^0(q,\omega )},$$ (5)

with $`f(q)=0.5q^2/(q^2+k_F^2)`$. We also change the $`\omega `$-integration from the real to the imaginary axis, which avoids a difficulty related to a plasmon pole in $`\mathrm{\Pi }^0(q,\omega )`$. Numerical results for the ground state energy are shown in Fig. 2. The band degeneracy improves substantially the accuracy of the RPA, because corrections to the RPA diagrams have an extra smallness in $`1/2\nu `$. The minimum of the ground state energy is reached at $`r_s=0.92`$, with a minimum value $`E_{\mathrm{g}.\mathrm{s}.}^{\mathrm{min}}=-1.55E_x`$ ($`-1.51E_x`$) in the RPA (Hubbard) scheme. The use of the RPA is justified by the small value of $`r_s`$ at the minimum, which corresponds to a dense plasma with strong screening and small corrections from multiple $`e`$-$`h`$ scattering. Such corrections become significant at $`r_s\gtrsim 3`$.
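As a numerical illustration of Eqs. (1) and (3), the Hartree-Fock part of the ground state energy, $`E_K+E_{\mathrm{exch}}`$, can be evaluated directly with the CaB<sub>6</sub> masses quoted above. The following minimal sketch is added here for illustration and is not part of the original calculation; it omits the correlation energy (4), so it only locates the approximate position of the metallic minimum rather than the full RPA value $`-1.55E_x`$:

```python
import numpy as np

# CaB6 band masses in units of the bare electron mass, as quoted above
me_par, me_perp = 0.504, 0.212   # conduction band (parallel, perpendicular)
mh_par, mh_perp = 2.17, 0.206    # valence band
nu = 3                           # number of equivalent X-point valleys
alpha = (4.0 / (9.0 * np.pi)) ** (1.0 / 3.0)

# Optical masses, 3/m_o = 2/m_perp + 1/m_par, and the reduced mass m*
mo_e = 3.0 / (2.0 / me_perp + 1.0 / me_par)
mo_h = 3.0 / (2.0 / mh_perp + 1.0 / mh_par)
m_star = mo_e * mo_h / (mo_e + mo_h)

# Density-of-states masses (m_par * m_perp^2)^(1/3) entering Eq. (1)
md_e = (me_par * me_perp**2) ** (1.0 / 3.0)
md_h = (mh_par * mh_perp**2) ** (1.0 / 3.0)

def phi(x):
    """Anisotropy function of Eq. (3), for x = m_perp/m_par < 1."""
    return x ** (1.0 / 6.0) * np.arcsin(np.sqrt(1.0 - x)) / np.sqrt(1.0 - x)

def e_hf(rs):
    """Kinetic plus exchange energy per e-h pair, in units of E_x."""
    ek = (3.0 * m_star / (5.0 * alpha * rs**2 * nu ** (2.0 / 3.0))
          * (1.0 / md_e + 1.0 / md_h))
    ex = (-3.0 / (2.0 * np.pi * alpha * rs * nu ** (1.0 / 3.0))
          * (phi(me_perp / me_par) + phi(mh_perp / mh_par)))
    return ek + ex

rs = np.linspace(0.3, 4.0, 2000)
e = e_hf(rs)
i = np.argmin(e)
print(f"Hartree-Fock minimum: E = {e[i]:.2f} E_x at r_s = {rs[i]:.2f}")
```

The Hartree-Fock minimum comes out shallower, and at a somewhat smaller $`r_s`$, than the full result quoted in the text, as expected once the (negative) correlation energy is left out.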
The ground state energy is also known in the limit $`r_s\to \mathrm{\infty }`$, where it approaches the binding energy of a single exciton, $`E_e`$. (Here, we disregard the possible formation of exciton molecules.) The presence of a local minimum of $`E_{\mathrm{g}.\mathrm{s}.}(r_s)`$ at metallic densities has an important effect on the transformation from a semiconductor to a semimetal. The two possible scenarios are shown schematically in Fig. 3. In the first case, curve (a), the local minimum is also the absolute one: $`E_{\mathrm{g}.\mathrm{s}.}(r_{sA})<E_e`$. The pair chemical potential is related to the ground state energy by $`\mu =E_{\mathrm{g}.\mathrm{s}.}+n\,\partial E_{\mathrm{g}.\mathrm{s}.}/\partial n`$. At the extremal point the second term is zero and $`\mu =E_{\mathrm{g}.\mathrm{s}.}^{\mathrm{min}}`$. Therefore, when the band gap decreases to $`E_G=|E_{\mathrm{g}.\mathrm{s}.}^{\mathrm{min}}|`$, a first-order metal-insulator transition takes place. The number of $`e`$-$`h`$ pairs jumps from zero to $`n(r_{sA})`$. Smaller densities with $`r_s>r_{sA}`$ correspond to unstable states. In optically pumped semiconductors, where the number of carriers is fixed instead of the chemical potential, this effect leads to $`e`$-$`h`$ droplet condensation. In the second case, curve (b), the exciton energy lies below the metallic minimum. The semiconducting state becomes unstable at $`E_G=|E_e|`$ and transforms into a low-density exciton gas. If $`E_G`$ is further reduced, a first-order transition of gas-liquid type takes place between two states $`B_1`$ and $`B_2`$, which have the same pressure $`P=n^2\,\partial E_{\mathrm{g}.\mathrm{s}.}/\partial n`$. Case (b) is believed to be realized in an isotropic one-component $`e`$-$`h`$ plasma, whereas in many semiconductors band degeneracies and anisotropies favor case (a), bypassing a free exciton gas state. To check which of the two scenarios occurs in the hexaborides we now compare the above value of $`E_{\mathrm{g}.\mathrm{s}.}^{\mathrm{min}}`$ to the binding energy of a single exciton. The simplest estimate for the exciton energy is $`-E_x`$. It is obtained by substituting an isotropic $`1s`$-type wave function into the Schrödinger equation. This estimate predicts the same binding energy for excitons formed by an electron and a hole from the same or from different valleys. We improve this result by using a simple variational ansatz appropriate to the cylindrical symmetry of the bands near each of the $`X`$-points, similar to the treatment of shallow impurity states. The binding energy of an electron and a hole from the same valley is $`E_e=-1.12E_x`$ ($`a_X^{\dagger }b_X^{\dagger }|0\rangle `$), whereas an electron and a hole from different valleys have $`E_e^{\prime }=-1.02E_x`$ ($`a_X^{\dagger }b_{X^{\prime }}^{\dagger }|0\rangle `$). The former case, with a more anisotropic reduced mass tensor, is favored, since lower dimensionality gives, as usual, a stronger binding. Thus, band anisotropy favors a particular type of exciton, diagonal in the valley index. The chemical potential of the metallic $`e`$-$`h`$ liquid is below the exciton binding energy in this approximation. Hence, a dilute exciton gas is unstable and scenario (a) is predicted instead. The dominant-term approximation (Fig. 1a) used above leaves a threefold degeneracy of the excitonic level corresponding to $`e`$-$`h`$ pairing at the three $`X`$-points. Coherence between different excitons is established by the intervalley scattering processes shown in Fig. 1b.
Instead of a single hydrogenic equation, one must now solve a system of three coupled integral equations for a three-component excitonic wave function $`\psi _\nu (r)`$. The scattering matrix element $`g_q`$ involves a large momentum transfer $`Q=b/\sqrt{2}`$, where $`b`$ is an elementary reciprocal lattice vector. Consequently, we can neglect its momentum dependence and estimate

$$g\simeq \underset{G}{\sum }\frac{4\pi e^2}{|Q+G|^2}F_G^{aa}(X,X^{\prime })F_G^{bb}(X^{\prime },X).$$ (6)

Here, the summation is over the reciprocal lattice, and $`F_G^{aa}(X,X^{\prime })`$ is a form factor of two conduction-band states at the points $`X`$ and $`X^{\prime }`$. The absence of the dielectric constant in the Fourier transform of the Coulomb potential reflects a lack of lattice screening for these processes. Another vertex, which has the same strength $`g_q`$ but does not contribute to the electron-hole pairing, is shown in Fig. 1c. Several other intervalley vertices correspond to processes in which an electron transforms into a hole. However, they correspond to higher momentum transfers and we neglect them. Generally, the form factors in Eq. (6) reduce the matrix element compared to the amplitude of a Coulomb potential by a factor of $`2`$–5 and can also change the sign of $`g_q`$. Nevertheless, in the absence of the necessary numerical band structure results we use the somewhat optimistic estimate $`|g|\simeq 4\pi e^2/Q^2`$. Returning to the spatial form of the Schrödinger equation, we obtain a contact-like interaction of excitons in different valleys with strength $`g`$. Since the total momentum of the excitons vanishes, eigenstates of the Schrödinger equation are classified according to the irreducible representations of the cubic point group $`O_h`$. Treating the intervalley scattering perturbatively, we find $`E_{e1}=E_e-2\lambda `$ for a nondegenerate exciton state with $`A_1`$ symmetry, $`\psi _\nu \propto \psi _B(r)(1,1,1)`$, and $`E_{e2}=E_e+\lambda `$ for a degenerate doublet with $`E`$ symmetry: $`\psi _\nu ^{(1)}\propto \psi _B(r)(1,e^{2\pi i/3},e^{4\pi i/3})`$ and $`\psi _\nu ^{(2)}\propto \psi _B(r)(1,e^{-2\pi i/3},e^{-4\pi i/3})`$, where $`\psi _B(r)`$ is a $`1s`$ hydrogenic wave function. The coupling constant is $`\lambda =g|\psi _B(0)|^2`$ and, therefore, the choice of the lowest-energy exciton depends on the sign of $`g`$. If the interaction between excitons is attractive ($`g>0`$) the symmetric $`A_1`$-state has lower energy. If it is repulsive, then the $`E`$-doublet is more stable. One can easily estimate $`\lambda `$ using the unperturbed $`\psi _B(r)`$: $`\lambda =0.15E_x`$. We have also calculated the effect of the intervalley vertices of Figs. 1b and 1c on the ground state energy of the metallic $`e`$-$`h`$ liquid, shown by the dashed line in Fig. 2. This correction is much smaller than the change of the exciton energy and does not exceed 3%. Qualitatively, such a difference is explained by the different orders of the two corrections: for the metallic plasma it is a second-order effect, while the shift of the exciton energy is a first-order effect. Another effect of lifting the threefold degeneracy of excitons in different valleys is the suppression of multi-exciton molecules. As one can see from Fig. 2, the exciton energy is still above the ground state energy of the metallic phase, though the two energies move closer to each other. However, it appears that our estimate for the intervalley scattering effect may be too conservative. The origin of the extra enhancement is similar to the mechanism of anomalously large hyperfine splitting of donor states in Si proposed by Kohn and Luttinger.
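The splitting derived above is just first-order degenerate perturbation theory in the threefold valley space: for an attractive contact coupling ($`g>0`$), the perturbation matrix has $`-\lambda `$ in every off-diagonal entry, and its eigenvalues reproduce the $`A_1`$ singlet at $`E_e-2\lambda `$ and the $`E`$ doublet at $`E_e+\lambda `$. A minimal numerical check (an added sketch, not from the original paper):

```python
import numpy as np

lam = 0.15  # coupling lambda = g*|psi_B(0)|^2 in units of E_x, with g > 0

# First-order perturbation in the three-valley space: -lambda off-diagonal
V = -lam * (np.ones((3, 3)) - np.eye(3))
vals, vecs = np.linalg.eigh(V)

print(np.round(vals, 3))        # [-0.3, 0.15, 0.15] = [-2*lam, +lam, +lam]
print(np.round(vecs[:, 0], 3))  # symmetric A1 vector (1,1,1)/sqrt(3), up to sign
```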
The actual exciton energy is shifted below the prediction of the effective-mass theory because of both the intervalley scattering effect and a so-called central-cell correction. Contrary to a naive point of view, the related changes in the amplitude $`\psi (0)`$ are determined by the long-distance Schrödinger equation rather than by an exact short-distance $`e`$-$`h`$ Hamiltonian. Since the actual $`E_e`$ is not an eigenvalue of the effective-mass equation, there is no solution for this energy which satisfies both boundary conditions at $`r=0,\mathrm{\infty }`$. Therefore, the actual exponentially decaying wave function develops a singularity at short distances. We find that for a moderate 20% shift of $`E_e`$ the probability of finding the electron near the hole, $`|\psi (0)|^2`$, is enhanced by a factor of 4 compared to $`|\psi _B(0)|^2`$. This effect significantly increases $`\lambda `$, especially for the $`A_1`$-singlet. As a result, the excitonic level in the hexaborides must lie well below the metallic minimum of $`E_{\mathrm{g}.\mathrm{s}.}(r_s)`$. We have considered the stability of different phases of the $`e`$-$`h`$ liquid in the hexaborides. They include a semiconducting state with no carriers, a dilute Bose gas of excitons, and a dense electron-hole liquid. The latter can be clearly distinguished from the other two states in the infrared optical conductivity by its large Drude peak. If an excitonic instability develops in a dense $`e`$-$`h`$ liquid, $`\sigma (\omega )`$ must also have an edge-type singularity at the excitonic gap. Semiconducting and free-exciton-gas states, on the other hand, have no significant features in $`\sigma (\omega )`$ because of the absence of direct optical transitions between the two bands. Which of the three states really occurs depends on the band gap parameter $`E_G`$. Note that in scenario (a), see Fig. 3, a first-order transition between a semiconducting state and a dense plasma takes place at a positive $`E_G`$, which always corresponds to a semiconductor in a single-electron picture. The small values of $`|E_G|\sim E_x=0.08`$ eV found in the band structure calculations indicate that the hexaborides can be close to an instability of the semiconducting state, which, as we argued for CaB<sub>6</sub>, transforms into a dilute exciton gas. The other possible instability in the hexaborides, if $`E_G`$ is varied in experiment, is a gas-liquid transition between a dilute Bose gas of excitons and a dense $`e`$-$`h`$ plasma. We suggest that some of the above phase transformations could be induced in the hexaborides by applying hydrostatic pressure, which changes the band gap $`E_G`$. Applied uniaxial stress can further lift the degeneracy between different valleys and, thus, significantly reduce the energy of the metallic phase. Our main result, the stability of a dilute exciton gas in CaB<sub>6</sub> either at normal conditions or under pressure, provides a new way to look at the Bose condensation of excitons. Another intrinsic mechanism for gap variations can also come from impurity doping, in particular from La substitution for a divalent element. Therefore, doped hexaborides can differ from undoped compounds not only because of the unequal number of electrons and holes, but also in terms of the appropriate starting picture: a Bose gas of excitons or a dense $`e`$-$`h`$ liquid. In the latter case screening effects must be quite significant due to the high-density restriction for the $`e`$-$`h`$ plasma: $`r_s\lesssim 1`$. We thank W. Kohn for useful discussions.
Financial support for this work was provided by the Swiss National Fund.
# SIGN REVERSAL OF THE QUANTUM HALL EFFECT AND HELICOIDAL MAGNETIC-FIELD-INDUCED SPIN-DENSITY WAVES IN ORGANIC CONDUCTORS

N. Dupuis<sup>(1)</sup> and Victor M. Yakovenko<sup>(2)</sup>

<sup>(1)</sup>Laboratoire de Physique des Solides, Bât. 510, Centre Universitaire, 91405 Orsay Cedex, France

<sup>(2)</sup>Department of Physics, University of Maryland, College Park 20742-4111, USA

Proceedings of the International Workshop on Electronic Crystals (ECRYS99)

Abstract. Within the framework of the quantized nesting model, we study the effect of umklapp scattering on the magnetic-field-induced spin-density-wave (SDW) phases which are experimentally observed in the quasi-one-dimensional organic conductors (TMTSF)<sub>2</sub>X. We discuss the conditions under which umklapp processes may explain the sign reversals (Ribault anomaly) of the quantum Hall effect (QHE) observed in these conductors. We find that the ‘Ribault phase’ is characterized by the coexistence of two SDWs with comparable amplitudes. This gives rise to additional long-wavelength collective modes besides the Goldstone modes due to spontaneous translation and rotation symmetry breaking. These modes strongly affect the optical conductivity. We also show that the Ribault phase may become helicoidal (i.e. with circularly polarized SDWs) if the strength of umklapp processes is sufficiently strong. The QHE vanishes in the helicoidal phases, but a magnetoelectric effect appears.

1. INTRODUCTION

The organic conductors of the Bechgaard salts family (TMTSF)<sub>2</sub>X (where TMTSF stands for tetramethyltetraselenafulvalene) have remarkable properties in a magnetic field. In three members of this family (X=ClO<sub>4</sub>, PF<sub>6</sub>, ReO<sub>4</sub>), a moderate magnetic field of a few Tesla destroys the metallic phase and induces a series of field-induced SDW (FISDW) phases separated by first-order phase transitions. According to the so-called quantized nesting model (QNM), the formation of the FISDWs results from the strong anisotropy of these organic materials, which can be viewed as weakly coupled chain systems (the typical ratio of the electron transfer integrals in the three crystal directions is $`t_a:t_b:t_c=3000:300:10`$ K). The SDW opens a gap, but leaves closed pockets of electrons and/or holes in the vicinity of the Fermi surface. In the presence of a magnetic field $`H`$, these pockets are quantized into Landau levels (more precisely, Landau subbands). In each FISDW phase, the SDW wave vector is quantized, $`𝐐_N=(2k_F+NG,Q_y)`$ with $`N`$ an integer, so that an integer number of Landau subbands are filled. \[Here $`k_F`$ is the Fermi momentum along the chains, $`e`$ the electron charge, $`b`$ the interchain spacing, and $`G=eHb/\hbar `$.\] As a result, the Fermi level lies in a gap between two Landau subbands, the SDW phase is stable, and the Hall conductivity is quantized: $`\sigma _{xy}=2Ne^2/h`$ per layer of the TMTSF molecules. As the magnetic field increases, the value of the integer $`N`$ changes, which leads to a cascade of FISDW transitions. A striking feature of the QHE in the Bechgaard salts is the coexistence of both positive and negative Hall plateaus. While most plateaus are of the same sign, referred to as positive by convention, a negative Hall effect is also observed at certain pressures (the so-called Ribault anomaly). We have recently proposed an explanation of the Ribault anomaly by taking umklapp processes into account.
The latter are allowed in the Bechgaard salts due to the half-filled electron band which results from the dimerization along the chains. Within our explanation, two SDWs with comparable amplitudes coexist in the negative phases, which therefore differ significantly from the positive ones. In particular, they exhibit an unusual structure of long-wavelength collective modes, and their polarization may become circular (helicoidal SDWs) above a critical value of the umklapp scattering strength.

2. SIGN REVERSAL OF THE QHE

In the absence of umklapp scattering, a magnetic field along the $`z`$ axis induces a series of SDW phases characterized by a quantized wave vector $`𝐐_N=(2k_F+NG,Q_y)`$ and a quantization of the Hall effect: $`\sigma _{xy}=2Ne^2/h`$. The sign of $`N`$ is entirely determined by the electron dispersion, which seems to preclude any sign reversal of the QHE. Umklapp scattering transfers $`4k_F`$ and therefore couples the two SDW channels. As a result two SDWs, with wave vectors $`𝐐_N=(2k_F+NG,Q_y)`$ and $`𝐐_N^{\prime }=(2k_F-NG,Q_y)`$, form simultaneously. In the random-phase approximation, the transition temperature is determined by the modified Stoner criterion

$$[1-g_2\chi _0(𝐐_N)][1-g_2\chi _0(𝐐_N^{\prime })]-g_3^2\chi _0(𝐐_N)\chi _0(𝐐_N^{\prime })=0,$$ (1)

where $`g_2`$ and $`g_3`$ are the scattering amplitudes of the normal and umklapp processes, respectively, and $`\chi _0`$ is the spin susceptibility in the absence of electron-electron interaction. A very small $`g_3`$ does not qualitatively change the phase diagram compared to the case $`g_3=0`$. Now the main SDW at wave vector $`𝐐_N`$ coexists with a weak SDW at wave vector $`𝐐_N^{\prime }`$. The values of $`N`$ follow the usual ‘positive’ sequence $`N=\ldots ,4,3,2,1,0`$ with increasing magnetic field. A larger value of $`g_3`$ increases the coupling between the two SDWs. This leads to a strong decrease of the transition temperature or even the disappearance of the SDWs. However, for even $`N`$, there exists a critical value of $`g_3`$ above which the system prefers to choose the transversely commensurate wave vector $`Q_y=\pi /b`$ for both SDWs. \[This follows from the structure of the bare spin susceptibility $`\chi _0(𝐐)`$ at $`Q_y=\pi /b`$.\] The two SDWs then have comparable amplitudes. For certain dispersion laws, the main SDW corresponds to the wave vector $`𝐐_{-|N|}`$, which yields a negative Hall plateau. Thus, for $`g_3/g_2\simeq 0.03`$, we find the sequence $`N=\ldots ,5,4,-2,2,1,0`$ (Fig. 1a). In the Bechgaard salts, the strength of umklapp scattering is very sensitive to pressure. Therefore, we conclude that sign reversals of the QHE can be induced by varying pressure, as observed experimentally.
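The modified Stoner criterion (1) is simply the condition $`\mathrm{det}(1-M)=0`$ for the $`2\times 2`$ matrix $`M`$ that mixes the two SDW channels through $`g_2`$ and $`g_3`$. The sketch below illustrates this equivalence numerically; the susceptibility values are placeholders for illustration, not values computed from the actual quasi-one-dimensional band structure:

```python
import numpy as np

g2, g3 = 1.0, 0.06           # normal and umklapp amplitudes (arbitrary units)
chi_N, chi_Np = 0.96, 0.90   # placeholder values of chi_0 at Q_N and Q'_N

# RPA matrix mixing the two SDW channels via umklapp scattering
M = np.array([[g2 * chi_N, g3 * chi_Np],
              [g3 * chi_N, g2 * chi_Np]])

det = np.linalg.det(np.eye(2) - M)
stoner = (1 - g2 * chi_N) * (1 - g2 * chi_Np) - g3**2 * chi_N * chi_Np
print(det, stoner)  # identical; the SDW instability sets in when this vanishes
```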
Figure 1: (a) Phase diagram for $`g_3/g_2=0.03`$. The phase $`N=3`$ is suppressed, and the negative commensurate phase with $`N=-2`$ and $`Q_y=\pi /b`$ appears in the cascade (the shaded area). All the phases are sinusoidal, and the Hall effect is quantized: $`\sigma _{xy}=2Ne^2/h`$. The vertical lines are only guides for the eyes and do not necessarily correspond to the actual first-order transition lines. (b) Phase diagram for $`g_3/g_2=0.06`$. Two negative phases, $`N=-2`$ and $`N=-4`$, are observed (the shaded areas). The phase $`N=-2`$ splits into two subphases: helicoidal (the dark shaded area) and sinusoidal (the light shaded area).

3. HELICOIDAL PHASES

We have also shown that umklapp scattering can change the polarization of the SDWs from linear (sinusoidal SDWs) to circular (helicoidal SDWs). The QHE vanishes in the helicoidal phases, but a magnetoelectric effect appears. For $`g_3/g_2\simeq 0.06`$, there are two negative phases: $`N=-2`$ and $`N=-4`$ (Fig. 1b). The phase $`N=-2`$ splits into two subphases: helicoidal and sinusoidal. In order to observe helicoidal phases experimentally, it would be desirable to stabilize the negative phase $`N=-2`$ at the lowest possible pressure (which corresponds to the strongest $`g_3`$).

4. COLLECTIVE MODES

The Ribault phase exhibits an unusual structure of long-wavelength collective modes. The coexistence of two SDWs in this phase gives rise to additional modes besides the Goldstone modes resulting from spontaneous rotation and translation symmetry breaking. Below we discuss only the sliding modes, restricting ourselves to longitudinal (i.e. parallel to the chains) fluctuations. There is a Goldstone mode with a linear dispersion law, $`\omega =v_Fq_x`$, which corresponds to out-of-phase oscillations of the two SDWs: $`\theta _N(q_x,\omega )=-\theta _N^{\prime }(q_x,\omega )`$, where the phases $`\theta _N`$ and $`\theta _N^{\prime }`$ determine the positions of the two SDWs with respect to the crystal lattice. The fact that the two SDWs can be displaced in opposite directions without changing the energy of the system is related to the pinning that would occur for a single commensurate SDW. Besides the Goldstone mode, there is a gapped mode, $`\omega ^2=\omega _0^2+v_F^2q_x^2`$, which corresponds to in-phase oscillations of the two SDWs. $`\omega _0`$ depends on the strength of umklapp scattering and is generally larger than the mean-field gap. We therefore expect this mode to be strongly damped due to the coupling with the quasi-particle excitations. The presence of two SDWs can be detected by measuring the optical conductivity. In the limit $`𝐪=0`$, the dissipative part of the conductivity is given by

$$\mathrm{Re}[\sigma (\omega )]=\frac{\omega _p^2}{4}\left(\delta (\omega )\frac{3(1-\stackrel{~}{\gamma }^2)}{3+5\stackrel{~}{\gamma }^2}+\delta (\omega \pm \omega _0)\frac{4\stackrel{~}{\gamma }^2}{3+5\stackrel{~}{\gamma }^2}\right),$$ (2)

where $`\stackrel{~}{\gamma }`$ is the ratio of the SDW amplitudes, and $`\omega _p`$ the plasma frequency. Eq. (2) satisfies the conductivity sum rule $`\int _{-\mathrm{\infty }}^{\mathrm{\infty }}\mathrm{Re}[\sigma (\omega )]\,d\omega =\omega _p^2/4`$. Quasi-particle excitations above the mean-field gap do not contribute to the optical conductivity, a result well known in clean SDW systems. Because both modes contribute to the conductivity, the low-energy (Goldstone) mode carries only a fraction of the total spectral weight. We obtain Dirac peaks at $`\pm \omega _0`$ because we have neglected the coupling of the gapped mode with quasi-particle excitations. Also, in a real system (with impurities), the Goldstone mode would broaden and appear at a finite frequency (below the quasi-particle excitation gap) due to pinning by impurities.
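As a quick consistency check of Eq. (2) (an added verification, not part of the original text), the Goldstone weight plus the two gapped peaks at $`\pm \omega _0`$ saturate the sum rule for any amplitude ratio $`\stackrel{~}{\gamma }`$:

```python
import sympy as sp

g = sp.symbols('gamma', positive=True)   # the amplitude ratio tilde-gamma

w_goldstone = 3 * (1 - g**2) / (3 + 5 * g**2)
w_gapped = 4 * g**2 / (3 + 5 * g**2)     # weight of each peak at +w0 and -w0

print(sp.simplify(w_goldstone + 2 * w_gapped))  # -> 1, the full weight wp^2/4
```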
References

1. For recent reviews, see P. M. Chaikin, J. Phys. (Paris) I 6, 1875 (1996); P. Lederer, ibid., p. 1899; V. M. Yakovenko and H. S. Goan, ibid., p. 1917.
2. M. Ribault, Mol. Cryst. Liq. Cryst. 119, 91 (1985); L. Balicas et al., Phys. Rev. Lett. 75, 2000 (1995).
3. N. Dupuis and V. M. Yakovenko, Phys. Rev. Lett. 80, 3618 (1998); Phys. Rev. B 58, 8773 (1998).
4. N. Dupuis and V. M. Yakovenko, Europhys. Lett. 45, 361 (1999).
5. Another explanation of the negative Hall plateaus, based on the dependence of the electron dispersion on pressure, has been proposed by D. Zanchi and G. Montambaux, Phys. Rev. Lett. 77, 366 (1996).
6. The exact condition is $`t_{2b}t_{4b}>0`$, where $`t_{2b}`$ and $`t_{4b}`$ are the second and fourth harmonics of the transverse dispersion law.
# In-Medium Modifications of Hadron Masses and Chemical Freeze-Out in Ultra-Relativistic Heavy-Ion Collisions ∗

## Abstract

We confront the hypothesis of chemical freeze-out in ultra-relativistic heavy-ion collisions with the hypothesis of large modifications of hadron masses in nuclear medium. We find that the thermal-model predictions for the ratios of particle multiplicities are sensitive to the values of in-medium hadronic masses. In particular, the $`\pi ^+/p`$ ratio decreases by 35% when the masses of all hadrons (except for pseudo-Goldstone bosons) are scaled down by 30%.

INP 1829/PH. ∗ Research supported in part by the Polish State Committee for Scientific Research, grant 2P03B-080-12. PACS: 25.75.Dw, 21.65.+f, 14.40.-n

Recent theoretical studies indicate that hadron yields and ratios in ultra-relativistic heavy-ion collisions agree with the predictions of a simple thermal model. According to this model, all measured abundances of hadrons are consistent with the assumption of complete thermalization of hadronic matter at a temperature $`T_{chem}`$, baryon chemical potential $`\mu _{chem}^B`$, strangeness chemical potential $`\mu _{chem}^S`$, and isospin chemical potential $`\mu _{chem}^I`$. Thermodynamic parameters obtained from this type of analysis characterize the point in the evolution of the system when the “primordial” hadron content is established. One refers to this point as the chemical freeze-out. At this stage the system is a mixture of stable particles (pions, kaons, nucleons, …) and resonances ($`\rho ,\omega ,\mathrm{\Delta },\ldots `$). In the subsequent evolution the resonances decay, contributing to the final (observed) multiplicities of stable particles. It has been shown that the chemical freeze-out parameters at CERN/SPS, BNL/AGS and GSI/SIS energies all correspond to a unique value of the energy per hadron. Moreover, statistical models are able to reproduce the particle yields in $`e^+e^-`$ collisions. A striking observation is that at very high energies the temperature $`T_{chem}\approx 170`$ MeV turns out to be the same for both elementary and nuclear collisions, although the final-state hadronic interactions are completely absent in the former case. This may indicate that chemical equilibrium is pre-established by the hadronization process. In recent studies one distinguishes the chemical freeze-out from the thermal or kinetic freeze-out. The latter is defined as the decoupling of strongly interacting matter produced in high-energy nuclear collisions into a system of essentially free-streaming particles. After the thermal freeze-out the hadrons practically stop interacting and travel freely to the detectors. Chemical freeze-out is expected to occur at the same time as, or before, the thermal freeze-out. For Pb+Pb collisions at CERN/SPS energies one finds that the chemical freeze-out point occurs significantly earlier than the thermal point. This is indicated by the measurements of the hadron momentum spectra as well as two-particle momentum correlations, which show that the thermal freeze-out temperature $`T_{therm}\approx 100`$ MeV is substantially lower than $`T_{chem}\approx 170`$ MeV. In addition, $`T_{chem}`$ is very close to the expected critical value for the deconfinement/hadronization phase transition; therefore the chemical composition of a hadronic system, according to the presented scenario, should be established shortly after hadronization.
Since the chemical freeze-out occurs at an early stage of the evolution of the system, where temperatures and densities are very high, we expect that hadronic properties, such as masses, widths, or coupling constants, are influenced strongly by the medium. Such modifications are predicted in many theoretical calculations. Moreover, they seem much desired in view of the low-mass dilepton enhancement observed in the CERES and HELIOS experiments. In order to study how the chemical freeze-out parameters are affected by the in-medium change of the hadron masses, we calculate the particle densities from the standard ideal-gas equilibrium expression

$$n_i=\frac{g_i}{2\pi ^2}\int _0^{\mathrm{\infty }}\frac{p^2dp}{\mathrm{exp}\left[\left(E_i^{*}-\mu _{chem}^BB_i-\mu _{chem}^SS_i-\mu _{chem}^II_i\right)/T_{chem}\right]\pm 1},$$ (1)

where $`g_i`$ is the spin degeneracy factor of the $`i`$th hadron, $`B_i,S_i,I_i`$ are its baryon number, strangeness, and third component of isospin, and $`E_i^{*}=\sqrt{p^2+\left(m_i^{*}\right)^2}`$ is its energy. The latter explicitly depends on the in-medium hadron mass $`m_i^{*}`$. In the thermal-model fits one uses Eq. (1) with vacuum values of the hadron masses, $`m_i^{*}=m_i`$, and, in addition, applies finite-size and excluded-volume corrections. They account for the finite size of the nuclear system and the finite volume occupied by the individual hadrons. The main effect of such corrections is a reduction of the absolute yields of particles, with the particle ratios remaining close to the predictions of the ideal-gas approach. For Pb+Pb collisions at CERN/SPS energies the predictions of the thermal model are:

$$T_{chem}=168\,\mathrm{MeV},\quad \mu _{chem}^B=266\,\mathrm{MeV},\quad \mu _{chem}^S=71\,\mathrm{MeV},\quad \mu _{chem}^I=-5\,\mathrm{MeV}.$$ (2)

In our study we use these values in Eq. (1) and calculate the particle densities $`n_i`$. We take into account all meson resonances with masses smaller than 1.28 GeV and baryon resonances with masses smaller than 1.45 GeV. Decays of resonances contribute to the final (measured) yields of stable hadrons; this is an important effect. The needed branching ratios for the decays of resonances are taken from experiment. In this paper, for clarity, we neglect the finite-size and the excluded-volume corrections. As mentioned above, they do not change significantly the ratios of particles. Indeed, within our simplified approach we reproduce the numbers of Ref. at the level of 15%. In principle, the in-medium masses of all hadrons may behave differently. For practical reasons we perform our calculation with the meson and baryon masses rescaled by two universal parameters, $`x_M`$ and $`x_B`$, namely

$$m_M^{*}=x_Mm_M,\qquad m_B^{*}=x_Bm_B.$$ (3)

In this way we change the masses of all hadrons except for the pseudo-Goldstone bosons, i.e., the pions, kaons and the eta, whose masses are kept fixed at their vacuum values. In fact, results of many model calculations show stability of the pion mass against the change of temperature and density up to the point where the chiral phase transition occurs. The branching ratios are kept at their vacuum values.
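For illustration, Eq. (1) can be integrated numerically with the freeze-out parameters (2). The sketch below is added here; its parameter choices mirror the text, but resonance feed-down, which dominates the final $`\pi ^+/p`$ ratio, is not included, so the number it prints is not directly comparable to Fig. 1. An optional factor $`x`$ implements the mass scaling of Eq. (3):

```python
import numpy as np
from scipy.integrate import quad

T, muB, muS, muI = 168.0, 266.0, 71.0, -5.0   # MeV, the parameters of Eq. (2)

def density(m, g, B=0, S=0, I3=0.0, fermion=True, x=1.0):
    """Ideal-gas density of Eq. (1) with in-medium mass m* = x*m (in MeV^3)."""
    mu = muB * B + muS * S + muI * I3
    sgn = 1.0 if fermion else -1.0
    f = lambda p: p**2 / (np.exp((np.sqrt(p**2 + (x * m)**2) - mu) / T) + sgn)
    val, _ = quad(f, 0.0, 5000.0)
    return g / (2.0 * np.pi**2) * val

n_pi = density(139.6, g=1, I3=1.0, fermion=False)      # pi+: Goldstone, unscaled
n_p = density(938.3, g=2, B=1, I3=0.5, fermion=True)   # proton
print(f"primordial pi+/p = {n_pi / n_p:.2f}")
```

Scaling the baryon mass down, e.g. `density(938.3, g=2, B=1, I3=0.5, x=0.8)`, increases the proton density and lowers the ratio, which is the trend behind the dashed curve in Fig. 1.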
In Fig. 1 we show the $`\pi ^+/p`$ ratio calculated for different values of the parameters $`x_M`$ and $`x_B`$. The solid line represents the case where the meson and baryon masses are rescaled in the same way, $`x_M=x_B=x`$ (BR scaling), the dashed line corresponds to the case where only the baryon masses are changed, $`x_M=1`$ and $`x_B=x`$, and the dotted line shows the case $`x_M=x`$ and $`x_B=1`$. In all three cases the values of $`x_M`$ and $`x_B`$ are restricted to the reasonable range $`0.6<x<1.1`$. Our results presented in Fig. 1 show that large mass modifications lead to a substantial change of the $`\pi ^+/p`$ ratio. A reduction of the masses by 20% changes the $`\pi ^+/p`$ ratio by at least 30%, which occurs in the case where $`x_M=x_B`$. In the case $`x_M=1`$ a decrease of the baryon masses by 20% reduces the $`\pi ^+/p`$ ratio by 50%. On the other hand, in the case of constant baryon masses a decrease of the meson masses by 20% enhances the $`\pi ^+/p`$ ratio by 60%. The displayed behavior can be easily understood in qualitative terms. The strong increase of the dotted curve in Fig. 1, as the meson masses are decreased, is mostly caused by a larger population of the rho and omega mesons at the chemical freeze-out point. Subsequent decays of these mesons produce more pions, raising the $`\pi ^+/p`$ ratio. We note that the rho and omega bring about half of the effect shown in Fig. 1; the other half comes from heavier resonances. Similarly, for the dashed curve, a lower mass of the baryons results in a larger population of protons at the chemical freeze-out point, thus lowering the $`\pi ^+/p`$ ratio. A universal scaling of the meson and baryon masses (solid line) partially cancels the above-described effects. Still, a significant effect remains. Other particle ratios are less sensitive to the mass modifications. To conclude, we note that the thermal-fit analysis leaves little room for modifications of hadron masses larger than, say, 10-20%. The quite impressive agreement with the data would deteriorate with significant mass modifications. However, it seems premature to jump to the general conclusion that masses of hadrons cannot significantly change in a hot and dense medium. One should bear in mind that at present we do not understand in sufficient detail the mechanisms of hadron production and the evolution of the system created in heavy-ion collisions, and the appealing simplicity and numerical success of the thermal model may be misleading. One cannot exclude the possibility that a more elaborate or altogether different treatment will leave room for a substantial modification of hadron masses. In order to shed more light on this issue it would be useful to incorporate the scaling of hadron masses, as well as of other hadronic parameters, in various existing approaches to heavy-ion collisions. Acknowledgment: We thank Krzysztof Golec-Biernat for valuable comments and discussions.
# BEYOND NONLINEAR SCHRÖDINGER EQUATION APPROXIMATION FOR AN ANHARMONIC CHAIN WITH HARMONIC LONG RANGE INTERACTIONS

D. Grecu,<sup>1</sup> Anca Vişinescu, A.S. Cârstea

Department of Theoretical Physics, National Institute of Physics and Nuclear Engineering, Bucharest, Romania

<sup>1</sup>E-mail address: dgrecu@theor1.theory.nipne.ro

## Abstract

The multiple-scales method is used to analyze a nonlinear differential-difference equation. In the order $`ϵ^3`$ the NLS equation is found to determine the space-time evolution of the leading amplitude. In the next order this has to satisfy a complex mKdV equation (the next in the NLS hierarchy) in order to eliminate secular terms. The zero dispersion point case is also analyzed, and the relevant equation is a modified NLS equation with a third-order derivative term included.

Many one-dimensional systems of biological interest are very complicated structures, formed from complexes of atoms (we shall call them ”molecules”) connected by hydrogen bonds. It is usually assumed that only one of the intra-molecular excitations plays an active role in the storage and transport of energy in these systems. In the case of the $`\alpha `$-helix structure in protein this corresponds to the amide I vibration (C=O stretching). We shall call this intra-molecular excitation the vibronic field. Localized excitations of solitonic type can exist in these systems, due to a nonlinear interaction between the vibronic field and the acoustic phonon field, which describes the molecule oscillations along the chain. The simplest model starts from a Frölich Hamiltonian and an ansatz (a coherent state approximation) for the state vector describing this type of localized excitation (see references therein). After eliminating the phonon variables, a nonlinear differential (time) - difference (space) equation for the vibronic coordinate is obtained:

$$L(\{y_n\})=G(\{y_n\})$$ (1)

where $`L`$ is the linear part and $`G`$ the nonlinear one. For the specific example we have in mind, originating from Takeno’s model, $`L`$ and $`G`$ are given by

$$L(\{y_n\})=\frac{d^2y_n}{dt^2}+\omega _0^2y_n-\underset{m\ne n}{\sum }J_{mn}y_m$$ (2)

$$G(\{y_n\})=\frac{1}{2}Ay_n(y_{n+1}^2+y_{n-1}^2)-By_n^3.$$ (3)

In the linear part the last term is a long range interaction between vibrons, and we shall assume that $`J_{mn}`$ decreases exponentially (Kac-Baker model),

$$J_{mn}=J_{|m-n|}=\omega _{LR}^2\frac{1-r}{2r}e^{-\gamma |m-n|},\qquad r=e^{-\gamma }.$$ (4)

The first term on the r.h.s. of $`G`$ results from the nonlinear interaction between vibrons and phonons, while the second one from a quartic anharmonicity in the vibron Hamiltonian. The linear equation admits plane wave solutions $`e^{i\theta }`$, $`\theta =kan-\omega t`$, where $`\omega (k)`$ is given by the dispersion relation

$$D(\omega ,k)=\omega ^2(k)-\left(\omega _0^2-2\underset{p=1}{\overset{\mathrm{\infty }}{\sum }}J_p\mathrm{cos}kap\right)=\omega ^2-\left(\omega _0^2-\omega _{LR}^2\frac{1-r}{2r}\left(\frac{\mathrm{sinh}\gamma }{\mathrm{cosh}\gamma -\mathrm{cos}ka}-1\right)\right)$$ (5)

describing an optical vibrational branch, with $`\omega ^2(k)`$ a monotonically increasing function of $`k`$ from $`\omega ^2(0)=\omega _0^2-\omega _{LR}^2`$ to $`\omega ^2(\pi )=\omega _0^2+\omega _{LR}^2\frac{1-r}{1+r}`$.
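The closed form in Eq. (5) follows from the geometric Kac-Baker sum, $`2\sum _{p\geq 1}r^p\mathrm{cos}(kap)=\mathrm{sinh}\gamma /(\mathrm{cosh}\gamma -\mathrm{cos}ka)-1`$. A short numerical check of the two limits quoted above (an added sketch with illustrative parameter values):

```python
import numpy as np

w0, wLR, gamma, a = 1.0, 0.5, 0.8, 1.0   # illustrative parameters
r = np.exp(-gamma)
J = lambda p: wLR**2 * (1 - r) / (2 * r) * np.exp(-gamma * p)

def w2_sum(k, pmax=400):
    """omega^2(k) from the truncated lattice sum in Eq. (5)."""
    p = np.arange(1, pmax + 1)
    return w0**2 - 2.0 * np.sum(J(p) * np.cos(k * a * p))

def w2_closed(k):
    return w0**2 - wLR**2 * (1 - r) / (2 * r) * (
        np.sinh(gamma) / (np.cosh(gamma) - np.cos(k * a)) - 1.0)

for k in (0.0, 1.0, np.pi):
    print(k, w2_sum(k), w2_closed(k))
# k = 0 gives w0^2 - wLR^2; k = pi gives w0^2 + wLR^2*(1-r)/(1+r)
```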
We shall assume that a no-resonance condition holds:

$$D_\nu =D(\nu \omega ,\nu k)\ne 0,\qquad \nu \in N^{*},\quad \nu \ne 1.$$ (6)

It is well known that the effect of a weak nonlinearity occurs at large space-time scales, determining a redistribution of energy over higher harmonics and a modulation of the amplitude. In order to investigate these effects we shall use the multiple-scales method (reductive perturbation method). The method starts by introducing slow space-time variables

$$x=ϵan,\qquad t_j=ϵ^jt$$ (7)

and expanding $`y_n`$ in an asymptotic perturbative series, which due to the form (3) of the nonlinearity $`G`$ is given by

$$y_n=\underset{\nu =1}{\overset{odd}{\sum }}e^{i\nu \theta }\underset{p=\nu }{\overset{\mathrm{\infty }}{\sum }}ϵ^pY_{p,\nu }(x;t_1,t_2,\ldots )+\mathrm{c.c.}$$ (8)

Recently several papers have used this method to discuss the propagation of quasi-monochromatic waves in weakly nonlinear media, or of long surface waves in shallow water. Very interesting are the conclusions concerning the role played by the NLS hierarchy, or the KdV hierarchy, in eliminating the secular terms which would destroy the asymptotic character of the perturbative series. Of special interest for the present paper is the reference, which will be followed as closely as possible. In calculating the time derivative we have to take into account that $`t`$ appears in $`\theta `$ as well as in the slow time variables $`t_1,t_2,\ldots `$ Also, in writing the expressions for $`y_{n\pm 1},y_m`$ we have to expand the corresponding amplitudes around the point $`n`$. Taking these precautions, the calculations are straightforward (although quite tedious in the higher orders): the asymptotic expansion is introduced in (1), (2), (3) and the coefficient of each power of $`ϵ`$ and each harmonic $`e^{i\nu \theta }`$ is set equal to zero. In the first order in $`ϵ`$ we re-obtain the dispersion relation (5). In the order $`ϵ^2`$ the amplitude $`Y_{1,1}`$ has to satisfy the equation

$$L_+Y_{1,1}=\left(\frac{\partial }{\partial t_1}+v_g\frac{\partial }{\partial x}\right)Y_{1,1}=0$$ (9)

and consequently $`Y_{1,1}`$ will depend only on the variable $`\xi =x-v_gt_1`$, where $`v_g=\frac{d\omega }{dk}=\omega _1`$ is the group velocity. In the next order, $`ϵ^3`$, from the terms proportional to $`e^{i\theta }`$ we get

$$L_+Y_{2,1}=\frac{\partial Y_{1,1}}{\partial t_2}-K_2(Y_{1,1})$$ (10)

$$K_2(Y_{1,1})=i\omega _2\left(\frac{\partial ^2Y_{1,1}}{\partial \xi ^2}+q|Y_{1,1}|^2Y_{1,1}\right)$$ (11)

Here $`\omega _n=\frac{1}{n!}\frac{d^n\omega }{dk^n}`$ and $`q=\frac{A}{2\omega }(2+\mathrm{cos}2ka-3\frac{B}{A})`$. As the r.h.s. of (10) is in the null space of $`L_+`$, $`Y_{2,1}`$ would blow up linearly in $`t_1`$ unless the r.h.s. is strictly equal to zero, i.e. $`Y_{1,1}`$ has to evolve in $`t_2`$ according to the cubic nonlinear Schrödinger equation ($`c=\frac{q}{2\omega _2}`$)

$$\frac{\partial Y_{1,1}}{\partial t_2}=i\omega _2\left(\frac{\partial ^2Y_{1,1}}{\partial \xi ^2}+2c|Y_{1,1}|^2Y_{1,1}\right)$$ (12)

In this case $`Y_{2,1}`$ will also depend on the characteristic coordinate $`\xi `$ only. From the terms proportional to the third harmonic $`e^{3i\theta }`$ one obtains

$$D_3Y_{3,3}+(A\mathrm{cos}3ka-B)Y_{1,1}^3=0$$ (13)

and due to the no-resonance condition (6) this is an algebraic equation giving $`Y_{3,3}`$ in terms of $`Y_{1,1}`$. The same thing happens for all the higher harmonics, and the corresponding amplitudes $`Y_{p,\nu }`$ can be explicitly written in terms of $`Y_{p,1}`$ and their derivatives. Therefore we shall concentrate our attention on the amplitudes $`Y_{p,1}`$ related to the first harmonic $`e^{i\theta }`$. The solution of the NLS eq.
(12) depends on the sign of $`\omega _2`$ and $`q`$. As $`\omega _1`$ vanishes at $`k=0`$ and $`k=\pi `$, there is a point $`k_c\in (0,\pi )`$ for which $`\omega _2(k_c)=0`$. If $`k<k_c`$ ($`k>k_c`$) we have $`\omega _2>0`$ ($`\omega _2<0`$). The sign of $`q`$ depends on the constants $`A`$ and $`B`$. For $`A>0`$ and $`B<\frac{A}{3}`$ it is always positive, while for $`B>A`$ it is negative. Depending on the signs of $`\omega _2`$ and $`q`$ the NLS eq. (12) can have bright or dark soliton solutions. In the order $`ϵ^4`$, from the terms proportional to $`e^{i\theta }`$ we get

$$\frac{\partial Y_{2,1}}{\partial t_2}-K_2^{\prime }(Y_{2,1})=\frac{\partial Y_{1,1}}{\partial t_3}+\omega _3\frac{\partial ^3Y_{1,1}}{\partial \xi ^3}-2c\frac{\omega _1\omega _2}{\omega }Y_{1,1}\frac{\partial |Y_{1,1}|^2}{\partial \xi }+q_1|Y_{1,1}|^2\frac{\partial Y_{1,1}}{\partial \xi }.$$ (14)

Here

$$K_2^{\prime }(Y_{2,1})=i\omega _2\left(\frac{\partial ^2Y_{2,1}}{\partial \xi ^2}+2c(Y_{1,1}^2Y_{2,1}^{*}+2|Y_{1,1}|^2Y_{2,1})\right)$$ (15)

is the Frechet derivative of $`K_2`$, and $`q_1=\frac{dq}{dk}`$. The l.h.s. of (14) is the linearized NLS equation. It is well known that the commuting symmetries $`\sigma _j`$ of the NLS equation are solutions of this equation. As they are important for our further discussion we recall the expressions of the first ones (by $`\mathrm{\Psi }`$ we shall denote a solution of the NLS eq.):

$$\sigma _0=i\mathrm{\Psi },\quad \sigma _1=\frac{\partial \mathrm{\Psi }}{\partial \xi },\quad \sigma _2=i\left(\frac{\partial ^2\mathrm{\Psi }}{\partial \xi ^2}+2c|\mathrm{\Psi }|^2\mathrm{\Psi }\right),\quad \sigma _3=\frac{\partial ^3\mathrm{\Psi }}{\partial \xi ^3}+6c|\mathrm{\Psi }|^2\frac{\partial \mathrm{\Psi }}{\partial \xi }.$$ (16)

Eq. (14) is a forced linear equation for $`Y_{2,1}`$. It is necessary to identify the secular terms in the r.h.s. of (14) and then to fix the $`t_3`$ dependence of $`Y_{1,1}`$ in such a way as to eliminate their effect. These secular terms have to be found among the members of the null space of the linearized NLS equation, i.e. among the commuting symmetries $`\sigma _j`$. Indeed, if such a symmetry $`\sigma `$ existed in the r.h.s., it would generate a $`t_2\sigma `$ contribution to $`Y_{2,1}`$, and the asymptotic character of the expansion (8) would be destroyed in a time $`t_2=O(ϵ^{-1})`$. Two such symmetries ($`\sigma _0`$, $`\sigma _3`$) are easily seen in the r.h.s. of (14) if it is written in the form

$$\frac{\partial Y_{1,1}}{\partial t_3}+\omega _3\left(\frac{\partial ^3Y_{1,1}}{\partial \xi ^3}+6c|Y_{1,1}|^2\frac{\partial Y_{1,1}}{\partial \xi }\right)+N(Y_{1,1})$$ (17)

where

$$N(Y_{1,1})=-2c\frac{\omega _1\omega _2}{\omega }Y_{1,1}\frac{\partial |Y_{1,1}|^2}{\partial \xi }+(q_1-6c\omega _3)|Y_{1,1}|^2\frac{\partial Y_{1,1}}{\partial \xi }.$$

In order to avoid this secular behaviour we require that the $`t_3`$ dependence of $`Y_{1,1}`$ be given by the following complex modified KdV equation,

$$\frac{\partial Y_{1,1}}{\partial t_3}+\omega _3\left(\frac{\partial ^3Y_{1,1}}{\partial \xi ^3}+6c|Y_{1,1}|^2\frac{\partial Y_{1,1}}{\partial \xi }\right)=0$$ (18)

which is the next equation in the NLS hierarchy. The influence of the rest $`N(Y_{1,1})`$ on $`Y_{2,1}`$ can be further treated using a Green function formalism. Let us consider a single-soliton solution

$$\mathrm{\Psi }=2\frac{P_1}{\sqrt{c}}\frac{e^{i\varphi }}{\mathrm{cosh}z}$$ (19)

$$\varphi (\xi ,t_2)=2S_1\xi +4\omega _2(S_1^2-P_1^2)t_2+\varphi _0$$

$$z(\xi ,t_2)=2P_1(\xi -\xi _0+4\omega _2S_1t_2)$$

where $`S_1`$, $`P_1`$ are the real and imaginary parts of the complex eigenvalue $`\zeta _1=S_1+iP_1`$ in the inverse scattering transform method, and $`\varphi _0`$, $`\xi _0`$ are the initial phase and the initial position of the soliton.
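Before perturbing this solution, one can verify Eq. (12) numerically. The sketch below, a standard split-step Fourier integration (an added illustration, not a method used in the paper), propagates the stationary soliton of Eq. (19) with $`S_1=0`$, taking $`\omega _2>0`$ and $`c>0`$ (the bright-soliton regime), and checks that the sech envelope is preserved:

```python
import numpy as np

w2, c, kappa = 1.0, 1.0, 1.0           # omega_2, c = q/(2*omega_2), kappa = 2*P_1
N, L, dt, steps = 1024, 40.0, 1e-3, 4000

xi = np.linspace(-L / 2, L / 2, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)
Y = (kappa / np.sqrt(c)) / np.cosh(kappa * xi)   # Eq. (19) with S_1 = 0

for _ in range(steps):
    # half linear step of dY/dt2 = i*w2*Y_xixi
    Y = np.fft.ifft(np.exp(-1j * w2 * k**2 * dt / 2) * np.fft.fft(Y))
    # full nonlinear step of dY/dt2 = 2i*c*w2*|Y|^2*Y
    Y *= np.exp(2j * c * w2 * np.abs(Y)**2 * dt)
    Y = np.fft.ifft(np.exp(-1j * w2 * k**2 * dt / 2) * np.fft.fft(Y))

err = np.max(np.abs(np.abs(Y) - (kappa / np.sqrt(c)) / np.cosh(kappa * xi)))
print(f"max drift of |Y| from the initial sech profile: {err:.1e}")
```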
Applying the above procedure, in order to eliminate the possible secularities the soliton parameters must become $`t_3`$-dependent. This dependence can be found by introducing (19) into (18). The complex eigenvalue remains unchanged, while for $`\varphi _0`$, $`\xi _0`$ the following linear equations are found:

$$\frac{d\varphi _0}{dt_3}=8\omega _3S_1(S_1^2-3P_1^2),\qquad \frac{d\xi _0}{dt_3}=8\omega _3P_1(3S_1^2-P_1^2).$$ (20)

A similar analysis was given by Kodama, and the same results are obtained using the direct perturbation method of Keener and McLaughlin. More complex situations and details will be published elsewhere. Let us now consider the situation when $`\omega _2=0`$, i.e. the propagation of a wave with the wave vector $`k_c`$. As $`\omega _1`$ has a maximum at this point, it represents the wave propagating with the highest group velocity. A similar situation is encountered in the case of pulse propagation in nonlinear optical fibers, where this point is known as the ”zero dispersion point” (ZDP). The power required to generate an optical soliton is minimal at this point, and its evolution in space and time is governed by a modified NLS equation, with a third-order derivative included. We shall show that a similar situation appears in the present case. In applying the multiple-scales method we shall use the same asymptotic expansion (8) for the vibronic variable. Then the nonlinearity contribution begins with terms of order $`ϵ^3`$. To have contributions of the same order from third-order derivatives we have to change the scaling of the $`\xi `$ variable, namely

$$\xi =ϵ^{\frac{2}{3}}(an-\omega _1t).$$ (21)

We also have to take into account a dependence of the phase $`\theta `$ of the propagating wave on the slow variable $`\xi `$. Defining the local wave number $`k`$ as the derivative of the phase $`\theta `$ with respect to $`(an)`$, we find that $`k`$ is slightly different from $`k_c`$, and the simplest choice is

$$k=k_c(1+ϵ^{\frac{2}{3}}).$$ (22)

Expanding all the quantities depending on $`k`$ around the point $`k_c`$, in the order $`ϵ^3`$ the following equation is found for the leading amplitude $`Y_{1,1}\equiv \mathrm{\Psi }`$:

$$i\mathrm{\Psi }_T+3\mathrm{\Psi }_{XX}+i\mathrm{\Psi }_{XXX}+Q|\mathrm{\Psi }|^2\mathrm{\Psi }=0$$ (23)

where $`X=k_c\xi `$, $`T=\mathrm{\Omega }t_2`$, $`\mathrm{\Omega }=\omega _3k_c^3`$ and $`Q=\frac{q}{\mathrm{\Omega }}`$. It has the same form as the equation describing the propagation of nonlinear pulses in optical fibers in the ZDP region. In our case it makes the transition between the two regions where bright and dark solitons exist. It seems that it is not completely integrable, but some analytical and numerical results suggest that long-living localized excitations exist. Further investigations are necessary.

Acknowledgements: Two of the authors (DG and ASC) would like to thank the Organizing Committee of NEEDS 99 for financial support. Helpful discussions with Professors A. Degasperis, L. Kalyakin, V.V. Konotop, P.M. Santini are kindly acknowledged.
# A New Pulsar/SNR Pair: AX J1845-0258 in G29.6+0.1

## 1. Introduction

The 7-s pulsar AX J1845-0258 was discovered during an automatic search of the ASCA archival data (Gotthelf & Vasisht 1998; Torii et al. 1998). Based on its spectral and timing properties, AX J1845-0258 is likely the latest addition to the class of anomalous X-ray pulsars (AXPs) (Duncan & Thompson 1996; Mereghetti & Stella 1995; van Paradijs et al. 1995). Evidence included a long rotation period, large modulation ($`\sim 30\%`$), a steady short-term X-ray flux during the original ASCA observation, a steep characteristic spectrum (power-law photon index $`\mathrm{\Gamma }\sim 5`$), a location at low Galactic latitude, and the lack of a known counterpart. A rough distance estimate derived from the X-ray absorption places the pulsar at a distance of 5-15 kpc, giving an inferred X-ray luminosity of order $`10^{35}`$ erg s<sup>-1</sup>. Herein we report on new ASCA X-ray and VLA radio observations directed at the pulsar’s location. Our goal was to identify the pulsar and to confirm or refute the AXP hypothesis by measuring the spin-down rate of the pulsar and searching for an associated radio supernova remnant (SNR). We succeeded in finding a young radio SNR within the pulsar’s error circle; however, the pulsar was not seen again. Instead, we find a faint ASCA point source (Vasisht et al. 2000) at the center of the newly discovered radio SNR G29.6+0.1 (Gaensler et al. 1999). We argue that this faint source is the pulsar AX J1845-0258 in a low state; we consider the pulsar’s location at the center of a young SNR and the lack of a radio counterpart as evidence for the AXP interpretation, but with a twist.
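The luminosity quoted above follows from $`L_X=4\pi d^2F_X`$. With a hypothetical absorbed flux of order $`10^{-11}`$ erg cm<sup>-2</sup> s<sup>-1</sup> (an assumed value for illustration only; the measured flux is not reproduced here), the 5-15 kpc distance bracket gives:

```python
import numpy as np

KPC = 3.086e21   # cm
flux = 1e-11     # erg/cm^2/s, hypothetical illustrative value

for d_kpc in (5.0, 15.0):
    L = 4.0 * np.pi * (d_kpc * KPC)**2 * flux
    print(f"d = {d_kpc:4.1f} kpc -> L_X ~ {L:.1e} erg/s")
# spans ~3e34 to ~3e35 erg/s, i.e. of order 10^35 erg/s as stated
```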
## 2. THE X-RAY OBSERVATIONS

We revisited the field containing the pulsar AX J1845-0258 with the ASCA observatory on March 28-29, 1999 UT. Figure 1 reproduces the smoothed and exposure-corrected image taken with the gas imaging spectrometers (GISs) aboard ASCA. The GIS is sensitive to photons in the 1-10 keV energy range and has a spatial resolution of $`1^{\prime }`$-$`2^{\prime }`$. All data were edited following the standard ASCA reduction procedures, resulting in an effective observation time of $`49`$ ks. Near the center of the field-of-view we find a faint unresolved point source within the large $`3^{\prime }`$-radius error circle for AX J1845-0258. The pulsar’s poor astrometry is due to the extreme off-axis detector location of the discovery observation. The faint source is also detected by ASCA’s solid-state imaging spectrometers (SISs) (see Fig. 2) with a similar significance of $`5\sigma `$. The spatial resolution of the SIS is $`1^{\prime }`$, but the derived coordinates of $`18^h45^m53.3^s`$, $`-02^{\circ }56^{\prime }42^{\prime \prime }`$ (J2000) have an uncertainty of only $`20^{\prime \prime }`$, after correcting for the temperature-dependent coordinate offsets (Gotthelf et al. 2000). We refer to this source as AX J184453.3-025642, and consider whether this is indeed the expected pulsar, but at a flux level an order of magnitude less than expected; the dearth of source photons prohibits a proper spectral analysis or a search for pulsations, which might allow an identification with AX J1845-0258.

## 3. THE VLA RADIO IMAGES

Radio observations of the field of AX J1845-0258 were made with the D-configuration of the Very Large Array (VLA) on 1999 March 26. The total observing time was 6 hr, of which 4.5 hr was spent observing in the 5 GHz band, and the remainder in the 8 GHz band. At both 5 and 8 GHz a distinct shell of emission is seen, which is designated G29.6+0.1 (see Fig. 2). The shell is clumpy, with a particularly bright clump on its eastern edge. In the east the shell is quite thick (up to 50% of the radius), while the north-western rim is brighter and narrower. Two point sources can be seen within the shell interior. The shell-like radio emission ($`5.0^{\prime }`$ in diameter) is found to be linearly polarized and non-thermal, which, along with the lack of a significant counterpart in 60 $`\mu `$m IRAS data, are characteristic properties of supernova remnants (e.g. Whiteoak & Green 1996). G29.6+0.1 is thus classified as a previously unidentified SNR. Its inferred age suggests a young remnant, with an upper limit of 8000 yr. The location of the X-ray source AX J184453.3-025642 at the center of the SNR is highly unlikely to be due to a chance superposition, suggesting that the two are related.

## 4. THE NATURE OF AX J1845-0258: A VARIABLE AXP?

The lack of a bright pulsator in the new ASCA observation of AX J1845-0258 is quite surprising. The spectral and temporal properties of this pulsar had strongly implied an AXP interpretation. Indeed, the discovery of a young radio remnant coincident with the pulsar is consistent with the AXP hypothesis. Conversely, the detection of an X-ray point source at the center of the SNR is in itself indicative of a neutron star candidate associated with the remnant. This new source is exactly where we would expect the AXP to be, to within errors, consistent with this interpretation. We therefore suggest that AX J184453.3-025642 is indeed the pulsar, but at a much reduced (by an order of magnitude) X-ray flux. We now consider the interesting possibility that AXPs can exhibit extreme, factor-of-ten variability. There is some evidence for this already from two well-studied AXPs, which show large ($`\gtrsim 4`$) variations in flux on year timescales (e.g. 1E 1048.1-593, Oosterbroek et al. 1998). Most intriguing, the properties of the central, unpulsed neutron star candidate in SNR RCW 103 are otherwise typical of the AXPs, but its flux has also been found to vary by an order of magnitude at energies $`>3`$ keV (Gotthelf, Petre, & Vasisht 1999), just what is observed for AX J1845-0258. Conversely, this provides further evidence that RCW 103 is an AXP with unseen pulsations, perhaps due to an unfavorable beaming geometry. The identification of another AXP at the center of a young SNR has important implications for the birth properties of pulsars. This result is certainly consistent with AXPs being young, isolated neutron stars, as argued by the magnetar hypothesis. There is then the possibility that AXPs might exhibit periods of enhanced emission. In this case, the population of AXPs might be much greater than previously thought, and we are only detecting a fraction of AXPs, those currently in their bright “on” state. Perhaps a duty cycle (fraction of time the AXP is “on”) of only $`\lesssim 5\%`$ would be required to square the known Galactic SNR population with the detected AXPs, if most young neutron stars manifest themselves as AXPs, as some authors suggest (see Gotthelf 1998). Although the magnetar hypothesis is attractive, we cannot reject a binary-system origin, perhaps embedded in and associated with a young SNR. Further monitoring of this region is planned.
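The $`\lesssim 5\%`$ duty cycle mentioned above can be illustrated with a back-of-the-envelope count; the numbers below are assumptions for illustration, not values taken from this paper:

```python
n_axp = 6     # assumed number of detected AXPs (illustrative)
n_snr = 220   # assumed number of catalogued Galactic SNRs (illustrative)

# If most young neutron stars pass through an AXP phase, the detected
# fraction bounds the time an AXP spends in its bright "on" state.
print(f"implied duty cycle <~ {100 * n_axp / n_snr:.0f}%")
```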
## ACKNOWLEDGEMENTS

This research is supported in part by a NASA Hubble Fellowship grant HF-01107.01-98A (B.M.G.) and by a NASA LTSA grant NAG5-7935 (E.V.G. & G.V.).

## REFERENCES

Duncan, R. & Thompson, C. 1995, MNRAS, 275, 255

Gaensler, B. M., et al. 1999, ApJ, 526, TBD

Gotthelf, E. V. 1998, astro-ph/9809139

Gotthelf, E. V. 2000, ApJ, submitted

Gotthelf, E. V. & Vasisht, G. 1998, NA, 3, 293

Gotthelf, E. V., et al. 1999, ApJ, 514, L107

Mereghetti, S. & Stella, L. 1995, ApJ, 442, L17

Oosterbroek, T., et al. 1998, A&A, 334, 925

Torii, K., et al. 1998, ApJ, 503, 843

van Paradijs, J., et al. 1995, A&A, 299, L41

Vasisht, G., et al. 2000, ApJ, in prep.

Whiteoak, J. & Green, A. J. 1996, A&AS, 118, 329
# 1 Introduction

In string theory, the perturbative two-dimensional world-sheet data contains information about the target-space geometry background on which the string propagates. Perturbatively one expects that an effective target-space theory will have on its world volume a definite geometry, the very same as that encoded in the two-dimensional world-sheet theory. However, one knows from $`T`$-duality that the metric on that world-volume may actually be ambiguous. Moreover, there is no a priori reason why new ambiguities may not emerge also non-perturbatively. Actually, in some topological world-sheet theories the topology of the target space in the sigma-model does not define unambiguously the topology of the world-volume in target space. In other cases, such as type IIA string theory, the world-volume dimensionality of the effective target-space theory is larger than ten and is actually eleven. Recently it was found that the world-volume of the effective target-space theory may be smaller than that expected from the two-dimensional sigma-model; moreover, the effective target-space theory may be just a field theory not containing gravity. For example, $`𝒩=4`$, $`D=4`$ supersymmetric Yang–Mills (SYM) theory describes a string propagating in the ten-dimensional target space $`AdS_5\times 𝐒^5`$. Not only are the geometry, the dimensionality and the topology of the target-space manifolds largely modified, but the commutativity properties of the target-space coordinates were also found to be ambiguous: in a certain limit, in which the background field $`B`$ is non-vanishing in some directions, the metric components are of $`O(ϵ)`$, and the string Regge slope $`\alpha ^{\prime }`$ is of order $`O(\sqrt{ϵ})`$, a string probe will not distinguish between one gauge system on a target-space manifold whose variables are commutative and another gauge system on a noncommutative (NC) target space whose coordinates satisfy

$$[x^i,x^j]=i\theta ^{ij},$$ (1.1)

with $`\theta \sim B^{-1}`$. This limit was originally introduced as a limit of $`M`$-theory in the context of the matrix model. The weak-coupling picture in terms of standard open strings on D-branes was presented and further studied from various points of view. For many years one has been curious about the manner in which such non-commutativity would influence the behaviour of the system. Given that in string theory there are equivalences between theories on commutative and noncommutative spaces, it may not have been a total surprise that the entropies of both systems were found to be identical in a certain large $`N`$ limit. In fact, it was shown that, at the level of the large $`N`$ diagrammatic combinatorics, the modifications implied by the presence of the torsion can be relegated to torsion-dependent phase factors multiplying torsionless $`n`$-point functions in momentum space. In particular, zero-point functions such as the free energy and the entropy are unchanged. Although the close similarity of thermodynamical functions at low temperatures (compared to the noncommutativity scale) was expected, the fact that the similarities persist, for large $`N`$, at high temperatures was more of a surprise. These identities have been demonstrated also by applying the AdS/CFT correspondence and considering the supergravity limit on the AdS side, a description valid at strong ’t Hooft coupling $`g_{\mathrm{YM}}^2N\gg 1`$.
The appropriate masterfield has been identified for the zero-temperature system and the additional masterfield, a black-hole configuration, has been identified for the finite-temperature case . While these supergravity configurations do depend on the background torsion field, it has been shown by calculating classically the black-hole entropy that large $`N`$ thermodynamical functions are $`B`$-independent. The classical supergravity results capture the leading $`N`$ properties of the gauge-theory side. There are no known limitations, based on diagrammatic analysis, on the $`B`$-dependence of the entropy to next to leading order in $`1/N^2`$. It may not be so easy to estimate these corrections in the non-commutative gauge-theory language. However one may also approach the calculation from the supergravity side of the correspondence. We have used in ref. a WKB approximation to estimate the next to leading order effects on the entropy for vanishing values of $`B`$. This method did not take into account stringy quantum corrections but did account for quantum corrections to the classical supergravity picture. This is actually usefull for the limit at hand in which stringy effects are decoupled. As the main result of this paper, we find that the indroduction of the magnetic $`B`$-field, or in other variables a non-infinite $`\theta `$ background, leads to a large suppression of the next to leading order free energy and entropy, in fact the entropy correction vanishes at large temperatures, leaving only the leading order entropy. Equivalently, the limit of infinite NC length $`\theta \mathrm{}`$, at fixed energy is completely saturated by the planar limit. For vanishing $`\theta `$, i.e. for the gauge theory on a commuting manifold the next to leading order results are of the same qualitative properties as the leading result. In section 2 we review the WKB approximation to the $`1/N`$ corrections to the free energy and entropy. We apply the method to obtain highly suppressed $`1/N`$ corrections in the case of the four-dimensional gauge theory, i.e. the case of a large stack of $`N`$ D3-branes. We also recover the commuting case results for a vanishing value of $`\theta `$, that is in the limit of small and large $`B`$ fields. In section 3 these results are generalized to D$`p`$ branes (as well as NS5-branes). We study the entropy of the system as a function of both $`p`$ and the rank $`r`$ of the magnetic field $`B`$. For all $`p<7`$ the large-temperature planar entropy of the black-hole configuration does not depend on the value of $`B`$. The more complicated behaviour for $`7>p>4`$ is also unchanged. The $`1/N^2`$ corrections do depend on the value of $`B`$. The main result is again that we find them to be always much softer than those corresponding to the $`B=0`$ case. We finally touch upon the most general configuration which supposedly describes the supergravity side in the aditional presence of a constant electric field. After this work has been completed we received the article , which contains some overlap with the material in section 3. ## 2 WKB Estimations of 1/N Corrections to the Entropy In the context of the AdS/CFT correspondence, the sum over diagrams with toroidal topology in the ’t Hooft classification should be related, at strong coupling, to the one-loop diagrams of the corresponding dual string theory. In our case, we are interested in the one-loop (toroidal world-sheet) diagram of type IIB string theory in the appropriate string background which leads to a gauge theory. 
Lacking an operative description in terms of an exact CFT, the calculation of this diagram in closed form is beyond our capabilities. We can, however, produce estimates by means of approximations to its low-energy limit: the one-loop diagrams of IIB supergravity. One observable which is rather independent of the regularization ambiguities of supergravity is the vacuum-subtracted statistical free energy. At one-loop, the supergravity fields can be regarded as free (interacting only with the background) and the thermal free energy can be evaluated by means of an oscillator sum for each field: $$\beta F(\beta )=\underset{\mathrm{species}}{}\mathrm{Tr}(1)^\mathrm{F}\mathrm{log}\left(1(1)^\mathrm{F}e^{\beta \omega }\right),$$ (2.2) where F is the space-time fermion number and the trace is over the spectrum of physical fluctuations in the background, with frequencies $`\omega `$ given by the eigenvalues of the operator $`i_t`$, associated to one particular temporal Killing vector of the background, which we assume static. If we evaluate (2.2) by some determinant on the compactified euclidean continuation of the background, the inverse temperature $`\beta `$ is the period of identification of the euclidean time. The vacuum energy has been subtracted in this definition of the free energy, so that the relation to the one-loop path integral is $$I^{(1)}=\beta E_{\mathrm{vac}}+\beta F(\beta ).$$ (2.3) Under the assumption of local extensivity we can estimate the statistical sum by $$\beta F(\beta )\mathrm{d}^dx\sqrt{|g|}\beta _{\mathrm{loc}}F_0(\beta _{\mathrm{loc}}),$$ (2.4) with a red-shifted local inverse temperature $`\beta _{\mathrm{local}}=\beta \sqrt{g_{tt}}`$, and $`F_0`$ the flat-space free energy. Namely, this is an adiabatic approximation in which one assumes local thermal equilibrium in cells which are still small compared to the global features of the geometry, so that their contribution to the total free energy is given by the red-shifted flat-space free energy of the cell, and one further assumes extensivity with respect to the partition in cells. These assumptions can be justified with a standard application of the WKB approximation to the solutions of the wave equation $`\omega ^2+_t^2=0`$ (see for example refs. ). Thus, the WKB approximation is good if the metric is sufficiently smooth. For example, for $`AdS_{d+1}`$ with radius $`R`$, the derivative of the inverse local temperature $`|_r\beta _{\mathrm{loc}}|`$ is bounded by $`\beta /R`$, with $`\beta `$ the temperature at the centre. According to the AdS/CFT correspondence, this ratio is small precisely in the high-temperature limit, and our WKB approximation is better the higher the temperature. For massless fields in $`d`$ space-time dimensions, one has $`\beta F_0=AT^{d1}`$, with $`A`$ a constant proportional to the total number of particle degrees of freedom. Thus, the WKB approximation involves integration of $`(\sqrt{g_{tt}})^{1d}`$ over the volume. A more geometrical characterization can be given defining the optical metric by the conformal transformation $$ds_{\mathrm{optical}}^2=\frac{1}{g_{tt}}ds_{\mathrm{euclidean}}^2,$$ (2.5) from the Wick-rotated metric with compact time $`tt+\beta `$. Then, the contribution of a given region $`𝐗`$ of space-time to the one-loop free energy is proportional to the optical volume of this region, which we shall denote by $`\stackrel{~}{\mathrm{Vol}}(𝐗)`$. For space-times with a black hole, we should consider only the optical volume of the region not excluded by the black-hole horizon. 
The reason being that the euclidean metric of a black hole terminates at the horizon, which represents a radial cut-off. Strictly speaking, the red-shift estimate breaks down very close to the horizon, because the local temperature diverges. This local divergence can be interpreted as contributing to the renormalization of the Newton constant, as in ref. . Thus, in dealing with black-hole spacetimes, we shall consider only the optical volume of the asymptotic region, sufficiently far from the horizon, i.e. we consider the free energy of the Hawking radiation in equilibrium with the black hole. This naive subtraction of the horizon divergence is enough for the purpose of order of magnitude estimates . For a curved space-time, a field is regarded as massless if its mass is smaller than the local temperature $`T_{\mathrm{loc}}=T/\sqrt{g_{tt}}`$. Otherwise, it is massive, and can be decoupled from the statistical sum in that region of space-time. This means that, for a situation with locally varying Kaluza–Klein thresholds one may partition the whole manifold $`𝐗`$ in cells $`𝐗_i`$ of effective dimension $`d_i`$, defined by the condition that the effective radii be sufficiently large compared to the local temperature: $$\beta _{\mathrm{loc}}R_{\mathrm{loc}}.$$ (2.6) Finally, neglecting threshold effects, from the regions where $`\beta _{\mathrm{loc}}R_{\mathrm{loc}}`$, the WKB approximation to the one-loop free energy can be written: $$I_{\mathrm{WKB}}^{(1)}=\beta E_{\mathrm{vac}}\underset{i}{}A_iT^{d_i}\stackrel{~}{\mathrm{Vol}}(𝐗_i),$$ (2.7) for a decomposition in “cells” $`𝐗_i`$, each of effective naive dimension $`d_i`$. If we can isolate a regime where the dominant asymptotics is of the form $`\beta FAT^\gamma `$, the corresponding one-loop entropy is $$S_{\mathrm{WKB}}^{(1)}(\gamma +1)AT^\gamma .$$ (2.8) In this case, we can define $$d_{\mathrm{eff}}=\gamma +1$$ (2.9) to be the effective dimension, as determined by the high-temperature asymptotics. Notice that this dimension is in general different from the naive dimensions $`d_i`$ of the cells in which we have partitioned the manifold. The reason is that the optical volume of a given cell may depend non-trivially on the asymptotic reference temperature. For example, for $`AdS_{d+1}`$, and generalizations involved in the AdS/CFT correspondence, the effective dimension as defined by the high-temperature asymptotics is $`d`$ instead of $`d+1`$, . This is in fact a manifestation of holography at the level of $`O(1/N^2)`$ corrections. ### 2.1 The Basic Example The simplest example is given by the gravitational description of the $`𝒩=4`$ SYM theory at large $`N`$, obtained from a stack of D3-branes with a non-zero $`B`$-field on a single spatial two-plane. At large ’t Hooft coupling $`\lambda =g_{\mathrm{YM}}^2N`$, the master field of the theory with a NC parameter $`\theta `$ is encoded in the metric derived in refs. and : $$\frac{ds^2}{\alpha ^{}}=\frac{U^2}{\sqrt{\lambda }}\left(dt^2+dy^2+\widehat{f}(U)d𝐱^2\right)+\sqrt{\lambda }\left(\frac{dU^2}{U^2}+d\mathrm{\Omega }_5^2\right)$$ (2.10) with $$\widehat{f}(U)=\frac{1}{1+(U\mathrm{\Delta })^4}$$ (2.11) and $`\mathrm{\Delta }=\lambda ^{1/4}\sqrt{\theta }`$. Notice that the perturbative NC energy scale, $`\theta ^{1/2}`$, differs by powers of the ’t Hooft coupling from the value of the $`U`$-coordinate threshold for the onset of NC effects in the metric (2.10), which is given by $`U_\mathrm{\Delta }=1/\mathrm{\Delta }`$. 
The associated length scale in the gauge theory, according to the UV/IR correspondence , is given by $$a=\frac{\sqrt{\lambda }}{U_\mathrm{\Delta }}=\lambda ^{1/4}\sqrt{\theta },$$ (2.12) it differs from the weak-coupling NC length scale, $`\sqrt{\theta }`$, by powers of the ’t Hooft coupling<sup>1</sup><sup>1</sup>1 We have absorbed various $`O(1)`$ constants in the definition of $`\lambda `$ and $`\alpha ^{}`$.. The small $`U`$ or infrared region is the standard $`AdS_5\times 𝐒^5`$ space with radius $`R=\sqrt{\alpha ^{}}\lambda ^{1/4}`$, in agreement with expectations, since NC effects should be irrelevant in the deep infrared regime. Conversely, the $`\theta 0`$ limit at fixed energy and coupling gives back the standard large $`N`$ master field of the commutative theory. In the non-extremal case the horizon sits at $`U_0=T\sqrt{\lambda }`$. The local value of the inverse temperature is $`\beta _{\mathrm{loc}}=\beta UR/\sqrt{\lambda }`$ for $`UU_0`$, while the local value of the $`𝐒^5`$ radius is $`R(𝐒^5)=R`$. So, for $`U>U_0=T\sqrt{\lambda }`$ we drop the five-sphere and the effective (euclidean) optical metric of interest is $$ds_{\mathrm{optical}}^2=dt^2+dy^2+\widehat{f}(U)d𝐱^2+\lambda \frac{dU^2}{U^4},$$ (2.13) with optical volume $$\stackrel{~}{\mathrm{Vol}}=\sqrt{\lambda }_{U_0}^{\mathrm{}}𝑑t𝑑y𝑑𝐱𝑑U\frac{\widehat{f}(U)}{U^2}.$$ (2.14) So we finally obtain $$I_{\mathrm{WKB}}^{(1)}=\beta E_{\mathrm{vac}}A(LT)^3(aT)$$ (2.15) in terms of the integral $$(aT)=_1^{\mathrm{}}\frac{dx}{x^2(1+(aT)^4x^4)},$$ (2.16) which can be explicitly evaluated: $$=1\frac{\pi aT}{\sqrt{8}}+\frac{aT}{4\sqrt{2}}\left[2\mathrm{arctan}(1+\sqrt{2}aT)2\mathrm{arctan}(1\sqrt{2}aT)+\mathrm{log}\left(\frac{1\sqrt{2}aT+(aT)^2}{1+\sqrt{2}aT+(aT)^2}\right)\right]$$ The important feature of this function is that it represents a small correction at low temperature $`aT1`$: $$=1\frac{\pi aT}{\sqrt{8}}+\frac{(aT)^4}{3}\frac{(aT)^8}{7}+\mathrm{},$$ (2.17) but a large suppression for very high temperatures, compared to the NC scale $`aT1`$: $$\frac{1}{5(aT)^4}\frac{1}{9(aT)^8}+\mathrm{}$$ (2.18) This means that the one-loop free energy scales like a vacuum contribution in the large temperature limit $`aT1`$. In other words, the $`1/N^2`$ corrections to entropy vanish in such a limit: the extra contribution from the extreme ultraviolet regime is as if it represented a zero-dimensional volume. As mentioned in the introduction, the planar $`O(N^2)`$ entropy is expected to be the same as the non-commutative one, on the grounds of weak-coupling arguments . This fact was verified at strong-coupling in ref. , i.e. the horizon area in Einstein frame does not change beyond the NC scale $`a`$. Our result indicates that, at least to leading order, the source of all the high-temperature entropy is in the planar evaluation of degrees of freedom. Conversely, we can say that, in the limit of large NC parameter $`\theta \mathrm{}`$, at fixed energy, we are led to a purely planar theory, in the sense that the large $`N`$ description is effectively classical (trivial $`1/N`$ corrections). ## 3 Generalizations Most of the previous discussion admits generalization to general D$`p`$-branes with $`1<p<7`$. The supergravity string-frame solution for a stack of $`N`$ D$`p`$-branes with an aligned $`B`$-field (before the decoupling limits) is given in ref. 
: $$ds^2=\frac{1}{\sqrt{H(\rho )}}\left(dt^2+d𝐲^2+f(\rho )d𝐱^2\right)+\sqrt{H(\rho )}\left(d\rho ^2+\rho ^2d\mathrm{\Omega }_{8p}^2\right).$$ (3.19) There is a $`B`$-field of rank $`2r`$ in the spatial directions $`𝐱`$ of intensity<sup>2</sup><sup>2</sup>2We write here the value of the skew-eigenvalues, which we assume all equal in magnitude, for simplicity of notation. $$\alpha ^{}\overline{B}=\mathrm{tan}\vartheta \frac{f(\rho )}{H(\rho )}$$ (3.20) and dilaton $$e^{2\varphi }=e^{2\varphi _{\mathrm{}}}H(\rho )^{\frac{3p}{2}}f(\rho )^r.$$ (3.21) The functions $`H(\rho )`$ and $`f(\rho )`$ are given by $$H(\rho )=1+(R/\rho )^{7p},f(\rho )^1=\mathrm{sin}^2\vartheta H(\rho )^1+\mathrm{cos}^2\vartheta .$$ (3.22) The basic low-energy scaling of ref. $`\rho =\alpha ^{}U`$ leads to $$H(U)\frac{\lambda }{(\alpha ^{})^2U^{7p}},$$ (3.23) where $`R=\sqrt{\alpha ^{}}(G_sN)^{\frac{1}{7p}}`$ in terms of the NC string coupling $`G_s`$ and the corresponding ’t Hooft coupling $`\lambda =g_{\mathrm{YM}}^2N=(\alpha ^{})^{\frac{p3}{2}}G_sN,`$ where we have again absorbed various constants of $`O(1)`$ into the definitions of the parameters. The low-energy scaling introduced by Seiberg and Witten in ref. shrinks the closed string metric in the direction of the $`𝐱`$ coordinates, with a constant $`B`$-field. We may achieve this in the previous solution by a rescaling of the coordinates $`𝐱\alpha ^{}𝐱/\theta `$, in the $`\alpha ^{}0`$ limit with constant $`\theta =\alpha ^{}\mathrm{tan}\vartheta `$. At the same time, the original string coupling is scaled $`e^\varphi _{\mathrm{}}G_s(\alpha ^{}/\theta )^r`$, and the $`B`$-field transforms like a normal tensor under the rescaling: $`\overline{B}B(\alpha ^{}/\theta )^2.`$ In terms of the convenient NC length scale $`\mathrm{\Delta }`$ defined by $$\mathrm{\Delta }^{7p}=\frac{\theta ^2}{\lambda },$$ (3.24) we get the following string metric after this double Maldacena–Seiberg–Witten scaling: $$\frac{ds^2}{\alpha ^{}}=\frac{U^{\frac{7p}{2}}}{\sqrt{\lambda }}\left(dt^2+d𝐲^2+\widehat{f}(U)d𝐱^2\right)+\frac{\sqrt{\lambda }}{U^{\frac{7p}{2}}}\left(dU^2+U^2d\mathrm{\Omega }_{8p}^2\right),$$ (3.25) with $$\widehat{f}(U)=\frac{1}{1+(U\mathrm{\Delta })^{7p}}.$$ (3.26) This result agrees with the recent determination of this function in ref. . The $`U`$-dependent $`B`$-field profile is $$B=B_{\mathrm{}}(U\mathrm{\Delta })^{7p}\widehat{f}(U),$$ (3.27) with $`B_{\mathrm{}}=1/\theta `$. This asymptotic value of the $`B`$-field agrees with the zero slope limit of ref. for the NC parameter matrix: $$\theta ^{ij}=2\pi \alpha ^{}\left(\frac{1}{g+2\pi \alpha ^{}B_{\mathrm{}}}\right)_A^{ij}$$ (3.28) with $`g_{ij}`$ the closed-string metric. A potential confusion stems from the fact that the NC parameter in this formula vanishes both for large and small values of the $`B`$-field. On the other hand, if $`B_{\mathrm{}}=1/\theta `$, the limit of vanishing $`B`$-field seems to make NC effects blow up. This is resolved by noticing that the NC parameter vanishes with the $`B`$-field only if $`\alpha ^{}`$ and the closed-string metric are kept fixed, namely the two limits that turn-off $`\theta `$ do not commute. In the supergravity solution, keeping the open-string scale fixed (Born–Infeld corrections) amounts to keeping the “neck” of the throat at $`U_s(\lambda /(\alpha ^{})^2)^{\frac{1}{7p}}`$ in place in the full solution (3.19). If we now take the vanishing $`B`$-field limit with constant $`g_{ij}`$ and constant $`\alpha ^{}`$, we find $`f(\rho )0`$ and NC features vanish as it should be. 
Coming back to the scaled solution, the dilaton is $$e^{2\varphi }=G_s^2\widehat{f}(U)^rH(U)^{\frac{3p}{2}}=e^{2\varphi _\mathrm{C}}\widehat{f}(U)^r,$$ (3.29) where $`\varphi _\mathrm{C}`$ denotes the dilaton of the $`\mathrm{\Delta }=0`$ theory. With these data, one could study the interplay of phase transitions in these models, depending on the local duality transformations appropriate for each description. Compared to the analysis of the commutative case in ref. , the NC character introduces a new scale in the problem at $`U_\mathrm{\Delta }=1/\mathrm{\Delta }`$, associated to the onset of NC effects. The corresponding length scale in the gauge theory, according to the generalized UV/IR correspondence of is $$a=\sqrt{\frac{\lambda }{U_\mathrm{\Delta }^{5p}}}=\sqrt{\lambda \mathrm{\Delta }^{5p}}.$$ (3.30) Following , the applicability of the supergravity description is controlled by the size of $`\alpha ^{}`$ corrections in the string-metric background. In terms of the “correspondence point” $`U_c=\lambda ^{\frac{1}{3p}}`$ of ref. , one finds that the geometric description is good for $`UU_c`$ when $`p<3`$. Therefore, we need $`U_\mathrm{\Delta }<U_c`$ in order to trust the supergravity solution in the region where NC effects are sizeable. In terms of the ’t Hooft coupling versus the gauge-theory NC length scale, this condition is $`\lambda ^{3p}a<1`$, i.e. we require a sufficently weak coupling. Otherwise, the NC features of the ultraviolet regime must be studied entirely by means of perturbative techniques. For $`p=3`$, the condition for the supergravity picture to capture NC features in the ultraviolet is the ordinary one, independent of the scale: $`\lambda >1`$. On the other hand, for $`p=4`$ the supergravity patch is $`UU_c=1/\lambda `$, so that the NC features are visible in the supergravity description for sufficiently strong coupling: $`\lambda >a`$. Finally, the cases $`p=5,6`$ are somewhat different since they do not follow a standard IR/UV correspondence (equation (3.30) does not have a clear physical interpretation in these cases). Still, we can associate NC effects to the energy scale $`U_\mathrm{\Delta }`$, as measured for example by the mass of a stretched fundamental string probe. The condition for the metric (3.25) to accurately describe the NC effects is thus $`\lambda (U_\mathrm{\Delta })^{p3}>1`$. The non-perturbative thresholds associated with large values of the string dilaton are generally relaxed by turning on the NC moduli. Since $`\widehat{f}U^{p7}`$ vanishes in the large $`U`$ regime, this means that the present metrics have small local string coupling in the $`U\mathrm{}`$ region for all values of $`p`$, provided $`r1`$ (in fact, one needs the slightly stronger condition $`r2`$ for $`p=6`$). Following the general rule, the infrared thresholds associated with small $`U<U_\mathrm{\Delta }`$ singularities are qualitatively the same in the NC case. In general, there could be intermediate regimes with large local string coupling, but such transients can be ignored when working in the ’t Hooft limit with fixed values of the typical energies in the system, as well as $`\lambda `$ and $`\mathrm{\Delta }`$, of $`O(N^0)`$. ### 3.1 Planar Thermodynamics The (somewhat surprising) robustness of the planar thermodynamics of the $`p=3`$ case, discussed in , persists for general values of $`p`$. 
The non-extremal metric is obtained by replacing $$dt^2+hdt^2,dU^2dU^2/h,$$ (3.31) with the euclidean time identified with period the inverse temperature $`tt+\beta `$ and $$h=1(U_0/U)^{7p}$$ (3.32) as usual. Since no $`B`$-field lies in the time direction, these replacements do not affect the parts of the metric which depend on $`\mathrm{\Delta }`$ (the $`𝐱`$ space). Therefore, the NC Hawking temperature is the same as in the commutative case. $$T_{\mathrm{NC}}=T_\mathrm{C}=\frac{7p}{4\pi }\sqrt{\frac{U_0^{5p}}{\lambda }}.$$ (3.33) Moreover, the planar entropy is also independent of the NC deformation parameter. Since it must be computed in the Einstein frame, we have to multiply the string metric by $`e^{\varphi _{\mathrm{NC}}/2}=e^{\varphi _\mathrm{C}/2}\widehat{f}(U)^{r/4}`$. The horizon being eight-dimensional, this yields a factor of $`(\widehat{f}^{r/8})^8`$, which exactly cancels the extra factor of $`\left(\sqrt{\widehat{f}}\right)^{2r}`$ coming from the $`2r`$ directions with a non-vanishing $`B`$-field. So, the NC horizon area is $$A_{\mathrm{NC}}=A_\mathrm{C}(\widehat{f}^{1/2})^{2r}(\widehat{f}^{r/8})^8=A_\mathrm{C}.$$ (3.34) Both the planar entropy and the temperature are exactly the same as in the commutative case, which means that all planar thermodynamical functions are the same. ### 3.2 WKB Corrections to the Entropy In order to estimate the $`1/N`$ corrections, we consider the corresponding optical metric $$ds_{\mathrm{optical}}^2=dt^2+d𝐲^2+\widehat{f}(U)d𝐱^2+\frac{\lambda }{U^{7p}}\left(dU^2+U^2d\mathrm{\Omega }_{8p}^2\right),$$ (3.35) and compute the optical volume of the region $`U_0<U<\mathrm{}`$. The conditions for decoupling the angular sphere $`𝐒^{8p}`$ are the same as in the commutative case, again because the NC character only affects the $`𝐱`$–space. We discuss the qualitatively different cases in turn. D$`p`$-branes with $`p<5`$ For $`p<5`$, the temperature is small: $`\beta _{\mathrm{loc}}>R(𝐒^{8p})_{\mathrm{loc}}`$ in the region of interest, so that we can drop the angular sphere in estimating the free energy of thermal radiation outside the black-brane. $$I_{\mathrm{WKB}}^{(1)}T^{p+2}\stackrel{~}{\mathrm{Vol}}_{p+2}=T^{p+1}L^p_{U_0}^{\mathrm{}}𝑑U\widehat{f}(U)^r\sqrt{\lambda U^{p7}}(LT)^p_1^{\mathrm{}}\frac{dx\sqrt{x^{p7}}}{(1+(xU_0\mathrm{\Delta })^{7p})^r}.$$ (3.36) This is the standard result of the commutative theory $`I_{\mathrm{WKB}}^{(1)}(LT)^p`$ for $`U_0\mathrm{\Delta }1`$. On the other hand, in the opposite limit $`TaT\sqrt{\lambda \mathrm{\Delta }^{5p}}1`$, we get a strong suppression $$I_{\mathrm{WKB}}^{(1)}\frac{(LT)^p}{(aT)^{\frac{2r(7p)}{5p}}}.$$ (3.37) Thus, we find the soft behaviour at high temperatures of the one-loop free energy, much like the D3-brane case. Notice that for all $`p<5`$, the asymptotic effective exponent of $`T`$ is negative provided $`r1`$. Therefore, the effective dimensionality, as determined by the one-loop corrections, drops to zero or is even “negative” at $`Ta1`$. D5-branes For $`p=5`$ one gets, independently of the issue of angular decoupling: $$I_{\mathrm{WKB}}^{(1)}(LT)^5_1^{\mathrm{}}\frac{dx}{x}\frac{1}{(1+(xU_0\mathrm{\Delta })^2)^r}.$$ (3.38) As long as $`r>0`$, the integral converges! This is an improvement with respect to the commutative case, with $`\mathrm{\Delta }=0`$, in which one gets a logarithmic divergence of dubious interpretation. 
In the $`U_0\mathrm{\Delta }1`$ regime one finds $$I_{\mathrm{WKB}}^{(1)}\frac{(LT)^5}{(U_0\mathrm{\Delta })^{2r}}.$$ (3.39) However, now the energy-density parameter $`U_0`$ is unrelated to the temperature, which is constant and equal to $`\lambda ^{1/2}`$, i.e. there is a suppression of the non-planar corrections, although the effective dimension remains $`d_{\mathrm{eff}}=6`$. D6-branes On the other hand, for $`p=6`$, the local temperature outside the horizon is higher than the mass of angular modes and we must consider the optical volume of the angular sphere $`𝐒^{8p}`$ as well. The resulting one-loop free energy is $$I_{\mathrm{WKB}}^{(1)}(LT)^6_1^{\mathrm{}}\frac{dx\sqrt{x}}{(1+xU_0\mathrm{\Delta })^r}.$$ (3.40) Now, we need a $`B`$-field turned on in at least two planes ($`r>1`$), in order to achieve convergence at large $`U`$ (this is reminiscent of the analogous condition to have a vanishing string coupling at infinity). In any case, the interpretation is not clear, because the standard UV/IR relation breaks down at the level of the formula for the Hawking temperature, since large energies (large $`U_0`$), correspond to low temperatures. In fact, the scaling at large temperature is that of a higher-than-seven-dimensional theory: $$I_{\mathrm{WKB}}^{(1)}\frac{(LT)^6}{(U_0\mathrm{\Delta })^r}L^6a^{2r}T^{6+2r}.$$ (3.41) Therefore, the cases $`p=5,6`$ continue to have non-standard features, although we do see a general tendency of the $`B`$-fields to make the large $`U`$ behaviour less singular in all cases. Other Models These WKB estimates can be extended to other interesting models. For example, we may consider NS5-branes of type IIB and IIA related to D5-branes by a sequence of $`S`$\- and $`T`$-dualities. The behaviour of $`1/N^2`$ corrections for all these models is essentially equivalent to that of type IIB D5-branes, i.e. the commutative versions have semi-infinite cylinders that produce logarithmically divergent Hawking-radiation entropies . On the other hand, turning on $`B`$-fields regulates this divergence and implies an effective quenching of $`1/N`$ corrections at large temperature. This is particularly clear for the case of type IIB NS5-branes, whose metric is $`S`$-dual to that of D5-branes. Since this duality amounts to a conformal transformation of the metric, to which the optical metric is insensitive, we get the same physics of $`1/N^2`$ corrections: for large energy densities $`\mathrm{\Delta }U_01`$, $$\left[\frac{I_{\mathrm{NC}}}{I_{\mathrm{free}\mathrm{gas}}}\right]_{\mathrm{IIB}\mathrm{NS5}}^{(1)}(U_0\mathrm{\Delta })^{2r}.$$ (3.42) Type IIA NS5-branes can be obtained from type IIB NS5-branes by a further $`T`$-duality along a commutative direction. We have a global factor of $`\widehat{f}^{r/2}`$ from the $`S`$-duality transformation from D5-branes to type IIB NS5-branes. $`T`$-duality inverts this factor on one of the commuting coordinates. In addition, there are the usual factors of $`\widehat{f}`$ for each NC coordinate. Thus, the optical volume integrand gets an additional factor of $`\widehat{f}^{r/2}`$ in all, leading to $$\left[\frac{I_{\mathrm{NC}}}{I_{\mathrm{free}\mathrm{gas}}}\right]_{\mathrm{IIA}\mathrm{NS5}}^{(1)}(U_0\mathrm{\Delta })^{3r},$$ (3.43) again in the large density limit. When the type IIA D4-brane solution is lifted to eleven dimensions, one obtains a NC M5-brane model with $`AdS_7\times 𝐒^4`$ geometry in the infrared region. 
The previous scaling gives now $$\left[\frac{I_{\mathrm{NC}}}{I_\mathrm{C}}\right]_{\mathrm{M5}}^{(1)}(U_0\mathrm{\Delta })^{9r/2}(aT)^{9r}.$$ (3.44) We remark that one interesting case was not discussed here in detail. It is the case in the presence of a nonvanishing “electric” NS-fields: $`B_{0i}0`$. Naively, one expects similar results to the purely “magnetic” case, at least as far as the arguments of ref. concern. However, it was pointed out in ref. that, at least in the particular case of $`p=3,r=2`$, the supergravity picture of the thermodynamics is fundamentally different at temperatures of the order of the timelike noncommutative scale. Assuming that, for $`B_{0i}0`$, the dominant finite-temperature master field (the black hole) is also given by the substitutions (3.31) and (3.32) on the extremal solution, one finds that the behaviour described in for $`p=3`$ actually generalizes to all D$`p`$-branes (this assumption might actually hide important subtleties, related to the proper treatment of the Wick rotation). The perturbative scale of “electric” non-locality is $`\sqrt{\theta _e}=1/\sqrt{B_{0i}}`$. At large ’t Hooft coupling it develops into the length scales $$\mathrm{\Delta }_e=\left(\frac{\theta _e^{\mathrm{\hspace{0.17em}2}}}{\lambda }\right)^{\frac{1}{7p}},a_e=\sqrt{\lambda \mathrm{\Delta }_e^{5p}},$$ (3.45) that characterize the supergravity solution. The temperature is related to the horizon radius $`U_0`$ by $`T=T_\mathrm{C}(U_0)\sqrt{\widehat{f}_e(U_0)}`$, where $`T_\mathrm{C}`$ is the temperature of the commutative theory and $`\widehat{f}_e(U)`$ is given by eq. (3.26) upon replacing $`\mathrm{\Delta }`$ by $`\mathrm{\Delta }_e`$. This temperature/mass relation leads to negative specific heat for $`U_01/\mathrm{\Delta }_e`$ and a maximum temperature of order $`T_{\mathrm{max}}1/a_e`$, a behaviour reminiscent of standard black-branes in asymptotically flat space, before the near-horizon scaling is taken as in eq. (3.19). In fact, the Einstein-frame metrics are exactly equal to the string-frame metrics of such asymptotically flat D-brane metrics (up to some rescalings of the coordinates,) precisely if $`2r=p+1`$, i.e. when all the world-volume of the brane is noncommutative. This property was noticed in ref. for the $`p=3,r=2`$ case. However, we see that the important qualitative features (a maximum temperature and a negative specific heat branch at large energy densities) generalize for arbitrary values of $`p`$ and $`r`$, provided the time direction is noncommutative. Having asymptotically flat regions at large $`U`$ will surely complicate the workings of holography in these models. In particular, our WKB estimate for the one-loop entropy gives a ten-dimensional contribution, $`S^{(1)}T^9`$, from supergravity modes at large $`U1/\mathrm{\Delta }_e`$ in these models. Perhaps the theory imposes an effective cut-off of order $`U_{\mathrm{max}}1/\mathrm{\Delta }_e`$ already at the planar level, as suggested by the existence of a maximum temperature at this scale. In this respect, it is interesting to notice that the branch with negative specific heat at large energy densities is dynamically suppressed in the canonical ensemble. The planar entropy in these models is given by $`S=\widehat{f}_e(U_0)^{1/2}S_\mathrm{C}`$, with $`S_\mathrm{C}`$ the entropy function of the commutative theory. Using the $`T(U_0)`$ function one finds that the entropy scales as $`SN^2T^{p8}`$ in the negative specific heat branch. 
From here one can get the free-energy excess over the vacuum: $$(I\beta E_{\mathrm{vac}})_{\mathrm{planar}}+\frac{C_p}{7p}\frac{N^2L^p}{\lambda ^{\frac{11p}{2}}}\mathrm{\Delta }^{(p7)^2/2}(\beta _{\mathrm{NC}})^{8p}>0,$$ (3.46) with $`C_p>0`$. It is positive in the region with negative specific heat. Therefore, there is an $`O(e^{N^2})`$ suppression of this unstable branch in the canonical ensemble. Another aspect of these solutions that we did not analyze in detail is the presence of light thermal winding modes at large radial coordinates. These cannot be eliminated through $`T`$-duality, and are sure to affect the physics at large values of $`U`$. ## Acknowledgements J.L.F.B. would like to thank A. González-Arroyo and C. Gómez for useful discussions and hospitality at the “Instituto de Física Teórica, C–XVI, Universidad Autónoma de Madrid,” where part of this work was carried out, as well as to the Spinoza Institute, University of Utrecht, where this work was initiated. E.R. would like to thank A. Hashimoto, N. Itzhaki and N. Seiberg for discussions, the “ITP Program on Gauge Theories and String Theory” at Santa Barbara, and the Randall Laboratory of Physics at the University of Michigan, where part of this work was done, for providing stimulating enviroments. The research of E.R. is partially supported by the BSF-American Israeli Bi-National Science Foundation and the IRF Centres of Excelency Program.
no-problem/9910/astro-ph9910548.html
ar5iv
text
# Resolved Spectroscopy of the Narrow-Line Region in NGC 1068. II. Physical Conditions Near the NGC 1068 “Hot-Spot”Based on observations made with the NASA/ESA Hubble Space Telescope. STScI is operated by the Association of Universities for Research in Astronomy, Inc. under the NASA contract NAS5-26555. ## 1 Introduction NGC 1068, one of the initial set of emission line galaxies studied by Seyfert (1943), is the nearest (z=0.0038) and best studied of the Seyfert 2 galaxies. Based on the widths of their emission lines in optical spectra, Seyfert galaxies are generally divided into two types (Khachikian & Weedman 1971). Seyfert 1s possess broad permitted lines, with full widths at half maximum (FWHM) $``$ 10<sup>3</sup> km s<sup>-1</sup>, and narrower forbidden lines, with FWHM $``$ 500 km s<sup>-1</sup>, while Seyfert 2s show only narrow emission lines. The optical continua of Seyfert 1 galaxies are dominated by nonstellar emission that can be characterized by a power-law (Oke & Sargent 1968), whereas this component is much weaker in Seyfert 2 galaxies (Koski 1978). Spectropolarimetric studies (Miller & Antonucci 1983; Antonucci & Miller 1985) revealed the presence of strongly polarized continuum and broad permitted line emission within a 3$`^{\prime \prime }.`$0 aperture centered on the optical nucleus of NGC 1068. These observations were the inspiration for the unified model for Seyfert galaxies, according to which the differences between types 1 and 2 are due to viewing angle, with Seyfert 2 galaxies characterized by obscuration of their broad-line regions and central engines (cf. Antonucci 1994). The polarization is wavelength independent, which implies that the emission from the hidden continuum source and broad-line region is scattered into our line-of-sight by free electrons within a hot ($``$ 3 x 10<sup>5</sup> K) plasma in the inner Narrow-Line Region (NLR) (Miller, Goodrich, & Mathews 1991). The structure of the nuclear region has been observed extensively with HST, using the Planetary Camera (pre-COSTAR; Lynds et al. 1991), and the Faint Object Camera (Capetti et al. 1995). The peak in the optical continuum is resolved, with a FWHM of approximately 0$`^{\prime \prime }.`$15, roughly centered within a cloud of dimensions 3$`^{\prime \prime }.`$5 x 1$`^{\prime \prime }.`$7 elongated in the NE-SW direction, the latter consisting primarily of starlight (Lynds et al. 1991; Crenshaw & Kraemer 1999, hereafter Paper I). The NE part of the cloud has the physical dimensions and location attributed to the scattering medium modeled by Miller, et al. (1991), which has been revealed in the UV (Kriss et al. 1993; Macchetto et al. 1994) and which we have discussed in detail in Paper I. A bright knot of \[O III\] $`\lambda `$5007 emission, often referred to as Cloud B (Evans et al. 1991), overlaps the continuum emission from the hot spot; its centroid is $``$ 0$`^{\prime \prime }.`$2 north of that of the hot spot. While Cloud B lies close to the apex of emission-line bicone (Evans et al. 1991), there is evidence that the actual hidden nucleus is further to the south and west (Capetti, Macchetto, & Lattanzi 1997). It has been suggested that a thermal radio source, S1, 0$`^{\prime \prime }.`$3 south of the continuum peak, is the true nucleus (Evans et al. 1991; Gallimore et al 1997). The discovery of an H<sub>2</sub>O megamaser with a velocity width of 600 km s<sup>-1</sup> (Claussen & Lo 1986) associated with S1 (cf. Greenhill & Gwinn 1997), is further evidence that S1 is the nucleus. 
Even so, the optical continuum peak and a portion of Cloud B are within 30 pc of the nucleus, and therefore exposed to an intense flux of ionizing radiation. Coronal iron lines have been detected in a number of Seyfert galaxies (Grandi 1978; Osterbrock 1981; Penston et al. 1984; Osterbrock 1985). The following coronal-lines are known to be present in the spectrum of NGC 1068: \[Fe X\] $`\lambda `$6374 (Koski 1976), \[Fe XI\] $`\lambda `$7892 (Penston et al. 1984; Osterbrock & Fulbright 1996), and \[Si X\] 1.430 $`\mu `$m (Thompson 1996). It has been recently suggested (Reynolds et al. 1997) that the coronal-line emission arises in the outer regions of the X-ray absorber present in many Seyfert galaxies (see Reynolds (1997) and George et al. (1998) for a discussion of the properties of X-ray absorbers). Furthermore, Krolik & Kriss (1995) have postulated that the scattering medium in Seyfert 2 galaxies may be the same gas that produces the X-ray absorption in Seyfert 1s. Providing better constraints on the physical conditions in the gas in which the coronal lines arise will help in understanding the connection between the emission-line gas and the scattering medium and, possibly, the X-ray absorber. In this paper we will examine the physical conditions near Cloud B and the optical hot spot in the nucleus of NGC 1068. Among our results, we report the presence of the highest ionization energy UV/optical lines ever detected in a Seyfert 2 galaxy. In Section 3 we will discuss the observed relationship between ionization potential and the velocity of the emission-lines with respect to the systemic velocity of the host galaxy. In Sections 4 and 5 we will present the details of photoionization models of the emission-line gas. Finally we will discuss the relation between the emission-line gas, the scattering of the continuum radiation, and the structure of the inner NLR. ## 2 Observations and Analysis We obtained STIS long-slit spectra of NGC 1068 over 1150 – 10,270 Å on 1998 August 15. Paper I shows the slit position and describes the observations and data reduction. The spectra that are analyzed in this paper are from the central 0$`^{\prime \prime }.`$2 x 0$`^{\prime \prime }.`$1 bin in Paper I, which includes the brightest portion of the hot spot in our slit and a portion of knot B south of its centroid. We measured the fluxes of most of the narrow emission lines by direct integration over a local baseline determined by linear interpolation between adjacent continuum regions. For severely blended lines such as H$`\alpha `$ and \[N II\] $`\lambda \lambda `$6548, 6584, we used the \[O III\] $`\lambda `$5007 profile as a template to deblend the lines (see Crenshaw & Peterson 1986). We then determined the reddening of the narrow emission lines from the observed He II $`\lambda `$1640/$`\lambda `$4686 ratio, the Galactic reddening curve of Savage & Mathis (1979), and an intrinsic He II ratio of 7.2, which is expected from recombination (Seaton 1978) at the temperatures and densities typical of the NLR (see also Section 4). We determined errors in the dereddened ratios from the sum in quadrature of the errors from three sources: photon noise, different reasonable continuum placements, and reddening. Table 1 gives the observed and dereddened narrow-line ratios, relative to H$`\beta `$, and errors in the dereddened ratios for each position. 
At the end of the table, we give the H$`\beta `$ flux (ergs s<sup>-1</sup> cm<sup>-2</sup>) in the bin and the reddening value that we determined from the He II ratios. ## 3 Spectral Properties of the Gas Near the Hot Spot Figure 1 shows the UV and optical spectra of the continuum hot spot in NGC 1068. Emission lines are present from a wide range in ionization state for the most numerous elements, such as strong \[N II\] $`\lambda \lambda `$6548, 6484 and N V $`\lambda `$1240, and emission lines from the first four ionization states of oxygen, as seen in the earlier FOS spectra (Kraemer, Ruiz, & Crenshaw; hereafter KRC). The continuum shows no strong evidence of a stellar component, and is clearly the result of scattered continuum radiation from the hidden central source, as we discussed in Paper I. Also, the strongest permitted lines show broad wings, which is due to reflected broad-line emission. Netzer (1997) attributed the apparent weakness of O III\] $`\lambda `$1663 in the FOS spectra of NGC 1068 to an underabundance of oxygen. On the other hand, we used the ratio of N III\] $`\lambda `$1750/O III\] $`\lambda `$1663 as evidence of an overabundance of nitrogen (KRC). However, it is clear from these STIS data that O III\] $`\lambda `$1663 has been absorbed by Galactic Al II $`\lambda `$1671 (see Figure 1) and, therefore, cannot be used to estimate either the O/H or N/O abundance ratios. The most intriguing feature of these spectra, which was not readily apparent in the FOS data, is the presence of a number of coronal lines, including those from extremely high ionization states. As shown in Table 1, we have confirmed the presence of \[Fe X\] $`\lambda `$6374 and \[Fe XI\] $`\lambda `$7892, which had been previously detected, as noted in Section 1. Furthermore, we have unambiguously detected \[Fe XIV\] $`\lambda `$5303 (see Figure 1), which previously had only been confirmed to be present in the spectra of the Seyfert 1 galaxies III Zw 77 (Osterbrock 1981) and MCG -6-30-15 (Reynolds et al. 1997) and, possibly, the Seyfert 2 galaxy Tololo 0109 -383 (Fosbury & Samsom 1983; Durret & Bergeron 1988). Perhaps more unexpected is the presence of \[S XII\] $`\lambda `$7611 (see Figure 1). S<sup>+10</sup> has an ionization energy of 504.7 eV, which makes this highest ionization line ever detected in the spectrum of a Seyfert galaxy, outside the X-ray region, and certainly the highest ever seen in the NLR, besting \[Si X\] 1.43 $`\mu `$m, which has an ionization energy of 401.4 eV (Thompson 1996). Since 7611 Å is within one of the O<sub>2</sub> telluric bands, detection is difficult from the ground. In fact, the only confirmed detection of \[S XII\] has been in observations of the solar corona during the total eclipse of 30 May 1965 (Jefferies, Orrall, & Zirker 1971). Osterbrock (1981) detected a line at a measured wavelength of 7613.1 Å in the spectrum of III Zw 77, but did not identify it; we would suggest that this is also \[S XII\]. In addition to the above-mentioned lines, we have identified a number of other coronal lines in these spectra, as indicated by the ionization potentials listed in Table 1 (for the purposes of this paper, we refer to ions with ionization energies greater than 100 eV as “coronal”; this leaves out the \[Ne V\] and \[Fe VII\] lines, which, although detected in the solar corona, are formed under more typical NLR conditions). 
To our knowledge, these other lines have not been previously identified in the spectra of Seyfert galaxies, although we would expect that they should be present, since their ionization energies are lower than that of Fe<sup>+13</sup>. Exceptions, perhaps, are the tentatively identified nickel lines, \[Ni XV\] $`\lambda `$6702 and \[Ni XIII\] $`\lambda `$5116, since the other lines are from more abundant elements. However, Halpern & Oke (1986) detected \[Ni II\] $`\lambda `$7378 in an off-nuclear spectrum of NGC 1068, and determined that the nickle abundance may be at least 4 times solar. One of the interesting aspects of the coronal lines is that they are blueshifted with respect to the lower ionization lines, an effect first noted in a number of Seyfert galaxies by Penston et al. (1984). In Figure 2, we plot ionization potential versus recession velocity (c$`z`$) for all the observed emission lines. Note that the presence of a blueshifted absorption feature can bias the centroiding of the line redward, as appears to be the case for N V $`\lambda `$1240 and C IV $`\lambda `$1550 (KRC). Similarly, the velocity measurement is more difficult for severly blended lines. Nevertheless, there is a strong correlation between ionization potential and velocity. A simple explanation is that the lines originate in different regions that are superimposed, such that we are viewing components with different velocities along our line-of-sight. Note that most of the lower ionization lines are redshifted with respect to the systemic velocity of the host galaxy. Figure 3 shows spatial cross-cut profiles of the \[Fe XIV\] $`\lambda `$5303 and \[S XII\] $`\lambda `$7611 lines, along the slit. Clearly, the coronal line emission is concentrated directly on the optical hot spot, in contrast to the suggestion that it extends throughout the NLR in Seyfert galaxies (Korista & Ferland 1989), but in general agreement with Penston et al. (1984) regarding the origin of the \[Fe X\] $`\lambda `$6374 and \[Fe XI\] $`\lambda `$7892 lines, and results from line profile studies (cf. Moore, Cohen, & Marcy 1996). Furthermore, the coronal line emission is clumpier and not as extended as the continuum radiation. Of particular interest is the region NE of the hot spot, where the scattered continuum is strongest (see Paper I). This does not necessarily mean that the coronal lines cannot arise in the same gas that scatters the continuum radiation, but, if they do, conditions in the scatterer must vary such that the lines are often weak. The possible connection between the scatterer and the coronal gas is discussed in Section 7.1. ## 4 Photoionization Models ### 4.1 Abundances and Ionizing Continuum Our approach in photoionization modeling of NGC 1068 was described in detail in KRC. We will not repeat the details in the current paper, except to point out important differences. As usual, the models are parameterized in terms of the dimensionless ionization parameter, U, which is the number of ionizing photons per hydrogen atom at the illuminated face of the cloud. Since the lines of Ne<sup>+4</sup> and Fe<sup>+6</sup> are quite strong in in the spectrum of NGC 1068, we had assumed that this is in part due to an overabundance of these elements relative to solar (cf. Oliva 1997). We still consider this to be true for iron, since ASCA data indicate a large Fe/O ratio in the X-ray emitting gas (Netzer & Turner 1997). 
However, there is reason to believe that the Ne/H ratio is not substantially greater than solar at least in the extreme inner NLR. The strength of the \[Ne V\] $`\lambda `$3426 relative to hydrogen is sensitive to the optical thickness of the emission-line gas. That is, if a cloud does not extend much beyond the He<sup>++</sup> zone, \[Ne V\] $`\lambda `$3426/H$`\beta `$ can be quite large ($``$ 5). Also, if the ionizing continuum is somewhat harder than we had assumed, there are more photons capable of ionizing Ne<sup>+3</sup>. In KRC, we had made the argument for a supersolar abundance of nitrogen in the inner NLR of NGC 1068, based on the strength of N V $`\lambda `$1240 relative to He II $`\lambda `$1640 and C IV $`\lambda `$1550 (cf. Ferland et al. 1996). However, the models significantly overpredicted N IV\] $`\lambda `$1486, which might indicate that that we overestimated the nitrogen abundance. The relative strength of N V $`\lambda `$1240, and other resonance lines, can be boosted somewhat by fluoresence and scattering of ultraviolet continuum radiation from the central source (cf. Grandi 1975a, b). As we will discuss in Section 6.1, continuum scattering and fluorescence cannot fully explain the strength of N V $`\lambda `$1240. However, based on the observed N IV\] $`\lambda `$1486/H$`\beta `$ ratio, there is no reason to believe that the relative abundance of nitrogen is significantly greater than solar. Thus, other than iron, we have chosen to assume solar abundances (cf. Grevesse & Anders 1989) for these models. The numerical abundances, relative to hydrogen, are as follows: He=0.1, C=3.4x10<sup>-4</sup>, O=6.8x10<sup>-4</sup>, N=1.2x10<sup>-4</sup>, Ne=1.1x10<sup>-4</sup>, S=1.5x10<sup>-5</sup>, Si=3.1x10<sup>-5</sup>, Mg=3.3x10<sup>-5</sup>, Fe=8.0x10<sup>-5</sup>. In KRC, we assumed an ionizing continuum similar to that proposed by Pier et al. (1994), which is a conservative power-law fit from the UV and to the X-ray, using the 2 keV flux and X-ray continuum derived from the BBXRT data (Marshall et al. 1993). However, fits to the X-ray continuum by combining ROSAT/PSPC and Ginga, or Einstein/IPC and EXOSAT data (see Pier et al. 1994, and references therein) are consistent with a somewhat harder spectral energy distribution (SED). As noted above, the strength of \[Ne V\] $`\lambda `$3426 depends on the hardness of the ionizing continuum. Therefore, we modeled the SED as a broken power-law of the form, F<sub>ν</sub> $`=`$ K$`\nu ^\alpha `$, as follows: $$\alpha =1.0,h\nu <13.6eV$$ (1) $$\alpha =1.4,13.6eVh\nu <1000eV$$ (2) $$\alpha =0.5,h\nu 1000eV$$ (3) Note that we have assumed a harder EUV - Xray continuum than in KRC (1.4, rather than 1.6) and have positioned the X-ray break at one-half the energy. We have, however, assumed the same intrinsic luminosity above the Lyman limit, Q $`=`$ 4x10<sup>54</sup> photons sec<sup>-1</sup> (Pier et al. 1994), which is typical of Seyfert 1 nuclei . ### 4.2 Component Parameters As noted in Section 3, these spectra show emission from a wide range of ionization states, which is an indication that we are seeing emission from a range of physical conditions. Also, there is an apparent correlation between the redshift of the emission lines and the ionzation potential of the ion from which they arise. Thus, it is likely that we are observing emission from several distinct components. 
An initial guess at temperature and density of much of the emission-line gas (which we will refer to as the HIGHION component) can be derived from the ratio of \[O III\] $`\lambda \lambda `$5007,4959/\[O III\] $`\lambda `$4363 (Osterbrock 1989). The dereddened observed ratio is $``$ 47, indicating an electron temperature T<sub>e</sub> $``$ 20,000K in the low density limit, and little modification by collisional de-excitation of the <sup>1</sup>D<sub>2</sub> level. \[Fe VII\] lines ratios can also be used to estimate temperature. The ratio \[Fe VII\] $`\lambda `$3759/\[Fe VII\] $`\lambda `$6087 $``$ 0.79, which indicates T<sub>e</sub> $``$ 25,000K, at electron densities, n<sub>e</sub>, $``$ 10<sup>6</sup> cm<sup>-3</sup> (Nussbaumer & Storey 1982). Still, it is possible that much of the \[Fe VII\] and \[O III\] emission arises in different zones within the same gas, particularly since these lines are at the same approximate velocity (see Table 1). If one kinematic component contributes most of the \[O III\] and \[Fe VII\] emission, and presumably other lines within the same range of ionization energy (see Table 1), it is not likely to contribute the highest ionization lines in these spectra. As we discussed in Section 3, such lines as \[Fe IX\] $`\lambda `$7892, \[Fe XIV\] $`\lambda `$5303, and \[S XII\] $`\lambda `$7611, are blueshifted with respect to the lower ionization lines by several hundred km sec<sup>-1</sup>, which suggests that they arise in a different kinematic component (which we will refer to as CORONAL). It has been previously suggested that a rough estimate of the temperature of the coronal gas can be obtained from the ratio of \[Fe XI\] $`\lambda `$2649/\[Fe XI\] $`\lambda `$7892 (Penston et al. 1984; Osterbrock & Fulbright 1996). The observed ratio is \[Fe XI\] $`\lambda `$2649/\[Fe XI\] $`\lambda `$7892 $`=`$ 1.4 $`{}_{.35}{}^{}{}_{}{}^{+.43}`$. Unfortunately, based on the most recent calculations for effective collision strengths (Tayal 1999), this ratio is not particularly sensitive to temperature at electron densities $``$ 10<sup>8</sup> cm<sup>-3</sup>. Using these collision strengths and the transition probabilities from Mason (1975), the dereddened \[Fe XI\] ratio indicates that T<sub>e</sub> $`>`$ 3 x 10<sup>4</sup> K in the Fe<sup>+10</sup> zone. Since the redshifts of the Balmer lines are similar to those of the \[O III\] lines, we expect CORONAL will contribute little to the total hydrogen recombination line flux. Since the lowest ionization lines appear redshifted compared to the other emission-lines, it is probable that they arise in a separate component, rather than in the more neutral parts of HIGHION. This component must be of fairly low density, since the ratio of \[S II\] $`\lambda `$6716/\[S II\] $`\lambda `$6731 $``$ 0.81, indicating n<sub>e</sub> $``$ 2 x 10<sup>3</sup> cm<sup>-2</sup> (Osterbrock 1989). The S<sup>+</sup> lines can arise in partially neutral gas and, thus, we estimate that the atomic hydrogen density for this component is somewhat higher, n<sub>H</sub> $``$ a few times 10<sup>4</sup> cm<sup>-2</sup>. The nature of such a component (which we will refer to as LOWION) was discussed extensively in KRC. The fact that the strengths of the low-ionization, collisionally-excited lines must be large relative to the Balmer lines in order for them to appear strong in a composite spectrum led us to believe that the gas was screened and, thus, ionized by an absorbed continuum strongly weighted to the X-ray (KRC). 
However, the paucity of ionizing photons in the absorbed continuum and the constraint that this component had the same covering factor as the screening gas led to an underprediction of its contribution to the total emission-line flux. In these spectra it is apparent that much of the high ionization gas is optically thin, since the observed He II $`\lambda `$4686/H$`\beta `$ ratio is 0.60, which is unattainable if much of the gas is optically thick near the Lyman limit. Thus, neither of the other components can provide an effective screen for LOWION. Kraemer et al. (1999a) have modeled the effect on the NLR of an absorber with a high covering factor within a few parsecs of the continuum source, and this effect is clearly evident in the NLR spectrum of NGC 4151 (Alexander et al. 1999; Kraemer et al 1999b). Thus, we suggest the following geometry for the inner NLR of NGC 1068. The components represented by HIGHION and CORONAL are essentially co-located, and are ionized by unabsorbed continuum radiation from the hidden nucleus. The low ionization gas lies out of the plane occupied by HIGHION and CORONAL, and is irradiated by X-rays which penetrate a layer of absorbing gas closer to the nucleus. Also a small fraction of unabsorbed continuum, scattered out of the plane by free electrons, is incident upon the low ionization gas. This simple geometry is illustrated in Figure 4. There is no simple way to quantify the fractions of scattered and direct continuum irradiating LOWION, but it is thought that at least 1% of the continuum radiation in NGC 1068 is scattered into our line of sight (Miller et al. 1991) and there is the example of a large column, X-ray absorber in NGC 4151 (cf. Barr et al. 1977) which may vary in optical depth as a function of angle with respect to the ionization cone (Kraemer et al. 1999b). We have modeled the absorber assuming an ionization parameter, U $`=`$ 0.1, and a column density, N<sub>H</sub> $`=`$ 7.4 x 10<sup>22</sup> cm<sup>-2</sup>, where N<sub>H</sub> is the sum of the columns of ionized and neutral hydrogen. These parameters are similar to those describing the X-ray absorber in NGC 4151 (Yaqoob, Warwick & Pounds 1989; Weaver et al. 1994). Given our estimate of the luminosity of the active nucleus, an absorber of density n<sub>H</sub> $`=`$ 10<sup>7</sup> cm<sup>-3</sup> would lie at a distance of $``$ 1 pc from the central engine, closer to the nucleus than the inner part of hot spot. The ionizing continuum incident upon LOWION is shown in Figure 5. For the purposes of these models, we assume that HIGHION and LOWION are at the same radial distance from the nucleus. Based upon the observed physical properties derived from the emission-lines and the simple geometric picture discussed above, we have generated a three-component photoionization model to describe the physical conditions of line emitting gas. Most of the emission arises in the component HIGHION, with n<sub>H</sub> $`=`$ 6 x 10<sup>4</sup> cm<sup>-2</sup>, U $`=`$ 10<sup>-1.52</sup>, and , N<sub>H</sub> $`=`$ 10<sup>21</sup> cm<sup>-2</sup>. The ionization parameter was chosen to produce strong \[Fe VII\] emission with negligible \[Fe XI\] $`\lambda `$7892. The model was truncated at the termination of the He<sup>+2</sup> zone, in order to produce the large He II $`\lambda `$4686/H$`\beta `$ ratio and strong \[Ne V\] described above. 
The density was constrained on the high end by the presence of strong \[Ne IV\] $`\lambda `$2423, which has a critical density $``$ 10<sup>5</sup> cm<sup>-3</sup>, and on the low end by the ionization parameter. LOWION is characterized by n<sub>H</sub> $`=`$ 3 x 10<sup>4</sup> cm<sup>-2</sup>, U $`=`$ 10<sup>-3.2</sup> (from the combined continuum). Since the size of the partially neutral zone in X-ray ionized gas can be inordinately large, we chose to truncate the model at the same column density as HIGHION. Although not completely radiation bounded, LOWION has a considerable extended zone behind the H<sup>+</sup>/H<sup>0</sup> boundary, from which \[O I\] $`\lambda \lambda `$6300, 6364, Mg II $`\lambda `$2800, and \[S II\] $`\lambda \lambda `$6716, 6731 arise. For LOWION, we used a solar iron abundance, since the enhanced cooling by \[Fe II\] in the extended zone would suppress other lines. The lack of reliable atomic constants for coronal gas at nebular temperatures (i.e., T<sub>e</sub> $``$ 10<sup>5</sup>K) makes it difficult to set the model input parameters for CORONAL on the basis of emission-line ratios. We have generated this component using CLOUDY90 (Ferland et al 1998) since it includes more a complete model for the coronal-line emission, and includes elements, specifically argon and nickel, which are not included in our code. Therefore, we generated a single coronal component to produce the mix of ionization states observed in the blue-shifted emission-lines, assuming the following: n<sub>H</sub> $`=`$ 7 x 10<sup>2</sup> cm<sup>-3</sup>, U = 1.7, N<sub>H</sub> = 4 x 10<sup>22</sup> cm<sup>-2</sup>. In Table 1, the dereddened L$`\alpha `$/H$`\beta `$ is 30.75, which is roughly equal to the value from recombination plus collisional excitation in low ionization gas (Osterbrock 1989). Thus, there is no evidence for the destruction of trapped L$`\alpha `$ photons by dust. Furthermore, dust is not responsible for scattering of continuum radiation by the hot spot (see Paper I), which is another indication of the absence of dust in this region. Therefore, we have assumed that the emission-line gas is dust-free. ## 5 Model Results In creating a composite model, we first scaled the contributions from HIGHION to provide a rough fit to the high ionization lines such as C IV $`\lambda `$1550, \[Ne V\] $`\lambda `$3426, and \[Fe VII\] $`\lambda `$6087 and LOWION to fit lines such as \[N II\] $`\lambda `$6584, \[O II\] $`\lambda `$3727, with the result that the contribution of HIGHION to the total H$`\beta `$ flux is 3 times that of LOWION. Due to the uncertainties in the atomic data, we have elected not to include the predicted forbidden line strengths from CORONAL in our scaling. Since the Balmer lines are associated kinematically with the non-coronal emission lines, it seems reasonable to expect that the contribution from CORONAL is $``$ 15% of the total H$`\beta `$ flux, which is only slightly more than the uncertainty in measurement. In Table 2, we compare the predicted line ratios for HIGHION and LOWION and the composite model to the observed/dereddened values. Given the simplicity of the model, we have obtained very satisfactory fits for the vast majority of the observed emission lines. There is good agreement over nearly the full ionization sequence, for example C IV $`\lambda `$1550, C III\] $`\lambda `$1909, and C II\] $`\lambda `$2325, and O IV\] $`\lambda `$1402, \[O III\] $`\lambda `$5007, \[O II\] $`\lambda `$3727, and \[O I\] $`\lambda `$6300. 
The predicted [O III] λλ5007, 4959/[O III] λ4363 ratio is 46, which indicates that T<sub>e</sub> in the O<sup>+2</sup> zone, averaged over the two components, is correct. The predicted [Fe VII] λ3759/[Fe VII] λ6087 ratio is 0.55, lower than the observed value, which indicates a somewhat higher temperature in the Fe<sup>+6</sup> zone than predicted; this is not surprising, given our model requirement that this zone be co-located with the O<sup>+2</sup> zone. The predicted [S II] λ6716/λ6731 ratio is 1.15, identical to that observed within the errors, which confirms our assumptions regarding the density and ionization structure of LOWION. In general, the model predictions demonstrate that our assumptions regarding the elemental abundances are approximately correct. However, the models do somewhat underpredict the strengths of the neon lines, which may indicate that neon is supersolar, but probably by less than a factor of two. The model prediction for the He II λ4686/Hβ ratio is only slightly higher than observed. The predicted strengths of the lines formed in the partially neutral envelope of the low ionization gas, such as [S II] λλ6716, 6731 and [O I] λ6300, are in reasonable agreement with the observations. This indicates that the combined effects of SED and column density are well represented by the models. This result is of particular importance, given the assumption that the components are ionized by different continua. Also listed in Table 2 are the model predictions for the emitted Hβ flux, the emitting surface area (the reddening-corrected Hβ luminosity divided by the emitted flux), and the covering factor for each component, assuming a distance of ∼ 25 pc from the hidden continuum source. At the distance of NGC 1068 (14.4 Mpc; Bland-Hawthorn 1997), the 0″.1 slit width corresponds to 7.2 pc, yielding a covering factor for the slit of ∼ 0.05. Since the covering factors of these components are substantially lower than 0.05, there is no evidence that we are seeing substantial effects of superposition of clouds along our line of sight. Our photoionization code does not include pumping of UV resonance lines by scattering of continuum radiation and continuum fluorescence (cf. Ferguson, Ferland, & Pradhan 1994), which may explain the underprediction of the strengths of several UV resonance lines, including N V λ1240, C II λ1335, and Mg II λ2800. Therefore, we recomputed the LOWION model using CLOUDY90, assuming a turbulent velocity of 50 km s<sup>-1</sup>. Although pumping of UV resonance lines is most efficient for gas with large turbulent velocities (∼ 1000 km s<sup>-1</sup>; Ferguson et al. 1994), it can still be an important process if the covering factor of the emitting gas is sufficiently large (cf. Hamann & Korista 1996) and/or if the optical depths of the scattered lines are small (Ferland 1992). We derive a relatively large covering factor for LOWION (∼ 20% that of the slit), and thus it is not surprising that the model predicts a significant contribution to the resonance lines from scattered continuum radiation (interestingly, this is all due to direct pumping of the UV resonance lines, since there is insufficient ionizing radiation incident upon LOWION to pump the EUV driver lines).
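The slit-projection arithmetic quoted above is easy to verify. In the sketch below the covering factor of the slit is approximated as w/(2πr) for a slit of projected width w crossing gas at radius r; that geometric convention is our assumption, chosen because it reproduces the quoted ∼ 0.05, not a formula taken from the text (the ∼7.0 pc output versus the quoted 7.2 pc is a matter of rounding in the adopted distance):

```python
import math

MPC_PC = 1.0e6                        # parsecs per megaparsec
ARCSEC_RAD = math.pi / (180.0 * 3600.0)

def projected_width_pc(theta_arcsec, d_mpc):
    """Linear size [pc] subtended by theta_arcsec at distance d_mpc."""
    return theta_arcsec * ARCSEC_RAD * d_mpc * MPC_PC

w = projected_width_pc(0.1, 14.4)
print(f"0.1 arcsec at 14.4 Mpc -> {w:.1f} pc")            # ~7.0 pc

# Assumed geometry: slit of width w crossing a shell of radius r,
# covering fraction ~ w / (2 pi r), with r = 25 pc from the text.
r = 25.0
print(f"slit covering factor ~ {w / (2*math.pi*r):.3f}")  # ~0.045
```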
The CLOUDY90 predictions for C II λ1335 and Mg II λ2800 are listed in Table 2, alongside those from our code (the predictions for the non-resonance lines were quite similar for the two codes), and the agreement with the observed flux ratios is quite good. In addition to the large column densities of C<sup>+</sup> (3.22 x 10<sup>17</sup> cm<sup>-2</sup>) and Mg<sup>+</sup> (1.54 x 10<sup>16</sup> cm<sup>-2</sup>), LOWION predicts a large column density of O<sup>0</sup> (6.02 x 10<sup>17</sup> cm<sup>-2</sup>). O I λ1302 is present in the far-UV spectrum (see Figure 1), and we expect that it is also formed by continuum scattering (unfortunately, it is not included in the code output). However, continuum scattering does not appear to have a similarly strong effect in boosting the N V λ1240 line, as we discuss below. In Table 3, we list the predictions of the CORONAL model for the mean ionization fractions; ions with observed lines are flagged. We elected not to list the predicted emission-line ratios, since they may be misleading given the lack of reliable atomic data (cf. Moorwood et al. 1997). The model predicts non-negligible populations for each of the observed ionic states except Si<sup>+6</sup> and Fe<sup>+6</sup>; however, it is clear that the [Fe VII] lines arise in lower ionization gas. If we assume that CORONAL contributes 15% of the observed Hβ flux, its covering factor is 2.2 x 10<sup>-3</sup>. Assuming isotropic scattering, at small electron scattering optical depths (τ<sub>electron</sub> < 1) the reflected fraction of continuum radiation is f<sub>refl</sub> ≈ N<sub>e</sub>F<sub>c</sub>σ<sub>Thomson</sub>, where N<sub>e</sub> is the column density of electrons, F<sub>c</sub> is the covering factor, and σ<sub>Thomson</sub> is the Thomson cross-section. For CORONAL, f<sub>refl</sub> ≈ 7.4 x 10<sup>-5</sup>. However, the observed reflected continuum fraction in our spectrum, based on the estimated central source luminosity (Pier et al. 1994), is f<sub>refl</sub> ≈ 1.4 x 10<sup>-3</sup> (the total reflected fraction is f<sub>refl</sub> ≈ 1.5 x 10<sup>-2</sup>, consistent with the larger region sampled; Miller et al. 1991). Thus, the coronal-line emitting gas near the hot spot makes a negligible contribution to the scattered continuum radiation.
## 6 Open Issues
### 6.1 The Nitrogen Problem
In KRC, we attributed the unusual strength of the N V λ1240 line to an overabundance of nitrogen; however, the models significantly overpredicted N IV] λ1486. Even with a solar nitrogen abundance, HIGHION overpredicts N IV] λ1486 by a factor of nearly 3. A check of HIGHION using CLOUDY90, assuming a turbulent velocity of 500 km s<sup>-1</sup>, produced only a 70% increase in the strength of N V λ1240. The models do not significantly underpredict other lines formed in the same zone as N V λ1240, such as [Ne V] λ3426 and [Fe VII] λ6087, and the predictions for other UV resonance lines match the data or are well understood (see above). One possible explanation is that N V λ1240 and N IV] λ1486 are formed in different regions, with much different nitrogen abundances, but we find no evidence for abundance inhomogeneities for elements other than iron. Therefore, the problem with the nitrogen lines remains unresolved.
### 6.2 Coronal Component
The predicted temperatures for CORONAL (T<sub>e</sub> = 9.29 x 10<sup>4</sup> K at the ionized face, 3.67 x 10<sup>4</sup> K at the point of truncation) are within the range expected from the [Fe XI] λ2649/λ7892 ratio. However, since the [Fe XI] ratio is not particularly sensitive to temperature at low density, the estimate of the coronal gas temperature is not well constrained. Better estimates may be obtained once more accurate collision strengths are calculated for other coronal lines in the temperature range 10<sup>4</sup> – 10<sup>5</sup> K. In any case, it is clear from our model predictions that the observed ionic states can co-exist in photoionized gas characterized by a single ionization parameter.
## 7 Discussion
### 7.1 Coronal Gas, the Scattering Medium, and X-ray Absorbers
As we have demonstrated, the ionic states indicated by the coronal emission lines can co-exist in a large column of highly ionized gas. The ionization parameter, U = 1.7, is characteristic of the X-ray absorbers present in many Seyfert 1 galaxies (cf. Reynolds 1997). It is tempting to associate the coronal-line emitting gas with the X-ray absorber, as Reynolds et al. (1997) have done for the case of the Seyfert 1 galaxy MCG -6-30-15. Based on the fraction of Seyfert 1s that possess an X-ray absorber, it is likely that the covering factor of the absorber is 0.5 – 1 (cf. George et al. 1998). However, we find that the covering factor of the coronal-line gas in NGC 1068 is quite low, and thus different from typical X-ray absorbers. The low covering factor of the coronal gas also makes it unlikely to be the continuum scattering region. Furthermore, Miller et al. (1991) estimated a temperature for the scattering medium of T<sub>e</sub> ∼ 3 x 10<sup>5</sup> K, which is significantly greater than our model predictions. Therefore, it is likely that the scattering occurs in a component of gas associated with the hot spot that is more highly ionized than our coronal component. If we use the initial conditions n<sub>H</sub> = 200 cm<sup>-3</sup>, U = 8, and N<sub>H</sub> ∼ 10<sup>22</sup> cm<sup>-2</sup>, CLOUDY90 predicts a mean T<sub>e</sub> ≈ 4.5 x 10<sup>5</sup> K, which is close to Miller et al.'s value. If we again apply the constraint that this additional component contributes ≲ 15% of the total Hβ, its covering factor is ∼ 0.07 and, thus, f<sub>refl</sub> ≈ 1.2 x 10<sup>-3</sup>, or approximately 85% of that observed. We would expect to see some line emission from this component, but only from the most highly ionized species, such as [S XII] λ7611. The covering factor for this component is slightly greater than that set by the slit width, which would not be surprising if the scatterer is indeed an X-ray absorber viewed across our line of sight, as suggested by Krolik & Kriss (1995), and if there is some superposition of clouds. To summarize, we think it is unlikely that the coronal-line gas has a sufficient covering factor to produce the scattered continuum radiation. It is plausible that the scattering occurs in a component of more highly ionized gas, with a high covering factor, which may contribute a fraction of the coronal-line emission.
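The electron-scattering estimates used above and in Section 5 reduce to a one-line calculation. The sketch below assumes N<sub>e</sub> ≈ 1.2 N<sub>H</sub> for fully ionized gas with a solar helium abundance, and adopts ∼2 x 10<sup>22</sup> cm<sup>-2</sup> for the hot component; both of those numbers are our assumptions rather than values stated in the text:

```python
SIGMA_T = 6.652e-25   # Thomson cross-section [cm^2]

def f_refl(n_h_col, f_cov, ne_per_nh=1.2):
    """Reflected continuum fraction for an optically thin scatterer:
    f_refl ~ N_e * F_c * sigma_T, with N_e = ne_per_nh * N_H."""
    return ne_per_nh * n_h_col * f_cov * SIGMA_T

# CORONAL: N_H = 4e22 cm^-2 and covering factor 2.2e-3 (from the text).
print(f"CORONAL : f_refl ~ {f_refl(4e22, 2.2e-3):.1e}")   # ~7e-5

# Hotter component: F_c ~ 0.07; a column of ~2e22 cm^-2 (assumed here)
# is what is needed to reach the quoted f_refl ~ 1.2e-3.
print(f"hot gas : f_refl ~ {f_refl(2e22, 0.07):.1e}")     # ~1.1e-3
```

Both outputs agree with the quoted values to within ∼10%, the residual difference presumably reflecting the exact electron-to-hydrogen conversion adopted.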
Although the physical conditions of both components are within the range observed for X-ray absorbers (cf. Reynolds 1997), we suggest that while the absorber may be associated with the scatterer, neither is associated with the coronal-line gas that we observe.
### 7.2 Photoionization versus Collisional Processes
The relative contributions of photoionization and collisional processes (e.g., shocks, heating by cosmic rays) to the physical state of the emission-line gas in NGC 1068 have been a matter of some debate. While Kriss et al. (1992) attributed the strong C III λ977 and N III λ990 seen in HUT spectra to shock heating, Ferguson et al. (1994) suggest that the strength of these lines results from a combination of continuum fluorescence and dielectronic recombination. Given the importance of continuum scattering to O I λ1302, C II λ1335, and Mg II λ2800, it is not surprising that the same process enhances C III λ977 and N III λ990. In fact, the recomputed LOWION predicts relatively strong C III λ977 and N III λ990 (see Table 2). The contribution to both lines from continuum scattering is ∼ 90% in LOWION and ∼ 50% in HIGHION, primarily from zones near the ionized face of the cloud, where the driver lines are optically thin and pumping is most efficient (Ferland 1992). The models predict C III] λ1909/C III λ977 ≈ 5.2 and N III] λ1750/N III λ990 ≈ 1.1, compared to 3.15 ± 0.51 and 1.46 ± 0.34, respectively, from the HUT spectra (Kriss et al. 1992), noting that these ratios are quite sensitive to the atomic parameters used in the code. Thus, although we cannot rule out additional heating mechanisms, it is clear that continuum pumping can dramatically enhance these lines in photoionized gas. Nevertheless, it is interesting to note that turbulent velocities as low as 50 km s<sup>-1</sup> can fully account for the resonance scattering. The full width at half maximum of C II λ1335 is ∼ 1240 km s<sup>-1</sup>, corrected for Galactic absorption, which indicates that the line is broadened by the summation of different kinematic components. Interestingly, the widths of the resolved kinematic components of intrinsic UV absorbers in Seyfert 1 galaxies are also typically ∼ 50 km s<sup>-1</sup> (Crenshaw et al. 1999). The lack of large-scale turbulence implies a lack of violent disruption of the gas. In KRC, we attributed the large Hα/Hβ ratio in at least one region to collisional excitation of Hα by an injection of energetic particles, possibly associated with the radio jet, into the NLR gas. This effect was not as apparent in the optical nucleus, and we find no evidence in our STIS data for enhancement of Hα beyond the predictions of the photoionization models. However, there may still be examples of jet/cloud interaction at other locations in the inner NLR of NGC 1068. This will be addressed in a subsequent paper. One piece of evidence that heating processes other than photoionization are present is that the electron temperatures predicted by the models are somewhat lower than those estimated from the [Fe VII] lines. However, the underprediction of the [Fe VII] ratio is probably due to our assumption that these lines arise in the same gas as the [O III] lines. Therefore, while it is possible that collisional effects are important in some of the high ionization gas in the inner NLR, most of the observed properties are consistent with photoionization by the central source.
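To make the comparison with the HUT ratios quantitative, the model-data differences can be expressed in units of the quoted measurement errors. This small check uses only the numbers given above:

```python
# (predicted, observed, 1-sigma error) for the two diagnostic ratios
ratios = {
    "C III] 1909 / C III 977": (5.2, 3.15, 0.51),
    "N III] 1750 / N III 990": (1.1, 1.46, 0.34),
}
for name, (pred, obs, err) in ratios.items():
    print(f"{name}: {(pred - obs) / err:+.1f} sigma")
# The C III ratio is ~+4 sigma high, while the N III ratio agrees to
# ~-1.1 sigma; both are sensitive to the adopted atomic parameters.
```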
### 7.3 Geometry of the Inner NLR
While there is no additional evidence to support our proposed geometry for the inner NLR in NGC 1068, assuming that the low ionization gas is screened by a large-column absorber resolves a problem with the KRC model regarding the covering factor of the screened gas. Also, it is apparent that conditions in the NLR of NGC 4151 are due to absorption of the ionizing continuum by intervening gas (Alexander et al. 1999; Kraemer et al. 1999b). If a column of extremely optically thick gas is present, it may have an important effect in the collimation of the ionizing radiation. However, in order to do so, the absorber must have a large covering factor (i.e., ≳ 0.5). In addition to NGC 4151, a large column of X-ray absorbing gas has been detected in the Seyfert 1.5 Mrk 6 (Feldmeier et al. 1999), but there are not enough examples to make a statistical determination of the covering factor of the absorber, or of its column density as a function of scale height. Nevertheless, if this component has a large covering factor, it will be important, along with the putative molecular torus and any intrinsic anisotropy of the radiation field, in determining the distribution of ionized gas in the NLR.
## 8 Conclusions
We have analyzed the STIS UV and optical spectra of the inner nuclear region of NGC 1068, near the optical continuum peak or hot spot. We have constructed photoionization models of the narrow-line gas and have been able to successfully match nearly all of the observed dereddened emission-line ratios. From our analysis and modeling of these spectra, we can report the following findings regarding the physical conditions near the hot spot. 1. We report the detection of a number of strong coronal emission lines, including [Fe XIV] λ5303 and [S XII] λ7611. The latter is the highest ionization UV/optical line yet identified in the NLR of a Seyfert galaxy. The coronal lines are blueshifted with respect to both the systemic velocity of the host galaxy and the lower ionization lines in these spectra. This indicates that the coronal lines arise in a distinct component of narrow-line gas. The lower velocities of the other emission lines may be due to contributions from gas at larger radial distances along our line of sight, but it is also possible that much of this gas is co-located with the coronal component, which would imply that the velocity of the gas is related to its ionization state and/or density. If this gas is in radial outflow from the nucleus, these observations may provide important constraints for models of cloud acceleration (the kinematics of the inner NLR of NGC 1068 will be the subject of a later paper). 2. We have used a three-component model to match the narrow emission-line flux ratios, since the kinematics and range of physical conditions clearly indicate that we are sampling distinct emission-line clouds. Most of the high ionization lines, such as C IV λ1550, [Ne V] λ3426, and [Fe VII] λ6087, arise in a component, HIGHION, which is directly ionized by the hidden continuum source and is optically thin at the Lyman limit. Low ionization lines, such as Mg II λ2800, [O II] λ3727, and [N II] λ6584, are formed in a component, LOWION, which is screened from the central source by an optically thick layer of gas (N<sub>H</sub> = 7.4 x 10<sup>22</sup> cm<sup>-2</sup>), similar to that seen in NGC 4151.
Several of the UV resonance lines formed in the low ionization gas (O I λ1302, C II λ1335, and Mg II λ2800) are enhanced by the scattering of continuum radiation. The turbulent velocities required to produce this effect are low, ∼ 50 km s<sup>-1</sup>. Although this is greater than the thermal velocities within the emitting clouds, it is much less than one might expect if the clouds were being disrupted (e.g., by fast shocks). Also, it is an interesting coincidence that the required turbulent velocities are similar to those measured in the resolved kinematic components of the intrinsic UV absorbers in Seyfert 1s. Unlike our previous study (KRC), we find no strong evidence for a large overabundance of neon and nitrogen, although the Fe/H ratio in the higher ionization gas appears to be approximately twice solar. Interestingly, there cannot be significant iron enhancement in the low ionization gas; otherwise, the model would underpredict the electron temperature in the partially neutral zone. Although iron could be depleted onto dust grains in the neutral gas, we find no evidence of the effects of dust mixed in with the emission-line gas. Assuming solar abundances, the models underpredict N V λ1240 and overpredict N IV] λ1486. Resonance scattering and continuum fluorescence cannot sufficiently pump the N V line to match the observed flux, and, in any case, these processes do not affect the N IV] strength. Although extremely inhomogeneous nitrogen abundances could cause this effect, we think that this is quite unlikely. 3. Although the atomic data are not sufficiently reliable to predict coronal-line flux ratios, we have generated a component, CORONAL, in which all of the observed ions are present. Hence, it is plausible that these lines form in photoionized gas. We do not believe that this component is responsible for the scattered continuum radiation, since its covering factor and electron temperature are too low. As such, we postulate that a more highly ionized, co-located component is the scatterer. This component has a covering factor and ionization state similar to the X-ray absorbers detected in Seyfert 1 galaxies. S.B.K. would like to thank Swaraj Tayal, Don Osterbrock, Anand Bhatia, Dick Fisher, and Bruce Woodgate for illuminating conversations about coronal emission lines. S.B.K. would also like to thank Gary Ferland for useful discussions about resonance scattering and continuum pumping of emission lines. We also thank Cherie Miskey for help with the figures. S.B.K. and D.M.C. acknowledge support from NASA grant NAG 5-4103.
# Non-local regularization of chiral quark models in the soliton sector (talk presented by G. Ripka at the International Workshop on Hadron Physics “Effective Theories of Low Energy QCD”, Coimbra, Portugal, September 10-15, 1999)
## I Some specificities of chiral quark models
This work was done in collaboration with Wojciech Broniowski from Krakow. We consider chiral quark models which encompass three sectors: the vacuum and soliton sectors, which are treated in the mean-field (leading order in $`N_c`$) approximation, and the meson sector, which describes the (next-to-leading order in $`N_c`$) vibrations of the vacuum sector. Not all models are applicable to all three sectors. For example, constituent quark models, in which quarks interact with confining forces, cannot describe the vacuum sector, that is, the Dirac sea. However, they can and do describe the excited states of baryons, something which the chiral quark models cannot do (except, possibly, the $`\mathrm{\Delta }`$) for lack of confinement. Neither chiral quark models nor any of the other low energy quark models have been derived from QCD. The only serious attempt to derive them from QCD is the instanton gas model (Diakonov 1986; Shuryak 1982). In this approach, the chiral quark model is derived by calculating the propagation of quarks in a gas of instantons. A regularized effective theory results, as it should. It predicts both the value of the cut-off and the form of the regulator. The non-local regularization discussed here has the same form as the one derived from the instanton gas model. Unfortunately, the quark models derived from the instanton structure of the vacuum do not lead to quark or color confinement. This serious limitation serves as a reminder that we have not really succeeded in deriving low energy effective theories from QCD. Other so-called “derivations” of quark models from QCD involve more guesswork than derivation. Most telling is their inability to derive a regularized model. If infinities appear in an effective theory, one should seek the physical processes which prevent the infinities from occurring. Invoking the roughly $`200`$ MeV QCD cut-off is not a serious argument. Nor does QCD imply in any sense that the quark-quark interaction at low energy should be a one-gluon exchange with a modified gluon propagator. The regularizations used so far in Nambu-Jona-Lasinio type models (proper-time regularization being the most commonly used one) are nothing but renormalization techniques in which a finite cut-off is maintained. Not only is this arbitrary, but such regularizations are beset with problems. One might argue that the value of the cut-off should not matter. Indeed it would not if the effective theory consisted, for example, in eliminating some high energy degrees of freedom and using the remaining degrees of freedom to work out the dynamics of low energy phenomena. In such a case, one might expect the cut-off to be much larger than the inverse size of the composite particles, and the results not to be sensitive to the cut-off. In chiral quark models, however, this is not the case. The cut-offs required to fit $`f_\pi `$ are about $`700`$ MeV, hardly larger than the $`\rho `$ or the nucleon mass. This is a fact of life, whether we like it or not. One can of course simply discard such models, but better models do not seem to be forthcoming.
## II The soliton in the non-local chiral quark model
The non-local chiral quark model is defined by the euclidean action: $$I(q,q^{\dagger })=\left\langle q\left|\partial _\tau +\frac{\vec{\alpha }\cdot \vec{\nabla }}{i}+m\right|q\right\rangle -\frac{G^2}{2}\int d_4x\left(\left\langle q\left|r\right|x\right\rangle \beta \mathrm{\Gamma }_a\left\langle x\left|r\right|q\right\rangle \right)^2.$$ (1) In this expression, $`\mathrm{\Gamma }_a=(1,i\gamma _5\tau _a)`$, $`q\left(x\right)\equiv \left\langle x|q\right\rangle `$ is the quark field, and $`r`$ is a regulator. The regulator is assumed to be diagonal in momentum space, and it has a range which defines an effective euclidean cut-off $`\mathrm{\Lambda }`$. For example, we could take $`\left\langle k\left|r\right|k^{}\right\rangle =\delta _{k,k^{}}r\left(k^2\right)`$ with $`r\left(k^2\right)=e^{k^2/2\mathrm{\Lambda }^2}`$, where $`k`$ is a euclidean 4-vector $`k_\mu =(\omega ,\vec{k})`$ with $`k^2=\omega ^2+\vec{k}^2`$. The interaction term of the action (1) can be viewed as a contact 4-fermion interaction involving the *delocalized quark fields*: $$\psi \left(x\right)=\left\langle x\left|r\right|q\right\rangle =\int d_4y\left\langle x\left|r\right|y\right\rangle q\left(y\right).$$ (2) An action of the form (1) is derived from the instanton gas model of the QCD vacuum (Diakonov 1986; Shuryak 1982), which predicts a cut-off function of the form: $$r\left(k^2\right)=f\left(k\rho /2\right),f\left(z\right)=z\frac{d}{dz}\left(I_0\left(z\right)K_0\left(z\right)I_1\left(z\right)K_1\left(z\right)\right)$$ (3) where $`\rho `$ is the instanton size, and the sign is such that $`f(0)=1`$, i.e. $`f\left(z\right)=-z\frac{d}{dz}\left(I_0\left(z\right)K_0\left(z\right)-I_1\left(z\right)K_1\left(z\right)\right)`$. The cut-off is thus determined by the inverse instanton size $`1/\rho `$. The form (3) has $`r\left(z=0\right)=1`$ and $`r\left(z\right)\rightarrow \frac{9}{2k^6\rho ^6}`$ as $`z\rightarrow \mathrm{\infty }`$. However, at large euclidean momenta $`k`$, the form (3) is no longer valid and the cut-off function is dominated by one-gluon exchange. It decreases as $`\frac{1}{k^2}`$ (with possible logarithmic corrections) and not as $`\frac{1}{k^6}`$. We find that the fall-off of the regulator at large euclidean $`k^2`$ does not affect the soliton properties very much. For this reason, we have felt free to use various simple forms of cut-off functions, such as a gaussian, which have the additional advantage that they can be analytically (although arbitrarily) continued to negative values of $`k^2`$. We shall see below that the analytic continuation is required to include the valence orbit. A similar regularization has been used by the Manchester group (Birse 1998) in the meson and vacuum sectors. Various regularization schemes are reviewed in chapter 6 of Ripka (1997). The euclidean action allows us to calculate the partition function $`Z=\int D\left(q\right)D\left(q^{\dagger }\right)e^{-I(q,q^{\dagger })}`$ and the ground state energy $`E=-\frac{\partial }{\partial \beta }\mathrm{ln}Z`$. The partition function cannot be written in the form $`Z=Tre^{-\beta H}`$ because the regulator in the action (1) prevents us from defining a hamiltonian $`H`$. We are also unable to quantize the quark fields, but we shall see that the baryon number is nonetheless properly quantized. We work with the equivalent bosonized form of the action: $$I\left(\phi \right)=-Tr\mathrm{ln}\left(\partial _\tau +\frac{\vec{\alpha }\cdot \vec{\nabla }}{i}+\beta m+\beta r\phi _a\mathrm{\Gamma }_ar\right)+\frac{1}{2G^2}\int d_4x\phi _a^2\left(x\right)$$ (4) in which case the partition function is given by the path integral $`Z=\int D\left(\phi \right)e^{-I\left(\phi \right)}`$.
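Both regulators mentioned above are one-liners to evaluate. The sketch below checks that the instanton-derived form factor satisfies f(0) = 1 (with the overall minus sign noted above, which the modified-Bessel derivative identities require for this normalization) and estimates its large-z power law numerically; the gaussian is included for comparison, with an illustrative cut-off. How the resulting z⁻³ behaviour of f relates to the k⁻⁶ falloff quoted in the text depends on whether r or r² is meant, and we flag that reading as ours:

```python
import numpy as np
from scipy.special import i0, i1, k0, k1

def f_instanton(z):
    """f(z) = -z d/dz [I0(z)K0(z) - I1(z)K1(z)], expanded with
    I0' = I1, K0' = -K1, I1' = I0 - I1/z, K1' = -K0 - K1/z:
    f(z) = -2z (I1 K0 - I0 K1) - 2 I1 K1."""
    return -2.0*z*(i1(z)*k0(z) - i0(z)*k1(z)) - 2.0*i1(z)*k1(z)

def r_gauss(k2, lam=0.7):
    """Gaussian regulator r(k^2) = exp(-k^2/(2 lam^2)), lam in GeV (assumed)."""
    return np.exp(-k2 / (2.0 * lam**2))

print(f"f(z -> 0) = {f_instanton(1e-6):.4f}")            # -> 1.0000
z1, z2 = 20.0, 40.0
slope = np.log(f_instanton(z2) / f_instanton(z1)) / np.log(z2 / z1)
print(f"large-z slope d ln f / d ln z ~ {slope:.2f}")    # -> -3
# f ~ z^-3; the squared regulator r^2, which is the combination that
# multiplies the mass term in the action, then falls like k^-6.
print(f"r_gauss(k^2 = 0.49 GeV^2) = {r_gauss(0.49):.3f}")
```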
We refer to $`\phi _a\mathrm{\Gamma }_a=S+i\gamma _5\tau _aP_a`$ as the “chiral field”, and we say that the chiral field is “on the chiral circle” if, for all $`x`$, we have $`S^2\left(x\right)+P_a^2\left(x\right)=M_0^2`$, where $`M_0`$ is an $`x`$-independent constant mass. We have calculated a localized and time independent stationary point of the action (4), consisting of a chiral field with a hedgehog shape $`S\left(r\right)+i\gamma _5\widehat{x}_a\widehat{\tau }_aP\left(r\right)`$ (Ripka 1998). The shape of the fields and the soliton energy can be calculated in terms of the energies $`e_\lambda \left(\omega \right)`$ of the quark orbits. The “Dirac hamiltonian” is diagonal in the energy representation, although it remains energy dependent. The quark orbits $`|\omega ,\lambda _\omega \rangle `$ satisfy the equations: $$\partial _\tau |\omega ,\lambda _\omega \rangle =i\omega |\omega ,\lambda _\omega \rangle ,\left(\frac{\vec{\alpha }\cdot \vec{\nabla }}{i}+\beta m+\beta r\phi _a\mathrm{\Gamma }_ar\right)|\omega ,\lambda _\omega \rangle =e_\lambda \left(\omega \right)|\omega ,\lambda _\omega \rangle .$$ (5) The energy of the soliton is: $$E_{sol}=N_ce_{val}+\frac{1}{2\pi }\int _{\mathrm{\infty }}^{+\mathrm{\infty }}\omega d\omega \sum _{\lambda _\omega }\frac{i+\frac{de_\lambda \left(\omega \right)}{d\omega }}{i\omega +e_\lambda \left(\omega \right)}+\frac{1}{2G^2}\int d_3x\phi _a^2\left(\vec{x}\right)\mathrm{vac}.$$ (6) where “$`\mathrm{vac}.`$” means that we subtract the corresponding vacuum quantity (the lower integration limit is $`-\mathrm{\infty }`$ and the vacuum term enters with a minus sign). In the vacuum, $`P=0`$, $`S=M_0`$, and there is no valence orbit contribution $`e_{val}`$. The latter is discussed in the next section.
## III The quantization of the baryon number and the valence orbit
We calculate the baryon number from the Noether current associated with the gauge transformation $`q\left(x\right)\rightarrow e^{i\alpha \left(x\right)}q\left(x\right)`$. It turns out to be: $$B=\frac{1}{2\pi iN_c}\int _{-\mathrm{\infty }}^{+\mathrm{\infty }}d\omega \sum _{\lambda _\omega }\frac{i+\frac{de_\lambda \left(\omega \right)}{d\omega }}{i\omega +e_\lambda \left(\omega \right)}.$$ (7) The extra term $`\frac{de_\lambda \left(\omega \right)}{d\omega }`$ in the numerator arises from the fact that the regulator $`r`$ does not commute with $`\alpha \left(x\right)`$. Its effect is to make the residues of all the poles of the quark propagator $`\frac{1}{i\omega +e_\lambda \left(\omega \right)}`$ equal to unity. This effectively quantizes the baryon number, in a manner which does not seem to be related to the topology of the hedgehog field (nor, indeed, is the soliton stabilized by the topology of the chiral field). This is most fortunate because, a priori, there is no reason to expect a theory in which we cannot quantize the quark field to yield a properly quantized baryon number. The expression (7) suggests a way to include the valence orbit so as to ensure that the baryon number of the soliton, relative to the vacuum, is equal to unity. We calculate the “on-shell” pole of the quark propagator in the hedgehog background field by searching for a solution of the equation $`i\omega +e_\lambda \left(\omega \right)|_{\omega =ie_{val}}=0`$. Because of the regulator, the solutions are scattered all over the complex $`\omega `$ plane. However, it is well known that, in the local theory where we set $`r=1`$, and for a hedgehog field with winding number unity, a well separated bound orbit with grand spin and parity $`0^+`$ occurs with energy $`e_{val}`$ close to zero (Ripka 1984).
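The unit-residue property is easy to check numerically: the numerator i + de/dω is exactly the ω-derivative of the denominator iω + e(ω), so any simple zero of the denominator has residue one. The sketch below verifies this with a small contour integral around the pole of a toy, gaussian-regulated orbit energy; the toy parameters are our own, not the paper's:

```python
import numpy as np
from scipy.optimize import brentq

LAM, E0 = 1.0, 0.2    # toy cut-off and level strength [GeV], assumed

def e_orbit(w):
    """Toy energy-dependent orbit energy e(w) = E0 exp(-w^2/LAM^2),
    mimicking a gaussian-regulated level (accepts complex w)."""
    return E0 * np.exp(-w**2 / LAM**2)

def integrand(w, h=1e-6):
    de = (e_orbit(w + h) - e_orbit(w - h)) / (2.0 * h)
    return (1j + de) / (1j * w + e_orbit(w))

# On the imaginary axis w = i x the pole condition i w + e(w) = 0
# reads -x + E0 exp(x^2/LAM^2) = 0:
x0 = brentq(lambda x: -x + E0 * np.exp(x**2 / LAM**2), 0.0, 0.25)
w0 = 1j * x0

# Residue from a small circular contour around the pole:
theta = np.linspace(0.0, 2.0 * np.pi, 4000, endpoint=False)
rad = 1e-3
z = w0 + rad * np.exp(1j * theta)
dz = 1j * rad * np.exp(1j * theta) * (theta[1] - theta[0])
residue = np.sum(integrand(z) * dz) / (2j * np.pi)
print(f"pole at w = {w0:.4f}, residue = {residue.real:.4f}")  # ~1.0000
```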
In the non-local theory, we find that a solution of the equation $`\omega =ie_{val}\left(\omega \right)`$ can always be found on the imaginary $`\omega `$ axis, close to the origin $`\omega =0`$, and that no other pole occurs in the vicinity. We therefore ensure that the soliton has baryon number $`B=1`$ by deforming the integration path over $`\omega `$ in such a way as to include the contribution of this pole. This requires an analytic continuation of the regulator. Such a continuation is arbitrary, but it does not need to extend as far from the origin as $`e_{val}`$. Indeed, since the soliton size is small, $`\vec{k}^2>0`$ is large and this, on the average, makes $`k^2=-e_{val}^2+\vec{k}^2`$ less negative. Unfortunately, however, the form (3) of the regulator, predicted by the instanton model, does not allow any analytic continuation whatsoever, thereby strictly prohibiting its use in the soliton calculation.
## IV Results of self-consistent soliton calculations
The model parameters are the coupling constant $`G`$ appearing in the lagrangian, the cut-off $`\mathrm{\Lambda }`$ appearing in the regulator, and the current quark mass $`m`$. The values of the three parameters are constrained by fitting the pion decay constant $`f_\pi =93`$ MeV and the pion mass $`m_\pi =139`$ MeV. The expression used to calculate the pion decay constant $`f_\pi `$ is: $$f_\pi ^2=2N_fN_cM_0^2\int \frac{d_4k}{\left(2\pi \right)^4}\frac{r_k^4-k^2r_k^2\frac{dr_k^2}{dk^2}+k^4\left(\frac{dr_k^2}{dk^2}\right)^2}{\left(k^2+r_k^2M_0^2\right)^2}$$ (8) which is valid in the chiral limit $`m\rightarrow 0`$; it is not identical to the Pagels-Stokar formula (Pagels & Stokar 1979). This leaves one undetermined parameter, which we choose to be the constituent quark mass $`M_0`$ at zero 4-momentum. The pion decay constant $`f_\pi `$ sets the scale. Broadly, soliton energies increase and soliton radii diminish as $`f_\pi `$ increases (see table 2). Figure 1 shows the soliton energy $`E_{sol}`$ as a function of the free parameter $`M_0`$. A soliton is a bound state of $`N_c=3`$ quarks which polarize the Dirac sea. With a gaussian regulator, it is formed if $`M_0\gtrsim 276`$ MeV, that is, for a sufficiently strong coupling constant $`G\gtrsim 4.7\times 10^{-3}`$ MeV<sup>-1</sup>. The bound state occurs when the energy of the system is lower than the energy $`N_cM_q`$ of $`N_c`$ free constituent quarks in the vacuum: $`E_{sol}<N_cM_q`$. The mass $`M_q`$ is the on-shell constituent quark mass, obtained by searching for the pole of the quark propagator in the vacuum. It is the solution of the equation $`k^2+\left(r_k^2M_0+m\right)^2|_{k^2=-M_q^2}=0`$, which requires an analytic continuation of the regulator to negative values of $`k^2`$. Figure 1 also shows $`N_cM_q`$. At the critical value $`M_0\simeq 276`$ MeV, the two curves merge. The contribution $`N_ce_{val}`$ of the valence orbit is also shown. At the critical value of $`M_0`$, the energy $`e_{val}`$ of the valence orbit, which is the on-shell mass of a quark propagating in the hedgehog field, becomes a well distinguished bound orbit. At $`M_0\simeq 309`$ MeV, the curve displaying $`N_cM_q`$ in figure 1 abruptly stops. Indeed, for larger values of $`M_0`$, the poles of the quark propagator no longer occur for real values of $`k^2`$. This means that quarks can no longer materialize on-shell in the vacuum. This feature is discussed in chapter 6 of Ripka (1997), and it has been considered by several authors as a sign of quark confinement (Krewald 1992; Roberts 1994; Birse 1995).
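The disappearance of the on-shell pole above a critical $`M_0`$ is simple to reproduce for the gaussian regulator: continuing $`r_k^2=e^{k^2/\mathrm{\Lambda }^2}`$ to $`k^2=-M_q^2`$ turns the pole condition into $`M_q=M_0e^{M_q^2/\mathrm{\Lambda }^2}+m`$. The sketch below solves this in the chiral limit for an illustrative cut-off of 700 MeV (the scale quoted earlier as typical for fitting $`f_\pi `$; the actual fitted value may differ):

```python
import numpy as np
from scipy.optimize import brentq

LAM = 700.0   # MeV, illustrative gaussian cut-off (assumed)
m = 0.0       # current quark mass, chiral limit

def pole_condition(M, M0):
    """On-shell condition M = M0*exp(M^2/LAM^2) + m for the gaussian
    regulator continued to k^2 = -M^2 (zero of this function)."""
    return M0 * np.exp(M**2 / LAM**2) + m - M

for M0 in (250.0, 280.0, 300.0, 310.0):
    grid = np.linspace(1.0, 700.0, 1400)
    vals = pole_condition(grid, M0)
    crossings = np.where(np.diff(np.sign(vals)) != 0)[0]
    if crossings.size:
        i = crossings[0]   # lowest-mass (physical) branch
        Mq = brentq(pole_condition, grid[i], grid[i + 1], args=(M0,))
        print(f"M0 = {M0:5.0f} MeV -> on-shell mass M_q = {Mq:6.1f} MeV")
    else:
        print(f"M0 = {M0:5.0f} MeV -> no real on-shell pole (moves complex)")
```

With these illustrative numbers the real solution disappears between M0 = 300 and 310 MeV, in line with the critical value of about 309 MeV quoted above.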
In fact, when a pole of the quark propagator disappears from the real $`k^2`$ axis, it simply moves into the complex plane. Such poles indicate an instability of the assumed vacuum state against the addition of a single quark. However, our calculation shows that, in the background soliton field, the on-shell valence orbit continues to exist, and so does the soliton. Unfortunately, the regulator also introduces extra unwanted poles in the propagators of colorless mesons, so that the model does not express color confinement. Similar unwanted poles occur in proper-time regularization (Ripka 1995). Our ignorance of how to continue propagators in the complex $`k^2`$ plane reflects our ignorance of the confining mechanism (Stingl 1990). Apart from the solitons consisting of three valence quarks, we find stable solitons consisting of a single valence quark in the background soliton field (see figure 2), as well as of two valence quarks. Similar solutions have been found in the linear sigma model with valence quarks (Golli 1997). Figure 3 shows the scalar and pseudoscalar fields $`S\left(x\right)/M_0`$ and $`P\left(x\right)/M_0`$ of the soliton obtained with several values of $`M_0`$, together with the soliton quark density $`\rho \left(x\right)`$. Note that, within the soliton, the fields *do not* lie on the chiral circle: $`S^2\left(x\right)+P^2\left(x\right)<M_0^2`$. Indeed, the pion component $`P\left(x\right)`$ never reaches the value $`M_0`$. This is a new dynamical result. This is the only calculation, as far as we know, in which one can check dynamically whether or not the chiral field remains on the chiral circle. It could not be checked in the renormalized linear sigma model, because close-lying Landau poles occur which make the soliton unstable against high gradients in the fields (Ripka 1987; Perry 1987). It could also not be checked in local theories which use proper-time regularization because, in such theories, the soliton is unstable unless the fields are constrained to remain on the chiral circle (Goeke 1992; Ripka 1993). No such instability occurs with the non-local regularization. The soliton we obtain with non-local regularization has a structure which lies midway between a Friedberg-Lee soliton (Lee 1981; Wilets 1989), in which the pion field has a vanishing classical value, and a Skyrmion (Skyrme 1962; Holzwarth 1993), in which the chiral field is constrained to remain on the chiral circle. This raises the problem of the collective rotational motion of the soliton. If the deformation in spin and isospin space is stable enough to sustain a rotation without significant distortion, then the $`\mathrm{\Delta }`$ can be described as a rotation of the soliton, and the $`N`$-$`\mathrm{\Delta }`$ mass splitting can be estimated by cranking. If, however, the deformation is small, the $`\mathrm{\Delta }`$ may be better described as a bound state of quarks with aligned spins and isospins. We have not tackled this problem yet. Table 1 shows some properties of the calculated solitons for various values of the mass parameter $`M_0`$. Rather good values of $`g_A`$ are obtained. The soliton mass and energies need to be corrected for spurious centre of mass motion (see table 2). The fields which describe the soliton break translational symmetry. The center of mass of the system is not at rest, and it makes a spurious contribution both to the energy and to the mean square radius (more generally, to the form factor). This spurious contribution is not measured, and it should be subtracted from the calculated values.
The subtraction occurs at the next-to-leading order (in $`N_c`$) approximation. A rough estimate can be obtained from an oscillator model. If $`N_c`$ particles of mass $`m`$ move in a $`1s`$ state of a harmonic oscillator of frequency $`\omega `$, the centre of mass of the system is also in a $`1s`$ state and it contributes $`\frac{3}{4}\mathrm{\hbar }\omega =\left\langle P^2\right\rangle /2N_cm`$ to the energy. We have therefore corrected the soliton energies by subtracting $`\left\langle P^2\right\rangle /2E_{sol}`$ from the calculated energy. Furthermore, in the oscillator model, the center of mass contributes a fraction $`\frac{1}{N_c}`$ of the mean square radius, so that we have corrected the mean square radius by multiplying the calculated value by a factor $`\left(1-\frac{1}{N_c}\right)`$. Table 2 shows the result. The soliton energies and radii are then considerably closer to the experimental values observed for the nucleon.
## V Conclusion: why take the trouble?
The non-local regularization effectively cuts out of the quark propagators the 4-momenta which are larger than the cut-off. It makes the theory finite at all loop orders. The simpler proper-time and Pauli-Villars regularization schemes regularize the quark loop only, and they require extra independent cut-offs when next-to-leading order meson loops are included. With the non-local regularization, both the real and the imaginary parts of the action are regularized, while the anomalous properties remain independent of the cut-off (Cahill 1988; Holdom 1989; Ripka 1993), and the baryon number remains properly quantized. In proper-time and Pauli-Villars regularization schemes, only the real part of the action is regularized, and the imaginary part is left unregularized in order to enforce correct anomalous processes. Why not simply limit the 3-momenta of the quarks, thereby avoiding unwanted extra poles in the propagators? Because breaking Lorentz covariance in the meson sector is annoying, in that it requires boosting composite particles calculated in their rest frame.
# KFKI-1999-06/A: Signal of Partial $`U_A(1)`$ Symmetry Restoration from Two-Pion Bose-Einstein Correlations
## 1 $`U_A(1)`$ symmetry restoration and the core-halo model
In this conference contribution, we follow the lines of ref. to show that a link exists between the symmetry properties of hot and dense hadronic matter and the strength of the two-pion Bose-Einstein correlation functions. In the chiral limit ($`m_u=m_d=m_s=0`$), QCD possesses a $`U(3)`$ chiral symmetry. When broken spontaneously, $`U(3)`$ implies the existence of nine massless Goldstone bosons. In nature, however, there are only eight light pseudoscalar mesons, a discrepancy which is resolved by the Adler-Bell-Jackiw $`U_A(1)`$ anomaly: the ninth would-be Goldstone boson acquires a mass as a consequence of the nonzero density of topological charges in the QCD vacuum. In recent papers, it was argued that the partial restoration of the $`U_A(1)`$ symmetry of QCD, and the related decrease of the $`\eta ^{\prime }`$ mass in regions of sufficiently hot and dense matter, should manifest itself in a factor of 3 to 50 increase in the production of $`\eta ^{\prime }`$ mesons, relative to nuclear interactions which do not produce the phase transition. It was also observed, however, that the $`\eta ^{\prime }`$ decays are characterized by a small signal-to-background ratio in the direct two-photon decay mode. We have shown in ref. that the momentum dependence of $`\lambda _{*}`$, which characterizes the strength of Bose-Einstein correlations of pions, provides an experimentally well observable signal of partial $`U_A(1)`$ restoration. As was shown in several papers, at incident beam energies of 200 AGeV at the CERN SPS, the space-time structure of pion emission in high energy nucleus-nucleus collisions can be separated into two regions: the core and the halo. The pions emitted from the core or central region are either produced by a direct production mechanism, such as the hadronization of wounded string-like nucleons in the collision region, rescattering as they flow outward with a rescattering time on the order of 1 fm/c, or produced in the decays of short-lived hadronic resonances such as the $`\rho `$, $`N^{*}`$, $`\mathrm{\Delta }`$ and $`K^{*}`$, whose decay time is also on the order of 1-2 fm/c. This core region is resolvable by Bose-Einstein correlation (BEC) measurements. The halo region, however, consists of the decay products of long-lived hadronic resonances such as the $`\omega `$, $`\eta `$, $`\eta ^{\prime }`$ and $`K_S^0`$, whose lifetime is greater than 20 fm/c. This halo region is not resolvable by BEC measurements, but it contributes to the reduction of the effective intercept parameter, $`\lambda _{*}`$. The two-particle Bose-Einstein correlation function is defined as $$C(\mathrm{\Delta }k,K)=\frac{N_2(𝐩_\mathrm{𝟏},𝐩_\mathrm{𝟐})}{N_1(𝐩_\mathrm{𝟏})N_1(𝐩_\mathrm{𝟐})},$$ (1) where the inclusive $`1`$\- and $`2`$-particle invariant momentum distributions are $$N_1(𝐩_1)=\frac{1}{\sigma _{in}}E_1\frac{d\sigma }{d𝐩_1},N_2(𝐩_1,𝐩_2)=\frac{1}{\sigma _{in}}E_1E_2\frac{d\sigma }{d𝐩_1d𝐩_2},$$ (2) with $`p=(E_𝐩,𝐩)`$, $`\mathrm{\Delta }k=p_1-p_2`$, and $`K=(p_1+p_2)/2`$.
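Anticipating the core-halo decomposition developed below, it is instructive to see how an unresolved halo suppresses the measured intercept of the correlation function just defined. The sketch below evaluates C(Q) for a toy source in which the core is a Gaussian of radius R<sub>c</sub> (so its correlator is exp(-R<sub>c</sub>²Q²)) and a fraction f<sub>core</sub> of the pions come from the core; the Gaussian profile and the numerical values are illustrative assumptions of ours, not fits from this paper:

```python
import numpy as np

def correlation(q, r_core_fm, f_core):
    """Toy two-pion BEC function C(Q) = 1 + lambda_* exp(-R_c^2 Q^2) for a
    chaotic Gaussian core; lambda_* = f_core^2 in the core/halo picture.
    q in GeV/c; radii converted with hbar*c = 0.1973 GeV fm."""
    r_gev = r_core_fm / 0.1973          # fm -> GeV^-1
    lam = f_core**2                      # effective intercept
    return 1.0 + lam * np.exp(-(r_gev * q)**2)

q = np.linspace(0.0, 0.2, 5)             # relative momentum [GeV/c]
for fc in (1.0, 0.7, 0.5):                # assumed core fractions
    c = correlation(q, r_core_fm=6.0, f_core=fc)
    print(f"f_core={fc:.1f}: C(Q->0)={c[0]:.2f}  lambda_*={fc**2:.2f}")
# Extra halo daughters (e.g. from eta') lower f_core at a given p_t and
# hence lower the measured intercept lambda_*.
```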
From the four assumptions made in the core-halo model, the Bose-Einstein correlation function is found to be $$C(\mathrm{\Delta }k,K)\simeq 1+\lambda _{*}R_c(\mathrm{\Delta }k,K),$$ (3) where the effective intercept parameter $`\lambda _{*}`$ and the correlator of the core, $`R_c(\mathrm{\Delta }k,K)`$, are defined, respectively, as $$\lambda _{*}=\lambda _{*}(K=p;Q_{min})=\left[\frac{N_c(𝐩)}{N_c(𝐩)+N_h(𝐩)}\right]^2$$ (4) and $$R_c(\mathrm{\Delta }k,K)=\frac{|\stackrel{~}{S}_c(\mathrm{\Delta }k,K)|^2}{|\stackrel{~}{S}_c(\mathrm{\Delta }k=0,K=p)|^2}.$$ (5) Here, $`\stackrel{~}{S}_c(\mathrm{\Delta }k,K)`$ is the Fourier transform of the core one-boson emission function, $`S_c(x,p)`$, and the subscripts $`c`$ and $`h`$ indicate the contributions from the core and the halo, respectively; see refs. for further details. If the core-halo model is applicable, the intercept parameter $`\lambda _{*}`$ becomes a momentum-dependent measure of the core/halo fraction, as follows from eq. (4). If the $`\eta ^{\prime }`$ mass is decreased, a large fraction of the $`\eta ^{\prime }`$s will not be able to leave the hot and dense region through thermal fluctuation, since they would need to compensate for the missing mass by a large momentum. These $`\eta ^{\prime }`$s will thus be trapped in the hot and dense region until it disappears, after which their mass becomes normal again; as a consequence of this mechanism, they will have small $`p_t`$. The $`\eta ^{\prime }`$s then decay to pions via $$\eta ^{\prime }\rightarrow \eta +\pi ^++\pi ^{-}\rightarrow (\pi ^0+\pi ^++\pi ^{-})+\pi ^++\pi ^{-}.$$ (6) Assuming a symmetric decay configuration $`(|p_t|_{\pi ^+}\simeq |p_t|_{\pi ^{-}}\simeq |p_t|_\eta )`$ and letting $`m_{\eta ^{\prime }}=958`$ MeV, $`m_\eta =547`$ MeV and $`m_{\pi ^+}=140`$ MeV, the average $`p_t`$ of the pions from the $`\eta ^{\prime }`$ decay is found to be $`p_t\simeq 138`$ MeV. As the $`\eta ^{\prime },\eta `$ decays contribute to the halo due to their long decay times ($`1/\mathrm{\Gamma }_{\eta ^{\prime },\eta }>20`$ fm/c), we expect a hole in the low $`p_t`$ region of the effective intercept parameter, $`\lambda _{*}=[N_{core}(𝐩)/N_{total}(𝐩)]^2`$, centered around $`p_t\simeq 138`$ MeV. If the masses of the $`\omega `$ and $`\eta `$ mesons also decrease in hot and dense matter, this $`\lambda _{*}(m_t)`$ hole may even be deepened further, as discussed in ref.
## 2 Numerical simulation
In this section, we briefly review the essential steps of the numerical simulations and highlight some selected results, following ref. In the numerical calculation of $`\lambda _{*}`$, we suppressed the rapidity dependence by considering only the central rapidity region, $`(-0.2<y<0.2)`$. As a function of $`m_t=\sqrt{p_t^2+m^2}`$, $`\lambda _{*}(m_t)`$ is given by eq. (4), where the numerator represents the invariant $`m_t`$ distribution of the $`\pi ^+`$ emitted from the core and the denominator represents the invariant $`m_t`$ distribution of the total number of $`\pi ^+`$ emitted. To calculate the $`\pi ^+`$ contribution from the halo region, the resonances $`\omega `$, $`\eta ^{\prime }`$, $`\eta `$ and $`K_S^0`$ were given an $`m_t`$ according to the distribution $$N(m_t)=Cm_t^\alpha e^{-m_t/T_{eff}},$$ (7) where $`C`$ is a normalization constant, $`\alpha =1-d/2`$, $`d=3`$ is the dimension of the expansion, and the mass dependence of the slope parameter is $$T_{eff}=T_{fo}+m\left\langle u_t\right\rangle ^2,$$ (8) with $`T_{fo}=140`$ MeV the freeze-out temperature and $`\left\langle u_t\right\rangle =0.5`$ the average transverse flow velocity. In this way, the long-lived resonances were generated; they were then decayed using JETSET 7.4.
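The quoted 138 MeV follows from three-body kinematics for an $`\eta ^{\prime }`$ decaying at rest in the symmetric configuration, where the $`\eta `$ and both charged pions carry momenta of equal magnitude, 120° apart in a plane. A minimal numerical check, using only the masses given above:

```python
from math import sqrt
from scipy.optimize import brentq

M_ETAP, M_ETA, M_PI = 958.0, 547.0, 140.0   # MeV, from the text

def energy_balance(p):
    """Energy conservation for eta' -> eta pi+ pi- at rest, with all
    three decay momenta equal in magnitude (symmetric configuration)."""
    return sqrt(M_ETA**2 + p**2) + 2.0 * sqrt(M_PI**2 + p**2) - M_ETAP

p = brentq(energy_balance, 1.0, 300.0)
print(f"common momentum p ~ {p:.0f} MeV/c")   # ~138 MeV/c
```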
The $`m_t`$ distribution of the core pions was also obtained from Eqs. (7) and (8). The contributions from the decay products of the core and the halo were then added together according to their respective fractions, as given in ref., allowing for the determination of $`\lambda _{*}(m_t)`$. The presence of the hot and dense region is accounted for by including an additional relative fraction of $`\eta ^{\prime }`$ with a medium-modified $`p_t`$ spectrum. The $`p_t`$ spectrum of these $`\eta ^{\prime }`$ is obtained by assuming energy conservation and zero longitudinal motion at the boundary between the two phases, $$m_{\eta ^{\prime }}^{*2}+p_{t\eta ^{\prime }}^{*2}=m_{\eta ^{\prime }}^2+p_{t\eta ^{\prime }}^2,$$ (9) where the asterisk ($`*`$) denotes the $`\eta ^{\prime }`$ in the hot and dense region. The $`p_t`$ distribution then becomes twofold. The first part of the distribution comes from the $`\eta ^{\prime }`$s which have $`p_t^{*}\le \sqrt{m_{\eta ^{\prime }}^2-m_{\eta ^{\prime }}^{*2}}`$. These particles are given $`p_t=0`$. The second part of the distribution comes from the rest of the $`\eta ^{\prime }`$s, which have a large enough $`p_t`$ to leave the hot and dense region. These have the same, flow-motivated $`p_t`$ distribution as the other produced resonances and were given a $`p_t`$ according to the $`m_t`$ distribution of eq. (7) with $`d=3`$; the vacuum value of the $`\eta ^{\prime }`$ mass, $`m_{\eta ^{\prime }}`$, was replaced by the medium-modified $`m_{\eta ^{\prime }}^{*}`$, and the temperature of the hot and dense region was assumed to be $`T^{*}=200`$ MeV. Calculations of $`\lambda _{*}(m_t)`$ including the hot and dense region are shown in Fig. 1. The abundances of long-lived resonances were estimated with the help of the Fritiof model. The two data points shown in Fig. 1 are from S+Pb reactions at 200 AGeV, as measured by the NA44 collaboration. The lowering of the $`\eta ^{\prime }`$ mass and the partial chiral restoration result in a deepening of the hole in the effective intercept parameter at low $`m_t`$. This “$`\lambda _{*}`$-hole” appears even for a modest enhancement of a factor of 3 in the $`\eta ^{\prime }`$ production, which corresponds to a slightly reduced effective mass, $`m_{\eta ^{\prime }}^{*}=738`$ MeV. The effective masses $`m_{\eta ^{\prime }}^{*}=403`$ MeV and $`m_{\eta ^{\prime }}^{*}=176`$ MeV correspond to enhancement factors of 16 and 50, respectively. The onset of the full $`U_A(1)`$ symmetry restoration in the $`m_u=m_d=m_s=0`$ limiting case corresponds to equal probabilities of $`\eta ^{\prime }`$, $`\eta `$ and direct $`\pi `$ meson production, in which case the intercept parameter $`\lambda _{*}`$ reaches its minimum value, $`\lambda _{U_A(1)}\simeq 0.02`$, in the transverse mass region of $`m_t\simeq 220`$ MeV. Thus a measurement of the intercept parameter $`\lambda _{*}`$ over a large transverse mass interval may determine whether or not a hole exists in the low $`p_t`$ region. If the hole is present, its depth characterizes the level of partial $`U_A(1)`$ symmetry restoration in hot and dense matter. Full $`U_A(1)`$ restoration corresponds to the maximum size of the hole, bottoming out at a value of $`\lambda _{U_A(1)}`$.
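The flow-motivated spectrum of eqs. (7)-(8) and the twofold $`\eta ^{\prime }`$ distribution can be sampled by rejection. In the sketch below, $`T_{fo}`$, $`\left\langle u_t\right\rangle `$, $`d`$ and $`T^{*}`$ are the values quoted in the text, while the proposal distribution, the choice $`m_{\eta ^{\prime }}^{*}=403`$ MeV for the demonstration, and the application of the boundary condition (9) to the sampled in-medium momenta are our own reading of the procedure:

```python
import numpy as np

rng = np.random.default_rng(0)
T_FO, UT, D = 140.0, 0.5, 3          # freeze-out T [MeV], <u_t>, dim (text)

def sample_mt(mass, n, t_eff=None):
    """Rejection-sample N(m_t) ~ m_t^(1 - D/2) exp(-m_t/T_eff), m_t >= mass.
    Proposal: mass + Exp(T_eff); acceptance (m_t/mass)^(-1/2) <= 1."""
    t = t_eff if t_eff is not None else T_FO + mass * UT**2
    alpha = 1.0 - D / 2.0
    out = np.empty(n)
    i = 0
    while i < n:
        mt = mass + rng.exponential(t)
        if rng.random() < (mt / mass) ** alpha:
            out[i] = mt
            i += 1
    return out

M_ETAP, M_ETAP_STAR = 958.0, 403.0   # vacuum mass; assumed in-medium mass
mt_star = sample_mt(M_ETAP_STAR, 20000, t_eff=200.0)   # T* = 200 MeV
pt_star = np.sqrt(mt_star**2 - M_ETAP_STAR**2)

# Boundary condition (9): p_t^2 = m*^2 + p_t*^2 - m^2; eta' with a
# negative right-hand side are trapped and emerge with p_t = 0.
pt_out = np.sqrt(np.clip(M_ETAP_STAR**2 + pt_star**2 - M_ETAP**2, 0.0, None))
print(f"fraction emerging with p_t = 0: {np.mean(pt_out == 0.0):.2f}")
```

With these numbers the large majority of the in-medium $`\eta ^{\prime }`$s end up at $`p_t=0`$, which is the pile-up that digs the low-$`m_t`$ hole in $`\lambda _{*}(m_t)`$.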
In this sense, the value of the $`\mathrm{\Delta }\lambda =\lambda _{*}(m_t)-\lambda _{U_A(1)}`$ function in the $`m_\pi \le m_t\le 220`$ MeV region plays the role of an experimentally measurable, effective order parameter of the $`U_A(1)`$ symmetry restoration: its value is $`\mathrm{\Delta }\lambda =0`$ for the fully symmetric phase, while the inequality $`\mathrm{\Delta }\lambda >0`$ is satisfied if the $`U_A(1)`$ symmetry is not fully restored in the hot and dense hadronic matter produced in high energy heavy ion collisions.
## 3 Summary
Partial $`U_A(1)`$ symmetry restoration in hot and dense hadronic matter results in an observable hole in the $`m_t\le 220`$ MeV region of the $`\lambda _{*}(m_t)`$ function, which is measurable by plotting the intercept parameter of the two-pion Bose-Einstein correlation function versus the mean transverse mass of the pair. The $`\lambda _{*}`$-hole signal of partial $`U_A(1)`$ restoration cannot be faked in a conventional thermalized hadron gas scenario, as it is not possible in such a case to create a significant fraction of the $`\eta `$ and $`\eta ^{\prime }`$ mesons with $`p_t\simeq 0`$. See refs. for further details and for a discussion of possible coherence effects. A qualitative analysis of NA44 S+Pb data suggests no visible sign of $`U_A(1)`$ restoration at SPS energies. The signal of partial $`U_A(1)`$ symmetry restoration should be searched for in Pb+Pb collisions at the CERN SPS, and in forthcoming nuclear collisions at BNL RHIC and CERN LHC, by the experimental determination of the $`\lambda _{*}(m_t)`$ function at $`m_t<`$ 220 MeV.
## Acknowledgments
We thank M. Gyulassy, U. Heinz, J. Kapusta, L. McLerran, X. N. Wang and U. Wiedemann for useful discussions. This work was supported by the OTKA Grants T024094 and T026435, by the NWO-OTKA Grant N25487, by the US-Hungarian Joint Fund MAKA 652/1998, and by the Director, Office of Energy Research, Division of Nuclear Physics of the Office of High Energy and Nuclear Physics of the U.S. Department of Energy under Contract No. DE-FG02-93ER40764.
# 1 Introduction
We describe particle acceleration in an expanding, spherical SNR blast wave with a plane-wave, steady-state shock model. The justification for using a steady-state calculation to model time-dependent SNRs is given in Ellison & Berezhko (1999), and the full details and assumptions of our model are given in Baring et al. (1999). Briefly, the global SNR shock parameters (e.g., shock speed and radius as a function of remnant age) used as input for the model are estimated with a simple Sedov solution for the evolving blast wave in a uniform medium. Given the global shock dynamics, we are able to calculate the shock structure and first-order particle acceleration, assuming that electrons and ions are accelerated directly by the forward shock, leaving considerations of the reverse shock, where substantial X-ray emission may originate, for later work. Important limitations of our current model are that we do not consider oblique magnetic field structures or include second-order Fermi acceleration, which may be important in the low Alfvén Mach number shocks ($`M_{A1}\sim 4`$ for the parameters we show here) implied by the large $`B`$-fields (see Bykov & Uvarov 1999 for a model of electron injection which does include 2nd-order acceleration). For the particular case of Cas A, the forward shock may be interacting with pre-supernova wind material swept into a relatively dense shell (e.g., Borkowski et al. 1996), which may account for the high magnetic field if the stellar magnetic field was compressed along with the wind material. Alternatively, the large field may be the result of amplification by turbulent eddies (Keohane 1998, and references therein). If magnetic fields $`B\gtrsim 1000\mu \mathrm{G}`$ dominate the acceleration region, ions can be accelerated to well above $`10^{15}`$ eV/nuc. While the overall fluxes of these particles may not be sufficient to provide the bulk of the cosmic rays at 1-10 GeV, because of the small numbers of particles swept up, the particle spectra from young SNRs might be harder than those from older, slower SNRs and may dominate above $`10^{14}`$ eV, through the ‘knee’ near $`10^{15}`$ eV. Not only does the high $`B`$-field make it possible to produce cosmic rays to $`>10^{15}`$ eV, but a homogeneous model with a single set of parameters can be found which affords a reasonable fit to the intensities of diffuse photon emission from radio to $`\gamma `$-rays. Limits imposed by the radio and $`\gamma `$-ray observations allow us to place constraints on the electron-to-proton ratios produced by shock acceleration. The detailed shapes of individual components (e.g., radio, X-ray), however, are difficult to model with a single set of parameters. Despite this limitation, important model and/or environmental constraints can be inferred if it is assumed that the relativistic electrons responsible for the radio synchrotron emission also produce the diffuse infrared and X-ray continuum via the synchrotron mechanism (and in the same emitting volume). Likewise, if the GeV and TeV $`\gamma `$-ray emission is dominated by inverse Compton and bremsstrahlung emission (rather than pion decay from energetic ions), it can be assumed that these same relativistic electrons are responsible for the GeV and TeV gamma rays. Assuming a homogeneous environment precludes, of course, the modeling of emission from the lumpy morphology, knots, ring structure, etc., that characterize high spatial resolution observations of Cas A.
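For reference, the Sedov scalings that supply the global shock parameters are simple to evaluate: R<sub>sk</sub> = 1.15 (E<sub>SN</sub>t²/ρ)<sup>1/5</sup> and V<sub>sk</sub> = (2/5)R<sub>sk</sub>/t. In the sketch below, the explosion energy and ambient density are illustrative values assumed by us (and, as noted above, Cas A itself may instead be interacting with a wind shell rather than a uniform medium):

```python
YR = 3.156e7          # seconds per year
PC = 3.086e18         # cm per parsec
M_P = 1.67e-24        # proton mass [g]

def sedov(t_yr, e51=1.0, n_amb=1.0, mu=1.4):
    """Sedov-phase shock radius [pc] and speed [km/s] at age t_yr, for an
    explosion energy e51*1e51 erg in a uniform medium of hydrogen density
    n_amb [cm^-3] and mean mass per particle mu*m_p (assumed values)."""
    rho = mu * M_P * n_amb
    t = t_yr * YR
    r_sk = 1.15 * (e51 * 1e51 * t**2 / rho) ** 0.2   # cm
    v_sk = 0.4 * r_sk / t                            # cm/s
    return r_sk / PC, v_sk / 1e5

r, v = sedov(300.0)
print(f"t = 300 yr: R_sk ~ {r:.1f} pc, V_sk ~ {v:.0f} km/s")
# ~3 pc and ~4000 km/s, comparable to the observed scale of Cas A.
```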
## 2 Results
In Fig. 1 we compare our results to observations of Cas A from radio to TeV $`\gamma `$-rays. We use a single normalization for all of the model components, and it has been chosen to match the radio flux. Since the same electron distribution that produces the radio emission can also produce bremsstrahlung and inverse Compton emission at GeV-TeV energies, the combination of the observed radio intensity and the low $`\gamma `$-ray upper limits forces the conclusion that the magnetic field is exceedingly high. Otherwise, the bremsstrahlung and inverse Compton emission at TeV energies would have been observed. This might not be the case if the radio emitting electrons occupy a greater emission volume than the GeV-TeV emitting electrons, or if the lower energy (i.e., radio-emitting) electrons preferentially sample regions of clumped magnetic field and/or density. Even if either of these cases arises, the inverse Compton component, which depends at most only weakly on the background particle density, will set a lower limit on $`B`$ that remains well above standard ISM values. Note that the cosmic microwave background radiation forms the seed for inverse Compton scattering in the present exposition; work is in progress to include synchrotron self-Compton contributions, which will tighten the constraints discussed here, implying even higher $`B`$-fields. The relative importance of the pion-decay emission depends sensitively on the model parameters, and we have not yet done a careful survey of the parameter space. For our preliminary results shown in Fig. 1, we have chosen our electron injection parameters to give an electron-to-proton ratio at relativistic energies of $`(e/p)_{10\mathrm{GeV}}\simeq 0.08`$, somewhat above observed cosmic ray values. We again emphasize that even though our model is for plane-parallel, steady-state shocks, detailed comparisons (Ellison & Berezhko 1999) with the spherically symmetric, time-dependent model of Berezhko (1996; see also Berezhko et al. 1996) show that the steady-state and plane shock approximations do not seriously influence the results as long as the diffusion length of the highest energy particles is a small fraction of the shock radius, $`R_{\mathrm{sk}}`$, as should be the case in Cas A and other young SNRs. If Cas A is currently interacting with a relatively dense shell of material formed from the pre-SN wind (Borkowski et al. 1996), the Sedov solution can be replaced by estimates of the ambient density, $`n_1`$, magnetic field, $`B_1`$, $`R_{\mathrm{sk}}`$, and $`V_{\mathrm{sk}}`$, which translate to maximum particle energies.
## 3 Conclusions
The observed radio intensity of Cas A, combined with the EGRET and TeV upper limits, implies magnetic fields $`\gtrsim 1000\mu \mathrm{G}`$. Fields this large make it possible to accelerate cosmic ray ions to above $`10^{15}`$ eV/nuc in the $`\sim 300`$ yr lifetime of the remnant, since the time to shock accelerate ions of charge $`Q`$ to $`E_{\mathrm{max}}`$ is (e.g., Baring et al. 1999) $$t_{\mathrm{acc}}\simeq 190\left(\frac{\eta }{Q}\right)\left(\frac{V_{\mathrm{sk}}}{2000\mathrm{km}/\mathrm{s}}\right)^{-2}\left(\frac{B}{10^{-3}\mathrm{G}}\right)^{-1}\left(\frac{E_{\mathrm{max}}}{10^{15}\mathrm{eV}}\right)\mathrm{yr}.$$ Here, $`\eta `$ is the number of gyroradii in a scattering mean free path; it is approximately one in the Bohm limit, which we assume in this model.
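Inverting the acceleration-time formula above gives the maximum energy reached in a given remnant lifetime. The sketch below does this for Bohm diffusion (η = 1) and protons (Q = 1); the 5000 km/s shock speed used in the numerical example is an assumed, Cas A-like value rather than one quoted in the text:

```python
def e_max_eV(t_yr, v_kms, b_gauss, eta=1.0, q=1.0):
    """Invert t_acc ~ 190 (eta/Q) (V/2000 km/s)^-2 (B/1 mG)^-1
    (E_max/1e15 eV) yr for the maximum energy E_max [eV]."""
    return 1.0e15 * (t_yr / 190.0) * (q / eta) \
        * (v_kms / 2000.0) ** 2 * (b_gauss / 1.0e-3)

e = e_max_eV(t_yr=300.0, v_kms=5000.0, b_gauss=1.0e-3)
print(f"E_max ~ {e:.1e} eV")   # ~1e16 eV, well above the knee
```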
In Fig. 2 we compare the model spectra, all multiplied by $`R^{0.55}`$ \[$`R=pc/(Qe)`$\] to model rigidity-dependent escape during propagation, with cosmic ray observations. No attempt has been made to model the abundances of these components, each being separately normalized to match the observations. It is clear that the spectra extend through the knee region. Young SNR shocks sweep up far less ISM material than older, larger shocks, but the high energy cosmic rays produced may have flatter spectra than the bulk of the cosmic rays accelerated by older SNRs, which have weaker shocks; if so, they could dominate the cosmic ray flux near the knee. If this is the case, spectral (and perhaps compositional) features should exist in the cosmic ray spectra as these components become dominant. If high $`B`$-fields are common in young SNRs, this has important implications for $`\gamma `$-ray emissivity as well as cosmic ray production. High fields imply that radio intensities will be high, concomitant with relatively low relativistic electron fluxes (and presumably low relativistic ion fluxes as well), lowering the $`\gamma `$-ray emissivity. Therefore, radio-loud SNRs may not be the best candidates for $`\gamma `$-ray studies, and some other indicator may be required to guide observational programs.

Allen, G.E., et al. 1997, Ap.J. (Letts), 487, L97
Baars, J.W.M., Genzel, R., Pauliny-Toth, I.I.K., & Witzel, A. 1977, A&A, 61, 99
Baring, M.G., Ellison, D.C., Reynolds, S.P., Grenier, I.A., & Goret, P. 1999, Ap.J., 513, 311
Berezhko, E.G. 1996, Astroparticle Phys., 5, 367
Berezhko, E.G., Yelshin, V.K., & Ksenofontov, L.T. 1996, ZhETF, 109, 3
Borkowski, K.J., Szymkowiak, A.E., Blondin, J.M., & Sarazin, C.L. 1996, Ap.J., 466, 866
Bykov, A.M., & Uvarov, Yu.A. 1999, JETP, 88, 465
Cowsik, R., & Sarkar, S. 1980, M.N.R.A.S., 191, 855
Ellison, D.C., & Berezhko, E.G. 1999, 26th ICRC (Salt Lake City), OG 3.3.27
Esposito, J.A., Hunter, S.D., Kanbach, G., & Sreekumar, P. 1996, Ap.J., 461, 820
Goret, P., et al. 1999, 26th ICRC (Salt Lake City), OG 2.2.18
Keohane, J.W. 1998, Ph.D. Thesis, University of Minnesota
Lessard, R. (Whipple Collaboration) 1999, Proc. 19th Texas Symposium, Paris 1998, in press
Shibata, T. 1996, Nuovo Cimento C, 19, 713
Shibata, T. 1998, private communication
The, L.-S., et al. 1996, A&A Suppl., 120, 357
# A transient network of telechelic polymers and microspheres: structure and rheology ## I introduction The study of transient networks has been renewed by investigations of systems of telechelic polymers, modified at both extremities with end groups differing in affinity from the main chain (). To obtain a so-called transient network, one starts from a monodisperse microemulsion of oil droplets in water, stabilised at a fixed radius by a surfactant coating of the droplets, and adds telechelic polymers whose extremities will preferentially fix to the droplets (). In this paper we report first on the influence of the added polymer on the structure of the equilibrium solution at various concentrations. Then we study the dynamics of such a system under shear. ## II System description: structure and interaction ### A Experimental We first obtain a thermodynamically stable microemulsion of oil droplets in water, and then add the hydrophobically endcapped polymers. Cetyl pyridinium chloride (CPCL) and octanol were used as surfactant and cosurfactant to stabilise an emulsion of decane in brine. The spontaneous radius of the surfactant film is adjusted by the surfactant to cosurfactant ratio. The decane is added up to a value just below the line of emulsification failure. For a typical volume concentration in oil droplets $`\varphi =`$10%, the mass ratios were 4.5% of decane, 1.6% of octanol, and 6.3% of CPCL, corresponding to a cosurfactant/surfactant ratio of 0.25. Under these conditions, it is well known () that one obtains a stable dispersion of monodisperse droplets. The associative polymers consist of a poly(ethylene-oxide) (PEO) chain chemically modified at both extremities by addition of a C<sub>18</sub> alkyl group. ### B Phase behaviour and structure The telechelic polymers introduce an effective attraction between the micelles through bridging. The relevant parameter in this respect is the number $`r`$ of alkyl groups per droplet. Below $`\varphi =`$10%, the initially clear and homogeneous microemulsion undergoes a phase separation upon addition of polymers. Small angle neutron scattering data have been obtained at LLB-Saclay, on the PACE spectrometer. Neutrons are mainly scattered by the hydrophobic cores. From these studies (see ), the average radius is $`R=62\text{Å}`$. It is also apparent that the associative polymer has no influence on the droplet shape. At small concentrations, the addition of polymer induces both the appearance of a correlation peak in the neutron data (see ) and strong scattering at small angles. In this case, owing to the polymer-mediated attraction, the droplets spend a longer time at a preferential distance from each other, fixed by the polymer end-to-end distance, during a typical bridge lifetime (see ). This effective attraction accounts for the increase in osmotic compressibility, and hence for the enhanced small-$`q`$ scattering, as well as for the phase separation. At high concentrations, the bridging has no other effect than strengthening the weak interaction between the droplets. Slightly below 10%, there exists a balance concentration for which the natural length of the polymer exactly matches the constraint of liquid-like homogeneity for the droplets. ## III Dynamical properties and rheology ### A Introduction We select a concentration of 10%, at which the droplets remain, statistically, most of the time at distances slightly below the average end-to-end distance of the polymer.
Thus we can study the influence of $`r`$ over a large range (1 to 20) without phase separation. Rheological measurements have been performed in both stress-controlled and strain-controlled setups. The usual geometry was cone and plate (the rheometers used in the stress-controlled and strain-controlled experiments were a Physica US200 and a Rheometrics RFS II, respectively). ### B Linear behaviour and stress relaxation Step-strain experiments were performed on the system. In figure 1, we show typical responses for strains varying from 50% to 400%. For small deformations the response for all $`r`$ values takes the form of a slightly stretched exponential, $`G(t)=G_{0r}e^{-(\frac{t}{\tau _r})^{0.8\pm 0.05}}`$, with a value of the exponent independent of $`r`$. The elastic modulus $`G_{0r}`$ is in contrast strongly $`r`$-dependent, as can be seen in figure 2. The elastic modulus goes to zero for a finite value of $`r`$, together with the time $`\tau _r`$. This is clearly a percolation effect. Since the elastic stress is transmitted through the sample by the polymers acting as Hookean springs, an infinite geometrical path of springs must exist in the system for a non-zero modulus to be observed. Indeed the $`r`$ dependence of $`G_{0r}`$ follows very closely the power-law behaviour $`G_{0r}\propto (r-r_p)^{1.7}`$ predicted years ago from an analogy with electrical networks (see ). The value $`r_p=6`$ is confirmed by independent observations of the relaxation time by dynamic light scattering. The fraction of polymers forming loops instead of bridges is unknown. If the number of bridges per droplet at the percolation point were 1.5, as expected from the geometrical theory of percolation (see ), the ratio of bridges to loops would be around 25%. The typical scale of the relaxation time should be fixed by the average lifetime of a bond, related to the energy necessary to break the bond by an Arrhenius-like law $`\tau =\frac{1}{\nu }e^{(\frac{W}{k_BT})}`$ (see ). The stress relaxes as the loaded bridges break and reform. But they reform in an unloaded state, and thereafter no longer contribute to the stress if the sample is not macroscopically deformed after $`t=0`$. In this picture, at any time $`t`$, the number of chains still loaded can be described by an effective connectivity $`r(t)`$, whose evolution is $`r(t)\simeq r_0e^{-t/\tau }`$. From the modulus dependence at fixed $`r`$ an evolution law follows (): $`G(t)\propto (r(t)-r_p)^{1.7}\propto (r_0e^{-t/\tau }-r_p)^{1.7}`$. This is plainly wrong, as can be seen from figure 3a: the predicted behaviour is not a stretched exponential; the modulus should go to zero in a finite time, as $`r(t)`$ reaches $`r_p`$! Two assumptions underlie this prediction: first, that links are randomly broken in the percolating network. This would be wrong if the network were not deformed in an affine way, some bridges being more stretched locally. The second hypothesis is the elimination from the stress calculations of all the broken bridges, even though they are reconnected. This assumption, from the original Green-Tobolsky model (), has been central in all transient network theories. It could be challenged by the observation that the reconnected bridges, even if they cannot contribute immediately to the stress, owing to the isotropic character of their distribution, could later contribute again to the stress relaxation, due to rearrangements of the droplets.
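To make the contrast concrete, the following short sketch (our own, with illustrative rather than fitted values of $`r_0`$, $`r_p`$ and $`\tau `$) compares the measured stretched-exponential form with the naive percolation-decay law, which vanishes at the finite time $`t^{}=\tau \mathrm{ln}(r_0/r_p)`$:

```python
# Sketch contrasting the measured stretched-exponential relaxation with the
# naive percolation-decay prediction. r0, rp, tau are illustrative values.
import numpy as np

tau, r0, rp = 1.0, 12.0, 6.0
stretch, scaling = 0.8, 1.7

t = np.linspace(0.0, 3.0 * tau, 301)

# Observed form: slightly stretched exponential.
G_obs = np.exp(-(t / tau) ** stretch)

# Naive prediction: random link removal, G ~ (r(t) - rp)^1.7 with
# r(t) = r0 exp(-t/tau); the modulus vanishes at t* = tau ln(r0/rp).
r_t = r0 * np.exp(-t / tau)
G_naive = np.clip(r_t - rp, 0.0, None) ** scaling
G_naive /= (r0 - rp) ** scaling  # normalize so that G_naive(0) = 1

t_star = tau * np.log(r0 / rp)
print(f"naive modulus vanishes at t* = {t_star:.2f} tau")           # ~0.69 tau
print(f"stretched exponential there: {np.exp(-(t_star/tau)**stretch):.2f}")
```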
Upon breaking of a bond, the droplets will relax to a position governed on average by their simple mechanical equilibrium under the spring forces. This will occur on a time scale short compared to the lifetime of a bond. During these droplet relaxations, polymers reconnected earlier in a statistically isotropic unloaded state will again carry some load. Thus the finite-time percolation breaking is prevented by the loading of new bridges. We show in figure 3b results from 2D off-lattice numerical simulations in which the network has been allowed (or not) to relax. Moreover, the mechanisms of breaking and reconnection have been introduced following a realistic prescription. The simulations support the view that the contribution of the reconnected bridges to the stress can prevent the percolation failure at a finite time and lead to an exponential relaxation. ### C Non linear regime When the amplitude of the strain is increased, the stress relaxation transforms, at a critical value $`\gamma _f`$ independent of $`r`$, into a two-step process: an initial fast exponential decay ($`\sim `$ 10% of the linear time) is followed by a linear behaviour. The lifetime of a bridge depends on the stretching of the polymer chain, whose stored elastic energy can supply through mechanical work a part of the activation energy: the time $`\tau `$ becomes $`\tau =\frac{1}{\nu }e^{(\frac{W-fa}{k_BT})}`$, where $`f`$ is the elastic force exerted by the polymer, and $`a`$ a typical length over which the force works. Thus, starting from an isotropic unloaded distribution, the deformation makes an ellipsoid, which then relaxes (as does the stress), with stretched bridges breaking more quickly. In a short time, the system is back in a configuration of unstretched bonds, whose dynamics thereafter obeys the linear rate of breaking. The stress thus always relaxes at long times with a typical rate identical to the one observed in the linear regime, as can be seen by comparing the asymptotic slopes of the curves in figure 1.
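A minimal sketch of this stress-assisted breaking rate is given below; the numerical values of $`\nu `$, $`W`$ and $`a`$ are placeholders chosen only to display the trend, not measured quantities.

```python
# Minimal sketch of the force-dependent bond lifetime tau(f) discussed above.
# nu, W and a are illustrative placeholders (energies in units of k_B T).
import numpy as np

nu = 1.0    # attempt frequency (arbitrary units)
W = 20.0    # bare activation energy, in k_B T
a = 1.0     # typical length over which the pulling force works

def bond_lifetime(f):
    """Arrhenius lifetime lowered by the mechanical work f*a of the spring."""
    return (1.0 / nu) * np.exp(W - f * a)

for f in (0.0, 2.0, 5.0):
    print(f"f = {f:.0f}: tau = {bond_lifetime(f):.3g}")
# Stretched bridges (larger f) break exponentially faster, producing the fast
# initial decay observed after large step strains.
```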
### D Flow behaviour The nonlinear response gives us a hint of the flow behaviour. Under constant shear rate, only the slowest relaxation processes are relevant. The effective elasticity associated with these, as obtained from the $`t=0`$ extrapolation of the long-time relaxation, decreases strongly as the strain amplitude is increased. Thus a non-monotonic stress/rate relation is expected for the elastic part of the stress, since $`\sigma (\dot{\gamma })\simeq \sigma _0(\dot{\gamma }\tau )=\dot{\gamma }\tau G_0(\dot{\gamma }\tau )`$ (see ). In other systems, such as giant micelles, this leads to singularities in the flow curve such as plateau behaviour (). The picture emerging from the stationary flow curve represented in figure 4 (for a value of $`r=12`$) is quite different. An initial Newtonian behaviour is abruptly followed by a sudden drop of the stress at a critical shear rate $`\dot{\gamma }_c`$, with $`\dot{\gamma }_c\tau \simeq 0.3`$, $`\tau `$ being the linear relaxation time of the same system. A second regime of flow follows, which can be described by a Bingham fluid law $`\sigma =\sigma _1+\eta \dot{\gamma }`$. Between these two regimes a stationary region with $`d\sigma /d\dot{\gamma }<0`$ is observed. The results compare well with the stress-controlled rheology. Both the relaxation after high deformation and the flow curve suggest that we are observing fracture behaviour. The singular drop in stress cannot be accounted for by an underlying non-monotonic constitutive equation. This should lead to a more progressive (continuous) behaviour of $`\sigma `$. Thus we propose to consider this behaviour in terms of fracture propagation through the material. We expect that localised fractures appear all the time due to local percolation breakings. The concentration of stress at the boundaries of the fracture can induce the movement of the fracture tip, at a stress-dependent speed. Once a fracture has propagated throughout the geometry of the sample, the stress drops. But when the stress again exceeds the critical value, a second fracture should occur. This does not happen. We tentatively conclude that it is a boundary fracture. The intermediate stationary region of decreasing stress also indicates that, when the shear rate is controlled in the $`\dot{\gamma }_c`$ region, the fracture can heal, the linear (Newtonian) behaviour being recovered under decreasing $`\dot{\gamma }`$. ### E Conclusion On a model system of transient networks, we have demonstrated first the role of percolation in determining the instantaneous elastic behaviour. We then stressed the importance of the elastic reorganisations of the network, and of the re-loading of the broken bonds. When the system is highly deformed ($`\sim `$ 300%), fracture behaviour is observed. Much important information concerning the evolution of the structure of the material is still lacking: we simply do not know whether the organisation of the droplets or of the polymers changes under flow, or under large deformations. Future X-ray scattering or NMR observations will be the only way to gather further insight into the nonlinear and flow behaviour.
# S-DUALITY FOR LINEARIZED GRAVITY J. A. Nieto<sup>1</sup><sup>1</sup>1nieto@uas.uasnet.mx Facultad de Ciencias Físico-Matemáticas de la Universidad Autónoma de Sinaloa, 80010 Culiacán Sinaloa, México Abstract We develop the analogue of S-duality for linearized gravity in (3+1)-dimensions. Our basic idea is to consider the self-dual (anti-self-dual) curvature tensor for linearized gravity in the context of the Macdowell-Mansouri formalism. We find that the strong-weak coupling duality for linearized gravity is an exact symmetry and implies a small-large duality for the cosmological constant. Pacs numbers: 04.60.-m, 04.65.+e, 11.15.-q, 11.30.Ly August, 1999 1.- INTRODUCTION It is well known that linearized Einstein gravitational theory in four dimensions is similar to source-free Maxwell theory. Since S-duality has been successfully applied to Abelian and non-Abelian gauge theories \[1-10\] as well as gravity \[11-15\], it seems natural to ask whether similar methods may be applied to linearized gravity. However, while in the Abelian case there is an exact strong-weak duality, Yang-Mills theory and gravity do not possess such a symmetry. In this work, we show that although gravity in general does not possess an exact strong-weak symmetry, linearized gravity does. In fact, we derive an S-dual action for linearized gravity which is connected to the original linearized gravitational action by a strong-weak duality transformation. Our strategy essentially consists of two steps: first, writing linearized gravity as an Abelian gauge theory, and second, taking recourse to the Macdowell-Mansouri formalism. Just as $`U(1)`$ duality has been very important for exploring S-duality in larger theories such as $`N=2`$ super Yang-Mills theory with gauge group $`SU(2)`$ \[16-17\], one should expect that linearized gravitational duality may also be an important tool for investigating the S-duality analogue of other theories such as $`N=1`$ supergravity in four dimensions. The plan of this work is as follows: In section II, we show that it is possible to see linearized gravity as an Abelian gauge theory. In section III, we briefly review the Macdowell-Mansouri theory and propose an action for linearized gravity in the context of such a theory. In section IV, we generalize our proposed action to include self-dual and anti-self-dual linearized gravity and we compute the partition function of such a generalized action. Finally, in section V, we make some final comments. 2.- LINEARIZED GRAVITY AS A GAUGE THEORY The starting point in linearized gravity is to write the metric of the space-time $`g_{\mu \nu }=g_{\mu \nu }(x^\alpha )`$ as $$g_{\mu \nu }=\eta _{\mu \nu }+h_{\mu \nu },$$ (1) where $`h_{\mu \nu }`$ is a small deviation of the metric $`g_{\mu \nu }`$ from the Minkowski metric $`(\eta _{\mu \nu })=diag(-1,1,1,1)`$. The first-order curvature tensor is $$F_{\mu \nu \alpha \beta }=\frac{1}{2}(\partial _\mu \partial _\alpha h_{\nu \beta }-\partial _\mu \partial _\beta h_{\nu \alpha }-\partial _\nu \partial _\alpha h_{\mu \beta }+\partial _\nu \partial _\beta h_{\mu \alpha }),$$ (2) which is invariant under the gauge transformation $$\delta h_{\mu \nu }=\partial _\mu \xi _\nu +\partial _\nu \xi _\mu .$$ (3) Here, $`\xi _\mu `$ is an arbitrary vector field and $`\partial _\mu \equiv \frac{\partial }{\partial x^\mu }`$.
It is not difficult to see that $`F_{\mu \nu \alpha \beta }`$ satisfies the following relations: $$\begin{array}{c}F_{\mu \nu \alpha \beta }=-F_{\mu \nu \beta \alpha }=-F_{\nu \mu \alpha \beta }=F_{\alpha \beta \mu \nu },\\ \\ F_{\mu \nu \alpha \beta }+F_{\mu \beta \nu \alpha }+F_{\mu \alpha \beta \nu }=0,\\ \\ \partial _\lambda F_{\mu \nu \alpha \beta }+\partial _\mu F_{\nu \lambda \alpha \beta }+\partial _\nu F_{\lambda \mu \alpha \beta }=0.\end{array}$$ (4) It is interesting to note that the dual $`{}^{}F_{\mu \nu \alpha \beta }\equiv \frac{1}{2}ϵ_{\mu \nu \tau \sigma }F_{\alpha \beta }^{\tau \sigma }`$, where $`ϵ_{\mu \nu \alpha \beta }`$ is the completely antisymmetric Levi-Civita tensor with $`ϵ_{0123}=1`$ and the indices are raised and lowered by means of $`\eta ^{\alpha \beta }`$ and $`\eta _{\alpha \beta }`$, does not satisfy the relations (4) unless the vacuum Einstein equations $`F_{\mu \nu }=F_{\mu \alpha \nu }^\alpha =0`$ are satisfied. Let us introduce the ‘gauge’ field $$A_{\mu \alpha \beta }=\frac{1}{2}(\partial _\alpha h_{\mu \beta }-\partial _\beta h_{\mu \alpha }).$$ (5) Note that $`A_{\mu \alpha \beta }=-A_{\mu \beta \alpha }`$. Using (3), we find that $`A_{\mu \alpha \beta }`$ transforms as $$\delta A_{\mu \alpha \beta }=\partial _\mu \lambda _{\alpha \beta },$$ (6) where $`\lambda _{\alpha \beta }=\partial _\alpha \xi _\beta -\partial _\beta \xi _\alpha `$. The expression (5) allows us to write the curvature tensor $`F_{\mu \nu }^{\alpha \beta }`$ as $$F_{\mu \nu }^{\alpha \beta }=\partial _\mu A_\nu ^{\alpha \beta }-\partial _\nu A_\mu ^{\alpha \beta }.$$ (7) Thus, we have shown that the tensor $`F_{\mu \nu }^{\alpha \beta }`$ may be written in the typical form of an Abelian Maxwell field strength $`F_{\mu \nu }^a=\partial _\mu A_\nu ^a-\partial _\nu A_\mu ^a`$, where the index $`a`$ runs over some Abelian group such as $`U(1)\times U(1)\times \cdots \times U(1)`$. Note that $`F_{\mu \nu }^{\alpha \beta }`$ is invariant under the transformation $`\delta A_\mu ^{\alpha \beta }=\partial _\mu \lambda ^{\alpha \beta }`$, which has exactly the same form as the transformation of Abelian gauge fields $`\delta A_\mu ^a=\partial _\mu \lambda ^a`$, where $`\lambda ^a`$ is an arbitrary function of the coordinates $`x^\mu `$.
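The two statements of this section (that (7) with the gauge field (5) reproduces the curvature (2), and that (2) is invariant under (3)) are simple consequences of the commutativity of partial derivatives; the following symbolic sketch, ours and written with sympy, checks them exhaustively over all index values. It is slow but mechanical.

```python
# Symbolic check (a sketch) that F = dA - dA reproduces eq. (2) and that (2)
# is invariant under the gauge transformation (3). Exhaustive over indices;
# slow, but purely mechanical.
import itertools
import sympy as sp

x = sp.symbols('x0:4')
# arbitrary symmetric h_{mu nu}(x) and arbitrary gauge vector xi_mu(x)
h = [[sp.Function(f'h{min(m, n)}{max(m, n)}')(*x) for n in range(4)]
     for m in range(4)]
xi = [sp.Function(f'xi{m}')(*x) for m in range(4)]

def F(hh, m, n, a, b):  # eq. (2)
    d = sp.diff
    return sp.Rational(1, 2) * (d(hh[n][b], x[m], x[a]) - d(hh[n][a], x[m], x[b])
                                - d(hh[m][b], x[n], x[a]) + d(hh[m][a], x[n], x[b]))

def A(hh, m, a, b):     # eq. (5)
    return sp.Rational(1, 2) * (sp.diff(hh[m][b], x[a]) - sp.diff(hh[m][a], x[b]))

# gauge-transformed perturbation, eq. (3)
h2 = [[h[m][n] + sp.diff(xi[n], x[m]) + sp.diff(xi[m], x[n]) for n in range(4)]
      for m in range(4)]

for m, n, a, b in itertools.product(range(4), repeat=4):
    # eq. (7) reproduces eq. (2)
    assert sp.simplify(sp.diff(A(h, n, a, b), x[m])
                       - sp.diff(A(h, m, a, b), x[n]) - F(h, m, n, a, b)) == 0
    # gauge invariance of eq. (2)
    assert sp.simplify(F(h2, m, n, a, b) - F(h, m, n, a, b)) == 0
print("eq. (7) matches eq. (2), and F is gauge invariant")
```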
3.- LINEARIZED GRAVITY à la MACDOWELL-MANSOURI The central idea in the Macdowell-Mansouri theory \[19-20\] (see also Refs. \[21-31\]) is to represent the gravitational field as a connection one-form associated to some group that contains the Lorentz group as a subgroup. The typical example is provided by the SO(3,2) anti-de Sitter gauge theory of gravity. In this case the SO(3,2) gravitational gauge field $`\omega _\mu ^{AB}=-\omega _\mu ^{BA}`$ is broken into the SO(3,1) connection $`\omega _\mu ^{ab}`$ and the vierbein field $`\omega _\mu ^{4a}=e_\mu ^a`$. Thus, the anti-de Sitter curvature $$\mathcal{R}_{\mu \nu }^{AB}=\partial _\mu \omega _\nu ^{AB}-\partial _\nu \omega _\mu ^{AB}+\omega _\mu ^{AC}\omega _{\nu C}^B-\omega _\nu ^{AC}\omega _{\mu C}^B$$ (8) leads to $$\mathcal{R}_{\mu \nu }^{ab}=R_{\mu \nu }^{ab}+\mathrm{\Sigma }_{\mu \nu }^{ab}$$ (9) and $$\mathcal{R}_{\mu \nu }^{4a}=\partial _\mu e_\nu ^a-\partial _\nu e_\mu ^a+\omega _\mu ^{ac}e_{\nu c}-\omega _\nu ^{ac}e_{\mu c},$$ (10) where $$R_{\mu \nu }^{ab}=\partial _\mu \omega _\nu ^{ab}-\partial _\nu \omega _\mu ^{ab}+\omega _\mu ^{ac}\omega _{\nu c}^b-\omega _\nu ^{ac}\omega _{\mu c}^b$$ (11) is the SO(3,1) curvature and $$\mathrm{\Sigma }_{\mu \nu }^{ab}=e_\mu ^ae_\nu ^b-e_\nu ^ae_\mu ^b.$$ (12) It turns out that $`T_{\mu \nu }^a\equiv \mathcal{R}_{\mu \nu }^{4a}`$ can be identified with the torsion. The Macdowell-Mansouri action is $$S=\frac{1}{4}\int d^4x\epsilon ^{\mu \nu \alpha \beta }\mathcal{R}_{\mu \nu }^{ab}\mathcal{R}_{\alpha \beta }^{cd}ϵ_{abcd},$$ (13) where $`\epsilon ^{\mu \nu \alpha \beta }`$ is the completely antisymmetric tensor associated to the space-time, with $`\epsilon ^{0123}=-1`$ and $`\epsilon _{0123}=1`$, while $`ϵ_{abcd}`$ is also the completely antisymmetric tensor, but now associated to the internal group SO(3,1), with $`ϵ_{0123}=1`$. We assume that the internal metric is given by $`(\eta _{ab})=(-1,1,1,1)`$. Therefore, we have $`ϵ^{0123}=-1`$. It is well known that, using (9), (11) and (12), the action (13) leads to three terms: the Einstein-Hilbert action, the cosmological constant term and the Euler topological invariant (or Gauss-Bonnet term). It is worth mentioning that the action (13) may also be obtained using Lovelock theory (see and references therein). Let us now apply the Macdowell-Mansouri formalism to linearized gravity as developed in section 2. First, consider the extended curvature $$\mathcal{R}_{\mu \nu }^{\alpha \beta }=F_{\mu \nu }^{\alpha \beta }+\mathrm{\Omega }_{\mu \nu }^{\alpha \beta },$$ (14) where $$\mathrm{\Omega }_{\mu \nu }^{\alpha \beta }=\delta _\mu ^\alpha h_\nu ^\beta -\delta _\mu ^\beta h_\nu ^\alpha -\delta _\nu ^\alpha h_\mu ^\beta +\delta _\nu ^\beta h_\mu ^\alpha .$$ (15) The central idea is to consider the extended curvature (14) as the analogue of the curvature (9) of the Macdowell-Mansouri formalism. Thus, we may identify $`F_{\mu \nu }^{\alpha \beta }`$ and $`\mathrm{\Omega }_{\mu \nu }^{\alpha \beta }`$ with the linearized versions of $`R_{\mu \nu }^{ab}`$ and $`\mathrm{\Sigma }_{\mu \nu }^{ab}`$, respectively. In fact, if we write $`e_\mu ^a=b_\mu ^a+l_\mu ^a`$, where $`b_\mu ^a`$ corresponds to the flat metric $`\eta _{\mu \nu }`$ and $`l_\mu ^a`$ is a small deviation of $`e_\mu ^a`$, we find $`h_{\mu \nu }=b_\mu ^al_\nu ^b\eta _{ab}+b_\nu ^al_\mu ^b\eta _{ab}`$ and therefore using (5) we get $`A_{\mu \alpha \beta }=\omega _{\mu \alpha \beta }(l_\lambda ^a)+\partial _\mu f_{\alpha \beta }`$. Here, $`\omega _{\mu \alpha \beta }(l_\lambda ^a)=b_{a\alpha }b_{b\beta }\omega _\mu ^{ab}(l_\lambda ^a)`$ and $`f_{\alpha \beta }=b_{a\alpha }l_\beta ^a-b_{a\beta }l_\alpha ^a`$. (It is important to note that $`\omega _\mu ^{ab}(l_\lambda ^a)`$ can be obtained by setting the torsion (10), for weak gravity, equal to zero.) Thus, up to a gauge transformation, $`A_{\mu \alpha \beta }`$ is equal to the linearized connection $`\omega _{\mu \alpha \beta }(l_\lambda ^a)`$. The essential difference between $`\omega _\mu ^{ab}(l_\lambda ^a)`$ and the full connection $`\omega _\mu ^{ab}`$ is that the latter is associated to the Lorentz group while the former is associated to an Abelian group. This can easily be seen from the transformation $`\delta A_\mu ^{\alpha \beta }=\partial _\mu \lambda ^{\alpha \beta }`$ of the gauge field $`A_\mu ^{\alpha \beta }`$, since in general a gauge field $`A`$ transforms as $`\delta A=d\lambda +[A,\lambda ]`$. Let us now propose the action $$S=\frac{1}{4}\int d^4xϵ^{\mu \nu \alpha \beta }\mathcal{R}_{\mu \nu }^{\tau \lambda }\mathcal{R}_{\alpha \beta }^{\sigma \rho }ϵ_{\tau \lambda \sigma \rho }.$$ (16) We shall show that this action reduces to the Einstein linearized gravitational action (the Fierz-Pauli action ) with a cosmological constant and a total derivative (‘topological’ term).
Substituting (14) into (16) yields $$\begin{array}{c}S=\frac{1}{4}\int d^4xϵ^{\mu \nu \alpha \beta }F_{\mu \nu }^{\tau \lambda }F_{\alpha \beta }^{\sigma \rho }ϵ_{\tau \lambda \sigma \rho }+\frac{1}{2}\int d^4xϵ^{\mu \nu \alpha \beta }\mathrm{\Omega }_{\mu \nu }^{\tau \lambda }F_{\alpha \beta }^{\sigma \rho }ϵ_{\tau \lambda \sigma \rho }\\ \\ +\frac{1}{4}\int d^4xϵ^{\mu \nu \alpha \beta }\mathrm{\Omega }_{\mu \nu }^{\tau \lambda }\mathrm{\Omega }_{\alpha \beta }^{\sigma \rho }ϵ_{\tau \lambda \sigma \rho }.\end{array}$$ (17) Using (15) we find that (17) is reduced to $$\begin{array}{c}S=\frac{1}{4}\int d^4xϵ^{\mu \nu \alpha \beta }F_{\mu \nu }^{\tau \lambda }F_{\alpha \beta }^{\sigma \rho }ϵ_{\tau \lambda \sigma \rho }+2\int d^4xϵ^{\mu \nu \alpha \beta }\delta _\mu ^\tau h_\nu ^\lambda F_{\alpha \beta }^{\sigma \rho }ϵ_{\tau \lambda \sigma \rho }\\ \\ +4\int d^4xϵ^{\mu \nu \alpha \beta }\delta _\mu ^\tau h_\nu ^\lambda \delta _\alpha ^\sigma h_\beta ^\rho ϵ_{\tau \lambda \sigma \rho }.\end{array}$$ (18) Since $`ϵ^{\mu \nu \alpha \beta }\delta _\mu ^\tau ϵ_{\tau \lambda \sigma \rho }=-\delta _{\lambda \sigma \rho }^{\nu \alpha \beta }`$ and $`ϵ^{\mu \nu \alpha \beta }\delta _\mu ^\tau \delta _\alpha ^\sigma ϵ_{\tau \lambda \sigma \rho }=-2\delta _{\lambda \rho }^{\nu \beta }`$, where in general $`\delta _{\tau \lambda \sigma \rho }^{\mu \nu \alpha \beta }`$ is a generalized Kronecker delta, we discover that (18) can be written as $$\begin{array}{c}S=\frac{1}{4}\int d^4xϵ^{\mu \nu \alpha \beta }F_{\mu \nu }^{\tau \lambda }F_{\alpha \beta }^{\sigma \rho }ϵ_{\tau \lambda \sigma \rho }+8\int d^4xh^{\mu \nu }(F_{\mu \nu }-\frac{1}{2}\eta _{\mu \nu }F)\\ \\ -8\int d^4x(h^2-h^{\mu \nu }h_{\mu \nu }).\end{array}$$ (19) Here, we used the following definitions: $`F_{\mu \nu }\equiv \eta ^{\alpha \beta }F_{\mu \alpha \nu \beta }`$, $`F\equiv \eta ^{\mu \nu }\eta ^{\alpha \beta }F_{\mu \alpha \nu \beta }`$ and $`h=\eta ^{\mu \nu }h_{\mu \nu }`$. We recognize the second and third terms in (19) as the Einstein action for linearized gravity with a cosmological constant, while the first term is a total derivative (an Euler topological invariant, or Gauss-Bonnet term). Note that the usual cosmological factor $`\mathrm{\Lambda }`$ in the third term can be derived simply by changing $`\mathrm{\Omega }\to a^2\mathrm{\Omega }`$, where $`a`$ is a constant, and rescaling the total action $`S\to \frac{1}{4}\mathrm{\Lambda }^{-1}S`$, with $`\mathrm{\Lambda }=a^2`$.
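Since these contraction identities are sensitive to the sign conventions fixed above, the following numerical sketch (ours) verifies them directly, assuming $`ϵ_{0123}=+1`$ with indices raised by $`\eta =\mathrm{diag}(-1,1,1,1)`$:

```python
# Numerical sanity check (a sketch) of the epsilon contractions used between
# (18) and (19), assuming eps_{0123} = +1 and eta = diag(-1, 1, 1, 1).
import itertools
import numpy as np

def parity(p):
    s = 1
    for i in range(4):
        for j in range(i + 1, 4):
            if p[i] > p[j]:
                s = -s
    return s

eps_lo = np.zeros((4, 4, 4, 4))
for p in itertools.permutations(range(4)):
    eps_lo[p] = parity(p)

eta = np.diag([-1.0, 1.0, 1.0, 1.0])
# raise all four indices: eps^{0123} = -1 in this convention
eps_hi = np.einsum('am,bn,cp,dq,mnpq->abcd', eta, eta, eta, eta, eps_lo)

d = np.eye(4)
# generalized delta  delta^{nu alpha beta}_{lambda sigma rho}
delta3 = (np.einsum('nl,as,br->nablsr', d, d, d) - np.einsum('nl,ar,bs->nablsr', d, d, d)
          - np.einsum('ns,al,br->nablsr', d, d, d) + np.einsum('ns,ar,bl->nablsr', d, d, d)
          + np.einsum('nr,al,bs->nablsr', d, d, d) - np.einsum('nr,as,bl->nablsr', d, d, d))
lhs1 = np.einsum('mnab,mlsr->nablsr', eps_hi, eps_lo)
print(np.allclose(lhs1, -delta3))        # True

delta2 = np.einsum('nl,br->nblr', d, d) - np.einsum('nr,bl->nblr', d, d)
lhs2 = np.einsum('mnab,mlar->nblr', eps_hi, eps_lo)
print(np.allclose(lhs2, -2.0 * delta2))  # True
```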
4.- S-DUALITY FOR LINEARIZED GRAVITY In order to develop an S-dual linearized gravitational action we generalize the action (16) as follows: $$𝒮=\frac{1}{4}(\tau ^+)\int d^4xϵ^{\mu \nu \alpha \beta }{}^{+}\mathcal{R}_{\mu \nu }^{\tau \lambda }{}^{+}\mathcal{R}_{\alpha \beta }^{\sigma \rho }ϵ_{\tau \lambda \sigma \rho }-\frac{1}{4}(\tau ^{-})\int d^4xϵ^{\mu \nu \alpha \beta }{}^{-}\mathcal{R}_{\mu \nu }^{\tau \lambda }{}^{-}\mathcal{R}_{\alpha \beta }^{\sigma \rho }ϵ_{\tau \lambda \sigma \rho },$$ (20) where $`\tau ^+`$ and $`\tau ^{-}`$ are two different constant parameters and $`{}^{\pm }\mathcal{R}_{\mu \nu }^{\alpha \beta }`$ is given by $${}^{\pm }\mathcal{R}_{\mu \nu }^{\alpha \beta }=\left(\frac{1}{2}\right){}^{\pm }M_{\tau \lambda }^{\alpha \beta }\mathcal{R}_{\mu \nu }^{\tau \lambda },$$ (21) where $${}^{\pm }M_{\tau \lambda }^{\alpha \beta }=\frac{1}{2}(\delta _{\tau \lambda }^{\alpha \beta }\mp iϵ_{\tau \lambda }^{\alpha \beta }).$$ (22) It turns out that $`{}^{+}\mathcal{R}_{\mu \nu }^{\alpha \beta }`$ is the self-dual, while $`{}^{-}\mathcal{R}_{\mu \nu }^{\alpha \beta }`$ is the anti-self-dual curvature. Therefore, the action (20) describes self-dual and anti-self-dual linearized gravity. We may now follow a procedure similar to that of references \[5-8\] for Abelian Yang-Mills and for gravity. Here, however, in order to show the exactness of the S-duality symmetry in linearized gravity, it turns out to be more convenient to follow the procedure of reference due to Witten. For this purpose let us first introduce a two-form $`G`$ and let us set $$\mathcal{F}_{\mu \nu }^{\alpha \beta }\equiv \mathcal{R}_{\mu \nu }^{\alpha \beta }-G_{\mu \nu }^{\alpha \beta }.$$ (23) We assume that $`G_{\mu \nu \alpha \beta }`$ satisfies the same symmetry properties as $`F_{\mu \nu \alpha \beta }`$, given in (4), namely $$\begin{array}{c}G_{\mu \nu \alpha \beta }=-G_{\mu \nu \beta \alpha }=-G_{\nu \mu \alpha \beta }=G_{\alpha \beta \mu \nu },\\ \\ G_{\mu \nu \alpha \beta }+G_{\mu \beta \nu \alpha }+G_{\mu \alpha \beta \nu }=0.\end{array}$$ (24) Now, consider the extended action $$\begin{array}{ccc}𝒮_E=& \frac{1}{4}(\tau ^+)\int d^4x\epsilon ^{\mu \nu \alpha \beta }{}^{+}\mathcal{F}_{\mu \nu }^{\tau \lambda }{}^{+}\mathcal{F}_{\alpha \beta }^{\sigma \rho }ϵ_{\tau \lambda \sigma \rho }-\frac{1}{4}(\tau ^{-})\int d^4x\epsilon ^{\mu \nu \alpha \beta }{}^{-}\mathcal{F}_{\mu \nu }^{\tau \lambda }{}^{-}\mathcal{F}_{\alpha \beta }^{\sigma \rho }ϵ_{\tau \lambda \sigma \rho }& \\ & & \\ & +\frac{1}{2}\int d^4x\epsilon ^{\mu \nu \tau \lambda }{}^{+}W_{\mu \nu }^{\alpha \beta }{}^{+}G_{\tau \lambda }^{\sigma \rho }ϵ_{\alpha \beta \sigma \rho }-\frac{1}{2}\int d^4x\epsilon ^{\mu \nu \tau \lambda }{}^{-}W_{\mu \nu }^{\alpha \beta }{}^{-}G_{\tau \lambda }^{\sigma \rho }ϵ_{\alpha \beta \sigma \rho },& \end{array}$$ (25) where $`W_{\mu \nu \alpha \beta }=\partial _\mu V_{\nu \alpha \beta }-\partial _\nu V_{\mu \alpha \beta }`$ is the dual field strength satisfying the Dirac quantization law $$\int W\in 2\pi 𝐙.$$ (26) It is not difficult to see that, beyond the gauge invariance $`A\to A-d\lambda `$, $`G\to G`$, the partition function $$Z=\int d^+G\,d^{-}G\,dA\,dh\,dV\,e^{𝒮_E}$$ (27) is invariant under $$A\to A+B\text{ and }G\to G+dB,$$ (28) where $`B_\mu ^{\alpha \beta }`$ is an arbitrary one-form. Starting from (25) one can proceed in two different ways. For the first possibility, we note that the path integral that involves $`V`$ is $$\int DV\mathrm{exp}\left(\frac{1}{2}\int d^4x\epsilon ^{\mu \nu \tau \lambda }{}^{+}W_{\mu \nu }^{\alpha \beta }{}^{+}G_{\tau \lambda }^{\sigma \rho }ϵ_{\alpha \beta \sigma \rho }-\frac{1}{2}\int d^4x\epsilon ^{\mu \nu \tau \lambda }{}^{-}W_{\mu \nu }^{\alpha \beta }{}^{-}G_{\tau \lambda }^{\sigma \rho }ϵ_{\alpha \beta \sigma \rho }\right).$$ (29) Integrating over the dual connection $`V`$, we get a delta function setting $`dG=0`$. Thus, using the gauge invariance (28), we may gauge $`G`$ to zero, reducing (25) to the original action (20). Therefore, the actions (25) and (20) are in fact classically equivalent. For the second possibility, we note that the gauge invariance (28) enables one to fix a gauge with $`A=0`$. (It is important to note that, at this stage, we are considering $`A_{\mu \alpha \beta }`$ and $`h_{\mu \nu }`$ as independent fields, in the sense of Palatini.)
The action (25) is then reduced to $$\begin{array}{ccc}𝒮_E=& \frac{1}{4}(\tau ^+)𝑑x^4\epsilon ^{\mu \nu \alpha \beta }{}_{}{}^{+}𝒢_{\mu \nu }^{\tau \lambda }{}_{}{}^{+}𝒢_{\alpha \beta }^{\sigma \rho }ϵ_{\tau \lambda \sigma \rho }\frac{1}{4}(\tau ^{})𝑑x^4\epsilon ^{\mu \nu \alpha \beta }{}_{}{}^{}𝒢_{\mu \nu }^{\tau \lambda }{}_{}{}^{}𝒢_{\alpha \beta }^{\sigma \rho }ϵ_{\tau \lambda \sigma \rho }& \\ & & \\ & +\frac{1}{2}d^4x\epsilon ^{\mu \nu \tau \lambda }{}_{}{}^{+}W_{\mu \nu }^{\alpha \beta }{}_{}{}^{+}G_{\tau \lambda }^{\sigma \rho }ϵ_{\alpha \beta \sigma \rho }\frac{1}{2}d^4x\epsilon ^{\mu \nu \tau \lambda }{}_{}{}^{}W_{\mu \nu }^{\alpha \beta }{}_{}{}^{}G_{\tau \lambda }^{\sigma \rho }ϵ_{\alpha \beta \sigma \rho },& \end{array}$$ (30) where $$𝒢_{\mu \nu }^{\tau \lambda }\mathrm{\Omega }_{\mu \nu }^{\tau \lambda }G_{\mu \nu }^{\tau \lambda }.$$ (31) In virtue of (31) the extended action (30) becomes $$\begin{array}{ccc}𝒮_E=& \frac{1}{4}(\tau ^+)𝑑x^4\epsilon ^{\mu \nu \alpha \beta }{}_{}{}^{+}G_{\mu \nu }^{\tau \lambda }{}_{}{}^{+}G_{\alpha \beta }^{\sigma \rho }ϵ_{\tau \lambda \sigma \rho }\frac{1}{4}(\tau ^{})𝑑x^4\epsilon ^{\mu \nu \alpha \beta }{}_{}{}^{}G_{\mu \nu }^{\tau \lambda }{}_{}{}^{}G_{\alpha \beta }^{\sigma \rho }ϵ_{\tau \lambda \sigma \rho }& \\ & & \\ & \frac{1}{2}(\tau ^+)𝑑x^4\epsilon ^{\mu \nu \alpha \beta }{}_{}{}^{+}\mathrm{\Omega }_{\mu \nu }^{\tau \lambda }{}_{}{}^{+}G_{\alpha \beta }^{\sigma \rho }ϵ_{\tau \lambda \sigma \rho }+\frac{1}{2}(\tau ^{})𝑑x^4\epsilon ^{\mu \nu \alpha \beta }{}_{}{}^{}\mathrm{\Omega }_{\mu \nu }^{\tau \lambda }{}_{}{}^{}G_{\alpha \beta }^{\sigma \rho }ϵ_{\tau \lambda \sigma \rho }& \\ & & \\ & +\frac{1}{4}(\tau ^+)𝑑x^4\epsilon ^{\mu \nu \alpha \beta }{}_{}{}^{+}\mathrm{\Omega }_{\mu \nu }^{\tau \lambda }{}_{}{}^{+}\mathrm{\Omega }_{\alpha \beta }^{\sigma \rho }ϵ_{\tau \lambda \sigma \rho }\frac{1}{4}(\tau ^{})𝑑x^4\epsilon ^{\mu \nu \alpha \beta }{}_{}{}^{}\mathrm{\Omega }_{\mu \nu }^{\tau \lambda }{}_{}{}^{}\mathrm{\Omega }_{\alpha \beta }^{\sigma \rho }ϵ_{\tau \lambda \sigma \rho }& \\ & & \\ & +\frac{1}{2}d^4x\epsilon ^{\mu \nu \tau \lambda }{}_{}{}^{+}W_{\mu \nu }^{\alpha \beta }{}_{}{}^{+}G_{\tau \lambda }^{\sigma \rho }ϵ_{\alpha \beta \sigma \rho }\frac{1}{2}d^4x\epsilon ^{\mu \nu \tau \lambda }{}_{}{}^{}W_{\mu \nu }^{\alpha \beta }{}_{}{}^{}G_{\tau \lambda }^{\sigma \rho }ϵ_{\alpha \beta \sigma \rho }.& \end{array}$$ (32) Before integrating $`h_{\mu \nu },`$ let us consider the identity $${}_{}{}^{\pm }M_{\mu \nu }^{\tau \lambda }{}_{}{}^{\pm }M_{\alpha \beta }^{\sigma \rho }ϵ_{\tau \lambda \sigma \rho }=\pm 4i^\pm M_{\mu \nu \alpha \beta }$$ (33) which can be obtained from the definition (22). 
Here, $`{}_{}{}^{\pm }M_{\mu \nu \alpha \beta }^{}=^\pm M_{\mu \nu }^{\tau \lambda }\eta _{\alpha \tau }\eta _{\beta \lambda }.`$ Using this identity and the definition (15) for $`\mathrm{\Omega }_{\mu \nu }^{\tau \lambda }`$ we obtain $$\begin{array}{c}\frac{1}{2}(\tau ^+)d^4xϵ^{\mu \nu \alpha \beta }{}_{}{}^{+}\mathrm{\Omega }_{\mu \nu }^{\tau \lambda }{}_{}{}^{+}G_{\alpha \beta }^{\sigma \rho }ϵ_{\tau \lambda \sigma \rho }+\frac{1}{2}(\tau ^{})d^4xϵ^{\mu \nu \alpha \beta }{}_{}{}^{}\mathrm{\Omega }_{\mu \nu }^{\tau \lambda }{}_{}{}^{}G_{\alpha \beta }^{\sigma \rho }ϵ_{\tau \lambda \sigma \rho }\\ \\ =\frac{1}{2}d^4xϵ^{\mu \nu \alpha \beta }\mathrm{\Omega }_{\mu \nu }^{\tau \lambda }B_{\alpha \beta }^{\sigma \rho }ϵ_{\tau \lambda \sigma \rho }\end{array}$$ (34) and $$\begin{array}{c}\frac{1}{4}(\tau ^+)d^4xϵ^{\mu \nu \alpha \beta }{}_{}{}^{+}\mathrm{\Omega }_{\mu \nu }^{\tau \lambda }{}_{}{}^{+}\mathrm{\Omega }_{\alpha \beta }^{\sigma \rho }ϵ_{\tau \lambda \sigma \rho }\frac{1}{4}(\tau ^{})d^4xϵ^{\mu \nu \alpha \beta }{}_{}{}^{}\mathrm{\Omega }_{\mu \nu }^{\tau \lambda }{}_{}{}^{}\mathrm{\Omega }_{\alpha \beta }^{\sigma \rho }ϵ_{\tau \lambda \sigma \rho }\\ \\ =\frac{b}{4}d^4xϵ^{\mu \nu \alpha \beta }\mathrm{\Omega }_{\mu \nu }^{\tau \lambda }\mathrm{\Omega }_{\alpha \beta }^{\sigma \rho }ϵ_{\tau \lambda \sigma \rho },\end{array}$$ (35) where $$B_{\alpha \beta }^{\sigma \rho }=\{(\tau ^+)^+G_{\alpha \beta }^{\sigma \rho }(\tau ^{})^{}G_{\alpha \beta }^{\sigma \rho }\}$$ (36) and $$b=\frac{1}{2}(\tau ^+\tau ^{}).$$ (37) Using (34)-(37) we learn that the action (32) can be written as $$\begin{array}{cc}𝒮_E=& \frac{1}{4}(\tau ^+)𝑑x^4\epsilon ^{\mu \nu \alpha \beta }{}_{}{}^{+}G_{\mu \nu }^{\tau \lambda }{}_{}{}^{+}G_{\alpha \beta }^{\sigma \rho }ϵ_{\tau \lambda \sigma \rho }\frac{1}{4}(\tau ^{})𝑑x^4\epsilon ^{\mu \nu \alpha \beta }{}_{}{}^{}G_{\mu \nu }^{\tau \lambda }{}_{}{}^{}G_{\alpha \beta }^{\sigma \rho }ϵ_{\tau \lambda \sigma \rho }\\ & \\ & +\frac{1}{2}d^4xϵ^{\mu \nu \alpha \beta }\mathrm{\Omega }_{\mu \nu }^{\tau \lambda }B_{\alpha \beta }^{\sigma \rho }ϵ_{\tau \lambda \sigma \rho }+\frac{b}{4}d^4xϵ^{\mu \nu \alpha \beta }\mathrm{\Omega }_{\mu \nu }^{\tau \lambda }\mathrm{\Omega }_{\alpha \beta }^{\sigma \rho }ϵ_{\tau \lambda \sigma \rho }\\ & \\ & +\frac{1}{2}d^4x\epsilon ^{\mu \nu \tau \lambda }{}_{}{}^{+}W_{\mu \nu }^{\alpha \beta }{}_{}{}^{+}G_{\tau \lambda }^{\sigma \rho }ϵ_{\alpha \beta \sigma \rho }\frac{1}{2}d^4x\epsilon ^{\mu \nu \tau \lambda }{}_{}{}^{}W_{\mu \nu }^{\alpha \beta }{}_{}{}^{}G_{\tau \lambda }^{\sigma \rho }ϵ_{\alpha \beta \sigma \rho }.\end{array}$$ (38) We note that the third and fourth term in (38) have exactly the same form as the second and third term of (17). 
Therefore, using the definition (15) for $`\mathrm{\Omega }_{\mu \nu }^{\tau \lambda }`$ and making similar calculations for obtaining (19) we get $$\begin{array}{cc}𝒮_E=& \frac{1}{4}(\tau ^+)𝑑x^4\epsilon ^{\mu \nu \alpha \beta }{}_{}{}^{+}G_{\mu \nu }^{\tau \lambda }{}_{}{}^{+}G_{\alpha \beta }^{\sigma \rho }ϵ_{\tau \lambda \sigma \rho }\frac{1}{4}(\tau ^{})𝑑x^4\epsilon ^{\mu \nu \alpha \beta }{}_{}{}^{}G_{\mu \nu }^{\tau \lambda }{}_{}{}^{}G_{\alpha \beta }^{\sigma \rho }ϵ_{\tau \lambda \sigma \rho }\\ & \\ & +8d^4xh^{\mu \nu }(B_{\mu \nu }\frac{1}{2}\eta _{\mu \nu }B)8bd^4x(h^2h^{\mu \nu }h_{\mu \nu })\\ & \\ & +\frac{1}{2}d^4x\epsilon ^{\mu \nu \tau \lambda }{}_{}{}^{+}W_{\mu \nu }^{\alpha \beta }{}_{}{}^{+}G_{\tau \lambda }^{\sigma \rho }ϵ_{\alpha \beta \sigma \rho }\frac{1}{2}d^4x\epsilon ^{\mu \nu \tau \lambda }{}_{}{}^{}W_{\mu \nu }^{\alpha \beta }{}_{}{}^{}G_{\tau \lambda }^{\sigma \rho }ϵ_{\alpha \beta \sigma \rho },\end{array}$$ (39) where $`B_{\mu \nu }\eta ^{\alpha \beta }B_{\mu \alpha \nu \beta },`$ and $`B\eta ^{\mu \nu }B_{\mu \nu }.`$ Note that $`b`$ plays the role of a cosmological constant. We can now integrate out $`h_{\mu \lambda }`$ in the reduced partition function $$Z=d^+Gd^{}G𝑑h𝑑Ve^{𝒮_E},$$ (40) with $`𝒮_E`$ given by (39). We first get the relation $$B_{\mu \nu }\frac{1}{2}\eta _{\mu \nu }B=2b(\eta _{\mu \nu }hh_{\mu \nu }).$$ (41) Using this expression the action $`𝒮_E`$ given in (39) becomes $$\begin{array}{cc}𝒮_E=& \frac{1}{4}(\tau ^+)d^4xϵ^{\mu \nu \alpha \beta }{}_{}{}^{+}G_{\mu \nu }^{\tau \lambda }{}_{}{}^{+}G_{\alpha \beta }^{\sigma \rho }ϵ_{\tau \lambda \sigma \rho }\frac{1}{4}(\tau ^{})d^4xϵ^{\mu \nu \alpha \beta }{}_{}{}^{}G_{\mu \nu }^{\tau \lambda }{}_{}{}^{}G_{\alpha \beta }^{\sigma \rho }ϵ_{\tau \lambda \sigma \rho }\\ & \\ & +\frac{2}{b}d^4x(\frac{1}{3}B^2B^{\mu \nu }B_{\mu \nu })\\ & \\ & +\frac{1}{2}d^4x\epsilon ^{\mu \nu \tau \lambda }{}_{}{}^{+}W_{\mu \nu }^{\alpha \beta }{}_{}{}^{+}G_{\tau \lambda }^{\sigma \rho }ϵ_{\alpha \beta \sigma \rho }\frac{1}{2}d^4x\epsilon ^{\mu \nu \tau \lambda }{}_{}{}^{}W_{\mu \nu }^{\alpha \beta }{}_{}{}^{}G_{\tau \lambda }^{\sigma \rho }ϵ_{\alpha \beta \sigma \rho }.\end{array}$$ (42) Before we perform the integrals over $`{}_{}{}^{+}G_{\mu \nu }^{\tau \lambda }`$ and $`{}_{}{}^{}G_{\mu \nu }^{\tau \lambda }`$ we still need to rearrange the third integral of (42). 
For this purpose let us introduce the definition $$B_{\mu \nu }=C_{\mu \nu }-\eta _{\mu \nu }C.$$ (43) Note first that $`B=-3C`$. Therefore, we find $`\frac{1}{3}B^2-B^{\mu \nu }B_{\mu \nu }=C^2-C^{\mu \nu }C_{\mu \nu }`$. We now use the identity $$-\frac{1}{32}ϵ^{\mu \nu \alpha \beta }S_{\mu \nu }^{\tau \lambda }S_{\alpha \beta }^{\sigma \rho }ϵ_{\tau \lambda \sigma \rho }=(C^2-C^{\mu \nu }C_{\mu \nu }),$$ (44) where $$S_{\mu \nu }^{\alpha \beta }=\delta _\mu ^\alpha C_\nu ^\beta -\delta _\mu ^\beta C_\nu ^\alpha -\delta _\nu ^\alpha C_\mu ^\beta +\delta _\nu ^\beta C_\mu ^\alpha $$ (45) and we observe that according to (36) we can write $`S_{\mu \nu }^{\alpha \beta }=(\tau ^+)_+S_{\mu \nu }^{\alpha \beta }-(\tau ^{-})_{-}S_{\mu \nu }^{\alpha \beta }`$, where $${}_{+}S_{\mu \nu }^{\alpha \beta }=\delta _\mu ^\alpha {}^{+}\chi _\nu ^\beta -\delta _\mu ^\beta {}^{+}\chi _\nu ^\alpha -\delta _\nu ^\alpha {}^{+}\chi _\mu ^\beta +\delta _\nu ^\beta {}^{+}\chi _\mu ^\alpha $$ (46) and $${}_{-}S_{\mu \nu }^{\alpha \beta }=\delta _\mu ^\alpha {}^{-}\chi _\nu ^\beta -\delta _\mu ^\beta {}^{-}\chi _\nu ^\alpha -\delta _\nu ^\alpha {}^{-}\chi _\mu ^\beta +\delta _\nu ^\beta {}^{-}\chi _\mu ^\alpha .$$ (47) Here, $`{}^{\pm }\chi _\nu ^\beta ={}^{\pm }G_{\nu \alpha }^{\beta \alpha }-\frac{1}{3}\delta _\nu ^\beta {}^{\pm }G_{\lambda \alpha }^{\lambda \alpha }`$ and $`C_{\mu \nu }=(\tau ^+)^+\chi _{\mu \nu }-(\tau ^{-})^{-}\chi _{\mu \nu }`$. But, due to the symmetry properties of $`G_{\mu \nu }^{\alpha \beta }`$ given in (24), we discover that $`\chi _{\mu \nu }={}^{+}\chi _{\mu \nu }={}^{-}\chi _{\mu \nu }`$. So, $`C_{\mu \nu }=(\tau ^+-\tau ^{-})\chi _{\mu \nu }`$.
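The trace relation $`B=-3C`$ and the quadratic identity above follow from (43) by straightforward index algebra; the short symbolic sketch below (ours, using sympy) verifies both for an arbitrary symmetric $`C_{\mu \nu }`$ and $`\eta =\mathrm{diag}(-1,1,1,1)`$:

```python
# Symbolic check (a sketch) of B = -3C and of
# (1/3) B^2 - B^{mu nu} B_{mu nu} = C^2 - C^{mu nu} C_{mu nu},
# for arbitrary symmetric C_{mu nu} and eta = diag(-1, 1, 1, 1).
import sympy as sp

eta = sp.diag(-1, 1, 1, 1)   # the Minkowski metric is its own inverse
C = sp.Matrix(4, 4, lambda i, j: sp.Symbol(f'C{min(i, j)}{max(i, j)}'))

def trace(M):
    return sum(eta[i, j] * M[i, j] for i in range(4) for j in range(4))

def square(M):  # M^{mu nu} M_{mu nu}, indices raised with eta
    M_up = eta * M * eta
    return sum(M_up[i, j] * M[i, j] for i in range(4) for j in range(4))

C_tr = trace(C)
B = C - eta * C_tr           # B_{mu nu} = C_{mu nu} - eta_{mu nu} C
B_tr = trace(B)

print(sp.simplify(B_tr + 3 * C_tr))                       # 0, i.e. B = -3C
print(sp.simplify(sp.Rational(1, 3) * B_tr**2 - square(B)
                  - (C_tr**2 - square(C))))               # 0
```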
Thus, we find that (42) can be written as $$\begin{array}{cc}𝒮_E=& \frac{1}{4}(\tau ^+)\int d^4xϵ^{\mu \nu \alpha \beta }{}^{+}G_{\mu \nu }^{\tau \lambda }{}^{+}G_{\alpha \beta }^{\sigma \rho }ϵ_{\tau \lambda \sigma \rho }-\frac{1}{4}(\tau ^{-})\int d^4xϵ^{\mu \nu \alpha \beta }{}^{-}G_{\mu \nu }^{\tau \lambda }{}^{-}G_{\alpha \beta }^{\sigma \rho }ϵ_{\tau \lambda \sigma \rho }\\ & \\ & -\frac{1}{4}(\tau ^+)\int d^4xϵ^{\mu \nu \alpha \beta }{}^{+}\mathrm{S}_{\mu \nu }^{\tau \lambda }{}^{+}\mathrm{S}_{\alpha \beta }^{\sigma \rho }ϵ_{\tau \lambda \sigma \rho }+\frac{1}{4}(\tau ^{-})\int d^4xϵ^{\mu \nu \alpha \beta }{}^{-}\mathrm{S}_{\mu \nu }^{\tau \lambda }{}^{-}\mathrm{S}_{\alpha \beta }^{\sigma \rho }ϵ_{\tau \lambda \sigma \rho }\\ & \\ & +\frac{1}{2}\int d^4x\epsilon ^{\mu \nu \alpha \beta }{}^{+}W_{\mu \nu }^{\tau \lambda }{}^{+}G_{\alpha \beta }^{\sigma \rho }ϵ_{\tau \lambda \sigma \rho }-\frac{1}{2}\int d^4x\epsilon ^{\mu \nu \alpha \beta }{}^{-}W_{\mu \nu }^{\tau \lambda }{}^{-}G_{\alpha \beta }^{\sigma \rho }ϵ_{\tau \lambda \sigma \rho },\end{array}$$ (48) where $`\mathrm{S}_{\mu \nu }^{\alpha \beta }`$ is defined as $$\mathrm{S}_{\mu \nu }^{\alpha \beta }=\delta _\mu ^\alpha \chi _\nu ^\beta -\delta _\mu ^\beta \chi _\nu ^\alpha -\delta _\nu ^\alpha \chi _\mu ^\beta +\delta _\nu ^\beta \chi _\mu ^\alpha .$$ (49) We are now ready to perform the path integral of (48) with respect to $`{}^{+}G_{\mu \nu }^{\tau \lambda }`$ (and similarly for $`{}^{-}G_{\alpha \beta }^{\sigma \rho }`$). We get the constraint $$(\tau ^+)^+G_{\mu \nu }^{\tau \lambda }+{}^{+}W_{\mu \nu }^{\tau \lambda }+\frac{1}{4}(\tau ^+)ϵ^{\tau \lambda \sigma \rho }\mathrm{S}_{\sigma \rho }^{\alpha \beta }ϵ_{\alpha \beta \mu \nu }=0.$$ (50) It is not difficult to see that $`(\tau ^+)^+G_{\tau \nu }^{\tau \lambda }+{}^{+}W_{\tau \nu }^{\tau \lambda }=0`$ and $`(\tau ^+)^+G_{\tau \lambda }^{\tau \lambda }+{}^{+}W_{\tau \lambda }^{\tau \lambda }=0`$. Using these two expressions we find $$(\tau ^+)^+G_{\mu \nu }^{\tau \lambda }+{}^{+}W_{\mu \nu }^{\tau \lambda }-\frac{1}{4}ϵ^{\tau \lambda \sigma \rho }\mathrm{L}_{\sigma \rho }^{\alpha \beta }ϵ_{\alpha \beta \mu \nu }=0,$$ (51) where $$\mathrm{L}_{\mu \nu }^{\alpha \beta }=\delta _\mu ^\alpha \zeta _\nu ^\beta -\delta _\mu ^\beta \zeta _\nu ^\alpha -\delta _\nu ^\alpha \zeta _\mu ^\beta +\delta _\nu ^\beta \zeta _\mu ^\alpha ,$$ (52) with $`{}^{\pm }\zeta _\nu ^\beta ={}^{\pm }W_{\nu \alpha }^{\beta \alpha }-\frac{1}{3}\delta _\nu ^\beta {}^{\pm }W_{\lambda \alpha }^{\lambda \alpha }`$. Now, solving (51) for $`{}^{+}G_{\mu \nu }^{\tau \lambda }`$ and substituting the result into (48) yields $$\begin{array}{cc}S_E=& -\frac{1}{4}\left(\frac{1}{\tau ^+}\right)\int d^4xϵ^{\mu \nu \alpha \beta }{}^{+}W_{\mu \nu }^{\tau \lambda }{}^{+}W_{\alpha \beta }^{\sigma \rho }ϵ_{\tau \lambda \sigma \rho }+\frac{1}{4}\left(\frac{1}{\tau ^{-}}\right)\int d^4xϵ^{\mu \nu \alpha \beta }{}^{-}W_{\mu \nu }^{\tau \lambda }{}^{-}W_{\alpha \beta }^{\sigma \rho }ϵ_{\tau \lambda \sigma \rho }\\ & \\ & -\frac{1}{4}\left(\frac{1}{\tau ^+}\right)\int d^4xϵ^{\mu \nu \alpha \beta }{}^{+}\mathrm{L}_{\mu \nu }^{\tau \lambda }{}^{+}\mathrm{L}_{\alpha \beta }^{\sigma \rho }ϵ_{\tau \lambda \sigma \rho }+\frac{1}{4}\left(\frac{1}{\tau ^{-}}\right)\int d^4xϵ^{\mu \nu \alpha \beta }{}^{-}\mathrm{L}_{\mu \nu }^{\tau \lambda }{}^{-}\mathrm{L}_{\alpha \beta }^{\sigma \rho }ϵ_{\tau \lambda \sigma \rho },\end{array}$$ (53) where we have applied a similar procedure for $`{}^{-}G_{\alpha \beta }^{\sigma \rho }`$. But this action can be obtained from the following action: $$S_D=-\frac{1}{4}\left(\frac{1}{\tau ^+}\right)\int d^4xϵ^{\mu \nu \alpha \beta }{}^{+}Q_{\mu \nu }^{\tau \lambda }{}^{+}Q_{\alpha \beta }^{\sigma \rho }ϵ_{\tau \lambda \sigma \rho }+\frac{1}{4}\left(\frac{1}{\tau ^{-}}\right)\int d^4xϵ^{\mu \nu \alpha \beta }{}^{-}Q_{\mu \nu }^{\tau \lambda }{}^{-}Q_{\alpha \beta }^{\sigma \rho }ϵ_{\tau \lambda \sigma \rho },$$ (54) where $${}^{\pm }Q_{\mu \nu }^{\tau \lambda }={}^{\pm }W_{\mu \nu }^{\tau \lambda }+{}^{\pm }\mathrm{K}_{\mu \nu }^{\tau \lambda },$$ (55) with $$\mathrm{K}_{\mu \nu }^{\alpha \beta }=\delta _\mu ^\alpha \gamma _\nu ^\beta -\delta _\mu ^\beta \gamma _\nu ^\alpha -\delta _\nu ^\alpha \gamma _\mu ^\beta +\delta _\nu ^\beta \gamma _\mu ^\alpha .$$ (56) Here, $`\gamma _{\mu \nu }`$ is an auxiliary field. In fact, by integrating out $`\gamma _{\mu \nu }`$ in (54) we get (53). The action (54) is, of course, the dual action. Note that the action (54) has the dual gauge invariance $`V\to V-d\alpha `$ and is of the general type (20) but with $`\tau \to -\frac{1}{\tau }`$, where $`\tau `$ can be either $`\tau ^+`$ or $`\tau ^{-}`$. Therefore, we have shown that the coupling transforms as $`\tau \to -\frac{1}{\tau }`$ when one changes from the action (20) to the action (54). It is interesting to note that $`\gamma _{\mu \nu }`$ plays the role of a dual metric perturbation. 5.- FINAL COMMENTS In this article, we have shown that it is possible to construct an S-dual action for linearized gravity.
Our two main ideas were to rewrite linearized gravity as an Abelian gauge theory and to take recourse to the Macdowell-Mansouri formalism. The analysis of S-duality for linearized gravity then follows as in the case of Abelian gauge theories. An important step was the realization that the transformation (28) provides enough symmetry to set $`A=0`$. After some long computations we finally prove that the partition function for linearized gravity has the S-dual symmetry $$Z_{A,h}(\tau )=Z_{V,\gamma }\left(-\frac{1}{\tau }\right),$$ (57) where $`Z_{A,h}(\tau )`$ is the partition function associated to the action (20), while $`Z_{V,\gamma }(-\frac{1}{\tau })`$ is the partition function associated to the action (54). It seems that the present work can be extended without essential complications to the case of linearized supergravity. As is known, the weak field limit of supergravity in four dimensions reduces to the sum of the Fierz-Pauli and Rarita-Schwinger actions, which are the unique ghost-free actions for spin 2 and 3/2 (see and references therein). This means that we can write the partition function for linearized supergravity as a product of two partition functions: one corresponding to linearized gravity and the other corresponding to the Rarita-Schwinger field. Taking recourse to self-dual and anti-self-dual supergravity, and considering separately self-dual/anti-self-dual linearized gravity and self-dual/anti-self-dual Rarita-Schwinger fields, one can find the linearized-supergravity S-dual symmetry. For linearized gravity one may proceed as in previous sections, while for the Rarita-Schwinger field one may use the method developed in reference . In this way, one should expect to get the S-dual linearized supergravity symmetry $`Z_{A,h,\psi }(\tau )=Z_{V,\gamma ,\phi }(-\frac{1}{\tau })`$, where $`\psi `$ is the Rarita-Schwinger field and $`\phi `$ its dual. At present, we are investigating this possibility in detail and we expect to report our results in the near future. Since S-duality gauge invariance is a matter of great interest in connection with M-theory \[33-39\] and supersymmetry, it may be interesting to see whether the present work can be useful in those directions. Finally, it is worth mentioning the physical meaning, as well as the possible relevance, of the strong coupling limit of linearized gravity. For this purpose, we first need to clarify the physical meaning of the coupling constants $`\tau ^+`$ and $`\tau ^{-}`$. Write the action (16) as $$S=\frac{1}{16\mathrm{\Lambda }}\int d^4xϵ^{\mu \nu \alpha \beta }\mathcal{R}_{\mu \nu }^{\tau \lambda }\mathcal{R}_{\alpha \beta }^{\sigma \rho }ϵ_{\tau \lambda \sigma \rho }.$$ (58) As was mentioned at the end of section 3, $`\mathrm{\Lambda }`$ can be identified with the cosmological constant.
Now, using (22) it is not difficult to see that the action (20) can be reduced to $$𝒮=\frac{1}{8}(\tau ^+-\tau ^{-})\int d^4xϵ^{\mu \nu \alpha \beta }\mathcal{R}_{\mu \nu }^{\tau \lambda }\mathcal{R}_{\alpha \beta }^{\sigma \rho }ϵ_{\tau \lambda \sigma \rho }+\frac{i}{8}(\tau ^++\tau ^{-})\int d^4xϵ^{\mu \nu \alpha \beta }\mathcal{R}_{\mu \nu }^{\tau \lambda }\mathcal{R}_{\alpha \beta }^{\sigma \rho }\delta _{\tau \lambda \sigma \rho }.$$ (59) Thus, if we consider the expressions $$\tau ^+=\frac{\mathrm{\Theta }}{2\pi }+\frac{1}{4\mathrm{\Lambda }},$$ (60) $$\tau ^{-}=\frac{\mathrm{\Theta }}{2\pi }-\frac{1}{4\mathrm{\Lambda }},$$ (61) we find that the action (59) can be written as $$𝒮=\frac{1}{16\mathrm{\Lambda }}\int d^4xϵ^{\mu \nu \alpha \beta }\mathcal{R}_{\mu \nu }^{\tau \lambda }\mathcal{R}_{\alpha \beta }^{\sigma \rho }ϵ_{\tau \lambda \sigma \rho }+\frac{i\mathrm{\Theta }}{8\pi }\int d^4xϵ^{\mu \nu \alpha \beta }\mathcal{R}_{\mu \nu }^{\tau \lambda }\mathcal{R}_{\alpha \beta }^{\sigma \rho }\delta _{\tau \lambda \sigma \rho }.$$ (62) If we now recall the case of gauge theory, where $`\tau ^+`$ and $`\tau ^{-}`$ are defined as $$\tau ^+=\frac{\theta }{2\pi }+\frac{4\pi }{g^2},$$ (63) $$\tau ^{-}=\frac{\theta }{2\pi }-\frac{4\pi }{g^2},$$ (64) we see that the cosmological constant $`\mathrm{\Lambda }`$ is playing the role of the gauge coupling constant $`g^2`$ and that $`\mathrm{\Theta }`$ is playing the role of a $`\theta `$ constant. Just as in gauge theory the duality transformation $`\tau \to -\frac{1}{\tau }`$, where $`\tau `$ can be either $`\tau ^+`$ or $`\tau ^{-}`$, can be considered, in its simplest form, as a duality of the gauge coupling constant, $`g^2\to \frac{1}{g^2}`$, in the case of linearized gravity (according to (60)) the duality $`\tau \to -\frac{1}{\tau }`$ can be considered as a duality of the cosmological constant, $`\mathrm{\Lambda }\to \frac{1}{\mathrm{\Lambda }}`$. This result seems to suggest a new mechanism for making the cosmological constant small, at least in linearized gravity (although similar conclusions should be possible in more general cases, as in references -). In particular, the mechanism discussed in this work may be of physical interest in Kaluza-Klein theories, where the process called spontaneous compactification normally results in a very large cosmological constant. Acknowledgments This work was supported in part by CONACyT Grant 3898P-E9608.
# A First Comparison of the SBF Survey Distances with the Galaxy Density Field: Implications for $`H_0`$ and $`\mathrm{\Omega }`$ ## 1. Introduction Gravitational instability theory posits that the present-day peculiar velocity $`v_p`$ of a galaxy will equal the time-integral of the gravitational acceleration due to nearby mass concentrations, i.e., $`v_p=gt`$. For galaxies outside virialized structures, linear perturbation theory can be used to relate $`v_p`$ to the present-day local mass density: $$v_p(𝒓)=\frac{\mathrm{\Omega }^{0.6}H_0}{4\pi }\int d^3𝒓^{}\delta _m(𝒓^{})\frac{𝒓^{}-𝒓}{|𝒓^{}-𝒓|^3}$$ (1) (Peebles 1980), where $`\mathrm{\Omega }`$ is the mass density of the universe in units of the critical density, $`H_0`$ is the Hubble constant, and $`\delta _m(𝒓)`$ is the mass density fluctuation field. A further common simplification is the linear biasing model, $`\delta _g=b\delta _m`$, where $`\delta _g`$ is the fluctuation field of the observed galaxy distribution, and $`b`$ is the linear bias. With these assumptions, the observed peculiar velocity will be proportional to the quantity $`\beta =\mathrm{\Omega }^{0.6}/b`$. Redshift surveys of complete samples of galaxies can be used to determine the galaxy density in redshift space $`\delta _g(z)`$, which is then smoothed and used to predict the peculiar velocity field. In this case, the distances are measured in units of the Hubble velocity and $`H_0`$ cancels out of Eq. (1). Comparison with observed peculiar velocities from distance surveys tied to the Hubble flow then yields a value for $`\beta `$ (see Strauss & Willick for a comprehensive review of the methods). Recent applications using Tully-Fisher distances and the density field of the IRAS 1.2 Jy redshift survey (Fisher et al. 1995) have found best-fit values $`\beta _I=0.4`$–0.6 (Schlegel 1995; Davis et al. 1996; da Costa et al. 1998; Willick et al. 1997; Willick & Strauss 1998), where the subscript denotes the IRAS survey. Riess et al. (1997) have done the comparison with Type Ia supernova (SNIa) distances and find $`\beta _I=0.40\pm 0.15`$. On the other hand, the Potent method (Dekel et al. 1990), based on the derivative of Eq. (1) but incorporating nonlinear terms, gives a $`\beta _I`$ about twice this value, most recently $`\beta _I=0.89\pm 0.12`$ (Sigad et al. 1998). The surface brightness fluctuation (SBF) method (Tonry & Schneider 1988) of measuring early-type galaxy distances is new to the field of cosmic flows. A full review of the method is given by Blakeslee et al. (1999). The $`I`$-band SBF Survey (Tonry et al. 1997, hereafter SBF-I) gives distances to about 300 galaxies out to $`\sim `$4000 km s<sup>-1</sup>. Tonry et al. (1999, hereafter SBF-II) used these data to construct a parametric flow model of the surveyed volume, but as the analysis made no use of galaxy distribution information, it could not constrain $`\mathrm{\Omega }`$. It did, however, allow for a direct tie between the Cepheid-calibrated SBF distances and the unperturbed Hubble flow, and thus a value for $`H_0`$. Previous $`H_0`$ estimates with SBF (e.g., Tonry 1991; SBF-I) relied on far-field ties to methods such as Tully-Fisher, $`D_n`$–$`\sigma `$, and SNIa, and thus incurred additional systematic uncertainty. Unfortunately, this “direct tie” of SBF to the Hubble flow was still not unambiguous. For instance, Ferrarese et al. (1999, hereafter F99) used a subset of the same SBF data but a different parametric flow model to derive an $`H_0`$ 10% lower than in SBF-II.
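For readers who want to experiment with Eq. (1), the following Python sketch (entirely ours) discretizes the integral on a grid; the overdensity field used here is a random placeholder rather than a real redshift-survey density field, and with distances expressed in velocity units $`H_0`$ drops out, leaving the overall factor $`\beta `$.

```python
# Sketch of the linear-theory prediction, Eq. (1), discretized on a grid:
#   v_p(r) = (beta / 4 pi) * sum_i delta_g(r_i) (r_i - r) / |r_i - r|^3 dV,
# with delta_g = b * delta_m absorbed into beta = Omega^0.6 / b and all
# distances in km/s (so H0 cancels). delta_g below is a random placeholder.
import numpy as np

rng = np.random.default_rng(0)
N, L = 16, 8000.0                       # cells per side; box size (km/s)
dx = L / N
centers = (np.arange(N) + 0.5) * dx - L / 2.0
grid = np.stack(np.meshgrid(centers, centers, centers, indexing='ij'), axis=-1)
delta_g = rng.normal(0.0, 0.5, (N, N, N))   # placeholder overdensity field

def predicted_velocity(r, beta=0.4):
    """Peculiar velocity (km/s) at position r from the gridded density."""
    sep = grid - r                                   # shape (N, N, N, 3)
    dist3 = np.linalg.norm(sep, axis=-1) ** 3
    dist3[dist3 < dx**3] = np.inf                    # crude self-cell softening
    kern = delta_g[..., None] * sep / dist3[..., None]
    return beta / (4.0 * np.pi) * kern.sum(axis=(0, 1, 2)) * dx**3

print(predicted_velocity(np.array([0.0, 0.0, 0.0])))
```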
The present work uses SBF Survey distances in an initial comparison to the peculiar velocity predictions from redshift surveys and derives simultaneous constraints on $`H_0`$ and $`\beta `$. The comparison also provides tests of linear biasing, the SBF distances, and claimed bulk flows. ## 2. Analysis We compute the redshift-distance ($`cz`$–$`d`$) relation in the direction of each sample galaxy using the method of Davis & Nusser (1994), which performs a spherical harmonic solution of the redshift-space version of the Poisson equation. Computations are done for both the IRAS 1.2 Jy (Fisher et al. 1995) and “Optical Redshift Survey” (Santiago et al. 1995) catalogs as a function of their respective density parameters $`\beta _I`$ and $`\beta _O`$. The smoothing scale of the $`cz`$–$`d`$ predictions is typically $`\sim `$500 km s<sup>-1</sup>. The predictions are then compared to the SBF observations using the $`\chi ^2`$ minimization approach adopted by Riess et al. (1997), who assumed a fixed “redshift error” of $`\sigma _v=200`$ km s<sup>-1</sup>. This term includes uncertainty in the linear predictions, unmodeled nonlinear motions, and true velocity measurement error; we also report results with $`\sigma _v=150`$ km s<sup>-1</sup>. We use the same subset of galaxies as in SBF-II, namely those with high-quality data, $`(V-I)_0>0.9`$, and not extremely anomalous in their velocities (e.g., Cen-45); we also omit Local Group members. The full sample then comprises 280 galaxies. Unlike the case for the SNIa distances, we lack a secure external tie to the Hubble flow, so we successively rescale the SBF distances to different values of $`H_0`$ and repeat the minimization. The Nusser-Davis method cannot reproduce the multivalued $`cz`$–$`d`$ zones of clusters, but it can produce major stall regions in the $`cz`$–$`d`$ relation. Since such regions will have high velocity dispersions, this deficiency in the method may be overcome with an allowance for extra velocity error. A significant fraction of the SBF galaxies reside in the Virgo and Fornax clusters. We adopt the following three approaches in dealing with these. “Trial 1” uses individual galaxy velocities but allows extra variance in quadrature for the clusters according to $`\sigma _{\mathrm{cl}}(r)=\sigma _0/\sqrt{1+(r/r_0)^2}`$, where $`\sigma _0=700(400)`$ km s<sup>-1</sup> and $`r_0=2(1)`$ Mpc are adopted for Virgo (Fornax). These spatial profiles are meant to mimic the observed projected profiles and parameters from Fadda et al. (1996) and Girardi et al. (1998); at large radius $`\sigma _{\mathrm{cl}}`$ follows the infall velocity profile (e.g., SBF-II). “Trial 2” uses a fixed velocity error but removes the virial dispersions by assigning galaxies their group-averaged velocities. The groups are defined in SBF-I: 29 galaxies are grouped into Virgo, 27 into Fornax, 7 into Ursa Major, and all other groups have 2-6 members, with 34% of the sample being ungrouped. “Trial 3” is similar to trial 2, but 72 sample galaxies within 10 Mpc of Virgo (a radius about 50% larger than the zero-velocity radius found in SBF-II) and 30 within 5 Mpc of Fornax are removed, reducing the sample by 36% to 178 galaxies. For each of the trials, we calculate $`\chi ^2`$ for the comparisons with both the IRAS and ORS gravity fields, using $`\sigma _v`$ of both 150 and 200 km s<sup>-1</sup>, for a range in $`H_0`$ in steps of 0.1 km s<sup>-1</sup> Mpc<sup>-1</sup> and a range in $`\beta `$ in steps of 0.1. We then interpolate in $`\beta `$ by cubic splines to find the $`H_0`$-$`\beta `$ combination minimizing $`\chi ^2`$.
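A schematic version of this grid search is sketched below; the distances, velocities and flow model are synthetic placeholders (the real analysis uses the SBF catalog and the IRAS/ORS predictions), so only the minimization logic is meaningful.

```python
# Sketch of the (H0, beta) chi^2 grid search with cubic-spline interpolation
# in beta. All data and the "flow model" below are synthetic placeholders.
import numpy as np
from scipy.interpolate import CubicSpline

rng = np.random.default_rng(1)
n = 280
cz = rng.uniform(500.0, 4000.0, n)                  # observed velocities (km/s)
d_sbf = cz / 74.0 * rng.lognormal(0.0, 0.06, n)     # mock distances (Mpc)
sigma_d = 0.06 * d_sbf                              # per-galaxy distance errors
sigma_v = 200.0                                     # fixed "redshift error" (km/s)

def cz_pred(d_kms, beta):
    """Stand-in for the redshift-distance relation from the density field."""
    return d_kms + beta * 300.0 * np.sin(d_kms / 1500.0)

H0_grid = np.arange(70.0, 78.0, 0.1)
beta_grid = np.arange(0.0, 1.01, 0.1)
chi2 = np.empty((H0_grid.size, beta_grid.size))
for i, H0 in enumerate(H0_grid):
    d_v = d_sbf * H0                                # rescaled distances (km/s)
    err2 = sigma_v**2 + (H0 * sigma_d)**2
    for j, beta in enumerate(beta_grid):
        chi2[i, j] = np.sum((cz - cz_pred(d_v, beta))**2 / err2)

i_best = np.unravel_index(np.argmin(chi2), chi2.shape)[0]
spline = CubicSpline(beta_grid, chi2[i_best])       # interpolate in beta
fine = np.linspace(0.0, 1.0, 1001)
print(f"H0 = {H0_grid[i_best]:.1f}, beta = {fine[np.argmin(spline(fine))]:.2f}")
```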
We then interpolate in $`\beta `$ by cubic splines to find the $`H_0`$-$`\beta `$ combination minimizing $`\chi ^2`$. ## 3. Results Figure 1 shows the 68%, 90%, and 99% joint confidence contours on $`H_0`$ and $`\beta `$ from the $`\chi ^2`$ analyses for trial 2 (which gives the median best-fit $`H_0`$) using $`\sigma _v=200`$ km s$`^{-1}`$. Significant covariance exists between the two parameters. Table 1 shows the $`\chi ^2`$ minimization results for all the different comparison runs. In order to give a more realistic impression of the true uncertainties, the tabulated errors are for $`\delta \chi ^2=2.3`$, the 68% joint confidence range on 2 parameters, except for the last column which varies $`H_0`$ within uncertainty limits to explicitly take account of the covariance in $`\beta `$ (see below). On average, the IRAS comparisons prefer $`\beta _I\approx 0.4`$ and $`H_0\approx 74`$, while the ORS, which more densely samples the clusters, prefers $`\beta _O\approx 0.25`$ and $`H_0\approx 73`$. Of course, there can be only one value of $`H_0`$ for the sample galaxies. The table shows that when the cluster galaxies are removed in trial 3, the best-fit $`H_0`$ increases to 74 for the ORS, while it changes little for IRAS. We adopt $`H_0=74\pm 1.4`$ km s$`^{-1}`$ Mpc$`^{-1}`$ as the likely value from this velocity analysis, where we use the median $`H_0`$ error for the IRAS trials combined in quadrature with the variance in $`H_0`$ among the trials. The error increases to $`\pm 4`$ km s$`^{-1}`$ Mpc$`^{-1}`$ when the 5% statistical uncertainty in the tie to the Cepheid distance scale is added in quadrature (SBF-II). Finally, the estimated uncertainty in the Cepheid scale itself is $`\sim 9`$% (F99; Mould et al. 1999). These uncertainties in $`H_0`$ due to the distance zero point have no effect on the uncertainty in $`\beta `$. Figure 2 illustrates how the reduced $`\chi ^2`$ for $`H_0=74`$ and $`\sigma _v=200`$ varies with $`\beta `$ for trials 2 and 3 (spanning the range in best-fit $`\beta `$ and $`\chi _\nu ^2`$) of the IRAS and ORS comparisons. Overall, the last columns of Table 1 indicate $`\beta _I=0.42_{-0.06}^{+0.10}`$ and $`\beta _O=0.26\pm 0.08`$, where the errors include the uncertainty for a given trial (including the 2% uncertainty in $`H_0`$ from the velocity tie) and the variation among the trials. The results are independent of whether one takes a median or average of the trials, or adopts either $`\sigma _v=150`$ or 200 km s$`^{-1}`$. However, $`\beta _I`$ and $`\beta _O`$ would increase for example by $`\sim 30`$% to 0.56 and 0.33, respectively, if $`H_0`$ were known to be 5% larger for a fixed distance zero point, or decrease by 15–20% if $`H_0`$ were 5% smaller; but again, the $`\chi ^2`$ analysis indicates $`H_0`$ is constrained to 2% for a fixed distance zero point. Finally, Figure 3 compares the $`\beta _I=0.4`$ predicted and $`H_0=74`$ observed peculiar velocities in the Local Group frame for all the galaxies, using their group-averaged velocities to limit noise. Given that the two sets of peculiar velocities were derived independently (apart from our adjustment of $`H_0`$ and $`\beta _I`$), the agreement is quite good. The predictions resemble a heavily smoothed version of the observations, and $`\chi _\nu ^2`$ confirms this, although the residuals may hint at a slightly misaligned dipole. Large negative residuals near $`(l,b)\approx (283^{\circ },+74^{\circ })`$ indicate Virgo backside infall in excess of what the spherical harmonic solution can produce. These issues will be addressed in more detail by a forthcoming paper (Willick et al. 2000, in preparation).
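To illustrate the spline step mentioned at the start of this section, the toy stand-in below refines a 0.1-step grid in $`\beta `$ by cubic-spline interpolation; the $`\chi ^2`$ values fed in are a fake parabola, purely for demonstration.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def best_beta(betas, chi2_vals):
    """Spline-interpolate chi^2(beta) sampled on a coarse grid and
    return the beta that minimizes the interpolant."""
    spline = CubicSpline(betas, chi2_vals)
    fine = np.linspace(betas[0], betas[-1], 2001)
    return fine[np.argmin(spline(fine))]

b = np.arange(0.1, 1.05, 0.1)          # coarse beta grid, step 0.1
print(best_beta(b, (b - 0.42) ** 2))   # recovers the minimum near 0.42
```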
## 4. Discussion We have found the most consistent results for $`H_0=74`$ km s$`^{-1}`$ Mpc$`^{-1}`$, intermediate between the favored values of 77 and 71 reported by SBF-II and the $`H_0`$ Key Project’s analysis of SBF (F99), respectively. The differences in these “SBF $`H_0`$” values result entirely from the flow models (we have changed the Key Project value by 1.6% to be appropriate for our distance zero point, since our concern here is with the tie to the Hubble flow). SBF-II explored a number of parametric flow models, ranging from a pure Hubble flow plus dipole to models with two massive attractors and a residual quadrupole, as well as some with a local void component. The pure Hubble flow model gave the same $`H_0`$ as obtained by F99. Adding the Virgo and Great Attractors gave $`H_0=73.5`$, and the additional Local Group-centered quadrupole gave a further 6% increase in $`H_0`$. Each component significantly improved the model likelihood, but it was suggested that the quadrupole arose from inadequate modeling of the flattened Virgo potential. If so, then $`H_0`$ may be better estimated using a sharply cutoff, Virgo-centered quadrupole. Figure 23 of SBF-II shows that such a model would indeed yield $`H_0\approx 74`$. We also note that most of the SBF-II models had a significant excess Local Group peculiar motion of $`\sim 190`$ km s$`^{-1}`$, unlike the IRAS density field predictions (e.g., Willick et al. 1997). However, when the excess was modeled as a “push” from a large nearby void, $`H_0`$ dropped from 78 to 73 because of the underdensity introduced. This model achieved the best likelihood of any considered, but the treatment of the void was deemed too ad hoc to qualify as a standard component of the flow model. Thus, it may be that the SBF-II result for $`H_0`$ suffered from the arbitrariness inherent in parametric modeling. However, the good match we find between the predicted and observed velocity fields supports the claim by SBF-II that, after accounting for the attractor infalls, any bulk flow of the volume $`cz\lesssim 3000`$ km s$`^{-1}`$ is $`\lesssim 200`$ km s$`^{-1}`$, as the structure within this volume accounts for most of our motion in the cosmic microwave background rest frame (see Nusser & Davis 1994). The best-fit values of $`\beta _I=0.42_{-0.06}^{+0.10}`$ and $`\beta _O=0.26\pm 0.08`$ are similar to the $`\beta _I=0.40\pm 0.15`$ and $`\beta _O=0.3\pm 0.1`$ results found by Riess et al. (1997) using 24 SNIa distances out to $`cz\approx 9000`$ km s$`^{-1}`$, nearly 3 times our survey limit. It is interesting that despite the close agreement on $`\beta `$, SBF and SNIa still disagree by nearly 10% on $`H_0`$ (e.g., Gibson et al. 1999). This implies that the discrepancy is mainly due to the respective distance calibrations against the Cepheids (otherwise the $`H_0`$ offset would cause major disagreement on $`\beta `$). We also note that the ratio $`\beta _I/\beta _O\approx 1.6`$ agrees well with the estimates by Baker et al. (1998). Our results are consistent with all other recent comparisons of the gravity and velocity fields (so-called “velocity-velocity” comparisons), regardless of the distance survey used. Thus, the near factor-of-two discrepancy with the Potent analysis (a “density-density” comparison) of Sigad et al. (1998) persists.
Since the latter analysis is done at larger smoothing scales, one obvious explanation is non-trivial, scale-dependent biasing, and some simulations may give the needed factor-of-two change in bias over the relevant range of scales (Kauffmann et al. 1997; but see Jenkins et al. 1998). However, at this point, the inconsistency in the results of the two types of analysis remains unexplained. We are currently in the process of analyzing the SBF data set using methods that deal directly with multivalued redshift zones and incorporate corrections to linear gravitational instability theory to take advantage of SBF’s ability to probe the small-scale nonlinear regime (Willick et al. 2000, in preparation). We also plan in the near future to use the data for a “density-density” determination of $`\beta `$. In addition, we are working towards a direct, unambiguous tie of SBF to the far-field Hubble flow, which will significantly reduce the systematic uncertainty in $`\beta `$. We thank Laura Ferrarese for sharing Key Project results prior to publication. This work was supported by NSF grants AST9401519 and AST9528340. JPB thanks the Sherman Fairchild Foundation for support.
# Nonequilibrium chiral perturbation theory and Disoriented Chiral Condensates Talk given at “Hadron Physics: Effective Theories of Low Energy QCD”, Coimbra, Portugal. ## Introduction and motivation The forthcoming relativistic heavy-ion collision experiments at BNL (RHIC) and CERN will be able to test accurately the dynamics of the QCD plasma. After the collision, the plasma formed in the central rapidity region cools down via hydrodynamic expansion, and nonequilibrium effects become important in that regime. Among them, one of the most interesting suggestions has been the formation of the so called Disoriented Chiral Condensates (DCC). The DCC were proposed originally in \[an89\] as misaligned vacuum regions, where the chiral field points out in directions in isospin space different from that where the vacuum expectation value of the pion field vanishes. If such regions were formed, one could observe large clusters of pions emitted coherently from the plasma as the pion field relaxes to the normal vacuum. This kind of behaviour is indeed observed in Centauro and anti-Centauro events in cosmic ray experiments \[centauro\]. However, one should point out that a clear signal for DCC formation has not yet been observed at Fermilab \[fermilab\], although it could well happen that the DCC’s are too small to be directly detected and one has to think of alternative observables (see below). On the other hand, after the hadronisation time, a proper description of the microscopic meson dynamics makes it compulsory to use an effective low-energy theory for QCD. In this context, chiral symmetry plays a fundamental role. The effective theory must incorporate all the QCD symmetries and the chiral spontaneous symmetry breaking (SSB) pattern, so that the Nambu-Goldstone bosons (NGB) are the lightest mesons ($`\pi `$, $`K`$, $`\eta `$). The light quark masses are then introduced perturbatively. One possible choice is simply an $`O(N)`$ model, with the standard classical SSB potential. Its fundamental fields are the $`N-1`$ pions and the $`\sigma `$, which acquires a nonzero vacuum expectation value $`v`$. This is the Linear Sigma Model (LSM) description. However, one should bear in mind that the LSM becomes nonperturbative in the coupling constant at low energies, so that alternative perturbative expansions have to be used, such as large $`N`$. Besides, the LSM only shares the QCD chiral symmetry breaking pattern for $`N=4`$. An alternative approach is to construct an effective theory as an infinite sum of terms with increasing number of derivatives, only for the NGB fields. This is the description based on the Nonlinear Sigma Model (NLSM), which is the lowest order action one can write down in this expansion. Higher order corrections come both from NGB loops and higher order lagrangians and can be renormalised order by order in energies, yielding finite predictions for the meson observables. The unknown coefficients, which encode all the information on the underlying theory, absorb the loop infinities and can be fitted to experiment. This framework constitutes the so called chiral perturbation theory (ChPT) \[we79, gale\]. The perturbative expansion is carried out in terms of the ratio of the $`𝒪(p)`$ meson energy scales of the theory (masses, external momenta, temperature and so on) to the chiral scale $`\mathrm{\Lambda }_\chi \sim 1`$ GeV (see \[dogoho92, meiss93, pich95, dogolope97\] for a review).
Nonequilibrium effects such as the DCC’s have been investigated in the literature using $`O(N)`$ models with initial thermal equilibrium conditions $`\sigma (t=0)=0`$ and $`\pi ^a(t=0)=0`$. In this context, two different scenarios for DCC formation have been proposed. The first one takes place in the early stages of the plasma evolution. Roughly speaking, as the field rolls down along the potential, long wavelength modes grow exponentially and enhance the formation of DCC’s. There have been several approaches in the literature to implement this idea in the $`O(N)`$ model, like classical simulations \[rawi93\], large $`N`$ \[bodeho95\], or analyses based on reasonable assumptions on the kinematics \[cooper95, lamp96\]. Typical DCC sizes within this approach are of the order of 2–3 fm, containing $`n_\pi \sim 0.2`$ fm$`^{-3}`$ pions, whereas the plasma cools down in a proper time of about $`\tau \sim `$ 5–10 fm/c. As commented above, these numbers yield too small DCC’s to be observed directly. A second suggestion, which has been proposed recently \[mm95, kaiser, hiro\], is based on the parametric resonance mechanism and inherits the idea from inflationary reheating \[linde\]. In this approach, the $`\sigma `$ field is very close to the bottom of the potential but it is still oscillating around it (it clearly overshoots the vacuum if the initial conditions are imposed on the top) in the late stage of the plasma evolution. Those oscillations transfer energy to the pion modes, giving rise to exponentially growing pion solutions for certain bands in momentum space, via parametric resonance. Recent work within the LSM in this approach predicts rather larger DCC’s, with sizes up to 5 fm \[kaiser\]. More details about this mechanism will be given below. In the present work, we will explore the applicability of ChPT to describe the meson plasma out of thermal equilibrium. So far, this formalism has been applied only in equilibrium, to study the low $`T`$ ($`T=𝒪(p)`$) meson gas and the chiral phase transition \[gele89, boka96\]. The key idea of our approach is to make use of the derivative expansion consistently defined in ChPT in order to study the system not far from equilibrium. It is therefore best suited for the late stage evolution and has the additional advantages typical of standard ChPT, i.e., it deals only with NGB fields and is equally applicable to three flavours. We will show that a systematic power counting can be established in this case and, furthermore, that the renormalisation program can be consistently implemented. Details can be found in \[agg99\]. In addition, in the last section we will explore the possibility of describing DCC formation within ChPT, via parametric resonance. ## The model and chiral power counting Our starting point is the nonlinear sigma model (NLSM) where we let the pion decay constant – the only relevant parameter to lowest order in derivatives – be time dependent. In the context of a RHIC, such time dependence can be thought of as proper time evolution within the so called Bjorken initial conditions \[bjo83\], where observables depend only on proper time and not on rapidity. This picture is consistent with the experimental observations. We take the initial time $`t=0`$, having in mind that it would correspond to a proper time $`\tau _0\sim `$ 1–2 fm/c, a typical hadronisation time.
Thus, we will consider the following NLSM action $$S[U]=\int_C dt\int d^3\vec{x}\,\frac{f^2(t)}{4}\,\text{tr}\,\partial _\mu U^{\dagger }(\vec{x},t)\,\partial ^\mu U(\vec{x},t)$$ (1) Here, $`C`$ is the Schwinger-Keldysh contour (see \[agg99\] for details), which parametrises the nonequilibrium path integral, where we impose thermal equilibrium for $`t\le 0`$ at a temperature $`T_i=\beta _i^{-1}`$ as the initial condition. Note that the action (1) is chiral invariant ($`U\to LUR^{\dagger }`$) by construction, which will play an important role in what follows. As a first approximation, we will be interested only in the strict chiral limit for two light flavours, i.e., massless pions. Therefore, we are not including any explicit symmetry-breaking term in the action. Thus, $`f(t\le 0)=f\simeq `$ 93 MeV to leading order ($`f\to f_\pi `$ to higher orders) and for $`t>0`$ the system departs from equilibrium. Note that, since we choose that departure to be instantaneous, $`f(t)`$ cannot be analytical at $`t=0`$. This is just an artifact of the approximation and should not have any effect on the long-time behaviour. Finally, as customary, $`U(x)`$ is parametrised in terms of pion fields $`\pi ^a`$ as: $`U(\vec{x},t)=\frac{1}{f(t)}\left\{\left[f^2(t)-\pi ^2\right]^{1/2}I+i\tau _a\pi ^a\right\}`$ (2) and $`\pi ^a(t_i-i\beta _i)=\pi ^a(t_i)`$ is the equilibrium boundary condition, with $`t_i<0`$. The new ingredient we need to incorporate in the power counting in order to be consistent with ChPT is then $$\frac{\dot{f}(t)}{f^2(t)}=𝒪\left(\frac{p}{\mathrm{\Lambda }_\chi }\right),\qquad \frac{\ddot{f}(t)}{f^3(t)},\ \frac{[\dot{f}(t)]^2}{f^4(t)}=𝒪\left(\frac{p^2}{\mathrm{\Lambda }_\chi ^2}\right),$$ (3) and so on. Obviously, our results will depend upon the choice of $`f(t)`$. One can think of $`f(t)`$ as an external source, to which we wish to obtain the nonequilibrium response of the system. Alternatively, this model can be thought of to lowest order as the LSM with the time-dependent constraint $`\sigma ^2+\pi ^2=f^2(t)`$. We shall discuss below a reasonable assumption for $`f(t)`$ in connection with DCC formation. Meanwhile, we shall keep $`f(t)`$ arbitrary. To lowest order in the pion fields, the above NLSM action can be written as $$S_0[\pi ]=-\frac{1}{2}\int_C d^4x\,\pi ^a(\vec{x},t)\left[\mathrm{\Box }+m^2(t)\right]\pi ^a(\vec{x},t)$$ (4) where $`\int_C d^4x=\int_C dt\int d^3\vec{x}`$ and $`m^2(t)=-\ddot{f}(t)/f(t)`$. That is, the model accommodates a time-dependent pion mass term, without breaking the chiral symmetry explicitly. This effect is the same as switching on an external curved space-time background, as we will see in the next section.
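To make the origin of this mass term explicit, one can expand (2) to quadratic order in the pion fields; the following short derivation is added here for clarity and assumes a mostly-minus metric, so the overall signs depend on that convention: $$\frac{f^2(t)}{4}\,\text{tr}\,\partial _\mu U^{\dagger }\partial ^\mu U=\frac{1}{2}\left(\dot{\pi }^a-\pi ^a\frac{\dot{f}}{f}\right)^2-\frac{1}{2}(\nabla \pi ^a)^2+𝒪(\pi ^4).$$ The cross term $`-\pi ^a\dot{\pi }^a\dot{f}/f=-\frac{1}{2}(\dot{f}/f)\,d(\pi ^2)/dt`$ integrates by parts to $`+\frac{1}{2}\pi ^2(\ddot{f}/f-\dot{f}^2/f^2)`$, which combines with the $`\frac{1}{2}\pi ^2\dot{f}^2/f^2`$ piece from the square to leave the time-dependent mass term $`\frac{1}{2}(\ddot{f}/f)\pi ^2`$, i.e., $`m^2(t)=-\ddot{f}/f`$ in the notation of (4).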
In order to find this $`𝒪(p^4)`$ lagrangian, we will make use of a very fruitful analogy: the action (1) is equivalent to formulating the NLSM on a curved space-time background corresponding to a spatially flat Robertson-Walker metric, with scale factor $`a(t)=f(t)/f(0^+)`$ (see \[agg99\] for details). Note that in this language, $`m^2(t)`$ in (4) represents the minimal coupling to the RW metric preserving chiral invariance. Therefore, we can construct the $`𝒪(p^4)`$ action as: $$S_4[U,g,R]=\int_C d^4x\,\sqrt{g}\left[\mathcal{L}_4[U,g]+\left(L_{11}Rg^{\mu \nu }+L_{12}R^{\mu \nu }\right)\text{tr}\,\partial _\mu U^{\dagger }\partial _\nu U\right]$$ (5) where $`g`$ is the metric determinant, $`\mathcal{L}_4[U,g]`$ stands for the standard (equilibrium) lagrangian \[gale\] with indices raised and lowered with the metric $`g^{\mu \nu }`$, and the rest are new $`𝒪(p^4)`$ invariant couplings to the scalar curvature $`R(x)`$ and the Ricci tensor $`R_{\mu \nu }(x)`$ in the chiral limit. These are the new terms we need, where $`L_{11}`$ and $`L_{12}`$ are the new coupling constants. In fact, this problem has already been considered in \[dole91\] in order to study the energy-momentum tensor of QCD at low energies. In that work it was found that $`L_{11}`$ is renormalised in dimensional regularisation, whereas $`L_{12}`$ is already finite. Their numerical values can be obtained from the experimental information on the QCD energy-momentum form factors. They yield $`L_{12}\simeq -2.7\times 10^{-3}`$ and $`L_{11}^r(\mu =1\,\text{GeV})\simeq 1.4\times 10^{-3}`$, where $`\mu `$ is the renormalisation scale. In our case, with our RW metric we get, to $`𝒪(\pi ^2)`$, $$S_4[\pi ,g]=-\frac{1}{2}\int_C d^4x\,\pi ^a\left[f_1(t)\partial _t^2-f_2(t)\nabla ^2+m_1^2(t)\right]\pi ^a+𝒪(\pi ^4)$$ (6) with $$f_1(t)=12\left[\left(2L_{11}+L_{12}\right)\frac{\ddot{f}(t)}{f^3(t)}-L_{12}\frac{[\dot{f}(t)]^2}{f^4(t)}\right],\qquad f_2(t)=4\left[\left(6L_{11}+L_{12}\right)\frac{\ddot{f}(t)}{f^3(t)}+L_{12}\frac{[\dot{f}(t)]^2}{f^4(t)}\right],$$ $$m_1^2(t)=-\left[\frac{f_1(t)\ddot{f}(t)+\dot{f}_1(t)\dot{f}(t)}{f(t)}+\frac{1}{2}\ddot{f}_1(t)\right]$$ (7) The above lagrangian should take care of the nonequilibrium infinities we might find in the pion two-point function. We will see below that this is indeed the case. ## The pion decay functions $`f_\pi (t)`$ The first observable one might think of calculating in ChPT is the pion decay constant to one loop. In the nonequilibrium model, it will become a time-dependent function $`f_\pi (t)`$. One should point out that the definition of $`f_\pi `$ is subtle even in thermal equilibrium \[boka96, kash94\]. In addition, one has in general $`f_\pi ^s(T)\ne f_\pi ^t(T)`$, corresponding to the spatial and temporal components of the axial current, due to the loss of Lorentz covariance in the thermal bath \[pity96\]. We refer to \[agg99\] for details on how to define $`f_\pi (t)`$ properly out of equilibrium. Once this has been done, one has to consider the one-loop diagrams for the pion two-point function coming from (1), plus the tree level ones from (6).
The final result up to $`𝒪(p^4)`$ reads \[agg99\] $$\left[f_\pi ^s(t)\right]^2=f^2(t)\left[1+2f_2(t)-f_1(t)\right]-2iG_0(t)$$ (8) $$\left[f_\pi ^t(t)\right]^2=f^2(t)\left[1+f_2(t)\right]-2iG_0(t)$$ (9) for $`t>0`$, with $`f_{1,2}(t)`$ in (7), and $`G_0(t)`$ is nothing but the equal-time pion two-point function $`G_0(t)=G_0(x,x)`$, with $`G_0(x,y)`$ the solution of the differential equation $$\left\{\mathrm{\Box }_x+m^2(x^0)\right\}G_0(x,y)=\delta _C(x^0-y^0)\,\delta ^{(3)}(\vec{x}-\vec{y})$$ (10) with KMS equilibrium conditions $`G_0^>(\vec{x},t_i-i\beta _i;y)=G_0^<(\vec{x},t_i;y)`$, $`G_0^>`$ and $`G_0^<`$ being advanced and retarded correlation functions. Clearly, this equation cannot be solved analytically for an arbitrary $`f(t)`$, but it can be handled numerically. Therefore, one must remember that $`G_0(t)`$ depends implicitly on $`f(t)`$ through (10). As a consistency check, the results (8)-(9) reproduce the equilibrium result \[gale87\] when we switch off the time derivatives of $`f(t)`$: $$\left[f_\pi ^s(T)\right]^2=\left[f_\pi ^t(T)\right]^2=f^2\left(1-\frac{T^2}{6f_\pi ^2}\right)$$ (11) An interesting consequence of our result is that $`f_\pi ^s(t)\ne f_\pi ^t(t)`$ at one loop, unlike the equilibrium case. In addition, from (8)-(9) and (7) we see that the difference $`[f_\pi ^s(t)]^2-[f_\pi ^t(t)]^2`$ is finite, so that $`f_\pi ^s(t)`$ and $`f_\pi ^t(t)`$ can be renormalised at the same time, which is another consistency check. We remark that $`G_0(t)`$ contains in general UV divergences, to be absorbed by $`f_1(t)`$ and $`f_2(t)`$ through the renormalisation of $`L_{11}`$. An explicit check of this renormalisation procedure will follow in the next section. ## Disoriented Chiral Condensates in ChPT In this section we will consider a particular choice of $`f(t)`$ and apply our previous results. Our motivation is the possibility of generating DCC-like structures in this context. We shall sketch some of our preliminary results here, while details of the calculation and further work will be postponed to a forthcoming paper. As we have discussed above, our approach is meant to be useful in a stage of the plasma evolution where the departure from equilibrium is of the same order as the meson energies. Hence, we should be able to obtain similar results as the analyses performed in the LSM in the parametric resonance regime \[mm95, kaiser, hiro\], where the rolling down of the $`\sigma `$ field is in its late oscillatory period. This is the same behaviour as that of the inflaton field in reheating \[linde\]. One then allows for a time-dependent classical background $`\sigma (t)`$ in the LSM, splitting the field as $$\sigma (\vec{x},t)=\sigma (t)+\delta \sigma (\vec{x},t)$$ (12) where $`\delta \sigma `$ is the quantum fluctuation. As a first approximation, one can neglect the pion fluctuations, $`\pi ^2\ll v^2`$ \[mm95, hiro\], and solve the equation of motion, which yields just $`\sigma (t)=\sigma _0\mathrm{cos}m_\sigma t`$. Here, $`\sigma _0`$ is the initial field amplitude, which in this approximation is a small quantity. Even so, one can still produce exponentially growing pion fields (DCC) which in the end will be responsible for the damping of the oscillations as the field relaxes to equilibrium. One should bear in mind that neglecting $`\pi ^2`$ to lowest order is a rather crude approximation, as pointed out in \[kaiser\], which is clearly not valid for large times, when the pion correlator grows significantly.
Nonetheless, we will carry on with this simple case, just to understand qualitatively how ChPT can also account for the description of DCC’s. A better approximation would be to solve the coupled equations for the $`\sigma `$ and $`\pi `$ fields, which yields the solution for $`\sigma (t)`$ in terms of elliptic functions \[kaiser\]. Therefore, in this simple picture, we take our $`f(t)`$ of the same form as the lowest order $`\sigma (t)`$ in the LSM, i.e., $$f(t)=f\left[1-\frac{q}{2}\left(\mathrm{cos}Mt-1\right)\right]$$ (13) Here, $`q`$ is a small parameter, playing the role of $`\sigma _0`$ in the LSM. Notice that our nonequilibrium chiral power counting demands $`qM^2=𝒪(p^2)`$ and so on. Thus, for definiteness, we will take $`q=𝒪(p^2/\mathrm{\Lambda }_\chi ^2)`$, so that the $`𝒪(p^4)`$ corrections remain under control (see below), and $`M`$ arbitrary. In the end, we will discuss how the results are affected by $`T_i`$, $`q`$ and $`M`$. Therefore, we have $`m^2(t)=-(qM^2/2)\mathrm{cos}Mt\,(1+𝒪(q))`$, so that the differential equation (10) becomes to leading order $$\left[\frac{4}{M^2}\frac{d^2}{dt^2}+\frac{4k^2}{M^2}-2q\,\mathrm{cos}Mt\right]G_0^>(k,t,t^{\prime })=0$$ (14) where we have Fourier transformed in the spatial coordinates only ($`k^2=|\vec{k}|^2`$). The above equation is nothing but the Mathieu equation (in standard form after rescaling to the dimensionless time $`Mt/2`$), which has several well-known interesting properties \[mac, abramo\]. Among them, it admits unstable solutions growing exponentially in time for certain values of $`4k^2/M^2`$. This is the simplest version of the parametric resonance mechanism. In particular, the instabilities develop in bands in $`k`$, centered at $`k_n=nM/2`$, of width $`\mathrm{\Delta }k_n=𝒪(q^n)`$. Hence, in the approximation we are working in, only the first band is relevant, i.e., unstable solutions only exist for $`M/2-\mathrm{\Delta }k_1<k<M/2+\mathrm{\Delta }k_1`$. This is known in the cosmology literature as the narrow resonance approximation \[linde\]. A typical unstable solution $`G_0(k,t)`$ has been plotted in Figure 1 for a particular choice of the parameters in the first band. The solutions typically oscillate with an exponentially growing amplitude inside the unstable region. Therefore, we see that our ChPT approach allows for DCC-type configurations. Next, we will apply our results for $`f_\pi (t)`$ to this particular case. The equal-time correlation function $`G_0(t)=\int d^3k\,G_0(k,t,t)`$ turns out to be UV divergent, as expected. After standard manipulations in dimensional regularisation ($`d=4-ϵ`$) one can cast the divergent part for $`t>0`$ as $$iG_0^{div}(t)=\frac{qM^2}{16\pi ^2}\mathrm{cos}Mt\left(\frac{1}{ϵ}+\frac{1}{2}\mathrm{log}\frac{\mu ^2}{M^2}\right)$$ (15) which is an example of the new time-dependent divergences discussed in previous sections. In fact, we see that it has exactly the same form as $`\ddot{f}(t)/f(t)`$. Furthermore, replacing it in (8)-(9), we find that the result is rendered finite and scale independent with the same renormalisation of $`L_{11}`$ derived in \[dole91\]. The final results for $`f_\pi ^s(t)`$ are plotted in Figure 2 for different choices of the parameters. We clearly observe the damping effect on the amplitude due to the unstable solutions at long times. In other words, the DCC’s accelerate thermalisation. We also observe that this mechanism becomes less efficient for smaller $`q`$ and $`M`$. Typically, the unstable corrections to the amplitude of $`f_\pi (t)`$ are proportional to $`(qM^2/4\pi f_\pi ^2)\mathrm{exp}(qMt)`$.
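As a quick numerical illustration of this band structure (not part of the original analysis; the parameter values below are arbitrary), one can integrate the Mathieu equation in its standard form $`y^{\prime \prime }+(a-2q\,\mathrm{cos}2\tau )y=0`$ with $`a=4k^2/M^2`$ and watch the first resonance band appear at $`k=M/2`$:

```python
import numpy as np
from scipy.integrate import solve_ivp

def mathieu_growth(a, q, tau_max=200.0):
    """Integrate y'' + (a - 2 q cos 2 tau) y = 0 and return max |y|;
    exponential growth signals a parametric-resonance band."""
    rhs = lambda t, y: [y[1], -(a - 2.0 * q * np.cos(2.0 * t)) * y[0]]
    sol = solve_ivp(rhs, (0.0, tau_max), [1.0, 0.0], max_step=0.01)
    return np.abs(sol.y[0]).max()

q = 0.1
for k_over_M in (0.3, 0.5, 0.8):      # a = 4 k^2 / M^2; first band at k = M/2
    print(k_over_M, mathieu_growth(4.0 * k_over_M ** 2, q))
# only k/M = 0.5 (a = 1) shows the exponential growth of the first band
```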
On the other hand, this effect seems to be rather insensitive to the initial temperature, and thus we expect to capture all the important qualitative behaviour concerning the DCC’s, regardless of the initial conditions. It should be pointed out that the curves have been cut off at the times where the one-loop contribution becomes of the same size as the tree level one. From that point onwards, the exponentially growing correlator dominates, yielding unphysical results. As commented above, we do not expect our simple cosine shape for $`f(t)`$ to be valid for all times, since it is derived neglecting the pion correlator. This final time $`t_f`$ roughly defines the applicability range of our results. We expect that this range is enough to account for all the plasma time evolution of a realistic RHIC. For instance, for $`M=1`$ GeV and $`q=0.1`$, we get $`t_f\simeq 35\,M^{-1}\simeq `$ 8 fm/c. This is exactly the same as extrapolating the equilibrium result (11) to predict the critical temperature at $`T=T_c=\sqrt{6}f_\pi `$, where all the higher order corrections become of the same order. Nonetheless, that formula predicts the right behaviour of $`f_\pi (T)`$ as it approaches the transition. In the same way, our results reproduce the expected qualitative behaviour as we extrapolate them up to times $`t\simeq t_f`$. Therefore, $`f_\pi (t)`$ can be regarded as an alternative observable (it is the residue of an axial-axial correlator and it can be measured in semileptonic decays) to test the size of DCC-like configurations in the late stage of the plasma expansion. ## Conclusions and Outlook We have reviewed recent work on the extension of ChPT to a nonequilibrium situation. The NLSM with a time-dependent pion decay constant provides a nonequilibrium effective model with a well-defined perturbative expansion and power counting near equilibrium. The analogy of this model with curved space-time QFT allows one to consistently construct higher order lagrangians and implement renormalisation. As a first application, we have obtained the renormalised one-loop $`f_\pi (t)`$. We have also shown how this model can be applied to describe DCC-like structures in the late stage of the expansion of a hot plasma formed after a RHIC. Work in progress includes a more realistic study of the parametric resonances, consistently including the pion correlations in $`f(t)`$ and calculating the correlation length and the number of pions. Including pion masses, extending the results to three flavours, using large $`N`$ methods, and pursuing cosmological applications are other interesting aspects to be investigated in this context. ## Acknowledgments I wish to thank the organisers of the “Hadron Physics” conference and the Theory group in Coimbra for their kind help and hospitality. Financial support from CICYT, Spain, project AEN97-1693, is also acknowledged.
# WORKING GROUP SUMMARY: ISOSPIN VIOLATION ## ISOSPIN VIOLATION: GENERAL ASPECTS Isospin symmetry was introduced in the thirties by Heisenberg in his studies of the atomic nucleus. Since then, many particles have been found to appear in iso–multiplets, like the nucleons, the pions, the delta isobars, and so on. With the advent of QCD, a deeper understanding of isospin symmetry has emerged. In the limit of equal up and down current quark masses and in the absence of electroweak interactions, isospin is an exact symmetry of QCD. The intra–multiplet mass splittings allow one to quantify the breaking of this symmetry, which is caused by different mechanisms (for a detailed review, see ref.). First, the light quark masses are anything but equal (still, their absolute masses are much smaller than any other QCD scale and thus this breaking can be treated as a perturbation). Second, the light quarks have different charges and thus react differently to the electromagnetic (em) interactions. The em effects are also small since they are proportional to the fine structure constant $`\alpha =e^2/4\pi \simeq 1/137`$. In the case of the pions, the mass splitting is almost entirely of em origin. This can be traced back to the absence of d–like couplings in SU(2), thus promoting the quark mass difference to a second order effect. For the nucleons, matters are different: strong and em effects are of similar size but different signs. The fact that the neutron is heavier than the proton leads to the conclusion that $`m_d>m_u`$, consistent with the analysis of the kaon masses. We know that isospin is broken – so why bother? First, the picture that has emerged from the hadron masses cannot be considered complete; there is still on–going discussion about the size of the violation of Dashen’s theorem, the possibility of a vanishing up quark mass to solve the strong CP problem and “strange” results from lattice gauge theory. Also, the analysis of the quark mass dependence of the baryon masses remains to be improved (for a classic, see ref., and for a recent study, see ref.). Furthermore, only a few dynamical implications of isospin violation have been verified experimentally and a truly quantitative picture has not yet emerged. In addition, the nucleus as a many–body system offers a novel laboratory to study isospin violation. Finally, with the advent of CW electron accelerators and improved detectors, we now have experimental tools to measure threshold pion photoproduction with an unprecedented accuracy. ## THE PION SECTOR The purely mesonic sector was not touched upon in this working group, but there is one recent result which I would like to discuss. In elastic $`\pi \pi `$ scattering, the chiral perturbation analysis has been carried out to two loops. It was demonstrated in refs. that the em isospin–violating effects are of the same size as the hadronic two–loop corrections. For a precise description of low energy pion reactions, it is thus mandatory to include such effects consistently. A somewhat surprising result was found in the case of the scalar and the vector form factor of the pion in ref.. It was shown that the em corrections to the momentum–dependence of both form factors are tiny (due to large cancellations between various contributions), much smaller than the corresponding hadronic two–loop contributions worked out in refs.. This result remains to be understood in more detail.
It is particularly surprising for the scalar form factor since it is not protected by a conserved current theorem à la Ademollo–Gatto. Only the normalization of the scalar form factor exhibits the few percent em corrections anticipated from the study of the $`\pi \pi `$ scattering lengths. Note, however, that the smallness of the effects of the light quark mass difference for the pion form factors has been known and understood since long. ## THE PION–NUCLEON SECTOR The pion–nucleon system plays a particular role in the study of isospin violation. First, the explicit chiral symmetry breaking and isospin breaking operators appear at the same order in the effective Lagrangian which maps out the symmetry breaking part of the QCD Hamiltonian, i.e. the quark mass term (restricted here to the two lightest flavors), $$\mathcal{H}_{\mathrm{QCD}}^{\mathrm{sb}}=m_u\overline{u}u+m_d\overline{d}d=\frac{1}{2}(m_u+m_d)(\overline{u}u+\overline{d}d)+\frac{1}{2}(m_u-m_d)(\overline{u}u-\overline{d}d),$$ (1) so that the strong isospin violation is entirely due to the isovector term, whereas the isoscalar term leads to the explicit chiral symmetry breaking. In the presence of nucleons (and in contrast to the pion case), both breakings appear at the same order. This can lead to sizeable isospin violation, as first stressed by Weinberg for reactions involving neutral pions. Let me perform some naive dimensional analysis for the general case (say for any given channel in $`\pi N`$ scattering that is not suppressed to leading order). Isospin–violation (IV) should be of the size $$\mathrm{IV}\sim \frac{m_d-m_u}{\mathrm{\Lambda }_{\mathrm{hadronic}}}\sim \frac{m_d-m_u}{M_\rho }=𝒪(1\%),$$ (2) where the mass of the $`\rho `$ sets the scale for the non–Goldstone physics. In the presence of a close–by and strongly coupled baryonic resonance like the $`\mathrm{\Delta }(1232)`$, IV might be enhanced, $$\mathrm{IV}\sim \frac{m_d-m_u}{m_\mathrm{\Delta }-m_N}=𝒪(2\%).$$ (3) Of course, such type of arguments cannot substitute for full scale calculations. Second, there are two analyses which seem to indicate a fair amount of isospin violation (of the order of 6…7%, which is much bigger than the dimensional arguments given above would indicate) in low–energy $`\pi N`$ scattering, see Gibbs’ talk. This cannot be explained in conventional meson–exchange models by standard meson mixing mechanisms. I would also like to mention that in these two analyses the hadronic and the electromagnetic contributions are derived from different models. This might cause some concern about possible uncertainties due to a theoretical mismatch. Clearly, it would be preferable to use here one unique framework. That can, in principle, be supplied by chiral perturbation theory, since electromagnetic corrections can be included systematically by a straightforward extension of the power counting. This is most economically done by counting the electric charge as a small parameter, i.e. on the same footing as the external momenta and meson masses. The heavy baryon chiral perturbation theory machinery to study these questions to complete one–loop (fourth) order has been set up, as shown by Müller. It is important to perform such calculations to fourth order since one–loop graphs appear at dimension three and four. Furthermore, it is known from many studies that one–loop diagrams with exactly one insertion from the dimension two $`\pi N`$ Lagrangian are (often) important. Finally, symmetry breaking (chiral and isospin) in the loops only starts at fourth order.
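For orientation, the arithmetic behind eqs. (2) and (3) is trivial to check; in the sketch below the quark mass difference is a rough, assumed value.

```python
# Back-of-envelope estimates of eqs. (2)-(3); masses in MeV, and
# m_d - m_u ~ 5 MeV is an assumed, rough value.
md_minus_mu = 5.0
M_rho = 770.0
dM_Delta_N = 1232.0 - 939.0
print(md_minus_mu / M_rho)         # ~0.006, i.e. O(1%)
print(md_minus_mu / dM_Delta_N)    # ~0.017, i.e. O(2%)
```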
In particular, questions surrounding the $`\pi N`$ $`\sigma `$–term or neutral pion scattering off nucleons can now be addressed to sufficient theoretical precision. A first step in this direction for all channels in $`\pi N`$ scattering was reported by Fettes, but a full scale one–loop calculation including all virtual photon effects still has to be done. Of particular interest is the novel relation between $`\pi ^0`$ and $`\pi ^\pm `$ scattering off protons that is extremely sensitive to isospin violation. It should also be stressed that for such tests, it is mandatory to better measure and determine the small isoscalar $`\pi N`$ amplitudes. Also, the relations which include the much bigger isovector amplitudes show IV consistent with the dimensional arguments given in eq.(2). I consider the “ordering schemes” discussed by Gibbs and Fettes very useful tools to pin down the strengths and sources of isospin breaking in $`\pi N`$ scattering. This also allows one to see a priori which type of measurements are necessary to obtain complete information and to what extent various reactions can give redundant information (one example is discussed by Gibbs). Intimately related to this is pion–photoproduction via the final–state theorem, i.e. certain $`\pi N`$ scattering phases appear in the imaginary part of the respective charged or neutral pion photoproduction multipoles. Bernstein stressed that in neutral pion photoproduction off protons, there are two places to look for isospin violation. One is below the $`\pi ^+n`$ threshold, which might give access to the elusive (but important) $`\pi ^0p`$ scattering length. At present, it does not appear that the original proposal of measuring the target polarization below the $`\pi ^+n`$ threshold to high precision is feasible at a machine like e.g. MAMI. The other important effect, which appears to be more easily accessible to an experiment, is the strength of the cusp at the opening of the $`\pi ^+n`$ threshold, which according to Bernstein’s three–channel S–matrix analysis is quite sensitive to isospin violation. Such a calculation should also be done in the framework of heavy baryon chiral perturbation theory (beyond the charged to neutral pion mass difference effects included so far). Over the last years, there has been a very fruitful interplay between experimenters and theorists, particularly in the field of pion photo– and electroproduction, and it is of utmost importance to further strengthen this. It is a theorist’s dream that reactions with neutral pions (elastic scattering and photoproduction) will be measured to a high precision. An important point was stressed by Lewis. In a “toy” calculation (i.e. an SU(2) approach to the strange vector form factor of the nucleon, which is clearly related to three flavor QCD), he showed that isospin–breaking effects can simulate a “strange” form factor that intrinsically vanishes in that approach. This nicely demonstrates that to reliably determine small quantities, be they related to isospin conserving or violating operators, all possible effects have to be included. The recent measurements at BATES and JLAB, which seem to indicate small expectation values of the strange vector current in the proton, should therefore be reanalyzed. In this case, isospin violation appears to be a nuisance but cannot be ignored.
For exact isospin symmetry (i.e. after removing the Coulomb corrections), the pertinent cross sections should be equal (up to a Clebsch). In the threshold region, one can make a partial wave expansion and finds that the S–wave contribution $`\alpha _0`$ shows IV of the order of 10%, while no effect is observed in the P–wave terms. To my knowledge, a theoretical understanding of this effect is lacking. Despite a huge amount of effort over the last years, a model–independent effective field theory description of pion production in proton–proton collisions has not yet been obtained. The energies involved to even produce a pion at rest are too large for the methods employed so far. More progress, however, has been made in the two–nucleon system at low energies. It is well known that IV appears in the NN scattering lengths. In the nuclear jargon, one talks about charge independence breaking (CIB) ($`a_{np}-(a_{pp}+a_{nn})/2`$ after Coulomb subtraction, where $`a`$ denotes the scattering length) and charge symmetry breaking (CSB) ($`a_{pp}-a_{nn}`$ after Coulomb subtraction). These effects are naturally most pronounced at threshold. Kaplan, Savage and Wise (KSW) have proposed a non–perturbative scheme that allows for power counting on the level of the nucleon–nucleon scattering amplitude. In that framework, IV (CIB and CSB) has recently been investigated. It was shown that isospin violation can be systematically included in the effective field theory approach to the two–nucleon system in the KSW formulation. For that, one has to construct the most general effective Lagrangian containing virtual photons and extend the power counting accordingly. This framework allows one to systematically classify the various contributions to CIB and CSB. In particular, the power counting combined with dimensional analysis allows one to understand the suppression of contributions from a possible charge–dependence in the pion–nucleon coupling constants. Including the pions, the leading CIB effects are the pion mass difference in one–pion exchange together with a four–nucleon contact term. These effects scale as $`\alpha Q^2`$, where $`Q\simeq 1/3`$ is the genuine expansion parameter of the KSW scheme. Power counting lets one expect that the much debated contributions from two–pion exchange and $`\pi \gamma `$ graphs are suppressed by factors of $`1/3`$ and $`(1/3)^2`$, respectively. This is in agreement with some, but not all, previous more model–dependent calculations. The leading charge symmetry breaking is simply given by a four–nucleon contact term. ## LIGHT NUCLEI Often, the nucleus can be used as a filter to enhance or suppress certain features of reactions as they appear in free space. Furthermore, measurements on the neutron, which are necessary to get the complete information in the isospin basis (for a discussion on this topic with respect to pion photoproduction, see ref.), can only be done on (preferably polarized) light nuclei. Gibbs has pointed out that a measurement of charge exchange on the proton and the neutron (in forward direction and close to the interference minimum near 45 MeV) could be done in the <sup>3</sup>He–triton system. This would be an interesting possibility to get another handle on the elusive neutron and would allow one to pin down one of the amplitudes parametrizing IV (according to the ordering scheme mentioned above). For a more detailed discussion concerning the extraction of neutron properties from the deuteron, I refer to the recent summary by Beane.
## WHERE DO WE STAND AND WHERE TO GO For sure, isospin symmetry is broken. However, do we precisely know the size of IV from experiment? The answer is yes and no. We have some indicative information, but no systematic investigations of all pertinent low energy reactions are available. Also, one might ask whether the methodology, which has been used so far to extract numbers on IV, say from low energy $`\pi N`$ scattering data, is reliable. If we assume that this is the case, we still have no deeper understanding of the mechanisms triggering IV. To my knowledge, the only machinery to consistently separate strong and electromagnetic IV is based on effective field theory. In that scheme, one can consider various reactions like elastic $`\pi N`$ scattering, pion photoproduction or even nucleon Compton scattering to try to get a handle on the symmetry–breaking operators proportional to $`m_d-m_u`$. Also, a systematic treatment of isospin violation is mandatory for the determination of small quantities like the isoscalar S–wave scattering length or the strange nucleon form factors. Based on that, I have the following wish list for theory and experiment: THEORY: 1. The effective chiral Lagrangian calculations can and need to be improved. In particular, it is most urgent to get a handle on the so–called low–energy constants, which parametrize the effective Lagrangian beyond leading order. Sum rules, models or even the lattice might be useful here. 2. A deeper theoretical understanding of certain phenomenological models (like e.g. the extended tree level model of ref.) in connection with the approaches to correct for Coulomb effects would be helpful. 3. The dispersion–theoretical approach should be revisited and set up in a way to properly include IV (beyond what has been done so far). For some first steps, see the talk by Oades. EXPERIMENT: 1. Clearly, we need more high precision data for the elementary processes, not only for $`\pi N`$ scattering but also for (neutral) pion photo/electroproduction. 2. More precise nuclear data are also needed. Embedding the elementary reactions in the nucleus as a filter allows one to get information on the elusive neutron properties. Clearly, this refers to few–nucleon systems where precise theoretical calculations are possible. Finally, I would like to stress again that a truly quantitative understanding of isospin violation can only be obtained by considering a huge variety of processes. While pion–nucleon scattering is at the heart of these investigations, threshold pion photoproduction or the nucleon form factors also play a vital role in supplying additional information. ## ACKNOWLEDGEMENTS I would like to thank all participants of this working group for their contributions. I am grateful to my collaborators Evgeny Epelbaum, Nadia Fettes, Bastian Kubis, Guido Müller and Sven Steininger for sharing with me their insight into this topic. Last but not least, the superb organization by Christine Kunz and Res Badertscher is warmly acknowledged.
# Colloids with polymer stars: The interaction Ch. von Ferber$`{}^{a}`$, Yu. Holovatch$`{}^{b}`$, A. Jusufi$`{}^{a}`$, C.N. Likos$`{}^{a}`$, H. Löwen$`{}^{a}`$ and M. Watzlawek$`{}^{a}`$ $`{}^{a}`$ Institut für Theoretische Physik II, Heinrich-Heine-Universität Düsseldorf, D-40225 Düsseldorf, Germany $`{}^{b}`$ Institute for Condensed Matter Physics, Ukrainian Acad. Sci., 1 Svientsitskii Str., UA-290011, Lviv, Ukraine We derive the short distance interaction of star polymers in a colloidal solution. We calculate the corresponding force between two stars with arbitrary numbers of legs $`f_1`$ and $`f_2`$. We show that a simple scaling theory originally derived for high $`f_1,f_2`$ nicely matches the results of an elaborate renormalization group analysis for $`f_1+f_2\le 6`$, generalizing and confirming a previous conjecture based only on scaling results for $`f_1=f_2=1,2`$. Star polymers, i.e. structures of polymer chains that are chemically linked with one end to a common core, have found recent interest as very soft colloidal particles. With increasing number of chains $`f`$ they interpolate between the properties of linear polymers and polymeric micelles. For large $`f`$ the effective interaction between the cores of different polymer stars becomes strong enough to allow for crystal ordering of a dense solution of stars. While such a behavior was already predicted by early scaling arguments, only recently have corresponding experiments become feasible with sufficiently dense stars. In addition, theory and computer simulation have refined the original estimate for the number of chains $`f`$ necessary for a freezing transition from $`f\simeq 100`$ to $`f\simeq 34`$ and predicted a rich phase diagram. These results were derived using an effective pair potential between stars with a short distance behavior derived from scaling theory, as explained below. The scaling theory of polymers was significantly advanced by de Gennes’ observation that the $`n`$-component spin model of magnetic systems applies to polymers in the formal $`n=0`$ limit. This opened the way to apply renormalization group (RG) theory to explain the scaling properties of polymer solutions, which have been the subject of experimental and theoretical investigations since the pioneering works in this field. Many details of the behavior of polymer solutions may be derived using the RG analysis. Here, we use only the more basic results of power law scaling: the radius of gyration $`R_g(N)`$ of a polymer chain and the partition sum $`𝒵(N)`$ are found to obey the power laws $$R_g(N)\sim N^\nu \quad \text{and}\quad 𝒵(N)\sim z^NN^{\gamma -1}$$ (1) with some fugacity $`z`$ that counts the number of possibilities to locally add one more monomer to the chain, while the two exponents $`\nu `$ and $`\gamma -1`$ represent the nontrivial corrections that are due to the self-avoiding interaction, which is non-local along the chain. These exponents are the $`n=0`$ limits of the correlation length exponent $`\nu (n)`$ and the susceptibility exponent $`\gamma (n)`$ of the $`n`$-component model. The exponents of any other power law for linear polymers may be expressed in terms of these two exponents via scaling relations. It has been shown that the $`n=0`$-component spin model can be extended to describe polymer networks and in particular star polymers.
A family of additional exponents $`\gamma _f`$ governs the scaling of the partition sums $`𝒵_f(N)`$ of polymer stars of $`f`$ chains, each with $`N`$ monomers: $$𝒵_f(N)\sim z^{fN}N^{\gamma _f-1}.$$ (2) Again, the exponents of any other power law for more general polymer networks are given by scaling relations in terms of $`\gamma _f`$ and $`\nu `$. For details of a proof and necessary restrictions on the chain length distributions see. For large $`f`$ each chain of the star is restricted approximately to a cone of solid angle $`\mathrm{\Omega }_f=4\pi /f`$. In this cone approximation one finds for large $`f`$ $$\gamma _f-1\sim -f^{3/2}.$$ (3) Let us now turn to the effective interaction between the cores of two star polymers at small distance (small on the scale of the size $`R_g`$ of the star). In the formalism of the $`n=0`$ component model, the core of a star polymer corresponds to a local product of $`f`$ spin fields $`\varphi _1(𝐱)\cdots \varphi _f(𝐱)`$, each representing the endpoint of one polymer chain. The probability of approach of the cores of two star polymers at small distance $`r`$ results in these terms from a short distance expansion for the composite fields. Let us consider the general case of two stars of different functionalities $`f_1`$ and $`f_2`$. The power law for the partition sum $`𝒵_{f_1f_2}^{(2)}`$ of two such star polymers at distance $`r`$, $$𝒵_{f_1f_2}^{(2)}(r)\sim r^{\mathrm{\Theta }_{f_1f_2}^{(2)}},$$ (4) is governed by the contact exponent $`\mathrm{\Theta }_{f_1f_2}^{(2)}`$. Then the short distance expansion provides the scaling relations $$\nu \mathrm{\Theta }_{f_1f_2}=(\gamma _{f_1}-1)+(\gamma _{f_2}-1)-(\gamma _{f_1+f_2}-1),\qquad \mathrm{\Theta }_{f_1f_2}=\eta _{f_1}+\eta _{f_2}-\eta _{f_1+f_2}.$$ (5) Here, we have substituted an equivalent family of exponents $`\eta _f`$ to replace $`\gamma _f`$, related by $`\gamma _f-1=\nu (\eta _f-f\eta _2/2)`$. The mean force $`F_{f_1f_2}^{(2)}(r)`$ between two star polymers at short distance $`r`$ is easily derived from the effective potential $`V_{f_1f_2}^{\mathrm{eff}}(r)=-\mathrm{log}𝒵_{f_1f_2}^{(2)}(r)`$ as $$F_{f_1f_2}^{(2)}(r)=\frac{\mathrm{\Theta }_{f_1f_2}^{(2)}}{r},\text{ with }\mathrm{\Theta }_{ff}^{(2)}\approx \frac{5}{18}f^{3/2}.$$ (6) The factor $`5/18`$ is found by matching the cone approximation for $`\mathrm{\Theta }_{ff}^{(2)}`$ to the known values of the contact exponents for $`f=1,2`$. This matching in turn proposes an approximate value for the $`\eta _f`$ exponents, $$\eta _f\approx -\frac{5}{18}\left(2^{3/2}-2\right)^{-1}f^{3/2}.$$ (7) Note that this is inconsistent with the exact result $`\eta _1=0`$. However, this assumption nicely reproduces the contact exponents as derived from 3-loop perturbation theory, as displayed in Table 1. In Table 1 we have used the approximate values of $`\eta _f`$ to calculate the cone estimation of the contact exponents and compare these with the corresponding values of a renormalization group calculation. We use here a perturbation series in terms of an expansion in the parameter $`ϵ=4-d`$, where $`d`$ is the space dimension. In $`d=4`$ the theory becomes trivial, as is intuitively understood, because in $`d=4`$ dimensions a random walk (polymer) never meets itself and thus the nontrivial self-avoiding interaction vanishes. The expansion in $`ϵ`$ is thus an expansion starting from the theory that describes the polymer as a non-interacting random walk. The result labeled ‘1-loop’ corresponds to optimal truncation of the series, simply inserting $`ϵ=1`$, i.e. $`d=3`$, in the first order term of the expansion. Table 1: The prefactor $`\mathrm{\Theta }_{f_1f_2}`$ of the force between two star polymers at short distance calculated in non-resummed 1-loop and resummed 3-loop RG analysis, in comparison to the result of the cone approximation.
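As a small check on this matching, the cone-approximation contact exponents can be tabulated directly; the sketch below fixes the coefficient of eq. (7) by requiring $`\mathrm{\Theta }_{ff}=2\eta _f-\eta _{2f}=\frac{5}{18}f^{3/2}`$, as described in the text.

```python
# Cone-approximation contact exponents; A is fixed by the matching
# Theta_ff = 2*eta_f - eta_{2f} = (5/18) f^(3/2) described in the text.
A = (5.0 / 18.0) / (2.0 - 2.0 ** 1.5)   # negative coefficient of eq. (7)

def eta(f):
    return A * f ** 1.5

def theta(f1, f2):
    """Contact exponent Theta_{f1 f2} = eta_{f1} + eta_{f2} - eta_{f1+f2}."""
    return eta(f1) + eta(f2) - eta(f1 + f2)

print(theta(1, 1))                 # 5/18 ~ 0.278, the equal-f prefactor
print(theta(1, 2), theta(2, 2))    # mixed and larger functionalities
```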
The 3-loop result includes a resummation procedure that takes into account the asymptotic nature of the series. The large $`f`$ result corresponds to the cone approximation. To summarize, we have shown that the large $`f`$ approximation for the short distance force between two star polymers can be consistently fitted to the results of perturbation theory for low values of the functionality $`f`$ of the stars. We have at the same time generalized the approach to the interaction between two stars of different functionalities $`f_1`$ and $`f_2`$. This is essential in extending the theory of colloidal solutions of star polymers to general polydispersity in $`f`$, as it appears naturally in any real experiment. Acknowledgements We acknowledge helpful discussions with L. Schäfer. We thank the Deutsche Forschungsgemeinschaft for support within SFB 237. Yu.H. gratefully acknowledges support by the Deutsche Akademische Austauschdienst.
# Dark Solitons in Bose-Einstein Condensates ## Abstract Dark solitons in cigar-shaped Bose-Einstein condensates of $`{}^{87}`$Rb are created by a phase imprinting method. Coherent and dissipative dynamics of the solitons has been observed. The realization of Bose-Einstein condensation (BEC) of weakly interacting atomic gases strongly stimulates the exploration of nonlinear properties of matter waves. This supports the new field of nonlinear atom optics, e.g., four wave mixing in BEC’s, as well as the study of various types of excitations. Of particular interest are macroscopically excited Bose condensed states, such as vortices and solitons. Vortices, well known from the studies of liquid helium, have recently been observed in two-component gaseous condensates. Soliton-like solutions of the Gross-Pitaevskii equation are closely related to similar solutions in nonlinear optics describing the propagation of light pulses in optical fibres. Here, bright soliton solutions correspond to short pulses where the dispersion of the pulse is compensated by the self-phase modulation, i.e., the shape of the pulse does not change. Similarly, optical dark solitons correspond to intensity minima within a broad light pulse. In the case of nonlinear matter waves, bright solitons are only expected for an attractive interparticle interaction ($`s`$-wave scattering length $`a<0`$), whereas dark solitons, also called “kink-states”, are expected to exist for repulsive interactions ($`a>0`$). Recent theoretical studies discuss the dynamics and stability of dark solitons as well as concepts for their creation. Conceptually, solitons as particle-like objects provide a link of BEC physics to fluid mechanics, nonlinear optics and fundamental particle physics. In this Letter we report on the experimental investigation of dark solitons in cigar-shaped Bose-Einstein condensates in a dilute vapor of $`{}^{87}`$Rb. Low lying excited states are produced by imprinting a local phase onto the BEC wavefunction. By monitoring the evolution of the density profile we study the subsequent dynamics of the wavefunction. The evolution of density minima travelling at a smaller velocity than the speed of sound in the trapped condensate is observed. By comparison to analytical and numerical solutions of the 3D Gross-Pitaevskii equation for our experimental conditions we identify these density minima as moving dark solitons. In our experiment, a highly anisotropic confining potential leading to a strongly elongated shape of the condensate allows us to be close to the (quasi) 1D situation where dark solitons are expected to be dynamically stable. Parallel to this work, soliton-like states in nearly spherical BEC’s of $`{}^{23}`$Na are investigated at NIST. Dark solitons in matter waves are characterized by a local density minimum and a sharp phase gradient of the wavefunction at the position of the minimum (see Fig. 1a,b). The shape of the soliton does not change. This is due to the balance between the repulsive interparticle interaction, which tries to fill in the minimum, and the phase gradient, which tries to enhance it. The macroscopic wavefunction of a dark soliton in a cylindrical harmonic trap forms a plane of minimum density (DS-plane) perpendicular to the symmetry axis of the confining potential. Thus, the corresponding density distribution shows a minimum at the DS-plane with a width of the order of the (local) correlation length.
A dark soliton in a homogeneous BEC of density $`n_0`$ is described by the wavefunction (see and references therein) $$\mathrm{\Psi }_k(x)=\sqrt{n_0}\left(i\frac{v_k}{c_s}+\sqrt{1-\frac{v_k^2}{c_s^2}}\mathrm{tanh}\left[\frac{x-x_k}{l_0}\sqrt{1-\frac{v_k^2}{c_s^2}}\right]\right),$$ (1) with the position $`x_k`$ and velocity $`v_k`$ of the DS-plane, the correlation length $`l_0=(4\pi an_0)^{-1/2}`$, and the speed of sound $`c_s=\sqrt{4\pi an_0}\hbar /m`$, where $`m`$ is the atom mass. For $`T=0`$ in 1D, dark solitons are stable. In this case, only solitons with zero velocity in the trap center do not move; otherwise they oscillate along the trap axis . However, in 3D at finite $`T`$, dark solitons exhibit thermodynamic and dynamical instabilities. The interaction of the soliton with the thermal cloud causes dissipation which accelerates the soliton. Ultimately, it reaches the speed of sound and disappears . The dynamical instability originates from the transfer of the (axial) soliton energy to the radial degrees of freedom and leads to the undulation of the DS-plane, and ultimately to the destruction of the soliton. This instability is essentially suppressed for solitons in cigar-shaped traps with a strong radial confinement , such as in our experiment . As can be seen from Eq.(1), the local phase of the dark soliton wave function varies only in the vicinity of the DS-plane, $`x\approx x_k`$, and is constant in the outer regions, with a phase difference $`\mathrm{\Delta }\varphi `$ between the parts to the left and right of the DS-plane (see, e.g., Fig.1b). To generate dark solitons we apply the method of phase imprinting , which also allows one to create vortices and other textures in BEC’s. We apply a homogeneous potential $`U_{int}`$, generated by the dipole potential of a far detuned laser beam, to one half of the condensate wavefunction (Fig.1c). The potential is pulsed on for a time $`t_p`$, such that the wavefunction locally acquires an additional phase factor $`e^{i\mathrm{\Delta }\varphi }`$, with $`\mathrm{\Delta }\varphi =U_{int}t_p/\hbar \approx \pi `$. The pulse duration is chosen to be short compared to the correlation time of the condensate, $`t_c=\hbar /\mu `$, where $`\mu `$ is the chemical potential. This ensures that the effect of the light pulse is mainly a change of the phase of the BEC, whereas changes of the density during this time can be neglected. Note, however, that due to the imprinted phase, at larger times one expects an adjustment of the phase and density distribution in the condensate. This will lead to the formation of a dark soliton and also to additional structures as discussed below. In our experimental setup (see ), condensates containing typically $`1.5\times 10^5`$ atoms in the ($`F`$=2, $`m_F`$=+2)-state, with less than $`10\%`$ of the atoms being in the thermal cloud, are produced every 20 s. The fundamental frequencies of our static magnetic trap are $`\omega _x=2\pi \times 14`$ Hz and $`\omega _{\perp }=2\pi \times 425`$ Hz along the axial and radial directions, respectively. The condensates are cigar-shaped with the long axis ($`x`$-axis) oriented horizontally. For the phase imprinting potential $`U_{int}`$, a blue detuned, far off resonant laser field ($`\lambda =`$532 nm) of intensity $`I\approx 20`$ W/mm<sup>2</sup> pulsed for a time $`t_p=20\mu `$s results in a phase shift $`\mathrm{\Delta }\varphi `$ of the order of $`\pi `$ . Spontaneous processes can be totally neglected.
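As an aside, eq. (1) is easy to explore numerically. The short sketch below (a minimal illustration, not analysis code; all parameter values are arbitrary dimensionless choices) tabulates the density and phase profile of the soliton, showing the density dip down to $`n_0v_k^2/c_s^2`$ and the total phase jump $`2\mathrm{arccos}(v_k/c_s)`$ across the DS-plane, both of which follow directly from eq. (1).

```python
import numpy as np

def dark_soliton(x, xk=0.0, vk=0.4, cs=1.0, l0=1.0, n0=1.0):
    """Wavefunction of eq. (1); all quantities in arbitrary units."""
    beta = np.sqrt(1.0 - (vk / cs) ** 2)
    return np.sqrt(n0) * (1j * vk / cs + beta * np.tanh((x - xk) / l0 * beta))

x = np.linspace(-10.0, 10.0, 401)
psi = dark_soliton(x)
density, phase = np.abs(psi) ** 2, np.angle(psi)
print("density at the dip:", density.min())        # -> n0*(vk/cs)^2 = 0.16
print("phase jump:", abs(phase[-1] - phase[0]))    # -> 2*arccos(0.4) ~ 2.32
```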
A high quality optical system is used to image an intensity profile onto the BEC, nearly corresponding to a step function with a width of the edge, $`l_e`$, smaller than 3$`\mu `$m (see Fig.1c). The corresponding potential gradient leads to a force transferring momentum locally to the wave function and supporting the creation of a density minimum at the position of the DS-plane for the dark soliton. Note that also the velocity of the soliton depends on $`l_e`$ (see Fig.3c). After applying the dipole potential we let the atoms evolve within the magnetic trap for a variable time $`t_{ev}`$. We then release the BEC from the trap (switched off within $`200\mu `$s) and take an absorption image of the density distribution after a time-of-flight $`t_{TOF}=4`$ ms (reducing the density in order to get a good signal-to-noise ratio in the images). In a series of measurements we have studied the creation and subsequent dynamics of dark solitons as a function of the evolution time and the imprinted phase. Fig.2 shows density profiles of the atomic clouds for different evolution times in the magnetic trap, $`t_{ev}`$. The potential $`U_{int}`$ has been applied to the part of the BEC with $`x<0`$. For this measurement the potential strength was estimated to correspond to a phase shift of $`\pi `$. For short evolution times the density profile of the BEC shows a pronounced minimum (contrast about 40%). After a time of typically $`t_{ev}\approx 1.5`$ ms a second minimum appears. Both minima (contrast about 20% each) travel in opposite directions and in general with different velocities. Fig.3a shows the evolution of these two minima in comparison to theoretical results obtained numerically from the 3D Gross-Pitaevskii equation. One of the most important results of this work is that both structures move with velocities which are smaller than the speed of sound ($`c_s\approx 3.7`$ mm/s for our parameters) and depend on the applied phase shift. Therefore, the observed structures are different from sound waves in a condensate as first observed at MIT . We identify the minimum moving slowly in the negative $`x`$-direction to be the DS-plane of a dark soliton. We have performed series of measurements with different parameter sets for $`l_e`$ and the product of laser intensity and imprinting time. The velocity of the dark soliton could thereby be varied between $`v_k=2.0`$ mm/s (Fig.3a) and $`v_k=3.0`$ mm/s. For fixed $`l_e`$, the velocity $`v_k`$ decreases with increasing $`\mathrm{\Delta }\varphi `$. An increase of $`\mathrm{\Delta }\varphi \approx \pi /2`$ by a factor of 1.5 results in a decrease of $`v_k`$ by 10%, in agreement with theoretical results (see Fig.3b,c). For a significantly reduced imprinted phase we did not observe any dark soliton structures. For higher imprinted phase values more complex structures with several density minima were observed. In addition to the dark soliton, the dipole potential creates a density wave travelling in the positive $`x`$-direction with a velocity close to $`c_s`$. After opening the trap, a complex dynamics results in the appearance of a second minimum behind the density wave as explained below. To understand the generation of dark solitons and their behavior in the initial stages of the evolution we have performed numerical simulations of the 3D Gross-Pitaevskii equation. Computing time limitations have restricted our studies to atom numbers below $`5\times 10^4`$. The simulations describe well the case $`T=0`$, but ignore the effects of thermodynamic instability.
The latter was analyzed by using a generalization of the theory of Ref. . Our theoretical findings are summarized as follows: 1. The results of the simulations agree well with the experimental observations. After applying a phase changing potential, a dark soliton moves in the negative $`x`$-direction (Fig.4a). The generation of the soliton by the phase imprinting method is accompanied by a density wave moving in the opposite direction. The maximum of the density wave travels with a velocity $`c_s`$, independently of the values of $`\mathrm{\Delta }\varphi `$ and $`l_e`$. A characteristic time for the creation of the soliton is of order $`l_0/c_s`$ and in our case it does not exceed fractions of a ms. Note that after this time the soliton-related phase slip in the wavefunction is affected by a complex dynamics of the soliton generation and will be different from the imprinted $`\mathrm{\Delta }\varphi `$. 2. For a fixed $`l_e`$, the increase of $`\mathrm{\Delta }\varphi `$ from $`\pi `$ to $`2\pi `$, $`3\pi `$, … leads to the creation of double, triple solitons etc. BEC’s with several dark solitons were also observed experimentally and are currently being investigated in detail. 3. The initial soliton velocity decreases as $`l_e\to 0`$ (Fig. 3c). As observed experimentally, typical soliton velocities (for $`l_e<3\mu `$m) are smaller than $`c_s`$ and grow with the number of atoms. Velocities of the accompanying density waves are close to $`c_s`$. These waves move away from the center of the trap, broaden and eventually vanish (Fig.4). This is in contrast to solitons, which are expected to oscillate in the trap, retaining their width and absolute depth. However, the observation of these oscillations would require longer lifetimes of the solitons (limited by dissipation to $`\sim 10`$ ms, see below). Within this time scale we find no signatures of dynamical instability and only reveal a moderate change ($`<10`$%) of the soliton velocity, in agreement with our experiments. 4. The situation changes after opening the trap and allowing the condensate to ballistically expand in the radial direction. The simulation shows that the soliton velocity decreases rapidly, while its width grows. To understand this aspect we have used a scaling approach, similar to that of , for the radial ballistic expansion of an infinitely elongated condensate containing a moving kink. This approach (valid for $`\omega _{\perp }^{-1}\ll t_{TOF}\ll \mu /\hbar \omega _{\perp }^2`$) predicts a soliton velocity $`v_k(t_{TOF})\simeq v_k(t_{ev})\mathrm{ln}(2\omega _{\perp }t_{TOF})/\omega _{\perp }t_{TOF},`$ where $`v_k(t_{ev})`$ is the soliton velocity at $`t_{ev}`$ just before switching off the trap. This result agrees very well with both experimental data and numerical simulations for a finite size BEC. Moreover, in a short time $`t_{TOF}\sim \omega _{\perp }^{-1}`$ after switching off the trap the density develops a second minimum located between the density wave and the dark soliton. This minimum has a width and depth comparable to those of the dark soliton (see Fig.4b). Its position, regarded as a function of $`t_{ev}`$, moves with a constant velocity similar to that of the soliton. The creation of this second minimum is a coherent phenomenon and can be attributed to a dynamically acquired phase of the wave function in the region between the density wave and the dark soliton. The results of the experiment also show clear signatures of the presence of dissipation originating from the interaction of the soliton with the thermal cloud.
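For orientation, the scaling law in point 4 can be evaluated with the trap parameters quoted earlier ($`\omega _{\perp }=2\pi \times 425`$ Hz, $`t_{TOF}=4`$ ms); the sketch below is a rough numerical illustration only, not a fit to the data.

```python
import numpy as np

# v_k(t_TOF) ~ v_k(t_ev) * ln(2*w*t_TOF)/(w*t_TOF), w = radial trap frequency
w = 2 * np.pi * 425.0      # rad/s
t_tof = 4e-3               # s
vk_trap = 2.0              # mm/s, the in-trap value of Fig. 3a

wt = w * t_tof
print("w*t_TOF =", round(wt, 1))                                   # ~ 10.7
print("v_k after TOF ~", round(vk_trap * np.log(2 * wt) / wt, 2), "mm/s")
# the soliton slows to roughly 30% of its in-trap velocity during expansion
```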
We observe a decrease of the contrast of the soliton by $`\sim `$50% on a time scale of 10 ms. This is in contradiction with nondissipative dynamics, where the contrast should even increase for a soliton moving away from the trap center. The soliton energy is then proportional to $`n_0^{3/2}(x_k)(1-v_k^2/c_s^2(x_k))^{3/2}`$ (see ) and should remain constant. This leads to a constant absolute depth of the soliton and hence gives a contrast proportional to $`(1-v_k^2/c_s^2(x_k))n_0(x_k)^{-1}`$. The decrease of the soliton contrast can therefore only be explained by the presence of dissipation decreasing the soliton energy. As the lifetime of the soliton is sensitive to the gas temperature, studies of the dissipative dynamics of solitons will offer a unique possibility for thermometry of BEC’s under conditions where the thermal cloud is not discernible. In conclusion, we have created dark solitons by a phase imprinting method and studied their dynamics. A detailed comparison to theory and numerical simulations allows us to identify the creation of dark solitons travelling with approximately constant velocity smaller than the speed of sound. The initial stages of the evolution and the radial ballistic expansion of the sample are well described by the $`T=0`$ approach, which also shows the absence of dynamical instabilities. The decrease of the soliton contrast gives a clear signature of dissipation in the soliton dynamics. For the study of dark solitons with even lower velocities, an initial density preparation of the BEC in the magnetic trap may be useful. This can be done, e.g., by applying adiabatically an additional blue detuned laser beam prior to the phase imprinting pulse. A promising next step will be the realization of dark solitons in elongated dipole traps, e.g., generated by a blue detuned hollow laser beam . With spin as an additional degree of freedom, the dynamics of dark solitons in condensates containing spin domains or spin waves can be studied. We expect that the study of soliton structures in BEC’s opens a new direction in atomic physics, related to nonlinear phenomena in a dissipative environment. We thank D. Petrov and K. Rza̧żewski for fruitful discussions and K.-A. Suominen for help in the numerical work. This work is supported by SFB407 of the Deutsche Forschungsgemeinschaft. G.S. acknowledges support by the Humboldt-Foundation, the Dutch Foundation FOM and the Russian Foundation for Basic Studies.
no-problem/9910/hep-lat9910025.html
ar5iv
text
# Pushing NRQCD to the limit ## 1 Introduction Lattice NRQCD studies of heavy-quark systems have been, on the whole, very successful. The predicted spectra for the Upsilon, $`J/\mathrm{\Psi }`$, $`B`$ and $`D`$ systems, amongst others, agree with the overall structure of the experimental spectra, and for the heavier systems, the agreement at finer scales, such as hyperfine splittings, is at the percent level . Trottier’s work on the charmonium spectrum indicates that the $`𝒪(v^6)`$-improved NRQCD action *decreases* the hyperfine splitting significantly over the $`𝒪(v^4)`$ result, rather than taking it towards the experimental value. Our own results using similar parameters to Trottier confirm this result. This is an indication that NRQCD is not converging well for the charm quark. Yet it is interesting to note that many successful quark model predictions of the light-quark spectrum used a *non-relativistic* approximation for the light-quark dynamics . Here we have a highly relativistic system reasonably well described with a non-relativistic theory. Recently, Liu et al. published work on a model QCD theory known as Valence QCD. In VQCD, all z-graphs are removed, and the authors were able to make some links between NR quark models and the role of z-graphs in standard QCD. We would like to examine the behaviour of the low-mass limit of NRQCD, to examine the nature of the inevitable breakdown of the NR expansion, and see if this can also be linked to the remarkable success of NR quark models. In this report we present results for the $`Q\overline{Q}`$ and $`Q\overline{q}`$ spectra, as a function of decreasing heavy quark mass. ## 2 Simulation details We used an $`𝒪(v^6)`$-improved NRQCD Hamiltonian, $$H_0=-\frac{\mathrm{\Delta }^{(2)}}{2M_0},$$ (1) $$\begin{array}{ccl}\delta H&=&-\frac{c_1}{8M_0^3}\left(\mathrm{\Delta }^{(2)}\right)^2+\frac{ic_2}{8M_0^2}\left(\stackrel{~}{\nabla }\cdot \stackrel{~}{𝐄}-\stackrel{~}{𝐄}\cdot \stackrel{~}{\nabla }\right)\hfill \\ &&-\frac{c_3}{8M_0^2}\sigma \cdot \left(\stackrel{~}{\nabla }\times \stackrel{~}{𝐄}-\stackrel{~}{𝐄}\times \stackrel{~}{\nabla }\right)-\frac{c_4}{2M_0}\sigma \cdot \stackrel{~}{𝐁}+\frac{c_5}{24M_0}\mathrm{\Delta }^{(4)}\hfill \\ &&-\frac{c_6}{16nM_0^2}\left(\mathrm{\Delta }^{(2)}\right)^2-\frac{c_7}{8M_0^3}\{\stackrel{~}{\mathrm{\Delta }}^{(2)},\sigma \cdot \stackrel{~}{𝐁}\}\hfill \\ &&-\frac{3c_8}{64M_0^4}\{\stackrel{~}{\mathrm{\Delta }}^{(2)},\sigma \cdot (\stackrel{~}{\nabla }\times \stackrel{~}{𝐄}-\stackrel{~}{𝐄}\times \stackrel{~}{\nabla })\}-\frac{c_9}{8M_0^3}\sigma \cdot \stackrel{~}{𝐄}\times \stackrel{~}{𝐄},\hfill \end{array}$$ (2) where a tilde denotes the use of improved derivative operators, and $`\stackrel{~}{𝐄}`$, $`\stackrel{~}{𝐁}`$ are components of the improved field tensor. The heavy-quark propagator was calculated using the evolution equation $$G_{t+1}=\left(1-\frac{H_0}{2n}\right)^nU_4^{\dagger }\left(1-\frac{H_0}{2n}\right)^n\left(1-\delta H\right)G_t.$$ (3) For $`Q\overline{q}`$ mesons, we used the standard tadpole-improved SW action for the light quarks, with $`u_0=\langle \mathrm{Plaq}\rangle ^{1/4}`$ and $`c_{sw}=u_0^{-3}`$. The light quark $`\kappa =0.143`$, roughly comparable to the strange quark mass. Gauge field configurations were created using a standard tadpole- and rectangle-improved action .
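To make the update rule of eq. (3) concrete, here is a toy one-dimensional analogue with the gauge links set to unity and $`\delta H`$ dropped, so that only the stabilised kinetic step survives; it is a schematic sketch (lattice spacing $`a=1`$, free field), not the production code.

```python
import numpy as np

def laplacian(G):
    """Nearest-neighbour lattice Laplacian Delta^(2), periodic, a = 1."""
    return np.roll(G, 1) + np.roll(G, -1) - 2.0 * G

def evolve(G, M0=1.5, n=2, nt=10):
    """Free-field analogue of eq. (3): apply (1 - H0/2n)^n twice per step,
    with U4 -> 1 and delta H -> 0, where H0 = -Delta^(2) / (2 M0)."""
    for _ in range(nt):
        for _ in range(2 * n):
            G = G + laplacian(G) / (4.0 * n * M0)   # G - H0 G / (2n)
    return G

G0 = np.zeros(32)
G0[0] = 1.0                  # point source for the heavy-quark propagator
print("propagator norm after 10 steps:", np.abs(evolve(G0)).sum())
```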
We used an ensemble of \*** configurations of size $`8^3\times 24`$ at $`\beta =6.8`$, which gives a tadpole factor of $`u_0=0.854`$. ## 3 Quarkonium Results for the $`{}_{}{}^{1}S_{0}^{}`$, $`{}_{}{}^{3}S_{1}^{}`$ and $`{}_{}{}^{1}P_{1}^{}`$ states of quarkonium for various values of $`M_0`$ are shown in Figure 1. The stabilisation parameter $`n`$ in Equation (1) was given values ranging from $`2`$ to $`10`$ depending on the value of $`M_0`$. Meson operators were smeared at source and sink, and their specific forms are detailed in . The $`{}_{}{}^{1}S_{0}^{}`$ states in Figure 1 are found using the kinetic definition of the meson mass, $$E_𝐩=E_0+\frac{𝐩^2}{2aM_{kin}},$$ (4) where $`E_𝐩`$ and $`E_0`$ are the simulation energies for a meson with finite and zero momentum respectively. This method leads to the large error bars shown on the data; however, the general trends are significant here, not the exact energies. The heaviest $`M_0`$ corresponds to roughly half the bottom quark mass, while the charmonium ground state energy of $`3`$ GeV corresponds to $`aM_0\approx 1.5`$ to $`2`$. The lightest $`M_0`$ gives a kinetic mass close to the energy of an $`s\overline{s}`$ meson, and so we can assume the quarks are well within the relativistic regime at this stage. Note that the $`S`$ and $`P`$ states decrease quite linearly with $`M_0`$, until the bare mass drops significantly below $`1`$, when they begin to drop quite suddenly. The hyperfine splitting increases significantly at low $`M_0`$, as might be expected from light-meson spectroscopy, but the $`S`$–$`P`$ splitting decreases, against observation. These splittings, with the $`{}_{}{}^{3}S_{1}^{}`$ state taken as the energy zero, are shown in Figure 2. These results for quarkonium indicate the NRQCD action is indeed beginning to have difficulties at these low bare masses. The situation is worse for the heavy-light spectrum. Figure 3 shows the hyperfine splittings for the $`Q\overline{q}`$ system, and note that for the lightest value of $`M_0`$, the splitting is *negative*—the $`{}_{}{}^{3}S_{1}^{}`$ and $`{}_{}{}^{1}S_{0}^{}`$ energies are in the wrong order. This is obviously a serious pathology. ## 4 Discussion The results presented here are perhaps not surprising, given that we expect the $`M_0^{-1}`$-dependence of the coefficients in the NRQCD action will ultimately lead to a breakdown as $`M_0`$ decreases. However, we are interested in the exact nature of this breakdown, not to see if NRQCD is a viable alternative for simulating light quarks, but to illuminate any connections between NRQCD and the successes of non-relativistic quark models. Lewis and Woloshyn have shown that unphysically large contributions from certain terms in the NRQCD action could be removed by subtracting their expectation values from the Hamiltonian. Their analysis consisted of a thorough systematic examination of the contributions of each correction term in the Hamiltonian. We suspect that the same style of analysis, applied to the low-mass breakdown of NRQCD, would flush out the terms that suffer the worst pathologies, and may even suggest ways they may be strengthened. One important issue to be aware of, however, is that the coefficients in Equations (1) and (2) are usually given their tree-level value of $`1`$ in NRQCD simulations. For heavy quarks, tadpole improvement of the gauge field links is usually sufficient to account for the most serious corrections beyond tree level.
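The kinetic-mass definition of eq. (4) is just a two-point inversion; a minimal sketch follows (the two energies below are hypothetical placeholders, not the measured values):

```python
import numpy as np

def kinetic_mass(E_p, E_0, p):
    """Invert eq. (4): a*M_kin = p^2 / (2*(E_p - E_0)), lattice units."""
    return float(p @ p) / (2.0 * (E_p - E_0))

p_min = np.array([2 * np.pi / 8, 0.0, 0.0])   # smallest momentum on an 8^3 lattice
E0, Ep = 0.520, 0.605                          # hypothetical simulation energies
print("a*M_kin =", round(kinetic_mass(Ep, E0, p_min), 2))
```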
The coefficients $`c_i`$ in Equation (2) will, in general, have non-trivial $`\alpha _s(M_0)`$ dependence, and this will become more important as the quark mass is decreased. We would like to thank H. Trottier, R. Lewis, S. Collins and J. Heim for their valuable comments. This work is partially funded by the Natural Sciences and Engineering Research Council of Canada.
no-problem/9910/hep-th9910031.html
ar5iv
text
# 1 Introduction Recently non-commutative Yang-Mills theory attracted a lot of attention, mainly due to discoveries of new connections to string theory. In a recent paper, it was suggested that the divergences of the non-commutative Yang-Mills theory are dictated by the large $`N_c`$ limit of the theory, namely that divergences occur only in planar diagrams (the observation that the planar commutative and non-commutative theories are the same up to phases in the Green functions was already made in . A careful analysis of the divergences was carried out in ref. ). Another direction of research is the study of orbifold field theories - motivated by the AdS/CFT correspondence . It was conjectured by Kachru and Silverstein that orbifolds of $`AdS_5\times S^5`$ which act on the $`S^5`$ part define a large $`N_c`$ finite theory, even when the R-symmetry is completely broken and the theory is not supersymmetric. This conjecture was later proved using both field theory and string theory techniques. In this note we would like to consider non-commutative Yang-Mills theories which are obtained by an orbifold truncation of $`𝒩=4`$ SYM. We suggest that these theories are UV finite, namely that there are no divergent Feynman diagrams, even when the theory under consideration is non-supersymmetric and the number of colors is finite. Note, however, that our conjecture relies on recent suggestions about the UV finiteness of non-planar graphs in the non-commutative theory. The latter have not been fully proven yet, but seem to be necessary if the non-commutative theory is renormalizable. ## 2 Orbifold field theories Orbifold field theories are obtained by a certain truncation of a supersymmetric Yang-Mills theory. Let us consider the special case of $`𝒩=4`$. The truncation procedure is as follows: consider a discrete subgroup $`\mathrm{\Gamma }`$ of the $`𝒩=4`$ R-symmetry group $`SU(4)`$. For each element of the orbifold group, a representation $`\gamma `$ inside $`SU(|\mathrm{\Gamma }|N_c)`$ should be specified. Each field $`\mathrm{\Phi }`$ transforms as $`\mathrm{\Phi }\to r\gamma ^{\dagger }\mathrm{\Phi }\gamma `$, where $`r`$ is a representation matrix inside the R-symmetry group. The truncation is achieved by keeping invariant fields. The resulting theory has a reduced amount of supersymmetry, or no supersymmetry at all. It was conjectured, based on the AdS/CFT conjecture, that the truncated large $`N_c`$ theories are finite, as is the parent $`𝒩=4`$ theory. Later it was proved that the planar diagrams of the truncated and parent theories are identical. In particular it means that the perturbative beta function of the large $`N_c`$ daughter theory is zero and that the theory is finite. In the case of $`𝒩=2`$ truncations, there is only a one-loop (perturbative) contribution to the beta function. Its vanishing indicates the perturbative finiteness of the daughter theory at finite $`N_c`$ as well. In $`𝒩=1`$ truncations the situation is more subtle. Indeed, the theory is finite at finite values of $`N_c`$, but the finiteness is due to Leigh-Strassler type arguments. In that case one should consider the $`SU(N_c)`$ version of the theory (and not the $`U(N_c)`$ theory which is obtained from the string theory orbifold), since the $`U(1)`$ beta function is always positive at the origin. In addition $`\frac{1}{N_c^2}`$ shifts of the Yukawa couplings are needed.
In the non-supersymmetric case there are no known examples of finite theories at finite $`N_c`$, though attempts in this direction using orbifolds were recently made. As we shall see, due to non-commutativity such examples can be found. ## 3 Perturbative behavior of non-commutative Yang-Mills theories Recently, several authors analyzed the renormalization behavior of non-commutative Yang-Mills theories (for an earlier discussion see . Related works are ). We briefly review their analysis. The Feynman rules of the commutative and non-commutative theories in momentum space are very similar. In fact the only difference is that each vertex of the non-commutative theory acquires a phase, $`\mathrm{exp}\left(i\mathrm{\Sigma }_{i<j}k_i\wedge k_j\right)`$ (the Moyal phase), with respect to the vertex of the ordinary commutative theory. For planar diagrams, this phase cancels in internal loops and the only remnant is an overall phase. Therefore the planar commutative and non-commutative theories are similar, in accordance with recent findings. In particular, thermodynamical quantities such as the free energy and the entropy are not affected by non-commutativity in the planar limit . Another claim is that the oscillations of the Moyal phases at high momentum would regulate non-planar diagrams, namely that the UV divergences of non-planar diagrams would disappear. There are two exceptional cases in which non-planar diagrams will still diverge: (i) when the non-planar diagrams consist of planar sub-diagrams which might diverge; (ii) for specific values of momentum (a zero measure set) the Moyal phase can be zero. Indeed, it was shown lately that in theories which contain scalars there are non-planar divergences. The theories that we will consider contain scalars and therefore contain infinities; these divergences, however, can be interpreted as infrared divergences. The reason is as follows: the integrals which are associated with non-planar graphs converge unless the Moyal phase is set to zero by a vanishing incoming momentum. In this case a new type of divergence appears, and it seems that an infinite number of counterterms is needed. Therefore the non-commutative theory seems to be non-renormalizable. However, the authors of ref. suggest that the re-summation of these divergent non-planar graphs would yield a finite result. Their observation is based on the similarity between the present case and the standard (commutative) IR divergences. Note that infrared divergences are, anyway, expected in theories with massless particles. Thus, ‘true’ divergences in the non-commutative theory occur only in planar graphs. ## 4 Finiteness of non-commutative orbifold field theories Let us now consider an orbifold truncation of non-commutative $`𝒩=4`$ SYM. These theories can be defined perturbatively by a set of Feynman rules. The natural definition would be to attach to each vertex the corresponding Moyal phase. According to ref., the only potential divergences are the ones which arise in planar diagrams. Non-planar diagrams are expected to be finite, except for some “accidental” divergences . Note that the infinities which do occur in these graphs are associated with the infrared and therefore do not contradict our claim that orbifold field theories are UV finite. According to the analysis of , the planar diagrams of an orbifold theory can be evaluated by using the corresponding diagrams of the parent non-commutative $`𝒩=4`$. These diagrams are finite since they differ from the commutative $`𝒩=4`$ only by an overall phase.
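The vertex phase is simple enough to check numerically. The sketch below adopts the convention $`k\wedge p=\frac{1}{2}k_\mu \theta ^{\mu \nu }p_\nu `$ (normalisations differ between papers) and verifies that the phase reduces to unity in the commutative limit $`\theta \to 0`$:

```python
import numpy as np

def moyal_vertex_phase(momenta, theta):
    """exp(i sum_{i<j} k_i ^ k_j), with k ^ p = 0.5 * k . theta . p and
    theta an antisymmetric matrix of non-commutativity parameters."""
    wedge = lambda k, p: 0.5 * (k @ theta @ p)
    s = sum(wedge(momenta[i], momenta[j])
            for i in range(len(momenta)) for j in range(i + 1, len(momenta)))
    return np.exp(1j * s)

theta = 0.3 * np.array([[0.0, 1.0], [-1.0, 0.0]])          # 2-d illustration
ks = [np.array([1.0, 0.2]), np.array([-0.4, 0.7]), np.array([-0.6, -0.9])]
print(moyal_vertex_phase(ks, theta))     # a pure phase attached to the vertex
print(moyal_vertex_phase(ks, 0 * theta)) # -> (1+0j) in the commutative limit
```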
In this way sub-divergences of non-planar diagrams will also be canceled. We therefore conclude that any orbifold truncation of non-commutative non-compact $`𝒩=4`$ SYM is finite. In particular, it means that we might have a rich class of non-supersymmetric gauge theories which are finite, even at finite $`N_c`$ (in contrast to ordinary non-supersymmetric orbifold field theories, where the two-loop beta function is generically non-zero). Note that though these theories might be finite, they are certainly not conformal. Let us consider a specific example. The example is an $`SU(N)\times SU(N)`$ gauge theory with 6 scalars in the adjoint of each of the gauge groups, 4 Weyl fermions in the $`(N,\overline{N})`$ and 4 Weyl fermions in the $`(\overline{N},N)`$ bifundamental representations. It is the theory which lives on dyonic D3 branes of type 0 string theory and can also be understood, from the field theory point of view, as a $`Z_2`$ orbifold projection of $`𝒩=4`$ SYM. The large $`N_c`$ commutative theory contains a line of fixed points. We suggest that the analogous non-commutative (finite $`N_c`$) theory is finite on that line, namely at $`g_1=g_2=h_1=h_2`$, where $`g_1,g_2`$ are the gauge couplings and $`h_1,h_2`$ are the Yukawa couplings. Note that, in contrast to the commutative case, the position of the line of finite theories is not corrected by $`\frac{1}{N_c}`$ contributions. Finally, it might be interesting to understand how the finiteness of these theories arises from string theory orbifolds. ACKNOWLEDGEMENTS I would like to thank O. Aharony, N. Itzhaki and B. Kol for useful discussions and comments. This work was supported in part by EEC TMR contract ERBFMRX-CT96-0090.
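The $`Z_2`$ projection behind this example is mechanical and can be spelled out in a few lines. With the regular embedding $`\gamma =\mathrm{diag}(1_N,-1_N)`$, fields that are even under the $`Z_2`$ R-symmetry element keep their block-diagonal part (the two gauge factors and the adjoint scalars), while odd fields keep the off-diagonal blocks (the bifundamental fermions). A small sketch of the projector (illustrative only):

```python
import numpy as np

N = 2
gamma = np.diag([1.0] * N + [-1.0] * N)       # Z2 embedding inside SU(2N)

def project(Phi, r):
    """Keep the invariant part of Phi under Phi -> r * gamma^+ Phi gamma."""
    return 0.5 * (Phi + r * gamma.conj().T @ Phi @ gamma)

Phi = np.arange(16.0).reshape(4, 4)           # a generic SU(2N) adjoint field
print(project(Phi, +1))   # r = +1: block-diagonal survives (SU(N)xSU(N) adjoints)
print(project(Phi, -1))   # r = -1: off-diagonal blocks survive (bifundamentals)
```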
no-problem/9910/hep-th9910132.html
ar5iv
text
# Solving the Hierarchy Problem with Noncompact Extra Dimensions ## 1 Introduction The possibility that we are living in more than four dimensions has been considered sporadically by physicists since the time of Kaluza and Klein . Although originally motivated by the possible unification of gravity with gauge forces, more recently there have been speculations that extra dimensions may also play an important role in addressing several other outstanding phenomenological problems — in particular, the smallness of the cosmological constant, the origin of the observed hierarchy between the $`W`$ mass and the Planck scale, the nature of flavor, the possible source of supersymmetry breaking, etc. In this paper we focus on the possibility that extra dimensions may provide a resolution of the hierarchy problem. There have been two distinct approaches using extra dimensions to deal with the hierarchy problem. One relies on compact extra dimensions, curled up in such a way that 4-d phenomenology prevails. The other uses noncompact extra dimensions of infinite extent. The first category includes the “large extra dimension” approach, which postulates a fundamental scale $`\mathrm{\Lambda }\sim 10`$–$`100`$ TeV along with Kaluza-Klein compactification at a large radius $`R`$ ($`\sim `$ millimeters for two extra dimensions). The effective 4-d Planck scale is then the fundamental scale $`\mathrm{\Lambda }`$ times powers of $`\mathrm{\Lambda }R`$ . If $`R`$ is much larger than $`\mathrm{\Lambda }^{-1}`$, the effective Planck scale will be much larger than the fundamental scale. This does not solve the hierarchy problem but rather reformulates it into a dynamical question: why is the radius $`R`$ of the extra dimensions so much larger than $`1/\mathrm{\Lambda }`$? Mechanisms exist which can answer this question . Another possible hierarchy solution arising from compact extra dimensions is the model of ref. , where an exponential warp factor gives rise to the observed hierarchy, requiring an extra dimension only one or two orders of magnitude larger than the fundamental length scale. A dynamical mechanism to stabilize the radius is still needed, such as first suggested in . It has been shown that ordinary 4-d gravity can result from a theory with infinite extra dimensions, provided that there exists a graviton bound to our 4-d submanifold . Such models do not in general solve the hierarchy problem. In refs. it was suggested that, since the wave function of the bound 4-d graviton falls off exponentially in the direction transverse to the 3-brane that binds it, the observed hierarchy of forces may be due to a displacement of our world from that 3-brane. In this paper we offer an alternative where the hierarchy arises from finite, but noncompact extra dimensions. The phenomenology closely resembles the large extra dimension scenario , but provides a dynamical explanation for the large radius. However, unlike these earlier scenarios, the extra dimensions we consider form a non-compact space of exponentially large proper size, terminating at a singularity. The spacetime we analyze is the solution of Einstein’s equation in the presence of a global cosmic string . We begin by solving Einstein’s equation for a global cosmic “string” in $`d+2`$ dimensions. By a “string” we will always mean a $`(d-1)`$-brane with 2 transverse spatial dimensions.
We analyze scalar and gravitational waves in this background metric, and demonstrate that although the spacetime includes a naked singularity, it is sufficiently mild that unitary boundary conditions can be applied. We then display the hierarchy between gravitational and gauge forces, showing that it is not the result of any fine tuning, and does not lead to disagreement with gravitational force experiments. We conclude by discussing possible generalizations of our example. ## 2 The metric about a global string A global cosmic string, or vortex, is a topologically stable scalar field configuration with nontrivial homotopy for some internal symmetry manifold. The simplest example arises in a scalar field theory with a spontaneously broken global $`U(1)`$ symmetry. If the expectation value of the scalar field in the ground state is $`\langle \mathrm{\Phi }\rangle =f`$, a vortex with unit winding number corresponds to the field configuration $`\mathrm{\Phi }(r)=f(r)e^{i\theta }`$, where $`\underset{r\to \infty }{\mathrm{lim}}f(r)=f`$. Unlike a vortex with the $`U(1)`$ gauged (such as the Abelian Higgs model), a global vortex has nonzero energy density outside its core, falling off as $`1/r^2`$. It is therefore not surprising that, when gravity is included, a curvature singularity appears exponentially far from the core of the vortex, as was found in earlier work by the present authors for vortices in four spacetime dimensions . Generalization of this earlier work to branes in $`d+2`$ spacetime dimensions is straightforward. We assume that outside the core of a $`(d-1)`$-brane, the stress-energy tensor is that of a charged scalar field with Lagrange density<sup>1</sup>We adopt the mostly plus sign convention. $`\mathcal{L}=-g^{\mu \nu }\partial _\mu \mathrm{\Phi }^{\ast }\partial _\nu \mathrm{\Phi }-V(\mathrm{\Phi })`$ and expectation value $`\langle \mathrm{\Phi }\rangle =f^{d/2}e^{i\theta },`$ (1) where $`f`$ has dimensions of mass. The scalar field eq. (1) is assumed to minimize the scalar potential $`V(\mathrm{\Phi })`$, and we tune the bulk cosmological constant to zero. We therefore look for a metric which has a $`d`$-dimensional Poincaré invariant “longitudinal” space, and rotational invariance in the transverse plane. The most general metric of this kind may be put in the form $$ds^2=\overline{A}(u)^2\eta _{ab}dx^adx^b+\gamma ^2\overline{B}(u)^2(du^2+d\theta ^2),$$ (2) where $`x^a`$ parameterizes the longitudinal $`d`$-dimensional space, $`\eta _{ab}`$ is the flat Minkowski metric, and $`\{u,\theta \}`$ are the transverse coordinates, with $`\theta \in [0,2\pi )`$. The parameter $`\gamma `$ has dimensions of length, while $`u`$, $`\theta `$ and the functions $`\overline{A}`$ and $`\overline{B}`$ are dimensionless. We will make the somewhat perverse choice of placing the string core at large values of $`u`$, while the singularity will appear at $`u=0`$. The non-zero components of the stress-energy tensor which follow from $`\mathcal{L}`$ and eq. (1) are: $`T_j^i=-\delta _j^i\frac{f^d}{\gamma ^2\overline{B}^2},T_u^u=-\frac{f^d}{\gamma ^2\overline{B}^2},T_\theta ^\theta =+\frac{f^d}{\gamma ^2\overline{B}^2}.`$ (3) The Einstein equations for this system are $$G_\nu ^\mu =\frac{1}{M_{d+2}^d}T_\nu ^\mu $$ (4) where $`M_{d+2}`$ is the analogue of the Planck mass in $`d+2`$ dimensions. On computing the Einstein tensor from the metric in eq.
(2), we arrive at the three differential equations $$\begin{array}{ccc}\hfill (d-1)(\overline{A}^{\prime })^2+\overline{A}\overline{A}^{\prime \prime }&=&0\hfill \\ &&\\ \hfill \overline{A}(\overline{B}^{\prime })^2+\overline{B}\left[d\overline{A}^{\prime }\overline{B}^{\prime }-d\overline{B}\overline{A}^{\prime \prime }-\overline{A}\overline{B}^{\prime \prime }\right]&=&0\hfill \\ &&\\ \hfill \overline{A}(\overline{B}^{\prime })^2-\overline{B}\left[d\overline{A}^{\prime }\overline{B}^{\prime }+\overline{A}\overline{B}^{\prime \prime }\right]&=&\frac{2}{u_0}\overline{A}\overline{B}^2\hfill \end{array}$$ (5) with solution $`\overline{A}(u)=\left(\frac{u}{u_0}\right)^{1/d},\overline{B}(u)=\left(\frac{u_0}{u}\right)^{(d-1)/2d}e^{(u_0^2-u^2)/2u_0}.`$ (6) where $`u_0\equiv \left(\frac{M_{d+2}}{f}\right)^d.`$ (7) For a complete exterior solution these functions must be matched onto a metric valid within the vortex core, determining the parameter $`\gamma `$, which naturally is of size $`\gamma \sim 1/f`$. Note that our solution has a genuine curvature singularity at $`u=0`$. (For example, the Laplacian of the Ricci scalar diverges as $`u^{-2/d}`$ at $`u=0`$.) Furthermore, this is a naked singularity, located at a finite proper distance from the string core near $`u\approx u_0`$. Because of the existence of a naked singularity it is not immediately clear whether or not sensible physical results follow from our solution . There are two different problems that might arise. The first is that the existence of the singularity is not reliable, precisely because gravitational forces become strong there. A higher derivative correction to the Einstein-Hilbert action, for example, would probably eliminate—or at least change—the nature of the singularity. We will argue later that, while this may indeed happen in extensions of Einstein’s theory, it would likely not change the conclusions we arrive at by assuming the singularity to be physical. A second problem is that the conservation of energy and momentum might be violated at the singularity. This can be phrased as a question concerning the boundary conditions that must be applied to fields at the singularity . Just as the irregular Legendre functions which appear when solving the non-relativistic Schrödinger equation for the hydrogen atom must be discarded, as they correspond to probability loss at the origin, so must any solution here which loses energy or momentum through the singularity. Thus we must search for conditions on fields propagating in the background metric that insure conservation of quantum numbers at the singular boundary of the spacetime. The ability to find such conditions is not guaranteed if the singularity is too strong, however. To address this issue we first consider propagation of a massless scalar in our background metric; we then consider gravitational perturbations. ## 3 Scalar fields and boundary conditions We consider a massless scalar field $`\varphi `$ satisfying the wave equation $$\mathrm{\Box }\varphi =\frac{1}{\sqrt{-g}}\partial _\mu (\sqrt{-g}g^{\mu \nu }\partial _\nu \varphi )=0.$$ (8) It is convenient to introduce a second parameterization of the transverse radial dimension: $$ds^2=A(z)^2\eta _{ab}dx^adx^b+A(z)^2dz^2+\gamma ^2B(z)^2d\theta ^2,$$ (9) with $$z=\gamma \int _0^u\frac{\overline{B}(u^{\prime })}{\overline{A}(u^{\prime })}𝑑u^{\prime },A(z)\equiv \overline{A}(u(z)),B(z)\equiv \overline{B}(u(z)).$$ (10) The functions $`\overline{A}(u)`$ and $`\overline{B}(u)`$ are the solutions found previously in eq. (6).
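A quick numerical cross-check of eqs. (5)-(6), and of the small-$`u`$ behavior of $`z(u)`$ used below, is straightforward; the sketch assumes $`d=4`$ and an arbitrary $`u_0`$, and verifies the solution by finite differences, nothing more.

```python
import numpy as np
from scipy.integrate import quad

d, u0 = 4, 10.0
A = lambda u: (u / u0) ** (1.0 / d)
B = lambda u: (u0 / u) ** ((d - 1) / (2.0 * d)) * np.exp((u0**2 - u**2) / (2 * u0))

def D(f, u, h=1e-4):    # first central difference
    return (f(u + h) - f(u - h)) / (2 * h)
def D2(f, u, h=1e-4):   # second central difference
    return (f(u + h) - 2 * f(u) + f(u - h)) / h**2

u = 3.7                 # arbitrary test point with 0 < u < u0
print((d - 1) * D(A, u)**2 + A(u) * D2(A, u))                    # first of eqs. (5), ~ 0
print(A(u) * D(B, u)**2 + B(u) * (d * D(A, u) * D(B, u)
      - d * B(u) * D2(A, u) - A(u) * D2(B, u)))                  # second, ~ 0
print(A(u) * D(B, u)**2 - B(u) * (d * D(A, u) * D(B, u)
      + A(u) * D2(B, u)) - 2 / u0 * A(u) * B(u)**2)              # third, ~ 0

z = lambda u: quad(lambda s: B(s) / A(s), 0.0, u)[0]   # z(u), up to gamma
print(np.log(z(1e-3) / z(1e-4)) / np.log(10.0))        # -> (d-1)/2d = 0.375
```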
For small $`u`$, $`z`$ behaves like $`z\sim u^{(d-1)/2d}`$, so that the singularity at $`u=0`$ is also at $`z=0`$. Near the singularity $`A(z)\sim z^{2/(d-1)},B(z)\sim \frac{1}{z}.`$ (11) Using the isometries of the background metric, we search for solutions of the form $`\varphi =e^{ipx}e^{in\theta }\frac{\phi (z)}{\psi _1(z)},\eta _{ab}p^ap^b=-m^2,`$ (12) where $`px\equiv \eta _{ab}p^ax^b`$ and $`\psi _1(z)=(g^{zz}\sqrt{-g})^{1/2}=(\gamma A^{d-1}B)^{1/2}.`$ (13) The parameter $`m`$ will be the conventional $`d`$-dimensional mass of this mode. The function $`\phi (z)`$ then satisfies the equation $`\left[-\frac{\mathrm{d}^2}{\mathrm{d}z^2}+V(z)+n^2\frac{A^2}{B^2}\right]\phi (z)=m^2\phi (z),V(z)=\frac{\psi _1^{\prime \prime }}{\psi _1}`$ (14) This equation can be recast in the more convenient form $`\left[\overline{Q}Q+n^2\frac{A^2}{B^2}\right]\phi (z)=m^2\phi (z)`$ (15) where $`Q=\frac{\mathrm{d}}{\mathrm{d}z}-\frac{d\mathrm{log}\psi _1(z)}{dz},\overline{Q}=-\frac{\mathrm{d}}{\mathrm{d}z}-\frac{d\mathrm{log}\psi _1(z)}{dz}.`$ (16) In this form, the scalar wave equation clearly has two zeromode solutions with $`n=m=0`$. One such solution is easily identified $`\phi (z)=\psi _1(z).`$ (17) The second zeromode follows as $`\phi (z)=\psi _2(z)\equiv \psi _1(z)\int ^z\frac{dz^{\prime }}{\psi _1(z^{\prime })^2}\propto \psi _1(z)\mathrm{log}(u(z)).`$ (18) Near the singularity at $`z=0`$ these two solutions behave as $`\psi _1(z)\sim \sqrt{z},\psi _2(z)\sim \sqrt{z}\mathrm{log}z.`$ (19) Eq. (15) shows that the $`m^2`$ and $`n^2`$ terms become irrelevant near the singularity, and thus all solutions behave similarly to the two zeromode solutions in the limit $`z\to 0`$. We will refer to the solutions behaving as $`\sqrt{z}`$ near $`z=0`$ as the “regular” solutions, while those behaving as $`\sqrt{z}\mathrm{ln}z`$ for small $`z`$ will be called the “irregular” solutions. The fact that our spacetime is geodesically incomplete should not matter provided that no conserved quantities are allowed to leak out through the boundary. The $`d`$-dimensional Poincaré isometries of the metric eq. (2), as well as the axial rotation isometry, lead to conservation laws. The Killing vectors corresponding to these isometries are $$\begin{array}{ccl}\xi _\mu ^{(a)}&=&A^2\delta _\mu ^a\\ \xi _\mu ^{(ab)}&=&A^2\left(\delta _c^a\delta _\mu ^b-\delta _\mu ^a\delta _c^b\right)x^c\\ \xi _\mu ^{(\theta )}&=&B^2\delta _\mu ^\theta \end{array}$$ (20) with $`a,b,c=1\mathrm{\dots }d`$. For each of these Killing vectors we may contract with the stress-energy tensor to form a current $`J^\mu =T^{\mu \nu }\xi _\nu `$ (21) satisfying a covariant conservation law $`\frac{1}{\sqrt{-g}}\partial _\mu (\sqrt{-g}J^\mu )=0.`$ (22) Therefore we demand that the flux through the singular boundary of our spacetime must vanish for each of these currents. For example, for the translation isometries this requires $`\underset{z\to 0}{\mathrm{lim}}\sqrt{-g}J_{(a)}^z=\underset{z\to 0}{\mathrm{lim}}\sqrt{-g}g^{zz}\partial _a\varphi \partial _z\varphi =0.`$ (23) Using eq. (19) and the form of $`\varphi `$ eq. (12) we see that the flux through the singularity for the regular solutions vanishes, while that for the irregular solutions diverges. Therefore the boundary conditions we have chosen eliminate the irregular solutions. The other currents lead to the same condition.
Once the irregular solutions have been discarded, it follows that $`\overline{Q}=Q^{\dagger }`$; that is, $`(\phi _1,\overline{Q}\phi _2)=(Q\phi _1,\phi _2)`$ where $`\phi _{1,2}`$ are arbitrary regular solutions, since the difference between the two expressions is a vanishing surface term. Therefore $`\overline{Q}Q`$ is a positive semidefinite operator, implying that the $`m^2`$ eigenvalues are all positive, so that no tachyonic modes are seen in the $`d`$-dimensional world. In fact the boundary condition eq. (23) (supplemented by appropriate boundary conditions near the string core) is precisely the condition necessary to insure that the eigenvalue problem specified by eqs. (8,12) is of the classic Sturm-Liouville type. Thus not only is the spectrum non-tachyonic, but the eigenfunctions are also complete, discrete, and may be chosen to be orthogonal. ## 4 Gravitational perturbations and Newton’s law of gravitation We turn to the metric fluctuations about the solution eq. (6). The metric in $`(d+2)`$-dimensions has $`(d+2)(d-1)/2`$ physical polarizations; for a $`d`$-dimensional observer these decompose into a $`d`$-dimensional graviton, 2 $`d`$-dimensional vectors and 3 $`d`$-dimensional scalars. We first focus on the $`d`$-dimensional graviton fluctuations. We consider a perturbation of the metric of the form $$ds^2=A(z)^2(\eta _{ab}+h_{ab})dx^adx^b+A(z)^2dz^2+\gamma ^2B(z)^2d\theta ^2,$$ (24) and impose the gauge conditions $`\partial _ah^{ab}=h_a^a=0`$, following the analysis of ref. . Using the isometries of the background metric we will search for solutions of the form $$h_{ab}=\epsilon _{ab}e^{ipx}e^{in\theta }\frac{\phi (z)}{\psi _1(z)},$$ (25) where $`\epsilon _{ab}`$ is a constant polarization tensor, and $`\psi _1`$ is the same function<sup>2</sup>Note that, strictly speaking, it is the combination $`(\sqrt{-g}g^{ii})^{1/2}`$ that appears in the denominator of eq. (25). For our coordinate choice, in which $`g^{ii}=g^{zz}=1/A^2`$, this equals $`\psi _1`$. defined in eq. (13). We can already identify one solution of the form eq. (25): the original metric eq. (9) has an invariance under restricted general coordinate transformations along the brane. This is enough to insure the presence of a massless graviton along the brane. The fluctuation in the metric corresponding to this mode may be found by replacing $`\eta _{ab}\to \overline{g}_{ab}`$; that is, the wave function of this mode is just an $`h_{ab}`$ independent of the transverse coordinates: $$\phi (z)=\psi _1(z),$$ (26) with $`n=m=0`$. The coupling of this mode to stress-energy along the brane may be computed by examining the action for this fluctuation. Using eqs. (25, 26) we get an action $$S=M_{d+2}^d\int \sqrt{-g}𝑑z𝑑\theta d^dx\sqrt{-(\eta +h)}\frac{R_d}{A^2}$$ (27) where $`R_d`$ is the $`d`$-dimensional curvature computed from $`(\eta +h)_{ab}`$. This allows identification of the $`d`$-dimensional Planck scale as<sup>3</sup>In principle we only trust the integrand in the region outside the string core. However the integral is dominated by this region—inclusion of the string core changes the value of the integral by an exponentially small amount.: $$M_d^{d-2}=M_{d+2}^d\int \psi _1(z)^2𝑑z𝑑\theta =e^{u_0}u_0^{(d+1)/2d}\mathrm{\Gamma }\left(\frac{d-1}{2d}\right)\pi \gamma ^2M_{d+2}^d.$$ (28) This form demonstrates the connection between the gravitational coupling in $`d`$-dimensions, and the normalization of the zeromode $`\psi _1`$. The factor $`e^{u_0}`$ dominates this expression, providing an exponential hierarchy.
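Eq. (28) can be evaluated directly; the sketch below tabulates the implied hierarchy for $`d=4`$ and $`\gamma =1/f`$ over a small range of $`M_6/f`$, to display the exponential sensitivity (a rough illustration only; $`O(1)`$ factors absorbed into $`\gamma `$ and the matching at the core are ignored).

```python
import math

def hierarchy_d4(r):
    """M_4/M_6 from eq. (28) with d = 4, gamma = 1/f, r = M_6/f:
    (M_4/M_6)^2 = pi * Gamma(3/8) * u0^(5/8) * r^2 * exp(u0), u0 = r^4."""
    u0 = r ** 4
    sq = math.pi * math.gamma(3.0 / 8.0) * u0 ** 0.625 * r * r * math.exp(u0)
    return math.sqrt(sq)

for r in (2.7, 2.8, 2.9, 3.0):
    print(f"M_6/f = {r}: M_4/M_6 ~ 10^{math.log10(hierarchy_d4(r)):.1f}")
# a small change in M_6/f moves the hierarchy by many orders of magnitude
```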
For the especially interesting case of $`d=4`$, with $`\gamma =1/f`$, a ratio of $`M_6/f\simeq 2.7`$ yields a hierarchy for $`M_4/M_6`$ of over $`10^{17}`$! The Einstein equation for general fluctuations of the form eq. (25) reduces to the scalar wave equation eq. (15) which we considered in the previous section: $$\left[\overline{Q}Q+n^2\frac{A^2}{B^2}\right]\phi (z)=m^2\phi (z)$$ (29) where $`Q`$ is the same operator as in the scalar case. Our analysis is therefore identical to that of the previous section: we discard all solutions for which $`\phi (z)`$ behaves like $`\sqrt{z}\mathrm{log}z`$ near the singularity. The remaining solutions include the zeromode, behaving as a massless $`d`$-dimensional graviton on the brane<sup>4</sup>Unlike in the scalar case, we are assured that there will be a solution with $`m^2=0`$, since whatever occurs in the core of the vortex will maintain general coordinate invariance., and the Kaluza-Klein-like modes with non-zero $`m^2`$. All these modes have $`\phi (z)`$ vanishing like $`\sqrt{z}`$ as we approach the singularity. Since $`\overline{Q}Q`$ is positive semidefinite, we are assured that the Kaluza-Klein modes all have real masses, and that there are no gravitational instabilities. Furthermore, the spectrum is discrete, with spacing set by $`\mathrm{\Delta }m^2\sim 1/z_0^2\sim \frac{(M_{d+2})^d}{(M_d)^{d-2}},`$ (30) where $`z_0=z(u_0)`$ and we made use of eq. (28). This is the same relation one expects in the large extra dimension scenario. Since these Kaluza-Klein modes are coupled with gravitational strength, they are still acceptable, even if they are extremely light . As for the other modes of the graviton field, we expect the three scalar modes and one of the vector modes that arise from the decomposition to have masses on the same scale as $`\mathrm{\Delta }m`$. However, one of the vector modes corresponds to the rotational isometry of the metric. Normally, one would then expect this field to be the vector potential associated with an unbroken $`U(1)`$ gauge symmetry in the $`d`$-dimensional world. However, the scalar field $`\mathrm{\Phi }`$ breaks the rotational symmetry. Therefore this gravi-photon eats the Goldstone boson associated with phase rotations of $`\mathrm{\Phi }`$, and gets a mass squared comparable to the $`\mathrm{\Delta }m^2`$ of eq. (30). ## 5 Conclusions and speculations We have shown that the metric outside a global cosmic 3-brane provides a dynamical determination of the hierarchy between the high scale associated with Newton’s constant, $`10^{19}`$ GeV, and the low scale associated with particle physics, a few TeV. With no fine tuning of fundamental parameters, this metric produces a finite, large transverse area. Small fluctuations about this background metric behave as a normal, 4-d graviton along the brane. The coupling of this mode to excitations on the brane, the effective Newton constant, is the 6-d Newton constant, taken to be characterized by the TeV scale<sup>5</sup>This scale may need to be more nearly 100 TeV to satisfy all phenomenological constraints ., divided by the area of the transverse dimensions. Since this area is related to the intrinsic scale by a factor $`\mathrm{exp}((M_6/f)^4)`$, a ratio of $`M_6/f\simeq 2.7`$ easily provides the necessary hierarchy. The fact that our metric is geodesically incomplete does not make our solution inconsistent, since we are able to find boundary conditions on our fields that guarantee no conserved quantities are lost through the singularity.
Nevertheless, we expect that physics beyond Einstein’s theory will alter the nature of the singularity, and possibly eliminate it altogether. Even in the latter case, gravity would become strong, even if not singular, only at an exponentially large distance from the vortex. The key feature of our solution — the dynamical generation of an exponentially large length scale — would remain intact. Furthermore, it is quite plausible that the low lying solutions to the wave equations we have considered will not be appreciably altered, precisely because the boundary conditions we have chosen make the solutions insensitive to what is happening at the singularity. They would be analogous to the low lying quantum mechanical states in a deep square well potential, which are insensitive to whether the potential is actually infinite outside the box, or whether it is finite but large. In our discussion the nature of the 3-brane played little role. The only requirement was the existence of stress-energy of the form eq. (3). Such a stress-energy tensor applies when scalar field equations have stable brane solutions from spontaneously broken global symmetries (such as a cosmic “string”), or from long range fields attached to a D-brane<sup>6</sup>The example of Arkani-Hamed, Dimopoulos and March-Russell for obtaining logarithmic running through a massless scalar field coupled to a 3-brane provides another example of a D-brane as a source for a massless scalar field. While we specifically discussed a brane with codimension equal to two, it seems possible that one could find similar solutions with exponentially large extra dimensions starting with branes of other codimensions — preferably a D-brane that could support a realistic 4-d world. It is interesting to speculate whether the winding number of a bulk topological field about this brane could determine the number and chirality of families living on the brane. Acknowledgments We thank Nima Arkani-Hamed, Ann Nelson, Lisa Randall, Martin Schmaltz, and Raman Sundrum for useful conversations. We would like to express our gratitude to the Aspen Center for Physics, where this work was begun. A.G.C. is supported in part by DOE grant #DE-FG02-91ER-40676; D.B.K. is supported in part by DOE grant #DOE-ER-40561.
no-problem/9910/cond-mat9910496.html
ar5iv
text
# Importance of matrix elements in the ARPES spectra of BISCO ## Abstract We have carried out extensive first-principles angle-resolved photointensity (ARPES) simulations in Bi2212 wherein the photoemission process is modelled realistically by taking into account the full crystal wavefunctions of the initial and final states in the presence of the surface. The spectral weight of the ARPES feature associated with the $`CuO_2`$ plane bands is found to undergo large and systematic variations with $`k_{\parallel }`$ as well as the energy and polarization of the incident photons. These theoretical predictions are in good accord with the corresponding measurements, indicating that the remarkable observed changes in the spectral weights in Bi2212 are essentially a matrix element effect and that the importance of matrix elements should be kept in mind in analyzing the ARPES spectra in the high-Tc’s. Angle-resolved photoemission spectroscopy (ARPES) has contributed significantly towards an understanding of the nature of the normal as well as the superconducting state of the cuprates. Much of the data on the cuprates, however, has been analyzed by assuming that the ARPES essentially measures the one-particle spectral function of the initial states. While this simple approach yields insights into the underlying physics, a satisfactory description of the spectra must necessarily model the photo-excitation process properly by taking into account the matrix element involved, the complex modifications of the wavefunctions resulting from a specific surface termination, and the effects of multiple scattering and of finite lifetimes of the initial and final states. This article presents first-principles simulations of ARPES spectra in Bi2212 (BISCO) using the one-step model of photoemission which incorporates the aforementioned effects realistically. We focus on the ARPES signature of $`CuO_2`$ plane bands which are widely believed to be the key to the mechanism of superconductivity in the cuprates. The spectral weight of the ARPES peak associated with the $`CuO_2`$ plane bands is found to undergo large variations with $`𝐤_{\parallel }`$ as well as the energy and polarization of the incident photons. These theoretical predictions are in remarkable accord with the corresponding measurements on BISCO, and show clearly the importance of “matrix element effects” in the ARPES spectra. Notably, a substantial increase in the ARPES spectral weight in BISCO in going from $`\overline{\mathrm{\Gamma }}`$ to $`\overline{M}`$ was noted early by Anderson, who speculated that this puzzling behavior may be the hallmark of spin charge separation. The physical and formal underpinnings of our approach may be exposed by starting with the following Golden rule-based expression for photointensity from initial states at energy E with photons of energy $`\hbar \omega `$: $$I(E,\hbar \omega )=\frac{2\pi e}{\hbar }\underset{i,f}{\sum }|<\mathrm{\Psi }_f|\mathrm{\Delta }|\mathrm{\Psi }_i>|^2\delta (E_f-E_i-\hbar \omega )$$ (1) $`\mathrm{\Psi }_i(\mathrm{\Psi }_f)`$ are the initial (final) states of the semi-infinite solid, and $`\mathrm{\Delta }=(e\hbar /2mc)(𝐩\cdot 𝐀+𝐀\cdot 𝐩)`$ is the interaction Hamiltonian with the electron momentum operator $`𝐩`$ and the vector potential $`𝐀`$ of the photon field. In the one-step model used in the present computations, Eq.
1 is manipulated into the form $$I(𝐤_{\parallel },E,\hbar \omega )=-\frac{1}{\pi }\mathrm{Im}<𝐤_{\parallel }|G_2^+(E+\hbar \omega )\mathrm{\Delta }G_1^+(E)\mathrm{\Delta }^{\dagger }G_2^{-}(E+\hbar \omega )|𝐤_{\parallel }>$$ (2) where the matrix element involves the free electron final state of momentum $`𝐤_{\parallel }`$. $`G_2`$ and $`G_1`$ denote the retarded (+) or advanced (-) one-electron Green functions at appropriate energies. Notably, the so-called three-step model of photoemission approximates the matrix element in Eq. 1 via the bulk Bloch wavefunctions yielding for the photointensity within the solid: $$P(E,\omega )=\underset{f,i}{\sum }\int d^3𝐤|<\mathrm{\Psi }_f^{bulk}|\mathrm{\Delta }|\mathrm{\Psi }_i^{bulk}>|^2A_f(E+\hbar \omega )A_i(E)$$ (3) which is cast in terms of the one-particle spectral functions of the initial and final states ($`A_i`$ and $`A_f`$), and the processes of transport and emission are to be considered separately. Assuming that: (i) The system is strictly 2D, and (ii) the final states form a structureless continuum, Eq. 3 reduces to: $$P(E,\omega )\propto \underset{i}{\sum }|<\mathrm{\Psi }_f^{bulk}|\mathrm{\Delta }|\mathrm{\Psi }_i^{bulk}>|^2A_i(E)$$ $`(4)`$ in terms of just the $`A_i(E)`$’s. Further, for a single band solid, one obtains: $$P(E,\omega )\propto |<\mathrm{\Psi }_f^{bulk}|\mathrm{\Delta }|\mathrm{\Psi }_i^{bulk}>|^2A_i(E)$$ $`(5)`$ where the Fermi function on the right side is suppressed. Brief comments on forms 1-5 are appropriate in order to highlight the underlying approximations. Since forms 3-5 ignore the presence of the surface, it is very difficult to include effects of different surface terminations, and the associated distortions of the bulk wavefunctions, which can be quite severe. In form 4, even near an ARPES peak from a specific initial state, other states will in general contribute a background upon being broadened due to their finite lifetimes. Forms 4 and 5 ignore the structure in the final state spectrum which will be seen below to be quite significant. We emphasize that the distinctions between the processes of excitation, transport and emission through the surface invoked by the three-step model (Eqs. 3-5) are artificial since the more satisfactory one-step formula (Eq. 2) does not admit such a decomposition. The relevant technical details of our computations are as follows. In order to keep the problem manageable, the modulation of the lattice is neglected and the crystal structure of BISCO is assumed to be perfectly tetragonal; this still involves 30 atoms per unit cell, and a substantial extension of the earlier work on relatively simpler lattices . The crystal potential was first obtained self-consistently within the KKR scheme, and essentially yielded the well-known LDA-based band structure and Fermi surface of Bi2212 ; however, the actual potential used here is slightly modified in that the Bi-O pockets around the $`\overline{M}`$-point are lifted above the Fermi energy to account for their absence in the experimental spectra. The surface is assumed to terminate in the Bi-O layer in accord with the general consensus that this is an easy cleavage plane in BISCO; the incident light is assumed polarized in the $`(k_x,k_y)`$ plane since the ARPES spectra of interest here seem to be insensitive to the z-component $`A_z`$ of the vector potential, and further, dielectric effects would in general complicate the relationship between the value of $`A_z`$ inside and outside the crystal. The ARPES simulations of Fig.
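As a minimal numerical illustration of why form (5) matters, one can weight a model spectral function by a $`k`$-dependent matrix element: identical initial-state dispersions then yield very different ARPES peak heights. The numbers below are toy values chosen for illustration only.

```python
import numpy as np

def A_i(E, Ek, gamma=0.05):
    """Model one-particle spectral function: a Lorentzian of width gamma."""
    return (gamma / np.pi) / ((E - Ek) ** 2 + gamma ** 2)

def intensity(E, Ek, M2):
    """Form (5): P(E) ~ |matrix element|^2 * A_i(E), Fermi factor suppressed."""
    return M2 * A_i(E, Ek)

E = np.linspace(-0.5, 0.1, 601)   # binding-energy grid (eV)
for label, M2 in [("k-point 1, |M|^2 = 0.1", 0.1), ("k-point 2, |M|^2 = 1.0", 1.0)]:
    print(label, "-> peak height:", round(intensity(E, -0.2, M2).max(), 2))
# the same A_i(E) produces peak weights differing by the matrix-element ratio
```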
1 help set the stage for our discussion. As expected, the separation between the spectral features A and B in Fig. 1a associated with the CuO<sub>2</sub> plane bands increases as we go from $`\overline{\mathrm{\Gamma }}`$ towards $`\overline{M}`$. Figs. 1b and 1c show that the spectral weight of A or B depends dramatically on the values of $`k_{\parallel }`$ and photon energy $`h\nu `$. The weights display large changes for either peak as a function of $`h\nu `$ for a fixed $`k_{\parallel }`$, or as a function of $`k_{\parallel }`$ for a fixed $`h\nu `$. These results highlight the importance of matrix element effects in BISCO, since the weights would be constant, independent of $`h\nu `$ or $`k_{\parallel }`$, in the absence of these effects. In fact, our theoretically predicted $`h\nu `$ and $`k_{\parallel }`$ dependencies of the spectral weights are in substantial accord with the measurements; we discuss this aspect now with reference to Figs. 2-4. Fig. 2 compares the measured and computed total spectral weights in a 500 meV energy window below $`E_f`$ along the three high symmetry directions. The characteristic features of the experimental data are: a relatively low flat intensity along $`\overline{\mathrm{\Gamma }}\overline{X}`$, a steep rise along $`\overline{\mathrm{\Gamma }}\overline{M}`$ compared to $`\overline{\mathrm{\Gamma }}\overline{Y}`$, and the presence of the prominent bump around $`k_{\parallel }\approx 0.25`$ along $`\overline{\mathrm{\Gamma }}\overline{Y}`$. All these aspects of the data are essentially reproduced by the theory. Fig. 3 considers the photointensity in the $`(k_x,k_y)`$ plane for two different polarizations of the incident light, where the initial state is held fixed at $`E_f`$. The color plots give computed intensities over a dense grid of $`k_{\parallel }`$-values and display the two CuO<sub>2</sub> plane band sheets A and B in the band structure; these plots are representative of what could be measured in a suitably arranged constant-initial-energy angle-scanned (CIE-AS) ARPES measurement. Incidentally, Figs. 3a2-3c2 are not symmetric about a horizontal or vertical line through the center. In Figs. 3b2 and 3c2, the light is incident along the $`\overline{\mathrm{\Gamma }}\overline{Y}`$ direction and the figure therefore is symmetric only about this diagonal line. In Fig. 3a2, on the other hand, the light is polarized horizontally, and the intensities are symmetric around the $`\overline{\mathrm{\Gamma }}\overline{M}`$ line; this symmetry becomes visible only when the figure is extended in the vertical direction to include a larger range of momenta. The intensity associated with the outer plane band sheet A as one goes around the Fermi surface is shown in quantitative detail in Figs. 3a1-3c1. Since A is generally more intense than the inner plane band B, A is presumably more relevant in connection with the experimental data near $`E_f`$. For light polarized along the horizontal direction (Fig. 3a), we see that the intensity is large around $`\alpha =0^o`$ and decreases rapidly beyond $`\alpha \approx 20^o`$. A 45<sup>o</sup> rotation of the polarization vector (Figs. 3b and 3c) induces substantial changes in the shape and magnitude of the intensity, and the appearance of a minimum at $`\beta `$ or $`\gamma =45^o`$. The experimental points in Fig. 3 are in good overall accord with the computations, some discrepancies around $`\alpha \approx 0^o`$ and $`\beta \approx 0^o`$ notwithstanding.
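To make concrete how such window-integrated weights respond to the matrix element, the following toy sketch may be useful; it is emphatically not the first-principles one-step computation of this work — the dispersion, damping and $`|<\mathrm{\Psi }_f|\mathrm{\Delta }|\mathrm{\Psi }_i>|^2`$ modulation below are invented purely for illustration:

```python
# Toy illustration: window-integrated ARPES "weight" with and without a
# k_parallel-dependent matrix element. All numbers are hypothetical.
import numpy as np

def edc(E, Ek, Sigma_i=0.05):
    """Model energy-distribution curve: a Lorentzian spectral function A_i(E)
    centered at the band energy Ek (eV) with damping Sigma_i (eV)."""
    return (Sigma_i / np.pi) / ((E - Ek) ** 2 + Sigma_i ** 2)

E = np.linspace(-0.5, 0.0, 501)            # 0-500 meV binding-energy window
k = np.linspace(0.0, 1.0, 6)               # k_parallel, Gamma (0) to M-bar (1)
Ek = -0.4 + 0.35 * k ** 2                  # hypothetical plane-band dispersion (eV)

M2_const = np.ones_like(k)                 # no matrix-element variation
M2_mod = 0.2 + np.sin(np.pi * k / 2) ** 2  # hypothetical |<f|Delta|i>|^2 modulation

for M2, label in [(M2_const, "constant |M|^2  "), (M2_mod, "modulated |M|^2 ")]:
    # "weight" = area under the peak inside the energy window, as in Fig. 2
    W = np.array([m2 * np.trapz(edc(E, ek), E) for m2, ek in zip(M2, Ek)])
    print(label, np.round(W, 2))
# With a constant matrix element the weight changes only mildly (through window
# truncation near E_f); a strong k-dependence of the measured weight therefore
# signals the k-dependence of the matrix element itself.
```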
The quite large observed variations (nearly an order of magnitude) in the emission intensity from different parts of the Fermi surface are thus mainly the consequence of matrix element effects. Finally, Fig. 4 considers the photon energy dependence of the emission intensity of the spectral feature around the $`\overline{M}`$-point. The theoretical curve displays a prominent peak around 22 eV and indicates clearly that the final states in BISCO possess considerable structure, which is neglected in the approximations of Eqs. 4 and 5, especially when the matrix element in these equations is further replaced by a constant. The experimental points, which show the presence of a broad peak centered around 21 eV, are in good overall agreement with the computations when we keep in mind that errors of the order of a few eV in locating the final states are generally expected in first-principles band structures. We emphasize that the agreement seen in Figs. 2-4 is robust to uncertainties inherent in such a comparison on the theoretical as well as the experimental side. The experimental weights depend on the specific energy window used in their definition, and on whether or not a suitable background is subtracted. In the same vein, the computed weights also depend, of course, on the specific values of the initial and final state damping parameters $`\mathrm{\Sigma }_i^{^{\prime \prime }}`$ and $`\mathrm{\Sigma }_f^{^{\prime \prime }}`$. In order to assess these effects, we have carried out extensive simulations using a variety of different values and energy-dependencies of $`\mathrm{\Sigma }_i^{^{\prime \prime }}`$ and $`\mathrm{\Sigma }_f^{^{\prime \prime }}`$, in addition to varying the real part of the initial state self-energy (in order to mimic correlation effects, even though the LDA framework underlies our computations), and find that the main features of the results of Figs. 2-4 are insensitive to such variations. In summary, we have carried out extensive first-principles one-step ARPES simulations in Bi2212 wherein the photoemission process is modelled realistically by taking into account the nature of the initial and final state crystal wavefunctions as well as the multiple scattering effects in the presence of a specific surface termination. We focus on the nature of the ARPES feature arising from the $`CuO_2`$ plane bands, and consider in particular its spectral weight as a function of $`k_{\parallel }`$ as well as the energy and polarization of the incident photons. Large variations in the spectral weights predicted theoretically along three different high symmetry directions are in good accord with the corresponding measurements. A good agreement between theory and experiment is also seen with regard to changes in spectral weights with photon energy around the $`\overline{M}`$-point, as well as along the Fermi surface contours in the $`(k_x,k_y)`$ plane for two different polarizations. This study shows clearly that the remarkable observed changes in the ARPES spectral weights in Bi2212 are essentially a matrix element effect and that the importance of matrix elements should be kept in mind in analyzing the ARPES spectra in the high-Tc’s. Another notable implication of this work is that the integral (over energy) of the ARPES intensity does not yield the momentum density of the electron gas.
This work is supported by the US Department of Energy under contract W-31-109-ENG-38, including a subcontract to Northeastern University, and by the allocation of supercomputer time at the NERSC and Northeastern University Advanced Scientific Computation Center (NU-ASCC).

Figure captions

Figure 1: (a): Simulated ARPES intensity in BISCO for $`k_{\parallel }`$ varying between $`\overline{\mathrm{\Gamma }}`$ and $`\overline{M}`$ (bottom to top) at $`h\nu =22eV`$. (b) and (c) show computed variations in the spectral weights (areas under peaks) of the two CuO<sub>2</sub> plane band features A and B with photon energy for different $`k_{\parallel }`$-values, based on a crystal potential without Bi-O hole pockets at $`\overline{M}`$. A small value of the initial and final state damping parameters is used to highlight spectral features.

Figure 2: Theoretical weights obtained by integrating the $`h\nu =22eV`$ ARPES spectra over the binding energy range of 0-500 meV are compared with the corresponding experimental results . Curves for the three symmetry lines (Brillouin zone in inset) are offset vertically for clarity; $`k_{\parallel }`$ is given in relative units such that the distance from $`\mathrm{\Gamma }`$ to X, Y or M is defined to be unity for each direction. Theoretical values are normalized to match the experimental value at the maximum around $`k_{\parallel }\approx 0.8`$ in the $`\overline{\mathrm{\Gamma }}\overline{M}`$ curve.

Figure 3: The color plots give simulated ARPES intensities for emission from $`E_f`$ at $`h\nu =22`$ eV for two different polarizations (white arrows) of the incident light. The CuO<sub>2</sub> plane band sheets are marked A and B. The intensity (area under peak) of the outer plane band A is shown in (a1)-(c1) as a function of the angles $`\alpha `$, $`\beta `$ and $`\gamma `$. Experimental data after Ref. . Theory normalized around $`\alpha \approx 0`$ as shown in (a1).

Figure 4: Spectral weight (over a 500 meV window) of the feature at the $`\overline{M}`$-point is compared with the corresponding experimental results as a function of the photon energy. Theory normalized to experiment around 21 eV.
## 1 INTRODUCTION In the recent paper , Adler, Nemenman, Overduin and Santiago criticize a limit on the measurability of distances which was originally derived by Salecker and Wigner in the 1950s . If correct, this criticism would have implications for all the recent papers which have used in one way or another the celebrated Salecker-Wigner study. In particular, some of the quantum-gravity ideas that can be tested using the interferometry-based experiments I proposed in Refs. are motivated by the Salecker-Wigner limit; moreover, the Salecker-Wigner limit is the common ingredient (even though this ingredient was used in very different ways and from very different viewpoints ) of several recent studies concerning possible limitations on the measurability of distances or limitations in the “tightness” achievable in the operative definition of a network of geodesics. I show here that the analysis reported in Ref. is incorrect. It relies on assumptions which cannot be justified in the framework set up by Salecker and Wigner (while the same assumptions would be reasonable in the context of certain measurements using rudimentary experimental setups). In particular, contrary to the claim made in Ref. , the source of $`\sqrt{T_{obs}}`$ uncertainty (with $`T_{obs}`$ denoting the time of observation in a sense which will be clarified in the following) that was considered by Salecker and Wigner cannot be truly eliminated; unsurprisingly, it can only be traded for another source of $`\sqrt{T_{obs}}`$ uncertainty. The analysis reported in Ref. also handles inadequately the idealized concept of “clock” relevant for the type of “in-principle analysis” discussed by Salecker and Wigner. In addition to this incorrect criticism of the limit derived by Salecker and Wigner, Ref. also misrepresented the role of the Salecker-Wigner limit in providing motivation for the mentioned proposal of interferometry-based space-time-foam studies. The reader unfamiliar with the relevant literature would come out of reading Ref. with the impression that such interferometry-based tests could only be sensitive to quantum-gravity ideas motivated by the Salecker-Wigner limit; instead<sup>2</sup>This was already discussed in detail in Ref. , which appeared six months before Ref. but was not mentioned (or taken into account in any other way) in Ref. . only some of the quantum-gravity ideas that can be probed with modern interferometers are motivated by the Salecker-Wigner limit. The bulk of the insight we can expect from such interferometric studies concerns the stochastic properties of “foamy” models of space-time, which are intrinsically interesting independently of the Salecker-Wigner limit. I shall articulate this Letter in sections, each making one conceptually-independent and simple point. I start in the next Section 2 by reviewing which type of ideas concerning stochastic properties of “foamy” models of space-time can be tested with modern interferometers. From the discussion it will be clear that interest in these “foamy” models of space-time is justified quite independently of the Salecker-Wigner limit (in fact, this limit will not even be mentioned in Section 2). Section 2 is perhaps the most important part of this Letter, since its primary objective is the one of making sure that experimentalists do not lose interest in the proposed interferometry-based tests as a result of the confusion generated by Ref. .
The remaining sections do concern the Salecker-Wigner limit, reviewing some relevant results and clarifying various incorrect statements provided in Ref. . Section 3 briefly reviews the argument put forward by Salecker and Wigner. Section 4 emphasizes that the Salecker-Wigner limit is obtained in ordinary quantum mechanics, but it can provide motivation for a certain type of ideas concerning quantum properties of space-time. The nature of the idealized clock relevant for the type of analysis performed by Salecker and Wigner is discussed in Section 5, also clarifying in which sense some comments on this clock that were made in Ref. are incorrect. Section 6 clarifies how the potential well considered in Ref. would simply trade one source of $`\sqrt{T_{obs}}`$ uncertainty for another source of $`\sqrt{T_{obs}}`$ uncertainty. In Section 7 I clarify that the comments on decoherence of the clock presented in Ref. would not apply to the Salecker-Wigner setup. Section 8 is devoted to some closing remarks. ## 2 FOAMY SPACE-TIME AND MODERN INTERFEROMETERS A prediction of nearly all approaches to the unification of gravitation and quantum mechanics is that at very short distances the sharp classical concept of space-time should give way to a somewhat “fuzzy” (or “foamy”) picture, possibly involving virulent geometry fluctuations (sometimes intuitively/heuristically associated with virtual black holes and wormholes). This subject originates from observations made by Wheeler and Hawking and has developed into a rather vast literature. Examples of recent proposals in this area (and good starting points for a literature search) can be found in Refs. , which explored possible implementations/consequences of space-time foam ideas in various versions of quantum gravity, and in Refs. , which performed similar studies in an approach based on non-critical strings. Although the idea of space-time foam appears to have significantly different incarnations in different quantum-gravity approaches, a general expectation that emerges from this framework is that the distance between two bodies “immersed” in the space-time foam would be affected by (quantum-gravity-induced) fluctuations. A phenomenological model of fluctuations affecting a quantum-gravity distance must describe the underlying stochastic processes. As explained in detail in Refs. , from the point of view of comparison with data obtainable with modern interferometers the best way to characterize such models is through the associated amplitude spectral density of distance fluctuations . A natural starting point for the parametrization of this amplitude spectral density is given by<sup>3</sup>Of course, a parametrization such as the one in Eq. (1) could only be valid for frequencies $`f`$ significantly smaller than the Planck frequency $`c/L_p`$ and significantly larger than the inverse of the time scale over which the classical geometry of the space-time region where the experiment is performed manifests significant curvature effects. $`S(f)=f^{-\beta }(L_\beta )^{\frac{3}{2}-\beta }c^{\beta -\frac{1}{2}},`$ (1) where $`c`$ is the speed-of-light constant, the dimensionless parameter $`\beta `$ carries the information on the nature of the underlying stochastic processes and the dimensionful (length) parameter $`L_\beta `$ carries the information on the magnitude and rate of the fluctuations<sup>4</sup>I am assigning an index $`\beta `$ to $`L_\beta `$ just in order to facilitate a concise description of experimental bounds.
For example, if data were to rule out the fluctuations scenario with, say, $`\beta =0.6`$ for all values of the effective length scale down to, say, $`10^{-27}m`$ one could simply write the formula $`L_{\beta =0.6}<10^{-27}m`$.. A detailed discussion of the definition and applications of this type of amplitude spectral density can be found in Ref. . For readers unfamiliar with the use of amplitude spectral densities some useful intuition can be obtained from the fact that the standard deviation of the fluctuations is formally related to $`S(f)`$ by $`\sigma ^2={\displaystyle \int _{1/T_{obs}}^{f_{max}}}[S(f)]^2𝑑f,`$ (2) where $`T_{obs}`$ is the time over which the distance is kept under observation. In Eq. (1) the parameter $`\beta `$ could in principle take any value, and it is even quite plausible that in reality the stochastic processes (if at all present) would have a more complex structure than the simple power law codified in Eq. (1). Still, Eq. (1) appears to be the natural starting point for a phenomenological programme of exploration of the possibility of “distance-fuzziness” effects induced by quantum properties of space-time. In particular, it seems natural to devote special attention to values of $`\beta `$ in the range $`1/2\le \beta \le 1`$; in fact, as explained in greater detail in Refs. , $`\beta =1/2`$ is the type of behaviour one would expect in fuzzy space-times without quantum decoherence (without “information loss”), while the case $`\beta =1`$ provides the simplest model of stochastic (quantum) fluctuations of distances, in which a distance is affected by completely random minute (possibly Planck-length size) fluctuations which can be modeled as stochastic processes of random-walk type. Values of $`\beta `$ somewhere in between the cases $`\beta =1/2`$ and $`\beta =1`$ could provide a rough model of space-times with decoherence effects somewhat milder than the $`\beta =1`$ random-walk case. In other words, in light of the realization that foamy space-times without decoherence would only be consistent with distance fluctuations of type $`\beta =1/2`$, the popular arguments that support quantum-gravity-induced deviations from quantum coherence motivate interest in values of $`\beta `$ somewhat different from $`1/2`$. Readers unfamiliar with the subject can get an intuitive picture of the relation between the value of $`\beta `$ and decoherence by resorting again to Eq. (2). For example, as discussed in greater detail in Ref. , the case $`\beta =1`$ corresponds to $`\sigma \propto \sqrt{T_{obs}}`$, the standard deviation characteristic of random-walk processes, and this type of $`T_{obs}`$-dependence would be consistent with decoherence in the sense that the information stored in a network of distances would degrade over time<sup>5</sup>For example, an observer could store “information” in a network of bodies by adjusting their distances to given values at a given initial time. If space-time did involve distance fluctuations with standard deviation that grows with the time of observation, there would be an intrinsic mechanism for this information to degrade over time. Other intuitive descriptions of the relation between certain fuzzy space-times and decoherence can be found in Ref. . Depending on the reader’s background it might also be useful to adopt the language of the “memory effect”, as done, for example, in Ref. .. Similar observations, but with weaker power-law dependence on $`T_{obs}`$, hold for values of $`\beta `$ in the range $`1/2<\beta <1`$.
In the limiting case $`\beta =1/2`$ the $`T_{obs}`$-dependence turns from power-law to logarithmic, and this is of course the closest one can get to modeling space-times without intrinsic decoherence (i.e. such that the associated standard deviation is $`T_{obs}`$-independent) within the parametrization set up in Eq. (1)<sup>6</sup>As explained in Refs. and reviewed here below, we are still very far from being able to test the type of fuzziness one might expect for space-times without decoherence. It is therefore at present quite sufficient to model this type of fuzziness by taking $`\beta =1/2`$ in (1). Readers with an academic interest in seeing a more complete description of stochastic processes plausible for a space-time without decoherence can consult Ref. .. As observed in Ref. , independent support for a fuzzy picture of space-time of the type here being considered comes from recent studies suggesting that space-time foam might induce a deformation of the dispersion relation that characterizes the propagation of the massless particles used as space-time probes in the operative definition of distances. Such a deformation of the dispersion relation would affect the measurability of distances just in the way expected for a fuzzy picture of space-time of the type here being considered. In general the connection between loss of quantum coherence and a foamy/fuzzy picture of space-time is very deep and has been discussed in numerous publications (a sample of recent ideas in this area can be found in Refs. ). However, while a substantial amount of work has been devoted to the “physics case” for quantum-gravity-induced decoherence, enormous difficulties have been encountered in developing a satisfactory formalism for this type of quantum gravity. The primary obstruction for the search of the correct decoherence-encoding formalism is the fact that a new mechanics would be needed (ordinary quantum mechanics evolves pure states into pure states) and the identification of such a new mechanics in the absence of any guidance from experiments is extremely hard. It is in this context that a phenomenology based on the parametrization (1) finds its motivation. When a satisfactory workable formalism implementing the intuition on quantum-gravity-induced decoherence becomes available, we will be in a position to extract from it a specific form of the stochastic processes characterizing the associated foamy space-time, with a definite prediction for $`S(f)`$. While waiting for these developments on the theoretical-physics side we might get some help from experiments; in fact, as observed in Refs. , the remarkable sensitivity of modern interferometers (the ones whose primary objective is the detection of the classical-gravity phenomenon of gravity waves ) allows us to put significant bounds on the parameters of Eq. (1). While it is remarkable that some candidate quantum-gravity phenomena are within reach of doable experiments, it is instead quite obvious that interferometers would be the natural in-principle tools for the study of distance fluctuations. In fact, the operation of interferometers is based on the detection of minute changes in the positions of some test masses (relative to the position of a beam splitter), and, if these positions were affected by quantum fluctuations of the type discussed above, the operation of interferometers would effectively involve an additional source of noise due to quantum gravity .
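Before turning to the data, it may help to see the parametrization (1)-(2) evaluated numerically; the following is a minimal sketch (the choice $`L_\beta =L_p`$ and the frequency cutoff $`f_{max}`$ are illustrative assumptions, not values taken from any analysis):

```python
# Numerical evaluation of Eqs. (1)-(2): accumulated standard deviation of
# distance fluctuations for sample values of beta, with L_beta set to the
# Planck length purely for illustration.
import numpy as np

c = 3.0e8      # m/s
L_p = 1.6e-35  # m, Planck length

def S(f, beta, L_beta):
    """Eq. (1): S(f) = f^(-beta) * L_beta^(3/2 - beta) * c^(beta - 1/2)."""
    return f ** (-beta) * L_beta ** (1.5 - beta) * c ** (beta - 0.5)

def sigma(beta, L_beta, T_obs, f_max=1.0e5, n=4000):
    """Eq. (2): sigma^2 = integral_{1/T_obs}^{f_max} [S(f)]^2 df."""
    f = np.logspace(np.log10(1.0 / T_obs), np.log10(f_max), n)
    return np.sqrt(np.trapz(S(f, beta, L_beta) ** 2, f))

for beta in (0.5, 5.0 / 6.0, 1.0):
    print(f"beta={beta:.3f}:",
          ["%.1e m" % sigma(beta, L_p, T) for T in (1e-3, 1.0, 3.2e7)])
print("S(450 Hz; beta=1, L_p) = %.1e m/sqrt(Hz)" % S(450.0, 1.0, L_p))
# beta=1 reproduces the random-walk behaviour sigma ~ sqrt(L_p c T_obs), which
# grows with T_obs; beta=1/2 grows only logarithmically. The last line
# (~1.5e-16 m/sqrt(Hz)) is the quantity to be compared with the interferometer
# noise levels discussed next.
```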
The data obtained at the Caltech 40-meter interferometer, which in particular achieved displacement noise levels with amplitude spectral density of about $`3\cdot 10^{-19}m/\sqrt{Hz}`$ in the neighborhood of $`450`$ $`Hz`$, allow us to set the bound $`[L_\beta ]_{Caltech}<\left[{\displaystyle \frac{3\cdot 10^{-19}m}{\sqrt{Hz}}}(450Hz)^\beta c^{(1-2\beta )/2}\right]^{2/(3-2\beta )}.`$ (3) In order to get some intuition for the significance of this bound let us consider the case $`\beta =1`$. For $`\beta =1`$ the bound in Eq. (3) takes the form $`L_{\beta =1}<10^{-40}m`$. This is quite impressive since $`\beta =1`$, $`L_{\beta =1}\sim 10^{-35}m`$ corresponds to fluctuations in the 40-meter arms of the Caltech interferometer that are of Planck-length magnitude ($`L_p\sim 10^{-35}m`$) and occur at a rate of one per each Planck-time interval ($`t_p=L_p/c\sim 10^{-44}s`$). The data obtained at the Caltech 40-meter interferometer therefore rule out this simple model in spite of the minuteness (Planck-length!!) of the fluctuations involved. Another intuition-building observation concerning the significance of this result is obtained by considering the standard deviation $`\sigma \sim \sqrt{L_pcT_{obs}}`$ which would correspond to such Planck-length fluctuations occurring at $`1/t_p`$ rate. From $`\sigma \sim \sqrt{L_pcT_{obs}}`$ one predicts fluctuations with standard deviation even smaller than $`10^{-5}m`$ on a time of observation as large as $`10^{10}`$ years (the size of the whole observable universe is about $`10^{10}`$ light years!!) but in spite of their minuteness these can be ruled out exploiting the remarkable sensitivity of modern interferometers. Additional comments on values of $`\beta `$ in the range $`1/2<\beta <1`$ can be found in Refs. (in Ref. the reader will find a detailed discussion of the case $`\beta =5/6`$). In the present Letter it suffices to observe that the bound encoded in Eq. (3) becomes less stringent as the value of $`\beta `$ decreases. In particular, in the limit $`\beta =1/2`$, the case providing an effective model for space-times without intrinsic decoherence, Eq. (3) only implies $`L_{\beta =1/2}<10^{-17}m`$, which is still very comfortably consistent with the natural expectation that within that framework one would have $`L_{\beta =1/2}\sim L_p\sim 10^{-35}m`$. In this section I have in no way considered the statements on the Salecker-Wigner limit reported in Ref. . As anticipated in the Introduction, I have opened the paper with this section briefly summarizing the status of interferometry-based studies of distance fuzziness. The fact that the Salecker-Wigner limit was not even mentioned in this section should however clarify that, contrary to the impression one gets from reading Ref. , these interferometric studies are intrinsically interesting, quite independently of any consideration concerning the Salecker-Wigner limit. This is already clear at least to a portion of the community; for example, in recent work on foamy space-times (without any reference to the Salecker-Wigner related literature) the type of modern-interferometer sensitivity exposed in Refs. was used to constrain certain novel candidate light-cone-broadening effects . The brief review provided in this section should also clarify in which sense another statement provided in Ref. is misleading. It was in fact stated in Ref.
that, since the sensitivity of modern interferometers is at the level<sup>7</sup>For example, planned interferometers with arm lengths of a few $`Km`$ expect to detect gravity waves of amplitude as low as $`3\cdot 10^{-22}`$ (at frequencies of about $`100Hz`$). This roughly means that these modern gravity-wave interferometers should monitor the (relative) positions of their test masses (the beam splitter and the mirrors) with an accuracy of order $`10^{-18}m`$. of $`10^{-18}m`$, any quantum-gravity model tested by such interferometers should predict a breakdown of the classical space-time picture on distance scales of order $`10^{-18}m`$. Let me illustrate in which sense this statement misses the substance of the proposed tests by taking again as an example the one with $`\beta =1`$, which allows an intuitive discussion in terms of simple random-walk processes. We have seen that this can describe fluctuations of Planck-length magnitude occurring at $`1/t_p`$ rate. All the scales involved in the stochastic picture are at the $`10^{-35}m`$ scale, but we can rule out this scenario using a “$`10^{-18}m`$ machine” because this machine operates at frequencies of order a few hundred $`Hz`$ (which correspond to time scales of order a few milliseconds) and therefore is effectively sensitive to the collective effect of a very large number of minute Planck-scale effects (e.g., in the simple random-walk case, during a time of a few milliseconds as many as $`10^{41}`$ Planck-length fluctuations would affect the arms of the interferometer). This is not different from other similar experiments probing fundamental physics. For example, proton-decay experiments use protons at rest (objects of size $`10^{-16}m`$) to probe physics on distance scales of order $`10^{-32}m`$ (the conjectured size of gauge bosons mediating proton decay), and this is done by monitoring a very large number of protons so that the apparatus is sensitive to a collective effect which is much larger than the decay probability of each individual proton. A similar idea has already been exploited in “quantum-gravity phenomenology” ; in fact, the experiment proposed in Ref. is possible only because the photons that reach us from distant astrophysical sources have traveled for such a long time that they are in principle sensitive to the collective effect of a very large number of interactions with the space-time foam. ## 3 THE SALECKER-WIGNER LIMIT IN ORDINARY QUANTUM MECHANICS Having clarified what part of the motivation for interferometric studies is completely independent of the Salecker-Wigner limit I have two remaining tasks: the one of providing a brief review of the Salecker-Wigner limit and the one of correcting the incorrect statements on the Salecker-Wigner limit which were given in Ref. . Let me start by considering the original Salecker-Wigner limit within ordinary quantum mechanics. The analysis reported by Salecker and Wigner in Ref. concerns the measurability of distances. In particular, they considered the measurement of the distances defined by the network of free-falling bodies that might compose an idealized “material reference system” . Those who have been developing the research line started by Salecker and Wigner have also considered more general distance measurements, but the emphasis has remained on measurement analyses that might provide intuition on the way in which distances could be in principle operatively defined in quantum gravity. The essence of the Salecker-Wigner argument can be summarized as follows.
They “measured” (in the “gedanken” sense) the distance $`D`$ between two bodies by exchanging a light signal between them. The measurement procedure requires attaching<sup>8</sup>Of course, for consistency with causality, in such contexts one assumes devices to be “attached non-rigidly,” and, in particular, the relative position and velocity of their centers of mass continue to satisfy the standard uncertainty relations of quantum mechanics. a light-gun (i.e. a device capable of sending a light signal when triggered), a detector and a clock to one of the two bodies and attaching a mirror to the other body. By measuring the time $`T_{obs}`$ (time of observation) needed by the light signal for a two-way journey between the bodies one also obtains a measurement of the distance $`D`$. For example, in flat space and neglecting quantum effects one simply finds that $`D=cT_{obs}/2`$. Unlike most conventional measurement analyses, Salecker and Wigner were concerned with the quantum properties of the devices involved in the measurement procedure. In particular, since they were considering a distance measurement, it was clear that quantum uncertainties in the position (relative to, say, the center of mass of the two bodies whose distance is being measured) of some of the devices involved in the measurement procedure would translate into uncertainties in the overall measurement of $`D`$. Importantly, the analysis of these device-induced uncertainties leads to a lower bound on the measurability of $`D`$. To see this it is sufficient to consider the contribution to $`\delta D`$ coming from only one of the quantum uncertainties that affect the motion of the devices. In Ref. (and in the more recent studies reported in Refs. ) the analysis focused on the uncertainty in the position of the Salecker-Wigner clock, while in some of my related studies the analysis focused on the uncertainties that affect the motion of the center of mass of the system composed by the light-gun, the detector and the clock. These approaches are actually identical, since (as I shall discuss in greater detail later) the Salecker-Wigner clock is conceptualized as a device not only capable of keeping track of time but also capable of sending and receiving signals; it is therefore a composite device including at least a clock, a transmitter and a receiver. Moreover, the substance of the argument does not depend very sensitively on which position is considered, as long as it is associated with a device whose position must be known over the whole time required by the measurement procedure. For definiteness, let me here proceed denoting with $`x^{*}`$ and $`v^{*}`$ the position and the velocity of an idealized Salecker-Wigner clock. Assuming that the experimentalists prepare this device in a state characterised by uncertainties $`\delta x^{*}`$ and $`\delta v^{*}`$, one easily finds $`\delta D\ge \delta x^{*}+T_{obs}\delta v^{*}\ge \delta x^{*}+\left({\displaystyle \frac{1}{M_b}}+{\displaystyle \frac{1}{M_d}}\right){\displaystyle \frac{\mathrm{}T_{obs}}{2\delta x^{*}}},`$ (4) where $`M_b`$ is the sum of the masses of the two bodies whose distance is being measured, $`M_d`$ is the mass of the device being considered (e.g., the mass of the clock) and I also used the fact that Heisenberg’s Uncertainty Principle implies $`\delta x^{*}\delta v^{*}\ge (1/M_b+1/M_d)\mathrm{}/2`$.
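The passage from (4) to the bound quoted next involves minimizing over the preparation uncertainty $`\delta x^{*}`$; the following is a minimal worked version of this implicit step:

```latex
% Minimizing the right-hand side of Eq. (4) over the free parameter \delta x^{*}:
% a function of the form f(\delta x^{*}) = \delta x^{*} + a/\delta x^{*} attains
% its minimum 2\sqrt{a} at \delta x^{*} = \sqrt{a}. With
% a = (1/M_b + 1/M_d)\,\hbar T_{obs}/2 this gives
\delta D \;\ge\; \min_{\delta x^{*}}\left[\,\delta x^{*}
   + \left(\frac{1}{M_b}+\frac{1}{M_d}\right)\frac{\hbar T_{obs}}{2\,\delta x^{*}}\right]
   \;=\; 2\sqrt{\frac{\hbar T_{obs}}{2}\left(\frac{1}{M_b}+\frac{1}{M_d}\right)}\,,
% which, up to the inessential overall factor of 2, is the bound (5) quoted below.
```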
\[The reduced mass $`(1/M_b+1/M_d)^{-1}`$ is relevant for the relative motion of the clock with respect to the position of the center of mass of the system composed by the two bodies whose distance is being measured.\] Evidently, from (4) it follows that for given $`M_b`$ and $`M_d`$ there is a lower bound on the measurability of $`D`$ $`\delta D\ge \sqrt{{\displaystyle \frac{\mathrm{}T_{obs}}{2}}\left({\displaystyle \frac{1}{M_b}}+{\displaystyle \frac{1}{M_d}}\right)}.`$ (5) The result (5) may at first appear somewhat puzzling, since ordinary quantum mechanics should not limit the measurability of any given observable. \[It only limits the combined measurability of pairs of conjugate observables.\] However, from a physical/phenomenological and conceptual viewpoint it is well understood that the proper framework for the application of the formalism of quantum mechanics is the description of the results of measurements performed by classical devices (devices that can be treated as approximately classical within the level of accuracy required by the measurement). It is therefore not surprising that the infinite-mass (classical-device<sup>9</sup>A rigorous definition of a “classical device” is beyond the scope of this Letter. However, it should be emphasized that the experimental setups being here considered require the devices to be accurately positioned during the time needed for the measurement, and therefore an ideal/classical device should be infinitely massive so that the experimentalists can prepare it in a state with $`\delta x\delta v\sim \mathrm{}/M\rightarrow 0`$.) limit turns out to be required in order to bridge the gap between (5) and the prediction $`min\delta D=0`$ of the formalism of ordinary quantum mechanics.<sup>10</sup>Perhaps more troubling is the fact that $`min\delta D=0`$ appears to require not only an infinitely large $`M_d`$ but also an infinitely large $`M_b`$. One feels somewhat uncomfortable treating the mass of the bodies whose distance is being measured as a parameter of the apparatus. This might be another pointer to the fact that quantum measurement of gravitational/geometric observables requires a novel conceptualization of quantum mechanics. I postpone the consideration of this point to future work. This section on the Salecker-Wigner limit has not taken into account the gravitational properties of the devices: it has been strictly confined within ordinary (non-gravitational) quantum mechanics. Actually, one can interpret the Salecker-Wigner limit as one way to render manifest the true nature of the physical applications of the quantum-mechanics formalism and its relation with a certain class of experiments (the ones performed by classical devices). The picture emerging from the analysis of Salecker and Wigner fits well within a general picture emerging from other similar studies. In particular, the celebrated Bohr-Rosenfeld analysis of the measurability of the electromagnetic field found that the accuracy allowed by the formalism of ordinary quantum mechanics could only be achieved using a very special type of device: idealized test particles with vanishing ratio between electric charge and inertial mass. ## 4 FROM THE SALECKER-WIGNER LIMIT TO QUANTUM GRAVITY Let me now take the Salecker-Wigner limit as starting point for a quantum-gravity argument. I will therefore now not only consider the quantum properties of the devices, but also their gravitational properties. It is well understood (see, e.g., Refs.
) that the combination of the gravitational properties and the quantum properties of devices can have an important role in the analysis of the operative definition of gravitational observables. Actually, by ignoring the way in which the gravitational properties and the quantum properties of devices combine in measurements of geometry-related physical properties of a system one misses some of the fundamental elements of novelty we should expect for the interplay of gravitation and quantum mechanics; in fact, one would be missing an element of novelty which is deeply associated with the Equivalence Principle. For example, attempts to generalize the mentioned Bohr-Rosenfeld analysis to the study of gravitational fields (see, e.g., Ref. ) are of course confronted with the fact that the ratio between gravitational “charge” (mass) and inertial mass is fixed by the Equivalence Principle. While ideal devices with vanishing ratio between electric charge and inertial mass can be considered at least in principle, devices with vanishing ratio between gravitational mass and inertial mass are not admissible in any (however formal) limit of the laws of gravitation. This observation provides one of the strongest elements in support of the idea that the mechanics on which quantum gravity is based must not be exactly the one of ordinary quantum mechanics. In turn this contributes to the whole spectrum of arguments that support the expectation that the loss of quantum coherence might be intrinsic in quantum gravity. Similar support for quantum-gravity-induced decoherence emerges from taking into account both gravitational and quantum properties of devices in the analysis of the Salecker-Wigner measurement procedure. The conflict with ordinary quantum mechanics immediately arises because the infinite-mass limit is in principle inadmissible for measurements concerning gravitational effects. As the devices get more and more massive they increasingly disturb the gravitational/geometrical observables, and well before reaching the infinite-mass limit the procedures for the measurement of gravitational observables cannot be meaningfully performed . These observations, which render inaccessible the limit of vanishingly small right-hand side of Eq. (5), provide motivation for the possibility that in quantum gravity there be a $`T_{obs}`$-dependent intrinsic uncertainty in any measurement that monitors a distance $`D`$ for a time $`T_{obs}`$. Gravitation forces us to renounce the idealization of infinitely-massive devices and this in turn forces us to deal with the element of decoherence encoded in the fact that measurements requiring longer times of observation are intrinsically/fundamentally affected by larger quantum uncertainty. It is important to realize that this element of decoherence found in the analysis of the measurability of distances comes simply from combining elements of quantum mechanics with elements of classical gravity. As it stands it is not to be interpreted as a genuine quantum-gravity effect, but of course this argument based on the Salecker-Wigner limit provides motivation for the exploration of the possibility that quantum gravity might accommodate this type of decoherence mechanism at the fundamental level. In the analysis of the Salecker-Wigner setup the $`T_{obs}`$ dependence is not introduced at the fundamental level; it is a derived property emerging from the postulates of gravitation and quantum mechanics.
However, it is plausible that quantum gravity, as a fundamental theory of space-time, might accommodate this type of bound at the fundamental level (e.g., among its postulates or as a straightforward consequence of the correct short-distance picture of space-time). It is through this (plausible, but, of course, not self-evident) argument that the Salecker-Wigner limit provides additional motivation for the interferometric studies discussed in Section 2. The element of decoherence encoded in the stochastic models of fuzzy space-time is quite consistent with the type of decoherence mechanism suggested by the analysis of the Salecker-Wigner measurement procedure. One could see the Wheeler-Hawking picture of an “active” quantum-gravity vacuum and the measurability bound suggested by the analysis of the Salecker-Wigner measurement procedure as independent arguments in support of distance fuzziness of the type here reviewed in Section 2. Of course, the intuition associated to the arguments of Wheeler, Hawking and followers is more fundamental and has wider significance, but the analysis of the Salecker-Wigner measurement procedure has the advantage of allowing one to develop (however heuristic) arguments in support of one or another form of fuzziness, whereas the lack of explicit models providing a satisfactory implementation of the Wheeler-Hawking intuition forces one to adopt parametrizations as general as the one in Eq. (1). From this point of view, arguments based on the Salecker-Wigner measurement procedure can play a role similar to the one played by the arguments based on quantum-gravity-induced deformations of dispersion relations, which, as already mentioned in Section 2, can also be used to support specific corresponding models of fuzziness (values of $`\beta `$) within the class of models parametrized in Eq. (1). Let me devote the rest of this section to some of the arguments based on analyses of the Salecker-Wigner measurement procedure that provide support for one or another form of distance fuzziness. As observed in Refs. a particular value of $`\beta `$ can be motivated by arguing in favour of a corresponding explicit form of the $`T_{obs}`$ dependence of the bound on the measurability of distances. Let me here emphasize that the robust part of the quantum-gravity argument based on the analysis of the Salecker-Wigner measurement procedure only allows one to conclude that the $`T_{obs}`$ dependence cannot be eliminated, and this is not sufficient for obtaining an explicit prediction for the $`T_{obs}`$-dependent measurability bound. A robust derivation of such an explicit formula would require one to have available the correct quantum gravity and derive from it whatever quantity turns out to play effectively the role of the minimum quantum-gravity-allowed value of $`M_b^{-1}+M_d^{-1}`$. Since quantum gravity is not available to us, we can only attempt intuitive/heuristic answers to questions such as: should quantum gravity host such an effective minimum value of $`M_b^{-1}+M_d^{-1}`$? how small could this effective minimum value of $`M_b^{-1}+M_d^{-1}`$ be? could this minimum value depend on $`T_{obs}`$? could it depend on the distance scales being probed? These questions are discussed in detail in Refs. . For the objectives of the present Letter it is important to discuss explicitly in which sense one is seeking answers to these questions.
In seeking these answers one is trying to get an intuition for the fundamental conceptual structure of quantum gravity, and therefore one considers the measurement procedure from a viewpoint that would seem appropriate for the definition of distances possibly as short as the Planck length. \[Some authors (quite reasonably) would also expect quantum gravity to accommodate some sort of operative definition of space-time based on a network of material-particle (possibly minute clocks) worldlines.\] It is from these viewpoints that one must approach the questions raised by analyses of the Salecker-Wigner setup. As will be discussed in the next three sections, one is led to very naive conclusions by adopting instead a conventional viewpoint based on the intuition that comes from present-day rudimentary (from a Planck-length perspective) experimental setups. The logic of the line of research started by the work of Salecker and Wigner is the one of applying the language/structures we ordinarily use in physical contexts we do understand to contexts that instead seem to lie in the realm of quantum gravity, hoping that this might guide us toward some features of the correct quantum gravity. We already know the answers to the above questions within ordinary gravitation and quantum mechanics, and therefore an exercise such as the one reported in Ref. could not possibly teach us anything. It is instead at least plausible that we get a glimpse of a true property of quantum gravity by exploring the consequences of removing one of the elements of the ordinary conceptual structure of quantum mechanics. The Salecker-Wigner study (just like the Bohr-Rosenfeld analysis) suggests that among these conceptual elements of quantum mechanics the one that is most likely (although there are of course no guarantees) to succumb to the unification of gravitation and quantum mechanics is the requirement for devices to be treated as classical. Removal of this requirement appears to guide us toward some candidate properties of quantum gravity (not of the ordinary laws of gravitation and quantum mechanics!), which we can then hope to test directly in the laboratory (as in some cases is actually possible ). I shall go back to these important points in the next three sections, but before I do that let me just briefly summarize the outcome of two simple attempts to extract quantum-gravity intuition from the conceptual framework set up by Salecker and Wigner. One of these approaches I have developed in Refs. . It is based on the simple observation that if in quantum gravity the effective minimum value of $`M_b^{-1}+M_d^{-1}`$ was $`T_{obs}`$-independent and $`\delta D`$-independent, say $`min(M_b^{-1}+M_d^{-1})=[max(M^{*})]^{-1}\equiv cL_{QG}/\mathrm{}`$, we would then get a bound on the measurability of distances which goes like $`\sqrt{T_{obs}}`$ $`\delta D\ge \sqrt{{\displaystyle \frac{\mathrm{}T_{obs}}{2max(M^{*})}}}=\sqrt{{\displaystyle \frac{cT_{obs}L_{QG}}{2}}},`$ (6) and would therefore be suggestive of random-walk stochastic processes. I also observed that, if this effective $`max(M^{*})`$ of quantum gravity could still be interpreted as some maximum mass of the devices used in the measurement procedure, the value of $`max(M^{*})`$ could be bounded by the observation that in order to allow the measurement procedure to be performed these devices should at least be light enough not to turn into black holes.
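This black-hole observation can be made explicit at the order-of-magnitude level; the following is a sketch of the implicit estimate (a Schwarzschild-type condition is assumed, and factors of order unity are not kept track of):

```latex
% A device of linear size s^{*} avoids being a black hole only if (up to
% factors of order unity) its mass satisfies  max(M^{*}) \lesssim s^{*}c^{2}/G .
% Inserting this in Eq. (6), and using L_p^{2} = \hbar G/c^{3}:
\delta D \;\gtrsim\; \sqrt{\frac{\hbar T_{obs}}{2\,\max(M^{*})}}
  \;\gtrsim\; \sqrt{\frac{\hbar G\,T_{obs}}{s^{*}c^{2}}}
  \;=\; \sqrt{\frac{L_p^{2}\,c\,T_{obs}}{s^{*}}}\,,
% which is the bound (7) given below, and corresponds to L_{QG} \sim L_p^{2}/s^{*}.
```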
This allows one to trade the effective mass scale $`max(M^{*})`$ for an effective length scale $`s^{*}`$ which would be the maximum effective size<sup>11</sup>From the viewpoint clarified above it is natural to envision that this length scale $`s^{*}`$ would be a fundamental scale of quantum gravity. Instead of introducing a dedicated scale for it one could be tempted to consider the possibility that $`s^{*}`$ be identified with the only known quantum-gravity scale $`L_p`$, even though this would render somewhat daring the possible interpretation of $`s^{*}`$ as maximum size of the devices involved in the measurement. In a sense more precisely discussed in Refs. , this identification $`s^{*}\sim L_p`$ is already ruled out by the same Caltech data mentioned above . allowed in quantum gravity for the individual devices participating in the measurement procedure: $`\delta D\ge \sqrt{{\displaystyle \frac{L_p^2cT_{obs}}{s^{*}}}}.`$ (7) \[Of course, this whole exercise of trading $`max(M^{*})`$ for $`s^{*}`$ only serves the purpose of giving an alternative intuition for the new length scale $`L_{QG}`$, which can now be seen as related to some effective maximum size of devices $`s^{*}`$ by the equation $`L_{QG}\sim L_p^2/s^{*}`$.\] Another approach to the derivation of a candidate quantum-gravity bound on the measurability of distances from an analysis of the Salecker-Wigner measurement procedure has been developed by Ng and Van Dam . These authors took a somewhat different definition of measurability bound and they also advocated a certain classical-gravity approach to the estimate of $`max(M^{*})`$. The end result was $`\delta D\ge (L_p^2cT_{obs})^{1/3}.`$ (8) In Ref. it was observed that a $`T_{obs}`$-dependence of the type in Eq. (8) would be suggestive of the stochastic space-time model with $`\beta =5/6`$. It is interesting to observe that relations such as (7) and (8) can take the form of $`D`$-dependent bounds on the measurability of $`D`$ by observing that $`D\propto T_{obs}`$ in typical measurement setups. The bounds would be $`\delta D\ge \sqrt{DL_{QG}}\sim \sqrt{DL_p^2/s^{*}}`$ and $`\delta D\ge (DL_p^2)^{1/3}`$ respectively for (7) and (8). ## 5 ON THE SALECKER-WIGNER CLOCK As manifest in the brief review provided in the previous two sections, the Salecker-Wigner limit and the associated intuition concerning quantum properties of space-time is based on an in-principle analysis of the measurement of distances, with emphasis on the nature of the devices used in the measurement procedure. Accordingly, the measurement procedure is only schematically described and only from a conceptual point of view. The devices used in the measurement procedure are also only considered from the point of view of the role that they play in the conceptual structure of the measurement procedure. For example (an example which is relevant for some of the incorrect conclusions drawn in Ref. ), the Salecker-Wigner “clock” is not simply a timing device, but it is to be intended as the network of instruments needed for the “clock” to play its role in the measurement procedure (e.g. instruments needed to trigger the transfer of information from the clock to the rest of the network of devices that form the apparatus or instruments needed to affect the position of the clock in ways needed by the measurement procedure).
This was already very clearly explained in the early works by Salecker and Wigner, which in various points state that the relevant idealized clocks are, for example, capable of sending and receiving signals (they are therefore composite devices including at least a clock, a transmitter and a receiver). It is in this sense that Salecker and Wigner consider the clock. As mentioned, they also had in mind a rough picture in which space-time could be in principle operatively defined by a network of such free-falling clocks, providing a material reference system . If this (as it might well be) were the proper way to obtain an operative definition of space-time, one would obviously be led to consider each of the clocks in the network to be extremely small and light. In general a rather natural intuition is that the ideal clocks to be used in the measurement of a gravitational observable should be very light, in order not to disturb the observed quantity. The same of course holds for all other devices used in a gravitational measurement. How light all these devices should be might depend on the intended scale/sensitivity at which the measurement is performed; for the operative definition of Planckian distances one would expect that, since even tiny disturbances would spoil the measurement, these ideal devices should be very light, but the correct quantum gravity would be needed for a definite answer. The criticism of the Salecker-Wigner limit expressed in Ref. was essentially based on two observations. One of the observations, which I will address in the next two sections, was based on the idea that it might be possible to avoid the $`\sqrt{T_{obs}}`$ dependence characteristic of the Salecker-Wigner limit. The other observation, which I want to address in this section, was based on the fact that the data already available from the Caltech 40-meter interferometer (the same here used in Section 2 to set bounds on simple models of fuzziness) imply that the effective clock mass to be used in the Salecker-Wigner formula would have to be larger than 3 grams, which the authors of Ref. felt to be too high a mass to be believable as a candidate mass of fundamental clocks in Nature. As underlined by the choice of observing that the 3-gram bound is comparable to masses of wristwatch components, this comment and criticism come from taking the Salecker-Wigner clock literally as a somewhat ordinary timing device. This misses completely the point emphasized in the brief review I have given above, i.e. that the role of the effective Salecker-Wigner clock mass cannot be taken literally as the mass of an ordinary timing device: it is a more fundamental effective mass scale characterizing the devices being used (as clearly indicated by the fact that Salecker and Wigner attribute to their conceptualization of a “clock” the capability to transmit, receive and process signals). One must also consider that this idealized clock was conceived as a device needed for a proper operative definition of Planck-scale distances, and therefore there is little to be gained from the intuition of wristwatches and other ordinary timing devices. The comment on the 3-gram bound given in Ref.
also fails to take into account the arguments, which had already appeared in the literature and have been here reviewed in Section 4, concerning the need to interpret the effective mass of the idealized Salecker-Wigner clock as a fundamental but not necessarily universal property of quantum gravity, possibly depending on the type of length scales involved/probed in the experiment (as argued above for the associated effective scale $`max(M^{*})`$). For experiments involving distance scales as large as 40 meters, the result $`max(M^{*})>3grams`$ seems perfectly consistent<sup>12</sup>Perhaps a bound of the type $`max(M^{*})>3grams`$ would instead be surprising if we had found it in experiments defining Planckian distances in the spirit of the type of networks of worldlines considered by Salecker and Wigner (experiments which of course are extremely far in the future if not impossible in principle). Actually, it is quite daring to trust our feeling of “surprise” when venturing so far from our present-day intuition: along the way to the Planck scale we might be forced to change completely our intuition about the natural world. For example, on the subject of the timing devices here of relevance, the interplay between gravitation and quantum mechanics might even provide us with new types of timing devices. (One attempt to construct such new tools is discussed in Ref. and some of the references therein.) with the idea that there should be some absolute bound on $`max(M^{*})`$ in any given quantum-gravity experimental setup. If experiments had given a positive result (say, $`max(M^{*})\sim 2grams`$) it would not have upset anything else we know about the physical world (only the most sensitive interferometers would be sensitive to the effects of a Salecker-Wigner limit with $`max(M^{*})\sim 2grams`$), but at the same time the fact that it was instead found that $`max(M^{*})>3grams`$ in experiments involving distance scales as large as 40 meters should not surprise us, nor is it inconsistent with the arguments put forward by Salecker and Wigner and followers. Because of the present very early stage of development of quantum gravity, we are at the same time looking for the value (if any!) of $`max(M^{*})`$ and looking for an understanding of what is the correct interpretation and the true physical origin of such a bound on $`max(M^{*})`$ in a quantum gravity that would accommodate it at some fundamental level. The points I discussed in this section also clarify, within an explicit example, the sense in which the logic adopted in Ref. is inadequate for the analysis of the conceptual framework set up by Salecker and Wigner. In Ref. the whole discussion of the “Salecker-Wigner clock” remained strictly within the confines of the intuition and the logic of ordinary gravitation and quantum mechanics, where we have nothing to learn. The conceptual framework set up by Salecker and Wigner instead treats the clock in a way which, inasmuch as it renounces the idealization of a classical clock, encodes one plausible departure from the ordinary laws of quantum mechanics that could be induced by the process of unification of gravitation with quantum mechanics. ## 6 ON THE USE OF A POTENTIAL WELL TO REDUCE CLOCK-INDUCED UNCERTAINTY In Ref. the work of Salecker and Wigner was also criticized by arguing that it would be inappropriate to treat the clock as freely moving, as effectively done in the derivation of the Salecker-Wigner limit. We were reminded in Ref.
of the fact that for a clock appropriately bound (say, by some ideal springs) to another object in its vicinity, the uncertainty in the position of the clock with respect to that object would not increase with time, unlike the case of a free clock. This observation completely misses the point of the Salecker-Wigner limit. The uncertainty responsible for the Salecker-Wigner limit comes from the uncertainty in the relative position between the clock and the two bodies whose distance is being measured (say, the distance between the clock and the center of mass of the system composed of the two bodies whose distance is being measured). By binding the clock and an external body in a harmonic potential one would not affect the nature of the Salecker-Wigner analysis. The position of the clock (or, say, the center of mass of the system composed of the clock and the external body) relative to the two bodies whose distance is being measured (or, say, relative to the center of mass of the system composed of the two bodies whose distance is being measured) is still a free coordinate whose uncertainty contributes directly to the uncertainty in our measurement of distance. The uncertainty in this free coordinate will spread according to the formula $`\delta x\gtrsim \sqrt{\frac{\hbar T_{obs}}{2}\left(\frac{1}{M_b}+\frac{1}{M_c+M_{extra}}\right)},`$ (9) where $`M_{extra}`$ is the mass of the mentioned external body. The $`T_{obs}`$ dependence necessary for all the significant implications of the Salecker-Wigner analysis is still with us. Contrary to the claim made in Ref. , by binding the clock and an external body in a harmonic potential, one does not truly eliminate the $`T_{obs}`$-dependent uncertainty: one simply trades one source of $`T_{obs}`$-dependent uncertainty for another, essentially equivalent, source. This simply provides one more example of intuition for the $`max(M^{})`$ discussed in the preceding sections (and in Refs. ), which in this context would be identified with the inverse of $`min\{1/M_b+[1/(M_c+M_{extra})]\}`$. \[In any case, as explained above, $`M^{}`$ would plausibly reflect not only the properties of the devices used for timing but those of the whole set of devices needed for the measurement of distances.\] Whether or not there is a spring binding the clock and an external body, as a result of the analysis of the Salecker-Wigner measurement procedure we are still left with the intuition that some fundamental (although perhaps dependent on the distance scale which is to be measured) value for $`max(M^{})`$ might be a prediction of quantum gravity, and we are still left wondering how large this $`max(M^{})`$ could be. Perhaps when measuring large distances with relatively low accuracy quantum gravity might allow us to take a rather large $`M^{}`$ (which, if so desired, one might effectively describe in the language of Ref. as the possibility to introduce a rather heavy external body to be “attached” to the clock), but as shorter distances are probed the disturbance of a large $`M^{}`$ (or the introduction of heavy bodies to which the clock would be attached) must eventually become unacceptable. This is certainly plausible, but what could be the value of $`max(M^{})`$ for measurements at a given distance scale? 
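As a rough numerical illustration of Eq. (9), here is a minimal sketch (my own addition, not part of the original analysis; the masses and observation time are arbitrary assumed values):

```python
import math

HBAR = 1.054571e-34  # reduced Planck constant [J s]

def position_spread(t_obs, m_b, m_c, m_extra=0.0):
    """Spread of the free relative coordinate after t_obs seconds (Eq. 9).

    m_b: mass of the system of bodies whose distance is measured [kg];
    m_c: clock mass [kg]; m_extra: mass of any external body bound to the clock.
    """
    return math.sqrt(0.5 * HBAR * t_obs * (1.0 / m_b + 1.0 / (m_c + m_extra)))

# Assumed, purely illustrative values: a 3-gram clock, a 1-kg body,
# and an observation time of 0.01 s.
dx_free = position_spread(t_obs=1e-2, m_b=1.0, m_c=3e-3)
dx_bound = position_spread(t_obs=1e-2, m_b=1.0, m_c=3e-3, m_extra=1.0)
print(f"free clock:  {dx_free:.2e} m")   # ~1.3e-17 m for these numbers
print(f"bound clock: {dx_bound:.2e} m")  # smaller, but still grows as sqrt(T_obs)
```

Whatever the specific numbers, the point of Eq. (9) is visible in the sketch: adding $`M_{extra}`$ reduces the prefactor but leaves the $`\sqrt{T_{obs}}`$ growth intact.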
The correct answer of course requires full quantum gravity (because it must reflect the way in which the operative definition of distances is codified in quantum gravity), but we can try to gain some insight by pushing further the experimental bounds on $`max(M^{})`$. Even more complicated at the conceptual level is the search for an analog of $`M^{}`$ in attempts to operatively define a tight (perhaps Planck-length tight) network of geodesic (world) lines, in the spirit of “material reference systems” and of some of the comments found in the work of Salecker and Wigner. Is such a task to be required of quantum gravity? How large/heavy could the clocks suitable for this task be? Wouldn’t it be paradoxical to consider the possibility of attaching these free-falling clocks to some external bodies? As already emphasized in Refs. there are several quite overwhelming open issues, but it seems unlikely that we could gain some insight by extrapolating ad infinitum (as done in Ref. ) from the intuition of measurement-analysis ideas applicable to rudimentary present-day experimental setups. Before closing this section let me comment on another scenario that some readers might be tempted to consider as a modification of the potential-well proposal put forward in Ref. . One might envisage using some springs to connect the clock to one of the bodies (say, body $`A`$) whose distance is being measured, rather than connecting the clock to an external body. This would ensure that the uncertainty in the relative position between the clock and that body $`A`$ does not increase with time, but it is easy to verify that the disturbance this setup would introduce is of the same magnitude as the uncertainty it eliminates. In fact, the system composed of the clock and body $`A`$ would be free. Essentially the uncertainty in the initial momentum and position of the clock relative to the second body (body $`B`$) would now be transferred to body $`A`$ “through the springs”. This would introduce an uncertain disturbance to the distance between body $`A`$ and body $`B`$ that is being measured, and the disturbance is of course of just the same magnitude as the uncertainty contribution arising in the original Salecker-Wigner setup. In addition, each time the (Salecker-Wigner-type) clock emits a signal the corresponding uncertain recoil would be transmitted through the spring to body $`A`$. ## 7 ON THE POSSIBILITY OF A FUNDAMENTALLY CLASSICAL CLOCK As an alternative possibility to eliminate the $`\sqrt{T_{obs}}`$ dependence present in the Salecker-Wigner limit, in Ref. we are reminded of the fact that ordinary clocks are immersed in a (thermal or otherwise) environment that induces “wave-function collapse”. In fact, to an extremely good approximation these clocks behave classically. Again this is a correct intuition derived from experience with rudimentary (from a Planck-scale viewpoint) experimental setups, which however (like the other points argued in Ref. ) appears to be incorrectly applied to the conceptual framework considered by Salecker and Wigner. While “environment-collapsed” clocks (and other environment-collapsed devices) could be natural in ordinary contexts, it seems worth exploring the idea that quantum gravity, as a truly fundamental theory of space and time, would not resort (at an in-principle level) to collapse-inducing environments for the operative definition of distances. 
In any case, this is the expectation concerning quantum gravity that is being explored through the relevant Salecker-Wigner-motivated research line. It also seems that quantum gravity, having to incorporate an operative definition of distances applicable even in the Planck regime, would have some difficulty introducing at a fundamental level the use of environments to collapse the wave function of devices. What would such an environment look like for the case in which one is operatively defining a nearly-Planckian distance? (And which type of environment would be suitable for the operative definition of a Planck-length-tight network of world lines? How would such an environment be introduced in the operative definition of a material reference system?) Concerning the possibility of a fundamentally classical clock, in Ref. the reader also finds what appears to be a genuinely incorrect statement (not another example of ordinary intuition inappropriately applied to the forward-looking framework set up by Salecker and Wigner, but simply a case of incorrect analysis). In fact, Ref. appears to suggest that the interactions among the components of even a perfectly/ideally isolated clock might induce classicality of the position of the center of mass of the clock, which is the physical quantity whose quantum properties lead to the Salecker-Wigner limit. While the interactions among the components should lead to the emergence of some classical variables (e.g., the variable that keeps track of time), if the clock is ideally isolated, interactions among its components should not have any effect on the quantum properties of the position of the center of mass of the clock. \[This is certainly the case for some of the explicit examples of “toy clocks” considered by Salecker and Wigner, one of which is composed of only three free-falling particles!\] ## 8 CLOSING REMARKS From a conceptual viewpoint the analysis reported in Ref. can be divided into two parts. In one part a set of questions was raised and in the other part tentative answers to these questions were given. As this Letter has emphasized, some of the questions considered in Ref. are indeed the most fundamental questions facing research based on the Salecker-Wigner limit. However, all of these questions had already been raised in previous literature (see, e.g., Refs. ). These questions have been compactly phrased here as: should quantum gravity predict a $`max(M^{})`$, and could this be interpreted as the maximum acceptable mass of one or more devices? How large could $`max(M^{})`$ be? Should $`max(M^{})`$ depend on the distance scales being probed? Should the idealization of a classical clock survive the transition from ordinary quantum mechanics to quantum gravity? While the questions considered are just the right ones, the answers given in Ref. are incorrect. In this note I have tried to clarify how those answers are the result of inappropriately applying the intuition of rudimentary (from a Planck-scale viewpoint) measurement analysis to the forward-looking framework set up by Salecker and Wigner. The debate on the Salecker-Wigner limit must of course continue until the above-mentioned outstanding open questions get settled, but (if the objective remains that of getting ideas on plausible quantum-gravity effects) the only possibly fruitful way to approach this problem is to seek the answers within the same forward-looking framework where the questions arose. 
Nothing more than what we already know can be learned by assuming that the laws of ordinary gravitation and quantum mechanics remain unaltered all the way down to the Planck regime. As emphasized here, the logic of the line of research started by the work of Salecker and Wigner is that of applying the language/structures we ordinarily use in those physical contexts that we do understand to contexts that would instead naturally lie in the realm of quantum gravity, and then exploring the consequences of removing one of the elements of the ordinary conceptual structure of quantum mechanics. The Salecker-Wigner study (just like the Bohr-Rosenfeld analysis) suggests that among these conceptual elements of quantum mechanics the one that is most likely to succumb to the unification of gravitation and quantum mechanics is the requirement that devices be treated as classical. Removal of this requirement appears to guide us toward some candidate properties of quantum gravity (not of the ordinary laws of gravitation and quantum mechanics!), which we can then hope to test directly in the laboratory (as in some cases is actually possible). Quite aside from the subject of open issues in the study of the Salecker-Wigner limit, I have also emphasized in this Letter that, contrary to the impression one gets from reading Ref. , there is substantial motivation for the phenomenological programme of interferometric studies of distance fuzziness reviewed here in Section 2, independently of the Salecker-Wigner limit (and independently of the fact that, as clarified above, the validity of this limit has not been seriously questioned). As discussed in Section 2 (and discussed in greater detail in Ref. ), the general motivation for that phenomenological programme comes from a long tradition of ideas (developing independently of the ideas related to the Salecker-Wigner limit) on foamy/fuzzy space-time, and also comes from more recent work on the possibility that quantum gravity might induce a deformation of the dispersion relation that characterizes the propagation of the massless particles used as space-time probes in the operative definition of distances. It is actually quite important that this interferometry-based phenomenological programme, as well as other recently-proposed quantum-gravity-motivated phenomenological programmes, be pursued quite aggressively, since the lack of experimental input has been the most important obstacle in these many years of research on quantum gravity. Acknowledgements Part of this work was done while the author was visiting the Center for Gravitational Physics and Geometry of Penn State University. I happily acknowledge discussions on matters related to the subject of this Letter with several members and visitors of the Center, particularly with R. Gambini and J. Pullin. I am also happy to thank C. Kiefer, for discussions on decoherence, and D. Ahluwalia, for feedback on a first rough draft of the manuscript.
# Adaptive Optics Images at 3.5 and 4.8 $`\mu \mathrm{m}`$ of the Core Arcsec of NGC 1068: More Evidence for a Dusty/Molecular Torus. Based on observations collected at the European Southern Observatory, La Silla, Chile. ## 1 Introduction A series of observational facts assembled over the past decade on Active Galactic Nuclei has led to the so-called “unified” model of AGN (for a review of these facts, as well as for the detailed characteristics of the unified model, see Krolik 1999). Our current specific interest in the unified model is that the central engine (black hole and accretion disk) and its close environment (dense gas clouds emitting the broad lines which constitute the broad line region, BLR) are embedded within an optically thick dusty/molecular torus. Along some lines of sight, the torus obscures and even fully hides the central engine and the BLR. In that respect, the case of ngc 1068, a bright Seyfert 2 active galaxy, is particularly enlightening. The spectrophotometry in polarized light reveals the presence of a hidden BLR (Antonucci & Miller 1985); the conical shape of the narrow line region (NLR), both on large (Pogge 1988) and small (Evans et al. 1991) scales, indicates that the ionizing radiation is collimated by an opaque blocking torus; and, finally, the symmetry center of the UV/optical polarization map (Capetti et al. 1995) is found to be coincident with the radio core (Gallimore et al. 1996a), the 12.4 $`\mu \mathrm{m}`$ peak (Braatz et al. 1993) and the maser emission (Gallimore et al. 1996b), suggesting that this is the location of the hidden true nucleus. This object thus appears particularly suitable for unveiling the putative torus through its infrared emission, which, according to current models, should be quite strong (e.g. Pier & Krolik 1992a, b, 1993; Granato & Danese 1994; Efstathiou, Hough & Young 1995; Granato, Danese & Franceschini 1997). In this search, high spatial resolution is required in order to locate the emission sources very precisely and to characterize their structure. Hence, adaptive optics (hereafter abbreviated AO) in the 1-5 $`\mu \mathrm{m}`$ window is the tool. At the distance of ngc 1068 (14.4 Mpc), 1” is equivalent to 72 pc (assuming H<sub>0</sub>=75 km/s/Mpc), allowing a spatial resolution of a few parsecs to be reached. Through AO observations, simultaneously in the visible and near-infrared, the 2.2 $`\mu \mathrm{m}`$ peak emission has already been found to be offset by ~0.3” S of the optical continuum peak as defined by Lynds et al. (1991) and to be coincident with the previously identified “hidden true nucleus” (Marco, Alloin & Beuzit 1997). In the current study, we present new results obtained at 3.5 and 4.8 $`\mu \mathrm{m}`$ with ADONIS, the AO system working at the ESO 3.6 m telescope on La Silla, fully described in Beuzit et al. (1994). ## 2 Observations and data reduction The observing run took place from August 13 to 19, 1996, under excellent seeing and transparency conditions (Table 1). The AO correction was performed on the brightest spot of ngc 1068 in the visible continuum light (Lynds et al. 1991). The wavefront sensor (EBCCD after a red dichroic splitter) has a pixel size of 0.7″ and takes into account the centroid of the light within a 6″ diameter circular entrance. Due to the pixel size of the wavefront sensor, the contributions of the continuum and the lines from the central 50 pc around the Lynds et al. (1991) peak both fall within one pixel, so the contrast is maximal. The detector was the COMIC camera (Lacombe et al. 
1997), at the f/45 Cassegrain focus, which provides an image scale of 0.1″/pixel, resulting in a field of view of 12.8″$`\times `$ 12.8″. ngc 1068 was observed in an imaging mode through the standard spectral L (3.48 $`\mu \mathrm{m}`$), L’ (3.81 $`\mu \mathrm{m}`$), and M (4.83 $`\mu \mathrm{m}`$) bands and through a circular variable filter for the PAH feature (line at 3.3 $`\mu \mathrm{m}`$ rest wavelength, and continuum). Through the L, L’ and M bands, we observed in a chopping mode, alternating object and sky images by the use of a field selection mirror. We chose an offset of 10” to the N and 10” to the W. During the six nights, the visible seeing was measured by the ESO differential image motion monitor. It was excellent during four nights, ranging from 0.4” to 0.7”. Therefore, the efficiency of the AO correction was optimal and the images obtained with COMIC were diffraction-limited. However, the gain of the intensifier was not optimized at that time, which resulted in Strehl ratios lower than expected. In order to minimize position offsets between the calibration star and the AGN, we selected a reference star within 2 degrees of the target. For both the galaxy and the point spread function (PSF) reference star, the airmass was at most 1.3, ensuring that differential refraction effects were negligible (less than one pixel). Individual exposure times were chosen so as to observe under background-limited performance (BLIP) conditions. In this way, the readout noise is dominated by the background photon noise, and we simply average the images to improve the signal-to-noise ratio. We observed several photometric standard stars to obtain a precise flux calibration, and another star to determine the PSF for later deconvolution. Standard infrared data reduction procedures were applied to each individual frame, for both the galaxy and the reference stars: dead pixel removal, sky subtraction, and flat fielding from sky images at each wavelength. As the AO system compensates for image shifting, no additional shift correction was applied. Thanks to the length of our observing run, we obtained a largely redundant data set. We discovered in particular that we had experienced a problem of astigmatism during 2 nights, due to an inappropriate tuning of the visible wavefront sensor leading to a very low signal-to-noise ratio on the Shack-Hartmann analyser (Alloin & Marco 1997). Being aware of this flaw, we decided to review in depth the AO optimization parameters for the entire data set and to remove all suspicious blocks. In addition, we applied a selection procedure to the 32-image data cubes, based upon the seeing value and the Strehl ratio. The corresponding equivalent integration times are reported in Table 1. ## 3 Wavefront analysis sensing To better interpret the ngc 1068 observations with ADONIS, we need to track precisely which visible image of ngc 1068 is seen by the wavefront analysis sensor (WFAS) in its evaluation of the AO correction. In order to recover this information, we have used HST images with 45 mas pixel size, both in the continuum and in the lines (F791W, F547M, F658N, and F502N), properly aligned and flux-scaled to the same 1 s exposure at the entrance of the telescope, to construct a composite image which mimics the one seen by the WFAS. The image alignment was performed using point-like sources in the field of view, which allows a registration within 10 mas (Tsetanov, private communication). 
The flux scaling included corrections for the exposure time of each image and the mean HST camera efficiency over the corresponding filter. Then, each of these corrected images was weighted by the corresponding mean WFAS wavelength response, before addition. We find indeed that the light received from ngc 1068 on the WFAS is largely dominated by continuum light and is not much weighted by the \[O III\]-line light distribution. Finally, to mimic the image seen on the WFAS CCD, from which the AO corrections are computed, we have degraded the HST composite image by a “seeing” effect of ~1”: this includes both an atmospheric effect (~0.6” according to the mean seeing value, FWHM, measured on the selected nights of the observing run) and the instrumental spread function (0.85”), which is rather large because of the photocathode-CCD spacing. As the WFAS uses the light centroid over a 6” circular diaphragm to compute the AO correction, we have derived the light centroid within a 6” aperture on the degraded composite HST image. This position is the reference for the WFAS. We also need to locate the centroid of the degraded HST composite image with respect to the Lynds visible peak, as measured from the HST image F547M. We find a slight offset between the visible centroid and the Lynds visible peak: the visible centroid (using ngc 1068 for the correction) is located 99 mas to the N and 86 mas to the E of the Lynds peak. This offset is taken into account in our estimate of the location of the ngc 1068 infrared sources (Sect. 4.1). The offset may differ from one AO system to another. In the case of ADONIS, the offset is largely due to the 6” entrance to the WFAS. ## 4 Data analysis We present in Fig. 1 the L and M band images of ngc 1068. We used a magnitude (log) scale because of the high dynamic range of the images provided by the AO. The images have been deconvolved using a Lucy-Richardson algorithm (MIDAS package). We have also observed ngc 1068 in the L’ band: this image is very similar to the one obtained in the L band. Thus, our current data analysis and subsequent discussion will be based on the L and M bands only. The L and M band images show: i) an unresolved core down to the resolution (FWHM) of 0.24” (16 pc) and 0.33” (22 pc), at 3.5 and 4.8 $`\mu \mathrm{m}`$ respectively. This core has already been observed at 3.6 $`\mu \mathrm{m}`$ by Chelli et al. (1987) and at 2.2 $`\mu \mathrm{m}`$ by Marco et al. (1997), Thatte et al. (1997) and Rouan et al. (1998). The latter give an upper limit to the core size (FWHM) of 0.12” (less than 8 pc). ii) an elongated structure at P.A. ~100°, particularly prominent in the M band, but also quite well outlined in the L band. This structure is obviously coincident with the structure seen in the K band by Rouan et al. (1998) and is roughly perpendicular to the axis of the inner ionizing cone (P.A. = 15°, Evans et al. 1991). It extends in total over ~80 pc, with a bright spot at each of the E and W edges, at a radius of ~25 pc from the central engine. iii) an extended emission along the NS direction, almost aligned with the radio axis and the ionizing cone axis. At low-level isophotes (in particular in the L band), a change in the direction of the axis of this emission can be noticed, reminiscent of a similar change of direction of the radio jet (Gallimore et al. 1996a). Down to faint levels, the 4.8 $`\mu \mathrm{m}`$ thermal infrared emission appears to be extended over ~3” in diameter (~210 pc). 
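As an aside for readers unfamiliar with the deconvolution step just mentioned, the following is a minimal, self-contained sketch of a Richardson-Lucy iteration. It is an illustration only: the actual reduction used the MIDAS implementation, and the image and PSF below are synthetic placeholders.

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(image, psf, n_iter=30, eps=1e-12):
    """Basic Richardson-Lucy deconvolution (multiplicative update)."""
    psf = psf / psf.sum()                 # normalise the PSF
    psf_mirror = psf[::-1, ::-1]          # adjoint (flipped) kernel
    estimate = np.full_like(image, image.mean())
    for _ in range(n_iter):
        blurred = fftconvolve(estimate, psf, mode="same")
        ratio = image / np.maximum(blurred, eps)
        estimate = estimate * fftconvolve(ratio, psf_mirror, mode="same")
    return estimate

# Synthetic test: a point source blurred by a Gaussian "PSF".
y, x = np.mgrid[-16:17, -16:17]
psf = np.exp(-(x**2 + y**2) / (2 * 2.0**2))
truth = np.zeros((64, 64)); truth[32, 32] = 1.0
observed = fftconvolve(truth, psf / psf.sum(), mode="same")
restored = richardson_lucy(observed, psf, n_iter=50)  # re-sharpened point source
```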
It is striking that the two different AO systems used, ADONIS for the L and M bands and PUEO for the K band, reveal a similar structure of the AGN environment. These AO systems use wavefront sensors of different types, Shack-Hartmann for ADONIS and curvature for PUEO, as well as deformable mirrors of different types, piezo-stack for ADONIS and bimorph for PUEO. The Lucy-Richardson deconvolution applied to both data sets uses PSFs obtained in two different ways: we used an observed stellar PSF in the case of the ADONIS data set (as the L and M band data are less sensitive to rapid PSF fluctuations) and we used the PSF recovered from the AO loop parameters in the case of the PUEO data set. The two AO experiments differ in many respects, yet lead to a similar result for the structure of the AGN dusty environment. Therefore we are quite confident that this structure is real and not hampered by significant AO artifacts (Chapman et al. 1999). Finally it should be noticed that the high resolution image of the AGN in ngc 1068 obtained in the K band with the AO system at the Keck telescope (www2.keck.hawaii.edu/realpublic/ao/ngc1068.html) reveals a comparable structure, pending a precise orientation and scale for the Keck AO data set. ### 4.1 Location of the emission peaks at 3.5 and 4.8 $`\mu \mathrm{m}`$ and nature of the unresolved core As it was not possible to observe simultaneously in the visible and in the infrared, we took advantage of a characteristic feature of AO systems, namely that they preserve the optical center for all objects: the position of the infrared camera field is fixed with respect to the centroid of the visible counterpart of the observed object. Indeed, by observing a star (PSF or photometric standard), we determine a reference position in the infrared image to within the precision we are aiming at in this study (better than one infrared camera pixel, 0.1”). Any offset of the galaxy infrared peak relative to the star infrared peak would then reveal an intrinsic offset between the galaxy infrared light peak and the galaxy visible light peak. This is a method for positioning infrared versus visible sources in the AGN. To improve the precision, we have fitted the PSF and the ngc 1068 emission peaks with Gaussian profiles. The L and M band peaks in ngc 1068 are coincident within the positional precision given above. We have also derived the position of this L and M peak in ngc 1068 with respect to the visible peak, following the procedure described in Sect. 3: it is offset by $`0.3\pm 0.05`$” S and $`0.1\pm 0.05`$” W of the visible continuum peak and therefore is found to be coincident with the K band emission peak (Marco et al. 1997), within the error bars. Therefore, the compact core at 3.5 and 4.8 $`\mu \mathrm{m}`$ can be identified with the unresolved core detected at 2.2 $`\mu \mathrm{m}`$ (Marco et al. 1997; Thatte et al. 1997; Rouan et al. 1998), itself found to be coincident with the mid-infrared emission peak at 12.4 $`\mu \mathrm{m}`$ (Braatz et al. 1993), the radio source S1 (Gallimore et al. 1996a, b) and the center of symmetry of the UV polarization map (Capetti et al. 1995). This considerably strengthens the interpretation that the core infrared emission originates from hot/warm dust in the immediate surroundings of the central engine. ### 4.2 The torus-like structure at 3.5 and 4.8 $`\mu \mathrm{m}`$ The location, position angle and extension of the P.A. ~100° structure are strongly suggestive of a dusty/molecular torus. 
The two bright spots on the edges of the structure outline the “disky” nature of the torus, up to a radius of ~40 pc from the central engine. This dusty/molecular torus would be responsible for the collimation of UV radiation from the central engine, leading to the ionizing cone (Pogge 1988; Evans et al. 1991). The overall spatial extension of the torus is found to be ~80 pc at 3.5 and 4.8 $`\mu \mathrm{m}`$, while it appears to be slightly smaller at 2.2 $`\mu \mathrm{m}`$, ~50 pc. Under the very simple assumption of optically thick grey-body dust radiation (Barvainis 1987), it is well understood that the emission at 2.2 $`\mu \mathrm{m}`$ traces hotter dust (T~1300 K) than the emission at 4.8 $`\mu \mathrm{m}`$ (T~600 K). The observed difference in size would then signal the existence of a temperature gradient of the grains across the torus. ### 4.3 The North-South extended emission The NS extended emission (overall extent ~3” down to faint emission levels) is also detected at 2.2 $`\mu \mathrm{m}`$ on a similar scale (Rouan et al. 1998) and at 10 and 20 $`\mu \mathrm{m}`$ on a larger scale, although along a similar P.A. (Alloin et al. 1999). This structure can be related to the emission of hot/warm dust associated with NLR clouds, identified on the northern side of the ionization cone from HST data (Evans et al. 1991) and hidden behind the disc of the galaxy on its southern side. Additional local heating processes, e.g. related to shocks induced by the jet propagation, might be at work as well along the NS extension. The latter suggestion stems from the conspicuous change of direction of the emission at 3.5 $`\mu \mathrm{m}`$, following that of the radio jet. Indeed, Kriss et al. (1992) have shown through the analysis of line emission that in ngc 1068 the emitting gas in the NLR is partly excited through shocks triggered by the radio jet. ## 5 Fluxes, SED and variability The spectral energy distribution (SED) of the central region of the AGN is an essential parameter in the modeling. To derive this quantity, spatial resolution is obviously needed to disentangle the different sources of emission (dust, stars, non-thermal source, etc.) and, in that respect, AO observations provide valuable information. Fluxes at 3.5 and 4.8 $`\mu \mathrm{m}`$ have been measured through circular apertures centered on the near-infrared peak, with a radius varying from 0.3” to 1.5” (22 to 100 pc). They are listed in Table 2. A weak PAH line emission at 3.3 $`\mu \mathrm{m}`$ has been detected as well, although unfortunately no flux calibration is available for this observation. The aperture flux density as a function of radius, over the region 22 pc $`\le r\le `$ 100 pc, can be fitted with a power law: we find $`F_L\propto r^{-1.05}`$ and $`F_M\propto r^{-1.00}`$ (where the flux unit is Jy/arcsec<sup>2</sup>). From the set of AO images (this paper and Rouan et al. 1998), as well as high resolution images obtained at 10 $`\mu \mathrm{m}`$ (Alloin et al. 1999), we have reconstructed the SED of the core emission through a 0.6” diameter diaphragm, as shown in Fig. 2. Therefore, this SED refers only to the central engine and its immediate environment (including the inner parts of the dusty/molecular torus as well as some contribution from the NS extended structure). In this plot, the contribution from the stellar component has been removed in the J and H bands, following the spatial profile analysis performed by Rouan et al. (1998), while this is not the case in the K band. 
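The grey-body temperature assignments quoted earlier in this section can be made concrete by asking at which grain temperature a $`\nu ^{1.6}B_\nu (T)`$ grey-body peaks at a given observing wavelength. The short sketch below is my own illustration; it uses only the emissivity index adopted in the text and recovers numbers close to the quoted ~1300 K and ~600 K:

```python
import math

HC_OVER_K = 1.43877e-2  # hc/k [m K]

def peak_temperature(wavelength_m, alpha=1.6):
    """Temperature at which nu**alpha * B_nu(T) peaks at the given wavelength.

    The peak condition for x = h*nu/(k*T) is (3 + alpha)*(1 - exp(-x)) = x,
    solved here by fixed-point iteration.
    """
    x = 3.0 + alpha
    for _ in range(50):
        x = (3.0 + alpha) * (1.0 - math.exp(-x))
    return HC_OVER_K / (x * wavelength_m)

print(peak_temperature(2.2e-6))  # ~1440 K: the 2.2 um band traces the hottest dust
print(peak_temperature(4.8e-6))  # ~660 K: the 4.8 um band traces cooler grains
```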
The part of the near- to mid-infrared emission arising from hot and warm dust grains can be represented by a series of grey-bodies of different temperatures, and this summation is expected to result in a smooth distribution. Yet an emission bump at 2.2 $`\mu \mathrm{m}`$ is observed, which could be interpreted as the unremoved stellar contribution in the K band, within the 0.6” diameter diaphragm. Under this assumption, an upper limit of 75% for the stellar contribution to the flux at 2.2 $`\mu \mathrm{m}`$ can be derived. This upper limit remains far larger than the 6% actually derived by Thatte et al. (1997) within a 1” diameter diaphragm, on the basis of the dilution of the equivalent width of a CO absorption feature arising from cold stars. In any case, a revision of the AGN modeling for ngc 1068 should incorporate the 1 to 10 $`\mu \mathrm{m}`$ SED derived for the innermost region around the central engine, which is therefore less affected by dilution from other surrounding components (in particular, cleaned of the stellar contribution). It is interesting as well to compare these 1996/1997 measurements in the infrared to those performed by Rieke & Low as early as 1975. Therefore, we have derived, for their 3” diameter diaphragm, the 1996/1997 observed fluxes at 1.25, 1.65 (both uncorrected for the stellar contribution), 2.2, 3.4, 4.8 and 10 $`\mu \mathrm{m}`$, from Rouan et al. (1998), from the current data set and from Alloin et al. (1999). Before examining the temporal behavior of the near-infrared emission in ngc 1068, the 1997 flux measurements in the K band can be compared to previous determinations. We consider again a 3” diameter diaphragm and compare the 1997 AO measurement to classical photometric measurements. Generally those are performed through much larger diaphragms. However, Penston et al. (1974) provide a full set of measurements through diaphragms ranging from 12” to 2” in diameter. Leaving aside the measurements from that paper which have been flagged for bad transparency or other flaws, we obtain the K magnitude offset when one moves from a 12” diaphragm to a 3” diaphragm, $`\mathrm{\Delta }`$K = 0.85. This magnitude offset is not expected to vary with time because it refers to the outer parts of the AGN ($`r>`$ 110 pc). Then, from the 12” diameter diaphragm measurements by Glass (1995), who analyzed the variability properties of ngc 1068, we can infer/extrapolate the K magnitude at the date of the AO measurement by Rouan et al. (1998): K $`=`$ 6.81. Applying the magnitude offset computed above between the 12” and 3” diameter diaphragms, we predict that the K magnitude should be 7.66 at the date of the Rouan et al. (1998) observation, while the K magnitude measured is 7.26. This agreement is quite satisfactory, given the rather large error bars involved in the Penston et al. (1974) data set, which was obtained more than 25 years ago. A comparison of the fluxes within a 3” diameter diaphragm at both epochs, 1975 (Rieke & Low 1975) and 1997 (this paper), is given in Table 3. One notices immediately that a flux increase has occurred over this time interval: by a factor of 2 between 2.2 and 4.8 $`\mu \mathrm{m}`$, and by a factor of 1.2 at 10 $`\mu \mathrm{m}`$. According to the independent photometric monitoring by Glass (1997), the L band (3.5 $`\mu \mathrm{m}`$) flux has doubled in 18 years, from 1974 to 1992: our result is in very good agreement with his finding. 
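The magnitude bookkeeping of the last two paragraphs reduces to the standard flux-magnitude relation; a small sketch of mine, using only numbers quoted in the text:

```python
def flux_ratio(delta_mag):
    """Flux ratio corresponding to a magnitude difference delta_mag."""
    return 10 ** (0.4 * delta_mag)

k_12arcsec = 6.81                  # extrapolated from Glass (1995), 12" diaphragm
k_predicted = k_12arcsec + 0.85    # 7.66, after the 12" -> 3" aperture offset
print(k_predicted - 7.26)          # 0.40 mag: the measured value is slightly brighter
print(flux_ratio(0.75))            # ~2.0, the quoted 2.2-4.8 um flux doubling
print(flux_ratio(0.20))            # ~1.2, the quoted 10 um increase
```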
Because of the possibly complex way through which the UV-optical photons illuminate and heat the dust grains (direct and/or indirect illumination via scattering on the mirror to the N of the central engine or on dusty regions further out, Miller et al. 1991), it is not possible to infer the size of the dust component from light-echo effects acting differentially in the near-infrared and mid-infrared bands. ## 6 Colors, dust temperature and mass of dust Following Barvainis (1987), we assume the emissivity of the grains to depend on wavelength as $`\lambda ^{-1.6}`$ and we use a simple model of optically thick grey-body dust emission: $`\nu ^{1.6}B_\nu (T_{\mathrm{gr}})`$. In the general situation of an AGN, the grain temperature varies strongly with distance to the central engine, following a power law (Barvainis 1987): $`T_{\mathrm{gr}}(r)=1650L_{UV,46}^{0.18}r^{-0.36}`$ K, where $`L_{UV,46}`$ is the UV luminosity in units of $`10^{46}`$ ergs s<sup>-1</sup> and $`r`$ the radial distance in parsecs. ### 6.1 Color gradients in the near-infrared #### 6.1.1 Colors of the core Owing to our limited spatial resolution in the M band (FWHM = 0.33”), we can measure at best the \[L-M\] and \[K-L\] colors of the core through a 0.6” diameter diaphragm centered on the near-infrared emission peak. We find \[L-M\] = 1.6 $`\pm `$ 0.4 and \[K-L\] = 1.8 $`\pm `$ 0.2. It must be noted, however, that the contribution of stellar light in the K band (Thatte et al. 1997) has not been removed at this stage, so that the observed \[K-L\] color does not relate only to the dust emission. #### 6.1.2 Colors of the extended structure As for the colors of the sources forming the extended structures, and again because of the different spatial resolutions in the K, L and M bands, we have considered the mean colors in a ring which extends from $`r`$ = 0.3” to $`r`$ = 0.5”. This ring does include the emitting regions forming the extended structures along the two directions, P.A. ~100° and NS, at 3.5 and 4.8 $`\mu \mathrm{m}`$, but excludes in part the secondary peaks which delineate the extended structures at 2.2 $`\mu \mathrm{m}`$. Therefore, at 2.2 $`\mu \mathrm{m}`$, the ring includes more of the diffuse contribution possibly related to the stellar core analyzed by Thatte et al. (1997). The colors found for the ring, representative of a mean 0.4” radius, are \[L-M\] = 1.6 $`\pm `$ 0.4 and \[K-L\] = 2.8 $`\pm `$ 0.2. ### 6.2 Dust temperature The advantage of interpreting the \[L-M\] color is that the L and M band flux contributions are known to arise almost entirely from the dust component. Within the limits of the spatial resolution of the M band data set, we do not detect any \[L-M\] color gradient within the central 1” region of ngc 1068. Under the simple assumption of optically thick grey-body emission from the dust component, the observed \[L-M\] color corresponds to a grain temperature $`T_{gr}`$~480 K. The foreground extinction to the core has been calculated by several authors (Bailey et al. 1988; Bridger et al. 1994; Efstathiou et al. 1995; Young et al. 1995; Veilleux et al. 1997; Glass 1997; Thatte et al. 1997; Rouan et al. 1998) and an estimate of $`A_v`$~30 mag is retained. Applying a correction for such an extinction, we deduce an intrinsic color \[L-M\] = 0.8 $`\pm `$ 0.4 and $`T_{gr}`$~700 K. 
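To illustrate how a grain temperature follows from the \[L-M\] colour under this grey-body model, here is a sketch of mine; the Vega zero points are assumed round values (not taken from the paper), but with them the quoted colours indeed map onto ~480 K and ~700 K:

```python
import math

H = 6.62607e-34; K_B = 1.380649e-23; C = 2.99792458e8
NU_L, NU_M = C / 3.5e-6, C / 4.8e-6   # L and M band frequencies [Hz]
F0_L, F0_M = 280.0, 155.0             # assumed Vega zero points [Jy]

def lm_colour(T, alpha=1.6):
    """[L-M] colour of a nu**alpha * B_nu(T) grey-body."""
    ratio = (NU_L / NU_M) ** (3 + alpha) * (
        (math.exp(H * NU_M / (K_B * T)) - 1.0) /
        (math.exp(H * NU_L / (K_B * T)) - 1.0))
    return -2.5 * math.log10(ratio * F0_M / F0_L)

def temperature_from_lm(colour, lo=100.0, hi=3000.0):
    """Invert lm_colour by bisection ([L-M] decreases monotonically with T)."""
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if lm_colour(mid) > colour:   # too red, so the grains must be hotter
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(temperature_from_lm(1.6))  # ~480 K for the observed colour
print(temperature_from_lm(0.8))  # ~700 K for the extinction-corrected colour
```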
The absolute luminosity $`L_{UV,opt}`$ of the AGN in ngc 1068 (assumed here to be the sole heating source of the dust grains in the AGN environment) can be approached only indirectly and is still quite uncertain. From the analysis by Pier et al. (1994), who have examined various methods for deriving the absolute luminosity of the AGN in ngc 1068 and summarize the current knowledge on this question, we deduce L<sub>UV,opt</sub> = 4 $`\times 10^{44}`$ erg s<sup>-1</sup>. However, it should be noted that this figure has been obtained assuming a reflected light fraction f<sub>refl</sub> ~0.01, while the consideration of ionized gas in regions further out inside the ionization cone (ENLR) leads to a value f<sub>refl</sub> ~0.001 (Bland-Hawthorn et al. 1991). Then a value as high as L<sub>UV,opt</sub> = 4 $`\times 10^{45}`$ erg s<sup>-1</sup> should be envisaged as well. In addition, most of these estimates have been derived without taking into account the fraction of UV-optical flux which heats the dust grains: the K magnitude of the innermost core (FWHM = 0.12”), 9.3, already corresponds to an energy output of ~8 $`\times 10^{41}`$ erg s<sup>-1</sup>. It might be important to consider the energy radiated in the near- to mid-infrared bands for the evaluation of L<sub>UV,opt</sub>. In conclusion, the figures currently available for L<sub>UV,opt</sub> in ngc 1068 might be lower limits. Still, under the simple model of optically thick grey-body dust emission, reaching $`T_{gr}`$~480 K at $`r`$ = 28 pc requires L<sub>UV,opt</sub> = 8 $`\times 10^{45}`$ erg s<sup>-1</sup>, a value roughly consistent with the highest figure given above for L<sub>UV,opt</sub> in ngc 1068. This figure goes up to L<sub>UV,opt</sub> = 6.5 $`\times 10^{46}`$ erg s<sup>-1</sup> if the grain temperature is 700 K at $`r`$ = 28 pc (extinction-corrected estimate), pushing ngc 1068 to the limit between AGN and quasars. This question certainly deserves further attention and, above all, the consideration of a more elaborate model of the dust region, with regard to its geometry and heating. This is beyond the scope of the current paper and will be discussed in the future. The \[K-L\] color in the extended structures can be contaminated by some stellar contribution in the 2.2 $`\mu \mathrm{m}`$ band. From grains at $`T_{gr}`$~480 K, which are the dominant contributors, we expect a \[K-L\]<sub>dust</sub> color of 4.0. Given the observed \[K-L\] value, we deduce that the percentage of the flux at 2.2 $`\mu \mathrm{m}`$ which arises from the dust component (with at most $`T_{gr}`$~480 K) is 30%. For this estimate, we have not considered any correction for extinction. How can the lack of an \[L-M\] color gradient between the 0.6” diameter core and the extended structures be explained? Given the unresolved and intense core emission at 2.2 $`\mu \mathrm{m}`$ (FWHM = 0.12” from Rouan et al. 1998), it can be inferred that the hottest dust grains are extremely confined, located at a radius of less than 4 pc. With L<sub>UV,opt</sub> = 8 $`\times 10^{45}`$ erg s<sup>-1</sup>, they would be present only up to $`r`$ = 1.1 pc. Hence, there must exist a very steep dust temperature gradient close to the central heating source. Such a few-parsec scale corresponds to a resolution well beyond that accessible at 3.5 and 4.8 $`\mu \mathrm{m}`$. In fact the L and M emission we are measuring in a 0.6” diameter core is already strongly dominated by the warm grains. 
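The luminosity figures above follow directly from inverting the Barvainis relation quoted at the beginning of this section; a minimal consistency-check sketch (mine, using only constants from the text):

```python
def grain_temperature(r_pc, l_uv_46):
    """Barvainis (1987): T_gr = 1650 * L_46**0.18 * r**(-0.36) [K], r in pc."""
    return 1650.0 * l_uv_46**0.18 * r_pc**-0.36

def required_luminosity(t_gr, r_pc):
    """L_46 needed to keep grains at temperature t_gr [K] at radius r_pc."""
    return (t_gr * r_pc**0.36 / 1650.0) ** (1.0 / 0.18)

def hot_grain_radius(l_uv_46, t_sub=1500.0):
    """Radius [pc] inside which grains reach the evaporation temperature."""
    return (1650.0 * l_uv_46**0.18 / t_sub) ** (1.0 / 0.36)

print(required_luminosity(480.0, 28.0))   # ~0.8  -> L_UV,opt ~ 8e45 erg/s
print(required_luminosity(700.0, 28.0))   # ~6.7  -> ~6.5e46 erg/s
print(hot_grain_radius(0.8))              # ~1.2 pc, cf. the ~1.1 pc quoted above
```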
Because of this suspected steep temperature gradient, the procedure applied previously to derive the stellar contribution in the 0.3” to 0.5” radius ring cannot be used in the core. ### 6.3 Mass of the hot dust The mass of dust associated with the near-infrared emission can be estimated only in a rough way, as it depends on the (unknown) grain composition and grain size distribution. Assuming graphite grains and following Barvainis (1987), the infrared spectral luminosity of an individual graphite grain is given by: $`L_{\nu ,\mathrm{ir}}^{\mathrm{gr}}=4\pi a^2\pi Q_\nu B_\nu (T_{\mathrm{gr}})\mathrm{ergs}\mathrm{s}^{-1}\mathrm{Hz}^{-1}`$ where $`a`$ is the grain radius, $`Q_\nu =q_{\mathrm{ir}}\nu ^\gamma `$ is the absorption efficiency of the grains, and $`B_\nu (T_{\mathrm{gr}})`$ is the Planck function for a grain temperature $`T_{\mathrm{gr}}`$. Following Barvainis (1987), we take $`a`$=0.05 $`\mu \mathrm{m}`$ and, in the near-infrared, $`q_{\mathrm{ir}}=1.4\times 10^{-24}`$ and $`\gamma =1.6`$, leading to $`Q_\nu `$=0.058 (for the K band). Because no \[L-M\] color gradient is detected towards the 0.6” diameter core, we consider the simple case of 2 populations of dust grains in that region, hot grains at T = 1500 K and warm grains at T = 500 K, matching the extreme values in that region. Solving the equation $`F_{\nu ,\mathrm{measured}}=xF_{\nu ,1500\mathrm{K}}+yF_{\nu ,500\mathrm{K}}`$ for the three bands available, K, L and M, one derives 2 $`\times 10^{45}`$ and 9 $`\times 10^{47}`$ for the number of grains at 1500 K and 500 K, respectively, in the 0.6” diameter core. This indicates that there are ~450 times more warm dust grains than hot dust grains in the 0.6” diameter core. The warm dust grains dominate the \[L-M\] color. With a grain density $`\rho `$=2.26 g cm<sup>-3</sup>, the mass of warm dust grains is found to be $`M`$(warm dust)~0.5 $`M_{\odot }`$. This mass is above that of the hot dust grains detected in two Seyfert 1 nuclei: 0.05 $`M_{\odot }`$ in the case of ngc 7469 (Marco et al. 1998) and 0.02 $`M_{\odot }`$ in the case of Fairall 9 (Clavel et al. 1989). This result supports the view that only a small fraction of the dust present in the torus is heated close to its sublimation temperature. ## 7 Grain number density in the warm dust In ngc 1068, as in other AGN (see for instance ngc 7469), dust is also present in the NLR region (see Fig. 1). It has been shown in Sect. 5 that the infrared emission in the L and M bands over the region 22 pc $`\le r\le `$ 100 pc follows a power law. In addition, the temperature of the grains deduced from the \[L-M\] color has been found to remain roughly constant with radius. Thus, we can derive the radial profile of the warm grain number density in the NLR region, $`\eta (r)\propto r^{-\beta }`$. Under optically thin conditions, probably applicable in the NLR, the brightness is directly proportional to the grain number density. Then the observed brightness power law leads to $`\beta `$=1.0, suggesting a concentrated grain distribution (a uniform grain density would correspond to $`\beta `$=0). In ngc 7469, the warm dust component led to a figure for $`\beta `$ around 1.5 (Marco et al. 1998). ## 8 Comparison with thick torus model predictions Several torus models have been developed so far to explain the obscuration of the BLR and UV/X-ray continuum sources along some lines of sight (AGN unification scheme). Some of these models are generic, while others have been designed to match the case of ngc 1068. 
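Before turning to the individual models, the two-population estimate of Sect. 6.3 can be spelled out in code. This is a sketch of mine: the “observed” fluxes are synthesised from the grain numbers quoted above (since Table 2 is not reproduced here), but the grain constants and the resulting warm-dust mass are those of the text.

```python
import numpy as np

H = 6.62607e-34; K_B = 1.380649e-23; C = 2.99792458e8
A = 0.05e-6            # grain radius [m] (0.05 um)
RHO = 2260.0           # graphite density [kg m^-3] (2.26 g cm^-3)
M_SUN = 1.989e30       # solar mass [kg]

def grain_l_nu(nu, T, q_ir=1.4e-24, gamma=1.6):
    """Spectral luminosity of one grain: 4 pi a^2 * pi * Q_nu * B_nu(T)."""
    b_nu = 2 * H * nu**3 / C**2 / (np.exp(H * nu / (K_B * T)) - 1.0)
    return 4 * np.pi * A**2 * np.pi * q_ir * nu**gamma * b_nu

nu = C / np.array([2.2e-6, 3.5e-6, 4.8e-6])          # K, L, M bands
basis = np.column_stack([grain_l_nu(nu, 1500.0),     # hot population
                         grain_l_nu(nu, 500.0)])     # warm population
f_obs = basis @ np.array([2e45, 9e47])               # synthetic "measured" fluxes
counts, *_ = np.linalg.lstsq(basis, f_obs, rcond=None)
print(counts)                    # recovers [2e45, 9e47]
print(counts[1] / counts[0])     # ~450 warm grains per hot grain

m_grain = (4.0 / 3.0) * np.pi * A**3 * RHO           # ~1.2e-18 kg per grain
print(counts[1] * m_grain / M_SUN)                   # ~0.5 solar masses of warm dust
```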
Pier & Krolik (1992a, b, 1993) propose a thick, parsec-scale, uniform-density torus illuminated by a nuclear source. The dust can be heated up to the effective temperature of the nuclear radiation at the inner edge of the torus. They investigate models with effective temperatures between 500 K and 2000 K. Such a model can explain the unresolved core observed at 2.2, 3.5 and 4.8 $`\mu \mathrm{m}`$ with AO in the particular case of ngc 1068. Does it also explain the extended near-infrared emission revealed by these observations? Indeed, extended emission over 1” to 2” could result from radiation reflected off the torus and/or dust in the NLR. Therefore, this model accounts for most of the features unveiled with high resolution imaging in the near-infrared. Efstathiou & Rowan-Robinson (1995) propose a model with a very thick tapered disk. They assume the melting temperature of all dust grains to be identical (1000 K), but consider a radial distribution of the grain physical parameters (size and chemical composition). In the case of ngc 1068, Efstathiou et al. (1995) have shown that the torus emission alone cannot account for the total infrared emission. They attribute the excess infrared emission to a distribution of optically thin dust with $`\beta `$=2 in the NLR region. Their model is in disagreement with the steep grain temperature gradient across the torus, which we infer to exist close to the central engine. Conversely, the dust postulated to be present in the NLR by their model is indeed detected in the AO data set. Granato & Danese (1994) and Granato et al. (1996, 1997) developed a simple thick ($`\tau _e>`$30) torus model extending over several hundred pc. To minimize the number of free model parameters, they adopted a dust density distribution constant with radial distance from the nuclear source. However, they do not rule out the possibility, in the case of smaller values of the optical depth ($`\tau _e`$=1.5), that a more concentrated density distribution exists. Their predicted size and shape for the near-infrared emission are compatible with those derived through AO observations at 2.2, 3.5 and 4.8 $`\mu \mathrm{m}`$. A revised modeling of the AGN in ngc 1068 is timely, owing to the emergence of sub-arcsec resolution images in the near-infrared (AO techniques) and in the millimeter range (interferometric techniques), giving direct access to the dust and molecular environment of the central engine. ## 9 Conclusion The observation of ngc 1068 at 3.5 and 4.8 $`\mu \mathrm{m}`$, for the first time at high angular resolution, provides new information for building a more realistic model of this AGN under the current unification scheme. As regards the AGN structure, we observe: (i) an unresolved core, already known at 2.2 $`\mu \mathrm{m}`$ to have a size (FWHM) of less than 8 pc, and interpreted as the inner region of a dusty/molecular torus in which the central engine of ngc 1068 is embedded; (ii) along P.A. ~100°, an extended emission up to 40 pc on either side of the core, particularly prominent at 4.8 $`\mu \mathrm{m}`$. This structure, also detected at 2.2 $`\mu \mathrm{m}`$ up to 20 pc on either side of the core, is coincident in P.A. with the parsec-scale disc of ionized gas detected with VLBI (P.A. ~110°), and is found to be roughly perpendicular to the axis of the ionization cone in ngc 1068. 
We interpret it as the trace, up to a 40 pc radius, of this dusty/molecular torus seen edge-on; (iii) an extended emission along the NS direction, up to 50 pc from the core and with rather symmetrical properties on either side, both at 3.5 and 4.8 $`\mu \mathrm{m}`$. Again, this extended emission is detected both at 2.2 $`\mu \mathrm{m}`$ on a similar scale and at 10 and 20 $`\mu \mathrm{m}`$ on a slightly larger scale. It reveals the presence of dust in the NLR, heated both by hard radiation within the ionizing cone and by shocks associated with the AGN radio jet. As regards the dust temperature and dust distribution in the central arcsec of the nucleus, we arrive at the following picture. As close as $`r`$ ~1 pc from the central engine, the dust is heated up to its evaporation temperature, 1500 K. The dust temperature then declines very rapidly, reaching T~500 K by $`r`$ = 28 pc. The total mass of warm dust within a 22 pc radius region is found to be around 0.5 $`M_{\odot }`$. It is observed as well that the near-infrared flux of ngc 1068, in the 2.2 to 4.8 $`\mu \mathrm{m}`$ range, has increased by a factor of two over some 20 years, while in the 10 $`\mu \mathrm{m}`$ window the flux increase is only by a factor of 1.2. Our results bring observational evidence for a dusty torus in the AGN of ngc 1068. They further support AGN modelling in the framework of the unification scheme: a thick torus surrounding a central engine (black hole and accretion disc). Although several models of the AGN in ngc 1068 are available, none matches in detail all the aspects of the current near-infrared results obtained on a subarcsec scale. Such new observational constraints make it both timely and exciting to run updated models. Yet, we are aware that the most convincing and indisputable argument for the presence of the torus-like structure will come from a study of the kinematics of the gas within the central 100 parsecs of ngc 1068. We expect such information to be obtained soon from ISAAC/ANTU observations on Paranal. ###### Acknowledgements. We warmly thank J.P. Veran and E. Gendron for useful discussions, F. Lacombe for his help with the data reduction and Z. Tsetanov for his valuable contribution in deriving the HST composite image seen by the WFAS. We also acknowledge valuable comments from an anonymous referee.
# Galactic Bulges from HST-NICMOS Observations: Ages and Dust ## 1 Introduction We do not have a clear picture of the formation mechanism for the central bulges of spiral galaxies. Their structural and dynamical properties have long suggested the view that bulges are like small elliptical components residing in the center of a large disk. Indeed, bulges obey the D<sub>n</sub>-$`\sigma `$ relation and fall on the Fundamental Plane of elliptical galaxies (Dressler et al. 1987, Bender et al. 1992). Their surface brightness profiles fall off as $`\mu (r)\propto r^{1/n}`$ (Andredakis et al. 1995), with $`n`$ = 4 for early-type galaxies falling to $`n`$ = 1 for Sc and later types. Dynamically, bulges are consistent with oblate, isotropic models, like low-luminosity ellipticals (Kormendy & Illingworth 1982, Davies et al. 1983). In a series of papers, Peletier and Balcells have studied the optical and near-infrared (NIR) colour distributions of a complete sample of spiral bulges, using ground-based data (Balcells & Peletier 1994, Peletier & Balcells 1996, 1997), with the goal of extending the comparison of bulges and ellipticals to their stellar populations. They find that, once dust is accounted for, the colours of bulges are never redder than those of elliptical galaxies of the same luminosity. Bulge colours are very similar to those of the inner disk, the differences being much smaller than the colour differences from galaxy to galaxy. Population models then suggest that the inner disk (at 2 scale lengths) must have formed at the same time as, or at most 3 Gyr after, the bulge, and that bulge metallicities are lower than those of giant ellipticals. HST allows us to investigate the centres of bulges with a tenfold increase in angular resolution, yet HST data on the colours of galaxy centres at scales of tens of parsecs are scarce. Colours contain useful information on the stellar populations of the galaxy centres, and allow us to estimate the level of internal extinction. HST data have shown that dust patches are very common in galaxy nuclei, even in ellipticals (van Dokkum & Franx 1995); hence we expect a strong signature of dust in spiral bulges. NICMOS allows an unprecedented look at the inner structure of bulges in the H band, where the extinction is approximately a factor of 6 lower than in V. NICMOS can produce more reliable measurements of surface brightness profiles and isophotal shapes. Here we show the results of the first colour study of the centers of galactic bulges based on HST/NICMOS data. We observed in the NICMOS F160W band ($`H`$-band), F814W ($`I`$) and in F450W, a wider version of the $`B`$-band. This combination of passbands is very suitable for studying stellar populations in galaxies with limited amounts of extinction and recent star formation, like the centres of early-type spirals. In those galaxies the observed colours are determined by the underlying old stellar population, perturbed by dust extinction and the light of young, recently formed stars. Extinction by dust affects all optical and near-infrared colours, while recent star formation can only be seen in blue passbands (see e.g. Knapen et al. 1995). In that paper it is shown that a red colour, like $`I-H`$, will primarily show the extinction due to dust, as well as old stellar population gradients, while a colour which includes a blue band like $`B`$ or $`U`$, in this paper $`B-I`$, is especially sensitive to recent star formation. 
In the absence of extinction, B, I and H make it possible, in principle, to measure the age and metallicity of the stellar population (e.g. Aaronson et al. 1978; Peletier et al. 1990; Bothun & Gregg 1990). We infer large amounts of centrally concentrated dust within the inner 100 pc. Outside this region, the colour-colour distribution is quite tight, allowing us to place limits on the age dispersion of galactic bulges. With HST it is easier to reach a high photometric accuracy than with ground-based instruments, because there is no atmospheric extinction to correct for, the instrumental PSF is stable, and the instrument is very well characterized. This means that we can expect accuracies of 0.02-0.03 mag or better in optical and near-infrared colours. Such high quality measurements are vital if we are to separate the effects of age and metallicity using colours, even though the large wavelength baseline ($`B`$-$`I`$-$`H`$) minimises the effect of photometric errors. Currently there are three main scenarios for the formation of bulges: bulges form before, contemporaneously with, or after disks (see the review by Wyse et al. 1997, and Bouwens et al. 1999). In the first scenario (Eggen et al. 1962, Larson 1975, Carlberg 1984) the formation of bulges is mainly described by the collapse of a primordial gas cloud into clumps, which then merge together. The disk only forms after the last massive merger, via gas infall. In the second scenario, the infall of a gas-rich dwarf galaxy onto a disk produces star formation more or less at the same time in bulge and disk (Pfenniger 1992). In the third scenario of secular evolution of disks (e.g. Combes et al. 1990, Pfenniger & Norman 1990, Norman et al. 1996), the bulge is formed by dynamical instabilities of the disk, in which a bar forms and then builds a massive central concentration by allowing material to stream towards the central regions. The central mass concentration itself then tidally disrupts the bar and forms the bulge. If material continues to fall onto the disk, a new bar can form sometime later, and the whole process starts again, inducing more star formation in the bulge. To distinguish between these models one needs to measure the ages of both bulges and disks. Peletier & Balcells (1996) and Terndrup et al. (1994), using ground-based data, found that the ages of bulges and inner disks are very similar. In this paper we measure the ages of the bulges themselves. The paper is organized as follows: Section 2 gives details of the observations and data reduction, Section 3a discusses the dust content of the galactic bulges, Section 3b describes the analysis of the stellar populations, and in Section 4 some implications for our understanding of galactic bulges are discussed. ## 2 Sample and Observations Twenty galaxies were observed with HST in Cycle 7 (Summer 1997) with WFPC2 (F450W and F814W) and NICMOS (F160W). They are all part of the original sample of Balcells & Peletier (1994), which is a complete, $`B`$-magnitude-limited sample of early-type spirals (types S0-Sbc) with inclinations larger than 50<sup>o</sup>, and for which one side of the minor axis colour profile is approximately featureless as seen from the ground. The subsample considered here was chosen to include galaxies of types S0-Sbc, excluding galaxies that are exactly edge-on, and is biased towards the nearest objects. The galaxies observed are listed in Tables 1 and 2. 
Their basic parameters (luminosities, effective radii and ground-based colours) are given in Andredakis et al. (1995) and Peletier & Balcells (1997). The mean absolute $`R`$-band magnitude of the sample is $`M_R`$ = –22.50 mag, with an RMS dispersion of 1 mag (H<sub>0</sub> = 70 km s<sup>-1</sup> Mpc<sup>-1</sup>). The standard HST-pipeline data reduction was used for the WFPC2 optical data. For NGC 7457 the $`I`$-band observations were taken from the HST Archive. For the $`H`$-band, two MULTIACCUM NIC2 exposures of 256 s in total were taken, offset from each other by 1<sup>′′</sup>, to enable us to remove bad pixels. The standard STSDAS CALNICA reduction package was used, after which a small fraction of the flatfield was subtracted from the reduced images, to account for a non-zero pedestal level in the raw image. This fraction was determined by requiring that the final image be smooth, and amounts to about 0.3-0.5 ADU s<sup>-1</sup>, corresponding to about $`H`$ = 17 mag arcsec<sup>-2</sup>. Applying this step is only important for the lowest flux levels, but the images improve considerably in quality (they look like the ground-based images). The final sky background level was determined by comparing the image with the ground-based image in the same band. To compare the $`H`$-band data with the ground-based data in $`K`$, we used the following transformation, derived from the new GISSEL96 models of Bruzual & Charlot (see Leitherer et al. 1996): $$H-K=0.111(I-K)-0.0339$$ The images were calibrated to the STMAG system using Holtzman et al. (1995), applying a constant shift of 0.10 mag to correct to infinite aperture. The method described in the same paper was used to iteratively account for the colour term in the F450W filter. The colours were corrected for Galactic extinction using the new dust maps of Schlegel et al. (1998) and the Galactic extinction law (Rieke & Lebofsky 1985). After this the bands were K-corrected. This small correction (from Persson et al. 1979) was $`\mathrm{\Delta }B`$ = –5$`z`$, $`\mathrm{\Delta }I`$ = –$`z`$ and $`\mathrm{\Delta }H`$ = 0. To account for the difference in Point Spread Functions (PSFs) when determining the $`X-Y`$ colour maps and profiles, the $`X`$-band image was convolved with the $`Y`$-band PSF, and the $`Y`$-band image with the $`X`$-band PSF. These PSFs were determined with the TinyTim package (Krist 1992). We then determined minor-axis profiles by averaging azimuthally in wedges of 22.5<sup>o</sup>, centred on the $`H`$-band nucleus. We then combined the surface brightness profiles with the ground-based profiles to cover the whole spatial range of the bulge, by shifting the ground-based profile on top of the HST profile between about 3<sup>′′</sup> and 6<sup>′′</sup>. This way the final profiles have high signal-to-noise everywhere, together with the high photometric accuracy of HST data, for which absolute and relative accuracies of a few per cent can be expected (Colina et al. 1998). We then subtracted these wedge profiles in pairs of bands to obtain colour profiles. In Table 1 we list the galaxies and their colours at the centre and at one R<sub>eff</sub>. ## 3 The $`B-I`$ vs. $`I-H`$ colour-colour diagram ### 3.1 Dust in Nuclei of Galactic Bulges In Fig. 1 we show intensity maps of the inner regions of the galaxies, together with $`B-I`$ and $`I-H`$ colour maps with the same scale and orientation, superimposed on $`H`$-band contour maps. The maps indicate the position of dust lanes. 
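The colour maps of Fig. 1 rest on the PSF cross-convolution step described in Section 2. The following is a minimal sketch of that construction; the synthetic images and toy Gaussian PSFs are hypothetical stand-ins for the real frames and the TinyTim models, and the function names are ours.

```python
import numpy as np
from scipy.signal import fftconvolve

def gaussian_psf(size, fwhm):
    """Toy circular Gaussian PSF (a stand-in for a TinyTim model)."""
    sigma = fwhm / 2.355
    y, x = np.mgrid[:size, :size] - size // 2
    psf = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
    return psf / psf.sum()

def colour_map(img_x, img_y, psf_x, psf_y):
    """Cross-convolve so both images share the same effective PSF,
    then form the colour map  X - Y = -2.5 log10(f_X / f_Y)."""
    x_matched = fftconvolve(img_x, psf_y, mode="same")  # X image blurred by Y PSF
    y_matched = fftconvolve(img_y, psf_x, mode="same")  # Y image blurred by X PSF
    return -2.5 * np.log10(x_matched / y_matched)

# hypothetical 256x256 flux images in two bands
rng = np.random.default_rng(0)
img_i = 100.0 + rng.random((256, 256))
img_h = 120.0 + rng.random((256, 256))
ih = colour_map(img_i, img_h, gaussian_psf(33, 3.0), gaussian_psf(33, 5.0))
print(ih.mean())
```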
It is clear from the figure that many, maybe all, galaxies have red nuclei. To show this more clearly, we show all colour profiles obtained in wedges along the dust-free side of the minor axis in Fig. 2. This is not the first time that red nuclei have been found in spiral galaxies. For example, in our ground-based data (Peletier & Balcells 1997) we show several galaxies with red nuclei. For that sample, however, the colour profiles on the least dusty side are generally featureless at radii larger than 1<sup>′′</sup>, showing the same logarithmic colour gradients throughout the bulge. Inside 1<sup>′′</sup> no information about the colours is available from ground-based data. The fact that the PSF of the HST data is well known and stable enables us to correct the colour profiles and maps for most of the instrumental PSF effects. In this way one can measure colours down to radii as small as about 0.10<sup>′′</sup> (the diffraction limit is 0.15<sup>′′</sup>). At this resolution the red nuclei are easy to see, and they are extended in most of our sample. Nuclei can be red because of local dust, large foreground dust lanes, or because of red stellar populations. One can usually distinguish between dust and stellar-population reddening by looking for patchy structures, since these are generally due to dust extinction, or to star formation in combination with extinction. In Table 2 we indicate the nature of the features that we find in the nuclei. We find (Table 2) only three galaxies without nuclear dust features: NGC 6010, 6504 and 7457, although in some cases a large foreground dust lane makes nuclear dust difficult to detect. For these three galaxies we analysed the isophote shapes more carefully. Dust patches make galaxy isophotes irregular; in particular, the third-order Fourier terms C3 and S3 (Carter 1978) will be non-zero, generally increasing in amplitude towards the blue. S3 and C3 were found to be significantly non-zero in both $`B`$ and $`I`$ in NGC 6010 and 6504, but not in NGC 7457. We conclude that the signature of dust is to be found in almost all galaxies in the sample, including some galaxies indicated with a (–) sign in column 3 of Table 2. How much reddening is caused by the dust? In Fig. 3 we show the colours at the centre (filled dots) and at one bulge effective radius (open dots) in an $`I-H`$ vs. $`B-I`$ colour-colour diagram. Figs. 3(b) and (c) show that in all cases the galaxy is redder in the centre than at r<sub>e</sub>, sometimes by very large amounts. Since the vector indicating reddening by dust is almost parallel to the vector indicating changes in metallicity, it is not possible to say exactly how much of the reddening is due to extinction. Lower and upper limits to the internal extinction may be estimated as follows. A plausible upper limit to the reddening is derived by assuming that the galaxy at 1 R<sub>eff</sub> is dust-free. This is perhaps reasonable, since at these radii no structure is seen in the colour maps, and there is a small dispersion in the colour-colour plot. We infer that the central $`I-H`$ and $`B-I`$ colours are reddened by an average of 0.42 $`\pm `$ 0.06 mag and 0.61 $`\pm `$ 0.11 mag, respectively. A lower limit to the reddening can be found if one assumes that the stellar populations are never redder than those of the central Virgo galaxy NGC 4472. Using the data at 5<sup>′′</sup> (or about 400 pc) from Peletier et al. 
(1990) for NGC 4472, converted to $`B-I`$ and $`I-H`$ as described in the Appendix, we find that the $`I-H`$ colours would be reddened by 0.37 $`\pm `$ 0.06 mag on average and $`B-I`$ by 0.41 $`\pm `$ 0.10 mag. Using the Galactic extinction law (Rieke & Lebofsky 1985), these numbers correspond to an average internal extinction A<sub>V</sub> between 0.89 and 1.01 mag if $`I-H`$ is used, and between 0.56 and 0.78 mag when one uses $`B-I`$. The estimates using $`I-H`$ are somewhat higher than those from $`B-I`$, because nuclear star formation in some galaxies makes the radial colour difference in $`B-I`$ smaller, as is seen in NGC 5838, 5854 and 5475. In the $`B-I`$ colour maps of these galaxies there are blue nuclear features which do not appear in $`I-H`$. These patches, presumably caused by young stars, reduce the radial colour variations in blue colours like $`B-I`$ but not in $`I-H`$. Our upper limits for the amount of dust inferred here are probably somewhat too high, because part of the reddening is caused by stellar-population gradients. From the most dust-free galaxies we infer that the colour difference due to stellar populations between the centre and 1 R<sub>eff</sub> is about 0.1 mag in both $`B-I`$ and $`I-H`$, which implies that colour gradients in the inner parts of bulges are affected much more by dust than by stellar populations. ### 3.2 Stellar Populations Apart from providing high resolution, HST also has the advantage that the photometric conditions are very stable, so that accurate colours can be determined. For this reason, and because we have a very large colour baseline, we can use the colour-colour diagram to infer information about the age spread in the sample, and about the cause of the stellar-population gradients. Studying Fig. 1 carefully, we see that many physical phenomena are playing a role in these galaxies. We observe the combined effects of extinction, recent star formation, and old stellar-population gradients. To disentangle them we have displayed the galaxies in Fig. 1 according to their morphological type. Although the type (from de Vaucouleurs et al. 1991) has been assigned on the basis of low-resolution optical observations, we can see that the properties of the galaxies change smoothly as a function of type. S0 galaxies have small dust lanes, and rather featureless colour maps. Sb galaxies tend to have strong dust lanes, while the Sbc galaxies are considerably bluer, have lower surface brightness, show patchy dust and star formation together, and are rather different from the rest of the galaxies. Having established (in Section 3.1) that extinction at 1 R<sub>eff</sub> is probably negligible, we will now consider what the colours can tell us about the stellar populations. First we note the small scatter amongst the open symbols in Fig. 3a, confirming that extinction is not important at 1 R<sub>eff</sub>. If we exclude the three galaxies with the latest Hubble type (the open crossed symbols in Fig. 3c), the stellar populations at R<sub>eff</sub> form a rather tight sequence in the $`B-I`$ vs. $`I-H`$ plane. Fig. 1e shows that the central regions of our three Sbc galaxies are full of dust and regions of recent star formation; we will not consider them further in this analysis of stellar populations. In Fig. 3a we also show a set of Single Stellar Population (SSP) models of Vazdekis et al. (1996). The models displayed here have a Salpeter IMF, with a reduced number of low-mass stars, to better match the Scalo (1986) IMF. 
We see that, independent of the amount of extinction, the tightness of the colour-colour relation shows that the luminosity-weighted age of the stars is very similar from bulge to bulge. According to the models the age spread would be about 1-2 Gyr, although the bluest (and generally faintest) bulges would be somewhat younger. We derive a mean luminosity-weighted age of 9 Gyr from these models. While the absolute ages are poorly constrained, the relative ages are much more robust, so we can conclude that the age spread amongst bulges in this sample is small. (If we use the models of Worthey (1994), the inferred luminosity-weighted age would be implausibly small: 2 Gyr.) The galaxy sequence runs parallel to lines of constant age, suggesting that the colour variations from galaxy to galaxy are due predominantly to changes in metallicity, supporting the view that the colour-magnitude relation for early-type galaxies (e.g. Bower et al. 1992) is mainly driven by changes in metallicity. Figure 3c shows that the type dependence along the colour-colour relation is very small. The outlying point is NGC 5854, which has a blue central patch (probably of recent star formation). NGC 7457 is a faint S0 galaxy that has bluer $`B-R`$ and $`U-R`$ colours than an elliptical of comparable luminosity (Balcells & Peletier 1994), accounting for its rather young inferred age. The bulk of the galaxies of type Sb and earlier occupy a narrow band in $`B-I`$ vs. $`I-H`$. As a comparison we have plotted the colour-colour relation for Coma, determined from the data of Bower et al. (1992), converted to our colours as described in the Appendix. We find that the colours of our bulges at 1 R<sub>eff</sub> are very similar to those of bright Coma galaxies. Our bulges, however, are somewhat bluer in $`I-H`$ and redder in $`B-I`$, which, according to the stellar-population models, could be explained if they are slightly older and somewhat less metal-rich. The colour-colour conversion is rather uncertain, however. Also plotted in Fig. 3c are the colour-colour diagrams converted using the theoretical models of Vazdekis et al. (1996) and Worthey (1994). As can be seen in the Figure, the difference between the three colour-colour diagrams in $`B-I`$ and $`I-K`$ is quite large. Although we argue (see Appendix) that our empirical calibration can be trusted much more than the theoretical calibrations, one should be careful not to over-interpret the data. What seems safe to conclude is that Coma galaxies are to be found in the shaded area, and that the ages of our bulges are similar to those of the early-type galaxies in Coma. ## 4 Discussion In the previous section we have shown that: 1. Centres of bulges of early-type spirals are generally dusty. We find that A<sub>V</sub> on average lies between 0.6 and 1.0 mag, which implies that A<sub>H</sub> should be between 0.1 and 0.2 mag. 2. At 1 R<sub>eff</sub> galactic bulges show a very tight $`I-H`$ vs. $`B-I`$ relation, implying that the age spread among bulges of early-type spirals is small (at most 2 Gyr). 3. The colours of bulges of early-type spirals at 1 R<sub>eff</sub> are similar to the colours of early-type galaxies in the Coma cluster. This implies that the age difference between nearby bulges and cluster ellipticals is small, probably smaller than about 2 Gyr. Since the observed extinction is more than a factor of 2 (0.76 mag), if the dust is located in or near the centre we cannot see through to the other side of the galaxy. The extinction locally will be very large (see Sadler & Gerhard 1985). 
The fact that this occurs in so many of our galaxies shows that the very central region is almost always optically thick in $`B`$ or $`V`$. The situation with dust in the centres of spiral bulges is similar to that of elliptical galaxies. Van Dokkum & Franx (1995), analyzing WFPC data of nearby elliptical galaxies, found evidence for central extinction in 75% of their sample. This is more than was previously found from the ground (Ebneter et al. 1988, Véron-Cetty & Véron 1988). Goudfrooij et al. (1994b) detected dust in 41% of the galaxies in their sample of Revised Shapley-Ames (Sandage & Tammann 1981) galaxies. A similar detection rate was found from IRAS fluxes by Knapp et al. (1989). The higher angular resolution of HST allows us to reach much lower detection limits; we have been able to detect dust in 95 $`\pm `$ 5% of our sample of bulges of early-type spirals. This detection rate is higher than for ellipticals, although our method of finding dust, using colour images, is more sensitive than the method of van Dokkum & Franx, who determined their dust masses from one $`V`$-band image only. Van Dokkum & Franx found an average dust mass of 4 $`\times `$ 10<sup>3</sup> M<sub>⊙</sub>. For the bulges analysed here we find an average dust mass of about 10<sup>4</sup> M<sub>⊙</sub>, determined using the method of van Dokkum & Franx: for each dust feature the mass is calculated using M<sub>dust</sub> = $`\mathrm{\Sigma }`$ $`A_V`$ $`\mathrm{\Gamma }_V^{-1}`$, with $`\mathrm{\Sigma }`$ the area of the feature, $`A_V`$ the mean absorption in the area, and $`\mathrm{\Gamma }_V`$ the visual mass absorption coefficient (Sadler & Gerhard 1985). $`\mathrm{\Gamma }_V`$ is taken to be 6 $`\times `$ 10<sup>-6</sup> mag kpc<sup>2</sup> M<sub>⊙</sub><sup>-1</sup>. Rough values of $`A_V`$ were determined by taking the difference in $`B-I`$ or $`I-H`$ between the feature and the dust-free values at 1 R<sub>eff</sub>, converted to A<sub>V</sub> using the Galactic extinction law (Rieke & Lebofsky 1985). If we assume a Galactic gas-to-dust ratio of 130 (see van Dokkum & Franx 1995), we find that the galaxies analysed in this paper have an average of 10<sup>6</sup> M<sub>⊙</sub> of interstellar material in their nuclear regions. The origin of the nuclear dust is unclear. At larger radii, kinematic observations of gas (usually associated with dust) in ellipticals show that it is often decoupled from the stellar velocity field (for a discussion see Goudfrooij et al. 1994b). This is used to imply that large-scale dust lanes are of external origin, perhaps being accreted during galaxy mergers or interactions. However, the origin of the small arcsecond-scale dust lanes found in ellipticals, often oriented along the major axis of the stellar body (Goudfrooij et al. 1994b), could well be internal. Scaling from the numbers for stellar mass loss in bright ellipticals given by Faber & Gallagher (1976), we estimate that for typical bulges in our sample mass will be deposited into the bulge at a rate of about 0.1–1 M<sub>⊙</sub> yr<sup>-1</sup>, so there is no problem accumulating the dust we see. The amount of dust, the large detected fraction, and the fact that the dust lanes are found parallel to the major axis indicate an internal origin. The central dust provides a suitable environment for centrally concentrated star formation, probably leading to strong metallicity enhancements in the central 100 pc of bulges. In that regime, it is possible that so much dust is produced that ordinary dust destruction mechanisms are ineffective. 
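The conversion from a colour excess to A<sub>V</sub>, and from there to dust and gas masses, reduces to a few lines of arithmetic. In the sketch below the A<sub>band</sub>/A<sub>V</sub> ratios are representative values for a Rieke & Lebofsky-type extinction law (an assumption of this illustration, not necessarily the exact coefficients used here), and the feature geometry is invented; Γ<sub>V</sub> and the gas-to-dust ratio are the values quoted above.

```python
# Sketch of the van Dokkum & Franx dust-mass estimate, as applied above.
# The A_band/A_V ratios are illustrative values for a Galactic extinction law,
# an assumption of this sketch rather than the paper's exact coefficients.
RATIO = {"B": 1.32, "V": 1.00, "I": 0.60, "H": 0.18}
GAMMA_V = 6.0e-6        # visual mass absorption coefficient [mag kpc^2 / M_sun]
GAS_TO_DUST = 130.0     # assumed Galactic gas-to-dust ratio

def a_v_from_excess(excess, band1, band2):
    """A_V implied by a colour excess E(band1 - band2) for a foreground screen."""
    return excess / (RATIO[band1] - RATIO[band2])

def dust_mass(area_kpc2, mean_a_v):
    """M_dust = Sigma * A_V / Gamma_V for a single dust feature."""
    return area_kpc2 * mean_a_v / GAMMA_V

# hypothetical nuclear feature: a 0.2 x 0.3 kpc patch reddened by E(I-H) = 0.42 mag
a_v = a_v_from_excess(0.42, "I", "H")          # ~1.0 mag
m_dust = dust_mass(0.2 * 0.3, a_v)             # ~1e4 M_sun
print(f"A_V ~ {a_v:.2f} mag, M_dust ~ {m_dust:.1e} M_sun, "
      f"M_gas ~ {GAS_TO_DUST * m_dust:.1e} M_sun")
```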
Potentially, high-resolution spectroscopic observations will show us whether this dust is indeed of internal origin, and what metallicities are being reached. The dust, however, could pose some serious observational difficulties for the determination of the inner slope of the stellar surface density profile of bulges, even in the H band. Several of our bulges have central features which resemble the inner disk in the giant elliptical NGC 4261 (Jaffe et al. 1996). Examples are NGC 5326, NGC 5587, NGC 5838 and NGC 5854. NGC 4261 has a LINER spectrum in the centre; Jaffe et al. argue that this inner disk might provide the fuel for the AGN. Only four of the objects studied in our paper, however, are known to be (weakly) active: NGC 5746, NGC 5838, NGC 5879 and NGC 7331, all classified by Ho et al. (1997) as T2, transition objects between LINER and Seyfert, with narrow emission lines. Their colours are, however, entirely consistent with an extincted stellar population. Furthermore, many of the ellipticals with similar features are not active galaxies. It seems that an inner disk of gas and dust alone is not sufficient to produce an AGN. No HST colour profiles of bulges of spirals have appeared in the literature up to now. The only paper presenting high-resolution HST colour profiles of ellipticals is Carollo et al. (1997a), who report $`V`$ and $`I`$ profiles of 15 galaxies with dynamically decoupled cores using the refurbished WFPC2. They find that their galaxies all have very similar $`V-I`$ gradients between radii of 1.5<sup>′′</sup> and 10<sup>′′</sup>, while the dispersion is larger between 0.25<sup>′′</sup> and 1.5<sup>′′</sup>, where some galaxies are seen with gradients that are twice as large, while others are much smaller, or even negative. Clearly the behaviour of $`V-I`$ is different in the inner 1.5<sup>′′</sup> as compared to the area further out. Although Carollo et al. (1997a) masked out dusty areas before obtaining the colour profiles by radially averaging the remaining light, this process will probably not have removed all the extinction. The large spread in colour gradient in the inner region might be a confirmation of significant quantities of dust near the centre in almost all elliptical galaxies. What can we learn about the formation of bulges? The fact that the ages of most of the bulges in this paper are so similar and old makes it very difficult to form the bulges of early-type spirals (S0-Sb) through secular evolution of disks. In this scenario it is expected that bulges regularly undergo major bursts of star formation, converting gas that has been funneled to the central area through the presence of a bar into stars. This would mean that we would expect to find more young bulges and a large spread in bulge ages. Another problem for early-type bulges is that the bulge densities are up to a factor of 100 higher than in the centres of the disks. To create such high overdensities in the disk, the required star formation rates are such that they could easily disrupt the disk (Ostriker 1990). Late-type galaxies (Sbc and later types) might be different. We see that the three in our sample have younger ages (crossed circles in Fig. 3), and also that the density contrast between bulge and disk is much smaller. For these galaxies, spiral arms, blue star-forming regions and dust lanes are seen in the bulge region, as in the disk, telling us that the stellar populations of bulge and disk here are very much the same (see Fig. 1e). Although Norman et al. 
(1996) use the fact that the Fundamental Plane has changed little from a redshift of the order of 0.5 to now as an argument against secular-evolution-driven bulge building, it is not clear whether bulges of late-type galaxies really lie on the fundamental plane of ellipticals and early-type bulges, since no data are available at present. The fact that the ages of our bulges are all so similar supports the idea of a major episode of star formation in the past, in which most of these bulges (and also the bright galaxies in large clusters like Coma) were formed. It is currently thought that cluster ellipticals must have formed at redshifts beyond $`z`$ = 3 (Ellis et al. 1997, Stanford et al. 1997), because of the lack of evolution seen in clusters of intermediate redshift. This corresponds to an age of 10.5 Gyr ($`\mathrm{\Omega }_0`$ = 0.2, $`\mathrm{\Omega }_\mathrm{\Lambda }`$ = 0.8, H<sub>0</sub> = 70 km s<sup>-1</sup> Mpc<sup>-1</sup>), or 8.4 Gyr ($`\mathrm{\Omega }_0`$ = 1, $`\mathrm{\Omega }_\mathrm{\Lambda }`$ = 0, H<sub>0</sub> = 70 km s<sup>-1</sup> Mpc<sup>-1</sup>) (Hogg 1999). The fact that the colours of the majority of our bulges are similar to those of the Coma galaxies indicates that our bulges (except maybe the Sbc's) are also old, and formed at redshifts beyond $`z`$ = 3. These observations endorse the original model of Eggen et al. (1962), in which bulges formed early on, during a monolithic, or clumpy, collapse. The fact that the colours of bulges and inner disks are very similar (Terndrup et al. 1994, Peletier & Balcells 1996) then implies that the disk formed very gradually from the inside out, with the age of the inner disk similar to the age of the bulge. Can we then say something about the age of the disk? Would it be possible that the whole disk formed at the same time as the bulge? At that point a conflict arises with the star formation history of the universe derived e.g. by Madau et al. (1996) on the basis of data from the Hubble Deep Field, which shows a maximum between $`z`$ = 1 and 3. Since our bulges are found in a variety of environments, they are in no way special, and if most early-type spirals formed at redshifts beyond 3, this would also imply that the maximum in the HDF star formation history would have to move to larger $`z`$. However, this problem would be solved if the disks of early-type galaxies were, on average, considerably younger than their bulges. With a sufficiently large age difference between bulge and disk, the luminosity-weighted age of the galaxy (which is in general dominated by the disk) can then be young enough. There is nothing in the data of Terndrup et al. (1994) and Peletier & Balcells (1996) preventing this from happening. Alternatively, Kauffmann (1995) and Governato et al. (1998) have pointed out that in biased models of galaxy formation, accelerated evolution can be expected in dense regions. Our observations here show that there is no difference in age between the ellipticals in Coma and our galactic bulges. This would mean that our bulges, except for the latest types, must have formed early, beyond $`z`$ = 3, as well. On the other hand, we also do not find any dependence of bulge age upon environment. The bulges are found in all kinds of environments, ranging from isolated galaxies to groups of more than 20 members (see Table 1). Apparently, everywhere in the universe these intermediate-size galaxies must have started forming early on. It appears that the only feasible solution is that bulges of early-type spirals are old, and disks considerably younger. 
This is in agreement with Abraham et al. (1998), who find for the disk galaxies in the HDF that their bulges are significantly older than their disks (by up to 50%). One has, however, to take into account that at $`z\sim 1`$ many large spiral galaxies are found (Lilly et al. 1998), whose stellar populations are consistent with declining activity since at least $`z`$ = 1.5–2 (Abraham et al. 1998). Semi-analytic galaxy formation models also support this picture. In Fig. 4 a histogram is shown of the distribution of $`V`$-band-averaged ages of ellipticals and bulges in the simulations of Baugh et al. (1996) (Baugh, private communication). The models find that the large majority of bulges are old. This is independent of environment, contrary to elliptical galaxies, which are found to be old in rich clusters, and can be 5 Gyr younger in the field. Bulges are old because the accompanying disk needs time to form without being disrupted. We have found that bulges of early-type spirals are in general old, that their age spread is smaller than 2 Gyr, and that their colour gradients are mainly due to metallicity gradients in the stellar populations. These results once more support the idea that bulges and ellipticals are similar objects. The good agreement between the observational results of this paper and models like the semi-analytical models of Baugh et al. (1996) strongly supports the picture that bulges were formed through monolithic or clumpy collapse, and formed much before their disks. However, one can see that bulges of Sbc galaxies are different in many respects, and it is likely that these differences will be larger for Sc galaxies and later types (see also the recent HST studies of Carollo et al. 1997b, 1998). They are smaller, younger, have lower central surface brightness, and it is not clear whether they fall on the fundamental plane of elliptical galaxies. Since the observations until now are not good enough to establish whether these late-type bulges can be made from bars, so that the galaxy type can be changed by secular evolution, it is clear that it is very important to study the bulges of late-type spirals to understand the formation of bulges. ## Acknowledgements This paper is based on observations with the Hubble Space Telescope. The authors acknowledge very useful support from Luis Colina, Massimo Stiavelli, Jeremy Walsh and Doris Daou at the STScI and ST-ECF, and useful discussions with Ian Smail, Harald Kuntschner, Carlton Baugh and Carlos Frenk. RLD is grateful to Durham University for the award of a Sir James Knott Fellowship and to the Leverhulme Trust for the award of a Research Fellowship; these awards contributed significantly to this research. ## Appendix A Colour transformations for the colour-colour diagram of the Coma cluster In this appendix we describe how a colour-colour relation for the Coma cluster was obtained in $`B-I`$ and $`I-H`$. We started with the excellent data of Bower et al. (1992) in $`U`$, $`V`$ and $`K`$. As a first attempt we tried to make the colour conversions using single-burst theoretical models for old stellar populations with a Salpeter IMF. A fit to the models of Vazdekis et al. (1996) gives $$I-K=0.546(V-K)+0.275$$ and $$B-V=0.433(U-V)+0.296$$ $`I-K`$ was finally converted to $`I-H`$ using the new GISSEL96 models of Bruzual & Charlot (see Leitherer et al. 1996) via $$I-H=0.889(I-K)+0.034$$ since the Vazdekis models do not tabulate the $`H`$-band. 
Alternatively, a fit to the models of Worthey (1994) gives: $$I-K=0.707(V-K)-0.253$$ and $$B-V=0.407(U-V)+0.379$$ with a very similar transformation from $`I-K`$ to $`I-H`$: $$I-H=0.895(I-K)+0.047$$ There are several reasons to be wary of this approach. In the first place, one knows (e.g. Charlot et al. 1996) that the infrared colours of the Worthey models at large metallicities are too red by as much as a magnitude. Secondly, the model colours of Vazdekis et al. in the infrared, just like those of Bruzual & Charlot (Leitherer et al. 1996), do not appear to be very accurate either, since they show abrupt jumps as a function of age and metallicity, due to the rather poorly known contribution from stars in the late stages of their evolution. The conversion between $`I-K`$ and $`I-H`$ is only of minor importance. When one compares the results of the two methods, one finds that the difference in the colour-colour diagram of Coma, when using two independent models, is quite large (see Fig. 3c). This shows that a different, independent way to convert the colours would be very useful, and for that reason we attempted to derive an empirical conversion. In the search for data in $`B`$ or $`I`$ of the Coma cluster to combine with Bower et al.'s dataset, we have found the data of Jørgensen et al. (1994), with photometry in $`V`$ and Gunn $`r`$, and the Ph.D. thesis of Steel (1998), with $`V`$ and $`R`$. Since there is no $`I`$-band photometry available, we decided to determine a conversion from $`V-K`$ to $`V-R`$, a colour with information very similar to $`V-I`$, and then to convert $`V-R`$ to $`V-I`$ using published standard stars. It was also found that if one determines the colour of a galaxy using its integrated magnitudes, the observational error is much larger than if the same aperture is used in both bands. For that reason we preferred to use the $`V-R`$ data inside 20<sup>′′</sup> of Steel (1998), for which the dispersion in the $`V-R`$ vs. $`R_T`$ relation was only 0.02 mag, rather than the total colours of Jørgensen et al. (1994). Combining the data of Steel (1998) with those of Bower et al. (1992) one obtains $$V-R=0.336(V-K)-0.501$$ with a scatter in $`V-R`$ of 0.03 mag. To convert $`V-R`$ to $`V-I`$ we fitted a least-squares relation to all the Landolt (1992) standard stars. We find a very tight relation $$V-I=2.029(V-R)-0.018$$ with negligible scatter. The validity of this conversion between $`V-K`$ and $`V-I`$ can, for example, be established by looking at the $`V-I`$ vs. magnitude relation for the elliptical galaxies of Goudfrooij et al. (1994a). This comparison gives satisfactory results. Finally, we need to obtain a relation between $`U-V`$ and $`B-V`$. Since here also there is quite a difference between the theoretical models, we have determined a least-squares relation between the observed $`U-V`$ and Jørgensen's $`B-r`$, which we first converted to $`B-V`$ using the conversion given in Jørgensen (1995): $$B-V=0.673(B-r)+0.184$$ to get: $$B-V=0.356(U-V)+0.448$$ Errors are difficult to determine. We can get an estimate by comparing the data of Goudfrooij et al. (1994a) with the $`V-K`$ data of Bower et al. converted to $`V-I`$ (Figure 5). Plotted on the x-axis for the data of Bower et al. are absolute $`V`$-band magnitudes, obtained using a distance modulus of 30.82 for Virgo and 34.51 for Coma (Aaronson et al. 1986), to which a correction of 0.76 mag has been applied to convert them to integrated magnitudes. For the data of Goudfrooij et al. (1994a) we plot their M<sub>V</sub> values. 
There are three galaxies in common between the two samples, for which the M<sub>V</sub> values agree to within 0.10 mag. Comparing the two samples, we find that for the galaxies between M<sub>V</sub> = –21 and –24, $`V-I`$ is redder by 0.032 mag on average in the data of Goudfrooij et al. (1994a). This shows that our conversion from $`V-K`$ to $`V-I`$ is probably reasonable, and based on this, and on the tightness of the individual relations, we believe that the error in $`V-I`$ is at most 0.06 mag, the same for $`I-H`$ or $`I-K`$, and 0.05 mag in $`B-V`$, resulting in 0.08 mag in $`B-I`$. We applied these transformations not only to the mean relation for the Coma cluster, but also to the giant elliptical NGC 4472 (see Fig. 3).
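For reference, the empirical conversions derived in this appendix can be chained into a single transformation from Bower et al.'s (U-V, V-K) photometry to our (B-I, I-H) colours. The sketch below simply transcribes the relations above (the final I-K to I-H step uses the GISSEL96-based relation); the input colours are invented test values.

```python
def coma_colours(u_v, v_k):
    """Convert (U-V, V-K) to (B-I, I-H) via the empirical relations above."""
    v_r = 0.336 * v_k - 0.501      # Steel (1998) + Bower et al. (1992)
    v_i = 2.029 * v_r - 0.018      # Landolt (1992) standard stars
    b_v = 0.356 * u_v + 0.448      # via Jorgensen's B-r photometry
    i_k = v_k - v_i
    i_h = 0.889 * i_k + 0.034      # GISSEL96-based I-K -> I-H conversion
    b_i = b_v + v_i
    return b_i, i_h

# representative bright-elliptical colours (hypothetical input values)
print(coma_colours(u_v=1.50, v_k=3.30))
```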
# Formation of Liesegang patterns ## I Introduction Formation of precipitation patterns in the wake of moving reaction fronts (known as the Liesegang phenomenon) has been studied for more than a century. The motivation for these studies has been diverse, coming from the importance of related practical problems such as crystal growth in gels, as well as from the fascination with a complex pattern that has eluded a clean-cut explanation (e.g. agate rocks are believed to display Liesegang patterns). From a theoretical point of view, the main factor in the popularity was the belief that much can be learned about the details of precipitation processes (nucleation, growth, coagulation, etc.) by investigating the instabilities underlying this phenomenon. Currently, the Liesegang phenomenon is mainly studied as a nontrivial example of pattern formation in the wake of a moving front, and there are speculations about the possibility of creating complex mesoscopic structures using this rather inexpensive process. Liesegang patterns are easy to produce (Fig. 1 shows a particular experiment that we shall have in mind in the following discussion). The main ingredients are two chemicals $`A`$ and $`B`$ yielding a reaction product $`A+B\to C`$ that forms a nonsoluble precipitate $`C\to D`$ under appropriate conditions \[$`A=NaOH`$, $`B=MgCl_2`$ and $`D=Mg(OH)_2`$ in Fig. 1\]. The reagents are separated initially, with one of them ($`B`$, the inner electrolyte) dissolved in a gel and placed in a test tube. Then at time $`t=0`$ an aqueous solution of the other reagent ($`A`$, the outer electrolyte) is poured over the gel. The initial concentration $`a_0`$ of $`A`$ is chosen to be much larger than that of $`B`$ (typically $`a_0/b_0\sim 10^2`$), thus $`A`$ diffuses into the gel and a reaction front moves down the tube. Behind the front, a series of stationary precipitation zones (Liesegang bands) appear at positions $`x_n`$ ($`x_n`$ is measured from the interface between the gel and the aqueous solution; typically $`n=1,2,\mathrm{},10`$–20). A band appears in a rather short time interval, thus the time of appearance $`t_n`$ of the $`n`$-th band is also a well-defined, experimentally measurable quantity. Finally, the widths of the bands $`w_n`$ can also be determined in order to characterize the pattern in more detail. The experimentally measured quantities ($`x_n`$, $`t_n`$ and $`w_n`$) in regular Liesegang patterns satisfy the following time-, spacing-, and width laws. * Time law: $$x_n\propto \sqrt{t_n}.$$ (1) This law is satisfied in all the experiments where it was measured, and it appears to be a direct consequence of the diffusive dynamics of the reagents. * Spacing law: The positions of the bands form a geometric series to a good approximation, $$x_n\approx Q(1+p)^n$$ (2) where $`p>0`$ is the spacing coefficient while $`Q`$ is the amplitude of the spacing law (a numerical illustration of extracting $`p`$ is given after this list). The quantitative experimental observations concern mainly this law. More detailed works go past the confirmation of the existence of the geometric series and study the dependence of the spacing coefficient on $`a_0`$ and $`b_0`$. The results can be summarized in a relatively simple expression usually referred to as the Matalon-Packter law: $$p=F(b_0)+G(b_0)\frac{b_0}{a_0}$$ (3) where $`F`$ and $`G`$ are decreasing functions of their argument $`b_0`$. * Width law: $$w_n\propto x_n.$$ (4) This is the least established law, since there are problems with both the definition and the measurement (fluctuations) of the width. Recent, good-quality data does support, however, the validity of (4). 
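As a minimal illustration of how the spacing coefficient is extracted from a measured pattern, the sketch below generates idealized band positions obeying (2) and recovers $`p`$ from a linear fit to the logarithm of the positions; all numerical values are invented.

```python
import numpy as np

# Recover the spacing coefficient p from band positions x_n ~ Q(1+p)^n.
p_true, q_amp = 0.10, 0.5               # hypothetical spacing coefficient, amplitude [cm]
n = np.arange(1, 16)
x = q_amp * (1.0 + p_true) ** n         # idealized band positions

# log x_n is linear in n with slope log(1+p), so a linear fit recovers p
slope = np.polyfit(n, np.log(x), 1)[0]
print("estimated p =", np.expm1(slope))  # ~0.10
```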
It should be clear that (1-4) summarize only those properties of Liesegang patterns that are common to a large number of experimental observations. There is a wealth of additional data on various details, such as the secondary structures or the irregular band spacing. These features, however, appear to be peculiarities of given systems. It is hard to characterize them, and their reproducibility is often problematic as well. In view of this, it is not surprising that the theoretical explanations of Liesegang phenomena have been mainly concerned with the derivation of (1-4). The theoretical approaches to quasiperiodic precipitation have a long history, and the two main lines of thought are known as pre- and post-nucleation theories. They all share the assumption that the precipitate appears as the system goes through some nucleation or coagulation thresholds. The differences are in the details of treating the intermediate steps ($`\mathrm{\cdots }\to C\to \mathrm{\cdots }`$) in the chain of reactions $`A+B\to \mathrm{\cdots }\to C\to \mathrm{\cdots }\to D`$ producing the precipitate $`D`$. In general, all the theories can explain the emergence of distinct bands, but only the pre-nucleation theories can account for the time- and spacing laws of normal patterns. These theories are rather complicated, however, and have been developed only recently to a level at which the dependence of $`p`$ on the initial concentrations $`a_0`$ and $`b_0`$ can be investigated quantitatively, and a connection can be made to the Matalon-Packter law. Unfortunately, there are several problems with the theories mentioned above. First, they employ a large number of parameters, and some of these parameters are hard to grasp theoretically and impossible to control experimentally (an example is the lower threshold in the density of $`C`$-s below which aggregation $`C+D\to 2D`$ ceases). Second, some of the mechanisms invoked in the explanations are too detailed and tailored to a given system, in contrast to the generality of the resulting pattern in diverse systems. A real drawback of the too-detailed description is that quantitative deductions are difficult to make even with present computer power. A final problem we should mention is the absence of an unambiguous derivation of the width law in any of the theories. In order to avoid the above problems, we have recently developed a simple model of band formation based on the assumption that the main ingredients of a macroscopic description should be the presence of a moving reaction front and the phase separation that takes place behind the front. This theory contains a minimal number of parameters, it accounts for the spacing law, and it is simple enough that the existence of the Matalon-Packter law can be established numerically. The apparent success warrants a closer look at the model and, in this lecture, I will describe in detail how one arrives at such a model and what the underlying assumptions of the theory are. Then I will discuss the choice of input parameters that yield experimentally observable patterns and, finally, I will show that the derivation of the width law is straightforward in this theory. ## II The model Let us begin building the model by taking a look at Fig. 1. It shows alternating high- and low-density regions of the chemical $`Mg(OH)_2`$, and the system appears to be in a quasi-steady state (indeed, there are experiments suggesting that the pattern does not change over a 30-year period). 
We shall take this picture as evidence that phase separation underlies the formation of bands and, furthermore, that the phase separation takes place at a very low effective temperature (no coarsening is observed). The phase separation, of course, must be preceded by the production of $`C`$-s. This is the least understood part of the process, and it is particular to each system. What is clear is that, due to the condition $`a_0\gg b_0`$, a reaction front ($`A+B\to `$ something) moves down the tube diffusively (note that this is the point where the role of the gel is important, since it prevents convective motion). The result of the reaction may be rather complex (intermediate products, sol formation, etc.), and one of our main assumptions is that all these are irrelevant details on a macroscopic level. Accordingly, the production of $`C`$ will be assumed to be describable by the simplest reaction scheme, $`A+B\to C`$. Once $`A+B\to C`$ is assumed, the properties of the front and the production of $`C`$-s are known. Namely, the front moves diffusively with its position given by $`x_f=\sqrt{2D_ft}`$, the production of $`C`$-s is restricted to a slowly widening, narrow interval \[$`w_f(t)=w_0t^{1/6}`$\] around $`x_f`$, and the rate of production $`S(x,t)`$ of $`C`$-s can be approximated by a gaussian (the actual form is not exactly a gaussian; see the detailed results for a non-moving front): $$S(x,t)=\frac{S_0}{t^{2/3}}\mathrm{exp}\left[-\frac{[x-x_f(t)]^2}{2w_f^2(t)}\right].$$ (5) The parameter $`D_f`$ can be expressed through $`a_0`$, $`b_0`$, and the diffusion coefficients of the reagents ($`D_a`$, $`D_b`$), while $`S_0`$ and $`w_0`$ depend also on the rate constant, $`k`$, of the reaction $`A+B\to C`$. An important property of the front is that it leaves behind a constant density $`c_0`$ of $`C`$-s, and $`c_0`$ depends only on $`a_0`$, $`b_0`$, $`D_a`$ and $`D_b`$. This is important because the relevant parameters in the phase separation are $`D_f`$ and $`c_0`$ (where and how much of the $`C`$-s are produced), and thus the least-known parameter ($`k`$) does not play a significant role in the pattern formation. Having a description of the production of $`C`$-s, we must now turn to the dynamics of their phase separation. Since the emerging pattern is macroscopic, we shall assume that, on a coarse-grained level, the phase separation can be described by the simplest 'hydrodynamical' equation that respects the conservation of $`C`$-s. This is the Cahn-Hilliard equation or, in another context, the equation of model B in critical dynamics. This equation, however, requires knowledge of the free-energy density, $`f`$, of the system. For a homogeneous system, $`f`$ must have two minima corresponding to the low- ($`c_l`$) and high-density ($`c_h`$) states being in equilibrium (Fig. 1). The simplest form of $`f`$ having this property and containing a minimal number of parameters is the Landau-Ginzburg free energy (Fig. 2) $$f=-\frac{1}{2}\epsilon m^2+\frac{1}{4}\gamma m^4+\frac{1}{2}\sigma (\nabla m)^2,$$ (6) where $`m=c-(c_l+c_h)/2`$ is the density, $`c`$, of the $`C`$-s measured from the average of the two steady-state values (we follow a notation in which the 'magnetic language' has its origin in a connection to Ising lattice gases). 
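As a quick consistency check on (6), one can verify numerically that the homogeneous part of the free-energy density has its minima at $`\pm \sqrt{\epsilon /\gamma }`$; the parameter values in this sketch are arbitrary.

```python
import numpy as np

# Minima of the homogeneous free-energy density f(m) = -eps m^2/2 + gamma m^4/4.
eps, gamma = 2.0, 0.5                          # arbitrary positive test values
m = np.linspace(-3.0, 3.0, 6001)
f = -0.5 * eps * m**2 + 0.25 * gamma * m**4
print(m[np.argmin(f)], np.sqrt(eps / gamma))   # grid minimum at -2.0; m_e = 2.0
```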
The parameters $`\epsilon `$, $`\gamma `$, and $`\sigma `$ are system-dependent, with $`\epsilon >0`$ ensuring that the system is in the phase-separating regime, $`\sigma >0`$ providing stability against short-wavelength fluctuations, and the requirement $`\sqrt{\epsilon /\gamma }=(c_h-c_l)/2`$ fixing the minima of $`f`$ at $`\pm m_e`$, corresponding to $`c_l`$ and $`c_h`$. Note that the $`m\to -m`$ symmetry is usually not present in a real system and $`f`$ could contain e.g. an $`m^3`$ term. The presence or absence of the $`m\to -m`$ symmetry, however, is not relevant for the discussion that follows. Using (6) and including the source term, the Cahn-Hilliard equation takes the form $$\partial _tm=-\lambda \mathrm{\Delta }\left(\epsilon m-\gamma m^3+\sigma \mathrm{\Delta }m\right)+S,$$ (7) where $`\lambda `$ is a kinetic coefficient. The above equation should contain two noise terms. One of them would be the thermal noise, while the other would originate in the chemical reaction that creates the source term. Both of these noise terms are omitted here. The reason for neglecting the thermal noise is the low effective temperature of the phase separation, as discussed in connection with Fig. 1. The noise in $`S`$, on the other hand, is dropped since $`A+B\to C`$-type reaction fronts have been shown to be mean-field-like above dimension two. The absence of noise means that the phase separation can occur only through spinodal decomposition. Thus the assumption behind omitting the noises is that the characteristic time of nucleation is much larger than the time needed by the front to increase the density of $`C`$-s beyond the spinodal value ($`m_s`$ in Fig. 2), where the system is unstable against linear perturbations. Since there are examples where the bands appear to be formed by nucleation and growth, the spinodal-decomposition scenario is clearly not universally applicable, and one should explore the effects of including noise (this becomes, however, an order-of-magnitude harder problem). Eq. (7), together with the form of the source (5), defines our model, which produces regular Liesegang patterns (Fig. 3) satisfying the spacing law (2); furthermore, the spacing coefficient is in agreement with the Matalon-Packter law (3). Fig. 3 shows a rather general picture that is instructive in understanding the pattern formation. The last band acts as a sink for neighboring particles at densities above $`-m_e`$ ($`c_l`$). Thus the $`C`$-s produced in the front end up increasing the width of the last band. This continues until the front moves far enough that the density in it reaches the spinodal value. Then the spinodal instability sets in and a new band appears. Remarkably, the above picture is rather similar to the phenomenological 'nucleation and growth' scenario, with the density at the spinodal point playing the role of the threshold density for nucleation. It is thus not entirely surprising that both of these theories do equally well in producing the spacing law and the Matalon-Packter law. One should note that the actual form of $`f`$ does not play an important role in the picture developed above. The crucial feature is the existence of a spinodal density above which phase separation occurs. This is the meaning of our previous remark about the irrelevance of the $`m^3`$ term in the free energy (of course, explanations of details in experiments may require the inclusion of such terms). ## III Choice of parameters Fig. 3 shows the results of the numerical solution of eq. (7) with the same parameter values as in Fig. 3 of our earlier work, 
but stopped at an earlier time so that the visual similarity to the experiments (the number of bands in Fig. 1) would be greater. In this section, we shall examine whether the parameters used for obtaining this resemblance have any relevance to real Liesegang phenomena. The experimental patterns have a total length of about $`\mathrm{\ell }_{exp}\approx 0.2`$ m, and the time needed to produce such a pattern is about 1-2 weeks (we shall take $`\tau _{exp}\approx 10^6`$ s). Since our model has a length scale $`\mathrm{\ell }_{th}=\sqrt{\sigma /\epsilon }`$ and a time scale $`\tau _{th}=\sigma /(\lambda \epsilon ^2)`$, these can be chosen \[$`\sqrt{\sigma /\epsilon }=2\times 10^{-4}`$ m and $`\sigma /(\lambda \epsilon ^2)=40`$ s\] so that $`\mathrm{\ell }_{exp}\gg \mathrm{\ell }_{th}`$ and $`\tau _{exp}\gg \tau _{th}`$. Once we have chosen $`\mathrm{\ell }_{th}`$ and $`\tau _{th}`$, we can start to calculate other quantities and see whether they have reasonable values. It is clear from Fig. 3 that the widths of the bands are in agreement with the experiments: they are of the order of a few mm at the beginning and approach about 1 cm at the end. The width of the front is also of the order of 1 cm after $`10^6`$ s. Unfortunately, there is no information on the reaction zone in this system. In a study of a different system it was found that $`w_f`$($`t`$ = 2 hours) $`\approx `$ 2 mm. Extrapolating this result to $`t=10^6`$ s one finds $`w_f\approx 1`$ cm \[note that the exponent in the growth law $`w_f(t)\propto t^{1/6}`$ is small\], in agreement with the observed value. Next we calculate the diffusion coefficient of the front, $`D_f=21.72\mathrm{\ell }_{th}^2/\tau _{th}\approx 2\times 10^{-8}`$ m<sup>2</sup>/s. This value appears to be an order of magnitude larger than the usual ionic diffusion coefficients ($`D\approx 10^{-9}`$ m<sup>2</sup>/s). One should remember, however, that Fig. 3 is the result for initial conditions $`a_0/b_0=10^2`$, and for this ratio of $`a_0/b_0`$ the diffusion coefficient of the front $`D_f`$ is about 10 times larger than $`D_a`$ ($`D_f/D_a\approx 10`$). Thus $`D_f`$ also comes out to be of the right order of magnitude. We do not have information on the amplitude ($`S_0`$) of the source but, once the concentrations ($`a_0`$, $`b_0`$) are given and $`D_f`$ and $`w_f`$ are known, $`S_0`$ is fixed by the conservation law for the $`C`$-s. Thus the correct order of magnitude for $`D_f`$ and $`w_f`$ should ensure that $`S_0`$ is also of the right order of magnitude. Finally, we shall calculate the time it takes for a band to form. It is well known that the bands appear rather quickly. From the visual notice of the beginning of band formation it takes about $`\tau _{ini}=30`$–60 minutes for the band to be clearly seen, and it then takes much longer for its width to grow to its final value. In order to calculate $`\tau _{ini}`$, let us consider the formation of the last band in Fig. 3 (see Fig. 4). The lower limit of the density that can be visually noticed is, of course, not well defined. We shall assume that this density corresponds to $`m=0`$, i.e. the halfway density between $`c_l`$ and $`c_h`$. This means that we see the beginnings of the band at $`\delta t=410`$ min, and the density reaches well above 90% of its final value by $`\delta t=470`$ min. Consequently, we obtain again an estimate ($`\tau _{ini}\approx 60`$ min) for an observed quantity that agrees with the experiments. 
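The type of calculation referred to in this section can be sketched in a few dozen lines. The following one-dimensional, explicit finite-difference integration of eqs. (5) and (7) works in dimensionless units ($`\epsilon =\gamma =\sigma =\lambda =1`$) with invented source parameters; it illustrates the numerical scheme only, and is not the production code behind Fig. 3, so whether well-separated bands form depends on the parameter choices.

```python
import numpy as np

# 1-D Cahn-Hilliard equation (7) with the moving gaussian source (5),
# in dimensionless units eps = gamma = sigma = lambda = 1.
N, dx, dt, steps = 400, 1.0, 0.05, 100_000
x = np.arange(N) * dx
m = np.full(N, -1.0)                 # start in the low-density phase, m = -m_e
D_f, S0, w0 = 1.0, 0.3, 1.0          # front diffusivity and source parameters (assumed)

def lap(f):
    """Discrete Laplacian; the periodic wrap is harmless while the front is interior."""
    return (np.roll(f, 1) + np.roll(f, -1) - 2.0 * f) / dx**2

for step in range(1, steps + 1):
    t = step * dt
    xf, wf = np.sqrt(2.0 * D_f * t), w0 * t ** (1.0 / 6.0)
    source = (S0 / t ** (2.0 / 3.0)) * np.exp(-((x - xf) ** 2) / (2.0 * wf**2))
    mu = -m + m**3 - lap(m)          # chemical potential from the free energy (6)
    m += dt * (lap(mu) + source)     # explicit Euler step of eq. (7)

# zero crossings of m bracket the high-density bands left behind the front
print(np.where(np.diff(np.sign(m)) != 0)[0] * dx)
```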
## IV Width law The width law is problematic from an experimental point of view, since the fluctuations in the widths appear to be large. Part of the difficulties are undoubtedly due to the fact that the boundaries of the bands are not sharply defined, and high-resolution digitizing methods are needed for a precise analysis. The most thorough experiment to date has been carried out recently, with the result $`w_n\propto x_n^\alpha `$ where $`\alpha \approx 0.9`$–1.0. As to the theories, they also have their share of difficulties since, on a microscopic level, the growth of the width involves precipitation processes in the presence of large concentration gradients, while a macroscopic treatment must elaborate on the dynamics of the interfaces between the two phases. Accordingly, there are only a few works to report on. Dee used reaction-diffusion equations supplemented by terms coming from nucleation and growth processes and obtained $`w_n\propto x_n`$ from a rather limited (6 bands) numerical result. Chopard et al. employed cellular-automata simulations of a phenomenological version of the microscopic processes and found $`w_n\propto x_n^\alpha `$ with $`\alpha \approx 0.5`$–0.6. Finally, Droz et al. combined scaling considerations with the conservation law for the number of $`C`$ particles to obtain $`\alpha `$ in terms of the scaling properties of the density of precipitates in the bands. Assuming constant density they found $`\alpha =1`$. Our derivation below parallels this last work in that the same conservation law is one of its main ingredients. In our theory, the derivation of the width law is straightforward. One combines the facts that (i) the reaction front leaves behind a constant density ($`c_0`$) of $`C`$-s, (ii) the $`C`$-s segregate into low- ($`c_l`$) and high-density ($`c_h`$) bands, and (iii) the number of $`C`$-s is conserved in the segregation process, and writes down the equation expressing the conservation of $`C`$-s $$(x_{n+1}-x_n)c_0=(x_{n+1}-x_n-w_n)c_l+w_nc_h.$$ (8) Using now the spacing law (2), which has been established for this model, one finds $$w_n=\frac{p(c_0-c_l)}{c_h-c_l}x_n=\zeta x_n.$$ (9) We have thus derived the width law and obtained the coefficient of proportionality, $`\zeta `$, as well. The importance of $`\zeta `$ lies in the fact that measuring it provides a way of accessing $`c_0`$, which is not easily measured otherwise. ## V Final remarks In summary, we have seen that the spinodal-decomposition scenario for the formation of Liesegang patterns performs well whenever quantitative comparison with experiments is possible. It remains to be seen whether the applicability of this model extends beyond regular patterns. One should certainly try to use this theory to explain the exotic patterns (e.g. inverse patterns, helices) that are experimentally reproducible and lack even a qualitative understanding. ## Acknowledgments I thank M. Droz, M. Zrínyi, T. Antal, P. Hantz, J. Magnin, and T. Unger for useful discussions. This work has been supported by the Hungarian Academy of Sciences (Grant No. OTKA T 029792).
# HARD X-RAY EMISSION FROM ELLIPTICAL GALAXIES AND ITS CONTRIBUTION TO THE X-RAY BACKGROUND ## 1. Introduction Although it is now clear that the Cosmic X-ray Background (XRB) results from the integrated X-ray emission of many discrete sources, and that a large fraction of the soft (0.5-2 keV) XRB is produced by Active Galactic Nuclei (AGN; e.g., Hasinger et al. 1998; Schmidt et al. 1998), the nature of the sources producing the energetically dominant, hard (2-60 keV) XRB remains largely unknown. Most current models for the XRB attempt to explain its origin within the context of AGN unification schemes and suggest that the XRB arises from the integrated emission of AGN with a range of intrinsic absorbing column densities (e.g., Setti & Woltjer 1989; Comastri et al. 1995 and references therein). A population of sources with hard X-ray spectra in the 2-10 keV band has been discovered by ASCA and BeppoSAX (Boyle et al. 1998; Ueda et al. 1998; Giommi et al. 1998), with a fraction of these sources showing evidence for heavy obscuration (Fiore et al. 1999). However, it remains unclear whether obscured AGN can fully account for the hard XRB. The most recent synthesis models, which include the latest constraints on the luminosity function and evolution of AGN (Miyaji, Hasinger & Schmidt 1999), cannot easily reproduce the hard counts observed in the ASCA (2-10 keV) and BeppoSAX (5-10 keV) bands and require a number ratio of type-2/type-1 AGN much higher than the locally observed value (Gilli, Risaliti & Salvati 1999). The discrepancies within the context of AGN synthesis models for the XRB suggest the need for an additional population of hard-spectrum sources. In this Letter, we explore the implications of the discovery of hard, power-law X-ray components in the ASCA spectra of six nearby, giant elliptical galaxies (Allen, Di Matteo & Fabian 1999; hereafter ADF99). If most early-type galaxies contain a hard, power-law source with a luminosity of $`10^{40}`$–$`10^{42}`$ erg s<sup>-1</sup>, as may be extrapolated from the detections of such components in both active (e.g. M87) and quiescent galaxies, then the integrated emission from these sources, distributed over a large redshift interval, can make a significant contribution to the hard XRB. Dynamical studies of elliptical galaxies indicate the presence of supermassive black holes in their nuclei, with masses in the range $`10^8`$–$`10^{10}`$ M<sub>⊙</sub> (e.g., Magorrian et al. 1998). As discussed by ADF99 and Di Matteo et al. (1999a; hereafter DM99), the hard X-ray components detected in nearby giant ellipticals are likely to be due, at least in part, to accretion onto their central black holes. In the cores of elliptical galaxies, accretion from the hot interstellar medium may proceed directly into a hot, low-radiative-efficiency regime (e.g., Fabian & Rees 1995). DM99 show that the hard X-ray components observed in these systems are consistent with models of thermal bremsstrahlung emission from hot, radiatively inefficient accretion flows, with temperatures of 50-100 keV. Given that the XRB spectrum in the 3-60 keV band is also well described by a bremsstrahlung spectrum with $`kT\approx 40`$ keV, the hard X-ray sources in elliptical galaxies represent a unique class of object with emission spectra that closely match that of the XRB (see also Di Matteo & Fabian 1997). ## 2. HARD X-RAY EMISSION FROM ELLIPTICAL GALAXIES ### 2.1. 
The observed power-law components ADF99 discuss ASCA observations of six nearby, giant elliptical galaxies: M87, NGC 4696 and NGC 1399 (the dominant galaxies of the Virgo, Centaurus and Fornax clusters) and three other giant ellipticals in the Virgo Cluster (NGC 4472, NGC 4636 and NGC 4649). All of these galaxies (with the exception of NGC 4696, which had not previously been studied in as much detail as the other systems) exhibit clear stellar- and gas-dynamical evidence for central, supermassive black holes in their nuclei, with masses in the range $`10^8`$–$`10^{10}`$ M<sub>⊙</sub> (e.g., Magorrian et al. 1998). The ASCA spectra for these systems reveal the presence of hard, power-law emission components, with energy indices $`-0.5<\alpha <0.5`$ (weighted-mean value $`\alpha =0.22`$; a discussion detailing the reasons why the sources are unlikely to be heavily obscured AGN is given by ADF99) and intrinsic 1-10 keV luminosities of $`2\times 10^{40}`$–$`2\times 10^{42}`$ erg s<sup>-1</sup>. These spectral slopes are harder, and the luminosities lower, than typical values for Seyfert galaxies, identifying these objects as, potentially, a new class of X-ray source. The presence of hard components in all six galaxies studied also suggests that such sources may be ubiquitous in early-type galaxies. ### 2.2. An empirical comparison We first compare the spectra of the power-law sources observed in nearby ellipticals with the 1-10 keV XRB. ASCA and BeppoSAX observations of the cosmic XRB in the 1-7 keV band can be well described by a simple power-law model with an energy index $`\alpha =0.38`$–0.47 and a normalization $`I=8`$–11 keV s<sup>-1</sup> cm<sup>-2</sup> sr<sup>-1</sup> keV<sup>-1</sup> at 1 keV (Gendreau et al. 1995; Chen, Fabian & Gendreau 1997; Miyaji et al. 1998; Parmar et al. 1999). We have simulated the spectrum obtained by adding a 70-80 per cent contribution to the 1-10 keV flux from power-law sources with an energy index $`\alpha =0.22`$ (modeling the emission from the elliptical galaxies) to the established <30 per cent contribution, in the same band, from unabsorbed Seyfert-1 galaxies and QSOs (as determined from ROSAT and ASCA; e.g. Schmidt et al. 1998; Boyle et al. 1998). These latter sources have been characterized by a power-law spectrum with an intrinsic energy index $`\alpha =0.9`$, with a reflection component accounting for emission reprocessed by cold material close to the central X-ray sources (Magdziarz & Zdziarski 1995). Note that the use of a simpler power-law parameterization for the type-1 AGN, with an apparent energy index $`\alpha =0.7`$ (Turner & Pounds 1989), leads to similar results. The simulated spectrum, as would be observed with the ASCA Solid-state Imaging Spectrometers (SIS) in a 250 ks exposure (matching the total exposure time analyzed by Gendreau et al. 1995), is shown in Figure 1. The flux in the simulated spectrum matches the cosmic XRB flux observed by Gendreau et al. (1995). We do not account for the internal background in the SIS detectors, which provides an additional contribution to the total count rate detected by those authors. Following standard X-ray analysis methods, we have fit the simulated spectrum in the 1-10 keV range with a simple power-law model. 
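Before quoting the results of the full response-folded fits, it is worth noting that a stripped-down version of this exercise, with no instrument response or reflection component, already shows the effect: a dominant flat-spectrum population plus a softer type-1 AGN component conspires to give a single apparent slope near 0.4. In the sketch below the component fractions and energy grid are illustrative, and the two populations are pure power laws with the energy indices quoted above.

```python
import numpy as np

# Toy composite: 80% flat (alpha = 0.22) + 20% type-1 AGN (alpha = 0.9) of the
# 2-10 keV flux, then a single power-law fit over 1-10 keV (no response folding).
alpha_hard, alpha_agn, frac_agn = 0.22, 0.9, 0.20
e = np.logspace(0.0, 1.0, 200)        # energies, 1-10 keV

def band_flux(alpha, lo=2.0, hi=10.0):
    """Integrated flux of a unit-normalization power law E^-alpha over [lo, hi] keV."""
    return (hi ** (1.0 - alpha) - lo ** (1.0 - alpha)) / (1.0 - alpha)

# normalizations chosen so the components carry the desired 2-10 keV flux fractions
n_hard = (1.0 - frac_agn) / band_flux(alpha_hard)
n_agn = frac_agn / band_flux(alpha_agn)
total = n_hard * e**-alpha_hard + n_agn * e**-alpha_agn

alpha_fit = -np.polyfit(np.log(e), np.log(total), 1)[0]
print(f"apparent energy index ~ {alpha_fit:.2f}")   # close to the observed ~0.4
```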
We find that the simple power-law model provides a good description of the simulated data (reduced $`\chi ^2\simeq 0.9`$ for 200 degrees of freedom, after regrouping to a minimum of 20 counts per channel) and returns a best-fitting slope of $`\alpha =0.40\pm 0.02`$ or $`\alpha =0.48\pm 0.02`$ (90 per cent errors determined from Monte Carlo simulations) for simulations with 20 and 30 per cent contributions to the 2–10 keV flux from type-1 AGN, respectively. These results are in excellent agreement with those for the real XRB. This simple exercise illustrates that, independently of the model used to explain the hard, power-law emission components in the elliptical galaxies, their observed 2–10 keV spectra match that of the XRB in this band, once the expected contributions from QSOs and AGN are also accounted for. ## 3. BREMSSTRAHLUNG EMISSION FROM ELLIPTICAL GALAXY NUCLEI In previous papers (DM99; Di Matteo et al. 1999b) we have shown that the broad-band spectral energy distributions for the nuclear regions of the six elliptical galaxies studied by ADF99 can be explained by low radiative efficiency accretion models (i.e. advection dominated accretion flows or ADAFs; see Narayan, Mahadevan & Quataert 1998 and references therein) in which accretion occurs from the hot, gaseous halos of the galaxies at rates comparable to their Bondi accretion rates, and in which a significant fraction of the mass, angular momentum and energy in the accretion flows is removed by winds (Blandford & Begelman 1999). Within the context of these models, the systematically hard, observed X-ray spectra can be accounted for by the energetically-dominant bremsstrahlung emission produced by such flows, with electron temperatures of 50–100 keV. In this Section, we explore the implications for the XRB of this interpretation for the origin of the hard X-ray emission in elliptical galaxies. ### 3.1. The XRB model We consider the integrated emission from unresolved sources, with hard bremsstrahlung spectra and luminosities consistent with those observed in the six nearby ellipticals studied by ADF99. We first constructed a standard coadded source spectrum by combining the results from fits to the observed spectral energy distributions for the galaxies with two-temperature ADAF models (including the effects of winds), as discussed by DM99. The contributions to the co-added spectrum from the three central cluster galaxies were down-weighted by a factor $`10^2`$ to reflect their lower space density. The resulting co-added source spectrum has a bremsstrahlung luminosity of $`8\times 10^{40}\,\mathrm{erg\,s^{-1}}`$ (see also Figure 2 in DM99). This standard source spectrum was then folded with the appropriate cosmological model to determine the possible integrated contribution from such sources to the 2–60 keV XRB. (This model is essentially the same as that described by Di Matteo et al. (1999c) and Di Matteo & Fabian (1997), but now including the constraints on the source spectra from DM99; see Section 5 and Fig. 2 of that paper.) We assume that the sources are distributed over a redshift range $`z=z_0`$ to $`z=z_{\mathrm{max}}`$, and write the comoving spectral emissivity from such objects as the product $`j[E,z]=n(z)L_\mathrm{E}(z)`$, where $`n(z)`$ is the comoving number density of X-ray sources, and $`L_\mathrm{E}(z)`$ is the specific luminosity of the individual sources.
We adopt a simple prescription for the redshift evolution of the comoving emissivity, $`j(E,z)=j_0(E)(1+z)^k`$, where $`j_0(E)`$ is the model spectrum bremsstrahlung emissivity and $`k`$ is the evolution parameter. The total flux from such objects is then $$I(E)=\frac{c}{4\pi H_0}\int _{z_0}^{z_{\mathrm{max}}}\frac{(1+z)^{k-2}}{(1+2q_0z)^{1/2}}\,j_0[E(1+z)]\,dz,$$ where $`q_0`$ is the deceleration parameter, $`H_0`$ is the Hubble constant (we use $`q_0=0.5`$ and $`H_0=50\,\mathrm{km\,s^{-1}\,Mpc^{-1}}`$) and $`I(E)`$ is the computed XRB intensity in units of $`\mathrm{keV\,s^{-1}\,sr^{-1}\,cm^{-2}\,keV^{-1}}`$ at an energy of 1 keV. For our given source spectrum and fixed values for $`z_0`$ (we assume $`z_0=0`$), $`z_{\mathrm{max}}`$ and $`k`$, the only free parameter in Equation 1 is the local source number density, $`n(z_0)`$, which we determine by normalizing $`I(E)`$ to the observed XRB. The XRB spectra predicted by a series of ‘best fit’ models are shown in Figure 2. The dashed line shows the result due only to bremsstrahlung sources. The solid line shows the improved result when we include a 30 per cent contribution to the 2 keV flux from unabsorbed AGN (corresponding to a $`\sim 20`$ per cent contribution to the 2–10 keV flux), characterized by a canonical power-law spectrum with an energy index $`\alpha =0.7`$. (Both curves assume $`z_{\mathrm{max}}=1.0`$ and $`k=3`$.) In this case, the normalization of the XRB requires a local comoving number density of bremsstrahlung sources, $`n(z_0=0)\sim 2\times 10^{-3}\,\mathrm{Mpc}^{-3}`$. The dotted line in Figure 2 shows the result for $`z_{\mathrm{max}}=1.8`$ (other parameters the same), in which case $`n(z_0=0)=8\times 10^{-4}\,\mathrm{Mpc}^{-3}`$ (the most distant sources contribute most to the total model flux). These values for the comoving number density of emitting sources are in good agreement with the observed number density of bright, early-type galaxies in the nearby Universe ($`n\sim 10^{-3}\,\mathrm{Mpc}^{-3}`$; e.g., Marinoni et al. 1999; Heyl et al. 1997). ## 4. DISCUSSION We have shown that our model provides a good match to the observed XRB, with a required number density of emitting sources in good agreement with the observed number density of early-type galaxies in the nearby Universe (for $`z_{\mathrm{max}}>1`$ and $`k\sim 3`$). The requirement for most/all early-type galaxies to have hard X-ray spectra and luminosities consistent with the objects studied by ADF99 may be relaxed once more realistic models for the XRB are considered, including a fractional contribution from heavily absorbed Seyfert-2 nuclei (e.g. Gilli et al. 1999). This will then allow for a broader range of luminosities and/or a lower number density for the bremsstrahlung sources. (In this case the relative contribution from these sources to the 2–10 keV XRB intensity will also be rescaled to $`<`$ 70 per cent.) The required comoving number density of bremsstrahlung sources is also decreased as $`z_{\mathrm{max}}`$ is increased towards $`z_{\mathrm{max}}\sim 2`$. It is important to note that our sources may also contribute to the hard number counts detected at faint X-ray fluxes by ASCA and BeppoSAX (e.g. Ueda et al. 1998; Fiore et al. 1999). Detailed synthesis models for the XRB, which simulate the integrated emission from AGN with a range of absorbing column densities, have revealed the need for additional hard spectrum sources to explain the observed number counts (e.g., Gilli et al. 1999).
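As an illustrative aside, the redshift integral of Equation 1 above is straightforward to evaluate numerically. The minimal sketch below uses a toy 40 keV bremsstrahlung shape standing in for the co-added ADAF spectrum and an arbitrary normalization; these choices are our own assumptions, not the spectrum of DM99.

```python
import numpy as np
from scipy.integrate import quad

C_OVER_4PIH0 = (3.0e5 / 50.0) / (4.0 * np.pi)   # c/(4 pi H0) in Mpc/sr, H0 = 50

def j0(E, kT=40.0):
    """Toy z = 0 bremsstrahlung emissivity shape (arbitrary units)."""
    return np.exp(-E / kT) / E ** 0.4

def xrb_intensity(E, k=3.0, z_max=1.0, q0=0.5):
    """Equation 1: background intensity at observed energy E (keV),
    up to the free n(z0) normalization fixed by the observed XRB."""
    integrand = lambda z: (1.0 + z) ** (k - 2.0) / np.sqrt(1.0 + 2.0 * q0 * z) \
                          * j0(E * (1.0 + z))
    val, _ = quad(integrand, 0.0, z_max)
    return C_OVER_4PIH0 * val

# Redshifting the ~40 keV rollover: a larger z_max softens the summed spectrum
for zmax in (1.0, 1.8):
    print(zmax, xrb_intensity(30.0, z_max=zmax) / xrb_intensity(5.0, z_max=zmax))
```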
The 2–10 and 5–10 keV fluxes associated with the power-law sources detected in the nearby ellipticals studied by ADF99 range from $`0.6`$–$`8.7\times 10^{-12}`$ and $`0.3`$–$`5.0\times 10^{-12}\,\mathrm{erg\,cm^{-2}\,s^{-1}}`$, respectively. Within the context of our model, a significant fraction of the hard number counts detected at fluxes $`F_{\mathrm{X},2\text{–}10}<`$ a few $`\times 10^{-14}\,\mathrm{erg\,cm^{-2}\,s^{-1}}`$ may then arise from sources at low redshift. We note that models postulating the existence of an as yet unobserved population of heavily obscured black holes at redshift $`z\sim 2`$ also predict X-ray fluxes too faint to account for the observed hard counts (Fabian 1999). The value of $`z_{\mathrm{max}}`$ in our model is constrained by the effective temperature of the bremsstrahlung emission in the coadded source spectra: the coadded spectrum must be redshifted to fit the 30 keV rollover observed in the XRB. We note that the high-energy ($`>50\,\mathrm{keV}`$) spectrum predicted by our model is not firmly constrained. The presence of winds/outflows associated with low radiative efficiency accretion flows around supermassive black holes inevitably causes the X-ray spectra of such flows to be dominated by bremsstrahlung emission. (Inverse Compton emission is heavily suppressed in the presence of outflows for any range of $`\dot{m}`$; this, with $`\dot{m}\sim \dot{m}_{\mathrm{crit}}`$ for the component sources, allows for higher individual bremsstrahlung source luminosities than considered in the earlier work of Di Matteo & Fabian.) However, the wind characterization currently employed in the accretion models is very basic and the spectra produced cannot be used to perform reliable statistical fits to the data. In particular, the presence of an outflow will significantly affect the density and temperature profiles in the central regions of an ADAF (cf. DM99; Quataert & Narayan 1999), where the higher energy ($`h\nu >kT`$) emission originates (although the emission in the 2–10 keV band, where the models provide a good match to the power-law components observed in nearby ellipticals, is virtually unaffected). Due to these uncertainties the value of $`z_{\mathrm{max}}`$ cannot be tightly constrained. The black holes at the centers of nearby elliptical galaxies have masses consistent with being the remnants of an earlier quasar phase. If, as our work may suggest, these systems accrete via low-radiative efficiency accretion flows, then a fraction of the hard XRB could be produced after the main quasar phase in galaxies and be associated with a change in the dominant accretion mechanism in their nuclei (see also Di Matteo & Fabian 1997). It is interesting, in this context, that studies with the Hubble Space Telescope have shown that the underlying hosts of essentially all classes of QSOs appear to be massive, elliptical galaxies (McLure et al. 1999). We stress that our analysis is not intended to explore the full range of parameter space available to either the accretion flow or XRB models, or to provide detailed, quantitative results. At present, the theoretical and observational uncertainties involved in such calculations are too large to merit such work. The observational constraints will, however, be significantly improved in the near future with data from the Chandra Observatory, XMM and ASTRO-E. ## 5. CONCLUSIONS We have examined the potential importance of the hard, power-law emission components detected in the X-ray spectra of nearby ellipticals for the origin of the hard XRB.
In previous papers (ADF99 and DM99) we have shown that these components are likely to be associated with accretion onto the central, supermassive black holes in the galaxies. The emission spectra from these sources can be well-explained by bremsstrahlung models, with typical temperatures of 50–100 keV, resulting from low radiative-efficiency accretion flows with strong winds. In this paper we have shown that the application of such emission models to a plausible redshift distribution of sources, with individual source luminosities in agreement with the ASCA results for nearby ellipticals, can account for a significant fraction of the XRB in the 1–60 keV range, with an implied number density of sources in good agreement with the observed local number density of early-type galaxies. We have argued that the emission from these sources may also contribute to the hard number counts detected at faint X-ray fluxes with ASCA and BeppoSAX. We thank K. Gendreau for supplying the ASCA spectrum of the XRB and D. Psaltis for helpful discussions. T. D. M. acknowledges support provided by NASA through Chandra Postdoctoral Fellowship grant number PF8-10005 awarded by the Chandra Science Center, which is operated by the Smithsonian Astrophysical Observatory for NASA under contract NAS8-39073.
# Field Theoretical Representation of the Hohenberg-Kohn Free Energy for Fluids ## Abstract To go beyond the Gaussian approximation to the Hohenberg-Kohn free energy, which plays the key role in the density functional theory (DFT), the density functional integral representation would be relevant, because a field theoretical approach to perturbative calculations then becomes available. The present letter therefore first derives the associated Hamiltonian of the density functional, explicitly including the logarithmic entropy term, from the grand partition function expressed by configurational integrals. Moreover, two things are done to reveal the efficiency of the obtained form: we demonstrate that this representation facilitates the field theoretical treatment of the perturbative calculation, and we further compare our perturbative formulation with that of the DFT. The Hohenberg-Kohn (HK) free energy—in terms of the density functional theory (DFT)—is the natural extension of the Helmholtz one. At the beginning, we see what is meant by this via the primary definition of the HK free energy. Let us start with a grand canonical system which has a volume $`V`$ and is surrounded by a reservoir of chemical potential $`\mu `$, in units of the thermal energy $`k_BT`$. We consider one-component classical particles, to keep the notation as simple as possible, and define the grand potential $`\mathrm{\Omega }`$ in the form: $`e^{-\mathrm{\Omega }}={\displaystyle \sum _{N=0}^{\mathrm{}}}{\displaystyle \frac{1}{N!}}{\displaystyle \int \prod _{i=1}^{N}d𝒓_i\,\mathrm{exp}\left[-\left\{\sum _{i,j}U(𝒓_i,𝒓_j)+\sum _iJ(𝒓_i)-\mu N\right\}\right]},`$ (1) with $`𝒓`$ being the position vectors of particles, $`J`$ the external potential, and $`U(𝒓_i,𝒓_j)`$ the two-body interaction potential for particles $`i`$ and $`j`$. In the thermodynamic limit, the Helmholtz free energy $`F`$ of a canonical system with $`\overline{N}`$ particles can be obtained from $`\mathrm{\Omega }`$ by the Legendre transform $`F=\mathrm{\Omega }+\mu \overline{N}`$, where $`\overline{N}`$ is the averaged total number given by the relation $`\overline{N}=-\partial \mathrm{\Omega }/\partial \mu `$ in the grand canonical formalism. Similarly, the HK free energy $`F_{HK}`$ is defined by replacing $`\overline{N}`$ in this mapping with the averaged density field $`\phi (𝒓)`$ given in terms of the shifted external field $`\stackrel{~}{J}(𝒓)=J(𝒓)-\mu `$ as $$\phi =\langle \widehat{\rho }\rangle =\frac{\delta \mathrm{\Omega }}{\delta \stackrel{~}{J}},$$ (2) where $`\widehat{\rho }\equiv \sum _{i=1}^N\delta (𝒓-𝒓_i)`$ is the density operator and $`\langle \cdots \rangle `$ represents the ensemble average with the weight in Eq. (1): the Legendre transform of $`\mathrm{\Omega }`$ with use of $`\phi `$ and $`\stackrel{~}{J}`$ yields $$F_{HK}(\phi )=\mathrm{\Omega }(\stackrel{~}{J})-\phi \stackrel{~}{J},$$ (3) where $`\phi \stackrel{~}{J}\equiv {\displaystyle \int 𝑑𝒓\,\phi \stackrel{~}{J}}`$. Since the relation (3) is reciprocal, $`F_{HK}`$ exactly satisfies the equation $`\delta F_{HK}/\delta \phi =-\stackrel{~}{J}`$. In spite of the above generality and tractability of the HK free energy $`F_{HK}`$, the introduction of $`F_{HK}`$ has mostly been in the context of the DFT. Indeed the DFT is one of the most powerful tools for investigating not only spatially inhomogeneous states of simple liquids but also a variety of more complex fluids (e.g. liquid crystals and polymers). However, it is also to be noted that there are cases, such as largely fluctuating fluids near critical points, that are beyond its scope.
This is seen from the following Ramakrishnan–Yussouff form: $`\mathrm{\Delta }F_{HK}^A={\displaystyle \int 𝑑𝒓\,\phi ^A(𝒓)\mathrm{ln}\frac{\phi ^A(𝒓)}{\overline{\rho }_\text{M}^A}}-{\displaystyle \frac{1}{2}}{\displaystyle \int 𝑑𝒓\,𝑑𝒓^{}\,C^{(2)}(𝒓-𝒓^{};\overline{\rho }_\text{M}^A)\mathrm{\Delta }\phi ^A(𝒓)\mathrm{\Delta }\phi ^A(𝒓^{})},`$ (4) where $`\mathrm{\Delta }F_{HK}^A`$ is the excess free energy around an arbitrary uniform density $`\overline{\rho }_\text{M}^A`$ in an $`A`$-domain (e.g., the liquid region in a liquid-vapor coexisting state), $`C^{(2)}(𝒓-𝒓^{};\overline{\rho }_\text{M}^A)`$ is the second-order direct correlation function, and $`\mathrm{\Delta }\phi ^A=\phi -\overline{\rho }_\text{M}^A`$ is the density difference between $`\overline{\rho }_\text{M}^A`$ and $`\phi `$ obtained from the self-consistent equation: $$\phi (𝒓)=\mathrm{exp}[C^{(1)}(𝒓;\phi )-\stackrel{~}{J}(𝒓)].$$ (5) The merit of this form (4) is that short-scale correlations—particularly crucial for fluids—may be taken into elaborate account via the input of the direct correlation function; this input is of much benefit owing to the extensive study of the Ornstein–Zernike integral equation. For all that, the limitation must also be realized that the direct correlation functions as input effectively consider only quadratic fluctuations when these are the solutions of the integral equation within the mean spherical approximation, including its equivalent for hard-core potentials, the Percus–Yevick approximation. A systematic way of going beyond the Gaussian approximation to the HK free energy is to start with the density functional integral representation, so that a field theoretical approach to the perturbative calculations becomes available. We therefore propose, first of all, to transform straightforwardly the primary definition of $`F_{HK}`$ expressed by configurational integrals into the functional integral representation: (i) to derive from Eq. (1) with the relation (3) the following expression: $`e^{-F_{HK}}`$ $`=`$ $`{\displaystyle \int D\rho \,\mathrm{exp}[-\{H_{sad}(\rho )-\phi \stackrel{~}{J}\}]}`$ (6) $`H_{sad}(\rho )`$ $`=`$ $`{\displaystyle \int 𝑑𝒓\,𝑑𝒓^{}\,\frac{1}{2}\rho (𝒓)U(𝒓,𝒓^{})\rho (𝒓^{})}+{\displaystyle \int 𝑑𝒓\,[\rho (𝒓)\stackrel{~}{J}+\rho (𝒓)\mathrm{ln}\rho (𝒓)-\rho (𝒓)]},`$ (7) where $`\rho `$ denotes an instantaneous quantity (not an operator) of density. The justification of the Hamiltonian (7) is the main result of this letter; there have so far been few grounds for the form (7), except for the primitive argument that division of ideal gas systems into cells produces the entropy term, i.e., the last two terms on the right-hand side of Eq. (7), though the above expression (7) comprising the familiar free energy functional is intuitively trivial, and hence the corresponding form for the Helmholtz free energy has often been used a priori. (i) As usual, let us insert into the position-vector form (1) of the grand partition function the identity with use of the auxiliary field $`\psi (𝒓)`$: $`\int 𝑑\rho \,𝑑\psi \,\mathrm{exp}[i\psi (\rho -\widehat{\rho })]=1`$. Then Eq. (1) reads $`e^{-\mathrm{\Omega }}`$ $`=`$ $`{\displaystyle \int D\rho \,D\psi \,\mathrm{exp}[-H(\rho ,\psi )]}`$ (8) $`H(\rho ,\psi )`$ $`=`$ $`{\displaystyle \int 𝑑𝒓\,𝑑𝒓^{}\,\frac{1}{2}\rho U\rho ^{}}+{\displaystyle \int 𝑑𝒓\,[\rho J-i\rho \psi -\mathrm{exp}(-i\psi +\mu )]},`$ (9) with the notation $`\rho (𝒓)=\rho `$ and $`\rho (𝒓^{})=\rho ^{}`$. Previous treatments of Eqs. (8, 9) have conventionally proceeded by performing the Gaussian integration over $`\rho `$ for fixed $`\psi `$—the so-called Hubbard–Stratonovich transformation.
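To fix ideas, the quadratic Ramakrishnan–Yussouff form (4) is simple to evaluate on a grid once a direct correlation function is supplied. The Python sketch below is a one-dimensional illustration; the toy Gaussian kernel standing in for $`C^{(2)}`$, the grid, and the density profile are our own assumptions, not inputs of this letter.

```python
import numpy as np

def ry_excess_free_energy(phi, rho_m, c2, dx):
    """Eq. (4) on a 1D grid: ideal (entropy) term minus the quadratic
    correlation term (in units of k_B T).

    phi   : density profile phi(x)
    rho_m : uniform reference density
    c2    : direct correlation kernel c2(x - x'), sampled on the same grid
    """
    dphi = phi - rho_m
    ideal = dx * np.sum(phi * np.log(phi / rho_m))
    # The convolution implements the double integral over r and r'
    corr = 0.5 * dx ** 2 * np.sum(dphi * np.convolve(dphi, c2, mode="same"))
    return ideal - corr

# Toy input: a weak density modulation on a uniform background
x = np.linspace(-10.0, 10.0, 401)
dx = x[1] - x[0]
rho_m = 1.0
phi = rho_m * (1.0 + 0.05 * np.cos(2.0 * np.pi * x / 5.0))
c2 = -0.5 * np.exp(-x ** 2)        # short-ranged toy kernel, c2 < 0
print(ry_excess_free_energy(phi, rho_m, c2, dx))
```

Returning to the Gaussian-integration route just mentioned: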
The reduction is exact indeed, and hence there appears to be no choice but to do so. As a matter of fact, however, a Gaussian approximation—which likewise retains contributions up to quadratic order—is applicable to $`\psi `$ for given $`\rho `$: we have alternatives in mapping Eqs. (8, 9) to more tractable forms. Taking the latter approach, Eqs. (8, 9) are reduced to $`e^{-\mathrm{\Omega }}={\displaystyle \int D\rho \,\mathrm{exp}[-H_{sad}]}`$ (10) with $`H_{sad}(\rho )`$ given by Eq. (7); $`H_{sad}`$ is equal to the Hamiltonian (9) along the saddle point path for $`\psi `$, i.e., $`H(\rho ,\psi _{sad})`$ with $`\psi _{sad}`$ satisfying the saddle point equation $`\delta H(\rho ,\psi )/\delta \psi |_{\psi =\psi _{sad}}=0`$. To be noted in Eq. (10), the excess grand potential $`\mathrm{\Delta }\mathrm{\Omega }=-\mathrm{ln}[\int D\psi \,e^{-\rho (\delta \psi )^2/2}]`$ arising from quadratic fluctuations of $`\delta \psi =\psi -\psi _{sad}`$ is absent. This is due to the following evaluation: Gaussian integration over $`\psi `$—carried out with discretized fields $`\rho _l`$ and $`\psi _l`$—yields the apparently nontrivial term of the Lee-Yang type, $$\mathrm{\Delta }\mathrm{\Omega }=\underset{a^3\to 0}{lim}\frac{1}{a^3}\int 𝑑𝒓\,\frac{1}{2}\mathrm{ln}(\rho _la^3),$$ in the continuum limit (or the vanishing limit of the lattice constant $`a`$ defined as $`a^3=(\rho _0)^{-1}=V/\overline{N}`$), where the other trivial term has been formally absorbed into the integral measure $`D\rho `$ following the standard procedure. The excess potential $`\mathrm{\Delta }\mathrm{\Omega }`$ given by Eq. (11), however, converges to $`0.5(N-\overline{N})`$, half of the difference between the actual total number $`N`$ and the most probable value $`\overline{N}`$, as found from the expansion $`\mathrm{ln}(\rho _la^3)=(\rho _l-\rho _0)a^3+O[\{(\rho _l-\rho _0)a^3\}^2]`$. Thus $`\mathrm{\Delta }\mathrm{\Omega }`$ is negligible in the thermodynamic limit. In the next step, we equate the functional derivative $`\delta \mathrm{\Omega }/\delta \stackrel{~}{J}`$ using the representation (10) with the averaged density $`\phi `$ obtained from both the relation (2) and the position-vector expression (1) of $`\mathrm{\Omega }`$: we put $`\phi \equiv \langle \widehat{\rho }\rangle =\langle \rho \rangle _c`$ with $`\langle \cdots \rangle _c`$ denoting the average under the weight in Eq. (10). Then the Legendre transform (3) leads to the expressions (6, 7), in question, of the HK free energy $`F_{HK}`$; the principal purpose of this letter has been accomplished. What remains, in addition, is to show the virtues of the now-justified representations (6, 7). Two things are thus done in the remainder: (ii) to demonstrate that this form facilitates the field theoretical treatment of the perturbative calculation, and (iii) to compare our perturbative formulation with that of the DFT. (ii) Let us return to the grand potential $`\mathrm{\Omega }`$ given by Eq. (10).
The saddle point equation $`\delta H_{sad}(\rho )/\delta \rho |_{\rho =\rho _\text{M},J=0}=0`$ in the absence of external potential, $`J(𝒓)\equiv 0`$, produces the mean-field density: $$\rho _\text{M}(𝒓)=\mathrm{exp}\left\{-\int 𝑑𝒓^{}\,U(𝒓,𝒓^{})\rho _\text{M}(𝒓^{})+\mu \right\}.$$ (12) Expanding around $`\rho _\text{M}`$ the logarithmic term in the Hamiltonian difference $`\mathrm{\Delta }H=H_{sad}(\rho )-H_{sad}(\rho _\text{M})`$, we obtain $`e^{-\mathrm{\Omega }}`$ $`=`$ $`e^{-H_{sad}(\rho _\text{M})}{\displaystyle \int D\stackrel{~}{\rho }\,\mathrm{exp}\{-(\mathrm{\Delta }H+\stackrel{~}{\rho }J)\}}`$ (13) $`\mathrm{\Delta }H(\stackrel{~}{\rho })`$ $`=`$ $`{\displaystyle \int 𝑑𝒓\,𝑑𝒓^{}\,\frac{1}{2}\stackrel{~}{\rho }\left(U+\frac{\delta [𝒓-𝒓^{}]}{\rho _\text{M}}\right)\stackrel{~}{\rho }^{}}+E(\stackrel{~}{\rho }),`$ (14) where $`\stackrel{~}{\rho }=\rho -\rho _\text{M}`$, the fluctuation of the total number is neglected (i.e., $`\int 𝑑𝒓\,\stackrel{~}{\rho }\simeq 0`$), and $`E(\stackrel{~}{\rho })`$ denotes the terms higher than quadratic arising from the logarithmic expansion. The Hamiltonian $`\mathrm{\Delta }H`$ given by Eq. (14) has some characteristics distinct from standard field theoretical formulations: One is that the free part of $`\mathrm{\Delta }H`$ includes the positionally dependent coefficient $`1/\rho _\text{M}(𝒓)`$, indicating that our formalism is applicable to investigating density fluctuations in structured fluids where the mean-field density itself is not uniform but spatially oscillates. Next, $`\mathrm{\Delta }H`$ reveals that the higher terms $`E(\stackrel{~}{\rho })`$ in one-component systems arise from the entropy allowing thermally activated hopping processes, and not from interactions. Finally it is noted that $`\stackrel{~}{J}`$ is reduced to $`J`$. This is why the external potential $`J`$ is not included in determining the mean-field density from the saddle point equation; if we took as a reference density the mean-field one in the presence of $`J`$, $`\mathrm{\Delta }H`$ would have no external source to create a generating functional. Of particular interest is the first property; this, for example, makes it possible to generalize the Debye-Hückel equation as Fisher et al. have recently proposed. The present letter, however, restricts itself to simplified systems which have only short-ranged potentials and consist of macroscopic subsystems (or domains) inside which the mean-field density $`\rho _\text{M}(𝒓)`$ takes constant values except at the boundaries. A typical situation is the coexisting phase of two subsystems (such as liquid and gas). Consequently, neglect of the interfacial energy reduces $`\mathrm{\Delta }H`$ to the sum of the contributions from the domains. With the volume and the mean-field density in an $`s`$-domain ($`s=A,B,\mathrm{}`$) denoted by $`V_s`$ and $`\overline{\rho }_\text{M}^s`$, respectively (where the overbar emphasizes the constancy), the functional integral in Eq.
(13) reads, in terms of the Fourier transformed density $`\rho ^s(𝒌)=(1/V_s)\int 𝑑𝒓\,\mathrm{exp}(i𝒌𝒓)\rho ^s(𝒓)`$: $${\displaystyle \int D\stackrel{~}{\rho }\,\mathrm{exp}(-\mathrm{\Delta }H)=\prod _{s=A,\mathrm{}}C\,\mathrm{exp}\left(\frac{1}{2}\frac{\delta }{\delta \rho ^s}\mathrm{\Delta }_F^s\frac{\delta }{\delta \rho ^s}\right)\mathrm{exp}\left[-\frac{V_s}{(2\pi )^3}\int 𝑑𝒌\,\{E(\rho ^s)+\rho ^sJ\}\right]\Bigg |_{\rho ^s=0}},$$ (16) where the further-shifted density is abbreviated to $`\rho ^s`$, $`C`$ is a constant independent of $`J`$, and the propagator $`\mathrm{\Delta }_F^s`$ is given in the usual form $`(\mathrm{\Delta }_F^s)^{-1}=u_0+(1/\overline{\rho }_\text{M}^s)+u_2𝒌^2`$ due to the conventional expansion of the short-ranged potential, $`U=u_0+u_2𝒌^2`$, in $`𝒌`$–space. The expression (16) implies that Feynman graphs are now available. Thus the HK free energy $`F_{HK}`$ comes into the spotlight, because $`F_{HK}`$ given by Eqs. (6, 7) includes the generating functional of the $`n`$-point vertex functionals $`\mathrm{\Gamma }^{(n)}`$ consisting only of one-particle irreducible diagrams, as in standard field theory: putting $`\mathrm{\Delta }\phi ^s=\phi -\overline{\rho }_\text{M}^s`$ in an $`s`$-domain ($`s=A,B,\mathrm{}`$) as in Eq. (4) and ignoring the number fluctuation (i.e., $`\int 𝑑𝒓\,\mathrm{\Delta }\phi ^s\simeq 0`$) as before, $`F_{HK}`$ reads $`F_{HK}`$ $`=`$ $`{\displaystyle \sum _{s=A,\mathrm{}}}F_{MF}^s+\mathrm{\Delta }F_{HK}^s`$ (17) $`F_{MF}^s`$ $`=`$ $`{\displaystyle \int 𝑑𝒓\,𝑑𝒓^{}\,\frac{1}{2}\overline{\rho }_\text{M}^sU\overline{\rho }_\text{M}^{\prime s}}+{\displaystyle \int 𝑑𝒓\,[\overline{\rho }_\text{M}^s\mathrm{ln}\overline{\rho }_\text{M}^s-\overline{\rho }_\text{M}^s]}`$ (18) $`\mathrm{\Delta }F_{HK}^s`$ $`=`$ $`{\displaystyle \sum _{n\ge 2}^{\mathrm{}}}{\displaystyle \frac{1}{n!}}{\displaystyle \int 𝑑𝒓_1\mathrm{}𝑑𝒓_n\,\mathrm{\Gamma }^{(n)}(𝒓_1,\mathrm{},𝒓_n;\overline{\rho }_\text{M}^s)\,\mathrm{\Delta }\phi ^s(𝒓_1)\mathrm{}\mathrm{\Delta }\phi ^s(𝒓_n)}.`$ (19) It is to be noted in the above representations that $`F_{HK}`$ correctly includes the mean-field Helmholtz free energy $`F_{MF}`$ in the absence of external potential $`J`$, in contrast to previous density functional integral formulations which take as a reference the free energy for the density smeared over the entire system. As a result, the present formalism may focus on the appropriate density deviation from not a smeared value but a mean-field one $`\overline{\rho }_\text{M}^s`$, and therefore makes the perturbative approximation more precise than previous approaches. (iii) Even at the starting point, there exists a difference between the DFT and our formalism: the former separates the entropy term, $`\int 𝑑𝒓\,[\phi \mathrm{ln}\phi -\phi ]`$, from $`F_{HK}`$ beforehand, whereas the latter does so with the mean-field free energy $`H_{sad}(\rho _\text{M})`$ (see Eq. (13)). Such distinct types of theories, though, have a correspondence in some cases, as will be seen below. Most successful is the absence of interactions between particles, i.e., $`U=0`$, where the field theoretical representation is identified with the DFT formalism. To see this, we consider the grand potential $`\mathrm{\Omega }`$. In the DFT, $`\mathrm{\Omega }`$ is reduced to $`\mathrm{\Omega }=\int 𝑑𝒓\,[\phi _{\mathrm{ex}}\mathrm{ln}\phi _{\mathrm{ex}}-\phi _{\mathrm{ex}}+\phi _{\mathrm{ex}}\stackrel{~}{J}]`$ with the averaged density $`\phi _{\mathrm{ex}}=\mathrm{exp}(-\stackrel{~}{J})`$ obtained from the relation (5) when $`C^{(1)}=0`$.
Meanwhile, the saddle point equation, $`\delta H_{sad}/\delta \rho =0`$, explicitly including the external potential $`J`$, also produces a density equal to $`\phi _{\mathrm{ex}}`$; the deviation of $`\mathrm{\Omega }`$ in the functional integral formalism from that of the DFT is therefore given, in a first evaluation, by $`-\mathrm{ln}[\int D\delta \rho \,\mathrm{exp}(-(\delta \rho )^2/2\phi _{\mathrm{ex}})]`$, which is of the same kind as the excess grand potential $`\mathrm{\Delta }\mathrm{\Omega }`$. Then, repeating a discussion similar to that after Eq. (11), the difference may be ignored, and thus the equivalence in the case of $`U=0`$ is assured. For $`U\ne 0`$, on the other hand, let us compare both representations of the excess HK free energy $`\mathrm{\Delta }F_{HK}^A`$ in an $`A`$-domain up to quadratic terms in $`\mathrm{\Delta }\phi ^A`$. For the DFT formulation (4), the expansion of the logarithmic term yields $`\mathrm{\Delta }F_{HK}^A={\displaystyle \int 𝑑𝒓\,𝑑𝒓^{}\,\frac{1}{2}\left(-C^{(2)}+\frac{\delta [𝒓-𝒓^{}]}{\overline{\rho }_\text{M}^A}\right)\mathrm{\Delta }\phi ^A(𝒓)\mathrm{\Delta }\phi ^A(𝒓^{})}.`$ (20) In applying the mean spherical approximation (MSA) to the calculation of the direct correlation function for hard-core(-Yukawa) or square-well fluids, we have only to put $`C^{(2)}=-U`$ in Eq. (20), because the other condition—that the pair distribution function be set to zero inside hard spheres of diameter $`d`$—is formally satisfied due to the potential $`U=\mathrm{}`$ for $`|𝒓-𝒓^{}|\le d`$. On the other hand, the functional integral representation (19) under the tree approximation reduces to $`\mathrm{\Delta }F_{HK}^A=\mathrm{\Delta }H(\mathrm{\Delta }\phi ^A)`$, and therefore takes the same form up to quadratic terms as the DFT expression (20) with use of the MSA for the above-mentioned fluids (see also Eq. (14)). This correspondence for $`U\ne 0`$ in turn highlights some merits of the field theoretical forms (17) to (19), with which we conclude. One advantage over the DFT using the MSA is that the density functional integral representation may still take fluctuations into more elaborate account systematically, by including loop graphs in Eq. (19). Moreover, we would like to stress that a reference density—merely an arbitrary value in the DFT (see the statement just after Eq. (4))—is obtained self-consistently from the relation (12) in our formalism. We acknowledge the financial support from the Ministry of Education, Science, Culture, and Sports of Japan under Grant No. 10450032.
# Statistical properties of SGR 1900+14 bursts ## 1 Introduction At least three of the four currently-known soft gamma repeaters are associated with slowly rotating, extremely magnetized neutron stars located within young supernova remnants (Kouveliotou et al. 1998, 1999). They are characterized by the recurrent emission of gamma-ray bursts with relatively soft spectra (resembling optically-thin thermal bremsstrahlung at $`kT\sim 20`$–40 keV) and short durations ($`\sim 0.1`$ s) (Kouveliotou 1995). Thompson and Duncan (1995) suggested that these bursts are due to neutron star crust fractures, driven by the stress of an evolving, ultra-strong magnetic field, $`B\sim 10^{14}`$ Gauss. Cheng, Epstein, Guyer & Young (1996) observed that particular statistical properties of a sample of 111 SGR events from SGR 1806-20 are quite similar to those of earthquakes (EQ). These properties include the distribution of event energies, which follow a power law $`dN\propto E^{-\gamma }\,dE`$ with an exponent $`\gamma =1.6`$. A similar distribution was obtained empirically by Gutenberg and Richter (1956a; 1965) for the distribution of EQ energies, with power law index $`\gamma _{EQ}=1.6\pm 0.2`$, and in computer simulations of fractures in a stressed, elastic medium (Katz 1986). The distribution of time intervals between successive SGR 1806-20 events is well described by a log-normal distribution analogous to the waiting times distribution of microglitches seen in the Vela pulsar (see Hurley et al. 1994). Cheng et al. (1996) also showed that the cumulative waiting time distributions of SGR 1806-20 and EQ events are similar. These results support the idea that SGR bursts are caused by starquakes, as expected to occur in the crusts of magnetically-powered neutron stars, or "magnetars" (Duncan & Thompson 1992; Thompson and Duncan 1995, 1996). In May 1998, SGR 1900+14 became extremely active after a long period during which only sporadic activity occurred (Kouveliotou 1993). In the period from May 1998 until January 1999 a total of 200 events were detected (Woods et al. 1999b) with the Burst and Transient Source Experiment (BATSE) aboard the Compton Gamma Ray Observatory (CGRO). Out of these 200 events, 63 led to an on-board trigger. The sudden change in source activity initiated a series of Rossi X-ray Timing Explorer (RXTE) observations between May 31 and December 21, 1998. During these observations, 837 bursts from SGR 1900+14 were detected with the Proportional Counter Array (PCA). As noted by Kouveliotou et al. (1998) for SGR 1806-20, the bursts occur in an apparently irregular temporal pattern. This is also true for SGR 1900+14 bursts (some examples of the irregular temporal pattern of SGR 1900+14 bursts can be seen at http://gammaray.msfc.nasa.gov/batse/sgr/sgr1900/). In this Letter, we study the statistics of SGR bursting using the new measurements of SGR 1900+14. Our data base of events is larger by a factor of $`\sim 10`$ than those of previous statistical studies, and extends over a dynamic range in burst energy (or fluence) larger by $`\sim 10^2`$. ## 2 BATSE Observations The BATSE instrument is made up of 8 identical detector modules located on each corner of the CGRO. Each module contains a large area detector (LAD) and a spectroscopy detector (SD). In our analysis, we have used DISCriminator LAD (DISCLA) data with coarse energy resolution (4 channels covering $`E>25`$ keV), Spectroscopy Time-Tagged Event (STTE) data and Spectroscopy High Energy Resolution Burst (SHERB) data with fine energy binning (256 channels).
A detailed description of BATSE instrumentation and data types can be found in Fishman et al. (1989). BATSE was triggered by SGR 1900+14 bursts 63 times between May 1998 and January 1999. For 22 of the brightest events, we obtained STTE or SHERB data with detailed spectral information. We fit the background-subtracted source spectra to optically-thin thermal bremsstrahlung (OTTB) and power law models. The OTTB model, $`F(E)\propto E^{-1}\mathrm{exp}(-E/kT)`$, provides suitable fits ($`0.83<\chi _\nu ^2<1.32`$) to all of the event spectra, with temperatures ranging between 21.0 and 46.9 keV. The power law model failed to fit most of the spectra. The mean of the OTTB temperatures for this sample of 22 events, appropriately weighted by uncertainties, is $`25.7\pm 0.2`$ keV. Woods et al. (1999b) performed an extensive search for untriggered BATSE events from SGR 1900+14. They found, in addition to the 63 triggered events, 137 untriggered burst events between 24 May 1998 and 3 February 1999. In this study we selected the 187 BATSE events (triggered and untriggered) which had DISCLA data. This data type is read out continuously (with the exception of data telemetry gaps) and therefore is available for the largest sample of events. We have excluded events which occurred in data gaps, events that were too weak to fit, and four events due to their distinction from typical SGR activity. The events on 1998 October 22 and 1999 January 10 with relatively hard spectra (Woods et al. 1999c), and the multi-episodic events on 1998 May 30 and September 1 (Göğüş et al. 1999b), will be discussed elsewhere. Given the long DISCLA data integration time (1.024 s) relative to typical burst durations ($`\sim 0.1`$ s), we could only estimate the fluence for each event. In order to determine the fluence of each burst, we fit the background-subtracted source spectrum to the OTTB model with a fixed kT of 25.7 keV, a reasonable choice considering the fairly narrow kT distribution of the triggered bursts. We find that the fluences of SGR 1900+14 bursts observed with BATSE range between $`2\times 10^{-8}`$ and $`2.5\times 10^{-5}`$ ergs cm<sup>-2</sup>. For an estimated distance to SGR 1900+14 of 7 kpc (Vasisht et al. 1994), and assuming isotropic emission, the corresponding energy range is $`1.1\times 10^{38}`$–$`1.5\times 10^{41}`$ ergs. ## 3 PCA Observations RXTE observed SGR 1900+14 for a total exposure time of $`\sim 180`$ ks between May 1998 and December 1998. In this work, we have analyzed data from 32 pointed observations with the PCA. We performed an automated burst search similar to the one used on BATSE data described by Woods et al. (1999b). Using Standard 1 data (2-60 keV) for all times where the source was above the Earth's horizon by more than 5°, we searched for bursts using the following methodology. For each 0.125 s bin, a background count rate was estimated by fitting a first order polynomial to 5 s of data before and after each bin, with a 3 s gap between the bin searched and the background intervals. Bins with count rates exceeding 1000 counts s<sup>-1</sup> were assumed to contain burst emission and were excluded from background intervals. At the beginning (end) of each continuous stretch of data, extrapolations of background fits after (before) the bin were used to estimate the background count rate within the bin searched. A burst was defined as any continuous set of bins with count rates in excess of 5.5 $`\sigma `$ above the estimated background.
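For readers wishing to reproduce this selection, the burst-search logic just described is simple to implement. The following Python sketch is a minimal illustration under stated assumptions; the function name, the synthetic Poisson error model, and the handling of short data stretches are ours, not part of the original analysis.

```python
import numpy as np

def find_bursts(rate, dt=0.125, bg_win=5.0, gap=3.0, nsigma=5.5, veto=1000.0):
    """Flag bins exceeding nsigma above a locally fitted background.

    rate is the count rate per bin (counts/s) at dt-second resolution.
    For each bin, a 1st-order polynomial is fitted to bg_win seconds of
    data on each side, excluding a `gap`-second buffer around the bin and
    any bins above the `veto` rate (assumed burst emission). Near the
    edges only one side is populated, i.e. the fit is extrapolated.
    """
    n = len(rate)
    nbg, ngap = int(bg_win / dt), int(gap / dt)
    t = np.arange(n) * dt
    is_burst = np.zeros(n, dtype=bool)
    for i in range(n):
        lo = np.arange(max(0, i - ngap - nbg), max(0, i - ngap))
        hi = np.arange(min(n, i + ngap + 1), min(n, i + ngap + 1 + nbg))
        idx = np.concatenate([lo, hi])
        idx = idx[rate[idx] < veto]            # drop contaminated bins
        if len(idx) < 4:
            continue                           # too little background data
        coeff = np.polyfit(t[idx], rate[idx], 1)
        bg = np.polyval(coeff, t[i])
        sigma = np.sqrt(max(bg, 1.0) / dt)     # Poisson error on the rate
        if rate[i] > bg + nsigma * sigma:
            is_burst[i] = True
    return is_burst
```

A burst then corresponds to any maximal run of consecutive flagged bins.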
The count fluence of each burst was measured by simply integrating the background-subtracted counts over the bins covering the event. In order to compare integrated counts obtained with the PCA and BATSE fluences, we determined a conversion factor between each PCA count and BATSE fluence. We assume a constant spectral model (OTTB with kT=25.7 keV). First, we searched for simultaneous bursts observed with both instruments and found 13 events (6 triggered events, 3 in the read-out of triggered events and 4 untriggered events in BATSE). We then computed the ratio of the BATSE fluence of each simultaneous event to the PCA counts, which ranges over a factor of $`\sim 2`$, between $`4.65\times 10^{-12}`$ and $`1.15\times 10^{-11}`$ ergs cm<sup>-2</sup> counts<sup>-1</sup>. The weighted mean of the ratios for SGR 1900+14 is $`5.45\times 10^{-12}`$ ergs cm<sup>-2</sup> counts<sup>-1</sup>, with standard deviation $`\sigma =2\times 10^{-12}`$ ergs cm<sup>-2</sup> counts<sup>-1</sup>. Invoking this conversion factor, the fluence of the bursts from the PCA extends from $`1.2\times 10^{-10}`$ to $`3.3\times 10^{-7}`$ ergs cm<sup>-2</sup> (in the BATSE energy range, $`E>25`$ keV) and the burst energies range from $`7\times 10^{35}`$ to $`2\times 10^{39}`$ ergs. ## 4 Statistical Data Analysis i) Fluence distributions: The fluences of BATSE bursts were binned in equally spaced logarithmic fluence steps ($`dN/d\mathrm{log}E`$) (Fig. 1). Using a standard least squares fitting method, we fit a power law model to the data between $`5.0\times 10^{-8}`$ and $`2.5\times 10^{-6}`$ ergs cm<sup>-2</sup>. Bursts at the low end of the distribution were excluded because of diminished detection efficiency, as well as at the high end due to undersampling of the intrinsic distribution. The power law exponent obtained is $`-0.65\pm 0.08`$ (solid line passing through the BATSE data in Fig. 1), which corresponds to $`dN\propto E^{-1.65}\,dE`$. We also employed a maximum likelihood analysis, instead of the least squares method, to fit a power law model to the unbinned fluence values within the same interval of fluences. This method yields $`\gamma =1.66_{-0.12}^{+0.13}`$ for the energy exponent, which agrees well with the least squares fit. Using the conversion factor we derived from RXTE counts to BATSE fluence for SGR 1900+14, we determined the fluence of each RXTE burst (in the BATSE energy range) and distributed them over the same logarithmic fluence steps (Fig. 1). We first fit the binned RXTE fluences between $`1.6\times 10^{-10}`$ and $`3.3\times 10^{-7}`$ ergs cm<sup>-2</sup> to a power law model using the least squares method, which gives an exponent value of $`-0.64\pm 0.04`$ (solid line passing through the RXTE data in Fig. 1). The unbinned fluences were then fit to the same model using the maximum likelihood method, obtaining $`1.66\pm 0.05`$ for the power law exponent. Combined RXTE and BATSE fluences range from $`1.2\times 10^{-10}`$ to $`2.5\times 10^{-5}`$ ergs cm<sup>-2</sup> (Fig. 1), which demonstrates that the power law distribution of energies with an exponent $`\gamma \simeq 1.66`$ is valid for SGR 1900+14 over 4 orders of magnitude. ii) Waiting times statistics: We have measured the waiting times ($`\mathrm{\Delta }`$T) between successive bursts, uninterrupted by Earth occultation and data gaps, for 779 events. Fig. 2 shows the distribution of waiting times, which range from 0.25 to 1421 s. We fit the $`\mathrm{\Delta }`$T-distribution to a log-normal function and found a peak at $`\sim 49`$ s.
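As an aside, for an ideal power law $`dN\propto E^{-\gamma }\,dE`$ whose upper bound lies far above the lower cutoff, the maximum likelihood estimate of $`\gamma `$ has a simple closed form; the sketch below illustrates the idea. The truncated-range likelihood actually appropriate for the bounded fluence intervals above would require a numerical solution, and the synthetic data here are our own.

```python
import numpy as np

def ml_power_index(E, E_min):
    """ML estimate of gamma for dN ~ E**(-gamma) dE with E >= E_min
    and no upper cutoff: gamma = 1 + N / sum(ln(E_i / E_min)).
    The approximate 1-sigma error is (gamma - 1) / sqrt(N)."""
    E = np.asarray(E, dtype=float)
    E = E[E >= E_min]
    n = len(E)
    gamma = 1.0 + n / np.sum(np.log(E / E_min))
    return gamma, (gamma - 1.0) / np.sqrt(n)

# Self-test on synthetic fluences drawn from gamma = 1.66
rng = np.random.default_rng(0)
u = rng.random(800)
E = 5e-8 * (1.0 - u) ** (-1.0 / 0.66)   # inverse-CDF sampling, gamma - 1 = 0.66
print(ml_power_index(E, 5e-8))           # recovers ~1.66 +/- 0.02
```

We now return to the waiting-time statistics.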
The solid line in Fig. 2 shows the interval used in the fit, and the dashed lines show the extrapolations of the log-normal distribution. We do not include waiting times of less than 2 s, since these appear to be double-peaked events in which the second burst peak appears shortly after the first one, although recorded as two distinct bursts. We were unable to generate a $`\mathrm{\Delta }`$T-distribution for BATSE bursts due to the much smaller number of events which occurred during a single orbital window. In order to investigate any relations between the waiting times till the next burst ($`\mathrm{\Delta }T^+`$) and the intensity of the bursts, we divided the 779-event sample into 8 intensity intervals, each of which contains approximately 100 events. We fit the $`\mathrm{\Delta }T^+`$-distribution to a log-normal distribution and determined the mean $`\mathrm{\Delta }T^+`$ (i.e. where the fitted log-normal distribution peaks) and the mean counts for each of the 8 groups. We show in Fig. 3-a that there is no correlation between $`\mathrm{\Delta }T^+`$ and the energy of the bursts (Spearman rank-order correlation coefficient $`\rho =0.05`$, with probability P = 0.91 that this correlation arises from a random data set). We also searched for a relation between the elapsed times since the previous burst ($`\mathrm{\Delta }T^{-}`$) and the intensity of the bursts. Similar to the previous case, we sub-divided the events into 8 intensity intervals and determined the mean $`\mathrm{\Delta }T^{-}`$, by fitting to a log-normal distribution, and the mean counts for each group individually. Fig. 3-b shows that there appears to be an anti-correlation between the mean $`\mathrm{\Delta }T^{-}`$ and burst energy ($`\rho =-0.93`$, P = $`8\times 10^{-4}`$). iii) Burst durations: Gutenberg and Richter (1956a; 1956b) demonstrated that there is a power law relation between the magnitude, or energy, of EQ events and the durations of the strong motion at short distances from an EQ region. In order to investigate whether a similar correlation exists for SGR events, we selected all 679 PCA bursts from the most active period of SGR 1900+14. In order to determine the durations of the bursts accurately, we used event mode PCA data with 1 ms time resolution. For 281 of the bursts selected, we obtained t<sub>90</sub> durations of the bursts (Koshut et al. 1996). Fig. 4 shows that burst energies and durations are correlated ($`\rho =0.54`$, P $`\sim 10^{-24}`$), although there is a significant spread of fluences at a given duration. ## 5 Discussion The power-law size distribution of SGR 1900+14 bursts with an index $`\gamma =1.66`$ is similar to those found for SGR 1806-20 (Cheng et al. 1996) and SGR 1627-41 (Woods et al. 1999a). The lack of a high energy cut-off in the differential size distribution indicates that the highest energy events are not well sampled in our distribution. The distribution of waiting times between successive SGR 1900+14 bursts is characterized by a log-normal function, similar to that of SGR 1806-20 (Hurley et al. 1994). Waiting times between SGR 1900+14 bursts are on average shorter than those of SGR 1806-20, since all SGR 1900+14 bursts occurred during the most active period of the source. There is no correlation between the intensity of a burst and the waiting time until the following burst. This result agrees well with the results of Laros et al.
(1987) for SGR 1806-20 and distinguishes the physical mechanism of SGR 1900+14 bursts from that of type II X-ray bursts from the Rapid Burster (Lewin et al. 1976), in which the burst energy is proportional to the waiting time till the next burst. We find an anti-correlation between the intensity of the bursts and the waiting time since the previous burst. This is very different from type I X-ray bursts (thermonuclear flashes; see Lewin, Van Paradijs and Taam 1993), for which there is a rough positive correlation. There is evidence of a positive correlation between the energy and the duration of SGR 1900+14 bursts. Similar behavior was also observed for EQs (Gutenberg & Richter 1956a, 1956b) and solar flares (Lu et al. 1993). The EQ size distribution appears to be a power law with an exponent between 1.4 and 1.8, independent of geographic location (Gutenberg & Richter 1956a, 1965; Lay & Wallace 1995). Using data taken from the Solar Maximum Mission (SMM), Crosby et al. (1993) found a power law size distribution for 12000 solar flares with exponents ranging between 1.53 and 1.73. The SMM results have been confirmed by results from the International Cometary Explorer (ICE) for 4350 flares, which find an exponent of 1.6 (Lu et al. 1993). Gershberg and Shakhovskaya (1983) found that the size distributions of stellar flares from 23 stars display power laws with exponents between 1.5 and 2.1. Chen et al. (1991) argued that EQ dynamics is described by a self-organized critical system. Crosby et al. (1993) similarly suggested that the size distribution of solar flares reflects an underlying system in a state of self-organized criticality (see Bak et al. 1988), which states that many composite systems will self-organize to a critical state in which a small perturbation can trigger a chain reaction affecting any number of elements within the system. We have been unable to find clear results in the literature on the distribution of waiting times between successive solar flares or EQs. Wheatland et al. (1998) predicted that the distribution of waiting times of solar flares displays a power law, while Biesecker (1994) proposed that it is consistent with a time-dependent Poisson process. Nishenko and Bulland (1987) showed that the waiting time distribution of large EQs is well described by a log-normal function. In recent work by Nadeau and McEvilly (1999) there is evidence of a log-normal distribution of waiting times between micro-EQs. The large number of bursts in our samples allows us to stringently test the power law size distribution proposed by Cheng et al. (1996); we find that the size distribution of SGR 1900+14 bursts follows a power-law of index 1.66 over more than four orders of magnitude in burst fluence. This behavior, along with a log-normal waiting time distribution and energy-correlated burst durations, are characteristics of self-organized critical systems in general, and earthquakes and solar flares in particular. In the magnetar model, the triggering mechanism for SGR bursts is a hybrid of starquakes and magnetically-powered flares (Thompson & Duncan 1995). When magnetic stresses induce elastic strains in the crust, the stored potential energy is predominantly magnetic rather than elastic. In contrast with an EQ, this allows much of the energy to be released directly into a propagating disturbance of the external magnetic field
of the neutron star (small-scale fractures occurring deep in the crust excite internal seismic waves that couple only indirectly to external Alfvén modes); and in contrast with a solar flare, it is the rigidity of the crust that provides a gate or trigger for the energy release. The extended power-law distribution of burst fluences suggests that the average radiative efficiency does not vary significantly over four orders of magnitude in burst energy, and provides a strong constraint on burst emission models (Thompson et al. 1999). We thank Dr. Robert D. Preece for adjusting the WINGSPAN software to process untriggered BATSE events and Dr. Markus J. Aschwanden for helpful discussions on solar flares. We acknowledge support from the cooperative agreement NCC 8-65 (EG); NASA grants NAG5-3674 and NAG5-7060 (JvP); Texas Advanced Research Project grant ARP-028 and NASA grant NAG5-8381 (RCD).
## 1 Introduction Strong evidence for neutrino oscillation has come from Super-Kamiokande, from the zenith angle dependence of the muon deficit that they observe for atmospheric neutrinos. The favoured explanation is the oscillation of muon neutrinos to tau neutrinos, which escape detection in their apparatus. These results require confirmation using an artificially generated neutrino beam, and accelerator-based experiments that will check the muon neutrino disappearance are underway. A long-baseline beam from CERN to Gran Sasso is also under discussion. As the question of $`\nu _\mu `$ disappearance should have been settled by the time that experiments using that beam take data, the key issue for them is to confirm that tau neutrinos are indeed being produced. Since the neutral current interactions of the $`\nu _\tau `$ cannot be distinguished from those of other neutrinos, the crucial point is to identify charged-current interactions, $`\nu _\tau N\to \tau ^{-}X`$, by the appearance of the tau lepton. Two experiments are being proposed to make this measurement: OPERA and ICANOE. OPERA seeks to identify the taus by their characteristic short lifetime, corresponding to an average decay length of about 1 mm for the CNGS beam energy spectrum. They propose to use an emulsion target to recognise the kink from the tau decay, building on the expertise accumulated by the CHORUS and DONUT experiments. ICANOE, on the other hand, intends to recognise the charged-current $`\nu _\tau `$ events through kinematical criteria, involving the missing energy from the neutrinos accompanying the tau decay, and isolation criteria for the tau decay products, following the approach pioneered by NOMAD. These experiments will be challenging, as the number of charged-current $`\nu _\tau `$ interactions expected in each year of operation of the CNGS beam is only about 30 per kiloton of sensitive detector mass, for the oscillation parameters preferred by the Super-Kamiokande data: $`\mathrm{\Delta }m^2=3.5\times 10^{-3}\,\mathrm{eV}^2`$, $`\mathrm{sin}^2(2\theta )=1`$. Maintaining high efficiency is therefore crucial, whilst background must be suppressed such that just a few observed events would correspond to an unambiguous signal. The detection technique presented in this note is different—the idea is to directly identify the tau by imaging the Cherenkov light that it produces. Cherenkov detectors have already been used in this field: Super-Kamiokande itself relies on the generation of Cherenkov light in water, but without focussing. A ring-imaging water Cherenkov detector, AQUA-RICH, was originally proposed for Gran Sasso, but insufficient sensitive mass could fit in the experimental halls, so it is now being pursued as an atmospheric neutrino experiment sited elsewhere (with a megaton mass!). Due to the chromatic dispersion in water, a relatively narrow energy bandwidth is assumed for photon detection in AQUA-RICH. Coupled with the 20% detector coverage, this leads to typically 0.5 detected photons per mm of track length, insufficient to see the tau track. The concept presented here is to use $`\mathrm{C}_6\mathrm{F}_{14}`$ liquid as the radiator, which due to its low dispersion allows a wider photon energy bandwidth, and to have full detector coverage. This leads to 13 detected photons per mm, and direct detection of the Cherenkov ring from the tau then becomes feasible. Furthermore, the increased density of the radiator would allow a kiloton detector to fit comfortably in a Gran Sasso hall.
## 2 Detector concept Perfluorohexane ($`\mathrm{C}_6\mathrm{F}_{14}`$) is a well-established radiator material for RICH detectors, liquid at room temperature. About a ton of it is used in DELPHI. It has the nice features for this application of a refractive index of about 1.27, slightly lower than that of water, whilst being significantly more dense (1.68 $`\mathrm{g}/\mathrm{cm}^3`$). It is also, after purification, highly transparent to photons with wavelength down to 200 nm and beyond, and has low chromatic dispersion: the dependence of the refractive index on photon energy is shown in Fig. 1 (a). A classical focussed RICH geometry is adopted, with a spherical mirror following the radiator and a spherical detection surface sited at radius $$r_\mathrm{d}=\frac{r_\mathrm{m}}{2}\,\frac{\sqrt{1+\frac{9}{16}\mathrm{sin}^2\theta _\mathrm{c}}+\frac{3}{8}\mathrm{sin}^2\theta _\mathrm{c}}{1-\frac{3}{16}\mathrm{sin}^2\theta _\mathrm{c}},$$ (1) where $`r_\mathrm{m}`$ is the mirror radius of curvature and $`\theta _\mathrm{c}`$ is the Cherenkov angle. The saturated Cherenkov angle in $`\mathrm{C}_6\mathrm{F}_{14}`$ is about $`38^{\circ }`$, and so $`r_\mathrm{d}=0.67r_\mathrm{m}`$. The tau leptons produced by charged-current interaction of neutrinos from the CNGS beam are produced predominantly in the forward direction, along the direction of the beam. The detector elements are therefore oriented to collect the light produced by such tracks, as shown schematically in Fig. 2 (a). The assumed quantum efficiency $`Q(E)`$ of the photodetectors is shown in Fig. 1 (b); it is cut off above 6.2 eV, corresponding to a quartz entrance window for the detectors. The possible implementation of this concept illustrated in Fig. 2 (b) will be discussed in the following section. For the purposes of the simulation presented here, a circular detector surface of 1 m diameter is assumed, equal to its radius of curvature. The radiator thickness is then 50 cm, with a spherical mirror of radius 150 cm. Such a module would contain about 0.67 $`\mathrm{m}^3`$ of $`\mathrm{C}_6\mathrm{F}_{14}`$ liquid, corresponding to about 1100 kg. A kiloton detector would thus require about 900 such modules. Cherenkov photons produced by the tau and other charged particles in the event are focussed by the mirror into rings on the detector surface. For full detector coverage, the number of detected photoelectrons per ring is given by: $$N=\left(\frac{\alpha }{\mathrm{\hbar }c}\right)L\int Q\,T\,R\,\mathrm{sin}^2\theta _\mathrm{c}\,dE,$$ (2) where the factor in parentheses is a constant with value $`370\,\mathrm{eV}^{-1}\,\mathrm{cm}^{-1}`$, $`L`$ is the track length in the radiator, $`T`$ is the transmittance of the radiator, and $`R`$ is the reflectivity of the mirror (assumed to be 95%). The absorption length of purified $`\mathrm{C}_6\mathrm{F}_{14}`$ has been measured to be greater than 100 cm for $`E<6.2`$ eV, so $`T=1`$ is assumed here. Then Eq. 2 corresponds to 13 detected photoelectrons per mm of track length, which renders the tau track visible. The muon from $`\tau ^{-}\to \mu ^{-}\nu _\tau \overline{\nu }_\mu `$ decays will typically pass through 25 cm of radiator, giving 3200 photoelectrons. The RICH optics result in the position of the rings on the detector surface being determined by the angle of the tracks, insensitive to their production point in the radiator; the geometry is thus well adapted to identification of the tau decay kink.
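To make the photoelectron budget concrete, Equations 1 and 2 are easy to evaluate numerically. The Python sketch below is a minimal illustration: the flat quantum efficiency, the photon energy window and the dispersion-free index are simplifying assumptions of ours, standing in for the detailed curves of Fig. 1.

```python
import numpy as np

ALPHA_OVER_HBARC = 370.0     # eV^-1 cm^-1, the constant in Eq. (2)

def photoelectrons_per_cm(n=1.27, beta=1.0, q_mean=0.25,
                          e_lo=2.0, e_hi=6.2, refl=0.95, trans=1.0):
    """Detected photoelectrons per cm of track (Eq. 2), flat Q(E) assumed."""
    cos_tc = 1.0 / (n * beta)
    if cos_tc >= 1.0:
        return 0.0                       # below Cherenkov threshold
    sin2_tc = 1.0 - cos_tc ** 2
    bandwidth = e_hi - e_lo              # eV; Q, T, R constant over the band
    return ALPHA_OVER_HBARC * q_mean * trans * refl * sin2_tc * bandwidth

def detector_radius(r_m, n=1.27):
    """Eq. (1): detection-surface radius for mirror radius r_m."""
    s2 = 1.0 - 1.0 / n ** 2              # sin^2 of the saturated angle
    return 0.5 * r_m * (np.sqrt(1.0 + 9.0 * s2 / 16.0) + 3.0 * s2 / 8.0) \
           / (1.0 - 3.0 * s2 / 16.0)

print(photoelectrons_per_cm() / 10.0)    # ~1.3-1.4 per mm for a saturated track
print(detector_radius(1.0))              # ~0.67, as quoted in the text
print(1.777 / np.sqrt(1.27 ** 2 - 1.0))  # tau threshold, ~2.3 GeV
```

This photoelectron budget is the basis for imaging tau decay kink events.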
Such events have been simulated, taking the tau and muon track parameters from a detailed simulation of quasielastic interactions of a neutrino beam with the CNGS energy spectrum. In quasielastic events $`\nu _\tau \mathrm{n}\to \tau ^{-}\mathrm{p}`$ the only other track is a proton, which is below threshold for producing Cherenkov light if it has momentum less than 1.2 GeV: this is the case for 85% of the simulated events. For this simple simulation, multiple scattering of the tracks was ignored, and photons were generated along the track length in the radiator according to the distribution shown in Fig. 1 (b), calculating their Cherenkov angle according to the dispersion curve in Fig. 1 (a). A typical event is shown in Fig. 3 (a), where the tau decay length was 1.5 mm and the kink between tau and muon was 100 mrad. The signature of such decays will thus be a densely populated ring from the muon, accompanied by an offset low-intensity ring from the tau. In the case of $`\tau ^{-}\to \mathrm{e}^{-}\nu _\tau \overline{\nu }_\mathrm{e}`$ decays, the electron will shower in the $`\mathrm{C}_6\mathrm{F}_{14}`$, giving a more diffuse ring than for the muon, making the separation of the tau hits more difficult. For the single-prong hadronic decay (corresponding to about half of the tau decays) there is a high probability of the hadron escaping without nuclear interaction: for the radiator interaction length of about 55 cm, a hadronic track of 25 cm length has a 63% probability of not interacting; these events may therefore also be useful. The angular distribution of the tau tracks relative to the incoming neutrino direction in the simulated events is shown as the solid line in Fig. 4 (a); as can be seen it is strongly peaked in the forward direction, with a mean of only 40 mrad, whilst the distribution of the kink angle between the tau and muon tracks is broader (dashed in the figure), with a mean of 150 mrad. The momentum distribution of the taus is shown in Fig. 4 (b): all are above threshold for generating Cherenkov light, which is 2.3 GeV for the tau in $`\mathrm{C}_6\mathrm{F}_{14}`$. The distribution of the number of detected photoelectrons from the tau track is shown in Fig. 4 (c). About 60% of the events have 6 or more detected hits, which should be sufficient to recognize the ring. However, the selection of tau events can use not only the signature of the offset low-intensity ring, but also the measurement of the average Cherenkov angle of the associated photons. For this, good resolution is required to positively identify the tau by separating its ring radius from that expected for a saturated track (as would be the case for lighter particles: e, $`\mu `$ or $`\pi `$), as well as to distinguish the photons from different tracks. A tau with the typical momentum of 20 GeV emits Cherenkov light at an angle which is 5 mrad less than that of a fully relativistic particle. The Cherenkov angle resolution due to dispersion in the radiator has an RMS of 10 mrad per photoelectron, corresponding to 0.7 cm on the detector surface. A detector granularity of $`2\times 2\,\mathrm{cm}^2`$ is therefore suitable, to avoid limiting the resolution. Because of its short decay length, there is no smearing due to emission-point uncertainty for photons from the tau. For longer tracks, such as the muon in Fig. 3 (a), there is a noticeable effect from spherical aberration, leading to a tail of photons on the outside of the ring; this, however, tends to be on the side away from the tau ring, and so should not degrade the pattern recognition.
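The separation significance underlying Fig. 4 (d) follows from simple kinematics; the following sketch illustrates the calculation. The Gaussian error model for the mean ring radius and the example momentum and photoelectron count are our assumptions.

```python
import numpy as np

N_REFRACT = 1.27
SIGMA_THETA = 0.010          # rad per photoelectron, from dispersion

def cherenkov_angle(p, m, n=N_REFRACT):
    """Cherenkov angle (rad) for momentum p and mass m, both in GeV."""
    beta = p / np.hypot(p, m)            # p / E
    return np.arccos(1.0 / (n * beta))

def separation_sigma(p_tau, n_pe):
    """N_sigma between the tau ring and a saturated (beta ~ 1) ring."""
    theta_sat = np.arccos(1.0 / N_REFRACT)
    theta_tau = cherenkov_angle(p_tau, m=1.777)
    # The mean of n_pe photons is measured to SIGMA_THETA / sqrt(n_pe)
    return (theta_sat - theta_tau) * np.sqrt(n_pe) / SIGMA_THETA

# A 20 GeV tau sits ~5 mrad below saturation, as quoted in the text;
# with ~20 photoelectrons this corresponds to N_sigma ~ 2.
print(1e3 * (np.arccos(1.0 / N_REFRACT) - cherenkov_angle(20.0, 1.777)))
print(separation_sigma(20.0, 20))
```

We now return to the simulated event display.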
The result of pixellization of the detector plane is shown for the same event in Fig. 3 (b). The determination of the production point of the tau can be achieved by localizing the muon track as it leaves the detector module, using a tracking detector. The Cherenkov ring of the muon gives a very precise measurement of its angle, and the integrated number of photoelectrons detected on the ring is proportional to the path length. The tau production point should be localized in this way to a precision in space of order 1 cm, more than adequate for the Cherenkov angle calculation. The average Cherenkov angle is determined for all photons from the tau in each event, and compared to the value expected for a saturated ring. The significance of the separation, expressed as the number of sigma $`N_\sigma `$ between the tau hypothesis and the saturated Cherenkov angle, is shown in Fig. 4 (d). Of course, the tau momentum is not fully reconstructed; nevertheless, the measured muon momentum will provide a lower limit on the tau momentum, and for the typical muon momenta observed all light particle types would give a saturated ring. About half of the tau tracks have significant separation ($`N_\sigma >2`$), with 30% having $`N_\sigma >3`$. The significance could be increased by improving the resolution with a narrower bandwidth of photon energies, at the cost of reducing the total number of photoelectrons observed; the optimal cut will depend on the level of background that needs to be rejected. The performance will also be reduced somewhat due to confusion with overlapping rings from other tracks in the event. In particular, for the muon decays, a kink angle greater than about 40 mrad will be required to separate the tau and muon images; this occurs in about 80% of events. Detailed study of the loss due to pattern recognition awaits a more complete simulation of the events. ## 3 Possible implementation To keep high detection efficiency it is advantageous to cover the detection surface of a module with a single detector. Since the mass of radiator that is imaged by a detector scales as the cube of the detector diameter, the largest possible detectors are desirable to limit the number required. A 1 m diameter hybrid photodiode (HPD) detector has been proposed for AQUA-RICH . These devices combine the photocathode and focussing of vacuum photodetectors with the spatial and energy resolution of silicon detectors. They have been the subject of an intense program of R&D for the RICH detectors of the LHCb experiment , and one of the devices developed has 2048 channels in a 5<sup>′′</sup> diameter (127 mm) envelope . These tubes are fabricated at CERN with a bialkali ($`\mathrm{K}_2\mathrm{CsSb}`$) photocathode.<sup>2</sup><sup>2</sup>2Note that Hamamatsu is advertising an HPD with GaAsP photocathode that achieves 45% quantum efficiency at 500 nm: such performance could double the number of detected photoelectrons from the tau. A recent test-beam image from one of them is shown in Fig. 5 (a). Envelopes for a 10<sup>′′</sup> version of this tube have recently been manufactured, and a 20<sup>′′</sup> version is already foreseen (the photomultipliers used by Super-Kamiokande are also of 20<sup>′′</sup> diameter). Extrapolation to a 40<sup>′′</sup> (1 m) diameter tube appears feasible. With 2048 channels, the effective pixel size at the photocathode would be $`2\times 2\mathrm{cm}^2`$, ideal for the present application. 
The excellent energy resolution makes photon counting straightforward in these tubes, as illustrated in Fig. 5 (b). With such an HPD as the photodetector, the layout of a module would be as shown in Fig. 2 (b). Neighbouring modules would be connected so that their radiators fill a single volume. Modules could be stacked vertically, with hexagonal close packing, to make a wall. Each wall would then be followed by a tracking station, and this structure repeated as often as necessary to provide the detector mass required. Sixteen walls, each of 61 modules, would provide a kiloton mass, as illustrated in Fig 6 (a). Interleaving of toroidal magnets would allow the muon momentum and charge to be determined. The first tracking station would act as a veto against charged particles entering the apparatus from upstream, whilst the last station is separated by sufficient lever arm to provide a measurement of the track angle after the last magnet. The required number of magnet and tracking stations would clearly be a matter for detailed optimisation. A more compact detector would be possible if a radiator of higher density was available. However, other possibilities that have been investigated such as lead glass, whilst being suitably dense, also have a much higher refractive index, so the volume imaged per detector is reduced (by Eq. 1). Furthermore they lack the low chromatic error and large photon bandwidth of $`\mathrm{C}_6\mathrm{F}_{14}`$. Nevertheless, a suitable glass may still be found. If the use of individual detectors for each module is abandoned, then the module size could increase. The extreme case would be a large volume of radiator limited by the dimensions of the experimental hall: for example, a spherical mirror of radius 9 m with a radiator length of 3 m and an array of detectors covering the upstream surface. The photodetector coverage would be a little lower, due to the packing of the tubes, and the transparency of $`\mathrm{C}_6\mathrm{F}_{14}`$ over such a long radiator length would need to be studied. Also the large number of photons from a muon track would give strong constraints on the mirror quality, to avoid a tail of poorly reflected photons obscuring the tau signal. The advantages are the significantly reduced number of channels required, and the possibility of using standard photodetectors. One such module would have a radiator mass of 240 t, so could replace a series of four walls of modules in the previous layout, as illustrated in Fig. 6 (b). The optimal choice may lie somewhere between these two limits. ## 4 Conclusions A novel concept for detection of tau neutrinos has been presented, through their charged-current interaction in $`\mathrm{C}_6\mathrm{F}_{14}`$ liquid to give a tau lepton, that produces sufficient Cherenkov light for a ring image to be formed. In about half of the events in a simple simulation a positive identification of the tau can be achieved through the measurement of the average Cherenkov angle of the detected photons. The signature, for $`\tau \mu `$ decays, is of a densely populated ring from the muon, accompanied by an offset low intensity ring from the tau. Investigation of the pattern recognition issues, including the effects of tracks from nuclear breakup in deep-inelastic interactions, will await a more detailed simulation of the experiment. 
Similarly, possible background sources would need to be addressed, both technological (from mirror imperfections, or backscattering from the silicon of the HPD) and from physics (such as the production of delta rays, and nuclear reinteraction). The purpose of this note is to gauge the interest in the detector concept, before embarking on such a programme. ## Acknowledgements It is a pleasure to thank Tom Ypsilantis for inspiration and advice: he suggested the use of $`\mathrm{C}_6\mathrm{F}_{14}`$ as radiator, and pointed me to Eq. 1. Thanks also to Ioannis Papadopoulous and Pietro Antonioli, who provided the simulated events used here, and to Christian Joram for the HPD information.
no-problem/9910/astro-ph9910289.html
ar5iv
text
# Nucleosynthesis in evolved stars with the NACRE11footnote 1Nuclear Astrophysical Compilation of REaction rates (Angulo et al. 1999) compilation ## 1 Nucleosynthesis and abundance anomalies in RGB stars Proton capture nucleosynthesis inside the CNO, NeNa and MgAl loops, is advocated to account for abundances anomalies observed at the surface of GCRGs (Globular Cluster Red Giants), either in the context of the primordial scenario or evolutionary scenario (see Sneden and Charbonnel et al. in this volume). We focus here on the evolutionary scenario in order to determine to what extent it can account for the observations from a nuclear point of view. We present results for a $`0.83\mathrm{M}_{}`$, \[Fe/H\] = -1.5 model which is typical of a RGB star in the globular cluster M13. For our calculations we use the nuclear reaction rates recommended by the NACRE consortium (Angulo et al. 1999). This compilation provides revised nuclear rates and, for the first time, gives the uncertainties associated to these rates which we explored with a parametric code of nucleosynthesis. ### 1.1 Influence of the NACRE reaction rates on the abundance profiles In fig.1, we present a comparison between the NACRE rates and earlier ones (Caughlan $`\&`$ Fowler 1988, Champagne et al. 1988, Beer et al. 1991, Illiadis 1990, Gorres et al. 1989) for the main nuclear reactions involving $`{}_{}{}^{24}\mathrm{Mg}`$ and $`{}_{}{}^{27}\mathrm{Al}`$. Shaded areas represent uncertainties associated to the NACRE reaction rates. The uncertainty domain can be quite large for some rates at the temperatures typical of the Hydrogen Burning Shell inside an RGB star. This may affect the abundance profiles of some elements in stellar interiors. Figure 2 presents the abundance profiles for CNO, NeNa and MgAl elements between the bottom of the convective envelope ($`\delta `$M = 1) and the base of the HBS ($`\delta `$M = 0), for a star typical of M13 at the tip of the RGB. With the NACRE reaction rates (bold lines), the CNO abundance profiles appear to be very similar to the ones obtained with the reaction rates by Caughlan $`\&`$ Fowler (1988; thin lines). The internal structure of the stellar models thus remains unaffected by using either rates. This enables us to use a parametric code to study the effects of the uncertainties on the reaction rates involved in the NeNa- and MgAl- loops, which appear to be very large in the relevant domains of temperature (fig.1). The profiles of the carbon and oxygen isotopes appear to be fairly well constrained. However quite large uncertainties remain in the profiles (position and abundances) of $`{}_{}{}^{23}\mathrm{Na}`$ and $`{}_{}{}^{25}\mathrm{Mg}`$ which are mainly shifted toward the center or the external layers, and of $`{}_{}{}^{21}\mathrm{Ne}`$, $`{}_{}{}^{22}\mathrm{Ne}`$, $`{}_{}{}^{26}\mathrm{Mg}`$ and $`{}_{}{}^{27}\mathrm{Al}`$. For these isotopes, the maximum and minimum values of their abundances at a given temperature can even change by one order of magnitude. ### 1.2 Impact of the new rates on the evolutionary scenario Exploring these uncertainty domains, we now present the maximum surface variations of sodium, oxygen, aluminium, etc…, that one can expect to find within the framework of the evolutionary scenario. In fig.3 and fig.4 we compare the nucleosynthesis predictions (bold lines) with observations. 
From a nucleosynthesis point of view, enhancements of sodium and depletion of oxygen observed at the surface of many GCRGs can be reproduced assuming that some deep mixing process (which is not taken into account in our model) connects the convective envelope and the regions where the abundance of helium has increased by less than 4$`\%`$. There, within the more external sodium “plateau”, hydrogen is not depleted yet (see fig.2 and Charbonnel et al. in this volume). Figure 2 shows that no change in the $`{}_{}{}^{24}\mathrm{Mg}`$ abundance is noticeable with the NACRE rates in such a star as was already the case with previous rates (Denissenkov et al. 1998, Lnager et al. 1993, Langer $`\&`$ Hoffman 1993). $`{}_{}{}^{24}\mathrm{Mg}`$ requiring higher temperatures to capture protons (T $``$ 80.$`10^6`$ K), an enhancement of $`{}_{}{}^{27}\mathrm{Al}`$ could only be due to $`{}_{}{}^{25}\mathrm{Mg}`$ abundance decrease and to $`{}_{}{}^{26}\mathrm{Mg}`$ increase, but not to $`{}_{}{}^{24}\mathrm{Mg}`$ destruction. On the other hand, without making any particular assumption on the magnesium isotopic ratios in a M13 typical star (M = $`0.83\mathrm{M}_{}`$ and \[Fe/H\] = -1.5) (Shetrone 1996a and references therein), nucleosynthesis can not account for the observed aluminium enhancements and the Na-O anticorrelation at once as shown in fig.4. We confirm that the aluminium abundance anomalies observed by Shetrone(1996) ($`{}_{}{}^{24}\mathrm{Mg}`$ seems to be depleted in Al-rich stars) could be related to primordial contamination. Testing various isotopic ratios for magnesium, it turns out that with $`{}_{}{}^{24}\mathrm{Mg}/^{25}\mathrm{Mg}/^{26}\mathrm{Mg}29.4/58.8/11.8`$, typical of intermediate mass AGB yields (Forestini $`\&`$ Charbonnel 1997), one could expect a more important but yet insuficient production of aluminium. In any case, new determinations of the Mg isotopic ratios in Al-rich RGB stars are necessary to clarify the primordial effects. ## 2 Nucleosynthesis in an AGB star The NACRE consortium also provides reaction rates (and their lower and upper limits) for the combustion of helium. We investigate their influence on the evolution of the composition of the helium shell surrounding the C-O core during the AGB phase. The NACRE rates allowed us to show that the reactions responsible for the production of $`{}_{}{}^{16}\mathrm{O}`$ in intermediate mass AGB are well known and constrained. On the other hand, the rate of the reaction $`{}_{}{}^{22}\mathrm{Ne}`$($`\alpha `$,n)$`{}_{}{}^{25}\mathrm{Mg}`$ is very uncertain (uncertainty domain width : $`10^310^4`$ at $`\mathrm{T}_8`$ = 1 - 2); this prevents from giving good constraints to the production of neutrons in massive AGB stars among other things. We present here the results of the $`\alpha `$-capture nucleosynthesis in the “mean” physical conditions of a thermal pulse inside an AGB, obtained with a simplified parametric code (temperature and density are constant). In particular, we describe in details the variations of the abundances resulting of the two reaction chains involving $`{}_{}{}^{14}\mathrm{N}`$ and $`{}_{}{}^{18}\mathrm{O}`$, and leading to the production of $`{}_{}{}^{25}\mathrm{Mg}`$ through $`{}_{}{}^{22}\mathrm{Ne}`$. During a thermal pulse, ashes from the HBS are melt with helium at average temperature and density of $`\mathrm{T}_8`$ = 2.5 and $`\rho =10^4`$ g/$`\mathrm{cm}^3`$. Under these conditions a very rich and unique nucleosynthesis occurs, as shown in fig.5. 
We focus on the three following reaction chains: $${}_{}{}^{16}\mathrm{O}(\alpha ,\gamma )^{20}\mathrm{Ne}(\alpha ,\gamma )^{24}\mathrm{Mg}$$ (1) $${}_{}{}^{14}\mathrm{N}(\alpha ,\gamma )^{18}\mathrm{F}(\beta )^{18}\mathrm{O}(\alpha ,\gamma )^{22}\mathrm{Ne}(\alpha ,n)^{25}\mathrm{Mg}$$ (2) and $${}_{}{}^{14}\mathrm{N}(\alpha ,\gamma )^{18}\mathrm{F}(\beta )^{18}\mathrm{O}(p,\alpha )^{15}\mathrm{N}(\alpha ,p)^{19}\mathrm{F}(\alpha ,n)^{25}\mathrm{Mg}$$ (3) We can notice that all the reactions involve the capture of an $`\alpha `$-particle, except $`{}_{}{}^{18}\mathrm{O}`$(p,$`\alpha `$)$`{}_{}{}^{15}\mathrm{N}`$. In this case, we will show that (n,p) reactions provide the flux of protons necessary to make the (p,$`\alpha `$) channel efficient for the destruction of $`{}_{}{}^{18}\mathrm{O}`$ in the third chain. Proton and neutron sources during a thermal pulse In fig.5, we present the main nuclear reaction fluxes during the combustion of helium inside the thermal pulse. The 3$`\alpha `$ reaction produces a large amount of $`{}_{}{}^{12}\mathrm{C}`$, which, together with $`{}_{}{}^{13}\mathrm{C}`$ (which comes from H-burning ashes), leads mainly to the synthesis of $`{}_{}{}^{16}\mathrm{O}`$. Indeed, the reactions responsible for the production of $`{}_{}{}^{20}\mathrm{Ne}`$ and $`{}_{}{}^{24}\mathrm{Mg}`$ have sufficiently low rates to keep the abundances of these elements unchanged (reaction chain (1)). The reaction $`{}_{}{}^{13}\mathrm{C}`$($`\alpha `$,n)$`{}_{}{}^{16}\mathrm{O}`$ is the main source of neutrons in the medium at the beginning of the helium flash, as shown in figure 6. This production of neutrons is directly correlated to the abundance of protons through the reaction $`{}_{}{}^{14}\mathrm{N}`$(n,p)$`{}_{}{}^{14}\mathrm{C}`$ at this point in the He-burning phase (see fig.6). Reaction chain (2) is generated by a “$`{}_{}{}^{14}\mathrm{N}`$ reservoir”, which has been built up during the hydrogen combustion through the CNO cycle. Each element involved in this chain is produced and then destroyed by the capture of an $`\alpha `$-particle. Concerning $`{}_{}{}^{18}\mathrm{O}`$, however, we can notice that the (p,$`\alpha `$) channel, even if narrower than the ($`\alpha `$,$`\gamma `$) channel, can not be neglected (reaction chain (3)). This is due to the presence of protons in the medium, that we can impute to both $`{}_{}{}^{14}\mathrm{N}`$(n,p)$`{}_{}{}^{14}\mathrm{C}`$ and $`{}_{}{}^{26}\mathrm{Al}_{}^{\mathrm{g}}`$(n,p)$`{}_{}{}^{26}\mathrm{Mg}`$ reactions. Indeed, an analysis of the fluxes derived from the NACRE reaction rates shows that these two reactions provide an equivalent amount of protons in so far the abundance of $`{}_{}{}^{14}\mathrm{N}`$ is not negligible. We have already mentionned $`{}_{}{}^{13}\mathrm{C}`$($`\alpha `$,n)$`{}_{}{}^{16}\mathrm{O}`$ as main supplier of neutrons in the early stages of He-burning, and we can add $`{}_{}{}^{22}\mathrm{Ne}`$($`\alpha `$,n)$`{}_{}{}^{25}\mathrm{Mg}`$, which takes over at more advanced stages (see fig.6). ## References > Angulo C., Arnould M., Rayet M., et al. (the NACRE collaboration), 1999, Nucl. Phys., A656, 3 > > Beer H., Voos F., Winters R.R., 1991 preprint > > Caughlan G. and Fowler W.A., 1988, At. Data Nucl. Data Tables, 40, 207 > > Champagne A.E., Cella C.H., Konzes R.T., Loury R.M., Magnus P.V, Smith M.S., Mao Z.Q., 1988, Nucl. Phys. A487 > > Denissenkov P.A. and Denissenkova S.N., 1990, Soviet Astron. Lett., 16, 275 > > Denissenkov P.A., Da Costa G.S., Norris J.E. 
and Weiss A., 1998, A$`\&`$A, 333, 926-941 > > Forestini M. and Charbonnel C., 1997, A$`\&`$A Suppl. Ser., 123, 241 > > Gorres J., Wiescher M., Rolfs C., 1989, ApJ 343, 365 > > Illiadis C., 1990, Nucl. Phys. A512, 509 > > Langer G.E., Hoffman R. and Sneden C., 1993, PASP, 105, 301-307 > > Langer G.E. and Hoffman R., 1995, PASP, 107, 1177-1182 > > Shetrone M.D., 1996a, AJ 112, 1517 > > Timmermans R., Becker H.W., Rolfs C., Schroder U., Trautvetter H.P., 1988 Nucl. Phys. A447, 105
no-problem/9910/cond-mat9910517.html
ar5iv
text
# Search for two-scale localization in disordered wires in a magnetic field \[ ## Abstract The supersymmetry technique \[A. V. Kolesnikov and K. B. Efetov, Phys. Rev. Lett. 83, 3689 (1999)\] predicts a two-scale behavior of wavefunction decay in disordered wires in the crossover regime from preserved to broken time-reversal symmetry. We have tested this prediction by a transmission approach, relying on the Borland conjecture that relates the decay length of the transmittance to the decay length of the wavefunctions. Our numerical simulations show no indication of two-scale behavior. \] In a remarkable paper , Kolesnikov and Efetov have predicted that the decay of wavefunctions in disordered wires is characterized by two localization lengths, if time-reversal symmetry is partially broken by a weak magnetic field. Using the supersymmetry technique it was demonstrated that the far tail of the wavefunctions decays with the length $`\xi _2`$ characteristic for completely broken time-reversal symmetry—even if the flux through a localized area is much smaller than a flux quantum. At shorter distances the decay length is $`\xi _1=\frac{1}{2}\xi _2`$. It was suspected that previous studies by Pichard et al. found single-scale decay because of the misguiding theoretical expectation of such behavior. This expectation was also the basis for the interpretation of the experiments by Khavin, Gershenson, and Bogdanov on submicron-wide wires. The prediction of Kolesnikov and Efetov calls for a test by means of a dedicated experiment or computer simulation. It is the purpose of this work to provide the latter. We target the key feature of the two-scale localization phenomenon, which is the doubling of the asymptotic decay length at infinitesimally weak magnetic fields. Our numerical simulations are based on a transmission approach. We rely on the Borland conjecture (believed to be true generally ), that relates the asymptotic decay of the transmittance $`T`$ with increasing wire length $`L`$ to the asymptotic decay of the wavefunction $`\psi (L)`$. According to the Borland conjecture, the Lyapunov exponent $`\alpha =lim_L\mathrm{}\frac{1}{2}L^1\mathrm{ln}T`$ is identical to the inverse localization length $`\xi ^1=lim_L\mathrm{}L^1\mathrm{ln}|\psi (L)|`$. Moreover, $`\xi `$ and $`\alpha `$ are self-averaging, meaning that the statistical fluctuations become smaller and smaller as $`L\mathrm{}`$. Our numerical simulations show that the crossover from $`\xi =\xi _1`$ to $`\xi =\xi _2`$ does not occur until the flux $`\mathrm{\Phi }_\xi `$ through a wire segment of length $`\xi _1`$ is of the order of a flux quantum $`\mathrm{\Phi }_0=h/e`$. For our longest wires ($`L150\xi _1`$) the crossover according to Ref. should have occurred at $`\mathrm{\Phi }_\xi /\mathrm{\Phi }_0\mathrm{exp}(L/8\xi _1)10^8`$. We consider various possible reasons for the disagreement with Ref. (finite number of modes, anomalously localized states), but believe that none of these provides a satisfactory explanation. Our first set of results is obtained from the numerical calculation (by the technique of recursive Green functions ) of the transmission matrix $`t`$ for a two-dimensional Anderson Hamiltonian with on-site disorder. In units of the lattice constant $`a1`$, the width of the wire is $`W=13`$ and the wavelength of the electrons is $`\lambda =5.1`$, resulting in $`N=5`$ propagating modes through the wire. 
The localization lengths $`\xi _1=(N+1)l`$ and $`\xi _2=2Nl`$ are determined by the scaling parameter $`l`$ of quasi-one dimensional localization theory, which differs from the transport mean-free path by a coefficient of order unity . The average of the transmittance $`T=\text{tr}tt^{}`$ in the metallic regime, fitted to $`T=N(1+L/l)^1`$, yields $`l=65`$. This gives a localization length $`\xi _1=390`$ for preserved time-reversal symmetry (symmetry index $`\beta =1`$) and a localization length $`\xi _2=650`$ for broken time-reversal symmetry ($`\beta =2`$). Fig. 1 shows the ensemble-averaged logarithm of the transmittance $`\mathrm{ln}T`$ as a function of wire length $`L`$ for various values of the magnetic field $`B`$ (or flux $`\mathrm{\Phi }_\xi =W\xi _1B`$). We find a smooth transition between the theoretical expectations for preserved and broken time-reversal symmetry. Most importantly, we find an asymptotic slope $`s(B)=lim_L\mathrm{}L^1\mathrm{ln}T`$ that interpolates smoothly between the values $`s=2/\xi _1`$ for $`B=0`$ and $`s=2/\xi _2`$ for large $`B`$. There is no indication of a crossover to the slope $`s=2/\xi _2`$ for smaller values of $`B`$, even for very long wires ($`L150\xi _1`$). According to the theory of Ref. , the crossover should occur at a length $`L_{\mathrm{cross}}`$ given by $$L_{\mathrm{cross}}/\xi _1=8\mathrm{ln}(\sqrt{12}\mathrm{\Phi }_0/4\pi \mathrm{\Phi }_\xi )+𝒪(1),$$ (1) which is well within the range of our simulations ($`L_{\mathrm{cross}}14\xi _1`$ for $`\mathrm{\Phi }_\xi 0.05\mathrm{\Phi }_0`$). The absence of two-scale behavior in the transmittance of an individual, arbitrarily chosen realization is demonstrated in the inset of Fig. 1, for $`\mathrm{\Phi }_\xi =\frac{1}{2}\mathrm{\Phi }_0`$. The self-averaging property of the Lyapunov exponent is evident. The asymptotic decay length $`\xi (B)=2/s(B)`$ is plotted versus magnetic field in Fig. 2, together with the weak-localization correction $`\delta T=T(B=\mathrm{})T(B)`$ at $`L=\xi _1`$. For both quantities, breaking of time-reversal symmetry sets in when $`\mathrm{\Phi }_\xi `$ is comparable to $`\mathrm{\Phi }_0`$. The transition from $`\beta =1`$ to $`\beta =2`$ is completed for $`\mathrm{\Phi }_\xi 100\mathrm{\Phi }_0`$. Our second set of results is obtained from a computationally more efficient model of a disordered wire, consisting of a chain of chaotic cavities (or quantum dots) with two leads attached on each side. This so-called ‘domino’ model is similar to Efetov’s model of a granulated metal and to the Iida-Weidenmüller-Zuk model of connected slices . The length $`L`$ is now measured in units of cavities, and the mean free path $`l=1`$. The scattering matrices of each cavity are randomly drawn from an ensemble (proposed by Życzkowski and Kuś ) that interpolates (by means of a parameter $`\delta `$) between the circular orthogonal ($`\beta =1`$, $`\delta =0`$) and unitary ($`\beta =2`$, $`\delta =1`$) ensembles of random-matrix theory. The relationship between $`\delta `$ and $`\mathrm{\Phi }_\xi /\mathrm{\Phi }_0`$ is linear for $`\delta 1`$. We increased the number of propagating modes to $`N=50`$, because it is conceivable that the two-scale localization becomes manifest only in the large $`N`$-limit, or that only in this limit the critical flux $`\mathrm{\Phi }_\xi `$ for the transition from $`\xi _1`$ to $`\xi _2`$ becomes $`\mathrm{\Phi }_0`$. (In the experiments of Ref. $`N10`$, so our simulations are in the experimentally relevant range of $`N`$.) 
Because of the much larger value of $`N`$, we restricted ourselves for larger values of the magnetic flux to $`L25\xi _1`$, which should be sufficient to observe the localization length $`\xi _2`$ for $`\mathrm{\Phi }_\xi /\mathrm{\Phi }_010^2`$. For smaller values of the flux, we increased the wire length to $`L100\xi _1`$. The data is presented in Fig. 3. It is qualitatively similar to the results for the $`N=5`$ Anderson model. Instead of two-scale behavior, we only see a single decay length which crosses over smoothly from $`\xi _1`$ to $`\xi _2`$ with increasing $`\delta `$. Again, the crossover of $`\xi `$ coincides with the crossover of the weak-localization correction, so there is no anomalously small crossover flux for the localization length. The logarithmic average $`\mathrm{ln}T`$ is the experimentally relevant quantity since it is representative for a single realization (see Fig. 1, inset). The average transmittance $`T`$ itself is not representative, because it is dominated by rare occurrences of anomalously localized states . Since Kolesnikov and Efetov studied the average of wavefunctions themselves, rather than the average of logarithms of wavefunctions, it is conceivable that their findings are the result of such rare occurrences. For completely broken or fully preserved time-reversal symmetry the average transmittance is given by $$\mathrm{ln}T=L/2\xi _\beta \frac{3}{2}\mathrm{ln}L/\xi _\beta +𝒪(1).$$ (2) The order 1 terms are also known and contribute significantly for $`L30\xi _1`$. (This is the numerically accessible range, because anomalously localized states become exponentially rare with increasing wire length.) We have plotted the full expressions in Fig. 4 (dashed curves), together with the numerical data for the $`N=5`$ Anderson model. Again we find a smooth crossover between preserved and broken time-reversal symmetry. There is no transition with increasing wire length to a behavior indicative of completely broken time-reversal symmetry, even though the flux $`\mathrm{\Phi }_\xi `$ is much larger than required \[according to Eq. (1)\] to observe this crossover for the wavefunctions. In conclusion, we have presented a numerical search for the two-scale localization phenomenon predicted by Kolesnikov and Efetov , with negative result: The asymptotic decay length of the transmittance is found to be given by $`\xi _1`$ and not by $`\xi _2`$, as long as the flux through a localization area is small compared to the flux quantum. How can one reconcile this numerical finding with the result of the supersymmetry theory? We give three possibilities. (1) One might abandon the Borland conjecture and permit the asymptotic decay length of the transmittance (Lyapunov exponent) to differ from the asymptotic decay length of the wavefunction (localization length). Since the Borland conjecture has been the cornerstone of localization theory for more than three decades, this seems a too drastic solution. (2) One could argue that the wires in the simulation are too narrow or too short — although they are in the experimentally relevant range of $`N`$ and $`L`$. (3) One could attribute the two-scale localization phenomenon to anomalously localized states, that are irrelevant for a typical wire. We can not fully exclude these two remaining possibilities, but they would severely diminish the experimental relevance of the phenomenon. A discussion with P. G. Silvestrov motivated us to look into this problem. We acknowledge helpful correspondence with A. V. 
Kolesnikov and support by the Dutch Science Foundation NWO/FOM.
no-problem/9910/hep-th9910212.html
ar5iv
text
# JHEP 02(2000)042, hep-th/9910212 On the generalized Legendre transform and monopole metrics ## 1 Introduction Monopole moduli spaces are hyperkähler and, therefore, have a twistor description. Such a description is given in . More recently, Ivanov and Roček used the generalized Legendre transform of to construct the metric on the 2-monopole moduli space . The relationship between these two twistor constructions is clarified in this paper. The twistor space of a hyperkähler manifold, $`M`$, is a trivial fiber bundle, $`Z=M\times ^1`$, with a holomorphic symplectic form $`\omega `$, which is an $`𝒪(2)`$ section over $`^1`$. $`^1`$ is covered by two affine patches, $`U_0`$ and $`U_1`$. If $`\zeta `$ is the usual projective coordinate on $`U_0`$, then $`\omega `$ is given on the fiber over $`\zeta `$ by $`\omega =(\omega _2+i\omega _3)+2\zeta \omega _1\zeta ^2(\omega _2i\omega _3)`$ where $`\omega _1`$, $`\omega _2`$ and $`\omega _3`$ are the covariantly constant 2-forms which hyperkählerity implies exist on $`M`$. The generalized Legendre transform construction shows how the Kähler potential may be calculated, if $`\omega `$ is assumed to have a certain form. In subsection 1.1 of this introductory section, there is a brief review of this construction. This is followed, in subsection 1.2, by a brief review of the twistor theory of monopoles and the related twistor theory of monopole moduli spaces. In section 2, this twistor theory is re-expressed as a generalized Legendre transform. The Legendre transform constraints are then the Ercolani-Sinha conditions . ### 1.1 Twistors and the generalized Legendre transform The generalized Legendre transform construction of concerns twistor spaces with $`k`$ intermediate holomorphic projections $`Z𝒪(2n_j)^1`$, where $`j=1\mathrm{}k`$ and $`n_j`$ are integers. In the example of interest in this paper $`n_j=j`$ and for ease of notation attention is restricted to that case. This requirement is equivalent to the existence of $`k`$ coordinates $`\alpha ^j(\zeta )`$ on $`Z`$, so that $`\alpha ^j(\zeta )`$ is a degree $`2j`$ polynomial: $$\alpha ^j=\underset{a=1}{\overset{2j}{}}w_a^j\zeta ^a,$$ (1) satisfying the reality condition $`\alpha ^j(\zeta )=(1)^j\zeta ^{2j}\overline{\alpha ^j(1/\overline{\zeta })}.`$ The construction is derived from the patching formulae relating quantities over $`U_0`$ and $`U_1`$. The coordinate on $`U_1`$ is given by $`\stackrel{~}{\zeta }=1/\zeta `$. Since $`\omega `$ is an $`𝒪(2)`$ line bundle, it is given on $`U_1`$ by $`\stackrel{~}{\omega }`$, where $$\stackrel{~}{\omega }=\frac{1}{\zeta ^2}\omega $$ (2) on $`U_0U_1`$. Similarly, the $`\alpha ^j`$ coordinates are related by $$\stackrel{~}{\alpha }^j=\frac{1}{\zeta ^{2j}}\alpha ^j.$$ (3) This means that, if $`(\alpha ^j,\xi ^j,\zeta )`$ are coordinates for the whole of $`Z`$, such that $$\omega =\underset{j=1}{\overset{k}{}}d\alpha ^jd\xi ^j,$$ (4) then, the patching formula for $`\xi ^j`$ must be $$\stackrel{~}{\xi }^j=\zeta ^{2j2}\left(\xi ^j+\frac{H}{\alpha ^j}\right)$$ (5) for some function $`H(\alpha ^j)`$. The expansion of these coordinates as power series is now considered. The patching formulae will place constraints on the values of certain coefficients in these expansions and these constraints will be unified as constraints on a single function $`F`$. By expanding the symplectic form in $`\zeta `$, it is possible to calculate the Kähler potential for the metric on $`M`$ in terms of $`F`$. 
Assuming that $`\xi ^j`$ is non-singular near $`\zeta =0`$; $$\xi ^j=\underset{n=0}{\overset{\mathrm{}}{}}x_n^j\zeta ^n,\stackrel{~}{\xi }^j=\underset{n=0}{\overset{\mathrm{}}{}}y_n^j\zeta ^n.$$ (6) Using the residue theorem and the patching formula (5), this means that $$x_m^j=\frac{1}{2\pi i}_0\xi ^j\frac{d\zeta }{\zeta ^{m+1}}=\frac{1}{2\pi i}_0\stackrel{~}{\xi }^j\frac{d\zeta }{\zeta ^{m+2j1}}\frac{1}{2\pi i}_0\frac{H}{\alpha ^j}\frac{d\zeta }{\zeta ^{m+1}}$$ (7) where $`0`$ is the small contour surrounding $`\zeta =0`$. The integral of $`\stackrel{~}{\xi }^j`$ does not give the coefficient $`y_{2m2j}^j`$, because the contour around $`\zeta =0`$ may enclose branch cuts. It is assumed that the contribution from these cuts can be expressed as an integral of some new function, $`H^{}`$, around some contour, $`c`$. This integral is the effect of moving the contour of the $`\stackrel{~}{\xi }^j`$ integral from a small contour around zero to a small contour around infinity. This technique is justified by example and is dealt with carefully in the specific case considered below. Thus, $$x_m^j=y_{2m2j}^j\frac{1}{2\pi i}_c\frac{H^{}}{\alpha ^j}\frac{d\zeta }{\zeta ^{m+1}}\frac{1}{2\pi i}_0\frac{H}{\alpha ^j}\frac{d\zeta }{\zeta ^{m+1}}.$$ (8) A function $`F`$ is defined as $$F=\frac{1}{2\pi i}_cH^{}\frac{d\zeta }{\zeta ^2}\frac{1}{2\pi i}_0H\frac{d\zeta }{\zeta ^2}.$$ (9) By the chain rule $$\frac{F}{w_n^j}=\frac{1}{2\pi i}_c\frac{H^{}}{\alpha ^j}\zeta ^n\frac{d\zeta }{\zeta ^2}\frac{1}{2\pi i}_0\frac{H}{\alpha ^j}\zeta ^n\frac{d\zeta }{\zeta ^2}$$ (10) Therefore, from (8) $$\frac{F}{w_0^j}=x_1^j,\frac{F}{w_1^j}=x_0^j$$ (11) and, for $`0<a<2j2`$, $$\frac{F}{w_a^j}=0.$$ (12) The symplectic form can be expanded in $`\zeta `$ to determine $`\omega _1`$ on the fiber above $`\zeta =0`$. The coordinates on this fiber are $`\alpha ^j(0)=w_0^j`$ and $`\xi ^j(0)=x_0^j`$. It can then be shown that the Kähler form for the hyperkähler manifold, $`M`$, is given by $$K(w_0^j,x_0^j)=F(w_0^j,w_1^j)\underset{j=1}{\overset{k}{}}x_0^jw_1^j\underset{j=1}{\overset{k}{}}\overline{x}_0^j\overline{w}_1^j$$ (13) where $`w_1^j`$ is related to $`x_0^j`$ by (11) . This is the generalized Legendre transform construction. ### 1.2 Twistors and monopoles The twistor theory of monopoles is described in . A $`k`$-monopole is equivalent to a curve, $`S`$, in $`T^1`$ of the form $$P(\eta ,\zeta )=\eta ^k+\underset{j=1}{\overset{k}{}}\alpha ^j\zeta ^{kj}=0$$ (14) where $`\alpha ^j`$ is, as before, a degree $`2j`$ polynomial satisfying the reality condition. $`\eta `$ is the usual coordinate on the fiber of $`T^1^1`$. $`S`$ is called the spectral curve. In addition to the reality condition on $`\alpha ^j`$, it must also satisfy a number of algebraic-geometric conditions. In particular it is required that the $`L^2`$ bundle over the spectral curve must be trivial. The patches $`U_0`$ and $`U_1`$ on $`^1`$ lift to patches on $`T^1`$. These, in turn, give patches on the spectral curve: these will also be called $`U_0`$ and $`U_1`$. The triviality of the $`L^2`$ bundle over the spectral curve, means that there is a section given by two nowhere vanishing holomorphic functions $`f_0`$ on $`U_0`$ and $`f_1`$ on $`U_1`$ satisfying $$f_0(\eta ,\zeta )=e^{\frac{2\eta }{\zeta }}f_1(\eta ,\zeta )$$ (15) on the intersection $`U_0U_1`$. In , explicit integral conditions are given for the existence of such a section. 
There must exist a 1-cycle $`c`$ so that any global holomorphic 1-form, $`\mathrm{\Omega }`$, satisfies $$_c\mathrm{\Omega }=2\underset{j=1}{\overset{k}{}}\eta _j(0)g_j$$ (16) where $`\eta _j(0)`$ is the value of $`\eta `$ at $`0_j`$, the point on the $`j`$th sheet above $`\zeta =0`$. $`g_j`$ is defined by writing $`\mathrm{\Omega }=g_jd\zeta `$ at $`0_j`$. These are the Ercolani-Sinha conditions. $`c`$ must be primitive if the $`L^s`$ bundle on $`S`$ is nontrivial for $`0<s<2`$: another necessary condition. In the calculation of section 2, the relationship between $`c`$ and the $`L^2`$ section will be important. If $`\{a_1,\mathrm{},a_g,b_1,\mathrm{},b_g\}`$ is a canonical homology basis, $$c=\underset{r=1}{\overset{g}{}}(n_ra_r+m_rb_r)$$ (17) where $`n_r=\frac{1}{2\pi i}_{b_r}d\mathrm{log}f_1`$ and $`m_r=\frac{1}{2\pi i}_{a_r}d\mathrm{log}f_1`$ are integers. The twistor data for monopoles can also be used to derive a twistor space for the $`k`$-monopole moduli space, $`M_k`$. Coordinates for $`M_k`$ are provided by the rational map description . This description relates a $`k`$-monopole to a degree $`k`$ based rational map: $`p(z)/q(z)`$. $`q(z)`$ is a monic polynomial of degree $`k`$ and $`p(z)`$ is a polynomial of degree $`k1`$, which has no factors in common with $`q(z)`$. Following Hurtubise , the rational map for a monopole can be constructed from the spectral curve and the trivialization of the $`L^2`$ bundle. A direction, $`\zeta `$, is chosen and $`q(z;\zeta )`$ $`=`$ $`P(z,\zeta )`$ $`p(z;\zeta )`$ $``$ $`f_0(z,\zeta )\text{mod}q(z).`$ (18) Now, if $`q(z)`$ has distinct roots, $`\eta _1,\mathrm{},\eta _k`$, coordinates for $`M_k`$ are given by $`(\eta _1,\mathrm{},\eta _k,p(\eta _1),\mathrm{},p(\eta _k))`$. Atiyah and Hitchin point out in , that the symplectic form $$\omega =\underset{i=1}{\overset{k}{}}\frac{dp(\eta _i)d\eta _i}{p(\eta _i)}$$ (19) has the property that $`\stackrel{~}{\omega }=\zeta ^2\omega `$ and is, therefore, an $`𝒪(2)`$ section over $`^1`$. In the next section, the relationship between the rational map and the spectral curve is exploited to clarify the relationship between this symplectic form and the generalized Legendre transformation. ## 2 The generalized Legendre transform and monopole moduli spaces The formula for the spectral curve (14) defines $`\eta _i`$ as roots of a polynomial equation whose coefficients are $`𝒪(2j)`$ sections. Because of this, $`\omega `$ can be rewritten $$\omega =\underset{i=1}{\overset{k}{}}\underset{j=1}{\overset{k1}{}}\frac{dp(\eta _i)}{p(\eta _i)}\frac{\eta _j}{\alpha ^j}d\alpha ^j=\underset{j=1}{\overset{k1}{}}d\xi ^jd\alpha ^j$$ (20) where $$\xi ^j=\underset{i=1}{\overset{k}{}}\frac{\eta _i}{\alpha ^j}\chi (\eta _i)$$ (21) and $`\chi (\eta ,\zeta )=\mathrm{log}p(\eta ;\zeta )`$. $`\xi ^j`$ is not defined on the spectral curve, since $`\chi `$ is not. In general, $`_a𝑑\chi 0`$ for a nontrivial cycle $`a`$. $`\chi `$ can be defined by cutting the surface. The 1-forms $`d\chi `$ and $`d\xi ^j`$ are defined on the uncut surface. 
The patching formula for $`\chi `$ follows from the $`L^2`$ patching formula (15), since $`\stackrel{~}{\chi }=\mathrm{log}f_1`$, $$\stackrel{~}{\chi }=\chi +\frac{2\eta }{\zeta }.$$ (22) This, in turn, provides a patching formula for $`\xi ^j`$, $$\stackrel{~}{\xi }^j=\zeta ^{2j2}\left(\xi ^j+2\underset{i=1}{\overset{k}{}}\frac{\eta _i}{\alpha ^j}\frac{\eta _i}{\zeta }\right)=\zeta ^{2j2}\left(\xi ^j+\frac{}{\alpha ^j}\underset{i=1}{\overset{k}{}}\frac{\eta _i^2}{\zeta }\right).$$ (23) Therefore, $`\xi ^j`$ has a Legendre transform patching formula with $$H=\underset{i=1}{\overset{k}{}}\frac{\eta _i^2}{\zeta }.$$ (24) This is the Hamiltonian function mentioned in . Now, as before, the integral around $`\zeta =0`$ is considered. Because of the particular form of the sum in the expression for $`H`$, the integral on $`^1`$ can be written as an integral on the spectral curve. $$\frac{1}{2\pi i}_0\frac{}{\alpha ^j}H\frac{d\zeta }{\zeta ^{m+1}}=\frac{1}{2\pi i}_0\frac{}{\alpha ^j}\underset{i=1}{\overset{k}{}}\frac{\eta _i^2}{\zeta }\frac{d\zeta }{\zeta ^{m+1}}=\frac{1}{2\pi i}_{_{i=1}^k0_i}\frac{}{\alpha ^j}\frac{\eta ^2}{\zeta }\frac{d\zeta }{\zeta ^{m+1}}$$ (25) where $`0_j`$ is the small contour on the $`j`$th sheet of the spectral curve around the lift of $`\zeta =0`$ to that sheet. Next, the contour in the $`\stackrel{~}{\xi }^j`$ integral must be moved from $`0`$ to $`\mathrm{}`$. This is not difficult if the integral is first rewritten as an integral on the spectral curve: $$\frac{1}{2\pi i}_0\stackrel{~}{\xi }^j\frac{d\zeta }{\zeta ^{m+2j1}}=\frac{1}{2\pi i}_{_{i=1}^k0_i}\frac{\stackrel{~}{\eta }}{\stackrel{~}{\alpha }^j}\stackrel{~}{\chi }\frac{d\zeta }{\zeta ^{m+2j1}}$$ (26) $`\stackrel{~}{\chi }`$ is defined on the $`4g`$-gon, formed by cutting the spectral curve along the canonical homology 1-cycles $`a_r`$ and $`b_r`$ for $`r=1\mathrm{}g`$. Although the spectral curve is obtained from the $`4g`$-gon by identifying appropriate edges, the function $`\stackrel{~}{\chi }`$ does not respect the identifications. In fact, since, for example, $$_{b_1}d\mathrm{log}f_1=2\pi in_1$$ (27) the value of $`\stackrel{~}{\chi }`$, at a point on the $`a_1^1`$ edge, is $`2\pi n_1`$ larger than its value at the corresponding point on the $`a_1`$ edge. This means that $$\frac{1}{2\pi i}_{\text{edge}}f(\zeta )\stackrel{~}{\chi }𝑑\zeta =\underset{r=1}{\overset{g}{}}\left(n_r_{a_r}f(\zeta )𝑑\zeta +m_r_{b_r}f(\zeta )𝑑\zeta \right)=_cf(\zeta )𝑑\zeta $$ (28) where $`f(\zeta )`$ is any function which is well-behaved on the edge and $`c`$ is the special homology cycle mentioned above (17). 
Furthermore, it is easy to see that $$\underset{i=1}{\overset{k}{}}0_i+\underset{i=1}{\overset{k}{}}\mathrm{}_i=\text{edge}$$ (29) and so $`{\displaystyle \frac{1}{2\pi i}}{\displaystyle _0}\stackrel{~}{\xi }^j{\displaystyle \frac{d\zeta }{\zeta ^{m+2j1}}}`$ $`=`$ $`{\displaystyle \frac{1}{2\pi i}}{\displaystyle _{\mathrm{}}}\stackrel{~}{\xi }^j\stackrel{~}{\zeta }^{m+2j3}𝑑\zeta +{\displaystyle _c}{\displaystyle \frac{\stackrel{~}{\eta }}{\stackrel{~}{\alpha }^j}}{\displaystyle \frac{d\zeta }{\zeta ^{m+2j1}}}`$ (30) $`=`$ $`{\displaystyle \frac{1}{2\pi i}}{\displaystyle _{\mathrm{}}}\stackrel{~}{\xi }^j\stackrel{~}{\zeta }^{m+2j3}𝑑\zeta +{\displaystyle _c}{\displaystyle \frac{\eta }{\alpha ^j}}{\displaystyle \frac{d\zeta }{\zeta ^{m+1}}}.`$ Thus, the Legendre transformation of the $`k`$-monopole metric is given by $$F=_c\frac{\eta }{\zeta ^2}𝑑\zeta \frac{1}{2\pi i}_{_{i=1}^k0_i}\frac{\eta ^2}{\zeta ^3}𝑑\zeta .$$ (31) This $`F`$ has also been considered by Roger Bielawski . $`F`$ is composed of integrals on the spectral curve, rather that on $`^1`$ itself. The integrals can be rewritten as integrals on $`^1`$, although they become less succinct. If the integral in the $`k=2`$ case is rewritten in this way the $`F`$ used by Ivanov and Roček to calculate the Atiyah-Hitchin metric is recovered. In that case the two branches over $`\zeta =0`$ differ only by a sign in $`\eta `$ and the special contour, $`c`$, is an equator. In the $`k=2`$ case, the constraint (12) arising in the generalized Legendre transform construction is precisely the one that Hurtubise demonstrates must be satisfied for a spectral curve to ensure triviality of the $`L^2`$ bundle . In fact, it is true for all $`k`$, that the generalized Legendre transformation constraints are the $`L^2`$ triviality conditions (16). This is demonstrated by using the spectral curve equation to rewrite the integrands in the constraint equations. $$\frac{d}{dw_a^j}P(\eta ,\zeta )=\frac{P}{\eta }\frac{\eta }{w_a^j}+\frac{P}{w_a^j}=0$$ (32) implies that $$\frac{\eta }{w_a^j}=\frac{\zeta ^a\eta ^{kj}}{P/\eta }.$$ (33) Therefore, the constraint equation requires that $$\frac{1}{2\pi i}_{_{i=1}^k0_i}\frac{2\eta ^{kj+1}\zeta ^{a2}}{P/\eta }\frac{d\zeta }{\zeta }=_c\frac{\eta ^{kj}\zeta ^{a2}d\zeta }{P/\eta }$$ (34) where $`2a2j2`$, or, put another way $$_c\mathrm{\Omega }^{ja}=\frac{1}{2\pi i}\underset{i}{}_{0_i}\frac{2\eta \mathrm{\Omega }^{ja}}{\zeta }$$ (35) where $$\mathrm{\Omega }^{ja}=\frac{\eta ^{kj}\zeta ^{a2}d\zeta }{P/\eta }$$ (36) is a global holomorphic 1-form. In fact, these 1-forms form a basis for the global holomorphic 1-forms on the spectral curve. There, using the residue theorem on the right-hand side of the equations shows that it is the Ercolani-Sinha constraint (16). It may be noted that the right hand side of this equation is actually zero for $`j1`$ . Thus, the generalized Legendre transform for monopole moduli space can be derived from the twistor description of monopoles. The constraints on $`F`$ are the Ercolani-Sinha constraints. However, it should be emphasized that the Ercolani-Sinha constraints only ensure that the $`L^2`$ bundle is trivial and that the $`L^s`$ bundle is not trivial if $`s<2`$. These are necessary conditions. They are not sufficient. The additional condition requires that $`H^0(S,L^s(k2))=0`$ for $`0<s<2`$. It is possible that this condition may also be interpreted in terms of the generalized Legendre transformation. 
The crucial step in calculating the constraints on $`F`$, is the derivation of the coefficient form of the patching equation by expanding the patching formula and moving the contour in the $`\stackrel{~}{\xi }^j`$ integral. Mimicking this derivation also clarifies the relationship, explained in , between the Ercolani-Sinha and Corrigan-Goddard conditions . ### 2.1 The point dyon limit In this paper $`F`$ has been calculated from the known $`k`$-monopole symplectic form. In Ivanov and Roček found $`F`$ for the 2-monopole metric by making a guess based on the known asymptotic metric and then verifying this guess by explicit calculation of the Kähler form. In fact, the asymptotic metric has been calculated for $`k`$ monopoles by examining the dynamics of point dyons , this approximation was confirmed in . It is a simple matter to derive this asymptotic metric from $`F`$, thereby reversing, for general $`k`$, the original derivation of $`F`$ for $`k=2`$. The spectral curve for a single monopole located at $`(\text{Re}z,\text{Im}z,x)`$ is $$\eta z+2x\zeta +\overline{z}\zeta ^2=0.$$ (37) This is the sphere in $`T^1`$ corresponding to all the lines through the monopole location. The spectral curve for $`k`$-monopoles located at well separated points $`(\text{Re}z_i,\text{Im}z_i,x_i)`$ is approximated with exponential accuracy by the product of spheres $$\underset{i=1}{\overset{k}{}}(\eta \gamma _i)=0$$ (38) where $$\gamma _i=z_i2x_i\zeta \overline{z}_i\zeta ^2.$$ (39) The $`i`$ and $`j`$ spheres touch at two points, the two roots of $`\gamma _i=\gamma _j`$: $$\zeta _{ij}^\pm =\frac{x_{ij}\pm \sqrt{x_{ij}^2+|z_{ij}|^2}}{\overline{z}_{ij}}$$ (40) where $`z_{ij}=z_iz_j`$ and $`x_{ij}=x_ix_j`$. It is known that the special contour $`c`$ must change sign under the reality transformation $`\zeta 1/\overline{\zeta }`$ and so must be a sum of contours which run from $`\zeta _{ij}^{}`$ to $`\zeta _{ij}^+=\zeta _{ji}^{}`$ on sphere $`i`$ and then from $`\zeta _{ji}^{}`$ back to $`\zeta _{ji}^+=\zeta _{ij}^{}`$ on sphere $`j`$. In order for $`F`$ to generate the asymptotic metric $`c`$ must be a sum of all such contours. $`F`$ can be rewritten in terms of integrals on $`^1`$. It is $$F=\underset{ij}{}_{ij}\frac{\gamma _{ij}}{\zeta ^2}𝑑\zeta \underset{i}{}\frac{1}{2\pi i}_0\frac{\gamma _i^2}{\zeta ^3}𝑑\zeta $$ (41) where $`\gamma _{ij}=\gamma _i\gamma _j`$ and the $`ij`$ integral is along the line running from $`\zeta _{ij}^{}`$ to $`\zeta _{ij}^+`$. In order to change the line integrals into contour integrals a $`\mathrm{log}\gamma _{ij}`$ is introduced into the integrand, thus, $$F=\underset{ij}{}\frac{1}{2\pi i}_{ij}\frac{\gamma _{ij}\mathrm{log}\gamma _{ij}}{\zeta ^2}𝑑\zeta \underset{i}{}\frac{1}{2\pi i}_0\frac{\gamma _i^2}{\zeta ^3}𝑑\zeta $$ (42) where the $`ij`$ integral is now around the figure of eight contour enclosing the two zeros of $`\gamma _{ij}`$. This $`F`$ has been discussed in and gives the correct asymptotic metric. ## 3 Conclusions The function $`F`$ appropriate to the generalized Legendre transform construction of multimonopole metrics is calculated by a simple change of variables. This function is a contour integral over the spectral curve. The constraints on $`F`$ are precisely the integral constraints on the spectral curve required to ensure the existence of a trivial $`L^2`$ bundle. In practice these constraints are difficult to apply. The generalized Legendre transform was originally derived from a duality transformation on an N=4 supersymmetric $`\sigma `$-model. 
It would be interesting to understand what relationship this $`\sigma `$-model has to monopoles. ## Acknowledgments I have had many benificial conversations with N.S. Manton and N.M. Romão regarding spectral curves. I also thank N.J. Hitchin and P.M. Sutcliffe for useful remarks. I am grateful to the Fulbright Commission and the Royal Commission for the Exhibition of 1851 for financial support.
no-problem/9910/cond-mat9910065.html
ar5iv
text
# Meissner state in finite superconducting cylinders with uniform applied magnetic field ## I Introduction Important intrinsic parameters of superconductors, such as the lower critical field $`H_{\mathrm{c1}}`$ and the critical current density $`J_c`$, are experimentally obtained by measuring their response to an applied magnetic field. The procedures to obtain these parameters often rely on theoretical approaches developed for infinite samples. When considering realistic finite-size superconductors, important complications arise, so that these methods fail. Even in the case of a uniform field applied to a finite superconductor, in general it is not easy to extract information about the intrinsic parameters of the superconductor since the magnetic response may be strongly dependent on its shape. Demagnetizing effects appearing in finite samples make the internal magnetic field H=B/$`\mu _0`$ in the sample different than the applied one, H<sub>a</sub>. The exact relation between both magnetic vectors is in general unknown. As a consequence of this indetermimation of the local magnetic field in the sample, the estimations of $`H_{\mathrm{c1}}`$ and other parameters are very complicated. Sometimes, this problem is treated by considering a constant demagnetizing factor $`N`$, so that the magnetic field in the sample volume H is assumed to be related to the applied magnetic field H<sub>a</sub> by: $$𝐇=𝐇_aN𝐌.$$ (1) This procedure is correct only for ellipsoidal-shape samples and when the external magnetic field is parallel to one of its principal axis . In all other cases this equation is not valid and it becomes very hard to find a simple relation between the magnetic vectors $`𝐇`$ and $`𝐇_a`$. Moreover, in general the field $`𝐇`$ is related with $`𝐇_a`$ in a different way from point to point in the superconductor. Recently, there have been important theoretical advances in treating the magnetic response of finite superconducting samples. The magnetic response of finite superconductors in the critical state including demagnetizing effects have been recently calculated for strips and cylinders , following previous works on very thin strips and disks . These works deal basically with current and field distributions in superconductors in the mixed state, with critical-state supercurrents penetrating into the bulk of the material. Besides, Chen et al. have calculated demagnetizing factors for cylinders as a function of the length to diameter ratio and for different values of the susceptibility $`\chi `$ (including $`\chi =1`$). However, to our knowledge, there has not been done a systematic study involving the comparison between experiments and theoretical data of the Meissner state in superconducting cylinders. This is equivalent to ask which are the currents that completely shield a cylindrical volume and the resulting magnetization. In this work we systematically investigate the magnetic response of superconducting cylinders in the complete shielding state, by quantitatively studying the effect of demagnetizing fields in their Meissner response. This paper is organized as follows. In Section II we discuss the experimental setup and measured samples. In Section III we present the experimental results obtained from DC magnetization and AC magnetic susceptibility techniques, performed on niobium cylindrical samples. We introduce in Section IV a new approach to study the magnetic response of completely diamagnetic finite cylinders. 
This model allows the interpretation and understanding of our experimental results, which is discussed in Section V. The model enables us to calculate the surface current distributions resulting from the magnetic shielding of the cylinder, as well as the magnetic fields created in the exterior of the samples by the induced supercurrents. The results are discussed in Section VI. Finally, we summarize our conclusions in Section VII. ## II Samples and experimental setup We have performed the magnetic characterization of ten cylindrical niobium polycristalline samples with different values for the ratio $`L/R`$, where $`L`$ and $`R`$ are the length and the radius of the sample, respectively (see Table I). They were obtained from two different pieces of brut niobium. After machined, they were cleaned in ultrasound bath, and later by using a HCl-HNO<sub>3</sub> solution. We have studied two families (with five samples in each one) of cylinders with nominal diameters of 1.94 mm and 2.86 mm, respectively. Samples with the same nominal value for the diameter were cut from the same piece, in cylinders of different lengths. The variation between nominal and real values of the diameter is smaller than 4%. Before performing magnetic measurements, we have determined the quality of all samples through x-ray diffraction (XRD) and scanning eletron microscopy (SEM) analysis. For the XRD we use a SIEMENS D/5000 difractometer, and for the SEM, we use a JEOL JSM-5800/LV microscope. The magnetic characterization was performed through both the magnetization as a function of the external DC magnetic field, $`M(H_a)`$, and the complex AC magnetic susceptibility, as a function of the absolute temperature, $`\chi (T)`$. To perform those experiments we have used a Quantum Design MPMS5 SQUID magnetometer able to operate in the ranges 2 K $`<`$ $`T`$ $`<`$ 400 K, 0.1 A/m $`<`$ $`h`$ $`<`$ 300 A/m, and 1 $`<`$ $`f`$ $`<`$ 1000 Hz, where $`T,`$ $`h`$ and $`f`$ are the absolute temperature, and the amplitude and the frequency of the AC magnetic field, respectively. In all cases, to avoid trapped magnetic flux, samples were zero-field-cooled (ZFC) before each experiment. The magnetic field was always applied along the axis of the cylinders. ## III Experimental results The XRD and SEM analysis confirmed the high quality of the niobium samples, as verified by the absence of impurities and the low density of grain boundaries. Measurements of the complex AC magnetic susceptibility, $`\chi (T)`$=$`\chi ^{}(T)`$\+ $`\chi ^{\prime \prime }(T)`$, were performed with the parameters $`h=80`$ A/m, and $`f=100`$ Hz, and are shown in Fig. 1. These experiments also confirm the quality of our polycristals, by showing a critical temperature $`T_C=9.2`$ K, and sharp superconductiong transitions, with typical width of $`0.2`$ K. This value is considered excellent for a polycristalline sample. These measurements also point out the strong shape effect on $`\chi (T)`$. As can be seen in Fig. 1, the lower the ratio $`L/R`$ is, the larger the shape effect is, evidenced for larger values of the modulus of $`\chi ^{}(T)`$. As mentioned above, we have also verified the magnetic behavior of the cylinders by measuring isothermal $`M(H_a)`$ curves. The measurements were performed at $`T=8`$ K. We show these results in Fig. 2. There, we can see again the strong shape effect on the obtained curves. The lower the ratio $`L/R`$, the higher the value of the initial slope $`M/H`$. 
Also, we can see the shape effect on the point at which the magnetization curve departs from a straight line, which corresponds to the position where the flux starts to penetrate in the superconductor. Thus, flux penetration starts at lower fields for large values of $`L/R`$, as expected. As it was mentioned in the last section, samples with the same nominal diameter were cut from the same piece, in cylinders with different lengths. Since the variation between nominal and real values of the diameter is smaller than 4%, only shape effects will be responsible for the observed difference in their magnetic response. ## IV Description of the model In order to study the Meissner state and analyze the experimental data, we have developed a model to calculate the surface currents that shield any axially symmetric applied magnetic field inside a cylindrical material. This model could be applied, in principle, to both type-I and type-II superconductors below $`H_\mathrm{c}`$ and $`H_{\mathrm{c1}}`$, respectively, and also to good conductors in a high frequency applied magnetic field. It is well known that supercurrents are induced in the superconductors in response to variations of the applied magnetic field. For zero-field cooled type-II superconductors in the Meissner state, in order to minimize the magnetic energy, these supercurrents completely shield the applied magnetic field inside the superconductor. This shielding occurs over the whole sample volume except for a thin surface shell of thickness $`\lambda `$ (the London penetration depth), where supercurrents flow. Our model simulates this process. It will enable us to calculate the current distribution that shields the applied magnetic field, by finding at the currents that minimize the magnetic energy in the system. For simplicity, we will assume that $`\lambda `$ is negligible. In this approximation, the calculated supercurrents flow only in the cylinder surfaces. We consider a cylindrical type-II superconductor of radius $`R`$ and length $`L`$ with its axis in the $`z`$ direction, in the presence of a uniform applied magnetic field $`𝐇_𝐚=H_a\widehat{𝐳}`$. We use cylindrical coordinates $`(\rho ,\phi ,z)`$. Owing to the symmetry of the problem all shielding currents will have azimuthal direction. We divide the superconducting cylinder surfaces into a series of concentric circular circuits in which currents can flow, with no limitation in the value of current. We consider $`n`$ of such circuits in each one of the cylinders ends and $`m`$ circuits in the lateral surface (see Fig. 3). These linear circuits are indexed, as seen in Fig. 3, from $`i=1`$ to $`i=2n+m1`$. The method to obtain the current profile is the following. We start with a zero-field cooled (ZFC) superconductor and set an applied field $`H_a`$. The magnetic flux that threads the area closed by the $`i`$circuit due to the external field is: $$\mathrm{\Phi }_i^a=\mu _0\pi \rho _i^2H_a,$$ (2) $`\rho _i`$ being the radius of the $`i`$circuit. The magnetic flux contributes with an energy that has to be counteracted by induced surface currents. A step of current $`\mathrm{\Delta }I`$ that is set in some $`j`$circuit requires an energy: $$E_j=\frac{1}{2}L_j(\mathrm{\Delta }I)^2,$$ (3) being $`L_j`$ the self-inductance of the $`j`$circuit, while it contributes to reduce the energy by a factor $`(\mathrm{\Delta }I)\mathrm{\Phi }_j^a`$. So, after an external field is applied, we seek for the circuit that decreases the energy the most and set a current step $`\mathrm{\Delta }I`$ there. 
The same criterion is used next to choose whether to increase the current in the same circuit by another step $`\mathrm{\Delta }I`$ or instead to set $`\mathrm{\Delta }I`$ in some new circuit. In the latter case (and in the general case, when there are currents circulating in many circuits of the sample), the energy cost of setting a current step has an extra term coming from the mutual inductances with all the other currents: $$E_j=\left(\underset{k\ne j}{\sum }M_{kj}I_k\right)\mathrm{\Delta }I_j.$$ (4) The mutual inductances $`M_{ij}`$ between the $`i`$ and $`j`$ linear circuits are calculated using the Neumann formulas (see, for example, Ref. ). To avoid diverging self-inductances, we have used a cut-off and calculated the self-inductance of a circuit with radius $`\rho `$ from the mutual inductance between the considered circuit and one carrying the same current at a radial position $`\rho +ϵ`$. An appropriate choice for $`ϵ`$ has been found to be $`ϵ=0.78\mathrm{\Delta }R`$, where $`\mathrm{\Delta }R`$ is the radial separation between two consecutive circuits in the end faces ($`\mathrm{\Delta }R=R/n`$). The minimum energy corresponding to a given value of the applied magnetic field is reached when it becomes impossible to further decrease the magnetic energy by setting extra current steps. From the resulting current profile, we can easily obtain the different quantities we are interested in. The magnetization, which has only an axial component $`M_z`$, is calculated using: $$M_z=\frac{1}{R^2L}\underset{i}{\sum }I_i\rho _i^2.$$ (5) The magnetic induction B could be computed from the current profiles using the Biot-Savart law. However, we use a simpler way, which allows the calculation of B from the flux. In our model, the total magnetic flux that threads any circular circuit (not necessarily those on the surfaces of the superconductor) can be easily calculated as: $$\mathrm{\Phi }(\rho ,z)=\mathrm{\Phi }^a(\rho ,z)+\mathrm{\Phi }^i(\rho ,z),$$ (6) where the internal flux that threads a $`j`$-circuit due to all the currents is $`\mathrm{\Phi }_j^i=\sum _kM_{jk}I_k`$ (the self-inductance term is also included). Then, the axial component of the total magnetic induction is simply: $$B_z(\rho ,z)=\frac{\mathrm{\Phi }(\rho +\mathrm{\Delta }R,z)-\mathrm{\Phi }(\rho ,z)}{\pi \left((\rho +\mathrm{\Delta }R)^2-\rho ^2\right)}.$$ (7) The radial component $`B_r(\rho ,z)`$ is calculated from the values of the axial one, imposing the condition that the divergence of $`𝐁`$ equals zero. Finally, if the external magnetic field is further increased, the same procedure starts again from the existing current distribution. The values of $`n`$ and $`m`$ have been chosen sufficiently large that the results are independent of their particular value. Typical values are $`n\sim m\sim 7500`$–$`10000`$. The computation of an initial curve takes a few minutes on a personal Digital workstation for any $`L/R`$ ratio. We would like to remark that, with the method described here, we obtain the current distribution, field profiles, magnetization, and all the other results without using any free parameter. Only the direction of the currents has to be known, which is straightforward in this geometry. ## V Comparison of experimental and theoretical data In Fig. 4 we compare the experimental values of the initial magnetic susceptibility $`\chi _{\mathrm{ini}}`$, obtained from both DC and AC measurements, with those calculated from our model, for different values of $`L/R`$.
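Before turning to that comparison, the minimization procedure of Sec. IV can be made concrete with a minimal numerical sketch. This is not the code used in this work: the ring layout, the (coarse) numbers of circuits, and the size of the current step are illustrative choices, and the mutual inductance of two coaxial loops is evaluated with Maxwell's standard elliptic-integral formula rather than the Neumann integral; only the self-inductance cut-off $`ϵ=0.78\mathrm{\Delta }R`$ is taken from the text.

```python
import numpy as np
from scipy.special import ellipk, ellipe

MU0 = 4e-7 * np.pi

def mutual(a, b, d):
    """Mutual inductance of two coaxial loops of radii a, b at axial distance d
    (Maxwell's formula; ellipk/ellipe take the parameter m = k^2)."""
    k2 = 4.0 * a * b / ((a + b) ** 2 + d ** 2)
    k = np.sqrt(k2)
    return MU0 * np.sqrt(a * b) * ((2.0 / k - k) * ellipk(k2) - (2.0 / k) * ellipe(k2))

def chi_initial(R=1.0, L=1.0, n=20, m=20, Ha=1.0):
    """Greedy energy minimization over ring currents; returns M_z / H_a."""
    dR = R / n
    rho = [i * dR for i in range(1, n + 1)]            # bottom end face
    zc = [-L / 2.0] * n
    rho += [R] * m                                      # lateral surface
    zc += [-L / 2.0 + (j + 0.5) * L / m for j in range(m)]
    rho += [i * dR for i in range(n, 0, -1)]            # top end face
    zc += [L / 2.0] * n
    rho, zc = np.array(rho), np.array(zc)
    N = len(rho)
    M = np.empty((N, N))
    for i in range(N):
        for j in range(N):
            if i == j:      # self-inductance via the cut-off rho -> rho + 0.78*dR
                M[i, j] = mutual(rho[i], rho[i] + 0.78 * dR, 0.0)
            else:
                M[i, j] = mutual(rho[i], rho[j], zc[i] - zc[j])
    phi_a = MU0 * np.pi * rho ** 2 * Ha                 # applied flux, Eq. (2)
    I = np.zeros(N)
    dI = -phi_a.max() / M.diagonal().max() / 200.0      # small shielding step
    while True:
        # energy change of one extra step in each circuit, cf. Eqs. (3)-(4)
        dE = 0.5 * M.diagonal() * dI ** 2 + (M @ I + phi_a) * dI
        j = int(np.argmin(dE))
        if dE[j] >= 0.0:    # no step lowers the energy any further
            break
        I[j] += dI
    return np.sum(I * rho ** 2) / (R ** 2 * L) / Ha     # Eq. (5), normalized

print(chi_initial(L=10.0))   # long cylinder: roughly -1.1
print(chi_initial(L=0.2))    # short cylinder: |chi_ini| strongly enhanced
```

Even with this coarse discretization ($`n=m=20`$ instead of the $`n\sim m\sim 10^4`$ quoted above), the two limits — $`\chi _{\mathrm{ini}}`$ close to $`-1`$ for long cylinders and a strongly enhanced $`|\chi _{\mathrm{ini}}|`$ for short ones — are reproduced.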
The calculated values of the initial slope are a function only of the length-to-radius ratio $`L/R`$. The agreement with the experimental data is very satisfactory, confirming the validity of our theoretical approach. Both experimental and calculated data indicate a strong increase in the absolute value of the initial susceptibility when the cylinder aspect ratio is decreased. When the sample is very long (for $`L>10R`$), the initial susceptibility $`\chi _{\mathrm{ini}}`$ approaches the value predicted for infinite samples, $`\chi _{\mathrm{ini}}=-1`$. For shorter samples, the magnetization gets larger in magnitude for the same value of the applied magnetic field. We find that the general behavior of the dependence of $`\chi _{\mathrm{ini}}`$ on $`L/R`$ is well described (with a departure of less than 1.5 % from our calculations) by the approximate formula given by Brandt : $$\chi =-\pi R^2L-\frac{8}{3}R^3-\frac{4R^3}{3}\mathrm{tanh}\left[1.27\frac{L}{2R}\mathrm{ln}\left(1+\frac{2R}{L}\right)\right].$$ (8) The values of the initial slope calculated from our model are also compatible with those given by Chen et al. for cylinders with $`\chi =-1`$, with a maximum deviation of 1.0%. ## VI Discussion ### A Effect of the demagnetizing fields The experimental and theoretical results can be understood by analyzing the effect of the demagnetizing fields. In an idealized infinitely long sample, if the field is applied along its axis, shielding currents flow only in the lateral surface, with a constant value along the cylinder. This creates a spatially uniform magnetic field over the superconductor, which makes $`𝐁=0`$ inside. In finite samples, however, the tangential magnetic field is not continuous at the top and bottom end surfaces, and shielding currents are induced there as well. Hence, these currents create an extra non-uniform magnetic field over the lateral surface of the cylinder, so the currents flowing in this surface will not have a constant value. It is easily seen (by a simple examination of the magnetic field created by each single current loop) that the effect of the demagnetizing fields in the lateral surface region is to enhance the local magnetic field, so that its total value is larger than the applied value $`H_a`$. As a result, higher values of the current are necessary to shield the applied magnetic field, yielding larger values of both the magnetization and the magnetic susceptibility. It is clear that the thinner the sample, the stronger this effect, which explains the increase of $`|\chi |`$ with decreasing $`L/R`$. ### B Field and current profiles The previous discussion can be illustrated by studying the distribution of the magnetic field in finite cylinders. In Fig. 5 we show the calculated total magnetic induction B for cylinders with three different values of $`L/R`$. The displayed lines indicate the direction of B (tangential to the lines at each point), although their density does not in general reflect the field strength. These results show that only for the largest $`L/R`$ ratio can the magnetic field at the lateral surface be considered to have an essentially constant (axial) direction over the cylinder length. In the other two cases, the magnetic field loses its main axial direction, gradually bending towards the axis at both cylinder ends. In Fig. 6 we plot the calculated surface current density corresponding to the $`L/R=1`$ case.
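As a quick check of Eq. (8) — with the signs written as restored above, since the two diamagnetic limits ($`\chi /(\pi R^2L)\rightarrow -1`$ for $`L/R\rightarrow \mathrm{\infty }`$ and $`\chi \rightarrow -8R^3/3`$ for a thin disk) fix them — the normalized initial slope can be evaluated for a few aspect ratios. This illustrative snippet is not part of the original analysis:

```python
import numpy as np

def chi_brandt(R, L):
    """Total moment per unit applied field, Eq. (8) (signs as restored above)."""
    t = np.tanh(1.27 * L / (2.0 * R) * np.log(1.0 + 2.0 * R / L))
    return -np.pi * R ** 2 * L - (8.0 / 3.0) * R ** 3 - (4.0 / 3.0) * R ** 3 * t

for L_over_R in (0.2, 1.0, 10.0, 100.0):
    chi_ini = chi_brandt(1.0, L_over_R) / (np.pi * L_over_R)   # normalize by volume
    print(L_over_R, chi_ini)   # ~ -5.9, -2.1, -1.12, -1.01
```

The strong increase of $`|\chi _{\mathrm{ini}}|`$ for small $`L/R`$ and the approach to $`-1`$ for long cylinders match the behavior described above.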
In both the top and bottom ends of the cylinder, the strength of the shielding supercurrents flowing in the azimuthal direction gradually grows (in absolute value) from zero on the axis to a diverging behavior on approaching the cylinder edge (this divergence is smoothed by our discretization; in actual experiments the nonzero value of $`\lambda `$ also smooths it). The currents in the lateral surface are also stronger at the edges, and their intensity decreases towards the center of the sample, where they have a roughly constant value over the plateau shown in Fig. 6. The extension of this plateau (defined as the region where the surface current differs by less than 5% from the minimum value, located at the center) is about 70% of the total cylinder length for $`L/R=10`$. This percentage decreases to 46% and 36% for $`L/R=1`$ and $`L/R=0.2`$, respectively (for clarity, Fig. 6 does not show the data for the cases $`L/R=0.2`$ and 10). This corresponds to the regions, shown in Fig. 5, where the magnetic field is almost constant. Besides, our calculations show that the relative contribution of the currents in the end surfaces to the magnetization increases as the cylinder length decreases. The contributions of the lateral and the end surfaces to the total magnetization are depicted in Fig. 7. These results show that, whereas in a long sample ($`L/R=10`$) about 94% of the contribution to the superconductor magnetization comes from the lateral surface, this percentage decreases to about 72% and 43% for $`L/R=1`$ and $`L/R=0.2`$, respectively. ### C Remarks about the generality of the results In the $`M(H_a)`$ experimental data, Fig. 2, the curves for different $`L/R`$ show a systematic behavior in the first portion of the initial curve (the region of interest). There, the shielding should be perfect. Nevertheless, the trend is not so clear for larger values of the applied magnetic field. In the middle part of the loop, it is known that bulk supercurrents penetrate into the superconductor, which goes from the Meissner state to the mixed state. Within the critical-state model framework, the magnetization in the mixed state is known to depend on the particular $`J_c(B)`$ dependence of the samples. This in turn depends on factors such as the detailed microstructure of the sample. Thus, samples with different microstructures may have a different distribution of pinning strengths and locations, which influences the critical current and the magnetization. The samples we have measured were cut from two different cylinders, which explains why the $`M(H_a)`$ loops for intermediate values of $`H_a`$ are different. The systematic behavior observed in the initial susceptibility, despite using samples from different original pieces, supports the generality of our results for any diamagnetic cylinder. ## VII Conclusions In this work we propose a new model based on energy minimization which allows the calculation of the magnetic response of perfectly diamagnetic cylinders of any size with high precision. The only assumption we have made is that the magnetic penetration depth, $`\lambda `$, is negligible. We have experimentally verified the validity of this model by measuring the low-field magnetic response of niobium cylinders with different values of the length-to-radius ratio.
We have demonstrated both experimentally and theoretically that the value of the initial susceptibility of zero-field-cooled type-II superconductor cylinders is a function only of the sample aspect ratio, and we have calculated its value for a wide range of sample dimensions. The model is sufficiently general to be adapted to other geometries and also to non-uniform axially symmetric applied fields. These results may help to discriminate whether some of the effects recently observed in the study of thin-film high-$`T_c`$ superconductors are due to intrinsic causes or instead have an extrinsic origin associated with sample size effects. ## Acknowledgements We thank DGES project number PB96-1143 for financial support. C.N. acknowledges a grant from CUR (Generalitat de Catalunya). FMAM gratefully acknowledges financial support from the Brazilian agencies CNPq and FAPESP, through grants 98/12809-7 and 99/04393-8.
# On the Physical Conditions in AGN Optical Jets ## 1 Introduction To emit at optical-UV frequencies in a field of $`B\sim 10^{-4}`$ G, typical of the magnetic fields found in AGN jets on kpc scales, electrons must have Lorentz factors $`\gamma \gtrsim 10^6`$ and hence diffusion times of a few hundred years. Despite the shortness of the expected synchrotron cooling times, the jets are long and there is no indication of strong steepening of the radio-to-optical spectral index as the distance from the nucleus increases (Sparks et al. 1996; Scarpa et al. 1999). Therefore, either electrons are continuously reaccelerated or the magnetic field is weaker than the equipartition value, as may be the case if relativistic beaming is important. To shed light on this issue we analyze the energy budget of all known optical jets, requiring that they transport on average as much energy as is needed to explain the existence of extended radio lobes. Our approach relies on the hypothesis that the extended radio structures are powered by jets (Blandford & Königl 1979), and on the fact that, even if rare, optical jets are discovered in all kinds of radio sources (of both FRI and FRII morphologies and powers). ## 2 Theoretical Considerations and Available Data Standard formulae for synchrotron emission are used (Pacholczyk 1970). The synchrotron emission is assumed to extend, in the observer frame, from $`\nu _1=10^7`$ to $`\nu _2=10^{15}`$ Hz, following a single power law ($`F_\nu \propto \nu ^{-\alpha }`$) of constant spectral index $`\alpha `$. The electron distribution in energy space is $`N(E)=N_0E^p`$, where $`p=-2\alpha -1`$. Each electron emits in a narrow range of frequencies centered on $`\nu =c_1BE^2`$, where $`c_1=\frac{3e}{4\pi m^3c^5}`$, and loses energy at a rate $`|dE/dt|=c_2B^2E^2`$, where $`c_2=\frac{2e^4}{3m^4c^7}`$. In the presence of relativistic beaming, rest frame ($`L_R`$) and observed ($`L`$) jet luminosities are related by $`L=L_R\delta ^3`$ (as appropriate for a continuous jet; it would be $`L=L_R\delta ^4`$ for a moving sphere), where $`\delta =[\mathrm{\Gamma }(1-\beta \mathrm{cos}\theta )]^{-1}`$ is the Doppler beaming factor. The rest-frame luminosity is $`L_R=\int _{E_1}^{E_2}\left|\frac{dE}{dt}\right|N(E)𝑑E`$, where $`E_1`$ and $`E_2`$ are the cutoffs of the electron energy distribution corresponding to $`\nu _1`$ and $`\nu _2`$. After integrating, $`L_R`$ depends on $`B`$ and $`N_0`$, so that a second equation is necessary to solve completely for the jet properties. This can be obtained by imposing equipartition of energy, in which case we have $`\frac{B^2\varphi V}{8\pi }=(1+k)E_e`$, where $`\varphi `$ is the magnetic field filling factor, $`V`$ is the source volume, and $`E_e`$ is the total electron energy. The proton energy, which remains unconstrained, is accounted for by the proton-to-electron energy ratio $`k`$. For consistency with previous works, we set $`k=0`$ (assigning no energy to the protons). Solving for $`B`$ and $`N_0`$ allows the calculation of the rest-frame number density $`n=\frac{L}{c_2B^2\langle E^2\rangle V\delta ^3}`$ of the emitting electrons, which is used to derive the kinetic power transported by the jet, $`L_k=\pi R^2\mathrm{\Gamma }^2\beta cn\langle E\rangle (1+k)`$ (Celotti & Fabian 1993). Here $`\mathrm{\Gamma }`$ is the Lorentz factor of the bulk motion, $`\beta c`$ the plasma speed, and $`R`$ the jet radius. As before, the term $`(1+k)`$ accounts for the proton energy. Under these quite standard assumptions, the flux at two frequencies and the bulk speed of the plasma suffice to evaluate the jet kinetic energy.
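The procedure just described lends itself to a short numerical sketch. The following is not the computation behind Table 1: the jet luminosity, geometry and $`(\mathrm{\Gamma },\theta )`$ in the example call are illustrative placeholders; the cgs coefficients $`c_1`$ and $`c_2`$ are the standard Pacholczyk values; and the equipartition field is simply found by numerical bracketing.

```python
import numpy as np
from scipy.optimize import brentq

c  = 2.998e10      # cm/s
c1 = 6.27e18       # nu = c1 * B * E^2        (cgs; Pacholczyk 1970)
c2 = 2.37e-3       # |dE/dt| = c2 * B^2 * E^2

def jet_solution(L_obs, V, R, alpha, Gamma, theta_deg, nu1=1e7, nu2=1e15, k=0.0):
    """Equipartition B (G), electron density n (cm^-3), kinetic power L_k (erg/s)."""
    p = 2.0 * alpha + 1.0                        # N(E) = N0 * E**(-p)
    beta = np.sqrt(1.0 - 1.0 / Gamma ** 2)
    delta = 1.0 / (Gamma * (1.0 - beta * np.cos(np.radians(theta_deg))))
    L_rest = L_obs / delta ** 3                  # continuous-jet beaming

    def pieces(B):
        E1, E2 = np.sqrt(nu1 / (c1 * B)), np.sqrt(nu2 / (c1 * B))
        I = lambda q: (E2 ** (q + 1.0 - p) - E1 ** (q + 1.0 - p)) / (q + 1.0 - p)
        N0 = L_rest / (c2 * B ** 2 * I(2) * V)   # fixed by the rest-frame luminosity
        return N0, I(0), I(1)

    def equip(B):                                # B^2/8pi minus electron energy density
        N0, I0, I1 = pieces(B)
        return B ** 2 / (8.0 * np.pi) - (1.0 + k) * N0 * I1

    B = brentq(equip, 1e-9, 1.0)                 # bracket adequate for these inputs
    N0, I0, I1 = pieces(B)
    n, Emean = N0 * I0, I1 / I0
    Lk = np.pi * R ** 2 * Gamma ** 2 * beta * c * n * Emean * (1.0 + k)
    return B, n, Lk

kpc = 3.086e21
R = 1.0 * kpc                                    # a 1 kpc x 10 kpc cylindrical jet
print(jet_solution(1e42, np.pi * R ** 2 * 10 * kpc, R, 0.8, 7.5, 20.0))
```

For these placeholder inputs the equipartition field comes out at the $`10^{-5}`$ G level; the point of the sketch is only that, as stated above, two fluxes plus an assumed $`(\mathrm{\Gamma },\theta )`$ determine $`B`$, $`n`$ and $`L_k`$.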
Relevant data for all known optical jets are summarized in Table 1. Interestingly, optical jets are discovered in radio sources of all types (Column 2), and have remarkably similar radio-to-optical spectral indices ($`\alpha _{RO}\simeq 0.8`$), consistent with radio observations in which 90% of the jets have $`0.5<\alpha <0.9`$ (Bridle & Perley 1984). This indicates a high degree of homogeneity among jets, so that it is reasonable to compare their energy budgets with those of the radio lobes observed in a large sample of radio sources. ## 3 Bulk Jet Speed Relativistic beaming elegantly explains both jet one-sidedness and superluminal motion, observed routinely on parsec scales (Zensus & Pearson 1987; Readhead 1993), and also on kpc scales in the jet of M87 (Biretta et al. 1999). Assuming the two sides of the jet are intrinsically identical (Rees 1978; Shklovsky 1970; Saslaw & Whittle 1988; Laing 1988), the Lorentz factor of the bulk motion can then be derived (Scheuer & Readhead 1979) from the jet/counterjet luminosity ratio $`J=\left(\frac{1+\beta \mathrm{cos}\theta }{1-\beta \mathrm{cos}\theta }\right)^{2+\alpha }`$, where $`\theta `$ is the angle between the jet velocity and the line of sight, and the exponent $`(2+\alpha )`$ is appropriate for a continuous jet (it would be $`3+\alpha `$ for discrete emitting blobs; see Lind & Blandford 1985). The dependence of $`J`$ on jet inclination $`\theta `$ is shown in Figure 1. It is seen that $`J`$ easily reaches very large values even for modest bulk speeds, but for large inclinations it remains finite and quite small even for $`\mathrm{\Gamma }\rightarrow \mathrm{\infty }`$, so that $`J`$ can be effectively used to constrain $`\theta `$. From Table 1 the median lower limit of $`J`$ for optical jets is 40, implying median inclinations $`\theta \lesssim 55^{\circ }`$ and $`\mathrm{\Gamma }>1.1`$. ## 4 Jet Versus Lobe Kinetic Energy The kinetic luminosity depends on the density and average energy of the emitting electrons, as well as on the bulk speed of the plasma. These quantities are therefore constrained if the kinetic luminosity can be estimated in an independent way. We require jets to transport enough energy to power an average radio lobe. Setting all relevant parameters as before (i.e., same low-frequency cutoff, $`k=0`$, and $`\varphi =1`$), a median kinetic power of $`\langle L_{kin}\rangle =10^{45}`$ erg/s was found for a large sample of radio galaxies including both weak and powerful radio sources (Rawlings & Saunders 1991). The high-frequency cutoff is higher for optical jets than for radio lobes, but this should have no effect on the energy budget because for $`p<-2`$ ($`\alpha >0.5`$) the energetics are dominated by the low-energy particles. Comparing the median kinetic power of all optical jets with that of radio lobes (Figure 1), it is found that for very low bulk speeds ($`\mathrm{\Gamma }\lesssim 2`$), the median kinetic energy for this sample of jets is at least one order of magnitude smaller than needed to power average lobes, independently of the inclination angle. Very small inclinations are also excluded, because for small $`\theta `$ the beaming is so strong that the rest-frame luminosity of the jet is very small, and the kinetic energy is significantly reduced. The maximum speed of the plasma can be loosely constrained if jets with enhanced rather than dimmed emission are preferentially discovered.
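Both the inversion of the jet/counterjet ratio and the de-amplification boundary (the condition $`\delta =1`$, made explicit immediately below) reduce to one-line formulas. The following snippet is an illustrative check, with $`J>40`$ and $`\alpha =0.8`$ taken as the median values quoted above:

```python
import numpy as np

def beta_cos_min(J, alpha):
    """Invert J = [(1+b*cos)/(1-b*cos)]**(2+alpha) for beta*cos(theta)."""
    r = J ** (1.0 / (2.0 + alpha))
    return (r - 1.0) / (r + 1.0)

bc = beta_cos_min(40.0, 0.8)                 # ~0.58
print(np.degrees(np.arccos(bc)))             # theta_max ~ 55 deg   (since beta <= 1)
print(1.0 / np.sqrt(1.0 - bc ** 2))          # Gamma lower bound    (since cos(theta) <= 1)

def gamma_max_amplified(theta_deg):
    """Largest Gamma with delta >= 1, from Gamma - sqrt(Gamma^2-1)*cos(theta) = 1."""
    c2 = np.cos(np.radians(theta_deg)) ** 2
    return (1.0 + c2) / (1.0 - c2)

for th in (10.0, 20.0, 30.0):
    print(th, gamma_max_amplified(th))       # ~65, ~16, ~7
```

The first two numbers reproduce the median bounds quoted above ($`\theta \lesssim 55^{\circ }`$ and a mildly relativistic lower limit on $`\mathrm{\Gamma }`$), while the boundary shows that a jet with $`\mathrm{\Gamma }\approx 7.5`$ is Doppler-enhanced only for $`\theta \lesssim 30^{\circ }`$, consistent with the “most probable” region described next.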
For any given value of $`\theta `$, as soon as $`\mathrm{\Gamma }`$ becomes larger than the value defined by $`\mathrm{\Gamma }-\sqrt{\mathrm{\Gamma }^2-1}\mathrm{cos}\theta =1`$, the emission is de-amplified, severely reducing our probability of discovering the jet. In this way we derive a “most probable” region, centered on $`\theta \approx 20^{\circ }`$ and $`\mathrm{\Gamma }\approx 7.5`$, where the average kinetic energy carried by (optical) jets is fully consistent with the requirement imposed by the radio lobes. The conclusion is that jets should be relativistic at kpc scales. ## 5 Conclusions Comparing the kinetic power (as derived assuming equipartition) of all known optical jets with that of typical radio lobes suggests the presence of relativistic bulk motion of the emitting plasma at kiloparsec scales. In a non-relativistic scenario the estimated kinetic jet luminosity is at least one order of magnitude less than needed to power the lobes. This has important consequences. Indeed, at face value, the data in Table 1 imply strong magnetic fields and short electron lifetimes, leading to the conclusion that electron reacceleration is necessary to explain the optical emission on kpc scales (e.g., Meisenheimer et al. 1996). The analysis here points to the opposite possibility. Indeed, the constraints on the kinetic luminosity allow $`\mathrm{\Gamma }`$ and $`\theta `$ to lie within a “most probable” region (Fig. 1), centered near $`\mathrm{\Gamma }\approx 7.5`$, corresponding to a highly relativistic bulk speed. Because of relativistic beaming, the rest-frame magnetic field is reduced (Table 1), and the electron lifetimes are lengthened because of the lower energy losses and time dilation. Under these conditions the de-projected length $`l/\mathrm{sin}\theta `$ of the jets is fully consistent with the electron diffusion length $`ct_{1/2}\mathrm{\Gamma }`$, without the need for reacceleration (Figure 2), explaining the very nearly uniform $`\alpha _{RO}`$ observed in the jets of M87 (Sparks et al. 1996) and PKS 0521-365 (Scarpa et al. 1999). It is a pleasure to thank G. Ghisellini, L. Maraschi, and F. Macchetto for helpful and encouraging comments. Support for this work was provided by NASA through grant GO-06363.01-95A from the Space Telescope Science Institute, which is operated by AURA, Inc., under NASA contract NAS 5-26555.
## 1 INTRODUCTION Traditionally the lack of experimental input has been the most important obstacle in the search for “quantum gravity”, the new theory that should provide a unified description of gravitation and quantum mechanics. Recently there has been a small, but nonetheless encouraging, number of proposals of experiments probing the nature of the interplay between gravitation and quantum mechanics. At the same time the “COW-type” experiments on quantum mechanics in a strong (classical) gravitational environment, initiated by Colella, Overhauser and Werner , have reached levels of sensitivity such that even gravitationally induced quantum phases due to local tides can be detected. In light of these developments there is now growing (although still understandably cautious) hope for data-driven insight into the structure of quantum gravity. The primary objective of these lecture notes is that of giving the reader an intuitive idea of how far quantum-gravity phenomenology has come. This is somewhat tricky. Traditionally experimental tests of quantum gravity were believed to be no better than a dream. The fact that now (some) theory and (some) experiments finally “meet” could have two very different explanations: it could be that experimental techniques and ideas have improved so much that now tests of plausible quantum-gravity effects are within reach, but it could also be that theorists have managed to come up with scenarios speculative enough to allow testing by conventional experimental techniques. I shall argue that experiments have indeed progressed to the point where some significant quantum-gravity tests are doable. I shall also clarify in which sense the traditional pessimism concerning quantum-gravity experiments was built upon the analysis of a very limited set of experimental ideas, with the significant omission of the possibility (which we now find to be within our capabilities) of experiments set up in such a way that very many of the very small quantum-gravity effects are somehow summed together. Some of the theoretical ideas that can be tested experimentally are of course quite speculative (decoherence, space-time fluctuations, large extra dimensions, …) but this is not so disappointing because it seems reasonable to expect that the new theory should host a large number of new conceptual/structural elements in order to be capable of reconciling the (apparent) incompatibility between gravitation and quantum mechanics. \[An example of motivation for very new structures is discussed here in Section 11, which is a “theory addendum” reviewing some of the arguments in support of the idea that the mechanics on which quantum gravity is based might not be exactly the one of ordinary quantum mechanics, since it should accommodate a somewhat different (non-classical) concept of “measuring apparatus” and a somewhat different relationship between “system” and “measuring apparatus”.\] In giving the reader an intuitive idea of how far quantum-gravity phenomenology has come it will be very useful to rely on simple phenomenological models of candidate quantum-gravity effects. The position I am taking here is not that these models should become cornerstones of theoretical work on quantum gravity (at best they are possible ways in which quantum gravity might manifest itself), but rather that these models can be useful in giving an intuitive description of the level of sensitivity that experiments are finally reaching.
Depending on the reader’s intuition for the quantum-gravity realm these phenomenological models might or might not appear likely as faithful descriptions of effects actually present in quantum gravity, but in any case by the end of these notes the reader should find that these models are at least useful for the characterization of the level of sensitivity that quantum-gravity experiments have reached, and can also be useful to describe the progress (past and future) of these sensitivity levels. In particular, in the “language” set up by these models one can see an emerging picture suggesting that we are finally ready for the exploration of a relatively large class of plausible quantum-gravity effects, even though our chances of obtaining positive (discovery) experimental results still depend crucially on the magnitude of these effects: in most cases the level of sensitivity that the relevant experiments should achieve within a few years corresponds to effects suppressed only linearly by the Planck length $`L_p`$ ($`L_p\sim 10^{-35}m`$). The bulk of these notes gives brief reviews of the quantum-gravity experiments that can be done. The reader will be asked to forgive the fact that this review is not very balanced. The two proposals in which this author has been involved are in fact discussed in greater detail, while for the experiments proposed in Refs. I just give a very brief discussion with emphasis on the most important conceptual ingredients. The students who attended the School might be surprised to find the material presented with a completely different strategy. While my lectures in Polanica were sharply divided into a first part on theory and a second part on experiments, here some of the theoretical intuition is presented while discussing the experiments. It appears to me that this strategy might be better suited for a written presentation. I also thought it might be useful to start with the conclusions, which are given in the next two sections. Section 4 reviews the proposal of using modern interferometers to set bounds on space-time fuzziness. In Section 5 I review the proposal of using data on GRBs (gamma-ray bursts) to investigate possible quantum-gravity induced in vacuo dispersion of electromagnetic radiation. In Section 6 I give brief reviews of other quantum-gravity experiments. In Section 7 I give a brief discussion of the mentioned “COW-type” experiments testing quantum mechanics in a strong classical-gravity environment. Section 8 provides a “theory addendum” on various scenarios for bounds on the measurability of distances in quantum gravity and their possible relation to properties of the space-time foam. Section 9 provides a theory addendum on an absolute bound on the measurability of the amplitude of a gravity wave which should hold even if distances are not fuzzy. Section 10 provides a theory addendum on other works which are in one way or another related to (or relevant for) the content of these notes. Section 11 gives the mentioned theory addendum concerning ideas on a mechanics for quantum gravity that is not exactly of the type of ordinary quantum mechanics. Finally in Section 12 I give some comments on the outlook of quantum-gravity phenomenology, and I also emphasize the fact that, whether or not they turn out to be helpful for quantum gravity, most of the experiments considered in these notes are intrinsically significant as tests of quantum mechanics and/or tests of fundamental symmetries. ## 2 FIRST THE CONCLUSIONS: WHAT HAS THIS PHENOMENOLOGY ACHIEVED?
Let me start with a brief description of the present status of quantum-gravity phenomenology. Some of the points made in this section are supported by analyses which will be reviewed in the following sections. The crucial question is: Can we just test some wildly speculative ideas which have somehow surfaced in the quantum-gravity literature? Or can we test even some plausible candidate quantum-gravity phenomena? Before answering these questions it is appropriate to comment on the general expectations we have for quantum gravity. It has been realized for some time now that by combining elements of gravity with elements of quantum mechanics one is led to “interplay phenomena” with rather distinctive signatures, such as quantum fluctuations of space-time , and violations of Lorentz and/or CPT symmetries , but the relevant effects are expected to be very small (because of the smallness of the Planck length). Therefore in this “intuition-building” section the reader must expect from me a description of experiments with a remarkable sensitivity to the new phenomena. Let me start from the possibility of quantum fluctuations of space-time. A prediction of nearly all approaches to the unification of gravitation and quantum mechanics is that at very short distances the sharp classical concept of space-time should give way to a somewhat “fuzzy” (or “foamy”) picture, possibly involving virulent geometry fluctuations (sometimes depicted as wormholes and black holes popping in and out of the vacuum). Although the idea of space-time foam remains somewhat vague and it appears to have significantly different incarnations in different quantum-gravity approaches, a plausible expectation that emerges from this framework is that the distance between two bodies “immersed” in the space-time foam would be affected by (quantum) fluctuations. If urged to give a rough description of these fluctuations at present theorists can only guess that they would be of Planck-length magnitude and occurring at a frequency of roughly one per Planck time $`T_p`$ ($`T_p=L_p/c\sim 10^{-44}s`$). One should therefore deem significant for space-time-foam research any experiment that monitors the distances between two bodies with enough sensitivity to test this type of fluctuations. This is exactly what was achieved by the analysis reported in Refs. , which was based on the observation that the most advanced modern interferometers (the ones normally used for detection of classical gravity waves) are the natural instruments to study the fuzziness of distances. While I postpone to Section 4 a detailed discussion of these interferometry-based tests of fuzziness, let me emphasize already here that modern interferometers have achieved such a level of sensitivity that we are already in a position to rule out fluctuations in the distances of their test masses of the type discussed above, i.e. fluctuations of Planck-length magnitude occurring at a rate of one per each Planck time. This is perhaps the simplest way for the reader to picture intuitively the type of objectives already reached by quantum-gravity phenomenology. Another very intuitive measure of the maturity of quantum-gravity phenomenology comes from the studies of in vacuo dispersion proposed in Ref. (also see the more recent purely experimental analyses ). Deformed dispersion relations are a rather natural possibility for quantum gravity. For example, they emerge naturally in quantum-gravity scenarios requiring a modification of Lorentz symmetry.
Modifications of Lorentz symmetry could result from space-time discreteness (e.g. a discrete space accommodates a somewhat different concept of “rotation” with respect to the one of ordinary continuous spaces), a possibility extensively investigated in the quantum gravity literature (see, e.g., Ref. ), and it would also naturally result from an “active” quantum-gravity vacuum of the type advocated by Wheeler and Hawking (such a “vacuum” might physically label the space-time points, rendering possible the selection of a “preferred frame”). The specific structure of the deformation can differ significantly from model to model. Assuming that the deformation admits a series expansion at small energies $`E`$, and parametrizing the deformation in terms of an energy<sup>3</sup><sup>3</sup>3I parametrize deformations of dispersion relations in terms of an energy scale $`E_{QG}`$, while I later parametrize the proposals for distance fuzziness with a length scale $`L_{QG}`$. scale $`E_{QG}`$ (a scale characterizing the onset of quantum-gravity dispersion effects, often identified with the Planck energy $`E_p=\mathrm{\hbar }c/L_p\simeq 10^{19}GeV`$), for a massless particle one would expect to be able to approximate the deformed dispersion relation at low energies according to $$c^2𝐩^2\simeq E^2\left[1+\xi \left(\frac{E}{E_{QG}}\right)^\alpha +O\left(\left(\frac{E}{E_{QG}}\right)^{\alpha +1}\right)\right]$$ (1) where $`c`$ is the conventional speed-of-light constant. The scale $`E_{QG}`$, the power $`\alpha `$ and the sign ambiguity $`\xi =\pm 1`$ would be fixed in a given dynamical framework; for example, in some of the approaches based on dimensionful quantum deformations of Poincaré symmetries one encounters a dispersion relation $`c^2𝐩^2=E_{QG}^2\left[1-e^{-E/E_{QG}}\right]^2`$, which implies $`\xi =\alpha =1`$. Because of the smallness of $`1/E_{QG}`$ it was traditionally believed that this effect could not be seriously tested experimentally (i.e. that, for $`E_{QG}\sim E_p`$, experiments would only be sensitive to values of $`\alpha `$ much smaller than $`1`$), but in Ref. it was observed that recent progress in the phenomenology of GRBs and other astrophysical phenomena should soon allow us to probe values of $`E_{QG}`$ of the order of (or even greater than) $`E_p`$ for values of $`\alpha `$ as large as $`1`$. As discussed later in these notes, $`\alpha =1`$ appears to be the smallest value that can be obtained with plausible quantum-gravity arguments and several of these arguments actually point us toward the larger value $`\alpha =2`$, which is still very far from present-day experimental capabilities. While of course it would be very important to achieve sensitivity to both the $`\alpha =1`$ and the $`\alpha =2`$ scenarios, the fact that we will soon test $`\alpha =1`$ is a significant first step. Another recently proposed quantum-gravity experiment concerns possible violations of CPT invariance. This is a rather general prediction of quantum-gravity approaches, which for example can be due to elements of nonlocality (locality is one of the hypotheses of the “CPT theorem”) and/or elements of decoherence present in the approach. At least some level of non-locality is quite natural for quantum gravity as a theory with a natural length scale which might play the role of “minimum length” .
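Coming back for a moment to the dispersion relation (1): the observable consequence exploited in the GRB proposal is an energy-dependent arrival delay $`\mathrm{\Delta }t\sim (E/E_{QG})^\alpha D/c`$ over a travel distance $`D`$ (up to factors of order unity from the group-velocity derivation). The following back-of-the-envelope snippet, with an illustrative photon energy of $`1GeV`$ and a $`10^{10}`$ light-year source distance, shows why $`\alpha =1`$ is within reach while $`\alpha =2`$ is hopeless, as stated above:

```python
# Order-of-magnitude in vacuo dispersion delay: Delta_t ~ (E/E_QG)**alpha * D/c.
YEAR = 3.156e7            # seconds; D/c for one light-year is one year

def delay_s(E_GeV, D_lyr, alpha=1, E_QG_GeV=1e19):
    return (E_GeV / E_QG_GeV) ** alpha * D_lyr * YEAR

print(delay_s(1.0, 1e10, alpha=1))   # ~3e-2 s: comparable to millisecond GRB time structure
print(delay_s(1.0, 1e10, alpha=2))   # ~3e-21 s: far beyond any conceivable sensitivity
```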
Motivated by the structure of “Liouville strings” (a non-critical string approach to quantum gravity which appears to admit a space-time foam picture) a phenomenological parametrization of quantum-gravity induced CPT violation in the neutral-kaon system has been proposed in Refs. . (Other studies of the phenomenology of CPT violation can be found in Ref. .) In estimating the parameters that appear in this phenomenological model the crucial point is as usual the overall suppression given by some power of the Planck length $`L_p\sim 1/E_p`$. For the case in which the Planck length enters only linearly in the relevant formulas, experiments investigating the properties of neutral kaons are already setting significant bounds on the parameters of this phenomenological approach . In summary, experiments are reaching significant sensitivity with respect to all of the frequently discussed features of quantum gravity that I mentioned at the beginning of this section: space-time fuzziness, violations of Lorentz invariance, and violations of CPT invariance. Other quantum-gravity experiments, which I shall discuss later in these notes, can probe other candidate quantum-gravity phenomena, giving additional breadth to quantum-gravity phenomenology. Before closing this section there is one more answer I should give: how could this happen in spite of all the gloomy forecasts which one finds in most quantum-gravity review papers? The answer is actually simple. Those gloomy forecasts were based on the observation that under ordinary conditions the direct detection of a single quantum-gravity phenomenon would be well beyond our capabilities if the magnitude of the phenomenon is suppressed by the smallness of the Planck length. For example, in particle-physics contexts this is seen in the fact that the contribution from “gravitons” (the conjectured mediators of quantum-gravity interactions) to particle-physics processes with center-of-mass energy $`\sqrt{s}`$ is expected to be penalized by overall factors given by some power of the ratio $`\sqrt{s}/(10^{19}GeV)`$. However, small effects can become observable in special contexts and in particular one can always search for an experimental setup such that a very large number of the very small quantum-gravity contributions are effectively summed together. This latter possibility is not unknown to the particle-physics community, since it has been exploited in the context of investigations of the particle-physics theories unifying the strong and electroweak interactions, where one encounters the phenomenon of proton decay. By keeping under observation very large numbers of protons, experimentalists have managed<sup>4</sup><sup>4</sup>4This author’s familiarity with the accomplishments of proton-decay experiments has certainly contributed to the moderate optimism for the outlook of quantum-gravity phenomenology which is found in these notes. to set highly significant bounds on proton decay , even though the proton-decay probability is penalized by the fourth power of the small ratio between the proton mass, which is of order $`1GeV`$, and the mass of the vector bosons expected to mediate proton decay, which is conjectured to be of order $`10^{16}GeV`$. Just like proton-decay experiments are based on a simple way to put together very many of the small proton-decay effects<sup>5</sup><sup>5</sup>5For each of the protons being monitored the probability of decay is extremely small, but there is a significantly large probability that at least one of the many monitored protons decays.
the experiments using modern interferometers to study space-time fuzziness and the experiments using GRBs to study violations of Lorentz invariance exploit simple ways to put together very many of the very small quantum-gravity effects. I shall explain this in detail in Sections 4 and 5. ## 3 ADDENDUM TO CONCLUSIONS: ANY HINTS TO THEORISTS FROM EXPERIMENTS? In the preceding section I have argued that quantum-gravity phenomenology, even though it is still in its infancy, is already starting to provide the first significant tests of plausible candidate quantum-gravity phenomena. It is of course just “scratching the surface” of whatever “volume” contains the full collection of experimental studies we might wish to perform, but we are finally getting started. Of course, a phenomenology programme is meant to provide input to the theorists working in the area, and therefore one measure of the achievements of a phenomenology programme is given by the impact it is having on theory studies. In the case of quantum-gravity experiments the flow of information from experiments to theory will take some time. The primary reason is that most quantum-gravity approaches have been guided (just because there was no alternative guidance from data) by various sorts of formal intuition for quantum gravity (which of course remain pure speculations as long as they are not confirmed by experiments). This is in particular true for the two most popular approaches to the unification of gravitation and quantum mechanics, i.e. “critical superstrings” and “canonical/loop quantum gravity” . Because of the type of intuition that went into them, it is not surprising that these “formalism-driven quantum gravity approaches” are proving extremely useful in providing us new ideas on how gravitation and quantum mechanics could resolve the apparent conflicts between their conceptual structures, but they are not giving us any ideas on which experiments could give insight into the nature of quantum gravity. The hope that these approaches could eventually lead to new intuitions for the nature of space-time at very short distances has so far been realized only to a rather limited extent. In particular, it is still unclear if and how these formalisms host the mentioned scenarios for quantum fluctuations of space-time and violations of Lorentz and/or CPT symmetries. The nature of the quantum-gravity vacuum (in the sense discussed in the preceding section) appears to be still very far ahead in the critical superstring research programme, and its analysis is only at a very preliminary stage within canonical/loop quantum gravity. In order for the experiments discussed in these notes to directly affect critical superstring research and research in canonical/loop quantum gravity it is necessary to make substantial progress in the analysis of the physical implications of these formalisms. Still, in an indirect way the recent results of quantum-gravity phenomenology have already started to have an impact on theory work in these quantum-gravity approaches. The fact that it is becoming clear that (at least a few) quantum-gravity experiments can be done has reenergized efforts to explore the physical implications of the formalisms. The best example of this way in which phenomenology can influence “pure theory” work is provided by Ref. , which was motivated by the results reported in Ref.
and showed that canonical/loop quantum gravity admits (under certain conditions, which in particular involve some parity breaking) the phenomenon of deformed dispersion relations, with the deformation going linearly with the Planck length. While the impact on theory work in the formalism-driven quantum gravity approaches is still quite limited, the new experiments are of course providing useful input for more intuitive/phenomenological theoretical work on quantum gravity. For example, the analysis reported in Refs. , by ruling out the scheme of distance fluctuations of Planck-length magnitude occurring at a rate of one per Planck time, has had significant impact on the line of research which has been deriving intuitive pictures of properties of quantum space-time from analyses of measurability and uncertainty relations . Similarly the “Liouville-string” inspired phenomenological approach to quantum gravity has already received important input from the mentioned studies of the neutral-kaon system and will receive equally important input from the mentioned GRB experiments, once these experiments (in a few years) reach Planck-scale sensitivity. ## 4 INTERFEROMETRY AND FUZZY SPACE-TIME In the preceding two sections I have described the conclusions which I believe to be supported by the present status of quantum-gravity phenomenology. Let me now start providing some support for those conclusions by reviewing my proposal of using modern interferometers to set bounds on space-time fuzziness. I shall articulate this in subsections because some preliminaries are in order. Before going to the analysis of experimental data it is in fact necessary to give a proper (operative) definition of fuzzy distance and a description of the type of stochastic properties one might expect of quantum-gravity-induced fluctuations of distances. ### 4.1 Operative definition of fuzzy distance While nearly all approaches to the unification of gravity and quantum mechanics appear to lead to a somewhat fuzzy picture of space-time, within the various formalisms it is often difficult to characterize this fuzziness physically. Rather than starting from formalism, I shall advocate an operative definition of fuzzy space-time. More precisely, for the time being I shall just consider the concept of fuzzy distance. I shall be guided by the expectation that at very short distances the sharp classical concept of distance should give way to a somewhat fuzzy distance. Since interferometers are ideally suited to monitor the distance between test masses, I choose as operative definition of quantum-gravity-induced fuzziness one which is expressed in terms of quantum-gravity-induced noise in the read-out of interferometers. In order to properly discuss this proposed definition it will prove useful to briefly review some aspects of the physics of modern Michelson-type interferometers. These are schematically composed of a (laser) light source, a beam splitter and two fully-reflecting mirrors placed at a distance $`L`$ from the beam splitter in orthogonal directions.
The light beam is decomposed by the beam splitter into a transmitted beam directed toward one of the mirrors and a reflected beam directed toward the other mirror; the beams are then reflected by the mirrors back toward the beam splitter, where they are superposed<sup>6</sup><sup>6</sup>6Although all modern interferometers rely on the technique of folded interferometer’s arms (the light beam bounces several times between the beam splitter and the mirrors before superposition), I shall just discuss the simpler “no-folding” conceptual setup. The readers familiar with the subject can easily realize that the observations here reported also apply to more realistic setups, although in some steps of the derivations the length $`L`$ would have to be understood as the optical length (given by the actual length of the arms multiplied by the number of foldings).. The resulting interference pattern is extremely sensitive to changes in the positions of the mirrors relative to the beam splitter. The achievable sensitivity is so high that planned interferometers with arm lengths $`L`$ of $`3`$ or $`4`$ $`km`$ expect to detect gravity waves of amplitude $`h`$ as low as $`3\cdot 10^{-22}`$ at frequencies of about $`100Hz`$. This roughly means that these modern gravity-wave interferometers should monitor the (relative) positions of their test masses (the beam splitter and the mirrors) with an accuracy of order $`10^{-18}m`$ and better. In achieving this remarkable accuracy experimentalists must deal with classical-physics displacement noise sources (e.g., thermal and seismic effects induce fluctuations in the relative positions of the test masses) and displacement noise sources associated with effects of ordinary quantum mechanics (e.g., the combined minimization of photon shot noise and radiation pressure noise leads to an irreducible noise source which has its root in ordinary quantum mechanics ). The operative definition of fuzzy distance which I advocate characterizes the corresponding quantum-gravity effects as an additional source of displacement noise. A theory in which the concept of distance is fundamentally fuzzy in this operative sense would be such that even in the idealized limit in which all classical-physics and ordinary-quantum-mechanics noise sources are completely eliminated the read-out of an interferometer would still be noisy as a result of quantum-gravity effects. Upon adopting this operative definition of fuzzy distance, interferometers are of course the natural tools for experimental tests of proposed distance-fuzziness scenarios. I am only properly discussing distance fuzziness, although ideas on space-time foam would also motivate investigations of time fuzziness. It is not hard to modify the definition here advocated for distance fuzziness to describe time fuzziness by replacing the interferometer with some device that keeps track of the synchronization of a pair of clocks<sup>7</sup><sup>7</sup>7Actually, a realistic analysis of ordinary Michelson-type interferometers is likely to lead to a contribution from space-time foam to noise levels that is the sum (in some appropriate sense) of the effects due to distance fuzziness and time fuzziness (e.g. associated to the frequency/time measurements involved).. I shall not pursue this matter further since I seem to understand<sup>8</sup><sup>8</sup>8This understanding is mostly based on recent conversations with G. Busca and P.
Thomann, who are involved in the next generation of ultra-precise clocks to be realized in microgravity (outer space) environments. that sensitivity to time fluctuations is still significantly behind the type of sensitivity to distance fluctuations achievable with modern Michelson-type experiments. ### 4.2 Random-walk noise from random-walk models of quantum space-time fluctuations As already mentioned in Section 2, it is plausible that a quantum space-time might involve fluctuations of magnitude $`L_p`$ occurring at a rate of roughly one per each time interval of magnitude $`t_p=L_p/c\sim 10^{-44}s`$. One can start investigating this scenario by considering the possibility that experiments monitoring the distance $`D`$ between two bodies for a time $`T_{obs}`$ (in the sense appropriate, e.g., for an interferometer) could involve a total effect amounting to $`n_{obs}\sim T_{obs}/t_p`$ randomly directed fluctuations<sup>9</sup><sup>9</sup>9One might actually expect even more than $`T_{obs}/t_p`$ fluctuations of magnitude $`L_p`$ in a time $`T_{obs}`$, depending on how frequently fluctuations occur in the region of space spanned by the distance $`D`$. This and other possibilities will later be modelled by replacing $`L_p`$ with a phenomenological scale $`L_{QG}`$ which could even depend on $`D`$. However, as mentioned in the Introduction, rather than focusing on the details of the physics of the fuzziness models, I am here discussing models from the point of view of a characterization of the levels of quantum-gravity sensitivity reached by recent experiments, and the scale $`L_{QG}`$ will be seen primarily from this perspective rather than attempting careful estimates in terms of one or another picture of space-time fluctuations. of magnitude $`L_p`$. An elementary analysis allows one to establish that in such a context the root-mean-square deviation $`\sigma _D`$ would be proportional to $`\sqrt{T_{obs}}`$: $$\sigma _D\sim \sqrt{cL_pT_{obs}}.$$ (2) From the type of $`T_{obs}`$-dependence of Eq. (2) it follows that the corresponding quantum fluctuations should have displacement amplitude spectral density $`S(f)`$ with the $`f^{-1}`$ dependence<sup>10</sup><sup>10</sup>10Of course, in light of the nature of the arguments used, one expects that an $`f^{-1}`$ dependence of the quantum-gravity induced $`S(f)`$ could only be valid for frequencies $`f`$ significantly smaller than the Planck frequency $`c/L_p`$ and significantly larger than the inverse of the time scale over which, even ignoring the gravitational field generated by the devices, the classical geometry of the space-time region where the experiment is performed manifests significant curvature effects. typical of “random walk noise” : $`S(f)=f^{-1}\sqrt{cL_p}.`$ (3) In fact, there is a general connection between $`\sigma \sim \sqrt{T_{obs}}`$ and $`S(f)\sim f^{-1}`$, which follows from the general relation $`\sigma ^2={\displaystyle \int _{1/T_{obs}}^{f_{max}}}[S(f)]^2𝑑f,`$ (4) valid for a frequency band limited from below only by the time of observation $`T_{obs}`$. The displacement amplitude spectral density (3) provides a very useful characterization of the random-walk model of quantum space-time fluctuations prescribing fluctuations of magnitude $`L_p`$ occurring at a rate of roughly one per each time interval of magnitude $`L_p/c`$. If somehow we have been assuming the wrong magnitude of distance fluctuations or the wrong rate (also see Subsection 4.4) but we have been correct in taking a random-walk model of quantum space-time fluctuations, Eq.
(3) should be replaced by $`S(f)=f^{-1}\sqrt{cL_{QG}},`$ (5) where $`L_{QG}`$ is the appropriate length scale that takes into account the correct values of magnitude and rate of the fluctuations. If one wants to be open to the possibility that the nature of the stochastic processes associated with quantum space-time is not exactly (also see Section 8) that of a random-walk model of quantum space-time fluctuations, then the $`f`$-dependence of the displacement amplitude spectral density could be different. This leads one to consider the more general parametrization $`S(f)=f^{-\beta }c^{\beta -\frac{1}{2}}(\mathcal{L}_\beta )^{\frac{3}{2}-\beta }.`$ (6) In this general parametrization the dimensionless quantity $`\beta `$ carries the information on the nature of the underlying stochastic processes, while the length scale $`\mathcal{L}_\beta `$ carries the information on the magnitude and rate of the fluctuations. I am assigning an index $`\beta `$ to $`\mathcal{L}_\beta `$ just in order to facilitate a concise description of experimental bounds; for example, if the fluctuations scenario with, say, $`\beta =0.6`$ were ruled out down to values of the effective length scale of order, say, $`10^{-27}m`$ I would just write $`\mathcal{L}_{\beta =0.6}<10^{-27}m`$. As I will discuss in Section 8, one might be interested in probing experimentally all values of $`\beta `$ in the range $`1/2\le \beta \le 1`$, with special interest in the cases $`\beta =1`$ (the case of random-walk models whose effective length scale I denominated with $`L_{QG}\equiv \mathcal{L}_{\beta =1}`$), $`\beta =5/6`$, and $`\beta =1/2`$. ### 4.3 Comparison with gravity-wave interferometer data Before discussing experimental bounds on $`\mathcal{L}_\beta `$ from gravity-wave interferometers, let us fully appreciate the significance of these bounds by getting some intuition on the actual magnitude of the quantum fluctuations I am discussing. One intuition-building observation is that even for the case $`\beta =1`$, which among the cases I consider is the one with the most virulent space-time fluctuations, the fluctuations predicted are truly minute: the $`\beta =1`$ relation (2) only predicts fluctuations with standard deviation of order $`10^{-5}m`$ over a time of observation as large as $`10^{10}`$ years (the size of the whole observable universe is about $`10^{10}`$ light years!!). In spite of the smallness of these effects, the precision of modern interferometers (the ones whose primary objective is the detection of the classical-gravity phenomenon of gravity waves) is such that we can obtain significant information at least on the scenarios with values of $`\beta `$ toward the high end<sup>11</sup><sup>11</sup>11As mentioned, for $`L_{QG}=L_p`$ the case $`\beta =1`$ corresponds to a mean-square deviation induced by the distance fluctuations that is only linearly suppressed by $`L_p`$: $`\sigma _D^2\sim L_pcT`$. Analogously, values of $`\beta `$ in the interval $`1/2<\beta <1`$ correspond to $`\sigma _D^2`$ suppressed by a power of $`L_p`$ between 1 and 2. The fact that we can only test values of $`\beta `$ toward the high end of the interval $`1/2\le \beta \le 1`$ can be intuitively characterized by stating that the fuzziness models we are able to test have $`\sigma _D^2`$ that is not much more than linearly suppressed by the Planck length. of the interesting interval $`1/2\le \beta \le 1`$, and in particular we can investigate quite sensitively the intuitive case of the random-walk model of space-time fluctuations.
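Both the $`10^{-5}m`$ figure and the equivalence between Eqs. (2) and (3) via the integral (4) are easy to verify numerically. The snippet below is a quick illustration (with $`L_p\approx 1.6\cdot 10^{-35}m`$; the frequency grid is an arbitrary implementation choice):

```python
import numpy as np

C, LP, YEAR = 2.998e8, 1.6e-35, 3.156e7

# Eq. (2): random-walk standard deviation over 10^10 years of observation
T = 1e10 * YEAR
print(np.sqrt(C * LP * T))     # ~4e-5 m, the "order 10^-5 m" quoted above

# Eq. (4) with S(f) from Eq. (3): integrating c*Lp/f^2 from 1/T upward
# (the integral is dominated by the low-frequency end) returns the same number
f = np.logspace(np.log10(1.0 / T), 6, 1_000_001)
print(np.sqrt(np.trapz(C * LP / f ** 2, f)))
```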
The operation of gravity-wave interferometers is based on the detection of minute changes in the positions of some test masses (relative to the position of a beam splitter). If these positions were affected by quantum fluctuations of the type discussed above, the operation of gravity-wave interferometers would effectively involve an additional source of noise due to quantum gravity. This observation allows us to set interesting bounds already using existing noise-level data obtained at the Caltech 40-meter interferometer, which has achieved displacement noise levels with amplitude spectral density lower than $`10^{-18}m/\sqrt{Hz}`$ for frequencies between $`200`$ and $`2000`$ $`Hz`$. While this is still very far from the levels required in order to probe significantly the lowest values of $`\beta `$ (for $`\mathcal{L}_{\beta =1/2}\sim L_p`$ and $`f\sim 1000Hz`$ the quantum-gravity noise induced in the $`\beta =1/2`$ scenario is only of order $`10^{-36}m/\sqrt{Hz}`$), these sensitivity levels clearly rule out all values of $`L_{QG}`$ (i.e. $`\mathcal{L}_{\beta =1}`$) down to the Planck length. Actually, even values of $`L_{QG}`$ significantly smaller than the Planck length are inconsistent with the data reported in Ref. ; in particular, from the observed noise level of $`3\cdot 10^{-19}m/\sqrt{Hz}`$ near $`450`$ $`Hz`$, which is the best achieved at the Caltech 40-meter interferometer, one obtains the bound $`L_{QG}\lesssim 10^{-40}m`$. As discussed above, the random-walk model of distance fuzziness with fluctuations of magnitude $`L_p`$ occurring at a rate of one per each $`t_p`$ time interval would correspond to the prediction $`L_{QG}\sim L_p\sim 10^{-35}m`$ and is therefore ruled out by these data. This experimental information implies that, if one were to insist on this type of model, realistic random-walk models of quantum space-time fluctuations would have to be significantly less noisy (smaller prediction for $`L_{QG}`$) than the intuitive one which is now ruled out. Since, as I shall discuss, there are rather plausible scenarios for significantly less noisy random-walk models, it is important that experimentalists keep pushing forward the bound on $`L_{QG}`$. More stringent bounds on $`L_{QG}`$ are within reach of the LIGO/VIRGO generation of gravity-wave interferometers.<sup>12</sup><sup>12</sup>12Besides allowing an improvement on the bound on $`L_{QG}`$ intended as a universal property of quantum gravity, the LIGO/VIRGO generation of interferometers will also allow us to explore the idea that $`L_{QG}`$ might be a scale that depends on the experimental context in such a way that larger interferometers pick up more of the space-time fluctuations. Based on the intuition coming from the Salecker-Wigner limit (here reviewed in Section 8), or just simply on phenomenological models in which distance fluctuations affect equally each $`L_p`$-long segment of a given distance, it would not be surprising if $`L_{QG}`$ were a growing function of the length of the arms of the interferometer. This gives added significance to the step from the 40-meter arms of the existing Caltech interferometer to the few-km arms of the LIGO/VIRGO interferometers.
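The two orders of magnitude just quoted — the random-walk noise implied by $`L_{QG}=L_p`$ and the tiny $`\beta =1/2`$ noise — follow directly from the parametrization (6). Here is a short illustrative evaluation (the only inputs are the Planck length and the frequencies mentioned above):

```python
C, LP = 2.998e8, 1.6e-35

def S(f, beta, L=LP):          # Eq. (6) with the length scale set to L_p
    return f ** (-beta) * C ** (beta - 0.5) * L ** (1.5 - beta)

print(S(450.0, 1.0))    # ~1.5e-16 m/sqrt(Hz): far above the 3e-19 noise floor,
                        # so L_QG = L_p is excluded, as stated above
print(S(1000.0, 0.5))   # ~5e-37 m/sqrt(Hz): the "order 10^-36" beta = 1/2 level
```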
In planning future experiments, possibly tailored to test these effects (unlike LIGO and VIRGO, which were designed around the properties needed for gravity-wave detection), it is important to observe that an experiment achieving displacement noise levels with amplitude spectral density $`S^{*}`$ at frequency $`f^{*}`$ will set a bound on $`\ell _\beta `$ of order $`\ell _\beta <\left[S^{*}(f^{*})^\beta c^{(1-2\beta )/2}\right]^{2/(3-2\beta )},`$ (7) which in particular for random-walk models takes the form $`\ell _\beta <\left[{\displaystyle \frac{S^{*}f^{*}}{\sqrt{c}}}\right]^2.`$ (8) The structure of Eq. (7) (and Eq. (8)) shows that there can be instances in which a very large interferometer (the ideal tool for low-frequency studies) might not be better than a smaller interferometer, if the smaller one achieves a very small value of $`S^{*}`$. The formula (7) can also be used to describe as a function of $`\beta `$ the bounds on $`\ell _\beta `$ achieved by the data collected at the Caltech 40-meter interferometer. Using again the fact that a noise level of only $`S^{*}\simeq 3\times 10^{-19}m/\sqrt{Hz}`$ near $`f^{*}\simeq 450`$ $`Hz`$ was achieved , one obtains the bounds $`[\ell _\beta ]_{caltech}<\left[{\displaystyle \frac{3\times 10^{-19}m}{\sqrt{Hz}}}(450Hz)^\beta c^{(1-2\beta )/2}\right]^{2/(3-2\beta )}.`$ (9) Let me comment in particular on the case $`\beta =5/6`$, which might deserve special attention because of its connection (which was derived in Refs. and will be reviewed here in Section 8) with certain arguments for bounds on the measurability of distances in quantum gravity . From Eq. (9) we find that $`\ell _{\beta =5/6}`$ is presently bound to the level $`\ell _{\beta =5/6}\lesssim 10^{-29}m`$. This bound is remarkably stringent in absolute terms, but is still quite far from the range of values one ordinarily considers as likely candidates for length scales appearing in quantum gravity. A more significant bound on $`\ell _{\beta =5/6}`$ should be obtained by the LIGO/VIRGO generation of gravity-wave interferometers. For example, it is plausible that the “advanced phase” of LIGO will achieve a displacement noise spectrum of less than $`10^{-20}m/\sqrt{Hz}`$ near $`100`$ $`Hz`$ and this would probe values of $`\ell _{\beta =5/6}`$ as small as $`10^{-34}m`$. Interferometers are our best long-term hope for the development of this phenomenology, and that is why the analysis in this Section focuses on interferometers. However, it should be noted that among detectors already in operation the best bound on $`\ell _\beta `$ (if taken as universal) comes from resonant-bar detectors such as NAUTILUS , which achieved displacement sensitivity of about $`10^{-21}m/\sqrt{Hz}`$ near $`924Hz`$. Correspondingly, one obtains the bound $`[\ell _\beta ]_{bars}<\left[{\displaystyle \frac{10^{-21}m}{\sqrt{Hz}}}(924Hz)^\beta c^{(1-2\beta )/2}\right]^{2/(3-2\beta )}.`$ (10) In closing this subsection on interferometry data analysis relevant for space-time fuzziness scenarios, let me clarify how it happened that such small effects could be tested. As I already mentioned, one of the viable strategies for quantum-gravity experiments is to find ways to put together very many of the very small quantum-gravity effects. In the interferometric studies that I proposed in Ref. one does indeed effectively sum up a large number of quantum space-time fluctuations. In a time of observation as long as the inverse of the typical gravity-wave interferometer frequency of operation an extremely large number of minute quantum fluctuations could affect the distance between the test masses.
Although these fluctuations average out, they do leave traces in the interferometer. These traces grow with the time of observation: the standard deviation increases with the time of observation, while the displacement noise amplitude spectral density increases with the inverse of frequency (which again effectively means that it increases with the time of observation). From this point of view it is not surprising that plausible quantum-gravity scenarios ($`1/2\le \beta \le 1`$) all involve higher noise at lower frequencies: the observation of lower frequencies requires longer times and is therefore affected by a larger number of quantum-gravity fluctuations.

### 4.4 Less noisy random-walk models of distance fluctuations?

The most intuitive result obtained in Refs. and reviewed in the preceding subsection is that we can rule out the picture in which the distances between the test masses of the interferometer are affected by fluctuations of magnitude $`L_p`$ occurring at a rate of one per each $`t_p`$ time interval. Does this completely rule out the possibility of a random-walk model of distance fluctuations? Or are we just learning that the most intuitive/naive example of such a model does not work, while other plausible random-walk models remain viable? Without wanting to embark on a discussion of the plausibility of less noisy random-walk models, I shall nonetheless discuss some ideas which could lead to this noise reduction. Let me start by observing that certain studies of measurability of distances in quantum gravity (see Ref. and the brief review of those arguments which is provided in parts of Section 8) can be interpreted as suggesting that $`L_{QG}`$ might not be a universal length scale, i.e. it might depend on some specific properties of the experimental setup (particularly the energies/masses involved), and in some cases $`L_{QG}`$ could be significantly smaller than $`L_p`$. Another possibility one might want to consider is the one in which the quantum properties of space-time are such that fluctuations of magnitude $`L_p`$ would occur with frequency somewhat lower than $`1/t_p`$. This might happen for various reasons, but a particularly intriguing possibility<sup>13</sup><sup>13</sup>13This possibility emerged in discussions with Gabriele Veneziano. In response to my comments on the possibility of fluctuations with frequency somewhat lower than $`1/t_p`$ Gabriele made the suggestion that extended fundamental objects might be less susceptible than point particles to very localized space-time fluctuations. It would be interesting to work out in some detail an example of a dynamical model of strings in a fuzzy space-time. is the one of theories whose fundamental objects are not pointlike, such as the popular string theories. For such theories it is plausible that fluctuations occurring at the Planck-distance level might have only a modest impact on extended fundamental objects characterized by a length scale significantly larger than the Planck length (e.g. in string theory the string size, or “length”, might be an order of magnitude larger than the Planck length). This possibility is interesting in general for quantum-gravity theories with a hierarchy of length scales, such as certain “M-theory motivated” scenarios with an extra length scale associated to the compactification from 11 to 10 dimensions. Yet another possibility for a random-walk model to cause less noise in interferometers could emerge if somehow the results of the schematic analysis adopted here and in Refs.
turned out to be significantly modified once we become capable of handling all of the details of a real interferometer. To clarify which type of details I have in mind let me mention as an example the fact that in my analysis the structure of the test masses was not taken into account in any way: they were essentially treated as point-like. It would not be too surprising if we eventually became able to construct theoretical models taking into account the interplay between the binding forces that keep together (“in one piece”) a macroscopic test mass and some random-walk-type fundamental fluctuations of the space-time in which these macroscopic bodies “live”. The interference pattern observed in the laboratory reflects the space-time fluctuations only as filtered through their interplay with the mentioned binding forces of the macroscopic test masses. These open issues are certainly important and a lot of insight could be gained through their investigation, but there is also some confusion that might easily result<sup>14</sup><sup>14</sup>14In particular, as I emphasized in Ref. , these and other elements of confusion are responsible for the incorrect conclusions on the Salecker-Wigner measurability limit which were drawn in the very recent Ref. . from simple-minded considerations (possibly guided by intuition developed using rudimentary table-top interferometers) concerning the macroscopic nature of the test masses used in modern interferometers. In closing this section let me try to offer a few relevant clarifications. I need to start by adding some comments on the stochastic processes I have been considering. In most physical contexts a series of random steps does not lead to a $`\sqrt{T_{obs}}`$ dependence of $`\sigma `$, because the fluctuation-dissipation theorem typically introduces some sort of restoring effect that (partly) compensates the source of the $`\sqrt{T_{obs}}`$ dependence. The hypothesis explored in these discussions of random-walk models of space-time fuzziness is that the underlying dynamics of quantum space-time is such that the fluctuation-dissipation theorem is satisfied without spoiling the $`\sqrt{T_{obs}}`$ dependence of $`\sigma `$. This is an intuition which apparently is shared by other authors; for example, the study reported in Ref. (which followed Ref. by a few months, but clearly was the result of completely independent work) also models some implications of quantum space-time (the ones that affect clocks) with stochastic processes whose underlying dynamics does not produce any dissipation, so that the “fluctuation contribution” to the $`T_{obs}`$ dependence is left unmodified, although the fluctuation-dissipation theorem is fully taken into account. Since a mirror of an interferometer of LIGO/VIRGO type is in practice an extremity of a pendulum, another aspect that the reader might at first find counter-intuitive is the fact that the $`\sqrt{T_{obs}}`$ dependence of $`\sigma `$, although coming in with a very small prefactor, would for extremely large $`T_{obs}`$ seem to give values of $`\sigma `$ too large to be consistent with the structure of a pendulum. This is a misleading intuition which originates from experience with ordinary (non-quantum-gravity) analyses of the pendulum. In fact, the dynamics of an ordinary pendulum has one extremity “fixed” to a very heavy macroscopic and rigid body, while the other extremity is fixed to a much lighter (but, of course, still macroscopic) body.
The usual stochastic processes considered in the study of the pendulum affect the heavier body in a totally negligible way, while they have strong impact on the dynamics of the lighter body. A pendulum analyzed according to a random-walk model of space-time fluctuations would be affected by stochastic processes which are of the same magnitude both for its heavier and its lighter extremity. \[The bodies are fluctuating along with the intrinsic space-time fluctuations, rather than fluctuating as a result of, say, collisions with material particles occurring in a conventional space-time.\] In particular, in the directions orthogonal to the vertical axis the stochastic processes affect the position of the center of mass of the entire pendulum just as they would affect the position of the center of mass of any other body (the spring that connects the two extremities of the pendulum would not affect the motion of the overall center of mass of the pendulum). With respect to the application of some of these considerations to modern gravity-wave interferometers it is also important to keep in mind that the measurement strategy of these interferometers requires that their test masses be free-falling.

## 5 GAMMA-RAY BURSTS AND IN-VACUO DISPERSION

Let me now discuss the proposal put forward in Ref. (also see Ref. ), which exploits the recent confirmation that at least some gamma-ray bursters are indeed at cosmological distances , making it possible for observations of these to provide interesting constraints on the fundamental laws of physics. In particular, such cosmological distances combine with the short time structure seen in emissions from some GRBs to provide ideal features for tests of possible in vacuo dispersion of electromagnetic radiation from GRBs, of the type one might expect based on the intuitive quantum-gravity arguments reviewed in Section 2. As mentioned, a quantum-gravity-induced deformation of the dispersion relation for photons would naturally take the form $`c^2𝐩^2=E^2[1+\mathcal{F}(E/E_{QG})]`$, where $`E_{QG}`$ is an effective quantum-gravity energy scale and $`\mathcal{F}`$ is a model-dependent function of the dimensionless ratio $`E/E_{QG}`$. In quantum-gravity scenarios in which the Hamiltonian equation of motion $`\dot{x}_i=\partial H/\partial p_i`$ is still valid (at least approximately valid; valid to an extent sufficient to justify the analysis that follows) such a deformed dispersion relation would lead to energy-dependent velocities for massless particles, with implications for the electromagnetic signals that we receive from astrophysical objects at large distances. At small energies $`E\ll E_{QG}`$, it is reasonable to expect that a series expansion of the dispersion relation should be applicable, leading to the formula (1). For the case $`\alpha =1`$, which is the most optimistic (largest quantum-gravity effect) among the cases discussed in the quantum-gravity literature, the formula (1) reduces to

$$c^2𝐩^2\simeq E^2\left(1+\xi \frac{E}{E_{QG}}\right).$$ (11)

Correspondingly one would predict the energy-dependent velocity formula $`v={\displaystyle \frac{\partial E}{\partial p}}\simeq c\left(1-\xi {\displaystyle \frac{E}{E_{QG}}}\right).`$ (12) To elaborate a bit more than I did in Section 2 on the intuition that leads to this type of candidate quantum-gravity effect, let me observe that velocity dispersion such as described in (12) could result from a picture of the vacuum as a quantum-gravitational ‘medium’, which responds differently to the propagation of particles of different energies and hence velocities.
This is analogous to propagation through a conventional medium, such as an electromagnetic plasma . The gravitational ‘medium’ is generally believed to contain microscopic quantum fluctuations, such as the ones considered in the previous sections. These may be somewhat analogous to the thermal fluctuations in a plasma, which occur on time scales of order $`t\sim 1/T`$, where $`T`$ is the temperature. Since it is a much ‘harder’ phenomenon associated with new physics at an energy scale far beyond typical photon energies, any analogous quantum-gravity effect could be distinguished by its different energy dependence: the quantum-gravity effect would increase with energy, whereas conventional medium effects decrease with energy in the range of interest . Also relevant for building some quantum-gravity intuition for this type of in vacuo dispersion and deformed velocity law is the observation that this has implications for the measurability of distances in quantum gravity that fit well with the intuition emerging from heuristic analyses based on a combination of arguments from ordinary quantum mechanics and general relativity. \[This connection between dispersion relations and measurability bounds will be here reviewed in Section 8.\] Notably, recent work has provided evidence for the possibility that the popular canonical/loop quantum gravity might be among the theoretical approaches that admit the phenomenon of deformed dispersion relations with the deformation going linearly with the Planck length ($`L_p\sim 1/E_p`$). Similarly, evidence for this type of dispersion relations has been found in Liouville (non-critical) strings , whose development was partly motivated by an intuition concerning the “quantum-gravity vacuum” that is rather close to the one traditionally associated to the works of Wheeler and Hawking . Moreover, the phenomenon of deformed dispersion relations with the deformation going linearly with the Planck length fits rather naturally within certain approaches based on non-commutative geometry and deformed symmetries. In particular, there is growing evidence for this phenomenon in theories living in the non-commutative Minkowski space-time proposed in Refs. , which involves a dimensionful (presumably Planck-length related) deformation parameter. Equation (12) encodes a minute modification for most practical purposes, since $`E_{QG}`$ is believed to be a very high scale, presumably of order the Planck scale $`E_p`$. Nevertheless, such a deformation could be rather significant for even moderate-energy signals, if they travel over very long distances. According to (12) two signals respectively of energy $`E`$ and $`E+\mathrm{\Delta }E`$ emitted simultaneously from the same astrophysical source in traveling a distance $`L`$ acquire a “relative time delay” $`|\delta t|`$ given by $`|\delta t|\simeq {\displaystyle \frac{\mathrm{\Delta }E}{E_{QG}}}{\displaystyle \frac{L}{c}}.`$ (13) Such a time delay can be observable if $`\mathrm{\Delta }E`$ and $`L`$ are large while the time scale over which the signal exhibits time structure is small. As mentioned, these are the respects in which GRBs offer particularly good prospects for such measurements. Typical photon energies in GRB emissions are in the range $`0.1`$–$`100`$ MeV , and it is possible that the spectrum might in fact extend up to TeV energies .
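Before turning to the observational features that make GRBs so valuable, here is a minimal numerical sketch of Eq. (13) (the photon energies, distance and Planck-scale value of $`E_{QG}`$ are illustrative choices of mine):

```python
c = 3.0e8        # m/s
MPC = 3.086e22   # meters per megaparsec

def time_delay(dE_GeV, L_mpc, E_QG_GeV=1.2e19):
    """Relative time delay of Eq. (13): |dt| ~ (dE/E_QG)*(L/c), in seconds."""
    return (dE_GeV / E_QG_GeV) * (L_mpc * MPC / c)

# Photons differing by ~10 MeV from ~3000 Mpc (~1e10 light years), with E_QG
# at the Planck scale: the delay is comparable to millisecond time structure.
print(time_delay(dE_GeV=0.01, L_mpc=3000.0))   # ~3e-4 s
```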
Moreover, time structure down to the millisecond scale has been observed in the light curves , as is predicted in the most popular theoretical models involving merging neutron stars or black holes, where the last stages occur on the time scales associated with grazing orbits. Similar time scales could also occur in models that identify GRBs with other cataclysmic stellar events such as failed supernovae Ib, young ultra-magnetized pulsars or the sudden deaths of massive stars . We see from equations (12) and (13) that a signal with millisecond time structure in photons of energy around 10 MeV coming from a distance of order $`10^{10}`$ light years, which is well within the range of GRB observations and models, would be sensitive to $`E_{QG}`$ of order the Planck scale. In order to set a definite bound on $`E_{QG}`$ it is necessary to measure $`L`$ and to measure the time of arrival of different energy/wavelength components of a sharp peak within the burst. From Eq. (13) it follows that one could set a bound $`E_{QG}>\mathrm{\Delta }E{\displaystyle \frac{L}{c|\tau |}}`$ (14) by establishing the times of arrival of the peak to be the same up to an uncertainty $`\tau `$ in two energy channels $`E`$ and $`E+\mathrm{\Delta }E`$. Unfortunately, at present we have data available only on a few GRBs for which the distance $`L`$ has been determined (the first measurements of this type were obtained only in 1997), and these are the only GRBs which can be reliably used to set bounds on the new effect. Moreover, mostly because of the nature of the relevant experiments (particularly the BATSE detector on the Compton Gamma Ray Observatory), for a large majority of the GRBs on record only the portion of the burst with energies up to the MeV energy scale was observed, whereas higher energies would be helpful for the study of the phenomenon of quantum-gravity-induced dispersion here considered (which increases linearly with energy). We expect significant improvements in the coming years. The number of observed GRBs with associated distance measurement should rapidly increase. A new generation of orbiting spectrometers, e.g. AMS and GLAST , is being developed, whose potential sensitivities are very impressive. For example, assuming an $`E^{-2}`$ energy spectrum, GLAST would expect to observe about 25 GRBs per year at photon energies exceeding 100 GeV, with time resolution of microseconds. AMS would observe a similar number at $`E>10`$ GeV with time resolution below 100 nanoseconds. While we wait for these new experiments, preliminary bounds can already be set with available data. Some of these bounds are “conditional” in the sense that they rely on the assumption that the relevant GRB originated at distances corresponding to redshift of $`𝒪(1)`$ (corresponding to a distance of $`\sim 3000`$ Mpc), which appears to be typical. Let me start by considering the “conditional” bound (first considered in Ref. ) which can be obtained from data on GRB920229. GRB920229 exhibited micro-structure in its burst at energies up to $`200`$ keV. In Ref. it was estimated conservatively that a detailed time-series analysis might reveal coincidences in different BATSE energy bands on a time-scale $`\sim 10^{-2}`$ s, which, assuming redshift of $`𝒪(1)`$ (the redshift of GRB920229 was not measured) would yield sensitivity to $`E_{QG}\sim 10^{16}`$ GeV (it would allow one to set a bound $`E_{QG}>10^{16}`$ GeV). As observed in Ref. , a similar sensitivity might be obtainable with GRB980425, given its likely identification with the unusual supernova 1998bw .
This is known to have taken place at a redshift $`z=0.0083`$ corresponding to a distance $`D\simeq 40`$ Mpc (for a Hubble constant of 65 km sec<sup>-1</sup>Mpc<sup>-1</sup>) which is rather smaller than a typical GRB distance. However GRB980425 was observed by BeppoSAX at energies up to $`1.8`$ MeV, which gains back an order of magnitude in the sensitivity. If a time-series analysis were to reveal structure at the $`\mathrm{\Delta }t\sim 10^{-3}`$ s level, which is typical of many GRBs , it would yield the same sensitivity as GRB920229 (but in this case, in which a redshift measurement is available, one would have a definite bound, whereas GRB920229 only provides a “conditional” bound of the type discussed above). Ref. also observed that an interesting (although not very “robust”) bound could be obtained using GRB920925c, which was observed by WATCH and possibly in high-energy $`\gamma `$ rays by the HEGRA/AIROBICC array above $`20`$ TeV . Several caveats are in order: taking into account the appropriate trial factor, the confidence level for the signal seen by HEGRA to be related to GRB920925c is only $`99.7\%`$ ($`2.7\sigma `$), the reported directions differ by $`9^{\circ }`$, and the redshift of the source is unknown. Nevertheless, the potential sensitivity is impressive. The events reported by HEGRA range up to $`E\sim 200`$ TeV, and the correlation with GRB920925c is within $`\mathrm{\Delta }t\sim 200`$ s. Making the reasonably conservative assumption that GRB920925c occurred no closer than GRB980425, viz. $`\sim 40`$ Mpc, one finds a minimum sensitivity to $`E_{QG}\sim 10^{19}`$ GeV, modulo the caveats listed above. Even more spectacularly, several of the HEGRA GRB920925c candidate events occurred within $`\mathrm{\Delta }t\sim 1`$ s, providing a potential sensitivity even two orders of magnitude higher. As illustrated by this discussion, the GRBs have remarkable potential for the study of in vacuo dispersion, which will certainly lead to impressive bounds/tests as soon as improved experiments are put into operation, but at present the best GRB-based bounds are either “conditional” (example of GRB920229) or “not very robust” (example of GRB920925c). As a result, at present the best (reliable) bound has been extracted using data from the Whipple telescope on a TeV $`\gamma `$-ray flare associated with the active galaxy Mrk 421. This object has a redshift of 0.03 corresponding to a distance of $`\sim 100`$ Mpc. Four events with $`\gamma `$-ray energies above 2 TeV have been observed within a period of $`\sim 280`$ s. These provide a definite limit $`E_{QG}>4\times 10^{16}`$ GeV. In passing let me mention that (as observed in Ref. ) pulsars and supernovae, which are among the other astrophysical phenomena that might at first sight appear well suited for the study of in vacuo dispersion, do not actually provide interesting sensitivities. Although pulsar signals have very well-defined time structure, they are at relatively low energies and are presently observable over distances of at most $`10^4`$ light years. If one takes an energy of order $`1`$ eV and generously postulates a sensitivity to time delays as small as $`1\mu `$sec, one nevertheless reaches only an estimated sensitivity to $`E_{QG}\sim 10^9`$ GeV. With new experiments such as AXAF it may be possible to detect X-ray pulsars out to $`10^6`$ light years, but this would at best push the sensitivity up to $`E_{QG}\sim 10^{11}`$ GeV.
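The bounds just quoted all follow from Eq. (14). The sketch below reproduces their orders of magnitude with round numbers taken from the text (the published Mrk 421 limit of $`4\times 10^{16}`$ GeV comes from a more careful analysis), together with the generalization of Eq. (14) to $`\alpha \ne 1`$ that appears below as Eq. (15):

```python
c, MPC = 3.0e8, 3.086e22

def E_QG_bound(dE_GeV, L_mpc, tau_s, E_GeV=0.0, alpha=1):
    """Eq. (14), and its alpha != 1 generalization (Eq. (15)): bound in GeV."""
    ratio = L_mpc * MPC / (c * tau_s)          # dimensionless L/(c*tau)
    return (((E_GeV + dE_GeV)**alpha - E_GeV**alpha) * ratio) ** (1.0/alpha)

print(E_QG_bound(2e-4, 3000.0, 1e-2))          # GRB920229-style: ~6e15 GeV
print(E_QG_bound(2e3, 100.0, 280.0))           # Mrk 421: ~7e16 GeV
print(E_QG_bound(2e3, 100.0, 280.0, alpha=2))  # quadratic case: ~1e10 GeV, far weaker
```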
Concerning supernovae, it is important to take into account that neutrinos<sup>15</sup><sup>15</sup>15Of course, at present we should be open to the possibility that the velocity law (12) might apply to all massless particles, but it is also plausible that the correct quantum-gravity velocity law would depend on the spin of the particle. It would therefore be important to set up a phenomenological programme that studies neutrinos with the same level of sensitivities that GRBs and other astrophysical phenomena allow for the study of the velocity law of the photon. from Type II events similar to SN1987a, which should have energies up to about 100 MeV with a time structure that could extend down to milliseconds, are likely to be detectable at distances of up to about $`10^5`$ light years, providing sensitivity to $`E_{QG}\sim 10^{15}`$ GeV, which is remarkable in absolute terms, but is still significantly far from the Planck scale and anyway cannot compete with the type of sensitivity achievable with GRBs. It is rather amusing that GRBs can provide such a good laboratory for investigations of in vacuo dispersion in spite of the fact that the short-time structure of GRB signals is still not understood. The key point of the proposal in Ref. is that sensitive tests can be performed through the serendipitous detection of short-scale time structure at different energies in GRBs which are established to be at cosmological distances. Detailed features of burst time series enable (as already shown in several examples) the emission times in different energy ranges to be put into correspondence. Any time shift due to quantum gravity would increase with the photon energy, and this characteristic dependence is separable from more conventional in-medium-physics effects, which decrease with energy. To distinguish any quantum-gravity induced shift from effects due to the source, one can use the fact that the quantum-gravity effect here considered is linear in the GRB distance. This last remark applies to all values of $`\alpha `$, but most of the observations and formulas in this section are only valid in the case $`\alpha =1`$ (linear suppression). The generalization to cases with $`\alpha \ne 1`$ is however rather simple; for example, Eq. (14) takes the form (up to coefficients of order 1) $`E_{QG}>\left[\left[(E+\mathrm{\Delta }E)^\alpha -E^\alpha \right]{\displaystyle \frac{L}{c|\tau |}}\right]^{1/\alpha }.`$ (15) Notice that here, because of the non-linearity, the right-hand side depends both on $`E`$ and $`\mathrm{\Delta }E`$. Before moving on to other experiments let me clarify the key ingredient of this experiment using observations of gamma rays from distant astrophysical sources (the ingredient that rendered the minute quantum-gravity effects observable). This ingredient is very similar to the one relevant for the studies of space-time fuzziness using modern interferometers which I discussed in the preceding section; in fact, the gamma rays here considered are affected by a very large number of the minute quantum-gravity effects. Each of the dispersion-inducing quantum-gravity effects is very small, but the gamma rays emitted by distant astrophysical sources travel for a very long time before reaching us and can therefore be affected by an extremely large number of such effects.

## 6 OTHER QUANTUM-GRAVITY EXPERIMENTS

In this section I provide brief reviews of some other quantum-gravity experiments.
The fact that the discussion here provided for these experiments is less detailed than the preceding discussions of the interferometry-based and GRB-based experiments is not to be interpreted as indicating that these experiments are somehow less significant: it is just that a detailed discussion of a couple of examples was sufficient to provide the reader with some general intuition on the strategy behind quantum-gravity experiments, and it was natural for me to use as examples the ones I am most familiar with. For the experiments discussed in this section I shall just give a rough idea of the quantum-gravity scenarios that could be tested and of the experimental procedures which have been proposed.

### 6.1 Neutral kaons and CPT violation

One of the formalisms that has been proposed for the evolution equations of particles in the space-time foam relies on a density-matrix picture. The foam is seen as providing a sort of environment inducing quantum decoherence even on isolated systems (i.e. systems which only interact with the foam). A given non-relativistic system (such as the neutral kaons studied by the CPLEAR collaboration at CERN) is described by a density matrix $`\rho `$ that satisfies an evolution equation analogous to the one ordinarily used for the quantum mechanics of certain open systems: $`\partial _t\rho =i[\rho ,H]+\delta H\rho `$ (16) where $`H`$ is the ordinary Hamiltonian and $`\delta H`$, which has a non-commutator structure , represents the effects of the foam. $`\delta H`$ is expected to be extremely small, suppressed by some power of the Planck length. The precise form of $`\delta H`$ (which in particular would set the level of the new physics by establishing how many powers of the Planck length suppress the effect) has not yet been derived from some full-grown quantum gravity<sup>16</sup><sup>16</sup>16Within the quantum-gravity approach here reviewed in Subsection 11.2, which only attempts to model certain aspects of quantum gravity, such a direct calculation might soon be performed., and therefore phenomenological parametrizations have been introduced (see Refs. ). For the case in which the effects are only suppressed by one power of the Planck length (linear suppression) recent neutral-kaon experiments, such as the ones performed by CPLEAR, have set significant bounds on the associated CPT-violation effects, and forthcoming experiments are likely to push these bounds even further. Like the interferometry-based and the GRB-based experiments, these experiments (which have the added merit of having started the recent wave of quantum-gravity proposals) also appear to provide significant quantum-gravity tests. As mentioned, the effect of quantum-gravity-induced decoherence certainly qualifies as a traditional quantum-gravity subject, and the level of sensitivity reached by the neutral-kaon studies is certainly significant (as in the case of in vacuo dispersion and GRBs, one would like to be able to explore also the case of a quadratic Planck-length suppression, but it is nonetheless very significant that we have at least reached the capability to test the case of linear suppression). Also in this case it is natural to ask: how come we could manage this? What strategy allowed these neutral-kaon studies to evade the traditional gloomy forecasts for quantum-gravity phenomenology?
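Part of the answer, spelled out next, is quantitative, and can be previewed with a small numerical check (a sketch: the kaon mass, mass difference and width are standard values I insert here, and reproducing the quoted $`2\times 10^{-19}`$ evidently requires taking $`E_p`$ around $`2.4\times 10^{18}`$ GeV):

```python
# The delicate balance of scales of the neutral-kaon system (all values in GeV).
M_K     = 0.4976     # kaon mass
dM      = 3.48e-15   # |M_L - M_S|
Gamma_S = 7.35e-15   # width of the short-lived kaon (Gamma_L is ~500x smaller)
E_p     = 2.4e18     # Planck-scale value reproducing the ratio quoted in the text

print(M_K / E_p)       # ~2e-19: the linear Planck-suppression ratio
print(dM / M_K)        # ~7e-15
print(Gamma_S / M_K)   # ~1.5e-14: |Gamma_L - Gamma_S|/M to good accuracy
```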
While, as discussed above, in the interferometry-based and the GRB-based experiments the crucial element in the experimental proposal is the possibility to put together many quantum-gravity effects, in the case of the neutral-kaon experiments the crucial element in the experimental proposal is provided by the very delicate balance of scales that characterizes the neutral-kaon system. In particular, it just happens to be true that the dimensionless ratio setting the order of magnitude of quantum-gravity effects in the linear suppression scenario, which is $`c^2M_{L,S}/E_p\sim 2\times 10^{-19}`$, is not much smaller than some of the dimensionless ratios characterizing the neutral-kaon system, notably the ratio $`|M_L-M_S|/M_{L,S}\sim 7\times 10^{-15}`$ and the ratio $`|\mathrm{\Gamma }_L-\mathrm{\Gamma }_S|/M_{L,S}\sim 1.4\times 10^{-14}`$. This renders it possible for the quantum-gravity effects to provide observably large corrections to the physics of neutral kaons.

### 6.2 Interferometry and string cosmology

Up to this point I have only reviewed experiments probing foamy properties of space-time in the sense of Wheeler and Hawking. A different type of quantum-gravity effect which might produce a signature strong enough for experimental testing has been discussed in the context of studies of a cosmology based on critical superstrings . While for a description of this cosmology and of its physical signatures I must only refer the reader to the recent reviews in Ref. , I want to briefly discuss here the basic ingredients of the proposal of interferometry-based tests of the stochastic gravity-wave background predicted by string cosmology. In string cosmology the universe starts from a state of very small curvature, then goes through a long phase of dilaton-driven inflation reaching nearly Planckian energy density, and then eventually reaches the standard radiation-dominated cosmological evolution . The period of nearly Planckian energy density plays a crucial role in rendering the quantum-gravity effects observable. In fact, this example based on string cosmology is quite different from the experiments I discussed earlier in these lectures also because it does not involve small quantum-gravity effects which are somehow amplified (in the sense for example of the amplification provided when many effects are somehow put together). The string cosmology involves a period in which the quantum-gravity effects are actually quite large. As clarified in Refs. , planned gravity-wave detectors such as LIGO might be able to detect the faint residual traces of these strong effects that occurred in a far past. As mentioned, the quantum-gravity effects that, within string cosmology, leave a trace in the gravity-wave background are not of the type that requires an active Wheeler-Hawking foam. The relevant quantum-gravity effects live in the more familiar vacuum which we are used to encountering in the context of ordinary gauge theory. (Actually, for the purposes of the analyses reported in Refs. quantum gravity could be seen as an ordinary gauge theory, although with unusual gauge-field content.) In the case of the Wheeler-Hawking foam one is tempted to visualize the vacuum as boiling with (virtual) worm-holes and black-holes. Instead the effects relevant for the gravity-wave background predicted by string cosmology are more conventional field-theory-type fluctuations, although carrying gravitational degrees of freedom, like the graviton. Also from this point of view the experimental proposal discussed in Refs.
probes a set of candidate quantum-gravity phenomena which is complementary to the ones I have reviewed earlier in these notes.

### 6.3 Matter interferometry and primary state diffusion

The studies reported in Ref. (and references therein) have considered how certain effectively stochastic properties of space-time would affect the evolution of quantum-mechanical states. The stochastic properties there considered are different from the ones discussed here in Sections 2, 3 and 4, but were introduced within a similar viewpoint, i.e. stochastic processes as an effective description of quantum space-time processes. The implications of these stochastic properties for the evolution of quantum-mechanical states were modeled via the formalism of “primary state diffusion”, but only rather crude models turned out to be treatable. The approach proposed in Ref. actually puts together some of the unknowns of space-time foam and the specific properties of “primary state diffusion”. The structure of the predicted effects cannot be simply discussed in terms of elementary properties of space-time foam and a simple interpretation in terms of symmetry deformations does not appear to be possible. Those effects appear to be the net result of the whole formalism that goes into the approach. Moreover, as also emphasized by the authors, the crudeness of the models is such that all conclusions are to be considered as tentative at best. Still, the analysis reported in Ref. is very significant as an independent indication of a mechanism, based on matter-interferometry experiments, that could unveil Planck-length-suppressed effects.

### 6.4 Colliders and large extra dimensions

It was recently suggested that the characteristic quantum-gravity length scale might be given by a length scale $`L_D`$ much larger than the Planck length in theories with large extra dimensions. It appears plausible that there exist models that are consistent with presently-available experimental data and have $`L_D`$ as large as the $`(TeV)^{-1}`$ scale and (some of) the extra dimensions as large as a millimeter . In such models the smallness of the Planck length is seen as the result of the fact that the strength of gravitation in the ordinary 3+1 space-time dimensions would be proportional to the square-root of the inverse of the large volume of the external compactified space multiplied by an appropriate (according to dimensional analysis) power of $`L_D`$. Several studies have been motivated by the proposal put forward in Ref. , but only a small percentage of these studies considered the implications for quantum-gravity scenarios. Among these studies the ones reported in Refs. are particularly significant for the objectives of these lectures, since they illustrate another completely different strategy for quantum-gravity experiments. It is there observed that within the realm of the ordinary 3+1 dimensional space-time an important consequence of the existence of large extra dimensions would be the presence of a tower of Kaluza-Klein modes associated to the gravitons. The weakness of the coupling between gravitons and other particles can be compensated by the large number of these Kaluza-Klein modes when the experimental energy resolution is much larger than the mass splitting between the modes, which for a small number of very large extra dimensions can be a weak requirement (e.g. for 6 millimeter-wide extra dimensions the mass splitting is of a few $`MeV`$).
This can lead to observably large effects at planned particle-physics colliders, particularly CERN’s LHC. In a sense, the experimental proposal put forward in Refs. is another example of an experiment in which the smallness of quantum-gravity effects is compensated by putting together a large number of such effects (putting together the contributions of all of the Kaluza-Klein modes). Concerning the quantum-gravity aspects of the models with large extra dimensions proposed in Ref. , it is important to realize that, as emphasized in Ref. , if anything like the space-time foam here described in Sections 2, 3, 4 and 5 were present in such models the effective reduction of the quantum-gravity scale would naturally lead to foamy effects that are too large for consistency with available experimental data. Preliminary estimates based solely on dimensional considerations appear to suggest that linear suppression by the reduced quantum-gravity scale would certainly be ruled out and even quadratic suppression might not be sufficient for consistency with available data. These arguments should lead to rather stringent bounds on space-time foam especially in those models in which some of the large extra dimensions are accessible to non-gravitational particles (see, e.g., Ref. ), and should have interesting (although smaller) implications also for the popular scenario in which only the gravitational degrees of freedom have access to the large extra dimensions. Of course, a final verdict must await detailed calculations analysing space-time foam in these models with large extra dimensions. The first examples of this type of computation are given by the very recent studies in Refs. , which considered the implications of foam-induced light-cone deformation for certain examples of models with large extra dimensions.

## 7 CLASSICAL-SPACE-TIME-INDUCED QUANTUM PHASES IN MATTER INTERFEROMETRY

While of course the quantum properties of space-time are the most exciting effects we expect of quantum gravity, and probably the ones which will prove most useful in gaining insight into the fundamental structure of the theory, it is important to investigate experimentally all aspects of the interplay between gravitation and quantum mechanics. Among these experiments the ones that could be expected to provide fewer surprises (and less insight into the structure of quantum gravity) are the ones concerning the interplay between strong-but-classical gravitational fields and quantum matter fields. However, this is not necessarily true, as I shall try to clarify within this section’s brief review of the experiment performed nearly a quarter of a century ago by Colella, Overhauser and Werner , which, to my knowledge, was the first experiment probing some aspect of the interplay between gravitation and quantum mechanics. That experiment has been followed by several modifications and refinements (often labeled “COW experiments” from the initials of the scientists involved in the first experiment) all probing the same basic physics, i.e. the validity of the Schrödinger equation $`\left[-\left({\displaystyle \frac{\hbar ^2}{2M_I}}\right)\vec{\nabla }^2+M_G\varphi (\vec{r})\right]\psi (t,\vec{r})=i\hbar {\displaystyle \frac{\partial \psi (t,\vec{r})}{\partial t}}`$ (17) for the description of the dynamics of matter (with wave function $`\psi (t,\vec{r})`$) in the presence of the Earth’s gravitational potential $`\varphi (\vec{r})`$.
\[In (17) $`M_I`$ and $`M_G`$ denote the inertial and gravitational mass respectively.\] The COW experiments exploit the fact that the Earth’s gravitational potential puts together the contribution of so many particles (all of those composing the Earth) that it ends up being large enough to introduce observable effects in rotating table-top interferometers. This was the first example of a physical context in which gravitation was shown to have an observable effect on a quantum-mechanical system in spite of the weakness of the gravitational force. The fact that the original experiment performed by Colella, Overhauser and Werner obtained results in very good agreement with Eq. (17) might seem to indicate that, as generally expected, experiments on the interplay between strong-but-classical gravitational fields and quantum matter fields should not lead to surprises and should not provide insight into the structure of quantum gravity. However, the confirmation of Eq. (17) does raise some sort of puzzle with respect to the Equivalence Principle of general relativity; in fact, even for $`M_I=M_G`$ the mass does not cancel out in the quantum evolution equation (17). This is an observation that by now has also been emphasized in textbooks , but to my knowledge it has not been fully addressed even within the most popular quantum-gravity approaches, i.e. critical superstrings and canonical/loop quantum gravity. Which role should be played by the Equivalence Principle in quantum gravity? Which version/formulation of the Equivalence Principle should/could hold in quantum gravity? Additional elements for consideration in quantum-gravity models will emerge if the small discrepancy between (17) and the data reported in Ref. (a refined COW experiment) is confirmed by future experiments. The subject of gravitationally induced quantum phases is also expanding in new directions , which are likely to provide additional insight.

## 8 ESTIMATES OF SPACE-TIME FUZZINESS FROM MEASURABILITY BOUNDS

In the preceding Sections 4, 5, 6 and 7 I have discussed the experimental proposals that support the conclusions anticipated in Sections 2 and 3. This Section 8 and the following three sections each provide a “theoretical-physics addendum”. In this section I discuss some arguments that appear to suggest properties of the space-time foam. These arguments are based on analyses of bounds on the measurability of distances in quantum gravity. The existence of measurability bounds has attracted the interest of several theorists, because these bounds appear to capture an important novel element of quantum gravity. In ordinary (non-gravitational) quantum mechanics there is no absolute limit on the accuracy of the measurement of a distance. \[Ordinary quantum mechanics allows $`\delta A=0`$ for any single observable $`A`$, since it only limits the combined measurability of pairs of conjugate observables.\] The quantum-gravity bound on the measurability of distances (whatever final form it actually takes in the correct theory) is of course intrinsically interesting, but here (as in previous works ) I shall be interested in the possibility that it might reflect properties of the space-time foam.
This is of course not necessarily true: a bound on the measurability of distances is not necessarily associated to space-time fluctuations, but guided by the Wheeler-Hawking intuition on the nature of space-time one is tempted to interpret any measurability bound (which might be obtained with totally independent arguments) as an indicator of the type of irreducible fuzziness that characterizes space-time. One has on one hand some intuition about quantum gravity which involves stochastic fluctuations of distances and on the other hand some different arguments lead to intuition for absolute bounds on the measurability of distances; it is natural to explore the possibility that the two might be related, i.e. that the intrinsic stochastic fluctuations should limit the measurability just to the level suggested by the independent measurability arguments. Different arguments appear to lead to different measurability bounds, which in turn could provide different intuition for the stochastic properties of space-time foam.

### 8.1 Minimum-length noise

In many quantum-gravity approaches there appears to be a length scale $`L_{min}`$, often identified with the Planck length or the string length $`L_{string}`$ (which, as mentioned, should be somewhat larger than the Planck length, plausibly in the neighborhood of $`10^{-34}m`$), which sets an absolute bound on the measurability of distances (a minimum uncertainty): $`\delta D\ge L_{min}.`$ (18) This property emerges in approaches based on canonical quantization of Einstein’s gravity when analyzing certain gedanken experiments (see, e.g., Refs. and references therein). In critical superstring theories, theories whose mechanics is still governed by the laws of ordinary quantum mechanics but with one-dimensional (rather than point-like) fundamental objects, a relation of type (18) follows from the stringy modification of Heisenberg’s uncertainty principle $`\delta x\delta p\ge 1+L_{string}^2\delta p^2.`$ (19) In fact, whereas Heisenberg’s uncertainty principle allows $`\delta x=0`$ (for $`\delta p\to \mathrm{\infty }`$), for all choices of $`\delta p`$ the uncertainty relation (19) gives $`\delta x\ge L_{string}`$. The relation (19) is suggested by certain analyses of string scattering , but it might have to be modified when taking into account the non-perturbative solitonic structures of superstrings known as Dirichlet branes . In particular, evidence has been found in support of the possibility that “Dirichlet particles” (Dirichlet 0 branes) could probe the structure of space-time down to scales shorter than the string length. In any case, all evidence available on critical superstrings is consistent with a relation of type (18), although it is probably safe to say that some more work is still needed to firmly establish the string-theory value of $`L_{min}`$. Having clarified that a relation of type (18) is a rather common prediction of theoretical work on quantum gravity, it is then natural to wonder whether such a relation is suggestive of stochastic distance fluctuations of a type that could significantly affect the noise levels of an interferometer. As mentioned, relations such as (18) do not necessarily encode any fuzziness; for example, relation (18) could simply emerge from a theory based on a lattice of points with spacing $`L_{min}`$ and equipped with a measurement theory consistent with (18). The concept of distance in such a theory would not necessarily be affected by the type of stochastic processes that lead to noise in an interferometer.
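As a side illustration of how relation (19) enforces a minimum uncertainty, here is a short sketch (units with $`\mathrm{}=1`$ and the string length set to 1, purely for illustration) that scans the implied lower bound $`\delta x\ge 1/\delta p+L_{string}^2\delta p`$:

```python
import numpy as np

# Minimum position uncertainty implied by the stringy relation (19):
# dx >= 1/dp + L_s**2 * dp   (hbar = 1; L_s = 1, so dx is in string lengths).
L_s = 1.0
dp = np.logspace(-3, 3, 60001)
dx_min = 1.0/dp + L_s**2 * dp

i = np.argmin(dx_min)
print(dp[i], dx_min[i])  # minimum dx = 2*L_s at dp = 1/L_s: dx never reaches 0,
                         # unlike the pure Heisenberg case, where dp -> infinity
                         # would drive dx -> 0
```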
However, if one does take as guidance the Wheeler-Hawking intuition on space-time foam it makes sense to assume that relation (18) might encode the net effect of some underlying physical processes of the type one would qualify as quantum space-time fluctuations. This (however preliminary) network of intuitions suggests that (18) could be the result of fuzziness for distances $`D`$ of the type associated with stochastic fluctuations with root-mean-square deviation $`\sigma _D`$ given by

$$\sigma _D\sim L_{min}.$$ (20)

The associated displacement amplitude spectral density $`S_{min}(f)`$ should roughly have a $`1/\sqrt{f}`$ behaviour

$$S_{min}(f)\sim \frac{L_{min}}{\sqrt{f}},$$ (21)

which (using notation set up in Section 4) can be concisely described by stating that $`L_{min}\sim \ell _{\beta =1/2}`$. Eq. (21) can be justified using the general relation (4). Substituting the $`S_{min}(f)`$ of Eq. (21) for the $`S(f)`$ of Eq. (4) one obtains a $`\sigma `$ that approximates the $`\sigma _D`$ of Eq. (20) up to small (logarithmic) $`T_{obs}`$-dependent corrections. A more detailed description of the displacement amplitude spectral density associated with Eq. (20) can be found in Refs. . For the objectives of these lectures the rough estimate (21) is sufficient since, if indeed $`L_{min}\sim L_p`$, from (21) one obtains $`S_{min}(f)\sim 10^{-35}m/\sqrt{f}`$, which is still very far from the sensitivity of even the most advanced modern interferometers, and therefore I shall not be concerned with corrections to Eq. (21).

### 8.2 Random-walk noise motivated by the analysis of a Salecker-Wigner gedanken experiment

Let me now consider a measurability bound which is encountered when taking into account the quantum properties of devices. It is well understood (see, e.g., Refs. ) that the combination of the gravitational properties and the quantum properties of devices can have an important role in the analysis of the operative definition of gravitational observables. Since the analyses that led to the proposal of Eq. (18) only treated the devices in a completely idealized manner (assuming that one could ignore any contribution to the uncertainty in the measurement of $`D`$ due to the gravitational and quantum properties of devices), it is not surprising that analyses taking into account the gravitational and quantum properties of devices found more significant limitations to the measurability of distances. Actually, by ignoring the way in which the gravitational properties and the quantum properties of devices combine in measurements of geometry-related physical properties of a system one misses some of the fundamental elements of novelty we should expect for the interplay of gravitation and quantum mechanics; in fact, one would be missing an element of novelty which is deeply associated to the Equivalence Principle. In measurements of physical properties which are not geometry-related one can safely resort to an idealized description of devices. For example, in the famous Bohr-Rosenfeld analysis of the measurability of the electromagnetic field it was shown that the accuracy allowed by the formalism of ordinary quantum mechanics could only be achieved using idealized test particles with vanishing ratio between electric charge and inertial mass. Attempts to generalize the Bohr-Rosenfeld analysis to the study of gravitational fields (see, e.g., Ref. ) are of course confronted with the fact that the ratio between gravitational “charge” (mass) and inertial mass is fixed by the Equivalence Principle.
While ideal devices with vanishing ratio between electric charge and inertial mass can be considered at least in principle, devices with vanishing ratio between gravitational mass and inertial mass are not admissible in any (however formal) limit of the laws of gravitation. This observation provides one of the strongest elements in support of the idea that the mechanics on which quantum gravity is based must not be exactly the one of ordinary quantum mechanics, since it should accommodate a somewhat different relationship between “system” and “measuring apparatus” and should not rely on the idealized “measuring apparatus” which plays such a central role in the mechanics laws of ordinary quantum mechanics (see, e.g., the “Copenhagen interpretation”). In trying to develop some intuition for the type of fuzziness that could affect the concept of distance in quantum gravity, it might be useful to consider the way in which the interplay between the gravitational and the quantum properties of devices affects the measurability of distances. In Refs. I have argued<sup>17</sup><sup>17</sup>17I shall comment later in these notes on the measurability analysis reported in Ref. , which also took as starting point the mentioned work by Salecker and Wigner, but advocated a different viewpoint and reached different conclusions. that a natural starting point for this type of analysis is provided by the procedure for the measurement of distances which was discussed in influential work<sup>18</sup><sup>18</sup>18The classic Salecker-Wigner work is criticized in the recent paper . As I explain in detail in Ref. , the analysis reported in Ref. is incorrect. Whereas Salecker and Wigner sought an operative definition of distances suitable for the Planck regime, the analysis in Ref. relies on several assumptions that appear to be natural in the context of most present-day experiments but are not even meaningful in the Planck regime. Moreover, contrary to the claim made in Ref. , the source of $`\sqrt{T_{obs}}`$-uncertainty used in the Salecker-Wigner derivation cannot be truly eliminated; unsurprisingly, it can only be traded for another comparable contribution to the total uncertainty in the measurement. In addition to this incorrect criticism of the limit derived by Salecker and Wigner, Ref. also misrepresented the role of the Salecker-Wigner limit in providing motivation for the interferometric studies here considered (and originally proposed in Refs. ): the reader could come out of reading Ref. with the impression that such interferometry-based tests would only be sensitive to quantum-gravity ideas motivated by the Salecker-Wigner limit. As emphasized in Sections 4 and 8 of these notes (and in Ref. ) motivation for this phenomenological programme also comes from a long tradition of ideas (developing independently of the ideas related to the Salecker-Wigner limit) on foamy/fuzzy space-time, and from recent work on the possibility that quantum-gravity might induce a deformation of the dispersion relation that characterizes the propagation of the massless particles used as space-time probes in the operative definition of distances. This is already quite clear at least to a portion of the community; for example, in recent work on foamy space-times (without any reference to the Salecker-Wigner related literature) the type of modern-interferometer sensitivity exposed in Refs. was used to constrain certain novel candidate quantum-gravity effects. by Salecker and Wigner . 
These authors “measured” (in the “gedanken” sense) the distance $`D`$ between two bodies by exchanging a light signal between them. The measurement procedure requires attaching<sup>19</sup><sup>19</sup>19Of course, for consistency with causality, in such contexts one assumes devices to be “attached non-rigidly,” and, in particular, the relative position and velocity of their centers of mass continue to satisfy the standard uncertainty relations of quantum mechanics. a light-gun (i.e. a device capable of sending a light signal when triggered), a detector and a clock to one of the two bodies and attaching a mirror to the other body. By measuring the time $`T_{obs}`$ (time of observation) needed by the light signal for a two-way journey between the bodies one also obtains a measurement of the distance $`D`$. For example, in flat space and neglecting quantum effects one simply finds that $`D=cT_{obs}/2`$. Within this setup it is easy to realize that the interplay between the gravitational and the quantum properties of devices leads to an irreducible contribution to the uncertainty $`\delta D`$. In order to see this it is sufficient to consider the contribution to $`\delta D`$ coming from the uncertainties that affect the motion of the center of mass of the system composed of the light-gun, the detector and the clock. Denoting by $`x^{*}`$ and $`v^{*}`$ the position and the velocity of the center of mass of this composite device relative to the position of the body to which it is attached, and assuming that the experimentalists prepare this device in a state characterised by uncertainties $`\delta x^{*}`$ and $`\delta v^{*}`$, one easily finds $`\delta D\ge \delta x^{*}+T_{obs}\delta v^{*}\ge \delta x^{*}+\left({\displaystyle \frac{1}{M_b}}+{\displaystyle \frac{1}{M_d}}\right){\displaystyle \frac{\hbar T_{obs}}{2\delta x^{*}}}\ge \sqrt{{\displaystyle \frac{\hbar T_{obs}}{2}}\left({\displaystyle \frac{1}{M_b}}+{\displaystyle \frac{1}{M_d}}\right)}\ge \sqrt{{\displaystyle \frac{\hbar T_{obs}}{2}}{\displaystyle \frac{1}{M_d}}},`$ (22) where $`M_b`$ is the mass of the body, $`M_d`$ is the total mass of the device composed of the light-gun, the detector, and the clock, and I also used the fact that Heisenberg’s Uncertainty Principle implies $`\delta x^{*}\delta v^{*}\ge (1/M_b+1/M_d)\hbar /2`$; the third inequality follows (up to a factor of 2) from minimizing the preceding expression over $`\delta x^{*}`$. \[The reduced mass $`(1/M_b+1/M_d)^{-1}`$ is relevant for the relative motion.\] Clearly, from (22) it follows that in order to reduce the contribution to the uncertainty coming from the quantum properties of the devices it is necessary to take the formal “classical-device limit,” i.e. the limit<sup>20</sup><sup>20</sup>20A body of finite mass can acquire a nearly-classical behaviour when immersed in a suitable environment (environment-induced decoherence). However, one of the central hypotheses of the work of Salecker and Wigner and followers is that the quantum properties of devices should not be negligible in quantum gravity, and that in particular the in-principle operative definition of distances (which we expect to lie at the foundations of quantum gravity) should not rely on environment-induced decoherence. It appears worth exploring the implications of this hypothesis not only because quantum gravity could be a truly fundamental theory (rather than the effective large-distance description of a more fundamental theory) but also because the operative definition of distances in quantum gravity should be applicable all the way down to the Planck length.
It is even plausible that quantum gravity should accommodate an operative definition of a material reference system composed of a network of free-falling particles with relative distances comparable to the Planck length. Within the framework of these intuitions it is indeed quite hard to imagine a decoherence-inducing environment suitable for the in-principle operative definition of distances in quantum gravity. As emphasized in Ref. , the analysis reported in Ref. missed this important conceptual element of the Salecker-Wigner approach. of infinitely large $`M_d`$. Up to this point I have not yet taken into account the gravitational properties of the devices and in fact the “classical-device limit” encountered above is fully consistent with the laws of ordinary quantum mechanics. From a physical/phenomenological and conceptual viewpoint it is well understood that the formalism of quantum mechanics is only appropriate for the description of the results of measurements performed by classical devices. It is therefore not surprising that the classical-device (infinite-mass) limit turns out to be required in order to match the prediction $`min[\delta D]=0`$ of ordinary quantum mechanics. If one also takes into account the gravitational properties of the devices, a conflict with ordinary quantum mechanics immediately arises because the classical-device (infinite-mass) limit is in principle inadmissible for measurements concerning gravitational effects.<sup>21</sup><sup>21</sup>21This conflict between the infinite-mass classical-device limit (which is implicit in the applications of the formalism of ordinary quantum mechanics to the description of the outcome of experiments) and the nature of gravitational interactions has not been addressed within any of the most popular quantum gravity approaches, including critical superstrings and canonical/loop quantum gravity . In a sense somewhat similar to the one appropriate for Hawking’s work on black holes , this “classical-device paradox” appears to provide an obstruction for the use of the ordinary formalism of quantum mechanics for a description of quantum gravity. As the devices get more and more massive they increasingly disturb the gravitational/geometrical observables, and well before reaching the infinite-mass limit the procedures for the measurement of gravitational observables cannot be meaningfully performed . In the Salecker-Wigner measurement procedure the limit $`M_d\to \infty `$ is not admissible when gravitational interactions are taken into account. At the very least the value of $`M_d`$ is limited by the requirement that the apparatus should not turn into a black hole (which would not allow the exchange of signals required by the measurement procedure). These observations render unavoidable the $`\sqrt{T_{obs}}`$-dependence of Eq. (22). It is important to realize that this $`\sqrt{T_{obs}}`$-dependence of the bound on the measurability of distances comes simply from combining elements of quantum mechanics with elements of classical gravity. As it stands it is not to be interpreted as a quantum-gravity effect. However, as clarified in the opening of this section, if one is interested in modeling properties of the space-time foam it is natural to explore the possibility that the foam be such that distances be affected by stochastic fluctuations with this typical $`\sqrt{T_{obs}}`$-dependence. 
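To get a feel for the magnitude of the bound in Eq. (22), one can evaluate $`\sqrt{\hbar T_{obs}/(2M_d)}`$ for a few values of the device mass and observation time; the following minimal sketch does just that (the masses and times used below are my own illustrative assumptions, not values taken from the literature):

```python
# Minimal numerical sketch of the Salecker-Wigner bound of Eq. (22):
#   delta_D >= sqrt(hbar * T_obs / (2 * M_d))
# The device masses and observation times below are illustrative assumptions.
import math

hbar = 1.0545718e-34  # reduced Planck constant, J*s

def salecker_wigner_min_uncertainty(T_obs, M_d):
    """Irreducible distance uncertainty (meters) from Eq. (22)."""
    return math.sqrt(hbar * T_obs / (2.0 * M_d))

for M_d in (1.0, 1e3):          # device mass in kg (assumed)
    for T_obs in (1e-2, 1.0):   # observation time in s (assumed)
        print(f"M_d = {M_d:6.0f} kg, T_obs = {T_obs:5.2f} s -> "
              f"delta_D >= {salecker_wigner_min_uncertainty(T_obs, M_d):.1e} m")
```

Even for a kilogram-scale device monitored for a hundredth of a second the bound sits near $`10^{-18}`$ m, which is why the infinite-mass limit appears unavoidable if one insists on recovering $`min[\delta D]=0`$.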
The logic here is that stochastic fluctuations associated to the foam would anyway lead to some form of dependence on $`T_{obs}`$, and that in guessing the specific form of this dependence the measurability analysis reviewed in this subsection can be seen as providing motivation for a $`\sqrt{T_{obs}}`$-dependence. From this point of view the measurability analysis reviewed in this subsection provides additional motivation for the study of random-walk-type models of distance fuzziness, whose fundamental stochastic fluctuations are characterized (as already discussed in Section 4) by root-mean-square deviation $`\sigma _D`$ given by<sup>22</sup><sup>22</sup>22As discussed in Refs. , this form of $`\sigma _D`$ also implies that in quantum gravity any measurement that monitors a distance $`D`$ for a time $`T_{obs}`$ is affected by an uncertainty $`\delta D\sim \sqrt{L_{QG}cT_{obs}}`$. This must be seen as a minimum uncertainty that takes only into account the quantum and gravitational properties of the measuring apparatus. Of course, an even tighter bound can emerge when taking into account also the quantum and gravitational properties of the system under observation. According to the estimates provided in Refs. the contribution to the uncertainty coming from the system is of the type $`\delta D\sim L_p`$, so that the total contribution (summing the system and the apparatus contributions) might be of the type $`\delta D\sim L_p+\sqrt{L_{QG}cT_{obs}}`$. $`\sigma _D\sim \sqrt{L_{QG}cT_{obs}}`$ (23) and by displacement amplitude spectral density $`S(f)`$ given by $`S(f)=f^{-1}\sqrt{L_{QG}c}.`$ (24) Here the scale $`L_{QG}`$ plays exactly the same role as in Section 4 (in particular $`L_{QG}\equiv L_{\beta =1}`$ in the sense of Section 4). However, seeing $`L_{QG}`$ as the result of Planck-length fluctuations occurring at a rate of one per Planck time can suggest $`L_{QG}\sim L_p`$, whereas the different intuition which has gone into the emergence of $`L_{QG}`$ in this subsection leaves room for different predictions. As already emphasized, by mixing elements of quantum mechanics and elements of gravitation one can only conclude that there could be some $`\sqrt{T_{obs}}`$-dependent irreducible contribution to the uncertainty in the measurement of distances. One can then guess that space-time foam might reflect this $`\sqrt{T_{obs}}`$-dependence and one can parametrize our ignorance by introducing $`L_{QG}`$ in the formula $`\sqrt{L_{QG}cT_{obs}}`$. Within such an argument the estimate $`L_{QG}\sim L_p`$ could only be motivated on dimensional grounds ($`L_p`$ is the only length scale available), but there is no direct estimate of $`L_{QG}`$ within the argument advocated in this subsection. We only have (in the specific sense intended above) a lower limit on $`L_{QG}`$ which is set by the bare analysis using straightforward combination of elements of ordinary quantum mechanics and elements of ordinary gravity. As seen above, this lower limit on $`L_{QG}`$ is set by the minimum allowed value of $`1/M_d`$. Our intuition for $`L_{QG}`$ might benefit from trying to establish this minimum allowed value of $`1/M_d`$. As mentioned, a conservative (possibly very conservative) estimate of this minimum value can be obtained by enforcing that $`M_d`$ be at least sufficiently small to avoid black hole formation. 
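For orientation on magnitudes, here is a sketch of my own (with illustrative choices of observation time and frequency) that evaluates Eqs. (23)–(24) for the dimensionally motivated, but by no means mandatory, guess $`L_{QG}\sim L_p`$:

```python
# Sketch: evaluate the random-walk fuzziness of Eqs. (23)-(24) for the
# dimensionally motivated (and, as argued in the text, not mandatory)
# choice L_QG ~ L_p. Observation time and frequency are illustrative.
import math

c   = 2.998e8    # m/s
L_p = 1.616e-35  # Planck length, m

L_QG  = L_p      # assumption: Planckian fluctuation scale
T_obs = 1e-2     # s, illustrative
f     = 100.0    # Hz, illustrative

sigma_D = math.sqrt(L_QG * c * T_obs)        # Eq. (23)
S_f     = math.sqrt(L_QG * c) / f            # Eq. (24), in m / sqrt(Hz)

print(f"sigma_D ~ {sigma_D:.1e} m")          # ~ 7e-15 m
print(f"S(100 Hz) ~ {S_f:.1e} m/sqrt(Hz)")   # ~ 7e-16 m/sqrt(Hz)
```

These values sit many orders of magnitude above the $`10^{-18}`$ m displacement sensitivities quoted later in these notes, which is consistent with the observation made below that random-walk models are only viable for $`L_{QG}`$ significantly smaller than $`L_p`$.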
In leading order (e.g., assuming corresponding spherical symmetries) the requirement that the device not turn into a black hole amounts to $`M_d<\hbar S_d/(cL_p^2)`$, where the length $`S_d`$ characterizes the size of the region of space where the matter distribution associated to $`M_d`$ is localized. This observation implies $`{\displaystyle \frac{1}{M_d}}>{\displaystyle \frac{cL_p^2}{\hbar }}{\displaystyle \frac{1}{S_d}},`$ (25) which in turn suggests that $`L_{QG}\sim min[L_p^2/S_d]`$: $`\delta D\geq min\sqrt{{\displaystyle \frac{1}{S_d}}{\displaystyle \frac{L_p^2cT_{obs}}{2}}}.`$ (26) Of course, this estimate is very preliminary since a full quantum-gravity analysis would be needed here; in particular, the way in which black holes were handled in my argument might have missed important properties which would become clear only once we have the correct theory. However, it is nevertheless striking to observe that the guess $`L_{QG}\sim L_p`$ appears to be very high with respect to the lower limit on $`L_{QG}`$ which we are finding from this estimate; in fact, $`L_{QG}\sim L_p`$ would correspond to the maximum admissible value of $`S_d`$ being of order $`L_p`$. Since my analysis only holds for devices that can be treated as approximately rigid<sup>23</sup><sup>23</sup>23The fact that I have included only one contribution from the quantum properties of the devices, the one associated with the quantum properties of the motion of the center of mass, implicitly relies on the assumption that the devices and the bodies can be treated as approximately rigid. Any non-rigidity of the devices could introduce additional contributions to the uncertainty in the measurement of $`D`$. This is particularly clear in the case of detector screens and mirrors, whose shape plays a central role in data analysis. Uncertainties in the shape (the relative position of different small parts) of a detector screen or of a mirror would lead to uncertainties in the measured quantity. For large devices some level of non-rigidity appears to be inevitable in quantum gravity. Causality alone (without any quantum mechanics) forbids rigid attachment of two bodies (e.g., two small parts of a device), but is still consistent with rigid motion (bodies are not really attached but because of fine-tuned initial conditions their relative position and relative orientation are constants of motion). When Heisenberg’s Uncertainty Principle is introduced rigid motion becomes possible only for bodies of infinite mass (otherwise the relative motion inevitably has some irreducible uncertainty). Rigid devices are still available in ordinary quantum mechanics but they are peculiar devices, with infinite mass. \[Alternatively, in ordinary quantum mechanics one can take a less fundamental viewpoint on measurement (which does not appear to be natural in the Planck regime ) in which the trajectories of the different components/parts of a device are classical because the device is immersed in a decoherence-inducing environment.\] When both gravitation and quantum mechanics are introduced rigid devices are no longer available since the infinite-mass limit is then inconsistent with the nature of gravitational devices. and any non-rigidity could introduce additional contributions to the uncertainties, it is reasonable to assume that $`max[S_d]`$ be some small length (small enough that any non-rigidity would negligibly affect the measurement procedure), but it appears unlikely that $`max[S_d]\sim L_p`$. 
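The following sketch (with an arbitrarily chosen illustrative device size) makes this point quantitatively: even a device as small as a micron yields, through Eqs. (25)–(26), a lower limit on $`L_{QG}`$ that lies dozens of orders of magnitude below $`L_p`$.

```python
# Sketch: black-hole limit on the device mass, Eq. (25), and the induced
# lower limit L_QG ~ L_p**2 / S_d suggested by Eq. (26).
# The device size S_d is an illustrative assumption.
import math

hbar = 1.0545718e-34  # J*s
c    = 2.998e8        # m/s
L_p  = 1.616e-35      # m

S_d = 1e-6  # assumed device size: 1 micron

M_d_max  = hbar * S_d / (c * L_p**2)  # largest mass before black-hole formation
L_QG_min = L_p**2 / S_d               # corresponding lower limit on L_QG

print(f"M_d_max  ~ {M_d_max:.1e} kg")                        # ~ 1e21 kg
print(f"L_QG_min ~ {L_QG_min:.1e} m (vs L_p ~ 1.6e-35 m)")   # ~ 2.6e-64 m
```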
The unlikeliness of $`max[S_d]\sim L_p`$ might provide some encouragement for values of $`L_{QG}`$ smaller than $`L_p`$, which is, after all, the only way to obtain random-walk models consistent with the data analysis reported in Refs. . Later in this section I will consider a particular class of estimates for $`max[S_d]`$: if the correct quantum gravity is such that something like (26) holds but with a $`max[S_d]`$ that depends on $`\delta D`$ and/or $`T_{obs}`$, one would have a different $`T_{obs}`$-dependence (and corresponding $`f`$-dependence), as I shall show in one example. ### 8.3 Random-walk noise motivated by linear deformation of dispersion relations Besides the analysis of the Salecker-Wigner measurement procedure, also the mentioned possibility of quantum-gravity-induced deformation of dispersion relations would be consistent with the idea of random-walk distance fuzziness. The sense in which this is true is clarified by the arguments that follow. Let me start by going back to the general relation (already discussed in Section 2): $$c^2𝐩^2\simeq E^2\left[1+\xi \left(\frac{E}{E_{QG}}\right)^\alpha \right].$$ (27) Scenarios (27) with $`\alpha =1`$ are consistent with random-walk noise, in the sense that an experiment involving as a device (as a probe) a massless particle satisfying the dispersion relation (27) with $`\alpha =1`$ would be naturally affected by a device-induced uncertainty that grows with $`\sqrt{T_{obs}}`$. From the deformed dispersion relation (27) one is led to energy-dependent velocities $$v\simeq c\left[1-\left(\frac{1+\alpha }{2}\right)\xi \left(\frac{E}{E_{QG}}\right)^\alpha \right],$$ (28) and consequently when a time $`T_{obs}`$ has lapsed from the moment in which the observer (experimentalist) set off the measurement procedure the uncertainty in the position of the massless probe is given by $$\delta x\simeq c\delta t+\delta v\,T_{obs}\simeq c\delta t+\frac{1+\alpha }{2}\alpha \frac{E^{\alpha -1}\delta E}{E_{QG}^\alpha }cT_{obs},$$ (29) where $`\delta t`$ is the uncertainty in the time of emission of the probe, $`\delta v`$ is the uncertainty in the velocity of the probe, $`\delta E`$ is the uncertainty in the energy of the probe, and I used the relation between $`\delta v`$ and $`\delta E`$ that follows from (28). Since the uncertainty in the time of emission of a particle and the uncertainty in its energy are related<sup>24</sup><sup>24</sup>24It is well understood that the $`\delta t\,\delta E\gtrsim \hbar `$ relation is valid only in a weaker sense than, say, Heisenberg’s Uncertainty Principle $`\delta x\,\delta p\gtrsim \hbar `$. This has its roots in the fact that the time appearing in quantum-mechanics equations is just a parameter (not an operator), and in general there is no self-adjoint operator canonically conjugate to the total energy, if the energy spectrum is bounded from below . However, $`\delta t\,\delta E\gtrsim \hbar `$ does relate $`\delta t`$ intended as uncertainty in the time of emission of a particle and $`\delta E`$ intended as uncertainty in the energy of that same particle. by $`\delta t\,\delta E\gtrsim \hbar `$, Eq. 
(29) can be turned into an absolute bound on the uncertainty in the position of the massless probe when a time $`T_{obs}`$ has lapsed from the moment in which the observer set off the measurement procedure $$\delta x\gtrsim \frac{c\hbar }{\delta E}+\frac{1+\alpha }{2}\alpha \frac{E^{\alpha -1}\delta E}{E_{QG}^\alpha }cT_{obs}\geq \sqrt{\left(\frac{\alpha +\alpha ^2}{2}\right)\left(\frac{E}{E_{QG}}\right)^{\alpha -1}\frac{c^2\hbar T_{obs}}{E_{QG}}}.$$ (30) For $`\alpha =1`$ the $`E`$-dependence on the right-hand side of Eq. (30) disappears and one is led again to a $`\delta x`$ of the type $`(constant)\cdot \sqrt{T_{obs}}`$: $$\delta x\gtrsim \sqrt{\frac{c^2\hbar T_{obs}}{E_{QG}}}.$$ (31) When massless probes are used in the measurement of a distance $`D`$ the uncertainty (31) in the position of the probe translates directly into an uncertainty on $`D`$: $$\delta D\gtrsim \sqrt{\frac{c^2\hbar T_{obs}}{E_{QG}}}.$$ (32) This was already observed in Refs. which considered the implications of deformed dispersion relations (27) with $`\alpha =1`$ for the operative definition of distances. Since deformed dispersion relations (27) with $`\alpha =1`$ have led us to the same measurability bound already encountered both in the analysis of the Salecker-Wigner measurement procedure and in the analysis of simple-minded random-walk models of quantum space-time fluctuations, if we assume again that such measurability bounds emerge in a full quantum gravity as a result of corresponding quantum fluctuations (fuzziness), we are led once again to random-walk noise: $$\sigma _D\sim \sqrt{\frac{c^2\hbar T_{obs}}{E_{QG}}}.$$ (33) ### 8.4 Noise motivated by quadratic deformation of dispersion relations In the preceding subsection I observed that quantum-gravity deformed dispersion relations (27) with $`\alpha =1`$ can also motivate random-walk noise $`\sigma _D\sim (constant)\cdot \sqrt{T_{obs}}`$. If we use the same line of reasoning that connects a measurability bound to a scenario for fuzziness when $`\alpha \neq 1`$ we appear to find $`\sigma _D\sim 𝒢(E/E_{QG})\sqrt{T_{obs}}`$, where $`𝒢(E/E_{QG})`$ is an ($`\alpha `$-dependent) function of $`E/E_{QG}`$. However, in these cases with $`\alpha \neq 1`$ clearly the connection between measurability bound and fuzzy-distance scenario cannot be this simple; in fact, the energy of the probe $`E`$, which naturally plays a role in the context of the derivation of the measurability bound, does not have an obvious counterpart in the context of the conjectured fuzzy-distance scenario. In order to preserve the conjectured connection between measurability bounds and fuzzy-distance scenarios one might be tempted to envision that if $`\alpha \neq 1`$ the interferometer noise levels induced by space-time fuzziness might be of the type $$\sigma _D\sim \sqrt{\left(\frac{\alpha +\alpha ^2}{2}\right)\left(\frac{E^{*}}{E_{QG}}\right)^{\alpha -1}\frac{c^2\hbar T_{obs}}{E_{QG}}},$$ (34) where $`E^{*}`$ is some energy scale characterizing the physical context under consideration. \[For example, at the intuitive level one might conjecture that $`E^{*}`$ could characterize some sort of energy density associated with quantum fluctuations of space-time or an energy scale associated with the masses of the devices used in the measurement process.\] Since $`\alpha \geq 1`$ in all quantum-gravity approaches believed to support deformed dispersion relations, it appears likely that the factor $`(E^{*}/E_{QG})^{\alpha -1}`$ would suppress the random-walk noise effect in all contexts with $`E^{*}<E_{QG}`$. 
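A quick numerical sketch (with my own illustrative choices of scales) shows both faces of this: for $`\alpha =1`$ the fuzziness (33) reproduces the Planckian random-walk magnitude, while for $`\alpha =2`$ the suppression factor in (34) is enormous for any reasonable choice of $`E^{*}`$.

```python
# Sketch: evaluate Eq. (33) (alpha = 1) and the suppression factor of
# Eq. (34) for alpha = 2. E_QG is set to the Planck energy and E_star
# to 1 TeV; both choices are illustrative assumptions.
import math

hbar   = 1.0545718e-34  # J*s
c      = 2.998e8        # m/s
E_QG   = 1.956e9        # Planck energy, J
E_star = 1.602e-7       # 1 TeV in J (assumed characteristic scale)
T_obs  = 1e-2           # s, illustrative

sigma_D_alpha1     = math.sqrt(c**2 * hbar * T_obs / E_QG)  # Eq. (33)
suppression_alpha2 = E_star / E_QG   # (E*/E_QG)**(alpha-1) with alpha = 2

print(f"alpha=1: sigma_D ~ {sigma_D_alpha1:.1e} m")        # ~ 7e-15 m
print(f"alpha=2: suppression ~ {suppression_alpha2:.1e}")  # ~ 8e-17
```

The $`\alpha =1`$ value coincides, as it must, with the random-walk estimate for $`L_{QG}\sim L_p`$, since $`c^2\hbar /E_{QG}=cL_p`$ when $`E_{QG}`$ is the Planck energy.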
Besides the case $`\alpha =1`$ (linear deformation) also the case $`\alpha =2`$ (quadratic deformation) deserves special interest since it can emerge quite naturally in quantum-gravity approaches (see, e.g., Ref. ). ### 8.5 Noise with $`f^{-5/6}`$ amplitude spectral density In Subsection 8.2 a bound on the measurability of distances based on the Salecker-Wigner procedure was used as additional motivation for experimental tests of interferometer noise of random-walk type, with $`f^{-1}`$ amplitude spectral density and $`\sqrt{T_{obs}}`$ root-mean-square deviation. In this subsection I shall pursue further the observation that the relevant measurability bound could be derived by simply insisting that the devices do not turn into black holes. That observation allowed us to derive Eq. (26), which expresses the minimum uncertainty $`\delta D`$ on the measurement of a distance $`D`$ (i.e. the measurability bound for $`D`$) as proportional to $`\sqrt{T_{obs}}`$ and $`\sqrt{1/S_d}`$. Within that derivation the minimum uncertainty is obtained in correspondence with $`max[S_d]`$, the maximum value of $`S_d`$ consistent with the structure of the measurement procedure. I was therefore led to consider how large $`S_d`$ could be while still allowing one to disregard any non-rigidity in the quantum motion of the device (which could introduce additional contributions to the uncertainties). Something suggestive of the random-walk noise scenario emerged by simply assuming that $`max[S_d]`$ be independent of $`T_{obs}`$ and independent of the accuracy $`\delta D`$ that the observer would wish to achieve. However, as mentioned, the same physical intuition that motivates some of the fuzzy space-time scenarios here considered also suggests that quantum gravity might require a novel measurement theory, possibly involving a new type of relation between system and measuring apparatus. Based on this intuition, it seems reasonable to contemplate the possibility that $`max[S_d]`$ might actually depend on $`\delta D`$. It is such a scenario that I want to consider in this subsection. In particular I want to consider the case $`max[S_d]\sim \delta D`$, which, besides being simple, has the plausible property that it allows only small devices if the uncertainty to be achieved is small, while it would allow correspondingly larger devices if the observer were content with a larger uncertainty. This is also consistent with the idea that elements of non-rigidity in the quantum motion of extended devices could be neglected if anyway the measurement is not aiming for great accuracy, while they might even lead to the most significant contributions to the uncertainty if all other sources of uncertainty are very small. \[Salecker and Wigner would also argue that “large” devices are not suitable for very accurate space-time measurements (they end up being “in the way” of the measurement procedure) while they might be admissible if space-time is being probed rather softly.\] In this scenario with $`max[S_d]\sim \delta D`$, Eq. (26) takes the form $`\delta D\gtrsim \sqrt{{\displaystyle \frac{1}{S_d}}{\displaystyle \frac{L_p^2cT_{obs}}{2}}}\sim \sqrt{{\displaystyle \frac{L_p^2cT_{obs}}{2\delta D}}},`$ (35) which actually gives $`\delta D\gtrsim \left({\displaystyle \frac{1}{2}}L_p^2cT_{obs}\right)^{1/3}.`$ (36) As done with the other measurability bounds, I have proposed to take Eq. 
(36) as motivation for the investigation of a corresponding fuzziness scenario characterised by $`\sigma _D\sim \left(\stackrel{~}{L}_{QG}^2cT_{obs}\right)^{1/3}.`$ (37) Notice that in this equation I replaced $`L_p`$ with a generic length scale $`\stackrel{~}{L}_{QG}`$, since it is possible that the heuristic argument leading to Eq. (37) might have captured the qualitative structure of the phenomenon while providing an incorrect estimate of the relevant length scale. Also notice that Eq. (36) has the same form as the relations that emerged in other measurability analyses , even though those analyses adopted a very different viewpoint (and even the physical interpretation of the elements of Eq. (36) was different, as explained in the next section). As observed in Refs. the $`T_{obs}^{1/3}`$ dependence of $`\sigma _D`$ is associated with displacement amplitude spectral density with $`f^{-5/6}`$ behaviour: $`𝒮(f)=f^{-5/6}(\stackrel{~}{L}_{QG}^2c)^{1/3}.`$ (38) Therefore the measurability analysis discussed in this subsection provides motivation for the investigation of the case $`\beta =5/6`$ (using again the notation set up in Section 4). ## 9 ABSOLUTE MEASURABILITY BOUND FOR THE AMPLITUDE OF A GRAVITY WAVE The bulk of this Article (presented in the previous three sections) concerns the implications of distance fuzziness for interferometry. Various scenarios for distance fuzziness were motivated either by a general Wheeler-Hawking-inspired phenomenological parametrization or by intuitive arguments based on the possibility of quantum-gravity-induced deformations of dispersion relations or quantum-gravity<sup>25</sup><sup>25</sup>25My observations within the Salecker-Wigner setup do pertain to the quantum-gravity realm because I took into account the gravitational properties of the devices and I also, like Salecker and Wigner, removed the assumption of classicality of the devices. If one was only putting together some properties of gravitation and quantum mechanics one could at best probe a simple limiting behaviour of quantum gravity, but by removing one of the conceptual ingredients of ordinary quantum mechanics it is plausible that we get a glimpse of a true property of quantum gravity. The Salecker-Wigner study (just like the Bohr-Rosenfeld analysis ) suggests that among the conceptual elements of quantum mechanics the one that is most likely (although there are of course no guarantees) to succumb to the unification of gravitation and quantum mechanics is the requirement for devices to be treated as classical. distance-measurability analyses within the Salecker-Wigner setup. My observation that distance fuzziness would be felt by interferometers as a fundamental additional source of noise (i.e. as some sort of fundamental source of stochastic gravity-wave background) also implies that, if indeed quantum gravity hosts distance fuzziness, there would be a quantum-gravity-induced bound on the measurability of gravity waves. This section is parenthetical, within the logical line of this Article, in the sense that I will assume in this section that there is no distance fuzziness. The objective is one of showing that even without distance fuzziness it appears that the measurability of gravity waves should be limited in quantum gravity. The strategy I will use to derive this bound is an adaptation of the Salecker-Wigner framework to the analysis of gravity-wave measurability. 
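Before carrying out that adaptation, it may help to fix orders of magnitude for the fuzziness scenarios just reviewed; the following sketch (with illustrative, Planckian choices of the length scales, which, as stressed above, are not mandatory) compares Eqs. (23) and (37):

```python
# Parenthetical sketch: compare the random-walk fuzziness of Eq. (23)
# with the f^(-5/6) scenario of Eq. (37), both evaluated with the
# (illustrative) choice of a Planckian length scale.
import math

c     = 2.998e8    # m/s
L_p   = 1.616e-35  # m
T_obs = 1e-2       # s, illustrative

sigma_rw   = math.sqrt(L_p * c * T_obs)            # Eq. (23), L_QG ~ L_p
sigma_cube = (L_p**2 * c * T_obs) ** (1.0 / 3.0)   # Eq. (37), L~_QG ~ L_p

print(f"random-walk (beta=1):  sigma_D ~ {sigma_rw:.1e} m")    # ~ 7e-15 m
print(f"cube-root  (beta=5/6): sigma_D ~ {sigma_cube:.1e} m")  # ~ 9e-22 m
```

The seven orders of magnitude separating the two scenarios illustrate why the $`f^{-5/6}`$ case is much harder to constrain with present interferometers.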
Basically, while the Salecker-Wigner framework concerns the measurement of a distance $`D`$, I shall here apply the same reasoning to the measurement of “distance displacements” in interferometer arms (of length $`L`$) of the type that could be induced by a gravity wave. Having clarified in which sense this section represents a deviation from the main bulk of observations reported in the present Article, let me start the discussion by reminding the reader of the fact that, as already mentioned in Section 2, the interference pattern generated by a modern interferometer can be remarkably sensitive to changes in the positions of the mirrors relative to the beam splitter, and is therefore sensitive to gravitational waves (which, as described in the proper reference frame , have the effect of changing these relative positions). With just a few lines of simple algebra one can show that an ideal gravitational wave of amplitude $`h`$ and reduced<sup>26</sup><sup>26</sup>26I report these results in terms of reduced wavelengths $`\lambda ^o`$ (which are related to the wavelengths $`\lambda `$ by $`\lambda ^o=\lambda /(2\pi )`$) in order to avoid cumbersome factors of $`\pi `$ in some of the formulas. wavelength $`\lambda _{gw}^o`$ propagating along the direction orthogonal to the plane of the interferometer would cause a change in the interference pattern corresponding to a phase shift of magnitude $`\mathrm{\Delta }\varphi =D_L/\lambda ^o`$, where $`\lambda ^o`$ is the reduced wavelength of the laser beam used in the measurement procedure and $`D_L\simeq 2h\lambda _{gw}^o\left|\mathrm{sin}\left({\displaystyle \frac{L}{2\lambda _{gw}^o}}\right)\right|,`$ (39) is the magnitude of the change caused by the gravitational wave in the length of the arms of the interferometer. (The changes in the lengths of the two arms have opposite sign .) As already mentioned in Section 2, modern techniques allow one to construct gravity-wave interferometers with truly remarkable sensitivity; in particular, at least for gravitational waves with $`\lambda _{gw}^o`$ of order $`10^3\,km`$, the next LIGO/VIRGO generation of detectors should be sensitive to $`h`$ as low as $`3\cdot 10^{-22}`$. Since $`h\sim 3\cdot 10^{-22}`$ causes a $`D_L`$ of order $`10^{-18}\,m`$ in arm lengths $`L`$ of order $`3\,km`$, it is not surprising that in the analysis of gravity-wave interferometers, in spite of their huge size, one ends up having to take into account the type of quantum effects usually significant only for the study of processes at or below the atomic scale. In particular, there is the so-called standard quantum limit on the measurability of $`h`$ that results from the combined minimization of photon shot noise and radiation pressure noise. While a careful discussion of these two noise sources (which the interested reader can find in Ref. ) is quite insightful, here I shall rederive this standard quantum limit in an alternative<sup>27</sup><sup>27</sup>27While the standard quantum limit can be equivalently obtained either from the combined minimization of photon shot noise and radiation pressure noise or from the application of Heisenberg’s uncertainty principle to the position and momentum of the mirror, it is this author’s opinion that there might actually be a fundamental difference between the two derivations. In fact, it appears (see, e.g., Ref. 
and references therein) that the limit obtained through combined minimization of photon shot noise and radiation pressure noise can be violated by careful exploitation of the properties of squeezed light, whereas the limit obtained through the application of Heisenberg’s uncertainty principle to the position and momentum of the mirror is truly fundamental. and straightforward manner (also discussed in Ref. ), which relies on the application of Heisenberg’s uncertainty principle to the position and momentum of a mirror relative to the position of the beam splitter. This can be done along the lines of my analysis of the Salecker-Wigner procedure for the measurement of distances. Since the mirrors and the beam splitter are macroscopic, and therefore the corresponding momenta and velocities are related non-relativistically, Heisenberg’s uncertainty principle implies that <sup>28</sup><sup>28</sup>28Note that in the setup of gravity-wave interferometers the test masses are required to be free-falling . In such a context the type of observations reported in Ref. is not only inadequate for in-principle analyses of measurability in the full quantum-gravity regime but in most cases, as a result of the free-fall requirement, it will also be inapplicable in the ordinary context of present-day interferometers. $`\delta x\,\delta v\geq {\displaystyle \frac{\hbar }{2}}\left({\displaystyle \frac{1}{M_m}}+{\displaystyle \frac{1}{M_b}}\right)\geq {\displaystyle \frac{\hbar }{2M_m}},`$ (40) where $`\delta x`$ and $`\delta v`$ are the uncertainties in the relative position and relative velocity, $`M_m`$ is the mass of the mirror, and $`M_b`$ is the mass of the beam splitter. \[Again, the relative motion is characterised by the reduced mass, which is given in this case by $`(1/M_m+1/M_b)^{-1}`$.\] Clearly, the high precision of the planned measurements requires that the position of the mirrors be kept under control during the whole time $`2L/c`$ that the beam spends in between the arms of the detector before superposition. When combined with (40) this leads to the finding that, for any given value of $`M_m`$, the $`D_L`$ induced by the gravitational wave can be measured only up to an irreducible uncertainty, the so-called standard quantum limit: $`\delta D_L\geq \delta x+\delta v\,{\displaystyle \frac{2L}{c}}\geq \delta x+{\displaystyle \frac{\hbar L}{cM_m\delta x}}\geq \sqrt{{\displaystyle \frac{\hbar L}{cM_m}}}.`$ (41) The case of gravity-wave measurements is a canonical example of my general argument that the infinite-mass classical-device limit underlying ordinary quantum mechanics is inconsistent with the nature of gravitational measurements. As the devices get more and more massive they not only increasingly disturb the gravitational/geometrical observables, but eventually (well before reaching the infinite-mass limit) they also render impossible the completion of the procedure of measurement of gravitational observables. In trying to assess how this observation affects the measurability of the properties of a gravity wave let me start by combining Eqs. 
(39) and (41): $`\delta h=h\,{\displaystyle \frac{\delta D_L}{D_L}}\gtrsim {\displaystyle \frac{\sqrt{\frac{\hbar L}{cM_m}}}{2\lambda _{gw}^o\left|\mathrm{sin}\left(\frac{L}{2\lambda _{gw}^o}\right)\right|}}.`$ (42) In complete analogy with some of the observations made in Section 3 concerning the measurability of distances, I observe that, when gravitational effects are taken into account, the limit of infinite mirror mass is of course inadmissible. At the very least $`M_m`$ must be small enough that the mirror does not turn into a black hole.<sup>29</sup><sup>29</sup>29This is of course a very conservative bound, since a mirror stops being useful as a device well before it turns into a black hole, but even this conservative approach leads to an interesting conclusion. In order for the mirror not to be a black hole one requires $`M_m<\hbar S_m/(cL_p^2)`$, where $`S_m`$ is the size of the region of space occupied by the mirror. This observation combined with (42) implies that one would have obtained a bound on the measurability of $`h`$ if one found a maximum allowed mirror size $`S_m`$. In estimating this maximum $`S_m`$ one can be easily led to some extreme and incorrect assumptions. In particular, one could suppose that in order to achieve a sensitivity to $`D_L`$ as low as $`10^{-18}\,m`$ it might be necessary to “accurately position” each $`10^{-36}\,m^2`$ surface element of the mirror. If this was really necessary, our line of argument would then lead to a rather large measurability bound. Fortunately, the phase of the wavefront of the reflected light beam is determined by the average position of all the atoms across the beam’s width, and microscopic irregularities in the structure of the mirror only lead to scattering of a small fraction of light out of the beam. This suggests that in our analysis the size of the mirror should be assumed to be of the order of the width of the beam . So $`S_m`$ cannot be too small, but on the other hand, in light of this observation, and taking into account the in-principle nature<sup>30</sup><sup>30</sup>30For the gravitational waves to which LIGO/VIRGO will be most sensitive, which have $`\lambda _{gw}^o`$ of order $`10^3\,km`$, the requirement $`S_m<\lambda _{gw}^o`$ simply states that the size of mirrors should be smaller than $`10^3\,km`$. This bound might appear very conservative, but I am trying to establish an in-principle limitation on the measurability of $`h`$. Since such a bound was not previously established, in this first study I just want to clarify that the bound exists, rather than dwell on the exact magnitude of the bound. I therefore prefer to be very conservative in my estimates. of the analysis I am performing, it is clear that $`S_m`$ could not be too large either, and in particular it appears safe to assume that $`S_m`$ should be smaller than the $`\lambda _{gw}^o`$ of the gravity wave which one is planning to observe. If $`S_m`$ is indeed the width of the beam (and therefore the effective size of the mirror), then one must exclude the possibility $`S_m>\lambda _{gw}^o`$ because otherwise the same gravity wave which one is intending to observe would cause phenomena preventing the proper completion of the measurement procedure (e.g. deforming the mirror and leading to a nonlinear relation between $`D_L`$ and $`h`$). 
The conservative bound $`S_m<\lambda _{gw}^o`$ also appears to be safe with respect to the expectations of another type of intuition, usually resulting from experience with table-top interferometers. Within this assumption one is always tempted to think of the mirror as attached to a very massive body. Even setting aside the limitations on this type of idealized attachments that are set by the uncertainty principle and causality, it appears that the bound $`S_m<\lambda _{gw}^o`$ should be safe because of the requirement that the mirror be free-falling. \[It actually seems extremely conservative to just demand of such a free-falling interferometer mirror that the sum of its mass and the mass of any body “attached” to it should not exceed the mass of a black hole of size $`\lambda _{gw}^o`$.\] In summary, it looks very safe to assume that $`M_m`$ should be smaller than $`\hbar \lambda _{gw}^o/(cL_p^2)`$, and this can be combined with (42) to obtain the measurability bound $`\delta h>{\displaystyle \frac{L_p}{2\lambda _{gw}^o}}{\displaystyle \frac{\sqrt{L/\lambda _{gw}^o}}{\left|\mathrm{sin}\left(\frac{L}{2\lambda _{gw}^o}\right)\right|}}.`$ (43) This result not only sets a lower bound on the measurability of $`h`$ with given arm length $`L`$, but also encodes an absolute (i.e. irrespective of the value of $`L`$) lower bound, as a result of the fact that the function $`\sqrt{x}/|\mathrm{sin}(x/2)|`$ has an absolute minimum: $`min[\sqrt{x}/|\mathrm{sin}(x/2)|]\simeq 1.66`$. This novel measurability bound is a significant departure from the principles of ordinary quantum mechanics, especially in light of the fact that it describes a limitation on the measurability of a single observable (the amplitude $`h`$ of a gravity wave), and that this limitation turns out to depend on the value (not the associated uncertainty) of another observable (the reduced wavelength $`\lambda _{gw}^o`$ of the same gravity wave). It is also significant that this new bound (43) encodes an aspect of a novel type of interplay between system and measuring apparatus in quantum-gravity regimes; in fact, in deriving (43) a crucial role was played by the fact that in accurate measurements of gravitational/geometrical observables it is no longer possible to advocate an idealized description of the devices. Also the $`T_{obs}`$-dependent bound on the measurability of distances which I reviewed in Section 3 encodes a departure from ordinary quantum mechanics and a novel type of interplay between system and measuring apparatus, but the bound (43) on the measurability of the amplitude of a gravity wave (which is one of the new results reported in the present Article) should provide even stronger motivation for the search of formalisms in which quantum gravity is based on a new mechanics, not exactly given by ordinary quantum mechanics. In fact, while one might still hope to find alternatives to the Salecker-Wigner measurement procedure that allow one to measure distances evading the corresponding measurability bounds, it appears hard to imagine that there could be anything (even among “gedanken laboratories”) better than an interferometer for measurements of the amplitude of a gravity wave. It is also important to realize that the bound (43) cannot be obtained by just assuming that the Planck length $`L_p`$ provides the minimum uncertainty for distances (and distance variations). 
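The quoted value of the minimum, and the size of the resulting absolute bound, can be checked with a short numerical sketch (the reduced wavelength used below is the illustrative $`10^3\,km`$ scale mentioned above):

```python
# Sketch: locate the absolute minimum of sqrt(x)/|sin(x/2)| and use it
# to evaluate the absolute bound encoded in Eq. (43). The reduced
# gravity-wave wavelength is an illustrative choice.
import math

def g(x):
    return math.sqrt(x) / abs(math.sin(x / 2.0))

# dense scan over the first period, where the absolute minimum sits
xs = [i * 1e-4 for i in range(1, 62831)]   # 0 < x < 2*pi
x_min = min(xs, key=g)
print(f"min ~ {g(x_min):.3f} at x = L/lambda_gw ~ {x_min:.3f}")  # ~1.660 at ~2.331

L_p    = 1.616e-35  # m
lam_gw = 1e6        # m, reduced wavelength ~ 10^3 km (illustrative)
delta_h_min = g(x_min) * L_p / (2 * lam_gw)
print(f"absolute bound: delta_h > {delta_h_min:.1e}")           # ~ 1.3e-41
```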
In fact, if the only limitation were $`\delta D_L\geq L_p`$ the resulting uncertainty on $`h`$, which I denote with $`\delta h^{(L_p)}`$, would have the property $`min[\delta h^{(L_p)}]=min\left[{\displaystyle \frac{L_p}{2\lambda _{gw}^o\left|\mathrm{sin}\left(\frac{L}{2\lambda _{gw}^o}\right)\right|}}\right]={\displaystyle \frac{L_p}{2\lambda _{gw}^o}},`$ (44) whereas, exploiting the above-mentioned properties of the function $`\sqrt{x}/|\mathrm{sin}(x/2)|`$, from (43) one finds<sup>31</sup><sup>31</sup>31I am here (for “pedagogical” purposes) somewhat simplifying the comparison between $`\delta h`$ and $`\delta h^{(L_p)}`$. As mentioned, in principle one should take into account both uncertainties inherent in the “system” under observation, which are likely to be characterized exclusively by the Planck-length bound, and uncertainties coming from the “measuring apparatus”, which might easily involve other length (or time) scales besides the Planck length. It would therefore be proper to compare $`\delta h^{(L_p)}`$, which would be the only contribution present in the conventional idealization of “classical devices”, with the sum $`\delta h+\delta h^{(L_p)}`$, which, as appropriate for quantum gravity, provides a sum of system-inherent uncertainties plus apparatus-induced uncertainties. $`min[\delta h]>min\left[{\displaystyle \frac{L_p}{2\lambda _{gw}^o}}{\displaystyle \frac{\sqrt{L/\lambda _{gw}^o}}{\left|\mathrm{sin}\left(\frac{L}{2\lambda _{gw}^o}\right)\right|}}\right]>min[\delta h^{(L_p)}].`$ (45) In general, the dependence of $`\delta h^{(L_p)}`$ on $`\lambda _{gw}^o`$ is different from the one of $`\delta h`$. Actually, in light of the comparison of (44) with (45) it is amusing to observe that the bound (43) could be seen as the result of a minimum length $`L_p`$ combined with a $`\lambda _{gw}^o`$-dependent correction. This would be consistent with some of the ideas mentioned in Section 3 (the energy-dependent effect of in vacuo dispersion and the corresponding proposal (33) for distance fuzziness) in which the magnitude of the quantum-gravity effect depends rather sensitively on some energy-related aspect of the problem under investigation (just like $`\lambda _{gw}^o`$ for the gravity wave). It is easy to verify that the bound (43) would not observably affect the operation of even the most sophisticated planned interferometers. However, in the spirit of what I did in the previous sections considering the operative definition of distances, also for the amplitudes of gravity waves the fact that we have encountered an obstruction in the measurement analysis based on ordinary quantum mechanics (and the fact that by mixing gravitation and quantum mechanics we have obtained some intuition for novel qualitative features of such gravity-wave amplitudes in quantum gravity) could be used as a starting point for the proposal of novel quantum-gravity effects possibly larger than the estimate (43). Although possibly very interesting, these fully quantum-gravity scenarios for the properties of gravity-wave amplitudes will not be explored in these notes. I just want to observe that the strain sensitivity ($`S_h(f)\equiv S(f)/L`$) of order $`10^{-22}/\sqrt{Hz}`$ which is soon going to be achieved by several detectors corresponds to a rather natural scale for a fundamental quantum-gravity-induced stochastic-gravity-wave-like noise; in fact, $`10^{-22}/\sqrt{Hz}\sim \sqrt{L_p/c}`$. 
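That last numerical coincidence is easy to check, and one can also convert the fuzziness spectral densities of Eqs. (24) and (38) into strain-equivalent form for a km-scale arm (the arm length below is an illustrative assumption):

```python
# Sketch: (i) check that sqrt(L_p/c) is indeed of order 1e-22 / sqrt(Hz);
# (ii) convert the displacement spectral densities of Eqs. (24) and (38)
# into strain form S_h = S/L for an illustrative 3 km arm, at f = 100 Hz.
import math

c   = 2.998e8    # m/s
L_p = 1.616e-35  # m
L   = 3e3        # m, assumed arm length
f   = 100.0      # Hz, illustrative

print(f"sqrt(L_p/c) ~ {math.sqrt(L_p / c):.1e} / sqrt(Hz)")      # ~ 2.3e-22

S_rw = math.sqrt(L_p * c) / f                        # Eq. (24), L_QG ~ L_p
S_56 = (L_p**2 * c) ** (1.0/3.0) * f ** (-5.0/6.0)   # Eq. (38), L~_QG ~ L_p

print(f"strain, beta=1:   {S_rw / L:.1e} / sqrt(Hz)")  # ~ 2e-19, above 1e-22
print(f"strain, beta=5/6: {S_56 / L:.1e} / sqrt(Hz)")  # ~ 3e-26, below 1e-22
```

With these illustrative numbers the Planckian random-walk scenario sits well above the quoted $`10^{-22}/\sqrt{Hz}`$ sensitivity (hence the constraints recalled earlier), while the $`\beta =5/6`$ scenario sits well below it.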
## 10 RELATIONS WITH OTHER QUANTUM-GRAVITY APPROACHES In this section I comment on the connections and the differences between some of the ideas that I reviewed in these notes and other quantum-gravity ideas. ### 10.1 Canonical Quantum Gravity One of the most popular quantum-gravity approaches is the one in which the ordinary canonical formalism of quantum mechanics is applied to (some formulation of) Einstein’s Gravity. In spite of the fact that some of the observations reviewed in the previous sections suggest that quantum gravity should require a new mechanics, not exactly given by ordinary quantum mechanics, it is very interesting<sup>32</sup><sup>32</sup>32I am here taking a viewpoint that might be summarized rephrasing a comment by B.S. DeWitt in Ref. . While some of the arguments reviewed here appear to indicate that ordinary quantum mechanics cannot suffice for quantum gravity, it is still plausible that the language of ordinary quantum mechanics might be a useful tool for the description of its own demise. This would be analogous to something we have learned in the study of special relativity: one could insist on describing the observed Lorentz-Fitzgerald contraction as the result of relativistic modifications in the force law between atoms, but in order to capture the true essence of the new regime it is necessary to embrace the new conceptual framework of special relativity. that some of the phenomena considered in the previous sections have also emerged in studies of canonical quantum gravity. As mentioned, the most direct connection was found in the study reported in Ref. , which was motivated by Ref. . In fact, Ref. shows that the popular canonical/loop quantum gravity admits the phenomenon of deformed dispersion relations, with the deformation going linearly with the Planck length. Concerning the bounds on the measurability of distances it is probably fair to say that the situation in canonical/loop quantum gravity is not yet clear because the present formulations do not appear to lead to a compelling candidate “length operator.” This author would like to interpret the problems associated with the length operator as an indication that perhaps something unexpected might actually emerge in canonical/loop quantum gravity as a length operator, possibly something with properties fitting the intuition of some of the scenarios for fuzzy distances which I reviewed. Actually, the random-walk space-time fuzziness model might have a (somewhat weak, but intriguing) connection with “quantum mechanics applied to gravity”, at least at the level seen by comparison with the scenario discussed in Ref. , which was motivated by the intuition that is emerging from investigations of canonical/loop quantum gravity. The “moves” of Ref. share many of the properties of the “random steps” of the random-walk models here considered. ### 10.2 Critical and non-critical String Theories Unfortunately, in the popular quantum-gravity approach based on critical superstrings<sup>33</sup><sup>33</sup>33As already mentioned, the mechanics of critical superstrings is just an ordinary quantum mechanics. All of the new structures emerging in this exciting formalism are the result of applying ordinary quantum mechanics to the dynamics of extended fundamental objects, rather than point-like objects (particles). not many results have been derived concerning directly the quantum properties of space-time. 
Perhaps the most noticeable such results are the ones on limitations on the measurability of distances that emerged in the scattering analyses reported in Refs. , which I already mentioned. A rather different picture is emerging (through the difficult technical aspects of this rich formalism) in Liouville (non-critical) strings , whose development was partly motivated by intuition concerning the quantum-gravity vacuum that is rather close to the one traditionally associated with the mentioned works of Wheeler and Hawking. Evidence has been found in Liouville strings supporting the validity of deformed dispersion relations, with the deformation going linearly with the Planck/string length. In the sense clarified in Subsection 8.3 this approach might also host a bound on the measurability of distances which grows with $`\sqrt{T_{obs}}`$. ### 10.3 Other types of measurement analyses Because of the lack of experimental input, it is not surprising that many authors have been seeking some intuition on quantum gravity by formal analyses of the ways in which the interplay between gravitation and quantum mechanics could affect measurement procedures. A large portion of these analyses produced a “$`min[\delta D]`$” with $`D`$ denoting a distance; however, the same type of notation was used for structures defined in significantly different ways. Also different meanings have been given by different authors to the statement “absolute bound on the measurability of an observable.” Quite important for the topics here discussed are the differences (which might not be totally transparent as a result of this unfortunate choice of overlapping notations) between the approach advocated in Refs. and the approaches advocated in Refs. . In Refs. “$`min[\delta D]`$” denotes an absolute limitation on the measurability of a distance $`D`$. The studies analyzed the interplay of gravity and quantum mechanics in defining a net of time-like geodesics, and in those studies “$`min[\delta D]`$” characterizes the maximum “tightness” achievable for the net of time-like geodesics. Moreover, in Refs. it was required that the measurement procedure should not affect/modify the geometric observable being measured, and “absolute bounds on the measurability” were obtained in this specific sense. Instead, in Refs. it was envisioned that the observable which is being measured might depend also on the devices (the underlying view is that observables in quantum gravity would always be, in a sense, shared properties of “system” and “apparatus”), and it was only required that the nature of the devices be consistent with the various stages of the measurement procedure (for example if a device turned into a black hole some of the exchanges of signals needed for the measurement would be impossible). The measurability bounds of Refs. are therefore to be intended from this more fundamental perspective, and this is crucial for the possibility that these measurability bounds be associated to a fundamental quantum-gravity mechanism for “fuzziness” (quantum fluctuations of space-time). The analyses reported in Refs. did not include any reference to fuzzy space-times of the type operatively defined in terms of stochastic processes in Section 4 (and in Ref. ). The more fundamental nature of the bounds obtained in Refs. is also crucial for the arguments suggesting that quantum gravity might require a new mechanics, not exactly given by ordinary quantum mechanics. The analyses reported in Refs. did not include any reference to this possibility. 
Having clarified that there is a “double difference” (different “$`min`$” and different “$`\delta D`$”) between the meaning of $`min[\delta D]`$ adopted in Refs. and the meaning of $`min[\delta D]`$ adopted in Refs. , it is however important to notice that the studies reported in Refs. were among the first studies which showed how in some aspects of measurement analysis the Planck length might appear together with other length scales in the problem. For example, a quantum-gravity effect naturally involving something of length-squared dimensions might not necessarily go like $`L_p^2`$: in some cases it could go like $`\mathrm{\Lambda }L_p`$, with $`\mathrm{\Lambda }`$ some other length scale in the problem. Interestingly, the analysis of the interplay of gravity and quantum mechanics in defining a net of time-like geodesics reported in Ref. concluded that the maximum “tightness” achievable for the geodesics would be characterized by $`\sqrt{L_p^2R^{-1}s}`$, where $`R`$ is the radius of the (spherically symmetric) clocks whose world lines define the network of geodesics, and $`s`$ is the characteristic distance scale over which one is intending to define such a network. The $`\sqrt{L_p^2R^{-1}s}`$ maximum tightness discussed in Ref. is formally analogous to Eq. (26), but, as clarified above, this “maximum tightness” was defined in a very different (“doubly different”) way, and therefore the two proposals have completely different physical implications. Actually, in Ref. it was also stated that for a single geodesic distance (which might be closer to the type of distance measurability analysis reported in Refs. ) one could achieve accuracy significantly better than the formula $`\sqrt{L_p^2R^{-1}s}`$, which was interpreted in Ref. as a direct result of the structure of a network of geodesics. Relations of the type $`min[\delta D]\sim (L_p^2D)^{1/3}`$, which are formally analogous to Eq. (36), were encountered in the analysis of maximum tightness achievable for a geodesics network reported in Ref. and in the analysis of measurability of distances reported in Ref. , although once again the definitions of “$`min`$” and “$`\delta D`$” used in these studies are completely different from the ones relevant for the “$`min[\delta D]`$” of Eq. (36). ## 11 QUANTUM GRAVITY, NO STRINGS (OR LOOPS) ATTACHED Some of the arguments reviewed in these notes appear to suggest that quantum gravity might require a mechanics not exactly of the type of ordinary quantum mechanics. In particular, the new mechanics might have to accommodate a somewhat different relationship (in a sense, “more democratic”) between “system” and “measuring apparatus”, and should take into account the fact that the limit in which the apparatus behaves classically is not accessible once gravitation is turned on. The fact that the most popular quantum-gravity approaches, including critical superstrings and canonical/loop quantum gravity, are based on ordinary quantum mechanics but seem inconsistent with a correspondence between formalism and measurability bounds of the type sought and found in non-gravitational quantum mechanics (through the work of Bohr, Rosenfeld, Landau, Peierls, Einstein, Salecker, Wigner and many others), represents, in this author’s humble opinion, one of the outstanding problems of these approaches. 
Still, it is of great importance for quantum-gravity research that these approaches continue to be pursued very aggressively: they might eventually encounter along their development unforeseeable answers to these questions or else, as they are “pushed to the limit”, they might turn out to fail in a way that provides insight on the correct theory. However, the observations pointing us toward deviations from ordinary quantum mechanics could provide motivation for the parallel development of alternative quantum-gravity approaches. But how could we envision quantum gravity with no strings (or “canonical loops”) attached? More properly, how can we devise a new mechanics when we have no direct experimental data on its structure? Classical mechanics was abandoned for quantum mechanics only after a relatively long period of analysis of physical problems such as the black-body spectrum and the photoelectric effect which contained very relevant information. We don’t seem to have any such insightful physical problem. At best we might have identified the type of conceptual issues which Mach had discussed with respect to Newtonian physics. It is amusing to notice that the analogy with Machian conceptual analyses might actually be quite proper, since at the beginning of this century we were forced to renounce the comfort of the reference to “absolute space” and now that we are reaching the end of this century we might be forced to renounce the comfort of an idealized classical measuring apparatus. Our task is that much harder in light of the fact that (unless something like large extra dimensions is verified in Nature) we must make a gigantic leap from the energy scales we presently understand to Planckian energy scales. While of course it is not impossible that we eventually do come up with the correct recipe for this gigantic jump, one less optimistic strategy that might be worth pursuing is the one of trying to come up with some effective theory useful for the description of new space-time-related phenomena occurring in an energy-scale range extending from somewhere not much above presently achievable energies up to somewhere safely below the Planck scale. These theories might provide guidance to experimentalists, and in turn (if confirmed by experiments) might provide a useful intermediate step toward the Planck scale. For those who are not certain that we can make a lucky guess of the whole giant step toward the Planck scale<sup>34</sup><sup>34</sup>34Understandably, some are rendered prudent by the realization that the ratio between the Planck scale and the energy scales we are probing with modern particle colliders is so big that it is, for example, comparable (within a couple of orders of magnitude) to the ratio between the average Earth-Moon distance and the Bohr radius. this strategy might provide a possibility to eventually get to the Planck regime only after a (long and painful) series of intermediate steps. Some of the ideas discussed in the previous sections can be seen as examples of this strategy. In this section I collect additional relevant material. ### 11.1 A low-energy effective theory of quantum gravity While the primary emphasis has been on experimental tests of quantum-gravity-motivated candidate phenomena, some of the arguments (which are based on Refs. ) reviewed in these lecture notes can be seen as attempts to identify properties that one could demand of a theory suitable for a first stage of partial unification of gravitation and quantum mechanics. 
This first stage of partial unification would be a low-energy effective theory capturing only some rough features of quantum gravity. In particular, as discussed in Refs. , it is plausible that the most significant implications of quantum gravity for low-energy (large-distance) physics might be associated with the structure of the non-trivial “quantum-gravity vacuum”. A satisfactory picture of this vacuum is not available at present, and therefore we must generically characterize it as the appropriate new concept that in quantum gravity takes the place of the ordinary concept of “empty space”; however, it is plausible that some of the arguments by Wheeler, Hawking and followers, attempting to develop an intuitive description of the quantum-gravity vacuum, might have captured at least some of its actual properties. Therefore the experimental investigations of space-time foam discussed in some of the preceding sections could be quite relevant for the search of a theory describing a first stage of partial unification of gravitation and quantum mechanics. Other possible elements for the search of such a theory come from studies suggesting that this unification might require a new (non-classical) concept of measuring apparatus and a new relationship between measuring apparatus and system. I have reviewed some of the relevant arguments through the discussion of the Salecker-Wigner setup for the measurement of distances, which manifested the problems associated with the infinite-mass classical-device limit. As mentioned, a similar conclusion was already drawn in the context of attempts (see, e.g., Ref. ) to generalize to the study of the measurability of gravitational fields the famous Bohr-Rosenfeld analysis of the measurability of the electromagnetic field. It seems reasonable to explore the possibility that already the first stage of partial unification of gravitation and quantum mechanics might require a new mechanics. A (related) plausible feature of the correct effective low-energy theory of quantum gravity is (some form of) a novel bound on the measurability of distances. This appears to be an inevitable consequence of relinquishing the idealized methods of measurement analysis that rely on the artifacts of the infinite-mass classical-device limit. If indeed one of these novel measurability bounds holds in the physical world, and if indeed the structure of the quantum-gravity vacuum is non-trivial and involves space-time fuzziness, it appears also plausible that these two features are related, i.e. that the fuzziness of space-time would be ultimately responsible for the measurability bounds. It is also plausible that an effective large-distance description of some aspects of quantum gravity might involve quantum symmetries and noncommutative geometry (while at the Planck scale even more novel geometric structures might be required). The intuition emerging from these considerations on a novel relationship between measuring apparatus and system and from a Wheeler-Hawking picture of the quantum-gravity vacuum has not yet been implemented in a fully-developed new formalism describing the first stage of partial unification of gravitation and quantum mechanics, but one can use this emerging intuition for rough estimates of certain candidate quantum-gravity effects. Some of the theoretical estimates that I reviewed in the preceding sections, particularly the ones on distance fuzziness, can be seen as examples of this. 
Besides the possibility of direct experimental tests (such as some of the ones here reviewed), studies of low-energy effective quantum-gravity models might provide a perspective on quantum gravity that is complementary to the one emerging from approaches based on proposals for a one-step full unification of gravitation and quantum mechanics. On one side of this complementarity there are the attempts to find a low-energy effective quantum gravity, which are necessarily driven by intuition based on direct extrapolation from known physical regimes; they are therefore rather close to the phenomenological realm, but they are confronted with huge difficulties when trying to incorporate this physical intuition within a completely new formalism. On the other side there are the attempts at one-step full unification of gravitation and quantum mechanics, which usually start from some intuition concerning the appropriate formalism (e.g., canonical/loop quantum gravity or critical superstrings) but are confronted by huge difficulties when trying to “come down” to the level of phenomenological predictions. These complementary perspectives might meet at some mid-way point leading to new insight in quantum gravity physics. One instance in which this mid-way-point meeting has already been successful is provided by the mentioned results reported in Ref. , where the candidate phenomenon of quantum-gravity induced deformed dispersion relations, which had been proposed within phenomenological analyses of the type needed for the search for a low-energy theory of quantum gravity, was shown to be consistent with the structure of canonical/loop quantum gravity. ### 11.2 Theories on non-commutative Minkowski space-time At various points in these notes there is a more or less explicit reference to deformed symmetries and noncommutative space-times. (The general idea of some form of connection between Planck-scale physics and quantum groups, with their associated noncommutative geometry, is of course not new; see, e.g., Refs. . Moreover, some support for noncommutativity of space-time has also been found within measurability analyses .) Just in the previous subsection I have recalled the conjecture that an effective large-distance description of some aspects of quantum gravity might involve quantum symmetries and noncommutative geometry. The type of in vacuo dispersion which can be tested using observations of gamma rays from distant astrophysical sources is naturally encoded within a consistent deformation of Poincaré symmetries . A useful structure (at least for toy-model purposes, but perhaps even more than that) appears to be the noncommutative (so-called “$`\kappa `$”) Minkowski space-time $`[x^i,t]=i\lambda x^i,[x^i,x^j]=0`$ (46) where $`i,j=1,2,3`$ and $`\lambda `$ (commonly denoted by $`1/\kappa `$) is a free length scale. (This author is partly responsible for the redundant convention of using the notation $`\lambda `$ when the reader is invited to visualize a length scale and going back to the $`\kappa `$ notation when instead it might be natural for the reader to visualize a mass/energy scale. In spite of its unpleasantness, this redundancy is here reiterated in order to allow the reader to quickly identify/interpret corresponding equations in Ref. .) 
This simple noncommutative space-time could be taken as a basis for an effective description of phenomena associated with a nontrivial foamy quantum-gravity vacuum. (In particular, within one particular attempt to model space-time foam, the one of Liouville non-critical strings , the time “coordinate” appears to have properties that might be suggestive of a $`\kappa `$-Minkowski space-time.) When probed very softly such a space would appear as an ordinary Minkowski space-time (generalizations would of course be necessary for a description of how the quantum-gravity foam affects spaces which are curved, i.e. non-Minkowski, at the classical level, and even for spaces which are Minkowski at the classical level a full quantum gravity would of course predict phenomena which could not be simply encoded in noncommutativity of Minkowski space), but probes of sufficiently high energy would be affected by the properties of the quantum-gravity foam, and one could attempt to model (at least some aspects of) the corresponding dynamics using a noncommutative Minkowski space-time. In light of this physical motivation it is natural to assume that $`\lambda `$ is related to the Planck length. The so-called $`\kappa `$-deformed Poincaré quantum group acts covariantly on the $`\kappa `$-Minkowski space-time (46). The dispersion relation for massless spin-0 particles $`\lambda ^{-2}\left(e^{\lambda E}+e^{-\lambda E}-2\right)-\stackrel{}{k}^2e^{\lambda E}=0,`$ (47) which at low energies describes a deformation that is linearly suppressed by $`\lambda `$ (and therefore, if indeed $`\lambda \sim L_p`$, is of the type discussed in Section 5), emerges as the appropriate Casimir of the $`\kappa `$-deformed Poincaré group. Rigorous support for the interpretation of (47) as a bona fide dispersion relation characterizing the propagation of waves in the $`\kappa `$-Minkowski space-time was recently provided in Ref. .
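To make explicit the sense in which the deformation in (47) is linearly suppressed by $`\lambda `$, it may help to sketch the standard low-energy expansion (this expansion is my own illustrative addition, not text from the original notes). For $`\lambda E\ll 1`$ one has $`\lambda ^{-2}\left(e^{\lambda E}+e^{-\lambda E}-2\right)=E^2+O(\lambda ^2E^4)`$ and $`e^{\lambda E}\stackrel{}{k}^2=\stackrel{}{k}^2(1+\lambda E)+O(\lambda ^2)`$, so that (47) reduces to $`E^2\simeq \stackrel{}{k}^2(1+\lambda E)`$. The group velocity of a massless wave therefore acquires an energy-dependent correction of relative order $`\lambda E`$, which is precisely the magnitude of in vacuo dispersion probed by the gamma-ray observations discussed in Section 5.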
In Ref. it was also observed that, using the quantum group Fourier transform which was worked out for our particular algebra in Ref. , there might be a rather simple approach to the definition of a field theory on the $`\kappa `$-Minkowski space-time. In fact, through the quantum group Fourier transform it is possible to rewrite structures living on noncommutative space-time as structures living on a classical (but nonAbelian) “energy-momentum” space. If one is content to evaluate everything in energy-momentum space, this observation gives the opportunity to bypass all problems directly associated with the non-commutativity of space-time. While waiting for a compelling space-time formulation of field theories on noncommutative geometries to emerge, it seems reasonable to restrict all considerations to the energy-momentum space. This approach does not work for any noncommutative space-time, but only for those where the space-time coordinate algebra is the enveloping algebra of a Lie algebra, with the Lie algebra generators regarded ‘upside down’ as noncommuting coordinates . (Another, partly related but different, $`\kappa `$-Minkowski motivated proposal for field theory was recently put forward in Ref. . I thank J. Lukierski for bringing this paper to my attention.) Within this viewpoint a field theory is not naturally described in terms of a Lagrangian, but rather it is characterized directly in terms of Feynman diagrams. In principle, according to this proposal a given ordinary field theory can be “deformed” into a counterpart living in a suitable noncommutative space-time not by fancy quantum-group methods but simply by the appropriate modification of the momentum-space Feynman rules to those appropriate for a nonAbelian group. Additional considerations can be found in Ref. , but, in order to give at least one example of how this nonAbelian deformation could be applied, let me observe here that the natural propagator of a massless spin-0 particle on $`\kappa `$-Minkowski space-time should be given in energy-momentum space by the inverse of the operator in the dispersion relation (47), i.e. in place of $`D=(\omega ^2-\stackrel{}{k}^2-m^2)^{-1}`$ one would take $`D_\lambda =\left(\lambda ^{-2}(e^{\lambda \omega }+e^{-\lambda \omega }-2)-e^{\lambda \omega }\stackrel{}{k}^2\right)^{-1}.`$ (48) As discussed in Ref. , the elements of this approach to field theory appear to lead naturally to a deformation of CPT symmetries, which would first show up in experiments as a violation of ordinary CPT invariance. The development of realistic field theories of this type might therefore provide us with a single workable formalism in which both in vacuo dispersion and violations of ordinary CPT invariance could be computed explicitly (rather than being expressed in terms of unknown parameters), connecting all of the aspects of these candidate quantum-gravity phenomena to the value of $`\lambda \equiv 1/\kappa `$. (Until now the young field of quantum-gravity phenomenology has relied on “single-use” phenomenological models, whose parameters are only relevant in one physical context. A first step toward a greater maturity of this phenomenological programme would be the development of phenomenological models that apply to more than one physical context, with the same parameters fitted using data from more than one physical context. The type of field theory on $`\kappa `$-Minkowski space-time that was considered in Ref. , with its single parameter $`\lambda `$, could represent a first example of these more ambitious multi-purpose phenomenological models.) One possible “added bonus” of this approach could be associated with the fact that also loop integration must be appropriately deformed, and it appears plausible that (as in other quantum-group based approaches ) the deformation might render ultraviolet finite some classes of diagrams which would ordinarily be affected by ultraviolet divergences.
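As a quick sanity check on (48), one can verify numerically that the deformed propagator reduces to the ordinary massless propagator in the $`\lambda \rightarrow 0`$ limit. The following minimal sketch is my own illustration (the function names and the chosen off-shell point are arbitrary), not code from any of the cited references:

```python
import numpy as np

def D_ordinary(omega, k, m=0.0):
    # Ordinary propagator: D = (omega^2 - k^2 - m^2)^(-1)
    return 1.0 / (omega**2 - k**2 - m**2)

def D_deformed(omega, k, lam):
    # Deformed propagator of Eq. (48), massless case
    return 1.0 / (lam**-2 * (np.exp(lam * omega) + np.exp(-lam * omega) - 2.0)
                  - np.exp(lam * omega) * k**2)

omega, k = 3.0, 1.0  # arbitrary off-shell point, units with c = hbar = 1
for lam in (1e-1, 1e-2, 1e-3):
    ratio = D_deformed(omega, k, lam) / D_ordinary(omega, k)
    print(f"lambda = {lam:.0e}: D_lambda / D = {ratio:.6f}")
# The ratio tends to 1 as lambda -> 0, recovering the standard result.
```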
## 12 CONSERVATIVE MOTIVATION AND OTHER CLOSING REMARKS Since this paper started off with the conclusions, readers might not be too surprised by the fact that I devote most of the closing remarks to some additional motivation. These remarks had to be postponed until the very end also because in reviewing the experiments it would have been unreasonable to take a conservative viewpoint: those who are so inclined should find in the present lecture notes encouragement for unlimited excitement. However, before closing I must take a step back and emphasize those reasons of interest in this emerging phenomenology which can be shared even by those readers who are approaching all this from a conservative viewpoint. In reviewing these quantum-gravity experiments I have not concealed my (however moderate) optimism regarding the prospects for data-driven advances in quantum-gravity research. I have reminded the reader of the support one finds in the quantum-gravity literature for the type of phenomena which we can now start to test, particularly distance fuzziness and violations of Lorentz and/or CPT symmetries, and I have also emphasized that it is thanks to recent advances in experimental techniques and ideas that these phenomena can be tested (see, for example, the role played by the remarkable sensitivities recently achieved with modern interferometers in the experimental proposal reviewed in Section 4 and the role played by very recent breakthroughs in GRB phenomenology in the experimental proposal reviewed in Section 5). But now let me emphasize that even from a conservative viewpoint these experiments are extremely significant, especially those that provide tests of quantum mechanics and tests of fundamental symmetries. One would not ordinarily need to stress this, but since these lectures are primarily addressed to young physics students let me observe that of course this type of test is crucial for a sound development of our science. Even if there were no theoretical arguments casting doubts on them, we could not possibly take for granted (extrapolating ad infinitum) ingredients of our understanding of Nature as crucial as its laws of mechanics and its symmetry structure. We should test quantum mechanics and fundamental symmetries anyway; we might as well do it along the directions which appear to be favoured by some quantum-gravity ideas. One important limitation of the present stage in the development of quantum-gravity phenomenology is the fact that most of the experiments actually test only one of the two main branches of quantum-gravity proposals: the proposals in which (in one or another fashion) quantum decoherence is present. There is in fact a connection (whose careful discussion I postpone to future publications) between decoherence and the type of violations of Lorentz and CPT symmetries and the type of power-law dependence on $`T_{obs}`$ of distance fuzziness here considered. The portion of our community which finds appealing the arguments supporting the decoherence-inducing Wheeler-Hawking space-time foam (and certain views on the so-called “black-hole information paradox”) can use these recent developments in quantum-gravity phenomenology as an opportunity for direct tests of some of its intuition. The rest of our community has developed an orthogonal intuition concerning the quantum-gravity realm, in which there is no place for quantum decoherence. The fact that we are finally at least at the point of testing decoherence-involving quantum-gravity approaches (something which was itself supposed to be impossible) should be seen as encouragement for the hope that even other quantum-gravity approaches will eventually be investigated experimentally. Even though there is of course no guarantee that this new phenomenology will be able to uncover important elements of the structure of quantum gravity, the fact that such a phenomenological programme exists suffices to make a legitimate (empirical) science of quantum gravity, a subject often derided as a safe haven for theorists wanting to speculate freely without any risk of being proven wrong by experiments. As emphasized in Refs. (and even in the non-technical press ) this can be an important turning point in the development of the field. 
Concerning the future of quantum-gravity phenomenology let me summarize my expectations in the form of a response to the question posed by the title of these notes: I believe that we are indeed at the dawn of quantum-gravity phenomenology, but the forecasts call for an extremely long and cloudy day with only a few rare moments of sunshine. Especially for those of us motivated by theoretical arguments suggesting that at the end of the road there should be a wonderful revolution of our understanding of Nature (perhaps a revolution of even greater magnitude than the one undergone during the first years of this century), it is crucial to profit fully from the few glimpses of the road ahead which quantum-gravity phenomenology will provide. Acknowledgements First of all I would like to thank the organizers of this XXXV Winter School, especially for their role in creating a very comfortable informal atmosphere, which facilitated exchanges of ideas among the participants. My understanding of Refs. , and benefited from conversations with R. Brustein, M. Gasperini, G.F. Giudice, N.E. Mavromatos, G. Veneziano and J.D. Wells, which I very gratefully acknowledge. Still on the “theory side” I am grateful to several colleagues who provided encouragement and stimulating feedback, particularly D. Ahluwalia, A. Ashtekar, J. Ellis, J. Lukierski, C. Rovelli, S. Sarkar, L. Smolin and J. Stachel. On the “experiment side” I would like to thank F. Barone, E. Bloom, J. Faist, R. Flaminio, L. Gammaitoni, T. Huffman, L. Marrucci, M. Punturo and J. Scargle for informative conversations (some were nearly tutorials) on various aspects of interferometry and gamma-ray-burst experiments.
# STUDY OF SOLAR ACTIVE REGIONS BASED ON BOAO VECTOR MAGNETOGRAMS ## I INTRODUCTION It is well known that magnetic fields play an important role in solar active phenomena such as solar flares and prominences. However, measurements of magnetic fields are only available at the photosphere and, very limitedly, at the chromosphere. Thus the evolution of magnetic fields at the photosphere has been widely used for studies of the relationships between active phenomena and magnetic fields. In this sense, reasonable measurements of solar magnetic fields at the photospheric level are of key importance in understanding solar activities (e.g. Hagyard et al. 1984). The Solar Flare Telescope (SOFT) was set up on the peak of Mt. Bohyun in 1995. A filter-based magnetograph (Vector Magnetograph, VMG) is attached to the SOFT (Moon 1999c, Park et al. 1997) of Bohyunsan Optical Astronomy Observatory (BOAO); it uses a very narrow-band Lyot (birefringent) filter to measure magnetic fields at the solar photosphere with the Fe I 6302.5 line. The Stokes parameters are measured by collecting spectrally integrated data over the filter passband. It has a very high time resolution of less than 1 minute, with a relatively large field of view ($`400^{\prime \prime }\times 300^{\prime \prime }`$). For the efficient use of the SOFT, we have developed the data acquisition system (Moon et al. 1996), the telescope control software (Moon et al. 1997), the KD\*P control system (Nam et al. 1997), and the four-channel filter control system (Jang et al. 1998). The calibration problem of filter-based magnetographs with the Fe I 6302.5 spectral line has been discussed by several authors (Ichimoto 1993, Sakurai et al. 1995, Kim 1997, Moon et al. 1999b). Recently, Moon et al. (1999b) developed an improved calibration method using theoretical Stokes polarization signals calculated with various inclination angles of magnetic fields (Hagyard and Kineke 1995). In this paper we study solar active regions using BOAO vector magnetograms with the new calibration method. For this we describe how to analyze BOAO vector magnetograms in Section II and compare observed magnetograms with corresponding magnetograms from other solar observatories in Section III. In Section IV we present some observational results on AR 8419 by the SOFT. A brief summary and conclusion are given in Section V. ## II ANALYSIS OF VECTOR MAGNETOGRAMS ### (a) Dark Frame and Flat Field Correction For detecting observed images through the VMG we use a SONY XC-77 video CCD, whose signal is digitized by the image processor (Moon et al. 1996). Dark frame observations are made by closing the cover of the telescope. Flat field observations were made with calibration optics producing defocused light. However, the observed flat images had relatively large intensity gradients, so we now use a defocused intensity image of the solar disk center as a flat image, obtained by adjusting a focusing motor. The dark frame (D) and flat field (F) corrections are made by $$I_c(+)=\frac{I_o(+)-D}{F(+)-D},$$ (1) $$I_c(-)=\frac{I_o(-)-D}{F(-)-D},$$ (2) where $`I_o`$ represents an observed image and $`I_c`$ the image corrected for dark frame and flat field. We found that there is little systematic difference between $`F(+)`$ and $`F(-)`$. In the Stokes V/I observation, a final image corrected for dark frame and flat field can be expressed as $$\frac{V}{I}=\frac{I_c(+)-I_c(-)}{I_c(+)+I_c(-)}=\frac{I_o(+)-I_o(-)}{I_o(+)+I_o(-)-2D},$$ (3) where we assume that $`F(+)=F(-)`$. A short numerical sketch of this correction is given below. 
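As a concrete illustration of Eqs. (1)–(3), the following minimal sketch applies the correction to a pair of opposite-polarization frames; the array names and the synthetic numbers are mine, not values from the paper:

```python
import numpy as np

def corrected_frame(I_obs, dark, flat):
    # Dark-frame and flat-field correction of a single frame, Eqs. (1)-(2)
    return (I_obs - dark) / (flat - dark)

def stokes_v_over_i(I_plus, I_minus, dark):
    # Stokes V/I of Eq. (3); assumes F(+) = F(-), so the flat field cancels
    return (I_plus - I_minus) / (I_plus + I_minus - 2.0 * dark)

# Synthetic 2x2 frames standing in for right/left circular polarization images:
I_plus  = np.array([[110.0, 120.0], [130.0, 140.0]])
I_minus = np.array([[100.0, 118.0], [128.0, 141.0]])
dark    = np.full((2, 2), 10.0)
print(stokes_v_over_i(I_plus, I_minus, dark))
```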
The same argument also applies to the Stokes Q/I and U/I observations. These facts imply that the flat field correction can be neglected in magnetic field observations. ### (b) Alignment of Q, U, V Images Due to atmospheric seeing and tracking instability, observed images can be shifted during the filter turret transitions between the Q, U, and V observations. When we make an observation of a set of Stokes data, three FITS files for Q, U and V are obtained, in which $`I(+)`$ and $`I(-)`$ are separately saved. Here $`I(+)`$ and $`I(-)`$ correspond to a filter-convolved monochromatic intensity of right- and left-hand polarization, respectively. For convenience we define three intensity images as follows: $`I_q=I_q(+)+I_q(-)`$ in the Q data, $`I_u=I_u(+)+I_u(-)`$ in the U data, and $`I_v=I_v(+)+I_v(-)`$ in the V data. We usually align these intensity images by shifting the $`I_q`$ and $`I_u`$ images to match the $`I_v`$ image as follows. (1) Two sets of coordinates for the same largest sunspot in both a target image ($`I_u`$ or $`I_q`$) and a reference image ($`I_v`$) are found by the center-of-gravity method (Ichimoto 1993). (2) An appropriately sized window around the coordinates is determined. (3) The target image is shifted so as to have a maximum correlation with the reference image within the selected window. ### (c) Calibration Moon et al. (1999b) have already discussed the calibration problems of filter-based magnetograms, especially focusing on the Fe I 6302.5 spectral line for the SOFT. In applying our method to the actual analysis the following facts should be kept in mind. In many cases, magnetic field strengths derived from filter-based magnetographs have been underestimated relative to theoretically predicted or spectrally determined ones. To compensate for this problem, an arbitrary factor, the so-called k-factor, has been introduced to raise the observed polarization signals so that they match the field strength estimated from non-filter-based magnetic observations (Gary et al. 1987, Chae 1996). According to Chae (1996), stray-light corrected fields still require a k-factor to match empirically determined ones. The underestimation of magnetic fields might be due to the stray-light effect (Chae et al. 1998a, 1998b), instrumental depolarization (Gary 1991), the fragmentary distribution of the magnetic field on the solar surface (Ichimoto 1993), transmission wavelength errors, etc. Properly correcting all of the problems related to this matter seems too challenging. For the calibration of BOAO magnetograms, we suggest selecting one of two methods. The first is the calibration method of Mitaka Solar Observatory (Ichimoto 1993, 1997, Sakurai et al. 1995), which is applicable to BOAO vector magnetograms since the two observational systems are very similar to each other. Here we summarize Mitaka’s method as follows. (1) Observed polarization signals are converted to magnetic field strengths by the method described in Ichimoto (1993). (2) A k-factor is multiplied into the longitudinal fields to balance the observed transverse fields against the corresponding potential fields derived from the observed longitudinal fields (Sakurai et al. 1995). First, potential magnetic fields are derived by using a Fourier expansion method (Sakurai 1992) in which observed longitudinal fields are used as the boundary condition. Then they select data points on which the observed and the computed transverse fields have nearly the same directions, and compute the average ratio of the observed and computed transverse field strengths. 
Finally, this ratio (the k-factor) is multiplied into the observed longitudinal fields to produce re-scaled longitudinal ones. In addition, we re-scale the calibrated vector fields to balance the maximum of the longitudinal fields against that of the corresponding fields from a full-disk longitudinal magnetogram of Kitt Peak Solar Observatory, which has provided unique daily full-disk longitudinal magnetograms from the NSO/KP Vacuum Telescope together with a 10.7 m vertical Littrow spectrograph (Jones et al. 1992). Using sunspot models as well as observational data of active regions, we have tested the validity of the second step, namely that the transverse fields balance with the corresponding potential fields derived from the longitudinal fields. First, we computed k-factors for three sunspot models which describe well the observed field configuration at the photosphere; the computed values are found to be 1.12 for Skumanich (1992)’s dipole model, 1.09 for Yun (1970)’s sunspot model, and 1.05 for Moon et al. (1998)’s sunspot model. We have also computed k-factors for 37 vector magnetograms of four flare-productive active regions (AR 5747, AR 6233, AR 6659, and AR 6982) observed at Mees Solar Observatory, whose Stokes polarimeter is a well-qualified spectrometer-based magnetograph. It is found that the computed values for 28 magnetograms (about 76%) out of 37 are approximately unity within 10% accuracy. These results imply that the second step can be applied to representative active regions. The second calibration method is to employ the iterative calibration method (Moon et al. 1999b), which only works as long as both circular and linear polarization signals are reasonably estimated. Unfortunately, the underestimation of circular polarization signals is larger than that of linear polarization ones. Thus we have devised an iterative calibration procedure as follows. (1) We follow steps (1) and (2) of the first calibration method and multiply the derived k-factor into the circular polarization signals. (2) We convert both the re-scaled circular and the original linear polarization signals to vector fields by our calibration method described in Moon et al. (1999b). (3) We re-scale the longitudinal fields by step (2) of the first method. (4) We iterate steps (2) and (3) until the k-factor (re-scaling factor) converges to unity within 5% accuracy. (5) If necessary, we re-scale the calibrated vector fields again to balance the maximum of the longitudinal fields against that of the corresponding fields from a full-disk longitudinal magnetogram of Kitt Peak Solar Observatory.
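The k-factor at the heart of both methods is just an average ratio over well-aligned pixels. The following sketch (my own illustration with synthetic fields; in the real pipeline the potential transverse field would come from the Sakurai (1992) Fourier solver) shows the computation; in the iterative scheme this ratio is recomputed after each re-scaling until it converges to unity within 5%:

```python
import numpy as np

def k_factor(Bt_obs, Bt_pot, angle_tol_deg=15.0):
    """Average |B_t,obs| / |B_t,pot| over pixels where the observed and
    potential transverse fields point in nearly the same direction."""
    norm_obs = np.linalg.norm(Bt_obs, axis=0)
    norm_pot = np.linalg.norm(Bt_pot, axis=0)
    cosang = np.sum(Bt_obs * Bt_pot, axis=0) / (norm_obs * norm_pot)
    mask = cosang > np.cos(np.radians(angle_tol_deg))
    return np.mean(norm_obs[mask] / norm_pot[mask])

# Synthetic check: observed transverse field = 0.8 x potential field plus
# small noise, so the recovered ratio should be close to 0.8; its inverse
# would then be multiplied into the fields before iterating.
rng = np.random.default_rng(0)
Bt_pot = rng.normal(size=(2, 64, 64))
Bt_obs = 0.8 * Bt_pot + 0.05 * rng.normal(size=(2, 64, 64))
print(k_factor(Bt_obs, Bt_pot))
```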
### (d) Solutions for the $`180^o`$ ambiguity When one analyzes vector magnetograms from solar magnetograph measurements, one of the challenging problems is to solve the $`180^o`$ ambiguity in the azimuth of the observed transverse fields. This ambiguity arises because two anti-parallel polarization signals of transverse fields are indistinguishable from each other, since the transverse measurements of the magnetograph provide only the plane of linear polarization. For all vector magnetograms the ambiguity should be resolved to obtain the correct transverse field components. The great importance of the problem is due to the fact that a reasonable resolution of the ambiguity gives us a meaningful understanding of physical quantities such as the vertical current density, the shear angle, and the magnetic free energy. To resolve the ambiguity, an additional constraint on the field azimuth should be introduced on theoretical or observational grounds. One of the commonly used approaches is the potential field method, based on the fact that an observed transverse magnetic field is not far from the corresponding potential component. That is, the direction of the transverse field is chosen such that the two transverse components make an acute angle. This method holds for nearly potential regions but not for highly non-potential regions. For our study we adopt two ambiguity-resolution methods: a potential field method (Sakurai 1992) and a multi-step method (Canfield et al. 1993). Comprehensive reviews of the problem are found in the literature (e.g., Sakurai 1989, Wang 1993, Gary and Demoulin 1995). #### i) Potential Field Method For the case of a potential field, the magnetic field can be derived from a scalar potential $`\mathrm{\Phi }`$, $$𝐁=-\nabla \mathrm{\Phi }$$ (4) Using $`\nabla 𝐁=0`$, the potential should satisfy Laplace’s equation: $$\nabla ^2\mathrm{\Phi }=0,$$ (5) where the observable quantity $`B_z`$ is used as the boundary condition, $`B_z=-\partial \mathrm{\Phi }/\partial z`$. The potential field solution was initially suggested by Schmidt (1964) with the use of the Green function method. Later, the Fourier expansion method was employed by Teuber et al. (1977) and Sakurai (1989). For our study, we have used a Fourier expansion method developed by Sakurai (1992). The criterion of the method is given by $$𝐁_{\mathrm{𝐨𝐭}}𝐁_{\mathrm{𝐩𝐭}}>0,$$ (6) where $`𝐁_{\mathrm{𝐨𝐭}}`$ is the observed transverse field and $`𝐁_{\mathrm{𝐩𝐭}}`$ is the tangential component of the potential field solution derived from Equation (5). #### ii) Multi-step Method Canfield et al. (1993) employed a multi-step method for the $`180^o`$ ambiguity resolution, which is well described in the Appendix of their paper. Here we summarize their method as follows. Step 1: They first choose the orientation of each transverse field vector which is closest to the potential field computed using the longitudinal fields as a boundary condition. Then they rotate the data to the heliographic coordinate system. Step 2: For current-carrying active regions, after resolving with the potential field, they compute the linear force-free field with a linear force-free coefficient $`\alpha `$ selected to match the nonpotentialities discovered in Step 1. Step 3: They next choose the orientation of the transverse field which minimizes the angle between neighboring field vectors by maximizing the sum of the vector dot products of the field vector with each of its eight neighbors. Step 4: In regions with strong total magnetic field strength ($`\gtrsim 1000`$ G) and a high degree of shear (transverse field azimuth differing from the potential azimuth by more than $`85^o`$), they iteratively select the orientation of the field which minimizes the field divergence $`|\nabla 𝐁|`$. Step 5: Finally, in regions where the total field strength is below the noise level in the magnetograms, they iteratively choose the orientation of the field which minimizes the electric current.
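To make the criterion of Eq. (6) concrete, here is a minimal sketch of an FFT-based potential extrapolation and the acute-angle test. The flat, periodic-boundary Fourier solution below is an illustration in the spirit of Sakurai (1992), not his actual code, and the function names are mine:

```python
import numpy as np

def potential_transverse(Bz):
    """Transverse potential field (Bx, By) at z=0 from observed Bz.

    Solves Laplace's equation with Phi ~ exp(i(kx x + ky y) - |k| z),
    so that B = -grad(Phi) and Bz = -dPhi/dz matches the boundary data.
    """
    ny, nx = Bz.shape
    KX, KY = np.meshgrid(2 * np.pi * np.fft.fftfreq(nx),
                         2 * np.pi * np.fft.fftfreq(ny))
    K = np.hypot(KX, KY)
    K[0, 0] = 1.0                  # avoid dividing the mean mode by zero
    Phi0 = np.fft.fft2(Bz) / K     # from Bz = |k| Phi at z = 0
    Bx = np.real(np.fft.ifft2(-1j * KX * Phi0))
    By = np.real(np.fft.ifft2(-1j * KY * Phi0))
    return Bx, By

def resolve_180(Btx, Bty, Bz):
    # Acute-angle criterion of Eq. (6): flip the observed transverse
    # vector wherever B_ot . B_pt < 0.
    Bpx, Bpy = potential_transverse(Bz)
    flip = (Btx * Bpx + Bty * Bpy) < 0
    return np.where(flip, -Btx, Btx), np.where(flip, -Bty, Bty)

# Tiny synthetic demo: a bipolar Bz patch
y, x = np.mgrid[0:64, 0:64]
Bz = np.exp(-((x - 24)**2 + (y - 32)**2) / 40.0) \
   - np.exp(-((x - 40)**2 + (y - 32)**2) / 40.0)
Bx, By = potential_transverse(Bz)
```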
## III COMPARISON WITH OTHER MAGNETOGRAMS We have compared the vector fields of AR 8422 with the corresponding magnetogram of the Mitaka Solar Telescope (Sakurai et al. 1995), which has produced qualified vector magnetograms since 1992. The comparison is all the more meaningful in that the instruments and detecting system of the Mitaka Solar Observatory are very similar to those of the SOFT in BOAO. For the comparison we adopted the first calibration method, i.e., the standard reduction procedure of Mitaka Observatory (Ichimoto 1993, 1997), and the multi-step method (Canfield et al. 1993) for the $`180^o`$ ambiguity resolution. The field of view ($`400^{\prime \prime }\times 300^{\prime \prime }`$) of the VMG in the SOFT was originally determined from its optical layout. Since the optical layout has been changed a little, its field of view needed to be redetermined. In the case of Mitaka’s magnetograms, the field of view was determined by using a spot tracker (Ichimoto 1993). We have determined the field of view of the vector magnetogram made with the VMG by comparing it with Mitaka’s corresponding magnetogram as follows. First of all, we made a linear matching of the SOFT magnetogram with Mitaka’s corresponding magnetogram of AR 8422 (S23W38) on Dec. 28, 1998. For this we make a linear mapping of the reference magnetogram over the image coordinate system of the magnetogram under consideration (Chae 1999): $$i=S_xl+x_0,$$ (7) $$j=S_ym+y_0,$$ (8) where $`l`$ and $`m`$ are coordinates of data points in the considered magnetogram in units of pixels, and $`i`$ and $`j`$ are coordinates of the corresponding points in the reference magnetogram. Chae (1999) determined the four parameters by minimizing $$H=\sum _{l,m}(C_{lm}-R_{ij})^2$$ (9) over $`S_x`$, $`S_y`$, $`x_0`$, and $`y_0`$, where $`C_{lm}`$ is the target magnetogram and $`R_{ij}`$ is the re-mapped reference magnetogram. We applied Chae’s method to a longitudinal magnetogram of AR 8422 observed on Dec. 28, 1998 and the corresponding Mitaka magnetogram. We have also developed another method which derives $`i`$, $`j`$, $`l`$, and $`m`$ so as to give a maximum correlation between $`C_{lm}`$ and $`R_{ij}`$. The two methods are in good agreement with each other, as expected. The field of view ($`400^{\prime \prime }\times 300^{\prime \prime }`$) of our observed vector magnetogram has been confirmed, to within about 1%, by finding the maximum correlation between our remapped longitudinal magnetogram and the corresponding Mitaka magnetogram. Figure 1 shows the comparison of BOAO’s longitudinal magnetic fields of AR 8422 and the corresponding magnetic fields made with a similar magnetograph at Mitaka Solar Observatory. As seen in the figure, they are well correlated with each other (r=0.962). Some differences in the strong negative-field regions may be due to seeing conditions, tracking instability, the filter transmission wavelength, the observational time difference, etc. Figure 2 shows the BOAO (upper) and Mitaka (lower) vector magnetic fields of AR 8422 observed on Dec. 28, 1998. The main features of the longitudinal fields are very similar to each other, while those of the transverse fields are a little different, which might be due to the large noise levels (100–200 G) of the transverse fields. In Figure 3 we present a longitudinal magnetogram observed on Dec. 27, 1998 at Kitt Peak Solar Observatory. Even though it was obtained about eight hours earlier than the corresponding SOFT magnetogram (Figure 2a), its main features are very similar to those of the SOFT one. Our study shows that our vector magnetograph works properly. ## IV STUDY OF ACTIVE REGION AR 8419 We have observed the vector fields, together with $`H\alpha `$ and white light, of the flare-producing active region AR 8419 (N27W27), in which an M-class flare (M3.1/1B) occurred on Dec. 28, 1998. According to Solar Geophysical Data, the GOES X-ray flux started to increase at UT 05:45, peaked at UT 05:48, and then ended at UT 05:59 (see Figure 4). At that time, the active region had very complex magnetic polarities ($`\beta \gamma \delta `$ type). Figure 5 shows two vector magnetograms observed before and after the flare. 
The magnetograms were calibrated by the second calibration method of Section II. According to NOAA reports, this region had grown in complexity as well as in sunspot area for three days, with a sizable preceding sunspot (P1 in Figure 5) and following sunspot (N1). The following sunspot has a series of umbrae forming a NE-SW line with surrounding penumbra in a $`\delta `$ configuration (N1 and P2). Figure 6 shows a time series of $`H\alpha `$ filtergrams from the SOFT for about two hours around the peak time of the X-ray flux. In the figures the two dark areas correspond to the preceding (P1) and following (N1) sunspots. At UT 04:02, two small brightening patches were found near the $`\delta `$ sunspot region. At UT 05:47, strong inverse $`S`$-shape (Pevtsov et al. 1997) H$`\alpha `$ patches were noticed; they lasted for about ten minutes, and then several flare ribbons remained. In order to examine the change of the H$`\alpha `$ phenomena in detail, we present in Figure 7 four $`H\alpha `$ filtergrams observed before and after the flare. It is interesting to note that there was a filament eruption with an inverse $`S`$-shape (denoted by F on the 04:02 image), which exactly matches the inversion line of a quadrupolar configuration (P2, P3, N1, N2 in Figure 5). As seen in the 06:07 image, several flare ribbons formed after the eruptive phase of the flare. It is well accepted that two flare ribbons in $`H\alpha `$ emission are a result of a filament eruption along the inversion line of a $`\delta `$-configuration region (Priest 1982, Zirin 1988, Filippov 1997). We suggest that the M-class flare in AR 8419 should be associated with the quadrupolar configuration and its interaction with the newly erupting filament. It is also found that the sunspots in the quadrupolar configuration were nearly in a straight line with the largely sheared inversion line (Figure 5), as is often observed in flare-producing active regions (Demoulin, Hénoux, and Mandrini 1994). Moon et al. (1999a) showed that a large magnetic field discontinuity exists at the separator of such a quadrupolar configuration when the directions of the two bipoles are antiparallel to each other ($`\phi _p=180^o`$ in Table 1 of their paper). Figure 8 shows two longitudinal magnetograms of AR 8419 observed on Dec. 27 and 28, 1998 at Kitt Peak Solar Observatory. Their main features are very similar to those of the SOFT’s corresponding ones in Figure 5. The comparison of Figures 5 and 8a shows that a positive-polarity sunspot (P2) moved westward and collided with the following sunspot (N1) to form a $`\delta `$ configuration, compressing the longitudinal fields near the $`\delta `$ configuration. In Figure 5, steep horizontal gradients of the longitudinal field are noticed near the $`\delta `$ sunspot and estimated to be about 0.2 G/km, as has often been reported in flare-producing active regions (e.g., Patty and Hagyard 1986, Zhang et al. 1994). It is also found that the inversion line with the inverse $`S`$-shape became more twisted after the flare. ## V SUMMARY AND CONCLUSION We have compared our vector fields of AR 8422 with those made with a similar vector magnetograph at Mitaka Solar Observatory. The comparison shows that the longitudinal fields are very similar to each other but the transverse fields are a little different. We have also compared the longitudinal magnetograms of AR 8422 and AR 8419 with Kitt Peak longitudinal magnetograms and confirmed that their main features are very similar to those of the SOFT. 
We have also presented our vector magnetograms and H$`\alpha `$ observations of AR 8419 during its flaring (M3.1/1B) activity. Time-series H$`\alpha `$ observations show a filament eruption following the sheared inversion line of the quadrupolar configuration of sunspots, and an inverse $`S`$-shape brightening patch near the filament. This fact implies that this flare could be associated with the quadrupolar configuration and its interaction with the filament eruption. ACKNOWLEDGEMENTS We wish to thank Dr. Ichimoto and Dr. Sakurai for allowing us to use some of their numerical routines for data analysis. This work has been supported in part by the Basic Research Fund (99-1-500-00 and 99-1-500-21) of Korea Astronomy Observatory and in part by the Korea-US Cooperative Science Program under KOSEF (995-0200-004-2). ### REFERENCES
Canfield, R. C., La Beaujardiere, J.-F., Han, Y., Leka, K. D., McClymont, A. N., Metcalf, T. R., Mickey, D. L., Wuelser, J.-P., & Lites, B. W. 1993, ApJ, 411, 362
Chae, J. 1996, Ph.D. dissertation, Seoul National University
Chae, J., Yun, H. S., Sakurai, T., & Ichimoto, K. 1998a, Sol. Phys., 183, 229
Chae, J., Yun, H. S., Sakurai, T., & Ichimoto, K. 1998b, Sol. Phys., 183, 245
Chae, J. 1999, private communication
Demoulin, P., Hénoux, J. C., & Mandrini, C. H. 1994, A&A, 285, 1023
Filippov, B. P. 1997, A&A, 324, 324
Gary, G. A., Moore, R. L., & Hagyard, M. J. 1987, ApJ, 314, 782
Gary, G. A., Hagyard, M. J., & West, E. A. 1991, in Solar Polarimetry, Proceedings of the Workshop on Solar Polarimetry, ed. L. November, p. 65
Gary, G. A. & Demoulin, P. 1995, ApJ, 445, 982
Hagyard, M. J., Smith, Jr., J. B., Teuber, D., & West, E. A. 1984, Sol. Phys., 91, 115
Hagyard, M. J. & Kineke 1995, Sol. Phys., 158, 11
Ichimoto, K. 1993, in his solar magnetic field computation package
Ichimoto, K. 1997, private communication
Jang, B.-H., Nam, O. W., Park, Y. D., Han, I. W., & Moon, Y.-J. 1998, KAO Technical Report, Development of a four-channel filter controlling system
Jones, H. P., Duvall, Jr., T. R., Harvey, J. W., Mahaffey, C. T., Schwitters, J. D., & Simmons, J. E. 1992, Sol. Phys., 139, 211
Kim, K. S. 1997, Pub. Kor. Astron. Soc., 12, 1
Moon, Y.-J., Park, Y. D., Jang, B. H., Sim, K. J., Yun, H. S., & Kim, J. H. 1996, Pub. Kor. Astron. Soc., 11, 243
Moon, Y.-J., Yoon, S.-Y., Park, Y. D., & Jang, B. H. 1997, Pub. Kor. Astron. Soc., 12, 47
Moon, Y.-J., Yun, H. S., & Park, J.-S. 1998, ApJ, 494, 851
Moon, Y.-J., Yun, H. S., Lee, S. W., Kim, J. H., Choe, G., Park, Y. D., Ai, G., Zhang, H., & Fang, C. 1999a, Sol. Phys., 184
Moon, Y.-J., Park, Y. D., & Yun, H. S. 1999b, JKAS, 32, 63
Moon, Y.-J. 1999c, Ph.D. thesis, Seoul National University
Nam, O. W., Park, Y. D., Moon, Y.-J., & Kim, H. Y. 1997, KAO Technical Report, Development of a new KD\*P control system
Park, Y. D., Moon, Y.-J., Jang, B.-H., & Sim, K. J. 1997, Pub. Kor. Astron. Soc., 12, 35
Patty, S. R. & Hagyard, M. J. 1986, Sol. Phys., 103, 111
Pevtsov, A. A., Canfield, R. C., & McClymont, A. N. 1997, ApJ, 481, 973
Priest, E. R. 1982, in Solar Magnetohydrodynamics (Dordrecht: Reidel)
Sakurai, T. 1989, Space Science Review, 51, 1
Sakurai, T. 1992, in his numerical routines
Sakurai, T., Ichimoto, K., Nishino, Y., Shinoda, K., Noguchi, M., Hiei, E., Li, T., He, F., Mao, W., Lu, H., Ai, G., Zhao, Z., Kawakami, S., & Chae, J. 1995, PASJ, 47, 81
Schmidt, H. U. 1964, in Phys. of Solar Flares, ed. W. N. Hess, NASA SP-50, p. 107
Teuber, D., Tandberg-Hanssen, E., & Hagyard, M. J. 1977, Sol. Phys., 53, 97
Wang, H. 1993, in Magnetic Field and Velocity Fields of Solar Active Regions, eds. H. Zirin, G. Ai, & H. Wang, ASP Conference Series, 46, 323
Skumanich, A. 1992, in Sunspots: Theory and Observations, eds. J. H. Thomas & N. O. Weiss (Dordrecht: Kluwer Academic Publishers), 121
Yun, H. S. 1970, ApJ, 162, 975
Zhang, H., Ai, G., Yan, X., Li, W., & Liu, Y. 1994, ApJ, 423, 828
Zirin, H. 1988, in The Astrophysics of the Sun, Cambridge Univ. Press
# Determination of the magnetization scaling exponent for single crystal La0.8Sr0.2MnO3 by broadband microwave surface impedance measurements ## Abstract Employing a broadband microwave reflection configuration, we have measured the complex surface impedance, $`Z_S(\omega ,T)`$, of single crystal La<sub>0.8</sub>Sr<sub>0.2</sub>MnO<sub>3</sub>, as a function of frequency (0.045–45 GHz) and temperature (250–325 K). Through the dependence of the microwave surface impedance on the magnetic permeability, $`\widehat{\mu }(\omega ,T)`$, we have studied the local magnetic behavior of this material, and have extracted the spontaneous magnetization, $`M_0(T)`$, in zero applied field. The broadband nature of these measurements and the fact that no external field is applied to the material provide a unique opportunity to analyze the critical behavior of the spontaneous magnetization at temperatures very close to the ferromagnetic phase transition. We find a Curie temperature $`T_C=305.5\pm 0.5`$ K and scaling exponent $`\beta =0.45\pm 0.05`$, in agreement with the prediction of mean-field theory. We also discuss other recent determinations of the magnetization critical exponent in this and similar materials and show why our results are more definitive. Since the recent discovery of large negative magnetoresistance in the manganite perovskites La<sub>1-x</sub>A<sub>x</sub>MnO<sub>3</sub> (where A is typically Ca, Sr, or Ba), much attention has been paid to understanding the properties of these materials. In addition to being potentially useful in technological applications, these so-called colossal magnetoresistive (CMR) oxides provide a system in which to study electronic and magnetic correlations and the interplay between magnetism and transport properties. In particular, a series of measurements in recent years have focused on the critical behavior of the magnetization in the vicinity of the ferromagnetic phase transition. These experiments, employing a variety of techniques, have given widely varying values of the magnetization scaling exponent $`\beta `$ ranging from about 0.3 to 0.5. This range encompasses both the long-range interactions of mean-field theory ($`\beta =0.5`$) and the values of $`\beta `$ which result from calculations based on shorter range interactions, such as the Ising and Heisenberg models ($`\beta =0.325`$ and 0.365, respectively). In this paper we present the results of broadband, non-resonant microwave surface impedance measurements, in which we have quantitatively determined the complex surface impedance, $`\widehat{Z}_S=R_S+iX_S`$, of La<sub>0.8</sub>Sr<sub>0.2</sub>MnO<sub>3</sub> single crystals over three decades in frequency, and as a function of temperature. In contrast to conventional magnetization measurements, which typically require the application of an external magnetic field, this technique allows us to extract the temperature dependence of the static spontaneous magnetization in zero applied field. We have found that the spontaneous magnetization is zero above $`T_C`$ and rises continuously below $`T_C`$ in a manner which is well described by the theory of critical phenomena near a second-order phase transition. From these data we are able to determine the scaling exponent $`\beta `$, and find that it is consistent with the value predicted by mean-field theory. 
The single crystals of La<sub>0.8</sub>Sr<sub>0.2</sub>MnO<sub>3</sub> used in this study were grown by the floating-zone technique, and the stoichiometry and structural integrity have been checked by x-ray diffraction and energy dispersive x-ray analysis. Disk-shaped samples were cut from a 4 mm diameter rod, and resistivity, ac susceptibility, and dc magnetization measurements have been reported earlier on samples cut from the same boule. La<sub>0.8</sub>Sr<sub>0.2</sub>MnO<sub>3</sub> has a ferromagnetic phase transition with a Curie temperature $`T_C`$ of approximately 305 K. It is well established that the low-temperature phase is a ferromagnetic metal, while above $`T_C`$ the system is paramagnetic and the resistivity exhibits a negative slope with respect to temperature. The resistivity has a maximum around $`T_p=318`$ K, significantly above $`T_C`$, which is typical in these manganite materials. In order to determine the temperature and frequency dependence of the complex surface impedance we have measured the complex reflection coefficient. We have reported the details of the experimental geometry elsewhere, and will therefore give only a brief overview of the technique here. A phase-locked signal from an HP8510C Vector Network Analyzer (45 MHz–50 GHz) is sent into a coaxial transmission line which is terminated by the sample inside a continuous-flow cryostat. The amplitude and phase of the reflected signal are measured as functions of frequency and temperature, and the complex reflection coefficient, $`\widehat{S}_{11}(\omega ,T)`$, is determined as the ratio of the reflected to incident signals. The complex surface impedance of the terminating material can be calculated from the reflection coefficient as follows: $`\widehat{Z}_S/Z_0=(1+\widehat{S}_{11})/(1-\widehat{S}_{11})`$, where $`Z_0=377\mathrm{\Omega }`$ is the impedance of free space. Due to the phase-sensitive detection capabilities of the network analyzer, it is possible to extract both the surface resistance $`R_S(\omega )`$ and the surface reactance $`X_S(\omega )`$, and the well-defined geometry allows for quantitative evaluation of these material parameters, opening a unique window on dynamical processes within the material.
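The conversion from the measured reflection coefficient to the surface impedance is a one-line calculation; the following sketch (my own illustration, with an arbitrary example value) makes it explicit:

```python
import numpy as np

def surface_impedance(S11, Z0=377.0):
    # Z_S = R_S + i X_S from the complex reflection coefficient:
    # Z_S / Z0 = (1 + S11) / (1 - S11), with Z0 the free-space impedance
    S11 = np.asarray(S11, dtype=complex)
    return Z0 * (1.0 + S11) / (1.0 - S11)

# A good metal reflects almost perfectly, giving a small surface impedance:
print(surface_impedance(-0.999 + 0.001j))   # ~ (0.19 + 0.19j) Ohm
```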
In the presence of a magnetic field the microwave properties of ferromagnetic materials are characterized by two distinct features which result from the dispersion of the complex magnetic permeability $`\widehat{\mu }(\omega )=\mu _1(\omega )-i\mu _2(\omega )`$: ferromagnetic resonance (FMR) and ferromagnetic anti-resonance (FMAR). At the FMR frequency, $`\omega _r`$, the surface resistance, $`R_S(\omega _r)`$, shows a maximum due to a maximum in the imaginary part of the permeability. At the same frequency, the real part of the permeability has a zero-crossing with negative slope, as does the surface reactance, $`X_S(\omega _r)`$. In order to satisfy the condition $`\mu _1(\omega \rightarrow \mathrm{})=1`$, it is necessary that there be another zero-crossing, with positive slope, at a frequency $`\omega _{ar}>\omega _r`$. For a ferromagnetic metal, this zero in $`\mu _1`$ leads to a reduction in $`R_S`$ below the value it would have for a non-magnetic metal with the same resistivity. This suppression of $`R_S`$ in the vicinity of $`\omega _{ar}`$ is commonly referred to as the ferromagnetic anti-resonance. Both $`\omega _r`$ and $`\omega _{ar}`$ depend not only on the externally applied field but also on the local internal magnetization of the material, and measurements of the microwave surface impedance therefore yield information about the magnetization of the material under study. The precise dependence of $`\omega _r`$ and $`\omega _{ar}`$ on the magnetization $`M_0`$ can be determined by starting from the Landau-Lifshitz-Gilbert equation of motion for the magnetization vector in the presence of both a static magnetic field $`H_0`$ and an oscillatory microwave field. The complex dynamic susceptibility of such a system can be written in the following form: $$\widehat{\chi }(\omega )=\frac{\widehat{\mu }(\omega )}{\mu _0}-1=\frac{\omega _M[(\omega _0+i\mathrm{\Gamma })+\omega _M]}{\omega _r^2-\omega ^2+i\mathrm{\Gamma }[2\omega _0+\omega _M]},$$ (1) where $`\omega _M=\gamma \mu _0M_0`$, $`\omega _0=\gamma \mu _0H_0`$, $`\omega _r=\sqrt{\omega _0(\omega _0+\omega _M)}`$, $`\mathrm{\Gamma }=\alpha \omega `$, $`\alpha `$ is the dimensionless Gilbert damping parameter, and $`\gamma `$ is the gyromagnetic ratio for an electron. We have expressed the field and magnetization as frequencies in order to clarify the comparison to our frequency-dependent data. It is clear from Eq. (1) that for small damping the quantity $`\omega _r`$ is the ferromagnetic resonance frequency, and it can be shown that the anti-resonance frequency, the point at which $`\mu _1=0`$, is given by $`\omega _{ar}=\omega _0+\omega _M`$. For simplicity, Eq. (1) has been written for the limiting case of an infinitely thin sample with the static magnetic field applied in the plane of the sample. For a finite-sized sample there are corrections to this form due to demagnetization effects. As discussed above, the microwave reflection measurement which we have employed yields the surface impedance instead of the permeability; however, the two are related as follows: $`\widehat{Z}_S(\omega ,T)=\sqrt{i\omega \widehat{\mu }(\omega ,T)\rho }`$. We have assumed that at microwave frequencies La<sub>0.8</sub>Sr<sub>0.2</sub>MnO<sub>3</sub> is in the Hagen-Rubens limit (i.e. $`\rho _2\ll \rho _1\approx \rho _{\mathrm{dc}}`$), allowing us to insert a frequency-independent value for $`\rho `$. Then we can substitute the expression for the susceptibility from Eq. (1) into this expression in order to model the complete frequency dependence of $`R_S`$ and $`X_S`$. Measurements of the surface impedance as a function of applied magnetic field have shown that this model provides an excellent description of the measured data.
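A schematic numerical implementation of this model, combining Eq. (1) with $`\widehat{Z}_S=\sqrt{i\omega \widehat{\mu }\rho }`$, is sketched below; all parameter values are purely illustrative and are not the fitted values of this paper:

```python
import numpy as np

MU0 = 4e-7 * np.pi        # vacuum permeability [H/m]
GAMMA = 1.76e11           # electron gyromagnetic ratio [rad/(s T)]

def chi(omega, M0, H0, alpha):
    # Dynamic susceptibility of Eq. (1)
    wM, w0 = GAMMA * MU0 * M0, GAMMA * MU0 * H0
    Gam = alpha * omega
    return wM * ((w0 + 1j * Gam) + wM) / (
        w0 * (w0 + wM) - omega**2 + 1j * Gam * (2 * w0 + wM))

def Z_surface(omega, M0, H0, alpha, rho):
    mu = MU0 * (1.0 + chi(omega, M0, H0, alpha))
    return np.sqrt(1j * omega * mu * rho)   # Z_S = R_S + i X_S

# Illustrative numbers: M0 ~ 4.5e5 A/m, small internal field, metallic rho
f = np.linspace(1e9, 45e9, 4000)
Z = Z_surface(2 * np.pi * f, M0=4.5e5, H0=1e3, alpha=0.05, rho=5e-6)
print(f"R_S minimum (FMAR) near {f[np.argmin(Z.real)] / 1e9:.1f} GHz")
# Expected near gamma*mu0*(H0 + M0)/(2 pi) ~ 16 GHz for these numbers.
```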
Figure 1 shows zero-field $`R_S(\omega )`$ and $`X_S(\omega )`$ spectra at various temperatures. The anti-resonance features, minima in $`R_S`$ and steps in $`X_S`$, are clearly visible at all temperatures below $`T_C`$, despite the fact that $`H_0=0`$. Naively we would expect the permeability to be dispersionless in the absence of a static magnetic field, and thus both $`R_S(\omega )`$ and $`X_S(\omega )`$ should have the square-root frequency dependence characteristic of a metal, as is seen in the 309 K spectra, above $`T_C`$. We can understand why this is not so if we consider that even in the absence of an external magnetic field there are local internal fields $`H_i`$ due, for example, to anisotropy and domain structure. Although 180-degree domain walls separating magnetically saturated regions can in principle produce rather large internal fields, our field-dependent measurements of the anti-resonance as $`H_0\rightarrow 0`$ allow us to estimate that $`\mu _0H_i\lesssim 0.02`$ T, probably reflecting a more complicated domain structure. Such small but finite fields can, however, cause precession of the magnetization and thereby dispersion of the permeability. Since $`\omega _{ar}=\gamma \mu _0(H_i+M_0)`$ and $`H_i\ll M_0`$, we see a well-defined anti-resonance feature at a frequency determined predominantly by $`M_0`$, with a width due to inhomogeneities in the internal fields and the intrinsic damping given by $`\alpha `$. Therefore we can extract the temperature dependence of the magnetization, in the absence of an applied field, from the temperature dependence of the anti-resonance frequency. The solid lines in Fig. 1 are fits of the model presented above to the 291.6 K spectra, where we have set $`H_0=0`$ but included a finite damping to account for a distribution of internal fields around $`H_i=0`$. The discrepancy between the model and the data at low frequencies is probably due to the fact that the model does not properly describe this distribution, but it is clear that the model does an excellent job of describing both components of the surface impedance in the vicinity of the ferromagnetic anti-resonance. The surface impedance can also be measured as a function of temperature at fixed frequency, and Fig. 2 shows such data at a few representative frequencies. The anti-resonance is manifested as a minimum in $`R_S(T)`$ coincident with an inflection point in $`X_S(T)`$, and moves to lower temperature as the frequency increases. It is, of course, also possible to extract $`M_0(T)`$ from these data, and therefore we have four sets of spectra from which to determine $`M_0(T)`$. For the $`X_S`$ data, both as a function of frequency and of temperature, we have determined $`\omega _{ar}`$ by finding the peak of the first derivative, $`dX_S/df`$ \[Fig. 1b\] or $`dX_S/dT`$ \[Fig. 2b\]. Similarly, $`\omega _{ar}`$ is determined from the positions of the local minima in $`R_S(f)`$ and $`R_S(T)`$. Figure 3 shows the magnetization curve which we have extracted from these four sets of data. The onset of spontaneous magnetization at $`T_C\approx 305.5`$ K is clearly seen. With these data it is possible to examine the critical behavior of the magnetization in the vicinity of $`T_C`$. The theory of critical phenomena at a second-order phase transition predicts that the magnetization will vary as a power law, $`M_0(T)\propto (T_C-T)^\beta `$, where $`\beta `$ depends on the Hamiltonian describing the interactions among the spins. Therefore, determination of this exponent yields information about the range of the ferromagnetic interactions. But this expression is expected to hold only in the limit $`T\rightarrow T_C^{-}`$, so it is necessary to look at the behavior very close to $`T_C`$, where the slope of $`M_0(T)`$ is very large. One approach is to calculate the following function: $`T^{}(T)=-M_0(dM_0/dT)^{-1}=\beta ^{-1}(T_C-T)`$. Thus $`T^{}`$ is linear in $`T`$, and the values of $`\beta `$ and $`T_C`$ can be determined from the slope and intercept (a sketch of this procedure is given below). For the data shown in Fig. 3 we find that $`T^{}`$ is roughly linear for temperatures above 290 K; below this temperature the slope of $`T^{}(T)`$ increases due to the saturation of the magnetization. It is therefore only reasonable to examine the critical behavior in the temperature region between 290 K and $`T_C`$. 
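The extraction of $`\beta `$ and $`T_C`$ from the linearity of $`T^{}`$ can be sketched as follows; the synthetic data merely stand in for the measured $`M_0(T)`$, and the function names are mine:

```python
import numpy as np

def beta_and_Tc(T, M, fit_min=290.0):
    """Fit T*(T) = -M0 (dM0/dT)^(-1) = (T_C - T)/beta with a straight line."""
    Tstar = -M / np.gradient(M, T)
    sel = (T > fit_min) & (M > 0)
    slope, intercept = np.polyfit(T[sel], Tstar[sel], 1)
    beta = -1.0 / slope            # T* has slope -1/beta ...
    T_C = -intercept / slope       # ... and crosses zero at T = T_C
    return beta, T_C

# Synthetic magnetization with the parameters quoted in the text:
T = np.linspace(250.0, 305.0, 500)
M = np.clip(305.5 - T, 0.0, None) ** 0.45
print(beta_and_Tc(T, M))           # recovers approximately (0.45, 305.5)
```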
If we define the dimensionless quantity $`\epsilon =1-T/T_C`$ then we see that this corresponds to a range of $`\epsilon \approx 0`$–$`0.05`$. By fitting a straight line to this portion of the $`T^{}(T)`$ curve we find that $`T_C=305.5\pm 0.5`$ K and $`\beta =0.45\pm 0.05`$. The solid line in Fig. 3 shows the critical behavior with these parameters, and the inset shows the same data and model between 290 K and $`T_C`$, with some representative error bars included. A previous study of the microwave properties of similar La<sub>0.8</sub>Sr<sub>0.2</sub>MnO<sub>3</sub> single crystals gave values of $`T_C=304\pm 3`$ K and $`\beta =0.34\pm 0.05`$. This value of $`\beta `$ is clearly in disagreement with ours, even though the $`M_0(T)`$ data from the two experiments are in agreement. This apparent contradiction can be understood as follows: in the earlier study, the value of $`\beta `$ was obtained by fitting the magnetization data between 270 and 300 K ($`\epsilon \approx 0.01`$–$`0.11`$). As we have shown above, the use of data below 290 K is not appropriate for an examination of the critical behavior. In addition, our ability to measure the FMAR to lower frequencies allows us to determine the magnetization much closer to $`T_C`$, which is the most important region for an accurate determination of both $`\beta `$ and $`T_C`$. If we fit our data only over the range $`\epsilon \approx 0.01`$–$`0.11`$, then we find $`T_C=303`$ K and $`\beta =0.26`$. If instead we fix $`T_C`$ at 305.5 K and allow only $`\beta `$ to vary, again fitting over the same range, we find $`\beta =0.33`$. So it is clear that the difference between our result and that of the previous measurement arises simply from the temperature range over which the analysis of the critical behavior has been done. Our experimental value of the scaling exponent $`\beta `$ is in agreement with mean-field theory, which predicts $`\beta =0.5`$. Similar results were obtained from microwave measurements on La<sub>0.7</sub>Sr<sub>0.3</sub>MnO<sub>3</sub> single crystals, which gave $`\beta =0.45\pm 0.05`$; their fit range was $`\epsilon \approx 0.01`$–$`0.12`$. And recently reported dc magnetization measurements on polycrystalline La<sub>0.8</sub>Sr<sub>0.2</sub>MnO<sub>3</sub> also gave $`\beta =0.50\pm 0.02`$, using a narrow temperature range of $`\epsilon \approx 0`$–$`0.01`$. However, neutron scattering and dc magnetization measurements on La<sub>0.7</sub>Sr<sub>0.3</sub>MnO<sub>3</sub> single crystals yielded $`\beta =0.295\pm 0.002`$ ($`\epsilon \approx 0`$–$`0.13`$) and $`\beta =0.37\pm 0.04`$ ($`\epsilon \approx 0`$–$`0.03`$), respectively. Recent neutron scattering measurements on single crystals of both La<sub>0.8</sub>Sr<sub>0.2</sub>MnO<sub>3</sub> and La<sub>0.7</sub>Sr<sub>0.3</sub>MnO<sub>3</sub> gave values of $`\beta =0.29\pm 0.01`$ ($`\epsilon \approx 0`$–$`0.18`$) and $`\beta =0.30\pm 0.02`$ ($`\epsilon \approx 0`$–$`0.3`$), respectively. Finally, a measurement of the temperature dependence of the zero-field muon precession frequency in La<sub>0.67</sub>Ca<sub>0.33</sub>MnO<sub>3</sub> gave $`\beta =0.345\pm 0.015`$ ($`\epsilon \approx 0.03`$–$`0.27`$). Although many of these studies yielded values of $`\beta `$ which are significantly lower than what we have found, the explanation for this seems to lie in the fact that the saturation of the magnetization will always lead to a reduced value of $`\beta `$ if the data are fit over too wide a temperature range. 
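This fit-range sensitivity is easy to reproduce with a toy calculation (entirely my own illustration, not an analysis from the paper): generate a magnetization curve with the critical parameters quoted above plus an ad hoc saturation factor, then fit a pure power law over a wide versus a narrow window in $`\epsilon `$:

```python
import numpy as np
from scipy.optimize import curve_fit

T_C, beta_true = 305.5, 0.45

def M_synthetic(T):
    # Critical power law times an ad hoc saturation factor at larger epsilon
    eps = np.clip(1.0 - T / T_C, 0.0, None)
    return eps**beta_true * (1.0 - 0.5 * eps)

def power_law(T, A, beta):
    return A * np.clip(1.0 - T / T_C, 1e-12, None) ** beta

for eps_max in (0.11, 0.02):
    T = np.linspace(T_C * (1.0 - eps_max), T_C - 0.1, 300)
    (A, beta_fit), _ = curve_fit(power_law, T, M_synthetic(T), p0=(1.0, 0.4))
    print(f"fit over eps < {eps_max}: beta = {beta_fit:.2f}")
# The wide window returns a smaller effective beta than the asymptotic
# value, mimicking the trend discussed in the text.
```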
In conclusion, we have extracted the zero-field spontaneous magnetization of single crystal La<sub>0.8</sub>Sr<sub>0.2</sub>MnO<sub>3</sub> from the temperature and frequency dependence of the microwave surface impedance. This magnetization rises continuously below $`T_C`$, as expected for a second-order phase transition. Analysis of these data gives values for the Curie temperature $`T_C=305.5\pm 0.5`$ K and the scaling exponent $`\beta =0.45\pm 0.05`$. Unfortunately, it seems that there is not yet experimental consensus about the scaling behavior of the magnetization in these compounds. It is clear that the value of $`\beta `$ is very sensitive to the fit range, with values ranging from $`\beta =0.3`$ to $`0.5`$ reported in the literature. The technique presented here is unique because it requires no external field and is broadband, both of which allow us to examine the asymptotic critical behavior of the magnetization as $`T\to T_C`$. The result presented here is consistent with the mean-field value (0.5), implying that there are long-range ferromagnetic interactions in this lanthanum manganite. We thank Y. Mukovskii and colleagues for growing the samples used in this study. We also thank S. Bhagat and S. Lofland for providing the samples to us, and for many useful discussions, and R. Greene and C. Lobb for their helpful comments. This work was supported by NSF DMR-9624021, the Maryland/NSF MRSEC (NSF DMR-9632521), and the Maryland Center for Superconductivity Research.
# The Reaction $`D(e,pp)e^{\prime }\pi ^{-}`$ on a Polarized Deuteron at High Proton Momenta ## Abstract The differential cross section and target asymmetry components of the reaction $`D(e,pp)e^{\prime }\pi ^{-}`$ on a polarized deuteron were measured. The kinetic energies of the protons were measured within 55–180 MeV and 46–265 MeV, and the acceptance angles in the lab frame are $`\mathrm{\Theta }_{1,2}=64^{\circ }`$–$`82^{\circ }`$, $`\mathrm{\Delta }\varphi _{1,2}=32^{\circ }`$. A sharp peak of the tensor $`a_{20}`$-component of the target asymmetry is found near the invariant mass of the $`pp\pi `$-system $`M_{pp\pi }`$ = 2300 $`MeV/c^2`$. The performed calculations of the differential yield and the tensor target asymmetry do not reproduce the experimental results. Invited talk presented at the 16th European Conference on Few-Body Problems in Physics, Autrans, France, June 1-6, 1998 The interest in studying $`\pi ^{-}`$-meson production on the deuteron at high polar angles and large momenta of both protons stems from the opportunity to acquire new information on the dynamics of the NN interaction at short internucleon distances. In the region of proton momenta larger than the Fermi momentum, the quasifree mechanism of $`\pi ^{-}`$-meson production appears to be suppressed. The relative contribution of more complex reaction mechanisms grows in this kinematic region, and these reactions require new models to describe the nucleon systems and hadron interactions. It is for these reasons that the previous experiments in Hamburg , Saclay and Bonn chose the search for dibaryon states and the observation of $`(\mathrm{\Delta }\mathrm{\Delta })`$-states as their main subject. Our experiment focused on a region of even higher opening angles and larger values of the invariant mass than before. Also, the use of a polarized deuteron target enabled us to consider a number of polarization observables. The measurements reported here were conducted simultaneously with the experiments performed , which used an internal tensor-polarized deuterium target in the VEPP-3 storage ring at 2 GeV electron energy. The particle-detection system consisted of two identical two-arm detectors registering the protons in coincidence . Each arm of the detector was placed symmetrically around the electron-beam axis at a polar angle of $`75^{\circ }`$ with respect to the beam line. The proton telescope included a drift chamber and thin and thick scintillation counters. Each proton arm detected particles within the range of angles $`\theta =68^{\circ }`$–$`82^{\circ }`$ and $`\mathrm{\Delta }\phi =32^{\circ }`$. The kinetic energy of the protons which deposit all their energy was reconstructed by combining the energy depositions in the detector layers. In these measurements the direction and sign of the target polarization were changed periodically during the data acquisition . The integrated luminosity and the average value of the tensor target polarization were determined from electron-deuteron elastic scattering . The collected data were processed in a few consecutive stages , which resulted in momentum vectors for both protons and reconstructed vertex coordinates for the events. Computation of the pion momentum and photon energy was done under the assumption of a zero electron scattering angle. The selected events were used to determine the yield of the reaction in each detector for the two signs and two directions of the guide magnetic field. 
The components of the experimental target asymmetry are defined as the counting rate combinations : $`a_{11}={\displaystyle \frac{\sum _{i=1,2}(-1)^{\delta _{i1}}\left[N_{1+}^i+N_{2-}^i\right]}{N}},a_{20}={\displaystyle \frac{\sum _{i,j=1,2}(N_{j+}^i-N_{j-}^i)}{N}},`$ $`a_{22}={\displaystyle \frac{\sum _{i,j=1,2}(-1)^{\delta _{ij}}[N_{j-}^i-N_{j+}^i]}{N}},`$ (1) where $`N_{jk}^i`$ is the counting rate in detector system $`i`$ with magnetic guide field index $`j`$ and sign $`k`$ of the deuteron tensor polarization degree $`P_{zz}`$, and $`N`$ is the total counting rate. To obtain the distributions, we used (see ref. ) the connection between the yield of the reaction, summed over $`i`$, $`j`$ and $`k`$, into a 6-D phase space volume of the momenta, $`V_6`$, and the differential cross section of the reaction: $`Y(V_6)={\displaystyle \int _{V_6}}ϵL{\displaystyle \frac{d^6\sigma }{d^3p_1d^3p_2}}d^3p_1d^3p_2,`$ (2) where $`p_1`$ and $`p_2`$ are the momenta of the protons, $`ϵ`$ is the total detection and selection efficiency of the $`pp`$-events, and $`L`$ is the integrated luminosity obtained from the measured elastic $`ed`$-scattering. The dependences of the cross section and the analyzing power components of the reaction on the invariant mass of the $`pp`$-system that we obtained in this experiment were presented in ref. . Here we present the first results as a function of the $`pp\pi ^{-}`$-system mass, $`M_{pp\pi }`$. The differential yield of the reaction is shown in Fig. 1, and the tensor $`a_{20}`$-component of the target asymmetry is shown in Fig. 2. The calculations of the cross section of the investigated process were made within several theoretical models. The cross section of the process initiated by electrons was expressed in terms of the cross section of a reaction induced by virtual photons. We used the Dalitz-Yennie virtual photon spectrum. For the NEWGAM code, the one-nucleon pion photoproduction operator was taken from the phenomenological analysis , and the deuteron wave function was obtained using the Paris N-N potential. We also used the ENIGMA code, which was developed for exclusive pion electroproduction on nuclei . The calculations of the polarization observables and cross section of the reaction were made within the spectator model using the elementary pion photoproduction amplitude discussed in ref. . The Born terms of this amplitude are determined with pseudovector $`\pi N`$-coupling, the $`\mathrm{\Delta }`$-resonance is considered both in the $`s`$\- and the $`u`$-channels, and $`\rho `$\- and $`\omega `$-meson exchange is considered in the $`t`$-channel. This amplitude is useful for studies of the $`\mathrm{\Delta }`$-resonance. In addition, we studied the role of various dynamic effects in the photoproduction of the $`\mathrm{\Delta }(1232)`$-isobar on a polarized deuteron using the relativistic impulse approximation in the nucleon-spectator model . The experimental and some calculated dependences of the differential yield of the reaction on the mass of the $`pp\pi `$-system can be seen in Fig. 1. The solid curve shows the result of the ENIGMA code; the result of the NEWGAM code differs from it only slightly. The dashed line corresponds to the calculation based on the total one-nucleon amplitude of $`\pi `$-meson photoproduction , whereas the dot-dashed line shows the result of the calculation based on the one-nucleon amplitude including the $`\mathrm{\Delta }`$-isobar in the $`s`$-channel only. One can see that the experimental spectrum is peaked at $`M_{pp\pi }`$ = 2300 $`MeV/c^2`$. 
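For illustration, the counting-rate combinations of eq. (1) can be written out directly. The sketch below assumes a simple table of yields $`N_{jk}^i`$ with toy numbers, and should be read together with eq. (1) as reconstructed above; the dictionary layout and values are ours, not the experiment's data format.

```python
# Toy yields N[(i, j, k)]: detector system i, guide-field index j, and sign k
# of the tensor polarization P_zz.  Numbers are illustrative only.
N = {(i, j, k): 1000.0 for i in (1, 2) for j in (1, 2) for k in ('+', '-')}
N[(1, 1, '+')] += 50.0        # inject an artificial asymmetry for the demo

Ntot = sum(N.values())
# (-1)**(i == 1) implements (-1)^{delta_{i1}}; likewise for delta_{ij}.
a11 = sum((-1) ** (i == 1) * (N[(i, 1, '+')] + N[(i, 2, '-')])
          for i in (1, 2)) / Ntot
a20 = sum(N[(i, j, '+')] - N[(i, j, '-')]
          for i in (1, 2) for j in (1, 2)) / Ntot
a22 = sum((-1) ** (i == j) * (N[(i, j, '-')] - N[(i, j, '+')])
          for i in (1, 2) for j in (1, 2)) / Ntot
print(a11, a20, a22)
```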
It is clear from Fig. 1 that the experimental yield of the reaction is much higher than its calculated counterparts. Figure 2 plots the behavior of the tensor target $`a_{20}`$-asymmetry versus the invariant mass of the $`pp\pi ^{-}`$-system. Here one can see a peculiar feature: a sharp rise in the range of masses near $`M_{pp\pi }`$=2300 $`MeV/c^2`$. Note that events from this range correspond to production of the $`\mathrm{\Delta }^0(1232)`$-isobar. This can be seen from the distribution of the invariant mass of the pion and the faster of the two protons, $`M_{p\pi }`$, which exhibits a clean peak at 1232 $`MeV/c^2`$. The calculated values of the $`a_{20}`$-component of the target asymmetry are below 0.6 in the mass region near 2300 $`MeV/c^2`$. Further analysis of the obtained results is in progress. These results allow us to draw two conclusions. The behavior of the differential yield and the $`a_{20}`$-component of the target asymmetry near $`M_{pp\pi }`$=2300 $`MeV/c^2`$ is associated with the excitation of the $`\mathrm{\Delta }^0(1232)`$-isobar on the deuteron. The noticeable difference near $`M_{pp\pi }`$=2300 $`MeV/c^2`$ between the experimental and calculated values of the tensor target $`a_{20}`$-asymmetry and the reaction yield may be related to the excitation of a dibaryon resonance state and its subsequent decay into a proton and a $`\mathrm{\Delta }^0(1232)`$-isobar. This work was supported by the Russian Foundation for Fundamental Research grants No.98-02-17993, No.98-02-17949 and by the INTAS grant No.96-0424.
# Evolution of Black Holes in the Galaxy ## 1 Introduction The fate of massive stellar cores, both in single and binary stars, has many observable consequences, both for what types of compact object may be found in what type of binary, and for the formation rates of all types of compact-object binary. We have discussed various aspects of this problem in previous works, and here give an overview of all these together, applying the same set of principles to all and obtaining a consistent picture of the evolution of massive stars and binaries. The best-known compact-object (i.e., neutron star or black hole) binaries are the binary neutron stars. They are key testing grounds of general relativity, and the usually favored gravity-wave source for LIGO. Until recently the theoretical formation rate of binary neutron stars was at least an order of magnitude higher than the rate arrived at empirically by extrapolation from observed binary neutron stars. Because there are few binary neutron stars, and even fewer dominate the empirical estimates, the latter are frequently revised. The recent doubling of the estimated distance to PSR 1534$`+`$12 has lowered the empirical birth rate significantly, widening the gap. A solution to this discrepancy comes from combining the strange-matter equation of state, which results in a relatively low maximum mass for neutron stars, with hypercritical accretion . In the standard scenario the first neutron star formed spirals into the other star, in a phase of common-envelope evolution. Bethe & Brown argued that when a neutron star spirals into a red giant with a deeply convective envelope, it accretes matter at a very high rate of up to $`\sim 1M_{\odot }`$ yr<sup>-1</sup>. Photons are trapped in the flow and carried adiabatically inwards to the surface of the neutron star . The latter is heated to $`T\sim 1`$ MeV, temperatures at which neutrino emission can carry off the thermal energy. Hence, the Eddington limit of $`\dot{M}_{\mathrm{Edd}}\sim 1.5\times 10^{-8}M_{\odot }`$ yr<sup>-1</sup> does not apply. As a result, the neutron star accretes about a solar mass of material and collapses to a low-mass black hole. Only if the two stars are initially so close in mass that at the time of spiral-in the first supernova has not yet exploded (i.e. the object that spirals in is still a helium star) is a binary neutron star formed. The sum total of binary neutron stars and black-hole, neutron-star binaries is almost the same as what was found for binary neutron stars in previous estimates, but now the binary neutron stars are only a small fraction of the total. The result is that an order of magnitude more black-hole, neutron-star binaries than binary neutron stars are formed. Together with the fact that the black holes are somewhat more massive than neutron stars, this implies that binaries with black holes should play an important part in mergers producing gravitational waves. They may also be good candidates for producing gamma-ray bursts. No low-mass black-hole, neutron-star binaries have been observed. This is due to the fact that the one neutron star in them is unrecycled, hence is observable for only a short time. The rarer binary neutron stars, like PSR 1913$`+`$16, do have a long-lived recycled pulsar, which more than offsets their lower formation rate and makes them dominate the observed population. We do observe high-mass black holes in Cyg X-1 and in soft X-ray transients. In the former, the black hole is of $`\gtrsim 10M_{\odot }`$ . 
The companion O-star is near its Roche Lobe, and its wind is continuously feeding the black hole, which shines through X-ray emission. In addition to Cyg X-1, high-mass black holes are seen in the LMC in LMC X-3 and perhaps LMC X-1. Much more copious are the transient sources, with black holes of mass $`M_{\mathrm{BH}}\sim 7M_{\odot }`$, most of which flare up only occasionally with long quiescent times between flare-ups. Wijers estimated $`\sim 3000`$ of these in the Galaxy. That is, these are the numbers that are presently operative. Remarkable about the transient sources with unevolved donors is that the main sequence star is a K- or M-star, less massive than the Sun. Brown, Lee, & Bethe explain this as follows: higher-mass donors can also participate in the formation of such binaries containing a high-mass black hole, but end up in the evolution further away from the black hole, so that they can pour matter onto the latter only when they evolve into the subgiant or giant stage. Thus, there is a large factor, estimated to be $`\sim 22`$, more of those binaries which will not be seen until the main sequence star evolves . The mechanism describing the evolution of the transient sources required the massive progenitor of the black hole to carry out core helium burning as if it were a single star; i.e., before having its H envelope removed in RLOF by its main sequence companion. An interval of 20–35 $`M_{\odot }`$ ZAMS was estimated for the progenitors of the high-mass black holes. Consequently, this same interval of single stars, not in binaries, would be expected to end up as high-mass black holes. In the formation of these high-mass black holes, most of the helium envelope of the progenitor must drop into the black hole in order to form their high mass, so little matter is returned to the Galaxy. This brings us to the intriguing matter of SN 1987A, which we believe did go into a black hole, but after the supernova explosion, which did return matter to the Galaxy. The progenitor of SN 1987A was known to have ZAMS mass $`\sim 18M_{\odot }`$. This leads us to the interesting concept of low-mass black holes with delayed explosion, which result from the ZAMS mass range 18–20 $`M_{\odot }`$, although the precise interval is uncertain. The delayed explosion mechanism has been elucidated by Prakash et al. . The range of ZAMS masses of single stars in which neutron stars are formed is thus only 10–18 $`M_{\odot }`$. The absence of matter being returned to the Galaxy in the ZAMS mass range 20–35 $`M_{\odot }`$ impacts on nucleosynthesis, especially in the amount of oxygen produced. Bethe & Brown suggested that matter was again returned to the Galaxy by stars in the ZAMS range 35–80 $`M_{\odot }`$. In this case, the progenitor was stripped of its H envelope in an LBV phase, and the naked He star was suggested to evolve to a low-mass black hole, with return of matter to the Galaxy before its formation in a delayed explosion, or to a neutron star. Thus, elements like oxygen were produced in a bimodal distribution of ZAMS masses $`M\lesssim 20M_{\odot }`$ and $`35M_{\odot }\lesssim M\lesssim 80M_{\odot }`$. The Bethe & Brown suggestion was based on naked He stars evolved by Woosley, Langer, & Weaver , who used a too-large wind loss rate for He stars. Wellstein & Langer have evolved naked He stars with lower rates, in which case the final He envelope is somewhat larger. However, the central carbon abundance following core He burning is high, $`\sim 33\%`$. 
With this abundance, the stars will not skip the convective carbon burning stage in their evolution, and according to the arguments of Brown, Lee, & Bethe would still be expected to end up as low-mass compact objects, in which case matter would be returned to the Galaxy. This matter will not, however, be settled until the CO cores evolved with lowered He-star wind loss rates by Wellstein & Langer have been burned further up to the Fe core stage, so the Bethe & Brown bimodal mass region for nucleosynthesis should be viewed as provisional. In sect. 2, we discuss the maximum mass of neutron stars and the processes that determine which range of initial stellar masses gives rise to what compact object, and how mass loss in naked helium stars changes those ranges. Then we describe the Bethe & Brown scenario for the evolution of massive binary stars, and especially their treatment of common-envelope evolution and hypercritical accretion (sect. 3). We then discuss a few specific objects separately, first binary neutron stars (sect. 4), then Cyg X-1 and its ilk (sect. 5) and the black-hole transients (sect. 6). Then we comment briefly on how our results would affect the evolution of low-mass X-ray binaries with neutron stars (sect. 7) and summarize our conclusions (sect. 8). The discussion of Cyg X-3 and the possible implications of neutron-star, black-hole binaries for gravity waves and gamma-ray bursts are in appendices 1–3. ## 2 The Compact Star Thorsson, Prakash, & Lattimer and Brown & Bethe have studied the compact core after the collapse of a supernova, assuming reasonable interactions between hadrons. Initially, the core consists of neutrons, protons and electrons and a few neutrinos. It has been called a proto-neutron star. It is stabilized against gravity by the pressure of the Fermi gases of nucleons and leptons, provided its mass is less than a limiting mass $`M_{\mathrm{PC}}`$ (proto-compact) of $`\sim 1.8M_{\odot }`$. If the assembled core mass is greater than $`M_{\mathrm{PC}}`$ there is no stability and no bounce; the core collapses immediately into a black hole. It is reasonable to take the core mass to be equal to the mass of the Fe core in the pre-supernova, and we shall make this assumption, although small corrections for fallback in the later supernova explosion can be made as in Brown, Weingartner & Wijers . If the center collapses into a black hole, the outer part of the star has no support (other than centrifugal force from angular momentum) and will also collapse. If the mass of the core is less than $`M_{\mathrm{PC}}`$, the electrons will be captured by protons, $`p+e^{-}\to n+\nu `$ (1) and the neutrinos will diffuse out of the core. This process takes of order 10 seconds, as has been shown by the duration of the neutrino signal from SN 1987A. The result is a neutron star, with a small concentration of protons and electrons. The Fermi pressures of the core are chiefly from the nucleons, with a small correction from the electrons. On the other hand the nucleon energy is increased by the symmetry energy; i.e., by the fact that we now have nearly pure neutrons instead of an approximately equal number of neutrons and protons. Thorsson et al. have calculated that the maximum mass of the neutron star $`M_{\mathrm{NS}}`$ is still about $`1.8M_{\odot }`$; i.e., the symmetry energy compensates the loss of the Fermi energy of the leptons. Corrections for thermal pressure are small . 
The important fact is that the ten seconds of neutrino diffusion from the core give ample time for the development of a shock which expels most of the mass of the progenitor star. But this is not the end of the story. The neutrons can convert into protons plus $`K^{-}`$ mesons, $`n\to p+K^{-}.`$ (2) This is short-hand for the more complicated interaction $`N+e^{-}\to N^{\prime }+K^{-}+\nu `$ where $`N`$ is a nucleon. The neutrinos leave the star. The times are sufficiently long that chemical equilibrium is assured. Since the density at the center of the neutron star is very high, the energy of the $`K^{-}`$ is very low, as confirmed by Li, Lee, & Brown using experimental data. By this conversion the nucleons can again become half neutrons and half protons, thereby saving the symmetry energy needed for pure neutron matter. The $`K^{-}`$, which are bosons, will condense, saving the kinetic energy of the electrons they replace. The reaction eq. (2) will be slow, since it is preceded by $`e^{-}\to K^{-}+\nu `$ (3) (with the reaction eq. (2) following) as it becomes energetically advantageous to replace the fermionic electrons by the bosonic $`K^{-}`$’s at higher densities. Initially the neutrino states in the neutron star are filled up to the neutrino chemical potential with trapped neutrinos, and it takes some seconds for them to leave the star. These must leave before new neutrinos can be formed from the process eq. (3). Thorsson et al. have calculated that the maximum mass of a star in which reaction eq. (2) has gone to completion is $`M_{\mathrm{NP}}\simeq 1.5M_{\odot },`$ (4) where the lower suffix NP denotes its nearly equal content of neutrons and protons, although we continue to use the usual name “neutron star”. This is the maximum mass of neutron stars, which is to be compared with the masses determined in binaries. The 19 neutron star masses determined in radio pulsars are consistent with this maximum mass. The core mass $`M_\mathrm{C}`$ formed by the collapse of a supernova must therefore be compared to the two limiting masses, $`M_{\mathrm{PC}}`$ and $`M_{\mathrm{NP}}`$. If $`(\mathrm{I})M_\mathrm{C}>M_{\mathrm{PC}}`$ (5) we get a high-mass black hole. If $`(\mathrm{II})M_{\mathrm{PC}}>M_\mathrm{C}>M_{\mathrm{NP}}`$ (6) we get a low-mass black hole, of mass $`M_\mathrm{C}`$. Only if $`(\mathrm{III})M_\mathrm{C}<M_{\mathrm{NP}}`$ (7) do we get a neutron (more precisely, “nucleon”) star from the SN. Only in this case can we observe a pulsar. In cases (II) and (III) we can see a supernova display. In case (I) we receive only initial neutrinos from electrons captured in the collapse before $`M_C`$ becomes greater than $`M_{\mathrm{PC}}`$, but no light would reach us. (Except perhaps if the new black hole rotates rapidly enough to power an explosion, a mechanism proposed by MacFadyen and Woosley for gamma-ray bursts.) Woosley, Langer, & Weaver evolve massive stars with mass loss. For stars in the ZAMS mass range 20–30 $`M_{\odot }`$, mass loss is relatively unimportant, and since $`M_{\mathrm{PC}}\gtrsim 1.8M_{\odot }`$ for this range, we find from the earlier calculation of Woosley & Weaver that most of the single stars in this range will go into high-mass black holes. Evolution of these stars in binaries is another matter. Timmes, Woosley, & Weaver , Brown, Weingartner, & Wijers , and Wellstein & Langer find that substantially smaller core masses result if the hydrogen envelope is taken off in RLOF so that the helium star is naked when it burns. 
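As an aside, the classification of eqs. (5)–(7) can be summarized in a short sketch; the limiting masses are those quoted above, and the function name is ours.

```python
# Classification of the collapsed core per eqs. (5)-(7), assuming the
# limiting masses quoted in the text: M_PC ~ 1.8 Msun (proto-compact star)
# and M_NP ~ 1.5 Msun (maximum cold "nucleon star" mass).
M_PC = 1.8   # Msun
M_NP = 1.5   # Msun

def fate(M_C: float) -> str:
    """Fate of a collapsing core of mass M_C (in Msun)."""
    if M_C > M_PC:
        return "high-mass black hole (no bounce, no supernova display)"
    if M_C > M_NP:
        return "low-mass black hole (supernova display, delayed collapse)"
    return "neutron star (supernova display, possible pulsar)"

for m in (1.4, 1.6, 2.0):
    print(m, "->", fate(m))
```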
Stars of ZAMS masses 20–35 $`M_{\odot }`$ in such binaries thus evolve into low-mass compact cores, black hole or neutron star. Woosley, Langer, & Weaver used helium-star wind loss rates which were too high by a factor 2–3, but lower wind losses give only slightly larger He cores in the ZAMS mass range 20–35 $`M_{\odot }`$, so our above conclusion is unlikely to be reversed. On the other hand, the fate of single stars in the ZAMS mass range 35–80 $`M_{\odot }`$ is uncertain. In the published Woosley, Langer, & Weaver work with too high a mass loss rate, so much matter is blown away, first in the LBV stage and later in the W.-R. stage, that low-mass compact objects, black hole or neutron star, result . Bethe & Brown attribute this to the fact that convective carbon burning is not skipped in these stars. In this stage a lot of entropy can be removed by $`\nu \overline{\nu }`$ emission, so that a low-entropy, and therefore small, core results. In this range, Wellstein & Langer find central $`{}_{}{}^{12}C`$ abundances of 33–35% following He core burning, more than double the $`\sim 15\%`$ required for convective carbon core burning. Therefore, we believe that this range of stars will still go into low-mass compact objects, even though their final He cores are substantially larger because of the lower, more correct, He-star wind mass loss rates used by Wellstein & Langer . However, this problem cannot be considered as settled until the Wellstein & Langer CO cores are burned up to the Fe core stage. We will therefore not discuss here the evolution of Cyg X-1-like objects, high-mass black holes accompanied by a sufficiently massive giant companion so that they shine continuously in X-rays. It is not clear to us whether LMC X-3, with a high-mass black hole and a B-star companion of roughly equal mass, has a history more like Cyg X-1 or like the transient black-hole binaries which we discuss below. Bethe & Brown took $`\sim 80M_{\odot }`$ as the lower mass limit for high-mass black hole formation in binaries which experience RLOF; i.e., in those for which helium core burning proceeds in a naked helium star. Because of our above discussion, we believe this mass limit may be too high, so that the contributions from high-mass black-hole, neutron-star binaries were, if anything, underestimated in their work. However, we will not know until the CO cores obtained with better He-star mass loss rates are evolved further. ## 3 Evolution of Binary Compact Objects We summarize the Bethe & Brown evolution of binary compact objects, paying special attention to their common envelope evolution. In particular, we shall show that their schematic evolution should be applicable to donors with deeply convective envelopes, whereas for non-convective or semiconvective envelopes, such as are encountered in the evolution of low-mass X-ray binaries, their common envelope evolution would not be expected to apply. We call the star that is initially heavier star A, the other star B. We denote initial masses by subscript $`i`$, so we have masses $`M_{\mathrm{A},i}`$, $`M_{\mathrm{B},i}`$. We denote their ratio by $`q`$; thus $`q=M_{\mathrm{B},i}/M_{\mathrm{A},i}\le 1.`$ (8) Following Portegies Zwart & Yungelson , we assume that $`q`$ is distributed uniformly between 0 and 1. Likewise, we also assume that ln $`a`$ is uniformly distributed, where $`a`$ is the semi-major axis of their orbit. However, we assume different limits for $`a`$ than Portegies Zwart & Yungelson . 
Initially both stars are massive main sequence stars, with radius at least 3 $`R_{\odot }`$, so $`a>6R_{\odot }=4\times 10^6`$ km. At the other end of the scale, we require $`a<4\times 10^9`$ km. We assume that 50% of all stars are binaries with separations in this range (stars in wider binaries would evolve as if they were single). Then the fraction of binaries in a given interval of $`\mathrm{ln}a`$ is $`d\varphi =d(\mathrm{ln}a)/7.`$ (9) We assume that a star needs an initial mass of $`M>M_s=10M_{\odot }`$ (10) in order to go supernova. Therefore, if $`\alpha `$ is the total rate of SNe, the rate of SNe in mass interval $`dM`$ is given by $`d\alpha =\alpha n\left({\displaystyle \frac{M}{10M_{\odot }}}\right)^{-n}{\displaystyle \frac{dM}{M}}`$ (11) where we have used a power-law initial mass function with $`n=1.5`$ (close to the Salpeter value $`n=1.35`$). The birth rate of supernova systems was taken to be $`\alpha =0.02\mathrm{yr}^{-1}`$ (12) in the Galaxy. By a supernova system we mean a single star that goes supernova (i.e., has $`M_{\mathrm{ZAMS}}>10M_{\odot }`$) or a close binary containing at least one such star (close here means within the separation range mentioned above). Bethe & Brown find that if the primary is massive enough to go supernova, then there is an $`\sim 50\%`$ chance for the secondary to also go supernova. This was calculated for a distribution flat in $`q=M_{\mathrm{B},i}/M_{\mathrm{A},i}`$. Therefore, the supernova rate in our notation would be $`1.25\alpha =0.025\mathrm{yr}^{-1}`$. Using the Cordes & Chernoff distribution of kick velocities, $`43\%`$ of the binaries were found to survive the first explosion. Thus, at this stage, we are left with a birth rate of $`R=0.02\times {\displaystyle \frac{1}{2}}\times {\displaystyle \frac{1}{2}}\times 0.43\approx 2\times 10^{-3}\mathrm{per}\mathrm{yr}`$ (13) for the formation of binaries consisting of a neutron star with a companion massive enough to go supernova ($`M>10M_{\odot }`$). The life time of such systems is the companion life time of $`\sim 10^7`$ yr, but star A will be a pulsar for only $`\sim 5\times 10^6`$ yr because it will spin down electromagnetically until it is no longer observable. From these numbers we estimate the number of such systems to be $`\sim 10^4`$ in the Galaxy. Since the pulsar is unrecycled, the expected number should be compared with the detected population of active radio pulsars in the Galaxy, about $`10^3`$. This number should be multiplied by a factor of $`1/2`$ for binarity, a further factor of $`1/2`$ for a binary in which both stars can go supernova, and the 0.43 for survival of the first explosion. This would leave the large number $`\sim 10^2`$ if pulsars with massive companions were as easily detected as single pulsars. In fact, only 2 are observed: PSR 1259−63 with a Be-star companion and PSR 0045−73 with a B-star companion. Stellar winds interfere with the radio pulses from these binaries, obscuring the narrower ones. Doppler shifts also make these difficult to observe. Nevertheless, the factor necessary to reduce their observability is large. We return to the subject later. At this stage we have a $`\sim 1.4M_{\odot }`$ neutron star with an O or B-star companion. We take the latter to have mass $`\sim 15M_{\odot }`$. The giant has a He core containing some $`30\%`$ of its mass, surrounded by an envelope consisting mainly of H. We take the envelope to be deeply convective (this assumption is essential for our later treatment of common envelope evolution with hypercritical accretion;
recent developments with non-convective or semiconvective donors show that the accretion rate is also highly super-Eddington, though still significantly less , and for very massive donors the rate is always highly super-Eddington), so the entropy is constant. The particles, nuclei and electrons, are nonrelativistic and thus have $`\gamma =5/3`$. Therefore, the envelope forms a polytrope of index $`n=3/2`$. Applegate shows that the binding energy of the envelope is $`E\simeq 0.6GM_\mathrm{B}^2R^{-1}`$ (14) where $`R`$ is the outer radius. In this formula the binding energy is decreased $`50\%`$ by the kinetic energy, $`E`$ containing both effects. The major difference of the Bethe & Brown calculations and of case H of Portegies Zwart & Yungelson compared with other work is the use of hypercritical accretion. In a series of papers Chevalier showed that once $`\dot{M}`$ exceeded $`\sim 10^4\dot{M}_{\mathrm{Edd}}`$, the photons were carried inwards in the adiabatic inflow, onto the neutron star. The surface of the latter was heated sufficiently that energy could be carried off by neutrino pairs. Brown reproduced Chevalier’s results in analytical form. The idea has a much longer history: Colgate showed already in 1971 that if neutrinos carry off the bulk of the energy, accretion can proceed at a much greater rate than Eddington. In 1972 Zeldovich et al. , before the introduction of common envelope evolution, used hypercritical accretion of a cloud onto a neutron star. Bisnovatyi-Kogan & Lamzin and Chevalier pointed out that during the common envelope phase of binary evolution, photons would be trapped and accretion could occur at much higher rates, and that neutron stars that go through this phase generally will go into black holes. We begin by considering the work done by the neutron star on the envelope matter that it accretes. This will turn out to be only a fraction of the total work, the rest coming from the production of the wake, but it illustrates our procedure simply. Taking the neutron star to be at rest, the envelope matter is incident on it with the Keplerian velocity $`v`$. The rate of accretion is given by the Bondi-Hoyle-Lyttleton theory $`{\displaystyle \frac{dM_\mathrm{A}}{dt}}=\pi \rho vR_{\mathrm{ac}}^2`$ (15) where $`\rho `$ is the density of the B material, $`v`$ is its velocity relative to the neutron star A, and $`R_{\mathrm{ac}}`$ is the accretion radius $`R_{\mathrm{ac}}=2GM_\mathrm{A}v^{-2}.`$ (16) The rate of change of momentum $`P`$ is $`{\displaystyle \frac{dP}{dt}}=v{\displaystyle \frac{dM_\mathrm{A}}{dt}},`$ (17) the matter being brought to rest on the neutron star, and this is equal to the force $`F`$. Consequently, the rate at which the neutron star does work on the material is $`\dot{E}=Fv=v^2{\displaystyle \frac{dM_\mathrm{A}}{dt}}.`$ (18) Inclusion of the work done in creating the wake involves numerical calculations, with the result that the coefficient on the right-hand side of eq. (18) is changed; i.e., $`\dot{E}=\left({\displaystyle \frac{c_d}{2}}\right)v^2{\displaystyle \frac{dM_\mathrm{A}}{dt}},`$ (19) with $`c_d\simeq 6`$–8 for our supersonic flow. It is, in fact, very important that the wake plays such a large role: it is the fact that $`c_d/2>1`$ (we take $`c_d\simeq 6`$, so $`c_d/2=3`$) that makes our later common envelope evolution strongly nonconservative, the proportion of the total H-envelope mass accreted onto the neutron star being relatively small. In eq. (19) $`v`$ is the velocity of the B (giant) material relative to A, the neutron star. 
This is given by $`v^2=G(M_\mathrm{A}+M_\mathrm{B})a^{-1}.`$ (20) The interaction energy of A and B is $`E={\displaystyle \frac{1}{2}}GM_\mathrm{A}M_\mathrm{B}a^{-1}.`$ (21) Since we know $`M_{\mathrm{B},i}`$ and $`M_{\mathrm{B},f}`$, the initial mass of B and the mass of its He core, our unknown is $`a_f`$. We can obtain it by considering $`Y=M_\mathrm{B}a^{-1}`$ (22) as one variable, $`M_\mathrm{A}`$ as the other. Differentiating eq. (21) we have $`\dot{E}={\displaystyle \frac{1}{2}}G\left(\dot{M}_\mathrm{A}Y+M_\mathrm{A}\dot{Y}\right)`$ (23) whereas combining eqs. (19) and (20) and neglecting $`M_\mathrm{A}`$ with respect to $`M_\mathrm{B}`$, we have $`\dot{E}=G\left({\displaystyle \frac{c_d}{2}}\right)Y\dot{M}_\mathrm{A}.`$ (24) Setting eqs. (23) and (24) equal, we have $`{\displaystyle \frac{\dot{M}_\mathrm{A}}{M_\mathrm{A}}}={\displaystyle \frac{1}{(c_d-1)}}{\displaystyle \frac{\dot{Y}}{Y}},`$ (25) which can be integrated to give $`M_\mathrm{A}\propto Y^{1/(c_d-1)}=Y^{1/5}`$ (26) where we have chosen $`c_d=6`$ . The final energy is then $`E_f={\displaystyle \frac{1}{2}}GM_{\mathrm{A},i}Y_i\left({\displaystyle \frac{Y_f}{Y_i}}\right)^{6/5}.`$ (27) The binding energy $`E_f`$ of star A to star B serves to expel the envelope of star B, whose initial binding energy is given by eq. (14). Mass transfer begins at the Roche Lobe, which lies at $`\sim 0.6a_i`$ for the masses involved. However, star B expands rapidly in the red giant stage before the mass transfer can be completed. To keep the numbers easy to compare with Bethe & Brown , we use their approximation of starting spiral-in when the giant’s radius equals the orbital separation rather than the Roche-lobe radius. Since for the large mass ratios considered here $`R_\mathrm{L}/a\simeq 0.5`$ for the giant, this implies we require $`E_f`$ of eq. (27) to be about twice the binding energy (eq. 14), i.e. $`E_f={\displaystyle \frac{0.6}{\alpha }}G{\displaystyle \frac{M_{\mathrm{B},i}^2}{a_i}}=1.2G{\displaystyle \frac{M_{\mathrm{B},i}^2}{a_i}}.`$ (28) (We set the common-envelope efficiency, $`\alpha `$, to 0.5.) The ejected material of B is, therefore, released with roughly the thermal energy it had in the envelope; in other words, the thermal energy content of the star is not used to help expel it. Inserting eq. (28) into eq. (27) yields $`\left({\displaystyle \frac{Y_f}{Y_i}}\right)^{1.2}=2.4{\displaystyle \frac{M_{\mathrm{B},i}}{M_{\mathrm{A},i}}}.`$ (29) Star A is initially a neutron star, $`M_{\mathrm{A},i}=1.4M_{\odot }`$. For star B we assume $`M_{\mathrm{B},i}=15M_{\odot }`$. Then eq. (29) yields $`{\displaystyle \frac{Y_f}{Y_i}}=15.`$ (30) We use this to find the result of accretion, with the help of eq. (26), $`{\displaystyle \frac{M_{\mathrm{A},f}}{M_{\mathrm{A},i}}}=1.73`$ (31) or $`M_{\mathrm{A},f}=2.4M_{\odot }.`$ (32) This is well above any of the modern limits for neutron star masses, so we find that the neutron star has gone into a black hole. Our conclusion is, then, that in the standard scenario for evolving binary neutron stars, if the giant is deeply convective, accretion in the common envelope phase will convert the neutron star into a black hole. Star B, by losing its envelope, becomes a He star. We estimate that $`{\displaystyle \frac{M_{\mathrm{B},f}}{M_{\mathrm{B},i}}}\simeq 0.3.`$ (33) The size of the orbit is determined by eq. 
(22), $`{\displaystyle \frac{a_i}{a_f}}={\displaystyle \frac{M_{\mathrm{B},i}}{M_{\mathrm{B},f}}}{\displaystyle \frac{Y_f}{Y_i}}=50.`$ (34) The final distance between the stars $`a_f`$ should not be less than about $`10^{11}`$ cm, so that the He star (mass $`M_{\mathrm{B},f}`$) fits within its Roche lobe next to the black hole of mass $`M_{\mathrm{A},f}`$. Bethe & Brown showed that if the black hole and the neutron star resulting from the explosion of star B are to merge in a Hubble time, then $`a_f<3.8\times 10^{11}`$ cm (for circular orbits; a correction for eccentricity will be given later). Therefore the initial distance of the two stars, after the first mass exchange and the first supernova, should be $`0.5\times 10^{13}\mathrm{cm}<a_i<1.9\times 10^{13}\mathrm{cm}`$ (35) If the initial distribution of distances is $`da/7a`$, the probability of finding $`a`$ between the limits of eq. (35) is $`P=18\%.`$ (36) As noted earlier, 43% of the binaries survive the first explosion, so the combined probability is now $`P=8\%`$ (37) for the survivors falling in the logarithmic interval in which they survive coalescence, but are narrow enough to merge in a Hubble time. Our final result, following from a birth rate of $`\sim 10^{-2}`$ binaries per year in which one star goes supernova, half of which have both stars going supernova, is $`R=10^{-2}\times 0.5\times 0.08\times 0.5=2\times 10^{-4}\mathrm{yr}^{-1}`$ (38) in the Galaxy. The final factor of 0.5 is the survival rate of the He-star, neutron-star binary, calculated by Monte Carlo methods. Bethe & Brown quoted $`10^{-4}`$ yr<sup>-1</sup>, or half of this rate, in order to take into account some effects not considered by them in which the binary disappeared (e.g., Portegies Zwart and Verbunt ). Our final rate is, then, $`R=10^{-4}\mathrm{yr}^{-1}\mathrm{galaxy}^{-1}.`$ (39) Using our supernova rate of $`0.025`$ per year, which includes the case where both stars in the binary go supernova, we can convert this birth rate to $`0.004`$ per supernova for comparison with other work. Portegies Zwart & Yungelson in their case H, which included hypercritical accretion, got 0.0036 per supernova, within 10% of our value. Thus, the chief difference between our result in eq. (39) and the $`R=5.3\times 10^{-5}`$ of these authors is due to the different assumed SN rate. In our above estimates we have assumed the second neutron star to be formed with a circular orbit of the same $`a`$ as its He-star progenitor. However, eccentricity in its orbit leads to a value of $`a_f`$ substantially larger than the $`3.8\times 10^{11}`$ cm used above as the maximum separation for merger. In general, most of the final binaries will have $`e>0.5`$, with a heavy peak in the distribution close to $`e=1`$. The rise occurs because preservation of the binary in the explosion is substantially greater if the kick velocity is opposite to the orbital velocity before explosion. In this case the eccentricity $`e`$ is large. The most favorable situation is when the orbital and kick velocities are equal in magnitude. (See the figures in Wettig and Brown .) Eggleton has kindly furnished us with a useful interpolation formula for the increase. The factor by which to multiply the time for merger in circular orbits is $`Z(e)\simeq (1-e^2)^{3.689-0.243e-0.058e^2}.`$ (40) This formula is accurate to about 1% for $`e\le 0.99`$. Thus, if the initial eccentricity is 0.7, the time to shrink the orbit to zero is about 10% of the time required if the initial eccentricity were zero for the same initial period. 
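For convenience, eq. (40) is trivial to evaluate; the following sketch reproduces the $`\sim 10\%`$ figure for $`e=0.7`$ quoted above.

```python
# Eggleton's interpolation formula, eq. (40): the factor by which the
# circular-orbit merger time is multiplied for eccentricity e.
def Z(e):
    return (1.0 - e**2) ** (3.689 - 0.243 * e - 0.058 * e**2)

for e in (0.0, 0.5, 0.7, 0.9):
    print(f"e = {e:.1f}: merger time is {Z(e):.3f} of the circular value")
# e = 0.7 gives ~0.10, i.e. the ~10% quoted in the text; the maximum a_f
# for merger then grows by Z(e)**(-1/4), since the merger time scales as a^4.
```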
The maximum $`a_f=3.8\times 10^{11}`$ cm for circular orbits would be increased by the fourth root of the decrease in time; i.e., up to $`6.8\times 10^{11}`$ cm for this eccentricity. The maximum $`a_i`$ in eq. (35) would go up to $`3.4\times 10^{13}`$ cm, increasing the favorable logarithmic interval by $`\sim 40\%`$. We have not introduced this correction because it is of the same general size as the uncertainty in the supernova rate. However, this correction gives us some comfort that our final numbers are not unreasonably large. If we produce an order of magnitude more low-mass black-hole, neutron-star binaries than binary neutron stars, the obvious question is why we have not seen any. The neutron star in such an object is “fresh” (unrecycled), so it would spin down into the graveyard of neutron stars in $`\sim 5\times 10^6`$ yr. The two relativistic binary pulsars we do see, 1913$`+`$16 and 1534$`+`$12, have been recycled, have magnetic fields $`B\sim 10^{10}`$ G, two orders of magnitude less than a fresh pulsar, and will therefore be seen for about 100 times longer than an unrecycled neutron star. So even with a ten times higher birth rate, we should see ten times fewer LBH-NS binaries than NS-NS binaries. Furthermore, the binary with a black hole will have a somewhat higher mass, therefore a greater Doppler shift, and therefore be harder to detect. In view of the above, it is reasonable that our low-mass black-hole, neutron-star binaries have not been observed, but they should be actively looked for. We should also calculate the rate of coalescences of the black hole with the He star. These have been suggested by Fryer & Woosley as candidate progenitors for the long-lasting gamma-ray bursters. Note that they will occur for a range of $`0.04\times 10^{13}\mathrm{cm}<a_i<0.5\times 10^{13}`$ cm, a logarithmic interval double that of eq. (35). Thus, the black-hole, He-star coalescence has a probability $`P=36\%.`$ (41) Furthermore, this situation does not have the 50% disruption in the final explosion, so the black-hole, He-star coalescences occur with a total rate of 4 times that of the black-hole, neutron-star mergers. There has been much discussion in the literature of the difficulties in common envelope evolution. We believe our model of deeply convective giants and hypercritical accretion offers an ideal case. Of course, the initiation of the common envelope evolution requires some attention, but it can be modeled in a realistic way . As the giant evolves across its Roche lobe, the compact object creates a tidal bulge in the giant envelope, which follows the compact object, torquing it in. As the convective giant loses mass, the envelope expands in order to keep the entropy constant. In Bondi-Hoyle-Lyttleton accretion, a density $`\rho _{\infty }\sim 10^{-13}\mathrm{g}\mathrm{cm}^{-3}`$ is sufficient, with wind velocities $`\sim 1000`$ km s<sup>-1</sup>, to give accretion at the Eddington rate. Thus to achieve $`\dot{M}\sim 10^8\dot{M}_{\mathrm{Edd}}\sim 1M_{\odot }\mathrm{yr}^{-1}`$ we need $`\rho \sim 10^{-5}`$ g cm<sup>-3</sup>, which is found at $`\sim 0.9R`$, where $`R`$ is the radius of the giant. At this rate of accretion, angular momentum, etc., are hardly able to impede it appreciably. The total mass accreted onto the compact object is $`\sim 1M_{\odot }`$, so the common envelope evolution has a dynamical time of order years. As noted earlier, it is non-conservative. 
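To recapitulate the spiral-in estimate of this section, the chain of eqs. (29), (26) and (34) can be checked numerically; the sketch below uses only the inputs stated in the text.

```python
# Spiral-in estimate of section 3, using the text's inputs: c_d = 6,
# M_A,i = 1.4 Msun (neutron star), M_B,i = 15 Msun (giant), and
# M_B,f = 0.3 M_B,i after envelope ejection (eq. 33).
c_d = 6.0
M_Ai, M_Bi = 1.4, 15.0
M_Bf = 0.3 * M_Bi

# eq. (29): (Y_f/Y_i)^1.2 = 2.4 M_B,i / M_A,i
Y_ratio = (2.4 * M_Bi / M_Ai) ** (1.0 / 1.2)
# eq. (26): M_A grows as Y^(1/(c_d - 1)) = Y^(1/5)
M_Af = M_Ai * Y_ratio ** (1.0 / (c_d - 1.0))
# eq. (34): a_i/a_f = (M_B,i/M_B,f)(Y_f/Y_i)
a_ratio = (M_Bi / M_Bf) * Y_ratio

print(f"Y_f/Y_i = {Y_ratio:.1f}")    # ~15, eq. (30)
print(f"M_A,f   = {M_Af:.2f} Msun")  # ~2.4: the neutron star becomes a BH
print(f"a_i/a_f = {a_ratio:.0f}")    # ~50, eq. (34)
```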
## 4 Evolution of Binary Neutron Stars Since the standard scenario of evolution of binary compact objects ends up with low-mass black-hole, neutron-star binaries, another way must be found to evolve binary neutron stars. In the double He-star scenario, suggested by Brown and developed further by Wettig & Brown , the neutron star avoids going through common envelope evolution with a companion star. In this way the neutron star can avoid being converted into a black hole by accretion. For two giants to burn He at the same time, they must be within $`\sim 5\%`$ of each other in mass, the helium burning time being $`\sim 10\%`$ of the main sequence life time, and the stellar evolution time going roughly with the inverse square of the mass. With a flat mass ratio distribution, this happens in 5% of all cases, making the ratio of NS-NS to NS-LBH binaries 1:20. However, when the primary becomes an LBH, only half the secondaries will be massive enough to form a NS, whereas for the very close mass values of the double-He scenario this factor 2 loss does not occur. Thus, binary neutron stars should be formed $`\sim 10\%`$ as often as low-mass black-hole, neutron-star binaries. This $`\sim 10\%`$ is nearly model independent because everything else roughly scales. The scenario goes as in Wettig & Brown . The primary O-star evolves, transferring its H-envelope to the companion. Often, this would lead to ‘rejuvenation’ of the secondary, i.e. its evolution would restart from the ZAMS with the now higher total mass, and it would make a much heavier core. However, here the core of the secondary has evolved almost as far as the primary’s core, so the core molecular weight is much higher than that of the envelope. This prevents convection in the core from extending into the new envelope to make a bigger core, so no rejuvenation takes place. Since $`q\simeq 1`$, the first mass transfer is nearly conservative. The second is not, so the two He cores then share a common H envelope, which they expel, while dropping to a lower final separation $`a_f`$. Following the explosion of the first He star, the companion He-star pours wind matter onto the pulsar, bringing the magnetic field down and spinning it up . The end result is two neutron stars of very nearly equal mass, although wind accretion can change the mass by two or three percent. The above scenario ends for He-star masses greater than 4 or $`5M_{\odot }`$, corresponding to ZAMS masses greater than $`\sim 16`$ or $`18M_{\odot }`$. However, less massive He stars evolve in the He shell-burning stage, and a further mass transfer (Case C) can take place. The transfer of He to the pulsar can again bring about a black hole, which Brown very roughly estimates to occur in $`\sim 50\%`$ of the double neutron star binaries. This is roughly consistent with results of Fryer & Kalogera . Taking a rate of $`R=10^{-4}`$ per year per galaxy for the low-mass black-hole, neutron-star binaries, we thus arrive at a birth rate of $`R\simeq 5\times 10^{-6}\mathrm{per}\mathrm{year}\mathrm{per}\mathrm{galaxy}`$ (42) for binary neutron-star formation. However, the black holes formed in the He shell burning evolution will not have accreted much mass and will have about the same chirp mass as binary neutron stars (see below) for gravitational merging. Our best-guess values, eqs. (39) and (42), thus give a $`\sim 20`$ to 1 ratio for the formation of low-mass black-hole, neutron-star binaries to binary neutron stars. 
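The rate arithmetic of this section is summarized in the following hedged sketch; all inputs are the round numbers quoted above.

```python
# Section-4 rate chain: the double He-star channel forms NS-NS binaries at
# ~10% of the LBH-NS rate, and roughly half of those are lost when Case C
# transfer of He converts the first neutron star into a black hole.
R_lbh_ns = 1e-4            # eq. (39), per year per galaxy
f_double_he = 0.10         # NS-NS relative to LBH-NS formation
f_survive_case_c = 0.5     # fraction not converted to a BH in Case C

R_ns_ns = R_lbh_ns * f_double_he * f_survive_case_c
print(f"R(NS-NS) ~ {R_ns_ns:.0e} per year per galaxy")   # ~5e-6, eq. (42)
```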
The low-mass black-hole, neutron-star binaries are the better progenitors for gravitational waves from mergers because of their higher masses, and they have many advantages as progenitors of gamma-ray bursters . Note that our estimated rate of $`R=5\times 10^{-6}`$ per galaxy per year for binary neutron star formation is consistent with the empirical rates discussed in our introduction. ## 5 High-Mass Black-Hole O/B-star Binaries We will be brief in our review of these, because we believe the evolution of these objects, such as Cyg X-1, LMC X-1 and LMC X-3, to be less well understood than that of the low-mass black-hole, neutron-star binaries. Evolutionary calculations now proceeding by Alexander Heger, using the CO cores evolved by Wellstein & Langer , should clarify this situation substantially. Bethe & Brown arrived at a limit of ZAMS mass $`\sim 80M_{\odot }`$ for stars in binaries to go into high-mass black holes (unless Case C mass transfer takes place, as we discuss in our next section). This limiting mass is much higher than other workers have used. It was based on calculations of Woosley, Langer, & Weaver and was so high because of the very high mass loss rates used by these authors. With more correct lower rates the limiting mass may come down, so the Bethe & Brown evolution should be viewed as giving a lower limit to the number of high-mass black-hole, O/B-star binaries. Their estimated birth rate of about $`3\times 10^{-5}`$ per galaxy per year does agree reasonably well with the fact that only one such system is known in the Galaxy. However, since even with a twice larger separation the accretion rate of the black hole from the fast wind of the O star becomes small, it is possible that substantially more systems with somewhat wider orbits exist undetected, and that Cyg X-1 is the only one presently in the (very short) phase of incipient Roche lobe overflow when it is bright. Bethe & Brown found this narrowness of the Cyg X-1 orbit ($`17R_{\odot }`$ according to Herrero et al. ) to be puzzling: the massive stars in the progenitor binary initially had to fit within their Roche Lobes; therefore a separation of at least double the current $`17R_{\odot }`$ was needed. Moreover, most evolutionary effects from then on, such as wind mass loss or supernova-like mass loss, would tend to widen the orbit. Of course, the orbit could be narrowed in Case A mass transfer (i.e. during the main sequence) since the progenitor of the black hole was more massive than the present donor, but it could not become so narrow that the present donor filled its Roche lobe, and would widen again once the mass ratio became reversed and widen further due to wind loss after the whole primary envelope was lost. In any case, a binary as narrow as Cyg X-1 would coalesce in the common envelope evolution once the O-star companion of the massive black hole goes into the red giant phase, according to the Bethe & Brown estimates. Since the black hole in Cyg X-1 has mass $`\gtrsim 10M_{\odot }`$ and is probably the most massive black hole in a binary observed in the Galaxy, in the Fryer & Woosley model, where the black hole “eats” the W.-R. companion, such a coalescence should produce the most energetic long-lasting gamma-ray burster. We are unable to evaluate the probability of Cyg X-1-like objects merging following common envelope evolution because we have been unable to understand why Cyg X-1, before common envelope evolution, is so narrow. The LBV, RSG, and WNL stages of W.-R. development are not quantitatively understood. 
After the main sequence star in a Cyg X-1-like object explodes and becomes a neutron star, according to Bethe & Brown the binary will eventually merge. They estimated the contribution to the merger rate of these systems to be (4–6)$`\times 10^{-6}`$ yr<sup>-1</sup> galaxy<sup>-1</sup>, however with considerable uncertainty due to the fact that the evolution of Cyg X-1 itself is uncertain. Lowering the mass limit for black-hole formation by having lower mass loss rates would increase this number (e.g. a limit of 40 $`M_{\odot }`$ would increase the merger rate by a factor 5). ## 6 The Formation of High-Mass Black Holes in Low-Mass X-ray Binaries ### 6.1 General Crucial to our discussion here is the fact that single stars evolve very differently from stars in binaries that lose their H-envelope either on the main sequence (Case A) or in the giant phase (Case B). However, stars that transfer mass or lose mass after core He burning (Case C) evolve, for our purposes, as single stars, because the He core is then exposed too close to its death for wind mass loss to significantly alter its fate. Single stars above a ZAMS mass of about $`20M_{\odot }`$ skip convective carbon burning following core He burning, with the result, as we shall explain, that their Fe cores are substantially more massive than those of stars in binaries, in which the H-envelope has been transferred or lifted off before He core burning. These latter “naked” He stars burn $`{}_{}{}^{12}C`$ convectively, and end up with relatively small Fe cores. The reason that they do this has to do chiefly with the large mass loss rates of the “naked” He cores, which behave like W.-R. stars. Unfortunately, in calculations until recently, substantially too large mass loss rates were used, so we cannot pin limits down quantitatively. In this section we will deal with the ZAMS mass range 20–35 $`M_{\odot }`$, in which it is clear that many, if not most, of the single stars go into high-mass black holes, whereas stars in binaries which burn “naked” He cores go into low-mass compact objects. In this region of ZAMS masses the use of too-high He-star mass loss rates does not cause large effects . The convective carbon burning phase (when it occurs) is extremely important in pre-supernova evolution, because this is the first phase in which a large amount of entropy can be carried off in $`\nu \overline{\nu }`$-pair emission, especially if this phase is of long duration. The reaction in which carbon burns is $`{}_{}{}^{12}C(\alpha ,\gamma )^{16}O`$ (other reactions like $`C+C`$ would require excessive temperatures). The cross section of $`{}_{}{}^{12}C(\alpha ,\gamma )^{16}O`$ is still not accurately determined; the lower this cross section, the higher the temperature of the $`{}_{}{}^{12}C`$ burning, and therefore the more intense the $`\nu \overline{\nu }`$ emission. With the relatively low $`{}_{}{}^{12}C(\alpha ,\gamma )^{16}O`$ rates determined both directly from nuclear reactions and from nucleosynthesis by Weaver & Woosley , the entropy carried off during $`{}_{}{}^{12}C`$ burning in the stars of ZAMS mass 10–20 $`M_{\odot }`$ is substantial. The result is rather low-mass Fe cores for these stars, which can evolve into neutron stars. Note that in the literature earlier than Weaver & Woosley , large $`{}_{}{}^{12}C(\alpha ,\gamma )^{16}O`$ rates were often used, so that the $`{}_{}{}^{12}C`$ was converted into oxygen and the convective burning did not have time to be effective. Thus its role was not widely appreciated. 
Of particular importance is the ZAMS mass at which the convective carbon burning is skipped. In the Woosley & Weaver calculations this occurs at ZAMS mass $`\sim 19M_{\odot }`$, but with a slightly lower $`{}_{}{}^{12}C(\alpha ,\gamma )^{16}O`$ rate it might come at $`20M_{\odot }`$ or higher . As the progenitor mass increases, it follows from general polytropic arguments that the entropy at a given burning stage increases. At the higher entropies of the more massive stars the density at which burning occurs is lower, because the temperature is almost fixed for a given fuel. Lower densities decrease the rate of the triple-$`\alpha `$ process, which produces $`{}_{}{}^{12}C`$, relative to the two-body $`{}_{}{}^{12}C(\alpha ,\gamma )^{16}O`$, which produces oxygen. Therefore, at the higher entropies in the more massive stars the ratio of $`{}_{}{}^{12}C`$ to $`{}_{}{}^{16}O`$ at the end of He burning is lower. The star skips the long convective carbon burning and goes on to the much shorter oxygen burning. Oxygen burning goes via $`{}_{}{}^{16}O+^{16}O`$, giving various products, at a very much higher temperature than $`C(\alpha ,\gamma )`$ and much faster. Since neutrino cooling during the long carbon-burning phase gets rid of a lot of the entropy of the core, skipping this phase leaves the core entropy higher and the final Chandrasekhar core fatter. In Fig. 1 the large jump in compact object mass in single stars at ZAMS mass $`\sim 19M_{\odot }`$ is clearly seen. From our discussion in Section 2 we see that this is just at the point where the Fe core mass goes above the proto-compact mass of $`1.8M_{\odot }`$ and, therefore, above this mass one would expect single stars to go into high-mass black holes. Arguments have been given that SN 1987A, with progenitor ZAMS mass of $`\sim 18M_{\odot }`$, evolved into a low-mass black hole . We believe from our above arguments and Fig. 1 that just above the ZAMS mass of $`\sim 20M_{\odot }`$, single stars go into high-mass black holes without return of matter to the Galaxy. Thus, the region of masses for low-mass black hole formation in single stars is narrow, say 18–20 $`M_{\odot }`$ (although we believe it to be much larger in binaries). Thus far our discussion has been chiefly about single stars, in which the He burns “clothed” by a hydrogen envelope. In this case the convective helium core grows as time passes. In the “naked” He cores, in which the H envelope has been lifted off in RLOF or driven off by wind either before or early in the He burning, the temperature and the entropy will be slightly lower, because the insulating layer is gone, so it is not surprising that their carbon abundance is large. Furthermore, the core mass continually decreases because of mass loss by wind. In fact, even for the naked $`20M_{\odot }`$ He core, corresponding to ZAMS mass $`\sim 45M_{\odot }`$, the central carbon abundance was $`\sim 33\%`$ at the end of He core burning, whereas only $`\sim 15\%`$ is necessary for convective carbon burning . For lower mass He stars the $`{}_{}{}^{12}C`$ abundance was, of course, larger. Even with He-star wind mass loss rates reduced by half, Wellstein & Langer find a central carbon abundance of $`\gtrsim 1/3`$ at the end of He core burning all the way up through $`60M_{\odot }`$ stars, so it is clear that convective carbon burning will take place. Unfortunately, the cores have not yet been evolved past the CO stage. Thus, in the range of ZAMS masses up to $`60M_{\odot }`$, if the H envelope is lifted off early in the core He burning phase, the convective carbon burning will take place after the He burning. 
By ZAMS mass $`40M_{\odot }`$, where stars evolve into WR stars almost independently of whether they have a companion, the ultimate fate of the compact core is uncertain: Brown, Weingartner & Wijers suggest that 1700-37, with a progenitor of about $`40M_{\odot }`$, went into a low-mass black hole. This would seem to indicate that the H-envelope of such massive stars is blown off in an LBV phase rapidly enough that the He core again burns as “naked”. In any case, $`{}^{12}\mathrm{C}`$ is burned convectively following He core burning, so the resulting Fe core should be small. We believe that our discussion earlier in this section indicates that single stars in the region of ZAMS masses 20–35 $`M_{\odot }`$ end up as high-mass black holes. We can obtain the high-mass black holes, according to our above discussion, if we make the He stars burn with “clothing”, i.e., lift their H-envelope off only following He core burning. Thus, the evolving massive star should meet the companion main sequence star only following He core burning (in the supergiant stage). By then its radius $`R`$ is several hundred $`R_{\odot }`$, and its binding energy, $`0.6GM^2/R`$, is very small because of the large $`R`$. In order to see effects of matter stripped off from the main sequence companion in the transient sources, we want it to end up close to the black hole. Because of its low binding energy, the supergiant envelope can be expelled at the expense of a relatively small orbital binding energy of the companion, $`\frac{1}{2}GM_\mathrm{A}M_{\mathrm{B},f}/a_f`$, where $`a_f`$ is the distance between black hole and companion. In order to make $`a_f`$ small, the mass $`M_\mathrm{A}`$ of the companion must be small. (More massive main sequence stars will spiral in less far, hence end up further from the black hole, and not fill their Roche Lobes. However, when they evolve in the subgiant or giant phase they will fill them.) Both Portegies Zwart, Verbunt, & Ergma and Ergma & Van den Heuvel have suggested that roughly the above region of ZAMS masses must be responsible for the $`7M_{\odot }`$ black holes in the transient X-ray sources in order to form enough such sources. Our scenario is essentially the same as that of de Kool et al. for the black hole binary A0620-00. We refer to this work for the properties of the K-star companion, stressing here the evolutionary aspects of the massive black hole progenitor. ### 6.2 Calculation We now calculate the common envelope evolution following the formalism of Section 3. Here $`M_\mathrm{A}`$ is the mass of the main sequence companion, $`M_\mathrm{B}`$ that of the massive black hole progenitor. The ratio $`q={\displaystyle \frac{M_{\mathrm{A},i}}{M_{\mathrm{B},i}}}`$ (43) is very small, and there is great uncertainty in the initial number of binaries for such a small $`q\simeq 1/25`$. We again take the distribution as $`dq`$, and again assume $`\mathrm{ln}a`$ to be uniformly distributed over a logarithmic interval of 7. Again, the fraction of binaries in a given interval is $`d\varphi ={\displaystyle \frac{d(\mathrm{ln}a)}{7}}.`$ (44) We evolve, as typical, a $`25M_{\odot }`$ star (B) with a companion $`1M_{\odot }`$ main sequence star (star A) as the progenitor of the transient X-ray sources. The common envelope evolution can be done as in Section 3. With $`M_{\mathrm{B},i}=25M_{\odot }`$ and neglect of the accretion onto the main sequence mass $`M_\mathrm{A}`$, we find from Bethe & Brown $`\left({\displaystyle \frac{Y_f}{Y_i}}\right)^{1.2}={\displaystyle \frac{1.2}{\alpha _{ce}}}{\displaystyle \frac{M_{\mathrm{B},i}}{M_\mathrm{A}}}`$ (45) where $`Y=M_\mathrm{B}/a`$.
Here the coefficient of dynamical friction $`c_d`$ was taken to be 6. The result is relatively insensitive to $`c_d`$, the exponent $`1.2`$ resulting from $`1+1/(c_d-1)`$. Thus, in our case $`{\displaystyle \frac{Y_f}{Y_i}}=17\left({\displaystyle \frac{\alpha _{ce}M_A}{M_{\odot }}}\right)^{-0.83}=30\left({\displaystyle \frac{0.5}{\alpha _{ce}}}{\displaystyle \frac{M_{\odot }}{M_A}}\right)^{0.83}.`$ (46) We expect $`\alpha _{ce}\simeq 0.5`$, under the assumption that the thermal energy of the expelled envelope is equal to that which it originally possessed in the massive star (i.e. that it is not used as extra energy to help remove the envelope), but it could be smaller. From this we obtain $`{\displaystyle \frac{a_i}{a_f}}={\displaystyle \frac{M_{\mathrm{B},i}Y_f}{M_{\mathrm{B},f}Y_i}}=90\left({\displaystyle \frac{0.5}{\alpha _{ce}}}{\displaystyle \frac{M_{\odot }}{M_\mathrm{A}}}\right)^{0.83},`$ (47) where we have taken the He star mass $`M_{\mathrm{B},f}`$ to be $`1/3`$ of $`M_{\mathrm{B},i}`$. In order to survive spiral-in, the final separation $`a_f`$ must be sufficient that the main sequence star lies at or inside its Roche Lobe, about $`0.2a_f`$ if $`M_\mathrm{A}=M_{\odot }`$. This sets $`a_f\gtrsim 5R_{\odot }=3.5\times 10^{11}`$ cm and $`a_i=3.15\left({\displaystyle \frac{0.5}{\alpha _{ce}}}\right)^{0.83}\times 10^{13}\mathrm{cm},`$ (48) which is about 2 AU. This exceeds the radius of the red giant tip in the more numerous lower mass stars in our interval, so the massive star must generally be in the supergiant phase when it meets the main sequence star, i.e., the massive star must be beyond He core burning. E.g., the red giant tip (before the He core burning) for a $`20M_{\odot }`$ star is at $`0.96\times 10^{13}`$ cm, and for a $`25M_{\odot }`$ star at $`2.5\times 10^{13}`$ cm. These numbers are, however, somewhat uncertain. Notice that decreasing $`\alpha _{ce}`$ will increase $`a_i`$. Decreasing $`M_A`$ has little influence, because with the smaller stellar radius the minimum $`a_f`$ will decrease nearly proportionately. Note that neglect of accretion onto the main sequence star would change the exponent $`0.83`$ to unity, so accretion is unimportant except in increasing the final mass. Now a ZAMS $`25M_{\odot }`$ star ends up at radius $`6.7\times 10^{13}`$ cm ($`\simeq 2a_i`$) following He shell burning. Thus the interval between $`a_i`$ and $`6.7\times 10^{13}`$ cm is available for spiral-in without merger, so that a fraction $`{\displaystyle \frac{1}{7}}\mathrm{ln}\left({\displaystyle \frac{6.7}{3.15\left(\frac{0.5}{\alpha _{ce}}\right)^{0.83}}}\right)\simeq 0.11`$ (49) of the binaries survive spiral-in, but are close enough that the main sequence star is encountered by the evolving H envelope of the massive star. The He core burning will be completed before the supergiant has moved out to $`2`$ A.U., so binaries which survive spiral-in will have He cores which burn as “clothed”, namely as in single stars. Given our assumptions in Section 3, the fraction of supernovae which arise from ZAMS stars between 20 and $`35M_{\odot }`$ is $`{\displaystyle \frac{1}{2^{3/2}}}-{\displaystyle \frac{1}{3.5^{3/2}}}=0.20`$ (50) where we have assumed that a mass of $`10M_{\odot }`$ is necessary for a star to go supernova. A Salpeter function with index $`n=1.5`$ is assumed here. Our assumption that the binary distribution is as $`dq`$ is arbitrary, and gives us a factor $`1/25`$ for a $`1M_{\odot }`$ companion.
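The chain of numbers in eqs. (45)–(50) can be reproduced directly; the short script below is only a numerical restatement of the quantities quoted above, with all inputs as given in the text.

```python
import math

# Spiral-in of a 1 Msun main-sequence star into the envelope of a
# 25 Msun supergiant, eqs. (45)-(50).
alpha_ce, M_B_i, M_A = 0.5, 25.0, 1.0        # efficiency, masses in Msun

# Eq. (45): (Y_f/Y_i)^1.2 = (1.2/alpha_ce)(M_B_i/M_A)
Y_ratio = ((1.2 / alpha_ce) * (M_B_i / M_A)) ** (1.0 / 1.2)
print(Y_ratio)                               # ~30, as in eq. (46)

# Eq. (47): a_i/a_f with the He-star mass M_B_f = M_B_i/3
a_ratio = 3.0 * Y_ratio
print(a_ratio)                               # ~90

# Eq. (48): a_i for a_f = 3.5e11 cm (companion just inside its Roche Lobe)
a_i = a_ratio * 3.5e11
print(a_i)                                   # ~3.2e13 cm, about 2 AU

# Eq. (49): fraction surviving spiral-in, flat in ln(a) over 7 e-folds
print(math.log(6.7e13 / a_i) / 7.0)          # ~0.11

# Eq. (50): Salpeter (n = 1.5) fraction of SN progenitors (>10 Msun)
# with ZAMS mass between 20 and 35 Msun
print(2.0 ** -1.5 - 3.5 ** -1.5)             # ~0.20
```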
Thus, for a supernova rate of 2 per century, our birth rate for transient sources in the Galaxy is $`2\times 10^{-2}\times 0.5\times 0.11\times 0.20\times 0.04\simeq 8.8\times 10^{-6}\mathrm{yr}^{-1}`$ (51) where $`0.5`$ is the assumed binarity, $`0.11`$ comes from eq. (49), and the final (most uncertain) factor $`0.04`$ results from a distribution flat in $`q`$ and an assumed $`1M_{\odot }`$ companion star. In order to estimate the number of transient sources with black holes in the Galaxy, we should know the time over which a main sequence star of mass $`1M_{\odot }`$ transfers mass to a more massive companion. This depends on the angular-momentum loss rate that drives the mass transfer. A guaranteed loss mechanism for close binaries is gravitational radiation, which for a main-sequence donor gives a mass transfer rate of $`10^{-10}M_{\odot }\mathrm{yr}^{-1}`$, almost independent of donor mass. As mass is transferred, the mass of the donor decreases and with it the radius of the donor. Quite a few low-mass X-ray binaries have X-ray luminosities that imply accretion rates in excess of $`10^{-10}M_{\odot }\mathrm{yr}^{-1}`$, leading to suggestions of additional mechanisms for loss of angular momentum from the binary, to increase mass transfer. Verbunt & Zwaan estimate that magnetic braking can boost the transfer of mass in a low-mass binary. We somewhat arbitrarily adopt an effective mass transfer rate of $`10^{-9}M_{\odot }\mathrm{yr}^{-1}`$ for main sequence stars. In order to estimate the number of high-mass black-hole, main-sequence-star binaries in the Galaxy, we should multiply the birth rate of eq. (51) by the $`10^9`$ yr required, at the assumed mass loss rate, to strip the main sequence star, obtaining 8800 as our estimate. From the observed black-hole transient sources, Wijers arrives at 3000 such sources in the Galaxy, but regards this number as a lower limit. With the uncertainties in formation rate and lifetime, the agreement between the two numbers is as good as may be expected. ### 6.3 Observations We believe that there are many main sequence stars more massive than the $`\lesssim 1M_{\odot }`$ we used in our schematic evolution, which end up further away from the black hole and will fill their Roche Lobes only during the subgiant or giant stage. From our earlier discussion, we see that a $`2M_{\odot }`$ main sequence star will end up about twice as far from the black hole as the $`1M_{\odot }`$ one, a $`3M_{\odot }`$ star three times as far, etc. Two of the 9 systems in our Table 1 have subgiant donors (V404 Cyg and XN Sco). These have the longest periods, 6.5 and 2.6 days, and XN Sco is suggested to have a relatively massive donor of $`2M_{\odot }`$. It seems clear that these donors sat inside their Roche Lobes until they evolved off the main sequence, and then poured matter onto the black hole once they expanded and filled their Roche Lobes. For a $`2M_{\odot }`$ star, the evolutionary time is about a percent of the main-sequence time, so the fact that we see two subgiants out of nine transient sources means that many more of these massive donors are sitting quietly well within their Roche Lobes. Indeed, we could estimate from the relative time that there are $`2/9\times 100=22`$ times more of these latter quiet main sequence stars in binaries. Amazingly, this factor $`22`$ almost cancels the $`1/25`$ we had for the interval in $`q`$ over which the donors contribute. This is not coincidental. Essentially any mass donor, at least almost up to the $`25M_{\odot }`$ progenitor of the black hole, can give rise to a common envelope phase.
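The birth rate of eq. (51) and the population estimate that follows from it can be checked in a few lines; the $`10^9`$ yr lifetime below simply restates the adopted $`10^{-9}M_{\odot }\mathrm{yr}^{-1}`$ stripping rate.

```python
# Galactic birth rate of transient sources, eq. (51), and the implied
# present-day population.
sn_rate   = 2e-2    # supernovae per year in the Galaxy
binarity  = 0.5
survive   = 0.11    # fraction surviving spiral-in, eq. (49)
mass_frac = 0.20    # ZAMS 20-35 Msun fraction, eq. (50)
q_frac    = 0.04    # flat-in-q weight of a ~1 Msun companion (1/25)

birth_rate = sn_rate * binarity * survive * mass_frac * q_frac
print(birth_rate)                  # ~8.8e-6 per yr, eq. (51)

lifetime = 1.0 / 1e-9              # yr to strip a 1 Msun donor at 1e-9 Msun/yr
print(birth_rate * lifetime)       # ~8800 systems, vs. Wijers' >~ 3000
```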
The BH progenitor crosses the Hertzsprung gap very quickly, in a time in which the companion can hardly accept its mass. (The limit $`q\lesssim 1/4`$ for common envelope evolution was determined by Kippenhahn & Meyer-Hofmeister for Case A mass transfer.) Thus, one can expect essentially all companions, up to $`q\lesssim 1`$, to go into common envelope evolution and contribute. Beginning from Wijers’ empirical estimate we would thus have $`(2/9)\times 100\times 3000=6.7\times 10^4`$ binaries with high-mass black holes and main-sequence companions. This number is determined, as shown above, chiefly by the number of observed systems with subgiant donors. If we assume that ZAMS masses 10–18 $`M_{\odot }`$ evolve into a neutron star, we should have about 3 times more neutron stars than high-mass black holes (see eq. (50)). The range follows from our belief that SN 1987A, with progenitor $`18M_{\odot }`$ ZAMS, went into a low-mass black hole, following the scenario of Brown & Bethe. On the basis of a Monte Carlo calculation using the kick velocities of Cordes & Chernoff, we find that $`1/2`$ of the binaries containing a He star and a low-mass main sequence companion (with $`M\sim 1M_{\odot }`$) will be disrupted in the explosion. Thus, we find only a slightly higher birth rate for LMXBs (Low Mass X-ray Binaries) with neutron stars than with black holes, although the numbers could be equal to within our accuracy. With comparable lifetimes (since the donor masses and mass transfer rates are comparable), this would give us one to a few thousand LMXBs with neutron stars, much above the total number of observed LMXBs ($`\sim 130`$). Indeed, from Table 6 of Portegies Zwart & Verbunt one sees that their estimated empirical birth rate for low-mass X-ray binaries is $`2\times 10^{-7}\mathrm{yr}^{-1}`$, whereas in either theoretical evolution including kick velocities they obtain $`4\times 10^{-6}\mathrm{yr}^{-1}`$. This factor of 20 discrepancy is by far the greatest between theoretical and empirical rates in their table, and supports our point that many of the neutron stars must have disappeared along the way. Alternatively, a large number of LMXBs with neutron stars could be transients as well (like, e.g., Aql X-1). Just at present there are new developments in the evolution of low-mass X-ray binaries, which we shall summarize briefly in Section 7. As we showed below eq. (48), the He core of the massive star will in general be uncovered only after He core burning is completed. The remaining time for He burning (in a shell) will be short; e.g., for a $`20M_{\odot }`$ ZAMS star it is only $`1.4\times 10^4`$ years. Therefore the mass loss by wind after uncovering the He core will not be large, and when the star finally becomes a supernova, its mass will be almost equal to that of the He core of the original star. The latter can be calculated from $`M_{\mathrm{He}}\simeq 0.10(M_{\mathrm{ZAMS}})^{1.4}`$ (52) so for ZAMS masses 20–35 $`M_{\odot }`$, $`M_{\mathrm{He}}`$ will lie in the interval 7–14 $`M_{\odot }`$. Bailyn et al. find the black hole masses in transient sources to be clustered about $`7M_{\odot }`$, except for V404 Cyg, which has a higher mass. This is in general agreement with our scenario, because most of the black holes will come from the more numerous stars of ZAMS mass not far from our lower limit of $`20M_{\odot }`$. Two points are important to note: 1. Not much mass can have been lost by wind. Naked He stars have rapid wind loss.
However, in our scenario the He star is made naked only during He shell burning and therefore does not have much time ($`\lesssim 10^4`$ yr) to lose mass by wind. 2. There are good reasons to believe that the initial He core will be rotating. The way in which the initial angular momentum affects the accretion process has been studied by Mineshige et al. for black hole accretion in supernovae. In general, accretion discs which are optically thick and advection dominated are formed. The disc is hot, and the produced energy and photons are advected inward rather than being radiated away. The disc material accretes into the black hole at a rate of $`>10^6\dot{M}_{\mathrm{Edd}}`$ for the first several tens of days. Angular momentum is advected outwards. Our results show that little mass is lost, because the final $`7M_{\odot }`$ black hole masses are not much less massive than the He core masses of the progenitors, even though some mass is lost by wind before the core collapses. The latter loss will not, however, be great, because there is not much time from the removal of the H envelope until the collapse. Accretion of the He into the black hole will differ quantitatively from the above, but we believe it will be qualitatively similar. The fact that the helium must be advected inwards and that little mass is lost as the angular momentum is advected outwards is extremely important to establish. This is because angular momentum, essentially centrifugal force, has been suggested by Chevalier to hold up hypercritical accretion onto neutron stars in common envelope evolution. (Chevalier had first proposed the hypercritical accretion during this evolutionary phase to turn the neutron stars into black holes, work followed up by Brown and by Bethe & Brown.) However, once matter is advected onto a neutron star, temperatures $`\gtrsim 1`$ MeV are reached, so that neutrinos can carry off the energy. The accreted matter simply adds to the neutron star mass, evolving into an equilibrium configuration. Thus, this accretion does not differ essentially from that into a black hole. In either case, neutron star or black hole, an accretion disc or accretion shock (depending on the amount of angular momentum), of radius $`\sim 10^{11}`$ cm in both cases, is first formed, giving essentially the same boundary condition for the hypercritical accretion onto either black hole or neutron star. Thus, the masses of the black holes in transient sources argue against substantial inhibition of hypercritical accretion by jets, one of the Chevalier suggestions. Measured mass functions, which give a lower limit on the black hole mass, are given in Table 1. Only GRO J0422+32 and 4U 1543-47 have a measured mass function $`\lesssim 3M_{\odot }`$. Results of Callanan et al. indicate that the angle $`i`$ between the orbital plane and the plane of the sky for GRO J0422+32 is $`i<45^{\circ }`$, and recent analyses indicate that the angle $`i`$ for 4U 1543-47 is $`20^{\circ }<i<40^{\circ }`$. So both GRO J0422+32 and 4U 1543-47 also contain high-mass black holes. Based on the observations of Kaper et al. that the companion is a hypergiant, Ergma & Van den Heuvel argue that the progenitor of the neutron star in 4U1223-62 must have a ZAMS mass $`\gtrsim 50M_{\odot }`$. Brown, Weingartner & Wijers, by similar argumentation, arrived at $`45M_{\odot }`$, but then had the difficulty that 4U1700-37, which they suggested contains a low-mass black hole, appeared to evolve from a lower mass star than the neutron star in 1223.
Wellstein & Langer suggest the alternative that in 1223 the mass transfer occurs in the main-sequence phase (Case A mass transfer), which would be expected to be quasi-conservative. They find that the progenitor of the neutron star in 1223 could then come from a mass as low as $`26M_{\odot }`$. This is in agreement with Brown et al. for conservative mass transfer (their Table 1), but these authors discarded this possibility, considering only Case B mass transfer, in which case considerable mass would be lost. Wellstein & Langer are in agreement with Brown et al. that 4U1700-37 should come from a quite massive progenitor. Conservative evolution here is not possible because of the short period of 3.4 days. The compact object mass is here $`1.8\pm 0.4M_{\odot }`$. Brown et al. suggest that the compact object is a low-mass black hole. The upper mass limit for these was found by Brown & Bethe to be $`1.8M_{\odot }`$, as compared with an upper limit for neutron star masses of $`1.5M_{\odot }`$. Thus, there seems to be evidence for some ZAMS masses of 40–50 $`M_{\odot }`$ ending up as low-mass compact objects, whereas we found that lower mass stars in the interval 20–35 $`M_{\odot }`$ ended up as high-mass black holes. In this sense we agree with Ergma & Van den Heuvel that low-mass compact object formation “is connected with other stellar parameters than the initial stellar mass alone”. We suggest, however, following Brown et al., that stars in binaries evolve differently from single stars because of the different evolution of the He core in binaries resulting from RLOF. Namely, “naked” He cores evolve to smaller final compact objects than “clothed” ones. In fact, this different evolution of binaries was found by Timmes, Woosley & Weaver. They pointed out that stars denuded of their hydrogen envelope in early RLOF in binaries would explode as Type Ib supernovae. They found the resulting remnant gravitational mass following explosion to be in the interval 1.2–1.4 $`M_{\odot }`$, whereas in exploding stars of all masses with hydrogen envelope (Type II supernova explosion) they found a peak at about $`1.28M_{\odot }`$, chiefly from stars of low masses, and another peak at $`1.73M_{\odot }`$, chiefly from more massive stars. Our Fe core masses in Fig. 1 come from essentially the same calculations, but the “Remnant” masses of Woosley & Weaver are somewhat greater than those used by Timmes et al. In fact, the differences between the masses we plot and those of Timmes et al. come in the region 1.7–1.8 $`M_{\odot }`$ (gravitational). This is just in the Brown & Bethe range for low-mass black holes. It may be that some of the stars with low-mass companions evolve into low-mass black holes. Presumably these would give lower luminosities than the high-mass black holes, although at the upper end of the mass range we discuss, 4U1700-37 seems to be an example of such a system. Of course here the high luminosity results from the high mass loss rate of the giant companion. There are substantial ambiguities in fallback, etc., from the explosion. Our point in this paper is that most of the higher mass single stars, 20–35 $`M_{\odot }`$, go into high-mass black holes. (The Brown & Bethe limit for low-mass black hole formation is 1.5–1.8 $`M_{\odot }`$ gravitational, but there is some give and take in both the lower and upper limit. Also, the stars are not all the same. In particular, different metallicities will give different wind losses.) ## 7 Evolution of Low Mass X-ray Binaries We shall briefly point out new developments in the evolution of low-mass X-ray binaries.
These were foreseen in the excellent review by Van den Heuvel, and there has been substantial development in this field lately. Low-mass X-ray binaries are considered to be progenitors of recycled pulsars with helium white dwarf companions. In order to bring the magnetic fields of the latter down to $`10^8`$ G and to speed them up to their final period, Van den Heuvel & Bitzaraki had the neutron star accreting $`0.5M_{\odot }`$ from the main-sequence donor. More detailed recent calculations by Tauris & Savonije find that if the initial orbital period is below $`30`$ days with a main sequence donor of $`1M_{\odot }`$ which undergoes stable mass transfer with the neutron star, the mass of the latter is increased up to $`2M_{\odot }`$ if the amount of material ejected as a result of the propeller effect or disk instabilities is insignificant. This presents a problem for us, because the Brown & Bethe mass limit for neutron stars is $`1.5M_{\odot }`$. From this limit, we would say that these neutron stars in low-mass X-ray binaries would have gone into black holes. A way out of this problem was suggested by Van den Heuvel, namely the evolution of Her X-1 type X-ray binaries (see especially the Appendix of Van den Heuvel). In this case a radiative donor more massive than the neutron star pours matter in unstable mass transfer across the Roche Lobe onto the neutron star. This mass transfer onto the accretion disc can occur at as much as $`10^4\dot{M}_{\mathrm{Edd}}`$, if $`\dot{M}_{\mathrm{Edd}}\simeq 1.5\times 10^{-8}M_{\odot }\mathrm{yr}^{-1}`$ is accreted onto the neutron star, since the Eddington limit goes linearly with $`R`$ and the radius of the disc can be $`\sim 10^{10}`$ cm. The advection dominated inflow-outflow solution (ADIOS) of Blandford & Begelman suggests that the binding energy of the matter released at the neutron star can carry away mass, angular momentum and energy from the gas accreting onto the edge of the accretion disc, provided the latter does not cool too much. In this way the binding energy of a gram of gas at the neutron star can carry off $`10^3`$ grams of gas at the edge of the accretion disc. Such radiatively-driven outflows are suggested by King & Begelman to enable common envelope evolution to be avoided. Tauris & Savonije have carried out a detailed evolution of low-mass X-ray binaries with $`P_{\mathrm{orb}}>2`$ days using computer programs based on Eggleton’s, which for radiative and semiconvective donors follow, in at least a general way, the above ideas. For a deeply convective donor a short phase of rapid mass loss may reach a rate as large as $`10^4\dot{M}_{\mathrm{Edd}}`$ while the mass of the donor drops to well below the neutron star mass. Although rates $`>10^4\dot{M}_{\mathrm{Edd}}`$ would be hypercritical for spherical accretion, somewhat higher rates can be survived without hypercritical accretion, provided angular momentum is taken into account. The important point is that the donor mass can be brought down sufficiently far before stable mass transfer at a rate $`\lesssim \dot{M}_{\mathrm{Edd}}`$ sets in, so that the neutron star can avoid accreting sufficient mass to send it into a black hole. It is not clear what percentage of the neutron stars will escape the black-hole fate. Our rough estimates in Section 6 indicate that only a small fraction need do so. For even more massive donors (2–6 $`M_{\odot }`$) which are either radiative or semiconvective, work by Tauris et al. indicates that low-mass X-ray binaries with C/O white dwarf (CO) companions can be made in much the same way.
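As a rough consistency check on the Eddington rate quoted above, one can evaluate $`\dot{M}_{\mathrm{Edd}}=4\pi cR/\kappa `$ for a neutron star; the radius $`R=10^6`$ cm and the electron-scattering opacity $`\kappa =0.34\mathrm{cm}^2\mathrm{g}^{-1}`$ for hydrogen-rich matter used below are standard assumptions of this sketch rather than values taken from the text.

```python
import math

c, R, kappa = 3e10, 1e6, 0.34      # cgs: cm/s, cm, cm^2/g (assumed values)
Msun, yr = 2e33, 3.156e7           # g, s

mdot_edd = 4 * math.pi * c * R / kappa      # g/s
print(mdot_edd * yr / Msun)                 # ~1.7e-8 Msun/yr, close to the
                                            # quoted ~1.5e-8 Msun/yr
```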
In an earlier paper, Van den Heuvel had suggested that these C/O white dwarf binaries would originate from donor stars on the asymptotic giant branch. In order to evolve these, he needed an efficiency $`\alpha >1`$; i.e., sources additional to those included in our earlier common envelope evolution, such as mass loss by instabilities in the AGB, dissociation energy, etc., have to participate in helping to remove the envelope of the donor star. King & Ritter have computed a scenario for Cyg X-2 with an initial donor mass of $`3.6M_{\odot }`$. Currently the donor has a mass of 0.5–0.7 $`M_{\odot }`$ and a large radius, about $`7R_{\odot }`$. About $`2M_{\odot }`$ must have been lost in super-Eddington accretion, roughly along the lines sketched above. More massive donors can lead to relatively more massive white dwarf companions, which will be C/O white dwarfs. In fact, the present situation is that no circular NS-CO$`{}_{c}`$ binaries (the lower suffix $`c`$ ($`e`$) denotes the circular (eccentric) binaries) which went through common envelope evolution seem to be observed, the alternative Tauris et al. evolution which avoids common envelope evolution being preferred. This presents a real dilemma for the standard scenario of common envelope evolution. It seems clear that in the binary B2303$`+`$46 the companion to the pulsar is a C/O white dwarf. B2303$`+`$46 is an eccentric binary NS-CO$`{}_{e}`$, indicating that the neutron star was formed last. This is confirmed by the unrecycled field strength of the pulsar, $`B=8\times 10^{11}`$ G. Cases have been made that the recently discovered J1141-6545 and B1820-11 are also NS-CO$`{}_{e}`$ binaries. On the other hand, evolutionary calculations show that the formation of NS-CO$`{}_{c}`$ binaries through common envelope evolution is $`\gtrsim 50\%`$ as probable as that of NS-CO$`{}_{e}`$ binaries. In this evolution the pulsar magnetic moment will be recycled, brought down by at least a factor of 100 and possibly even further, down to the empirical values of $`5\times 10^8`$ G found in the NS-CO$`{}_{c}`$ binaries. The lowering of the magnetic fields increases the time of observation by a factor of $`100`$ or of $`2000`$, depending on whether the theoretical or empirical magnetic field is used. Since we fairly certainly observe at least one NS-CO$`{}_{e}`$ binary, we should see either 100 or 2000 NS-CO$`{}_{c}`$ binaries which have gone through common envelope evolution. We certainly don’t see anything like this: at most the 5 that had earlier been attributed to common envelope evolution, and probably none. Brown, Lee, & Portegies Zwart remove at least most of this discrepancy by showing that, with the introduction of hypercritical accretion, the neutron star in common envelope evolution with the evolving main sequence companion goes into a black hole. ## 8 Discussion and Conclusion Our chief new point in the evolution of binaries of compact objects is the use of hypercritical accretion in common envelope evolution, although the idea of hypercritical accretion is not new (Section 3). Chevalier discussed the possibility that angular momentum might hinder hypercritical accretion. In his treatment of the accretion disc, he assumed gas pressure to dominate, in order to raise the temperature sufficiently for neutrinos to be emitted. This entailed a tiny viscosity, characterized by $`\alpha \lesssim 10^{-6}`$ in the $`\alpha `$-description. More reasonable values of $`\alpha `$ are $`\sim 0.1`$.
Bethe, Brown, & Lee have shown that for larger $`\alpha `$’s, $`\alpha \sim 0.01`$–$`1`$, the disc pressure is radiation dominated, and they find a simple hypercritical advection dominated accretion flow (HADAF) of matter onto the neutron star. The Bethe, Brown, & Lee HADAF appears to reproduce the Armitage & Livio numerical two-dimensional hydro solution. These latter authors suggest that jets will prevent hypercritical accretion by blowing off the accreting matter. At such high rates of accretion, $`\sim 1M_{\odot }\mathrm{yr}^{-1}`$, the Alfvén radius is, however, close to the neutron star surface, and we believe that this will effectively shut down any magnetically driven jets. In Section 6 we discussed the advection of a rotating He envelope into a black hole. We believe that two possibilities exist. Phinney & Spruit suggest that the magnetic turbulence is strong enough to keep the He envelope in corotation with the core of the star until shortly before it evolves into a black hole. Then not much angular momentum would have to be advected away in order to let the matter accrete. Alternatively, magnetic turbulence is strong enough so that angular momentum can be carried away from a rapidly rotating He core; then the matter can accrete. From the measured masses of $`7M_{\odot }`$ we know that most of the He core must fall into the black hole, so one of these scenarios should hold. Both favor high magnetic turbulence, lending credence to the Chevalier suggestion we quoted. ## Appendix A Common Envelope Evolution of Cygnus X-3 The closeness of the compact object in Cyg X-3 to its $`10M_{\odot }`$ companion helium star bears witness to an earlier stage of common envelope evolution. Although the mass of the He star has not been measured, the star is similar to V444 Cygni, the mass of which is $`9.3\pm 0.5M_{\odot }`$. For example, from the period change its mass loss rate would be $`\dot{M}_{\mathrm{dyn}}=0.6\times 10^{-5}(M_{\mathrm{He}}/10M_{\odot })M_{\odot }\mathrm{yr}^{-1}`$, whereas that of V444 Cygni is $`\dot{M}_{\mathrm{dyn}}=1\times 10^{-5}M_{\odot }\mathrm{yr}^{-1}`$, indicating an $`M_{\mathrm{He}}\simeq 10M_{\odot }`$. Mass loss rates cannot easily be obtained from W.-R. winds because of large nonlinear effects which necessitate corrections for “clumpiness”. However, polarization measurements of the Thomson scattering, which depend linearly on the wind, give a mass loss rate of $`\dot{M}=0.75\times 10^{-5}M_{\odot }\mathrm{yr}^{-1}`$, roughly compatible with the period change. In agreement with many other authors we take $`M_{\mathrm{He}}=10M_{\odot }`$ in Cyg X-3. Here we evolve a massive O-star binary with initial ZAMS masses of $`33M_{\odot }`$ and $`23M_{\odot }`$ as a possible progenitor for Cyg X-3. In the red giant phase the $`33M_{\odot }`$ star will transfer its H envelope to the $`23M_{\odot }`$ companion, leaving a He star of $`M_{\mathrm{He}}=0.1M_{\mathrm{ZAMS}}^{1.4}=13M_{\odot }.`$ (53) With the efficiency of mass transfer assumed to go as $`q^2`$, about half of the $`20M_{\odot }`$ H-envelope will be accepted by the companion, which then becomes a rejuvenated $`33M_{\odot }`$ star. The He core of the primary then explodes, going into a $`1.5M_{\odot }`$ compact object, neutron star or low-mass black hole. After the companion $`33M_{\odot }`$ star evolves, the binary will go into common envelope evolution. Eq. (29) can be written $`\left({\displaystyle \frac{Y_f}{Y_i}}\right)=\left({\displaystyle \frac{2.4M_{\mathrm{B},i}}{M_{\mathrm{A},i}}}\right)^{\frac{c_d-1}{c_d}}`$ (54) where we again take $`c_d=6`$.
With $`M_{\mathrm{B},i}=33M_{\odot }`$ and $`M_{\mathrm{A},i}=1.5M_{\odot }`$, $`Y_f/Y_i=27.`$ (55) The compact object mass scales as $`M_\mathrm{A}\propto Y^{1/(c_d-1)}=Y^{1/5}`$ (56) so that $`M_{\mathrm{A},f}=2.9M_{\odot }`$ (57) and the final compact object is certainly a black hole, in agreement with Cherepaschchuk & Moffat and Ergma & Yungelson. We believe our evolution here to show that this $`3M_{\odot }`$ black hole is about the most massive that can be formed in common envelope evolution by accretion onto a low-mass compact object, since our $`33M_{\odot }`$ companion is near the ZAMS mass range that will lose mass in an LBV phase, unsuitable for common envelope evolution, so it cannot be made much more massive. We next find $`{\displaystyle \frac{a_i}{a_f}}={\displaystyle \frac{M_{\mathrm{B},i}}{M_{\mathrm{B},f}}}{\displaystyle \frac{Y_f}{Y_i}}\simeq 70.`$ (58) For an $`a_f\simeq 3.5R_{\odot }`$ this gives $`a_i\simeq 250R_{\odot }`$ (59) comfortably within the red giant range. Following Ergma & Yungelson we calculate the accretion rate as $`\dot{M}_{\mathrm{acc}}=0.14\left({\displaystyle \frac{M_{\mathrm{BH}}}{M_{\odot }}}\right)^2v_{1000}^{-4}P_{\mathrm{hr}}^{-4/3}\left({\displaystyle \frac{M_{\odot }}{M_{\mathrm{tot}}}}\right)^{2/3}\dot{M}_{\mathrm{wind}}.`$ (60) Here $`v_{1000}`$ is the wind velocity in units of $`1000\mathrm{km}\mathrm{s}^{-1}`$ and $`P_{\mathrm{hr}}`$ is the period in hours. (Through a slip, the two factors preceding $`\dot{M}_{\mathrm{wind}}`$ appear in the denominator there, although we confirm that the authors carried out their calculations with the correct formula.) For $`\dot{M}_{\mathrm{wind}}`$ we, like Ergma & Yungelson, take $`\dot{M}_{\mathrm{dyn}}`$. These authors take $`v_{1000}=1.5`$, essentially the result of Van Kerkwijk et al. An earlier estimate by Van Kerkwijk et al. was $`v_{1000}=1`$. We believe that the $`v_{\mathrm{wind}}`$ to be used here may be different from the (uncertain) measured terminal wind velocities, because the velocity near the compact object is substantially less. Therefore, we take $`v_{1000}=1.`$ Taking $`\dot{M}_{\mathrm{wind}}=\dot{M}_{\mathrm{dyn}}`$ we obtain $`\dot{M}_{\mathrm{acc}}=2.2\times 10^{-7}M_{\odot }\mathrm{yr}^{-1}.`$ (61) This is to be compared with $`\dot{M}_{\mathrm{Edd}}=4\pi cR/\kappa _{\mathrm{es}}=2.6\times 10^{-8}(M_{\mathrm{BH}}/M_{\odot })M_{\odot }\mathrm{yr}^{-1}`$ (62) where $`\kappa _{\mathrm{es}}=0.2\mathrm{cm}^2\mathrm{g}^{-1}`$ for He accretion. Our result is in fair agreement with Ergma & Yungelson, who find $`\dot{M}_{\mathrm{Edd}}\simeq 2.3\times 10^{-7}M_{\odot }\mathrm{yr}^{-1}`$ for a $`10M_{\odot }`$ black hole. The presence of jets in Cyg X-3 argues for super-Eddington rates of accretion, which we find. Cherepaschchuk and Moffat estimated the total luminosity of Cyg X-3 to be $`L_{\mathrm{bol}}\simeq 3\times 10^{39}\mathrm{erg}\mathrm{s}^{-1}`$. The efficiency of black hole accretion varies over the range $`0.057<ϵ<0.42`$ (63) from a black hole at rest to a (maximally rotating) Kerr black hole. We expect the black hole to be spun up by accretion from the wind or accretion disc. Taking an intermediate $`ϵ=0.2`$, we find $`L=2.5\times 10^{39}\mathrm{erg}\mathrm{s}^{-1}`$ (64) in rough agreement with the Cherepaschchuk and Moffat value. Cyg X-3 is often discussed as the “missing link” in binary pulsar formation. In fact, because of its high He star mass, upon explosion of the latter, it most probably will break up. But it should be viewed as the “tip of the iceberg”, in that there must be a great many more such objects with lower mass He stars which are not seen.
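The numerical chain of eqs. (53)–(62) above can be walked through explicitly; the $`3M_{\odot }`$ compact-object mass used in the last step is the value found in eq. (57).

```python
# Common envelope evolution and accretion for Cyg X-3, eqs. (53)-(62).
c_d, M_B_i, M_A_i = 6.0, 33.0, 1.5           # masses in Msun

# Eq. (53): He core of the primary, and the rejuvenated secondary
print(0.10 * 33.0 ** 1.4)                    # ~13 Msun
print(23.0 + 0.5 * 20.0)                     # 33 Msun after accepting half
                                             # of the ~20 Msun H envelope

# Eqs. (54)-(55): Y_f/Y_i = (2.4*M_B_i/M_A_i)^((c_d-1)/c_d)
Y_ratio = (2.4 * M_B_i / M_A_i) ** ((c_d - 1.0) / c_d)
print(Y_ratio)                               # ~27

# Eqs. (56)-(57): compact object grows as Y^(1/(c_d-1)) = Y^(1/5)
print(M_A_i * Y_ratio ** 0.2)                # ~2.9 Msun

# Eq. (58): a_i/a_f with M_B_f ~ 13 Msun (the He core)
print((M_B_i / 13.0) * Y_ratio)              # ~70

# Eqs. (61)-(62): accretion is a few times Eddington for a ~3 Msun hole
print(2.2e-7 / (2.6e-8 * 3.0))               # ~3: super-Eddington
```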
We have shown in Section 3, however, that such objects (with lower mass He stars) are more likely to contain a black hole than a neutron star. In our evolutionary scenario, the He star progenitor has about the same ZAMS mass as that of the primary. Thus, the fate of the “naked” He star should be the same low-mass compact object, neutron star or low-mass black hole, that resulted from the explosion of the primary. ## Appendix B Implications for LIGO Our result that there are 10 times more black-hole, neutron-star binaries than binary neutron stars (actually about 20 times more if we include the binaries in which the pulsar goes into a black hole in the He shell burning evolution; however, these will have masses not very different from the binary neutron stars, so we do not differentiate them) has important consequences for LIGO, the detection rates of which were based on the $`10^{-5}`$ per year per galaxy rates of merging for the latter. The combination of masses which will be well determined by LIGO is the chirp mass $`M_{\mathrm{chirp}}=\mu ^{3/5}M^{2/5}=(M_1M_2)^{3/5}(M_1+M_2)^{-1/5}`$ (65) where $`M=M_1+M_2`$ is the total system mass. The chirp mass of a NS-NS binary, with both neutron stars of mass $`1.4M_{\odot }`$, is $`1.2M_{\odot }`$. A $`10^{-5}`$ birth rate implies a rate of 3 $`\mathrm{yr}^{-1}`$ out to 200 Mpc. Kip Thorne informs us that LIGO’s first long gravitational-wave search in 2002-2003 is expected to see binaries with $`M_{\mathrm{chirp}}=1.2M_{\odot }`$ out to 21 Mpc. The chirp mass corresponding to the Bethe & Brown LMBH-NS binary with masses $`2.4M_{\odot }`$ and $`1.4M_{\odot }`$ is $`1.6M_{\odot }`$. Including a $`\sim 30\%`$ increase in the rate to allow for high-mass black-hole, neutron-star mergers (which should be regarded as a lower limit because of the high-mass limit of $`80M_{\odot }`$ used by Bethe & Brown for going into a HMBH) gives a 26 times higher rate than Phinney’s estimate for NS-NS mergers. These factors are calculated from the signal-to-noise ratio, which goes as $`M_{\mathrm{chirp}}^{5/6}`$, by cubing it to obtain the volume of detectability. We then predict a rate of $`3\times (21/200)^3\times 26=0.09\mathrm{yr}^{-1}`$ for 2003, rather slim. The enhanced LIGO interferometer planned to begin in 2004 should reach out beyond 150 Mpc for $`M_{\mathrm{chirp}}=1.2M_{\odot }`$, increasing the detection rate to $`3\times (150/200)^3\times 26=33\mathrm{yr}^{-1}`$. We therefore predict that LIGO will see more black-hole, neutron-star mergers per month than NS-NS mergers per year. ## Appendix C Binary Contributions to Gamma Ray Bursters The sheer numbers of black-hole, neutron-star binaries should dominate the mergers for gravitational waves, which could be detected by LIGO. For gamma-ray bursts, the presence of an event horizon eases the baryon pollution problem, because energy can be stored in the rotational energy of the black hole and then released into a cleaner environment via the Blandford-Znajek magnetohydrodynamic process. Binaries containing a black hole, or single black holes, have been suggested for some time as good progenitors for gamma-ray bursts. Reasons for this include the fact that the rest mass of a stellar mass black hole is comparable to what is required to energize the strongest GRB. Also, the horizon of a black hole provides a way of quickly removing most of the material present in the cataclysmic event that formed it.
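The chirp masses and LIGO detection rates quoted above follow from eq. (65) and the $`M_{\mathrm{chirp}}^{5/6}`$ scaling of the signal-to-noise ratio.

```python
def m_chirp(m1, m2):
    """Chirp mass, eq. (65)."""
    return (m1 * m2) ** 0.6 / (m1 + m2) ** 0.2

print(m_chirp(1.4, 1.4))        # ~1.2 Msun (NS-NS)
print(m_chirp(2.4, 1.4))        # ~1.6 Msun (LMBH-NS, Bethe & Brown)

# 10x more BH-NS binaries, ~30% extra for HMBH-NS mergers, and a
# detection volume ~ (SNR)^3 ~ M_chirp^(5/2) combine to roughly the
# quoted factor 26:
print(10 * 1.3 * (m_chirp(2.4, 1.4) / m_chirp(1.4, 1.4)) ** 2.5)   # ~25

# Predicted rates for the 2003 search and the enhanced interferometer:
print(3.0 * (21.0 / 200.0) ** 3 * 26)    # ~0.09 per yr
print(3.0 * (150.0 / 200.0) ** 3 * 26)   # ~33 per yr
```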
This quick removal of material may be important because of the baryon pollution problem: we need the ejecta that give rise to the GRB to be accelerated to a Lorentz factor of 100 or more, whereas the natural energy scale for any particle near a black hole is less than its mass. Consequently, we have a distillation problem of taking all the energy released and putting it into a small fraction of the total mass. The use of a Poynting flux from a black hole in a magnetic field does not require the presence of much mass, and uses the rotation energy of the black hole, so it provides naturally clean power. As a neutron star in a binary moves nearer to a black hole companion, it is distorted into a torus around the latter. Most of the torus matter enters the black hole from the last stable Keplerian orbit of $`R=6GM_{\mathrm{BH}}/c^2`$, carrying considerable angular momentum. In the process the black hole is spun up until it rotates with some fraction of the speed of light. A magnetic field which originates from the neutron star, but which could have been enhanced by differential rotation, is anchored in the remaining part of the torus, the accretion disc. When a rapidly rotating black hole is immersed in a magnetic field, frame dragging twists the field lines near the hole, which causes a Poynting flux to be emitted from near the black hole. This is the Blandford-Znajek mechanism. The source of energy for the flux is the rotation of the black hole. The source of the field is the surrounding accretion disk or debris torus. We showed that at most 9% of the rest mass of a rotating black hole can be converted to a Poynting flux, making the available energy for powering a GRB $`E_{\mathrm{BZ}}=1.6\times 10^{53}(M/M_{\odot })\mathrm{erg}.`$ (66) The power depends on the applied magnetic field: $`P_{\mathrm{BZ}}\simeq 6.7\times 10^{50}B_{15}^2(M/M_{\odot })^2\mathrm{erg}\mathrm{s}^{-1}`$ (67) (where $`B_{15}=B/10^{15}`$ G). This shows that modest variations in the applied magnetic field may explain a wide range of GRB powers, and therefore of GRB durations. There has been some recent dispute in the literature whether this mechanism can indeed be efficient and whether the power of the BH is ever significant relative to that from the disk. The answer in both cases is yes, as discussed by Lee, Wijers, & Brown. The issue, therefore, in finding efficient GRB sources among black holes is to find those that spin rapidly. There are a variety of reasons why a black hole might have high angular momentum. It may have formed from a rapidly rotating star, so the angular momentum was there all along (‘original spin’, according to Blandford); it may also have accreted angular momentum by interaction with a disk (‘venial spin’) or have formed by coalescence of a compact binary (‘mortal spin’). We shall review some of the specific situations that have been proposed in turn. Neutron star mergers are among the oldest proposed cosmological GRB sources, and especially the neutrino flux is still actively studied as a GRB power source. However, once the central mass has collapsed to a black hole it becomes a good source for BZ power, since it naturally spins rapidly due to inheritance of angular momentum from the binary. Likewise BH-NS binaries will rapidly transfer a large amount of mass once the NS fills its Roche lobe, giving a rapidly rotating BH. The NS remnant may then be tidally destroyed, leading to a compact torus around the BH.
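To get a feel for eqs. (66)–(67), the script below evaluates them for a fiducial $`7M_{\odot }`$ black hole threaded by a $`10^{15}`$ G field; both values are illustrative choices rather than numbers fixed above, and the ratio $`E_{\mathrm{BZ}}/P_{\mathrm{BZ}}`$ is quoted only as the natural duration scale implied by the two formulas.

```python
M, B15 = 7.0, 1.0                      # BH mass [Msun], field [10^15 G]

# Cross-check of the coefficient in eq. (66): 9% of one solar rest mass
print(0.09 * 2e33 * (3e10) ** 2)       # ~1.6e53 erg

E_BZ = 1.6e53 * M                      # erg, eq. (66)
P_BZ = 6.7e50 * B15 ** 2 * M ** 2      # erg/s, eq. (67)
print(E_BZ, P_BZ)                      # ~1.1e54 erg, ~3.3e52 erg/s
print(E_BZ / P_BZ)                     # ~34 s at this field strength
```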
It is unlikely that such a torus would be long-lived enough to produce the longer GRBs, but perhaps the short ($`t\lesssim 1`$ s) ones could be produced. However, mass transfer could stabilize and lead to a widening binary in which the NS lives until its mass drops to the minimum mass of about $`0.1M_{\odot }`$ and then becomes a debris torus. By then, it is far enough away that the resulting disk lifetime exceeds 1000 s, allowing even the longer GRBs to be made. Thus BH-NS and NS-NS binaries are quite promising. They have the added advantage that their environment is naturally reasonably clean, since there is no stellar envelope, and much of the initially present baryonic material vanishes into the horizon. In addition to the mergers from compact objects, Fryer & Woosley suggested that GRBs could originate from the coalescence of low-mass black hole and helium-star binaries in the Bethe & Brown scenario. From eq. (35) we see that binaries survived in the initial range of $`0.5\times 10^{13}\mathrm{cm}<a_i<1.9\times 10^{13}\mathrm{cm}`$. Below that range, for $`0.04\times 10^{13}\mathrm{cm}<a_i<0.5\times 10^{13}`$ cm, the low-mass black hole coalesces with the core. Hence, using a separation distribution flat in $`\mathrm{ln}a`$, coalescences are more common than low-mass black-hole, neutron-star binaries by a factor $`\mathrm{ln}(0.5/0.04)/\mathrm{ln}(1.9/0.5)=1.9`$. In Bethe & Brown the He star compact-object binary was disrupted $`50\%`$ of the time in the last explosion, which we do not have here. Thus, the rate of low-mass black-hole, He-star mergers is 3.8 times the formation rate of low-mass black-hole, neutron-star binaries which merge, or $`R=3.8\times 10^{-4}\mathrm{yr}^{-1}`$ in the Galaxy. In Table 2 we summarize the formation rates of GRBs and gravity waves from the binaries considered in this review. Because gamma-ray bursts have a median redshift of 1.5–2, and the supernova rate at that redshift was 10–20 times higher than now, the gamma-ray burst rate as observed is higher than one expects using the above rates. However, for ease of comparison with evolutionary scenarios we shall use the GRB rate at the present time (redshift 0) of about 0.1 GEM. (Wijers et al. found a factor 3 lower rate, but had slightly underestimated it because they overestimated the mean GRB redshift; see ref. for more extensive discussions of the redshift dependence). An important uncertainty is the beaming of gamma-ray bursts: the gamma rays may only be emitted in narrow cones around the spin axis of the black hole, and therefore most GRBs may not be seen by us. An upper limit to the ratio of undetected to detected GRBs is 600, so an upper limit to the total required formation rate would be 60 GEM. We may have seen beaming of about that factor or a bit less in GRB 990123, but other bursts (e.g. 970228, 970508) show no evidence of beaming in the afterglows (which may not exclude beaming of their gamma rays). At present, therefore, any progenitor with a formation rate of 10 GEM or more should be considered consistent with the observed GRB rate. An exciting possibility for the future will be to receive both gravitational-wave and gamma-ray burst signals from the same merger, with attendant detailed measurement, which would bear witness to their arising from the same binary. Because we dealt in this review with binaries, we did not explain one popular model of GRBs, the Woosley Collapsar model. In this model a black hole is formed in the center of a rotating W.-R. star.
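The branching between coalescences and surviving binaries quoted above is fixed by the two separation intervals and the flat-in-$`\mathrm{ln}a`$ distribution; the $`10^{-4}\mathrm{yr}^{-1}`$ merger rate in the last line is the Bethe & Brown value implied by $`R=3.8\times 10^{-4}\mathrm{yr}^{-1}`$.

```python
import math

coalesce = math.log(0.5 / 0.04)    # 0.04e13 cm < a_i < 0.5e13 cm
survive  = math.log(1.9 / 0.5)     # 0.5e13 cm  < a_i < 1.9e13 cm
print(coalesce / survive)          # ~1.9

# No disrupting final explosion here, while ~50% of the LMBH-NS births
# are disrupted, so relative to the LMBH-NS *merger* rate:
print(2 * coalesce / survive)      # ~3.8
print(3.8 * 1e-4)                  # ~3.8e-4 /yr in the Galaxy
```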
The outer matter of the W.-R. star can then be accreted into the black hole, spinning it up. If, however, magnetic turbulence is sufficient to keep the envelope of the progenitor in corotation with the core until a few days before collapse of the latter, as suggested by Phinney & Spruit, the He envelope could not furnish enough angular momentum to the black hole for the latter to drive the necessary jets (see the end of Section 8). ## Acknowledgements G.E.B. and C.H.L. wish to acknowledge support from the U.S. Department of Energy under Grant No. DE–FG02–88ER40388. We thank C. Bailyn for his help with Table 1 and discussions on black-hole transients.
# INFINITE NUCLEAR MATTER ON THE LIGHT FRONT: A MODERN APPROACH TO BRUECKNER THEORY ## 1 Outline This talk is divided into four parts. (1) What is the Light Front Approach? The basic idea is to use a “time” variable $`\tau =ct+z`$. (2) Why use it? Certain kinds of high energy experiments are best analyzed using light front or light cone variables. (3) Mean field theory results. (4) Nucleon-nucleon correlations. The way to include these, in any formalism, is Brueckner Theory. ## 2 What is Light Front Dynamics? This is a relativistic treatment of many-body dynamics in which the “time” variable is taken to be $$\tau =ct+z=x^0+x^3\equiv x^+.$$ (1) The canonically conjugate “energy” variable is $`p^0-p^3\equiv p^{-}`$. One of the “space” variables must be the orthogonal combination $`x^{-}\equiv t-z`$, with its canonically conjugate momentum $`p^0+p^3\equiv p^+`$. The other variables are $`\stackrel{}{x}_{\perp }`$ and $`\stackrel{}{p}_{\perp }`$. Our notation is $`A^\pm \equiv A^0\pm A^3`$. The point of this was noticed long ago by experimentalists. Consider a particle with a large velocity such that $`\stackrel{}{v}\simeq c\widehat{e_3}`$. In that case the momentum $`p^+`$ is BIG. An important consequence of using light front variables is that the usual relation between energy and momentum, $`p^\mu p_\mu =m^2`$, becomes $$p^{-}=\frac{1}{p^+}(p_{\perp }^2+m^2),$$ (2) so that one obtains a relativistic kinetic energy without a square root operator. Eq. (2) is of great use in separating relative and center-of-mass variables. Another feature is that here the vacuum is empty. It contains no virtual-pair states. ## 3 Motivation It is certainly possible to do quantum mechanics this way, but why? I think that the use of light front dynamics is mandated if one wants to correctly understand a large class of high energy nuclear reactions. The most prominent example is deep inelastic lepton scattering from nuclei. ### 3.1 $`x_{Bj}`$, Light Front Nuclear Physics and the EMC effect Deep inelastic scattering occurs when a virtual photon of four-momentum $`q`$ strikes a quark of momentum $`p`$ that originated from a nucleon of momentum $`k`$. In that case, $$x_{Bj}\equiv \frac{-q^2}{2Mq^0}=\frac{p^+}{k^+},$$ (3) for large enough values of $`q^2`$ and $`q^0`$. One studies, experimentally and theoretically, the ratio of a cross section (per nucleon) $`\sigma (A)`$ on a nucleus to that, $`\sigma (N)`$, on a nucleon. At high energies and momentum transfer one might think that the ratio $`\sigma (A)/\sigma (N)`$ would be very close to unity. The European Muon Collaboration found that this was not so: there is a depletion (the EMC effect), $`\sigma (A)/\sigma (N)\simeq 0.85`$, in the region $`x_{Bj}\simeq 0.5`$ for which valence quarks are dominant. If there is such a depletion, and momentum is conserved, there must be an enhancement of the momenta carried by other degrees of freedom. This could be manifest as an enhancement of nuclear pions for $`x_{Bj}\simeq 0.1`$. But there were many non-conventional theories of this effect, including swollen nucleons, six-quark clusters, and color conductivity through the entire nucleus. Almost immediately after the EMC effect was discovered we argued that another kind of experiment, Drell-Yan production of muon pairs, could be used to test the various theories of the EMC effect. No excess pions were discovered, and this was termed a crisis in nuclear physics by Bertsch et al.
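Returning for a moment to the kinematics of Section 2, the content of eq. (2) is just the mass-shell condition in light front variables; a short numerical check, with an arbitrary illustrative momentum, makes this explicit.

```python
import math

m, p3, p_perp2 = 0.938, 0.4, 0.3 ** 2     # GeV units; illustrative values
E = math.sqrt(p_perp2 + p3 ** 2 + m ** 2)

p_plus = E + p3
p_minus = (p_perp2 + m ** 2) / p_plus     # eq. (2)

print(p_plus * p_minus - p_perp2)         # = m^2: the mass-shell condition
print(E - p3, p_minus)                    # p_minus equals E - p3, as it must
```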
My opinion is that the conventional explanation of nuclear binding and related Fermi motion effects has never been properly evaluated because of the failure to re-derive nuclear wave functions using the formalism (as given in reviews) of light front dynamics. Thus, it has been our intention to provide realistic and relativistic calculations of nuclear wave functions using light front dynamics. ### 3.2 Formal aspects To make light front nuclear physics calculations we need to know the probability that a nucleon has a given value of $`k^+`$: $`f_N(k^+)`$. Similarly the distribution function for a pion is given by $`f_\pi (k^+)`$. I have emphasized deep inelastic scattering so far, but these quantities enter into the analysis of many experiments including the (e,e’p) and (p,pp) reactions. The consequence of taking $`\tau `$ of Eq. (1) as the time variable is that the distribution functions $`f_{N,\pi }`$ are simply related to the absolute square of the ground state wave function. If one uses the conventional equal time formulation, one finds that the same information is encoded in the response function, which involves matrix elements between the ground state and an infinite number of excited states. In light front dynamics, one only needs the ground state, but one has to obtain this from a consistent calculation. To illustrate the difficulty one may ask, “What is $`k^+`$?”. Many authors, including myself, have used the idea that $`k^+`$ is an energy plus a momentum to invoke a relation: $`k^+=M-ϵ_\alpha +k^3`$, where $`ϵ_\alpha `$ is an orbital binding energy. This relation is not correct. The variable $`k^+`$ is a continuous kinematic variable (akin to $`k^3`$ of the usual quantum mechanics). It is not related to any discrete eigenvalue. ## 4 Light Front Quantization Our motto is that we need a $`\mathcal{L}`$, no matter how bad! This is necessary in order to derive expressions for the operators $`P^\pm `$ which are the “momentum” and “Hamiltonian” of the theory. Consider, for example, the Walecka model (also called QHD1), $`\mathcal{L}(\varphi ,V^\mu ,\psi )`$. The degrees of freedom are the nucleon $`\psi `$, a neutral vector meson $`V^\mu `$, and a scalar meson $`\varphi `$. This is the simplest Lagrangian that can provide even a caricature of nuclear physics. Exchange of scalar mesons leads to a long ranged attractive potential and exchange of vector mesons leads to a shorter range and stronger repulsive potential. In this way, the nucleons are held together, but are not allowed to collapse. Given $`\mathcal{L}`$, one constructs the energy-momentum tensor, $`T^{\mu \nu }`$. In particular, $$P^\mu =\frac{1}{2}\int d^2x_{\perp }dx^{-}T^{+\mu }.$$ (4) A technical challenge is to express $`T^{+\mu }`$ in terms of independent variables. For example, the nucleon is usually treated as a 4-component spinor. But this particle has spin 1/2, so there are really only two independent degrees of freedom, denoted as $`\psi _+`$. One must express the remaining degrees of freedom in terms of $`\psi _+`$. ## 5 Infinite Nuclear Matter in the Mean Field Approximation (MFA) This simple limiting case is the first problem we consider. The idea behind the mean field approximation is that the sources of mesons are strong, so there are many mesons, which can be treated as classical fields. The volume is taken as infinite, so that all positions and spatial directions are equivalent. We treat nuclear matter in its rest frame here.
In that case the solution of the mesonic field equations leads to the results $$V^\pm =V^0=\frac{g_v}{m_v^2}\langle \psi ^{\dagger }(0)\psi (0)\rangle ;V_i=0,$$ (5) $$\varphi =-\frac{g_s}{m_s^2}\langle \overline{\psi }(0)\psi (0)\rangle ,$$ (6) in which the brackets represent ground state matrix elements. The fields $`\varphi ,V^\pm `$ are constants, so the nucleon modes are plane waves. One has a Fermi gas in which $`\psi \propto e^{ikx}`$ and $$i\partial ^{-}\psi _+=g_v\overline{V}^{-}\psi _++\frac{k_{\perp }^2+(M+g_s\varphi )^2}{k^+}\psi _+.$$ (7) The equations (5)–(7) are a self-consistent set of equations. ### 5.1 Nuclear Momentum Content One uses the energy-momentum tensor to determine $`P^\pm `$. One finds $`{\displaystyle \frac{P^{-}}{\mathrm{\Omega }}}=m_s^2\varphi ^2+{\displaystyle \frac{4}{(2\pi )^3}}{\displaystyle \int _F}d^2k_{\perp }𝑑k^+{\displaystyle \frac{k_{\perp }^2+(M+g_s\varphi )^2}{k^+}},`$ (8) $`{\displaystyle \frac{P^+}{\mathrm{\Omega }}}=m_v^2(V^{-})^2+{\displaystyle \frac{4}{(2\pi )^3}}{\displaystyle \int _F}d^2k_{\perp }𝑑k^+k^+,`$ (9) in which $`\mathrm{\Omega }`$ is the volume of the system. The Fermi sphere is determined by using an implicit definition of $`k^3`$: $$k^+\equiv \sqrt{(M+g_s\varphi )^2+\stackrel{}{k}^2}+k^3.$$ (10) Then one may show that the energy of the nucleus, $`E\equiv \frac{1}{2}\left(P^{-}+P^+\right)`$, is the same as for the Walecka model. This is a nice check on the calculation, because that model has been worked out in a manifestly covariant manner. Then the minimization $`\left(\frac{\partial (E/A)}{\partial k_F}\right)_\mathrm{\Omega }=0`$ determines the value of the Fermi momentum, $`k_F`$. This very same equation also sets $`P^+=P^{-}=M_A`$, a most welcome result. ### 5.2 Mean field results and implications The numerical calculation shows that the LF treatment reproduces the standard good results for the energy and density. But the explicit decomposition (9) allows us to determine that nucleons carry only 65% of the nuclear plus-momentum ($`M_A`$). A value of 90% is needed to explain the EMC effect (in infinite nuclear matter), so this is a problem. Furthermore, vector mesons carry a huge 35% of the plus-momentum. Because $`V^{-}`$ is constant in space-time, it has support only at $`k^+=0`$. This means that this effect requires a beam of infinite energy to be detected. These results, which conflict with experiments, might be artifacts of using infinite nuclear matter, or caused by the use of the MFA. More seriously, the $`\mathcal{L}`$ could be at fault. ### 5.3 Saving Mean Field Theory? A simple way to improve the phenomenology is to modify $`\mathcal{L}`$, for example by including scalar meson self-coupling terms $`\varphi ^3,\varphi ^4`$. A wide variety of parameter sets reproduce the binding energy and density of nuclear matter. For one set, nucleons carry 90% of $`P^+`$, so that vector mesons carry 10%. This could be acceptable. There is a problem with this parameter set: the related nuclear spin-orbit splitting is found to be too small. This is not so bad, since there are a variety of non-mean-field mechanisms which can supply a spin-orbit force. Thus one finds a need to go beyond mean field theory. This involves the introduction of light front Brueckner theory. ## 6 Light Front NN interaction The nucleon-nucleon potential is obtained from one boson exchange using another $`\mathcal{L}`$, which includes the effects of pions and other mesons absent from QHD1 and in which chiral symmetry is respected. The $`\tau `$-ordered perturbation theory rules give expressions which can be translated into the usual language.
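Before turning to the correlated calculation, the self-consistency of the mean-field equations (5)–(7) of the previous section can be made concrete with a small fixed-point iteration for the effective mass $`M+g_s\varphi `$. The coupling below is an illustrative Walecka-model-scale value, not the parameter set used here, so only the qualitative behaviour (a substantially reduced effective mass) should be read off.

```python
import math

C_S = 3.0e-4          # g_s^2/m_s^2 in MeV^-2 (assumed, illustrative)
M, K_F = 939.0, 270.0 # nucleon mass and Fermi momentum [MeV]

def scalar_density(m_star, n=4000):
    """rho_s = (4/(2 pi)^3) int_F d^3k m*/sqrt(k^2+m*^2), midpoint rule."""
    dk, rho = K_F / n, 0.0
    for i in range(n):
        k = (i + 0.5) * dk
        rho += k * k * m_star / math.sqrt(k * k + m_star * m_star) * dk
    return 2.0 / math.pi ** 2 * rho

m_star = M
for _ in range(100):   # damped fixed-point iteration of M* = M - C_S*rho_s
    m_star = 0.5 * m_star + 0.5 * (M - C_S * scalar_density(m_star))
print(m_star / M)      # ~0.6: a substantially reduced effective mass
```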
Schematically, in momentum space each one-boson-exchange contribution has the form $$V(\mathrm{meson})\sim \frac{1}{q_0^2-\stackrel{}{q}^2-\mu ^2},$$ (11) in which $`q^\mu `$ is the four-momentum transferred between nucleons and $`\mu `$ is the meson mass. This is the standard Yukawa form, except that the effects of retardation are included via the $`q_0`$ term. The kernel $`𝒦`$ is the sum of the meson exchanges: $$𝒦=\underset{\mathrm{meson}}{\sum }V(\mathrm{meson}),$$ (12) in which the mesons are the usual set of $`\pi ,\rho ,\omega ,\sigma ,\eta ,\delta `$. The potentials are strong, so their effects are taken into account to all orders by solving the light front version of the Lippmann-Schwinger equation. Schematically we write: $$ℳ=𝒦+\int \frac{d^2p_{\perp }d\alpha }{\alpha (1-\alpha )}𝒦\frac{2M^2}{P^2-\frac{p_{\perp }^2+M^2}{\alpha (1-\alpha )}+iϵ}ℳ,$$ (13) in which $`P`$ is the total four-momentum of the two nucleon system and $`p_{\perp },\alpha `$ are relative momenta. This equation does not seem to have rotational invariance, but this can be recovered by making a change of variables in which the z-component of the relative momentum is defined implicitly: $$\alpha \equiv \frac{E(p)+p^3}{2E(p)},$$ (14) with $`E(p)=\sqrt{p_{\perp }^2+p_3^2+M^2}`$. Then the integrand in the equation above is simplified: $$\frac{d^2p_{\perp }d\alpha }{\alpha (1-\alpha )}\frac{2M^2}{P^2-\frac{p_{\perp }^2+M^2}{\alpha (1-\alpha )}+iϵ}=\frac{M^2}{E(p)}\frac{d^3p}{P^2/4-E^2(p)+iϵ}.$$ (15) This is of the form of the Blankenbecler-Sugar equation, except that the effects of retardation must be included. Given this formalism, we followed the usual prescription of varying the meson-nucleon form factors to achieve a reasonably good description of the data. ## 7 Light Front Theory of Infinite Matter with NN Correlations I outline our detailed theory. The starting point is a Lagrangian decomposed into nucleon kinetic terms $`\mathcal{L}_0(N)`$, meson kinetic terms $`\mathcal{L}_0(\mathrm{mesons})`$ and meson-nucleon interactions $`\mathcal{L}_I(N,\mathrm{mesons})`$. Then $$\mathcal{L}=\mathcal{L}_0(N)+\mathcal{L}_I(N,\mathrm{mesons})+\mathcal{L}_0(\mathrm{mesons}).$$ (16) The two-nucleon one-boson-exchange potential (OBEP), $`𝒱(NN)`$, does not enter, so we add it and subtract it: $$\mathcal{L}=\mathcal{L}_0(N)-𝒱(NN)+(\mathcal{L}_I(N,\mathrm{mesons})+\mathcal{L}_0(\mathrm{mesons})+𝒱(NN))$$ (17) The term in parentheses accounts for the mesonic content of Fock space. One does perturbation theory in this operator to learn if one has chosen a nucleon-nucleon potential that is consistent with the chosen Lagrangian. The first term of Eq. (17) represents the standard nuclear many-body problem. One handles this by introducing the mean field $`U_{MF}`$: $$\mathcal{L}_0(N)-𝒱(NN)=\mathcal{L}_0(N)-U_{MF}+\left(U_{MF}-𝒱(NN)\right)$$ (18) We choose $`U_{MF}`$ in the usual way, according to the independent pair approximation. In that case the mean field is the folding of the scattering matrix with the nuclear density: $`U_{MF}\sim \left(\mathrm{Brueckner}\mathrm{G}\mathrm{matrix}\right)\times \rho .`$ (19) The result of all of these manipulations is that one obtains a full wave function which contains both nucleon-nucleon correlations and explicit mesons. This procedure is very similar to the usual many-body theory evaluated with equal time quantization. I stress the differences. The simplicity of the vacuum allows a relativistic theory to be derived using non-relativistic techniques. We are able to obtain light front plus-momentum distributions for nucleons and mesons. The only technical difference is that we include retardation effects in our OBEP. ### 7.1 Saturation Properties We find good results. The binding energy per nucleon is 14.7 MeV and $`k_F=1.37\mathrm{fm}^{-1}`$. The compressibility is 180 MeV.
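The change of variables of eqs. (14)–(15) can be verified numerically: with $`\alpha =(E(p)+p^3)/2E(p)`$ one finds $`d\alpha /[\alpha (1-\alpha )]=2dp^3/E(p)`$, which turns the light front propagator into the Blankenbecler-Sugar form. A finite-difference check at an arbitrary test point:

```python
import math

M, p_perp, p3 = 938.0, 200.0, 150.0        # MeV; arbitrary test point
P2 = (2.2 * M) ** 2                        # a sample P^2, off the pole

def E(x):
    return math.sqrt(p_perp ** 2 + x ** 2 + M ** 2)

def alpha(x):
    return (E(x) + x) / (2.0 * E(x))

a, h = alpha(p3), 1e-3
dalpha_dp3 = (alpha(p3 + h) - alpha(p3 - h)) / (2 * h)

lhs = (dalpha_dp3 / (a * (1 - a))
       * 2 * M ** 2 / (P2 - (p_perp ** 2 + M ** 2) / (a * (1 - a))))
rhs = (M ** 2 / E(p3)) / (P2 / 4.0 - E(p3) ** 2)
print(lhs, rhs)     # agree to the accuracy of the finite difference
```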
Given this, the interesting thing to do is to assess the influence of this calculation on nuclear structure functions.
## 8 Deep Inelastic Scattering and Drell-Yan Production
We find $`M+g_s\varphi =0.79M`$; this is very much larger than the mean field value of $`0.56M`$. As a result nucleons carry more than 84% of the nuclear plus-momentum. The 84% is obtained using only the uncorrelated Fermi-gas part of the wave function. We also estimate that including the 2p-2h correlations would lead to nucleons carrying more than $`90\%`$ of the plus-momentum. Including nucleons with momentum greater than $`k_F`$ would substantially increase the computed ratio $`F_{2A}/F_{2N}`$, because $`F_{2N}(x)`$ decreases very rapidly with increasing values of $`x`$ and because $`M^{}`$ would increase at high momenta. This is a good start to solving the problems mentioned in the earlier parts of this talk. Furthermore, we computed the total number of excess pions, and find that $`\frac{N_\pi }{A}=5\%`$. This is much smaller than the only previously computed result of 15%. The quantity $`N_\pi `$ is not a direct input into computations, but previous phenomenological calculations allow us to hope that the 5% would be consistent with Drell-Yan data. Our present conclusion is that light front dynamics leads to reasonable nuclear dynamics. The 90% and 5% numbers are an excellent start. Clearly, many things remain to be done with this approach. In the meantime, I would like to emphasize that Light Front Nuclear Physics exists! One can use it to understand any high energy nuclear reaction.
## Acknowledgments
This talk is based on work done with Rupert Machleidt. This work was supported in part by the U.S.D.O.E.
# Nonlinear sigma model study of a frustrated spin ladder
## I Introduction
Low-dimensional spin models continue to attract considerable attention of researchers, both in theoretical and experimental aspects. Since the famous 1983 prediction of Haldane of the different behaviour of Heisenberg spin chains with integer and half-integer value of spin $`S`$, which was based on a mapping to the nonlinear sigma model (NLSM), the NLSM approach has been recognized as an important tool in studying spin systems and has found numerous applications (see, e.g., for a review). Normally, the NLSM approach does not give good numerical results; however, it is usually able to capture the topology of the phase diagram. During the recent upsurge of interest in spin ladder models, several researchers have successfully applied the NLSM to describe an $`N`$-leg spin-$`S`$ ladder. In essential similarity to the case of a single chain, it was found that for half-integer $`S`$, ladders with an even or odd number of legs $`N`$ are respectively gapped and gapless. A natural question arose, namely, whether the properties of the gapped phase of, say, a two-leg spin-$`\frac{1}{2}`$ ladder are in some sense equivalent to those of the Haldane phase of a spin-$`1`$ chain. Several arguments were given in favour of a positive answer to the above question. In particular, it was shown that by adding extra interactions one can introduce a suitable generalization of the pure ladder model, increasing the number of parameters in the phase space, and then one can find a path in this generalized phase space which smoothly (i.e., without crossing any phase boundaries) leads from the ladder model to a certain composite representation of a spin-$`1`$ chain; moreover, it was demonstrated that a two-leg $`S=\frac{1}{2}`$ ladder has nonzero string order, which is believed to be a characteristic feature of the Haldane phase. On the other hand, it turned out that other generalizations may have very different properties; for instance, for the model of a “diagonal ladder” with additional equal-strength diagonal interactions, which also yields a composite-spin representation of a spin-$`1`$ chain, one finds numerically that the “usual” ladder is separated from the composite-spin-$`1`$ Haldane phase by a transition line. An exactly solvable model exhibiting similar features was also constructed. Recently, Kim et al. made an interesting observation, noticing that there are actually at least two different definitions of the string order for a two-leg spin-$`\frac{1}{2}`$ ladder (depending on whether one combines the $`S=\frac{1}{2}`$ spins on the rungs or on the diagonals). Exploiting the analogy with the topological quantum numbers which can be introduced for short-range valence bond states on a square lattice, they conjectured that those two definitions of the string order distinguish between two different Haldane-type phases. This assumption was supported by the results of the bosonization study of two generalizations of the ladder model. In the hope of gaining a better understanding of the physics of the spin ladder, and of searching for possible new phase transitions, we find it interesting to study the phase diagram of the generalized ladder model with unequal diagonal couplings.
We consider the model determined by the Hamiltonian $`\widehat{H}`$ $`=`$ $`J_L{\displaystyle \underset{\alpha =1,2}{\sum }}{\displaystyle \underset{i}{\sum }}𝑺_{\alpha ,i}𝑺_{\alpha ,i+1}+J_R{\displaystyle \underset{i}{\sum }}𝑺_{1,i}𝑺_{2,i}`$ (1) $`+`$ $`{\displaystyle \underset{i}{\sum }}(J_D𝑺_{1,i}𝑺_{2,i+1}+J_D^{\prime }𝑺_{2,i}𝑺_{1,i+1}),`$ (2) where $`𝑺_{\alpha ,i}`$ are spin-$`S`$ operators at the $`i`$-th rung, and $`\alpha =1,2`$ distinguishes the ladder legs. The model is schematically shown in Fig. 1. At $`J_D=J_D^{\prime }=0`$ one recovers a regular ladder, while at $`J_D^{\prime }=0`$ the model is equivalent to a zigzag chain with alternation of the nearest-neighbour interaction. Interchanging $`J_D`$ and $`J_D^{\prime }`$ is obviously equivalent to interchanging the legs of the ladder, so that it is sufficient to restrict ourselves to the case $`J_D\ge J_D^{\prime }`$. The point $`J_D=J_D^{\prime }`$ is in a certain sense special, since it allows an additional symmetry operation: interchanging the spins on every other rung is then equivalent to interchanging $`J_D`$ and $`J_L`$. The phase space of the model is three-dimensional and is determined by the three ratios of exchange constants, e.g., $`J_D/J_R`$, $`J_D^{\prime }/J_R`$, $`J_L/J_R`$. We will show that for $`J_D\ne J_D^{\prime }`$ the phase diagram of the above model always possesses $`2S`$ gapless phase planes, which split off the boundary to the fully saturated ferromagnetic phase at $`J_D=J_D^{\prime }`$. We consider in detail the most interesting case $`S=\frac{1}{2}`$ and show that the gapless plane is an extension of one of the transition lines discussed in Ref. . In the next section we briefly describe the mapping to the NLSM, Sect. III contains the discussion of the results, and, finally, Sect. IV gives a brief summary.
## II Results of the mapping to the nonlinear sigma model
To map the model (1) to a NLSM, we use the well-known technique of the spin coherent states path integral; this technique is well described in reviews and textbooks, and here we will not give a complete derivation but rather indicate only the main steps. We choose a four-spin plaquette as an elementary magnetic cell; then there are four classical ground states commensurate with this choice of cell, namely a ferromagnetic state (F) and three modulated states shown in Fig. 1 and denoted as (A), (B) and (C). At the $`n`$-th plaquette we introduce four variables $`𝐦_n`$, $`𝐥_n`$, $`𝐮_n`$, $`𝐯_n`$, defined as the following linear combinations of the “classical” spin vectors (parameters of the coherent states): $`𝐥_n`$ $`=`$ $`{\displaystyle \frac{1}{4S}}(𝐒_{1,2n-1}+𝐒_{2,2n-1}-𝐒_{1,2n}-𝐒_{2,2n}),`$ (3) $`𝐦_n`$ $`=`$ $`{\displaystyle \frac{1}{4S}}(𝐒_{1,2n-1}+𝐒_{2,2n-1}+𝐒_{1,2n}+𝐒_{2,2n}),`$ (4) $`𝐮_n`$ $`=`$ $`{\displaystyle \frac{1}{4S}}(𝐒_{1,2n}+𝐒_{2,2n-1}-𝐒_{1,2n-1}-𝐒_{2,2n}),`$ (5) $`𝐯_n`$ $`=`$ $`{\displaystyle \frac{1}{4S}}(𝐒_{2,2n-1}+𝐒_{2,2n}-𝐒_{1,2n-1}-𝐒_{1,2n}),`$ (6) which satisfy the following four constraints: $`𝐦^2+𝐥^2+𝐮^2+𝐯^2=1,`$ (7) $`(𝐦+𝐥)(𝐮+𝐯)=0,`$ (8) $`(𝐦-𝐥)(𝐮-𝐯)=0,`$ (9) $`(𝐦+𝐯)(𝐮+𝐥)=0.`$ (10) Those variables we consider as smoothly varying functions of the space coordinate $`x_n=na`$ when passing to the continuum limit; one should mention that the above ansatz is essentially similar to that used by Sénéchal. The advantage of the ansatz (3) is that it conserves the total number of the degrees of freedom, which is important to avoid ambiguities in the mapping, as was recently realized on the example of inhomogeneous spin chains.
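As a concrete, if brute-force, handle on the model, the Hamiltonian (1) can also be diagonalized exactly for a few rungs. The sketch below is our illustration, not part of the NLSM analysis; the couplings are arbitrary test values. It builds the $`S=\frac{1}{2}`$ ladder with periodic boundary conditions and prints the ground state energy and the gap.

```python
import numpy as np
from functools import reduce

sx = np.array([[0, 1], [1, 0]]) / 2.0
sy = np.array([[0, -1j], [1j, 0]]) / 2.0
sz = np.array([[1, 0], [0, -1]]) / 2.0

def op(single, site, nsites):
    """Embed a single-site operator at `site` in the 2^nsites Hilbert space."""
    mats = [np.eye(2)] * nsites
    mats[site] = single
    return reduce(np.kron, mats)

def heisenberg(i, j, nsites):
    """S_i . S_j as a dense matrix."""
    return sum(op(s, i, nsites) @ op(s, j, nsites) for s in (sx, sy, sz))

def ladder_hamiltonian(L, JR, JL, JD, JDp):
    """Sites: leg 1 = 0..L-1, leg 2 = L..2L-1; periodic along the legs."""
    n = 2 * L
    H = np.zeros((2**n, 2**n), dtype=complex)
    for i in range(L):
        ip = (i + 1) % L
        H += JL * (heisenberg(i, ip, n) + heisenberg(L + i, L + ip, n))
        H += JR * heisenberg(i, L + i, n)
        H += JD * heisenberg(i, L + ip, n) + JDp * heisenberg(L + i, ip, n)
    return H

# L = 4 rungs -> 8 spins, a 256 x 256 matrix: trivial to diagonalize.
E = np.linalg.eigvalsh(ladder_hamiltonian(4, JR=1.0, JL=1.0, JD=0.5, JDp=0.2))
print("ground state:", E[0], " gap:", E[1] - E[0])
```

Such small-cluster checks scale exponentially with the number of rungs, but they usefully complement the field-theoretic treatment that follows.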
The order parameter for the four commensurate classical ground state configurations F, A, B, C is respectively $`𝐦`$, $`𝐮`$, $`𝐯`$, $`𝐥`$. Comparing the energies of those configurations, one may obtain a “draft” of the classical phase diagram which neglects the presence of any incommensurate ground states; for the moment we are mainly interested in the commensurate antiferromagnetic part, and the conditions for the existence of spiral phases will be obtained later. One thus may treat $`𝐦`$ as a small fluctuation, and obtain different field descriptions starting from one of the configurations A, B, C. Massive degrees of freedom can be integrated out in the usual way, and in each case one obtains the final effective action in the form of a NLSM, $`𝒜_{\mathrm{eff}}/\hbar `$ $`=`$ $`{\displaystyle \frac{1}{2g}}{\displaystyle \int 𝑑\xi 𝑑\tau \{(\partial _\tau 𝐧)^2-(\partial _\xi 𝐧)^2\}}`$ (11) $`+`$ $`{\displaystyle \frac{\theta }{4\pi }}{\displaystyle \int 𝑑\xi 𝑑\tau 𝐧(\partial _\xi 𝐧\times \partial _\tau 𝐧)},`$ (12) where $`𝐧`$ is the corresponding order parameter, and $`\xi =x/a`$, $`\tau =ct/a`$ are dimensionless space-time variables, $`a`$ being the lattice constant along the legs direction. For each of the classical “phases” A, B, C the coupling constant $`g`$ and the topological angle $`\theta `$ are given by the following expressions: Phase A: $`J_R+2J_L>0`$, $`J_D^+<J_R`$, $`J_D^+<2J_L`$. $`g_A={\displaystyle \frac{J_R+2J_L}{2S\sqrt{W_A}}},\theta _A=0\text{mod}\mathrm{\hspace{0.17em}2}\pi ,`$ (14) $`W_A={\displaystyle \frac{1}{4}}(J_R+2J_L)\left\{2J_L-J_D^+-{\displaystyle \frac{(J_D^{-})^2}{J_R-J_D^+}}\right\}>0.`$ Phase B: $`J_R+J_D^+>0`$, $`2J_L<J_D^+`$, $`2J_L<J_R`$. $`g_B={\displaystyle \frac{J_R+J_D^+}{2S\sqrt{W_B}}},\theta _B={\displaystyle \frac{4\pi SJ_D^{-}}{J_R+J_D^+}},`$ (15) $`W_B={\displaystyle \frac{1}{4}}\left\{(J_R+J_D^+)(J_D^+-2J_L)-(J_D^{-})^2\right\}>0.`$ (16) Phase C: $`J_D^++2J_L>0`$, $`J_R<J_D^+`$, $`J_R<2J_L`$. $`g_C={\displaystyle \frac{J_D^++2J_L}{2S\sqrt{W_C}}},\theta _C=0\text{mod}\mathrm{\hspace{0.17em}2}\pi ,`$ (17) $`W_C={\displaystyle \frac{1}{4}}(J_D^++2J_L)\left\{2J_L+J_D^+-{\displaystyle \frac{(J_D^{-})^2}{J_D^+-J_R}}\right\}>0.`$ (18) Here for the sake of convenience we have introduced the notation $`J_D^\pm \equiv J_D\pm J_D^{\prime }.`$ The spin wave velocity for each case is given by $`c=2\sqrt{W}Sa/\hbar `$. The inequalities define the boundaries of the domains of validity of the corresponding mapping (not all of them represent real phase boundaries, as will be discussed later). The boundaries defined by $`W_{A,B,C}=0`$ represent just the classical conditions for the transition into a spiral phase. One may observe that there is no spiral phase at $`J_D^{-}=0`$. Phase F has to be considered separately, and it is easy to obtain its boundaries using linear spin wave theory. There are two magnon branches with the energies $`\epsilon _\pm (q)`$ $`=`$ $`-S(J_R+J_D^++2J_L)+2SJ_L\mathrm{cos}q`$ (19) $`\pm `$ $`S\left\{(J_R+J_D^+\mathrm{cos}q)^2+(J_D^{-}\mathrm{sin}q)^2\right\}^{1/2},`$ (20) and from the condition of positiveness of $`\epsilon _\pm `$ it is easy to obtain the boundaries of the F phase. They are determined by the inequalities $`J_R+J_D^+<0,J_R+2J_L<0,`$ (21) $`W_F\equiv -2J_L-J_D^++(J_D^{-})^2/(J_R+J_D^+)>0.`$ (22) At $`W_F=0`$, $`\epsilon _{}(q)`$ changes sign at once in a finite interval of wave vectors near $`q=0`$, signaling the first-order transition; $`\epsilon _{}(q=\pi )`$ vanishes at the line $`J_R+2J_L=0`$, and $`\epsilon _+(q=0)`$ becomes zero at the line $`J_R+J_D^+=0`$.
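The spin-wave criterion above is easy to check numerically. The following sketch is ours (it assumes the sign conventions of Eq. (19) as written here, and the couplings are arbitrary test values); it scans both magnon branches across the Brillouin zone and reports whether the ferromagnetic state is locally stable.

```python
import numpy as np

def magnon_branches(q, JR, JL, JDp, JDm, S=0.5):
    """epsilon_+/- of Eq. (19); JDp = J_D^+, JDm = J_D^-."""
    base = -S * (JR + JDp + 2 * JL) + 2 * S * JL * np.cos(q)
    root = S * np.sqrt((JR + JDp * np.cos(q))**2 + (JDm * np.sin(q))**2)
    return base + root, base - root

q = np.linspace(0.0, np.pi, 2001)
for JL in (-0.8, 0.2):                      # one stable, one unstable case
    ep, em = magnon_branches(q, JR=-1.0, JL=JL, JDp=0.3, JDm=0.2)
    stable = bool((ep >= -1e-12).all() and (em >= -1e-12).all())
    print(f"J_L = {JL}: ferromagnet locally stable: {stable}")
```

For the parameters above, the second choice violates $`W_F>0`$ of Eq. (22), and indeed $`\epsilon _{}(q)`$ dips below zero near $`q=0`$.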
One can see that only in the (B) case is there a nontrivial topological term, and the condition of gaplessness $`\theta =(2n+1)\pi `$ yields $$J_D^{-}=\frac{2n+1}{4S}(J_R+J_D^+),n=0,1,\mathrm{},2S-1.$$ (23) One can see that the $`2S`$ gapless planes (23) exist only at nonzero $`J_D^{-}`$, and at $`J_D^{-}=0`$ they split off the boundary $`J_R+J_D^+=0`$ to the ferromagnetic phase.
## III Discussion
Let us concentrate on the case $`S=\frac{1}{2}`$ as the most important one. For $`S=\frac{1}{2}`$, a sketch of the resulting phase diagram is presented in Fig. 2 in the form of two-dimensional slices through the phase space at three fixed values of $`J_D^{-}`$ ($`J_R`$ is considered to be positive). At $`J_D^{-}=0`$ there are no gapless lines other than the boundaries of the ferromagnetic phase, and there is no spiral phase. The coupling constants $`g_A`$, $`g_B`$ diverge at the (AB) boundary $`J_D^+=2J_L`$, which indicates that this classical phase boundary gets destroyed by quantum fluctuations. On the other hand, all the coupling constants remain finite at the (BC) and (AC) boundaries, but they undergo a jump when crossing the boundaries, which suggests a first-order transition. This is in agreement with the numerical and bosonization studies showing the presence of a first-order transition, with $`J_D=J_R/2`$ being the asymptote for the transition line at $`J_L\to \mathrm{}`$. There is also a “mirror” transition line $`J_L=J_R/2`$ due to the $`J_D\leftrightarrow J_L`$ symmetry. According to the classification of Ref. , those two first-order transition lines separate two topologically different Haldane-type phases with $`𝒪_{\mathrm{even}}\ne 0`$ (phases A, B) and $`𝒪_{\mathrm{odd}}\ne 0`$ (phase C); below we refer to those two phases as H1 and H2 (see Fig. 2(a)). At finite $`J_D^{-}`$, the spiral phase (S) appears classically in a finite region of the phase diagram. The S region is for us just a “white spot” which cannot be treated within the present approach; to construct an effective description of the (S) phase, one has to employ different techniques. At finite $`J_D^{-}`$ the gapless line $`J_D^+=2J_D^{-}-J_R`$ starts to split off the (BF) boundary $`J_D^+=-J_R`$, and the (FS) boundary becomes first order, as one sees from the behaviour of $`\epsilon _{}(q)`$ (cf. (19)). The coupling constants $`g_{A,B,C}`$ diverge at the boundaries to the S phase, which suggests destruction of any (quasi-)long-range order, and thus one may expect that the gapless line terminates at the (BS) boundary, though it may in principle continue as a first-order transition line. It is worthwhile to look at the particular case $`J_D^{-}=J_R`$. According to Ref. , the gapless line $`J_D^+=J_R`$ in this case also separates two phases with different string order, which implies that the lower portion of the B phase belongs to the H2 class. It is also known that in this case the gapless line continues at larger $`J_L`$ as a first-order line (recall that $`J_D^{-}=J_D^+=J_R`$ corresponds to the uniform spin chain with next-nearest-neighbour interaction). Thus, it becomes clear that additional phase boundaries should exist somewhere inside the spiral “phase,” to achieve a proper separation of the H1- and H2-type phases (see Fig. 2(b,c)). This could be an interesting topic for future work.
## IV Summary
We have studied the phase diagram of the generalized ladder model with unequal diagonal couplings $`J_D`$, $`J_D^{\prime }`$ within the framework of the nonlinear sigma model.
We show that the phase diagram has a rich structure, including several first- and second-order transition boundaries. There exist $`2S`$ gapless phase boundaries, which split off the boundary to the ferromagnetic phase at $`J_D=J_D^{\prime }`$. We consider the case $`S=\frac{1}{2}`$ in more detail and show that the gapless plane is an extension of one of the transition lines discussed in Ref. which separate Haldane-type phases with different topological order parameters. Still, several features of the phase diagram remain unclear and require further study.
## ACKNOWLEDGMENTS
This work was supported by the German Federal Ministry for Research and Technology (BMBFT) under the contract 03MI5HAN5. A.K. gratefully acknowledges the hospitality of the Hannover Institute for Theoretical Physics. C.N. was supported by the DFG-Graduiertenkolleg “Quantum Field Theory Methods in Particle Physics, Gravity, and Statistical Physics”.
# Two-phase behavior in strained thin films of hole-doped manganites
## Abstract
We present a study of the effect of biaxial strain on the electrical and magnetic properties of thin films of manganites. We observe that manganite films grown under biaxial compressive strain exhibit an island growth morphology, which leads to a non-uniform distribution of the strain. Transport and magnetic properties of these films suggest the coexistence of two different phases, a metallic ferromagnet and an insulating antiferromagnet. We suggest that the high-strain regions are insulating while the low-strain regions are metallic. In such non-uniformly strained samples, we observe a large magnetoresistance and a field-induced insulator-to-metal transition. 72.15.Gd, 68.55.-a, 81.15.Fg
Hole-doped manganites display a remarkable sensitivity to various perturbations, and this sensitivity results in drastic changes in the sample properties depending on the form of the sample. Thin manganite films display properties different from those of bulk materials, and several papers have argued that the difference is due to strain induced by lattice mismatch. Lattice mismatch strain is a biaxial strain which modifies the lattice parameters of the film, and it has been shown that biaxial strain has an effect which is fundamentally different from that of bulk strain. Compressive bulk strain drives the lattice towards cubic symmetry, whereas compressive biaxial strain further distorts the lattice. It is essential to understand the effect of substrate-induced strain on manganite thin films in order to explain the behavior of thin films and multilayers of these materials. In this paper we present evidence that thin films of La<sub>0.67</sub>Ca<sub>0.33</sub>MnO<sub>3</sub> ($`\sim `$150 Å in thickness) grown under compressive lattice mismatch strain are structurally, magnetically and electronically non-uniform. We show that this is due to structural non-uniformity caused by the island growth mechanism. This phenomenon is well established in semiconductor heterostructures, which are also thin films grown under biaxial strain, and has been studied extensively, both experimentally and theoretically. Studies of the kinetics of the growth of these heterostructures have shown that the minimum energy configuration is a non-uniform strain distribution in the film resulting from the formation of islands. A continuous wetting layer a few monolayers thick covers the substrate first, and islands are nucleated above this layer on further growth of the film. This island growth mode leads to a variation in the strain on the film, both in the direction normal to the substrate and also along the plane of the substrate, creating regions in the film which are strain-relaxed (near the top of the islands) and also some regions (near the periphery of the islands) which have extremely high strain, i.e., much higher than the lattice mismatch strain. This type of strain distribution limits the lateral growth of the islands, resulting in a uniform island size over the entire film. This happens due to the diffusion of adatoms away from regions of higher strain, as has been observed experimentally. All these factors can lead to structural transitions in the highly strained regions of the film due to the strain itself and/or due to the resultant migration of adatoms.
Motivated by the idea that the high sensitivity of the properties of manganites to changes in structure and stoichiometry should result in interesting effects when these materials are subjected to a large non-uniform strain, we have grown thin films of La<sub>0.67</sub>Ca<sub>0.33</sub>MnO<sub>3</sub> ($`\sim `$150 Å) with different amounts of lattice mismatch with the substrate and have studied the resulting differences in growth morphology, magnetization and transport. Conductivity and magnetization measurements indicate that the film grown under compressive strain due to lattice mismatch is a mixture of ferromagnetic (metallic) and antiferromagnetic (insulating) regions. Atomic force microscopy (AFM) and transmission electron microscopy (TEM) experiments confirm the island growth of the strained film and a non-uniform distribution of strain over the film. We suggest that the high-strain regions are at the edges of the islands and are insulating, and the low-strain regions are at the top of the islands and are metallic. The difference in properties may be either a direct effect of the strain on the electronic properties or due to strain-induced cation diffusion. Thin films of La<sub>0.67</sub>Ca<sub>0.33</sub>MnO<sub>3</sub> (LCMO), 150 Å in thickness, were grown on (001) LaAlO<sub>3</sub> (LAO) and (110) NdGaO<sub>3</sub> (NGO) substrates by pulsed laser deposition (PLD). On LAO there is a compressive lattice mismatch strain of $`\sim `$2% for a film of LCMO, while on NGO this strain is negligible. The films were grown at a rate of $`\sim `$1 Å/sec. The substrate temperature was 820 °C. The films were grown in an oxygen atmosphere of 400 mTorr. The thicknesses were measured with a Dektak IIA profilometer. The resistivities were measured by the conventional four-probe method, and the DC magnetization was measured using a SQUID magnetometer. The lattice parameters were measured using a Siemens D5000 diffractometer equipped with a four-circle goniometer. The in-plane lattice constant measurements showed that the films were pseudomorphic with the substrate for this range of film thickness. Figure 1 shows the resistivity behavior of a 150 Å film of LCMO on LAO. The figure also shows the resistivity behavior of a 150 Å film of LCMO on NGO (dashed line). This figure clearly shows the drastic effect of lattice mismatch strain on the transport properties. The film of LCMO on LAO is insulating. In contrast, the film of the same thickness on the lattice-matched substrate NGO shows a resistivity behavior very close to that of the bulk. Figure 2 shows the magnetization of the strained film grown on LAO as a function of temperature. The magnetization (M) starts rising around 250 K, but this rise is much slower than what is observed in thicker films of LCMO on LAO. The inset shows the $`M`$ vs. $`H`$ curve for the film on LAO at 5 K. The saturation value of $`M`$ ($`M_{sat}`$) is $`\sim `$1.8 $`\mu _B`$, which is about 50% of the expected $`M_{sat}`$ = 3.67 $`\mu _B`$ for this compound. This shows that about half the volume of the film is not ferromagnetic at low temperatures.
The magnetization of the film on NGO could not be measured due to the paramagnetic nature of the substrate; however, the correspondence of the resistivity behavior to that of the bulk compound suggests that the magnetization is the same as in the bulk material. A striking feature of our data is that the application of a strong magnetic field causes the low temperature insulating state to become metallic in the strained film. In figure 1 we show that the resistivity of the LCMO film on LAO in a field of 8.5 Tesla has an insulator-to-metal transition near 200 K. The inset in figure 1 shows the $`\rho `$ vs. $`H`$ behavior of the film on LAO at three temperatures. At 100 K, $`\rho `$ drops by about 4 orders of magnitude in a field of 8 T. There is also a significant hysteresis in the $`\rho `$ vs. $`H`$ curve at 100 K. This behavior of $`\rho `$ with $`H`$ is both quantitatively and qualitatively different from the magnetoresistance behavior of bulk ceramic La<sub>0.67</sub>Ca<sub>0.33</sub>MnO<sub>3</sub>. In the latter, the presence of grain boundaries causes a small but sharp drop in the resistivity at low fields, followed by a gradual decrease in the resistivity at higher fields. Our $`\rho `$ vs. $`H`$ data are also different from the low field magnetoresistance observed in strained ultra-thin films of Pr<sub>0.67</sub>Sr<sub>0.33</sub>MnO<sub>3</sub>, which was attributed to domain wall scattering. On the other hand, the magnitude of the magnetoresistance and the hysteresis in the $`\rho `$ vs. $`H`$ curve at low temperatures are similar to those observed in materials which exhibit charge ordering. Our data also resemble those seen in the compound (La,Pr,Ca)MnO<sub>3</sub>, now believed to consist of a two-phase coexistence of ferromagnetic metallic and charge-ordered insulating phases, where the field-driven insulator-to-metal transition is induced by a change of the metal volume fraction through a percolation threshold. On the basis of this similarity and the magnetic and transport data discussed above, we argue that the biaxially strained thin film of LCMO grown on LAO exhibits two-phase coexistence, whereas the film grown on the lattice-matched substrate NGO does not. The origin of the large magnetoresistance in the strained film is this phase separation: the magnetic field drives the insulating phase to a metallic phase, leading to a metallic conduction path in the film. We emphasize that for this composition of LCMO ($`x`$=0.3), a highly insulating state due to phase separation has not been observed in the bulk form. So the important question is: what is the origin of the two-phase coexistence observed in the film grown under biaxial strain, which results in properties far removed from those of the bulk form of this compound? To answer this we take AFM images of our films, as shown in figures 3 and 4. At the outset we stress that discontinuities in the film are not the cause. From the AFM micrographs, we have calculated the roughness of the film on LAO to be about 15 Å, which is much smaller than the thickness of the film. This, and the fact that a magnetic field drives the film metallic at low temperatures, shows that the film on LAO is continuous. It is clear from figures 3 and 4 that there are significant differences in the nanostructure depending on the strain. The film of LCMO on NGO has negligible substrate-induced strain, and the film shows a step-flow growth mode. The height of each step is marked in figure 4c. The $`\rho `$ vs.
$`T`$ of this film is very close to that of bulk LCMO, as shown earlier. The film of LCMO on LAO, which has about 2% strain, has an island growth mode. As shown in figure 1, this film is insulating down to the lowest temperatures. As discussed earlier, island growth leads to a highly non-uniform distribution of the strain. Such a variation of strain in the film may also lead to migration of the constituent atoms, resulting in a compositional inhomogeneity on the scale of the variation of the strain. The large strain at the edge of the islands leads to insulating, and perhaps charge-ordered, regions due to structural and/or compositional variations. As mentioned earlier, the tops of the islands are relatively strain-free, and these regions are ferromagnetic (metallic) but are separated by the insulating regions at the periphery of the islands. Our resistivity data in a field of 8.5 T suggest that enough of the insulating regions are driven metallic at this field that a metallic path is formed in the sample joining the ferromagnetic metallic regions in the film, and consequently there is a large drop in the resistivity of the sample upon the application of a magnetic field. The magnetization measurements at 5 K show that the film on LAO has a saturation magnetization value of $`\sim `$1.8 $`\mu _B`$, while the expected $`M_{sat}`$ for this composition is 3.67 $`\mu _B`$, which suggests that a significant part ($`\sim `$50%) of the film is not in the ferromagnetic state at low temperatures. The saturation magnetization approaches 3.67 $`\mu _B`$ as the thickness of the films on LAO is increased, i.e., as the effect of the substrate-induced strain becomes less. Another observation is that on annealing in flowing oxygen at a temperature of 850 °C for 10 hours, the film on LAO acquires the resistivity behavior and lattice parameters found in thicker films of LCMO on LAO and a saturation magnetization of 3.4 $`\mu _B`$. The AFM images suggest a significant increase in the size of the islands, but more controlled experiments are required. This strengthens our claim that the insulating behavior is due to strain-induced structural and compositional variations which are removed by annealing the film in oxygen. To get a better picture of the variation of the strain and composition over the film on LAO, cross-sectional TEM (XTEM) studies were performed on a 1500 Å film of LCMO on LAO. A thicker film was used for reasons of sample preparation for the XTEM studies. We assume that the first 150 Å of this sample is the same as the 150 Å film on LAO. Figure 5a shows that on the scale of the distance between two islands (as estimated from the AFM images in figure 3c) there is a significant variation in the contrast of the image. The arrows mark the regions where there is a clear demarcation between two regions of similar contrast. These are the edges of the islands, and the distance between the regions marked by the arrows is of the order of the distance between islands as seen from the profile of the AFM image shown in figure 3c (i.e., $`\sim `$500 Å). Figure 5b is a schematic diagram showing the expected regions of low and high strain. The variation in the contrast shows a variation of strain and/or stoichiometry in the film, both of which are expected in the growth of these thin films on lattice-mismatched substrates. The properties of hole-doped manganites are sensitive to both these factors. The structure and the stoichiometry affect the transport of the material by tuning the number and mobility of carriers and the bandwidth.
Although very large changes in stoichiometry would be required to produce an effective Ca doping of $`x<0.2`$ or $`x>0.5`$, this gives us a possible mechanism for having charge-ordered (insulating) regions in the film corresponding to the regions of very high strain, i.e., at the edges of the islands. There is also a significant variation of the contrast in the image very near the substrate, which reveals the initial wetting layer of the film. An earlier study of the near-interface transport properties of La<sub>0.67</sub>Ca<sub>0.33</sub>MnO<sub>3</sub> ultra-thin films grown on LAO and NGO substrates revealed a surface- and interface-related “dead layer” of about 30-50 Å, depending on the substrate. This “dead layer” could arise from this wetting layer. We would like to add that the effect of tensile strain on the magnetic and transport properties of LCMO is similar to what is observed here. Zandbergen et al. observe a reduced saturation magnetization and a large magnetoresistance at low temperatures in their ultra-thin films of LCMO grown on SrTiO<sub>3</sub> (STO). These properties are attributed to the distortions induced in the film by the lattice mismatch, as inferred from high resolution XTEM experiments. These films, grown under tensile strain, remain insulating at low temperatures even upon application of a field of 8 T. In a recent paper Fäth et al. have shown scanning tunneling spectroscopy data on thin films of LCMO grown on STO substrates which suggest a two-phase behavior in the film. On application of a magnetic field the metallic phase grows at the expense of the insulating phase, and the authors show a correspondence between this and the colossal magnetoresistive properties of the material. An LCMO film grown on STO is under tensile biaxial strain, which should likewise result in a non-uniform distribution of the strain. Based on our results discussed here, this non-uniformity in the strain is a likely origin of the observed two-phase behavior. In conclusion, we propose the following model to explain the properties of LCMO grown on LAO, a film which is under compressive biaxial strain. The film grows in the form of islands. The edges of the islands are regions of high strain and are insulating due to changes in structure and/or stoichiometry. The tops of the islands are relatively strain-free, and only these parts are ferromagnetic and conducting at low temperatures; thus a two-phase state is formed. This explains the reduced saturation magnetization at low temperatures. The insulating regions separating the islands make the film insulating down to the lowest temperatures. The insulating regions are driven to a metallic state upon application of a magnetic field, which results in a large decrease in the resistivity of the film. For a direct measure of the magnetization in different parts of the film, low-temperature magnetic force microscopy measurements in the presence of a magnetic field are underway. This work is partially supported by the MRSEC program of the NSF at the University of Maryland, College Park (Grant DMR96-32521). AJM acknowledges NSF-DMR-9705482.
# Scale Invariance and Lack of Self-Averaging in Fragmentation
## Abstract
We derive exact statistical properties of a class of recursive fragmentation processes. We show that introducing a fragmentation probability $`0<p<1`$ leads to a purely algebraic size distribution in one dimension, $`P(x)\sim x^{-2p}`$. In $`d`$ dimensions, the volume distribution diverges algebraically in the small fragment limit, $`P(V)\sim V^{-\gamma }`$ with $`\gamma =2p^{1/d}`$. Hence, the entire range of exponents allowed by mass conservation is realized. We demonstrate that this fragmentation process is non-self-averaging. Specifically, the moments $`Y_\alpha =\sum _ix_i^\alpha `$ exhibit significant fluctuations even in the thermodynamic limit. PACS numbers: 05.40.+j, 64.60.Ak, 62.20.Mk
Numerous physical phenomena are characterized by a set of variables, say $`\{x_j\}`$, which evolve according to a random process and are subject to the conservation law $`\sum _jx_j=\mathrm{const}`$. An important example is fragmentation, with applications ranging from geology and fracture to the breakup of liquid droplets and atomic nuclei. Other examples include spin glasses, where $`x_j`$ represents the equilibrium probability of finding the system in the $`j^{\mathrm{th}}`$ valley, genetic populations, where $`x_j`$ is the frequency of the $`j^{\mathrm{th}}`$ allele, and random Boolean networks. In most cases, stochasticity governs both the way in which fragments are produced and the number of fragmentation events they experience. For example, in the fragmentation of solid objects due to impact with a hard surface, fragments may bounce several times before coming to rest. The typical number of fragmentation events may vary greatly, as it depends on the initial kinetic energy. Another, seemingly unrelated, example is DNA segmentation algorithms, where homogeneous subsequences are produced recursively from an inhomogeneous sequence until a predefined homogeneity level is reached. Here, the number of segmentation events is determined by the degree of homogeneity of the original sequence. In this study, we examine fragmentation models with two types of objects: stable towards fragmentation and unstable. We show that the size distribution is algebraic, and that the entire range of power laws allowed by the underlying conservation laws can be realized by tuning the fragmentation probability. Additionally, such processes are characterized by large sample-to-sample fluctuations, as seen from an analysis of the moments of the fragment size distribution. Specifically, we consider the following recursive fragmentation process. We start with the unit interval and choose a break point $`l`$ in $`[0,1]`$ with a uniform probability density. Then, with probability $`p`$, the interval is divided into two fragments of lengths $`l`$ and $`1-l`$, while with probability $`q=1-p`$, the interval becomes “frozen” and is never fragmented again. If the interval is fragmented, we recursively apply the above fragmentation procedure to both of the resulting fragments. First, let us examine the average total number of fragments, $`N`$. With probability $`q`$ a single fragment is produced, and with probability $`p`$ the process is repeated with two fragments. Hence $`N=q+2pN`$, yielding $$N=\{\begin{array}{cc}q/(1-2p),\hfill & \text{if }p<1/2\text{;}\hfill \\ \mathrm{\infty },\hfill & \text{if }p\ge 1/2\text{.}\hfill \end{array}$$ (1) The average total number of fragments becomes infinite at the critical point $`p_c=1/2`$, reflecting the critical nature of the underlying branching process.
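These one-dimensional results are easy to test by direct simulation. The sketch below is our illustration, not from the original analysis: it runs the recursion at a subcritical $`p<1/2`$, where every realization terminates with probability one, and checks $`N=q/(1-2p)`$ together with the small-fragment statistics; the length density derived below, Eq. (5), implies that the fraction of fragments shorter than $`x`$ equals $`2px^{1-2p}`$.

```python
import random

def fragment(length, p, out):
    """Recursively fragment an interval; frozen lengths are appended to `out`."""
    if random.random() < p:
        l = random.random() * length
        fragment(l, p, out)
        fragment(length - l, p, out)
    else:
        out.append(length)

p, samples = 0.4, 200000
sizes, total = [], 0
for _ in range(samples):
    frozen = []
    fragment(1.0, p, frozen)
    total += len(frozen)
    sizes.extend(frozen)

print("<N> =", total / samples, " exact:", (1 - p) / (1 - 2 * p))
for x in (1e-3, 1e-2, 1e-1):
    frac = sum(s < x for s in sizes) / len(sizes)
    print(f"frac(size < {x:g}) = {frac:.4f}   2p*x^(1-2p) = {2*p*x**(1-2*p):.4f}")
```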
Next, we study $`P(x)`$, the density of fragments of length $`x`$. The recursive nature of the process can be used to obtain the fragment length density $$P(x)=q\delta (x-1)+2p\int _x^1\frac{dy}{y}P\left(\frac{x}{y}\right).$$ (2) The second term indicates that a fragment can be created only from a larger fragment, and the $`y^{-1}`$ kernel reflects the uniform fragmentation density. Eq. (2) can be solved by introducing the Mellin transform $$M(s)=\int 𝑑xx^{s-1}P(x).$$ (3) Eqs. (2) and (3) yield $`M(s)=q+2ps^{-1}M(s)`$ and as a result $$M(s)=q\left[1+\frac{2p}{s-2p}\right].$$ (4) The average total number $`M(1)=N`$ is consistent with Eq. (1), and the total fragment length $`M(2)=1`$ is conserved, in accord with $`1=\int 𝑑xxP(x)`$. (Here and in the following the integration is carried over the unit interval, i.e., $`0<x<1`$.) The inverse Mellin transform of Eq. (4) gives $$P(x)=q\left[\delta (x-1)+2px^{-2p}\right].$$ (5) Apart from the obvious $`\delta `$-function, the length density is a purely algebraic function. In particular, the fragment distribution diverges algebraically in the limit of small fragments. Given such an algebraic divergence near the origin, $`P(x)\sim x^{-\gamma }`$, length conservation restricts the exponent range to $`\gamma <2`$. In our case $`\gamma =2p`$, and since $`0<p<1`$, the entire range of acceptable exponents emerges by tuning the only control parameter $`p`$. Interestingly, at the critical point $`p_c=\frac{1}{2}`$, the fragment length distribution becomes independent of the initial interval length. Starting from an interval of length $`L`$, Eq. (5) can be generalized to yield $$P(x)=q\delta (x-L)+2pqL^{2p-1}x^{-2p}.$$ (6) Thus, the critical point may be detected by observing the point at which the segment distribution becomes independent of the original interval length. The recursive fragmentation process can be generalized to $`d`$ dimensions. For instance, in two dimensions we start with the unit square, choose a point $`(x_1,x_2)`$ with a uniform probability density, and divide, with probability $`p`$, the original square into four rectangles of sizes $`x_1\times x_2`$, $`x_1\times (1-x_2)`$, $`(1-x_1)\times x_2`$, and $`(1-x_1)\times (1-x_2)`$. With probability $`q`$, the square becomes frozen and we never again attempt to fragment it. The process is repeated recursively whenever a new fragment is produced. Let $`P(𝐱)`$, $`𝐱\equiv (x_1,\mathrm{},x_d)`$, be the probability density of fragments of size $`x_1\times \mathrm{}\times x_d`$. This quantity satisfies $$P(𝐱)=q\delta (𝐱-\mathrm{𝟏})+2^dp\int \frac{d𝐲}{y_1\mathrm{}y_d}P(\frac{x_1}{y_1},\mathrm{},\frac{x_d}{y_d}),$$ (7) with $`𝑑𝐲=𝑑y_1\mathrm{}𝑑y_d`$. Following the steps leading to Eq. (4), we find that the $`d`$-dimensional Mellin transform, defined by $`M(𝐬)=\int 𝑑𝐱x_1^{s_1-1}\mathrm{}x_d^{s_d-1}P(𝐱)`$ with the shorthand notation $`𝐬\equiv (s_1,\mathrm{},s_d)`$, obeys $$M(𝐬)=q\left[1+\frac{\alpha ^d}{s_1\mathrm{}s_d-\alpha ^d}\right],\mathrm{with}\alpha =2p^{1/d}.$$ (8) Eq. (8) gives the total average number of fragments, $`N=M(\mathrm{𝟏})=q/(1-2^dp)`$ if $`p<2^{-d}`$ and $`N=\mathrm{\infty }`$ if $`p\ge 2^{-d}`$. One can also verify that the total volume $`M(\mathrm{𝟐})=1`$ is conserved. Interestingly, there is an additional infinite set of conserved quantities: all moments whose indices belong to the hyper-surface $`s_1^{}\mathrm{}s_d^{}=2^d`$ satisfy $`M(𝐬^{})=1`$. In a continuous time formulation of this process the same moments were found to be integrals of motion.
The existence of an infinite number of conservation laws is surprising, because only the volume conservation has a clear physical justification. Next, we study the volume density $`P(V)`$, defined by $$P(V)=\int 𝑑𝐱P(𝐱)\delta \left(V-x_1\mathrm{}x_d\right).$$ (9) The Mellin transform $`M(s)=\int 𝑑VV^{s-1}P(V)`$ can be obtained from Eq. (8) by setting $`s_i=s`$, $$M(s)=q\left[1+\frac{\alpha ^d}{s^d-\alpha ^d}\right].$$ (10) Using the $`d`$-th root of unity, $`\zeta =e^{2\pi i/d}`$, and the identity $`\frac{1}{s^d-1}=\frac{1}{d}\sum _{k=0}^{d-1}\frac{\zeta ^k}{s-\zeta ^k}`$, $`M(s)`$ can be expressed as a sum over simple poles at $`\alpha \zeta ^k`$. Consequently, the inverse Mellin transform is given by a linear combination of $`d`$ power laws $$P(V)=q\left[\delta (V-1)+\frac{\alpha }{d}\underset{k=0}{\overset{d-1}{\sum }}\zeta ^kV^{-\alpha \zeta ^k}\right].$$ (11) One can verify that this expression equals its complex conjugate and hence it is real. Additionally, the one-dimensional case (5) is recovered by setting $`d=1`$. The small-volume tail of the distribution can be obtained by noting that the sum in Eq. (11) is dominated by the first term in the series, which leads to $$P(V)\simeq AV^{-\gamma }\text{as}V\to 0,$$ (12) with $`\gamma =\alpha =2p^{1/d}`$ and $`A=\alpha q/d`$. Although the value of the exponent changes, the possible range of exponents for this process remains the same, since $`0<2p^{1/d}<2`$ when $`0<p<1`$. In the infinite dimension limit, $`P(V)`$ becomes universal: $`P(V)\sim V^{-2}`$. The leading behavior of $`P(V)`$ in the large size limit can be derived by using the Taylor expansion and the identity $`\sum _{k=0}^{d-1}\zeta ^{kn}=d\delta _{n,0}`$ for $`n=0,\mathrm{},d-1`$. One finds that in higher dimensions the volume distribution vanishes algebraically near its maximum value, $$P(V)\simeq B_d(1-V)^{d-1}\text{as}V\to 1,$$ (13) with $`B_d=\alpha ^d/(d-1)!`$. In fact, the entire multivariate fragment length density can be obtained explicitly. This can be achieved by expanding the geometric series $`\frac{\alpha ^d}{s_1\mathrm{}s_d-\alpha ^d}={\displaystyle \underset{n\ge 0}{\sum }}{\displaystyle \underset{i=1}{\overset{d}{\prod }}}\left(\frac{\alpha }{s_i}\right)^{n+1},`$ and performing the inverse Mellin transform for each variable separately. Using the transform $`\int 𝑑xx^{s-1}\left[\mathrm{ln}\frac{1}{x}\right]^n=n!s^{-(n+1)}`$ gives $$P(𝐱)=q\left[\delta (𝐱-\mathrm{𝟏})+\alpha ^dF_d(z)\right],$$ (14) with the shorthand notations $`F_d(z)={\displaystyle \underset{n=0}{\overset{\mathrm{}}{\sum }}}\left({\displaystyle \frac{z^n}{n!}}\right)^d,z=\alpha \left({\displaystyle \underset{i=1}{\overset{d}{\prod }}}\mathrm{ln}{\displaystyle \frac{1}{x_i}}\right)^{1/d}.`$ (15) In two dimensions, $`F_2(z)=I_0(2z)`$, where $`I_0`$ is the modified Bessel function. The small size behavior of $`P(𝐱)`$ can be obtained by using the steepest descent method. The leading tail behavior, $`F_d(z)\sim (2\pi z)^{\frac{1-d}{2}}e^{zd}`$ for $`z\gg 1`$, corresponds to the case when at least one of the lengths is small, i.e., $`x_i\ll 1`$.
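The $`d=2`$ prediction can be probed in the same way as in one dimension (our sketch below; a subcritical $`p<1/4`$ keeps every realization finite). By the same counting as before, Eqs. (11)-(12) imply that the fraction of frozen rectangles with volume below $`V`$ scales as $`V^{1-\gamma }`$ with $`\gamma =2\sqrt{p}`$ at small $`V`$.

```python
import random

def fragment2d(a, b, p, out):
    """Recursively fragment an a x b rectangle; frozen areas go to `out`."""
    if random.random() < p:
        x, y = random.random() * a, random.random() * b
        for (u, v) in ((x, y), (x, b - y), (a - x, y), (a - x, b - y)):
            fragment2d(u, v, p, out)
    else:
        out.append(a * b)

p = 0.2
gamma = 2 * p**0.5                     # ~0.894 for p = 0.2
vols = []
for _ in range(100000):
    fragment2d(1.0, 1.0, p, vols)
# The cumulative fraction below V should be roughly proportional to V^(1-gamma).
for V in (1e-6, 1e-4, 1e-2):
    frac = sum(v < V for v in vols) / len(vols)
    print(f"frac(vol < {V:g}) = {frac:.4f}   V^(1-gamma) = {V**(1-gamma):.4f}")
```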
Returning to the original variables, we see that the fragment distribution exhibits an unusual “log-stretched-exponential” behavior $`P(𝐱)\sim \left[{\displaystyle \underset{i=1}{\overset{d}{\prod }}}\mathrm{ln}{\displaystyle \frac{1}{x_i}}\right]^{\frac{1-d}{2d}}\mathrm{exp}\left[d\alpha \left({\displaystyle \underset{i=1}{\overset{d}{\prod }}}\mathrm{ln}{\displaystyle \frac{1}{x_i}}\right)^{1/d}\right].`$ (16) The fragment distribution represents an average over infinitely many realizations of the fragmentation process; hence, it does not capture sample-to-sample fluctuations. These fluctuations are important in non-self-averaging systems, where they do not vanish in the thermodynamic limit. Useful quantities for characterizing such fluctuations are the moments $`Y_\alpha `$, $`Y_\alpha ={\displaystyle \underset{i}{\sum }}x_i^\alpha ,`$ (17) where the sum runs over all fragments. We are interested in the average values $`\langle Y_\alpha \rangle `$ and $`\langle Y_\alpha Y_\beta \rangle `$. For integer $`\alpha `$, $`\langle Y_\alpha \rangle `$ is the probability that $`\alpha `$ points randomly chosen in the unit interval belong to the same fragment. The expected value of $`Y_\alpha `$ satisfies $$\langle Y_\alpha \rangle =q+p\langle Y_\alpha \rangle \int 𝑑y\left[y^\alpha +(1-y)^\alpha \right].$$ (18) The first term corresponds to the case where the unit interval is not fragmented, and the second term describes the situation when at least one fragmentation event has occurred. Eq. (18) gives $$\langle Y_\alpha \rangle =q\left[1+\frac{2p}{\alpha +1-2p}\right]$$ (19) if $`\alpha >2p-1`$, and $`\langle Y_\alpha \rangle =\mathrm{\infty }`$ if $`\alpha \le 2p-1`$. As expected, Eq. (19) agrees with the moments of $`P(x)`$ obtained by integrating Eq. (5), $`\langle Y_\alpha \rangle =\int 𝑑xx^\alpha P(x)`$. However, higher order averages such as $`\langle Y_\alpha Y_\beta \rangle `$ do not follow directly from the fragment density. For integer $`\alpha `$ and $`\beta `$, $`\langle Y_\alpha Y_\beta \rangle `$ is the probability that, if $`\alpha +\beta `$ points are chosen at random, the first $`\alpha `$ points all lie on the same fragment, and the last $`\beta `$ points all lie on another (possibly the same) fragment. This quantity satisfies $`\langle Y_\alpha Y_\beta \rangle `$ $`=`$ $`q+p\langle Y_\alpha Y_\beta \rangle {\displaystyle \int 𝑑y\left[y^{\alpha +\beta }+(1-y)^{\alpha +\beta }\right]}`$ (20) $`+`$ $`p\langle Y_\alpha \rangle \langle Y_\beta \rangle {\displaystyle \int 𝑑y\left[y^\alpha (1-y)^\beta +(1-y)^\alpha y^\beta \right]},`$ (21) yielding $`\langle Y_\alpha Y_\beta \rangle `$ $`=`$ $`q+{\displaystyle \frac{2pq}{\alpha +\beta +1-2p}}`$ (22) $`+`$ $`2p{\displaystyle \frac{\mathrm{\Gamma }(\alpha +1)\mathrm{\Gamma }(\beta +1)}{\mathrm{\Gamma }(\alpha +\beta +1)}}{\displaystyle \frac{\langle Y_\alpha \rangle \langle Y_\beta \rangle }{\alpha +\beta +1-2p}}`$ (23) when $`\alpha ,\beta ,\alpha +\beta >2p-1`$, and $`\langle Y_\alpha Y_\beta \rangle =\mathrm{\infty }`$ otherwise. Eq. (22) shows that $`\langle Y_\alpha Y_\beta \rangle \ne \langle Y_\alpha \rangle \langle Y_\beta \rangle `$, and in particular, $`\langle Y_\alpha ^2\rangle \ne \langle Y_\alpha \rangle ^2`$. Therefore, fluctuations in $`Y_\alpha `$ do not vanish in the thermodynamic limit, and the recursive fragmentation process is non-self-averaging. While for $`p<1/2`$ non-self-averaging behavior is expected because the average number of fragments is finite, the emergence of non-self-averaging for $`p>1/2`$ is surprising. Hence, statistical properties obtained by averaging over all realizations are insufficient to probe sample-to-sample fluctuations. In principle, higher order averages such as $`\langle Y_\alpha ^n\rangle `$ can be calculated recursively by the procedure outlined above. The resulting expressions are cumbersome and not terribly illuminating.
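Non-self-averaging can be seen directly in simulation. The sketch below (ours) accumulates $`Y_\alpha `$ over many independent realizations at a subcritical $`p`$ and compares the first two moments with Eqs. (19) and (22); the variance of $`Y_\alpha `$ stays finite rather than vanishing.

```python
import random
from math import gamma

def realization_Y(alpha, p):
    """One realization: Y_alpha accumulated over the frozen fragments."""
    Y, stack = 0.0, [1.0]
    while stack:
        x = stack.pop()
        if random.random() < p:
            l = random.random() * x
            stack.extend((l, x - l))
        else:
            Y += x ** alpha
    return Y

def Y_avg(a, p):                      # Eq. (19)
    return (1 - p) * (1 + 2 * p / (a + 1 - 2 * p))

def YY_avg(a, b, p):                  # Eq. (22)
    q = 1 - p
    return (q + 2 * p * q / (a + b + 1 - 2 * p)
            + 2 * p * gamma(a + 1) * gamma(b + 1) / gamma(a + b + 1)
            * Y_avg(a, p) * Y_avg(b, p) / (a + b + 1 - 2 * p))

p, a, n = 0.4, 2.0, 100000
ys = [realization_Y(a, p) for _ in range(n)]
m1 = sum(ys) / n
m2 = sum(y * y for y in ys) / n
print("simulated  <Y> =", m1, " <Y^2> =", m2)
print("predicted  <Y> =", Y_avg(a, p), " <Y^2> =", YY_avg(a, a, p))
print("variance stays finite:", m2 - m1 * m1)
```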
Instead, one may study the distribution $`Q_\alpha (Y)`$, which obeys $`Q_\alpha (Y)=q\delta (Y-1)`$ (24) $`+p{\displaystyle \int 𝑑l\int _0^Y𝑑Z\frac{1}{l^\alpha }Q_\alpha \left(\frac{Z}{l^\alpha }\right)\frac{1}{(1-l)^\alpha }Q_\alpha \left(\frac{Y-Z}{(1-l)^\alpha }\right)}.`$ (25) In addition to the recursive nature of the process, we have employed extensivity, i.e., $`Y_\alpha \propto L^\alpha `$ in an interval of length $`L`$. Clearly, $`Y_0=N`$ and $`Y_1\equiv 1`$, and therefore $`Q_1(Y)=\delta (Y-1)`$; $`Q_0(N)`$ can be determined analytically as well. Generally, different behaviors emerge for $`\alpha >1`$ and $`\alpha <1`$. We concentrate on the former case, where the support of the distribution $`Q_\alpha (Y)`$ is the interval $`[0,1]`$. The Laplace transform, $`R_\alpha (\lambda )=\int _0^1𝑑Ye^{-\lambda Y}Q_\alpha (Y)`$, obeys $$R_\alpha (\lambda )=qe^{-\lambda }+p\int _0^1𝑑lR_\alpha \left[\lambda l^\alpha \right]R_\alpha \left[\lambda (1-l)^\alpha \right].$$ (26) The behavior of $`Q_\alpha (Y)`$ in the limit $`Y\to 0`$ is reflected by the asymptotics of $`R_\alpha (\lambda )`$ as $`\lambda \to \mathrm{}`$. Substituting $`R_\alpha (\lambda )\sim \mathrm{exp}(-A\lambda ^\beta )`$ into both sides of Eq. (26), evaluating the integral using steepest descent, and equating the left and right hand sides gives $`\beta =1/\alpha `$. Consequently, we find that the distribution has an essential singularity near the origin $$Q_\alpha (Y_\alpha )\sim \mathrm{exp}\left[-BY_\alpha ^{-\frac{1}{\alpha -1}}\right],Y_\alpha \to 0.$$ (27) Extremal properties can be viewed as an additional probe of sample-to-sample fluctuations. We thus consider $`\mathcal{L}(x)`$, the length density of the largest fragment. For a self-averaging process with an infinite number of fragments, one expects $`\mathcal{L}(x)\to \delta (x)`$ in the thermodynamic limit. To see that $`\mathcal{L}(x)`$ is non-trivial for any $`p`$, let us first determine $`\mathcal{L}(x)`$ for $`x\ge 1/2`$. In this region, $$\mathcal{L}(x)=q\delta (x-1)+2p{\displaystyle \int _x^1}\frac{dy}{y}\mathcal{L}\left(\frac{x}{y}\right).$$ (28) If the original unit interval has not been fragmented, the largest fragment is obviously the unit interval. If the first fragmentation is performed, only one of the two resulting fragments can be larger than $`x>1/2`$. Therefore, only subsequent breaking of such a fragment (of length $`y>x`$) can contribute to $`\mathcal{L}(x)`$, which explains Eq. (28). Eq. (28) is similar to Eq. (2), and can be solved by the same technique to give $$\mathcal{L}(x)=q\delta (x-1)+2pqx^{-2p}\mathrm{for}x\ge 1/2;$$ (29) since at most one fragment can exceed $`1/2`$, $`\mathcal{L}(x)`$ coincides with the fragment density $`P(x)`$ in this range. In the complementary case of $`x<1/2`$, $`\mathcal{L}(x)`$ satisfies $`\mathcal{L}(x)`$ $`=`$ $`2p{\displaystyle \int _{1-x}^1}\frac{dy}{y}\mathcal{L}\left(\frac{x}{y}\right)+2p{\displaystyle \int _{1/2}^{1-x}}\frac{dy}{y}\mathcal{L}\left(\frac{x}{y}\right)\mathcal{L}_<\left(\frac{x}{1-y}\right)`$ (30) $`+`$ $`2p{\displaystyle \int _{1/2}^{1-x}}\frac{dy}{1-y}\mathcal{L}\left(\frac{x}{1-y}\right)\mathcal{L}_<\left(\frac{x}{y}\right).`$ (31) The first term on the right-hand side of this equation is constructed as in Eq. (28): if we first break the unit interval into two fragments of lengths $`y>1/2`$ and $`1-y`$, then for $`1-y<x`$ the longest fragment is produced by breaking the fragment of length $`y`$. The next two terms describe the situation when $`1-y>x`$, so the longest fragment can arise out of breaking either of the two fragments. The factors $`\mathcal{L}_<(u)=\int _0^u𝑑v\mathcal{L}(v)`$ guarantee that the longest fragment of length $`x`$ comes from one first-generation fragment while the other contains only smaller pieces.
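The $`x\ge 1/2`$ result (29) is likewise easy to confirm numerically (our sketch below): the weight of the $`\delta `$-function is just $`q`$, and the probability that the largest fragment falls in a window $`(a,b)\subset (1/2,1)`$ is $`\int _a^b2pqx^{-2p}𝑑x`$.

```python
import random

def largest(p):
    """Largest frozen fragment of one realization (iterative, with pruning)."""
    best, stack = 0.0, [1.0]
    while stack:
        x = stack.pop()
        if x <= best:              # this piece cannot contain a new maximum
            continue
        if random.random() < p:
            l = random.random() * x
            stack.extend((l, x - l))
        else:
            best = x
    return best

p, q, n = 0.4, 0.6, 200000
xs = [largest(p) for _ in range(n)]
print("P(largest = 1) =", sum(x == 1.0 for x in xs) / n, " exact:", q)
a, b = 0.5, 0.8
frac = sum(a < x < b for x in xs) / n
exact = 2 * p * q / (1 - 2 * p) * (b**(1 - 2 * p) - a**(1 - 2 * p))
print(f"P({a} < largest < {b}) = {frac:.4f}   exact: {exact:.4f}")
```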
Since $`\mathcal{L}(x)`$ obeys different equations in different regions, it loses analyticity on the boundaries. Namely, $`\mathcal{L}(x)`$ possesses an infinite set of singularities at $`x=1/k`$ which become weaker as $`k`$ increases. Similar singularities underlie extremal properties of a number of random processes, including random walks, spin glasses, random maps, and random trees. In summary, we have found that recursive fragmentation is scale free, i.e., the fragment length distribution is purely algebraic. In higher dimensions, the volume distribution is a linear combination of $`d`$ power laws, and consequently an algebraic divergence characterizes the small-fragment tail of the distribution. A number of recent impact fragmentation experiments reported algebraic mass distributions with the corresponding exponents ranging from $`1`$ to $`2`$. It will be interesting to further examine whether our simplified model is appropriate for describing fragmentation of solid objects. We have also found that the recursive fragmentation process exhibits a number of features that arise in other complex and disordered systems, such as non-self-averaging behavior and the existence of an infinite number of singularities in the distribution of the largest fragment. These features indicate that even in the thermodynamic limit sample-to-sample fluctuations remain, and that knowledge of first-order averages may not be sufficient for characterizing the system. Our 1D model is equivalent to applying the aforementioned DNA segmentation algorithm to a random sequence. It will be interesting to study self-averaging and extremal properties of DNA sequences, which are known to have commonalities with disordered systems. Indeed, if these subtle features are found for genetic sequences as well, this would suggest that much caution should be exercised in statistical analysis of DNA. We are thankful to S. Redner and O. Weiss for useful discussions, and to DOE, NSF, ARO, NIH, and DFG for financial support.
# Characterization of the State of Hydrogen at High Temperature and Density
## 1 Introduction
The phase diagram of hydrogen has been studied intensively with different theoretical approaches, simulation techniques, and experiments. From theory, the principal effects at low densities are well known. On the other hand, the properties at intermediate density are not yet well understood, and the phase diagram is not yet accurately determined. In particular, the nature of the transition to a metallic state is still an open question. In this article, we would like to show how these questions can be addressed by path integral Monte Carlo (PIMC) simulations. Using this approach, we derived the phase diagram in Fig. 1, where we distinguish between molecular, atomic, metallic and plasma regimes. We will demonstrate how these different states can be identified from PIMC simulations. The imaginary-time path integral formalism is based on the position-space density matrix $`\rho (𝐑,𝐑^{},\beta )`$, which can be used to determine the equilibrium expectation value of any operator $`\widehat{O}`$, $$\langle \widehat{O}\rangle =\frac{\text{Tr}\widehat{O}\rho }{\text{Tr}\rho }=\frac{\int 𝑑𝐑𝑑𝐑^{}\rho (𝐑,𝐑^{},\beta )\langle 𝐑|\widehat{O}|𝐑^{}\rangle }{\int 𝑑𝐑\rho (𝐑,𝐑,\beta )}$$ (1) where $`𝐑`$ represents the coordinates of all particles. The low temperature density matrix $`\rho (𝐑,𝐑^{},\beta )=\langle 𝐑|e^{-\beta \widehat{H}}|𝐑^{}\rangle `$ can be expressed as a product of high temperature density matrices $`\rho (𝐑,𝐑^{},\tau )`$ with the time step $`\tau =\beta /M`$. In position space, this is a convolution, $$\rho (𝐑_0,𝐑_M;\beta )=\int \mathrm{}\int 𝑑𝐑_1𝑑𝐑_2\mathrm{}𝑑𝐑_{M-1}\rho (𝐑_0,𝐑_1;\tau )\rho (𝐑_1,𝐑_2;\tau )\mathrm{}\rho (𝐑_{M-1},𝐑_M;\tau ).$$ (2) This high dimensional integral can be evaluated using Monte Carlo methods. Each particle is represented by a closed path in imaginary time. Fermi statistics is taken into account by considering the fermion density matrix, which can be expressed by considering all permutations $`𝒫`$ of identical particles, $$\rho _F(𝐑,𝐑^{};\beta )=𝒜\rho (𝐑,𝐑^{};\beta )=\frac{1}{N!}\underset{𝒫}{\sum }(-1)^𝒫\rho (𝐑,𝒫𝐑^{};\beta ),$$ (3) where $`𝒜`$ is the antisymmetrization projection operator. Cancellation of positive and negative contributions leads to the fermion sign problem, which is solved approximately by restricting the paths within a nodal surface derived from the free-particle density matrix.
## 2 Phase diagram of hydrogen and deuterium
We used PIMC simulations with 32 protons and 32 electrons and a time step of $`\tau =(10^6\mathrm{K})^{-1}`$ to generate the phase diagram shown in Fig. 1. In the low density and low temperature regime, we find a molecular fluid. In the proton-proton correlation function shown in Fig. 2, one finds a clear peak at the bond length of $`0.75`$ Å. We determine the number of molecules as well as of other compound particles by a cluster analysis based on the distances. Using this approach we can estimate the number of bound states (see ). We can also estimate the fraction of molecules and atoms to determine the regime boundaries. However, at high density, a clear definition of those species is difficult to give. Starting in the molecular regime, one finds that increasing the temperature at constant density leads to gradual dissociation of molecules, followed by a regime with a majority of atoms. The atoms are then gradually ionized at even higher temperatures. Lowering the density at constant temperature leads to a decrease in the number of molecules, or atoms respectively, due to entropy effects.
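The Trotter factorization of Eq. (2) is easily illustrated on a toy problem. The sketch below is ours, not the production code used for hydrogen: it samples the discretized imaginary-time path of a single particle in a one-dimensional harmonic well using the primitive approximation for the high-temperature density matrix, and compares $`x^2`$ with the exact result. Units $`\mathrm{}=m=k_B=1`$ and $`V(x)=x^2/2`$ are assumed.

```python
import math, random

def pimc_x2(beta, M=32, sweeps=20000, step=0.5):
    """<x^2> for V(x) = x^2/2 from a discretized imaginary-time path."""
    tau = beta / M
    path = [0.0] * M                 # closed path; x_M is identified with x_0
    x2 = n = 0
    for sweep in range(sweeps):
        for i in range(M):
            xl, xr = path[i - 1], path[(i + 1) % M]
            xo = path[i]
            xn = xo + step * (2 * random.random() - 1)
            def s(x):                # local piece of the primitive action
                return ((xl - x)**2 + (x - xr)**2) / (2 * tau) + tau * x * x / 2
            if random.random() < math.exp(s(xo) - s(xn)):
                path[i] = xn
        if sweep > sweeps // 5:      # discard the equilibration stage
            x2 += sum(x * x for x in path) / M
            n += 1
    return x2 / n

beta = 2.0
print("PIMC  <x^2> =", pimc_x2(beta))
print("exact <x^2> =", 0.5 / math.tanh(beta / 2))
```

The production calculations differ mainly in using many interacting particles, pair density matrices instead of the primitive action, and the restricted-path treatment of Fermi statistics described above.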
If the density is increased at constant temperature, pressure dissociation diminishes the molecular fraction. This transition was described by Magro et al. Its precise nature is still a topic of our current research. Using PIMC simulations, one finds that it occurs within a small density interval, and we predict that it is connected with both the molecular-atomic and the insulator-metal transition. We determine the fraction of electrons involved in a permutation as an indication of electronic delocalization. Permuting electrons are required to form a Fermi surface, which means that a high number of permutations indicates a high degree of degeneracy of the electrons. Permuting electrons form long chains of paths and therefore occupy delocalized states. This delocalization destabilizes the hydrogen molecules. Before all bonds are broken, one finds a molecular fluid with some permuting electrons, which could indicate the existence of a molecular fluid with metallic properties. The boundaries of the metallic regime are determined by two effects. With increasing temperature, the degree of degeneracy of the electrons is simply reduced. If the temperature is lowered, the attraction to the protons becomes more relevant, which localizes the electron wave function and also decreases the degree of degeneracy (see Fig. 1).
### Acknowledgements
Support from the CSAR program and computer facilities at NCSA and Lawrence Livermore National Laboratory is acknowledged.
### Received October 1, 1998
# Universal Behaviour of the Superfluid Fraction and Tc of 3He in Aerogel ## Abstract We have investigated the superfluid transition of <sup>3</sup>He in different samples of silica aerogel. By comparing new measurements on a 99.5% sample with previous observations on the behaviour of <sup>3</sup>He in 98% porous aerogel we have found evidence for universal behaviour of <sup>3</sup>He in aerogel. We relate both the transition temperature and superfluid density to the correlation length of the aerogel. The properties of bulk <sup>3</sup>He are well understood. The extreme purity of <sup>3</sup>He at low temperatures makes it an ideal system to study the agreement between theoretical and experimental results on non-conventional Cooper pairing in the absence of disorder. Disorder plays a crucial role in suppressing the pairing interaction in high T<sub>c</sub> superconductors, the other well established non-s-wave paired system. The superfluid transition of <sup>3</sup>He confined to a sample of very porous silica aerogel was first reported four years ago. The aerogel provides a structural disorder background to the liquid. <sup>3</sup>He is compressible, and the density can be continuously tuned by ∼30% while maintaining a fixed disorder. The <sup>3</sup>He zero temperature coherence length $`\xi _0`$, defined as $`\xi _0=\hbar v_f/2\pi k_BT_c`$, varies from 180 Å to over 700 Å as a function of density. Because the Cooper pairs in <sup>3</sup>He form in a p-wave state, quasiparticle scattering from the aerogel strands is pair-breaking. Thus the <sup>3</sup>He-in-aerogel system is well suited to the exploration of the effect of impurity scattering and disorder on the superfluid transition and phase diagram. The superfluidity of <sup>3</sup>He in silica aerogel has been studied using torsional oscillators, NMR and sound propagation techniques. These measurements show that both the superfluid transition temperature (T<sub>c</sub>) and superfluid density ($`\rho _s`$) of the <sup>3</sup>He are suppressed by the disorder, but that the transition remains sharp. This suppression is sensitive to both the density and the microstructure of the aerogel sample. The simplest model for the effect of impurity scattering on the <sup>3</sup>He superfluid transition is the homogeneous scattering model (HSM), which is based on the Abrikosov-Gorkov model for a superconductor with magnetic impurities that induce pair-breaking via spin-flip scattering. This mechanism is similar to that of diffuse scattering of Cooper-paired <sup>3</sup>He from a surface, and is unable to explain the observed behaviour. Specifically, the observed suppression of the superfluid density is much greater than predicted by this model. More sophisticated models, such as the isotropic inhomogeneous scattering model (IISM) proposed by Thuneberg and co-workers, are able to quantitatively predict the superfluid transition temperature of <sup>3</sup>He in aerogel (for small suppressions) and have had success at qualitatively explaining the observed superfluid densities. In this Letter we present data from several different experiments on <sup>3</sup>He in aerogel, including new results on <sup>3</sup>He confined to a 99.5% porosity sample. This sample is a factor of four more dilute than any previously investigated and is crucial for understanding the evolution from bulk <sup>3</sup>He to a regime where impurity scattering dominates.
In comparing these different samples we find evidence that the relation between superfluid density and the superfluid transition temperature of <sup>3</sup>He in aerogel follows a universal behaviour, independent of the aerogel sample. This is significant because both of these quantities are individually sensitive to the microstructure of the aerogel, and vary greatly from sample to sample. We also present evidence that the suppression of T<sub>c</sub> can be related to the correlation length ($`\xi _a`$) of the aerogel sample. The aerogels used in the experiments discussed in this Letter were grown under basic conditions. Under these conditions gelation is the result of diffusion limited aggregation of small (∼30 Å diameter) primary silica particles. The aerogels are characterized by a fractal dimension (D<sub>f</sub>) related to the real space correlations, and a long length scale cutoff to these correlations ($`\xi _a`$) above which the sample appears homogeneous. The fractal exponent depends only on the gelation process, while the cutoff length also depends on the average density. Simulations based on the diffusion-limited cluster-cluster aggregation (DLCA) algorithm predict that the fractal exponent should lie between 1.7 and 1.9, which is in good agreement with small-angle X-ray scattering (SAXS) measurements (Table I). We note that $`\xi _a`$ in the most dilute sample, D, could not be inferred from the data as the SAXS did not extend to sufficiently small q. Samples A and C have the same density, but were made under different conditions and have slightly different fractal dimensions and significantly different $`\xi _a`$. Both samples A and C have been studied with SAXS. Sample B was made under conditions very similar to sample C. We do not have direct information on its microstructure from SAXS, but assume here that samples B and C are essentially identical. The correlation length for sample D is within the range obtained from simulations based on the DLCA algorithm and is consistent with the SAXS data of Mulders on a 99.6% sample ($`\xi _a`$∼2000 Å). For a more extensive discussion the reader is referred to the references. The 99.5% porosity aerogel used for our experiment was grown inside the (on average) 100 $`\mu `$m large pores of a coarse silver sinter. Previous torsional oscillator experiments have been affected by the presence of spurious resonances resulting from composite modes of <sup>3</sup>He and aerogel whose frequency crosses the resonant frequency of the cell. The strength of these resonances grows as the porosity of the aerogel sample increases, affecting the quality of the data. In our cell the aerogel is clamped to the silver sinter, and the effect of spurious resonances is strongly reduced. We operated our torsional oscillator in self-resonant mode near 483 Hz. The temperature in the cell was measured using a lanthanum-diluted cerous magnesium nitrate ac susceptibility thermometer, thermally connected to the sample through a shared reservoir of <sup>3</sup>He. Data were collected while the temperature increased at a rate of 20 $`\mu `$K per hour. The period shift of the oscillator as the superfluid <sup>3</sup>He decoupled from the torsion head provided both the transition temperature of the <sup>3</sup>He in aerogel and the superfluid density. Figure 1 shows the superfluid transition temperature for several different aerogel samples. Three of these samples (A, B and C) have a nominal porosity of 98%; the fourth one (D) is our 99.5% aerogel.
All of these measurements were done by monitoring the period shift in torsional oscillators filled with the aerogel and <sup>3</sup>He. The difference in transition temperature between samples A, B and C arises from differences in the microstructure of the aerogels (see Table I and references). The relative suppression of the transition temperature (1-T<sub>c</sub>/T<sub>c0</sub>) (with T<sub>c0</sub> the transition temperature in bulk <sup>3</sup>He) is larger at lower pressures (larger $`\xi _0`$) than at higher pressures (smaller $`\xi _0`$). In view of the fact that all the aerogels used in these experiments have a similar fractal dimension and primary particle size, one would expect them to be mutually self-similar on length scales shorter than the fractal cutoff length. Thus for temperature dependent <sup>3</sup>He coherence lengths $`\xi (T)`$ shorter than $`\xi _a`$, the ratio of density of silica sampled at two different $`\xi (T)`$ is independent of aerogel density. Experimentally one observes a strong dependence of the suppression of T<sub>c</sub> on the aerogel density. In figure 2 we show the dependence of the relative suppression of T<sub>c</sub> as a function of $`\xi _0`$. At short coherence lengths, the relative suppression is small for all samples. However, when the pressure is reduced, increasing $`\xi _0`$, this suppression shows a marked dependence on the microstructure of the aerogel. The transition temperature shows an evolution from a strong impurity scattering regime when the <sup>3</sup>He is confined to a 98% porosity aerogel sample towards the behaviour of bulk <sup>3</sup>He in the 99.5% sample. Motivated by figure 2 we plot the relative suppression of T<sub>c</sub> against $`\xi _0`$ scaled by the aerogel correlation length $`\xi _a`$. The aerogel-limited mean free path ($`l_g`$) would be another natural choice to compare against $`\xi _0`$. However, this length scale has not been independently measured. If we use the HSM to determine $`l_g`$ from T<sub>c</sub>/T<sub>c0</sub>, the values of $`l_g`$ show a strong pressure dependence. Figure 3 shows that the scaled transition temperature depends solely on the ratio of $`\xi _0`$ to $`\xi _a`$, *independent* of the aerogel density. The error bars for sample D result from the high and low estimates (2000 Å and 3000 Å respectively) for the correlation length for a 99.5% porosity sample from DLCA simulations. We determined the temperature-dependent superfluid density of <sup>3</sup>He in 99.5% aerogel using the shift in resonant frequency of our torsional oscillator upon warming. A small, temperature dependent contribution due to bulk <sup>3</sup>He in the cell was subtracted. The remaining shift was scaled by the period change due to filling the cell with <sup>3</sup>He at 50 mK and by the tortuosity (measured with <sup>4</sup>He) to obtain the superfluid density. Figure 4 shows the bare superfluid density ($`\rho _s^b`$/$`\rho `$) plotted versus the reduced temperature T/T<sub>c</sub> for different aerogel samples and pressures. The bare superfluid density is obtained from $`\rho _s`$ by stripping away the Fermi liquid factor according to: $$\frac{\rho _s^b}{\rho }=\frac{(1+\frac{1}{3}F_1)\frac{\rho _s}{\rho }}{1+\frac{1}{3}F_1\frac{\rho _s}{\rho }}$$ (1) and is equivalent to 1-Y(T), where Y(T) is the temperature dependent Yosida function for bulk <sup>3</sup>He.
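Equation (1) is straightforward to apply numerically. The sketch below is ours; the Landau parameter and the measured fractions are illustrative placeholders, not data from these experiments.

```python
# Fermi-liquid correction of Eq. (1): from a measured rho_s/rho to the
# "bare" fraction rho_s^b/rho. F1 and the input fractions are
# illustrative values only.

def bare_superfluid_fraction(rho_s_over_rho: float, f1: float) -> float:
    return ((1.0 + f1 / 3.0) * rho_s_over_rho) / (1.0 + (f1 / 3.0) * rho_s_over_rho)

f1 = 6.0                      # representative magnitude for liquid 3He
for rs in (0.05, 0.2, 0.5):   # assumed measured fractions
    print(f"rho_s/rho = {rs:.2f} -> rho_s^b/rho = "
          f"{bare_superfluid_fraction(rs, f1):.3f}")
```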
As with the transition temperature T<sub>c</sub>, the $`\rho _s^b`$ for these two different samples are similar at high pressures and are both close to the bulk value. At the lowest pressure, there is a factor of five difference in $`\rho _s^b`$/$`\rho `$ at the same reduced temperature between the 99.5% sample and the 98% sample; $`\rho _s^b`$/$`\rho `$ is more strongly suppressed by the aerogel than T<sub>c</sub>. There is also a large difference between the suppression factors of T<sub>c</sub> and $`\rho _s^b`$ in the same aerogel sample. This large suppression in $`\rho _s`$ with respect to T<sub>c</sub> is consistent with measurements made on other samples, and cannot be explained with homogeneous scattering models for <sup>3</sup>He in aerogel. In order to better understand the superfluid transition of <sup>3</sup>He in aerogel, in figure 5 we plot $`\rho _s^b/\rho _{s0}^b`$ at 0.8T<sub>c</sub> against (T<sub>c</sub>/T<sub>c0</sub>)<sup>2</sup> for several aerogel samples. The error bars shown for cell B arise from a spurious resonance that introduces uncertainty in the determination of $`\rho _s`$. As in figure 3, the data collapse onto a universal curve. The dashed line is the prediction for a homogeneous scattering model based on the Abrikosov-Gorkov equation. This plot compares aerogels with *different* densities—there is a factor of four difference in the average impurity density between the 98% samples and the 99.5% aerogel sample. Furthermore, the coherence length of the Cooper pairs varies from 180 Å to 600 Å over this data, yet the very strong pressure dependence shown in figures 2 and 4 has been factored out in this plot. The correlations in the aerogel will affect the suppression of T<sub>c</sub> and the evolution of $`\rho _s^b`$ relative to a homogeneous disorder, but this suppression apparently depends only on the correlation length of the sample and possibly the fractal exponent. The steep slope of the data in figure 5 is evidence that the fractal nature of the aerogel plays an important role in the development of $`\rho _s`$. Since all of these samples were base-catalyzed, the behaviour of <sup>3</sup>He in each aerogel is determined mainly by $`\xi _a`$. As long as $`\xi _0`$ is much less than $`\xi _a`$, the disorder sampled by the ensemble of Cooper pairs should be insensitive to changes in the temperature-dependent coherence length $`\xi `$(T), until $`\xi (T)\sim \xi _a`$. That is, the system has a conformal symmetry normally absent in disordered systems. The evidence of this one-parameter scaling is displayed in figure 5; $`\rho _s^b`$ is a function of T<sub>c</sub>/T<sub>c0</sub> (or equivalently $`\xi _0`$/$`\xi _a`$) only. This behaviour is reminiscent of the compilation of data from disordered high T<sub>c</sub> materials by Franz *et al.*, with the exception that the relative suppression of T<sub>c</sub>/T<sub>c0</sub> and $`\rho _s/\rho _{s0}`$ in the high T<sub>c</sub> materials do not follow a universal behaviour, presumably because the impurities are not fractally correlated. In order to compare the behaviour of T<sub>c</sub> and $`\rho _s^b`$ of <sup>3</sup>He in aerogel with <sup>3</sup>He in bulk it will be necessary to understand precisely *how* the fractal disorder affects the superfluid pairing mechanism. One test for models of non-conventional Cooper pairing in the presence of disorder would be to predict the functional form of the universal curve for $`\rho _s^b`$ versus (T<sub>c</sub>)<sup>2</sup> for <sup>3</sup>He in (base-catalyzed) aerogel.
The IISM of Thuneberg *et al.* predicts a relation between $`\rho _s^b/\rho _{s0}^b`$ and T<sub>c</sub>/T<sub>c0</sub> similar to the trend illustrated in figure 5, showing behaviour very different from that of the Abrikosov-Gorkov model. This model does not explicitly consider the fractal nature of the aerogel, but shows how inhomogeneities in the disorder can lead to a large suppression of $`\rho _s^b`$ relative to T<sub>c</sub>. In this Letter we have presented data from our measurements on <sup>3</sup>He in a very dilute 99.5% porous silica aerogel. The values of T<sub>c</sub> and $`\rho _s^b`$ fall between those of <sup>3</sup>He in bulk and <sup>3</sup>He in denser aerogel samples. We also present T<sub>c</sub> and $`\rho _s^b`$ data for <sup>3</sup>He in aerogel experiments performed at Cornell that show universal behaviour that can be traced to the fractal structure of the aerogel. In order to more fully understand this exciting physical system, more attention must be devoted to understanding the microstructure of the aerogel. Specifically, the universal scaling discussed above depends strongly on the fact that the fractal exponent of the real-space correlations is similar for all the aerogel samples. It would be interesting to study neutrally catalyzed silica aerogels, which have a different fractal exponent than the base-catalyzed samples. <sup>3</sup>He would be expected to follow *different* universal behaviour when confined to base-catalyzed and neutrally catalyzed aerogels. As yet, there is no clear theoretical picture for why the transition temperature T<sub>c</sub> should depend on the ratio $`\xi _0/\xi _a`$. The explanation for this behaviour will provide insight into how correlated disorder affects non-conventional Cooper pairing. We would like to acknowledge helpful conversations with T.L. Ho, S. Yip, E. Thuneberg, J. Beamish, A. Golov and M.H.W. Chan. This research was supported by the NSF under DMR-9705295.
# References I. Introduction A recently proposed model suggests that gravitational interactions take place in $`4+n`$ dimensions, where the extra $`n`$-dimensions are large (i.e., as large as millimeter scale) spatial dimensions, commonly referred to as the bulk. Interactions other than gravity (electroweak and strong) are confined to the 3-dimensional brane, commonly referred to as the wall, which corresponds to the usual 3 spatial dimensions. The gravitational interaction is then understood as appearing to be weak, as we only observe its projection onto the wall; once small enough (spatial) dimensions are probed, the gravitational interaction will again appear large. Models of this sort can remove the hierarchy problem, by eliminating the large difference in scales between the electroweak scale and the Planck mass. An application of Gauss’ law yields the result $$M_{Planck}^2\sim r^nM_{eff}^{2+n}$$ (1) where $`r`$ is the spatial size of the extra dimensions in the bulk, and $`M_{eff}`$ is the effective Planck mass. Explicit suggestions have been made for how such a low mass effective Planck or string scale and large extra dimensions might arise in both Kaluza-Klein models and string theory. We will concentrate on one such scenario in which large extra-dimensional gravity is embedded into string models, where the string scale, $`M_S`$, is identified with the effective Planck mass, $`M_{eff}`$. One interesting consequence of this scenario is that a Kaluza-Klein (KK) tower of massive gravitons can interact with the Standard Model (SM) fields on the wall. This can lead to direct production of a graviton tower as well as virtual exchange of graviton towers. Direct production of a graviton tower produces a missing $`p_T`$ type signal, while virtual exchange can lead to new, tree-level interactions and/or modifications to SM processes. The Feynman rules for these new types of interactions have been developed, e.g., in Ref. , and many processes have been studied in $`e^+e^{-}`$, $`e\gamma `$, $`\gamma \gamma `$, $`ep`$ and hadron colliders. New contributions to standard model interactions can occur in almost any process involving photon production and/or exchange or other neutral current phenomena. Additionally, Higgs production, precision electroweak observable analyses and astrophysical constraints have been considered. Based on direct production analyses, the current limits on $`M_S`$ fall in the range $`500GeV`$ to $`1.2TeV`$, while virtual graviton tower effects can yield current $`M_S`$ estimates from $`650GeV`$ to $`1.2TeV`$. Future colliders, like the NLC and LHC, can push these limits into the multi-$`TeV`$ range. In this note, we will focus on aspects of dijet production at $`\gamma \gamma `$ colliders. Other two-photon processes are also valuable in probing low-scale gravity effects, but dijet production will be one of the most experimentally accessible processes in $`\gamma \gamma `$ collisions with guaranteed large event rates. The authors of Ref. have recently considered gauge boson-gauge boson scattering in general, incorporating the effects of low-scale gravity models, and include useful results for $`\gamma +\gamma \to g+g`$, which is necessary for our calculation. We also require, however, cross-sections for the corresponding $`\gamma +\gamma \to q+\overline{q}`$ processes for the two-jet cross-section at leading order. The authors of Ref. fail, however, to include the “box” diagram: $`\gamma +\gamma \to g+g`$ exists as a 1-loop diagram in the SM.
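Before turning to the details of the dijet calculation, a quick numerical aside on Eq. (1): solving it for the size of the extra dimensions gives $`r=(M_{Planck}^2/M_{eff}^{2+n})^{1/n}`$. The sketch below is ours; the constants are standard and the choice $`M_{eff}=1`$ TeV is illustrative.

```python
# Size of the extra dimensions from Eq. (1), worked in natural units
# and converted to meters. M_eff = 1 TeV is an illustrative choice.

M_PLANCK_GEV = 1.22e19      # Planck mass
HBARC_GEV_M = 1.973e-16     # hbar*c in GeV*m (converts GeV^-1 to m)

def extra_dim_size_m(n: int, m_eff_gev: float) -> float:
    r_inv_gev = (M_PLANCK_GEV**2 / m_eff_gev**(2 + n)) ** (1.0 / n)
    return r_inv_gev * HBARC_GEV_M

for n in (2, 4, 6):
    print(f"n = {n}:  r = {extra_dim_size_m(n, 1000.0):.2e} m")
# n = 2 gives r of order a millimeter, which is why sub-millimeter
# gravity experiments constrain that case most directly.
```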
Although the box diagram, in the SM, is not as important in $`\gamma \gamma `$ collisions as it is in hadron collisions, we include it here for completeness. The authors of Ref. consider the inverse process, di-photon production at hadron colliders. These authors present the parton level processes for both $`g+g\to \gamma +\gamma `$ (including the box diagram) and $`q+\overline{q}\to \gamma +\gamma `$. The subprocesses we consider here, $`\gamma +\gamma \to g+g`$ and $`\gamma +\gamma \to q+\overline{q}`$, are identical in form, and differ only by color factors, from those presented in Ref. . We will not reproduce those expressions here, but focus instead on optimizing the sensitivity of the $`\gamma +\gamma \to jet+jet`$ process to new physics contributions. II. Calculation and Results To examine the $`\gamma \gamma \to jj`$ process at a future collider, we assume a linear $`e^+e^{-}`$ collider, with backscattered laser photons for the initial photon beams. The physical process at leading order is a sum of two “parton level” subprocesses, $`\gamma \gamma \to gg`$ and $`\gamma \gamma \to q\overline{q}`$; furthermore, the subprocesses include SM contributions as well as extra-dimensional gravity (KK graviton tower exchange) contributions. In the SM, the lowest-order Feynman diagram for $`\gamma \gamma \to gg`$ is the one-loop, box diagram. Although nominally higher-order in the perturbative expansion, we include it, as well as its interference with the extra-dimensional gravity contribution, as its contributions are known to be very important in the inverse process (two-photon production in hadron collisions). The event rate at planned colliders, even considering the SM contribution alone, is significant. With the addition of graviton tower exchange, the angular and energy distribution of events is altered. The graviton tower exchange is essentially the s-channel exchange of a large number of gravitons, all with different masses. This leads to an enhancement of the cross section at all invariant masses kinematically allowed; a consequence of this is that, for low enough $`p_T`$, the SM contribution dominates, while at higher $`p_T`$ the contribution of graviton tower exchange dominates. Furthermore, the exact value of $`p_T`$ where graviton tower exchange becomes important depends strongly on the scale parameter, $`M_S`$. These properties are illustrated in Figure 1. In Figure 1, we show some typical results of our calculation. First, we choose an $`e^+e^{-}`$ collider with $`\sqrt{s}=500GeV`$ operating in $`\gamma \gamma `$ mode, where the $`\gamma `$ beams are generated by backscattering laser photons off the original lepton beams. In order to simulate detector acceptances, we employ cuts on our simulated events: $`p_T>10GeV`$ and $`\theta _{lab}>10^{\circ }`$ from the beam pipe are required to observe a jet. Below, we refer to this choice of acceptance cuts as nominal. In order to compare and contrast dijet production, we present the $`p_T`$ distribution for purely SM production (dashed curve), as well as SM + KK graviton tower exchange for $`n=4`$, and $`M_S=1.0TeV`$ (solid curve) and $`M_S=2.0TeV`$ (dot-dashed curve). The deviation from the SM occurs at larger $`p_T`$ for larger $`M_S`$. Any particular value of $`M_S`$ will have a value of the $`p_T`$ cut which maximizes the deviation from the SM in total cross section: $$\mathrm{\Delta }=\frac{\sigma -\sigma _{SM}}{\delta \sigma }$$ (2) where $`\delta \sigma `$ is the statistical uncertainty in the actual cross section.
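The optimization of Eq. (2) over the $`p_T`$ cut can be illustrated with a toy model. In the sketch below (ours), the spectra are schematic stand-ins (a steeply falling SM-like term plus a harder signal term), not the real matrix elements; only the procedure of scanning the cut for maximal significance is meant to carry over.

```python
import numpy as np

# Toy illustration of maximizing Eq. (2) over the p_T cut. The spectra
# are schematic placeholders, NOT the real cross sections; M_S and the
# luminosity are illustrative.

def toy_spectra(pt, m_s_gev):
    d_sm = 1.0e4 * np.exp(-pt / 25.0)                     # fb/GeV
    d_kk = (pt / 100.0) ** 3 * (1000.0 / m_s_gev) ** 8    # fb/GeV
    return d_sm, d_kk

def significance(pt_cut, m_s_gev, lumi_fb, pt_max=250.0):
    pts = np.linspace(pt_cut, pt_max, 2000)
    d_sm, d_kk = toy_spectra(pts, m_s_gev)
    n_sm = np.trapz(d_sm, pts) * lumi_fb                  # SM events
    n_tot = np.trapz(d_sm + d_kk, pts) * lumi_fb
    return (n_tot - n_sm) / np.sqrt(n_tot)                # Eq. (2)

lumi, m_s = 50.0, 1500.0                 # 50 fb^-1; M_S = 1.5 TeV (toy)
cuts = np.linspace(10.0, 240.0, 93)
sig = [significance(c, m_s, lumi) for c in cuts]
best = cuts[int(np.argmax(sig))]
print(f"optimal p_T cut ~ {best:.0f} GeV (significance {max(sig):.1f})")
```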
With the nominal acceptance cuts, though, we expect in excess of $`10^6`$ events per year (using typical planned luminosities) at each center-of-mass energy considered below. Large event rates are thus possible even if rather severe cuts are applied. Given the behavior of the extra-dimensional gravity contribution illustrated in Figure 1, sensitivity to deviations from the SM (especially at large $`M_S`$) can benefit from a large $`p_T`$ cut, removing much of the cross section where the SM dominates. In order to find the optimal value of the $`p_T`$ cut, we have used an iterative process. We begin with the nominal acceptance cuts listed above, and searched for the highest value of $`M_S`$ which gave a significant deviation from the SM. We defined “significant deviation” to be a $`2\sigma `$ (statistical) deviation. Then, we used that value of $`M_S`$, and varied the $`p_T`$ cut in order to maximize the deviation from the SM; we replaced the original $`p_T`$ cut with this new value. This process is repeated until the values of the $`p_T`$ cut and $`M_S`$ are stabilized. This iterative process converges very rapidly, and we have repeated this optimization process for each center-of-mass energy considered. To obtain specific estimates of possible $`M_S`$ limits, we have considered a 1 year run at center-of-mass energies given by $`500GeV`$, $`1TeV`$, $`1.5TeV`$ and $`2TeV`$. We take conservative values for the integrated luminosity: $`50fb^{-1}`$ at the $`500GeV`$ collider, and $`200fb^{-1}`$ at the others. Longer running times or more optimistic luminosity values will simply increase the search reach. As seen in the expression for the “parton level” subprocesses in Ref. , the cross section depends on the number of dimensions in the bulk, $`n`$. So, in addition to different values of the center-of-mass energy of the linear $`e^+e^{-}`$ collider, we also consider 2 values of $`n`$: $`n=4`$ and $`n=6`$. Our results are summarized in Table I, where achievable limits on $`M_S`$ are shown, as well as the optimum value of the $`p_T`$ cut for each center-of-mass energy. In addition, achievable limits on $`M_S`$ using a nominal $`p_T`$ cut are shown for comparison. The optimization of the $`p_T`$ cut increases the $`M_S`$ limits by at least $`700GeV`$; as expected, the optimization is more effective for larger center-of-mass energy. It is interesting to note that the value of the optimum $`p_T`$ cut is, in all cases, approximately $`46\%`$ of the beam energy of the $`e^+e^{-}`$ collider. In addition to maximizing the deviation from the SM, this large value for the $`p_T`$ cut indicates a very nice signature for extra-dimensional gravity effects: an excess at extremely large $`p_T`$. III. Conclusions In conclusion, we have examined dijet production at $`\gamma \gamma `$ colliders, in order to study the effects of, and search potential for, large extra-dimensional gravity models. We have included a full, tree-level calculation of $`\gamma +\gamma \to q+\overline{q}`$ (SM plus KK graviton tower exchange), and the 1-loop “box” diagram (SM) plus tree-level KK graviton tower exchange for $`\gamma +\gamma \to g+g`$. Furthermore, we maximized the string scale, $`M_S`$, reach by optimizing the $`p_T`$ cut. We found that a rather large $`p_T`$ cut yielded the highest sensitivity to the string scale. At a $`500GeV`$ linear $`e^+e^{-}`$ collider, operating in $`\gamma \gamma `$ mode, using a cut of $`p_T>115GeV`$, dijet production will be sensitive to $`M_S`$ from $`2.75TeV`$ ($`n=6`$) up to $`3.24TeV`$ ($`n=4`$).
These sensitivities are 600–700 GeV higher than they would be with a nominal $`p_T`$ cut of $`10GeV`$. At a $`2TeV`$ linear $`e^+e^{-}`$ collider, operating in $`\gamma \gamma `$ mode, using a cut of $`p_T>465GeV`$, dijet production will be sensitive to $`M_S`$ from $`9.35TeV`$ ($`n=6`$) up to $`11.10TeV`$ ($`n=4`$). At this higher center-of-mass energy, the increase in sensitivity, compared to the nominal $`10GeV`$ $`p_T`$ cut, is even more significant: 2.1–2.6 TeV. These limits assume a 1 year run at conservative luminosity estimates. Longer runs or more optimistic luminosity estimates will, of course, increase the sensitivity to $`M_S`$ further. Dijet production at $`\gamma \gamma `$ colliders is a sensitive and important test of large extra-dimensional gravity. Although many other processes are also very sensitive to deviations from the SM as produced by large extra-dimensional gravity, it is important to have as many independent tests as possible, in order to verify the source of the deviations and to study the models as completely as possible. IV. Acknowledgments The work of MAD was supported, in part, by the Commonwealth College of Penn State University under a Research Development Grant (RDG); the work of RR was supported, in part, by NSF grant DUE-9950702. Figure Captions * $`p_T`$ distribution for dijet production at a $`500GeV`$ $`e^+e^{-}`$ collider operating in $`\gamma \gamma `$ mode. The dashed curve indicates the SM cross section while the solid (dot-dashed) curve indicates the contribution with the addition of extra-dimensional gravity with parameters $`M_S=1(2)TeV`$ and $`n=4`$. Tables
# Measurements of flux dependent screening in Aharonov-Bohm rings When an electric field E is applied to an isolated metallic sample, electron screening gives rise to an induced dipole d. In the linear response regime: $$𝐝=\alpha 𝐄$$ (1) where $`\alpha `$ is the electric polarizability. For a sample of typical size $`a`$ much larger than the Thomas-Fermi screening length $`\lambda _s`$, $`\alpha `$ is essentially determined by geometry, with a negative correction of the order of $`\lambda _s/a`$. The measurement of $`\alpha `$ gives information on the way electrons screen an external electric field. At the mesoscopic scale, when phase coherence through the sample is achieved (i.e. the phase coherence length is of the same order as the typical size of the system), electronic properties are sensitive to the phase of the electronic wave functions, which can be tuned by an Aharonov-Bohm flux in a ring geometry. It has been recently suggested that screening of an electric field may be sensitive to this phase coherence, leading to a flux dependent mesoscopic correction to the polarizability. In particular the polarizability of an Aharonov-Bohm ring is expected to exhibit oscillations as a function of the magnetic flux. We have inferred the electrical response of an array of rings from the flux dependence of the capacitance $`C`$, placed underneath the rings, of an RF superconducting micro-resonator (Fig 1, left inset). This experiment has been checked to be only sensitive to the electrical response of the rings. $`C`$ is modified by the non-dissipative response $`\alpha ^{^{}}`$ of the rings: $$\frac{\delta C}{C}=kN_s\alpha ^{^{}}$$ (2) where $`N_s`$ is the number of rings coupled to the resonator and $`k`$ the electric coupling coefficient between a ring and the capacitance, which only depends on geometry. Since the resonance frequency $`f=1/(2\pi \sqrt{LC})`$, with $`L`$ the inductance of the resonator, the change in $`C`$ shifts the resonance frequency. In addition the dissipative response $`\alpha ^{^{\prime \prime }}`$ of the rings weakens the quality factor $`Q`$: $$\frac{\delta f}{f}=-\frac{1}{2}kN_s\alpha ^{^{}}(\omega ),\qquad \delta (\frac{1}{Q})=kN_s\alpha ^{^{\prime \prime }}(\omega )$$ (3) In order to measure $`\delta f`$ and $`\delta Q`$, the RF frequency is modulated at 100 kHz. Using lock-in detection, the reflected signal from the resonator at the modulated RF frequency is measured and used to lock the experimental setup on the resonance frequency. The feedback signal is then proportional to the variation of $`f`$, whereas the signal at double the modulation frequency is proportional to the variation of $`Q`$. We are careful to inject sufficiently low power ($`10`$ pW) so as not to heat the sample. The signal is measured as a function of magnetic field. To improve accuracy, the derivative of the signal is also detected by modulating at 30 Hz a magnetic field of 1 Gauss amplitude. Our precision with this setup is $`\delta f/f=10^{-8}`$ and $`\delta Q/Q=10^{-8}`$. The rings are etched in a high-mobility GaAs-AlGaAs heterojunction. Etching strongly decreases the conductivity of the rings, but the nominal conductivity is recovered by illuminating the sample with an infrared diode. We have checked this on a connected sample.
The characteristics of the rings, deduced from transport measurements on wires of the same width etched in the same heterojunction, are the following: at nominal electronic density ($`n_e=3\times 10^{11}`$cm<sup>-2</sup>) the mean free path $`l_e=3\mu `$m, the etched width is 0.5 $`\mu `$m whereas the effective width $`W=`$0.2 $`\mu `$m (estimated from weak localization experiments) is much smaller due to depletion, the coherence length is $`L_\mathrm{\Phi }=5.5\mu `$m and the effective perimeter $`L=`$5.2 $`\mu `$m. The rings are thus ballistic in the transverse direction and diffusive longitudinally. The mean level spacing $`\mathrm{\Delta }=h^2/(2\pi mWL)\approx 80`$ mK $`\approx 1.66`$ GHz and the Thouless energy $`E_c=hD/L^2=450`$ mK, with $`D`$ the diffusion coefficient and $`m`$ the effective mass of electrons. The Thomas-Fermi screening length is $`\lambda _s=\pi a^{*}/2=16`$ nm, with $`a^{*}`$ the effective Bohr radius. There are $`N=10^5`$ rings on one sample. The resonator is made by optical lithography in niobium on a sapphire substrate. The length of the capacitance is $`l=20.5`$ cm and the inductance is 5 cm long. It has a single resonance frequency $`f_0=385`$ MHz and a quality factor of 10 000. The distance between the capacitance and the inductance is 300 $`\mu `$m. A 0.9 $`\mu `$m thick mylar film is inserted between the detector and the ring substrate in order to reduce inhomogeneities of the magnetic field due to the Meissner effect in the vicinity of the superconducting resonator. The system constituted by the resonator and the sample has a reduced quality factor of 3000, probably due to dielectric losses in the GaAs substrate, and a resonance frequency of 350 MHz. The system is cooled in a dilution refrigerator down to 18 mK. The typical field dependence of the rings' contribution to $`\delta f/f`$ is shown in figure 1. This signal is superimposed on the diamagnetic response of the niobium resonator (Fig 1, right inset), which we subtract in the following way: the base line of the derivative of the resonance frequency is removed and the signal is then integrated. One thus obtains the curves of figures 1 and 2, which are directly proportional to the flux dependence of the polarizability of the rings. The resonance frequency is periodic in field with a period of 12.5 G, which corresponds to half a flux quantum $`\mathrm{\Phi }_0/2=h/2e`$ in a ring, consistent with an Aharonov-Bohm effect averaged over many rings. The resonance frequency decreases by about 100 Hz between 0 and 6 G. The magnetic field reduces the amplitude of the oscillations, which cannot be detected for a field higher than 40 G, consistent with the finite width of the ring. Illuminating the sample with the electroluminescent diode strongly affects the measured signal. At low illumination the amplitude of the oscillations increases, and it decreases at higher illuminations (see Fig 3). The extremum amplitude of $`\delta _\mathrm{\Phi }f/f=(f(\mathrm{\Phi }=\mathrm{\Phi }_0/4)-f(\mathrm{\Phi }=0))/f`$ is $`2.8\times 10^{-7}`$. In our geometry one has: $$k=\frac{1}{\pi ϵ_0ϵald}\frac{\mathrm{ln}(\frac{d+a}{d-a})}{\mathrm{ln}(\frac{d}{r})}$$ (4) with the relative dielectric constant of GaAs $`ϵ=12.85`$, the size of the rings $`a=1.3\mu `$m, the length of the capacitance $`l=20.5`$ cm, the distance between one lead of the capacitance and a ring $`d/2=3.15\mu `$m and the width of the lead $`r=1\mu `$m.
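Combining Eq. (4) with Eq. (3) allows a quick consistency check of the quoted numbers. The snippet below is our own cross-check, not code from the experiment; taking $`N_s`$ as roughly half of the $`10^5`$ rings and $`R=L/2\pi `$ for the comparison polarizability are our assumptions.

```python
import numpy as np

# Coupling k from Eq. (4) and conversion of the measured frequency
# shift to the flux-dependent polarizability via Eq. (3), using the
# parameter values quoted in the text.

eps0, eps = 8.854e-12, 12.85     # F/m; GaAs dielectric constant
a, l = 1.3e-6, 20.5e-2           # ring size, capacitance length (m)
d, r = 2 * 3.15e-6, 1.0e-6       # lead-to-ring distance, lead width (m)
N_s = 0.5e5                      # rings well coupled (about half of 10^5)

k = (1 / (np.pi * eps0 * eps * a * l * d)) \
    * np.log((d + a) / (d - a)) / np.log(d / r)

df_over_f = -2.8e-7                       # measured extremum of delta f/f
alpha_prime = -2 * df_over_f / (k * N_s)  # invert Eq. (3)

R, W = 5.2e-6 / (2 * np.pi), 0.2e-6       # R = L/(2 pi): our assumption
alpha_1d = eps0 * np.pi ** 2 * R ** 3 / np.log(R / W)
print(f"k = {k:.2e} (F m^2)^-1")
print(f"delta alpha'/alpha_1D = {alpha_prime / alpha_1d:.1e}")  # ~0.7e-3
```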
Since all the rings are not identically coupled to the resonator, $`k`$ has to be understood as an average capacitive coupling between one ring and the capacitance. Note that approximately half the rings are well coupled to the capacitance. Because of these approximations the experimental value is given with a $`50\%`$ error range. One obtains $`\delta _\mathrm{\Phi }\alpha ^{}/\alpha _{1D}=0.7\times 10^{-3}`$, where $`\alpha _{1D}=ϵ_0\pi ^2R^3/\mathrm{ln}(R/W)`$ is the polarizability of a quasi one dimensional (quasi-1D) circular ring of radius R. The dissipative part of the polarizability is obtained from the field dependence of $`Q`$ at different illuminations (Fig 2). It exhibits a periodic behavior with the same period as the resonance frequency. At low illumination $`1/Q`$ decreases with magnetic field for small field, whereas at higher illumination a dip in the zero field region, which is not understood, appears, indicating an increase of $`1/Q`$ with field. The typical amplitude of $`\delta _\mathrm{\Phi }(1/Q)`$ is $`10^{-7}`$, hence $`\delta _\mathrm{\Phi }\alpha ^{^{\prime \prime }}/\alpha _{1D}=1.3\times 10^{-4}`$. The sensitivity of the electrostatic properties of mesoscopic systems to quantum coherence has been emphasized by Büttiker for connected geometries. The phase coherent correction to the polarizability of isolated systems has been recently theoretically investigated. Efetov found that it is possible to relate self-consistently this correction to the flux dependence of the screened potential. Noat et al. calculated this effect in the diffusive regime and found no flux correction in the canonical ensemble. In the grand canonical (GC) ensemble no effect is predicted if the RF pulsation $`\omega `$ is much smaller than the inverse relaxation time $`\gamma `$. However when $`\omega \gg \gamma `$ the correction to the polarizability is positive. In particular for quasi-1D rings, one has: $$\frac{\delta _\mathrm{\Phi }\alpha ^{^{}}}{\alpha _{1D}}=\frac{ϵ}{16\pi ^2\mathrm{ln}(R/W)}\frac{\mathrm{\Delta }}{E_c}\frac{\lambda _s}{W}$$ (5) Using supersymmetry techniques Blanter and Mirlin essentially confirmed these results. Since the rings in our experiment are completely isolated, the results concerning the canonical ensemble, in which the effect is predicted to be zero or very small, should apply. However our experiment shows unambiguously a decrease of the resonance frequency as we increase the magnetic field at low field: this corresponds to a positive magneto-polarizability. The GC result (eq. 5) leads to $`\delta _\mathrm{\Phi }\alpha ^{^{}}/\alpha _{1D}=0.8\times 10^{-3}`$, which is close to the experimental value. Therefore the value and the sign of the effect are consistent with rings considered in the GC case in the limit $`\gamma \ll \omega `$. This discrepancy with the canonical prediction can be related to an ensemble averaging intermediate between canonical and GC, a situation called GC-canonical by Kamenev and Gefen: if the system is brought to a GC equilibrium at a certain value of the magnetic flux and then submitted to a time dependent flux whose time scale is faster than the particle equilibration time, the response of the system can be identical to the GC case in the limit $`\gamma \ll \omega `$. Another possibility is that the mathematical cancellation responsible for the absence of magneto-polarizability in the canonical ensemble disappears when one does not have the condition $`\omega \ll \mathrm{\Delta }`$.
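As a numerical cross-check (ours, not from the paper), Eq. (5) can be evaluated with the sample parameters quoted earlier; taking the mean ring radius as $`R=L/2\pi \approx 0.83\mu `$m from the effective perimeter is our assumption.

```python
import numpy as np

# GC prediction of Eq. (5) with the sample parameters from the text.
# R = L/(2 pi) is our assumption for the mean ring radius.

eps = 12.85                      # GaAs dielectric constant
R = 5.2e-6 / (2 * np.pi)         # from effective perimeter L = 5.2 um
W = 0.2e-6                       # effective width (m)
delta_mK, Ec_mK = 80.0, 450.0    # mean level spacing, Thouless energy
lam_s = 16e-9                    # Thomas-Fermi screening length (m)

dalpha = eps / (16 * np.pi ** 2 * np.log(R / W)) \
         * (delta_mK / Ec_mK) * (lam_s / W)
print(f"delta_Phi alpha' / alpha_1D = {dalpha:.2e}")  # ~0.8e-3, as quoted
```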
We find that the polarizability is greater in a magnetic field than in zero field. This can be related to the flux dependence of screening within a simple model in which the Thomas-Fermi screening length is flux dependent. As the correction to the polarizability due to screening is negative and of the order of $`\lambda _s/a`$, an increase of polarizability corresponds to a decrease of the screening length. Hence the charges are more concentrated on the edges of the sample in the presence of a magnetic flux. This phenomenon has to be related to the disappearance of weak localization and thus the enhancement of the metallic character of the sample in the presence of a magnetic field. Concerning the quality factor, our results are, at least at low electronic density, in agreement with what is predicted for the dissipative part of the magneto-polarizability. For low magnetic field we observe a negative $`\delta _\mathrm{\Phi }(1/Q)`$ which corresponds to a negative $`\delta _\mathrm{\Phi }\alpha ^{^{\prime \prime }}`$ according to (3). The measured ratio $`\delta _\mathrm{\Phi }\alpha ^{^{\prime \prime }}/\delta _\mathrm{\Phi }\alpha ^{^{}}=0.29`$ for the illumination 0. Note that this ratio does not depend on the electric coupling coefficient and hence can be determined with a good accuracy. When $`\mathrm{\Delta }`$ is less than the temperature, $`\delta _\mathrm{\Phi }\alpha ^{^{\prime \prime }}/\delta _\mathrm{\Phi }\alpha ^{^{}}`$ is related to the level spacing distribution function, which obeys universal rules of random matrix theory: $$\frac{\delta _\mathrm{\Phi }\alpha ^{^{\prime \prime }}}{\delta _\mathrm{\Phi }\alpha ^{^{}}}=\frac{2\pi \omega }{\mathrm{\Delta }}\left(R^{GUE}(\frac{\pi \omega }{\mathrm{\Delta }})-R^{GOE}(\frac{\pi \omega }{\mathrm{\Delta }})\right)$$ (6) $`R^{GUE}`$ is the two level correlation function in the gaussian unitary ensemble and $`R^{GOE}`$ in the gaussian orthogonal ensemble. This formula is valid in the limit $`\gamma \ll \omega `$ and $`\gamma \ll \mathrm{\Delta }`$ and yields $`\delta _\mathrm{\Phi }\alpha ^{^{\prime \prime }}/\delta _\mathrm{\Phi }\alpha ^{^{}}=0.26`$, which is close to the experimental value. This is a good indication that we are effectively in the regime $`\gamma \ll \omega `$ and $`\gamma \ll \mathrm{\Delta }`$, as confirmed by the temperature dependence of the signal. The first harmonic of $`\delta _\mathrm{\Phi }f/f`$ obeys an exponential decay with a typical temperature scale of 90 mK, independent of illumination (Fig 4). Taking the temperature dependence of the first harmonic to be $`\mathrm{exp}(-2L/L_\mathrm{\Phi })`$, as in weak localization, and supposing that all the temperature dependence comes from $`L_\mathrm{\Phi }`$, one has $`L_\mathrm{\Phi }\propto 1/T`$. We deduced $`\gamma =1/\tau _\mathrm{\Phi }=D/L_\mathrm{\Phi }^2\approx 0.8`$ mK at 18 mK. It increases like $`T^2`$, in agreement with theoretical predictions on the broadening of single electron energy levels due to electron-electron interaction in a quantum dot, and remains below $`\mathrm{\Delta }`$ up to $`T=180`$ mK. At 50 mK, $`L_\mathrm{\Phi }\approx 18\mu `$m, which is larger than the coherence length deduced from weak-localization measurements on connected samples ($`L_\mathrm{\Phi }\approx 6.5\mu `$m at the same temperature). In this latter case one has a 1D geometry and a broadening of the energy levels due to the coupling with reservoirs, whereas in an isolated ring the energy spectrum is discrete. Illumination increases the electronic density in the rings.
We start from a situation where the rings are empty or in a localized electronic state. The increase of the signal observed at low illumination corresponds to the repopulation of these depleted rings. The subsequent decay of the signal can be attributed to the increase of the average conductance of the rings and the $`1/g`$ dependence of the magneto-polarizability in the diffusive regime (cf. eq. 5; $`g`$ is the dimensionless conductance, defined as $`g=E_c/\mathrm{\Delta }`$). It is instructive to compare these results with previous measurements on a similar array of rings coupled to a multi-mode strip line resonator sensitive to both electric and magnetic response. Similar amplitudes of the flux dependence of the resonance frequency are found in both experiments, indicating that the electric part of the response of the rings is at least of the same order of magnitude as the magnetic one. We plan to measure this latter quantity by coupling the rings to the inductive part of the resonator. To conclude, we have measured the flux dependent part of the ac-polarizability of mesoscopic rings down to 18 mK. Both the non-dissipative and dissipative parts of the polarizability exhibit a small correction periodic in flux with a period of half a flux quantum in a ring. The correction of the non-dissipative part is positive in low magnetic field, in agreement with theoretical predictions in the GC ensemble in the limit $`\gamma \ll \omega `$. It indicates a better screening of the electric field in the presence of magnetic flux. The correction to the dissipative part is negative for low field, at least for low electronic density. The effect on the polarizability is qualitatively consistent with a $`1/g`$ dependence on the dimensionless conductance $`g`$. These corrections are sensitive to temperature, with a typical scale of 90 mK. It would be interesting to pursue these studies in the low frequency regime. We thank B. Etienne for the fabrication of the heterojunction and acknowledge fruitful discussions with M. Nardone, L.P. Lévy, L. Malih, A. Mac Farlane, S. Guéron and L. Limot.
# Reconstructing the Cosmic Equation of State from Supernova distances ## Abstract Observations of high-redshift supernovae indicate that the universe is accelerating. Here we present a model-independent method for estimating the form of the potential $`V(\varphi )`$ of the scalar field driving this acceleration, and the associated equation of state $`w_\varphi `$. Our method is based on a versatile analytical form for the luminosity distance $`D_L`$, optimized to fit observed distances to distant supernovae and differentiated to yield $`V(\varphi )`$ and $`w_\varphi `$. Our results favor $`w_\varphi \approx -1`$ at the present epoch, steadily increasing with redshift. A cosmological constant is consistent with our results. preprint: IUCAA-41/99 The observed relation between luminosity distance and redshift for extragalactic Type Ia Supernovae (SNe) appears to favor an accelerating Universe, where almost two-thirds of the critical energy density may be in the form of a component with negative pressure. Although this is consistent with $`\mathrm{\Omega }_\mathrm{M}<1`$ and a cosmological constant $`\mathrm{\Lambda }>0`$, at the theoretical level a constant $`\mathrm{\Lambda }`$ runs into serious difficulties, since the present value of $`\mathrm{\Lambda }`$ is ∼10<sup>123</sup> times smaller than predicted by most particle physics models. However, neither the present data nor the theoretical models require $`\mathrm{\Lambda }`$ to be exactly constant. To explore the possibility that the $`\mathrm{\Lambda }`$-like term (e.g. quintessence) is time-dependent, we use a model for it that mimics the simplest variant of the inflationary scenario of the early Universe. A variable $`\mathrm{\Lambda }`$-term is described in terms of an effective scalar field (referred to here as the $`\mathrm{\Lambda }`$-field) with some self-interaction $`V(\varphi )`$, which is minimally coupled to the gravitational field and has little or no coupling to other known physical fields. In analogy to the inflationary scenario, more fundamental theories like supergravity or M-theory can provide a number of possible candidates for the $`\mathrm{\Lambda }`$-field but do not uniquely predict its potential $`V(\varphi )`$. On the other hand, it is remarkable that $`V(\varphi )`$ may be directly reconstructed from present-day cosmological observations. The aim of the present letter is to go from observations to theory, i.e. from $`D_L(z)`$ to $`V(\varphi )`$, following the prescription outlined by Starobinsky. This is the first attempt at reconstructing $`V(\varphi )`$ from real observational data without resorting to specific models (e.g. cosmological constant, quintessence etc.). Since the spatially flat Universe ($`\mathrm{\Omega }_\varphi +\mathrm{\Omega }_\mathrm{M}=1`$) is both predicted by the simplest inflationary models and agrees well with observational evidence, we will not consider spatially curved Friedmann-Robertson-Walker (FRW) cosmological models.
In a flat FRW cosmology, the luminosity distance $`D_L`$ and the coordinate distance $`r`$ to an object at redshift $`z`$ are simply related as ($`c=1`$ here and elsewhere) $$a_0r=a_0\int _t^{t_0}\frac{dt^{}}{a(t^{})}=\frac{D_L(z)}{1+z}.$$ (1) This uniquely defines the Hubble parameter $$H(z)\equiv \frac{\dot{a}}{a}=\left[\frac{d}{dz}\left(\frac{D_L(z)}{1+z}\right)\right]^{-1}.$$ (2) Note that this relation is purely kinematic and depends neither upon a microscopic model of matter, including a $`\mathrm{\Lambda }`$-term, nor on a dynamical theory of gravity. For a sample of objects (in this case, extragalactic SNe Ia) for which luminosity distances $`D_L`$ are measured, one can fit an analytical form to $`D_L`$ as a function of $`z`$, and then estimate $`H(z)`$ from (2). If $`\rho _m=(3H_0^2/8\pi G)\mathrm{\Omega }_\mathrm{M}(a/a_0)^{-3}`$ is the density of dust-like cold dark matter and the usual baryonic matter, then $`H^2`$ $`=`$ $`{\displaystyle \frac{8}{3}}\pi G\left(\rho _m+{\displaystyle \frac{1}{2}}\dot{\varphi }^2+V(\varphi )\right),`$ (3) from where it follows that $$\dot{H}=-4\pi G(\rho _m+\dot{\varphi }^2).$$ (4) Eqs. (3) & (4) can be rephrased in the following form convenient for our current reconstruction exercise, $`{\displaystyle \frac{8\pi G}{3H_0^2}}V(x)`$ $`=`$ $`{\displaystyle \frac{H^2}{H_0^2}}-{\displaystyle \frac{x}{6H_0^2}}{\displaystyle \frac{dH^2}{dx}}-{\displaystyle \frac{1}{2}}\mathrm{\Omega }_\mathrm{M}x^3,`$ (5) $`{\displaystyle \frac{8\pi G}{3H_0^2}}\left({\displaystyle \frac{d\varphi }{dx}}\right)^2`$ $`=`$ $`{\displaystyle \frac{2}{3H_0^2x}}{\displaystyle \frac{d\mathrm{ln}H}{dx}}-{\displaystyle \frac{\mathrm{\Omega }_\mathrm{M}x}{H^2}},`$ (6) where $`x\equiv 1+z`$. Thus from the luminosity distance $`D_L`$, both $`H(z)`$ and $`dH(z)/dz`$ can be unambiguously calculated. This allows us to reconstruct the potential $`V(z)`$ and $`d\varphi /dz`$ if the value of $`\mathrm{\Omega }_M`$ is additionally given. Integrating the latter equation, we can determine $`\varphi (z)`$ (to within an additive constant) and, therefore, reconstruct the form of $`V(\varphi )`$. Note also that the present Hubble constant $`H_0\equiv H(z=0)`$ enters in a multiplicative way in all expressions. Thus, neither the potential $`V(\varphi )/H_0^2`$ nor the cosmic equation of state $`w_\varphi (z)`$ depends upon the actual value of $`H_0`$. A fitting function for $`D_L`$: We use a rational (in terms of $`\sqrt{x}`$) ansatz for the luminosity distance $`D_L`$, $$\frac{D_L}{x}=\frac{2}{H_0}\left[\frac{x-\alpha \sqrt{x}-1+\alpha }{\beta x+\gamma \sqrt{x}+2-\alpha -\beta -\gamma }\right]$$ (7) where $`\alpha `$, $`\beta `$ and $`\gamma `$ are fitting parameters. This function has the following important features: it is valid for a wide range of models, and it is exactly equal to the analytical form given by (1) for the two extreme cases: $`\mathrm{\Omega }_\varphi =0,1`$. At these two limits, as $`\mathrm{\Omega }_\mathrm{M}\to 1`$, $`\alpha +\gamma \to 1`$ and $`\beta \to 1`$; and as $`\mathrm{\Omega }_\mathrm{M}\to 0`$, $`\alpha ,\beta ,\gamma \to 0`$. The accuracy of our ansatz is illustrated in Fig. 1.
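The reconstruction chain is short enough to sketch numerically. In the snippet below (ours; the parameter values are placeholders chosen to satisfy the constraints (9) and (10) introduced below, not the best-fit values of Table I), we implement Eq. (7), obtain $`H(z)`$ from Eq. (2) by numerical differentiation, and evaluate the potential via Eq. (5):

```python
import numpy as np

# Reconstruction chain: ansatz (7) -> H(z) via Eq. (2) -> V via Eq. (5).
# Units: H0 = 1, distances in c/H0, V in units of 3 H0^2 / (8 pi G).
# alpha, beta, gamma are placeholders (chosen so Omega_M_tilde = 0.3).

alpha, beta, gamma_, Omega_M = 0.9, 0.65, 0.187, 0.3

def dl_over_x(x):
    """Eq. (7): D_L / (1+z)."""
    num = x - alpha * np.sqrt(x) - 1.0 + alpha
    den = beta * x + gamma_ * np.sqrt(x) + 2.0 - alpha - beta - gamma_
    return 2.0 * num / den

def hubble(x, h=1e-5):
    """Eq. (2): H(z) = [d(D_L/(1+z))/dz]^(-1), central differences."""
    return 2.0 * h / (dl_over_x(x + h) - dl_over_x(x - h))

def potential(x, h=1e-4):
    """Eq. (5): (8 pi G / 3 H0^2) V(x)."""
    dH2 = (hubble(x + h) ** 2 - hubble(x - h) ** 2) / (2.0 * h)
    return hubble(x) ** 2 - x * dH2 / 6.0 - 0.5 * Omega_M * x ** 3

for z in (0.0, 0.25, 0.5, 0.83):
    x = 1.0 + z
    print(f"z = {z:4.2f}:  H/H0 = {hubble(x):.3f},  V = {potential(x):.3f}")
```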
We choose this form since the value of $`H(z)`$ obtained by differentiating $`D_L/x`$, according to (2), has the correct asymptotic behavior: $`H(z)/H_0\to 1`$ as $`z\to 0`$, and $`H(z)/H_0=\stackrel{~}{\mathrm{\Omega }}_\mathrm{M}^{1/2}(1+z)^{3/2}`$ for $`z\gg 1`$, where $$\stackrel{~}{\mathrm{\Omega }}_\mathrm{M}=\left(\frac{\beta ^2}{\alpha \beta +\gamma }\right)^2.$$ (8) This ensures that at high-$`z`$, the Universe has gone through a matter dominated phase. It should be noted that $`\stackrel{~}{\mathrm{\Omega }}_\mathrm{M}`$ can be slightly larger than the CDM component $`\mathrm{\Omega }_\mathrm{M}`$ since the $`\mathrm{\Lambda }`$-field (or quintessence) can have an equation of state mimicking cold matter (dust) at high redshifts. For instance, $`\stackrel{~}{\mathrm{\Omega }}_\mathrm{M}\approx 1.1\mathrm{\Omega }_\mathrm{M}`$ in the quintessence model considered by Sahni & Wang. On the other hand, $`\stackrel{~}{\mathrm{\Omega }}_\mathrm{M}\lesssim 1.15\mathrm{\Omega }_\mathrm{M}`$, to ensure that there is sufficient growth of perturbations during the matter-dominated epoch (see, e.g., the relevant discussion in the literature). Note that the right hand side of (6) should be non-negative for the minimally coupled scalar field model. At $`z=0`$, this condition gives $$\frac{4\beta +2\gamma -\alpha }{2-\alpha }\ge 3\mathrm{\Omega }_\mathrm{M},$$ (9) where the equality sign occurs when the $`\mathrm{\Lambda }`$-term is constant. The fact that $`D_L`$ is smaller in a universe with time-dependent $`\mathrm{\Lambda }`$-term than it is in a constant-$`\mathrm{\Lambda }`$ universe leads to a lower limit for the parameter $`\beta `$. When taken together with the fact that $`\beta \to 1`$ as $`\mathrm{\Omega }_\mathrm{M}\to 1`$ ($`\mathrm{\Omega }_\varphi \to 0`$) this leads to the following set of constraints $$1\le \frac{1}{\beta }\le \frac{1}{2}\int _1^{\mathrm{\infty }}\frac{dx}{\sqrt{1-\mathrm{\Omega }_\mathrm{M}+\mathrm{\Omega }_\mathrm{M}x^3}}.$$ (10) The observational data: To date, about 100 SNe Ia in the redshift range $`z=0.1`$–$`1`$ have been discovered, a large fraction of which have reliable published data from which luminosity distances can be calculated. We use the 54 SNe Ia from the preferred “primary fit” (‘C’ in their Table 1) of the Supernova Cosmology Project, including the low-$`z`$ Calan Tololo sample as used therein. We adopt the quoted redshifts, reducing them to the cosmic microwave background frame. Maximum likelihood fits: The luminosity distance $`D_L`$ (Mpc) is related to the measured quantity, the corrected apparent peak $`B`$ magnitude $`m_B`$, as $`m_B=M_0+25+5\mathrm{log}_{10}D_L`$, where $`M_0`$ is the absolute peak luminosity of the SN. The function to be minimized is $$\chi ^2\equiv \underset{i=1}{\overset{n}{\sum }}\frac{\left[y(z_i)-y(m_{Bi})\right]^2}{\sigma _i^2};\qquad y(z)\equiv 10^{M_0/5}D_L(z).$$ (11) A fourth fitting parameter, $`\kappa =2\times 10^{M_0/5}(c/H_0)`$, which is required in addition to $`\alpha ,\beta ,\gamma `$ in the above minimization process, includes both $`M_0`$ and $`H_0`$, which cannot be measured independently of each other. For instance, if $`M_0=-19.5\pm 0.1`$ and $`\mathrm{\Omega }_\mathrm{M}=0.3`$, the value of $`H_0=61.3\pm 2.9`$ km s<sup>-1</sup> Mpc<sup>-1</sup>. Note that $`\kappa `$ only features in the fit of (7) to the data, and does not play a role in the reconstruction of $`V(\varphi )`$.
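To illustrate the fitting step concretely, the sketch below (ours) generates mock magnitudes from a flat $`\mathrm{\Lambda }`$CDM model and minimizes the $`\chi ^2`$ of Eq. (11) over $`(\alpha ,\beta ,\gamma ,\kappa )`$. The noise level, sample and starting point are arbitrary; the actual analysis also used orthogonal errors and imposed the constraints (9) and (10), which are omitted here for brevity.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize

# Mock chi^2 fit of Eq. (11). Mock data from flat LCDM, Omega_M = 0.3;
# 0.2 mag noise and the starting point are arbitrary choices.

c_over_H0, Om, M0 = 4283.0, 0.3, -19.5      # Mpc (H0 = 70 km/s/Mpc)

def dl_lcdm(z):
    I, _ = quad(lambda s: 1 / np.sqrt(Om * (1 + s) ** 3 + 1 - Om), 0.0, z)
    return (1 + z) * c_over_H0 * I          # Mpc

rng = np.random.default_rng(42)
z_obs = rng.uniform(0.02, 0.83, size=54)
m_obs = M0 + 25 + 5 * np.log10([dl_lcdm(z) for z in z_obs]) \
        + rng.normal(0.0, 0.2, 54)
y_obs = 10 ** ((m_obs - 25) / 5)            # = 10^(M0/5) D_L, M0-free
sigma = (np.log(10) / 5) * 0.2 * y_obs      # propagated 0.2 mag errors

def y_model(z, p):
    a, b, g, kappa = p
    x = 1.0 + z
    num = x - a * np.sqrt(x) - 1 + a
    den = b * x + g * np.sqrt(x) + 2 - a - b - g
    return kappa * x * num / den            # kappa = 2*10^(M0/5)*c/H0

chi2 = lambda p: np.sum((y_model(z_obs, p) - y_obs) ** 2 / sigma ** 2)
res = minimize(chi2, x0=[0.9, 0.65, 0.19, 1.0], method="Nelder-Mead")
print("best fit (alpha, beta, gamma, kappa):", np.round(res.x, 3))
print("chi^2 =", round(res.fun, 1), "for", len(z_obs) - 4, "dof")
```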
To obtain the best fit model, we perform an orthogonal chi-square fit, using errors on both the magnitude and redshift axes in $`\sigma _i`$, subject to the constraints (9), (10) and the condition $`\stackrel{~}{\mathrm{\Omega }}_\mathrm{M}=\mathrm{\Omega }_\mathrm{M}`$. The latter condition is used for simplicity – our results remain essentially the same even if we use the entire permitted range $`\stackrel{~}{\mathrm{\Omega }}_\mathrm{M}\lesssim 1.15\mathrm{\Omega }_\mathrm{M}`$. The results shown in Table I and in Figure 2 are for $`\mathrm{\Omega }_\mathrm{M}=0.3`$. In arriving at the best fit, the two constraints in (10) are found to be redundant, which means that only two constraints, (9) and $`\stackrel{~}{\mathrm{\Omega }}_\mathrm{M}=\mathrm{\Omega }_\mathrm{M}`$, are actually used. Reconstructing the scalar field potential: We show the form of the effective potential $`V(z)`$ reconstructed using (5) in Fig. 3, along with the corresponding plot for $`V(\varphi )`$, where $`\varphi `$ is calculated by integrating (6). The field $`\varphi `$ is determined up to an additive constant $`\varphi _0`$, so we take $`\varphi `$ to be zero at the present epoch ($`z=0`$). Our experiments with several realizations of synthetic data show that this method works best if we fix the value of $`\mathrm{\Omega }_\mathrm{M}`$. Henceforth, all reconstructed quantities are shown for $`\mathrm{\Omega }_\mathrm{M}=0.3`$. For a scalar field, the pressure $`p=\frac{1}{2}\dot{\varphi }^2-V`$ and the energy density $`\epsilon \equiv T_0^0=\frac{1}{2}\dot{\varphi }^2+V`$ are related by the equation of state, $$w_\varphi (x)\equiv \frac{p}{\epsilon }=\frac{(2x/3)\,d\mathrm{ln}H/dx-1}{1-\left(H_0^2/H^2\right)\mathrm{\Omega }_\mathrm{M}x^3}.$$ (12) For the cosmological constant, $`w=-1`$, while quintessence models generally require $`-1\le w\le 0`$ for $`z\lesssim 2`$. Our reconstruction for $`w_\varphi (z)`$ according to (12) is plotted in Fig. 4. There is some evidence of possible evolution in $`w_\varphi `$, with $`-1\le w_\varphi \lesssim -0.86`$ preferred at the present epoch, and $`-1\le w_\varphi \lesssim -0.66`$ at $`z=0.83`$, the farthest SN in the sample (both at 68% confidence; the upper limits correspond to $`-0.80`$ and $`-0.46`$ at 90% confidence respectively). However, a cosmological constant with $`w=-1`$ is consistent with the data. The errors quoted in this paper are calculated using a Monte-Carlo method, where, in a region around the best-fit values of the parameters shown in Table 1, random points are chosen in parameter space from the probability distribution function given by the $`\chi ^2`$-function that is minimized to yield the best fit. At each value of $`z`$ in the given range, the function in question is evaluated at over $`10^7`$ such points, and the errors enclosing 68% and 90% of all the values centered on the median are shown in the figures. The ages of objects: Our ansatz (7) also provides us with a model-independent means of finding the age of the universe at a redshift $`z`$, $$t(z)=H_0^{-1}\int _z^{\mathrm{\infty }}\frac{dz^{}}{(1+z^{})h(z^{})},$$ (13) where the value of $`h(z)\equiv H(z)/H_0`$ is determined from (2). Figure 5 shows the age of the Universe at a given $`z`$ and compares it with the ages of two high redshift galaxies and the quasar B1422+231.
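Both Eq. (12) and Eq. (13) follow directly from the fitted ansatz; the sketch below (ours, with the same placeholder parameters as before, and $`H_0=70`$ km s<sup>-1</sup> Mpc<sup>-1</sup> assumed only for the conversion of the age to Gyr) evaluates them:

```python
import numpy as np
from scipy.integrate import quad

# w(z) from Eq. (12) and the age t(z) from Eq. (13) for the ansatz (7).
# alpha, beta, gamma are the placeholder values used earlier; H0 enters
# only through the Gyr conversion.

alpha, beta, gamma_, Omega_M = 0.9, 0.65, 0.187, 0.3
H0_INV_GYR = 9.78 / 0.70                    # 1/H0 in Gyr for h = 0.70

def dl_over_x(x):
    num = x - alpha * np.sqrt(x) - 1 + alpha
    den = beta * x + gamma_ * np.sqrt(x) + 2 - alpha - beta - gamma_
    return 2.0 * num / den

def h_of_x(x, e=1e-5):                      # H/H0 from Eq. (2)
    return 2.0 * e / (dl_over_x(x + e) - dl_over_x(x - e))

def w_phi(x, e=1e-4):                       # Eq. (12)
    dlnH = (np.log(h_of_x(x + e)) - np.log(h_of_x(x - e))) / (2 * e)
    return ((2 * x / 3) * dlnH - 1) / (1 - Omega_M * x ** 3 / h_of_x(x) ** 2)

def age_gyr(z, z_max=1000.0):               # Eq. (13), cut off at z_max
    I, _ = quad(lambda s: 1.0 / ((1 + s) * h_of_x(1 + s)), z, z_max)
    return H0_INV_GYR * I

for z in (0.0, 0.5, 0.83):
    print(f"z = {z:4.2f}:  w = {w_phi(1 + z):+.3f},  t = {age_gyr(z):5.2f} Gyr")
```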
We find that the requirement that the Universe be older than any of its constituents at a given redshift is consistent with our best-fit model, which is a positive feature since a flat matter-dominated Universe must have an uncomfortably small value of $`H_0`$ to achieve this. Discussion: In this letter, we have proposed a simple, analytical, three-parameter ansatz describing the luminosity distance as a function of redshift in a flat FRW universe. The form of this ansatz is very flexible and can be applied to determine $`D_L`$ either from supernovae observations (as we have done) or from other cosmological tests such as lensing, the angular size-redshift relation etc. Using the resulting form of $`D_L`$ we reconstruct the potential of a minimally coupled scalar $`\mathrm{\Lambda }`$-field (or quintessence) and its equation of state $`w_\varphi (z)`$. It should be noted that the basic equations of this ansatz, (2), (7), (12) & (13), are flexible and can be applied to models other than those considered in the present paper. For instance one can venture beyond minimally coupled scalar fields by dropping either one or both of the constraints (9) & (10) (this is equivalent to removing the constraint $`\rho _\mathrm{\Lambda }+p_\mathrm{\Lambda }\ge 0`$ on the $`\mathrm{\Lambda }`$-field). Even with the limited high-$`z`$ data currently available, our ansatz gives interesting results both for the form of $`V(\varphi )`$ as well as for $`w_\varphi (z)`$. As data improve, our reconstruction promises to recover ‘true’ model-independent values of $`V(\varphi )`$ and $`w_\varphi (z)`$ with unprecedented accuracy, thereby providing us with a deep insight into the nature of the dark energy driving the acceleration of the universe. Acknowledgments: TDS thanks the UGC for providing support for this work. VS acknowledges support from the ILTP program of cooperation between India and Russia. AS was partially supported by the Russian Foundation for Basic Research, grant 99-02-16224, and by the Russian Research Project “Cosmomicrophysics”.
## 1 Introduction Since the pioneering work of Bagnold, many researchers have investigated the complex dynamics of dry granular materials at a surface \[2-6\]. Dry granular materials are assemblies of macroscopic objects that interact with each other essentially via a hard core repulsive potential. Hence they are loosely connected, particularly at the surface. When those grains at the surface are exposed to a wind, they can readily be ejected and carried by the wind until gravity eventually pulls them back to the surface. The dynamics of a single grain is rather simple, given by the Newtonian trajectory of a point particle. Even so, experiments have shown that the collective response of the grains can become exceedingly complex, ranging from formation of simple ripple patterns to ridges and dunes to violent tornadoes. Our current understanding of such complex phenomena remains mostly confined to compiling data on experimental observations. With regard to the formation of ripple patterns, however, there have been some attempts to construct a simple yet physical continuum model. We will investigate the continuum model due to Nishimori and Ouchi. The Nishimori-Ouchi (NO) model of ripple patterns accounts for two elementary processes of sand transportation by the wind which have been identified by investigators in aeolian sand dynamics, namely saltation and creep. Saltation refers to the process by which surface grains are ejected into the air under the influence of a strong wind, and are blown downwind where they collide with other surface grains. There they transfer momentum to these downwind grains, which may themselves be ejected in turn, thereby continuing the process (see Fig. 1). Creep is the surface movement of grains too heavy to be ejected into the air but light enough to be pushed along the surface. Creep also describes the surface movement of grains on hills under the influence of gravity. Previous studies based on the NO model have been confined largely to linear stability analysis and Monte Carlo simulations of a lattice version with simple rules for the grain dynamics. The purpose of this paper is to go one step farther by carrying out a nonlinear analysis of the continuum model and uncovering some of the features of ripple formation that are inaccessible to linear analysis. In particular, we carry out a weakly nonlinear analysis valid near the onset of instability of a flat sandbed to determine the amplitude, shape, and propagation speed of the ripple pattern that forms in this regime. We also compare these results with our numerical integrations of the model equations. These computations are rather unusual because the model lacks an up-down symmetry, and especially because accounting for saltation makes the model nonlocal in space. We find, however, that the process of pattern selection in this simple one-dimensional system, in particular the selection of the wavelength and speed of the patterns, is similar to what is seen in more complicated multidimensional systems such as directional solidification or directional viscous fingering. That is, the wavelength of the final pattern depends on the initial conditions, and may lie anywhere within a band of linearly stable final states. The stable band turns out to be somewhat wider than in most other models. In the next section we review the Nishimori-Ouchi model equations, point out a physical symmetry which they violate, and propose a simple modification of the model which respects that symmetry.
In Section III we carry out a linear stability analysis of the flat-sandbed solution of the model, both for the original Nishimori-Ouchi equations and for our modification. We extend this in Section IV to give a weakly nonlinear analysis for both forms of the model. Section V presents our numerical calculations and compares them with the results of the weakly nonlinear analysis. The results are discussed in the final section. ## 2 One-Dimensional Model for Windblown Sand The starting point of the Nishimori-Ouchi (NO) model is a local conservation law for sand grains. Let $`h(x,t)`$ be the local height of the sand bed at position $`x`$ and time $`t`$, measured from some reference level. The height increases when grains are added at position $`x`$. We write $$\frac{\partial h}{\partial t}+\frac{\partial J_l}{\partial x}=Q_{nl},$$ (1) where $`J_l(x,t)`$ is a local flux of grains in the positive direction at $`x`$, and $`Q_{nl}(x,t)`$ is the net input of grains at $`x`$ due to nonlocal processes. The expression for $`J_l`$ embodies a model of creep. Nishimori and Ouchi choose $`J_l=-D(\partial h/\partial x)`$. Note that this merely expresses the tendency of grains to roll downhill; it does not include any bias favoring motion in the direction of the wind. Saltation is modeled by gain and loss terms in the nonlocal transfer rate $`Q_{nl}`$. Let $`N(x,t)`$ denote the outward saltation flux of particles from $`x`$ at time $`t`$. That is, let $`N(x,t)dx`$ be the number of particles per unit time taking off from positions between $`x`$ and $`x+dx`$. The loss term in $`Q_{nl}`$ is then $`-AN(x,t)`$, where $`A`$ is a scale parameter. The gain term is proportional to the rate at which particles arrive at $`x`$ from other locations $`\xi `$ upwind of $`x`$. Suppose all particles which take off from the interval $`(\xi ,\xi +d\xi )`$ subsequently land in the interval $`(x,x+dx)`$. Then the number of particles per unit time landing in this latter interval of length $`dx`$ is $`N(\xi ,t)d\xi `$, so the gain term in the saltation flux is then $`AN(\xi ,t)(d\xi /dx)`$. It is possible to have more than one $`\xi `$ which satisfies this equation for a given $`x`$. That is, grains landing at $`x`$ may have come from more than one takeoff point $`\xi `$. If this is the case, then the input term in the evolution equation should be summed over the different values of $`\xi `$. Note that evaluating $`N(\xi )`$ at time $`t`$ neglects the flight time of the incoming grains; we expect the evolution of the sandbed profile to take place on a much longer time scale than this, so that the time delay between takeoff and landing should be unimportant. Indeed, experiments on ripple formation by sand transported by water show the evolution of the ripple pattern occurring on time scales of several hours. Combining the various contributions to the flux and substituting into the general conservation law for $`h`$ gives the model evolution equation for the sandbed profile, $$\frac{\partial h}{\partial t}=\frac{\partial }{\partial x}\left(D\frac{\partial h}{\partial x}\right)+A\left[N(\xi ,t)\frac{d\xi }{dx}-N(x,t)\right].$$ (2) Note that this equation is nonlocal in $`x`$, as a result of the saltation gain term, which depends on conditions at a position $`\xi `$ which is a finite distance upwind of $`x`$. To complete the model, we must now specify the saltation function, an equation for the flight length of a single grain. In general, the locations $`x`$ and $`\xi `$ in the evolution equation will be related by $`x=\xi +L`$, where $`L`$ is the horizontal length an ejected grain travels from takeoff to landing.
This will depend on the size of the grain, its speed when it takes off, the wind velocity profile, and the topography of the sandbed itself. Nishimori and Ouchi proposed the simple ansatz $$L=L_0+bh(\xi ,t).$$ (3) Here, $`L_0`$ is a parameter proportional to the shear stress of the wind at the surface, or more precisely to the friction velocity of the wind on the sand surface, and $`b`$ in general depends on the average drag force on the grain. Nishimori and Ouchi took both $`L_0`$ and $`b`$ to be constant, essentially assuming the wind velocity to be a constant, independent of $`x`$ and $`t`$ and unaffected by changes in the sandbed profile. Equation (3) merely indicates that the higher the takeoff point of a grain in saltation, the longer its trajectory. As Nishimori and Ouchi point out, this amounts to assuming that the height and topography at the point of landing may be neglected, and that only the surface height (as opposed to local topography) is important at the takeoff point. While this may be reasonable if $`h(x)`$ is everywhere close to zero, it does violate a symmetry of the physical problem, namely that the dynamics should be unaffected if we add any constant to $`h`$, thus changing our reference level. To restore this symmetry, it may be more appropriate to take the saltation function to be $$L=L_0+b[h(\xi )-h(x)],$$ (4) where $`\xi `$ is the takeoff point and $`x`$ is the landing point. We will discuss the effects of this modification below. For convenience, we now put the model into dimensionless form. Taking $`L_0`$, $`b`$ and $`D`$ to be constants, we choose $`L_0`$ to be the unit of horizontal length, $`L_0/b`$ to be the unit of vertical length (i.e., of $`h`$), and $`L_0^2/D`$ to be the time unit. Further, we define $`J(x,t)=(AbL_0/D)N(x,t)`$, a dimensionless measure of the outward grain flux due to saltation. With these definitions, the evolution equation (2) becomes $$\frac{\partial h}{\partial t}=\frac{\partial ^2h}{\partial x^2}+J(\xi ,t)\frac{d\xi }{dx}-J(x,t),$$ (5) with the original NO saltation relation becoming the condition $$x=\xi +1+h(\xi ,t).$$ (6) The model simplifies further if we choose $`J(x,t)`$ to be a constant $`J`$ independent of $`x`$ and $`t`$, an assumption whose physical content is that the wind is uniform and there is no flux dependence on surface height. The evolution equation is then $$\frac{\partial h}{\partial t}=\frac{\partial ^2h}{\partial x^2}+J\left(\frac{d\xi }{dx}-1\right).$$ (7) This is the form of the problem which we will analyze below, using both the NO saltation relation (6) and our symmetric modification of it, $$x=\xi +1+h(\xi ,t)-h(x,t).$$ (8) ## 3 Linear Stability Analysis We first note that a flat sandbed, $`h=h_0=\mathrm{constant}`$, is always a steady-state solution of the model, for either choice of saltation relation. For the symmetric saltation relation (8) this always gives $`\xi =x-1`$, while for the NO relation (6) we have $`\xi =x-1-h_0`$. In the latter case, however, we may then redefine the length and time units – and the value of $`J`$ – to map the solution with any finite $`h_0`$ (provided $`h_0>-1`$) onto the solution with $`h_0=0`$. Specifically, we would take the horizontal length unit to be $`L_0(1+h_0)`$ instead of $`L_0`$, and $`J`$ would then be $`AbL_0(1+h_0)N/D`$ rather than $`AbL_0N/D`$. Thus we will take $`h=0`$ to be the steady state whose stability we will investigate. When $`h`$ is small, we may linearize the NO saltation relation to get $$\xi \approx x-1-h(x-1),$$ (9) so that there is a single, unique $`\xi `$ for each $`x`$.
From this we obtain $`d\xi /dx=1-h^{\prime }(x-1)`$, where the prime indicates partial differentiation with respect to $`x`$. The linearized evolution equation is then $$\frac{\partial h}{\partial t}=h^{\prime \prime }(x,t)-Jh^{\prime }(x-1,t).$$ (10) Linear stability analysis proceeds in the usual way: we write $`h(x,t)`$ as a linear combination of Fourier modes, $$h(x,t)=\int h_k(t)\mathrm{exp}(ikx)dk,$$ (11) substitute this into the linearized evolution equation, and note that – even with the nonlocal term present – the modes do not couple. Thus we find that each mode grows or decays exponentially with time, $$h_k(t)\propto \mathrm{exp}[(\sigma _k-i\omega _k)t],$$ (12) with $$\sigma _k=-k^2-Jk\mathrm{sin}k,$$ (13) $$\omega _k=Jk\mathrm{cos}k.$$ (14) If we use the symmetric saltation relation instead of the NO relation, then the linearized evolution equation has an extra term $`+Jh^{\prime }(x,t)`$ on the right side. This leaves $`\sigma _k`$ unchanged, but replaces (14) by $`\omega _k=-Jk(1-\mathrm{cos}k)`$. The locus $`\sigma _k=0`$ in the $`J`$–$`k`$ plane defines a stability boundary. Points on one side of the boundary represent perturbations which have a positive growth rate $`\sigma _k`$, while points on the other side represent perturbations with negative growth rates which are therefore suppressed in the solution. Thus we expect that solutions of the full differential equation will consist only of modes whose wave numbers are on the unstable side of the boundary. The onset of instability of the flat sandbed occurs at the value $`J_c`$ of $`J`$ for which only a single mode, with wave number $`k_c`$, is marginally stable and no other modes are unstable. These critical values may be determined by solving $`\sigma =0`$ and $`d\sigma /dk=0`$ simultaneously, which yields $$J_c\mathrm{sin}k_c=-k_c,J_c\mathrm{cos}k_c=-1.$$ (15) Eliminating $`J_c`$ gives $$\mathrm{tan}k_c=k_c.$$ (16) Thus the critical values are computed as $`k_c=4.493`$ and $`J_c=4.603`$. The wavelength of the marginal mode (in units of $`L_0`$) is $`\lambda _c=2\pi /k_c=1.398`$, somewhat longer than the flight distance of a grain in saltation. For $`k=k_c`$, the NO saltation relation leads to $`\omega _c=-k_c`$, so the phase velocity of the marginal mode is $`v=\omega _c/k_c=-1`$. With the symmetric saltation relation we get $`v=-(1+J_c)=-5.60`$. This is a surprising result of the model, that while the sand grains that form the ripples are blown downwind, the ripple pattern itself drifts upwind. The group velocity, however, is large and positive: From (14) we get $`d\omega _k/dk=J(\mathrm{cos}k-k\mathrm{sin}k)`$, which goes to $`k_c^2-1=19.19`$ at the critical point. For the symmetric saltation relation, the group velocity is lower by $`J`$, so at critical it is 14.58. Note that all velocities are in units of $`D/L_0`$. If we make the problem two-dimensional, allowing the sandbed to extend in both $`x`$ and $`y`$ directions, very little changes. The creep term in the evolution equation becomes $`D\nabla ^2h`$, and as a result the expression for $`\sigma _k`$ changes to $$\sigma (k,k_y)=-k^2-k_y^2-Jk\mathrm{sin}k,$$ (17) where $`k`$ is now the $`x`$ component of the wave vector of the Fourier mode and $`k_y`$ is its $`y`$ component. Clearly, the linear growth rate for a mode with nonzero $`k_y`$ is always less than the rate for the corresponding mode with $`k_y=0`$. Thus we do not expect to see instabilities in which the transverse shape of the ripples becomes wavy, since the first instability to occur is against a mode in which the ripples are parallel to the $`y`$ axis.
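The critical point quoted above is easy to reproduce; the following small check (ours, not from the paper) solves $`\mathrm{tan}k=k`$ on the branch $`(\pi ,3\pi /2)`$ and then evaluates $`J_c=-k_c/\mathrm{sin}k_c`$ from (15).

```python
import numpy as np
from scipy.optimize import brentq

k_c = brentq(lambda k: np.tan(k) - k, 3.5, 4.6)  # bracket stays clear of the pole at 3*pi/2
J_c = -k_c / np.sin(k_c)                         # from J_c sin(k_c) = -k_c
print(k_c, J_c)                                  # ~4.4934 and ~4.603, matching the text
print(2 * np.pi / k_c)                           # marginal wavelength lambda_c ~ 1.398
print(k_c**2 - 1)                                # group velocity at onset, ~19.19
```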
## 4 Nonlinear Analysis We now carry out a weakly nonlinear analysis to determine the amplitude, shape, and propagation velocity of the restabilized ripple patterns which form when $`J`$ is slightly above its critical value $`J_c`$. The nonlocality of the model, the dispersion in the imaginary part of the linear growth rate, and the lack of an up-down symmetry lead to some unusual features in the analysis. We begin with the assumption that the fundamental wave number $`k`$ of the pattern which develops does not deviate much from the critical value $`k_c`$ when $`J`$ is near $`J_c`$. Hence we define a small parameter $`ϵ`$ by setting $$J=J_c+ϵ^2,$$ (18) and then define a scaled wave number deviation $`q`$ by writing $$k=k_c+ϵq.$$ (19) This is the appropriate scaling for the wave number because the stability boundary is approximately quadratic in $`k-k_c`$ and linear in $`J`$ near its maximum. Substituting these expressions into the linear growth rate (13) and expanding to second order in $`ϵ`$ gives $$\sigma _k=\frac{1}{2}ϵ^2k_c^2\left(\frac{2}{J_c}-q^2\right)+O(ϵ^3).$$ (20) Furthermore, from the expression (14) for $`\omega `$ we find that the phase velocity of the ripples is given by $$v=\omega /k=-1+ϵk_cq-\frac{1}{2}ϵ^2\left(\frac{2}{J_c}-q^2\right)+O(ϵ^3).$$ (21) The first stage of the nonlinear analysis consists of expanding the evolution equation (7) in powers of $`h`$, assuming the overall amplitude of $`h`$ is small. To do this, we may rewrite the NO saltation relation (6) in the form $$\xi =x-1-h(\xi ,t),$$ (22) repeatedly substitute this expression for $`\xi `$ back into the $`h(\xi ,t)`$ on the right side, and finally expand in powers of $`h`$. Differentiating the result with respect to $`x`$ then gives $$\frac{d\xi }{dx}=1-h^{\prime }(x-1,t)+\frac{1}{2}[h^2(x-1,t)]^{\prime \prime }-\frac{1}{6}[h^3(x-1,t)]^{\prime \prime \prime }+O(h^4).$$ (23) To third order in $`h`$, then, the evolution equation becomes $$\frac{\partial h(x,t)}{\partial t}=h^{\prime \prime }(x,t)-Jh^{\prime }(x-1,t)+\frac{J}{2}[h^2(x-1,t)]^{\prime \prime }-\frac{J}{6}[h^3(x-1,t)]^{\prime \prime \prime }+\mathrm{}$$ (24) This is the equation whose ripple solutions we will presently compute. It is remarkable that the expansion of $`d\xi /dx`$ has such an economical form. In fact it is not difficult to show that the pattern continues to all orders in $`h`$. To see this, consider the integral $$I_f\equiv \int _{-\mathrm{\infty }}^{\mathrm{\infty }}f(x-1)\frac{d\xi }{dx}dx,$$ (25) where $`\xi (x)`$ is given by the saltation relation (6) and $`f`$ is a test function which is integrable and infinitely differentiable, but otherwise arbitrary. We now change variables in this integral from $`x`$ to $`\xi `$, $$I_f=\int _{-\mathrm{\infty }}^{\mathrm{\infty }}f(\xi +h(\xi ))d\xi ,$$ (26) and expand the integrand in powers of $`h`$ to get $$I_f=\sum _{k=0}^{\mathrm{\infty }}\frac{1}{k!}\int \frac{d^kf(\xi )}{d\xi ^k}h^k(\xi )d\xi .$$ (27) Next we integrate the $`k`$th term by parts $`k`$ times to get $$I_f=\int f(\xi )\sum _{k=0}^{\mathrm{\infty }}\frac{(-1)^k}{k!}\frac{d^kh^k(\xi )}{d\xi ^k}d\xi ,$$ (28) and finally change variables again from $`\xi `$ to $`x\equiv \xi +1`$, $$I_f=\int _{-\mathrm{\infty }}^{\mathrm{\infty }}f(x-1)\sum _{k=0}^{\mathrm{\infty }}\frac{(-1)^k}{k!}\frac{d^kh^k(x-1)}{dx^k}dx.$$ (29) This result has the same form as the original expression for $`I_f`$, but with $`d\xi /dx`$ replaced by an expansion. However, since the test function $`f`$ is arbitrary, this requires the expansion and $`d\xi /dx`$ to be equal: $$\frac{d\xi }{dx}=\sum _{k=0}^{\mathrm{\infty }}\frac{(-1)^k}{k!}\frac{d^kh^k(x-1)}{dx^k}.$$ (30)
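As a quick numerical consistency check on (30) — our own sketch, not part of the original analysis — one can compare the truncated series against direct differentiation of the root of the saltation relation for a small test profile; the profile and grid below are arbitrary illustrations.

```python
import numpy as np
import sympy as sp
from scipy.optimize import brentq

xs = sp.symbols('x')
h_expr = sp.sin(2 * sp.pi * xs / sp.Rational(7, 5)) / 20    # small test profile, |h| <= 0.05
h = sp.lambdify(xs, h_expr, 'numpy')
series = sp.lambdify(xs, 1 - sp.diff(h_expr, xs)
                         + sp.diff(h_expr**2, xs, 2) / 2
                         - sp.diff(h_expr**3, xs, 3) / 6, 'numpy')

x_grid = np.linspace(0.0, 5.0, 2001)
# root of the saltation relation x = xi + 1 + h(xi); |h| <= 0.05 fixes the bracket
xi = np.array([brentq(lambda s, xv=xv: s + 1 + h(s) - xv, xv - 1.2, xv - 0.8)
               for xv in x_grid])
dxi_dx = np.gradient(xi, x_grid)
print(np.max(np.abs(dxi_dx - series(x_grid - 1))))          # small: O(h^4) plus grid error
```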
We now turn to the second stage of the calculation, namely finding solutions to the third-order approximation (24) to the evolution equation. We assume the solution will have a fundamental wave number $`k`$ in the unstable range, with an amplitude of order $`ϵ`$. The quadratic terms in the evolution equation will then generate a Fourier component in the solution with wave number $`2k`$ and possibly a constant term, and the cubic terms will lead to a component with wave number $`3k`$. Thus we write $$h(x,t)\approx ϵM(t)\mathrm{cos}(kx-\varphi (t))+ϵ^2M_0(t)+ϵ^2M_2(t)\mathrm{cos}[2(kx-\varphi (t))+\theta _2(t)]+ϵ^3M_3(t)\mathrm{cos}[3(kx-\varphi (t))+\theta _3(t)]+\mathrm{},$$ (31) allowing phase differences among the various Fourier components. We substitute this ansatz into the evolution equation and expand in powers of $`ϵ`$. Then the coefficients of $`\mathrm{cos}(kx-\varphi (t))`$ and $`\mathrm{sin}(kx-\varphi (t))`$ give equations for the fundamental amplitude $`M(t)`$ and phase $`\varphi (t)`$: $$\dot{M}=\sigma _kM-ϵ^2Jk^2M_0M\mathrm{cos}k-\frac{ϵ^2}{2}Jk^2M_2M\mathrm{cos}(k-\theta _2)+\frac{ϵ^2}{8}Jk^3M^3\mathrm{sin}k+O(ϵ^4)$$ (32) $$\dot{\varphi }=\omega _k-ϵ^2Jk^2M_0\mathrm{sin}k-\frac{ϵ^2}{2}Jk^2M_2\mathrm{sin}(k-\theta _2)-\frac{ϵ^2}{8}Jk^3M^2\mathrm{cos}k+O(ϵ^4).$$ (33) Evidently we need to find $`M_0`$, $`M_2`$, and $`\theta _2`$ in order to determine the amplitude $`M`$ and propagation velocity $`\dot{\varphi }/k`$ of the pattern. The $`x`$-independent term in the expansion of the evolution equation gives $`\dot{M}_0=0`$, so $`M_0`$ is in fact a constant; as argued above, we can choose it to be zero, so the $`M_0`$ terms in the $`M`$ and $`\varphi `$ equations can be dropped. The equations for $`M_2`$ and $`\theta _2`$ come from the coefficients of $`\mathrm{cos}(2kx-2\varphi +\theta _2)`$ and $`\mathrm{sin}(2kx-2\varphi +\theta _2)`$ in the evolution equation. These are best written in the form $$\frac{d}{dt}M_2e^{i\theta _2}=(\sigma _{2k}+i\omega _{2k}-2i\omega _k)M_2e^{i\theta _2}-Jk^2M^2e^{2ik}+O(ϵ^2).$$ (34) Similarly, we find equations for $`M_3`$ and $`\theta _3`$, $$\frac{d}{dt}M_3e^{i\theta _3}=(\sigma _{3k}+i\omega _{3k}-3i\omega _k)M_3e^{i\theta _3}-\frac{9}{2}Jk^2MM_2e^{3ik+i\theta _2}-\frac{9}{8}iJk^3M^3e^{3ik}+O(ϵ^2).$$ (35) Note that in the equation for $`\dot{M}(t)`$, all the terms on the right are of order $`ϵ^2`$. Therefore $`M(t)`$ changes on a long time scale of order $`ϵ^2`$, while $`M_2`$ and $`\theta _2`$ vary on times of order unity. Thus we may regard $`M`$ as a constant in the equations for $`M_2`$ and $`M_3`$. Since $`\sigma _{2k}`$ is negative, $`M_2\mathrm{exp}(i\theta _2)`$ goes to a quasi-steady state value which is proportional to $`M^2`$. Substituting this value into the $`M`$ equation gives $$\dot{M}=\sigma _kM-ϵ^2\lambda M^3+O(ϵ^4)$$ (36) The analytical expression for the Landau constant $`\lambda `$ is complicated and unenlightening; substituting (18) and (19) into it gives $$\lambda =16.905+64.680ϵq+O(ϵ^2)$$ (37) Note that the correction terms in the evolution equation for $`M`$, which would come from including higher-order terms in the original expansions (24) for the evolution equation and (31) for $`h(x,t)`$, are of order $`ϵ^4`$, not $`ϵ^3`$.
As a result, the $`ϵ^3`$ terms in the equation come only from expanding the analytical expressions for $`\sigma `$ and $`\lambda `$ in powers of $`ϵ`$. This also holds for the equations for the higher harmonics. Thus we get the first-order corrections to all of our results essentially for free. From (37) and the third-order expansion (20) for $`\sigma _k`$ we obtain the steady-state amplitude $`M`$ to first order in $`ϵ`$, $$M^2=(0.25946-0.59719q^2)-(0.87725-2.10774q^2)ϵq+O(ϵ^2)$$ (38) We then find the phase velocity, $$v_{ph}=\omega /k=-1+4.4934ϵq+(1.877-4.320q^2)ϵ^2-(3.439-10.128q^2)ϵ^3q+O(ϵ^4),$$ (39) and the group velocity, $$v_{gr}=d\omega /dk=19.1907+13.4802ϵq+(18.241-40.984q^2)ϵ^2-(43.198-107.23q^2)ϵ^3q+O(ϵ^4),$$ (40) from (33). The equation (34) for $`M_2`$ and $`\theta _2`$ gives $$M_2/M^2=0.908115+0.50118ϵq,$$ $$\theta _2/\pi =0.22916-0.36189ϵq,$$ (41) and from (35) we find $$M_3/M^3=1.52273+1.89385ϵq,$$ $$\theta _3/\pi =0.4309-0.6024ϵq$$ (42) As usual, the ripple solutions we have found are not all stable; instead, those with too large a wave number deviation $`q`$ are linearly unstable. The calculation of the critical value of $`q`$ is rather intricate, so we defer it to the Appendix. The result is that the range of stable wave numbers is rather wider than usual – it extends out to $`q=0.9095q_0`$, where $`q_0=(2/J_c)^{1/2}`$ is the wave number deviation at which $`\sigma _k`$ vanishes to leading order in $`ϵ`$. At the edge of the stable range, the amplitude $`M`$ of the ripple solution is 0.4157 times its value at $`q=0`$. If instead of the NO saltation relation we use the symmetric relation (8), the results of the analysis are rather different. The expansion of $`d\xi /dx`$ is not as simple and clean as the derivation above; the evolution equation (24) is replaced by $$\frac{\partial h(x,t)}{\partial t}=h^{\prime \prime }(x,t)-J[h(x-1,t)-h(x,t)]^{\prime }+J\{[h(x-1,t)-h(x,t)]h^{\prime }(x-1,t)\}^{\prime }$$ $$-\frac{J}{2}\{[h(x-1,t)-h(x,t)]^2h^{\prime \prime }(x-1,t)\}^{\prime }-J\{[h(x-1,t)-h(x,t)][h^{\prime }(x-1,t)]^2\}^{\prime }+\mathrm{}$$ (43) We again substitute the ansatz (31) into this equation and work out the Fourier components of the result. The equations for $`M`$ and $`\varphi `$ become $$\dot{M}=\sigma _kM-ϵ^2Jk^2M_2M[\mathrm{cos}k\mathrm{cos}\theta _2-\mathrm{cos}(2k-\theta _2)]+\frac{ϵ^2}{4}Jk^3M^3(\mathrm{sin}k-2\mathrm{sin}2k),$$ (44) $$\dot{\varphi }=\omega _k+ϵ^2Jk^2M_2[\mathrm{cos}k\mathrm{sin}\theta _2+\mathrm{sin}(2k-\theta _2)]-\frac{ϵ^2}{2}Jk^3M^2(\mathrm{cos}k-\mathrm{cos}2k)$$ (45) where now $`\omega _k`$ is given by $`-Jk(1-\mathrm{cos}k)`$ as is appropriate for this model. Note that the $`M_0`$ terms which were present in (32) and (33) are absent here; this is because the new saltation relation respects the symmetry under addition of a constant to $`h`$.
The evolution of $`M_2`$ and $`\theta _2`$ is now given by $$\frac{d}{dt}M_2e^{i\theta _2}=(\sigma _{2k}+i\omega _{2k}-2i\omega _k)M_2e^{i\theta _2}+Jk^2(e^{ik}-e^{2ik})M^2+O(ϵ^2),$$ (46) and the third harmonic by $$\frac{d}{dt}M_3e^{i\theta _3}=(\sigma _{3k}+i\omega _{3k}-3i\omega _k)M_3e^{i\theta _3}+\frac{3}{2}Jk^2e^{ik}(1-e^{ik})(1+3e^{ik})MM_2e^{i\theta _2}$$ $$+\frac{3}{8}iJk^3e^{ik}(1-e^{ik})(1-3e^{ik})M^3+O(ϵ^2)$$ (47) After carrying out the calculation we find a much larger value for the Landau coefficient, $$\lambda =151.26+88.014ϵq+O(ϵ^2).$$ (48) Thus for a given wave number, the restabilized amplitude $`M`$ of the ripples is smaller by a factor of about 3: $$M^2=(0.0289975-0.066743q^2)-(0.003966-0.0190315q^2)ϵq+O(ϵ^2).$$ (49) The phase velocity is more negative than before, as we found from the linear stability analysis, $$v_{ph}=-5.6033+4.4934ϵq-(1.506-1.16475q^2)ϵ^2-(0.197-1.853q^2)ϵ^3q+O(ϵ^4),$$ (50) and likewise the group velocity, $$v_{gr}=14.5874+13.4802ϵq-(2.569-4.612q^2)ϵ^2-(8.152-19.799q^2)ϵ^3q+O(ϵ^4).$$ (51) For a given amplitude, however, the harmonics are stronger than before: we find $$M_2/M^2=1.41691+0.213856ϵq,$$ $$\theta _2/\pi =0.4443-0.2027ϵq,$$ (52) and $$M_3/M^3=2.89266+1.57014ϵq,$$ $$\theta _3/\pi =0.1426-0.3952ϵq.$$ (53) The range of wave numbers for which these solutions are stable is somewhat narrower than before but still wider than usual, extending out to $`q=0.7571q_0`$, where the amplitude $`M`$ is 0.6533 times its value at $`q=0`$. ## 5 Numerical solutions We now present numerical solutions and compare them with the predictions made in the previous section. The nonlocal evolution equation (7) was solved numerically with periodic boundary conditions on a system of length $`l=2\pi /k`$, so that only the Fourier modes $`nk`$ contributed to the solutions. For the discretization scheme, we chose an explicit method using forward differences in time and central differences in space. The axis was discretized at $`2^9`$ equally spaced sites with $`\mathrm{\Delta }x=2\pi /(2^9k)`$ and solutions were generated for five different values of $`J`$ near $`J_c=4.603`$, namely $`J=4.62,4.65,4.70,4.75,4.80`$, and values of $`k`$ were chosen to span the unstable region. Initial conditions were sinusoids of wave number $`k`$ centered around $`h=0`$. At $`t=0`$, we start with a sinusoid at a particular wave number $`k`$, and let it evolve with $`\mathrm{\Delta }t/(\mathrm{\Delta }x)^2=1/4`$ until it reaches a steady state. This takes about $`10^6`$ time steps. The nonlocal term in the evolution equation, $`J\left(d\xi /dx-1\right)`$, was evaluated for a given $`x`$ by finding the nearest upwind value of $`\xi `$ satisfying the equation $`x=\xi +1+h(\xi )`$. Specifically, the first root of the function $`f(\xi ;x)=\xi +1+h(\xi )-x`$ with value less than $`x`$ was obtained by simply finding the two sites upwind of $`x`$ and nearest to it between which $`f(\xi ;x)`$ changed sign. Then, $`d\xi /dx`$ was calculated using the values of $`h`$ at these sites. The final steady state, $`h(x,t)`$, is then Fourier transformed, i.e.: $$h(x,t)=\sum _{n=1}^{\mathrm{\infty }}[a_n(t)\mathrm{sin}nkx+b_n(t)\mathrm{cos}nkx],$$ (54) from which we obtain $`M_n=(a_n^2+b_n^2)^{1/2}`$ and $`\theta _n=\mathrm{tan}^{-1}(b_n/a_n)`$. Note that $`M_1=M`$ and $`\theta _1=\varphi `$ in this notation.
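The following is a compact re-implementation sketch of the scheme just described — not the authors' code. It uses a single fixed-point iterate of the saltation relation to locate the upwind takeoff point (adequate while $`h`$ is small) and $`d\xi /dx=1/(1+h^{\prime }(\xi ))`$ from implicit differentiation of (6); the grid size, seed amplitude, and run length are illustrative assumptions.

```python
import numpy as np

N, k, J = 128, 4.4934, 4.70          # J slightly above J_c = 4.603
L = 2 * np.pi / k                    # periodic box holding one wavelength
x = np.arange(N) * L / N
dx = L / N
dt = 0.25 * dx**2                    # Delta t / (Delta x)^2 = 1/4, as in the text
h = 1e-3 * np.cos(k * x)             # small sinusoidal seed

def nonlocal_term(h):
    hp = (np.roll(h, -1) - np.roll(h, 1)) / (2 * dx)            # periodic h'
    # one fixed-point iterate of xi = x - 1 - h(xi); the jump of 1 wraps
    # around the periodic box
    xi = (x - 1.0 - np.interp((x - 1.0) % L, x, h, period=L)) % L
    dxidx = 1.0 / (1.0 + np.interp(xi, x, hp, period=L))        # implicit diff. of (6)
    return J * (dxidx - 1.0)

for step in range(1_000_000):        # the text quotes ~1e6 steps to steady state
    lap = (np.roll(h, -1) - 2 * h + np.roll(h, 1)) / dx**2
    h = h + dt * (lap + nonlocal_term(h))
print(h.min(), h.max())              # saturated ripple amplitude
```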
The nonlinear analysis in the previous section predicts that these quantities will go to time-independent values. We find numerically that they actually oscillate as a function of time around their mean values. However, the magnitudes of these oscillations are quite small and decrease with increasing grid resolution, so we believe them to be numerical artifacts. We therefore take the time averaged mean values and compare them with the predictions of the weakly nonlinear analysis. We also find that although we start with an initial profile with the average height $`h_0=0`$, the mean position of the steady state pattern shifts slightly upward in some cases, downward in others, to a small but finite $`h_0`$. Since the mean height of the sandbed is conserved by the exact evolution equation, we believe that this is also a numerical artifact. Moreover, as mentioned in section 2, we can map any steady state solution with finite $`h_0`$ to the solution with $`h_0=0`$ by redefining the horizontal length scale from $`L_0`$ to $`L_0(1+h_0)`$ and shifting the control parameter from $`J`$ to $`J(1+h_0)`$. However, the magnitude offset $`h_0`$ was always of the order of $`10^{-5}`$ to $`10^{-3}`$, and thus in all cases studied here, the corrections due to such an offset are quite negligible. Hence, our results without these corrections are virtually identical to those with corrections. Figure 2 shows the amplitude $`M`$ of the fundamental mode as a function of $`k`$ for different values of $`J`$. The data points are fairly close to the values predicted by the first-order expansion (38), which are represented by continuous curves. Note that the curves are asymmetric around the critical value $`k_c`$ and the asymmetry becomes more pronounced for larger $`J`$. The weakly nonlinear analysis is capable of predicting this asymmetry only because the order-$`ϵ`$ terms are included. In Figure 3 we plot the phase velocity of the fundamental mode against $`k`$ for different values of $`J`$. The speed was obtained by calculating $`\varphi (t)`$ in the expression $`\mathrm{cos}(kx-\varphi (t))`$, which is proportional to the fundamental mode in the steady state. The function $`\varphi (t)`$ was found to be linear in $`t`$, so the velocity was calculated as $`v=(d\varphi /dt)/k`$. The data points are compared against the weakly nonlinear predictions (solid line) given by (39). Only for fairly large $`J`$, and only near the high-wave-number end of the band of ripple solutions, does the velocity become positive, that is, in the direction of the wind. In Figures 4a and 4b are plotted the ratios of the amplitudes of the second and third harmonics to the appropriate powers of the fundamental amplitude, i.e., $`R_2=M_2/M^2`$ and $`R_3=M_3/M^3`$. The data fit quite well with the theoretical predictions for both cases, in particular near the onset $`k_c=4.493`$, where the nonlinear analysis is most reliable. Note that the first-order terms in the analytical results match the slope of the numerical results. The curvature which is evident in the numerical data for $`k`$ farther from $`k_c`$ is apparently a higher-order effect. Note that the width of the band of ripple solutions increases with $`ϵ`$, so an appreciably large $`ϵ`$ is required to reach these larger values of $`|k-k_c|`$. In Figures 5a and 5b we plot the phase angles $`\theta _2`$ and $`\theta _3`$ against $`k`$. The agreement between the simulations and the weakly nonlinear analysis is again quite strong for $`J`$ near onset. The order-$`ϵ`$ terms in the analytical results match the slope of the numerical data. For higher $`J`$ we observe a systematic downward deviation in the numerical results.
The shift appears to be linear in $`J`$, and so is second-order in $`ϵ=(J-J_c)^{1/2}`$. ## 6 Discussion We have carried out numerical and weakly nonlinear analyses of the Nishimori-Ouchi continuum model for windblown sand, and also for a modification of that model which respects the physical symmetry of the system under changes of the reference level of height. Both versions of the model yield the surprising result that the ripple patterns, which form when the flat sandbed becomes unstable, drift upwind even as the sand which forms the ripples is blown downwind. This drift is found in the linear stability analysis and persists in the weakly nonlinear results, and numerical integrations confirm that it is a real consequence of the model. Such a counterintuitive result has not been examined or detected by previous Monte Carlo simulations of this model or in real experiments. It would be interesting to check experimentally whether or not ripples can move against the wind. The symmetric version of the model actually predicts a considerably higher upwind drift speed than the original Nishimori-Ouchi version. It may also be surprising that the differences between the symmetric and Nishimori-Ouchi models are merely quantitative. The restabilized ripple patterns for a given value of the control parameter have smaller amplitudes (by a factor of about 3) and higher drift velocities (by a factor of over 5) in the symmetric model than in the original version. The relative sizes and phases of the higher harmonics in the ripple shape are also different for the two models. A number of modifications to the model are needed in order to make it comparable with experiments. A major ingredient that is left out of the model is any effect of the surface topography on the wind. This lack means that there is no shadowing effect in the model. Including such an effect would make it more likely for grains to settle on the downwind side of a ripple than on the upwind side, and more likely for them to be blown off the upwind side than the downwind side. This would likely reduce the tendency of the ripples to drift upwind. The result that the ripples drift upwind in this model, which neglects shadowing, may be an indirect indication of the importance of shadowing in the development of real ripple patterns. An improved model of creep may also be needed; a downwind bias in the creep would modify the drift velocity. Perhaps most critical is a better and more realistic form of the saltation function, which must account for the effects of the topography of the sandbed, and the many-particle dynamics of the grains in the air as well as on the surface. References R.A. Bagnold, Proc. Roy. Soc. A157, 594 (1936); see also The Physics of Blown Sand and Desert Dunes, reprinted by Chapman and Hall (1981). K. Pye and H. Tsoar, Aeolian Sand and Sand Dunes, Unwin Hyman, London (1990). H. Nishimori and N. Ouchi, Phys. Rev. Lett. 71, 197 (1993). R.S. Anderson, Sedimentology 34, 943 (1987). O. Terzidis, P. Claudin, and J.P. Bouchaud, cond-mat/9801295. For recent observations of surface instabilities that develop when grains are subject to vibrations, see: F. Melo, P. Umbanhowar, and H.L. Swinney, Phys. Rev. Lett. 75, 3838 (1995); T.H. Metcalf, J.B. Knight, and H.M. Jaeger, Physica A 236, 202 (1997); K.M. Aoki and T. Akiyama, Phys. Rev. Lett. 77, 4166 (1996). See, e.g.: J.S. Langer, Science 243, 1150 (1989); D.A. Kessler, H. Levine, and J. Koplik, Adv. Phys. 37, 255 (1988); Dynamics of Curved Fronts, edited by P.
Pelce, Academic, San Diego (1988) and references therein. See, e.g., M. Ben-Amar and B. Moussallam, Phys. Rev. Lett. 60, 317 (1989) and references therein. D.A. Kurtze and D.C. Hong, J. Korean Phys. Soc. 28(2), 178 (1995) and references therein. A. Betat, V. Frette, and I. Rehberg, Phys. Rev. Lett. 83, 88 (1999). A more detailed description of the model can be found in: H. Nishimori and N. Ouchi, Int. J. Mod. Phys. B 7, Nos. 9 & 10, 2025 (1995). ## Appendix A Stability of ripple solutions and the Eckhaus boundary In this section, we examine the stability of the ripple solutions, and so determine the Eckhaus boundary in the $`J`$–$`k`$ plane, within which the solutions are linearly stable and so may be observed. We begin with the ripple solution $$h_0(x,t)=ϵM\mathrm{cos}(kx-\omega t)+ϵ^2M_2\mathrm{cos}[2(kx-\omega t)+\theta _2]+O(ϵ^3M_3),$$ (A1) with $`k=k_c+ϵq`$, and add an infinitesimal perturbation $`h_1(x,t)`$. If the perturbation contains a Fourier component with wave number $`k+ϵq^{\prime }`$, then the nonlinear terms in the evolution equation will generate a component with wave number $`k-ϵq^{\prime }`$. Thus we will start the calculation by taking $`h_1`$ to have the form $$h_1(x,t)=A_{-}(t)\mathrm{cos}[(k-ϵq^{\prime })x-(\omega t+\varphi _{-}(t))]+A_{+}(t)\mathrm{cos}[(k+ϵq^{\prime })x-(\omega t+\varphi _{+}(t))].$$ (A2) Substituting this into the evolution equation, expanding, and picking off the coefficients of the sines and cosines of $`(k-ϵq^{\prime })x`$ and $`(k+ϵq^{\prime })x`$ yields a closed set of equations for the amplitudes $`A_{-}`$ and $`A_{+}`$ and the phase $`\varphi _++\varphi _{-}`$. These equations have the form $$\dot{A}_{-}=\mathrm{\Sigma }_{-}A_{-}+\alpha A_{+}\mathrm{cos}\psi ,$$ $$\dot{A}_{+}=\mathrm{\Sigma }_{+}A_{+}+\alpha A_{-}\mathrm{cos}\psi ,$$ $$\dot{\psi }=\mathrm{\Omega }-\alpha [(A_{+}/A_{-})+(A_{-}/A_{+})]\mathrm{sin}\psi $$ (A3) where $`\psi `$ is $`\varphi _++\varphi _{-}`$ plus a constant which depends on $`k`$ and $`q^{\prime }`$ (but not time), and the overdot denotes a derivative with respect to the slow time variable $`ϵ^2t`$. The coefficients are given to leading order in $`ϵ`$ by $$\mathrm{\Sigma }_{\pm }=\frac{k_c^2}{2}\left[-\left(\frac{k_c^2}{2}-\frac{2\lambda }{k_c^2}\right)M^2\mp 2qq^{\prime }-q^{\prime 2}\right],$$ $$\alpha =\alpha _0k_c^2M^2=1.98183k_c^2M^2,$$ $$\mathrm{\Omega }=\frac{1}{2}k_c^3M^2+3k_cq^{\prime 2}.$$ (A4) Note that it is important to keep the second-order term in $`h_0`$ during the calculation, since it contributes to $`\alpha `$. (Omitting it changes $`\alpha _0`$ to 2.58559, a 30% change.) We must now determine whether the amplitudes given by (A-3) grow or decay with time. We can simplify the equations somewhat by defining $$R=A_{+}/A_{-};$$ (A5) the amplitude equations then become $$\dot{A}_{-}=(\mathrm{\Sigma }_{-}+\alpha R\mathrm{cos}\psi )A_{-},$$ $$\dot{R}=(\mathrm{\Sigma }_{+}-\mathrm{\Sigma }_{-})R+\alpha (1-R^2)\mathrm{cos}\psi ,$$ $$\dot{\psi }=\mathrm{\Omega }-\alpha [(1+R^2)/R]\mathrm{sin}\psi .$$ (A6) Note that the first equation decouples from the last two. If it happens that $`R`$ and $`\psi `$ go to constants as $`t\to \mathrm{\infty }`$, then the amplitudes decay for $`\mathrm{\Sigma }_{-}+\alpha R\mathrm{cos}\psi <0`$ and grow otherwise. Thus for a given $`q`$, the ripple state (A-1) is linearly stable if this inequality is satisfied for every $`q^{\prime }`$, otherwise it is unstable.
To see what $`R`$ and $`\psi `$ actually do, we combine the $`R`$ and $`\psi `$ equations into an evolution equation for the complex variable $$Z=R\mathrm{exp}(i\psi ),$$ (A7) namely $$\dot{Z}=\alpha (1-Z^2)+(\mathrm{\Sigma }_{+}-\mathrm{\Sigma }_{-}+i\mathrm{\Omega })Z.$$ (A8) Clearly this equation has two fixed points, and solving it exactly reveals that the one with positive real part is a global attractor and the one with negative real part a global repeller. Thus $`R\mathrm{cos}\psi `$ does go to a constant, and from its value we can decide whether the ripple state is stable or not. Since it is the real part of $`Z`$, namely $`R\mathrm{cos}\psi `$, which determines whether the perturbation grows or decays, it is useful to rewrite (A-8) in terms of the real and imaginary parts of $`Z`$, $$Z=X+iY.$$ (A9) We find $$\dot{X}=\alpha (1-X^2)+(\mathrm{\Sigma }_{+}-\mathrm{\Sigma }_{-})X-\mathrm{\Omega }Y+\alpha Y^2,$$ $$\dot{Y}=(\mathrm{\Sigma }_{+}-\mathrm{\Sigma }_{-})Y+\mathrm{\Omega }X-2\alpha XY.$$ (A10) By linearizing about a fixed point $`(X,Y)`$ of this system, we quickly find that the fixed point is an attractor for $`X>(\mathrm{\Sigma }_{+}-\mathrm{\Sigma }_{-})/2\alpha `$. To find the fixed points, we set $`\dot{X}=\dot{Y}=0`$ and solve the second equation for $`Y`$ in terms of $`X`$, then substitute into the first equation to get $$\alpha -X(\alpha X-\mathrm{\Sigma }_{+}+\mathrm{\Sigma }_{-})=\frac{\mathrm{\Omega }^2X(\alpha X-\mathrm{\Sigma }_{+}+\mathrm{\Sigma }_{-})}{(2\alpha X-\mathrm{\Sigma }_{+}+\mathrm{\Sigma }_{-})^2}.$$ (A11) The two sides of this equation are plotted in Fig. 6. Both sides are symmetric about $`X=(\mathrm{\Sigma }_{+}-\mathrm{\Sigma }_{-})/2\alpha `$, so there is clearly one solution with $`X`$ greater than this – the attractor – and one, the repeller, with $`X`$ less. In order for the perturbation to decay, the attractor must have $`X`$ less than $`-\mathrm{\Sigma }_{-}/\alpha `$. From the plot, we see that this means that at $`X=-\mathrm{\Sigma }_{-}/\alpha `$ the right side of (A-11) must be greater than the left side. After a little algebra, we can write this condition for the perturbation to decay in the form $$\mathrm{\Sigma }_{+}\mathrm{\Sigma }_{-}>\frac{\alpha ^2(\mathrm{\Sigma }_{+}+\mathrm{\Sigma }_{-})^2}{(\mathrm{\Sigma }_{+}+\mathrm{\Sigma }_{-})^2+\mathrm{\Omega }^2}.$$ (A12) Equation (A-12) above is the condition for the amplitude of a perturbation with a specific value of $`q^{\prime }`$ to decay. In order to conclude that the ripple solution with a given $`q`$ is linearly stable, we must see to it that this condition is satisfied for all $`q^{\prime }`$. For this we must substitute for the parameters from (A-4) above. To put the result into a useful form, we define $`Q=2q^{\prime 2}/k_c^2M^2`$ and eliminate $`q^2`$ in favor of $`M^2`$. After some rearranging, we find that the condition for the solution to be stable is $$M^2>\frac{16Q}{J}\frac{k_c^2(\beta +Q)^2+(1+3Q)^2}{k_c^2[(\beta -Q)^2+4Q][k_c^2(\beta +Q)^2+(1+3Q)^2]-16\alpha _0^2(\beta +Q)^2},$$ (A13) where $`\beta =1-(4\lambda /k_c^4)=0.834132`$. The complicated function of $`Q`$ on the right has a single maximum for positive $`Q`$, at a height of 0.04484. Ripple states with $`M^2`$ below this are unstable, while those with $`M^2`$ larger than this are linearly stable. From this we find that the range of wave numbers of linearly stable ripple solutions is given by $`|q|<0.9095q_0`$, where $`q_0=(2/J_c)^{1/2}`$ is the largest wave number for which a ripple solution exists.
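As a check on (A-13) — ours, under the assumption $`J=J_c`$ — scanning the right-hand side over $`Q`$ reproduces both the quoted maximum height and, after inverting the leading term of (38), the $`0.9095q_0`$ edge of the stable band.

```python
import numpy as np

k_c, J_c, lam, alpha0 = 4.4934, 4.6033, 16.905, 1.98183
beta = 1.0 - 4.0 * lam / k_c**4        # = 0.834132, as quoted

def rhs(Q):
    """Right-hand side of (A-13), evaluated at J = J_c (assumption)."""
    num = k_c**2 * (beta + Q)**2 + (1.0 + 3.0 * Q)**2
    den = (k_c**2 * ((beta - Q)**2 + 4.0 * Q) * num
           - 16.0 * alpha0**2 * (beta + Q)**2)
    return (16.0 * Q / J_c) * num / den

Q = np.linspace(1e-4, 5.0, 200001)
M2_min = rhs(Q).max()
print(M2_min)                          # ~0.04484, the quoted maximum height
# Invert the leading term of (38), M^2 = 0.25946 - 0.59719 q^2, for the
# marginal q, and compare with the quoted band edge 0.9095*q_0:
q_edge = np.sqrt((0.25946 - M2_min) / 0.59719)
print(q_edge / np.sqrt(2.0 / J_c))     # ~0.9095
```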
In summary, we have found that in the weakly nonlinear regime, the flat sandbed is unstable against perturbations with wave numbers $`k`$ in the range $$|k-k_c|<ϵq_0=\sqrt{2\left(\frac{J}{J_c}-1\right)},$$ (A14) while the Eckhaus boundary is given by $$|k-k_c|<0.9095ϵq_0=\sqrt{1.654\left(\frac{J}{J_c}-1\right)}.$$ (A15) For the symmetric saltation relation, the structure of the calculation is the same but the numbers are different. We find that the marginally stable wave number is given by $`q=0.7571q_0`$, so the Eckhaus boundary is now given by $$|k-k_c|<0.7571ϵq_0=\sqrt{1.1465\left(\frac{J}{J_c}-1\right)}.$$ (A16) Figure Captions Fig. 1: Saltation refers to the process of a single grain being ejected from the surface at a point $`\xi `$ and being blown to a landing point $`x`$ by the wind. Fig. 2: The steady state amplitude $`ϵM`$ vs. $`k`$ for five different values of $`J`$. The continuous lines are the analytical predictions given by Eq. (38). Fig. 3: The propagation speed of the steady state patterns $`v_{ph}`$ vs. $`k`$ for different values of $`J`$. Numerically obtained values are compared to the analytical predictions (continuous curves) from Eq. (39). Fig. 4: The ratios (a) $`R_2=M_2/M^2`$ and (b) $`R_3=M_3/M^3`$ are plotted against $`k`$. The solid lines are the analytical predictions from Equations (41) and (42). Fig. 5: (a) $`\theta _2`$ and (b) $`\theta _3`$ are plotted against $`k`$ for five different values of $`J`$. The solid lines are the analytical predictions from Equations (41) and (42). Fig. 6: The right and left hand sides of Eq. (A-11).
# A Possible 100-day X-ray-to-Optical Lag in the Variations of the Seyfert 1 Nucleus NGC 3516 ## 1 Introduction Much of the energy of Seyfert-1-type active galactic nuclei (AGNs) is emitted in X-rays, yet it is unclear what the source of this emission is. Comparison of variations in different bands can provide valuable clues toward understanding the geometry and nature of AGNs. In particular, inter-band lags can discriminate between primary and secondary (i.e., reprocessed) emissions. Contemporaneous X-ray and UV/optical monitoring has been carried out for only a few Seyfert 1 galaxies to date. On short time scales, simultaneous optical and X-ray monitoring of both NGC 4051 (Done et al. 1990) and NGC 3516 (Edelson et al. 1999) showed strong X-ray variations and little or no optical changes over 2–3 day periods. Longer time scale monitoring of NGC 5548 (Clavel et al. 1992) and NGC 4151 (Kaspi et al. 1996; Crenshaw et al. 1996; Warwick et al. 1996; Edelson et al. 1996) found evidence for a correlation at zero lag between optical, ultraviolet (IUE data), and X-ray (ROSAT and ASCA data), but these data were very sparsely sampled (∼12 points). NGC 7469 was monitored intensively with RXTE, IUE and ground-based observatories for one month in 1996. The optical and UV were found to be strongly correlated, with evidence presented for a lag that increases with wavelength (Wanders et al. 1997; Collier et al. 1998). There was, however, no clear correlation found between the X-rays and UV (Nandra et al. 1998). The peaks in the X-ray light curve appeared to lag the UV peaks by ∼4 days, while the troughs appeared better correlated at zero lag. The X-rays also showed much more rapid variations than the UV and, by extension, the optical. Most recently, Chiang et al. (1999) monitored NGC 5548 for three days with RXTE, ASCA, and the Extreme Ultraviolet Explorer (EUVE). Evidence was presented for a lag that increases with energy band, with the ASCA (0.5–1 keV) variations lagging the EUVE (0.14–0.18 keV) variations by about 3.5 hours, and the RXTE (2–20 keV) variations lagging EUVE by about 10 hours. We initiated in 1997 a program to monitor the Seyfert 1 galaxy NGC 3516 with RXTE. Apart from its brightness and known tendency to vary, the high declination of this galaxy makes it circumpolar for most Northern ground-based observatories, allowing it to be observed year round. Month- and year-long variation timescales can thus be properly probed, as well as shorter timescales. Edelson & Nandra (1999) presented the RXTE data for NGC 3516 between 1997 March and 1998 September, and calculated the power-density spectrum (PDS) of the 2–10 keV fluctuations on all timescales from 20 min to 6 months. They found that the PDS can be described by a power law of slope $`-1.7`$ that turns over to a flatter slope at timescales longer than ∼1 month. Here we present densely-sampled optical broad-band ($`B`$ and $`R`$) measurements of NGC 3516 obtained at Wise Observatory contemporaneously with the RXTE monitoring, and supplement the RXTE light curve with new data through 1999 January. In §2 we describe the observations and data reduction, and derive the optical light curves. In §3 we carry out a time series analysis comparing the X-ray and optical light curves. In §4 we attempt to interpret our results within a physical picture. ## 2 Optical Observations and Reductions We observed NGC 3516 from 1997 March 5 to 1998 September 2, using the Wise Observatory 1m telescope in Mitzpe Ramon, Israel.
On the nights when the galaxy was observed, Johnson-Cousins $`B`$- and $`R`$-band images were obtained once per night. We used a $`1024\times 1024`$-pixel thinned Tektronix CCD at the Cassegrain focus, with a scale of $`0.7^{\prime \prime }`$ pixel<sup>-1</sup>. Exposure times were 3 min in $`R`$ and 5 min in $`B`$. During this 546-day period, useful data were obtained for 108 epochs in $`R`$ and for 87 epochs in $`B`$. Between 1997 February 1 and November 25 the telescope suffered from scattered-light problems due to a change in baffling. Data from this period could not be properly flat-fielded. However, under proper baffling of scattered light one can see that the detector response and illumination vary by only a few percent across the $`12^{\prime }`$ field of view of the detector, so there should only be a minor effect on the accuracy of our photometry. We verify this below. Aperture photometry was carried out by integrating counts within circular apertures centered on the Seyfert nucleus and on the six brightest unsaturated stars projected near the galaxy. The stars were chosen to be within a few arcminutes from the galaxy, and in various directions, in order to minimize the error due to the lack of proper flatfielding for most of the frames. On some epochs, only the central section of the CCD was read out, and hence not all six comparison stars are present on the frame. Measurements of stars in which any of the pixels were near saturation were discarded. The apertures had a radius of 4 pixels. For comparison, the seeing half-width at half-maximum (HWHM) was in the range of 1 to 2.5 pixels, with a typical value of 1.5 pixels. The aperture thus included most of the light from a star, even under adverse seeing conditions. The local background level was calculated in annuli of inner and outer radii 8 and 11 pixels, respectively, around each object. For the measurement of the nucleus, this background subtraction provides some removal of the galaxy starlight. We experimented using smaller or larger apertures. We obtained similar light curves for the nucleus, but with smaller variation amplitudes for the larger apertures, due to the larger constant stellar contribution. On the other hand, the errors in the light curves (as determined below) also became larger for small apertures, due to the dependence of the integrated counts on the object-centering accuracy in the pixellated images. We found that the 4-pixel aperture radius was optimal in terms of minimizing both the galaxy background and the photometric errors. Relative photometry was achieved by calculating the instrumental magnitude difference between a star’s counts in a given epoch and its counts in the first epoch of the program. These differences were averaged among all the comparison stars present on a frame to provide an instrumental zero point for a given epoch. The standard deviation of this mean provided an empirical estimate of the photometric error. The difference of the nuclear instrumental magnitude and the zeropoint of a given epoch yielded the change in magnitude of the nucleus relative to the first epoch. We verified that there is only a barely-discernible effect of the choice of “first epoch” on the final light curves. To assure that the comparison stars are not variable themselves, and to assess the reliability of our error estimates, we measured in the same way each star using the five other stars as comparisons. We found that the stars are non-variable to within our measurement accuracy. The deviations of a star’s brightness from its mean are consistent with its assigned error-bars, assuming a Gaussian error distribution. The mean error is 0.02 mag. For epochs whose frames contained fewer than four comparison stars, the standard deviation of the zeropoint mean was poorly defined, and the larger among the standard deviation and 0.02 mag was adopted as the error.
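A schematic numpy version (ours; the array layout and the 0.02 mag error floor are the only inputs) of the differential-photometry bookkeeping just described:

```python
import numpy as np

def relative_light_curve(counts_nuc, counts_stars, err_floor=0.02):
    """counts_nuc: (n_epochs,) aperture counts of the nucleus; counts_stars:
    (n_epochs, n_stars) counts of the comparison stars, NaN where absent."""
    dmag = -2.5 * np.log10(counts_stars / counts_stars[0])   # vs. first epoch
    zp = np.nanmean(dmag, axis=1)                            # per-epoch zero point
    n = np.sum(np.isfinite(dmag), axis=1)
    err = np.nanstd(dmag, axis=1, ddof=1) / np.sqrt(n)       # error of the mean
    err = np.where(n < 4, np.maximum(err, err_floor), err)   # few-star epochs
    dmag_nuc = -2.5 * np.log10(counts_nuc / counts_nuc[0]) - zp
    return dmag_nuc, err
```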
Figure 1 shows the optical light curves we have obtained for NGC 3516. In Figure 2 we plot on the same scale for each optical band the constant, to within errors, light curve of one of the comparison stars, calculated relative to the other five stars. The $`R`$ and $`B`$ light curves of NGC 3516 in Figure 1 show very similar variability patterns, with peak-to-peak amplitudes of 0.35 mag and 0.7 mag, respectively. There is thus no doubt as to the reality of the variations. As mentioned above, the exact amplitude of the variations depends on the choice of photometric extraction aperture, which will include a particular fraction of stellar light from the galaxy. The above numbers are therefore lower limits on the intrinsic variability amplitude of the nucleus in each band, which is difficult to estimate. ## 3 Time Series Analysis Here we compare the X-ray and optical light curves of NGC 3516. All our results apply equally well to both the $`B`$ and the $`R`$ light curves, to which we will refer collectively as the “optical light curves”. Since the $`R`$ light curve is better sampled than the $`B`$ light curve, we will use only the $`R`$ in the figures and discussion below. Figure 3 (top panel) shows again the $`R`$ light curve of NGC 3516, but with a relative linear (rather than magnitude) flux scale. The bottom panel shows the RXTE X-ray (2–10 keV) light curve of Edelson & Nandra (1999), supplemented with new RXTE data up to January 1999. Observations and reduction leading to the new RXTE data are as described in Edelson & Nandra (1999). Examination of Figure 3 shows that the bulk of the optical variation is in a ∼250-day-long rise and fall between days 600 and 850, followed by a two-month-long deep minimum centered around day 1000. The X-ray light curve, by contrast, has much more power in short-timescale flickering. The z-transformed discrete correlation function (ZDCF; Alexander 1997), a modification of the discrete correlation function (Edelson & Krolik 1988), was used to assess the degree of correlation between variations in the optical and X-ray bands. The top panel of Figure 4 shows the ZDCF for the unsmoothed $`R`$-band and X-ray data. A positive correlation of $`r=0.70`$ is seen at a lag of $`\mathrm{\Delta }t\approx -110`$ days (that is, with the optical variations leading the X-rays) while an anticorrelation of $`r=-0.70`$ is found at a lag of $`\mathrm{\Delta }t\approx -280`$ days. Furthermore, anticorrelations of $`r=-0.3`$ to $`r=-0.5`$ are seen between lags of $`\mathrm{\Delta }t\approx +100`$ to $`+200`$ days. We also note that there is a small subpeak close to zero lag. The significance of the cross-correlation peaks is usually computed using Student’s t-test, with the null-hypothesis probability depending on the number of independent points in the correlation. Usually, this is assumed to be the number of data points in each bin of the ZDCF, and under such an assumption the correlations we find are highly significant. Here we question this assumption, however. It is well-known that the PDSs of AGN, including NGC 3516, have a “red-noise” character, with variations correlated over time scales of ∼1 month. The number of independent data points in each correlation bin may therefore be greatly reduced, as will the inferred significance of the correlations. The high significance of both the positive and negative correlations also indicates that the underlying assumptions need to be examined. With no straightforward way of estimating the number of independent data points in a given bin, we caution that the significances usually assumed are almost certainly overestimated.
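For reference, the estimator underlying Figure 4 can be sketched as follows. This is the plain discrete correlation function of Edelson & Krolik (1988), without the z-transform refinement of the ZDCF; the lag binning is left to the caller.

```python
import numpy as np

def dcf(t_a, a, t_b, b, lag_edges):
    """Binned DCF of two unevenly sampled series; positive lag means b lags a."""
    udcf = ((a[:, None] - a.mean()) * (b[None, :] - b.mean())
            / (a.std(ddof=1) * b.std(ddof=1)))               # unbinned correlations
    dt = t_b[None, :] - t_a[:, None]                         # all pairwise lags
    lags, r = [], []
    for lo, hi in zip(lag_edges[:-1], lag_edges[1:]):
        sel = (dt >= lo) & (dt < hi)
        if sel.any():
            lags.append(0.5 * (lo + hi))
            r.append(udcf[sel].mean())
    return np.array(lags), np.array(r)
```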
To obtain a more quantitative assessment of the significance of the correlation, we have carried out Monte Carlo simulations, as follows. Synthetic light curves having chosen PDSs were created by summing suitably-weighted harmonic functions with random phases. The synthetic light curves were then sampled with the same temporal pattern as the real optical and X-ray light curves. Simulated Gaussian measurement errors were added at each point, such that the ratio of the rms variation of the light curve to the Gaussian $`\sigma `$ was typical of that of the real light curves. The cross-correlation function of the simulated optical and X-ray light curves was then searched for values as high as the one observed in the real data. The whole process was repeated 1000 times for each choice of optical and X-ray PDS, and the fraction of iterations with correlation above the threshold noted. We find the results of these simulations are strongly dependent on the assumed PDS of each light curve. Edelson & Nandra (1999) showed that the X-ray PDS of NGC 3516, on timescales shorter than about 1 day, is well described by a power law of index $`\alpha _x=-1.74\pm 0.12`$. On longer timescales, however, the index gradually flattens, to $`\alpha _x\approx -1.0`$ on day-long to month-long timescales, and further to $`\alpha _x\approx -0.7`$ on few-month timescales. The PDS slope on timescales longer than one day, precisely the timescales probed here, is not well constrained. The observational knowledge of the optical PDS is much worse. The uneven sampling of the optical light curve precludes any straightforward calculation of its PDS. Existing algorithms, e.g. Scargle (1982), for calculating the PDS of unevenly sampled data, are useful for periodicity searches, but badly fail to reproduce the shape of PDS’s having power over a broad range of frequencies, due to the aliasing between frequencies that the uneven window function introduces. (See Giveon et al. 1999, for a detailed discussion of the problem.) To obtain a very rough guide of the optical PDS shape, we applied to the data Giveon et al.’s (1999) “partial interpolation” algorithm. The results suggest the optical PDS may be a power law of slope $`\alpha _o\approx -2.0\pm 0.6`$. Given the above uncertainties with regard to the PDS’s that must be input to the simulations, we calculated the significance of the observed correlation for a grid of power-law PDS’s with different slopes. The significance of the correlation is highest for flat input PDS slopes, and becomes low for steep PDS’s, in which each light curve is dominated by only a few “events” which can produce spurious correlations. For an input X-ray PDS slope of $`\alpha _x=-1.0`$, which is a reasonable choice, the observed correlation is significant at $`>99\%`$ confidence, as long as the optical PDS slope $`\alpha _o\gtrsim -1.5`$. For $`\alpha _o=-1.75`$ the significance declines to $`98.5\%`$, and for $`\alpha _o=-2.5`$ it is only $`97\%`$. Steeper optical PDS’s are allowed for $`\alpha _x`$ somewhat flatter than $`-1.0`$, and vice versa.
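A sketch of one such Monte Carlo trial — our paraphrase of the procedure; the frequency grid and signal-to-noise ratio are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def synthetic_curve(t_obs, alpha, n_freq=1000, sn_ratio=5.0):
    """Red-noise curve with P(f) ~ f^alpha (alpha is the signed slope, e.g. -1.0),
    built from harmonics with random phases and sampled at the real epochs."""
    span = t_obs.max() - t_obs.min()
    f = np.arange(1, n_freq + 1) / (2.0 * span)       # lowest frequency: half the span
    amp = f ** (alpha / 2.0)                          # amplitude ~ sqrt(P)
    phase = rng.uniform(0.0, 2.0 * np.pi, n_freq)
    y = (amp[:, None] * np.cos(2 * np.pi * f[:, None] * t_obs[None, :]
                               + phase[:, None])).sum(axis=0)
    y += rng.normal(0.0, y.std() / sn_ratio, t_obs.size)   # measurement noise
    return y
```

Each trial draws one such curve per band on that band's real epochs, cross-correlates the pair, and the fraction of trials exceeding the observed peak gives the quoted confidence levels.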
We conclude that the observed X-ray-to-optical correlation at the ∼110-day lag may indeed be significant, but the verdict depends on poorly known parameters. We also note that, as shown below, the observed correlation is actually driven only by the first year’s worth of data, during which the correlation is much higher. However, calculating the significance of only a segment of the data obtained would involve a posteriori statistics, which is something we will avoid. To study the relative contributions to the correlations made by fast and slow variations, we have smoothed the light curves with a 30-day boxcar running mean, and recalculated the ZDCF. The smoothed light curves are shown as solid lines in Figure 3. The middle panel of Figure 4 shows the ZDCF for the smoothed data, and the bottom panel, the ZDCF for the residuals (i.e., the original light curves minus their respective smoothed versions). The smoothed light curves show correlations and anticorrelations that are similar, but somewhat strengthened, compared to the unsmoothed light curves, with a positive peak of $`r=0.80`$ at $`\mathrm{\Delta }t=-100`$ days, a negative peak of $`r=-0.90`$ at $`\mathrm{\Delta }t=-280`$ days, and a negative plateau of $`r\approx -0.65`$ at $`\mathrm{\Delta }t=+100`$ to $`+200`$ days. The correlation function of the residual light curves shows no significant signal, indicating that there is no correlation present in the high temporal frequency components of the data. We obtain similar results if, instead of using the ZDCF algorithm, we use a “least-squares shift and scale” scheme to find the best lag for the observed, smoothed, or residual light curves. For every time-shift between the light curves, we find the linear relation that, when applied to the X-ray light curve, minimizes the sum of the square of the differences between each optical point and the scaled X-ray point that is nearest in time to it at that shift. The global (over all time shifts) least squares then provides the best lag.
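A schematic version (ours) of this scheme; `lags` is a trial grid, and positive `lag` means the X-rays trail the optical:

```python
import numpy as np

def best_lag(t_opt, f_opt, t_x, f_x, lags):
    chi2 = []
    for lag in lags:
        # match each optical point to the nearest-in-time shifted X-ray point
        idx = np.abs((t_x[None, :] - lag) - t_opt[:, None]).argmin(axis=1)
        A = np.vstack([f_x[idx], np.ones(len(t_opt))]).T     # linear scale + offset
        coef, *_ = np.linalg.lstsq(A, f_opt, rcond=None)
        chi2.append(((f_opt - A @ coef)**2).sum())
    return lags[int(np.argmin(chi2))]
```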
We have also cross-correlated the optical light curves themselves. The ZDCF of the $`B`$ vs. $`R`$ light curves (Figure 6) shows, as expected, that they are highly correlated ($`r=0.95`$), but the peak and the centroid of the ZDCF are slightly shifted from zero lag, indicating that the $`R`$ variations lag the $`B`$ variations by several days (which is comparable to the mean sampling interval). While this delay could be interpreted as a wavelength-dependent continuum lag, we believe a more likely explanation is the fact that the strong broad H$`\alpha `$ line is included in the $`R`$ band, and contributes on the order of 10–20% of the broad-band flux. Balmer-line variations in this galaxy lag the continuum variations by about 11 days (Wanders et al. 1993) due to the light-travel time across the broad-line region. The H$`\alpha `$ contribution to the $`R`$ band probably shifts the ZDCF peak slightly from the peak at or near zero lag that it would have if there were only variable continuum emission in the band. The observed delay may therefore be considered an upper limit on the true delay between $`B`$ and $`R`$.

## 4 Discussion

Much current thinking about the emission processes in AGNs centers around the notion that the X-rays arise very close (within a few Schwarzschild radii, $`R_S`$) to a massive black hole. Support for this idea has come from the rapid variability that is observed in X-rays (implying small physical scales), as well as the detection in X-rays of a broad Fe K-shell emission line in many Seyfert 1s (e.g., Nandra et al. 1997). The emission line is thought to be gravitationally and Doppler broadened fluorescence from the inner parts of an accretion disk, after the disk is illuminated by the X-rays. The continuum-emission mechanism is not known, but most commonly it is assumed that the X-rays are optical/UV photons which have been upscattered by a population of hot electrons. The acceleration mechanism and geometry of the X-ray source are not known. Neither is the source of seed photons, and despite some substantial problems it is still usually assumed that the optical/UV arises directly from an accretion disk (Shields 1978; Malkan 1983). It has also been hypothesized that X-rays illuminating the disk, or other optically thick gas, might be responsible for some or all of the optical/UV radiation, via reprocessing (Guilbert & Rees 1988; Clavel et al. 1992). Variability data such as those we have presented above can provide stringent constraints on possible models. In summary, our data have shown strong variability in both optical and X-ray bands on month-to-year time scales, but rapid (days) variations only in the X-rays. The zero-lag correlation between the bands is poor, with a much stronger relationship implied if the optical variations lead those in the X-rays by $`\sim 100`$ d. This “100-day lead” breaks down in the latter parts of the monitoring period. Strong negative correlations are also observed for optical leads of $`\sim 280`$ d and optical lags of $`\sim 100`$–$`200`$ d. This makes us cautious about the reality of the lag, but we will discuss some physical implications of our results below. It has long been suspected that the X-rays show more rapid variations than the optical/UV, and this has been explicitly demonstrated in a few cases (e.g., NGC 4051, Done et al. 1990; NGC 7469, Nandra et al. 1998; NGC 3516, in a 3-day HST/RXTE/ASCA campaign, Edelson et al. 1999).
Our data add to that body of evidence, which implies, unavoidably, either that the observed optical radiation is not the primary seed photon source, or that the process which turns these photons into X-rays induces variability intrinsically. Given a supposed location in the inner few $`R_\mathrm{S}`$, it might be more natural to assume that UV or EUV photons are the seeds for the X-rays. With an origin in the inner disk, the EUV emission would be expected to be more variable, although it is still extremely difficult to reconcile variability as rapid as that observed with physical time scales in the disk (Molendi, Maraschi & Stella 1992). The X-ray source itself may be less directly connected to the disk physics, and could, in principle, change much more rapidly, especially if it consists of multiple flaring regions, as opposed to a single, coherent one. We have also shown tentatively that the X-rays may respond on long ($`\sim 100`$ d) time scales to variations in the optical. One way of viewing the 2–10 keV emission, then, is as the sum of two components: a smoothly-varying component that is very similar to the optical light curves during the first year, but lags them by $`\sim 100`$ days, and a fast, flickering component that is uncorrelated with the optical variations. The seeming lack of a deep minimum in the X-ray light curve, corresponding to a delayed version of the minimum seen in the optical light curve around day 1000, could arise because the slow, delayed component had nearly turned off, and the X-ray emission had become dominated by the second component. Indeed, if a constant is subtracted from the smoothed X-ray light curve, such that the smoothed curve always passes below the observed X-ray measurements (to ensure that the flux in the fast component is always positive), then the lowest points in the smoothed light curve just reach zero flux (see Figure 3). The delay of the smooth component behind the optical emission could then be interpreted as the light-travel time between the seed photon source and a Compton upscattering region. If it is not the optical photons themselves being upscattered, we would need to assume that the variations of the optical light curves can serve as surrogates for some other seed photon (UV or soft X-ray) variations. The large delay observed would put the scatterer at a relatively large distance, $`r\sim 50`$–$`100`$ lt-days ($`1.25`$–$`2.5\times 10^{17}`$ cm) from the nucleus, i.e., $`\sim 10^4R_S`$ (for a $`10^8M_{\odot }`$ black hole). It is possible to imagine a toy model of the required “Compton mirror” by surrounding the nucleus with a $`T\sim 3\times 10^9\mathrm{K}`$ (or hotter, depending on the energy of the seed photons) electron gas in a thin ($`\mathrm{\Delta }r<20`$ lt-days $`=5\times 10^{16}`$ cm) 50-lt-day-radius shell of density $`n\sim 2\times 10^6`$ cm<sup>-3</sup> and column density $`N\sim 10^{23}`$ cm<sup>-2</sup>, giving a low optical depth to Compton scattering of $`\tau \sim 0.1`$. The geometrical thinness is constrained by the small amount of broadening allowed by the data between the X-ray and optical pulse. This configuration would ensure that only photons that are singly backscattered by large angles are upscattered to the 2–10 keV band, so that a coherent echo is seen only from a small cap on the far side of the shell. This optical depth will also roughly produce the observed ratio of 1 keV and 3 keV photons in this object, corresponding to the photon index $`\mathrm{\Gamma }\approx 2`$ between 0.6–10 keV measured by George et al. (1998).
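The internal consistency of these toy-model numbers can be verified with a few lines of arithmetic; the sketch below simply recomputes the column density, Thomson depth, and backscatter echo delay from the quoted density, thickness, and radius (all of which are the toy model’s assumptions, not measurements).

```python
SIGMA_T = 6.652e-25              # Thomson cross section [cm^2]
C = 2.998e10                     # speed of light [cm/s]
LT_DAY = C * 86400.0             # one light-day [cm]

r_shell = 50.0 * LT_DAY          # shell radius, ~1.3e17 cm
dr_shell = 20.0 * LT_DAY         # thickness upper limit, ~5e16 cm
n_e = 2.0e6                      # electron density [cm^-3]

N_col = n_e * dr_shell           # column density [cm^-2]
tau = N_col * SIGMA_T            # Thomson optical depth
echo_days = 2.0 * r_shell / C / 86400.0   # backscatter echo delay [days]

print(f"N ~ {N_col:.1e} cm^-2, tau ~ {tau:.2f}, echo ~ {echo_days:.0f} d")
# -> N ~ 1.0e23 cm^-2, tau ~ 0.07, echo ~ 100 d, as quoted above
```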
However, more sophisticated calculations are required to see if the observed variations and detailed spectrum can be reproduced in this scenario. As already mentioned, this picture also leaves the rapid variability of the X-rays unexplained, so we must then invoke either another X-ray-producing region closer to the black hole, or an extended region, which produces emission at various radii (and therefore variations with a range of time scales). Similar models have been proposed to explain the emission in Galactic stellar-mass accreting neutron stars and black holes. For individual bins of frequencies in the X-ray Fourier spectrum of such objects, the hard X-rays lag the soft X-rays by an approximately constant phase, meaning there is a time lag that increases linearly with the Fourier timescale probed. For low frequencies ($`\sim 0.1`$ Hz) the time lag is about 0.2 s, corresponding, again, to $`\sim 10^4R_S/c`$ (e.g., van der Klis et al. 1987; Miyamoto et al. 1992; Vaughan et al. 1994; Ford et al. 1999). Here, too, it has been proposed that the lags are due to light-travel time in a very extended Compton upscattering gas (e.g., Sunyaev & Truemper 1979; Payne 1980; Kazanas, Hua, & Titarchuk 1997; Hua, Kazanas & Cui 1999). The more recent of these models invoke a centrally-concentrated distribution of gas, although the similarity of the optical and X-ray light curves in NGC 3516 is suggestive of the thinner shell referred to above. This picture is not free of problems. A separation into two (or more) X-ray components may be considered ad hoc. There are also strong indications from the spectral observations that the bulk of the X-ray continuum is concentrated in the central regions, arguing against a region extending to many thousands of gravitational radii. We have not considered how the Compton-upscattering gas is heated to its high temperature at such a large radius. A similar problem is encountered for the X-ray binaries (e.g., Stollman et al. 1987). Solutions that have been suggested include that, for black holes, the gas is heated locally as part of an advection-dominated accretion flow (ADAF, e.g., Narayan & Yi 1994), or, for neutron stars, that the gas was preheated by radiation from the central source (Kazanas et al. 1997), or that the energy is transported via magnetic fields (Stone et al. 1996). Alternatively, the $`\sim 100`$ d signal may be associated not with a light-travel time, but with some other time scale. One interpretation that is more in line with standard thinking about the inner regions of AGNs is that we are witnessing the effects of an inhomogeneous accretion flow onto the black hole. The time inferred from the optical-X-ray lag is then some timescale associated with the disk, or accretion, process. Some form of instability, e.g., thermal or viscous, forms in the disk and causes its optical emission to brighten. The instability then propagates inwards to the hotter, X-ray emitting radii, on a timescale of 100 days, when an X-ray “copy” of it is seen in the light curve. One problem with this scenario, however, is that it is unclear why the shapes of the variability patterns would be so similar in the two bands during the first year, implying that the instability spent very nearly equal time intervals in the optically-emitting and X-ray-emitting regions. The lack of correlation in the latter parts of the observation is also unexplained, meaning that the processes in the inner disk are far more complex than what we have just described.
These kinds of explanations have also been put forward for the hard X-ray lags in Galactic accretors. For example, Orosz et al. (1997) found that the optical brightening of the “microquasar” GRO J1655-40 preceded its X-ray (2–12 keV) outburst by 6 days, and suggested this was the result of an inward propagation of a disturbance in the accretion disk. Bottcher & Liang (1999) also presented models for accretion of a cool blob in an advection-dominated flow, with the blob’s radiation being Compton upscattered by a progressively hotter and denser corona as it drifts toward the event horizon at constant radial velocity. Alternatively, Poutanen & Fabian (1999) suggest that the lags reflect the timescale for development of a magnetic flare that floats out of a thin accretion disk into a hot, optically-thin corona, emitting progressively harder radiation until the flare ends suddenly. Applied to AGNs, their model has the attraction of naturally maintaining consistency with the fast variability time scales in the X-ray, as well as the strength and extreme broadening of the iron K$`\alpha `$ line. These pose serious problems for both the extended-corona and the ADAF models. We must also keep in mind the possibility that the similarity of the X-ray and optical light curves at a 100-day lag during the first year of our program may just be a chance coincidence, and that this is the reason for the extreme differences between the light curves after the first year. Even if there is no real correlation between the variability in these different bands, our data still constrain the origin of the optical emission. We observe no strong correlation at zero lag, or at the small positive lags expected if the optical continuum were produced by reprocessing of X-rays. An energetically significant reprocessed component in the optical emission of NGC 3516 is ruled out by our data (cf. NGC 7469, Nandra et al. 1998). Interpretation aside, we also note that both the 100-day lag between X-rays and optical, and the 30-day timescale that separates the slow, possibly-correlated X-ray variations from the fast, uncorrelated X-ray flickering, are similar to the turnover timescale in the PDS found for this object by Edelson & Nandra (1999). It will be interesting in the future to construct more specific models which can tie together these time scales.

We would like to thank the following observers at Wise, who contributed their efforts and observing time to obtain the data presented here: R. Be’eri, T. Contini, J. Dann, A. Gal-Yam, U. Giveon, A. Heller, S. Kaspi, Y. Lipkin, I. Maor, H. Mendelson, E. Ofek, A. Retter, O. Shemmer, G. Raviv, and S. Steindling. We are also grateful for the assistance of the Wise Observatory staff: S. Ben-Guigui, P. Ibbetson, and E. Mashal. We acknowledge valuable discussions with N. Arav, I. George, A. Laor, A. Levinson, H. Netzer, A. Sternberg, and J. Turner. T. Alexander is thanked for providing his ZDCF code, and the anonymous referee for useful suggestions. Multiwavelength studies at Wise Observatory are supported by a grant from the Israel Science Foundation.
# Raman spectroscopy of InN films grown on Si

## Abstract

We have used Raman spectroscopy to study indium nitride thin films grown by molecular beam epitaxy on (111) silicon substrates at temperatures between 450 and 550 °C. The Raman spectra show well-defined peaks at 443, 475, 491, and 591 cm<sup>-1</sup>, which correspond to the $`A_1`$(TO), $`E_1`$(TO), $`E_2^{\mathrm{high}}`$, and $`A_1`$(LO) phonons of the wurtzite structure, respectively. In backscattering normal to the surface the $`A_1`$(TO) and $`E_1`$(TO) peaks are very weak, indicating that the films grow along the hexagonal $`c`$ axis. The dependence of the peak width on growth temperature reveals that the optimum temperature is 500 °C, for which the full width of the $`E_2^{\mathrm{high}}`$ peak has the minimum value of 7 cm<sup>-1</sup>. This small value, comparable to previous results for InN films grown on sapphire, is evidence of the good crystallinity of the films.

The III-V direct-gap semiconductor InN has been largely ignored because its low dissociation temperature makes it very difficult to grow. However, in the last few years there has been increasing interest in the material due, to a large extent, to the successful application of nitride compounds in ultraviolet-blue light-emitting diodes and lasers. In particular, InN has promising transport and optical properties. Its large drift velocity at room temperature could render it better than GaAs and GaN for field-effect transistors. Carrier capture by InN quantum dots has been invoked to explain the efficient emission of commercial blue-violet InGaN diode lasers. InN/Si tandem solar cells have been proposed for increased efficiency. Finally, the important quaternary alloy AlGaInN covers most of the visible spectrum, reaching the orange-red end for InN. Due to the lack of suitable lattice-matched substrates, InN thin films have been grown mostly on sapphire, which is widely available. Although silicon would be preferable for device applications and integration with microelectronic integrated circuits, films grown directly on Si substrates are poorly oriented. The reason is that In adatoms have a long migration length, which causes the formation of InN islands during the initial stages of growth. The exposed Si surface reacts with the nitrogen beam to produce amorphous SiN, hindering the growth of high-quality InN. An AlN buffer layer has been shown to improve the quality of InN films grown on sapphire and also of GaN films on Si. Since Al has a short migration length, a thin, uniform AlN layer can be grown, avoiding the reaction of the substrate with the nitrogen beam. Like other nitrides, InN can crystallize in the wurtzite hexagonal or the zincblende cubic structure. Raman spectroscopy has been extensively used to determine the structure and the crystallinity of GaN. For InN, previous work has been limited to wurtzite films grown on (0001) sapphire and zincblende films grown on (001) GaAs. In this paper we report the growth and characterization by Raman scattering of oriented, crystalline InN films. InN thin films were grown by molecular beam epitaxy on (111) Si substrates using a RF plasma nitrogen source. Three different samples were grown at substrate temperatures $`T_\mathrm{g}`$ of 450, 500, and 550 °C, respectively. A 10-nm-thick AlN buffer layer was deposited between the substrate and the InN layer. Under the optical microscope the films presented a domain-like morphology. The domain size increased with growth temperature.
The film grown at 550 °C showed poor adherence to the substrate. Room-temperature Raman spectra were taken with a Renishaw Ramascope spectrometer, equipped with an Ar<sup>+</sup> ion laser as a light source operating at a wavelength of 514.5 nm and focused on the sample through an optical microscope. The power density on the surface was of the order of 100 kW/cm<sup>2</sup>. Light scattered by the sample was collected with the same microscope and analyzed with a single-grating spectrograph and a CCD detector. Reflected and elastically scattered light was blocked with two holographic filters, which also removed most of the Raman spectrum below 100 cm<sup>-1</sup>. Figure 1 shows the Raman spectra for the InN films grown at different temperatures. All samples exhibit four peaks characteristic of bulk InN. In addition, the 550 °C sample shows some peaks originating from the silicon substrate (labelled Si in the figure), due to its deficient coverage. The other samples show no signal from the substrate, and none of the three spectra reveal any band coming from the AlN buffer layer. The zincblende structure (space group $`T_d^2`$, $`F\overline{4}3m`$) has only two Raman-active phonons, $`F_2`$(TO) and $`F_2`$(LO). The wurtzite structure (space group $`C_{6v}^4`$, $`P6_3mc`$), which is the most stable, has six Raman-active phonons, $`A_1`$(TO), $`A_1`$(LO), $`E_1`$(TO), $`E_1`$(LO), and $`2E_2`$. Therefore the number of peaks observed in the spectra indicates that the wurtzite phase must be present. Table I lists the positions of the peaks and their symmetry assignment. The strongest peaks correspond to the $`E_2^{\mathrm{high}}`$ phonon at around 491 cm<sup>-1</sup> and to the $`A_1`$(LO) phonon at around 591 cm<sup>-1</sup>. The frequency of the $`E_2^{\mathrm{high}}`$ phonon is very close to the value of 488 cm<sup>-1</sup> reported for InN on sapphire. The peaks at 443 and 475 cm<sup>-1</sup> have been identified as the $`A_1`$(TO) and the $`E_1`$(TO) phonons, respectively, of the hexagonal phase. This assignment has been made by comparison with the intensities of good-quality GaN Raman spectra and by following the trend of phonon frequencies in AlN and GaN. The frequency of the $`A_1`$(TO) phonon agrees well with the value of 450 cm<sup>-1</sup> calculated by Kim et al., but is much lower than the value reported by Inushima et al., who assigned it to a shoulder in the Raman spectrum at 480 cm<sup>-1</sup>. On the other hand, the frequency of the $`E_1`$(TO) phonon coincides with the value of Inushima et al., but disagrees with the value of 580 cm<sup>-1</sup> estimated by Kim et al. Both the $`E_1`$(TO) and the $`A_1`$(TO) are forbidden in backscattering along the hexagonal $`c`$ axis. The fact that these peaks are very weak indicates that the films grow with a preferential orientation of this axis normal to the substrate plane. The $`A_1`$(LO) peak at 591 cm<sup>-1</sup> shows a low-energy tail whose intensity depends on the sample and even on the measuring point. It could arise from the appearance of the forbidden mode $`E_1`$(LO), which has been reported to be at 570 cm<sup>-1</sup>. Alternatively, the low-energy tail can be attributed to LO-phonon-plasmon coupling due to residual free carriers, whose concentration increases with increasing growth temperature. This interpretation would explain the observed increase of the continuous background with temperature, an effect that has been associated in GaN with an increasing dopant density.
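The peak positions and widths discussed here and collected in Table I are obtained by fitting the measured lines. Purely as an illustration of that step (the “spectrum” below is synthesized, not measured, and the SciPy routine is our choice, not necessarily that of the original analysis), a Lorentzian plus constant background can be fitted to the $`E_2^{\mathrm{high}}`$ region:

```python
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(x, x0, fwhm, amp, bg):
    # Lorentzian line profile on a constant background.
    return amp * (fwhm / 2.0) ** 2 / ((x - x0) ** 2 + (fwhm / 2.0) ** 2) + bg

# Synthetic data standing in for the E2(high) region (~491 cm^-1).
shift = np.linspace(460.0, 520.0, 200)          # Raman shift [cm^-1]
rng = np.random.default_rng(0)
counts = (lorentzian(shift, 491.0, 7.0, 1000.0, 50.0)
          + rng.normal(0.0, 10.0, shift.size))

popt, pcov = curve_fit(lorentzian, shift, counts, p0=[490.0, 10.0, 900.0, 40.0])
print(f"peak = {popt[0]:.1f} cm^-1, FWHM = {abs(popt[1]):.1f} cm^-1")
```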
The full width at half maximum (FWHM) of the $`E_2^{\mathrm{high}}`$ Raman peak, which varies from 13 to 7 cm<sup>-1</sup> (see Table I), depends on the crystallinity of the films. Usually, the peak is broadened by reduced phonon coherence caused by lattice disorder, the formation of nanocrystals, or the presence of defects and impurities. The observed linewidth indicates that the best crystallinity is obtained for a growth temperature of 500 °C. The FWHM value of 7 cm<sup>-1</sup> at this temperature is comparable to the value of 5 cm<sup>-1</sup> observed for the best films grown on sapphire. Although sapphire has been preferred among the lattice-mismatched substrates, our results show that buffered silicon can also produce InN films with good crystallinity. This offers the advantage of better compatibility with microelectronic integrated circuits and other silicon-based devices. In summary, by depositing a thin AlN buffer layer we have been able to grow InN films with good crystallinity on Si substrates by molecular beam epitaxy. An analysis of their Raman spectra shows that the films have a wurtzite structure with the hexagonal $`c`$ axis perpendicular to the substrate plane. The best crystallinity of the samples is achieved for a growth temperature of (or close to) 500 °C. We acknowledge financial support from the Spanish CICyT (Projects MAT96-0395-CP and MAT97-0725) and the US Army Research Office. We thank C. Pecharromán for helpful discussions.
# Nuclear heating and melted layers in the inner crust of an accreting neutron star

## 1 Introduction

Many of the known neutron stars reside in low-mass x-ray binaries. These sources typically accrete at rates of $`10^{-11}M_{\odot }\mathrm{yr}^{-1}`$ to $`10^{-8}M_{\odot }\mathrm{yr}^{-1}`$ and show no conclusive evidence, such as cyclotron lines or coherent pulsations (in the persistent emission), of a magnetic field. In contrast to studies of isolated neutron star cooling (for a review, see Tsuruta 1998), there has been much less interest in the interior thermal state of an accreting neutron star. Originally, studies of how accretion affects a neutron star’s thermal structure were motivated by the challenge of explaining the type I x-ray bursts of some of these sources. Both Lamb & Lamb (1978) and Ayasli & Joss (1982) estimated the steady-state core temperature by balancing the heating from hydrogen and helium shell burning with neutrino and radiative losses. Later, Fujimoto et al. (1984) and Hanawa & Fujimoto (1984) calculated the thermal evolution of the entire neutron star, both for steady hydrogen and helium burning (Fujimoto et al. 1984) and for repeated shell flashes (Hanawa & Fujimoto 1984). In all of these works, the only heat sources considered were the hydrogen/helium burning and the influx of entropy carried by the accreted matter. Both Fujimoto et al. (1984) and Hanawa & Fujimoto (1984), who considered accretion rates $`\lesssim 5\times 10^{-9}M_{\odot }\mathrm{yr}^{-1}`$, found that the deep crust and core would gradually become isothermal, at a temperature $`\sim 10^8\mathrm{K}`$, if there were no enhanced neutrino cooling; otherwise, the core would remain chilled at temperatures $`\lesssim 10^7\mathrm{K}`$. Without heat sources in the crust, the temperature of the deep crust tracks that of the core and is therefore sensitive to the cooling from neutrino processes active in the core. Unlike an isolated neutron star, the crust of an accreting neutron star is not in nuclear statistical equilibrium, but rather has a composition set by the nuclear history of the accreted material (Sato 1979; Blaes et al. 1990, 1992; Haensel & Zdunik 1990a, b). The atmosphere is composed of the accreted helium and hydrogen and any metals present (see Bildsten, Salpeter, & Wasserman 1992 for a discussion). This accumulated fuel eventually burns to heavier elements. The accretion of fresh fuel shoves the original crust deeper and, if continued over a long enough interval, will eventually replace the original crust with one formed from the ashes of hydrogen/helium burning. Compression of the crust by the weight of continually accreting material induces non-equilibrium reactions that release heat. The composition of the replaced crust is uncertain. Improved treatments of the physics of hydrogen and helium nuclear burning revealed that the ashes of this burning are unlikely to be a pure species, e.g., iron. The mixture is formed by the rp-process (Wallace & Woosley 1981; Champagne & Wiescher 1992; Van Wormer et al. 1994; Schatz et al. 1998), a sequence of rapid proton captures onto seed nuclei provided by helium and CNO burning. Calculations of the nucleosynthetic yield from the rp-process have been done, both for unstable burning during an x-ray burst (Koike et al. 1999) and for steady-state hydrogen and helium burning (Schatz et al. 1999). The ashes of the stable burning are a motley mix of iron-peak elements, so the crust formed from these ashes will likely be very impure (Schatz et al. 1999).
This paper studies the crustal temperatures of steadily accreting neutron stars with low magnetic fields. There are two differences with Miralda-Escudé, Paczynski, & Haensel (1990) and Zdunik et al. (1992), both of which included crust reactions in the neutron star’s thermal balance. First, this work considers the stable regime of hydrogen/helium burning, which requires rapid accretion (near the Eddington limit, $`\sim 10^{-8}M_{\odot }\mathrm{yr}^{-1}`$). At the low accretion rates ($`\dot{M}\sim 10^{-10}M_{\odot }\mathrm{yr}^{-1}`$, roughly two orders of magnitude less than the Eddington limit) considered by Miralda-Escudé et al. (1990) and Zdunik et al. (1992), the crust is basically isothermal, with a temperature locked to that of the core. Second, this work allows for an impure crust by surveying both high- and low-conductivity cases. Previous calculations have assumed the impurity concentration to be much less than unity. At the rapid accretion rates considered here, the neutrino luminosity from modified Urca processes and crust bremsstrahlung is significant and causes the temperature to decrease with depth in the inner crust (Brown & Bildsten 1998). Almost all of the heat produced in the crust flows inward. Moreover, the reduced conductivity of the impure crust produces a peaked thermal profile with a maximum temperature where the nuclear reactions heat the crust, at densities $`\gtrsim 6\times 10^{11}\mathrm{g}\mathrm{cm}^{-3}`$. The thermal profile in the crust is primarily determined by the ability of the inner crust to conduct a flux of $`\sim 1\mathrm{MeV}`$ per accreted baryon into the core, and is insensitive to the temperature of the hydrogen/helium burning shells. If the crust is very impure, the crust reaches temperatures $`\approx 8\times 10^8\mathrm{K}`$; the value of this temperature only weakly depends on the core temperature. Electron captures in the crust reduce the charge of the nuclei ($`Z`$) and hence the electrostatic binding of the lattice. For the hottest temperatures in the crust, this low-$`Z`$ lattice melts where the charge is lowest ($`Z\sim 15`$ for the composition of Haensel & Zdunik 1990a). As a result, the inner crust of the neutron star comes to resemble a “layer cake,” with alternating layers of lattice and liquid. This paper is relevant for the brightest low-mass x-ray binaries. These weakly magnetized neutron stars are considered possible progenitors of millisecond pulsars (for a review, see Bhattacharya 1995), and there has been much theoretical interest in the evolution of the crust magnetic field (Romani 1990; Geppert & Urpin 1994; Urpin & Geppert 1995; Konar & Bhattacharya 1997; Brown & Bildsten 1998; Urpin, Geppert, & Konenkov 1998). Many of these neutron stars rotate within an apparently narrow range of spin frequencies, $`\sim 300\mathrm{Hz}`$ (e.g., van der Klis 1998). One possibility for this convergence of spin frequencies is that gravitational radiation from the neutron star balances the accretion torque (Bildsten 1998; Andersson et al. 1999). The source for the gravitational radiation could be a mass quadrupole formed by misaligned electron capture layers in the crust (Bildsten 1998) or a current quadrupole from an r-mode instability in the core (Andersson 1998; Friedman & Morsink 1998; Lindblom, Owen, & Morsink 1998; Owen et al. 1998; Andersson, Kokkotas, & Schutz 1999; Andersson, Kokkotas, & Stergioulas 1999). All of these problems depend on the thermodynamics of the neutron star’s crust and core and motivate this paper.
### 1.1 An overview of the problem

A neutron star has several distinctive regions. The *core* consists of uniform $`npe^{-}`$ matter (in the least dense parts). At a baryon density less than $`n\approx 0.6n_s`$, where $`n_s=0.16\mathrm{fm}^{-3}`$ is the saturation density<sup>1</sup><sup>1</sup>1Density is measured in units of $`\mathrm{fm}^{-3}`$ and pressure in units of $`\mathrm{MeV}\mathrm{fm}^{-3}`$. Note that throughout the crust, $`\rho \approx nm_u=1.66\times 10^{14}(n/0.1\mathrm{fm}^{-3})\mathrm{g}\mathrm{cm}^{-3}`$, where $`m_u=1.66\times 10^{-24}\mathrm{g}`$ is the atomic mass unit, and $`1\mathrm{MeV}\mathrm{fm}^{-3}=1.6\times 10^{33}\mathrm{dyne}\mathrm{cm}^{-2}`$. of nuclear matter, individual nuclei appear (Pethick, Ravenhall, & Lorenz 1995). The portion of the neutron star exterior to this point, the *inner crust*, is composed of nuclei, degenerate neutrons, and relativistic degenerate electrons. Where the electron Fermi energy is less than about twice the nuclear bulk energy ($`\sim 30\mathrm{MeV}`$; see Pethick & Ravenhall 1998), free neutrons can no longer exist in $`\beta `$-equilibrium. This point, *neutron drip*, has a density $`\approx 0.0023n_s`$ ($`4\times 10^{-4}\mathrm{fm}^{-3}`$) and marks the boundary between the inner and the *outer crust*. At lesser densities, the crust is made of nuclei and electrons, with the degenerate electrons supplying the pressure. The outer and inner crust collectively occupy the outermost kilometer or so of the neutron star and contain a total mass of order $`0.01M_{\odot }`$. The boundaries of the crust are demarcated by surfaces of constant pressure. For a thin crust, the mass above a given isobar is fixed by the surface gravity and area. As a consequence, accretion during a time brief compared with $`M/\dot{M}`$ (so that the overall structure of the neutron star remains roughly constant) pushes the underlying crust through these compositional boundaries. Low-mass x-ray binaries live for more than $`10^8\mathrm{yr}`$ (Webbink, Rappaport, & Savonije 1983); even at $`\dot{M}=10^{-10}M_{\odot }\mathrm{yr}^{-1}`$, the neutron star can easily accrete enough material from the secondary to replace its entire crust. This replaced crust is composed of the ashes of hydrogen and helium burning and is quite different in composition from the original. Unlike during the neutron star’s hot birth, the crust does not burn to nuclear statistical equilibrium. As the ashes of the hydrogen and helium burning are pushed deeper into the crust, the rising electron Fermi energy induces a series of electron captures (Haensel & Zdunik 1990b; Sato 1979; Blaes et al. 1990). Further compression of this low-$`Z`$ material causes neutron emissions and pycnonuclear reactions (Haensel & Zdunik 1990b; Sato 1979). A schematic of the composition, from the calculation of Haensel & Zdunik (1990b), is shown in Figure 1. Each decrease in the nuclear charge $`Z`$ (*bottom panel*) is from an electron capture, and each decrease in the nuclear mass number $`A`$ (*top panel*) is from a neutron emission. Where a pycnonuclear reaction occurs, both $`Z`$ and $`A`$ double. The pycnonuclear reactions and neutron emissions liberate $`E_N\approx 1\mathrm{MeV}`$ for each baryon accreted and heat the crust at a rate $`L_N\approx 10^{36}(\dot{M}/10^{-8}M_{\odot }\mathrm{yr}^{-1})\mathrm{erg}\mathrm{s}^{-1}`$. To set the scale for the core temperature, note that the core must be at a temperature $`\approx 4\times 10^8(L_N/10^{36}\mathrm{erg}\mathrm{s}^{-1})^{1/8}\mathrm{K}`$ for modified Urca processes (Friman & Maxwell 1979; Yakovlev & Levenfish 1995) to radiate a neutrino luminosity equal to $`L_N`$.
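The scales quoted in the last two sentences follow from one-line estimates; the sketch below just re-derives them with rounded constants.

```python
MEV = 1.602e-6                      # erg
M_U = 1.66e-24                      # atomic mass unit [g]
MDOT = 1e-8 * 1.989e33 / 3.156e7    # 1e-8 Msun/yr in g/s

E_N = 1.0 * MEV                     # ~1 MeV released per accreted baryon
L_N = (MDOT / M_U) * E_N            # crust heating rate [erg/s]
T_core = 4e8 * (L_N / 1e36) ** (1.0 / 8.0)   # modified Urca balance [K]

print(f"L_N ~ {L_N:.1e} erg/s, core T ~ {T_core:.1e} K")
# -> L_N ~ 6e35 erg/s, i.e. the ~1e36 erg/s scale quoted above,
#    and a balancing core temperature of ~4e8 K
```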
If the nucleons in the core were superfluid and the modified Urca processes correspondingly suppressed, crust neutrino bremsstrahlung (Maxwell 1979; Yakovlev & Kaminker 1996; Haensel, Kaminker, & Yakovlev 1996; Itoh et al. 1996) can also balance $`L_N`$ for an inner crust temperature $`\approx 6\times 10^8(L_N/10^{36}\mathrm{erg}\mathrm{s}^{-1})^{1/6}\mathrm{K}`$. The luminosity from the steady-state hydrogen/helium burning ($`\approx 5\mathrm{MeV}`$ per accreted baryon; Schatz et al. 1999) is much larger than that flowing out from deeper in the crust. As a result, the temperature at the base of the hydrogen/helium burning shell is determined by the luminosity there (for a review, see Bildsten 1998) and is $`\approx 5\times 10^8\mathrm{K}`$ for $`\dot{M}\approx 10^{-8}M_{\odot }\mathrm{yr}^{-1}`$ (Brown & Bildsten 1998; Schatz et al. 1999). The conductivity is much greater in the inner crust (the conductivity in the crust increases with density), while the thickness of the inner crust is only a factor of two greater than the thickness of the outer crust. For a similar thermal gradient, the inner crust can carry a much larger flux. If the change in temperature between the hydrogen/helium burning shell and neutron drip is of the same order as the change in the inner crust, then most of the nuclear luminosity generated in the crust will flow into the core (Brown & Bildsten 1998).

### 1.2 Outline of this paper

The remainder of this paper paints in the crude picture just sketched. Sections 2 and 3 develop the details of the calculation, which proceeds in two steps. First, the hydrostatic structure of the neutron star is computed for different equations of state (§ 2.1) and neutron star masses. These hydrostatic models serve as background for the thermal computations, which are discussed in § 3. Section 3 also describes the relevant microphysics: the heating by crust reactions and the cooling by the crust and core neutrino emissivity (§ 3.1); the reduction of the core neutrino emissivity by neutron and proton superfluidity (§ 3.2); and the conductivity of the crust and core (§ 3.3), including the effect of impurities. Section 4 contains the results of these calculations, which are split into three parts. First, there is a discussion of the nature of the thermal profile and its dependence on the crust composition and the core neutrino emissivity (§ 4.1). Section 4.2 presents some analytical formulae for the crust temperature; these formulae are used (§ 4.3) to show how the high-accretion-rate solutions discussed here connect with those at lower accretion rates (e.g., Miralda-Escudé et al. 1990; Zdunik et al. 1992). The melting and refreezing of the crystalline lattice in the inner crust are discussed in § 4.4.

## 2 Hydrostatic structure

Throughout the crust and core, the pressure is supplied by degenerate particles with Fermi energies $`\gg k_\mathrm{B}T`$, and so the equation of state (EOS) scarcely depends on temperature. The crust reactions heat the core on a timescale
$$\tau _H\approx \left(\frac{M}{m_u}\right)\frac{CT}{L_N}\approx \frac{M}{\dot{M}}\frac{CT}{E_N}\ll \frac{M}{\dot{M}}.$$ (1)
In this equation $`C`$ is the specific heat per baryon, $`M/m_u`$ is approximately the number of baryons in the star, and $`M/\dot{M}`$ is the timescale for the mass to increase from accretion.
If the heat is stored in the electrons (as would be the case if the neutrons were superfluid), the heat content per baryon (e.g., Landau & Lifshitz 1980) is $`CT\approx \pi ^2k_\mathrm{B}T(k_\mathrm{B}T/E_{\mathrm{F},\mathrm{e}})\ll E_N`$, where $`E_{\mathrm{F},\mathrm{e}}`$ is the electron Fermi energy. The heat content per baryon is similar if the neutrons are normal (Lamb & Lamb 1978). Equation (1) shows that over the timescales necessary to establish a thermal steady state, the mass of the star changes only slightly, and so the hydrostatic equations need not be solved simultaneously with the thermal equations. Using a fixed hydrostatic structure simplifies the thermal calculation. To calculate the temperature as a function of radius requires integrating the heat transport equations over the star. The strong gravitational field modifies the heat flow. In an isothermal star, where there is no heat flow, the *redshifted* temperature is constant, while the proper (as measured by a local thermometer) temperature increases as one moves toward the stellar center. Because the neutrino emissivity is a strong function of temperature, the thermal transport equations must account for gravitational effects. The appropriate equations, solved for each stellar mass and EOS, are the post-Newtonian stellar structure equations (Thorne 1977) for the radius, gravitational mass, potential, and pressure:
$$\frac{\partial r}{\partial a}=\left(4\pi r^2n\right)^{-1}\left(1-\frac{2Gm}{rc^2}\right)^{1/2}$$ (2)
$$\frac{\partial m}{\partial a}=\frac{\rho }{n}\left(1-\frac{2Gm}{rc^2}\right)^{1/2}$$ (3)
$$\frac{\partial \mathrm{\Phi }}{\partial a}=\frac{Gm}{4\pi r^4n}\left(1+\frac{4\pi r^3p}{mc^2}\right)\left(1-\frac{2Gm}{rc^2}\right)^{-1/2}$$ (4)
$$\frac{\partial p}{\partial a}=-\frac{Gm}{4\pi r^4}\frac{\rho }{n}\left(1+\frac{p}{\rho c^2}\right)\left(1+\frac{4\pi r^3p}{mc^2}\right)\left(1-\frac{2Gm}{rc^2}\right)^{-1/2}.$$ (5)
In these equations the Lagrangian variable $`a`$ is the total number of baryons inside a sphere of area $`4\pi r^2`$, and $`\rho `$ is the mass density. The potential $`\mathrm{\Phi }`$ appears in the time-time component of the metric as $`e^{2\mathrm{\Phi }/c^2}`$ (it governs the redshift of photons and neutrinos; Misner, Thorne, & Wheeler 1973) and satisfies the boundary condition that at the stellar surface $`e^{2\mathrm{\Phi }/c^2}|_{r=R}=1-2GM/Rc^2`$, where $`M`$ and $`4\pi R^2`$ are the total gravitational mass and surface area of the neutron star.

### 2.1 Equation of state

For purposes of calculating the crust EOS, the ashes of hydrogen and helium burning are presumed to be pure iron (but see the discussion in § 3.3 on how the composition affects the energy release and heat transport). As a mass element is compressed to greater densities and pressures, the rising electron Fermi energy triggers a series of electron captures, neutron emissions, and pycnonuclear reactions (Sato 1979; Blaes et al. 1990; Haensel & Zdunik 1990b). At any given density, only one species is assumed present (see Figure 1), according to the composition calculated by Haensel & Zdunik (1990a, b). In the outer crust, relativistic degenerate electrons of density $`n_e=Y_en`$ supply the pressure.
The electron chemical potential is basically the Fermi energy $`E_{\mathrm{F},\mathrm{e}}=m_ec^2[1+(3\pi ^2n_e)^{2/3}\lambda _e^2]^{1/2}`$, where $`\lambda _e=386.2\mathrm{fm}`$ is the electron Compton wavelength. I calculate the electron pressure, which is approximately $`n_eE_{\mathrm{F},\mathrm{e}}/4`$, from the interpolation formula of Paczyński (1983). The lattice pressure is calculated from the ionic free energy, which is a function of
$$\mathrm{\Gamma }=\frac{Z^2e^2}{k_\mathrm{B}T}\left(\frac{4\pi }{3}n_N\right)^{1/3},$$ (6)
where $`n_N`$ is the density of nuclei. I use the fits of Farouki & Hamaguchi (1993) to Monte Carlo simulations of the free energy. (In the crust, the free energy per nucleus is to lowest order just the Madelung energy, $`-0.9\mathrm{\Gamma }k_\mathrm{B}T`$.) These fits are valid for $`\mathrm{\Gamma }>1`$, which is always the case for the density-temperature regime of interest. Following Farouki & Hamaguchi (1993), I presume the nuclei are crystalline for $`\mathrm{\Gamma }\geq 173`$. The binding energies of the nuclei are computed from a compressible liquid-drop model (Mackie & Baym 1977). This formula accounts for an external neutron gas and is therefore applicable at densities greater than neutron drip. The energy density and pressure of the neutron gas (which differ from those of an ideal degenerate gas) are also computed with this model in the limit of a vanishing proton fraction. Summing the pressure contributions from electrons, ions, and neutrons gives the crust EOS, which agrees with that of Haensel & Zdunik (1990a) to the accuracy of their table. Figure 2 displays this relation, $`p(n)`$, throughout the inner crust. For reference, the $`p\propto n^{4/3}`$ relation appropriate for an EOS dominated by relativistic, degenerate electrons is also shown (*dotted line*). Free neutrons are present (*heavy solid line*) for $`n>3.6\times 10^{-4}\mathrm{fm}^{-3}`$. As noted by Haensel & Zdunik (1990a), for $`n>0.04\mathrm{fm}^{-3}`$ the free neutrons provide most of the pressure, and the ionic composition becomes less and less important to the EOS. In this regime I use the $`p(n)`$ fit of Negele & Vautherin (1973). At $`n\approx 0.1\mathrm{fm}^{-3}`$, the nuclei dissolve into uniform nuclear matter (Pethick et al. 1995). I select two sample core equations of state for comparison. The first EOS is a fit (Lai 1994) to the AV14+UVII interaction, which is the Argonne V14 potential, with a three-nucleon interaction prescribed by the Urbana VII potential (Wiringa, Fiks, & Fabrocini 1988). The second EOS, called AV18+$`\delta `$v+UIX\*, is a Skyrme-type Hamiltonian fit (Akmal, Pandharipande, & Ravenhall 1998, appendix A) to the Argonne V18 potential, with relativistic boost corrections and the three-nucleon interaction UIX\* (Akmal et al. 1998). The components of both interactions are neutrons, protons, electrons, and, where $`E_{\mathrm{F},\mathrm{e}}>m_\mu c^2=105.66\mathrm{MeV}`$, muons. I do not consider, for simplicity, other possible components (e.g., hyperons or quark matter) in the EOS. To construct a table suitable for interpolating $`n(p)`$, I calculate for each $`n`$ the proton fraction $`Y_p=n_p/n`$ and electron fraction $`Y_e=n_e/n`$ from the equations for $`\beta `$-equilibrium, $`\mu _n-\mu _p=\mu _e=\mu _\mu `$, and charge neutrality, $`n_p=n_e+n_\mu `$. Given $`(Y_p,Y_e)`$, I then compute the mass density $`\rho `$ and the pressure $`p=c^2(n\partial \rho /\partial n-\rho )`$.
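Equation (6) is simple enough to evaluate directly, and doing so previews the melting argument made later (§ 4.4). In the sketch below the charge, ion density, and temperatures are illustrative values of roughly the right order, not entries from the paper’s tables.

```python
import numpy as np

E2 = (4.803e-10) ** 2        # electron charge squared [esu^2]
KB = 1.381e-16               # Boltzmann constant [erg/K]

def coulomb_gamma(Z, n_N_fm3, T):
    # Eq. (6): Gamma = Z^2 e^2 / (a k_B T) with a = (3 / 4 pi n_N)^(1/3).
    n_N = n_N_fm3 * 1e39     # convert fm^-3 to cm^-3
    a = (3.0 / (4.0 * np.pi * n_N)) ** (1.0 / 3.0)
    return Z ** 2 * E2 / (a * KB * T)

# A low-charge (Z = 15) layer at n_N ~ 1e-5 fm^-3, at two crust temperatures:
for T in (3e8, 8e8):
    g = coulomb_gamma(15, 1e-5, T)
    print(f"T = {T:.0e} K: Gamma = {g:.0f} ->",
          "solid" if g >= 173 else "liquid")
# -> Gamma ~ 435 (solid) at 3e8 K but ~163 (liquid) at 8e8 K
```

That a low-$`Z`$ layer crosses $`\mathrm{\Gamma }=173`$ between these two temperatures is the behavior behind the melted layers discussed in § 4.4.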
There have been many attempts to calculate the density of the phase transition from the inner crust to the core (see Pethick & Ravenhall 1995, and references therein). I adopt the following approach. The density and pressure of the AV18+$`\delta `$v+UIX\* EOS equal those of Negele & Vautherin (1973) at $`n=0.078\mathrm{fm}^{-3}`$, $`p=0.39\mathrm{MeV}\mathrm{fm}^{-3}`$. I therefore take this density as the transition from crust to core; there is no density discontinuity in this case. For the AV14+UVII EOS, the energy density is always greater than that of Negele & Vautherin (1973), and so I choose the maximum crust density to be $`0.1\mathrm{fm}^{-3}`$ ($`p=0.60\mathrm{MeV}\mathrm{fm}^{-3}`$). In this case, there is a substantial density jump (from $`n=0.1\mathrm{fm}^{-3}`$ to $`n=0.13\mathrm{fm}^{-3}`$) between crust and core. The choice of $`n=0.1\mathrm{fm}^{-3}`$ as the upper limit for the crust density reflects recent detailed calculations (Pethick et al. 1995) of the phase transition. For equilibrium crust compositions, it becomes energetically favorable for nuclei to turn inside-out in the inner crust and form a phase with bubbles of neutron gas encased in bound nuclear matter (Lorenz, Ravenhall, & Pethick 1993; Oyamatsu 1993). Because the charge of nuclei in an accreted crust is less than that of the equilibrium composition, it is possible that the nuclei do not turn inside-out. The condition for this inversion (see, e.g., Pethick & Ravenhall 1995) is that the nuclear radius be more than one-half the Wigner-Seitz radius, $`(4\pi n_N/3)^{-1/3}`$. For the core EOS AV18+$`\delta `$v+UIX\*, this ratio at the bottom of the crust is (for $`Z=20`$, $`A=100`$, $`Y_n=0.8`$, where $`Y_n=n_n/n`$ is the neutron fraction) 0.50. For the core EOS AV14+UVII, the ratio is 0.54. In the absence of a more detailed calculation of the composition, I am unable to say if an accreted crust contains non-spherical nuclei and have not explored this possibility.

### 2.2 Construction of models

With $`n(p)`$ specified by interpolation from a table, I integrate equations (2)–(5) with a fourth-order Runge-Kutta integration algorithm (Press et al. 1992). Borrowing a technique used by van Riper (1991), the code restricts the stepsize $`\mathrm{\Delta }a`$ to be always less than some fraction $`f`$ of the reciprocal sum of the radial and baric scale heights,
$$\mathrm{\Delta }a\leq f\left(\frac{\partial \mathrm{ln}r}{\partial a}-\frac{\partial \mathrm{ln}P}{\partial a}\right)^{-1}.$$ (7)
Starting from a fixed central pressure, I expand the hydrostatic equations about the center $`a=m=r=0`$ and integrate outwards until the pressure is less than $`3.4\times 10^{-8}\mathrm{MeV}\mathrm{fm}^{-3}`$ (corresponding to a density $`n\approx 2.4\times 10^{-8}\mathrm{fm}^{-3}`$, about a factor of 10 greater than where the helium burning ends; Schatz et al. 1999). At this pressure, the radius and mass are constant to within $`10^{-5}R`$ and $`10^{-9}M`$, respectively. The algorithm iteratively adjusts the central pressure until a target gravitational mass $`M`$ is reached. The integration steps are then stored for later use in solving the thermal equations. For each of the two equations of state, AV14+UVII and AV18+$`\delta `$v+UIX\*, I compute two masses, $`M=1.4M_{\odot }`$ and $`M=1.8M_{\odot }`$; a summary of these four structures is provided in Table 1. Despite continuous accretion, the pressure is a good Eulerian coordinate throughout the crust (Bildsten et al. 1992; Brown & Bildsten 1998), and I shall plot the temperature and luminosity against it.
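A skeletal version of this integration loop is sketched below, assuming an `eos(p)` callable that returns $`(n,\rho )`$ by interpolation in the tabulated EOS; the series expansion about the center and the outer iteration on the central pressure (the shooting to a target mass) are omitted.

```python
import numpy as np

G, C = 6.674e-8, 2.998e10                      # cgs units

def rhs(a, y, eos):
    # Right-hand sides of eqs. (2)-(5); y = (r, m, Phi, p).
    r, m, phi, p = y
    n, rho = eos(p)                            # interpolated inverse EOS
    root = np.sqrt(1.0 - 2.0 * G * m / (r * C ** 2))
    corr = (1.0 + 4.0 * np.pi * r ** 3 * p / (m * C ** 2)) / root
    drda = root / (4.0 * np.pi * r ** 2 * n)
    dmda = (rho / n) * root
    dphida = G * m / (4.0 * np.pi * r ** 4 * n) * corr
    dpda = (-G * m / (4.0 * np.pi * r ** 4) * (rho / n)
            * (1.0 + p / (rho * C ** 2)) * corr)
    return np.array([drda, dmda, dphida, dpda])

def rk4_step(a, y, da, eos):
    # One fourth-order Runge-Kutta step in the baryon number a.
    k1 = rhs(a, y, eos)
    k2 = rhs(a + da / 2.0, y + da * k1 / 2.0, eos)
    k3 = rhs(a + da / 2.0, y + da * k2 / 2.0, eos)
    k4 = rhs(a + da, y + da * k3, eos)
    return y + da * (k1 + 2.0 * k2 + 2.0 * k3 + k4) / 6.0

def step_size(y, dyda, f=0.02):
    # Eq. (7): cap the step by the radial and baric scale heights.
    r, _, _, p = y
    return f / (dyda[0] / r - dyda[3] / p)

# inside the outward integration:
#   dyda = rhs(a, y, eos); da = step_size(y, dyda); y = rk4_step(a, y, da, eos)
```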
The mass contained in the crust above a given isobar is, to lowest order (from expanding eq. [5]; Lorenz et al. 1993),
$$\mathrm{\Delta }M=m_n\mathrm{\Delta }a\approx \frac{p}{g}4\pi R^2=0.05M_{\odot }\left(\frac{p}{\mathrm{MeV}\mathrm{fm}^{-3}}\right)\left(\frac{R}{10\mathrm{km}}\right)^2\left(\frac{2\times 10^{14}\mathrm{cm}\mathrm{s}^{-2}}{g}\right),$$ (8)
where $`g=GM(1+z)/R^2`$ is the gravitational acceleration and $`(1+z)=(1-2GM/Rc^2)^{-1/2}`$ is the surface redshift. Table 1 lists $`\mathrm{\Delta }M`$, $`g`$, and $`1+z`$ for the two masses and two EOSs considered in this paper.

## 3 Thermal Structure

With a hydrostatic structure specified, the luminosity $`L`$ and temperature $`T`$ are found by solving the entropy and flux equations (Thorne 1977),
$$e^{-2\mathrm{\Phi }/c^2}\frac{\partial }{\partial r}\left(Le^{2\mathrm{\Phi }/c^2}\right)-4\pi r^2n\left(ϵ_N-ϵ_\nu \right)(1-2Gm/rc^2)^{-1/2}=0$$ (9)
$$e^{-\mathrm{\Phi }/c^2}K\frac{\partial }{\partial r}\left(Te^{\mathrm{\Phi }/c^2}\right)+\frac{L}{4\pi r^2}(1-2Gm/rc^2)^{-1/2}=0.$$ (10)
Here $`ϵ_N`$ and $`ϵ_\nu `$ are the nuclear heating and neutrino emissivity per baryon, and $`K`$ is the thermal conductivity. I neglect in equation (9) terms arising from compressional heating, as they are of order $`T\mathrm{\Delta }s(\dot{M}/M)`$ (Fujimoto & Sugimoto 1982), $`s`$ being the specific entropy, and are negligible throughout the degenerate crust and core (Brown & Bildsten 1998). The physics of the problem is contained in the heating $`ϵ_N`$, neutrino cooling $`ϵ_\nu `$, and the conductivity $`K`$; the following sections discuss each in turn.

### 3.1 Nuclear heating and neutrino cooling

As mentioned in § 2.1, the crust electron captures reduce the charge of the nuclei enough to trigger pycnonuclear reactions. The rate of pycnonuclear reactions is governed by the supply of low-$`Z`$ nuclei, which is in turn determined by the rate of the preceding electron capture. As a result, even though the pycnonuclear reactions are better described at typical crust temperatures as strongly screened fusion reactions (Salpeter & van Horn 1969), they are insensitive to temperature and hence not susceptible to a thermal instability. To find the overall thermal profile, I do not need to resolve the individual capture layers. Rather, I distribute the reaction heat per baryon, $`E_N=1\mathrm{MeV}`$, over a pressure interval $`\mathrm{\Delta }p`$ between $`8.7\times 10^{-4}\mathrm{MeV}\mathrm{fm}^{-3}`$ and $`3.4\times 10^{-2}\mathrm{MeV}\mathrm{fm}^{-3}`$ (this covers the region where the pycnonuclear reactions occur). The total nuclear luminosity is $`L_N=\dot{M}e^{\mathrm{\Phi }/c^2}E_N/m_n\approx L_A/200`$, where $`L_A=\dot{M}c^2z/(1+z)\approx GM\dot{M}/R`$ is the accretion luminosity (see § 3.4) and $`\dot{M}e^{\mathrm{\Phi }/c^2}`$ is the accretion rate as measured in the crust. The heating term in equation (9) is therefore
$$4\pi r^2nϵ_N=\left(\frac{E_N}{m_n}\dot{M}e^{\mathrm{\Phi }/c^2}\right)\left(\frac{1}{\mathrm{\Delta }p}\frac{\partial p}{\partial r}\right),$$ (11)
where $`\partial p/\partial r`$ is the Jacobian. The cooling terms in equation (9) are evaluated from various fits to microscopic calculations. Throughout much of the crust, the dominant neutrino emissivity is neutrino pair bremsstrahlung, $`e^{-}+(A,Z)\rightarrow e^{-}+(A,Z)+\nu \overline{\nu }`$ (Maxwell 1979; Haensel et al. 1996; Itoh et al. 1996).
Where the ions are crystallized, the bremsstrahlung rates are exponentially suppressed because the separation between electron energy bands is of order $`1\mathrm{MeV}\gg k_\mathrm{B}T`$ (Pethick & Thorsson 1994, 1997). I use the fits of Haensel et al. (1996) for the emissivity where the ions are liquefied and the fits of Yakovlev & Kaminker (1996), which include this suppression, where the ions are crystallized. If the crust is very impure (see the discussion in § 3.3), then the crust bremsstrahlung will be dominated by electron-impurity scattering (Pethick & Thorsson 1997). For $`T\lesssim 10^9\mathrm{K}`$ and densities where $`\mathrm{\hbar }\omega _{pe}/k_\mathrm{B}T\gtrsim 1`$, $`\mathrm{\hbar }\omega _{pe}\approx 0.056(E_{\mathrm{F},\mathrm{e}}/1\mathrm{MeV})\mathrm{MeV}`$ being the electron plasma frequency, the plasma neutrino process (Schinder et al. 1987; Itoh et al. 1996) becomes important. This paper assumes a standard core neutrino emissivity, for which modified Urca processes (Friman & Maxwell 1979; Yakovlev & Levenfish 1995) dominate. The phase space available for scattering is strongly restricted if the nucleons are superfluid, and this reduces the emissivity roughly as $`\mathrm{exp}(-T_c/T)`$ (Yakovlev & Levenfish 1995), where $`T_c`$ is the superfluid transition temperature. In general, both the neutrons and protons must be superfluid to substantially reduce the modified Urca neutrino luminosity. The proton modified Urca branch ($`p+p\rightarrow p+n+e^++\nu _e`$) is nearly as efficient as the neutron branch (Yakovlev & Levenfish 1995), and so a slight increase in temperature is sufficient to compensate for the suppression of just one of the modified Urca branches.

### 3.2 The superfluid transition temperatures

In the core, both the protons and neutrons are expected to be superfluid over some range of densities (Baym, Pethick, & Pines 1969; Hoffberg et al. 1970; Takatsuka & Tamagaki 1993; Amundsen & Østgaard 1985b; Amundsen & Østgaard 1985a; Elgarøy et al. 1996). At lower densities, the neutrons pair in a singlet ($`{}^{1}S_{0}`$) state, but at higher densities the repulsive core of the interaction forces the neutrons to pair in a triplet ($`{}^{3}P_{2}`$) state. The protons in the core are expected to be in a $`{}^{1}S_{0}`$ state. There is at present little agreement on the range of densities for which the protons and neutrons are superfluid and on their transition temperatures $`T_c`$ (for a review, see Pethick et al. 1995). The early calculation of Hoffberg et al. (1970) for pure neutron matter found peak gap energies of $`1.6\mathrm{MeV}`$ (singlet) and $`5\mathrm{MeV}`$ (triplet). They also found that the transition temperature remained high over a large range of densities, implying that the entire core would be superfluid. More recent calculations of the neutron triplet pairing (Takatsuka & Tamagaki 1993; Amundsen & Østgaard 1985b) find a lower transition temperature, $`k_\mathrm{B}T_c^{\mathrm{max}}\approx 0.1\mathrm{MeV}`$. In addition, the range of densities for which pairing occurs is restricted. Elgarøy et al. (1996), using a meson-exchange model for $`\beta `$-stable matter, found that the maximum gap energy was $`0.018\mathrm{MeV}`$ and that the range of densities was restricted to $`n\lesssim 0.13\mathrm{fm}^{-3}`$, so that almost all of the core would be normal. In all cases the proton pairing gaps are somewhat larger, with $`k_\mathrm{B}T_c^{\mathrm{max}}\lesssim 1.0\mathrm{MeV}`$, and extend to densities several times the saturation density, $`n_s=0.16\mathrm{fm}^{-3}`$.
Although none of the published microscopic calculations of the critical temperature $`T_c`$ has presented a convenient fitting formula in terms of density, the critical temperatures are roughly quadratic functions of the Fermi wavevector $`k_{\{np\}}=(3\pi ^2n_{\{np\}})^{1/3}`$. I therefore use
$$T_c(k)=T_{c0}\left[1-\frac{(k-k_0)^2}{(\mathrm{\Delta }_k/2)^2}\right]$$ (12)
as the functional form of $`T_c`$ for the proton $`{}^{1}S_{0}`$, neutron $`{}^{1}S_{0}`$, and neutron $`{}^{3}P_{2}`$ states. The parameters $`T_{c0}`$, $`k_0`$, and $`\mathrm{\Delta }_k`$ are chosen (see Table 2) to approximate the transition temperature of Amundsen & Østgaard (1985b) for the neutron $`{}^{3}P_{2}`$ state and the transition temperature of Amundsen & Østgaard (1985a) for the proton and neutron singlet states. This choice of $`T_c`$ reflects the calculations of Takatsuka & Tamagaki (1993) as well. Figure 3 displays $`T_c`$ for the proton $`{}^{1}S_{0}`$ (*solid lines*) and neutron $`{}^{3}P_{2}`$ (*dashed lines*) pairing as a function of density, for both equations of state. While the neutron critical temperatures are roughly identical, the proton critical temperature for AV18+$`\delta `$v+UIX\* vanishes at lower $`n`$. This cutoff arises because AV18+$`\delta `$v+UIX\* has a higher proton fraction, at a given $`n`$, than AV14+UVII. For each EOS, I show (*arrows*) the central densities of neutron stars of gravitational masses $`M=1.4M_{\odot }`$ and $`M=1.8M_{\odot }`$. At core temperatures $`\lesssim 5\times 10^8\mathrm{K}`$, each of the four EOS/mass combinations has normal protons, neutrons, or both in some part of the core. As a result, the modified Urca processes still play an important role in the neutron star’s thermal balance<sup>2</sup><sup>2</sup>2Recently, neutrino emission from the formation and destruction of Cooper pairs has received renewed interest (see Yakovlev, Kaminker, & Levenfish 1999, and references therein). For temperatures near the superfluid transition temperature, the neutrino emissivity is *enhanced* over that of the modified Urca processes. I have not included this emissivity here; for an impure crust, this omission is not critical (see § 4.2)..

### 3.3 Heat transport

Throughout the crust, relativistic electrons transport the heat. In the relaxation-time approximation, the conductivity is (e.g., Ziman 1972)
$$K=\frac{\pi ^2}{3}k_\mathrm{B}\frac{k_\mathrm{B}Tn_e}{m_e^{*}}\tau .$$ (13)
Here $`m_e^{*}=E_{\mathrm{F},\mathrm{e}}/c^2`$ is the effective electron mass, and $`\tau `$ is the relaxation time. Where the ions are crystallized, $`1/\tau =1/\tau _{ee}+1/\tau _{eQ}+1/\tau _{ep}`$, and where they are liquefied, $`1/\tau =1/\tau _{ee}+1/\tau _{ei}`$. In these formulae, $`\tau _{ee}`$, $`\tau _{eQ}`$, $`\tau _{ep}`$, and $`\tau _{ei}`$ are respectively the relaxation times for electron-electron (Urpin & Yakovlev 1980; Potekhin, Chabrier, & Yakovlev 1997), electron-impurity (Itoh & Kohyama 1993), electron-phonon (Baiko & Yakovlev 1995), and electron-ion (Yakovlev & Urpin 1980) scattering. Electron-electron scattering is typically negligible over much of the crust because the strong degeneracy of the electrons restricts the available phase space.
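Backing up for a moment to the superfluid model: eq. (12) translates directly into code. Table 2 is not reproduced in this excerpt, so the parameter values below are placeholders with roughly the right scales (cf. § 3.2); only the functional form is taken from the text.

```python
import numpy as np

def t_crit(k, Tc0, k0, dk):
    # Eq. (12): inverted parabola in the Fermi wavevector k [fm^-1],
    # clipped to zero outside the pairing window.
    tc = Tc0 * (1.0 - (k - k0) ** 2 / (dk / 2.0) ** 2)
    return np.maximum(tc, 0.0)

# Hypothetical neutron 3P2 parameters (placeholders, NOT Table 2 values):
TC0, K0, DK = 8e9, 2.0, 2.0        # peak Tc [K], center and width [fm^-1]

n_n = np.array([0.05, 0.08, 0.16])             # neutron densities [fm^-3]
k_n = (3.0 * np.pi ** 2 * n_n) ** (1.0 / 3.0)  # Fermi wavevectors [fm^-1]
print(t_crit(k_n, TC0, K0, DK))                # Tc [K] at each density
```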
The electron-impurity relaxation time is (Yakovlev & Urpin 1980)
$$\tau _{eQ}=\frac{p_\mathrm{F}^2v_\mathrm{F}}{4\pi Qe^4n_N}\mathrm{\Lambda }_{eQ}^{-1}.$$ (14)
Here $`p_\mathrm{F}`$ and $`v_\mathrm{F}`$ are the momentum and velocity of an electron at the Fermi surface, and $`\mathrm{\Lambda }_{eQ}\approx 2`$ (Yakovlev & Urpin 1980) is the logarithmic Coulomb factor. The concentration of impurities enters through $`Q`$, which for a large number of ion species is (Itoh & Kohyama 1993)
$$Q=\frac{1}{n_N}\underset{i}{\sum }n_i\left(Z_i-\langle Z\rangle \right)^2.$$ (15)
Here $`n_i`$ and $`Z_i`$ are the density and charge number of the $`i`$th species, $`n_N=\sum _in_i`$ is the total ionic density, and $`\langle Z\rangle =n_N^{-1}\sum _in_iZ_i`$ is the mean ionic charge number. There are several contributions to the impurities in the crust. First, within each electron capture layer there are at least two species present. The electron capture from an even-even to an odd-odd nucleus is immediately followed by a second electron capture to a lower-energy even-even nucleus (Haensel & Zdunik 1990b). Within each capture layer the impurity parameter is then $`Q=4n_Zn_{Z-2}/(n_Z+n_{Z-2})^2\leq 1`$, with $`n_Z`$ and $`n_{Z-2}`$ denoting the densities of the two species. Although Haensel & Zdunik (1990a, b) treated the electron captures as sharp transitions in the crust, in actuality the layers have a finite thickness set by competition between the flow timescale and the (weak) electron capture timescale (Bisnovatyi-Kogan & Chechetkin 1979; Blaes et al. 1990; Bildsten & Cumming 1998). The zero-temperature electron capture rate is proportional to $`(E_{\mathrm{F},\mathrm{e}}-\delta )^3`$, where $`\delta `$ is the reaction threshold. Where degenerate electrons supply the pressure, $`E_{\mathrm{F},\mathrm{e}}`$ must increase with depth. The layers are thin in $`E_{\mathrm{F},\mathrm{e}}`$, and because $`p\propto E_{\mathrm{F},\mathrm{e}}^4`$, the layers are geometrically thin as well. For densities greater than neutron drip, however, the electrons no longer support the crust, and $`E_{\mathrm{F},\mathrm{e}}`$ need not increase with depth. In fact, $`E_{\mathrm{F},\mathrm{e}}`$ is actually less following an electron capture layer if the interface between layers of different composition is treated as an infinitely thin plane. The layers, although thin with respect to $`E_{\mathrm{F},\mathrm{e}}`$, are then geometrically thick. In actuality, thermal broadening of the electron Fermi surface causes many of the captures to occur pre-threshold, and the capture layers are thickened to nearly the width between layers (Ushomirsky et al. 1999). Should the capture layers overlap, then $`Q`$ in the mixed layer can become larger than unity. The impurities manufactured within the capture layers are probably a small perturbation compared to those already present in the mixture entering the top of the crust. Schatz et al. (1999) found that $`Q\sim 100`$ immediately following the end of stable hydrogen burning. An accurate assessment of $`Q`$, throughout the crust, requires evolving the composition of an accreted fluid element on its journey through the crust. This task is beyond the scope of this initial survey, and I instead set upper and lower bounds on the conductivity. The upper bound to the conductivity is that of a pure crystal (electron-phonon scattering).
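Equation (15) and the two-species limit quoted above are easy to verify numerically; in the sketch below the final mixture is a hypothetical rp-process ash chosen for illustration, not a composition taken from Schatz et al. (1999).

```python
import numpy as np

def impurity_Q(n_i, Z_i):
    # Eq. (15): Q is the variance of the ionic charge over the mixture.
    n_i, Z_i = np.asarray(n_i, float), np.asarray(Z_i, float)
    z_mean = np.sum(n_i * Z_i) / np.sum(n_i)
    return np.sum(n_i * (Z_i - z_mean) ** 2) / np.sum(n_i)

# Two-species capture layer: Q = 4 n_Z n_{Z-2} / (n_Z + n_{Z-2})^2 <= 1.
print(impurity_Q([0.5, 0.5], [34, 32]))   # 50/50 mixture -> 1.0 (maximum)
print(impurity_Q([0.9, 0.1], [34, 32]))   # lopsided mixture -> 0.36

# A motley multi-species mixture easily gives Q >> 1:
print(impurity_Q([0.2, 0.3, 0.3, 0.2], [26, 30, 34, 40]))   # -> ~22
```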
To set the lower bound, first note that electron-impurity scattering dominates the conductivity wherever $`\tau _{eQ}<\tau _{ep}`$, with $`\tau _{ep}`$ being the electron-phonon relaxation time (Baiko & Yakovlev 1995) $$\tau _{ep}=\frac{\hbar }{\alpha k_\mathrm{B}T}\mathrm{\Lambda }_{ep}^{-1}.$$ (16) Here $`\alpha `$ is the fine structure constant, and $`\mathrm{\Lambda }_{ep}\approx 13`$ comes from integrating over the phonon spectrum. Equations (14) and (16) imply that for $`Q\gtrsim 0.66(30\mathrm{MeV}/E_{\mathrm{F},\mathrm{e}})(Z/26)(k_\mathrm{B}T/0.05\mathrm{MeV})`$ electron-impurity scattering determines the thermal conductivity in the crust. If the reactions in the crust do not significantly reduce $`Q`$ from its large value at the base of the hydrogen/helium burning shell, then *the heat transport in the crust is controlled by electron-impurity scattering rather than electron-phonon scattering.* If $`Q`$ is very large ($`\sim Z^2`$), then the impurity relaxation time is roughly that of electron-ion scattering for a pure crystal (Yakovlev & Urpin 1980), $$\tau _{ei}=\frac{p_\mathrm{F}^2v_\mathrm{F}}{4\pi Z^2e^4n_N}\mathrm{\Lambda }_{ei}^{-1},$$ (17) with $`\mathrm{\Lambda }_{ei}=\mathrm{ln}[(2\pi Z/3)^{1/3}\sqrt{1.5+3/\mathrm{\Gamma }}]-1`$. Basically, the phonon spectrum is extremely disordered in this case. I therefore set a lower limit to the conductivity by using electron-ion scattering, i.e., by treating the ions as if they were liquefied. For consistency, I also use the liquid-state neutrino bremsstrahlung emissivity (Haensel et al. 1996) in conjunction with the electron-ion conductivity. The impurities in the crust reduce the conductivity but increase the neutrino emissivity. In the core, heat is mostly carried by electrons, with neutrons contributing if they are normal (Flowers & Itoh 1979). I neglect here the neutron conductivity. This is a good approximation, as the core is practically isothermal (see § 4). In evaluating the electron-proton scattering terms, I used an effective proton mass $`m_p^{\ast }=0.7m_p`$. The proton superfluidity both reduces the screening (increases the scattering) and reduces the proton scattering phase space (suppresses the scattering); I take these factors into account using the fits of Gnedin & Yakovlev (1995). ### 3.4 Boundary conditions and method of solution The first boundary condition is simply $`L|_{r=0}=0`$. For a fully self-consistent solution, the correct second boundary condition is a relation $`L(T)|_{r=R}`$, usually obtained from a separate photospheric calculation. This is unnecessary, however, when the hydrogen and helium burn steadily. The large energy release from this burning determines the temperature in the outer atmosphere, so that the temperature at the base of the hydrogen/helium burning shell is a function only of $`\dot{M}`$, $`M`$, and $`R`$ and may be calculated independently. In § 4, I show that the luminosity flowing outwards from the crust is in fact much smaller than that generated by the hydrogen/helium burning. The second boundary condition, then, is $`T|_{r=R}=T_{\ast }(\dot{m})`$, where I take $`T_{\ast }`$ from Schatz et al. (1999). Here $`\dot{m}`$ is the accretion rate per unit area; the fiducial rate used by Schatz et al. (1999) is the Eddington rate appropriate for a Newtonian star of $`M=1.4M_\odot `$ and $`R=10\mathrm{km}`$ accreting a solar composition plasma.
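The threshold quoted after eq. (16) is easy to tabulate; this sketch simply encodes the quoted scaling, with the fiducial normalizations ($`E_{\mathrm{F},\mathrm{e}}=30`$ MeV, $`Z=26`$, $`k_\mathrm{B}T=0.05`$ MeV) taken from the text.

```python
def q_threshold(e_f_mev=30.0, z=26.0, kt_mev=0.05):
    """Impurity parameter above which electron-impurity scattering controls
    the thermal conductivity (scaling quoted after eq. 16)."""
    return 0.66 * (30.0 / e_f_mev) * (z / 26.0) * (kt_mev / 0.05)

print(q_threshold())                           # 0.66 at the fiducial point
print(q_threshold(e_f_mev=60.0, kt_mev=0.02))  # deeper and cooler: lower threshold
```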
Numerically, this rate is $`\dot{m}_E=8.8\times 10^4\mathrm{g}\mathrm{cm}^{-2}\mathrm{s}^{-1}`$ and is an excellent approximation to the lowest local accretion rate at which the hydrogen/helium burning is stable (see, e.g., Bildsten 1998). At $`\dot{m}=\dot{m}_E`$, the temperature at the base of the hydrogen/helium burning shell is $`T_{\ast }=5\times 10^8\mathrm{K}`$. Although the Newtonian surface gravity used by Schatz et al. (1999) is less than the values used here, $`T_{\ast }`$ is relatively insensitive to $`g`$ ($`T_{\ast }\propto g^{1/7}`$; Bildsten 1998), so I do not adjust it for each model. This is not critical, as the thermal profile in the crust is insensitive to $`T_{\ast }`$ (see § 4). The accretion rate enters equations (9) and (10) through $`ϵ_N`$, which is scaled to the accretion luminosity $`L_A`$. Because $`T_{\ast }`$ is only a function of $`\dot{m}`$, I use the same $`\dot{m}`$ for each model; the luminosity from this accretion is then different for each EOS and mass. The global accretion rate, as measured by an observer infinitely far away, is (Ayasli & Joss 1982) $`\dot{M}=4\pi R^2\dot{m}/(1+z)`$, and the luminosity is $$L_A=\frac{z}{1+z}\dot{M}c^2=\frac{z}{(1+z)^2}4\pi R^2\dot{m}c^2.$$ (18) For the fiducial local accretion rate $`\dot{m}_E`$, model M1.4-14 has $`L_A=L_A^{\ast }=1.95\times 10^{38}\mathrm{erg}\mathrm{s}^{-1}`$; M1.8-14 has $`L_A^{\ast }=2.28\times 10^{38}\mathrm{erg}\mathrm{s}^{-1}`$; M1.4-18, $`2.12\times 10^{38}\mathrm{erg}\mathrm{s}^{-1}`$; and M1.8-18, $`2.51\times 10^{38}\mathrm{erg}\mathrm{s}^{-1}`$. To solve the thermal structure, equations (9) and (10) are finite-differenced onto the mesh defined by the integration of equations (2)–(5). An initial guess is constructed by fixing the temperature throughout the star to $`T_{\ast }`$ and integrating equation (9) from $`L|_{r=0}=0`$. This trial guess is then iteratively refined by a relaxation technique (Press et al. 1992). The resolution of the mesh was tested by computing models with step fractions $`f=0.05`$ and $`f=0.02`$. ## 4 Results ### 4.1 The influence of the microphysics on the thermal profile As promised in § 3.3, I survey the uncertainties in the thermal conductivity by solving the thermal structure (eqs. [9] and [10]) with the conductivity alternately set by electron-phonon scattering and electron-ion scattering. Figure 4 shows the thermal profiles for these two cases with different degrees of superfluidity: strong (both neutrons and protons are superfluid with $`T_c\gg T`$ throughout the core; *top panel*), moderate (corresponding to the parameters in Table 2; *middle panel*), and nonexistent ($`T_c=0\mathrm{K}`$ for both neutrons and protons; *bottom panel*). The hydrostatic structure used in this plot is model M1.4-18 (see Table 1). I plot the proper temperature (i.e., the temperature a local thermometer would measure) because it controls the conductivity and neutrino emissivity. The lower conductivity and enhanced bremsstrahlung from electron-ion scattering (*dotted lines*) produce a greater temperature variation throughout the crust than if the conductivity and bremsstrahlung were determined by electron-phonon scattering (*solid lines*). In the inner crust, the electron-phonon conductivity (Fig. 5, *solid line*) is an order of magnitude greater than the electron-ion conductivity (Fig. 5, *dotted line*). As a result, the thermal gradient in a locally pure crust is very small, so that the inner crust temperature is not appreciably different from that of the core.
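Equation (18) is straightforward to evaluate; the sketch below does so for a $`1.4M_\odot `$ star, using $`R=10`$ km as a stand-in (the Table 1 radii are not reproduced in this excerpt), and recovers the quoted $`L_A^{\ast }`$ values to within that approximation.

```python
import math

G, C_LIGHT, M_SUN = 6.674e-8, 2.998e10, 1.989e33   # cgs

def accretion_luminosity(mass_msun, radius_km, mdot_local):
    """L_A of eq. (18); mdot_local is the local accretion rate in g cm^-2 s^-1."""
    r = radius_km * 1.0e5
    z = 1.0 / math.sqrt(1.0 - 2.0 * G * mass_msun * M_SUN / (r * C_LIGHT**2)) - 1.0
    return z / (1.0 + z)**2 * 4.0 * math.pi * r**2 * mdot_local * C_LIGHT**2

# R = 10 km is a placeholder; compare the quoted L_A* = 1.95e38 erg/s for M1.4-14.
print(f"{accretion_luminosity(1.4, 10.0, 8.8e4):.2e} erg/s")   # ~1.8e38
```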
A striking result for this case is that the peak crust temperature depends only weakly on the core temperature. This is a consequence of the large thermal gradient needed to carry the flux in the inner crust when the conductivity is reduced. The relative amounts of neutrino emission from the crust and core are displayed in Figure 6, which shows the luminosity measured by an observer at infinite distance and scaled to $`L_N`$. The lines and panels correspond to those of Figure 4, and for reference the region where nuclear heating occurs (see § 3.1) is denoted with boldfaced lines. A negative luminosity indicates that the heat flow is inward. When $`T_c\gg T`$ throughout the core for both neutrons and protons (*top panel*), all of the neutrinos are emitted from the crust, and the luminosity is zero throughout the core. For moderate superfluidity (Table 2; *middle panel*), some neutrino emission occurs in the crust when the conductivity and crust bremsstrahlung are dominated by electron-ion scattering (*dotted line*), but the bulk of the neutrinos are emitted in the innermost core, where the protons are normal. If there is no superfluidity, then neutrino emission occurs throughout the core (*bottom panel*). The decrease in $`L`$ at pressures $`\lesssim 10^{-3}\mathrm{MeV}\mathrm{fm}^{-3}`$ marks where the plasma neutrino emissivity dominates. The core temperatures of the middle and bottom panels of Figure 4 are similar because the modified Urca emissivity is strongly temperature sensitive, so that the core temperature need adjust only slightly to compensate for a reduced normal core fraction. To demonstrate this further, Figures 7 and 8 show, as functions of pressure, the proper temperatures and luminosities for the four models in Table 1 with conductivity set by electron-ion scattering. The equations of state AV14+UVII (*solid lines*) and AV18+$`\delta `$v+UIX\* (*dotted lines*) are compared for $`M=1.4M_\odot `$ (*top panel*) and $`M=1.8M_\odot `$ (*bottom panel*). For $`M=1.8M_\odot `$ the two equations of state have similar thermal profiles because both have normal protons and neutrons in at least some fraction of the core (see Figure 3). In contrast, for $`M=1.4M_\odot `$, only M1.4-18 (*top panel, dotted line*) has normal protons in its innermost core, and so its core temperature is noticeably cooler than that of M1.4-14 (*top panel, solid line*). As a result, M1.4-14 has stronger neutrino emission from the crust (Figure 8, *top panel, solid line*). As in Figure 6, the nuclear heating region is denoted with boldfaced lines. A generic feature of these solutions is that almost all of the nuclear heat released in the inner crust flows inward and is balanced by neutrino cooling from either the crust or core. Only a small amount ($`\lesssim 5\%`$) of $`L_N`$ is conducted to the surface. As a result, the temperature in the inner crust and core is set by the processes in the inner crust. In particular, *the temperature of the inner crust is nearly independent of the temperature in the hydrogen/helium burning shell*. This is explicitly shown in Figure 9, where I plot the thermal structure for model M1.4-18 but with the outer boundary temperature allowed to vary. The luminosity is fixed at $`L_A^{\ast }=2.12\times 10^{38}\mathrm{erg}\mathrm{s}^{-1}`$. The top panel shows the case of electron-ion conductivity; the bottom, electron-phonon. ### 4.2 Analytical expressions for the crust temperature The numerical results presented above can be easily understood by considering a crude analytical calculation of the thermal structure.
The approach is similar to that of Hernquist & Applegate (1984), with three exceptions: I include heating from the crust reactions; I fit the pressure as a function of density, rather than presuming a degenerate relativistic EOS; and I assume the conductivity is determined by electron-ion, rather than electron-phonon, scattering. In the crust, the surface gravity and radius are roughly constant, and the differential expressions for the radius, pressure and flux can be combined into the plane-parallel Newtonian equation $$g\rho K\frac{dT}{dp}=\frac{L}{4\pi R^2}.$$ (19) To construct this analytical model, I consider the heating and cooling emissivities to be $`\delta `$-functions. Since the flux is then constant between points where these sources or sinks reside, I may integrate equation (19) piecewise between these points, with $`L`$ stepping discontinuously at each point (Brown & Bildsten 1998). Integrating equation (19) requires a relation $`p(\rho )`$. I approximate the mass density $`\rho `$ by $`m_un`$ and fit the pressure with power-laws in both the electron-dominated and neutron-dominated regions, $$p=\{\begin{array}{cc}2.67\times 10^{30}\rho _{12}^{1.27}\mathrm{dyne}\mathrm{cm}^{-2},\hfill & \rho _{12}\le 0.66\text{,}\hfill \\ 4.97\times 10^{29}\rho _{12}^{1.42}\mathrm{dyne}\mathrm{cm}^{-2},\hfill & \rho _{12}\ge 8.9\text{.}\hfill \end{array}$$ (20) Here $`\rho _{12}=\rho /10^{12}\mathrm{g}\mathrm{cm}^{-3}`$, and the error in $`p`$ is less than 9% and 5% for the two density regimes, respectively. (In this section I use cgs units for easy comparison with the astrophysical literature.) The exponent in the electron-dominated regime is less than $`4/3`$ because the fit accounts for the decrease in $`Y_e`$ by electron captures. For densities just above neutron drip ($`0.6607<\rho _{12}<8.913`$), the pressure cannot be fit by a simple power-law in density (see Figure 2). This region is where most of the heat is released, and so for simplicity I presume the region to be isothermal and place the heating $`\delta `$-function inside it. Inserting the expression for the electron-ion scattering frequency (eq. [17]) into the expression for the thermal conductivity (eq. [13]) and expanding, I have $$K\approx \frac{1}{8}\left(\frac{\pi ^5}{9}\right)^{1/3}\alpha ^{-2}\hbar ^{-1}\frac{Y_e^{1/3}}{Z}\left(\frac{\rho }{m_n}\right)^{1/3}k_\mathrm{B}^2T\approx 1.16\times 10^{20}\left(\frac{Y_e^{1/3}}{Z}\right)\rho _{12}^{1/3}T_9\mathrm{erg}\mathrm{cm}^{-1}\mathrm{s}^{-1}\mathrm{K}^{-1}.$$ (21) In this expression, I set the Coulomb logarithm $`\mathrm{\Lambda }_{ei}`$ to unity and use the shorthand $`T_9=T/10^9\mathrm{K}`$. The composition enters through $`Y_e^{1/3}/Z`$; for densities less than neutron drip I use $`Y_e^{1/3}/Z=0.0298`$, appropriate for a pure iron composition, and for densities greater than neutron drip I use $`Y_e^{1/3}/Z=0.0167`$, as follows from the last entry of Table 2 in Haensel & Zdunik (1990b). (For impurity scattering, $`Y_e^{1/3}/Z\to ZY_e^{1/3}/Q`$.)
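The numerical coefficient in eq. (21) can be verified directly from the analytic prefactor; the sketch below does that in cgs units, with the Coulomb logarithm set to unity as in the text.

```python
import math

HBAR, KB, ALPHA, M_N = 1.0546e-27, 1.3807e-16, 1.0 / 137.036, 1.6749e-24  # cgs

def k_ei(rho, t, ye_third_over_z):
    """Electron-ion conductivity of eq. (21) with Coulomb logarithm = 1 (cgs)."""
    pref = 0.125 * (math.pi**5 / 9.0)**(1.0 / 3.0) / (ALPHA**2 * HBAR)
    return pref * ye_third_over_z * (rho / M_N)**(1.0 / 3.0) * KB**2 * t

# With Ye^{1/3}/Z = 1, rho_12 = 1, and T_9 = 1 this should return ~1.16e20:
print(f"{k_ei(1.0e12, 1.0e9, 1.0):.3e} erg cm^-1 s^-1 K^-1")
```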
Using equations (20) and (21), I integrate equation (19) from the top of the crust to neutron drip, $`10^{-5}<\rho _{12}<0.66`$, and over the inner crust to the core, $`8.913<\rho _{12}<166`$, to obtain $$T_9^2(\rho _{12})=T_{\mathrm{ND},9}^2-1.30L_{o,35}R_{10}^{-2}g_{14.3}^{-1}\left(\rho _{12}^{-0.060}-1.03\right)$$ (22) for $`\rho _{12}\le 0.66`$ and $$T_9^2(\rho _{12})=T_{\mathrm{ND},9}^2+0.33L_{i,35}R_{10}^{-2}g_{14.3}^{-1}\left(\rho _{12}^{0.087}-1.21\right)$$ (23) for $`\rho _{12}\ge 8.9`$. In these equations $`T_{\mathrm{ND}}`$ is the temperature at neutron drip (presumed constant for $`0.66<\rho _{12}<8.9`$), $`R_{10}=R/10\mathrm{km}`$, $`g_{14.3}=g/10^{14.3}\mathrm{cm}\mathrm{s}^{-2}`$, and $`L_{o,35}`$ and $`L_{i,35}`$ are the luminosities, in units of $`10^{35}\mathrm{erg}\mathrm{s}^{-1}`$, for $`\rho _{12}<0.66`$ and $`\rho _{12}>8.9`$, respectively. Both $`L_o`$ and $`L_i`$ are signed: they are positive if the flux is directed outwards and negative if directed inwards. Notice that the coefficient of $`L_{o,35}`$ in equation (22) is an order of magnitude larger than the coefficient of $`L_{i,35}`$ in equation (23). This disparity reflects that the inner crust requires a much smaller thermal gradient than the outer crust to carry a given flux. To solve for the thermal structure, I also require that the luminosity flowing away from the crust heat source is $$L_N=L_o-L_i,$$ (24) and that the core neutrino luminosity balances the heat conducted into the core, $$L_i+L_\nu (T_{\mathrm{core}})=0.$$ (25) Evaluating equation (22) at $`\rho _{12}=10^{-5}`$ and equation (23) at $`\rho _{12}=166`$, and using equations (24) and (25) to replace $`L_o`$ and $`L_i`$ with $`L_N`$ and $`L_\nu `$, I obtain an equation for the core temperature, $$T_{\mathrm{core},9}^2=T_{\ast ,9}^2+R_{10}^{-2}g_{14.3}^{-1}\left[1.26L_{N,35}-1.38L_{\nu ,35}(T_{\mathrm{core},9})\right].$$ (26) Here $`T_{\ast }=T|_{\rho _{12}=10^{-5}}`$. Solving equation (26) for a modified Urca luminosity $`L_{\nu ,35}\approx 5\times 10^4T_{\mathrm{core},9}^8`$ and $`L_N\approx 1.07\times 10^{36}\mathrm{erg}\mathrm{s}^{-1}(\dot{m}/\dot{m}_E)`$ gives $`T_{\mathrm{core},9}=0.34`$ and $`L_{\nu ,35}=0.92L_{N,35}`$, i.e., 92% of the heat generated in the crust flows into the core. This compares reasonably well with the numerical calculation without core superfluidity (Figures 4 and 6, *bottom panels*). In that case, the proper temperature at the crust bottom is $`3.1\times 10^8\mathrm{K}`$, the luminosity flowing out the top of the crust is $`0.04L_N`$, and the luminosity flowing into the core is $`0.8L_N`$. Neutrino emission from the crust balances the remainder of $`L_N`$. Substituting $`T_{\mathrm{core},9}`$ and $`L_{\nu ,35}`$ into equation (23), I find that $`T_{\mathrm{ND},9}=1.1`$, which is an overestimation of the maximum crust temperature. This is a consequence of the “two-zone” treatment, which puts all of the neutrino cooling in the core. Still, the qualitative features of the numerical solutions are reproduced. In equation (23), the increase in temperature from core to neutron drip (second term, right-hand side) is much larger than $`T_{\mathrm{core},9}^2`$. As a result, changing $`T_{\mathrm{core},9}`$ has only a small effect on $`T_{\mathrm{ND},9}`$. Even if the direct Urca process were to operate and cool the core to $`T_{\mathrm{core},9}\ll 1`$, the temperature around neutron drip will remain high.
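The two-zone estimate reduces to a one-line root-finding problem; a minimal sketch follows, using $`T_{\ast ,9}=0.5`$ from the boundary condition above and fiducial $`R_{10}=g_{14.3}=1`$. It reproduces the quoted $`T_{\mathrm{core},9}\approx 0.34`$ and $`L_\nu \approx 0.9L_N`$.

```python
from scipy.optimize import brentq

def core_temperature(l_n35, t_star9=0.5, r10=1.0, g143=1.0):
    """Solve eq. (26) with the modified Urca fit L_nu,35 = 5e4 * T_core,9^8."""
    f = lambda t9: (t9**2 - t_star9**2
                    - (1.26 * l_n35 - 1.38 * 5.0e4 * t9**8) / (r10**2 * g143))
    return brentq(f, 0.05, 2.0)

t9 = core_temperature(l_n35=10.7)     # L_N = 1.07e36 erg/s at mdot = mdot_E
print(f"T_core,9 = {t9:.2f},  L_nu/L_N = {5.0e4 * t9**8 / 10.7:.2f}")
```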
*At high accretion rates, the temperature around neutron drip, for a very impure crust, is primarily determined by the ability of the inner crust to carry the nuclear luminosity inward and not so much by the efficiency of core neutrino cooling.* ### 4.3 Accretion at higher and lower rates As the crust temperature increases, crust neutrino bremsstrahlung and the plasma neutrino process become increasingly important. At the higher accretion rate, the brighter crust neutrino luminosity balances the nuclear heating “on the spot.” Figure 10 compares proper temperature (*top panel*) and scaled luminosity (*bottom panel*), as measured by an observer at infinite distance, for model M1.4-18 accreting at $`\dot{m}_E`$ ($`L_A=L_A^{\ast }=2.12\times 10^{38}\mathrm{erg}\mathrm{s}^{-1}`$; *solid line*) and $`5\dot{m}_E`$ ($`L_A=5L_A^{\ast }=1.06\times 10^{39}\mathrm{erg}\mathrm{s}^{-1}`$; *dotted line*). The conductivity in both cases is set by electron-ion scattering. As the crust neutrino cooling increases, a smaller fraction of $`L_N`$ flows outward from the top of the crust. At lower accretion rates, the change in temperature over the inner crust becomes smaller relative to the core temperature. The crust becomes more nearly isothermal and hence more sensitive to the temperatures at its boundaries (cf. Miralda-Escudé et al. 1990; Zdunik et al. 1992). From equation (23), the temperature increase over the inner crust is $`<0.5T_{\mathrm{core}}`$ for $`L_A<0.06L_A^{\ast }`$, assuming that $`L_i=-L_N`$. To demonstrate this, Figure 11 displays, for an accretion luminosity $`L_A=0.01L_A^{\ast }=2.12\times 10^{36}\mathrm{erg}\mathrm{s}^{-1}`$, the proper temperature and luminosity. The hydrostatic structure is M1.4-18, the same as in Figure 9. The top panel is for a conductivity set by electron-ion scattering; the bottom panel is for a conductivity set by electron-phonon scattering. Solutions for several $`T_{\ast }`$ are shown; the range of values is reduced from those used in Figure 9 by $`(\dot{m}/\dot{m}_E)^{2/7}=0.01^{2/7}`$, which is roughly how the temperature at the base of a hydrogen/helium burning shell scales with accretion rate (Schatz et al. 1999). Of course, the hydrogen and helium ignition is unstable in an envelope this cold (see Bildsten 1998, and references therein), and so $`T_{\ast }`$ is determined by the compression of matter in the atmosphere and by the flux flowing out the top of the crust. As $`T_{\ast }`$ is reduced, more and more of the heat generated in the crust flows outwards rather than into the core. As found by Zdunik et al. (1992), an enhanced core neutrino emissivity will drastically lower the crust temperature for low accretion rates. To illustrate how the crust temperature changes with the temperature in the hydrogen/helium burning region ($`T_{\ast }`$), I compute the derivative $`dT_{\mathrm{crust}}/dT_{\ast }`$, where $`T_{\mathrm{crust}}`$ is the temperature at the centroid of the heat-producing region, $`p=0.017\mathrm{MeV}\mathrm{fm}^{-3}`$. Figure 12 displays $`dT_{\mathrm{crust}}/dT_{\ast }`$ as a function of $`T_{\ast }`$ for five different accretion rates: $`L_A/L_A^{\ast }=0.01`$ (*hollow triangles*), 0.03 (*filled triangles*), 0.1 (*hollow squares*), 0.3 (*filled squares*), and 1.0 (*asterisks*). When the conductivity is low (electron-ion scattering; *top panel*), $`T_{\mathrm{crust}}`$ is generally less sensitive to $`T_{\ast }`$ than when electron-phonon scattering sets the heat transport (*bottom panel*).
The derivative (at a given accretion rate) is largest when $`T_{\ast }=T_{\mathrm{crust}}`$; this peak is evident in the top panel for $`L_A/L_A^{\ast }=0.01`$ and in the bottom panel for $`L_A/L_A^{\ast }=0.03`$. The rapid rise of $`dT_{\mathrm{crust}}/dT_{\ast }`$ in the bottom panel for $`L_A/L_A^{\ast }=0.01`$ is because the neutrino cooling in the crust and core goes to zero, so that all of the heat generated in the crust flows outwards (cf. Figure 11, *bottom panel*). In addition, the crust in the entire region considered is also crystalline, which reduces $`dT/dp`$. In general, for $`\dot{M}\gtrsim 10^{-9}M_\odot \mathrm{yr}^{-1}`$, the temperature in the crust becomes independent of the temperature in the atmosphere and upper ocean of the neutron star. ### 4.4 Crust melting An interesting possibility for a rapidly accreting neutron star is that its crust may melt. This happens wherever $`\mathrm{\Gamma }\lesssim 170`$, where the exact value is uncertain (for a review, see Ichimaru 1982). Since I use the formulation of Farouki & Hamaguchi (1993) to calculate the ionic free energy, I also adopt their melting value, $`\mathrm{\Gamma }_M=173`$. The crust reactions reduce $`Z`$ and heat the crust; both of these effects decrease $`\mathrm{\Gamma }`$, as shown in Figure 13 for models M1.4-18 (*top panel*) and M1.8-18 (*bottom panel*), with each model accreting at its fiducial rate. In both cases, the core superfluidity is as described in Table 2. For a low thermal conductivity (*dotted lines*), the crust melts in a series of layers. The jaggedness of $`\mathrm{\Gamma }`$ arises from the pycnonuclear reactions. Each one doubles $`Z`$ and halves $`n_N`$, so that $`\mathrm{\Gamma }`$ increases by $`2^{5/3}`$ and the crust refreezes. Electron captures then decrease $`Z`$ and $`\mathrm{\Gamma }`$ until the crust melts again. As a consequence of this melting and freezing, the crust resembles a layer cake at densities greater than neutron drip. Figure 14 shows the nuclear charge $`Z_M`$ (*thin line*), below which the ions are liquid, along with the $`Z`$ of the nuclei present (*thick lines*) according to Haensel & Zdunik (1990a). The thermal structure is the same as plotted in Figure 13, top panel, dotted line. $`\mathrm{\Gamma }`$ increases with density (or equivalently, $`Z_M`$ decreases), and so the naive expectation is a sharp transition from an ionic ocean to a crust. The Fermi energy also increases with density, however, and the ensuing decrease in $`Z`$ from electron captures offsets the rise in $`\mathrm{\Gamma }`$: both $`Z_M`$ and $`Z`$ decrease together. The melting strongly depends on composition: an increment of $`Z`$ by 2–3 is enough to keep the crust crystalline throughout. Notice from Figure 13 that there is no crustal melting if the conductivity is solely determined by electron-phonon scattering, for models M1.4-18 and M1.8-18 (*solid lines*). For model M1.4-14, the crust melts even if the conductivity is set by electron-phonon scattering. For larger masses (models M1.8-14 and M1.8-18), a high impurity concentration is needed to ensure melting. This is a consequence of the stronger core neutrino cooling holding the crust at a slightly lower temperature (cf. Figures 7 and 8). Of course, when the impurity concentration is high, the single-species calculation of $`\mathrm{\Gamma }`$ is no longer applicable. Calculations for binary-ionic mixtures (e.g., Segretain & Chabrier 1993) show that the melting temperature is lowered below that of the pure phases.
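The refreezing factor quoted above follows directly from the definition of the Coulomb coupling parameter; the sketch below checks it numerically (the charge and density are illustrative values, not taken from Haensel & Zdunik).

```python
import math

def gamma_coulomb(z, n_n, t):
    """Gamma = (Z e)^2 / (a k_B T) with ion-sphere radius a = (3 / 4 pi n_N)^(1/3), cgs."""
    e2 = 2.307e-19                                   # e^2 in erg cm
    a = (3.0 / (4.0 * math.pi * n_n))**(1.0 / 3.0)
    return z**2 * e2 / (a * 1.3807e-16 * t)

# A pycnonuclear reaction doubles Z and halves n_N, so Gamma jumps by 2^(5/3):
g_before = gamma_coulomb(17, 1.0e35, 5.0e8)
g_after = gamma_coulomb(34, 0.5e35, 5.0e8)
print(f"ratio = {g_after / g_before:.3f}  vs  2^(5/3) = {2**(5/3):.3f}")
```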
While the phase diagram of a plasma composed of a large number of species has not been calculated, it is likely that the $`\mathrm{\Gamma }_M`$ of an impure crust is lower than that assumed here. This strengthens the contention that the impure crust of a rapidly accreting neutron star contains melted layers, provided that the $`Z`$ used here (from Haensel & Zdunik 1990a) is roughly the average charge of the nuclei actually present. A self-consistent calculation of the crust composition, and the resulting phase diagram, is required to conclusively determine if layer cake melting actually occurs. ## 5 Summary and concluding remarks There are three main conclusions presented in this work. First, for neutron stars accreting rapidly enough for the accreted hydrogen and helium to burn stably, most of the heat released in the crust flows into the core. As a result, the thermal profile in the inner crust is nearly independent of the temperature at the top of the crust. Second, if the crust lattice is very impure, there is a maximum in temperature at densities greater than neutron drip, where the heating occurs. The peak temperature in the crust in this case is set by the ability of the crust to carry the generated nuclear luminosity inward from the reaction shell and is relatively insensitive to the core temperature. Third, heating the inner crust to temperatures $`\gtrsim 8\times 10^8\mathrm{K}`$ might melt the crust in thin layers where electron captures have reduced the ionic charge. There are several consequences of these results. Because a fluid layer does not support shear stress, the strain in the crust must vanish in these melt layers. This will limit the quadrupole that can be induced by thermal perturbations to the electron capture rate (Bildsten 1998) if these captures occur above the melt layer. In addition, the fluid layers can dissipate rotational energy, either through hydrodynamical or magnetohydrodynamical processes, and thus contribute to balancing the accretion torque acting on the stellar surface. The electrical conductivity of an accreted crust is reduced, both because of crust heating (Urpin & Geppert 1995; Geppert & Urpin 1994) and because of crust impurities (Brown & Bildsten 1998). If the crust is as impure as considered here, the timescale for Ohmic decay over a pressure scaleheight is much less (by a factor of 100) than the flow timescale, for much of the crust. As a result, the inward advection of magnetic flux (Konar & Bhattacharya 1997) is reduced in importance. Thermomagnetic effects, such as current drift (Geppert & Urpin 1994) and the battery effect (e.g., Blandford, Applegate, & Hernquist 1983), will be comparatively more important, however, because of the greater thermal gradient. In recent years, attention has been given to other, more efficient, cooling mechanisms. The direct Urca process can operate if the proton fraction is greater than 0.148 (Lattimer et al. 1991) or if hyperons are present (Prakash et al. 1992). Other exotic mechanisms may be possible, including pion condensates (Umeda et al. 1994), kaon condensates (Brown et al. 1988), or quark matter (Iwamoto 1982). The exotic mechanisms have the same temperature dependence as the direct Urca ($`T^6`$) but are weaker. Although none of the hydrostatic structures considered in this paper has an interior proton fraction large enough to activate the direct Urca, some form of enhanced cooling could operate. However, the crust temperature would still remain high (§ 4.2) if the crust were very impure.
Direct observational consequences of the core neutrino emissivity are unfortunately lacking. It is only in the cooling after accretion halts and the crust thermally relaxes (as in the transients; Brown, Bildsten, & Rutledge 1998) that the mode of core neutrino emissivity can be investigated. This is unlike the case of isolated, cooling neutron stars, for which the core neutrino cooling must be treated correctly. The results of this investigation show that the most vexing impediment to further calculations of the thermal structure of an accreting neutron star, and hence to a better understanding of the issues raised in this section, is the need to calculate the composition throughout the crust for the trajectory in $`(n,T)`$ space followed by an accreted fluid element. It is a pleasure to thank Lars Bildsten, Andrew Cumming, Andrew Melatos, and Greg Ushomirsky for many helpful discussions and for reading drafts of this work. I also thank Chris Pethick for suggesting that the nuclei in the inner crust may remain spherical if the charge is low enough and the referee for helpful comments on the melting of a multi-species crystal. This research was supported by NASA grant NAG5-8658. EFB is supported by a NASA GSRP Graduate Fellowship under grant NGT5-50052.
# The Orbital Period of the Accreting Pulsar GX 1+4 ## 1 Introduction GX 1+4 is a bright Galactic Center accretion-powered pulsar in a low-mass x-ray binary system (LMXB) discovered in the early 1970s (Lewin, Ricker & McClintock 1971). Throughout the 1970s the pulsar exhibited a spin-up behavior with the pulsation period decreasing from 135 s to less than 110 s (Cutler, Dennis & Dolan 1986 – hereafter CDD86 – and references therein), corresponding to a spin-up rate of $`\dot{P}\simeq 2`$ s/year. After experiencing an extended low-intensity state in the early 1980s (Hall & Davelaar 1983; McClintock & Leventhal 1989), GX 1+4 re-emerged in a spin-down state (Makishima et al. 1988; Sakao et al. 1990) with approximately the same $`\dot{P}`$ and has remained in this state ever since, with occasional short-term variations of $`\dot{P}`$. Infrared observations and optical spectroscopy of GX 1+4 established a rare association of a neutron star with an M5 III giant star, V2116 Oph, in a symbiotic binary system (Glass & Feast 1973; Davidsen, Malina & Bowyer 1977; Chakrabarty & Roche 1997). The identification was made secure by a ROSAT accurate position determination (Predehl, Friedrich & Staubert 1995) and by the discovery of optical pulsations in V2116 Oph consistent with the spin period of the neutron star (Jablonski et al. 1997, Pereira et al. 1997). In comparison with the other four known LMXB accretion-powered pulsars (GRO J1744$`-`$28, Her X-1, 4U 1627$`-`$67 and the recently discovered millisecond accreting pulsar SAX J1808.4-3658 – Wijnands & van der Klis 1998), GX 1+4 has a much longer (by a factor of $`\sim 100`$) spin period, and its orbital period, albeit not securely measured until this work, was known to be at least one order of magnitude longer than the periods of the other systems. Quantitative lower limits on the binary period of GX 1+4 were derived by Chakrabarty and Roche (1997), who showed that the binary period must be at least 100 d, and is probably more than 260 d. In 1991, the Burst and Transient Source Experiment (BATSE) on the Compton Gamma Ray Observatory (CGRO) initiated a continuous and nearly uniform monitoring of GX 1+4. The BATSE observations confirmed the spin-down trend with occasional dramatic spin-up/down torque reversal events (Chakrabarty 1996, Chakrabarty et al. 1997, Nelson et al. 1997). Attempts to find the orbital period of GX 1+4 by Doppler shifts of the pulsar pulse timing or optical lines have both been inconclusive so far. For the X-ray timing measurements, the torque-induced frequency variations are much larger than the expected orbital Doppler shifts, and the torque fluctuations have significant power at time scales comparable to the expected binary period (Chakrabarty 1996). In the case of the optical lines, the problem is the long period ($`>`$ 100 days) expected (Davidsen, Malina & Bowyer 1977, Doty, Hoffman & Lewin 1981; Sood et al. 1995). Long-term optical photometry in the R band has shown variations in the light curve with periods of $`\sim 30`$ and $`\sim 110`$ days (Pereira, Braga & Jablonski 1996; Pereira 1998). Using a small number of X-ray measurements carried out during the spin-up phase of GX 1+4 in the 1970s, CDD86 produced an ephemeris for predicting periodic enhancements in the spin-up rate of the neutron star. A possible interpretation for this periodic behavior is that the neutron star and the red giant are in an elliptical orbit with a 304-day period.
In this work we report the results of a comprehensive time-series analysis of the BATSE data on GX 1+4 in an attempt to find the orbital period of the system. We discuss the implications that can be drawn from our results in light of the possible models for this source and show that the elliptical orbit interpretation is probably the correct one. We present a refined version of the ephemeris originally proposed by CDD86. ## 2 Data Analysis and Results The frequency and the pulsed flux data between Julian Day (JD) 2448376.5 and 2451138.5 (i.e., 1991 April 29 to 1998 October 20) used in this work were obtained from Chakrabarty (1996) and from the BATSE public domain data available at http://www.batse.msfc.nasa.gov/data/pulsar/sources. The 20–50 keV pulsed signals are extracted from DISCLA 1.024s channel 1 data. After being weighted according to the aspect angle of each detector to a source on an existing source list, data are treated to remove the background variations, barycentered and epoch-folded over a set of grid points in the vicinity of the expected pulse frequency. Frequency and modeled flux are inspected daily for significant detections and the reports are produced twice a week. 15-day mean values for the fluxes and pulse frequencies of GX 1+4 were calculated for the entire dataset. A dataset of GX 1+4 residual pulsation frequencies was obtained from the frequency history by subtraction of a standard cubic spline function to remove low frequency variations in the spin-down trend. The fitting points are mean frequency values calculated over suitably chosen time intervals. The results of the spline fitting are fairly insensitive to intervals greater than $`\sim 200`$ days between fitting points (we have used $`\mathrm{\Delta }t=215`$ days). The pulsed X-ray flux, frequency history and residual frequencies are shown in Fig. 1 as functions of time. We have carried out a power spectrum analysis to search for periodicities of less than 1000 days in both the residual frequency and the pulsed flux data. A Lomb-Scargle periodogram (Press et al. 1992), suitable for time series with gaps, shows a significant periodic signal at 302.0 days (Fig. 2) in the residual frequency time series. This value is insensitive to any oversampling factor greater than 2 over the Nyquist interval. The power spectrum shows red noise with an approximate power-law index of $`-2`$. In order to estimate the statistical significance of the detection, a series of numerical simulations of the frequency time series with 1-sigma gaussian deviations (using the error bars of the data points) was performed. In the frequency domain of the simulated light curves, we forced the power spectra to have a power-law index of $`-2`$, and transformed the results back to the time domain. We then selected the times in the simulated light curves to match the observed times for the datapoints and subtracted the spline with $`\mathrm{\Delta }t=215`$ days. We finally calculated the Lomb periodogram of the simulated time-series. The simulations show that the use of the 215-d spline, besides providing an effective filter for frequencies below $`2\times 10^{-3}\mathrm{d}^{-1}`$, does not produce power at any specific frequency in the range of interest, i.e., no monochromatic or even QPO signals are produced. By comparing the amplitude of our 302-day peak with the local mean value obtained from the numerical simulations (the peak is a factor of 13.91 higher), we obtain a statistical significance of 99.98% for the detection.
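As an illustration of the method (not the actual BATSE reduction), the sketch below runs a Lomb-Scargle periodogram over a gapped, irregularly sampled toy series containing a 302-day signal and recovers the period.

```python
import numpy as np
from scipy.signal import lombscargle

def periodogram(t, y, periods):
    """Lomb-Scargle power on a grid of trial periods (days)."""
    omega = 2.0 * np.pi / np.asarray(periods)
    return lombscargle(t, y - y.mean(), omega, normalize=True)

# Toy series: a 302-day sinusoid with noise, sampled irregularly over ~7.5 yr.
rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0.0, 2762.0, 180))
y = np.sin(2.0 * np.pi * t / 302.0) + 0.3 * rng.standard_normal(t.size)
periods = np.linspace(100.0, 1000.0, 2000)
print(f"best period: {periods[np.argmax(periodogram(t, y, periods))]:.1f} d")
```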
Epoch folding the data using the 302-day period yields a 1-$`\sigma `$ uncertainty of 1.7 days. In the pulsed flux data, the most interesting feature is a wide structure of low-significance peaks observed in the range 200–500 days, with no significant peak at $`\sim 300`$ days. By analyzing the variation of the period of GX 1+4 during the spin-up phase in the 1970s, CDD86 proposed a 304-day orbital period and an ephemeris to predict the events of enhanced spin-up: $`T=\mathrm{JD}2,444,574.5\pm 304n`$, where $`n`$ is an integer. This ephemeris is based on four events discussed by the authors, whose existence was inferred from ad-hoc assumptions and extrapolations of the observations. The projected enhanced spin-up events derived from that ephemeris for the epochs contained in the BATSE dataset, represented as solid vertical lines in the lower panel of Fig. 1, are in excellent agreement with the BATSE reduced spin-down and spin-up events. The BATSE dataset is obviously significantly more reliable than CDD86’s inasmuch as it is based on 9 well-covered events measured with the same instrument, as opposed to the 4 events discussed in CDD86. The striking agreement of CDD86’s ephemeris with the BATSE observations is very conspicuous and gives very strong support to the claim that the orbital period of the system is indeed $`\sim 304`$ days. Taking integer cycle numbers, with the $`T_0`$ epoch of CDD86 as cycle $`23`$, and performing a linear least-squares fit to the frequency residuals seen in the lower panel of Fig. 1, we find that the following ephemeris can represent the time of occurrence $`T`$ of the maxima in the frequency residuals: $$T=\mathrm{JD}2,448,571.3(\pm 3.2)\pm 303.8(\pm 1.1)n,$$ (1) where $`n`$ is any integer. The events predicted by the above ephemeris are shown as vertical dashed lines in the three panels of Fig. 1. Taking into account a conservative uncertainty estimate of 30 days for the peaks of the BATSE events, the reduced $`\chi ^2`$ of the fit is $`\chi _\text{r}^2=0.61`$. The value of $`303.8\pm 1.1`$ days for the orbital period is consistent with the one obtained through power spectrum analysis performed on the BATSE data, which gives further support to the period determination. ## 3 Discussion In the BATSE era, the long term frequency history of GX 1+4 shown in Fig. 1 (middle panel) exhibits a characteristic spin-down trend with an average rate of $`\sim 1.8`$ s/year. Frequency derivative reversals occur on times preceding the epochs of events labeled # 5, 7 and 9 in the bottom panel. The upper panel of Fig. 1 shows that these events are somewhat correlated with rather intense flares in the pulsed flux. In the 1970s, when the measurements used by CDD86 were carried out, the source was in an extended spin-up state. The scenario proposed by CDD86 to explain the periodic occurrence of enhanced spin-up events was that the system was in an elliptical orbit and the periastron passages would occur when $`\dot{P}`$ is maximum, as expected in standard accretion from a spherically expanding stellar wind. It is widely accepted today, as inferred from GX 1+4’s optical/IR properties (Jablonski et al. 1997; Chakrabarty, van Kerkwijk & Larkin 1998; Chakrabarty et al. 1997; Chakrabarty & Roche 1997), that the system has an accretion disk.
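The refined ephemeris of eq. (1) is trivial to evaluate; the sketch below lists the predicted maxima falling inside the BATSE window, taking the $`+303.8n`$ branch.

```python
def ephemeris_jd(n):
    """Eq. (1): predicted JD of the n-th maximum in the frequency residuals."""
    return 2448571.3 + 303.8 * n

jd_start, jd_end = 2448376.5, 2451138.5          # BATSE window used in the text
events = [ephemeris_jd(n) for n in range(-1, 10)
          if jd_start <= ephemeris_jd(n) <= jd_end]
print(events)
```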
Since the neutron star is currently spinning down, the radius at which the magnetosphere boundary would corotate with the disk, $`r_{\mathrm{co}}=(GMP^2/4\pi ^2)^{1/3}\approx 3.6\times 10^4P_{100\mathrm{s}}^{2/3}`$ km, where $`M`$ is the mass of the neutron star (assumed to be $`\sim 1.4M_\odot `$) and $`P_{100\mathrm{s}}`$ is the spin period in units of 100 seconds, is probably smaller than the magnetosphere radius $`r_M\approx 4.1\times 10^4L_{36}^{-2/7}`$ km, where $`L_{36}`$ is the X-ray luminosity in units of $`10^{36}`$ erg/s (Frank, King & Raine 1992). This value for $`r_M`$ assumes a surface magnetic field of $`\sim 10^{14}`$ G for GX 1+4 (Makishima et al. 1988, White 1988, Chakrabarty et al. 1997, Cui 1997). Since the pulse period is $`\sim 120`$ s and the luminosity is typically $`<10^{37}`$ erg/s, the period is close to the equilibrium value, for which $`r_{\mathrm{co}}\approx r_M`$. This allows spin-down to occur even though accretion continues, the centrifugal barrier not being sufficiently effective (White 1988). Assuming that the elliptical orbit is the correct interpretation for the origin of the modulation, the mass accretion rate (and hence the luminosity) should increase as the neutron star approaches periastron, making $`r_M`$ approach $`r_{\mathrm{co}}`$. As the velocity gradient between the disk material and the material flowing along the magnetic field lines decreases, the spin-down torque gets smaller and the neutron star decelerates at a slower rate. We expect that this mechanism will produce a peak in the frequency residuals close to the periastron epoch, beyond which the neutron star will start to return to a higher spin-down rate. Occasionally, due to the highly variable mass loss rate of the red giant, $`r_{\mathrm{co}}`$ will surpass $`r_M`$ and the neutron star will spin up for a brief period of time during periastron, as observed in the BATSE frequency curve in events 5, 7 and 9. According to this picture, one would expect an increase in X-ray luminosity at periastron. Although this is only marginally indicated in the BATSE pulsed flux light curve, it should be pointed out that total flux data from the All Sky Monitor (ASM) onboard RXTE for the epoch MJD 50088 to 51044 do not correlate significantly with the BATSE pulsed flux, indicating that the pulsed flux may not be a good tracer of the accretion luminosity in this system. Furthermore, the periodic $`\sim 5\mu `$Hz excursions in the residual frequency would lead to very low-significance variations in the X-ray flux measured by the ASM, as we now show. Taking the fiducial torque $`N_0=\dot{M}\sqrt{GM_\mathrm{X}r_{\mathrm{co}}}`$ given by Bildsten et al. (1997), where $`\dot{M}`$ is the accretion rate and $`M_\mathrm{X}`$ is the mass of the neutron star, as an order-of-magnitude estimate (since $`r_{\mathrm{co}}\approx r_\mathrm{M}`$), we can establish a lower limit to the variation in $`\dot{M}`$ ($`\mathrm{\Delta }\dot{M}`$) that produced the residual torque, using the fact that $`\dot{\nu }=N_0/2\pi I`$, where $`I`$ is the moment of inertia of the neutron star (Ravenhall & Pethick 1994). Since the relative variation in flux ($`F`$) scales as the relative variation in luminosity, we get $`\mathrm{\Delta }F/F\sim 0.3`$ for $`L\sim 10^{37}`$ erg/s. The typical ASM GX 1+4 flux is $`1\pm 2`$ count/s in the 2–10 keV band, so the expected variations of $`\sim 0.3`$ counts/s would be very hard to detect, given the available observational coverage.
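The two radii quoted above are simple to reproduce; in this sketch the $`r_M`$ scaling is taken exactly as quoted (it presumes the $`\sim 10^{14}`$ G field), and $`P=120`$ s is used as a representative spin period.

```python
import math

G, M_SUN = 6.674e-8, 1.989e33    # cgs

def r_co_km(p_spin, m_x=1.4):
    """Corotation radius (G M P^2 / 4 pi^2)^(1/3), in km."""
    return (G * m_x * M_SUN * p_spin**2 / (4.0 * math.pi**2))**(1.0 / 3.0) / 1.0e5

def r_m_km(l36):
    """Magnetosphere radius scaling quoted in the text (B ~ 1e14 G assumed)."""
    return 4.1e4 * l36**(-2.0 / 7.0)

print(f"r_co = {r_co_km(120.0):.2e} km,  r_M = {r_m_km(1.0):.2e} km")
```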
This is consistent with the lack of any significant periodic signal in our calculation of the power spectrum of the entire available ASM flux history of GX 1+4 (from MJD 50088 to 51353). In the elliptical orbit interpretation, one would also expect that tidal torques exerted by the red giant envelope would circularize the orbit on a short time scale (Verbunt & Phinney 1995). However, with a period of $`\sim 300`$ days, the red giant radius is probably less than 7% of the binary separation, as shown below. Since the rate scales as $`(R_\mathrm{c}/a)^8`$, where $`R_\mathrm{c}`$ is the red giant radius and $`a`$ is the binary separation, we do not expect the circularization time scale to be too short. Furthermore, Verbunt & Phinney (1995) show that for orbital periods longer than about 200 days, the eccentricities of red giant binaries in open clusters span the full range. An alternative interpretation for the observed modulation would be the presence of oscillation modes in the red giant star. For an M5 giant, persistent radial oscillations with a period of $`\sim 300`$ days are perfectly plausible (Whitelock 1987). In this case, the oscillations could excite a modulation in the mass loss rate through the stellar wind that could produce the modulated torque history. However, the stability of the infrared magnitudes of V2116 Oph (Chakrabarty & Roche 1997) precludes it from being a long-period variable, since these stars undergo regular $`>1`$ mag variations in the infrared (Whitelock 1987). In addition, the secular optical light curve in the $`R`$ band obtained by our group at Laboratório Nacional de Astrofísica (Brazil) from 1991 to date shows no signs of these oscillations (Pereira, Braga & Jablonski 1996; Pereira 1998). It is noteworthy that the amplitude of the residual frequency oscillations in GX 1+4 cannot be attributed to Doppler shifts (Chakrabarty 1996). A firm lower limit for the companion mass is given by the X-ray mass function $`f_\mathrm{X}(M)=(c\mathrm{\Delta }\nu /\nu )^3P_{\mathrm{orb}}/2\pi G`$, which would be equal to $`\sim 210M_\odot `$ for a $`5\mu `$Hz amplitude and a 304-day orbital period. This is clearly too massive for a red giant and indeed for any stellar companion. There is also no evidence of Doppler shifts in the spectral lines of V2116 Oph (Sood et al. 1995; Chakrabarty & Roche 1997), which could be an indication that the inclination of the system is fairly low. The spectral and luminosity classification of V2116 Oph, together with the measured interstellar extinction of $`A_V\approx 5`$, is consistent with a low-mass star ($`M\approx 0.8`$–$`2M_\odot `$) on the first-ascent red giant branch at a distance of 3–6 kpc (Chakrabarty & Roche 1997). The range of radii for such a star is $`50`$–$`110R_\odot `$. The size of the Roche lobe of this object as the companion in the binary system can be estimated by the radius of a sphere with the same volume as the lobe, $$R_L=1.42\times 10^{11}M_\mathrm{X}^{1/3}\frac{(1+q)^{1/3}q^{2/3}}{0.6q^{2/3}+\mathrm{ln}(1+q^{1/3})}P_d\mathrm{cm},$$ (2) where $`q=M_g/M_\mathrm{X}`$ is the mass ratio of the red giant and the neutron star, $`M_\mathrm{X}`$ is in solar mass units and $`P_d`$ is the orbital period in days (Eggleton 1983). Assuming $`M_\mathrm{X}=1.4`$ and $`P_\mathrm{d}=304`$, the range of values obtained for $`R_L`$ is $`546R_\odot `$–$`780R_\odot `$ for GX 1+4, with the binary separation ranging from 1640 to 1890 $`R_\odot `$. Thus, the companion is probably not filling its Roche lobe and the accretion disk forms from the slow, dense stellar wind of the red giant.
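Evaluating eq. (2) over the quoted companion mass range reproduces the Roche lobe radii given above; the sketch keeps the equation exactly as printed.

```python
import math

R_SUN = 6.96e10   # cm

def roche_lobe_rsun(m_giant, m_x=1.4, p_days=304.0):
    """Volume-equivalent Roche lobe radius of the giant from eq. (2), in R_sun."""
    q = m_giant / m_x
    num = (1.0 + q)**(1.0 / 3.0) * q**(2.0 / 3.0)
    den = 0.6 * q**(2.0 / 3.0) + math.log(1.0 + q**(1.0 / 3.0))
    return 1.42e11 * m_x**(1.0 / 3.0) * num / den * p_days / R_SUN

# Companion mass range 0.8-2 M_sun as quoted in the text:
print(f"{roche_lobe_rsun(0.8):.0f}-{roche_lobe_rsun(2.0):.0f} R_sun")  # ~546-780
```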
Another interesting argument leading to a $`\sim 300`$-day orbital period for GX 1+4 comes from the work of van Paradijs & McClintock (1994), according to which the absolute visual magnitudes of low-mass X-ray binary systems seem to correlate linearly with the quantity $`\mathrm{\Sigma }=P_{\mathrm{orb}}^{2/3}\gamma ^{1/2}`$, where $`\gamma =L_\mathrm{X}/L_{\mathrm{Edd}}`$ is the accretion luminosity in units of the Eddington luminosity. Taking the value of $`M_V\approx -4.2`$ for the disk light not contaminated by H$`\alpha `$ obtained by Jablonski & Pereira (1997), we get $`P_{\mathrm{orb}}=270\pm 82`$ days for $`L_\mathrm{X}\approx L_{\mathrm{Edd}}`$, which is fully consistent with our results. It should be noted, however, that this model is based upon the assumption that the optical emission is dominated by reprocessing of X-rays in the accretion disk, which is not clearly the case in GX 1+4. In conclusion, we have shown that the long-sought orbital period of GX 1+4 is very likely to be 304 days, as proposed in 1986 by CDD86 with marginal confidence. A more thorough coverage of the X-ray luminosity of the system, with high sensitivity and spanning several cycles, will be very important to test the elliptical orbit model. We thank Dr. Bob Wilson from NASA Marshall Space Flight Center for kindly providing us with BATSE frequency and flux data on GX 1+4. M. P. is supported by a FAPESP Postdoctoral fellowship at INPE under grant 98/16529-9. J. B. thanks CNPq for support under grant 300689/92-6. F. J. acknowledges support by PRONEX/FINEP under grant 41.96.0908.00. We thank an anonymous referee for very important corrections and suggestions.
# Landau Theory of the Phase Transitions in Half Doped Manganites: Interplay of Magnetic, Charge and Structural Orders ## Abstract The order parameters of the magnetic, charge and structural orders in half-doped manganites are identified. A corresponding Landau theory of the phase transitions is formulated. Many structural and thermodynamical behaviors are accounted for and clarified within the framework. In particular, the theory provides a unified picture for the scenario of the phase transitions and their nature with respect to the variation of the tolerance factor of the manganites. It also accounts for the origin of the incommensurate nature of the orbital order and its subsequently accompanying antiferromagnetic order. The discovery of “colossal” magnetoresistance has stimulated a renaissance of interest in doped rare-earth manganites. Intensive investigation has revealed a diversity of novel phenomena due to the complex interplay among magnetic, charge, orbital and structural orders. A particularly relevant issue is the competition between magnetic and charge orders in half-doped manganites. It poses a great challenge to theorists as to how to deal with the strong correlations in models with magnetic, orbital and lattice degrees of freedom. Here we formulate a Landau theory of phase transitions based on the symmetry of the system in an attempt to understand a variety of sometimes controversial structural and thermodynamical behaviors. Although it may be argued to be only a mean-field theory, which is incorrect at critical points, the structural information, among other things it affords, is robust. And it is the order parameters that exhibit singularities at the critical points. The most prominent charge-ordered (CO) behavior in perovskite manganites concerns those doped at 0.5. These systems fall into several classes depending on the tolerance factor of the resultant structure (see Fig. 1). For La<sub>0.5</sub>Sr<sub>0.5</sub>MnO<sub>3</sub> with small distortions, which we classify as Class I hereafter though no charge order appears (nor does it in Class II below), a paramagnetic (PM) to ferromagnetic (FM) transition occurs at $`T_C\approx 360`$ K. When La is replaced by the smaller ion Nd, $`T_C`$ decreases with the tolerance factor. At 40 percent of Nd or so, an intermediately distorted Class II shows up in which the FM phase transforms at a lower temperature $`T_{AFM}`$ to a metallic A-type antiferromagnetic (AFM) state. As more La is replaced by Nd, a new Class III sets in which displays a CO CE-type AFM state below $`T_{CO}`$. Similar behavior has been reported in La<sub>0.5</sub>Ca<sub>0.5</sub>MnO<sub>3</sub>. Pr<sub>0.5</sub>Sr<sub>0.5</sub>MnO<sub>3</sub> is somewhat special. It was initially reported to be CO, but later only A-type AFM order was found. However, comparing the transport behavior of Pr<sub>0.5</sub>Sr<sub>0.5</sub>MnO<sub>3</sub> with (La<sub>z</sub>Nd<sub>1-z</sub>)<sub>0.5</sub>Sr<sub>0.5</sub>MnO<sub>3</sub> of $`0<z\le 0.4`$, one finds similar behavior; only the latter’s resistivity levels off to a presumed metallic state slightly more slowly after a jump at $`T_{CO}`$ or $`T_{AFM}`$. So the boundary between the CO and A-type AFM states seems to be not so clear-cut: there may be a transition from the CO state to the A-type AFM state. For the most distorted Class IV such as Pr<sub>0.5</sub>Ca<sub>0.5</sub>MnO<sub>3</sub> and Nd<sub>0.5</sub>Ca<sub>0.5</sub>MnO<sub>3</sub>, no FM order appears.
The PM phase changes directly into a CO state below $`T_{CO}`$, and then global AFM ordering shows up at a lower temperature ($`\sim 150`$ K). The CO tendency in Class IV even extends to lower doping, although with a pseudo-CE-type structure due to the excess electrons. The PM to FM transition is continuous. All the other transitions shown in Fig. 1 are of first order with hysteresis. The most striking feature of the CO state is that it can be melted by an external magnetic field, pressure, or electric field, or even by x-ray or light irradiation, into a FM state, indicating the competition between them. The magnetic field required to melt the CO state in Class IV is almost twice as large as that in Class III. In this paper, we shall concentrate on the peculiar phenomena associated with CO. The lattice structure of interest is orthorhombic with a space group $`Pnma`$ (see Fig. 1 inset), which is well characterized and the most common in doped manganites. There is some scatter in the reported structures. This is understandable because of the small distortion of the perovskite structure, which is also sensitive to the preparation conditions. We develop below a Landau theory of the phase transitions in these systems and show that many structural and thermodynamical behaviors are closely correlated within the framework. In particular, besides the magnetic, charge and structural/orbital patterns, primary features are a simple picture that unifies the three classes along with the nature of the transitions involved and the origin of the incommensurability of the orbital order, CE-type AFM order and the melting of the CO state. We start with the magnetic transition. A FM phase transition is associated with a wave vector at $`𝐤=\mathrm{𝟎}`$ or the $`\mathrm{\Gamma }`$ point. On the other hand, the CE-type AFM structure is described by $`𝐤=(00\frac{1}{2})`$ and $`𝐤=(\frac{1}{2}0\frac{1}{2})`$ for Mn ions at positions 1,2 and 3,4, respectively. In La<sub>0.5</sub>Ca<sub>0.5</sub>MnO<sub>3</sub>, the FM to AFM transition is found to accompany the incommensurate (IC) to (nearly) commensurate orbital ordering transition. In Class IV, the CE-type AFM order can appear separately from the charge and structural orders, which produce a natural configuration for the CE-type AFM structure (see below). Thus the AFM order is regarded as originating from the charge and orbital orders. For simplicity, we defer it to the end of the paper. The PM to FM transition is described by an order parameter $`M`$ representing the average magnetization over the system. As pointed out in our previous work, all the irreducible representations (IR’s) at $`𝐤=\mathrm{𝟎}`$ of $`Pnma`$ are one dimensional. So we choose $`M`$ as a scalar representing the component that carries the IR $`\tau ^5`$ responsible for the transition. Although it may couple to other magnetic configurations such as A-type or C-type AFM orders of the same IR, we can eliminate such modes if present and write the free energy simply as $$F_M=\frac{1}{2}a_1M^2+\frac{1}{4}b_1M^4,$$ (1) with $`a_1=a_{10}(T-T_1)`$, and $`a_{10}`$ and $`b_1`$ depending only weakly on the temperature $`T`$. Eq. (1) describes a continuous phase transition at $`T=T_1`$ with $`M=\sqrt{-a_1/b_1}`$ below it.
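Equation (1) already fixes the order parameter below $`T_1`$; the sketch below just evaluates the minimizer, with $`a_{10}=b_1=1`$ in arbitrary units and $`T_1`$ set, purely for illustration, to the Class I $`T_C\approx 360`$ K.

```python
import numpy as np

def magnetization(t, t1=360.0, a10=1.0, b1=1.0):
    """Minimizer of eq. (1): M = sqrt(-a1 / b1) for T < T1, zero otherwise."""
    a1 = a10 * (np.asarray(t, float) - t1)
    return np.sqrt(np.where(a1 < 0.0, -a1, 0.0) / b1)

print(magnetization([300.0, 350.0, 360.0, 400.0]))  # continuous onset at T1
```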
A prominent feature that indicates the existence of the CO state is the appearance of superlattice diffraction spots characterized by a wave vector $`𝐤=(\frac{1}{2}00)`$ or the $`X`$ point in the $`Pnma`$ setting, which is adhered to throughout this paper. However, such spots are associated more with lattice modulations than with the charge order, which is characterized by $`𝐤=(100)`$ or $`\mathrm{\Gamma }`$. Direct charge *and* orbital orders were first detected by x-ray resonant scattering techniques in doped La<sub>2</sub>SrMnO<sub>4</sub>. So to describe the transition we need two order parameters. On the one hand, the charge order is characterized by an order parameter $`C=\xi _1+\xi _2-\xi _3-\xi _4`$ with $`\xi _i`$ the occupancy probability of site $`i`$ (Fig. 1 inset). $`C`$ is maximized if $`\xi _1=\xi _2=1`$ and $`\xi _3=\xi _4=0`$, and so a nonzero $`C`$ produces the observed CO pattern. By considering the permutations of the four sites under the symmetry operations of the $`Pnma`$ group, it can be shown that $`C`$ also transforms as the IR $`\tau ^5`$ of $`Pnma`$ at $`\mathrm{\Gamma }`$. Accordingly, the free energy of the CO transition is given by $$F_C=\frac{1}{2}a_2C^2+\frac{1}{4}b_2C^4,$$ (2) where $`a_2=a_{20}(T-T_2)`$, and $`a_{20}`$ and $`b_2`$ are constants. Below $`T_2`$ charge order appears and the symmetry of the structure is lowered to $`P2_1/m`$. On the other hand, the structural modulation at $`𝐤=(\frac{1}{2}00)`$ is described by one of the two IR’s of the wave vector, namely, $`X_1`$ and $`X_2`$, which are both two dimensional. Accordingly, the structural transition is characterized by a two-dimensional order parameter ($`\eta _1,\eta _2`$). The physical meaning of the order parameter may be the displacement of the Mn<sup>4+</sup>O<sub>6</sub> octahedra as modeled by Radaelli and coworkers to account for the diffraction patterns. It can be shown that $`\eta _2`$ may represent an arbitrary linear combination of the $`x`$ and $`z`$ components and $`\eta _1`$ the $`y`$ component of such displacements. Such displacement patterns are consistent with orbital configurations of $`d_{3x^2-r^2}`$ and $`d_{x^2-y^2}`$, respectively, which, when propagated half a period of the $`X`$-point modulation, switch to $`d_{3z^2-r^2}`$ and $`d_{y^2-z^2}`$. It may be possible to choose alternatively an orbital basis such as $`d_{3x^2-r^2}`$ and $`d_{y^2-z^2}`$ and then represent the order parameter as the long-range order of a certain orbital which assumes a certain angle in the orbital space. We just note in passing that a certain displacement pattern corresponds to some orbital order. Notice that the frequently observed $`P2_1/m`$ symmetry can only arise from the IR $`X_1`$. Therefore, the free energy for this structural transition is $$F_\eta =\frac{1}{2}a_3(\eta _1^2+\eta _2^2)+\frac{1}{4}b_3(\eta _1^4+\eta _2^4)+\frac{1}{4}d(\eta _1^2+\eta _2^2)^2+\kappa \left(\eta _1\frac{\partial \eta _2}{\partial x}-\eta _2\frac{\partial \eta _1}{\partial x}\right)+\frac{\sigma }{2}\left[(\nabla \eta _1)^2+(\nabla \eta _2)^2\right],$$ (4) where again $`a_3=a_{30}(T-T_3)`$, and $`a_{30}`$, $`b_3`$, $`d`$, $`\kappa `$ and $`\sigma `$ are constants. A peculiar feature of Eq. (4) is the appearance of the Lifshitz invariant (the $`\kappa `$ term), which frequently leads to IC modulations. Many characteristic features of an IC transition have been observed in the CO and structural transition in manganites.
Therefore, the IC nature of the structural/orbital (but not charge) order has its origin in the Lifshitz invariant of the $`X`$ point. Nevertheless, we shall neglect this IC feature of the structural modulation below for simplicity and focus on its interplay with the charge and magnetic orders. This is partly justified by the fact that commensurate structures are also frequently observed in the same experiments that display incommensurate ones. In this case, Eq. (4) exhibits two possible phases below $`T_3`$. One has only one nonzero component, equal to $`\pm \sqrt{-a_3/(b_3+d)}`$, and so its symmetry is $`P2_1/m`$ if $`b_3+d>0`$ and $`b_3<0`$. The other satisfies $`\eta _1=\pm \eta _2=\pm \sqrt{-a_3/(b_3+2d)}`$ and belongs to $`Pm`$ symmetry when $`b_3+2d>0`$ and $`b_3>0`$. Coupling of the charge to the structural degrees of freedom can be readily found by noting that $`\eta _1^2-\eta _2^2`$ transforms as the same IR as $`C`$, and so the simplest coupling between them is $$F_{C\eta }=g_{C\eta }C(\eta _1^2-\eta _2^2),$$ (5) where $`g_{C\eta }`$ is a measure of the coupling. Assume $`g_{C\eta }`$ and $`C`$ are positive without loss of generality. It is then transparent that the coupling will favor the ordering of $`\eta _2`$ once the charge is ordered, since the effective quadratic coefficient for $`\eta _2`$ is lowered to $`a_3-2g_{C\eta }C`$ (elevating its transition temperature), while that for $`\eta _1`$ is raised to $`a_3+2g_{C\eta }C`$ (lowering it). This explains the frequently observed $`P2_1/m`$ symmetry. The possibility of ordering both $`\eta _1`$ and $`\eta _2`$ still exists, which may account for the absence of the $`2_1`$ screw axis in Sm<sub>0.5</sub>Ca<sub>0.5</sub>MnO<sub>3</sub>; that compound should then be of $`Pm`$ symmetry, although $`P2mm`$ or $`Pmmm`$ was preferred in the original report. This scenario is confirmed by the phase diagram illustrated in the inset of Fig. 2. It is seen that as $`b_3`$ increases, the boundary of the $`Pm`$ phase moves to the right, reducing the region of the $`P2_1/m`$ phase. If $`b_3<0`$, only the $`P2_1/m`$ phase exists, while for $`d<0`$, the $`Pm`$ phase extends far to the right. Note that a large $`b_3`$ or small $`d`$ means a large “lock-in” term [the $`b_3`$ term in Eq. (4)] that tends to suppress the incommensurability. This seems to be consistent with the observation that more distorted systems such as Sm and Gd tend to be more stoichiometric and so commensurate. We remark that the transition from the $`P2_1/m`$ to the $`Pm`$ phase may accompany or be hidden by the IC to commensurate transition and the PM to CE-type AFM transition. Since $`M`$ changes sign under time reversal, the only possible couplings of the magnetic to the charge and structural transitions are bi-quadratic, i.e., $$F_{MC\eta }=\frac{1}{2}g_{M\eta }M^2(\eta _1^2+\eta _2^2)+\frac{1}{2}g_{MC}C^2M^2,$$ (6) where both coupling coefficients are positive due to the competing orders. Equations (1)-(6) constitute our theory of the magnetic, charge and structural or orbital transitions. Instead of going into detailed estimations of the various coefficients in the model, we are content here with global features that are believed to be relevant to the parameter regime of real materials. To this end, we study a simplified version of the theory, in which we have taken $`\eta _1=0`$, i.e., disregarded the possibility of the $`Pm`$ symmetry, relabeled $`b_3+d`$ as $`b_3`$, and neglected the bi-quadratic coupling between the charge and magnetism, since their couplings to the structural order result in a lower-order $`CM^2`$-type coupling.
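The phase selection just described can be checked with a small numerical sketch of the homogeneous part of Eqs. (4)-(5), i.e. without the gradient terms; all parameter values below are assumed, arbitrary-unit choices made only to illustrate how the sign of $`b_3`$ and the coupling to $`C`$ pick the minimum:

```python
import numpy as np

# Homogeneous part of Eqs. (4)-(5); assumed illustrative parameters.
a3, d, C, g = -1.0, 1.0, 1.0, 0.05

def F(eta1, eta2, b3):
    quartic = 0.25*b3*(eta1**4 + eta2**4) + 0.25*d*(eta1**2 + eta2**2)**2
    coupling = g*C*(eta1**2 - eta2**2)        # Eq. (5); favors eta2 for g*C > 0
    return 0.5*a3*(eta1**2 + eta2**2) + quartic + coupling

e = np.linspace(-2.0, 2.0, 401)
E1, E2 = np.meshgrid(e, e, indexing="ij")
for b3 in (-0.5, +0.5):
    i, j = np.unravel_index(np.argmin(F(E1, E2, b3)), E1.shape)
    print(f"b3 = {b3:+.1f}:  (eta1, eta2) = ({e[i]:+.3f}, {e[j]:+.3f})")
# b3 < 0: only eta2 orders (the P2_1/m phase); b3 > 0: both components order
# (the Pm-like phase), with |eta2| > |eta1| because of the coupling to C.
```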
This implies that long-range charge order is always accompanied by structural order. Figure 2 displays a generic phase diagram for several sets of the parameters in arbitrary units. It shows the strong-coupling case $`g_{M\eta }^2>b_1b_3`$ between magnetism and lattice, so that no mixed magnetic and charge order appears. It can be seen that as $`g_{C\eta }`$ increases, the sequence of phase transitions changes from a PM $``$ FM to a PM $``$ FM $``$ CO and then to a PM $``$ CO phase directly as $`T`$ is lowered. The real situations as in Fig. 1 do not, of course, correspond simply to a variation of $`g_{C\eta }`$ alone, but Fig. 2 exhibits in a simple way the relevance of the theory. Figure 3 shows the variation of the order parameters for three different values of $`g_{C\eta }`$. It is seen that the transition to the CO phase is discontinuous. The variations of $`C`$ and $`\eta `$ are similar, but their coupling is not necessarily linear. Note that the CO and the structural transitions are so strongly coupled that they take place at a single transition temperature. Although the CO can still appear preceding the structural order when $`g_{C\eta }<1`$, the reverse is not true; namely, $`\eta =0`$ if $`C=0`$. Therefore the structural or orbital order is driven by the charge order. The inset plots the corresponding free energy vs temperature, showing that the larger the $`g_{C\eta }`$, the lower the free energy, and so the larger the magnetic field needed to lower the upper curve [Eq. (1)] by $`HM`$ and melt the CO state, in agreement with experiments. This indicates that the magnetic field acts primarily in melting the CO state rather than the AFM state. Evidence for this is that the CO state can be melted even above the CE-type AFM ordering temperature, where there is no AFM order for the field to melt. This also justifies the separate treatment of the AFM order. Finally we discuss briefly the transition to the CE-type AFM order. Noting that in La<sub>0.5</sub>Ca<sub>0.5</sub>MnO<sub>3</sub> with the standard CE-type state, the magnetic moments lie in the $`a`$-$`c`$ plane, we may choose the $`x`$ and $`z`$ components of the AFM vectors $`𝐋_1=𝝁_1-𝝁_2`$ and $`𝐋_2=𝝁_3-𝝁_4`$ as order parameters, since they transform as the two components of the two-dimensional IR’s formed by $`\tau ^1`$ and by $`\tau ^1`$ combined with its complex conjugate $`\tau ^3`$, associated respectively with the wave vectors $`(00\frac{1}{2})`$ and $`(\frac{1}{2}0\frac{1}{2})`$, where $`𝝁_𝒊`$ is the magnetic moment of ion $`i`$. The involvement of two IR’s rather than one makes the transition discontinuous. Both IR’s give a magnetic symmetry of $`P_b2_1/m`$ and a structural one of $`P2_1/m`$, identical to that of the CO and structural transitions. The compatibility of the orbital and magnetic patterns implies an enhancement of both transitions and can be described by a coupling $`(\eta _1^2+\eta _2^2)(L_{1\alpha }L_{1\beta }+L_{2\alpha }L_{2\beta })`$ with a negative coupling constant, where $`\alpha `$ and $`\beta `$ denote $`x`$ or $`z`$. As a result, the onset of one of the orders enhances the other, leading, for instance, to the AFM order accompanying the IC to commensurate orbital-ordering transition, as the AFM order promotes the contribution of the lock-in term, and to the increase of the resonant x-ray scattering intensity of the charge and orbital orders upon AFM ordering. In conclusion, we have developed a Landau theory for the coupled phase transitions in half-doped manganites through the identification of the order parameters for the FM, CE-type AFM, CO and structural or orbital orders.
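The phase sequence can be reproduced qualitatively by minimizing the simplified free energy on a grid; the coefficients below are assumed, arbitrary-unit values (with $`g_{M\eta }^2>b_1b_3`$ and $`T_2=T_3`$), not fits to Fig. 2 or to any material:

```python
import numpy as np

# Simplified model (eta_1 = 0): F_M + F_C + F_eta + couplings; assumed values.
a10, b1, T1 = 0.01, 1.0, 260.0
a20, b2, T2 = 0.01, 1.0, 190.0
a30, b3, T3 = 0.01, 1.0, 190.0
g_Ceta, g_Meta = 0.6, 2.0          # g_Meta^2 > b1*b3: strong-coupling regime

m = np.linspace(0.0, 1.8, 61)
M, C, ETA = np.meshgrid(m, m, m, indexing="ij")

def total_F(T):
    a1, a2, a3 = a10*(T - T1), a20*(T - T2), a30*(T - T3)
    return (0.5*a1*M**2 + 0.25*b1*M**4
            + 0.5*a2*C**2 + 0.25*b2*C**4
            + 0.5*a3*ETA**2 + 0.25*b3*ETA**4
            - g_Ceta*C*ETA**2                # charge-structure coupling, Eq. (5)
            + 0.5*g_Meta*(M**2)*(ETA**2))    # magnetism-structure term, Eq. (6)

for T in range(280, 140, -10):
    i, j, k = np.unravel_index(np.argmin(total_F(T)), M.shape)
    print(f"T = {T}:  M = {m[i]:.2f}  C = {m[j]:.2f}  eta = {m[k]:.2f}")
# On cooling: PM (all zero) -> FM (M > 0) -> CO (C and eta jump on together at
# a single temperature while M drops to zero): a discontinuous CO transition.
```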
The theory provides a unified picture of the scenario of the phase transitions and their nature with respect to the variation of the tolerance factor of the manganites, via the symmetry-adapted coupling among the degrees of freedom. Many peculiar phenomena of half-doped manganites result from the interplay between the FM or A-type AFM and CO states; the CE-type AFM order sets in only as a secondary factor. So an applied magnetic field primarily melts the CO state. The theory also accounts for the origin of the IC nature of the orbital order and for the AFM order that subsequently accompanies it. As a phenomenological theory, it can make direct contact with the experimental results, especially the symmetry of the structures involved, which sensitively influences the transport behavior in manganites. Experimental clarification is desirable of the symmetry of the CO state and its relation to the oxygen stoichiometry and commensurability, of the possible structural transition from $`P2_1/m`$ to $`Pm`$, and of the relation between the A-type AFM and CO states. This work was supported by a URC fund and a CRCG grant at the University of Hong Kong.
# The sun as a high energy neutrino source ## 1 Introduction Cosmic ray impingement on the solar atmosphere leads to the production of secondary particles via high energy $`pp`$-interactions, the decay of which subsequently results in a flux of both electron and muon neutrinos. (Note that throughout this article the term neutrino is meant to refer to both neutrino and corresponding antineutrino.) In order to compute the neutrino flux due to these processes, one first has to evaluate the absorption rate of cosmic rays in the sun, taking into account the interplanetary and solar magnetic fields. The high energy interactions may then be treated by means of Monte Carlo simulations such as JETSET and PYTHIA. Finally, the shadowing effect of inelastic neutrino scattering in the sun has to be included. This analysis has been carried out for energies exceeding 100 GeV. The results are shown in the left part of Fig. 1 and will be used in this article. For energies $`E_\nu `$ lower than 100 GeV we assume that the flux $`\varphi _{\nu _{e/\mu }}`$ of solar atmosphere neutrinos integrated over the solar disk is given by $`\varphi _{\nu _{e/\mu }}\propto E_{\nu _{e/\mu }}^{-\gamma }`$, where $`1.75<\gamma <2.45`$, thus allowing for some uncertainty due to heliomagnetic effects. Seckel et al. favor the lower limit of $`\gamma `$. Note that both choices are consistent with the EGRET limit on the gamma ray flux of the quiet sun, if a smaller value of $`\gamma `$ is adopted for $`E_\nu <10\mathrm{GeV}`$. The solar atmosphere neutrino spectra thus obtained may be altered by neutrino oscillations, which depend on the neutrino mass differences and mixing matrices. Together with the suggested solutions to the solar neutrino problem and long baseline experiments, the SuperKamiokande data on terrestrial atmospheric neutrinos can be used to narrow down these parameters to a few cases. In this article we investigate, for the various mixing matrices and corresponding mass differences, the influence of neutrino oscillations on the expected event rate of solar atmosphere neutrinos. ## 2 Neutrino detection For the detection of the solar atmosphere neutrino flux water-based Čerenkov detectors may be used. As these detect the Čerenkov radiation of leptons produced in charged current interactions, the total event rate $`\dot{N_\nu }`$ for energies exceeding some threshold $`E_0`$ is approximately given by $$\dot{N}_\nu =\int _{E_0}^{\mathrm{\infty }}dE\,\varphi _\nu (E)\sigma _{\mathrm{CC}}(E)\frac{\rho }{m_p}L_\nu (E)A,$$ (1) where $`m_p`$ denotes the proton mass, $`\rho =1\mathrm{g}/\mathrm{cm}^3`$ the matter density, and $`A`$ the effective detector area. For computing the charged current cross section $`\sigma _{\mathrm{CC}}`$ the CTEQ4DIS parton distribution functions are used. Concerning tau neutrinos, the phase space limitations due to the large tauon mass must be taken into account. $`L_\nu `$ is the lepton range or the detector thickness $`h`$ in the direction of the sun, whichever is larger. Hence for $`\nu _e`$ and $`\nu _\tau `$ we may assume $`L_\nu =h`$, whereas for $`\nu _\mu `$ the relation $$L_\mu (E)=\mathrm{max}\{\frac{1}{\beta \rho }\mathrm{ln}\frac{E+\alpha /\beta }{E_0+\alpha /\beta },h\}$$ with $`\alpha =2.5\,\mathrm{MeV}\,(\mathrm{g}\,\mathrm{cm}^{-2})^{-1}`$, $`\beta =4.0\times 10^{-6}\,(\mathrm{g}\,\mathrm{cm}^{-2})^{-1}`$ has to be employed. In the following, we will assume $`E_0=10\mathrm{GeV}`$.
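The structure of the rate integral can be made concrete with a short numerical sketch. The flux normalisation `phi0` below is an arbitrary placeholder, and the linear form $`\sigma _{\mathrm{CC}}\approx 0.67\times 10^{-38}\,\mathrm{cm}^2\,(E/\mathrm{GeV})`$ is a common rough approximation used here instead of the CTEQ4DIS calculation of the text; only the muon-range formula, the threshold and the detector numbers are taken from above:

```python
import numpy as np
from scipy.integrate import quad

# Sketch of the rate integral, Eq. (1), for muon neutrinos.
alpha = 2.5e-3        # GeV per (g cm^-2), muon energy-loss parameter
beta  = 4.0e-6        # (g cm^-2)^-1
rho   = 1.0           # g cm^-3
m_p   = 1.67e-24      # g
E0    = 10.0          # GeV, detection threshold
A, h  = 1.0e10, 1.0e5 # effective area (cm^2) and thickness (cm): 1 km^2, 1 km

def phi(E, gamma=2.0, phi0=1.0e-10):  # assumed power-law flux, cm^-2 s^-1 GeV^-1
    return phi0 * E**(-gamma)

def sigma_cc(E):                      # rough linear CC cross section, cm^2
    return 0.67e-38 * E

def L_mu(E):                          # muon range or detector thickness, cm
    range_cm = np.log((E + alpha/beta) / (E0 + alpha/beta)) / (beta * rho)
    return max(range_cm, h)

integrand = lambda E: phi(E) * sigma_cc(E) * (rho / m_p) * L_mu(E) * A
rate_per_s, _ = quad(integrand, E0, 1.0e5)
print(f"nu_mu events per year ~ {rate_per_s * 3.15e7:.3f}")
```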
Taking $`A=10^4\mathrm{m}^2`$, $`h=500\mathrm{m}`$ and $`A=1\mathrm{km}^2`$, $`h=1\mathrm{km}`$ as examples, the solar atmosphere neutrino fluxes of Fig. 1 yield event rates of $`\dot{N}_e=0.1`$–$`0.2\,\mathrm{a}^{-1}`$, $`\dot{N}_\mu =0.3`$–$`0.5\,\mathrm{a}^{-1}`$ and $`\dot{N}_e=24`$–$`46\,\mathrm{a}^{-1}`$, $`\dot{N}_\mu =46`$–$`82\,\mathrm{a}^{-1}`$, respectively. The range of values reflects the allowed range of $`\gamma `$ for energies smaller than 100 GeV. ## 3 Neutrino oscillations So far, no neutrino oscillations have been taken into account. However, the solar neutrino problem and the SuperKamiokande atmospheric neutrino data are best explained by transitions between the various neutrino flavors, so that one should expect that the solar neutrinos oscillate on their flight from the sun to the earth. Then the probability of a solar neutrino of flavor $`\alpha `$ arriving as a neutrino of flavor $`\beta `$ is given by $$P_{\nu _\alpha \to \nu _\beta }=\left|\delta _{\alpha \beta }+\underset{k=2}{\overset{n}{}}U_{\beta k}U_{\alpha k}^{*}\left[\mathrm{exp}\left(i\frac{\mathrm{\Delta }m_{k1}^2L}{2E_\nu }\right)-1\right]\right|^2$$ with the distance $`L`$ between earth and sun, $`\mathrm{\Delta }m_{k1}^2\equiv m_k^2-m_1^2`$ ($`m`$ being the neutrino mass), and the mixing matrix $`U`$. Note that because of the high energies Mikheyev-Smirnov-Wolfenstein (MSW) effects may be ignored. The experimental data on neutrino oscillations suggest mixing matrices of the form $$U=\left(\begin{array}{ccc}\mathrm{cos}\theta _{\mathrm{sun}}& \mathrm{sin}\theta _{\mathrm{sun}}& 0\\ -\mathrm{sin}\theta _{\mathrm{sun}}\mathrm{cos}\theta _{\mathrm{atm}}& \mathrm{cos}\theta _{\mathrm{sun}}\mathrm{cos}\theta _{\mathrm{atm}}& \mathrm{sin}\theta _{\mathrm{atm}}\\ \mathrm{sin}\theta _{\mathrm{sun}}\mathrm{sin}\theta _{\mathrm{atm}}& -\mathrm{cos}\theta _{\mathrm{sun}}\mathrm{sin}\theta _{\mathrm{atm}}& \mathrm{cos}\theta _{\mathrm{atm}}\end{array}\right)$$ for 3 flavors and $$U=\left(\begin{array}{cccc}0& 0& \mathrm{cos}\theta _{\mathrm{sun}}& \mathrm{sin}\theta _{\mathrm{sun}}\\ \mathrm{cos}\theta _{\mathrm{atm}}& \mathrm{sin}\theta _{\mathrm{atm}}& 0& 0\\ -\mathrm{sin}\theta _{\mathrm{atm}}& \mathrm{cos}\theta _{\mathrm{atm}}& 0& 0\\ 0& 0& -\mathrm{sin}\theta _{\mathrm{sun}}& \mathrm{cos}\theta _{\mathrm{sun}}\end{array}\right)(\mathrm{case}\mathrm{A})$$ or $$U=\left(\begin{array}{cccc}\mathrm{cos}\theta _{\mathrm{sun}}& \mathrm{sin}\theta _{\mathrm{sun}}& 0& 0\\ 0& 0& \mathrm{cos}\theta _{\mathrm{atm}}& \mathrm{sin}\theta _{\mathrm{atm}}\\ 0& 0& -\mathrm{sin}\theta _{\mathrm{atm}}& \mathrm{cos}\theta _{\mathrm{atm}}\\ -\mathrm{sin}\theta _{\mathrm{sun}}& \mathrm{cos}\theta _{\mathrm{sun}}& 0& 0\end{array}\right)(\mathrm{case}\mathrm{B})$$ for 4 flavors, i.e. if the existence of a sterile neutrino is assumed. In the following discussion the two 4-flavor matrices lead to the same results, as the schemes of case A and B both consist of a $`\nu _e`$-$`\nu _{\mathrm{sterile}}`$ oscillation, which is relevant for the solar neutrino problem, and a $`\nu _\mu `$-$`\nu _\tau `$ oscillation, which is relevant for the atmospheric neutrino data. The experimental limits of the mixing angles $`\theta _{\mathrm{atm}}`$ and $`\theta _{\mathrm{sun}}`$ and of the mass square differences $`\mathrm{\Delta }m_{k1}^2`$ for the various solutions of the solar neutrino problem are listed in Table 1.
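The probability formula above is straightforward to evaluate numerically. In the sketch below, the 3-flavor matrix is used with assumed sample values of the angles and mass splittings (within the ranges of Table 1, but not fitted numbers):

```python
import numpy as np

# Vacuum oscillation probabilities for the 3-flavour mixing matrix above.
L_AU = 1.496e8                       # earth-sun distance in km

def U3(th_sun, th_atm):
    s1, c1 = np.sin(th_sun), np.cos(th_sun)
    s2, c2 = np.sin(th_atm), np.cos(th_atm)
    return np.array([[ c1,     s1,    0.0],
                     [-s1*c2,  c1*c2, s2 ],
                     [ s1*s2, -c1*s2, c2 ]])

def P(alpha, beta, U, dm2, L_km, E_GeV):
    """P(nu_alpha -> nu_beta); dm2 lists Delta m^2_{k1}, k = 2..n, in eV^2."""
    amp = complex(alpha == beta)
    for k in range(1, U.shape[0]):
        phase = 2.534 * dm2[k-1] * L_km / E_GeV    # Delta m^2 L / (2 E)
        amp += U[beta, k] * np.conj(U[alpha, k]) * (np.exp(1j*phase) - 1.0)
    return abs(amp)**2

U = U3(np.pi/4, np.pi/4)             # assumed (close to maximal) mixing
dm2 = [5.0e-5, 3.0e-3]               # eV^2, assumed sample values
for E in (10.0, 100.0, 1000.0):
    row = [P(1, b, U, dm2, L_AU, E) for b in range(3)]   # initial nu_mu
    print(f"E = {E:6.1f} GeV:  P(mu->e,mu,tau) =",
          ", ".join(f"{p:.3f}" for p in row), f"(sum = {sum(row):.3f})")
```

The unit row sums provide a quick unitarity check of the mixing matrix.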
The differential fluxes $`\varphi _\nu `$ of $`\nu _e`$, $`\nu _\mu `$, and $`\nu _\tau `$ at the earth may now be written as $`\varphi _e`$ $`=`$ $`\varphi _{e,\mathrm{sun}}P_{\nu _e\to \nu _e}+\varphi _{\mu ,\mathrm{sun}}P_{\nu _\mu \to \nu _e}`$ $`\varphi _\mu `$ $`=`$ $`\varphi _{e,\mathrm{sun}}P_{\nu _e\to \nu _\mu }+\varphi _{\mu ,\mathrm{sun}}P_{\nu _\mu \to \nu _\mu }`$ $`\varphi _\tau `$ $`=`$ $`\varphi _{e,\mathrm{sun}}P_{\nu _e\to \nu _\tau }+\varphi _{\mu ,\mathrm{sun}}P_{\nu _\mu \to \nu _\tau }.`$ The right part of Fig. 1 shows an example of solar atmosphere neutrino fluxes with neutrino oscillations. One obtains the corresponding total event rates by inserting the fluxes at the earth into Eq. (1). In order to compare the rates thus obtained to those without neutrino oscillations, we introduce the ratios $$R_{e/\mu }\equiv \frac{\text{total }\nu _{e/\mu }\text{ event rate with neutrino oscillations}}{\text{total }\nu _{e/\mu }\text{ event rate without neutrino oscillations}}$$ $$T_\tau \equiv \frac{\text{total }\nu _\tau \text{ event rate with neutrino oscillations}}{\text{total }\nu _\mu \text{ event rate without neutrino oscillations}}.$$ Clearly, $`R_{e/\mu }`$ and $`T_\tau `$ depend on $`U`$, $`L`$ and $`\mathrm{\Delta }m_{k1}^2`$. However, due to the fact that these quantities involve an integration over the energy, the dependence on $`L`$ and $`\mathrm{\Delta }m_{k1}^2`$ within the mass square ranges given in Table 1 is weak and can be neglected. Furthermore, there is virtually no dependence on the detector size. The ranges of $`R_{e/\mu }`$ and $`T_\tau `$ for the various mixing schemes are given in Fig. 2. $`R_e`$ and $`R_\mu `$ essentially do not depend on the precise value of $`\gamma `$ between 1.75 and 2.45. However, because of the limited tauon phase space this is not true for $`T_\tau `$. ## 4 Background Concerning the solar atmosphere neutrino flux there are three possible kinds of background fluxes: * For electron and muon neutrinos cosmic ray impingement on the earth’s atmosphere leads to a background flux (cf. Fig. 1), which in both cases is of the same order as the solar atmosphere neutrino flux. For tau neutrinos the terrestrial atmospheric flux can be neglected for energies far above 1 GeV. Therefore in all three cases, the solar flux will be discernible from the terrestrial atmospheric background. * The annihilation of WIMPs in the solar interior might produce a neutrino flux exceeding the one due to cosmic ray interactions. * Blazars, gamma-ray bursts, and particle decays may give rise to an isotropic neutrino background. Assuming that an upper limit to this background flux is given by $`\varphi _\nu (E)=\xi k(E/1\,\mathrm{GeV})^{-\kappa }`$ with the coefficients $`\kappa =2.1`$, $`k=7.32\times 10^{-6}\,\mathrm{cm}^{-2}\,\mathrm{s}^{-1}\,\mathrm{sr}^{-1}\,\mathrm{GeV}^{-1}`$ of the extragalactic gamma-ray spectrum as obtained by EGRET and $`\xi \lesssim 1`$, one obtains even for a cubic kilometer telescope an event rate of less than one event per year. Shadowing and cascading of neutrinos in the sun only further diminish this result. Hence the isotropic background can safely be neglected. Accordingly, the background is lower than the solar neutrino flux. It might be, however, that the solar neutrino flux is dominated by neutrinos due to the annihilation of WIMPs. The impact of the uncertainty in the knowledge of the initial neutrino direction will be discussed in the next section. ## 5 Discussion From Fig.
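The sub-one-per-year bound quoted for the isotropic background can be checked at the order-of-magnitude level with the same kind of rate integral, restricted to the solar solid angle ($`\approx 6.8\times 10^{-5}`$ sr); the linear cross-section approximation and the use of the detector thickness for the lepton range (contained, $`\nu _e`$-like events) are simplifying assumptions:

```python
import numpy as np
from scipy.integrate import quad

# Order-of-magnitude check of the isotropic-background bound.
k_egret, kappa = 7.32e-6, 2.1      # cm^-2 s^-1 sr^-1 GeV^-1, from the text
omega_sun = 6.8e-5                 # sr, solid angle of the solar disk
A, h = 1.0e10, 1.0e5               # 1 km^2 area (cm^2), 1 km thickness (cm)
n_nucleons = 1.0 / 1.67e-24        # targets per cm^3 at rho = 1 g/cm^3

integrand = lambda E: (k_egret * E**(-kappa) * omega_sun
                       * 0.67e-38 * E * n_nucleons * h * A)
rate_per_s, _ = quad(integrand, 10.0, 1.0e6)
print(f"isotropic background events per year < {rate_per_s * 3.15e7:.2f}")
```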
2 we see immediately that the influence of neutrino oscillations is mostly independent of the mixing scheme, the only exception being the large mixing angle MSW case for electron neutrinos. Assuming a threshold energy of $`E_0=10\mathrm{GeV}`$, one may predict the event rates in water-based Čerenkov detectors with an effective area of $`A=10^4\mathrm{m}^2`$ ($`A=1\mathrm{km}^2`$) and a thickness of $`h=500\mathrm{m}`$ ($`h=1\mathrm{km}`$) to be $`0.08`$–$`0.3\,\mathrm{a}^{-1}`$ ($`12`$–$`44\,\mathrm{a}^{-1}`$) for electron, $`0.1`$–$`0.3\,\mathrm{a}^{-1}`$ ($`23`$–$`53\,\mathrm{a}^{-1}`$) for muon and $`0.03`$–$`0.1\,\mathrm{a}^{-1}`$ ($`6`$–$`12\,\mathrm{a}^{-1}`$) for tau neutrinos. The corresponding $`\nu _\mu `$ and $`\nu _\tau `$ event rates in a 1 km<sup>3</sup> detector for threshold energies greater than 10 GeV are given in Fig. 3 for the large mixing MSW case with three neutrino flavors. For comparison, we note that the rate of tau neutrino events to be expected from the proposed CNGS beam is of the order of 30 per year. Hence at first sight it seems that, although not producing a sufficiently high event rate in present-day detectors, the sun should be detectable with next-generation neutrino telescopes. However, there are three drawbacks: Firstly, due to the immense number of atmospheric high energy muons, for $`\nu _\mu `$ only upward-going neutrinos (i.e. with zenith angles greater than or, depending on the detector depth, slightly less than $`90^{\circ }`$) may be detected. For the sun, this reduces the $`\nu _\mu `$ event rate to about half its value, the precise amount depending on the detector depth and latitude. Secondly, at present the detection rate of unambiguous neutrino events is lower than the actual event rate. Thirdly, for the energy range considered in this article the mean angle between the neutrino and the corresponding lepton cannot be neglected. For muon neutrinos it is given by $`1.5^{\circ }(E_\nu /100\,\mathrm{GeV})^{-0.5}`$. Hence effectively the solid angle of the sun is enlarged, so that the terrestrial background exceeds the solar flux by up to three orders of magnitude for $`\nu _e`$ and $`\nu _\mu `$. The low angular resolution could be improved if information on the hadronic cascade could be used. Alternatively, one may restrict the neutrino energies to values greater than 100 GeV. In this case the number of solar atmosphere muon neutrino events for a horizontal flux and a one-year run (i.e. 7–10 events) would be comparable to the statistical error of the number of background events. It should be noted that due to the lack of any significant background, for energies far above 1 GeV the tau neutrino detectability is not affected by the solid angle over which one has to integrate. Evidently, the detection of the sun requires intelligent event reconstruction schemes combined with very fast read-out detectors. Concerning tauons, the “double bang”, i.e. the two cascades arising at the production and the decay of a tauon, may be used, if a sufficiently fine granularity of the neutrino detector is achieved. The sun might thus serve as a test beam for searching for neutrino oscillations on a scale exceeding earthbound distances. In particular, it offers the prospect of the discovery of tau appearance. ## Acknowledgements We thank F. Rieger and D. Horns for helpful discussions.
Part of this work has been supported by the Studienstiftung des deutschen Volkes.
# Stable and unstable vortices in multicomponent Bose-Einstein condensates ## Abstract We study the stability and dynamics of vortices in two-species condensates as prepared in the recent JILA experiment (M. R. Matthews et al., Phys. Rev. Lett. 83 (1999) 2498). We find that of the two available configurations, in which one species has vorticity $`m=1`$ and the other one has $`m=0`$, only one is linearly stable, which agrees with the experimental results. However, it is found that in the unstable case the vortex is not destroyed by the instability, but may be transferred from one species to the other or display complex spatiotemporal dynamics. Vortices appear in many different physical contexts ranging from classical phenomena such as fluid mechanics and nonlinear optics to purely quantum phenomena such as superconductivity and superfluidity. In the last two years more than 100 papers concerning vortices have been published in Physical Review Letters, which is a naive way to appreciate the importance of this subject in Physics. A vortex is the simplest topological defect one can construct: in a closed path around a vortex, the phase of the field undergoes a $`2\pi `$ winding, which stabilizes a zero value of the field placed at the vortex core. The vortex is stable because of topological constraints; removing the phase singularity implies an effect on the boundaries of the system which cannot be achieved using local perturbations. Vortices are central to our understanding of superfluidity and quantized flow. This is why, after the experimental realization of Bose-Einstein condensates (BEC) with ultracold atomic gases, the question of whether atomic BEC’s are superfluids has triggered the analysis of vortices. The main goals up to now have been to propose a robust mechanism to generate and detect vortices. But another important research area is the analysis of vortex stability, to which this work contributes. Although most of the theoretical effort concerning vortices has been focused on single component condensates, the first experimental production of vortices in a BEC was attained using a two-species <sup>87</sup>Rb condensate. In this experiment the condensed cloud is made up of atoms in two different hyperfine levels, denoted by $`|1`$ and $`|2`$. Since the scattering lengths are different, the two states are not equivalent. As a consequence, while each species can host a vortex, it was shown that a single vortex is stable only when it is placed in the component with the larger scattering length, $`|1`$. The other possibility, which has the vortex in $`|2`$, leads to some kind of instability. Our intention is to prove in this paper that the instability is purely dynamical and can be explained with a mean field model which does not include any type of dissipation. We will achieve this goal in three major steps. First, we propose a model based on coupled Gross-Pitaevskii equations and solve the stationary equations in two and three-dimensional setups. We obtain the lowest energy stationary states that can be qualitatively identified with the ground state and the two realizations of the experiment. Next we study the stability of each state under small perturbations using linear perturbation theory. Our main result is that only the experimentally stable configuration is also linearly stable. Finally, using numerical simulations, we study more realistic conditions in which the condensate suffers moderate to strong perturbations.
We show that there is a good agreement with experiment and also that the dynamics is very rich and depends on the dimensionality of the system and the intensity of the perturbations. The model.- In this work we will use the zero temperature approximation, in which collisions between the condensed and non-condensed atomic clouds are neglected. In the two species case this leads to a pair of coupled Gross-Pitaevskii equations (GPE) $`i\mathrm{\hbar }{\displaystyle \frac{\partial }{\partial t}}\psi _1`$ $`=`$ $`\left[-{\displaystyle \frac{\mathrm{\hbar }^2\nabla ^2}{2m}}+V_1+\stackrel{~}{u}_{11}|\psi _1|^2+\stackrel{~}{u}_{12}|\psi _2|^2\right]\psi _1,`$ (2) $`i\mathrm{\hbar }{\displaystyle \frac{\partial }{\partial t}}\psi _2`$ $`=`$ $`\left[-{\displaystyle \frac{\mathrm{\hbar }^2\nabla ^2}{2m}}+V_2+\stackrel{~}{u}_{21}|\psi _1|^2+\stackrel{~}{u}_{22}|\psi _2|^2\right]\psi _2.`$ (3) where $`\stackrel{~}{u}_{ij}=\frac{4\pi \mathrm{\hbar }^2}{m}a_{ij}`$, and $`a_{ij}`$ are the corresponding scattering lengths. To simplify the formalism and in analogy with experiments we assume that both potentials are spherically symmetric, i.e. $`V_1(\vec{r})=V_2(\vec{r})=\frac{1}{2}m\omega ^2(r^2+z^2).`$ Following the experiments, we will present results for an equal number of particles in each species, $`N_1=N_2=N`$, which translates to $$\int |\psi _1(\vec{r})|^2\,d\vec{r}=\int |\psi _2(\vec{r})|^2\,d\vec{r}=1$$ (4) after a proper rescaling of $`\psi _1`$ and $`\psi _2`$ by $`\sqrt{N}`$. To simplify the analysis we change to a new set of units based on the trap characteristic length, $`a_0=\sqrt{\mathrm{\hbar }/m\omega }`$, and period, $`\tau =1/\omega `$. In this set of units the nonlinear coefficients are $`u_{ij}=4\pi a_{ij}\sqrt{N_iN_j}/a_0`$. For the JILA experiment, in which $`\omega =2\pi \times 7.8\pm 0.1\,\mathrm{Hz}`$, we have that $`\tau \simeq 20.4\,\text{ms}`$ is the new unit of time. The whole study, including the linear stability analysis and the numerical simulations, was performed in two- and three-dimensional systems. We have studied the system up to values of $`u_{ij}\simeq 5000`$, which are of the order of magnitude of the experiments. Nevertheless, the linear stability analysis and the simulations change little for the strongest interactions and we expect our results to be still valid for a larger number of particles ($`N\simeq 10^6`$). Regarding the intensity of the nonlinearity, we have used the scattering lengths of $`^{87}`$Rb which appear in the literature. These values give us a precise line in the parameter space $$U=g\left(\begin{array}{cc}1.00& 0.97\\ 0.97& 0.94\end{array}\right).$$ (5) It is remarkable that because of the relation $`u_{11}>u_{12}>u_{22}`$ the experiment is performed in a regime in which the first component chases the second one, which in turn resists mixing with the chaser. This means that a “desired” configuration has the first component spread over the largest part of the space. As we will see, this has important consequences for the dynamics of the states. Search of solutions.- We are interested in stationary configurations in which each component has a well defined value of the angular momentum. Such states have the time and angular dependence factored out $$\psi _i(r,z,\theta )=e^{-i\mu _it}e^{im_i\theta }\varphi _i(r,z).$$ (6) These functions satisfy a nonlinear set of coupled PDE $$\mu _i\psi _i=-\frac{1}{2}\nabla ^2\psi _i+\frac{1}{2}\left(r^2+z^2\right)\psi _i+\underset{j}{\sum }u_{ij}|\psi _j|^2\psi _i,$$ (7) with $`i=1,2`$. Our focus will be on three particular configurations, which are the lowest energy states with vorticities $`(m_1,m_2)=(0,0),(1,0),(0,1)`$.
They correspond to the ground state of the double condensate, and to the single vortex states for the $`|1`$ and $`|2`$ species, respectively. To find the radial and longitudinal dependences of the wave functions, $`\varphi _i(r,z)`$, we expand them approximately on a finite subset of the harmonic oscillator basis, $`\varphi _i(r,z)\simeq \sum _{n=0}^{N}c_nP_n^{(m_i)}(r,z)e^{-r^2/2}e^{-z^2/2}`$, and then search for the ground states for each of the $`(m_1,m_2)`$ pairs of vorticities. The details of the method applied to single vortex systems can be found elsewhere. As a result one obtains the desired eigenfunctions, chemical potentials and energies [Fig. 1(a)] of each $`(m_1,m_2)`$ ground state. Linear stability analysis.- Let us study the behavior of the stationary states under infinitesimal perturbations (e.g. in the initial data, small amounts of noise, etc.). To do so, we define the excitations as $`\psi _i(r,z,\theta )`$ $`=`$ $`e^{-i\mu _it+im_i\theta }\left(\varphi _i(r,z)+e^{-i\lambda t+in\theta }\alpha _i(r,z)\right),`$ (9) $`\overline{\psi }_i(r,z,\theta )`$ $`=`$ $`e^{i\mu _it-im_i\theta }\left(\overline{\varphi }_i(r,z)+e^{i\lambda t-in\theta }\beta _i(r,z)\right).`$ (10) Then we introduce Eqs. (9)-(10) into Eqs. (2)-(3) and keep the $`𝒪(\alpha )`$ and $`𝒪(\beta )`$ terms. In the end we reach an equation for the perturbations $`\vec{W}=[\alpha _1,\alpha _2,\beta _1,\beta _2]`$ which is of the type $`iJ\partial _t\vec{W}=H\vec{W}`$. Here $`H`$ is a hermitian operator and $`iJ`$ is an anti-hermitian operator. We search for a Jordan basis such that $`\lambda \vec{W}=iJH\vec{W}`$. The lack of such a Jordan form leads to polynomial instabilities, while the presence of non-real eigenvalues is a signal of exponentially growing instabilities. All other modes give us the frequencies of the linear response of the system to small perturbations. This analysis is formally equivalent to the Bogoliubov stability analysis of the states under consideration. However, as shown previously, our perturbation is of order $`𝒪(\alpha ^2,\beta ^2)`$ in the energy and thus the exponentially unstable modes lie along directions of constant energy; a Bogoliubov expansion of the Hamiltonian therefore cannot account for the instability, which can only be captured by working directly on the GPE. It is thus important to confirm our linear stability results with numerical simulations of the whole system. Linear stability results.- When we perform the diagonalization for $`(m_1,m_2)=(0,0)`$ we obtain that all of the eigenvalues are positive numbers, which implies that $`(0,0)`$ is at least a local minimum of our energy functional; furthermore, it is the ground state of the system and thus globally stable. Next we study the $`(1,0)`$ family [Fig. 1(c)] and find that there is a negative eigenvalue among an infinite number of positive ones, which means that there is a single path in the configuration space along which the energy decreases. That direction belongs to an $`m=0`$ perturbation which takes the vortex out of the condensate. Nevertheless, as in this case there exist no complex eigenvalues, we conclude that the lifetime of the configuration is limited only by the presence and amount of losses. This is confirmed when we take a $`(1,0)`$ configuration, perturb it and study its real time evolution [Fig. 1(d)], and it is indeed consistent with the experiments.
We must remark that the existence of a negative eigenvalue in the spectrum around the $`(1,0)`$ vortex contradicts the belief that the second component could act as a pinning potential. Indeed, based on further study we can state that a $`(1,0)`$ vortex remains energetically unstable even for different proportions of each species, from $`N_1\simeq 0`$ to $`N_2\simeq 0`$. Finally we focus on the $`(0,1)`$ family [Fig. 2(a)]. Here we find an infinite number of modes with positive energy, plus a pair of them with negative frequency and complex eigenvalues, $`\lambda _u`$. Qualitatively, the shape and frequencies of the unstable modes are similar to those of the energy decreasing modes of the $`(1,0)`$ family; that is, they are perturbations which push the vortex out of both clouds. The difference is that due to the imaginary part of those eigenvalues, which is Im$`(\lambda _u)=𝒪(0.04)`$ [Fig. 2(b)], vortices with unit charge in $`|2`$ are unstable under a generic perturbation of the initial data. This is consistent with the JILA experiments, where a vortex hosted in the $`|2`$ species was found to be unstable. Although, as we mentioned above, the previous result does not depend drastically on $`N_1`$ being equal to $`N_2`$, we have also found that for $`N_2\gg N_1`$ the vortex in $`|2`$ becomes practically stable; that is, the lifetime is too long to be observed in numerical simulations. This is consistent with the limit of a single-species condensate, where the unit-charge vortex is found to be stable for any scattering length. Does the vortex break?.- The linear stability analysis cannot be used to draw conclusions about the behavior of the vortex far from the limit of infinitesimal perturbations. To get further insight into the dynamics we have performed a set of numerical simulations in which we reproduce different realistic perturbations of each state. These perturbations vary from small displacements of the vortex core, which should agree in detail with the linear stability results, to strong perturbations of the initial data. In particular, we have systematically used the procedure of creating a stationary state and then reducing the size of the trap, thus transferring the system to a nonstationary state. The numerical simulations have been done using a symmetrized split-step Fourier pseudospectral method on grids of sizes ranging from $`32\times 32\times 32`$ to $`64\times 64\times 64`$ for the three-dimensional setups, and $`64\times 64`$ to $`256\times 256`$ for the two-dimensional model. All results were tested with different grid sizes and time steps to ensure their validity. From our numerical simulations we extract several conclusions. First, the linearly stable state, $`(1,0)`$, is also a robust one and survives a wide range of perturbations, suffering at most a precession of the vortex core plus changes of the shapes of both components [Fig. 1(d)]. This behavior is equally reproduced in two- and three-dimensional simulations. Next we consider the unstable state, $`(0,1)`$, subject to weak perturbations in either the two- or three-dimensional model. In that case the unstable configuration develops a simple recurrent dynamics, which is well represented by Figs. 2(d-e). There we see a first stage where the first component [Fig. 2(d)] and the vortex [Fig. 2(e)] oscillate synchronously (the hole in $`|2`$ pins the peak of $`|1`$). These oscillations grow in amplitude until the linked system spirals out and forms a yin-yang pattern which rotates clockwise, one species chasing the other.
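A minimal two-dimensional sketch of such a simulation is given below. It is only a toy version of the scheme named above: the grid size, time step, overall interaction scale, and the way the $`(0,1)`$ state is prepared (a vortex phase imprinted on a Gaussian cloud, rather than a true stationary state) are all assumptions made for brevity. It also monitors the angular momentum per component (cf. Eq. (11) below), which tracks any transfer of the vortex between species:

```python
import numpy as np

# 2D split-step Fourier evolution of the coupled GPE in trap units (sketch).
N, L, dt = 64, 16.0, 1.0e-3
x = np.linspace(-L/2, L/2, N, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
k = 2*np.pi*np.fft.fftfreq(N, d=L/N)
KX, KY = np.meshgrid(k, k, indexing="ij")
V = 0.5*(X**2 + Y**2)                                  # harmonic trap
u = 500.0*np.array([[1.00, 0.97], [0.97, 0.94]])       # proportions of Eq. (5)

def step(p1, p2):
    """Half kinetic step, full nonlinear/potential step, half kinetic step."""
    half = np.exp(-0.25j*dt*(KX**2 + KY**2))
    p1 = np.fft.ifft2(half*np.fft.fft2(p1)); p2 = np.fft.ifft2(half*np.fft.fft2(p2))
    n1, n2 = abs(p1)**2, abs(p2)**2
    p1 = p1*np.exp(-1j*dt*(V + u[0, 0]*n1 + u[0, 1]*n2))
    p2 = p2*np.exp(-1j*dt*(V + u[1, 0]*n1 + u[1, 1]*n2))
    p1 = np.fft.ifft2(half*np.fft.fft2(p1)); p2 = np.fft.ifft2(half*np.fft.fft2(p2))
    return p1, p2

def Lz(p):
    """Angular momentum per particle (units of hbar), computed spectrally."""
    dpx = np.fft.ifft2(1j*KX*np.fft.fft2(p))
    dpy = np.fft.ifft2(1j*KY*np.fft.fft2(p))
    num = np.real(np.sum(np.conj(p)*(-1j)*(X*dpy - Y*dpx)))
    return num / np.sum(abs(p)**2)

# (0,1) configuration: plain cloud in |1>, unit-charge vortex imprinted on |2>.
p1 = np.exp(-(X**2 + Y**2)/4).astype(complex)
p2 = (X + 1j*Y)*np.exp(-(X**2 + Y**2)/4)
p1 /= np.sqrt(np.sum(abs(p1)**2)*(L/N)**2)             # normalisation, Eq. (4)
p2 /= np.sqrt(np.sum(abs(p2)**2)*(L/N)**2)
for t in range(4001):
    if t % 1000 == 0:
        print(f"t = {t*dt:5.2f}:  L_z per component = {Lz(p1):+.3f}, {Lz(p2):+.3f}")
    p1, p2 = step(p1, p2)
```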
Finally the first component develops a tail and later a hole which traps the second component. That hole is a vortex, which somehow has been transferred from $`|2`$ to $`|1`$. Though it is not completely periodic, this mechanism does exhibit some recurrence, and the vortex returns to $`|2`$. The preceding behavior persists even for strong perturbations in a two-dimensional condensate. However, when one considers a three-dimensional condensate and applies large perturbations to the initial data such as the one described above, it is possible to find a richer dynamics which includes the establishment of spatiotemporal chaos. As an example, Fig. 3 shows a regime in which more than one vortex is introduced into the first component at long times. Intuitively, this turbulent behavior has two causes. First, due to energetic considerations there is a bigger overlap of both species than in the two-dimensional model, as is apparent from the pictures. Second and most important, in a three-dimensional environment the first component is more reluctant to be dragged by the weak vortex line of the second component. Thus, as the second component spirals around, it is able to shake the first fluid and produce pairs of vortices, much like what happens in laser-stirred condensates. Neither phenomenon, the transfer of the vortex nor the chaotic evolution, contradicts any topological rule or conservation law. In fact the angular momentum of each component is no longer a conserved quantity, and the topological charge of each species need not survive through the evolution. Instead, what is conserved is the total angular momentum $$L_z=-i\mathrm{\hbar }\int \overline{\psi }_1\partial _\theta \psi _1\,d\vec{r}-i\mathrm{\hbar }\int \overline{\psi }_2\partial _\theta \psi _2\,d\vec{r}=L_z^{(1)}+L_z^{(2)}.$$ (11) In Fig. 4 we plot the evolution of the total angular momentum and that of each component $`L_z^{(j)}`$ for a recurrent situation and for the case of a spatiotemporally chaotic state [Fig. 3]. It can be seen that there is a complex interchange of angular momentum between both components. The intermediate states are topologically nontrivial ones, since the phase singularity is being transferred from one component to the other. A more detailed analysis of this process will be presented elsewhere. Conclusions and discussion.- We have analyzed the stability of vortices in multi-component atomic Bose-Einstein condensates, using both two-dimensional and three-dimensional sets of coupled Gross-Pitaevskii equations. We prove that a vortex in a $`|1`$ state is a dynamically stable object even though it is not a global minimum of the energy. In contrast, we demonstrate that a vortex in the $`|2`$ species is dynamically and energetically unstable and tends to spiral out of the condensate. Besides, since our model does not involve dissipative effects or asymmetries, one cannot get rid of the vortex angular momentum. This leads to a complex dynamics in which the vortex is transferred to the $`|1`$ state and eventually goes back to $`|2`$. This and other predictions about the dynamical behavior can be checked in current experiments. We believe that the simple model presented in this paper gives a reasonable explanation of the experiments, based only on the nonlinear interactions present in mean field theories. In fact if the instability found in the experiments were due to dissipation through a mechanism similar to the one proposed previously, it would affect both the $`(1,0)`$ and $`(0,1)`$ types of vortices in a similar way, which is not the case.
In support of our theory we can see from Fig. 3 of the experimental paper that the vortex does not completely escape but some kind of defect is formed in the periphery of the $`|2`$ component, a behavior similar to that found in our real-time simulations [Fig. 2(c)]. Regarding the time scales, our linear stability analysis gives lifetimes of about 500 milliseconds, which is two or three times larger than what is seen in the experiments. Nevertheless this is not significant, since further study has revealed that the time after which the instability affects the system depends dramatically on the type and the intensity of the perturbation, which the linear stability analysis requires to be small. As such it is conceivable that the experimental realization of the $`(0,1)`$ vortex must break sooner: the whole experimental preparation takes the system to a state which differs finitely from the stationary configuration and is thus more likely to excite the unstable mode, even during the preparation process. This work has been partially supported by the DGICYT under grant PB96-0534.
# Vehicular Traffic: A System of Interacting Particles Driven Far From Equilibrium∗ ## I Introduction Are you surprised to see an article on vehicular traffic in this special section of current science where physicists are supposed to report on some recent developments in the area of dynamics of nonequilibrium statistical systems? Aren’t civil engineers (or, more specifically, traffic engineers) expected to work on traffic? Solving traffic problems would become easier if one knew the fundamental laws governing traffic flow and traffic jams. For almost half a century physicists have been trying to develop a theoretical framework of traffic science extending concepts and techniques of statistical physics. The main aim of this brief review is to show how these attempts, particularly the recent ones, have led to deep insight into this frontier area of inter-disciplinary research. The dynamical phases of systems driven far from equilibrium are counterparts of the stable phases of systems in equilibrium. Let us first pose some of the questions that statistical physicists have been addressing in order to discover the fundamental laws governing vehicular traffic. For example, (i) What are the various dynamical phases of traffic? Does traffic exhibit phase-coexistence, phase transition, criticality or self-organized criticality and, if so, under which circumstances? (ii) What is the nature of fluctuations around the steady-states of traffic? (iii) If the initial state is far from a stationary state of the driven system, how does it evolve with time to reach a truly steady state? (iv) What are the effects of quenched disorder (i.e., time-independent disorder) on the answers to the questions posed in (i)-(iii) above? The microscopic models of vehicular traffic can find practical applications in on-line traffic control systems as well as in the planning and design of transportation networks. There are two different conceptual frameworks for modelling vehicular traffic. In the “coarse-grained” fluid-dynamical description, the traffic is viewed as a compressible fluid formed by the vehicles, but these individual vehicles do not appear explicitly in the theory. In contrast, in the “microscopic” models traffic is treated as a system of interacting “particles” driven far from equilibrium, where attention is explicitly focussed on individual vehicles, each of which is represented by a “particle”; the nature of the interactions among these particles is determined by the way the vehicles influence each other’s movement. Unlike the particles in a gas, a driver is an intelligent agent who can “think”, make individual decisions and “learn” from experience. Nevertheless, many general phenomena in traffic can be explained in general terms with these models provided the behavioural effects of the drivers are captured by only a few phenomenological parameters. The conceptual basis of the older theoretical approaches is explained briefly in section II. Most of the “microscopic” models developed in recent years are “particle-hopping” models which are usually formulated using the language of cellular automata (CA). The Nagel-Schreckenberg (NaSch) model and the Biham-Middleton-Levine (BML) model, which are the most popular CA models of traffic on idealized highways and cities, respectively, have been extended by several authors to develop more realistic models. Some of the most interesting aspects of these recent developments are discussed in the long sections III and IV.
The similarities between various particle-hopping models of traffic and some other models of systems, which are also far from equilibrium, are pointed out in section V, followed by the concluding section VI. ## II Older theories of vehicular traffic ### A Fluid-dynamical Theories of vehicular traffic In traffic engineering, the fundamental diagram depicts the relation between density $`c`$ and the flux $`J`$, which is defined as the number of vehicles crossing a detector site per unit time. Because of the conservation of vehicles, the local density $`c(x;t)`$ and local flux $`J(x;t)`$ satisfy the equation of continuity, which is the analogue of the equation of continuity in the hydrodynamic theories of fluids. In the early works it was assumed (i) that the flux (or, equivalently, the velocity) is a function of the density and (ii) that, following any change in the local density, the local speed instantaneously relaxes to a magnitude consistent with the new density at the same location. However, for a more realistic description of traffic, in the recent fluid-dynamical treatments of traffic an additional equation (the analogue of the Navier-Stokes equation for fluids), which describes the time-dependence of the velocity $`V(x;t)`$, has been considered. This approach, however, has its limitations; for example, the viscosity of traffic is not a directly measurable quantity. ### B Kinetic theory of vehicular traffic In the kinetic theory of traffic, one begins with the basic quantity $`g(x,v,w;t)dxdvdw`$ which is the number of vehicles, at time $`t`$, located between $`x`$ and $`x+dx`$, having actual velocity between $`v`$ and $`v+dv`$ and desired velocity between $`w`$ and $`w+dw`$. In this approach, the fundamental dynamical equation is the analogue of the Boltzmann equation in the kinetic theory of gases. Assuming reasonable forms of “relaxation” and “interaction”, the problem of traffic is reduced to that of solving the Boltzmann-like equation, a formidable task indeed! ### C Car-following theories of vehicular traffic In the car-following theories one writes, for each individual vehicle, an equation of motion which is the analogue of Newton’s equation for each individual particle in a system of interacting classical particles. In Newtonian mechanics, the acceleration may be regarded as the response of the particle to the stimulus it receives in the form of force which includes both the external force as well as those arising from its interaction with all the other particles in the system. Therefore, the basic philosophy of the car-following theories can be summarized by the equation $$[Response]_n\propto [Stimulus]_n$$ (1) for the $`n`$-th vehicle ($`n=1,2,\mathrm{}`$). The constant of proportionality in equation (1) can be interpreted as a measure of the sensitivity coefficient of the driver; it indicates how strongly the driver responds to unit stimulus. Each driver can respond to the surrounding traffic conditions only by accelerating or decelerating the vehicle. The stimulus and the sensitivity factor are assumed to be functions of the position and speed of the vehicle under consideration and those of its leading vehicle. Different forms of the equations of motion of the vehicles in the different versions of the car-following models arise from the differences in their postulates regarding the nature of the stimulus. In general, the dynamical equations for the vehicles in the car-following theories are coupled non-linear differential equations and thus, in this “microscopic” approach, the problem of traffic flow reduces to problems of nonlinear dynamics.
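As an illustration of equation (1), the sketch below integrates one classic choice of stimulus, namely the velocity difference to the leading vehicle; the specific functional form and all numerical values are our assumptions, since the text leaves the stimulus unspecified:

```python
import numpy as np

# Follow-the-leader sketch of Eq. (1): acceleration = sensitivity * stimulus,
# with the stimulus taken as the leader's speed minus the follower's speed.
n_veh, road, kappa, dt = 20, 200.0, 0.5, 0.01
rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0.0, road, n_veh))            # positions (m)
v = 10.0 + rng.normal(0.0, 1.0, n_veh)                # speeds (m/s)

for _ in range(20000):
    stimulus = np.roll(v, -1) - v        # velocity difference to the car ahead
    v += kappa * stimulus * dt           # response, Eq. (1), Euler-integrated
    x = (x + v * dt) % road              # ring road (periodic boundary)

print(f"speed spread after relaxation: {v.max() - v.min():.4f} m/s")
# The platoon relaxes to a common speed; richer, distance-dependent stimuli
# produce the nonlinear dynamics mentioned in the text.
```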
In general, the dynamical equations for the vehicles in the car-following theories are coupled non-linear differential equations and thus, in this ”microscopic” approach, the problem of traffic flow reduces to problems of nonlinear dynamics. ## III Cellular-automata Models of Highway-traffic In the car-following models space is treated as a continuum and time is represented by a continuous variable $`t`$ while velocities and accelerations of the vehicles are also real variables. However, most often, for numerical manipulations of the differential equations of the car-following models, one needs to discretize the continuous variables with appropriately chosen grids. In contrast, in the CA models of traffic not only time but also the position, speed, and acceleration of the vehicles are treated as discrete variables. In this approach, a lane is represented by a one-dimensional lattice. Each of the lattice sites represents a ”cell” which can be either empty or occupied by at most one ”vehicle” at a given instant of time (see fig.1). At each discrete time step $`tt+1`$, the state of the system is updated following a well defined prescription. ### A The Nagel-Schreckenberg model of highway traffic: In the NaSch model, the speed $`V`$ of each vehicle can take one of the $`V_{max}+1`$ allowed integer values $`V=0,1,\mathrm{},V_{max}`$. Suppose, $`X_n`$ and $`V_n`$ denote the position and speed, respectively, of the $`n`$-th vehicle. Then, $`d_n=X_{n+1}X_n`$, is the gap in between the $`n`$-th vehicle and the vehicle in front of it at time $`t`$. At each time step $`tt+1`$, the arrangement of the $`N`$ vehicles on a finite lattice of length $`L`$ is updated in parallel according to the following ”rules”: Step 1: Acceleration. If, $`V_n<V_{max}`$, the speed of the $`n`$-th vehicle is increased by one, but $`V_n`$ remains unaltered if $`V_n=V_{max}`$, i.e., $`V_nmin(V_n+1,V_{max})`$. Step 2: Deceleration (due to other vehicles). If $`d_nV_n`$, the speed of the $`n`$-th vehicle is reduced to $`d_n1`$, i.e., $`V_nmin(V_n,d_n1)`$. Step 3: Randomization. If $`V_n>0`$, the speed of the $`n`$-th vehicle is decreased randomly by unity with probability $`p`$ but $`V_n`$ does not change if $`V_n=0`$, i.e., $`V_nmax(V_n1,0)`$ with probability $`p`$. Step 4: Vehicle movement. Each vehicle is moved forward so that $`X_nX_n+V_n`$. The NaSch model is a minimal model in the sense that all the four steps are necessary to reproduce the basic features of real traffic; however, additional rules need to be formulated to capture more complex situations. The step 1 reflects the general tendency of the drivers to drive as fast as possible, if allowed to do so, without crossing the maximum speed limit. The step 2 is intended to avoid collision between the vehicles. The randomization in step 3 takes into account the different behavioural patterns of the individual drivers, especially, nondeterministic acceleration as well as overreaction while slowing down; this is crucially important for the spontaneous formation of traffic jams. So long as $`p0`$, the NaSch model may be regarded as stochastic CA . For a realistic description of highway traffic , the typical length of each cell should be about $`7.5`$m and each time step should correspond to approximately $`1`$ sec of real time when $`V_{max}=5`$. The update scheme of the NaSch model is illustrated with a simple example in fig.2. 
Space-time diagrams showing the time evolution of the NaSch model demonstrate that no jam is present at sufficiently low densities, but spontaneous fluctuations give rise to traffic jams at higher densities (fig.3(a)). From fig.3(b) it should be obvious that the intrinsic stochasticity of the dynamics, arising from non-zero $`p`$, is essential for triggering the jams. The use of parallel dynamics is also important. In contrast to a random sequential update, it can lead to a chain of overreactions. Suppose, a vehicle slows down due to the randomization step. If the density of vehicles is large enough this might force the following vehicle also to brake in the deceleration step. In addition, if $`p`$ is not too small, it might brake even further in Step 3. Eventually this can lead to the stopping of a vehicle, thus creating a jam. This mechanism of spontaneous jam formation is rather realistic and cannot be modelled by the random sequential update. ### B Relation between the NaSch model and ASEP In the NaSch model with $`V_{max}=1`$ every vehicle moves forward with probability $`q=1-p`$ in the time step $`t+1`$ if the site immediately in front of it was empty at the time step $`t`$; this is similar to the fully asymmetric simple exclusion process (ASEP), where a randomly chosen particle can move forward with probability $`q`$ if the site immediately in front is empty. But updating is done in parallel in the NaSch model, whereas that in the ASEP is done in a random sequential manner. Nevertheless, the special case of $`V_{max}=1`$ for the NaSch model achieves special importance from the fact that so far it has been possible to derive exact analytical results for the NaSch model only in the special limits (a) $`V_{max}=1`$ and arbitrary $`p`$ and (b) $`p=0`$ and arbitrary $`V_{max}`$. ### C NaSch model in the deterministic limits If $`p=0`$, the system can self-organize so that at low densities every vehicle can move with $`V_{max}`$ and the corresponding flux is $`cV_{max}`$; this is, however, possible only if enough empty cells are available in front of every vehicle, i.e., for $`c\le c_m^{det}=(V_{max}+1)^{-1}`$, and the corresponding maximum flux is $`J_{max}^{det}=V_{max}/(V_{max}+1)`$. On the other hand, for $`c>c_m^{det}`$, the flow is limited by the density of holes. Hence, the fundamental diagram in the deterministic limit $`p=0`$ of the NaSch model (for any arbitrary $`V_{max}`$) is given by the exact expression $`J=min(cV_{max},(1-c))`$. Aren’t the properties of the NaSch model with maximum allowed speed $`V_{max}`$, in the deterministic limit $`p=1`$, exactly identical to those of the same model with maximum allowed speed $`V_{max}-1`$? The answer to the question posed above is: NO; if $`p=1`$, all random initial states lead to $`J=0`$ in the stationary state of the NaSch model irrespective of $`V_{max}`$ and $`c`$! ### D Analytical Theory for the NaSch Model In the “site-oriented” theories one describes the state of the finite system of length $`L`$ by completely specifying the state of each site. In contrast, in the “car-oriented” theories the state of the traffic system is described by specifying the positions and speeds of all the $`N`$ vehicles in the system. In the naive mean-field approximation one treats the probabilities of occupation of the lattice sites as independent of each other.
In this approximation, for example, for the steady-state flux of the NaSch model with $`V_{max}=1`$ and periodic boundary conditions one gets $$J=qc(1-c).$$ (2) It turns out that the naive mean-field theory underestimates the flux for all $`V_{max}`$. Curiously, if instead of parallel updating one uses random sequential updating, the NaSch model with $`V_{max}=1`$ reduces to the ASEP, for which equation (2) is known to be the exact expression for the corresponding flux! What are the reasons for these differences arising from parallel updating and random sequential updating? There are “garden of Eden” (GoE) states (dynamically forbidden states) of the NaSch model which cannot be reached by the parallel updating, whereas no state is dynamically forbidden if the updating is done in a random sequential manner. For example, the configuration shown in fig.4 is a GoE state<sup>*</sup><sup>*</sup>* The configuration shown in fig.1 is also a GoE state! because it could occur at time $`t`$ only if the two vehicles had occupied the same cell simultaneously at time $`t-1`$. The naive mean-field theory mentioned above does not exclude the GoE states. The exact expression, given below, for the flux in the steady-state of the NaSch model with $`V_{max}=1`$ can be derived by merely excluding these states from consideration in the naive mean-field theory, thereby indicating that the only source of correlation in this case is the parallel updating. But, for $`V_{max}>1`$, there are other sources of correlation because of which exclusion of the GoE states merely improves the naive mean-field estimate of the flux but does not yield exact results. A systematic improvement of the naive mean-field theory of the NaSch model has been achieved by incorporating short-ranged correlations through cluster approximations. We define a $`n`$-cluster to be a collection of $`n`$ successive sites. In the general $`n`$-cluster approximation, one divides the lattice into “clusters” of length $`n`$ such that two neighbouring clusters have $`n-1`$ sites in common (see fig.5). If $`n=1`$, then the $`1`$-cluster approximation can be regarded as the naive mean-field approximation. You can easily verify, for example, in the special case of $`V_{max}=1`$, that the state of the 2-cluster at time $`t+1`$ depends on the state of the 4-cluster at time $`t`$, which, in turn, depends on the state of a larger cluster at time $`t-1`$, and so on. Therefore, one needs to make an approximation to truncate this hierarchy in a sensible manner. For example, in the 2-cluster approximation for the NaSch model with $`V_{max}=1`$, the 4-cluster probabilities are approximated in terms of an appropriate product of 2-cluster probabilities. Thus, in the $`n`$-cluster approximation a cluster of $`n`$ neighbouring cells is treated exactly and the cluster is coupled to the rest of the system in a self-consistent way. Carrying out the 2-cluster calculation for $`V_{max}=1`$ one not only finds an effective particle-hole attraction (particle-particle repulsion), but also obtains the exact result $$J(c,p)=\frac{1}{2}[1-\sqrt{1-4qc(1-c)}]$$ (3) for the corresponding flux. But one gets only approximate results from the 2-cluster calculations for all $`V_{max}>1`$ (see the literature for higher-order cluster calculations for $`V_{max}=2`$ and comparison with computer simulation data). Let us explain the physical origin of the generic shape of the fundamental diagrams shown in fig.6.
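With the simulation sketch above, Eq. (3) can be verified directly ($`p`$ and the densities below are arbitrary choices); the comparison reuses the `flux` routine defined earlier:

```python
import numpy as np

# Exact 2-cluster result, Eq. (3), versus simulation of the V_max = 1 model.
def J_exact(c, p):
    q = 1.0 - p
    return 0.5 * (1.0 - np.sqrt(1.0 - 4.0*q*c*(1.0 - c)))

p = 0.3
for c in (0.1, 0.3, 0.5, 0.7, 0.9):
    print(f"c = {c:.1f}:  J_sim = {flux(c, vmax=1, p=p):.4f}"
          f"   J_exact = {J_exact(c, p):.4f}")
# The columns agree within sampling error and are symmetric about c = 1/2.
```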
At sufficiently low density of vehicles, practically ”free flow” takes place whereas at higher densities traffic becomes ”congested” and traffic jams occur. So long as $`c`$ is sufficiently small, the average speed $`V`$ is practically independent of $`c`$ as the vehicles are too far apart to interact mutually. However, a faster monotonic decrease of $`V`$ with increasing $`c`$ takes place when the forward movement of the vehicles is strongly hindred by others because of the reduction in the average separation between them. Because of this trend of variation of $`V`$ with $`c`$, the flux $`J=cV`$ exhibits a maximum at $`c_m`$; for $`c<c_m`$, increasing $`c`$ leads to increasing $`J`$ whereas for $`c>c_m`$ sharp decrease of $`V`$ with increase of $`c`$ leads to the overall decrease of $`J`$. An interesting feature of the expression (3) is that the flux is invariant under charge conjugation, i.e., under the operation $`c(1c)`$ which interchanges particles and holes. Therefore, the fundamental diagram is symmetric about $`c=1/2`$ when $`V_{max}=1`$ (see fig.6a). Although this symmetry breaks down for all $`V_{max}>1`$ (see fig.6b), the corresponding fundamental diagrams appear more realistic. Moreover, for given $`p`$, the magnitude of $`c_m`$ decreases with increasing $`V_{max}`$ as the higher is the $`V_{max}`$ the longer is the effective range of interaction of the vehicles (see fig.6b). Furthermore, for $`V_{max}=1`$, flux merely decreases with increasing $`p`$ (see eqn(3)), but remains symmetric about $`c=1/2=c_m`$. On the other hand, for all $`V_{max}>1`$, increasing $`p`$ not only leads to smaller flux but also lowers $`c_m`$. ### E Spatio-temporal organization of vehicles The distance from a selected point on the lead vehicle to the same point on the following vehicle is defined as the distance-headway (DH) . In order to get information on the spatial organization of the vehicles, one can calculate the DH distribution $`𝒫_{dh}(\mathrm{\Delta }X)`$ by following either a site-oriented approach or a car-oriented approach if $`\mathrm{\Delta }X_j=X_jX_{j1}`$, i.e., if the number of empty lattice sites in front of the $`j`$-th vehicle is identified as the corresponding DH. At moderately high densities, $`𝒫_{dh}(\mathrm{\Delta }X)`$ exhibits two peaks; the peak at $`\mathrm{\Delta }X=1`$ is caused by the jammed vehicles while that at a larger $`\mathrm{\Delta }X`$ corresponds to the most probable DH in the free-flowing regions. The time-headway is defined as the time interval between the departures (or arrivals) of two successive vehicles recorded by a detector placed at a fixed position on the highway . The time-headway distribution contains information on the temporal organization. Suppose, $`𝒫_m(t_1)`$ is the probability that the following vehicle takes time $`t_1`$ to reach the detector, moving from its initial position where it was located when the leading vehicle just left the detector site. Suppose, after reaching the detector site, the following vehicle waits there for $`\tau t_1`$ time steps, either because of the presence of another vehicle in front of it or because of its own random braking; the probability for this event is denoted by $`Q(\tau t_1|t_1)`$. The distribution $`𝒫_{th}(\tau )`$, of the time-headway $`\tau `$, can be obtained from $`𝒫_{th}(\tau )=_{t_1=1}^{\tau 1}𝒫_m(t_1)Q(\tau t_1|t_1)`$. 
The most-probable time-headway, when plotted against the density, exhibits a minimum ; this is consistent with the well known exact relation $`J=1/T_{av}`$ between flux and the average time-headway, $`T_{av}`$. Is there a phase transition from ”free-flowing” to ”congested” dynamical phase of the NaSch model? No satisfacory order parameter has been found so far, except in the deterministic limit. The possibility of the existence of any critical density in the NaSch model is ruled out by the observations that, for all non-zero $`p`$, (a) the equal-time correlation function decays exponentially with separation, and (b) the relaxation time and lifetimes of the jams remain finite. This minimal model of highway traffic also does not exhibit any first order phase transition and two-phase co-existence. ### F Extensions of the NaSch model and practical applications In recent years some other minimal models of traffic on highways have been developed by modifying the updating rules of the NaSch model . In the cruise control limit of the NaSch model the randomization step is applied only to vehicles which have a velocity $`V<V_{max}`$ after step 2 of the update rule. Vehicles moving with their desired velocity $`V_{max}`$ are not subject to fluctuations. This is exactly the effect of a cruise-control which automatically keeps the velocity constant at a desired value. Interestingly, the cruise-control limit of the NaSch model exhibits self-organized criticality. Besides, a continuum limit of the NaSch model has also been considered. The vehicles which come to a stop because of hindrance from the leading vehicle may not be able to start as soon as the leading vehicle moves out of its way; it may start with a probability $`q_s<1`$. When such possibilities are incorporated in the NaSch model, the ”slow-to-start” rules can give rise to metastable states of very high flux and hysteresis effects as well as phase separation of the traffic into a ”free-flowing” phase and a ”mega-jam”. The bottleneck created by quenched disorder of the highway usually slows down traffic and can give rise to jams and phase segregation. However, a different type of quenched disorder, introduced by assigning randomly different braking probabilities $`p`$ to different drivers in the NaSch model, can have more dramatic effects which are reminiscent of ”Bose-Einstein-like condensation” in the fully ASEP where particle-hopping rates are quenched random variables. In such ”Bose-Einstein-like condensed” state finite fraction of the empty sites are ”condensed” in front of the slowest vehicle (i.e., the driver with highest $`p`$). Several attempts have been made to generalize the NaSch model to describe traffic on multi-lane highways and to simulate traffic on real networks in and around several cities. For planning and design of the transportation network , for example, in a metropolitan area , one needs much more than just micro-simulation of how vehicles move on a linear or square lattice under a specified set of vehicle-vehicle and road-vehicle interactions. For such a simulation, to begin with, one needs to specify the roads (including the number of lanes, ramps, bottlenecks, etc.) and their intersections. Then, times and places of the activities, e.g., working, shopping, etc., of individual drivers are planned. 
Micro-simulations are carried out for all possible different routes to execute these plans; the results give informations on the efficiency of the different routes and these informations are utilized in the designing of the transportation network . Some socio-economic questions as well as questions on the environmental impacts of the planned transportation infrastructure also need to be addressed during such planning and design. ## IV Cellular-automata Models of City-traffic ### A The Biham-Middleton-Levin model of city traffic and its generalizations In the BML model, each of the sites of a square lattice represent the crossing of a east-west street and a north-south street. All the streets parallel to the $`\widehat{X}`$-direction of a Cartesian coordinate system are assumed to allow only single-lane east-bound traffic while all those parallel to the $`\widehat{Y}`$-direction allow only single-lane north-bound traffic. In the initial state of the system, vehicles are randomly distributed among the streets. The states of east-bound vehicles are updated in parallel at every odd discrete time step whereas those of the north-bound vehicles are updated in parallel at every even discrete time step following a rule which is a simple extension of the fully ASEP: a vehicle moves forward by one lattice spacing if and only if the site in front is empty, otherwise the vehicle does not move at that time step. Computer simulations demonstrate that a first order phase transition takes place in the BML model at a finite non-vanishing density $`c_{}`$, where the average velocity of the vehicles vanishes discontinuously signalling complete jamming; this jamming arises from the mutual blocking of the flows of east-bound and north-bound traffic at various different crossings. Note that the dynamics of the BML model is fully deterministic and the randomness arises only from the random initial conditions . As usual, in the naive mean-field approximation one neglects the correlations between the occupations of different sites. However, if you are not interested in detailed information on the ”structure” of the dynamical phases, you can get a mean-field estimate of $`c_{}`$ by carrying out a back-of-the-envelope calculation. In the symmetric case $`c_x=c_y`$, for which $`v_x=v_y=v`$, $`c=c_{}0.343`$. The BML model has been extended to take into account the effects of (i) asymmetric distribution of the vehicles, i.e., $`c_xc_y`$, (ii) overpasses or two-level crossings that are represented by specifically identified sites each of which can accomodate upto a maximum of two vehicles simultaneously, (iii) faulty traffic lights (iv) static hindrances or road blocks or vehicles crashed in traffic accident, i.e., stagnant points, (v) stagnant street where the local density $`c_s`$ of the vehicles is initially higher than that in the other streets (vi) jam-avoiding drive of vehicles to a neighbouring street, parallel to the original direction, to avoid getting blocked by other vehicles in front, (vii) turning of the vehicles from east-bound (north-bound) to north-bound (east-bound) streets. (viii) a single north-bound street cutting across east-bound streets (ix) more realistic description of junctions of perpendicular streets , (x) green-waves . ### B Marriage of NaSch and BML models At first sight the BML model may appear very unrealistic because the vehicles seem to hop from one crossing to the next. 
However, it may not appear so unrealistic if each unit of discrete time interval in the BML model is interpreted as the time for which the traffic lights remain green (or red) before switching red (or green) simultaneously in a synchronized manner, and over that time scale each vehicle, which faces a green signal, gets an opportunity to move from $`j`$-th crossing to the $`j+1`$-th (or, more generally, to the $`j+r`$-th where $`r>1`$). However, if one wants to develope a more detailed ”fine-grained” description then one must first decorate each bond with $`D1`$ ($`D>1`$) sites to represent $`D1`$ cells in between each pair of successive crossings thereby modelling each segment of the streets in between successive crossings in the same manner in which the entire highway is modelled in the NaSch model. Then, one can follow the prescriptions of the NaSch model for describing the positions, speeds and accelerations of the vehicles as well as for taking into account the interactions among the vehicles moving along the same street. Moreover, one should flip the color of the signal periodically at regular interval of $`T`$ ($`T>>1`$) time steps where, during each unit of the discrete time interval every vehicle facing green signal should get an opportunity to move forward from one cell to the next. Such a CA model of traffic in cities has, indeed, been proposed very recently where the rules of updating have been formulated in such a way that, (a) a vehicle approaching a crossing can keep moving, even when the signal is red, until it reaches a site immediately in front of which there is either a halting vehicle or a crossing; and (b) no grid-locking would occur in the absence of random braking. A phase transition from the ”free-flowing” dynamical phase to the completely ”jammed” phase has been observed in this model at a vehicle density which depends on $`D`$ and $`T`$. The intrinsic stochasticity of the dynamics, which triggers the onset of jamming, is similar to that in the NaSch model, while the phenomenon of complete jamming through self-organization as well as the final jammed configurations (fig.7) are similar to those in the BML model. This model also provides a reasonable time-dependence of the average speeds of the vehicles in the ”free-flowing” phase. ## V Relation with other systems and phenomena You must have noticed in the earlier sections that some of the models of traffic are non-trivial generalizations or extensions of the ASEP, the simplest of the driven-dissipative systems which are of current interest in non-equilibrium statistical mechanics. Some similarities between these systems and a dynamical model of protein synthesis has been pointed out. Another driven-dissipative system, which is also receiving wide attention of physicists in recent years, is the granular material flowing through a pipe. There are some superficial similarities between the clustering of vehicles on a highway and particle-particle (and particle-cluster) aggregation process. The NaSch model with $`V_{max}=1`$ can be mapped onto stochastic growth models of one-dimensional surfaces in a two-dimensional medium. Particle (hole) movement to the right (left) correspond to local forward growth of the surface via particle deposition. In this scenario a particle evaporation would correspond to a particle (hole) movement to the left (right) which is not allowed in the NaSch model. 
It is worth pointing out that any quenched disorder in the rate of hopping between two adjacent sites would correspond to columnar quenched disorder in the growth rate for the surface. Inspired by the recent success in theoretical studies of traffic, some studies of information traffic on the computer network (internet) have also been carried out. ## VI Summary and conclusion: Nowadays the tools of statistical mechanics are increasingly being used to study self-organization and emergent collective behaviour of complex systems many of which, including vehicular traffic, fall outside the traditional domain of physical systems. However, as we have shown in this article, a strong theoretical foundation of traffic science can be built on the basic principles of statistical mechanics. In this brief review we have focussed attention mainly on the progress made in the recent years using ”particle-hopping” models, formulated in terms of cellular automata, and compared these with several other similar systems. Acknowledgements: It is our pleasure to thank R. Barlovic, J.G. Brankov, B. Eisenblätter, K. Ghosh, N. Ito, K. Klauck, W. Knospe, D. Ktitarev, A. Majumdar, K. Nagel, V.B. Priezzhev, M. Schreckenberg, A. Pasupathy, S. Sinha, R.B. Stinchcombe and D.E. Wolf for enjoyable collaborations the results of some of which have been reviewed here. We also thank M. Barma, J. Kertesz, J. Krug, G. Schütz, D. Stauffer and J. Zittartz for useful discussions and encouragements. This work is supported by SFB341 Köln-Aachen-Jülich.
no-problem/9910/cond-mat9910084.html
ar5iv
text
# 1 Introduction ## 1 Introduction Segregation and mixing of granular material is of eminent importance for industrial operations and it has been subject to research since decades. However, both effects are not yet completely understood and thus cannot be controlled under all circumstances. Traditional experimental methods and theoretical approaches are nicely complemented by numerical simulations which in the last few years have developed tremendously . For a review which covers a broad practical experience of segregation see Ref. and references therein. Segregation can be driven by geometric effects, shear, percolation and also by a convective motion of the small particles in the system . In vibrated systems, the segregation due to convection appears to be orders of magnitude faster than segregation due to purely geometrical effects . In rotating drums, another archetype of many industrial devices, several segregation processes acting in parallel are reported ; in three-dimensional devices, axial and longitudinal segregation are observed simultaneously. For axial segregation, particle percolation is reported to be responsible , while longitudinal segregation is related to different surface flow properties in the cylinder . In this paper, we investigate a model segregation problem which suggests a simple way to obtain uniform mixtures of two species. We show a sketch of the system in Fig. 1. $`N`$ particles are placed in a container of height $`H`$ in the presence of gravity. The side walls have been replaced with periodic boundary conditions. Energy is supplied to the container by vibrating the bottom using a symmetric sawtooth wave with velocity $`V`$. The top wall is stationary. $`N_A`$ of the particles have mass $`m_A`$, and the rest have mass $`m_B`$. We will take $`m_A>m_B`$. Though the particles have different mass, they all have the same radius $`a`$. We model the loss of energy during collisions with a restitution coefficient $`r<1`$. Our detailed study of a similar system with identical particles is Ref. . ## 2 The Mixing Mechanism We find that it is possible to obtain uniform mixtures of the two species by pitting two segregation mechanisms against each other. When the particles rarely touch the top of the container, all the dense particles are found near the bottom of the plate (see Fig. 2a). A similar effect occurs in the upper atmosphere, where different molecular species are sorted by weight . On the other hand, when gravity is turned off, the particles are pushed against the top plate and the dense particles are found close to the upper plate (Fig. 2b). By smoothly varying between these two situations, it is possible to obtain a situation where the two species are uniformly mixed. In Fig. 3, we plot the difference between $`y_A`$, the center of mass of the heavy particles, and $`y_B`$, the center of mass of the light particles, normalized by the height of the container. Mixing is optimal when the difference between the species’ centers of mass vanishes. We see that the state of maximum mixing is obtained near $`m_AgH/T2`$ for all values of the parameters, except for almost elastic particles ($`r=0.99`$). Here, $`T`$ is the granular temperature, defined as the average kinetic energy per particle: $`T(1/2N)m_iv_i^2`$. When the ratio of both energies is near unity, it means that the kinetic and potential energies of the particles are comparable. The tendency of $`(y_Ay_B)/H`$ to approach $`0`$ for large gravities $`m_AgH/T>10`$ is due to the initial conditions. 
Initially, all particles are arranged in a lattice just above the vibrating floor. Due to the large gravity, it is very difficult for particles to change places, and the mixture keeps its original configuration for a very long time. For $`m_AgH/T<10`$, the particles change places often, and $`(y_Ay_B)/H`$ is independent of initial conditions. To show more closely what happens with the densities of the different species, we show in Fig. 4 the concentrations of each as a function of height for three different simulations; one at small $`g`$, one at large $`g`$, and one where the particles are nicely mixed. In the situations with extremal $`g`$ values, we obtain rather strong density gradients, while in the case of optimal mixing the density gradients are small, i.e. the density is almost constant throughout the system. ## 3 Discussion and Conclusion Each of the two segregation mechanisms can be observed also with perfectly elastic (dissipationless) particles. In Fig. 5(a), we show a binary gas of elastic particles under gravity in the absence of forcing. The heavy particles accumulate at the bottom. In Fig. 5(b), we show a binary gas in the absence of gravity, subjected to a thermal gradient. Now the particles particles accumulate against the upper, cold wall. Therefore, neither segregation mechanism relies on the dissipation of energy during collisions. This dissipation serves only to set up the necessary gradients which drive the segregation of the particles (see also the paper by Luding, Strauß, and McNamara in this proceedings). To use this method to mix granular materials, the particles could be put into a chamber like the one shown in Fig. 1. To obtain the proper value of $`m_AgH/T`$, it is perhaps most convenient to adjust the height of the container $`H`$. It is also possible to control $`T`$ by changing the vibration velocity $`V`$ . One possible disadvantage is that only a small amount of material can be mixed at one time. It also may be difficult in practice to adjust $`H`$ or $`T`$ correctly. Replacing the periodic boundaries with side walls may also introduce new effects. ## Acknowledgements Inspiring discussions with H. J. Herrmann are appreciated, and we gratefully acknowledge the support of IUTAM, the National Science Foundation and the Department of Energy. S.L. also thanks the Deutsche Forschungsgemeinschaft, and S.M. the Alexander-von-Humboldt foundation, and the Geosciences Research Program, Office of Basic Energy Sciences, US Department of Energy.
no-problem/9910/hep-ph9910426.html
ar5iv
text
# Deeply virtual electroproduction of photons and mesons on the nucleon ## In recent years, a unified theoretical description of a wide variety of exclusive processes in the Bjorken regime has emerged through the formalism introducing new generalized parton distributions, the so-called skewed parton distributions (SPD’s). It has been shown that these distributions, which parametrize the structure of the nucleon, allow to describe in leading order perturbative QCD (PQCD), various exclusive processes such as, in particular, deeply virtual Compton scattering (DVCS) and longitudinal electroproduction of vector and pseudoscalar mesons (see e.g. Refs.- and references therein). The leading order PQCD diagrams for DVCS and hard meson electroproduction are of the type as shown in Fig. 1. It has been proven that the leading order DVCS amplitude in the forward direction can be factorized in a hard scattering part (which is exactly calculable in PQCD) and a soft, nonperturbative nucleon structure part as is illustrated on the left panel of Fig. 1. The nucleon structure information can be parametrized, at leading order, in terms of 4 generalized structure functions. In the notation of Ji, these functions are the off-forward parton distributions (OFPD’s) denoted as $`H,\stackrel{~}{H},E,\stackrel{~}{E}`$ which depend upon three variables : $`x`$, $`\xi `$ and $`t`$. The light-cone momentum fraction $`x`$ is defined by $`k^+=xP^+`$, where $`k`$ is the quark loop momentum and $`P`$ is the average nucleon momentum (using the definition $`a^\pm 1/\sqrt{2}(a^0\pm a^3)`$). The skewedness variable $`\xi `$ is defined by $`\mathrm{\Delta }^+=2\xi P^+`$, where $`\mathrm{\Delta }`$ is the overall momentum transfer in the process and where $`2\xi x_B/(1x_B/2)`$ in the Bjorken limit. Furthermore, the third variable entering the OFPD’s is given by the Mandelstam invariant $`t=\mathrm{\Delta }^2`$. In Fig. 1, the variable $`x`$ runs from -1 to 1. As noted by Radyushkin , one can identify two regions for the SPD’s. In the regions where $`x>\xi `$ or $`x<\xi `$, the SPD’s are the generalizations of the usual parton distributions from DIS. Actually, in the forward direction, the OFPD’s $`H`$ and $`\stackrel{~}{H}`$ respectively reduce to the quark density distribution $`q(x)`$ and quark helicity distribution $`\mathrm{\Delta }q(x)`$, obtained from DIS. In the region $`\xi <x<\xi `$, the SPD’s behave as a “meson-like” distribution amplitude and contain new information about nucleon structure . To provide estimates for electroproduction observables, we need a model for the SPD’s. The following calculations were performed using $`\xi `$-dependent SPD’s based on a product ansatz (for the double distributions ) of a quark distribution (we use the MRST98 quark distributions as input) and an asymptotic “meson-like” distribution amplitude (see Ref. for more details). The $`t`$-dependence is given by the corresponding form factors (Dirac form factor for $`H`$, axial form factor for $`\stackrel{~}{H}`$). Besides the DVCS process, a factorization proof was also given for the leading order meson electroproduction amplitude in the valence region at large $`Q^2`$ , which is shown on the right panel of Fig. 1. This factorization theorem only applies when the virtual photon is longitudinally polarized. The leading order longitudinal amplitude for meson electroproduction behaves as $`1/Q`$. 
This leads to a $`1/Q^6`$ scaling behavior for the longitudinal cross section $`d\sigma _L/dt`$ at large $`Q^2`$, which provides an experimental signature of the leading order mechanism. Besides the dependence on the SPD’s, the meson electroproduction amplitudes require the additional non-perturbative input from the meson distribution amplitudes, for which we take asymptotic forms in the calculations. Furthermore, it was shown in Ref. that electroproduction of vector (pseudoscalar) mesons accesses the unpolarized (polarized) OFPD’s $`H`$ and $`E`$ ($`\stackrel{~}{H}`$ and $`\stackrel{~}{E}`$) respectively. According to the produced meson ($`\rho ^0`$, $`\rho ^\pm `$, $`\omega `$, $`\varphi `$, $`\pi ^0`$, $`\pi ^\pm `$, $`\eta `$,…), the SPD’s for the different quark flavors enter in different combinations due to the different quark charges and isospin factors. We show here some results for DVCS and meson electroproduction observables using the $`\xi `$-dependent ansatz for the OFPD’s described previously. For more results, we refer to our works in Refs.. In Fig. 2, the $`\rho _L^0`$, $`\pi ^+`$ and $`\gamma `$ cross sections are compared as function of the beam energy at fixed $`Q^2`$ in the valence region. Going up in energy, the increasing virtual photon flux factor boosts the meson leptoproduction cross sections and the DVCS part of the $`\gamma `$ leptoproduction cross section. Comparing the different channels, it is clear on this picture that the $`\rho _L^0`$ channel is very favorable as it depends on the unpolarized SPD’s. For the $`\gamma `$ channel, the contaminating Bethe-Heitler (BH) process is hardly influenced by the beam energy and therefore overwhelms the DVCS cross section at low beam energies. Although Fig. 2 shows that a high energy such as planned at COMPASS is preferable, one can try to undertake a preliminary study of the hard electroproduction reactions using the existing facilities such as HERMES or JLAB, despite their low energy. Recently, an experiment has been approved at JLAB to investigate (the onset of) the scaling behavior for $`\rho _L^0`$ electroproduction in the valence region ($`Q^23.5`$ GeV<sup>2</sup>, $`x_B0.3`$). Although at present, no experimental data for the $`\rho _L^0`$ electroproduction at larger $`Q^2`$ exist in the valence region ($`x_B`$ 0.3), the reaction $`\gamma ^{}p\rho _L^0p`$ has been measured at smaller values of $`x_B`$. We therefore compare our results in Fig. 3 to see how these data approach the valence region, where one is sensitive to the quark SPD’s. For the purpose of this discussion, we call the mechanism which proceeds through the quark SPD’s, the Quark Exchange Mechanism (QEM). Besides the QEM, $`\rho ^0`$ electroproduction at large $`Q^2`$ and small $`x_B`$ proceeds predominantly through a perturbative two-gluon exchange mechanism (PTGEM) as studied in Ref. . To compare to the data at intermediate $`Q^2`$, we implemented in both mechanisms the power corrections due to the parton’s intrinsic transverse momentum dependence (see Ref. for details), which give a significant reduction at the lower $`Q^2`$ values. Comparing our results with the data in Fig. 3, one sees that the PTGEM explains well the fast increase at high c.m. energy ($`W`$) of the cross section but substantially underestimates the data at lower energies. This is where the QEM is expected to contribute since $`x_B`$ is then in the valence region. The QEM describes well the change of behavior of the data at lower $`W`$. 
This has also been confirmed by recent HERMES data (around $`W`$ 5 GeV) to which we compared our calculations. It is clear that at present new and accurate data for these exclusive channels are needed. The fundamental interest of the SPD’s justifies an effort towards their experimental determination in order to open up a new domain in the study of the nucleon structure.
no-problem/9910/math-ph9910006.html
ar5iv
text
# 1 Introduction ## 1 Introduction Existing mathematical models of quasiperiodic tilings of the plane and the 3dimensional space admit an important operation called inflation. Given a tiling of the plane or the space by prototiles $`\{X^i\}`$ from a local isomorphism class of tilings or specie , the inflation produces another tiling of the class out of the first one, by blowing up the tiles with a factor $`\lambda `$ ($`\lambda `$ is bigger then 1 and called the inflation factor) and substituting the $`\lambda `$–scaled tiles $`X_{(\lambda )}^i`$ in a particular way by the tiles $`\{X^i\}`$ of the original size. Generally, in the process of inflation, the tiles $`X^i`$ are cut into pieces (by plane cuts) and these smaller pieces can then be recombined together into the tiles $`X_{(\lambda )}^i`$. The tile $`X_{(\lambda )}^i`$ is made out of pieces of tiles $`X^j`$ for all $`j`$. Let $`M_j^i`$ be the sum of volumes of the pieces of the tiles of the type $`X^j`$. The matrix $`M=M_j^i`$ is called the volume inflation matrix. By its definition, the matrix $`M`$ has an eigenvector $`\stackrel{}{v}`$ with components $`v^i=\mathrm{Vol}(X^i)`$, the volumes of the tiles. The corresponding eigenvalue is $`\lambda ^3`$. In some cases, the matrix $`M`$ has rational entries. An example of an exception is the volume inflation matrix of the class of the tilings $`𝒯^{(2F)}`$ icosahedrally projected from the lattice $`D_6`$, to be discussed later in this paper. (Note: Under the “icosahedral projection” we mean the icosahedrally invariant projection.) Let $`lQ[\lambda ]`$ be the extension of $`lQ`$ by $`\lambda `$ and $`G=\mathrm{Gal}(lQ[\lambda ]/lQ)`$ its Galois group. Let $`G\lambda =\{\lambda _1=\lambda ,\lambda _2,\mathrm{},\lambda _k\}`$ be the orbit of $`\lambda `$. Then all the $`\lambda _i`$ are eigenvalues of the matrix $`M`$. In many physically interesting cases, $`\lambda `$ is a power of the golden mean $`\tau =\frac{1+\sqrt{5}}{2}`$; the field $`lQ[\lambda ]`$ is quadratic and therefore volumes can be used to build two eigenvectors (and eigenvalues) of $`M`$. In this article we address a question of a geometrical meaning of other eigenvectors of $`M`$. We need several standard definitions. One says that two polyhedra, $`P_1`$ and $`P_2`$, are scissor–equivalent (notation: $`P_1P_2`$) if $`P_1`$ can be cut (by plane cuts) and rebuilt into $`P_2`$. Assume that there is a function $``$ which associates an element of a ring $`𝒦`$ to any polyhedron. The function $``$ is called scissor–invariant if $``$ enjoys the property: $`P_1P_2`$ $``$ $`(P_1)=(P_2)`$. Any scissor–invariant function $``$ allows to construct an eigenvector $`\stackrel{}{f}`$ of the matrix $`M`$, $`f^i=(X^i)`$. The comment about the Galois group holds for the vector $`\stackrel{}{f}`$ as well. It is well known that starting from the dimension 3, the space of scissor–invariant functions is nontrivial: in addition to the volume, there are also Dehn invariants. In Section 2 of the present article we remind some basic facts about the Dehn invariants. In Section 3 we consider the Dehn invariants of golden tetrahedra. We use the Dehn invariants as a test of an existence of a stone inflation for the golden tetrahedra (Subsection 3.2). We show that if a rational inflation (that is, an inflation whose inflation matrix has rational entries) for the golden tetrahedra with the inflation factor $`\tau `$ exists then the inflation matrix can be uniquely reconstructed with the help of the volumes and the Dehn invariants. 
This unique inflation matrix $`M_{gt}`$ turns out to have non-integer entries which shows that a stone inflation of the golden tetrahedra with the inflation factor $`\tau `$ cannot exist. However, $`M_{gt}^3`$, the cube of the matrix $`M_{gt}`$, is integer-valued, so we cannot exclude a possibility of a stone inflation for the golden tetrahedra with the inflation factor $`\tau ^3`$. An alternative proof of the nonexistence of a stone inflation for the golden tetrahedra is given in Subsection 3.3. It is based on the analysis of irrationalities of areas of faces of the golden tetrahedra. The analysis in Subsection 3.3 allows to show that a stone inflation for the golden tetrahedra with the inflation factor $`\tau ^k`$, $`k=1,2,3,\mathrm{}`$ cannot exist for any $`k`$. In Subsection 4.1 we present the inflation rules for the decorated Mosseri–Sadoc tiles (they are unions of the golden tetrahedra). These rules we obtain by a local derivation from the inflation rules for the decorated golden tetrahedra (decoration increases the number of tiles: there are eight decorated golden tetrahedra) as the tiles of the projection class $`𝒯^{(2F)}`$. In Subsection 4.2 we show that the inflation matrix in the case of the Mosseri–Sadoc tilings is uniquely reconstructed from the volumes of the prototiles and their Dehn invariants. Also, we explain in Subsection 4.2 that the inflation matrix for the Mosseri–Sadoc tiles is induced by the inflation matrix for the golden tetrahedra. For the calculation of the Dehn invariants of the golden tetrahedra we use a Conway–Radin–Sadun theorem (Appendix). ## 2 Dehn invariants The Dehn invariant of a polyhedron $`P`$ takes values in a ring $`𝐑𝐑_\pi `$ where $`𝐑_\pi `$ is the additive group of residues of real numbers modulo $`\pi `$; the tensor product is over $`𝐙`$, the ring of rational integers. Denote by $`l_i`$ the lengths of edges of $`P`$. Denote by $`\alpha _i`$ the corresponding lateral angles and by $`\overline{\alpha _i}`$ – the residue classes of $`\alpha _i`$ modulo $`\pi `$. The Dehn invariant, $`𝒟(P)`$, of the polyhedron $`P`$ is equal to $$𝒟(P)=l_i\overline{\alpha _i},$$ (1) with the sum over all edges of $`P`$. Historically, Dehn invariants appeared in solving the Hilbert’s third problem which asks whether one can calculate the volume of a polyhedron without a limiting procedure. More precisely, given two polyhedra of the same volume, can one cut one and paste the pieces to build another one? Or, is equality of volumes of two polyhedra sufficient for their scissor equivalence? Dehn has shown that the quantity (1) is scissor–invariant and gave an example of two polyhedra of the same volume but having different Dehn invariants. Thus, equality of Dehn invariants is a necessary condition for the scissor equivalence. Later, Sydler has shown that in dimension 3 the equality of volumes and Dehn invariants is also a sufficient condition for the scissor equivalence. See for more information on the Dehn invariants. ## 3 Inflation of golden tetrahedra In this Section we discuss several aspects of the inflation of the golden tetrahedra, not only the inflation of these tiles as the prototiles in the projection class of the tilings $`𝒯^{(2F)}`$. The projection class of the locally isomorphic tilings $`𝒯^{(2F)}`$ and the inflation rules for the tiles in this class have been considered in Refs. . 
### 3.1 Golden tetrahedra and their Dehn invariants Golden triangles are triangles with edge lengths $`1`$ and $`\tau `$ (in some scale) satisfying the condition: not all edges of a triangle are congruent. There are two golden triangles: with edge lengths $`(1,1,\tau )`$ and with edge lengths $`(1,\tau ,\tau )`$. A property of the golden triangles: edges of each of them can be aligned in the plane parallelly to the symmetry axes of a given pentagon. Golden tetrahedra are tetrahedra with edge lengths 1 and $`\tau `$ (therefore the faces of the golden tetrahedra can be either golden or regular triangles) satisfying the condition: not all faces of a tetrahedron are congruent. A property: golden tetrahedra are tetrahedra the edges of which can be aligned in the space parallelly to the 2fold symmetry axes of a given icosahedron. There could be seven golden tetrahedra but it turns out that one of them is flat. The six non–flat golden tetrahedra, $`G^{}`$, $`F^{}`$, $`A^{}`$, $`B^{}`$, $`C^{}`$ and $`D^{}`$ are shown in Fig. 1. All the lateral angles of the golden tetrahedra are expressed in terms of four acute ($`<\pi /2`$) angles $`\alpha `$, $`\beta `$, $`\gamma `$ and $`\delta `$, $$\begin{array}{ccc}\mathrm{cos}\alpha & =& \frac{\tau }{\tau +2}=\frac{1}{\sqrt{5}},\hfill \\ \mathrm{cos}\beta & =& \frac{\tau +1}{\sqrt{3}\sqrt{\tau +2}},\hfill \\ \mathrm{cos}\gamma & =& \frac{\tau +2}{3\tau }=\frac{\sqrt{5}}{3},\hfill \\ \mathrm{cos}\delta & =& \frac{\tau 1}{\sqrt{3}\sqrt{\tau +2}}.\hfill \end{array}$$ (2) In $`𝐑_\pi `$ there are linear dependences between lateral angles $`\alpha `$, $`\beta `$, $`\gamma `$ and $`\delta `$. Lemma 1. $`\alpha +\gamma +2\beta `$ $`=`$ $`\pi ,`$ (3) $`\alpha \gamma +2\delta `$ $`=`$ $`\pi .`$ (4) Proof. Straightforward. Therefore, in $`𝐑_\pi `$ we have relations $`\overline{\alpha }`$ $`=`$ $`\overline{\beta }\overline{\delta },`$ (5) $`\overline{\gamma }`$ $`=`$ $`\overline{\beta }+\overline{\delta }.`$ (6) Next step is to prove that there are no more relations: in other words, the images of angles $`\beta `$ and $`\delta `$ are independent in $`𝐑_\pi `$. Because of (5) and (6) it is sufficient to check the independence of $`\overline{\alpha }`$ and $`\overline{\gamma }`$. Lemma 2. The images of angles $`\alpha `$ and $`\gamma `$ in $`𝐑_\pi `$ are independent. Proof. For notation see Appendix. The angles $`\alpha `$ and $`\gamma `$ are pure geodetic. One can check that $$\alpha =5_1,\gamma =\frac{\pi }{2}23_5.$$ (7) The angles $`5_1`$ and $`3_5`$ are elements of the basis constructed by Conway–Radin–Sadun. Thus, by the Conway–Radin–Sadun theorem (Appendix), the angles $`\alpha `$ and $`\gamma `$ are independent. The calculation of Dehn invariants of the golden tetrahedra is now immediate. We shall use $`\beta `$ and $`\delta `$ as independent angles. We express the Dehn invariants of the golden tetrahedra by the vector $`\stackrel{}{d}_{gt}`$ $$\stackrel{}{d}_{gt}=𝒟\left(\begin{array}{c}A^{}\\ B^{}\\ C^{}\\ D^{}\\ F^{}\\ G^{}\end{array}\right)=\left(\begin{array}{c}\tau 1\\ \tau +5\\ 3\tau 2\\ 2\tau \\ 3\tau \\ 3\tau +3\end{array}\right)\overline{\beta }+\left(\begin{array}{c}5\tau 1\\ \tau 1\\ 2\\ 2\tau 3\\ 3\tau +3\\ 3\end{array}\right)\overline{\delta }.$$ (8) The subscript $`gt`$ stands for “golden tetrahedra”. 
The vector $`\stackrel{}{v}_{gt}`$ of volumes of the golden tetrahedra is $$\stackrel{}{v}_{gt}=\mathrm{Vol}\left(\begin{array}{c}A^{}\\ B^{}\\ C^{}\\ D^{}\\ F^{}\\ G^{}\end{array}\right)=\frac{1}{12}\left(\begin{array}{c}2\tau +1\\ 1\\ \tau +1\\ \tau \\ \tau +1\\ \tau \end{array}\right).$$ (9) ### 3.2 On inflation of golden tetrahedra First we show how to use the Dehn invariants as a necessary condition for the existence of the stone inflation. By definition, the inflation is “stone” if the inflated tiles are composed of the whole original tiles; in other words, one does not need to cut the original tiles into smaller pieces. In particular, it follows that the volume matrix of the stone inflation has integer entries. Lemma 1. The golden tetrahedra as prototiles of a space tiling do not admit a stone inflation with an inflation factor $`\tau `$. Proof. Assume that the stone inflation exists. Let $`M_{gt}`$ be its inflation matrix. Since the inflation is stone, the matrix elements of $`M_{gt}`$ are rational integers. In particular, $`M_{gt}`$ is stable under the action of the Galois group, $`\tau 1/\tau `$. The vector $`\stackrel{}{v}_{gt}`$ (the vector of volumes of the tiles, eqn. (9)) is an eigenvector of $`M_{gt}`$ with an eigenvalue $`\tau ^3`$. The additivity of Dehn invariants implies that the vector $`\stackrel{}{d}_{gt}`$ (the vector of Dehn invariants of the tiles, eqn. (8)) is an eigenvector of $`M_{gt}`$ with an eigenvalue $`\tau `$ (the eigenvalue is $`\tau `$ because Dehn invariants have dimension \[length\]<sup>1</sup>). Decomposing the vector of Dehn invariants in $`\overline{\beta }`$ and $`\overline{\delta }`$ we obtain two eigenvectors of $`M_{gt}`$ with the eigenvalue $`\tau `$. Explicitely, we have for the volume vector: $$M_{gt}\left(\begin{array}{c}2\tau +1\\ 1\\ \tau +1\\ \tau \\ \tau +1\\ \tau \end{array}\right)=\left(\begin{array}{c}8\tau +5\\ 2\tau +1\\ 5\tau +3\\ 3\tau +2\\ 5\tau +3\\ 3\tau +2\end{array}\right),$$ (10) for the $`\overline{\beta }`$–component of the Dehn vector: $$M_{gt}\left(\begin{array}{c}\tau 1\\ \tau +5\\ 3\tau 2\\ 2\tau \\ 3\tau \\ 3\tau +3\end{array}\right)=\left(\begin{array}{c}2\tau 1\\ 6\tau +1\\ \tau +3\\ 2\tau 2\\ 3\tau 3\\ 6\tau +3\end{array}\right),$$ (11) and for the $`\overline{\delta }`$–component of the Dehn vector: $$M_{gt}\left(\begin{array}{c}5\tau 1\\ \tau 1\\ 2\\ 2\tau 3\\ 3\tau +3\\ 3\end{array}\right)=\left(\begin{array}{c}4\tau +5\\ 1\\ 2\tau \\ 5\tau 2\\ 3\\ 3\tau \end{array}\right).$$ (12) The Galois automorphism $`\tau 1/\tau `$ produces three more eigenvectors of $`M_{gt}`$. Since the entries of $`M_{gt}`$ are integer, to use the Galois automorphism is the same as to decompose vector equalities (10), (11) and (12) in the powers of $`\tau `$ (i.e. consider $`\tau ^0`$– and $`\tau ^1`$–components of (10), (11) and (12)). Writing all the columns together we obtain a matrix equality, $$M_{gt}\left(\begin{array}{cccccc}2& 1& 1& 1& 5& 1\\ 0& 1& 1& 5& 1& 1\\ 1& 1& 3& 2& 0& 2\\ 1& 0& 2& 0& 2& 3\\ 1& 1& 3& 0& 3& 3\\ 1& 0& 3& 3& 0& 3\end{array}\right)=\left(\begin{array}{cccccc}8& 5& 2& 1& 4& 5\\ 2& 1& 6& 1& 0& 1\\ 5& 3& 1& 3& 2& 0\\ 3& 2& 2& 2& 5& 2\\ 5& 3& 3& 3& 0& 3\\ 3& 2& 6& 3& 3& 0\end{array}\right).$$ (13) The matrix $`M_{gt}`$ is acting on a $`6\times 6`$ matrix whose first column is $`\tau ^1`$–component of (10), the second column is $`\tau ^0`$–component of (10); the 3rd and 4th columns are $`\tau ^1`$– and $`\tau ^0`$–components of (11); the 5th and 6th columns are $`\tau ^1`$– and $`\tau ^0`$–components of (12). 
The eqn. (13) is the matrix equation for the matrix $`M_{gt}`$. We found the complete basis of eigenvectors, therefore the solution is unique and we find $$M_{gt}=\left(\begin{array}{cccccc}2& 0& 1& 0& 2& 1\\ 0& 0& 1& 0& 0& 1\\ 1/2& 1/2& 1& 1& 1& 1\\ 0& 0& 1& 1& 1& 0\\ 1& 0& 1& 1& 1& 0\\ 1/2& 1/2& 1& 0& 0& 1\end{array}\right).$$ (14) The matrix entries of $`M_{gt}`$ are not integers therefore a stone inflation with the inflation factor $`\tau `$ cannot exist. Q. E. D. We actually proved more: we proved that if an inflation with a rational inflation matrix existed then the inflation matrix would necessarily be equal to (14). In other words, having assumed that the inflation matrix is rational we could reconstruct it uniquely. This happened because of a coincidence: $`2\times 3=6`$. Here 2 is the order of the Galois group, 3 is the number of independent invariants (the volume and the two Dehn invariants) while 6 is the number of tiles. Due to this coincidence we obtained the matrix equation for $`M_{gt}`$ admitting a unique solution. We don’t have a good explanation for this coincidence. An inflation, with the inflation factor $`\tau `$ for the golden tetrahedra as the prototiles of the projection class of the tilings $`𝒯^{(2F)}`$ (obtained by the icosahedrally invariant projection from the $`D_6`$ lattice) has been found in Refs. . There, one has to divide the tiles $`C^{}`$ and $`G^{}`$, each into two subtypes: “blue” and “red”, and these subtypes inflate differently. Therefore, the number of tiles becomes 8. The volume inflation matrix $`M_{𝒯^{(2F)}}`$ is equal to $$\left(\begin{array}{cccccccc}11\tau 16& 0& 2\tau 2& 2\tau 3& 0& 9\tau 13& \tau 1& 3\tau 4\\ 0& 0& 0& 1& 0& 0& 0& 1\\ 2\tau +4& 1& 0& \tau +2& 1& \tau +3& 0& \tau +2\\ 9\tau +15& 0& 2\tau +4& \tau +2& 1& 8\tau +14& \tau +2& 2\tau +4\\ 0& 0& 0& 1& 1& 1& 0& 0\\ 1& 0& 1& 0& 1& 1& 0& 0\\ 2\tau +4& 1& 0& \tau +2& 0& \tau +2& 0& \tau +2\\ 9\tau +15& 0& 2\tau +4& \tau +2& 0& 8\tau +13& \tau +2& 2\tau +4\end{array}\right)$$ (15) in the following ordering of the tiles: $`A^{}`$, $`B^{}`$, $`C^b`$, $`C^r`$, $`D^{}`$, $`F^{}`$, $`G^b`$ and $`G^r`$. The upper indices “$`b`$” and “$`r`$” denote the “blue” and the “red” variants of tiles, respectively. It is interesting to note that: 1. for the tiles $`B^{}`$, $`D^{}`$ and $`F^{}`$ the inflation matrices $`M_{gt}`$ and $`M_{𝒯^{(2F)}}`$ give the same results (up to colors); 2. noninteger entries in (14) appear exactly in the columns corresponding to the tiles $`C^{}`$ and $`G^{}`$ – the tiles which are getting blue and red colors in the inflation with the matrix (15). Lemma 1 of this Subsection shows that a stone inflation with the inflation factor $`\tau `$ is impossible. We could however try to construct a hypothetic inflation matrix with an inflation factor $`\tau ^k`$ with integer positive $`k`$, $`k>1`$. As in the proof of the Lemma 1, the volume vectors and the vectors of Dehn invariants fix the inflation matrix uniquely: the only possible inflation matrix with the inflation factor $`\tau ^k`$ can be the matrix $`M_{gt}^k`$. It turns out that there are powers of the matrix $`M_{gt}`$ which are integer-valued. Lemma 2. The matrix $`M_{gt}^k`$ has integer entries if and only if $`k`$ is divisible by 3. Proof. 
A direct calculation gives $$M_{gt}^2=\left(\begin{array}{cccccc}7& 1& 6& 3& 7& 4\\ 1& 1& 2& 2& 1& 2\\ 3& 1& 5& 3& 4& 3\\ 3/2& 1/2& 3& 3& 3& 1\\ 7/2& 1/2& 4& 3& 5& 2\\ 2& 1& 3& 1& 2& 3\end{array}\right)$$ (16) and $$M_{gt}^3=\left(\begin{array}{cccccc}26& 5& 28& 16& 30& 18\\ 5& 2& 8& 4& 6& 6\\ 14& 4& 19& 12& 18& 12\\ 8& 2& 12& 9& 12& 6\\ 15& 3& 18& 12& 19& 10\\ 9& 3& 12& 6& 10& 9\end{array}\right).$$ (17) Thus, $`M_{gt}^3`$ is an integer-valued matrix and therefore matrices $`M_{gt}^{3k}`$ are integer-valued as well. It is left to prove that if $`n`$ is not a multiple of 3 then $`M_{gt}^n`$ is not integer-valued. By construction, the eigenvalues of $`M_{gt}`$ are $$\tau ^3,(\tau ^3),\tau \mathrm{and}(\tau ^1).$$ (18) Therefore, the minimal polynomial for $`M_{gt}`$ is $$\chi (x)=x^45x^3+2x^2+5x+1,$$ (19) $`\chi (M_{gt})=0`$. A straightforward check shows that if $$x^4=5x^32x^25x1$$ (20) then $$x^n=a_nx^3+b_nx^2+c_nx+d_n$$ (21) with $$a_n=\frac{1}{3}\left(\frac{f_{3(n1)}}{2}f_{n1}\right)$$ (22) and $$\begin{array}{ccc}b_n& =& a_{n+1}a_n,\hfill \\ c_n& =& a_{n+1}+3a_n+f_n,\hfill \\ d_n& =& a_{n+1}+4a_n+f_{n1}.\hfill \end{array}$$ (23) Here $`\{f_n\}`$ are Fibonacci numbers defined by: $`f_0=0`$, $`f_1=1`$ and $`f_{n+1}=f_n+f_{n1}`$. Therefore, $$M_{gt}^n=a_nM_{gt}^3+b_nM_{gt}^2+c_nM_{gt}+d_n\mathrm{𝐈𝐝},$$ (24) where $`\mathrm{𝐈𝐝}`$ is the unit matrix. The numbers $`a_n`$, $`b_n`$, $`c_n`$ and $`d_n`$ are integer. The matrices $`M_{gt}^3`$ and $`\mathrm{𝐈𝐝}`$ have integer entries. The matrices $`M_{gt}`$ and $`M_{gt}^2`$ have – at different places – rational entries with the denominator 2. Therefore, the matrix $`M_{gt}^n`$ has integer entries if and only if the integers $`b_n`$ and $`c_n`$ are even which means that $$a_{n+1}a_n(\mathrm{mod}2)$$ (25) and $$a_{n+1}+3a_n+f_n0(\mathrm{mod}2).$$ (26) Substitution of (25) into (26) gives $`f_n0(\mathrm{mod}2)`$. It is well known that $`f_n`$ is even if and only if $`n`$ is a multiple of 3 (see, e.g., , Chapter 6). To conclude: with the help of the Dehn invariants one is able to show that a stone inflation with the inflation factor $`\tau `$ is impossible. However one cannot exclude a stone inflation with the inflation factor $`\tau ^3`$. In the next Subsection we shall show, using a different method, that a stone inflation with the inflation factor $`\tau ^3`$ is impossible as well. ### 3.3 Faces of golden tetrahedra The Lemma 1 proved in Subsection 3.2 shows that the stone inflation with the inflation factor $`\tau `$ is impossible due to the scissor invariants of the tiles – the volumes and the Dehn invariants. Here we shall give another argument showing the impossibility of a stone inflation. This argument uses the geometry of faces of the tiles. More precisely, using Dehn invariants amounts to analyzing irrationalities in the lateral angles of the golden tetrahedra. Now we shall analyze irrationalities in the areas of the faces of the golden tetrahedra. The faces of the golden tetrahedra are golden and regular triangles. Denote the regular triangle, with the edge length 1, by $`\mathrm{\Delta }_r`$, the acute golden triangle (with edge lengths $`\tau `$, $`\tau `$ and 1) by $`\mathrm{\Delta }_a`$ and the obtuse golden triangle (with edge lengths $`\tau `$, 1 and 1) by $`\mathrm{\Delta }_o`$. For an arbitrary triangle $`\mathrm{\Delta }`$, a notation $`\tau ^k\mathrm{\Delta }`$ means the triangle $`\mathrm{\Delta }`$ scaled by $`\tau ^k`$. 
Also, for a triangle $`\mathrm{\Delta }`$, denote a set of triangles $`\{\tau ^k\mathrm{\Delta },k=1,2,3,\mathrm{}\}`$ by $`\tau ^{}\mathrm{\Delta }`$. The areas $`A(\mathrm{\Delta })`$ of the triangles are $$\begin{array}{ccc}A_r& & A(\mathrm{\Delta }_r)=\frac{\sqrt{3}}{4},\hfill \\ A_o& & A(\mathrm{\Delta }_o)=\frac{\sqrt{\tau +2}}{4},\hfill \\ A_a& & A(\mathrm{\Delta }_a)=\frac{\tau \sqrt{\tau +2}}{4}.\hfill \end{array}$$ (27) The $`\sqrt{}`$ will always denote the positive branch of the square root. It is interesting to note that the irrationalities in the areas of the faces are exactly the same as in trigonometric functions of the lateral angles (see (2)): $`\sqrt{3}`$, $`\tau `$ and $`\sqrt{\tau +2}`$. We first prove an intuitively obvious technical Lemma which shows that irrationalities expressing the areas (27) are different. Lemma 1. The irrationalities $`\sqrt{3}`$ and $`\sqrt{\tau +2}`$ are independent over the field $`lQ[\tau ]`$. Proof. The number $`\rho =\sqrt{\tau +2}`$ satisfies an equation $`f(\rho )=0`$ where $$f(x)=x^45x^2+5.$$ (28) By the Eisenstein criterion (see, e.g., , Chapter 3), the polynomial $`f`$ is irreducible over $`lQ`$. Moreover, $`f`$ splits in $`lQ[\rho ]`$: its roots are $$\pm \sqrt{\tau +2}\mathrm{and}\pm \sqrt{3\tau }.$$ (29) The irrationality $`\sqrt{3\tau }`$ belongs to the field $`lQ[\rho ]`$: one has $$\sqrt{3\tau }=\tau ^1\sqrt{\tau +2}lQ[\rho ].$$ (30) A splitting field of any polynomial is a Galois extension (, Chapter 4). Therefore, the field $`lQ[\rho ]`$ – as the splitting field of the polynomial $`f`$ – is the Galois extension of $`lQ`$. The automorphism group $`Gal(lQ[\rho ]/lQ)`$ is isomorphic to the cyclic group $`𝐙_4`$, with the generator $`\sigma `$, $$\sigma :\sqrt{\tau +2}\sqrt{3\tau }.$$ (31) In a basis 1, $`\sqrt{\tau +2}`$, $`\tau `$ and $`\sqrt{3\tau }`$ of $`lQ[\rho ]`$ over $`lQ`$, the action of $`\sigma `$ on the other elements of the basis is given by $$\sigma :\tau \tau ^1\mathrm{and}\sigma :\sqrt{3\tau }\sqrt{\tau +2}.$$ (32) Hence, $`\sigma ^4=1`$. By the Fundamental Theorem of Galois Theory (see, e.g., , Chapter 4), a quadratic extension of $`lQ`$ between $`lQ`$ and $`lQ[\rho ]`$ can be only the fixed field of $`\sigma ^2`$ which is $`lQ[\tau ]`$. In particular, $`\sqrt{3}lQ[\rho ]`$ (since, clearly, $`\sqrt{3}lQ[\tau ]`$). Remark. It is also easy to prove in an elementary way that the equation $`x^2=3`$ does not have solutions in $`lQ[\rho ]`$. Corollary. The field $`lQ[\rho ,\sqrt{3}]`$ admits an automorphism $`\varphi `$ which satisfies: 1. $`\varphi :\sqrt{3}\sqrt{3}`$; 2. the fixed field of $`\varphi `$ is $`lQ[\rho ]`$. Proof. It follows from the Lemma 1 that the field $`lQ[\rho ,\sqrt{3}]`$ is a quadratic extension of the field $`lQ[\rho ]`$. In characteristic 0, any quadratic extension is Galois (, Chapter 4). This immediately implies the existence of the automorphism $`\varphi `$. We shall now apply these algebraic preliminaries to the analysis of a stone inflation. If a stone inflation existed, the faces of inflated tiles would be covered by the faces of the original tiles. Lemma 2. 1. Assume that a regular triangle $`\mathrm{\Delta }_r`$ is covered by a finite (interior)-disjoint union of regular triangles from $`\tau ^{}\mathrm{\Delta }_r`$ and golden triangles from $`\tau ^{}\mathrm{\Delta }_a`$ and $`\tau ^{}\mathrm{\Delta }_o`$. Then the golden triangles are absent in the covering. In other words, a regular triangle can be covered by regular triangles only. 2. 
Similarly, the golden triangles can be covered by the golden triangles only, the regular triangles must be absent in the covering. Proof. Suppose that the triangle $`\mathrm{\Delta }_r`$ is covered by a finite union of triangles from $`\tau ^{}\mathrm{\Delta }_r`$, $`\tau ^{}\mathrm{\Delta }_a`$ and $`\tau ^{}\mathrm{\Delta }_o`$. Then for the areas we have $$A_r=p_1(\tau ^2)A_r+p_2(\tau ^2)A_a+p_3(\tau ^2)A_o,$$ (33) where $`p_1`$, $`p_2`$ and $`p_3`$ are polynomials with nonnegative integer coefficients and the polynomial $`p_1`$ does not have a constant term. Let $`X=\sqrt{3}(1p_1(\tau ^2))`$ and $`Y=p_2(\tau ^2)\tau \sqrt{\tau +2}+p_3(\tau ^2)\sqrt{\tau +2}`$. The equality (33) is equivalent to $`X=Y`$. Applying the automorphism $`\varphi `$ (Corollary, Lemma 1) to the equality $`X=Y`$ we find $`(X)=Y`$ and it follows that $`X=0`$ and $`Y=0`$ separately. Since each term in the expressions $`p_2(\tau ^2)A_a`$ and $`p_3(\tau ^2)A_o`$ is nonnegative, the equality $`Y=0`$ implies that the polynomials $`p_2(x)`$ and $`p_3(x)`$ are identically zero. This means that the golden triangles are absent. The considerations with coverings of the golden triangles are analogous. To prove the nonexistence of a stone inflation we shall consider coverings of the regular triangle. We shall prove that a regular triangle with the edge of length $`\tau ^k`$ cannot be covered by regular triangles with the edge lengths $`\tau ^i`$, $`i=0,\mathrm{},k1`$. This will imply that there is no stone inflation with the inflation factor $`\tau ^k`$ for any $`k`$. In fact, the same arguments can be applied to coverings of any triangle $`\mathrm{\Delta }`$ by $`\tau ^k`$–smaller copies of the same triangle. Consider an arbitrary triangle $`\mathrm{\Delta }`$. Suppose that the triangle $`\tau ^k\mathrm{\Delta }`$ is divided into a finite (interior)–disjoint union of triangles $`\tau ^i\mathrm{\Delta }`$ with $`i=0,\mathrm{},k1`$. Consider such division with a smallest possible $`k`$. Then a triangle $`\mathrm{\Delta }=\tau ^0\mathrm{\Delta }`$ is necessarily present – otherwise, rescaling by $`1/\tau `$ we would obtain the division of the triangle $`\tau ^{k1}\mathrm{\Delta }`$ contradicting to the minimality of $`k`$. Denote by $`\alpha _i`$ the number of triangles $`\tau ^i\mathrm{\Delta }`$. We have $`\alpha _i0`$ for $`i=1,\mathrm{},k1`$ and $`\alpha _0>0`$. Put $`\sigma =\tau ^2`$. From the area consideration it follows that $$\sigma ^k=\alpha _{k1}\sigma ^{k1}+\mathrm{}+\alpha _0.$$ (34) It is this statement which will lead to a contradiction. Lemma 3. The number $`\sigma `$ cannot satisfy an equation $$\sigma ^k\alpha _{k1}\sigma ^{k1}\mathrm{}\alpha _0=0,$$ (35) where $`\alpha _1,\mathrm{},\alpha _{k1}`$ are nonnegative integer numbers and $`\alpha _0`$ is a positive integer number. Proof. The minimal equation (over $`𝐙`$) for $`\sigma =\frac{3+\sqrt{5}}{2}`$ is $$\sigma ^23\sigma +1=0.$$ (36) Let $`p(x)=x^k\alpha _{k1}x^{k1}\mathrm{}\alpha _0`$. Assume that $`p(\sigma )=0`$. 
This means that one can divide $`p(x)`$ by $`x^23x+1`$: $$p(x)=(x^{k2}+\beta _{k3}x^{k3}+\mathrm{}+\beta _0)(x^23x+1).$$ (37) Collecting coefficients in powers of $`x`$ we obtain the following system: $$\begin{array}{ccc}\alpha _{k1}\hfill & =& 3+\beta _{k3}\hfill \\ \alpha _{k2}\hfill & =& 13\beta _{k3}+\beta _{k4}\hfill \\ \alpha _{k3}\hfill & =& \beta _{k3}3\beta _{k4}+\beta _{k5}\hfill \\ & \mathrm{}& \\ \alpha _2\hfill & =& \beta _23\beta _1+\beta _0\hfill \\ \alpha _1\hfill & =& \beta _13\beta _0\hfill \\ \alpha _0\hfill & =& \beta _0\hfill \end{array}$$ (38) Let $`\psi _n=f_{2n+2}`$ where $`f_n`$ are Fibonacci numbers. Then we have $`\psi _0=1`$, $`\psi _1=3`$ and $$\psi _{n+1}=3\psi _n\psi _{n1}.$$ (39) Let $`S=\alpha _{k1}\psi _0+\alpha _{k2}\psi _1+\mathrm{}+\alpha _0\psi _{k1}`$. Substituting expressions for $`\alpha _i`$ from (38) one finds that due to (39) the terms with $`\psi _i`$ for $`i>1`$ cancel and one is left with $$S=3\psi _0+\psi _13+3=0,$$ (40) which is impossible since all $`\psi _i`$ are positive, $`\alpha _i`$ are nonnegative and $`\alpha _0`$ is positive. As we have seen, Lemma 3 implies the following statement. Corollary. A regular triangle cannot be covered by $`\tau ^k`$–smaller regular triangles. With these preliminaries we are now prepared to show that a stone inflation for the golden tetrahedra is impossible. Proposition. For the golden tetrahedra, a stone inflation with the inflation factor $`\tau ^k`$, with an arbitrary positive integer $`k`$, does not exist. Proof. As it was said above, an existence of a stone inflation implies that the faces of the inflated tiles can be covered by the faces of the tiles of the original size. In particular, a face which is an inflated regular triangle, would be covered by regular and golden triangles. Lemma 2 shows that the golden triangles cannot appear in such covering. Therefore, the inflated regular triangle can be covered by regular triangles only – which is impossible by Corollary, Lemma 3. This contradiction shows that a stone inflation does not exist. Remark. The known tilings $`𝒯^{(2F)}`$ have the following property. The golden tetrahedra in the tiling of the space have their edges parallel to the 2fold symmetry axes of the icosahedron (“the long range orientational order”). The faces of the tiles which are regular triangles are all located in the planes perpendicular to the 3fold symmetry axes of the icosahedron. However the golden triangles are all perpendicular to the 5fold symmetry axes. Therefore if a stone inflation for the tilings $`𝒯^{(2F)}`$ existed, the regular triangles could be covered only by the smaller regular triangles due to the orientation of the faces. In this case we don’t need Lemmas 1 and 2. We stress again that an existence of the “rational” inflation rules for the golden tetrahedra (eqn. 14) is hypothetic because in our algebraic approach we do not impose any restriction on the orientations of the tiles in the tiling of the 3dimensional space. The logic used in this Subsection gives an additional motivation to consider minimal packages of the golden tetrahedra in which the regular faces are all hidden (see Section 4). ## 4 Mosseri–Sadoc tiles The five prototiles, $`a`$, $`m`$, $`r`$, $`z`$ and $`s`$ of the projection class of the tilings $`𝒯^{(MS)}`$(see ) are shown in Fig. 2. The tiles $`r`$ and $`m`$ appear in $`𝒯^{(MS)}`$ always together as a tile $`h`$, $`h=rm`$, see Fig. 3. 
As we have seen, Lemma 3 implies the following statement. Corollary. A regular triangle cannot be covered by $`\tau ^k`$–smaller regular triangles. With these preliminaries we are now prepared to show that a stone inflation for the golden tetrahedra is impossible. Proposition. For the golden tetrahedra, a stone inflation with the inflation factor $`\tau ^k`$, with an arbitrary positive integer $`k`$, does not exist. Proof. As was said above, the existence of a stone inflation implies that the faces of the inflated tiles can be covered by the faces of the tiles of the original size. In particular, a face which is an inflated regular triangle would be covered by regular and golden triangles. Lemma 2 shows that the golden triangles cannot appear in such a covering. Therefore, the inflated regular triangle could be covered by regular triangles only – which is impossible by the Corollary of Lemma 3. This contradiction shows that a stone inflation does not exist. Remark. The known tilings $`𝒯^{(2F)}`$ have the following property. The golden tetrahedra in the tiling of the space have their edges parallel to the 2-fold symmetry axes of the icosahedron (“the long range orientational order”). The faces of the tiles which are regular triangles are all located in the planes perpendicular to the 3-fold symmetry axes of the icosahedron. The golden triangles, however, are all perpendicular to the 5-fold symmetry axes. Therefore if a stone inflation for the tilings $`𝒯^{(2F)}`$ existed, the regular triangles could be covered only by the smaller regular triangles due to the orientation of the faces. In this case we don’t need Lemmas 1 and 2. We stress again that the existence of the “rational” inflation rules for the golden tetrahedra (eqn. 14) is hypothetical because in our algebraic approach we do not impose any restriction on the orientations of the tiles in the tiling of the 3-dimensional space. The logic used in this Subsection gives an additional motivation to consider minimal packages of the golden tetrahedra in which the regular faces are all hidden (see Section 4). ## 4 Mosseri–Sadoc tiles The five prototiles, $`a`$, $`m`$, $`r`$, $`z`$ and $`s`$, of the projection class of the tilings $`𝒯^{(MS)}`$ (see ) are shown in Fig. 2. The tiles $`r`$ and $`m`$ appear in $`𝒯^{(MS)}`$ always together as a tile $`h`$, $`h=r\cup m`$, see Fig. 3. The prototiles $`z`$, $`h`$, $`s`$ and $`a`$ are of the same shape as the prototiles of the inflation class of the tilings introduced by Sadoc and Mosseri , and we call them the Mosseri–Sadoc tiles. The tiles $`a`$, $`m`$, $`r`$, $`z`$ and $`s`$ are composed of the golden tetrahedra, as shown in Fig. 4, in such a way that the regular triangles of the golden tetrahedra are all hidden. Hence, the faces of the composed tiles $`a`$, $`m`$, $`r`$, $`z`$ and $`s`$ are golden triangles only. The same is true for the Mosseri–Sadoc tiles $`z`$, $`h`$, $`s`$ and $`a`$. Using the additivity of Dehn invariants one finds the vector of Dehn invariants for the Mosseri–Sadoc tiles: $$\stackrel{}{d}_{MS}=𝒟\left(\begin{array}{c}z\\ h\\ s\\ a\end{array}\right)=5\left(\begin{array}{c}\tau \\ 2\\ \tau -1\\ -\tau \end{array}\right)\overline{\alpha }.$$ (41) Thus, the space of Dehn invariants for the Mosseri–Sadoc tiles becomes 1-dimensional; only the combination $`\overline{\alpha }=\overline{\beta }-\overline{\delta }`$ appears. For the vector of volumes of the Mosseri–Sadoc tiles one obtains $$\stackrel{}{v}_{MS}=\mathrm{Vol}\left(\begin{array}{c}z\\ h\\ s\\ a\end{array}\right)=\frac{1}{12}\left(\begin{array}{c}4\tau +2\\ 6\tau +4\\ 4\tau +3\\ 2\tau +1\end{array}\right).$$ (42) Note. The Mosseri–Sadoc tile $`h`$ is the union of the tiles $`m`$ and $`r`$ introduced in Ref. . The volumes and the Dehn invariants of the tiles $`m`$ and $`r`$ are $$\mathrm{Vol}(m)=\frac{1}{12}(2\tau +3),\mathrm{Vol}(r)=\frac{1}{12}(4\tau +1).$$ (43) $$𝒟(m)=5(1-\tau )\overline{\alpha },𝒟(r)=5(\tau +1)\overline{\alpha }.$$ (44) The Dehn invariants of both of them contain only the combination $`\overline{\alpha }`$. Thus, were the tiles $`m`$ and $`r`$ not always glued together, we wouldn’t be able to write a matrix equation for the inflation of the 5 tiles $`z`$, $`m`$, $`r`$, $`s`$ and $`a`$. That the tiles $`m`$ and $`r`$ in the projection class of the tilings $`𝒯^{(MS)}`$ do appear always together as the prototile $`h`$ has been shown in Ref. by the arguments of the projection method expressed in the “orthogonal space”. For an overview of the space tilings obtained by the projection method see Ref. . ### 4.1 Inflation of decorated Mosseri–Sadoc tiles Mosseri and Sadoc have given the inflation rules for their $`z`$, $`h`$, $`s`$ and $`a`$ tiles . These rules were for the stone inflation of the tiles. The inflation factor is $`\tau =\frac{1+\sqrt{5}}{2}`$. The inflation matrix of the stone inflation of the tiles is a matrix with integer coefficients. It has been given by Sadoc and Mosseri $$M=\left(\begin{array}{cccc}1& 1& 1& 1\\ 2& 1& 2& 2\\ 1& 1& 1& 2\\ 0& 0& 1& 2\end{array}\right),$$ (45) in the following ordering of the tiles: $`z`$, $`h`$, $`s`$ and $`a`$. In the case of the Mosseri–Sadoc tiles, the stone inflation breaks the symmetry of the tiles. The authors have not given a decoration of the tiles which would take care of the symmetry breaking and uniquely define the inflation–deflation procedure at every step. In it has been shown that the projection class of the locally isomorphic tilings $`𝒯^{(2F)}`$ (see ) can be locally transformed into the tilings $`𝒯^{(MS)}`$, $`𝒯^{(2F)}\to 𝒯^{(MS)}`$. The class $`𝒯^{(MS)}`$ of the locally isomorphic tilings of the space by the Mosseri–Sadoc tiles has been defined by the icosahedral projection from the $`D_6`$–lattice .
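As a quick consistency check of (41)–(44) (added by us): the $`m`$ and $`r`$ entries recombine into the $`h`$ entries, $$\mathrm{Vol}(m)+\mathrm{Vol}(r)=\frac{1}{12}(6\tau +4)=\mathrm{Vol}(h),𝒟(m)+𝒟(r)=5(1-\tau )\overline{\alpha }+5(\tau +1)\overline{\alpha }=10\overline{\alpha }=𝒟(h),$$ in agreement with the additivity of volumes and Dehn invariants under the gluing $`h=r\cup m`$.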
The important property is that minimal packages of the six golden tetrahedra in $`𝒯^{(2F)}`$, satisfying the condition that their equilateral faces (orthogonal to the 3-fold directions) are covered, lead to the five tiles $`a`$, $`s`$, $`z`$, $`r`$ and $`m`$ . Moreover, the tiles $`r`$ and $`m`$ always appear as the union $`r\cup m`$, that is, the tile $`h`$ of Sadoc and Mosseri with three mirror symmetries . See Figs. 2, 3 and 4. It is a priori not evident that the inflation rules for the Mosseri–Sadoc tiles in the projection class of the tilings $`𝒯^{(MS)}`$ are the same as those suggested by Sadoc and Mosseri . The inflation rules for the $`𝒯^{(2F)}`$–tiles in the projection class of the tilings $`𝒯^{(2F)}`$ have been obtained in Refs. . The inflation rules for the prototiles in a projection class of tilings are determined in the orthogonal space by a procedure explained in Refs. . All edges of the $`𝒯^{(2F)}`$–tiles carry arrows, and some of these arrows uniquely define the inflation rules for the $`𝒯^{(2F)}`$–tiles . By the local derivation of $`𝒯^{(MS)}`$ from $`𝒯^{(2F)}`$, the Mosseri–Sadoc tiles inherit these arrows . The arrows which break the symmetry of the $`𝒯^{(MS)}`$–tiles define the inflation procedure uniquely. The inflation–deflation rules for the decorated $`a`$, $`m`$, $`r`$, $`z`$ and $`s`$ tiles in the projection class of the tilings $`𝒯^{(MS)}`$ are obtained through the local derivation from the inflation–deflation rules for the decorated golden tetrahedra (eight prototiles!) as the tiles of the projection class $`𝒯^{(2F)}`$. We give the inflation rules for the $`a`$, $`m`$, $`r`$, $`z`$ and $`s`$ tiles in Figs. 5 to 9. If we keep in mind that the tiles $`m`$ and $`r`$ appear in $`𝒯^{(MS)}`$ together as $`h`$, $`m\cup r=h`$, these are the inflation–deflation rules for the projection class of the tilings $`𝒯^{(MS)}`$ of the space by the decorated Mosseri–Sadoc tiles $`z`$, $`h`$, $`s`$ and $`a`$. We see that the inflation rules for $`𝒯^{(MS)}`$ as a projection species are the same (up to the decoration) as for the inflation species given by Mosseri and Sadoc . By the fact that only the decorated Mosseri–Sadoc tiles have a uniquely defined inflation–deflation procedure, and by the fact that the inflation rules for the projection and inflation species are the same, we identify the inflation and the projection species and denote them by the same symbol, $`𝒯^{(MS)}`$. ### 4.2 Dehn invariants and stone inflation of Mosseri–Sadoc tiles In this Section we show that the inflation matrix for the Mosseri–Sadoc tiles, $`z`$, $`h`$, $`s`$ and $`a`$, can be uniquely reconstructed from the Dehn invariants (and the volume). Denote the inflation matrix by $`M_{MS}`$. The vectors $`\stackrel{}{d}_{MS}`$ and $`\stackrel{}{v}_{MS}`$ (see eqns. (41) and (42)) are eigenvectors of the inflation matrix, with the eigenvalues $`\tau `$ and $`\tau ^3`$ respectively (we recall that the eigenvalue equals the inflation factor raised to the power which is the dimension of the corresponding invariant).
Explicitly, for the vector of volumes we have $$M_{MS}\left(\begin{array}{c}4\tau +2\\ 6\tau +4\\ 4\tau +3\\ 2\tau +1\end{array}\right)=\left(\begin{array}{c}16\tau +10\\ 26\tau +16\\ 18\tau +11\\ 8\tau +5\end{array}\right)$$ (46) and for the vector of Dehn invariants: $$M_{MS}\left(\begin{array}{c}\tau \\ 2\\ \tau -1\\ -\tau \end{array}\right)=\left(\begin{array}{c}\tau +1\\ 2\tau \\ 1\\ -\tau -1\end{array}\right).$$ (47) As for the golden tetrahedra tiles, assume that the inflation matrix is rational. Then, applying the Galois automorphism, one finds two more eigenvectors of $`M_{MS}`$. Again, as for the tetrahedra, this amounts to the decomposition of (46) and (47) in powers of $`\tau `$. Together, the four vector equations imply a matrix equation $$M_{MS}\left(\begin{array}{cccc}4& 2& 1& 0\\ 6& 4& 0& 2\\ 4& 3& 1& -1\\ 2& 1& -1& 0\end{array}\right)=\left(\begin{array}{cccc}16& 10& 1& 1\\ 26& 16& 2& 0\\ 18& 11& 0& 1\\ 8& 5& -1& -1\end{array}\right).$$ (48) The solution of this equation is unique and we rediscover the matrix (45). Note that, as for the tetrahedra, the uniqueness happens because of the coincidence: the number of tiles equals the number of invariants times the order of the Galois group. Remarks. 1. The inflation matrix $`M_{MS}`$ for the Mosseri–Sadoc tiles is “induced” by the inflation matrix $`M_{gt}`$ for the golden tetrahedra in the following sense. Denote by $`V_{gt}`$ a six-dimensional vector space with a basis $$\{e_A,e_B,e_C,e_D,e_F,e_G\}$$ (49) labeled by the golden tetrahedra. The matrix $`M_{gt}`$ acts in the vector space $`V_{gt}`$ in an obvious way. We shall denote the corresponding operator by the same symbol $`M_{gt}`$. The lattice $`L_{gt}`$ generated by the basis vectors is not preserved by the operator $`M_{gt}`$ since the entries of $`M_{gt}`$ are not integers. Denote by $`V_{MS}`$ a four-dimensional vector space with a basis $$\{e_z,e_h,e_s,e_a\}$$ (50) labeled by the Mosseri–Sadoc tiles. The basis vectors generate a lattice $`L_{MS}`$. A map $`\psi _{gt}:V_{MS}\to V_{gt}`$ given by $$\begin{array}{ccc}\psi _{gt}(e_z)& =& e_A+e_C+e_G,\hfill \\ \psi _{gt}(e_h)& =& e_A+e_B+2e_F+2e_G,\hfill \\ \psi _{gt}(e_s)& =& e_A+2e_C,\hfill \\ \psi _{gt}(e_a)& =& e_D+e_F\hfill \end{array}$$ (51) is an embedding. It is compatible with the lattice structure. The map $`\psi _{gt}`$ reflects the way of packing the golden tetrahedra into the Mosseri–Sadoc tiles (see Fig. 4). A direct inspection shows that the four-dimensional subspace Im$`(\psi _{gt})`$ of $`V_{gt}`$ is invariant under the action of $`M_{gt}`$ and the matrix of the induced operator in $`V_{MS}`$, written in the basis (50), coincides with $`M_{MS}`$. This is quite natural since both matrices, $`M_{gt}`$ and $`M_{MS}`$, are uniquely determined by the geometrical data – the volumes and the Dehn invariants. 2. The space $`V_{MS}`$ is a subspace in a five-dimensional space $`V_{MS}^{\prime }`$ with a basis $$\{e_z,e_m,e_r,e_s,e_a\}.$$ (52) The element $`e_h`$ is expressed as $`e_h=e_m+e_r`$. The space $`V_{MS}^{\prime }`$ also maps into $`V_{gt}`$; the second line in (51) gets replaced by $$\begin{array}{ccc}\psi _{gt}(e_m)& =& e_B+2e_F,\hfill \\ \psi _{gt}(e_r)& =& e_A+2e_G.\hfill \end{array}$$ (53) It is not an embedding any more: $$\psi _{gt}(e_r+e_s)=\psi _{gt}(2e_z).$$ (54) This explains again (see eqs. (44) and the comment after them) that the inflation matrix for the five tiles $`a`$, $`m`$, $`r`$, $`z`$ and $`s`$ cannot be reconstructed from the Dehn invariants and the volumes (in other words, from the matrix $`M_{gt}`$).
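The uniqueness claim in (48) is easy to check numerically. The following sketch (ours, using the sign reconstruction of (41) and (47) above) solves the matrix equation and recovers (45):

```python
import numpy as np

# Columns: tau-parts and constant parts of the volume vector (42) and of
# the Dehn-invariant vector (41), assembled as in eq. (48).
V = np.array([[4, 2,  1,  0],
              [6, 4,  0,  2],
              [4, 3,  1, -1],
              [2, 1, -1,  0]], dtype=float)
W = np.array([[16, 10,  1,  1],
              [26, 16,  2,  0],
              [18, 11,  0,  1],
              [ 8,  5, -1, -1]], dtype=float)

# Solve M_MS V = W, i.e. V^T M_MS^T = W^T
M_MS = np.linalg.solve(V.T, W.T).T
print(np.round(M_MS).astype(int))
# [[1 1 1 1]
#  [2 1 2 2]
#  [1 1 1 2]
#  [0 0 1 2]]   -- the Sadoc-Mosseri matrix (45)

# Eigenvalue check: tau on the Dehn vector, tau^3 on the volume vector
tau = (1 + np.sqrt(5)) / 2
d = np.array([tau, 2, tau - 1, -tau])
v = np.array([4*tau + 2, 6*tau + 4, 4*tau + 3, 2*tau + 1]) / 12
assert np.allclose(M_MS @ d, tau * d)
assert np.allclose(M_MS @ v, tau**3 * v)
```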
In fact, the inflation matrix for the tiles $`a`$, $`m`$, $`r`$, $`z`$ and $`s`$, which reads (in this ordering of the tiles) $$\left(\begin{array}{ccccc}2& 0& 0& 0& 1\\ 2& 0& 0& 1& 1\\ 0& 1& 1& 1& 1\\ 1& 1& 1& 1& 1\\ 2& 1& 1& 1& 1\end{array}\right)$$ (55) is degenerate, so it cannot be induced by the nondegenerate matrix $`M_{gt}`$. 3. Denote by $`V_{𝒯^{(2F)}}`$ an eight-dimensional vector space with a basis $$\{\stackrel{~}{e}_A,\stackrel{~}{e}_B,\stackrel{~}{e}_{C^b},\stackrel{~}{e}_{C^r},\stackrel{~}{e}_D,\stackrel{~}{e}_F,\stackrel{~}{e}_{G^b},\stackrel{~}{e}_{G^r}\}$$ (56) labeled by the coloured golden tetrahedra. The matrix $`M_{𝒯^{(2F)}}`$ becomes an operator acting in the space $`V_{𝒯^{(2F)}}`$. Define a map $`\psi _{𝒯^{(2F)}}:V_{MS}\to V_{𝒯^{(2F)}}`$ by $$\begin{array}{ccc}\psi _{𝒯^{(2F)}}(e_z)& =& \stackrel{~}{e}_A+\stackrel{~}{e}_{C^b}+\stackrel{~}{e}_{G^r},\hfill \\ \psi _{𝒯^{(2F)}}(e_h)& =& \stackrel{~}{e}_A+\stackrel{~}{e}_B+2\stackrel{~}{e}_F+\stackrel{~}{e}_{G^b}+\stackrel{~}{e}_{G^r},\hfill \\ \psi _{𝒯^{(2F)}}(e_s)& =& \stackrel{~}{e}_A+\stackrel{~}{e}_{C^b}+\stackrel{~}{e}_{C^r},\hfill \\ \psi _{𝒯^{(2F)}}(e_a)& =& \stackrel{~}{e}_D+\stackrel{~}{e}_F.\hfill \end{array}$$ (57) The map $`\psi _{𝒯^{(2F)}}`$ is an embedding. Again, one can directly check that the subspace Im$`(\psi _{𝒯^{(2F)}})`$ is invariant under the operator $`M_{𝒯^{(2F)}}`$ and the matrix of the induced operator in $`V_{MS}`$, written in the basis (50), coincides with $`M_{MS}`$. The map $`\psi _{𝒯^{(2F)}}`$ can be considered as a “colouring” of the map $`\psi _{gt}`$. One can show that this colouring is unique. 4. The map $`\psi _{𝒯^{(2F)}}`$ also extends to a map from the five-dimensional space $`V_{MS}^{\prime }`$; the second line in (57) gets replaced by $$\begin{array}{ccc}\psi _{𝒯^{(2F)}}(e_m)& =& \stackrel{~}{e}_B+2\stackrel{~}{e}_F,\hfill \\ \psi _{𝒯^{(2F)}}(e_r)& =& \stackrel{~}{e}_A+\stackrel{~}{e}_{G^b}+\stackrel{~}{e}_{G^r}.\hfill \end{array}$$ (58) However, it is still an embedding. As we have seen in Subsection 4.1, not only the inflation matrix but the actual inflation for the Mosseri–Sadoc tiles (as well as for the five tiles $`z`$, $`m`$, $`r`$, $`s`$ and $`a`$) is induced by the inflation for $`𝒯^{(2F)}`$. ## Acknowledgments Oleg Ogievetsky was supported by the Procope grant 99082. Zorka Papadopolos was supported by the Deutsche Forschungsgemeinschaft. Z. Papadopolos is grateful for the hospitality of the Center of Theoretical Physics in Marseille, where a part of this work has been done. We also thank the Geometry–Center at the University of Minnesota for making Geomview freely available. ## Appendix: Geodetic angles In Section 3.2 we showed that the space of Dehn invariants for the golden tetrahedra is 2-dimensional. The proof is based on a theorem of Conway, Radin and Sadun . For completeness we briefly recall the needed results from . Definition. An angle $`\theta `$ is called “pure geodetic” if $`\mathrm{sin}^2\theta `$ is rational. Let $`ℒ`$ be the vector space spanned over $`𝐐`$ by pure geodetic angles. A basis of the vector space $`ℒ`$ is constructed there. It is useful to know the basis: one can check whether some given angles are $`𝐐`$–independent. An element of the basis of $`ℒ`$ is denoted by $`\langle p\rangle _d`$.
Here $`p`$ is a prime integer. The positive integer $`d`$ has to satisfy two conditions: 1. $`d`$ is square–free; 2. $`(-d)`$ is a square modulo $`p`$. If $`p=2`$ then additionally $`d\equiv 7(\mathrm{mod}8)`$. To define $`\langle p\rangle _d`$ one solves the equation $`4p^s=a^2+db^2`$ for $`a`$ and $`b`$, with the smallest positive $`s`$. For $`d=3`$ one requires $`b\equiv 0(\mathrm{mod}2)`$; for $`d=1`$ one requires $`b\equiv 0(\mathrm{mod}4)`$. Now, $$\langle p\rangle _d=\frac{1}{s}\mathrm{arccos}\frac{a}{2p^{s/2}}.$$ (59) Theorem (Conway–Radin–Sadun). The angles $`\langle p\rangle _d`$ together with $`\pi `$ form a basis in $`ℒ`$.
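To make the recipe (59) concrete, here is a brute-force evaluation sketch (our illustration; it assumes that $`d`$ satisfies the conditions above so that solutions exist, takes $`a,b>0`$, and follows the congruence conditions as printed):

```python
import math

def angle(p, d):
    """Pure geodetic basis angle <p>_d = (1/s) arccos(a / (2 p^(s/2))),
    with s the smallest positive integer such that 4 p^s = a^2 + d b^2
    (b even for d = 3, b divisible by 4 for d = 1)."""
    s = 0
    while True:
        s += 1
        target = 4 * p**s
        for b in range(1, math.isqrt(target // d) + 1):
            rest = target - d * b * b
            a = math.isqrt(rest)
            if a * a != rest:
                continue
            if d == 3 and b % 2:
                continue
            if d == 1 and b % 4:
                continue
            return math.acos(a / (2 * p**(s / 2))) / s

print(math.degrees(angle(2, 3)))   # 30.0  : <2>_3 = pi/6, from 16 = 2^2 + 3*2^2
print(math.degrees(angle(5, 1)))   # 63.43 : arccos(1/sqrt(5)), from 20 = 2^2 + 4^2
```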
# Observation of a new excitation in the mixed-valent state of YbInCu₄ ## Abstract Infrared measurements are used to obtain the conductivity as a function of temperature and frequency in YbInCu₄, which exhibits an isostructural transition to a mixed-valent state at $`T_v42`$ K. In addition to a gradual loss of spectral weight with decreasing temperature extending up to 1.5 eV, sharp resonances appear in the mixed-valent state at 0 and 0.25 eV. These features may be key to understanding both YbInCu₄ and the nature of the mixed-valent Kondo state. The presence of local moments in metallic systems is associated with a variety of interesting phenomena, including the Kondo effect, heavy-fermion physics and mixed-valence. In rare cases, an isostructural first-order transition at which a discontinuous change in valence and volume accompanies an abrupt disappearance of the local moment is observed. This moment disappearance can be described in terms of the formation of a mixed-valent Kondo-singlet state, in which the f-level moment is compensated due to a Kondo-like screening by conduction electrons. The transition to such a state provides an exceptional opportunity to probe a range of fundamental phenomena, including moment compensation, Kondo singlet formation, and mixed-valence. The prototypical example of a volume/valence transition is the $`\gamma `$–$`\alpha `$ transition of Ce, in which a valence change from about 3 to 3.2 occurs in concert with a volume reduction of about 15% at $`T_v200`$ K. According to the Kondo-volume-collapse model, the reduced lattice constant in the low temperature phase is associated with an increase in hybridization between local moment and conduction electron states. This results in an enhanced Kondo energy, which drives the transition to a Kondo-singlet ground state. The energy reduction associated with the formation of the singlet ground state justifies the loss of entropy associated with the disappearance of the local-moment degrees of freedom. Technical difficulties associated with an intermediate phase make it very difficult to study the intrinsic physics of this transition in Ce. YbInCu₄ also exhibits a transition to a mixed-valent Kondo-singlet ground state, which is isostructural to its high-temperature local moment state. In this compound the intrinsic physics is more accessible, as there is no intervening phase, and the transition occurs at $`T_v42`$ K at ambient pressure. At this transition the Yb valence decreases from about 3 to about 2.85, and the local moment vanishes. The volume change is of opposite sign to that of Ce, a difference consonant with the observation that Yb has one hole in the f-level, whereas Ce has one f electron; however, the magnitude of the volume change in YbInCu₄ ($`0.5\%`$) is too small to provide a basis for an increase in hybridization that would drive the transition. YbInCu₄ is thus a very interesting system, with a transition from a magnetic state to a mixed-valent ground state that is not well understood. In this letter, we focus on changes in the infrared conductivity of YbInCu₄ associated with the transition into the mixed-valent state. The abrupt increase of the Kondo scale below $`T_v`$ may allow us to identify key features of the Kondo state, and thus shed light on fundamental phenomena of Anderson lattice systems.
This work is complementary to previous optical work which addressed the relationship between spectral features and band-structure calculations. At the transition two resonances appear. The first is a Drude-like peak centered at zero frequency, which is qualitatively similar to the low-temperature behavior seen in certain cerium compounds (cf. ref. 21). The second is a resonance at 0.25 eV, which is present only in the mixed-valent state. We discuss the interpretation of these resonances as intra- and inter-band excitations of coherent Kondo-state quasiparticles, respectively, for which the substantial increase of $`T_K`$ at $`T_v`$ is critical. Also of interest are spectral weight changes extending beyond 1 eV, which may have implications regarding the energy, time and length scales associated with moment compensation and Kondo singlet formation. The samples used in these experiments are high-quality single crystals grown from an In-Cu flux. For these samples a sharp transition occurs at about 42 K in the absence of strain. At the transition the volume increases by about 0.5% as the sample is cooled, and the susceptibility and resistivity drop abruptly by an order of magnitude. Thermal cycling tends to induce strain in the samples, which can broaden the transition and move it to higher temperature. Infrared and optical measurements are performed using a combination of Fourier-transform and grating spectrometers to cover the range from 50 to 50,000 cm⁻¹. In these measurements we have gone to great efforts to complete measurements in all ranges before going through the transition, so that disorder effects do not significantly influence the infrared data. The conductivity as a function of frequency is obtained from a Kramers-Kronig transform of the reflectivity data. For the purpose of performing this transform, the measured reflectivity is extended from 50,000 to 200,000 cm⁻¹ as a constant, and above that it is made to decrease like $`1/\omega ^2`$. At low frequency a Hagen-Rubens termination is attached to the data. In the region of the actual data, the conductivity is insensitive to the details of these terminations.
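As an illustration of this standard procedure, here is a sketch of the reflectivity-phase Kramers-Kronig transform (ours, not the authors' code; the grid handling, terminations and unit shortcuts are simplified assumptions):

```python
import numpy as np

def kk_phase(w, R):
    """Reflectivity phase from the Kramers-Kronig relation
    theta(w) = -(w/pi) P int_0^inf ln R(w') / (w'^2 - w^2) dw'.
    Subtracting ln R(w) tames the principal-value singularity."""
    lnR = np.log(R)
    theta = np.zeros_like(w, dtype=float)
    for i, wi in enumerate(w):
        den = w**2 - wi**2
        num = lnR - lnR[i]
        integrand = np.divide(num, den, out=np.zeros_like(theta), where=den != 0)
        theta[i] = -(wi / np.pi) * np.trapz(integrand, w)
    return theta

# The complex reflectance r = sqrt(R) exp(i theta) then gives the optical
# constants on the measured grid (w in cm^-1):
#   n_tilde = (1 + r) / (1 - r);  eps = n_tilde**2
#   sigma_1 [Ohm^-1 cm^-1] = w * eps.imag / 60   (standard unit shortcut)
```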
Figure 1 shows the reflectivity and the real part of the conductivity in the low-frequency region, in which a narrow Drude-like peak appears at low temperature. Above $`T_v`$ the conductivity is suppressed and only weakly dependent on frequency, due to the strong scattering of the conduction electrons by the dense magnetic “impurities” (local moments). Below $`T_v`$ this scattering is suppressed, the d.c. resistivity decreases abruptly and a narrow resonance appears in $`\sigma _1(\omega )`$, as shown in figure 1. Extrapolated values of $`\sigma _1(\omega )`$ to $`\omega =0`$, of about $`\sigma _{d.c.}10,000`$ Ω⁻¹ cm⁻¹ above $`T_v`$ and 40,000 to 80,000 Ω⁻¹ cm⁻¹ below $`T_v`$, are consistent with d.c. resistivity measurements. One can view this low-temperature behavior in terms of a frequency-dependent scattering rate and effective mass, as shown in figure 1b (inset). At 20 K the scattering rate rises rapidly between about 25 and 200 cm⁻¹, exhibiting a change in slope in the vicinity of 200 cm⁻¹, which is comparable to the Kondo scale (about 280 cm⁻¹) of the low-T state of YbInCu₄. The effective mass enhancement increases with decreasing $`\omega `$ over the same range and approaches an asymptotic value of about $`m^{}10`$ at low frequency. These low-temperature quantities exhibit a crossover from a low energy regime, where the compensated moments are ineffective scatterers, to a high energy regime, in which conduction electrons are strongly scattered by uncompensated moments. This reflects the evolution of the dynamics from that of dressed, heavy quasiparticles to that of the undressed band-like carriers, which is fundamental to systems with a local moment resonance not too far from the chemical potential. Figure 2 shows reflectivity and conductivity to higher frequency (12,000 cm⁻¹). These data show the persistence of significant temperature dependence to very high frequency (compared to T or $`T_K`$) in YbInCu₄. For example, between about 5,000 and 12,000 cm⁻¹, $`\sigma _1(\omega )`$ decreases substantially as T is reduced, both above and below $`T_v`$. In addition, a prominent resonance appears in the mixed-valent (low-T) state near 2,000 cm⁻¹. The spectral sharpness and abrupt appearance of this feature at $`\omega 2,000`$ cm⁻¹ (about 1/4 eV) as a function of temperature are striking. Figure 3 shows the spectral weight, which is the indefinite integral of $`\sigma _1(\omega )`$, $`n(\omega )=\frac{m}{\pi e^2}\int _0^\omega \sigma _1(\omega ^{\prime })d\omega ^{\prime }`$, as a function of frequency. In this figure (and figure 2) we see that there is a net loss of spectral weight as the temperature is lowered from 250 K to 55 K. The loss amounts to about 10% of the strength of the broad mode centered around 9,000 cm⁻¹, and corresponds to about 1 carrier/Yb atom with the reasonable assumption of a band mass of 3 (times the free-electron mass). Since spectral weight is ultimately conserved (if one integrates to high enough frequency), these data imply that it must be displaced to still higher frequency (above 16,000 cm⁻¹, about 2 eV) as T is reduced from 250 to 55 K. Recent theoretical work has explored possible origins of such high energy spectral weight shifts (involving energies vastly larger than $`K_BT`$ and $`K_BT_K`$) in strongly correlated systems. The coalescence of the 20 and 55 K curves at the high frequency end of figure 3 indicates that the increase in spectral weight associated with the appearance below $`T_v`$ of the resonance at about 2,000 cm⁻¹ is balanced by a general reduction of $`\sigma _1(\omega )`$ up to about 12,000 cm⁻¹. The displaced spectral weight corresponds to about 1.5 carriers/Yb. Although the spectral weight of the very narrow low-temperature resonance at $`\omega =0`$ (figure 1) is quite small, it is significant to the correspondence between the infrared data and the Hall effect data for YbInCu₄. The Hall effect, when corrected for skew scattering, reflects an increase from about 0.7 carriers/Yb above $`T_v`$ to a much higher value (about 4 carriers/Yb) below the transition. Above $`T_v`$, the low frequency rise of the conductivity (figure 2) can be fit with a broad (about 700 cm⁻¹) resonance with a strength which corresponds to about one carrier per Yb, roughly consistent with the high temperature Hall data. Below $`T_v`$ a much sharper (about 25 cm⁻¹) additional peak appears in $`\sigma _1(\omega )`$ at $`\omega 0`$, as seen in figure 1b. With the inclusion of the frequency-dependent, low-temperature mass enhancement of $`m^{}10`$ (figure 1b, inset), this narrow peak represents an additional 2.5 carriers/Yb, consistent with the substantial increase in carrier density inferred from the low-T Hall effect data.
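A sketch of how such carrier counts follow from the partial sum rule (our illustration, not the authors' analysis; the band mass of 3, the toy Lorentzian mode, and the roughly 90 Å³ volume per Yb are assumptions; we use the standard normalization with a factor 2m/(πe²)):

```python
import numpy as np

M_E, E, C_CM = 9.109e-31, 1.602e-19, 2.998e10   # kg, C, cm/s

def n_eff(nu, sigma1, m_band=3.0):
    """Cumulative carrier density from n(w) = (2 m / pi e^2) int sigma_1 dw'.
    nu: wavenumbers [cm^-1]; sigma1: conductivity [Ohm^-1 cm^-1]. Returns m^-3."""
    omega = 2 * np.pi * C_CM * nu        # rad/s
    sig = 100.0 * sigma1                 # S/m
    integral = np.concatenate(([0.0], np.cumsum(
        0.5 * (sig[1:] + sig[:-1]) * np.diff(omega))))   # trapezoid rule
    return 2 * m_band * M_E / (np.pi * E**2) * integral

# Toy stand-in for the broad 9,000 cm^-1 mode; dividing by an assumed
# volume per Yb formula unit (~90 A^3) converts to carriers per Yb.
nu = np.linspace(1.0, 16000.0, 4000)
sigma = 1.0e4 / (1.0 + ((nu - 9000.0) / 3000.0)**2)      # hypothetical mode
print(n_eff(nu, sigma)[-1] * 90e-30)   # order-10 carriers/Yb; 10% of it is ~1
```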
Both the starting Hamiltonian and the mechanism that drives the transition to the mixed-valent state remain areas of active research for YbInCu₄. With regard to the mechanism, it has been argued that the lattice expansion is too small to explain the large change in Kondo temperature (from about 25 K to 400 K) at the transition. The Falicov-Kimball model is capable of producing a quasi Hubbard-like first-order transition, and may be relevant to the high-temperature properties of YbInCu₄; however, it ignores hybridization, which is certainly important in the low-T state. In the mixed-valent state, where the Kondo scale is large, the dynamics of the Periodic Anderson Model (PAM) are expected to be relevant. Within the PAM context, the 1/4 eV excitation can be associated with a quasiparticle interband transition involving Kondo resonance states near $`E_f`$. The abrupt change of $`T_K`$ at the transition and the abrupt appearance of the resonance are consistent with this interpretation. The study of YbInCu₄, with its first-order transition at which $`T_K`$ increases by an order of magnitude, thus appears to allow the first clear identification of this fundamental excitation. The energy scale for this interband excitation involving the dynamically generated quasiparticle states at $`E_f`$ (the Kondo resonance) is expected to be $`\sqrt{T_KB}`$. Since $`T_K\stackrel{~}{V}^2/B`$, this provides a measure of the renormalized hybridization, $`\stackrel{~}{V}`$. Using the value $`\stackrel{~}{V}1/4`$ eV from our infrared data, along with $`T_K400`$ K (about 35 meV), implies a bandwidth of $`B1.8`$ eV, which is reasonable. One can estimate the hybridization broadening, $`\mathrm{\Gamma }`$, using its relationship to $`\stackrel{~}{V}`$, to be $`\mathrm{\Gamma }0.25`$ eV. Further, one can use $`\mathrm{\Gamma }`$ in NCA formulae involving $`n_f(T)`$, along with $`L_{III}`$ edge measurements of the valence, to infer that the f-level is about 0.5 eV away from the chemical potential. These values are quite reasonable for this mixed-valent system. The observation that the growth of the resonance at about 1/4 eV comes from a redistribution of spectral weight from essentially the entire range below 1.5 eV (comparable to the bandwidth) may have implications for questions related to exhaustion and the time scales relevant to screening in Kondo lattice systems. Does it suggest that conduction electrons further than $`K_BT_K`$ from the chemical potential are significantly involved in screening in the Kondo lattice? Further work can be expected to address such questions. It is also intriguing to note that an excitation of similar frequency is present in YbB₁₂, for which $`T_K300`$ K, and that related features may also be present in spectra from mixed-valent Ce compounds.
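To spell out the numbers in the bandwidth estimate above (a worked step added by us): with $`k_B\times 400\mathrm{K}\approx 34.5`$ meV, $$\sqrt{T_KB}\approx \sqrt{0.0345\times 1.8}\mathrm{eV}\approx 0.25\mathrm{eV}=\stackrel{~}{V},B\approx \stackrel{~}{V}^2/T_K\approx (0.25\mathrm{eV})^2/0.0345\mathrm{eV}\approx 1.8\mathrm{eV}.$$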
In summary, YbInCu₄ is of interest because of the rarity of valence transitions, the lack of understanding of their underlying mechanism, and the opportunity to observe the effect of dramatic changes of $`T/T_K`$ on physical properties. We observe high-energy spectral weight changes, which may be relevant to the mechanism, and the abrupt appearance of a sharp excitation near 1/4 eV, present only in the high $`T_K/T`$ state, which is interpreted as the Kondo-quasiparticle interband excitation. Acknowledgements: The authors acknowledge valuable conversations with J. W. Allen, D. L. Cox, P. Coleman, J. K. Freericks, D. H. Lee and A. P. Young, and technical assistance from Todd Lorey, Sonya Hoobler, Jason Hancock and Petar Kostic. Work at UCSC was supported by the NSF through grant # DMR-97-05442. Work at Los Alamos is performed under the auspices of the U.S. Dept. of Energy. NHMFL is supported by the NSF and the state of Florida. ZF and JLS also acknowledge partial support from the NSF under grant # DMR-9501529.
# Statistics of Dark Matter Halos from Gravitational Lensing ## 1. Introduction In the coming years, gravitational lensing is likely to become an effective tool for mapping large-scale structure in the universe. Over the past decade several measurements of weak lensing by galaxy clusters have been made. Mass reconstruction techniques are now being applied to wide field lensing surveys in blank fields that will probe the dark matter distribution over angular scales of order 1′ to 1°. Wide field lensing observations have already detected filaments and dark halos that were not visible by their light distribution (Kaiser et al 1998; Erben et al 1999; Tyson et al. 1999). Statistical properties of the clustering of dark matter can be probed from lensing data by computing shear correlations over blank fields with area of order 10 square degrees (Blandford et al 1991; Miralda-Escudé 1991; Kaiser 1992; Bernardeau et al 1997; Jain & Seljak 1997; Kaiser 1998; Stebbins 1996; Schneider et al. 1998). An alternative approach is to focus on the statistics of dark matter halos, identified through their lensing strength, using measures such as the aperture mass (Schneider 1996; Kruse & Schneider 1999a; Kruse & Schneider 1999b; Reblinsky et al. 1999). The halo statistics approach has been shown by the above authors to be a useful probe of the mass function for massive, cluster-sized halos; the main practical limitation is that only about 10 halos per square degree are expected to be detected with adequate signal-to-noise. This paper advocates a new approach to the measurement of the statistics of dark matter halos through lensing. By modeling the distribution of peaks in lensing data induced by the noise due to the intrinsic ellipticities of source galaxies, we show that it is possible to statistically detect the signal due to dark matter halos, even for mass scales below the signal-to-noise limit for the detection of individual halos. Section 2 describes the construction of peak statistics from simulated data and from pure noise. Results for the peak statistics for a set of cosmological models are shown in Section 3. We discuss the prospects for measuring the halo mass function and discriminating models from realistic data in Section 4. ## 2. Peak Statistics in Simulated Data We use shear and convergence fields from ray tracing simulations through the dark matter distribution of N-body simulations (Jain, Seljak & White 1999). The fields we use are about 3 degrees on a side, sampled with a grid spacing of 0.1′, with source galaxies taken to be at $`z=1`$. We use two cosmological models, an Einstein-de Sitter model and an open model with $`\mathrm{\Omega }_{\mathrm{matter}}=0.3`$. The power spectrum corresponds to a cold dark matter model with shape parameter $`\mathrm{\Gamma }=0.21`$. Further details of the models and the simulations are given in Jain et al (1999). A simulated noisy map of the convergence, $`\kappa (\vec{\theta })`$, is built by first smoothing the $`\kappa `$ field over scale $`\theta _G`$ with a Gaussian window $`W(\theta )=\mathrm{exp}(-|\vec{\theta }|^2/\theta _G^2)/\pi \theta _G^2`$. The noise due to the randomly oriented intrinsic ellipticities of source galaxies is modeled as a Gaussian random field with variance $$\sigma _{\mathrm{noise}}^2=\frac{\sigma _ϵ^2}{2}\frac{1}{2\pi \theta _G^2n_g},$$ (1) where $`\sigma _ϵ`$ is the rms amplitude of the intrinsic ellipticity distribution and $`n_g`$ is the number density of source galaxies.
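For concreteness, a small sketch of this noise model (ours; the placeholder map and parameter values echo those used below but are purely illustrative):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

sigma_eps = 0.2      # rms intrinsic ellipticity
n_g       = 30.0     # source galaxies per arcmin^2
theta_G   = 0.5      # Gaussian smoothing scale, arcmin

# Eq. (1): noise rms of the smoothed convergence map
sigma_noise = np.sqrt(sigma_eps**2 / 2.0 / (2.0 * np.pi * theta_G**2 * n_g))
print(f"{sigma_noise:.4f}")   # ~0.02, so nu = 1 corresponds to kappa ~ 0.02

# Mock noisy map: smooth kappa with the window W (std = theta_G / sqrt(2)),
# then add white Gaussian noise of this rms. kappa_true is a placeholder
# for a ray-tracing convergence map sampled on 0.1' pixels.
pix = 0.1
kappa_true = np.zeros((512, 512))
kappa_s = gaussian_filter(kappa_true, sigma=theta_G / np.sqrt(2.0) / pix)
kappa_noisy = kappa_s + np.random.normal(0.0, sigma_noise, kappa_s.shape)
```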
This Gaussian noise is added to the smoothed $`\kappa `$ field; the accuracy of this noise model is discussed below. From the smoothed noisy data, peaks are found by identifying pixels that have a higher/lower value of $`\kappa `$ than all neighboring pixels. This corresponds to the condition that the gradient of the field vanishes, and thus includes peaks as well as troughs. The height of the peak, $`\nu =\kappa /\sigma _{\mathrm{noise}}`$, is its value in units of the noise rms in the smoothed field. Our choice of the noise model for the convergence field relies on previous work. Van Waerbeke et al (1999) have shown that the convergence field can be accurately reconstructed from observed ellipticity data in the absence of systematic errors. Both the reconstruction scheme of Kaiser & Squires (1993) and the Maximum-Likelihood algorithm of Bartelmann et al (1996) recover the convergence field with adequate accuracy for fields of order a degree on a side. Van Waerbeke (1999) further showed that the noise properties of peaks can be analytically described using Gaussian statistics (Bardeen et al 1986; Bond & Efstathiou 1987) and the weak lensing approximation. Figure 1 shows a test of the analytical model of Van Waerbeke (1999) for peaks due to noise, and checks the accuracy of the peak distribution in the reconstructed $`\kappa `$. The dot-dashed curve shows the histogram of peaks from this analytical noise model; the double-peaked shape is due to peaks with positive curvature and troughs with negative curvature. Almost overlapping with the analytical curve is the measured histogram of peaks in a field with pure noise. The distribution of peak heights measured in maps of $`\kappa `$ reconstructed from noisy ellipticity data (dashed line) is compared with the distribution in the $`\kappa `$ maps of the signal plus Gaussian noise with variance given by equation 1 (solid line). The close agreement of the two curves demonstrates the accuracy of the $`\kappa `$ reconstruction scheme. The success of the reconstruction gives us confidence in working with the convergence data and the noise model directly, avoiding the slow and expensive reconstruction process. We have also verified that using the mass aperture statistic, which is constructed directly from ellipticity data, leads to very similar peak distributions. We have also found that a variety of distributions of the intrinsic ellipticity (including non-Gaussian distributions) produce the same Gaussian statistics for peaks in the $`\kappa `$ maps; these results will be presented elsewhere. Figure 2 shows the actual maps of the convergence, $`\kappa `$, used to measure the peak distributions. The small amplitude peaks in the signal map are swamped by the noise, so there is little hope of recovering them individually from data. However, we show below that their distribution is sufficiently modulated by the signal to distinguish cosmological models.
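The peak finding described above reduces to a local-extremum filter; a minimal sketch (ours; the 8-neighbour comparison and border handling are simplifications):

```python
import numpy as np
from scipy.ndimage import maximum_filter, minimum_filter

def peak_heights(kappa, sigma_noise):
    """Peaks and troughs of a smoothed map: pixels that are the local max
    (min) of their 3x3 neighbourhood, in units of sigma_noise."""
    peaks = kappa == maximum_filter(kappa, size=3)
    troughs = kappa == minimum_filter(kappa, size=3)
    return kappa[peaks] / sigma_noise, kappa[troughs] / sigma_noise

# nu_max, nu_min = peak_heights(kappa_noisy, sigma_noise)
# hist, edges = np.histogram(np.concatenate([nu_max, nu_min]),
#                            bins=np.arange(-6.0, 6.0, 0.25))
```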
The cosmological models have different pdf’s, just as they do for the pdf of $`\kappa `$ in the field, shown in figure 4. For the noisy fields, the number density of source galaxies is 30 per square arcminute, and their rms intrinsic ellipticity is $`\sigma _ϵ=0.2`$. Thus the peak height $`\nu =1`$ corresponds to an averaged value of $`\kappa =0.02(0.01)`$ over the smoothing radius for the upper (lower) panels of figure 3. The right panels of figure 3 show that in the presence of noise, the pdf’s look quite different from the noise-free case. However the noisy pdf’s still have different shapes from the pure noise pdf’s and the cosmological models remain distinguishable. The asymmetric double peak for the low amplitude peaks ($`2<\nu <2`$) arises due to the noise maxima and minima, but it is suppressed relative to the pure noise case and is asymmetric due to the gravitational shear. The open and Einstein-de Sitter models are easily distinguishable for these low amplitude peaks, even though almost none of the peaks can be individually associated with dark matter halos. The relative number of very negative troughs (which are mostly noise troughs modulated by the fact that they are located in large voids) can by itself be used to discriminate models with different values of $`\mathrm{\Omega }`$, as noted by Jain et al (1999) for the field pdf. As the smoothing scale is increased from $`0.5^{}`$ to $`1^{}`$, the signal dominates over the shape of the noise pdf, but sample variance becomes larger as there are fewer peaks. As a result, at both smoothing scales, the models can be distinguished at about the same level of significance. The error bars in the right panels are not much larger than in the pdf from the noise-free maps. The addition of ellipticity noise broadens the peak distribution, but the error bars are still dominated by sample variance in the signal. Similary, the primary effect of increasing $`\sigma _ϵ`$ is to broaden the pdf while not changing the error bars by much. It is worth noting that previous theoretical work on halo detection has focused on the peaks that can be individually detected with adequate signal to noise. These would correspond to the parts of the pdf with $`\nu \stackrel{>}{}45`$, where sample variance is large. Clearly the bulk of the information on the mass function and in distinguishing models is at smaller or negative peak heights and can be used only statistically by modeling the noise pdf. ## 4. Discussion The results presented in sections 2 and 3 show that the peak distribution from lensing data has information on the projected mass function of dark matter halos, and is sensitive to the cosmological model. The level of non-Gaussianity of the pdf is a powerful discrimant of models with different values of $`\mathrm{\Omega }`$. Figure 3 shows that the models can be distinguished from the pdf over a wide range of peak heights at 2 to 3-$`\sigma `$; by combining information at different peak heights and smoothing scales we can obtain much higher significance. Further, the third and fourth moments of the peak distribution for different smoothing scales are sensitive to the cosmological model, as expected qualitatively from the shapes of the distribution. We have also compared the peak distributions shown with a model with non-zero cosmological constant $`\mathrm{\Omega }_\mathrm{\Lambda }=0.7`$. 
We have also compared the peak distributions shown with a model with a non-zero cosmological constant, $`\mathrm{\Omega }_\mathrm{\Lambda }=0.7`$. The peak distribution in the $`\mathrm{\Lambda }`$-model lies in-between those of the Einstein-de Sitter and open models with the same value of $`\mathrm{\Omega }_{\mathrm{matter}}`$. Beyond the dependence on the cosmological parameters, the peak pdf contains information on the projected mass function over all mass scales. It is important to test how accurately we can recover the pdf of peaks due to the lensing signal, and hence the projected mass function, from wide field lensing surveys. A straightforward approach is to compare the measured pdf with the predictions of a set of models that include the level of noise observed in the data. The best fit model can be found by minimizing the $`\chi ^2`$. We demonstrate in a forthcoming paper that the projected mass function and $`\mathrm{\Omega }`$ can be simultaneously determined by using the normalization and shape of the distribution (Van Waerbeke & Jain 1999). Since we use information from all peak heights, not just the high-$`\sigma `$ peaks that can be detected individually, the mass function is constrained over mass scales ranging from galactic to cluster-sized halos. A more ambitious approach to recovering the lensing signal would be to deconvolve the measured peak distribution using the analytical model for the noise. The nearly perfect accuracy of the analytical noise model (see figure 1), which we have checked for four cosmological models with different smoothing scales and noise distributions, gives us confidence that the lensing signal can be extracted from forthcoming data, either using deconvolution or by comparing the forward convolution for a set of models. Analytical predictions of peak statistics would be valuable for comparing theory with observations. Reblinsky et al (1999) have shown that predictions of peak number densities based on the Press-Schechter model agree with the simulations for the high-$`\sigma `$ peaks. Detailed analytical predictions of peak number densities and their angular correlations, combining the Press-Schechter model and its extensions with our noise model, would be useful. Further work is also needed to test the sensitivity of the results to the shape of the dark matter power spectrum. The dependence on the redshift distribution of source galaxies needs to be computed as well: since the level of non-Gaussianity decreases for more distant galaxies, increasing the redshift of source galaxies could mimic the effect of high-$`\mathrm{\Omega }`$. To place the peaks approach in perspective, it is useful to compare it with the standard approach of measuring dark matter statistics using the entire field (without peak identification). The peak statistics rely on only a subset of the available information (the location and height of peaks, and eventually their profile), and obtaining cosmological information from them requires additional theoretical modeling compared to field statistics. On the other hand, the use of peak statistics has both practical and theoretical advantages. Peak statistics are likely to be robust to certain kinds of systematics: small, unknown errors in the galaxy ellipticities that complicate the use of field statistics. For example, in practice, the shear measured from ellipticity data is multiplied by a factor larger than unity to account for the smearing by the point spread function. If this factor is estimated incorrectly, it could change the height but probably not the location of the peaks.
It would then amount to a rescaling of the x-axis in the peak histograms, which does not change the comparisons amongst the cosmological models. On the theoretical side, peak statistics can provide insights into the biasing of galaxies relative to the dark matter, by allowing us to consider the two distinct components of biasing: first, the relation of galaxies to dark halos, and second, that of halos to the dark matter. By combining the measured clustering of galaxies with that of dark halos measured through peak statistics from lensing data, the first step in the biasing of galaxies can be directly probed. For the second step of relating halos to the dark matter, we will need to use successful measurements of field lensing, or to interpret the data using theoretical models for the relation of halos to the dark matter. We have shown that the statistics of peaks provides a useful new approach to wide field lensing. It is complementary to standard statistics such as ellipticity correlations over the field, and is directly linked to the projected distribution of dark matter halos. The characteristic non-Gaussian form of the peak distribution (its asymmetric double-peaked shape) makes it a powerful probe of the cosmological model as well as a useful test of the presence of systematic errors. We are grateful to Uros Seljak, Peter Schneider, Ravi Sheth, Alex Szalay and Simon White for helpful discussions. We thank an anonymous referee for comments.
## Introduction The aim of this work is to find a complete set of gauge invariants of the bosonic string. A classical gauge invariant is understood as a parametrization-independent object, that is, a physical observable. A quantum gauge invariant is an operator which is well-defined in the respective BRST cohomology; it is an operator which represents a physical observable. A complete set of classical gauge invariants is defined as a set in terms of which an arbitrary physical observable can be expressed; a complete set of quantum gauge invariants is a set whose enveloping algebra includes all invariant operators. It will be shown how to find all classical invariants, at least in the class of polynomials. The question of the quantization of these invariants will also be discussed. The bosonic string is a well-studied model. It allows one to apply various methods of quantization, and its spectrum can be obtained in different ways. However, the structure of the reduced phase space of the model is rather complicated and not well understood. It is the set of gauge invariants that can be applied to investigate this structure. This could be useful for constructing string interactions, for a string field theory. There is also another question, less obvious and less well known: to understand how the phase space of the string stratifies into the phase spaces of the elementary particles constituting its spectrum. The information about the invariants seems rather useful for elaborating this last question. It is commonly known that the complete set of quantum gauge invariants can be represented by the set of vertex operators . However, the vertex operators have no classical limit, and we are actually looking for another set of invariants which do have a certain classical limit. For simplicity we restrict ourselves to the case of the open bosonic string. ## Classical gauge invariants The complete set of phase space variables of the open bosonic string consists of the Fubini-Veneziano (FV) fields and the string zero mode $$V^\mu =V^\mu (\sigma ):[0,2\pi ]\to 𝐑^{1,D-1},V^\mu (0)=V^\mu (2\pi ),q^\mu \in 𝐑^{1,D-1}$$ (1) They are subject to the first class constraints $$L(\sigma )=\frac{1}{4}V^\mu (\sigma )V_\mu (\sigma )$$ (2) First of all, let us pose the question whether there are polynomial gauge invariants which depend on the FV fields only. The positive answer to this question can be found in the literature: an infinite set of such invariants was proposed in the works of Pohlmeyer and Rehren . $$I_n^{\mu _1\mu _2\dots \mu _n}=\int _0^{2\pi }V^{\mu _1}(\sigma _1)d\sigma _1\int _{\sigma _1}^{\sigma _1+2\pi }d\sigma _2V^{\mu _2}(\sigma _2)\int _{\sigma _1}^{\sigma _2}d\sigma _3V^{\mu _3}(\sigma _3)\dots \int _{\sigma _1}^{\sigma _{n-1}}d\sigma _nV^{\mu _n}(\sigma _n)$$ (3) In the paper it was proved that these polynomials (3) exhaust all gauge invariants which depend on the FV fields only. If we do need to obtain a complete set of classical gauge invariants, we should involve an actual dependence on the string zero mode $`q^\mu `$.
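Before turning to the $`q`$-dependent invariants, a quick illustration added here (ours) of the family (3): its simplest member is the total momentum of the string, $$I_1^\mu =\int _0^{2\pi }V^\mu (\sigma )d\sigma =2\sqrt{\pi }\alpha _0^\mu ,$$ and under the gauge transformation $`\delta _ϵV^\mu =(ϵ(\sigma )V^\mu (\sigma ))^{\prime }`$ written in (5) below, its variation $`\delta _ϵI_1^\mu =\int _0^{2\pi }(ϵV^\mu )^{\prime }d\sigma `$ vanishes identically by the periodicity of $`ϵ`$ and $`V^\mu `$.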
The most general polynomial expression for a classical gauge invariant is as follows: $$I=C_{\mu _1\mu _2\dots \mu _n}q^{\mu _1}q^{\mu _2}\dots q^{\mu _n}+C_{\nu _1\mu _1\mu _2\dots \mu _{n-1}}^{m_1}\alpha _{m_1}^{\nu _1}q^{\mu _1}q^{\mu _2}\dots q^{\mu _{n-1}}+\dots +C_{\nu _1\nu _2\dots \nu _n}^{m_1m_2\dots m_n}\alpha _{m_1}^{\nu _1}\alpha _{m_2}^{\nu _2}\dots \alpha _{m_n}^{\nu _n}$$ (4) where $$\alpha _n^\mu =\frac{1}{2\sqrt{\pi }}\int _0^{2\pi }V^\mu (\sigma )e^{in\sigma }d\sigma $$ As we see, all the terms in the expression (4) are of the same order in the phase space variables. One can take the ansatz in such a form simply because the gauge transformations are homogeneous in the phase space variables: $$\delta _ϵV^\mu =(ϵ(\sigma )V^\mu (\sigma ))^{\prime },\delta _ϵq^\mu =\int _0^{2\pi }d\sigma ϵ(\sigma )V^\mu (\sigma ),\delta I=\{L[ϵ],I\}$$ (5) If one requires the polynomial (4) to be gauge invariant, the respective structure coefficients are subject to the following conditions: $$C_{\mu _1\mu _2\dots \mu _n}=0,$$ $$C_{\nu _1\nu _2\dots \nu _l\mu _1\dots \mu _{n-l}}^{n_1n_2\dots n_l}=0$$ if $$n_1\ne 0,n_2\ne 0,\dots ,n_l\ne 0,$$ $$C_{\nu _1\nu _2\dots \nu _l(\nu _{l+1}\dots \nu _s\mu _1)\dots \mu _{n-s}}^{n_1n_2\dots n_l\mathrm{\hspace{0.17em}0}\dots 0}=0.$$ (6) One example of such an invariant which depends on the whole set of phase space variables is the momentum tensor of the string: $$\mathcal{M}^{\mu \nu }=q^\mu \alpha _0^\nu -q^\nu \alpha _0^\mu +\underset{n\ne 0}{\sum }\frac{i}{n}\alpha _n^\mu \alpha _{-n}^\nu $$ (7) Using the relations (6) one proves that an arbitrary polynomial gauge invariant can be expressed, modulo constraints, in terms of the momentum tensor (7) and the polynomials (3). It turns out that the proposed polynomial invariants form only a subalgebra of the algebra of physical observables. Actually, they do not exhaust the complete set of string gauge invariants, because there are physically different points on the constraint surface of the string that cannot be distinguished with the help of these polynomials. The latter means that the complete set of string gauge invariants must include observables which are not polynomial in the phase space variables. Unfortunately, no such invariant is known yet. ## Quantization problem Let us discuss the quantization of the polynomial invariants. As we know, the momentum tensor (7) of the bosonic string can be quantized without any problems. The same holds for the polynomials (3) with $`n<4`$. For the invariants (3) with $`n`$ greater than or equal to 4 the situation drastically changes. Namely, the invariants directly defined in the Fock space of the string do not commute with the Virasoro generators, because of quantum corrections. This means that the respective operators are not defined in the space of physical states. The given situation relates to the common quantization problem of systems with constraints. It would be rather strange if quantum corrections did not destroy some key relations of a classical theory. In some cases this leads us to the true values of the critical parameters; in other cases it means that it is not possible to construct a consistent quantum theory. There is, however, the third case, when we can simply say that some relations do not have a consistent quantum interpretation, but the quantum theory does exist. We think that the problem we face can be solved.
Firstly, let us note that the classical invariants are defined ambiguously off the constraint surface. Namely, one can add to the previous polynomial an expression which vanishes on the constraints. The terms which vanish classically may contribute to the quantum commutator between the invariant and the BRST charge, $$\mathrm{\Omega }=\underset{n}{\sum }L_nC_{-n}+\underset{nm}{\sum }mP_{-n}C_mC_{n-m},$$ (8) where $`C_n`$ and $`P_n`$ are canonical ghosts and $$L_n=\frac{1}{2}\underset{k}{\sum }\alpha _k^\mu \alpha _{n-k}^\nu \eta _{\mu \nu }$$ (9) are the Virasoro generators. And it is this arbitrariness that can be used for constructing genuine quantum BRST invariants polynomial in the string operators and ghosts. Let us summarize the things to be done: i) to add to the naive invariant the most general expression which vanishes on the constraints, $$I=I[V],[I,L_n]=\underset{m}{\sum }W_{nm}L_m;$$ (10) ii) to construct a quantum operator with ghosts using the BFV method, $$\stackrel{~}{I}=I+\underset{nm}{\sum }C_mP_nW_{nm}+\dots ;$$ iii) to evaluate the commutator between the constructed operator and the BRST charge, $$[\stackrel{~}{I},\mathrm{\Omega }]=\underset{n}{\sum }[I,L_n]C_{-n}+\underset{nkl}{\sum }[W_{kl}C_lP_k,L_nC_{-n}]+\underset{nmkl}{\sum }mW_{kl}[C_lP_k,P_{-n}C_mC_{n-m}]+\dots $$ (11) where $`\dots `$ means terms with higher structure functions. While doing this, it is necessary to account only for one-loop contributions, because higher corrections simply vanish. At last we can obtain the equation for the additional terms. This equation could be solvable, because the ghost terms give quantum corrections of the same order as those arising in the anomaly. ## Conclusion Thus we have a set of physical observables which exhausts all polynomial invariants. We have proved that the set of the polynomial invariants is not complete. Also, we pose the question whether it is possible to realize the BRST cohomology of the string operator algebra in terms of operators polynomial in the string modes and ghosts. These invariants, unlike the vertex operators, can clarify the connection between the reduced phase space of the bosonic string and the BRST cohomologies of the corresponding quantum theory .
# A planet orbiting the star Gliese 86 (Based on observations collected at the La Silla Observatory, ESO Chile, with the echelle spectrograph CORALIE at the 1.2 m Euler Swiss telescope.) ## 1. The search for extra-solar planets with CORALIE Almost 20 planetary companions with minimum masses ($`m_2\mathrm{sin}i`$) less than 10 $`M_J`$ have so far been detected by very high-precision radial velocity surveys (Butler et al. 1999, Marcy et al. 1999a, Fischer et al. 1999, and Marcy et al. 1999b for a review of older detections). The semi-major axes of their orbits range from very small (0.05 AU) to 3 AU. Some have eccentric orbits, others have secondary, more massive companions, and some have both. The large observed spread in the orbital characteristics of all known planetary candidates causes some difficulties in understanding their formation process in comparison with our own solar system. It also raises the issue of the real nature of these objects, particularly the more massive ones. In June 1998 we initiated a systematic and large-scale exoplanet search survey (1600 nearby G and K stars) in the southern hemisphere with the new 1.2 m alt-azimuth Euler Swiss telescope at La Silla, ESO Chile. The technique we use to detect planets is to look for the stellar reflex motion due to an orbiting planet by very precise radial velocity measurements. The CORALIE echelle spectrograph is used to measure stellar spectra, from which the Doppler shift is then computed. CORALIE is an improved version of the ELODIE spectrograph (Baranne et al. 1996), with which, 4 years ago, the first extra-solar planet orbiting a star (51 Peg) was discovered (Mayor & Queloz 1995). The CORALIE front-end adapter is located at the Nasmyth focus of the Euler telescope. Two sets of two fibers can alternately feed the spectrograph, which is located in an isolated and temperature-controlled room. The set of fibers used for high-precision radial velocity measurements includes a double scrambler device designed by Dominique Kohler (see Queloz et al. 1999 for references) to improve the stability of the input illumination of the spectrograph. Thanks to a slightly different optical combination at the entrance of the spectrograph and the use of a 2k by 2k CCD camera with smaller pixels (15 μm), CORALIE has a higher resolution than ELODIE. A resolving power of 50,000 ($`\lambda /\mathrm{\Delta }\lambda `$) is achieved with 3-pixel sampling. As with the ELODIE spectrograph, CORALIE makes use of on-line reduction software that computes the radial velocity of stars several minutes after their observation (see Baranne et al. 1996 for details about the reduction process). The simultaneous thorium technique is used to correct any instrumental drifts occurring during the stellar exposure (see Queloz et al. 1999 for details). The many improvements carried out in the thermal control and the resolution of the instrument, as well as in the reduction software, yield a factor of two improvement in the instrument precision compared with ELODIE. ## 2. A planet orbiting Gliese 86 Gliese 86 (HD 13445, HIC 10138) is a bright ($`m_V=6.12`$) early K dwarf ($`B-V=0.81`$, $`T_{\text{eff}}=5350`$ K, $`\mathrm{log}(g)=4.6`$, Flynn & Morell 1997) from the southern hemisphere, in the Eridanus (River) constellation. It is a close star, 10.9 pc away from our Sun ($`\pi =91.6`$ mas, measured by the Hipparcos satellite). Its absolute magnitude is 6.257, yielding (with $`BC=-0.2`$) a luminosity $`L=0.27L_{\odot }`$.
It is a high proper motion star, slightly metal-poor (\[Fe/H\]$`=-0.24`$, Flynn & Morell 1997). It has low chromospheric activity ($`\mathrm{log}R_{\text{HK}}^{\prime }=-4.74`$, Saar & Osten 1997). No rotational broadening has been detected (Saar & Osten 1997), and there is only an upper limit on the Li content in its atmosphere (N(Li)$`<0.24`$, Favata et al. 1997). From Hipparcos photometry, the star is stable ($`\sigma (\text{H}_p)=0.008`$). In summary, Gliese 86 bears all the characteristics of a few-billion-year-old K dwarf from the old disk population. In the H-R diagram, Gliese 86 lies slightly below the ZAMS. However, given its low metal content, we believe that the uncertainties in the temperature and bolometric correction estimates of Gliese 86 are large enough that its location below the ZAMS is not significant. A 15.8-day period radial velocity variation has been detected from the CORALIE measurements (Fig. 1). Table 1 lists the orbital elements of the best-fit (least-squares) orbital solution, after correction for a 0.36 m s⁻¹ d⁻¹ linear drift (see below). Assuming 0.8 $`M_{\odot }`$ for the primary, and that the radial velocity variation is caused by the orbital motion of the star, we conclude that a 4 $`M_J`$ companion (minimum mass) is orbiting Gliese 86. The planetary companion to Gliese 86 is close to its host star, with a 0.11 AU orbital semi-major axis. It has a low, although (at the 99% level) significantly non-zero, eccentricity (Lucy & Sweeney 1971). The 7 m s⁻¹ residual from the fit indicates very low intrinsic instrumental errors from night to night, taking into account that each measurement has an approximately 5 m s⁻¹ photon noise error and could also be affected by some low-level radial velocity variations intrinsic to the stellar atmosphere. Such a low instrumental error agrees with the instrumental error measured by the $`P(\chi ^2)`$ analysis of all the stars of our sample observed so far (about 300); see Duquennoy et al. (1991) for a detailed description of the instrumental error estimate by the $`P(\chi ^2)`$ statistic. A long-term drift of the radial velocity (0.5 m s⁻¹ d⁻¹) is observed from 20 years of CORAVEL measurements (Fig. 2). With the 300 m s⁻¹ typical precision of the CORAVEL radial velocities, the short-period orbit is only marginally detected in the last measurements. Interestingly, in the recent CORALIE measurements a smaller drift of 0.36 m s⁻¹ d⁻¹ is observed (Fig. 3). A statistical analysis of the reliability of the drift correction shows that an orbital solution without the drift correction has a 0.0001% chance to occur ($`\chi ^2210`$). The probability jumps to 40% when the linear drift correction is taken into account. A conservative 7 m s⁻¹ instrumental error is assumed for this calculation. The period measurement of the short-period planetary companion is still not accurate enough to correct the old CORAVEL data for their extra scatter and obtain a precise drift estimate from these measurements. Thus, the difference in drift slope between the old CORAVEL measurements and the recent CORALIE measurements is perhaps significant, but remains to be confirmed by further measurements during the course of the next season. The long-term radial velocity variation is the signature of a remote and more massive companion.
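Returning to the short-period companion: Table 1 is not reproduced here, but the implied radial-velocity semi-amplitude can be sketched from the quoted quantities (our back-of-the-envelope check; the formula is the standard Keplerian one, and the inputs are the values quoted above):

```python
import numpy as np

G     = 6.674e-11        # m^3 kg^-1 s^-2
M_SUN = 1.989e30         # kg
M_JUP = 1.898e27         # kg
DAY   = 86400.0          # s

def semi_amplitude(m2sini_mjup, period_days, m_star_msun, e=0.0):
    """RV semi-amplitude K = (2 pi G / P)^(1/3) * m2 sin i
       / ((M* + m2)^(2/3) * sqrt(1 - e^2))."""
    m2 = m2sini_mjup * M_JUP
    Ms = m_star_msun * M_SUN
    P = period_days * DAY
    return (2 * np.pi * G / P)**(1/3) * m2 / ((Ms + m2)**(2/3) * np.sqrt(1 - e**2))

# Quoted orbit: P = 15.8 d, m2 sin i = 4 M_J, M* = 0.8 M_sun, e ~ 0
print(round(semi_amplitude(4.0, 15.8, 0.8), 0))
# ~375 m/s, consistent with a marginal detection at CORAVEL's 300 m/s precision
```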
The use of the historical radial velocity data together with the CORAVEL and CORALIE observed drifts suggests a stellar companion with a period longer than 100 yr (semi-major axis larger than 20 AU). A direct detection would be worth attempting, since the star is close to us. An alternative explanation of the observed 15.8 day period radial velocity change of Gliese 86, other than a low mass companion, would be activity-related phenomena (Saar & Donahue (1997)). However, Gliese 86 does not exhibit any of the classical activity signatures seen in young stars, as for example HD166435 (Queloz et al. in prep). Gliese 86 has no chromospheric activity. No rotational broadening is detected either, and its photometry is very stable. Therefore, the planetary hypothesis is the most likely interpretation of the observed periodic radial velocity changes. ## 3 Discussion The observed orbital characteristics of planets are the direct outcome of their formation processes and of their evolution. Therefore, these characteristics may be used to retrace their formation mechanisms and to constrain theories of planetary formation. Recent spectroscopic studies of stars where planets have been detected have shown that the host star itself may also bear marks of some processes occurring during planetary formation (Gonzalez (1997)). More specifically, a large number of planets with short-period orbits have surprisingly metal rich host stars. These planets, very close to their stars, are usually referred to as 51 Peg-like, or “hot Jupiters”. The metallicity of their host stars is much higher than that of the “average field star” and is not the result of a selection process in the survey samples (Marcy et al. 1999b). Typical metallicities similar to those of field stars may be assumed for the stars from the various surveys, since these star samples have not been selected by any metallicity criterion. If we look in more detail at all the planets with semi-major axes less than 1 AU, where the number of detections is significant and not strongly high-mass biased, we observe a relation between the semi-major axes of the planetary orbits and the metal content of their host stars. All planets with semi-major axes less than 0.08 AU seem to have a star with an unusually high metal content compared to other stars with planets (see Fig. 4). Indeed, a comparison of the two distributions using the Kolmogorov-Smirnov test indicates a 99% probability that the two distributions are different. The unusual metal content of the short-orbit planets had been pointed out shortly after the detection of 51 Peg (Mayor & Queloz (1995)). But now, with the large number of detections of similar systems and of others with slightly larger semi-major axes, we observe a typical distance (or period) below which this unusually high metal content is systematically observed. A possible explanation may be related to some very specific processes occurring during the formation of these very close systems. However, the large uncertainties in the estimated ages of these systems and the small mass range of the primaries are noteworthy. Therefore it is difficult to completely rule out a stellar population effect. The migration theory (Lin & Papaloizou (1986), Lin et al. (1996), Ward (1997), Trilling et al. (1998)) is one of the theories that have been invoked to explain the existence of very close planets, which were not described by the “classic” solar-system planetary formation model (Boss (1995), Lissauer (1995)).
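The two-sample Kolmogorov-Smirnov comparison mentioned above is a one-line test with standard tools. The sketch below is purely illustrative: the \[Fe/H\] arrays are hypothetical placeholders, since the actual samples behind Fig. 4 are not listed in the text:

```python
import numpy as np
from scipy import stats

# Hypothetical [Fe/H] values, for illustration only; the actual samples
# (planets with a < 0.08 AU vs. the rest with a < 1 AU) are in Fig. 4.
feh_close = np.array([0.20, 0.17, 0.25, 0.12, 0.30])              # a < 0.08 AU
feh_other = np.array([-0.24, 0.00, -0.05, 0.10, -0.29, 0.05,
                      -0.10, 0.14])                                # larger orbits

D, p = stats.ks_2samp(feh_close, feh_other)
print(f"KS statistic D = {D:.2f}, p-value = {p:.3f}")
# A p-value around 0.01 corresponds to the quoted "99% probability that
# the two distributions are indeed different".
```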
But so far, we have a poor understanding of the way a planet stops its migration. The two different metallicity distributions pointed out in this article are perhaps a new clue to a better understanding of the migration process or of the likelihood of in-situ formation (Bodenheimer et al. (1999)). Other scenarios, involving strong gravitational interactions with other planets, have been proposed as a possible origin of small planetary orbits (Weidenschilling & Marzari (1996), Rasio et al. (1996)). Since these models are purely driven by dynamical interactions, it seems a priori difficult to expect any metallicity enhancement effect. Moreover, such scenarios do not really explain the very small orbit planets like 51 Peg. However, if one believes that the high metal content of the star is the end-result of a planet swallowed by the star (Sandquist et al. (1998)), gravitational interaction is a possible means of sending planets into their stars. The precision of the surveys from which the planets have been found so far has limited them to the detection of systems with a $`V_r`$-amplitude ($`K`$) larger than 25 m s<sup>-1</sup>. Therefore it is still premature to compare the masses of the two sets of planets, because there is a direct relationship between the amplitude of the radial velocity curve, the semi-major axis, and the minimum mass of the planet that can be detected. However, if we limit our comparison to a sample free of such bias, including only planets having semi-major axes smaller than 0.3 AU, we may be inclined to believe that metal rich stars tend to host, on average, less massive planets than solar-metallicity or metal poor stars. The mass distribution of all the planets that have been detected can also be studied with a restricted sample of planetary systems, in order to avoid a selection biased towards small orbits and massive systems. We see that with a sample restricted to minimum masses greater than 1 M<sub>J</sub> and semi-major axes smaller than 1.3 AU, the number of planets per mass bin is almost constant from 1 to 5 M<sub>J</sub> and then drops suddenly for more massive companions. This reinforces the idea that a maximum planet mass may lie somewhere close to 5-7 M<sub>J</sub>, as pointed out earlier by Mayor et al. (1998). New discoveries and improved detection precision will allow us to get a better picture of the relation between the mass and certain orbital characteristics of planets and some peculiarities seen in the atmospheres of their host stars. This will perhaps enhance our understanding of the mechanisms of planetary formation. ###### Acknowledgements. We are grateful to all the staff who actively participated in the building and the rapid commissioning of the new 1.2m Euler telescope and the CORALIE echelle spectrograph at La Silla, in particular D. Huguenin, C. Maire, E. Ischi, G. Russinielo, and M. Fleury. We also thank all the staff of the Haute-Provence observatory who contributed to the construction of the CORALIE spectrometer, especially D. Kohler and D. Lacroix, as well as Andre Baranne (from Marseilles Observatory) who designed the optics. We thank N. Molawi for his help in the computation of Gliese 86 evolution tracks and P.R. Lawson for many improvements and corrections to the text. We thank the Geneva University and the Swiss NSF (FNRS) for their continuous support of this project. Support from FCT to N.S. in the form of a scholarship is gratefully acknowledged.
# Wild cables and survivability of macroscopic molecular structures in hot tokamak plasmas A.B. Kukushkin, V.A. Rantsev-Kartinov INF RRC “Kurchatov Institute”, Moscow, 123182, Russia kuka@qq.nfi.kiae.su, rank@qq.nfi.kiae.su Evidence is found for tubular rigid-body structures in tokamak plasmas, similar to the long-living filaments observed in a Z-pinch (Kukushkin, Rantsev-Kartinov, Proc. 26-th EPS conf., http://epsppd.epfl.ch/cross/p2087.htm). These structures are suggested to be “wild cables” produced by the channelling of EM energy which is pumped from the external electric circuit and propagated to the plasma core in the form of high-frequency EM waves along hypothetical (carbon) microsolid skeletons assembled during the electric breakdown. It is shown that such skeletons may be protected from the high-temperature ambient plasma by the TEM waves produced thanks to the presence of the microsolid skeletons. PACS: 52.55.Fa 1. Introduction. Recently the anomalously high survivability of some filaments in laboratory plasmas was illustrated \[1(a)\] by tracing the history of a typical long rectilinear rigid-body block in a Z-pinch. The pictures were taken in visible light at different times and from different positions, during about half a microsecond, which is comparable with the entire duration of the Z-pinch discharge (see Fig. 1 in \[1(a)\]). The original images were processed with the help of the method \[2(a,b)\] of multilevel dynamical contrasting (MDC) of images. The phenomenon of long-living filaments (LLFs) in various laboratory plasmas (gaseous Z-pinches \[2(a,c)\], plasma foci \[2(e)\], and tokamaks \[2(d)\]) has led us to the conclusion \[1(a)\] (see also references therein) that only quantum (molecular) long-range bonds inside LLFs may be responsible for their observed survivability, rather than the mechanisms of a classical-particle plasma. Specifically, carbon nanotubes have been proposed as the major microscopic building blocks of the respective microsolid component of LLFs, because such nanotubes may be produced in various electric discharges (see, e.g., ). 2. Rigid-body structures in tokamak plasmas. An analysis of available databases, carried out with the help of the MDC method \[2(a,b)\], shows the presence of tubular structures. Typical examples for the tokamaks TM-2, T-4, T-6 and T-10 (major radius $`R=0.4,0.9,0.7,1.5m`$, minor radius $`a=8,20,20,33cm`$, toroidal field $`B_T=2,4.5,0.9,3T`$, total current $`I_p\approx 25,200,100,300kA`$, electron temperature $`T_e(0)\approx 0.6,3,0.4,2keV`$, electron density $`n_e(0)\approx (2,3,2,3)\times 10^{13}cm^{-3}`$, respectively) are given in . The figures presented there were taken in visible light with the help of a streak camera and a high-speed camera. The effective time exposure is about $`10\mu sec`$.
The major features of the structuring are as follows: (a) the length scale of the rigid-body tubular structuring varies over a broad range, from comparable with the minor radius of a tokamak down to less than the millimeter scale; (b) the typical tubule seems to be a cage assembled from (much) thinner, long rectilinear rigid-body structures which look like solid thin-walled cylinders; (c) the (almost rectilinear) tubules form a network which starts at the farthest periphery and is assembled from tubules of various directions; (d) a radial sectioning of the above network is resolved, which looks like a distinct heterogeneity at certain magnetic flux surface(s) (such a sectioning was suggested \[1(b),2(d)\] to cause the observed internal transport barriers in tokamaks). The pictures include, in particular, the periphery of the T-10 tokamak plasma illuminated by the carbon pellet emission (the pellet track is outside the picture). The system of concentric circles and the inner, almost rectilinear tubule located approximately on the axis of these circles together form a sort of squirrel’s wheel. The major axis of this system is directed nearly orthogonally to the toroidal magnetic field. The system is 5 cm long and of $`4÷4.5cm`$ diameter. The central and boundary vertical tubules are of $`4mm`$ diameter. Similar structures appear to form in all tokamaks, i.e. regardless of pellet injection. 3. Probable mechanism of formation and survivability of microsolid skeletons in tokamak plasmas. (i) A deposit of carbon nanotubes, of the relevant quantity, is produced at the inner surface of the chamber during discharge training, from either graphite-containing construction elements (e.g. limiters or walls) or carbon films produced by the deposition of the organic oils normally used in the vacuum pumping systems (the nanotubes may form due to the rolling up of monolayers ablated from solid surfaces or thin films). (ii) Electrical breakdown occurs along the chamber’s surface (or a part of it, namely the inner side of the torus) and relies on the substantially enhanced rate of (cold) autoemission and thermoelectric emission of electrons by the nanotubes (as compared to macroscopic needles). (iii) The microsolid skeletons are assembled from individual nanotubes, which are attracted and welded to each other by the passing electric current to produce self-similar tubules \[1(a)\] of macroscopic size, of centimeter length scale and larger (this electric current is produced by the poloidal magnetic field $`B_{pol}`$ pumped from the external circuit into the chamber). (iv) Once the skeleton (or a relevant portion of it) is assembled, a substantial part of the incoming $`B_{pol}`$ brakes at it and produces a cold heterogeneous electric current sheath made of conventional plasma. A part of $`B_{pol}`$ near the skeleton bounces along each rectilinear section of it (i.e. between the closest points of deviation, even a small one, from rectilinearity). This produces a high-frequency EM wave which, in turn, produces, through the force of the high-frequency (HF) pressure (sometimes called in the literature the Miller force), cylindrical cavities of depleted electron density (primary channels) around the skeletons. (v) At the skeleton’s (and plasma column) edge, the bouncing boundary of the cavity on the scrape-off layer side produces a HF valve for the incoming $`B_{pol}`$, because of the node of the standing wave at the edge.
This works as a HF converter of a part of the incoming $`B_{pol}`$, which is then transported along the skeleton in the form of EM waves. (Besides, the part of $`B_{pol}`$ which reaches the cavity in the conventional regime of diffusion of $`B_{pol}`$ is transformed into a HF field by the oscillating boundary of the cavity.) The EM waves sustain the cavity and protect the skeletons from direct access of thermal plasma particles. Therefore the skeleton appears to be the inner wire of a cable network (a wild cable network) in which the role of the screening conductor is played by the ambient plasma. In this paper, we restrict ourselves to quantifying the above picture in its quasi-stationary stage of energy inflow through the wild cable network. For the frequency $`\omega _c`$ of the major harmonic of the EM oscillations trapped in the radial direction in a cylindrical, almost-vacuum cavity of effective radius $`r_c`$ around a microsolid tubule of length $`L_c`$, one has ($`\omega _{pe}`$ is the plasma frequency, $`c`$ the speed of light): $$\omega _c\approx \pi c/L_c\ll \omega _{pe}.$$ (1) For the tokamak geometry, one has the following chain of transformations of the EM waves. The cavities at the plasma edge (they normally possess some declination with respect to the boundary magnetic surface) allow the field lines of $`B_{pol}`$ to move directly inside the cavity and thus produce the magnetic (H) wave. For the strongest EM wave among the H waves, the $`H_{11}`$ wave, one has: $`\lambda \approx 2L_c\gg \lambda _{crit}=\alpha r_c`$, where $`\lambda _{crit}`$ is the critical wavelength for free propagation of the respective EM wave in the cable ($`\alpha _{H11}\approx \pi `$). Therefore, the trapping of the $`H_{11}`$ wave in the edge cavity leads to the wiring of magnetic field lines round the inner wire, which produces TEM and electric (E) waves propagating in both directions (the strongest among the E waves is the $`E_{01}`$ wave). However, the $`E_{01}`$ wave will also be trapped in the cavity ($`\alpha _{E01}\approx 2.6`$), in contrast to the TEM wave ($`\lambda _{crit}^{TEM}=\infty `$). Also, the H and E waves, in contrast to the TEM wave, are detached from the wall (in the radial direction these waves are standing ones), so that only the TEM wave can actually maintain the boundary of the cavity. Thus, the edge cable converts a part of $`B_{pol}`$ into a HF TEM wave propagating inward. The signs of this HF field, a small part of which is reflected outward, may be found in measurements of the EM fields outside the plasma column (see below). It is assumed also that the presence of a strong external stationary magnetic field does not substantially influence the form of the cavity, because even for $`\omega _c\ll \omega _{He}`$ ($`\omega _{He}`$ is the electron gyrofrequency) the amplitude $`\vec{E}_0`$ of the HF electric field may have a non-zero component parallel to the magnetic field (in that case we assign $`\vec{E}_0`$ to the respective component of the amplitude). The distribution of the plasma density around the inner wire can be described by a set of equations for the two-temperature quasi-hydrodynamics of a plasma in a HF EM field. Under the condition $`l_E\gg r_D`$, where $`l_E`$ is the characteristic length of the spatial profile of $`E_0(\vec{r})`$ and $`r_D`$ is the Debye radius, one can neglect the deviation from quasi-neutrality and arrive at the quasi-Boltzmann distribution (see e.g. \[6(b)\]): $`n_e=n_{e0}\mathrm{exp}(-\mathrm{\Psi }/(T_e+T_i))`$, where $`\mathrm{\Psi }=e^2E_0^2/(4m_e\omega _c^2)`$ and $`n_{e0}`$ is the background density of plasma electrons.
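A rough numerical check of Eq. (1) and of the quasi-Boltzmann depletion is straightforward. In the sketch below, $`L_c=3cm`$, $`U_0=30kV`$ and $`T_e=T_i=100eV`$ anticipate the estimates used later in the text, while the sampled radii are illustrative; the naive formula already gives a depletion of many orders of magnitude at millimetre distances, qualitatively consistent with the Poisson-equation result quoted below:

```python
import math

c, e, me = 3.0e8, 1.602e-19, 9.109e-31   # SI units

# Eq. (1): trapped-mode frequency for a skeleton of length L_c = 3 cm
L_c = 0.03
omega_c = math.pi * c / L_c
print(f"nu_c = {omega_c / (2 * math.pi):.1e} Hz")   # ~5e9 Hz

# Quasi-Boltzmann depletion n_e/n_e0 = exp(-Psi/(Te+Ti)), with
# Psi = (e*E0)^2/(4*me*omega_c^2) and the TEM scaling E0(r) ~ U0/r.
U0 = 30e3                    # effective voltage bias, V
T_sum = 200.0 * e            # Te + Ti = 200 eV, in joules
for r in (1e-3, 2.5e-3, 5e-3):                     # illustrative radii, m
    Psi = (e * U0 / r) ** 2 / (4 * me * omega_c ** 2)
    print(f"r = {1e3 * r:.1f} mm: n_e/n_e0 ~ exp(-{Psi / T_sum:.0f})")
```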
The condition for plasma detachment from the inner wire reads: $$eU_0\gtrsim 2\pi (r_c/L_c)\sqrt{Am_ec^2(T_e+T_i)},\qquad A=(r_w^2/r_c^2)\mathrm{ln}(n_{e0}/n_{emin}),$$ (2) where $`U_0`$ is the effective voltage bias of the TEM wave in the cable ($`E_0(r)\approx U_0/r`$; $`r`$ is the radial coordinate in a circular cylindrical cable, $`r_w`$ the radius of the inner wire), and $`n_{emin}`$ is the minimal density permitted, at a temperature $`T_e`$, for the inner wire not to be destroyed by the plasma impact. For the tokamak case ($`n_{e0}\approx 10^{13}cm^{-3}`$), we take $`A\approx 5`$. Equation (2) is to be coupled to the condition of applicability of the concept of the $`-\nabla \mathrm{\Psi }`$ force, $`\rho \ll l_E`$ ($`\rho `$ is the amplitude of the electron’s oscillations in the HF electric field). For our estimates this limitation, however, may be weakened, and it takes the form: $$eU_0\lesssim \pi ^2m_ec^2r_c(r_c-r_w)/L_c^2.$$ (3) And finally, the HF electric field in the cables may be related to the observable turbulent electric fields, because the wild cables are strong sources of electrostatic oscillations in the plasma. As far as there should be a sort of feedback between the plasma and the cavity, one may consider the cable’s cavity as a soliton with such a strong reduction of the eigenfrequency (a redshift) that the soliton’s velocity becomes independent of dispersion. For $`W/nT\lesssim 1`$ ($`W=E_0^2/16\pi `$) this gives the rough estimate: $$W/nT\approx (1-\omega _c/\omega _{pe}).$$ (4) At the quasi-stationary stage of the discharge, one may evaluate the spatial distribution of the amplitude $`E_{turb}`$ of the turbulent electric field, regardless of its spectral distribution, as being described, in the radial direction with respect to an individual cable, by the scaling law of the TEM wave. For the contribution of a single cable, one has: $$E_{turb}(r)\approx U_0/r.$$ (5) Equations (1), (2) and (3), along with the rough estimates of Eqs. (4) and (5), establish a set of relations that enables one to evaluate the plausibility of the presence of wild cables in tokamak plasmas, using available data on the measured values of $`\omega _c`$ (and/or $`L_c`$) and $`E_{turb}`$. Now we can test the problem on typical data from the periphery of the T-10 tokamak, keeping in mind the closeness of the T-10 regimes analyzed in and those of Figure 4. First, the spectra of the HF EM field in the gap between the plasma column and the chamber, measured in the GHz frequency range, revealed a distinct bump at $`\nu _c\approx (4÷5)\times 10^9Hz`$, of width $`\approx 2\times 10^9Hz`$, which always exists in ohmic heating regimes and increases with electron cyclotron heating (this bump is a stable formation; it moves to lower frequencies and turns into a peak only under conditions of strong instabilities, especially the disruption instability). This gives $`L_c\approx 3cm`$. Note that this is in reasonable agreement with the data from the high-speed camera picture of the T-10 plasma periphery, where $`L_c\approx 4÷5cm`$. Second, the analysis of observations of the Stark broadening of deuterium spectral lines (and their polarization state) at the periphery of the T-10 tokamak, in the region of $`T_e\approx 100eV`$, allowed an estimate of the spectral range of the HF electric fields ($`\omega \sim \omega _{pe}\sim 10^{11}Hz`$), their amplitude ($`E\approx 10÷20kV/cm`$) and their angular distribution. For $`L_c=3cm,T_e=T_i=100eV`$, Eqs. (2) and (3) give the constraint $`S\equiv (r_c-r_w)/L_c\gtrsim 0.03`$. For $`(r_c-r_w)\sim r_c`$, from Eq. (2), one can find the absolute minimum of the voltage bias: $`(U_0)_{min}\approx 5kV`$. For $`S=0.03`$, Eqs. (2) and (3) give $`U_0\approx 5kV`$, while for $`S=0.1`$ one has $`15\lesssim U_0(kV)\lesssim 50`$. Further, Eq.
(4) gives $`E_0(r_c)\approx 50kV/cm`$, while, for $`r_c\approx 1÷2mm`$ and $`\langle r\rangle \approx 1÷3cm`$ ($`\langle r\rangle `$ is the average distance between individual cables in the region of observation), Eq. (5) gives the estimate $`E_0(r_c)\approx 10^2kV/cm`$, or $`U_0\approx 10kV`$. The results of a numerical solution of the Poisson equation show that, e.g., for $`U_0=30kV`$, at distances $`r\approx 2÷3mm`$ the plasma density falls, with respect to its background value, by seven to eight orders of magnitude. 4. Conclusions The experimental data of Sec. 2 and the model of Sec. 3 support the hypothesis that a plasma with long-living filaments is a form of the fourth state of matter which is an intricate mixture of the three other states (gaseous, liquid and solid). The presence of the inner wire (namely, an electrically conducting microsolid skeleton) in the wild cable is responsible not only for the observed anomalous mechanical stability of this structure but also for the formation of the TEM waves in the cavity, which is critical for the self-sustainment of the cavity and for the transport of EM energy to the plasma core. It follows that the observed structuring could be: (i) a strong candidate for the nonlocal (non-diffusive) component of heat transport (and the observed phenomena of fast nonlocal responses) in tokamaks; (ii) a powerful source of nonlinear waves and (strong) turbulence throughout the plasma volume; (iii) a low-dissipation waveguide responsible for the spatial profile of the poloidal magnetic field in tokamaks, rather than the total resistance of the plasma (in agreement with the observed applicability of the Spitzer, or close, resistivity to describing the ohmic heat release in the plasma); (iv) a universal phenomenon in well-prepared laboratory plasmas and in space; in particular, similar wild cables may form in gaseous and wire-array Z-pinches and be responsible for the fast nonlocal transport of EM energy toward the Z-pinch axis. Acknowledgments. The authors are indebted to V.M. Leonov, S.V. Mirnov and I.B. Semenov, K.A. Razumova, and V.Yu. Sergeev for providing the originals of the data from the tokamaks T-6, T-4, TM-2, and T-10, respectively. The authors appreciate discussions of the paper with V.V. Alikaev, V.I. Poznyak and V.L. Vdovin, and with participants of seminars at the Institute of Nuclear Fusion. Our special thanks go to V.I. Kogan for his interest and support, and to V.D. Shafranov for a valuable discussion of the paper. REFERENCES 1. Kukushkin A.B., Rantsev-Kartinov V.A., Proc. 26-th Eur. Phys. Soc. Conf. on Plasma Phys. and Contr. Fusion, Maastricht, Netherlands, June 1999, (a) p. 873 (http://epsppd.epfl.ch/cross/p2087.htm); (b) p. 1737 (p4096.htm). 2. Kukushkin A.B., Rantsev-Kartinov V.A., (a) Laser and Part. Beams, 16, 445 (1998); (b) Rev. Sci. Instrum., 70, 1387 (1999); (c) ibid., p. 1421; (d) ibid., p. 1392; (e) Kukushkin A.B., et al., Fusion Technology, 32, 83 (1997). 3. Eletskii A.V., Physics-Uspekhi, 167, 945 (1997). 4. Kukushkin A.B., Rantsev-Kartinov V.A., Preprint of the RRC Kurchatov Institute, IAE-6157/6, Moscow, October 1999 (submitted to JETP Lett.). 5. Gaponov A.V., Miller M.A., Zh. Exp. Teor. Fiz. (Sov. Phys. JETP), 34, 242 (1958); Volkov T.F., in: Plasma Physics and the Problem of Controlled Thermonuclear Reactions, Ed. M.A. Leontovich \[in Russian\], USSR Acad. Sci., 1958, Vol. 3, p. 336, Vol. 4, p. 98; Sagdeev R.Z., ibid., Vol. 3, p. 346. 6. (a) Gorbunov L.M., Uspekhi Fiz. Nauk (Sov. Phys. Uspekhi), 109, 631 (1973); (b) Litvak A.G., in: Voprosy Teorii Plazmy (Reviews of Plasma Phys.), Eds. M.A. Leontovich and B.B. Kadomtsev \[in Russian\], Vol. 10, p. 164. 7. Poznyak V.I., et al., Proc.
1998 ICPP and 25-th Eur. Phys. Soc. Conf. on Plasma Phys. and Contr. Fusion, Prague, 1998, ECA Vol. 22C (1998), p. 607. 8. Rantsev-Kartinov V.A., Fizika Plazmy (Sov. J. Plasma Phys.), 14, 387 (1987); Gavrilenko V.P., Oks E.A., Rantsev-Kartinov V.A., Pis’ma Zh. Exp. Teor. Fiz. (JETP Lett.), 44, 315 (1987).
# Efficient absolute aspect determination of a balloon borne far infrared telescope using a solid state optical photometer ## 1 Introduction The Tata Institute of Fundamental Research (TIFR) 1 meter balloon-borne far infrared (FIR) telescope is flown regularly to carry out observations of Galactic star forming regions, external spiral galaxies, etc. (Daniel et al. 1984; Bisht et al. 1989). The orientation and pointing system of this telescope uses a star tracker (ST) as a two-axis angular position sensor (Almeida et al. 1983; Ghosh & Tandon 1982). A bright optical guide star, $`m_B<5`$, within the field of view (2°) of this ST provides the positional reference. Since the space angle between the nearest usable guide star and the FIR target ($`\eta `$) is typically $`>`$ 3°, the star tracker is mechanically offset with respect to the main telescope about the two main control axes (viz., elevation & cross-elevation). The mechanical offset system allows for $`\pm `$ 4.5° motion about each axis (in steps of $`\sim `$ 20″). The mechanical offsetting of the ST is effected by a pair of stepper motor driven screws through trains of gears, whose positions are measured by shaft encoders. Although the pointing jitter of the telescope orientation system is $`\sim `$ 20″ rms (adequate for observations at 200 $`\mu `$m, where the diffraction limit is 50″), the achieved absolute positional accuracy is only $`\sim `$ 2–4′ (for $`\eta >`$ 3°), due to fabrication defects of mechanical components. The above implies the necessity of in-situ absolute position calibration of the Cassegrain focal plane of the telescope. In the past, a focal plane photomultiplier tube (FPPM) based optical photometer (sensitive up to $`m_B`$ = 9.0 for 3 sec integration, typical of observational rasters) has been used successfully to improve the absolute aspect accuracy to $`\sim `$ 1′ for $`\eta `$ as large as 5° (Ghosh et al. 1988; Das & Ghosh 1991). With the introduction of bolometer arrays in our two-band (12 channel) FIR photometer (Verma, Rengarajan & Ghosh 1993), the use of the FPPM (which is effectively a single-pixel device) for achieving absolute aspect leads to very poor observational efficiency. The multi-element solid state optical photometer (SSOP) is a solution, which is briefly described in this paper. Sections 2 and 3 describe the SSOP and the relevant software processing schemes. The results from a recent balloon flight, which quantify its performance, are presented in Section 4. ## 2 Solid State Optical Photometer In order to achieve good observational efficiency, the FOV subtended by the entire detector array of the FIR instrument must at least be covered by the SSOP. The SSOP has to detect stars while FIR observations are in progress (e.g. while the sky is chopped by wobbling the secondary mirror at 10 Hz and scanned at 0.5–1.0 arcmin/sec). These requirements translate to: resolution element $`<`$ 1′; sensitivity of $`m_R\sim `$ 10 (for an integration time corresponding to the typical raster scan); and a dynamic range of $`10^4`$. The detector selected is an EG&G silicon photodiode array, PDA-20-2, with 20 elements. Each element is 0.94 mm x 4.0 mm in size and the pitch is 1.0 mm, resulting in a very small dead zone. It has an NEP of 7$`\times `$ 10<sup>-15</sup> W Hz<sup>-1/2</sup> (at +23° C) and an operating temperature range of +70° C to -55° C (the ambient temperature at balloon float altitude is $`\sim `$ -50° C).
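As a consistency check on the quoted angular sizes, the plate scale of an f/8, 1 m telescope converts the detector geometry into the pixel sizes given in the next paragraph. A minimal sketch (the 2 mm binned dimension anticipates the hardware binning described below):

```python
RAD2ARCSEC = 206265.0
focal_length_mm = 8.0 * 1000.0                 # f/8 on a 1 m aperture
scale = RAD2ARCSEC / focal_length_mm           # arcsec per mm at the focal plane
print(f"plate scale = {scale:.1f} arcsec/mm")  # ~25.8 arcsec/mm

# Binned pixel of 2 mm x 4 mm (two 1 mm elements in parallel; see below):
el, xel = 2.0 * scale / 60.0, 4.0 * scale / 60.0
print(f"pixel = {el:.2f}' (El) x {xel:.2f}' (XEl)")   # ~0.86' x 1.72'
```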
Two consecutive elements are hardware “binned” (by connecting them in parallel) to implement an effective “pixel” of 0.87′ (El) x 1.7′ (XEl) size at the Cassegrain focal plane of the 1-meter (f/8) telescope. Only 16 of the 20 elements of the PDA have been used (i.e. 8 pixels). Hence, the used part of the PDA (a 1 pixel $`\times `$ 8 pixel array) subtends an angle of 1.7′ $`\times `$ 6.9′ on the sky. A two-stage baffle with an opening corresponding to $`\sim `$ f/7 precedes the PDA. The PDA has a reasonable spectral response from 5000 Å to 10500 Å, with a peak responsivity of 0.6 A/W at 9000 Å. The PDA is used in the photovoltaic mode. A bank of trans-impedance amplifiers (TIAs) pre-amplifies the signals from each pixel (see Fig. 1); these are placed physically close to the PDA inside an EMI-insulated chamber. The preamplified signals are buffered and fed to the 8 channel detector signal processing unit (DSPU). Each DSPU channel consists of: attenuator, buffer, composite band-pass filters, phase sensitive detector (PSD), low-pass filter, and interface to the telemetry system (see the DSPU block diagram in Fig. 2). The final DSPU outputs from all 8 pixels are sampled at 10 Hz and digitized (12 bit ADC) by the telemetry down-link. Since low-frequency / DC drifts of the PDA signals are lost in the PSD processing, two selected pixels of the PDA are additionally processed through DC-coupled stages (with much lower gain, to avoid electronic saturation) and sampled at about 0.3 Hz. This is useful for monitoring the background light level and the dark current. ## 3 Software for SSOP ### 3.1 Online processing The PDA signals are processed online at the ground station while the telescope scans a pre-selected optical star in a clean field near the far infrared target (within 20–30′). The results from this processing are used to update the telescope model for absolute aspect. The signals from all the PDA pixels and the data from the sensors relevant to the telescope aspect (all sampled at 10 Hz) are stored in a time sequence for each scan line. The time sequence of the PDA signals for each scan line is convolved with a function which represents the PDA response for a scan across a star (including the effect of sky chopping). The time corresponding to the grand maximum of the convolved signal sequence provides the telescope aspect corresponding to the target star. The resulting aspects from several relevant scan lines are combined to update / refine the existing model for the telescope aspect. ### 3.2 Off-line processing The off-line data processing involves determination of the instantaneous telescope boresight using the data from the two-axis angular position (star tracker) and rate (gyroscope) sensors used in the telescope orientation and stabilization system (Ghosh et al. 1988). The chopped SSOP signals (all 8 pixels) are gridded into a two-dimensional sky matrix (the two axes representing the telescope coordinate system, viz., elevation & cross-elevation). The signals from all 8 pixels of the SSOP are mixed using a focal plane model of their relative locations, which is determined during laboratory tests prior to the launch. The telescope raster scans are parallel to the cross-elevation axis. The cell size used in this observed (chopped) signal matrix is 0.3′ $`\times `$ 0.3′.
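The online detection scheme described above is essentially a matched filter: each scan line is cross-correlated with the expected chopped star response, and the grand maximum marks the star crossing. A minimal sketch on synthetic data (the template shape and noise level are assumptions, not the actual PDA response):

```python
import numpy as np

def detect_star(scan, template):
    """Cross-correlate a scan line with the expected (chopped) star response;
    the index of the grand maximum gives the star-crossing sample."""
    c = np.correlate(scan - scan.mean(), template - template.mean(), mode="same")
    return int(np.argmax(c))

# Synthetic 10 Hz scan line: noise plus a chopped star signature.
rng = np.random.default_rng(0)
template = np.sin(np.pi * np.arange(20) / 10.0)    # assumed chopped response
scan = 0.05 * rng.standard_normal(600)
scan[300:320] += template                          # star crossing near sample 310
print(f"star crossing at sample {detect_star(scan, template)} (expected ~310)")
```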
This observed signal matrix is deconvolved using an indigenously developed scheme based on the Maximum Entropy Method (MEM), similar to Gull and Daniell, 1978 (see Ghosh et al., 1988, for details). The 2-D point spread function (PSF) used in the MEM scheme is determined from scans across a bright star during the balloon flight. The positions of the peaks in the deconvolved optical map represent detected stars, which are compared with various catalogues (SAO, HST Guide Star Catalogue, etc.) to quantify any systematic shifts / effects. ## 4 Performance of the SSOP The SSOP system was flown during the balloon flight of the 1-meter far infrared telescope payload on March 8, 1998, from Hyderabad, in central India. The payload was at the float altitude of 31 km for 5.5 hours. During this flight, several bright stars were scanned using the SSOP (and sometimes the FPPM) to confirm the focal plane model and establish the absolute aspect of the telescope. In addition, during the scans across the FIR programme targets, the SSOP covered typically 600 square arcmin of the sky (Ghosh 1998). The 2-D point spread function (PSF) of the SSOP (corresponding to one pixel) has been generated from the observations of the star $`\rho `$ Pup. The FWHM for a point source (BS 6546) after MEM deconvolution is found to be 0.85′ $`\times `$ 1.62′ (Elev $`\times `$ Cross-elev), which is very close to the expected value. The off-line processing has been carried out for 9 mapped regions, each covering about a 30′ $`\times `$ 25′ area. Figure 3 shows the resulting optical isophot contour map from a typical observation. The brightest and the faintest stars in this map correspond to $`m_R`$ of 7.06 and 9.76, respectively. Clear detections of well identified stars are marked on this map. A total of 40 stars have been detected and identified in these 9 fields. The final absolute map coordinates are determined from the shift parameters ($`\mathrm{\Delta }`$RA, $`\mathrm{\Delta }`$Dec) which best align the peaks of the map with the coordinates of the identified stars. For the present sample of 9 mapped regions, the shift angle ($`\theta _{corr}=\sqrt{\mathrm{\Delta }RA^2+\mathrm{\Delta }Dec^2}`$) is found to increase with $`\eta `$, the offset angle between the telescope axis and the star tracker axis. The RA and Dec components of the residual angles ($`res_\alpha `$, $`res_\delta `$) show a Gaussian distribution (see Fig. 4). The standard deviations ($`\sigma (res_\alpha )`$ = 44″; $`\sigma (res_\delta )`$ = 29″) reflect the ultimate absolute aspect errors in the final maps and quantify the cumulative effects of the pointing jitter; the electronic and data processing noise; and the quality of the telescope optics (the primary & secondary mirrors are designed for FIR wavelengths and hence are very poor at optical wavelengths). By dividing the entire sample into two categories on the basis of the offset angle $`\eta `$ ($`\eta >2^{\circ }`$, & the rest), it has been found that $`\sigma (res_\alpha )`$ & $`\sigma (res_\delta )`$ are not sensitive to $`\eta `$. Hence, using the SSOP, it has been possible to achieve an absolute aspect accuracy of $`\sim `$ 0.8′ in the presence of mechanical imperfections leading to 1.5–4′ errors.
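The shift determination described above reduces to fitting a constant offset ($`\mathrm{\Delta }`$RA, $`\mathrm{\Delta }`$Dec) between the detected peaks and the catalogue positions; the per-axis residual scatter then plays the role of $`\sigma (res_\alpha )`$ and $`\sigma (res_\delta )`$. A minimal sketch with hypothetical positions:

```python
import numpy as np

def fit_shift(detected, catalog):
    """Constant-offset least-squares alignment of detected peaks with
    catalogue positions; returns the shift and the residual scatter."""
    d, c = np.asarray(detected, float), np.asarray(catalog, float)
    shift = (c - d).mean(axis=0)
    return shift, (c - d - shift).std(axis=0, ddof=1)

# Hypothetical positions (arcsec in a local tangent plane):
detected = [(10.0, 5.0), (130.0, -42.0), (-60.0, 88.0), (200.0, 17.0)]
catalog = [(100.5, 35.2), (221.3, -11.5), (30.7, 118.9), (290.8, 47.1)]
shift, sigma = fit_shift(detected, catalog)
print(f"shift = {shift.round(1)} arcsec, residual sigma = {sigma.round(2)} arcsec")
```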
The spectral response of the PDA elements is such that the $`m_R`$ magnitude of a star represents the signal expected from the SSOP. A total of 22 stars, for which $`m_R`$ could be found / estimated from the literature, have been used to calibrate the SSOP and quantify the system linearity and sensitivity. The SSOP is found to be linear within the testable range of $`2<m_R<9.7`$. The faintest star detected in our sample corresponds to $`m_R`$ = 10.9. The expected sensitivity of the SSOP (for identical observational conditions) is $`m_R`$ = 10.0. Hence, the achieved sensitivity is quite close to its design goal. The analysis of the DC-coupled channels (2 of the 8 pixels) implies a large increase in the scattered light background near the telescope focal plane during the time when the moon (illuminated fraction = 0.82) was above the horizon. The maximum observed background corresponds to $`\sim `$ 14.9 mag ($`m_R`$) arcsec<sup>-2</sup>. ## 5 Conclusions A multi-element solid state optical photometer (SSOP) has been developed and successfully used at the Cassegrain focal plane of the TIFR 1-meter balloon borne far infrared telescope. The SSOP has been used on-line as well as off-line to achieve a higher absolute positional accuracy ($`\sim `$ 0.8′) of the telescope during a balloon flight. The achieved sensitivity of the SSOP corresponds to a stellar magnitude of $`m_R\sim `$ 10.0 (for the typical raster scans used for FIR targets), which is consistent with expectations. The SSOP has also improved the observational and operational efficiency of the telescope. Acknowledgements It is a pleasure to thank the members of the Infrared Astronomy Group of TIFR for their encouragement and support.
# On integrable discretization of the inhomogeneous Ablowitz-Ladik model. ## 1 Introduction During the last few years a great deal of attention has been paid to integrable discretizations of nonlinear evolution equations. The interest is naturally justified by the needs of computational physics. One of the purposes of integrable discretization is the construction of a discrete analogue of a continuum model which preserves the main features of the latter. This point becomes especially important when one deals with inhomogeneous models. In that case even “the first step” of the discretization of a one-dimensional nonlinear evolution equation, i.e. discretization with respect to the spatial coordinate, may introduce qualitatively new features into the dynamics. So, for instance, in the case of the inhomogeneous nonlinear Schrödinger equation a constant force (i.e. a potential which depends linearly on the spatial coordinate) results only in a renormalization of the phase and velocity of the one-soliton solution, while the same force leads to oscillations of the solitons of the inhomogeneous Ablowitz-Ladik (AL) model: $$i\dot{q}_n+(1-q_nr_n)(q_{n-1}+q_{n+1})+2\chi nq_n=0$$ (1) $$i\dot{r}_n+(1-q_nr_n)(r_{n-1}+r_{n+1})+2\chi nr_n=0$$ (2) (here $`r_n=\pm \overline{q}_n`$, $`\chi `$ is a real constant which, from the physical point of view, determines the strength of the linear force, a dot stands for the derivative with respect to time, and a bar stands for complex conjugation). Periodic dependence on time is a property of any solution of (1), (2), and it is caused by the discreteness. In the case of the one-soliton solution, which reads $$q_n^{(s)}=\overline{r}_n^{(s)}=\frac{\mathrm{sinh}(2w)}{\mathrm{cosh}[2nw-X(t)-X_0]}e^{i[\mathrm{\Phi }(t)-2n\chi (t-t_0)]}$$ (3) where $$\mathrm{\Phi }(t)=\frac{1}{\chi }\mathrm{cosh}(2w)\mathrm{sin}[2\chi (t-t_0)]$$ (4) $$X(t)=\frac{1}{\chi }\mathrm{sinh}(2w)\mathrm{cos}[2\chi (t-t_0)]$$ (5) and $`w`$, $`t_0`$, and $`X_0`$ are real constants, the soliton dynamics has a deep analogy with the well known Bloch oscillations of an electron in a lattice potential affected by a constant electric field (for this reason such behaviour is referred to as Bloch oscillations). As follows from (5), the period of the oscillations is given by $`\tau _0=\pi /\chi `$. The phenomenon of Bloch oscillations becomes especially interesting if one looks for the possibility of an integrable discretization of the inhomogeneous AL model with respect to time. Indeed, Bloch oscillations are characterised by an additional temporal scale, $`\tau _0`$. This scale is determined by the strength of the force and must lead to some constraints on the step of the discretization. Thus the purpose of the present communication is to introduce an integrable discretization of the model (1), (2) and to obtain conditions on the parameters of the discretization which preserve the effect of Bloch oscillations in the discrete scheme. ## 2 Integrable discretization At $`\chi =0`$ system (1), (2) transforms into the conventional AL model, whose discretization is well known.
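Equations (3)-(5) are easy to evaluate numerically; the following sketch tracks the soliton centre over half a Bloch period (the parameter values $`w`$ and $`\chi `$ are illustrative, with $`t_0=X_0=0`$):

```python
import numpy as np

W, CHI = 0.5, 0.3          # illustrative soliton width and force strength

def X(t):
    """Soliton centre, Eq. (5), with t0 = 0: period tau0 = pi/CHI."""
    return np.sinh(2 * W) * np.cos(2 * CHI * t) / CHI

def profile(n, t):
    """|q_n| of the one-soliton solution, Eq. (3), with X0 = 0."""
    return np.sinh(2 * W) / np.cosh(2 * n * W - X(t))

n = np.arange(-20, 21)
tau0 = np.pi / CHI
for t in (0.0, tau0 / 4, tau0 / 2):
    k = n[np.argmax(profile(n, t))]
    print(f"t = {t:5.2f}: X = {X(t):+.3f}, peak near n = {k}")
# The centre swings between +sinh(2W)/CHI and -sinh(2W)/CHI: Bloch oscillation.
```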
In particular, such a discretization can be achieved by using the discrete analogue of the zero-curvature condition $$U(n,t+h)V(n,t)=V(n+1,t)U(n,t).$$ (6) In the case $`\chi \ne 0`$ the same condition, with the $`U`$-matrix taken as $`U(n,t)=\left(\begin{array}{cc}\lambda e^{i\chi t}& q(n,t)\\ r(n,t)& e^{-i\chi t}/\lambda \end{array}\right)`$ (9) and the $`V`$-matrix having the elements $`V_{11}=i-h\alpha _0+h\alpha _1\left(\lambda ^2e^{-i\chi (2t+nh)}-A(n,t)\right)+h\left({\displaystyle \frac{\alpha _2}{\lambda ^2}}e^{i\chi (2t-nh)}-\delta _2q(n,t+h)r(n-1,t)\right)\mathrm{\Lambda }(n,t)`$ $`V_{12}=h\left(\alpha _1\lambda e^{i\chi (t+nh)}q(n,t)-{\displaystyle \frac{\delta _1}{\lambda }}e^{i\chi (t+(n-1)h)}q(n-1,t+h)\right)+h\left(\delta _2\lambda e^{i\chi (t-(n+1)h)}q(n,t+h)-{\displaystyle \frac{\alpha _2}{\lambda }}e^{i\chi (t-nh)}q(n-1,t)\right)\mathrm{\Lambda }(n,t)`$ $`V_{21}=h\left(\alpha _1\lambda e^{i\chi (t+(n-1)h)}r(n-1,t+h)-{\displaystyle \frac{\delta _1}{\lambda }}e^{i\chi (t+nh)}r(n,t)\right)+h\left(\delta _2\lambda e^{i\chi (t-nh)}r(n-1,t)-{\displaystyle \frac{\alpha _2}{\lambda }}e^{i\chi (t-(n+1)h)}r(n,t+h)\right)\mathrm{\Lambda }(n,t)`$ $`V_{22}=i+h\delta _0-h\delta _1\left(e^{i\chi (2t+nh)}/\lambda ^2-D(n,t)\right)-h\left(\delta _2\lambda ^2e^{-i\chi (2t-nh)}-\alpha _2q(n-1,t)r(n,t+h)\right)\mathrm{\Lambda }(n,t)`$ where $`\lambda `$ is a spectral parameter, $`\alpha _j`$ and $`\delta _j`$ are parameters and $`h`$ ($`h>0`$) is the step of the discretization, results in the system $`ih^{-1}[q(n,t+h)-q(n,t)]=\delta _1q(n-1,t+h)e^{ih\chi (n-2)}-\delta _0q(n,t+h)-\alpha _0q(n,t)+\alpha _1q(n+1,t)e^{ih\chi (n+1)}-\delta _1q(n,t+h)D(n,t)-\alpha _1q(n,t)A(n+1,t)-\alpha _2q(n-1,t)[e^{ih\chi (n+1)}-q(n,t+h)r(n,t+h)]\mathrm{\Lambda }(n,t)+\delta _2q(n+1,t+h)[e^{ih\chi (n+2)}-q(n,t)r(n,t)]\mathrm{\Lambda }(n+1,t),`$ (10) $`ih^{-1}[r(n,t+h)-r(n,t)]=\alpha _1r(n-1,t+h)e^{ih\chi (n-2)}-\alpha _0r(n,t+h)-\delta _0r(n,t)+\delta _1r(n+1,t)e^{ih\chi (n+1)}-\alpha _1r(n,t+h)A(n,t)-\delta _1r(n,t)D(n+1,t)+\delta _2r(n-1,t)[e^{ih\chi (n+1)}-q(n,t+h)r(n,t+h)]\mathrm{\Lambda }(n,t)+\alpha _2r(n+1,t+h)[e^{ih\chi (n+2)}-q(n,t)r(n,t)]\mathrm{\Lambda }(n+1,t)`$ (11) $`\alpha _1\left[A(n+1,t)-A(n,t)e^{ih\chi }\right]-h^{-1}(i-h\alpha _0)\left(1-e^{ih\chi }\right)=\alpha _1\left[r(n,t)q(n+1,t)e^{ih\chi (n+1)}-r(n-1,t+h)q(n,t+h)e^{ih\chi (n-1)}\right]+\delta _2r(n-1,t)q(n,t+h)\mathrm{\Lambda }(n,t)\left(e^{ih\chi }-e^{ihn\chi }\right)-\delta _2q(n+1,t+h)r(n,t)\left(1-e^{ih\chi (n+2)}\right)\mathrm{\Lambda }(n+1,t)`$ (12) $`\delta _1\left[D(n+1,t)-D(n,t)e^{ih\chi }\right]+h^{-1}(i+h\delta _0)\left(1-e^{ih\chi }\right)=\delta _1\left[r(n,t+h)q(n-1,t+h)e^{ih\chi (n-1)}-r(n+1,t)q(n,t)e^{ih\chi (n+1)}\right]+\alpha _2r(n,t+h)q(n-1,t)\mathrm{\Lambda }(n,t)\left(e^{ih\chi }-e^{ih\chi n}\right)+\alpha _2r(n+1,t+h)q(n,t)\mathrm{\Lambda }(n+1,t)\left(e^{ih\chi (n+2)}-1\right)`$ (13) $$\mathrm{\Lambda }(n,t)[1-q(n,t+h)r(n,t+h)]=\mathrm{\Lambda }(n+1,t)[1-q(n,t)r(n,t)]$$ (14) Then the discrete analogue of the AL model (1), (2) is obtained from (10)-(14) by means of the reduction $$r(n,t)=\pm \overline{q}(n,t)$$ (15) which requires the following relation among the parameters: $`\alpha _j=\overline{\delta }_j`$. In order to define solutions of (10)-(14) one has to fix boundary conditions for $`r(n,t)`$, $`q(n,t)`$, $`A(n,t)`$, and $`D(n,t)`$. In what follows we deal only with the case of zero boundary conditions $$\underset{n\to \pm \mathrm{\infty }}{\mathrm{lim}}q(n,t)=\underset{n\to \pm \mathrm{\infty }}{\mathrm{lim}}r(n,t)=0$$ (16) which allow the existence of “bright” solitons.
Hence it will be assumed that $`r(n,t)=\overline{q}(n,t)`$. Accordingly, we have to require $$\underset{|n|\to \mathrm{\infty }}{\mathrm{lim}}A(n,t)=A_0(n,t)=\frac{i+h\alpha _0}{h\alpha _1}\left(e^{ih\chi n}-1\right),$$ (17) $$\underset{|n|\to \mathrm{\infty }}{\mathrm{lim}}D(n,t)=D_0(n,t)=\frac{i+h\delta _0}{h\delta _1}\left(e^{ih\chi n}-1\right).$$ (18) Notice that (17), (18) transform into zero boundary conditions in the case of the homogeneous AL model ($`\chi =0`$). Eqs. (12), (13) subject to (17), (18) allow one to express $`A(n,t)`$ and $`D(n,t)`$ through $`q(n,t)`$ and $`r(n,t)`$: $$A(n,t)=A_0(n,t)+\alpha _1^{-1}\sum _{k=1}^{\mathrm{\infty }}f_A(n-k,t)e^{i\chi hk}$$ (19) $$D(n,t)=D_0(n,t)+\delta _1^{-1}\sum _{k=1}^{\mathrm{\infty }}f_D(n-k,t)e^{i\chi hk}$$ (20) Here $`f_A(n,t)`$ and $`f_D(n,t)`$ stand for the right hand sides of equations (12) and (13), respectively. It is to be emphasised that formulae (19) and (20) do not yet give explicit solutions for $`A(n,t)`$ and $`D(n,t)`$, and after substitution into (10), (11) they represent a source of nonlocality of the discrete scheme. As has been shown in , a convenient approach to treating inhomogeneous discrete models is the use of the gauge transformation $$\stackrel{~}{U}(n,t)=G(n+1,t)U(n,t)G^{-1}(n,t)$$ (21) $$\stackrel{~}{V}(n,t)=G(n,t+h)V(n,t)G^{-1}(n,t)$$ (22) By choosing $`G(n,t)=\mathrm{exp}\{i\chi nt\sigma _3\}`$, where $`\sigma _3`$ is the Pauli matrix, one reduces $`\stackrel{~}{U}(n,t)`$ to the form which corresponds to the $`U`$-matrix of the underlying homogeneous model (i.e. to the form which does not have an explicit dependence on time and can be obtained from (9) by the replacement $`\mathrm{exp}(i\chi t)\to 1`$). Then the dependence of the transfer matrix $`T(t)`$, associated with $`\stackrel{~}{U}(n,t)`$, on the discrete time is governed by the equation $$T(t+h)=V_hT(t)V_h^{-1}$$ (23) where $`V_h`$ is a diagonal matrix, $`V_h=`$diag$`(\theta _1(\lambda ,t),\theta _2(\lambda ,t))`$, with the elements $$\theta _1(\lambda ,t)=i-h\alpha _0+h\lambda ^2\alpha _1e^{-2i\chi t}+h\lambda ^{-2}\alpha _2e^{2i\chi t}.$$ (24) $$\theta _2(\lambda ,t)=i+h\delta _0-h\lambda ^2\delta _2e^{-2i\chi t}-h\lambda ^{-2}\delta _1e^{2i\chi t}.$$ (25) Let us now assume that $`t=mh`$, where $`m=0,1,\mathrm{\dots }`$ (it is straightforward to generalise the results to the case $`t=mh+t_0`$, where $`t_0`$ is an arbitrary real constant playing the role of the initial moment of time). Then the element $`T^{(11)}`$ of the matrix $`T(t)`$ does not depend on $`m`$ (or $`t`$), while for $`T^{(12)}(t)\equiv b_m`$ one obtains $$b_{m+1}=b_0\prod _{n=0}^{m}\mu _n(\lambda )$$ (26) where $$\mu _m(\lambda )=\theta _1(\lambda ,mh)/\theta _2(\lambda ,mh).$$ (27) In the case of solitonic solutions (26) formally solves the discrete Cauchy problem, since it defines the dependence of the scattering data on time, and the solution of the eigenvalue problem for the matrix $`U(n,0)`$ is well known. Below we concentrate on some “physical” consequences of this result.
Now we address the question of whether it is possible to preserve such evolution under the discretization with respect to time. To this end we take into account that periodic behaviour means that there exists a positive integer $`M`$ such that $$\prod _{n=m+1}^{m+M}\mu _n(\lambda )=1$$ (28) for any $`\lambda `$ (which can be considered, say, inside the unit circle on the complex plane) and any $`m`$. The period $`\tau `$ of the oscillations is then given by $`\tau =Mh`$ (evidently $`M`$ is taken to be the smallest possible integer). The discretization of the homogeneous AL model is a three-parametric one (this, in particular, allows one to represent it in the form of a product of local maps). In the inhomogeneous case the imposed conditions lead to constraints on the parameters. To find them we first consider the limit $`|\lambda |\to 0`$ (or $`|\lambda |\to \mathrm{\infty }`$). Then from (24), (25), (27), and (28) one finds that there must exist the relations $$a_1=a_2,\qquad \varphi _1^{(l)}+\varphi _2^{(l)}=2\pi \frac{l}{M},\qquad l=0,1,\mathrm{\dots },M-1$$ (29) where $`a_{1,2}`$ and $`\varphi _{1,2}`$ are real parameters connected to $`\alpha _{1,2}`$ by $`\alpha _{1,2}=a_{1,2}\mathrm{exp}(i\varphi _{1,2})`$, and the upper index has been attributed to the “quantized” phases. Next, the independence of (28) of $`m`$ implies $`\mu _m=\mu _{m+M}`$, which means that $$\chi h_{\stackrel{~}{l}}M=\pi \stackrel{~}{l}$$ (30) where $`\stackrel{~}{l}`$ is a positive integer. In other words, the step of the discretization is not arbitrary (the subindex $`\stackrel{~}{l}`$ is introduced to label different discrete values of $`h`$). The physical sense of this last requirement is quite transparent. Recalling that the period of Bloch oscillations in the continuous-time model is given by $`\tau _0=\pi /\chi `$, one concludes that (30) means that the period of the Bloch oscillations in the discretized model, $`\tau `$, is $`\stackrel{~}{l}`$ times larger than the period $`\tau _0`$: $`\tau =\stackrel{~}{l}\tau _0`$, and the number $`\stackrel{~}{l}`$ is related to the chosen step of discretization $`h`$. On the other hand, rewriting (30) as $`h_{\stackrel{~}{l}}=\stackrel{~}{l}\tau _0/M`$, one can interpret it as the condition for the discretization step to be commensurable with the period of the oscillations. As is evident, for direct coincidence of the result obtained on the discrete lattice with its continuum counterpart one must set $`\stackrel{~}{l}=1`$. Below we concentrate on this case. Then $`\mu _n(\lambda )`$ takes the form $$\mu _n(\lambda )=\frac{1+ae^{i(\mathrm{\Gamma }_l+\gamma _{l,n})}\lambda ^2-ae^{i(\mathrm{\Gamma }_l-\gamma _{l,n})}\lambda ^{-2}}{1-ae^{i(\mathrm{\Gamma }_l+\gamma _{l,n})}\lambda ^2+ae^{i(\mathrm{\Gamma }_l-\gamma _{l,n})}\lambda ^{-2}}e^{2i\varphi _0}$$ (31) where $$\mathrm{\Gamma }_l=\frac{l}{M}\pi -\varphi _0,\qquad \gamma _{l,n}=\frac{1}{2}\left(\varphi _1^{(l)}-\varphi _2^{(l)}\right)-\frac{2\pi n}{M}$$ $`a=|h\alpha _1/(i-h\alpha _0)|`$ and $`\varphi _0=\mathrm{arg}(i-h\alpha _0)`$. Now we consider the unit circle, where $`\lambda ^2=\mathrm{exp}(i\psi )`$ ($`\psi `$ being real). Then one can find two possibilities to satisfy the requirement (28). The simplest solution corresponds to $`M`$ even and $`\varphi _0=\pi l/M+\pi /2+\pi p`$ ($`p`$ is an integer). Then $`\mu _n(\lambda )=\mathrm{exp}\{2\pi i(l/M-1/2)\}`$. By direct algebra one can verify that this is the degenerate case, in which the limiting transition $`h\to 0`$ results in a trivial linear equation instead of the AL model.
A nontrivial and physically relevant solution corresponds to the case when $`M=4N`$ ($`N`$ is an integer) and $`\varphi _0=\varphi _{l,p}=\pi l/M+\pi p`$ (in that case $`\mu _n\mu _{n+N}\mu _{n+2N}\mu _{n+3N}=1`$). Then $`\mu _n(\lambda )`$, which determines the evolution of the one-soliton solution associated with the eigenvalue $`\lambda _1=\mathrm{exp}(w+i\theta )`$, is given by $$\mu _n(\lambda _1)=\frac{1-2(-1)^pa\mathrm{sinh}[2w-i(\gamma _{l,n}+2\theta )]}{1+2(-1)^pa\mathrm{sinh}[2w-i(\gamma _{l,n}+2\theta )]}\mathrm{exp}\left(2\pi i\frac{l}{M}\right)$$ (32) Let us illustrate the discrete-time dynamics with the example of the one-soliton solution. We assume that (29) holds. For the sake of simplicity we let $`\varphi _1=\varphi _2=\pi /2`$ and $`\varphi _0=0`$. Then the one-soliton solution of (10), (11) can be written down in the form (recall that $`t=mh`$) $$q^{(s)}(n,m)=\overline{r}^{(s)}(n,m)=\frac{\mathrm{sinh}(2w)}{\mathrm{cosh}(2nw-X_m)}e^{i\mathrm{\Phi }(n,m)}$$ (33) where $$\mathrm{\Phi }(n,m)=\sum _{k=0}^{m-1}\mathrm{arctan}\left[\frac{4a\mathrm{cos}\left(2\chi kh-2\theta \right)\mathrm{cosh}(2w)}{1+2a^2\mathrm{cos}\left(4\chi kh-4\theta \right)+4a^2\mathrm{cosh}(4w)}\right]-2n\chi mh$$ (34) $$X_m=\frac{1}{2}\sum _{k=0}^{m-1}\mathrm{ln}\left[\frac{2a^2\left(\mathrm{cosh}(4w)+\mathrm{cos}\left(4\chi kh-4\theta \right)\right)+1-4a\mathrm{sinh}(2w)\mathrm{sin}\left(2\chi kh-2\theta \right)}{2a^2\left(\mathrm{cosh}(4w)+\mathrm{cos}\left(4\chi kh-4\theta \right)\right)+1+4a\mathrm{sinh}(2w)\mathrm{sin}\left(2\chi kh-2\theta \right)}\right]$$ (35) Comparing these formulae with (4), (5), one can see that with the choice $`a=h/2`$ the latter are recovered in the limiting transition $`h\to 0`$. To be more specific we concentrate on $`X_m`$, which describes the evolution of the centre of the soliton. To this end we represent $`h=\pi \xi /(M\chi )`$ (the step of discretization so chosen satisfies (30) when $`\xi `$ is an integer). The results are summarized in Fig. 1 for three values of the parameter $`\xi `$, displaying different situations: (a) When $`\xi =1`$ the discrete model exactly reproduces the Bloch oscillations of the continuous-time model. (b) At $`\xi =\sqrt{3}`$ the evolution of the discrete model is not periodic (notice that the lines in the figure are used for convenience of presentation: the true trajectories are sets of points). However, by considering an analytic continuation of the solution, the periodicity can be considered between the discrete time steps. (c) At $`\xi =2`$ the period of the soliton oscillations is two times larger than the period of the Bloch oscillations of the AL model, which is obtained by the limiting transition $`h\to 0`$, $`M\to \mathrm{\infty }`$ with $`hM=\pi /\chi `$ (it is to be mentioned that the minima of curve (c) at about the 48th and 145th time steps are not numerical zeros). The author acknowledges support from FEDER and Program PRAXIS XXI, grant No. PRAXIS/2/2.1/FIS/176/94. Figure caption: Trajectory of the soliton centre corresponding to different steps of the discrete time: (a) $`h=1/97`$, (b) $`h=\sqrt{3}/97`$, and (c) $`h=2/97`$. Other parameters are as follows: $`M=97`$, $`w=0.5`$, $`\chi =\pi `$, $`a=0.1`$.
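The behaviour described in (a)-(c) above can be checked directly from Eq. (35). The sketch below uses $`M`$, $`w`$, $`\chi `$ and $`a`$ from the figure caption and the signs as given in (35); for $`\xi =1`$ and $`\xi =2`$ the soliton centre returns to $`X=0`$ after $`M`$ steps (elapsed times $`\tau _0`$ and $`2\tau _0`$, respectively), while for $`\xi =\sqrt{3}`$ it does not:

```python
import numpy as np

def X_m(m, h, a=0.1, w=0.5, chi=np.pi, theta=0.0):
    """Soliton-centre trajectory from Eq. (35)."""
    k = np.arange(m)
    base = 2 * a**2 * (np.cosh(4 * w) + np.cos(4 * chi * k * h - 4 * theta)) + 1
    osc = 4 * a * np.sinh(2 * w) * np.sin(2 * chi * k * h - 2 * theta)
    return 0.5 * np.log((base - osc) / (base + osc)).sum()

M = 97
for xi in (1.0, np.sqrt(3.0), 2.0):          # the three cases of the figure
    h = xi / M                               # h = pi*xi/(M*chi) with chi = pi
    print(f"xi = {xi:.3f}: X after M steps = {X_m(M, h):+.2e}")
# xi = 1 and xi = 2 return to X = 0 (periods tau0 and 2*tau0); xi = sqrt(3)
# does not, i.e. the motion is not periodic in the discrete time.
```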
# Topics in D-Geometry ## 1 Introduction The present contribution consists of three parts. The first is a general summary of the theory of D-branes on Calabi-Yau; the second summarizes the works which connect the boundary state approach with large volume results; the third summarizes new results on lines of marginal stability on the quintic found in June 1999. The transparencies for this talk (which emphasize different parts of the material) are also available at . For background material on “D-geometry,” see . This term refers to the study of how the conventional geometry which describes branes in supergravity is generalized in the context of D-branes. As a point of departure we could consider any of the geometrical pictures which branes give us for the various terms in an effective action. Perhaps the simplest example is the following: the moduli space of a $`0`$-brane at a point in a CY<sub>3</sub> is the CY<sub>3</sub> itself; the moduli space metric is just the Ricci-flat metric on the CY<sub>3</sub>. Examples of the “unconventional” geometry we have in mind include the following: 1. Stringy and quantum corrections will generally modify conventional geometric predictions. In particular, we can ask how a D-brane world-volume action is affected by “stringy” ($`l_s`$) corrections. An example is to find the moduli space metric for the D$`0`$-brane at a point; this provides a canonical non-Ricci-flat metric for each point in CY moduli space. Qualitative effects visible at finite $`l_s`$ include T-duality and mirror symmetry; we will discuss the latter below. 2. Perturbative string compactification can be defined non-geometrically, by specifying an appropriate internal CFT. Some examples (such as Gepner models) turn out to have geometric interpretations, and this definition provides a concrete way to work in the “highly stringy” regime. Others, such as asymmetric orbifolds, do not have known geometric interpretations; studying D-branes on these spaces will probably lead either to finding such interpretations or to showing why they do not exist. 3. D-brane world-volume theories include open strings stretching between pairs of branes, which in many cases provide alternate gauge theory origins for what are gravitational effects in the large distance limit. Orbifold resolution by quiver theories is an example in which non-trivial topology is reproduced as a classical gauge theory moduli space. The short distance gravitational interactions between D-branes are replaced by quantum gauge theory dynamics. In special cases (in the large $`N`$ limit or for quantities protected by supersymmetry) this is believed to reproduce supergravity, but more generally it provides another way of defining its stringy generalization. 4. Noncommutative gauge theory arises on D-brane world-volumes in appropriate limits of string theory, such as compactification on a small torus with fixed background $`B`$ field, or in Minkowski space with large $`B`$ field. It seems quite likely that similar theories are relevant in curved backgrounds; finding concrete examples is an important problem for future work. This is by no means a complete list but perhaps includes the most interesting points discovered so far. As each of them would form a topic in its own right, for the rest of the review we will focus on the following meta-question: to what extent do these effects lead to qualitative changes in the brane physics – and thus cannot be ignored?
The way to study this question is to frame the alternative (null) hypothesis: the qualitative properties of brane theories (especially the low energy effective action, the dimension of the moduli space, the types of singularities, and so on) are the same as predicted by naive geometric considerations – and test it in examples. We will refer to this as the “geometric hypothesis” and make it more precise below. ## 2 D-branes on Calabi-Yaus Quite a lot is known about D-branes in flat space (Minkowski or toroidal compactifications) and in K3 compactifications, where type II-heterotic duality and the large supersymmetry already suffice to give a good picture. The geometric hypothesis appears to be essentially true in these cases – the brane spectrum and moduli spaces can be described as the spectrum and moduli spaces of semistable coherent sheaves (a generalization of vector bundles which allows singularities corresponding to pointlike instantons). D-branes on Calabi-Yau threefolds are not so well understood and look quite interesting for a number of reasons. Physically, supersymmetry-preserving branes will have $`𝒩=1`$, $`d=4`$ gauge theories on the world-volume which may be directly relevant for phenomenology. They generalize the strong coupling limit of heterotic string compactification but in some ways appear simpler than the $`(0,2)`$ sigma models which appear there. Many questions can be addressed using the highly developed theory of $`𝒩=2`$ supersymmetry and mirror symmetry. An important difference from the cases of higher supersymmetry is that the spectrum of branes can depend on the particular vacuum (point in moduli space) under discussion. For example, in pure $`SU(2)`$ gauge theory, we know that the strong coupling spectrum is quite different from the semiclassical spectrum; the purely electric “W bosons” are not present. Given $`𝒩=2`$ supersymmetry this dependence of the spectrum on the moduli is highly constrained: as is well known, the BPS spectrum can change only on lines of marginal stability defined by the condition $`\mathrm{Im}Z(Q_1)/Z(Q_2)=0`$. Thus the problem of finding the spectrum of wrapped branes on CY and deciding whether it too changes at string scales is non-trivial but accessible, as we will discuss in the next sections. Supersymmetric ($`1/2`$ BPS) branes on a CY<sub>3</sub> are divided into A and B branes depending on the boundary condition on the $`U(1)`$ currents in the $`(2,2)`$ superconformal algebra (which determines which part of the world-sheet supersymmetry they preserve): either $`Q_L=+Q_R`$ or $`Q_L=-Q_R`$ is a consistent choice. The notation comes from topological field theory – an A brane is one whose open strings naturally couple to the A-twisted topological theory and the Kähler moduli, while a B brane couples to complex structure moduli. Mirror symmetry will exchange the two – the spectrum and world-sheet theories of A branes on a CY $``$ are isomorphic to those of the B branes on its mirror $`𝒲`$. If we consider branes defined by Dirichlet and Neumann boundary conditions in the non-linear sigma model with CY<sub>3</sub> target, the B branes are $`2p`$-branes wrapped on holomorphic cycles and carrying holomorphic vector bundles (this is the case with a direct analogy to the heterotic string), while the A branes are $`3`$-branes wrapped on what are called special Lagrangian submanifolds (or sL-submanifolds; more below).
At first this notation may seem backwards given the discussion in the previous paragraph, since the $`2p`$-cycles and the masses of B branes are controlled by Kähler moduli (and thus are calculable in the A-twisted topological closed string theory), while the $`3`$-cycles and masses of A branes are controlled by complex structure moduli. Nevertheless it is correct – in going from the open to the closed string channel the boundary conditions on the $`U(1)`$ current change sign, interchanging the A and B twistings. This switch has important consequences, especially if we combine it with the known properties of CY sigma models. Specifically, the B twisted models receive no quantum corrections, while A twisted models receive world-sheet instanton corrections. Physically, this means that the $`𝒩=2`$ prepotential in compactified IIb theory, which depends only on complex structure moduli, is classically exact. Thus, whereas B brane masses receive world-sheet instanton corrections, the large volume results for central charges and masses of A branes are already exact (this fact and mirror symmetry can then be used to determine B masses). It follows that lines of marginal stability for A branes are the same as in the large volume limit, and this fact strongly suggests that the spectrum of A branes is determined entirely by classical geometric considerations. Since we have not argued that the world-volume theory itself does not receive stringy corrections (indeed we expect it to), this might seem to be an unjustified leap of faith at this point. Nevertheless there is a good argument for it, which we now summarize. The classical geometric prediction is that each A brane is a $`3`$-brane wrapped on a sL-submanifold. Now an sL-submanifold $`\mathrm{\Sigma }`$ of a CY $`n`$-fold is a Lagrangian submanifold with respect to the Kähler form, $`\omega |_\mathrm{\Sigma }=0`$, satisfying an additional constraint involving the holomorphic $`n`$-form: there exists a constant $`\theta `$ such that $$\mathrm{Im}e^{i\theta }\mathrm{\Omega }|_\mathrm{\Sigma }=0.$$ (2.1) The constant $`\theta `$ determines which of the original $`𝒩=2`$ supersymmetries remains unbroken; two branes of different $`\theta `$ together break all supersymmetry. While Lagrangian submanifolds are “floppy,” specified locally by an arbitrary function (in canonical coordinates, $`p_i=\partial f/\partial x^i`$), the special Lagrangian condition determines this function up to a finite dimensional moduli space, which for a smooth CY has been shown to be smooth and of real dimension $`b^1=\mathrm{dim}H^1(\mathrm{\Sigma },𝐑)`$ . A D-brane configuration is specified by $`\mathrm{\Sigma }`$ and a flat $`U(1)`$ gauge connection, leading to a moduli space of complex dimension $`b^1`$, which before taking stringy corrections into account is a torus fibration. Interesting examples of sL-submanifolds of $`𝐑^6`$ are known, but not too many are known for CY’s. The only general construction known is as the fixed point of an involution, i.e. $`\mathrm{Im}z^i=0`$ in a CICY. Even necessary or sufficient conditions for candidate cycles to support sL-submanifolds are not known. The subject is still rather new, however, and interest has picked up dramatically as a consequence of the proposal of Strominger, Yau and Zaslow that the mirror $`𝒲`$ to a CY $`ℳ`$ is just the moduli space of the D$`3`$-brane on $`ℳ`$ mirror to the D$`0`$ on $`𝒲`$, which will be some (appropriately chosen) $`T^3`$.
A number of papers have shown the existence of $`T^3`$ fibrations on particular CY’s which can in principle be deformed to special Lagrangian fibrations. The question of how deformations of the CY itself affect the spectrum of sL-submanifolds has recently been studied by Joyce. The part of this story relevant for complex structure deformations (also summarized in ) is as follows. The natural geometric description of transitions between $`3`$-brane configurations in six dimensions is for two intersecting $`3`$-branes to intercommute, producing a single $`3`$-brane, or the reverse. In the large volume limit, this process can be studied in the neighborhood of the intersection point, and the relevant question is: out of all configurations $`\mathrm{\Sigma }_\mathrm{\Theta }`$ in $`𝐑^6`$ which asymptote to two planes $`\mathrm{\Sigma }_1`$ and $`\mathrm{\Sigma }_2`$ at fixed angles $`\mathrm{\Theta }`$, is the minimal volume surface the union of the two planes, or something else, and if so what? This question was answered some years ago by use of calibrated geometry and the result is known as the “angle theorem”: let $`\mathrm{\Sigma }_1`$ be the first plane and $`\overline{\mathrm{\Sigma }}_2`$ the orientation reversal of the second plane; out of the $`SO(2n)`$ rotations turning $`\mathrm{\Sigma }_1`$ into $`\overline{\mathrm{\Sigma }}_2`$ take the eigenvalues $`e^{i\theta _i}`$ and let $`\theta =\sum _i\theta _i`$. If the minimal such $`\theta `$ is greater than or equal to $`\pi `$, the volume cannot be reduced, while if $`\theta <\pi `$ it can. The surface of lower volume can be approximately described by use of an exact sL-submanifold solution in $`𝐑^6`$ with the prescribed asymptotics, which exists in the case $`\theta =\pi `$. One can try to use this solution to lower the volume by orienting it to cross both $`\mathrm{\Sigma }_1`$ and $`\mathrm{\Sigma }_2`$ near the intersection point; if it does so, the finite region between the intersections is guaranteed to have lower volume than the original planes. This will be possible exactly when $`\theta <\pi `$. The angle theorem tells us which of two configurations is stable in terms of a local geometric condition (the same as the string theory condition for the intersection point to have an associated tachyon ), but the geometric picture furthermore implies that this can be tested just knowing the central charges of the two branes. This is because the relative angle is known given the phase of the pullback of $`\mathrm{\Omega }`$ (locally $`dz^1\wedge dz^2\wedge dz^3`$) to each brane, and $`\mathrm{\Omega }`$ must have constant phase on each brane. Thus decays take place just when $`Z(Q_1)`$ and $`Z(Q_2)`$ are collinear – this is exactly the standard marginal stability condition. These considerations tell us a little more – namely, which state (the single brane, or the two branes) is stable on which side of the marginal stability line. This geometrical picture of A brane decay and stability fits with the constraints following from the exact stringy prepotential and thus, despite the fact that other consequences of this geometrical picture may well be false for substringy branes, it is consistent to imagine that the spectrum is the geometric one. This is in contrast to the B description of the spectrum, which must be modified by the stringy corrections to the prepotential.
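As a minimal numerical illustration of the angle criterion just stated (a sketch only, with a hypothetical rotation matrix `U` chosen for the example rather than taken from any brane configuration in the text): represent the rotation relating $`\mathrm{\Sigma }_1`$ to $`\overline{\mathrm{\Sigma }}_2`$ by a unitary matrix on $`𝐂^3𝐑^6`$, read the characteristic angles off its eigenvalues, and compare their sum to $`\pi `$.

```python
import numpy as np

def decay_allowed(U):
    """Angle-theorem criterion for two 3-planes in R^6 ~ C^3.

    U is the rotation relating the first plane to the orientation
    reversal of the second; its eigenvalues are e^{i theta_k}.
    The volume of the union can be lowered iff theta = sum theta_k < pi
    (angles taken in [0, 2*pi) as a simple branch choice)."""
    thetas = np.mod(np.angle(np.linalg.eigvals(U)), 2 * np.pi)
    theta = thetas.sum()
    return theta < np.pi, theta

# Small angles: theta < pi, the intersection can decay (tachyonic)
print(decay_allowed(np.diag(np.exp(1j * np.array([0.3, 0.4, 0.2])))))
# Angles summing above pi: stable (supersymmetric) intersection
print(decay_allowed(np.diag(np.exp(1j * np.array([1.2, 1.0, 1.1])))))
```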
This picture of A brane stability is the first example of what we will call below the “modified geometric hypothesis.” All of this tells us quite a bit about the dependence of the spectrum of $`3`$-branes on the CY moduli, but does not substitute for the need to have some results on the spectrum in at least some part of moduli space. Since so little is known about $`3`$-branes at present, we instead take this from the large volume limit of the B brane spectrum, as many mathematical results towards classifying holomorphic cycles and vector bundles are known. The most basic of these is the following. Given a holomorphic vector bundle, the Donaldson-Uhlenbeck-Yau theorem gives necessary and sufficient conditions for the existence of a Yang-Mills connection preserving supersymmetry: it must be semistable. This is a somewhat complicated condition involving all holomorphic subbundles, but a simpler necessary condition is known which depends on the Chern character of the bundle (which corresponds to D-brane charge as $`Q_{6-2k}\sim \mathrm{ch}_k(F)`$, the $`2k`$-form in $`\mathrm{Tr}e^{F/2\pi }`$) and the Kähler class: $$(-Q_6Q_2+\frac{1}{2}Q_4^2)\cdot \omega \ge 0.$$ (2.2) On manifolds with $`b^{1,1}>1`$ this describes an explicit dependence of the spectrum on the Kähler class, as has been discussed by Sharpe. Since the prepotential determining the central charges of B branes receives world-sheet instanton corrections, it is fairly certain that this mathematical stability condition is modified in the stringy regime. This is quite interesting, as it would mean that the condition for a bundle to be usable in superstring compactification is not always the geometrical condition which has been implicitly assumed in the past. Given a specific supersymmetric brane, we can try to derive its world-volume effective action, and general considerations suggest that the simplest quantities to start with are the holomorphic ones: the superpotential and the gauge kinetic term. The latter corresponds to the dilaton, and in CY compactifications with zero NS field strength this only becomes non-trivial at string loop level (this is one of the invariants defined in ). However a superpotential can appear at tree level, and indeed for multiple parallel branes we expect a generalization of the $`\mathrm{tr}Z^1[Z^2,Z^3]`$ superpotential of $`3`$-branes in flat space. There are also known examples of superpotentials for single branes (see for a discussion). A plausible counterpart of the nonrenormalization theorem for the $`𝒩=2`$ prepotential is the following: the superpotential, being essentially a topological quantity in open string theory, depends only on the moduli of the appropriate twisted theory. Specifically, an A brane superpotential depends only on Kähler moduli, while a B brane superpotential depends only on complex structure moduli, and furthermore is equal to the large volume result. This comes close to showing that a B brane moduli space is the same as in the large volume limit, but not quite – the potential can also contain D terms. These would naturally depend on the Kähler moduli, as in the example of quiver theories. A natural generalization of the preceding conjecture is that these could be determined in the large volume limit from the A brane point of view. As explained in , the D terms are related to the stability question.
A world-volume description of the decay process of Joyce starts with the two intersecting $`3`$-branes and their $`U(1)\times U(1)`$ gauge theory; the intersection comes with a chiral multiplet charged under both $`U(1)`$’s, and the dependence on complex structure moduli comes through an FI term for the relative $`U(1)`$. In going through the transition one passes from a supersymmetry-breaking ground state with unbroken $`U(1)\times U(1)`$ to a supersymmetry-preserving ground state with broken relative $`U(1)`$. <sup>1</sup><sup>1</sup>1 As in , this configuration can be shown by a probe analysis to be equivalent to a single $`3`$-brane produced by intercommutation. A difference with the case of $`2`$-branes considered there is that the resulting $`3`$-brane is not actually special Lagrangian. This is possible because near the transition it has a large extrinsic curvature, and this is evidence that the detailed form of the special Lagrangian condition gets $`\alpha ^{\prime }`$ corrections. An analogous statement was already known on the B side. Equality in (2.2) defines a boundary within the Kähler cone on which stability degenerates to semistability. This means that the connection on the brane becomes reducible, and an enhanced gauge symmetry appears, a phenomenon which in $`𝒩=1`$ theory can only arise from D terms as above. We see that this qualitative picture survives the stringy corrections, but the precise location of the boundary is different, in a way determined by the A picture geometry. The upshot of the discussion is that mirror symmetry leads to a natural conjecture for a modified or “mirror geometric hypothesis” – some brane questions are geometric in the A picture, and others are geometric in the B picture. As is well known, the prepotential in the complex structure sector is determined geometrically; this determines A brane central charges and stability and strongly motivates the claim that the spectrum of branes can be understood geometrically in the A picture. We can add to this the claim that the superpotential in the B twisted model is classical; this means that brane moduli spaces are largely determined by the geometry of the B picture. Finally, it may be possible to determine the D terms in the A picture and complete the story. So far as I know, these conjectures are consistent with the evidence, but require much more testing. The most interesting tests are in the stringy regime, as we discuss next. ## 3 Boundary states and branes Exactly solvable CFT’s were a fruitful source of insight into compactification of closed string theory and are now beginning to teach us about branes in these compactifications. The fundamental notion is that of the “boundary state,” a CFT description of a boundary condition as a linear functional on the closed string Hilbert space. Reparameterization invariance and supersymmetry can be easily implemented by imposing operator constraints. One must then impose the condition that all annulus partition functions (associated with pairs of boundary states) have an open string Hilbert space interpretation (the multiplicities are integers); this condition was proposed by Cardy and can be solved for rational CFT’s. D-brane ground states correspond to such boundary states (not much is known about the non-rational case; possibly additional unknown constraints must be satisfied). The simplest and most studied models are orbifolds and orientifolds.
In this case the general boundary state approach can be shown to reduce to the world-sheet prescription proposed in – one introduces image D-branes on the cover and quotients by a simultaneous space-time and gauge action. The case of strings and branes near a single orbifold or orientifold singularity is particularly easy and one obtains quiver gauge theories as world-volume theories. For $`𝐂^3/\mathrm{\Gamma }`$ these have been much studied and among the noteworthy results are the following: 1. The resolution of these singularities is described in quiver gauge theory by FI terms coupling to Kähler moduli. If multiple resolutions with different topology are mathematically possible, they all appear to be accessible physically. 2. The resulting metrics are not Ricci flat. Although some caveats were made in that work, it can be shown that this statement is true at string tree level. 3. The quiver theory depends on the choice of representation of $`\mathrm{\Gamma }`$; the basic case is the regular representation, while non-regular representations correspond to branes wrapped around exceptional cycles (or “fractional branes”). 4. If we take D$`3`$-branes to get a $`3+1`$ theory, the regular representation is distinguished by having zero beta function in the large $`N`$ limit. 5. These theories have supergravity duals corresponding to the quotients $`AdS_5\times S^5/\mathrm{\Gamma }`$. Recently Diaconescu and Gomis have studied the case of $`𝐂^3/𝐙_3`$ in detail. Besides checking the equivalence between the boundary state approach and the proposal of , they determined the mapping between fractional branes and wrapped branes in the large volume limit, using techniques we will describe below. An additional summary of this example can be found in . We now turn to Gepner models and the work . Gepner models provide CFT models which are equivalent to CY compactification at special points in moduli space of enhanced discrete symmetry. The study of boundary states in these models was initiated by Recknagel and Schomerus ; they classified the subset of boundary states which can be obtained by separate boundary conditions in the individual $`𝒩=2`$ minimal model factors, for which Cardy’s techniques apply. (See also , as well as , which uses the Landau-Ginzburg approach.) Let us briefly summarize the spectrum of branes one obtains and the main result used in the analysis of – the intersection form between two branes. Cardy’s analysis (for the diagonal modular invariant) produces boundary conditions in one-to-one correspondence with closed string primary fields; the spectrum of open strings with two such boundary conditions $`a`$ and $`b`$ is generated by primary fields in one-to-one correspondence with those on the right hand side of the (Verlinde) fusion rules $`\varphi _a\times \varphi _b\cong \sum _cN_{ab}^c\varphi _c`$. The $`A_k`$ $`𝒩=2`$ minimal model can be obtained as a deformation of the $`SU(2)_k`$ WZW model, and its primary fields $`\varphi _m^l`$ are labelled similarly, by two integers $`0\le l\le k`$ (the $`SU(2)`$ representation label) and $`0\le m<2k+4`$ (the charge under the $`U(1)`$ of $`𝒩=2`$), up to a $`𝐙_2`$ identification $`(l,m)\sim (k-l,m+k+2)`$. The fusion rules are the product of $`U(1)`$ fusion rules (i.e. $`𝐙_{k+2}`$ charge conservation) with $`SU(2)_k`$ fusion rules. Before implementing the GSO projection, the Gepner model boundary conditions are labelled by a set of such integers, and are all A boundary states (since they correspond to left-right symmetric fields).
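The Verlinde counting just described is simple enough to transcribe directly. The sketch below deliberately ignores the $`𝐙_2`$ field identification and the GSO projection of the next paragraph, and takes the $`U(1)`$ charge additively modulo $`2k+4`$, following the labeling above; it enumerates the open-string primaries between two minimal-model boundary labels:

```python
from itertools import product

def su2k_fusion(l1, l2, k):
    """Allowed l3 in the SU(2)_k fusion l1 x l2 (Verlinde rules)."""
    lo, hi = abs(l1 - l2), min(l1 + l2, 2 * k - l1 - l2)
    return [l3 for l3 in range(lo, hi + 1, 2)]

def open_string_primaries(a, b, k):
    """Primaries phi^l_m of the string stretched between boundary
    labels a=(l1,m1), b=(l2,m2): SU(2)_k fusion for l, additive
    charge conservation for m (m taken mod 2k+4, as labeled above)."""
    (l1, m1), (l2, m2) = a, b
    m3 = (m1 + m2) % (2 * k + 4)
    return [(l3, m3) for l3 in su2k_fusion(l1, l2, k)]

# k=3 factor, as in the (3)^5 Gepner model of the quintic below
print(open_string_primaries((1, 0), (1, 2), k=3))  # [(0, 2), (2, 2)]
```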
The GSO projection then restricts the closed string spectrum to (odd) integer total $`U(1)`$ charge $`m`$, while twisted states with $`m_L\ne m_R`$ are added. The restriction has the effect of reducing the number of distinct A boundary states, while the twisted sectors provide new candidate B boundary states. The final result for the $`(3)^5`$ model is that all boundary states are labelled by a set of five $`L_i\in \{0,1\}`$; the A boundary states are also labelled by five $`M_i`$ satisfying one relation and form representations of the $`𝐙_5^4\times S_5`$ discrete symmetry, while the B boundary states have a single $`M`$ label and represent a $`𝐙_5`$ discrete symmetry. These are the known discrete symmetries of the CFT at the Gepner point; it is known to be equivalent to the Fermat quintic $`\sum _{i=1}^5Z_i^5=0`$ in $`𝐏^4`$ with manifest $`𝐙_5^4`$ symmetry, at a special point in Kähler moduli space with quantum $`𝐙_5`$ symmetry. The modified geometric hypothesis of section 2 would imply that these A branes are exactly the sL-submanifolds of the Fermat quintic, and we can test this idea for the known sL-submanifolds. These are obtained by taking a real section $`\mathrm{Im}e^{2\pi im_i/5}Z_i=0`$: topologically these are $`\mathrm{𝐑𝐏}^3`$’s, which fall into the same representation of $`𝐙_5^4\times S_5`$ as two sets of boundary states: those with all $`L_i=0`$ and those with all $`L_i=1`$. How can we tell which (if either) is their counterpart? A strong check of any proposed identification is that the geometric intersection number of a pair of $`3`$-branes must agree with the quantity $`\mathrm{Tr}_{ab}(-1)^F`$ in this sector of the open string theory. This can be seen by considering electric-magnetic charge quantization in the resulting $`d=4`$ theory. This computation is a special case of those in , and it turns out the $`L_i=1`$ states match this intersection form, while the $`L_i=0`$ states do not (they presumably correspond to some other sL-submanifolds). So far this is in agreement with both the original and the modified geometric hypothesis. However, one also finds that the $`L_i=1`$ brane world-volumes have a massless chiral multiplet, and this disagrees with the geometric prediction of . As discussed in , it is likely that this is lifted by a superpotential, but even so this contradicts the strongest form of the geometric hypothesis, in which both this massless field and the superpotential would have matched. It does not contradict the modified geometric hypothesis, which allows the A brane superpotential to depend on the Kähler form, and furthermore shows that massive fields in the large volume limit can come down to become (linearized) moduli. Such effects and even jumping of the dimension of the moduli space are known to be possible in the B picture; perhaps this superpotential would be manifest in a mirror description. Turning to the B branes, we have more intuition for which of these exist in the large volume limit: namely, the condition (2.2) must be satisfied (if $`Q_6\ne 0`$; there is an analogous statement if $`Q_6=0`$ but $`Q_4\ne 0`$). Although bundles on the quintic are by no means classified, various considerations suggest that generic charge vectors for which the discriminant (the left hand side of (2.2)) is sufficiently large will be associated to stable bundles. Thus it is interesting to express the charges of the B boundary states in large volume terms, and compare.
A precise form of this comparison is to choose a path in Kähler moduli space from the Gepner point to the large volume limit, and use the flat $`Sp(2r,𝐙)`$ connection provided by special geometry to transport the charge lattices between the two regimes. The Kähler moduli space and prepotential for the quintic are of course well known from the famous work of Candelas et al., which computed the periods of the three-form on the mirror. To review the structure of this moduli space: it is a Riemann sphere with three singularities, a large volume limit at $`z\to \mathrm{\infty }`$, the Gepner point with a $`𝐙_5`$ quotient singularity at $`z=0`$, and finally a “conifold” singularity at $`z=1`$ at which a three-cycle of the mirror degenerates (has zero period). It turns out that on the original quintic this is precisely the central charge of the “pure” (trivial gauge field) six-brane. The periods $`\mathrm{\Pi }_i(z)`$ can be obtained as solutions of Picard-Fuchs ODE’s or more explicitly as series expansions around each singularity, with radius of convergence determined by the locations of the others. Two concrete results are now needed from this analysis. First, the mirror map gives us an appropriate basis for the large volume limit – central charges of the individual $`2p`$-branes. Second, given that the central charge of a brane with charge vector $`Q^i`$ is $`Z=Q^i\mathrm{\Pi }_i(z)`$, the transition functions of the flat connection on the charge lattice are simply the linear transformations between the different bases $`\mathrm{\Pi }_i(z)`$ adapted to different regions of moduli space (these are connection formulas for generalized hypergeometric functions). This tells us what the $`2p`$-brane central charges will be at the Gepner point. In principle these could already be compared with a precise computation of the central charges of our boundary states, but such a comparison will run into tricky problems of normalization. The best way to study the charges of D-branes – as was done in the very first example – is to instead compute the interaction between two D-branes in the open string channel, as this is canonically normalized (it is a partition function). Indeed the simplest quantity of this type is the intersection form $`\mathrm{Tr}_{ab}(-1)^F`$ discussed above, and thus the simplest way to proceed is to express the known large volume intersection form in terms of a natural basis at the Gepner point (one which represents the quantum $`𝐙_5`$ symmetry in a simple way) and compare this intersection form with the intersection forms of the boundary states. It turns out that the resulting boundary state charges are simple when expressed using the basis first postulated by Candelas – the zero-brane period and its $`𝐙_5`$ images. The states of minimal charge are the five $`L_i=0`$ states; one of these turns out to be the pure six-brane $`\langle 6B|\cong (\begin{array}{cccc}1& 0& 0& 0\end{array})`$, and to get the others we just need to know the $`𝐙_5`$ monodromy in the large volume basis, which is given in . In the conventions of it is<sup>2</sup><sup>2</sup>2 Note that these are conventions in which the charge vectors include the factor $`\sqrt{\widehat{A}}`$, which are not the conventions of (2.2). The latter are also given in ; they are the ones in which the large volume monodromy is simple but charges are not necessarily integral.
$$g:(\begin{array}{cccc}Q_6& Q_4& Q_2& Q_0\end{array})\mapsto (\begin{array}{cccc}Q_6& Q_4& Q_2& Q_0\end{array})\left(\begin{array}{cccc}4& 1& 8& 5\\ 3& 1& 5& 3\\ 1& 0& 1& 1\\ 1& 0& 0& 1\end{array}\right)$$ (3.1) and thus the others are $`\langle 6B|g^M`$. The charges for the states with $`L=L_i>0`$ can be derived from these by using the fusion rules: essentially, they are $`\langle Q_6|(1+g)^Lg^M`$. One surprise of the result is that the D$`0`$-brane is not present (as a rational boundary state; this is not to say that it does not exist at the Gepner point). It appears that this is also consistent with the geometric hypothesis in the following sense: any location we might pick for the D$`0`$ would break some of $`𝐙_5^4`$, but all of the rational B boundary states are singlets under $`𝐙_5^4`$, so we should not find the D$`0`$ in this analysis.<sup>3</sup><sup>3</sup>3 It appears that other Gepner models can contain the D$`0`$ as a boundary state. Looking at the charges of all of the boundary states, they appear to be consistent with the original geometric hypothesis, at least in the weak sense that they are all consistent with (2.2). Not too much more is known about vector bundles on the quintic, so it is hard to be more precise. On the other hand, the monodromy (3.1) in general can take solutions of (2.2) into non-solutions, making it highly implausible that it is a symmetry of the entire brane spectrum. This is reminiscent of related phenomena in the study of $`𝒩=2`$ gauge theory, and we turn to this analogy. ## 4 Marginal stability on the quintic As we saw in the previous section, the D$`0`$-brane is not a rational boundary state for the Gepner quintic. This leads one to wonder whether it exists in the stringy regime at all, and more generally how much the spectrum of branes varies as we move around. In generic $`𝒩=2`$, $`d=4`$ theories, the spectrum of BPS states depends on the moduli, but it varies in a highly constrained way. A state of charge $`Q`$ will generically be stable under variations of the moduli, but there can exist lines of marginal stability (or “jumping lines”), on which the state can decay to BPS states of charge $`Q_1`$ and $`Q_2`$, if the condition $$|Z(Q)|=|Z(Q_1)|+|Z(Q_2)|$$ (4.1) is satisfied. Here $`Z(Q)=Q\cdot \mathrm{\Pi }(z)`$ is the central charge in terms of a vector of periods $`\mathrm{\Pi }(z)`$ at a point $`z`$ in moduli space; for the A branes these are the periods of the three-form, $`\mathrm{\Pi }=\int \mathrm{\Omega }`$ (normalized to $`\int \mathrm{\Omega }\wedge \overline{\mathrm{\Omega }}=1`$). The most familiar examples are supersymmetric gauge theories, which have been studied in great detail. For example, pure $`SU(2)`$ $`𝒩=2`$ gauge theory (the original Seiberg-Witten solution) has a line of marginal stability which goes through the massless monopole and dyon points and separates the strong and weak coupling limits. The strong coupling BPS spectrum consists only of the monopole and the dyon, the two states responsible for the singularities. This phenomenon was necessary, as otherwise monodromies around the massless monopole point would produce states with arbitrarily large electric charge, which are not present in the known semiclassical spectrum. Besides the known semiclassical spectrum, a number of constraints follow from the solution for the prepotential and justify this result.
The primary constraint is the physical correspondence between singularities and massless states : if $`Z(Q)`$ vanishes at some $`z`$, either there is a corresponding singularity, which we can think of as coming from integrating out this state at nearby points, or else the state must not exist at $`z`$. If it exists at some $`z^{\prime }`$, there must be a line of marginal stability separating $`z`$ and $`z^{\prime }`$. This is quite strong, as it turns out that the ratio of the two periods $`a_D/a`$ assumes all possible real values (in all the asymptotically free $`SU(2)`$ theories in fact) and thus every charge can be constrained. One sees this most easily by combining the result (easily verified numerically) that $`\mathrm{Im}a_D/a`$ changes sign between the weak and strong coupling regimes with the $`SL(2,𝐙)`$ transformation properties of $`a_D/a`$ (which force the line $`\mathrm{Im}a_D/a=0`$ to connect the massless monopole and dyon points). Our earlier observation that the $`𝐙_5`$ monodromy obtained by encircling the Gepner point in the quintic does not fit well with the known constraints on the large volume spectrum is our first suggestion that similar phenomena will obtain here. There is also a qualitative similarity to the change of sign of $`\mathrm{Im}a_D/a`$. Let the conifold point be $`z_c=1`$: here the six-brane becomes massless, $`\mathrm{\Pi }_6\sim z-z_c`$. Although the other periods are not analytic here, they are still continuous: $`\mathrm{\Pi }\sim (z-z_c)\mathrm{log}(z-z_c)+\mathrm{regular}`$. Thus just as in gauge theory, $`\mathrm{Im}\mathrm{\Pi }_6/\mathrm{\Pi }_0`$ changes sign as we go through this point. This starts to suggest that the gauge theory picture with its drastic change in the spectrum might also be possible here. Unfortunately few of the other elements of the story there have been developed for the quintic (or indeed any CY) moduli space. In particular, the appropriate analogs of $`SL(2,𝐙)`$ and the fundamental region are not known, making it difficult to get a good global picture of the moduli space. The boundary state results show us that the answer will not be as simple as that for gauge theory – the spectrum will not collapse simply to the states which can become massless. We should also not assume that all of the boundary states exist at large volume. To study this one can simply follow all of the central charges for the boundary states out from the Gepner point to the large volume limit, to see what happens. One expects more marginal stability lines in the neighborhood of the conifold point, so to minimize the possibilities for decay we choose the trajectory $`z`$ real and negative, opposite to it in moduli space. We then numerically integrated the Picard-Fuchs equations (and checked the results against the series expansions of ) to get the periods and thus the BPS masses. Using these to compute the masses of BPS branes with the charges of all rational boundary states produces a surprise: one of them has its period go through zero! In other words, there exists a BPS state at the Gepner point whose mass appears to go to zero at a non-singular point $`X`$ in moduli space. (Readers who want proof that this is not an error of numerics or conventions will find a semi-qualitative proof in the appendix.) This in itself is not inconsistent as long as there exists a line of marginal stability separating the point $`X`$ from the Gepner point.
At this point we run into one of the main difficulties in studying these questions for CY: there are an infinite number of candidate marginal stability lines, and we need more knowledge about the BPS spectrum to decide which are real (i.e. the decay takes place, which requires the states of charge $`Q_1`$ and $`Q_2`$ to actually exist on the line). This is closely related to the fact that at a generic point in moduli space, there exist charges $`Q`$ such that $`|Z(Q)|<ϵ`$ for any positive $`ϵ`$, no matter how small. Consider the Gepner point: there the periods are the fifth roots of unity, so the set of $`Z(Q)`$ is a $`𝐙_5`$-symmetric lattice embedded in the complex plane. Although we have not as yet found the true marginal stability lines, we can at least try to postulate a pair of charges $`Q_1`$ and $`Q_2`$ into which the problematic state can decay and whose masses do not cross zero on the way to large volume. This is not hard to do, and thus the existence of such a marginal stability line seems perfectly plausible – there seems no reason to doubt the consistency of the theory. Thus we have proof of the existence of at least one marginal stability line; given that we have two points at which $`Z(Q)`$ vanishes for “simple” charges $`Q`$, it is quite likely that many other true marginal stability lines pass through these points. An even stronger consideration of this type is to follow large volume branes to the Gepner point: it is easy to find charge vectors satisfying (2.2) whose period goes through zero on this axis. If it is true that these generally correspond to stable bundles, we have many more examples. All this starts to be significant evidence for the claim that the BPS spectrum is rather different in the stringy regime. ### 4.1 A note on attractor points A question related to marginal stability but somewhat simpler has arisen in the study of BPS black holes in CY compactification. It has been shown that the entropy of such black holes is governed by the “attractor mechanism.” Given a black hole of large charge $`Q`$, the consistency condition for a covariantly constant spinor is a first order equation which is just gradient flow on the moduli space to a minimum of the quantity $`S(z)=|Q\cdot \mathrm{\Pi }(z)|`$; the entropy is the minimal value $`S_{min}(Q)`$. For some $`Q`$, it is possible that $`S_{min}(Q)=0`$. In the previously known examples (such as the state which goes massless at the conifold point), the state existed at the minimizing point in moduli space and produced a singularity in the moduli space metric, modifying the discussion. What we have found here is a $`Q`$ for which the discussion above leads to a contradiction (as noted in ) – the attractor equation breaks down (has no sensible solution) before reaching the horizon, so this is not a failure of supergravity. Indeed, this could be interpreted as an argument that such black holes cannot exist, and an observation consistent with this idea is that (at least in some cases) the condition $`S_{min}(Q)=0`$ reduces to the negation of (2.2) in the large volume limit on the quintic. However, we have found a particle with (small) charge $`Q`$ and $`S_{min}(Q)=0`$ at the Gepner point, so we have a paradox. We can take $`N`$ of these particles and put them into a small region of space, using only total energy $`Nm+ϵ`$. For $`N`$ sufficiently large, one would certainly expect that they form a black hole of charge $`NQ`$, for which the previous argument applies. What is going on?
The resolution will almost certainly use the fact that – as a single brane – the object in question was unstable at the minimizing point. One scenario is that the final stable object is a bound state of two black holes of charges $`NQ_1`$ and $`NQ_2`$ with a hard core repulsive potential. This would evade the previously cited argument, which assumed a spherically symmetric configuration. It seems likely that more surprises along this line await us. ## 5 Conclusions and further directions D-branes have played a central role in the study of superstring and M theory duality. Quite a lot has been understood about compactifications with enhanced supersymmetry, but eventually we will need to deal with the physical cases of $`𝒩=0`$ (and hopefully $`𝒩=1`$!) supersymmetry in four dimensions. A large class of $`𝒩=1`$ supersymmetric string compactifications can be obtained by using D-branes on Calabi-Yaus. Many of these are related to known constructions (F theory or the strong coupling limit of heterotic strings), but what I have tried to show here is that we can make further progress by using special properties of the weak type II string coupling limit, namely the close relation between D-brane theories which fill different parts of Minkowski space (e.g. D$`3`$ and D$`0`$-branes), and the powerful tools of mirror symmetry and exactly solvable CFT. A reasonable goal for the current work is to settle the geometric hypothesis and modified geometric hypothesis as described here – namely, to show that the superpotential and D terms depend only on complex and Kähler data for B branes (the reverse for A branes) and answer the following questions: 1. Are all A branes $`3`$-branes wrapped on sL-submanifolds, even for a stringy CY? Are all marginal stability lines and decays described by the local intercommutation of $`3`$-branes? 2. If so, do the mirror symmetry predictions for the spectrum of B branes agree with geometric predictions at large volume? What does the semistability condition translate into in the A picture? 3. Is the spectrum of B branes on stringy CY’s very different from the large volume spectrum (as the results here suggest)? If so, is it finite, or perhaps characterized by simple inequalities analogous to (2.2)? 4. Is knowledge of the large volume spectrum and the exact prepotential enough to determine the spectrum throughout moduli space (using consistency arguments of the sort which worked for supersymmetric gauge theory)? 5. Can we make a complete statement about the potential and moduli space on these branes (presumably combining B picture information to get the superpotential and A picture information to get the D terms)? 6. Can we extend this picture to finite string coupling, perhaps by making contact with the heterotic string limits of the same models? A longer term goal will be to understand terms in the effective action which are not so strongly constrained by supersymmetry, such as the D$`0`$-metric on a CY. Perhaps interesting non-supersymmetric models can be obtained by considering non-BPS space-filling brane configurations, along the lines of . ## Acknowledgments I would like to thank T. Banks, I. Brunner, D.-E. Diaconescu, B. R. Greene, M. Kontsevich, S. Kachru, A. Lawrence, J. Maldacena, G. Moore, C. Romelsberger, C. Vafa and E. Witten for collaboration and helpful discussions. This research is supported in part by DOE grant DE-FG05-90ER40559.
## Appendix We give a semi-qualitative argument for the vanishing period, using the results for the periods of the mirror to the quintic in . They are functions of a complex modulus $`\psi `$ which covers the Riemann sphere with three punctures. $`\psi \to \mathrm{\infty }`$ is the large volume limit, with $`t=B+iV=\frac{5}{2\pi i}\mathrm{log}(5\psi )`$. $`\psi =1`$ is the conifold point, and $`\psi =0`$ is the Gepner point. $`\psi \to \alpha \psi `$ with $`\alpha =e^{2\pi i/5}`$ is the $`𝐙_5`$ quantum symmetry of the Gepner point – it leads to the same bulk theory but acts as an $`Sp(4,𝐙)`$ monodromy on the brane spectrum. Candelas et al. use a basis $`\omega _k(\psi )`$ where $`\omega _0`$ is the $`0`$-brane period in the large volume limit and the others are its images under the $`𝐙_5`$ of the Gepner point. These are multi-valued on the $`\psi `$ plane and thus it is necessary to take care with the domains of definition. There are three lines along which the periods have simple reality properties. We define the line $`A`$ to be $`\psi =x`$ real with $`x>1`$, the line $`B`$ as $`\psi =x`$ real satisfying $`0\le x<1`$, and the line $`C`$ as $`\psi =e^{2\pi i/10}x`$ with $`x`$ real and positive. From the explicit series expansions for the periods it is easy to check the following qualitative properties: 1. Near the Gepner point, $`\omega _j(\psi )\sim \alpha ^{2+j}C\psi `$ with $`C=\mathrm{\Gamma }(1/5)/\mathrm{\Gamma }(4/5)^4`$ a positive real constant. 2. In the large volume limit, $`\omega _j(\psi )\sim \frac{S_{j3}}{6}t^3`$ where $`S_{j3}=0,5,15,15,5`$ for $`j=0,1,2,3,4`$. 3. Along $`B`$ we have $`(\omega _j(x))^{*}=\omega _{1-j}(x)`$ and along $`C`$ we have $`(\omega _j(\alpha ^{1/2}x))^{*}=\omega _{-j}(\alpha ^{1/2}x)`$ (this must be checked using both large and small volume expansions). From , one can check that the period $$\mathrm{\Pi }_X=\omega _1-\omega _4$$ (5.1) is the central charge of a B boundary state with $`L_1=1`$, $`L_i=0`$ for $`i>1`$. We now argue that $`\mathrm{\Pi }_X`$ will have a zero along the axis C. From (iii) we see that $`\mathrm{\Pi }_X`$ is purely imaginary along this axis, so if the imaginary part changes sign between the Gepner and large volume limits it must have a zero. This can be checked explicitly given the limiting behaviors we quoted. A way to see that this was inevitable is to consider the behavior of the six-brane period $`\mathrm{\Pi }_6=\omega _1-\omega _0`$ on the loop $`ABC`$ in moduli space. At large volume, $`\mathrm{\Pi }_6\sim \frac{5}{6}t^3\sim -\frac{5}{6}iV^3`$ as $`\psi \to \alpha ^{1/2}\mathrm{\infty }`$, so it comes in from negative imaginary infinity towards zero. Along A and B, $`\mathrm{\Pi }_6`$ is purely imaginary and, as we know, it crosses zero at $`\psi =1`$ (the conifold point) and comes out the other side, to reach its value at the Gepner point, $`\mathrm{\Pi }_6(\psi )\sim C(\alpha ^2-\alpha ^3)\psi =2iC\mathrm{sin}\frac{\pi }{5}\psi `$ as $`\psi \to 0`$. As we come back along the axis C, we know that the six-brane does not become massless anywhere, so $`\mathrm{\Pi }_6`$ must move off into the complex plane to avoid the origin, finally joining the same asymptotics $`\mathrm{\Pi }_6\sim -\frac{5}{6}iV^3`$ we had at large positive $`\psi `$. This behavior implies that $`\mathrm{\Pi }_6`$ must cross the real axis at some point, and since $`\omega _0`$ is real all along C, $`\omega _1`$ must become real at this point.
# A Lanczos approach to the inverse square root of a large and sparse matrix ## 1 Introduction The computation of the inverse square root of a matrix is a special problem in scientific computing. It is related to the matrix sign and polar decomposition . One may define the matrix sign by: $$\text{sign}(A)=A(A^2)^{-\frac{1}{2}}$$ (1) where $`A`$ is a complex matrix with no pure imaginary eigenvalues. In polar coordinates, a complex number $`z=x+iy`$ is represented by $$z=|z|e^{i\varphi },\varphi =\mathrm{arctan}\frac{y}{x}$$ (2) In analogy, the polar decomposition of a matrix $`A`$ is defined by: $$A=V(A^{\dagger }A)^{\frac{1}{2}},V^{-1}=V^{\dagger }$$ (3) where $`V`$ is the polar part and the second factor corresponds to the absolute value of $`A`$. The mathematical literature involving the matrix sign function traces back to 1971, when it was used to solve the Lyapunov and algebraic Riccati equations . In computational physics one may face a similar problem when dealing with Monte Carlo simulations of fermion systems, the so-called sign problem . In this case the integration measure is proportional to the determinant of a matrix, and the polar decomposition may be helpful to monitor the sign of the determinant. The example brought in this paper comes from the recent progress in formulating Quantum Chromodynamics (QCD) on a lattice with exact chiral symmetry . In the continuum, the massless Dirac propagator $`D_{cont}`$ is chirally symmetric, i.e. $$\gamma _5D_{cont}+D_{cont}\gamma _5=0$$ (4) On a regular lattice with spacing $`a`$ the symmetry is suppressed according to the Ginsparg-Wilson relation : $$\gamma _5D+D\gamma _5=aD\gamma _5D$$ (5) where $`D`$ is the lattice Dirac operator. An explicit example of a Dirac operator obeying this relation is the so-called overlap operator : $$aD=1-A(A^{\dagger }A)^{-1/2},A=M-aD_W$$ (6) where $`M`$ is a shift parameter in the range $`(0,2)`$, which I have fixed at one. $`D_W`$ is the Wilson operator, $$D_W=\sum _{\mu =1}^4\gamma _\mu \nabla _\mu -\frac{a}{2}\sum _{\mu =1}^4\mathrm{\Delta }_\mu $$ (7) which is a nearest-neighbors discretization of the continuum Dirac operator (it violates the Ginsparg-Wilson relation). $`\nabla _\mu `$ and $`\mathrm{\Delta }_\mu `$ are first and second order covariant differences given by: $`(\nabla _\mu \psi )_i=\frac{1}{2a}(U_{\mu ,i}\psi _{i+\widehat{\mu }}-U_{\mu ,i-\widehat{\mu }}^{\dagger }\psi _{i-\widehat{\mu }})`$ $`(\mathrm{\Delta }_\mu \psi )_i=\frac{1}{a^2}(U_{\mu ,i}\psi _{i+\widehat{\mu }}+U_{\mu ,i-\widehat{\mu }}^{\dagger }\psi _{i-\widehat{\mu }}-2\psi _i)`$ where $`\psi _i`$ is a fermion field at the lattice site $`i`$ and $`U_{\mu ,i}`$ an $`SU(3)`$ lattice gauge field associated with the oriented link $`(i,i+\widehat{\mu })`$. These are unitary 3 by 3 complex matrices with determinant one. A set of such matrices forms a lattice gauge “configuration”. $`\gamma _\mu ,\mu =1,\mathrm{},5`$ are $`4\times 4`$ Dirac matrices which anticommute with each other. Therefore, if there are $`N`$ lattice points in four dimensions, the matrix $`A`$ is of order $`12N`$. A residual symmetry of the matrix $`A`$ that comes from the continuum is the so-called $`\gamma _5`$-symmetry, which is the Hermiticity of the $`\gamma _5A`$ operator. By definition, computation of $`D`$ involves the inverse square root of a matrix. This is a non-trivial task for large matrices. Therefore several algorithms have been proposed by lattice QCD physicists .
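For orientation, the definitions (1) and (3) can be checked directly on small dense matrices by an eigenvalue decomposition before turning to the sparse-matrix methods discussed below. This is only a naive $`O(n^3)`$ baseline sketch, restricted to Hermitian $`A`$ (for which $`A^{\dagger }A=A^2`$), not the method of this paper:

```python
import numpy as np

def inv_sqrt_hermitian(H):
    """H^(-1/2) for Hermitian positive definite H, via eigendecomposition."""
    w, V = np.linalg.eigh(H)
    return (V / np.sqrt(w)) @ V.conj().T

def sign(A):
    """Matrix sign per eq. (1): sign(A) = A (A^2)^(-1/2), for Hermitian A."""
    return A @ inv_sqrt_hermitian(A.conj().T @ A)

rng = np.random.default_rng(0)
X = rng.standard_normal((8, 8)) + 1j * rng.standard_normal((8, 8))
A = X + X.conj().T                         # Hermitian test matrix
S = sign(A)
print(np.linalg.norm(S @ S - np.eye(8)))   # ~1e-14: sign(A)^2 = 1
```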
All these methods rely on matrix-vector multiplications with the sparse Wilson matrix $`D_W`$, and are therefore feasible for large simulations. In fact, methods that approximate the inverse square root by Legendre and Chebyshev polynomials need to know a priori the extreme eigenvalues of $`A^{\dagger }A`$ to be effective. This requires computational resources of at least one Conjugate Gradients (CG) inversion. In , the inverse square root is approximated by a rational function, which allows an efficient computation via a multi-shift CG iteration. Storage here may be an obstacle, which is remedied by a second CG step . The Padé approximation used by needs the knowledge of the smallest eigenvalue of $`A^{\dagger }A`$. Therefore the method becomes effective only in connection with the $`D`$ inversion . The method presented earlier by the author relies on taking exactly the inverse square root from the Ritz values. These are the roots of the Lanczos polynomial approximating the inverse of $`A^{\dagger }A`$. In that work the Lanczos polynomial was constructed by applying the Hermitian operator $`\gamma _5A`$. The latter is indefinite, and thereby responsible for observed oscillations in the residual vector norm . Here I use a Lanczos polynomial on the positive definite matrix $`A^{\dagger }A`$. In this case the residual vector norm decreases monotonically and leads to a stable method. This is a crucial property that allows a reliable stopping criterion, which I will present here. The paper is self-contained: in the next section I will briefly present the Lanczos algorithm and set the notations. In section 3, I use the algorithm to solve linear systems, and in section 4, the computation of the inverse square root is given. The method is tested in section 5 and conclusions are drawn in the end. ## 2 The Lanczos Algorithm The Lanczos iteration is known to approximate the spectrum of the underlying matrix in an optimal way and, in particular, it can be used to solve linear systems . Let $`Q_n=[q_1,\mathrm{},q_n]`$ be the set of orthonormal vectors such that $$A^{\dagger }AQ_n=Q_nT_n+\beta _nq_{n+1}(e_n^{(n)})^T,q_1=\rho _1b,\rho _1=1/\|b\|_2$$ (8) where $`T_n`$ is a tridiagonal and symmetric matrix, $`b`$ is an arbitrary vector, and $`\beta _n`$ a real and positive constant. $`e_m^{(n)}`$ denotes the unit vector with $`n`$ elements in the direction $`m`$. By writing down the above decomposition in terms of the vectors $`q_i,i=1,\mathrm{},n`$ and the matrix elements of $`T_n`$, I arrive at a three-term recurrence that allows one to compute these vectors in increasing order, starting from the vector $`q_1`$. This is the $`LanczosAlgorithm`$: $$\begin{array}{c}\beta _0=0,\rho _1=1/\|b\|_2,q_0=o,q_1=\rho _1b\hfill \\ \text{for }i=1,\mathrm{}\hfill \\ v=A^{\dagger }Aq_i\hfill \\ \alpha _i=q_i^{\dagger }v\hfill \\ v:=v-q_i\alpha _i-q_{i-1}\beta _{i-1}\hfill \\ \beta _i=\|v\|_2\hfill \\ \text{if }\beta _i<tol,n=i,\text{ end for}\hfill \\ q_{i+1}=v/\beta _i\hfill \end{array}$$ (9) where $`tol`$ is a tolerance which serves as a stopping condition. The Lanczos Algorithm constructs a basis for the Krylov subspace : $$\text{span}\{b,A^{\dagger }Ab,\mathrm{},(A^{\dagger }A)^{n-1}b\}$$ (10) If the Algorithm stops after $`n`$ steps, one says that the associated Krylov subspace is invariant. In floating point arithmetic, there is a danger that once the Lanczos Algorithm (polynomial) has approximated well some part of the spectrum, the iteration reproduces vectors which are rich in that direction .
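A compact NumPy transcription of the iteration (9) reads as follows (a sketch: `matvec` stands for the application of $`A^{\dagger }A`$ to a vector, and no reorthogonalization is performed, so in floating point the vectors lose orthogonality exactly as discussed here):

```python
import numpy as np

def lanczos(matvec, b, nmax, tol=1e-12):
    """Lanczos iteration (9) for a Hermitian positive definite operator.

    matvec(v) applies A^dagger A to v. Returns the coefficients
    alpha_i, beta_i of the tridiagonal T_n and the Lanczos vectors q_i.
    """
    alphas, betas, Q = [], [], []
    q_prev = np.zeros_like(b)
    q = b / np.linalg.norm(b)
    beta_prev = 0.0
    for _ in range(nmax):
        Q.append(q)
        v = matvec(q)
        alpha = np.vdot(q, v).real        # q_i^dagger v
        v = v - alpha * q - beta_prev * q_prev
        beta = np.linalg.norm(v)
        alphas.append(alpha)
        if beta < tol:                    # invariant Krylov subspace found
            break
        q_prev, q, beta_prev = q, v / beta, beta
        betas.append(beta)
    return np.array(alphas), np.array(betas), Q
```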
As a consequence of this round off behavior, the orthogonality of the Lanczos vectors is spoiled, with an immediate impact on the history of the iteration: where the algorithm would stop after $`n`$ steps in exact arithmetic, in the presence of round off errors the loss of orthogonality keeps the algorithm going. ## 3 The Lanczos Algorithm for solving $`A^{\dagger }Ax=b`$ Here I will use this algorithm to solve linear systems, where the loss of orthogonality will not play a role in the sense that I will use a different stopping condition. I seek the solution in the form $$x=Q_ny_n$$ (11) By projecting the original system onto the Krylov subspace I get: $$Q_n^{\dagger }A^{\dagger }Ax=Q_n^{\dagger }b$$ (12) By construction, I have $$b=Q_ne_1^{(n)}/\rho _1,$$ (13) Substituting $`x=Q_ny_n`$ and using (8), my task is now to solve the system $$T_ny_n=e_1^{(n)}/\rho _1$$ (14) Therefore the solution is given by $$x=Q_nT_n^{-1}e_1^{(n)}/\rho _1$$ (15) This way, using the Lanczos iteration, one reduces the size of the matrix to be inverted. Moreover, since $`T_n`$ is tridiagonal, one can compute $`y_n`$ by short recurrences. If I define: $$r_i=b-A^{\dagger }Ax_i,q_i=\rho _ir_i,\stackrel{~}{x}_i=\rho _ix_i$$ (16) where $`i=1,\mathrm{}`$, it is easy to show that $$\begin{array}{c}\rho _{i+1}\beta _i+\rho _i\alpha _i+\rho _{i-1}\beta _{i-1}=0\hfill \\ q_i+\stackrel{~}{x}_{i+1}\beta _i+\stackrel{~}{x}_i\alpha _i+\stackrel{~}{x}_{i-1}\beta _{i-1}=0\hfill \end{array}$$ (17) Therefore the solution can be updated recursively and I have the following $`Algorithm1`$ for solving the system $`A^{\dagger }Ax=b`$: $$\begin{array}{c}\beta _0=0,\rho _1=1/\|b\|_2,q_0=o,q_1=\rho _1b\hfill \\ \text{for }i=1,\mathrm{}\hfill \\ v=A^{\dagger }Aq_i\hfill \\ \alpha _i=q_i^{\dagger }v\hfill \\ v:=v-q_i\alpha _i-q_{i-1}\beta _{i-1}\hfill \\ \beta _i=\|v\|_2\hfill \\ q_{i+1}=v/\beta _i\hfill \\ \stackrel{~}{x}_{i+1}=-\frac{q_i+\stackrel{~}{x}_i\alpha _i+\stackrel{~}{x}_{i-1}\beta _{i-1}}{\beta _i}\hfill \\ \rho _{i+1}=-\frac{\rho _i\alpha _i+\rho _{i-1}\beta _{i-1}}{\beta _i}\hfill \\ r_{i+1}:=q_{i+1}/\rho _{i+1}\hfill \\ x_{i+1}:=\stackrel{~}{x}_{i+1}/\rho _{i+1}\hfill \\ \text{if }\frac{1}{|\rho _{i+1}|}<tol,n=i,\text{ end for}\hfill \end{array}$$ (18) ## 4 The Lanczos Algorithm for solving $`(A^{\dagger }A)^{1/2}x=b`$ Now I would like to compute $`x=(A^{\dagger }A)^{-1/2}b`$ and still use the Lanczos Algorithm. In order to do so I make the following observations: Let $`(A^{\dagger }A)^{-1/2}`$ be expressed by a matrix-valued function, for example the integral formula : $$(A^{\dagger }A)^{-1/2}=\frac{2}{\pi }\int _0^{\mathrm{\infty }}dt(t^2+A^{\dagger }A)^{-1}$$ (19) From the previous section, I use the Lanczos Algorithm to compute $$(A^{\dagger }A)^{-1}b=Q_nT_n^{-1}e_1^{(n)}/\rho _1$$ (20) It is easy to show that the Lanczos Algorithm is shift-invariant, i.e. if the matrix $`A^{\dagger }A`$ is shifted by a constant, say $`t^2`$, the Lanczos vectors remain invariant. Moreover, the corresponding Lanczos matrix is shifted by the same amount. This property allows one to solve the system $`(t^2+A^{\dagger }A)x=b`$ by using the same Lanczos iteration as before. Since the matrix $`(t^2+A^{\dagger }A)`$ is better conditioned than $`A^{\dagger }A`$, it can be concluded that once the original system is solved, the shifted one is solved too.
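Returning briefly to $`Algorithm1`$, a NumPy sketch of the recursion (18) reads as follows (the tilde quantities are initialized to zero, reproducing $`q_1=\rho _1b`$ and $`x_1=0`$; `matvec` again applies $`A^{\dagger }A`$):

```python
import numpy as np

def solve_normal(matvec, b, tol=1e-10, nmax=1000):
    """Algorithm 1 (18): solve A^dagger A x = b by Lanczos recurrences."""
    rho = 1.0 / np.linalg.norm(b)
    q_prev, q = np.zeros_like(b), rho * b
    xt_prev, xt = np.zeros_like(b), np.zeros_like(b)   # tilde-x vectors
    rho_prev, beta_prev = 0.0, 0.0
    for _ in range(nmax):
        v = matvec(q)
        alpha = np.vdot(q, v).real
        v = v - alpha * q - beta_prev * q_prev
        beta = np.linalg.norm(v)
        xt_next = -(q + alpha * xt + beta_prev * xt_prev) / beta
        rho_next = -(alpha * rho + beta_prev * rho_prev) / beta
        if 1.0 / abs(rho_next) < tol:                  # stopping rule of (18)
            return xt_next / rho_next
        q_prev, q = q, v / beta
        xt_prev, xt = xt, xt_next
        rho_prev, rho, beta_prev = rho, rho_next, beta
    return xt / rho
```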
Therefore, for the shifted systems I have: $$(t^2+A^{\dagger }A)^{-1}b=Q_n(t^2+T_n)^{-1}e_1^{(n)}/\rho _1$$ (21) Using the above integral formula and putting everything together, I get: $$x=(A^{\dagger }A)^{-1/2}b=Q_nT_n^{-1/2}e_1^{(n)}/\rho _1$$ (22) There are some remarks to be made here: a) As before, by applying the Lanczos iteration on $`A^{\dagger }A`$, the problem of computing $`(A^{\dagger }A)^{-1/2}b`$ reduces to the problem of computing $`y_n=T_n^{-1/2}e_1^{(n)}/\rho _1`$, which is typically a much smaller problem than the original one. But since $`T_n^{-1/2}`$ is full, $`y_n`$ cannot be computed by short recurrences. It can be computed for example by using the full decomposition of $`T_n`$ in its eigenvalues and eigenvectors; in fact this is the method I have employed too, for its compactness and the small overhead for moderate $`n`$. b) The method is not optimal, as it would have been if one had applied it directly to the matrix $`(A^{\dagger }A)^{1/2}`$. By using $`A^{\dagger }A`$ the condition is squared, and one loses a factor of two compared to the theoretical case! c) From the derivation above, it can be concluded that the system $`(A^{\dagger }A)^{1/2}x=b`$ is solved at the same time as the system $`A^{\dagger }Ax=b`$. d) To implement the result (22), I first construct the Lanczos matrix and then compute $$y_n=T_n^{-1/2}e_1^{(n)}/\rho _1$$ (23) To compute $`x=Q_ny_n`$, I repeat the Lanczos iteration. I save the scalar products, though it is not necessary. Therefore I have the following $`Algorithm2`$ for solving the system $`(A^{\dagger }A)^{1/2}x=b`$: $$\begin{array}{c}\beta _0=0,\rho _1=1/\|b\|_2,q_0=o,q_1=\rho _1b\hfill \\ \text{for }i=1,\mathrm{}\hfill \\ v=A^{\dagger }Aq_i\hfill \\ \alpha _i=q_i^{\dagger }v\hfill \\ v:=v-q_i\alpha _i-q_{i-1}\beta _{i-1}\hfill \\ \beta _i=\|v\|_2\hfill \\ q_{i+1}=v/\beta _i\hfill \\ \rho _{i+1}=-\frac{\rho _i\alpha _i+\rho _{i-1}\beta _{i-1}}{\beta _i}\hfill \\ \text{if }\frac{1}{|\rho _{i+1}|}<tol,n=i,\text{ end for}\hfill \\ \\ \text{Set }(T_n)_{i,i}=\alpha _i,(T_n)_{i+1,i}=(T_n)_{i,i+1}=\beta _i,\text{ otherwise }(T_n)_{i,j}=0\hfill \\ y_n=T_n^{-1/2}e_1^{(n)}/\rho _1=U_n\mathrm{\Lambda }_n^{-1/2}U_n^Te_1^{(n)}/\rho _1\hfill \\ \\ q_0=o,q_1=\rho _1b,x_0=o\hfill \\ \text{for }i=1,\mathrm{},n\hfill \\ x_i=x_{i-1}+q_iy_n^{(i)}\hfill \\ v=A^{\dagger }Aq_i\hfill \\ v:=v-q_i\alpha _i-q_{i-1}\beta _{i-1}\hfill \\ q_{i+1}=v/\beta _i\hfill \end{array}$$ (24) where by $`o`$ I denote a vector with zero entries and $`U_n,\mathrm{\Lambda }_n`$ the matrices of the eigenvectors and eigenvalues of $`T_n`$. Note that there are only four large vectors necessary to store: $`q_{i-1},q_i,v,x_i`$. ## 5 Testing the method I propose a simple test: I solve the system $`A^{\dagger }Ax=b`$ by applying $`Algorithm2`$ twice, i.e. I solve the linear systems $$(A^{\dagger }A)^{1/2}z=b,(A^{\dagger }A)^{1/2}x=z$$ (25) in the above order. For each approximation $`x_i`$, I compute the residual vector $$r_i=b-A^{\dagger }Ax_i$$ (26) The method is tested for an $`SU(3)`$ configuration at $`\beta =6.0`$ on an $`8^3\times 16`$ lattice, corresponding to a complex matrix $`A`$ of order $`98304`$. In Fig.1 I show the norm of the residual vector decreasing monotonically. The stagnation of $`\|r_i\|_2`$ for small values of $`tol`$ may come from the accumulation of round off error in the $`64`$-bit precision arithmetic used here. This example shows that the tolerance line is above the residual norm line, which confirms the expectation that $`tol`$ is a good stopping condition for $`Algorithm2`$. ## 6 Conclusions I have presented a Lanczos method to compute the inverse square root of a large and sparse positive definite matrix.
The method is characterized by a residual vector norm that decreases monotonically and a consistent stopping condition. This stability should be compared with a similar method presented earlier by the author , where the underlying Hermitian but indefinite matrix $`\gamma _5A`$ led to appreciable instabilities in the norm of the residual vector. In terms of complexity, this algorithm requires fewer operations for the same accuracy than its indefinite matrix counterpart. This property is guaranteed by the monotonicity of the residual vector norm. Nonetheless, the bulk of the work remains the same. With this improvement in place the method is complete. It shares with the methods presented in the same underlying Lanczos polynomial. As is well known, CG and Lanczos methods for solving a linear system produce the same results in exact arithmetic. In fact, CG derives from the Lanczos algorithm by solving the coupled two-term recurrences of CG for a single three-term recurrence of Lanczos. However, the coupled two-term recurrences of CG accumulate less round off. This makes CG preferable for ill-conditioned problems. There are two main differences between the method presented here and those in : a) Since CG and Lanczos are equivalent, they produce the same Lanczos matrix. Therefore, any function of $`A^{\dagger }A`$ translates for both algorithms into a function of $`T_n`$ (given the basis of Lanczos vectors). The latter function translates into a function of the Ritz values, the eigenvalues of $`T_n`$. That is, whenever the methods of papers try to approximate the inverse square root of $`A^{\dagger }A`$, the underlying CG algorithm shifts this function to the Ritz values. It is clear now that if I take the inverse square root from the Ritz values exactly, I don’t have any approximation error. This is done in $`Algorithm2`$. b) $`Algorithm2`$ sets no limits on the amount of memory required, whereas the multi-shift CG needs to store as many vectors as the number of shifts. For high accuracy approximations the multi-shift CG is not practical. However, one may lift this limit at the expense of a second CG iteration (two-step CG) . Therefore $`Algorithm2`$ and the two-step CG have the same iteration workload, with $`Algorithm2`$ computing the inverse square root exactly. Additionally, $`Algorithm2`$ requires the calculation of Ritz eigenpairs of $`T_n`$, which makes for an overhead proportional to $`n^2`$ when using the QR algorithm for the eigenvalues and inverse iteration for the eigenvectors . Since the complexity of the Lanczos algorithm is $`nN`$, the relative overhead is proportional to $`n/N`$. For moderate gauge couplings and lattice sizes this is a small percentage. I conclude that the algorithms of may be used in situations where high accuracy is not required and/or $`A`$ is well-conditioned. Experience with overlap fermions shows that high accuracy is often essential . In such situations $`Algorithm2`$ is best suited.
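As a concrete companion to sections 4 and 5, here is a two-pass NumPy sketch of $`Algorithm2`$ (24), with the small $`T_n`$ problem solved by eigendecomposition, followed by the double-application test of section 5; the dense random matrix stands in for the sparse lattice operator and its size is an arbitrary choice:

```python
import numpy as np

def inv_sqrt_apply(matvec, b, tol=1e-10, nmax=1000):
    """Algorithm 2 (24): return x = (A^dagger A)^(-1/2) b, two Lanczos passes."""
    rho1 = 1.0 / np.linalg.norm(b)
    alphas, betas = [], []
    # First pass: build T_n, using 1/|rho_i| as the stopping condition.
    q_prev, q = np.zeros_like(b), rho1 * b
    rho_prev, rho, beta_prev = 0.0, rho1, 0.0
    for _ in range(nmax):
        v = matvec(q)
        alpha = np.vdot(q, v).real
        v = v - alpha * q - beta_prev * q_prev
        beta = np.linalg.norm(v)
        alphas.append(alpha)
        rho_prev, rho = rho, -(alpha * rho + beta_prev * rho_prev) / beta
        if 1.0 / abs(rho) < tol:
            break
        betas.append(beta)
        q_prev, q, beta_prev = q, v / beta, beta
    n = len(alphas)
    # Small problem: y_n = U_n Lambda_n^(-1/2) U_n^T e_1 / rho_1.
    T = np.diag(alphas) + np.diag(betas[:n - 1], 1) + np.diag(betas[:n - 1], -1)
    lam, U = np.linalg.eigh(T)
    y = (U * lam ** -0.5) @ U[0] / rho1
    # Second pass: rebuild the q_i and accumulate x = sum_i q_i y_i.
    x = np.zeros_like(b)
    q_prev, q, beta_prev = np.zeros_like(b), rho1 * b, 0.0
    for i in range(n):
        x = x + y[i] * q
        if i == n - 1:
            break
        v = matvec(q) - alphas[i] * q - beta_prev * q_prev
        beta = np.linalg.norm(v)
        q_prev, q, beta_prev = q, v / beta, beta
    return x

# Test of section 5: apply twice and check the residual of A^dagger A x = b.
rng = np.random.default_rng(1)
M = rng.standard_normal((50, 50))
AtA = M.T @ M + 50 * np.eye(50)       # positive definite stand-in
b = rng.standard_normal(50)
z = inv_sqrt_apply(lambda v: AtA @ v, b)
x = inv_sqrt_apply(lambda v: AtA @ v, z)
print(np.linalg.norm(b - AtA @ x))    # small residual
```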
# Dynamical Transverse Meissner Effect and Transition in Moving Bose Glass

## Abstract

We study moving periodic structures in the presence of correlated disorder using the renormalization group. We find that the effect of disorder persists at all velocities, resulting at zero temperature in a Moving Bose Glass phase with transverse pinning. At nonzero temperature we find two distinct moving glass phases. We predict a sharp transition, as velocity increases, between the Moving Bose Glass, where the transverse Meissner effect persists in the direction transverse to motion, and a Correlated Moving Glass at high velocity, where it disappears. Experimental consequences for vortex lattices and charge density wave (CDW) systems are discussed.

The interplay between the elastic or plastic properties of periodic systems and quenched disorder, relevant to many experimental systems, has been proposed to lead to several static glass phases with complex ground states, pinning barriers and slow creep dynamics. Experiments on vortex lattices support a transition, upon increase of point disorder, between a dislocation-free quasi-ordered Bragg glass (BrG) and an amorphous vortex glass (VG) . Correlated disorder, e.g. heavy-ion columnar tracks, leads to a stronger glass, termed the Bose glass (BoG) by analogy to localized 2D bosons . The hallmark of the static BoG, compared to an anisotropic VG, is the transverse Meissner effect (TME) : the flux lines being localized along the columns, the equilibrium response to a perpendicular field vanishes below a threshold $`H_{\perp }^c(T)`$. Although there is experimental evidence for a liquid to BoG transition at $`T_{BoG}`$, with anomalous angular dependences of $`T_{BoG}`$ and of transport , only recently have attempts been made to observe the TME directly.

The dynamical states of moving periodic structures in the presence of point disorder have been investigated recently . The naive expectation is that a fast moving system averages out disorder, resulting only in an increase of the effective temperature. It was found instead that the residual periodicity transverse to motion still sees static disorder, and that glassy features such as transverse pinning survive, leading to a moving glass (MG) state. The system was shown to be well described by the MG model, which involves only transverse displacements. Rows of a vortex lattice driven by a Lorentz force should thus flow along well-defined static channels, with barriers to transverse motion and a transverse depinning threshold $`F_c^t(v)`$ . This was confirmed in numerical simulations . Bitter decoration experiments and STM imaging show vortex lattices moving along their principal axis and forming stable channel structures, either fully coupled (no dislocations) at large velocity $`v`$ or decoupled (with smectic order) at lower $`v`$ , as found in simulations . Transverse barriers could also explain the anomalously small Hall effect in Wigner solids . Finally, CDW systems in a steady current were observed to exhibit a depinning threshold under an electric field applied along their periodicity direction, transverse to carrier motion . This provides a direct measurement of the transverse critical force $`F_c^t(v)`$, observed to decay exponentially with $`v`$ in $`d=3`$, as predicted in . These studies having focused on point disorder, it is natural to ask whether lattices moving in the presence of correlated disorder also retain some of the glassy features of the static BoG.
One question is whether the transverse periodicity results in a persistence of the TME when the field is applied transverse to motion. Additional physics compared to point disorder is expected, since the localization effect of the columns (reducing thermal wandering) competes with heating by motion. Probing these effects, as well as vortex correlations, is experimentally accessible.

In this Letter we investigate moving lattices in the presence of correlated disorder. We describe the system within the MG model, which involves only transverse displacements. Using the renormalization group (RG) we find that the effect of weak disorder persists at all $`v`$, resulting at $`T=0`$ in a moving Bose glass phase (MBoG) with transverse pinning and TME. At $`T>0`$ we find two distinct moving glass phases. Our model exhibits a sharp transition, as velocity increases, between the MBoG, where the TME persists, and a higher velocity phase, the Correlated Moving Glass (CMG), where it disappears. While in the CMG the thermal fluctuations grow logarithmically with distance, in the MBoG they remain bounded, as the channels remain localized. The properties of these moving phases and of the transition are summarized in Table 1 and should be testable in vortex and CDW systems .

Since heavy-ion tracks act as strong pinning centers, except at higher $`T`$ or fields, it is also important here, as in the case of point disorder, to investigate the effect of dislocations. At large enough $`v`$ the effect of disorder is strongly reduced and the system should recover a large degree of topological order. Studies of point disorder in 2D suggest, by analogy to straight lines in columnar disorder, that transverse periodicity (even in the presence of a few dislocations) survives down to relatively low velocity. Hence, we expect the transverse MG model based on the channel structure to be a good starting point.

We consider a lattice moving over a substrate with weak Gaussian disorder correlated along $`z`$. The velocity $`v`$ is along a principal lattice direction $`x`$. The MG model assumes topological order only in the direction $`y`$ transverse to motion and consists of the equation of motion for the component of the displacement field along $`y`$, denoted $`u_{rt}`$, $`r=(x,y,z)`$. It describes both (i) fully elastic flow with coupled channels (weak disorder or large $`v`$) and (ii) flow with phase slips (dislocations) occurring between channels (stronger disorder or intermediate $`v`$), and reads:
$$(\eta \partial _t+v\partial _x-c_x\partial _x^2-c_y\partial _y^2-c_z\partial _z^2)u_{rt}=F[x,y,u_{rt}]+\zeta _{rt}$$ (2)
with $`\langle \zeta _{rt}\zeta _{r^{\prime }t^{\prime }}\rangle =2\eta T\delta ^d(r-r^{\prime })\delta (t-t^{\prime })`$, and the correlator of the static pinning force along $`y`$ is $`\overline{F[x,y,u]F[x^{\prime },y^{\prime },u^{\prime }]}=\delta (x-x^{\prime })\delta ^{d_y}(y-y^{\prime })\mathrm{\Delta }(u-u^{\prime })`$. The bare $`\mathrm{\Delta }(u)`$, defined in , is of range $`r_f`$ and has the lattice period $`a`$. We denote the spatial dimension of $`y`$ by $`d_y`$, the case of physical interest being $`d=2+d_y=3`$. The bare friction $`\eta _0`$ is absorbed in $`v`$. The $`c_i`$ can be estimated for a flux lattice with the field along $`z`$ . Adding a transverse field $`H^y`$ along $`y`$ amounts to adding in (2) a surface force $`h^y(\delta (z-L_z/2)-\delta (z+L_z/2))`$, with $`h^y\propto H^y`$.
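Before turning to perturbation theory, note that (2) can be integrated directly; the following is a minimal sketch (our own illustration, not taken from this Letter) of an explicit Euler integration of a $`z`$-independent configuration, with an assumed random-phase pinning force of period $`a`$ and arbitrary illustrative couplings.

```python
# Langevin sketch of eq. (2) for a z-independent displacement field u(x, y):
#   eta du/dt = -v d_x u + c_x d_x^2 u + c_y d_y^2 u + F(x, y, u) + zeta.
# Disorder model, grid sizes and couplings are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
Lx, Ly, a = 64, 64, 1.0
cx, cy, eta = 1.0, 1.0, 1.0
v, T, dt = 0.5, 0.05, 0.02

amp = 0.3 * np.abs(rng.standard_normal((Lx, Ly)))   # local pinning strengths
phase = 2 * np.pi * rng.random((Lx, Ly))            # quenched random phases

u = np.zeros((Lx, Ly))
for step in range(20000):
    lap = cx * (np.roll(u, 1, 0) + np.roll(u, -1, 0) - 2 * u) \
        + cy * (np.roll(u, 1, 1) + np.roll(u, -1, 1) - 2 * u)
    dux = 0.5 * (np.roll(u, -1, 0) - np.roll(u, 1, 0))        # centered d_x u
    F = amp * np.sin(2 * np.pi * u / a + phase)               # periodic pinning force
    zeta = np.sqrt(2 * eta * T / dt) * rng.standard_normal((Lx, Ly))
    u += (dt / eta) * (-v * dux + lap + F + zeta)
```

Adding a small uniform force along $`y`$ and measuring the resulting drift of $`u`$ gives a crude numerical probe of the transverse depinning threshold discussed below.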
Let us first analyze the effect of disorder using first-order perturbation theory. Correlations split into static disorder-induced displacements and thermal displacements, $`\overline{|u_{q,\omega }|^2}=\delta (\omega )\delta (q_z)C_{stat}(q_{\perp })+C_{th}(q,\omega )`$. The static displacements are $`z`$-independent and identical to the case of point disorder in $`d-1`$ dimensions:
$$\overline{(u_{rt}-u_{0t^{\prime }})^2}\simeq \mathrm{\Delta }(0){\displaystyle \int _{q_x,q_y}}{\displaystyle \frac{2(1-\mathrm{cos}(q_xx+q_yy))}{(c_xq_x^2+c_yq_y^2)^2+v^2q_x^2}}$$ (3)
They become unbounded for $`d<d_{uc}=4`$ ($`d_{uc}=3`$ for point disorder). As in , simple perturbation theory breaks down beyond a dynamical Larkin length $`R_c^y(v)`$ (at which $`u_{stat}\sim r_f`$), which interpolates between the static Larkin length $`R_c^y(v=0)\sim (1/\mathrm{\Delta }(0))^{1/(5-d)}`$ and the large-$`v`$ estimate (from (3)) $`R_c^y(v)\sim (c_yvr_f^2/\mathrm{\Delta }(0))^{1/(4-d)}`$. A characteristic feature of the static BoG is the reduction of thermal displacements by disorder (while they are unaffected by point disorder), resulting in the upward shift of the melting line . To analyze the competition between these localization effects and heating by motion, we compute the equal-time thermal displacements at low $`T`$:
$$C_{th}(q,t=0)=(1+|\mathrm{\Delta }^{\prime \prime }(0)|G_v(q))T/(c_iq_i^2)$$ (4)
We find that $`G_v(q)`$ changes sign as a function of $`v`$: it is negative (reduced thermal displacements) for $`v<v^{*}(q)`$ and positive (heating by motion) for $`v>v^{*}(q)`$. Setting $`1/q`$ to be the Larkin length raises the possibility of a dynamical transition at $`v_c`$ where heating by motion wins over localization by the columns. To describe the system beyond the Larkin length we now use the RG on the dynamical field theory associated with (2). Reducing the cutoff $`\mathrm{\Lambda }_l=\mathrm{\Lambda }e^{-l}`$ for $`q_y`$ we get :
$$\partial _l\mathrm{ln}c_z=\partial _l\mathrm{ln}\eta =-\frac{\stackrel{~}{\mathrm{\Delta }}^{\prime \prime }(0)}{1+\stackrel{~}{v}^2e^{2l}}$$ (5)
$$\partial _l\mathrm{ln}\stackrel{~}{T}=-d_y+\stackrel{~}{\mathrm{\Delta }}^{\prime \prime }(0)\frac{\frac{1}{2}-\stackrel{~}{v}^2e^{2l}}{1+\stackrel{~}{v}^2e^{2l}}$$ (6)
$$\partial _l\stackrel{~}{\mathrm{\Delta }}(u)=\left(2-d_y+\frac{1}{1+\stackrel{~}{v}^2e^{2l}}\right)\stackrel{~}{\mathrm{\Delta }}(u)+\stackrel{~}{T}\stackrel{~}{\mathrm{\Delta }}^{\prime \prime }(u)+\stackrel{~}{\mathrm{\Delta }}^{\prime \prime }(u)(\stackrel{~}{\mathrm{\Delta }}(0)-\stackrel{~}{\mathrm{\Delta }}(u))-\frac{\stackrel{~}{\mathrm{\Delta }}^{\prime }(u)^2}{1+\stackrel{~}{v}^2e^{2l}}$$ (8)
$`c_x`$, $`c_y`$ and $`v`$ are not renormalized; $`\stackrel{~}{v}=v/(2\mathrm{\Lambda }\sqrt{c_xc_y})`$, $`\stackrel{~}{T}_l=T_lC_l\mathrm{\Lambda }_l^{d_y}/\sqrt{c_xc_z(l)}`$ is the reduced temperature, and $`\stackrel{~}{\mathrm{\Delta }}_l(u)=S_{d_y}\mathrm{\Lambda }_l^{d_y-3}\mathrm{\Delta }_l(u)/4c_y\sqrt{(1+\stackrel{~}{v}^2e^{2l})c_xc_y}`$.
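To make the competition encoded in (5)-(8) concrete, here is a rough numerical integration of the flow on a periodic $`u`$ grid. This is our own qualitative sketch for $`d=3`$ ($`d_y=1`$, $`ϵ=1`$); the bare cosine correlator, the thresholds and the step sizes are illustrative assumptions, and the smooth discretization only mimics the cusp formation.

```python
# Qualitative integration of the one-loop flow (5)-(8): follow the reduced
# temperature T_l; it either vanishes at a finite scale (MBoG) or grows
# (CMG-like heating). All inputs are illustrative placeholders.
import numpy as np

def flow(vtilde, T0=0.1, d=3.0, a=1.0, M=256, dl=1e-3, l_max=15.0):
    dy = d - 2.0
    u = np.linspace(0.0, a, M, endpoint=False)
    du = a / M
    D = 0.05 * np.cos(2 * np.pi * u / a)        # bare correlator, period a
    T = T0
    for step in range(int(l_max / dl)):
        l = step * dl
        s2 = (vtilde * np.exp(l)) ** 2
        g = 1.0 / (1.0 + s2)
        Dp = (np.roll(D, -1) - np.roll(D, 1)) / (2 * du)         # Delta'
        Dpp = (np.roll(D, -1) + np.roll(D, 1) - 2 * D) / du**2   # Delta''
        D = D + dl * ((2 - dy + g) * D + T * Dpp
                      + Dpp * (D[0] - D) - g * Dp**2)
        T = T * (1.0 + dl * (-dy + Dpp[0] * (0.5 - s2) * g))
        if T < 1e-12:
            return 0.0, l        # temperature extinguished at finite scale: MBoG
        if T > 1.0:
            return T, l          # heating by motion wins: CMG
    return T, l_max

print(flow(0.05))    # small velocity: T_l flows to zero
print(flow(5.0))     # large velocity: T_l grows

# crude bisection for the separatrix v_c between the two behaviours
lo, hi = 1e-3, 10.0
for _ in range(30):
    mid = np.sqrt(lo * hi)
    lo, hi = (mid, hi) if flow(mid)[0] == 0.0 else (lo, mid)
print("v_c (illustrative units):", lo)
```

The separatrix located this way plays the role of $`v_c(T)`$ in fig. 1; at this level only its existence, not its numerical value, should be trusted.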
These equations reveal three phases and a transition, as follows.

Static Bose Glass: at $`v=0`$, (5), (6) and (8) describe dynamically an elastic version of the BoG similar to the one studied at equilibrium in . The model exhibits analytically many of the properties of the BoG induced by strong columnar defects, e.g. the TME . At $`T=0`$ the ground state is $`z`$-independent, thus the problem is identical to point disorder in $`x,y`$ space, with identical RG equations. Beyond the Larkin length $`R_c\sim ae^{l_c}`$, $`\stackrel{~}{\mathrm{\Delta }}_l(u)`$ develops a cusp ($`\stackrel{~}{\mathrm{\Delta }}_l^{\prime \prime }(0)\to -\mathrm{\infty }`$), giving rise to a depinning threshold. From (5) the tilt modulus $`c_z`$ diverges at $`R_c`$, implying a vanishing linear response to $`H_{\perp }`$ and leading to the TME . The $`T=0`$ fixed point reads , to $`𝒪(ϵ=5-d)`$, $`\stackrel{~}{\mathrm{\Delta }}_{BoG}^{*}(u)=\frac{ϵ}{6}(\frac{a^2}{6}-u(a-u))`$ for $`0<u<a`$. At $`T>0`$ we observe from our RG equations a remarkable feature compared to point disorder: the effective temperature $`\stackrel{~}{T}_l`$ runs to $`0`$ at a finite length scale $`R_{loc}>R_c`$. This is consistent with localization effects, expected in the BoG, setting in beyond $`R_{loc}`$.

Moving Bose Glass: in the moving system, at $`T=0`$ for any $`v`$, and at $`T>0`$ for $`v<v_c(T)`$ defined below, the RG flows to a $`T=0`$ fixed point which we call the MBoG. The RG equation (8) at $`\stackrel{~}{T}_l=0`$ resembles the one for the MG with point disorder in $`d-1`$ dimensions . To $`𝒪(ϵ=4-d)`$ the asymptotic behavior for any initial velocity is $`\stackrel{~}{\mathrm{\Delta }}_l(0)-\stackrel{~}{\mathrm{\Delta }}_l(u)\to \frac{ϵ}{2}u(a-u)`$, while $`\stackrel{~}{\mathrm{\Delta }}_l(0)\sim e^{ϵl}`$. Since $`\stackrel{~}{\mathrm{\Delta }}_l(u)`$ develops a cusp at the dynamical Larkin length $`R_c^y(v)`$, the MBoG is also characterized by transverse barriers and a $`T=0`$ transverse depinning threshold $`F_c^t(v)\sim c_yr_f/R_c^y(v)^2`$: this can be checked by adding a small transverse force, and is manifest from the divergence of the relaxation time $`\eta _l`$ at $`R_c^y(v)`$ (from (5)). The novel feature, specific to correlated disorder, is that the linear response to a transverse field $`H^y`$ along $`y`$ vanishes, as the tilt modulus $`c_z(l)`$ diverges beyond $`R_c^y(v)`$ (from (5)). Since a finite $`H^y`$ acts as an additional surface force $`h^y`$, the existence of a transverse depinning threshold $`F_c^t(v)>0`$ implies that the field cannot penetrate from the surface into the bulk. Thus the TME persists in the MBoG, the $`T=0`$ threshold field $`H_c^y`$ being related to $`F_c^t`$ through $`h_c^y(v)\sim R_c^zF_c^t\sim \sqrt{c_yc_z}r_f/R_c^y(v)`$, where the penetration length along $`z`$ is $`R_c^z(v)\sim \sqrt{c_z/c_y}R_c^y(v)`$. Positional correlations in the MBoG are found to be dominated by a correlated random force, which yields static, $`z`$-independent displacements growing as $`u\sim y^{1/2}`$ along $`y`$ and $`u\sim x^{1/4}`$ along $`x`$, i.e. as in the point disorder MG at $`T=0`$ in $`d=2`$ (and faster than in the elastic BoG, where $`u\sim \mathrm{ln}y`$).

We now study the stability of this phase to temperature. As long as $`\stackrel{~}{T}_l>0`$, $`\stackrel{~}{\mathrm{\Delta }}_l(u)`$ is analytic ($`\stackrel{~}{\mathrm{\Delta }}_l^{\prime \prime }(0)<0`$). Remarkably, the coefficient of $`\stackrel{~}{\mathrm{\Delta }}_l^{\prime \prime }(0)`$ in (6) changes sign as a function of $`v`$. Thus $`\stackrel{~}{T}_l`$ decreases at small $`v`$ and increases at large $`v`$, consistent with the first-order estimate above. Our RG shows that this competition between localization and heating leads to a sharp transition in (2) at $`v=v_c(T)`$. Indeed, as depicted in fig. 1, for $`v<v_c`$, $`\stackrel{~}{T}_l`$ decreases to $`\stackrel{~}{T}_l=0`$ at a finite scale $`R_{loc}(T,v)=ae^{l_{loc}}`$ and remains exactly zero thereafter. Thus all properties beyond $`R_{loc}`$ are governed by the $`T=0`$ MBoG fixed point. For $`v>v_c`$ heating wins ($`\stackrel{~}{T}_l`$ never vanishes), driving the system to the CMG phase described below. Thus the MBoG $`T=0`$ fixed point is stable to temperature for $`v<v_c`$. This is specific to correlated disorder and is in contrast with the MG with point disorder.
We surmise that, as in the static BoG, barriers also diverge in the MBoG, leading to transverse creep, which would be interesting to check numerically. Localization effects survive in the MBoG, as seen from the equal-time connected fluctuations along $`z`$, $`l_{\perp }^2=\lim _{|z-z^{\prime }|\to \mathrm{\infty }}\frac{1}{2}\overline{\langle (u_{xyzt}-u_{xyz^{\prime }t})^2\rangle _c}=\int dl\stackrel{~}{T}_l`$, which are bounded since the integral only runs up to $`l_{loc}`$. To confirm the persistence of Bose glass features in moving systems, we also studied numerically the $`d_y=0`$ version of (2) (known to exhibit a BoG and TME at $`v=0`$ in equilibrium ). The results are consistent with a TME at $`v,T>0`$.

Correlated Moving Glass: as shown in fig. 1, for $`T>0`$ and $`v>v_c(T)`$ the RG flows to a finite temperature fixed point (perturbative in $`𝒪(ϵ=4-d)`$) corresponding to a novel dynamical phase, the CMG. It is characterized by $`\stackrel{~}{T}^{*}\simeq \frac{ϵ^2a^2}{16|\mathrm{ln}ϵ|}`$ and by an analytic fixed point for $`\stackrel{~}{\mathrm{\Delta }}_l(u)-\stackrel{~}{\mathrm{\Delta }}_l(0)`$. A correlated random force $`\stackrel{~}{\mathrm{\Delta }}_l(0)\sim e^{ϵl}`$ is also generated. This behavior is analogous, but not identical, to the MG fixed point with point disorder, which is marginal in $`d=3`$, while the CMG is well below its upper critical dimension ($`ϵ=1`$). Contrary to the BoG and the MBoG, no cusp singularity occurs. From (5) one hence finds that the system responds linearly to transverse forces and tilting fields, with finite renormalized coefficients $`\eta ^R`$ and $`c_z^R`$. However, since at $`T=0`$ the system instead flows to the MBoG fixed point, one does expect strongly nonlinear transverse $`IV`$ and $`B^y`$-$`H^y`$ characteristics, with $`\eta ^R(T),c_z^R(T)`$ diverging as $`T\to 0`$. Due to the correlated random force, the static roughness of the channels is $`u\sim y^{1/2}`$ ($`u\sim x^{1/4}`$) as in the MBoG (with a smaller amplitude). Thermal displacements, however, while bounded in the MBoG, grow logarithmically in the CMG, as $`\overline{\langle (u(y)-u(0))^2\rangle _c}\simeq 2\stackrel{~}{T}^{*}\mathrm{ln}y`$. Hence the channels are thermally broadened, with a slow decay $`\overline{\langle e^{i2\pi u_{r,t}/a}e^{-i2\pi u_{0,t}/a}\rangle _c}\sim y^{-\eta }`$, $`\eta \simeq 0.48`$ in $`d=3`$, in contrast with the finite-width channels of the MG .

Dynamical transition: although at $`T=0`$ the moving system is always in the MBoG phase, at $`T>0`$ a sharp transition occurs at a critical velocity $`v_c(T)`$ between the two phases above. The separatrix in the RG flow (fig. 1) is parabolic . Thus the transition occurs when the MBoG length $`R_{loc}(T,v)`$ equals $`R_0(v)=\sqrt{2c_xc_y}/v`$, the (here $`T`$-independent) length in the CMG at which the flow of $`\stackrel{~}{T}_l`$ reverts. At low $`T`$, $`R_{loc}(T,v)\simeq R_{loc}(0^+,v)=R_c^y(v)`$, and thus $`\eta _0v_c\simeq 1.3\sqrt{c_xc_y}/R_c`$ for weak disorder, whereas it saturates at $`\eta _0v_c\sim \sqrt{c_xc_y}/a`$ for stronger disorder. For lattices with $`c_{66}\ll c_{11}`$, this yields $`c_y\simeq c_{66}`$ near $`v_c`$ and $`\eta _0v_c\sim c_{66}/\mathrm{max}(R_c,a)`$. For weak disorder ($`R_c>a`$, $`F_c=c_{66}r_f/R_c^2`$), one gets $`\eta _0v_c/F_c\sim R_c/r_f`$. Equation (5) and the parabolic shape of the separatrix lead in the CMG to a renormalized tilt modulus and friction diverging as $`c_z^R\sim \eta ^R\sim e^{\frac{\mathrm{Cst}}{(v-v_c)^\alpha }}`$, with $`\alpha =\frac{1}{2}`$, for $`v\to v_c^+`$. It appears as an ordering transition along $`z`$, with a characteristic length $`R^y`$ along $`y`$ remaining finite while the length $`R^z\sim \sqrt{c_z^R/c_y}R^y`$ diverges as $`R^z\sim e^{\frac{\mathrm{Cst}}{(v-v_c)^\alpha }}`$.
However, coming from the MBoG, $`l_{\perp }`$ jumps discontinuously to $`l_{\perp }=+\mathrm{\infty }`$ in the CMG, and thus the transition is of mixed discontinuous-continuous character. Since the MG model relies on transverse order, it describes the system for $`v>v_{tr}`$, at which the channels appear. For $`v>v_{top}`$ these channels recouple and the lattice recovers a good amount of topological order . Thus the dynamical transition should be observable (fig. 2a) for $`v_c>v_{tr}`$ (weaker disorder), while it is rounded by plastic effects for $`v_c<v_{tr}`$ (stronger disorder). Lindemann estimates of $`v_{top}`$ can be obtained as in (for $`c_{66}\ll c_{11}`$, $`v_{tr}\sim v_{top}`$). We consider columnar defects of spacing $`d`$ and of strength $`u_0=U_0(T)/ϵ_0`$, with $`r_f=l_{\perp }(T)`$ ($`=b_0`$ at low $`T`$), using . When the decay length of translational order $`R_a`$ in the static BoG is such that $`R_a>a`$, i.e. $`d/a>u_0(r_f/a)^{3/2}`$, the MBoG and the transition at $`\eta _0v_c\sim ϵ_0/a^3`$ should be observable. For stronger disorder the CMG exists (fig. 2b) for $`\eta _0v>\eta _0v_{tr}\sim U_0\sqrt{r_f/a}/(a^2d)`$. These scales should be compared to $`F_c\sim U_0/(r_fa^2)`$, the single-vortex longitudinal pinning threshold. The plastic limit at stronger disorder can be investigated by extending the creep arguments of . We find that the penetration of $`H^y`$ results from (super)kinks nucleated at the $`\pm `$ surfaces with a bias along $`\pm y`$, which propagate into the bulk. For not very thick samples they produce a small $`B^y`$ of the order of the creep velocity. In thick samples the response is determined by their competition with (super)kinks nucleated in the bulk, unaffected by $`H^y`$, which propagate to the surface. At $`T=0`$, $`f>f_c`$, the persistence of an anomalous tilt response depends on whether the trajectories of a point in 2D attract on average . Thus the absence of $`H^y`$ penetration in the MG model studied here results from the interactions responsible for the transversely ordered channel structure. Surface kinks along $`y`$ will also be nucleated at $`T>0`$, but in the MBoG they should not penetrate deep into the bulk. Additional weak point disorder (known to destabilize the $`d=3`$ plastic BoG ) transforms the CMG (above a large length scale) into a new fixed point with mixed features: the static roughness of the channels along $`x,y`$ grows as in the CMG but is logarithmic along $`z`$ (delocalization), with a finite thermal width. There is a $`T=0`$ transverse critical force but no TME. At smaller $`v`$ the MBoG exhibits a greater stability to point disorder. To conclude, some effects of correlated disorder on moving periodic structures (e.g. the existence of two distinct moving phases) were found to differ radically from those due to point disorder. In particular we found a $`T>0`$ moving phase (MBoG) with strong glassy features.
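As a closing illustration of the observability criterion of fig. 2, one can compare the two velocity scales above numerically; every input below is a hypothetical placeholder, not a measured material parameter.

```python
# Hypothetical numbers only: compare eta0*v_c ~ epsilon0/a^3 (weak-disorder
# estimate) with eta0*v_tr ~ U0*sqrt(rf/a)/(a^2*d) to decide whether the
# MBoG-CMG transition (v_c > v_tr) or only the CMG should be visible.
import numpy as np

a, d, rf = 1.0, 2.0, 0.3      # lattice spacing, track spacing, r_f (arbitrary units)
eps0, U0 = 1.0, 0.2           # line energy scale and pinning energy (placeholders)

eta0_vc = eps0 / a**3
eta0_vtr = U0 * np.sqrt(rf / a) / (a**2 * d)
print("transition observable" if eta0_vc > eta0_vtr else "rounded by plastic effects")
```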