# Fundamental parameters of Galactic luminous OB stars

## 1 Introduction

Although microturbulence is a fundamental parameter when deriving abundances from stellar spectra, it has been systematically ignored when deriving He abundances from quantitative spectroscopic NLTE analyses of OB stars (Herrero et al. Herrero92 (1992), Smith & Howarth Smith&Howarth94 (1994)); the first exceptions are the works of McErlean et al. (McErlean98 (1998)) and Smith & Howarth (Smith&Howarth98 (1998)). One of the main reasons for ignoring it is that the use of NLTE techniques reduced the need for microturbulence in reproducing the observed metallic line strengths, and even made abundances less sensitive to the value adopted for microturbulence than in LTE analyses (Becker & Butler Becker&Butler89 (1989)). In quantitative NLTE analyses of OB stars, the lines of H and He are used to determine the stellar parameters (namely, effective temperature, $`T_{\mathrm{eff}}`$, logarithmic surface gravity, $`\mathrm{log}g`$, and He abundance). Their line profiles are dominated by Stark broadening, and microturbulence, included in the standard way, only adds an extra Doppler width to the thermal broadening. The first values of microturbulence found in LTE for main-sequence B stars, of the order of 5 kms<sup>-1</sup> (Hardorp & Scholz Hardorp&Scholz70 (1970)), proved small enough to be unimportant compared to the H and He thermal velocities, which in such hot atmospheres are well above 5 kms<sup>-1</sup>. This, together with the use of NLTE, made the influence of microturbulence on the overall profile seem negligible. When measuring microturbulence in OB supergiants the situation is quite different: its value has always proved comparable to, or well above, the thermal velocities of H and He, and in some cases even above the speed of sound in these atmospheres (Lamers Lamers72 (1972), Lennon & Dufton Lennon&Dufton86 (1986)). This was not changed by later NLTE analyses (Lennon et al. Lennon91 (1991), Hubeny et al. Hubeny91 (1991), Gies & Lambert Gies&Lambert92 (1992), Smartt et al. Smartt97 (1997)), in which the derived values of microturbulence are usually reduced, but never to values free of contradiction. Kudritzki (Kudritzki92 (1992)) and Lamers & Achmad (Lamers&Achmad94 (1994)) argued that this could be due to the presence of wind outflow in these stars, which can mimic the effect of microturbulence in the line profiles. Most of the works cited above deal with early B and late O supergiants. Little work has been done on early O stars (see Hubeny et al. Hubeny91 (1991)), mainly because their scarce metallic lines make it very difficult to measure the value of microturbulence; yet it is the whole range of O stars that interests us here. One problem systematically found in all analyses of OB spectra is the impossibility of fitting all He i lines consistently with a unique set of parameters. This difficulty is worse in supergiants than in main-sequence stars, and is largest between results from singlet and triplet lines, but it also appears among different lines within one system. We call this the *He i lines problem*. This discrepancy between singlet and triplet lines of He i was investigated by Voels et al. (Voels89 (1989)), who considered atmospheric extension to be responsible and therefore called it the “generalized dilution effect”.
But recent works with spherical, mass-losing models show that this cannot be the only reason (Herrero et al., Herrero95 (1995), Herrero00 (2000)). Furthermore, McErlean et al. (McErlean98 (1998)) and Smith & Howarth (Smith&Howarth98 (1998)) find that the inclusion of microturbulence in the line formation calculations reduces, although does not eliminate, the discrepancy between the fits of singlet and triplet lines. They also find better and more consistent fits for the whole set of He i lines when microturbulence is considered. One more reason that encourages us to study the effect of microturbulence is that a careful inspection of several works shows that even for main-sequence stars, values of microturbulence comparable to the thermal velocities of at least He are obtained in NLTE (Gies & Lambert Gies&Lambert92 (1992), Kilian et al. Kilian91 (1991)), which can invalidate the hypothesis that microturbulence has a negligible influence on the profiles. So in this paper we want to study microturbulence in the range of parameters typical for O stars of any luminosity class, specifically addressing its effect on the determination of stellar parameters and also on the *He i lines problem*. We are especially interested in investigating whether the inclusion of microturbulence can solve the He discrepancy (Herrero et al. Herrero92 (1992)), as Vrancken (Vrancken97 (1997)), McErlean et al. (McErlean98 (1998)) and Smith & Howarth (Smith&Howarth98 (1998)) suggest. This now well-known problem has motivated many theoretical improvements in both evolutionary and model-atmosphere theories in order to explain it. Recent works including sphericity and mass loss in the spectral analysis (Herrero et al. Herrero95 (1995), Herrero00 (2000), Israelian et al. Israelian99 (2000)) show that with these new models, more suitable for these stars than the plane–parallel hydrostatic ones, the discrepancy is not solved. Evolutionary models including additional mixing inside the star can explain enhanced He photospheric abundances during the H-burning phase, this mixing being induced by different physical mechanisms such as rotation and turbulent diffusion (Dennisenkov, denn94 (1994), Meynet & Maeder, Meynet&Maeder97 (1997), Maeder & Zahn, Maeder&Zahn98 (1998)), or turbulent diffusion and semiconvection (Langer Langer92 (1992)). Sometimes changes of the angular rotational velocity during evolution are included (Langer & Heger, LH98 (1998), Heger, Heger98 (1998)). For a recent review, see Maeder & Meynet (maemey00 (2000)). It therefore becomes of great importance to see whether microturbulence can be responsible for the He discrepancy. In Sect. 2 we present line formation calculations of H and He lines and their behaviour with microturbulence in the O-star domain. In Sect. 3 we determine the effect of including microturbulence on the determination of stellar parameters, and in Sect. 4 we analyze some stars with and without considering microturbulence. Sections 5 and 6 are then dedicated to our discussion and conclusions, respectively.

## 2 Microturbulence in H and He line formation calculations

In order to study the behaviour of H and He lines with microturbulence, we perform line formation calculations in the parameter range typical for O to early B stars of any luminosity class:

– between 30 000 and 45 000 K in $`T_{\mathrm{eff}}`$;

– between 3.05 and 4.00 in $`\mathrm{log}g`$ (in c.g.s. units);

– for $`ϵ`$= 0.10 and 0.20, with $`ϵ=\frac{N(He)}{N(He)+N(H)}`$, where N(X) is the number density of atom X.
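For reference, the abundance parameter $`ϵ`$ defined above converts directly from the He/H number ratio; a minimal sketch in Python (our illustration, the function name is hypothetical):

```python
def epsilon_from_ratio(n_he_over_n_h):
    """He abundance epsilon = N(He) / (N(He) + N(H)), from the He/H number ratio."""
    y = n_he_over_n_h
    return y / (1.0 + y)

# epsilon = 0.10 corresponds to N(He)/N(H) = 1/9, and epsilon = 0.20 to 1/4:
print(epsilon_from_ratio(1.0 / 9.0))  # 0.10
print(epsilon_from_ratio(0.25))       # 0.20
```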
We follow the classical technique of calculating an NLTE model atmosphere of H and He, in radiative and hydrostatic equilibrium and with plane–parallel geometry (calculated with ALI, see Kunze, Kunze95 (1995)), and then solving the statistical equilibrium and transfer equations and performing the formal solution for the chosen H and He lines using DETAIL & SURFACE (Butler & Giddings Butler&Giddings85 (1985)). In this final step we have included UV metal line opacities in the calculations, in order to obtain more realistic profiles (see Herrero, Herrero94 (1994), and Herrero et al., Herrero00 (2000), for details). As shown in the last reference, plane–parallel models are unable to properly reproduce the spectra of massive OB stars around 50 000 K and hotter, which is why we stop our study at 45 000 K. Microturbulence is introduced in the standard way, by adding an extra Doppler width to the thermal broadening of the line, which is then convolved with the rest of the broadening mechanisms. We have considered it in both the equations of statistical equilibrium and radiative transfer. As we do not consider other motions (turbulent or not) in the determination of the stellar structure, we prefer to restrict the introduction of microturbulence to the absorption coefficient. Therefore, we do not include it in the calculation of the structure of the atmosphere via turbulent pressure. For similar reasons we also neglect its possible dependence on depth. We do not treat separately the effect on the populations and on the line profiles, as this has recently been studied by McErlean et al. (McErlean98 (1998)) and Smith & Howarth (Smith&Howarth98 (1998)) (whose results we support). We make line formation calculations for the lines we usually consider in our analyses: H<sub>γ</sub> and H<sub>β</sub> for H i; He i $`\lambda `$$`\lambda `$ 4387, 4922, 4471 Å for He i; and He ii $`\lambda `$$`\lambda `$ 4200, 4541, 4686 Å for He ii. In order to study the behaviour of these lines with microturbulence, we perform line formation calculations for microturbulent velocity values from 0 to 20 kms<sup>-1</sup>, for different sets of parameters representing the spectral types and luminosity classes we are interested in:

– $`T_{\mathrm{eff}}`$= 30 000, $`\mathrm{log}g`$= 3.05, $`ϵ`$= 0.10 for late O and early B supergiants; the same with $`\mathrm{log}g`$= 4.00 for dwarfs.

– $`T_{\mathrm{eff}}`$= 35 000, $`\mathrm{log}g`$= 3.20, $`ϵ`$= 0.10 for “middle” O supergiants; the same with $`\mathrm{log}g`$= 4.00 for dwarfs.

– $`T_{\mathrm{eff}}`$= 40 000, $`\mathrm{log}g`$= 3.40, $`ϵ`$= 0.10 for early O supergiants; the same with $`\mathrm{log}g`$= 4.00 for dwarfs.

– $`T_{\mathrm{eff}}`$= 45 000, $`\mathrm{log}g`$= 3.60, $`ϵ`$= 0.10 for very early O supergiants; the same with $`\mathrm{log}g`$= 4.00 for dwarfs.

The values of $`\mathrm{log}g`$ for supergiants are close to the lowest that could be converged for each $`T_{\mathrm{eff}}`$. For dwarfs of increasing $`T_{\mathrm{eff}}`$, $`\mathrm{log}g`$ is actually slightly below 4.00, but this value still represents this luminosity class well. Finally, these calculations were repeated for $`ϵ`$= 0.20, to look for differential effects with He abundance. Figs. 1, 2 and 3 display the behaviour of the H and He line profiles with microturbulence for three different models, adequate respectively for late O supergiants, late O dwarfs and early O supergiants. In Table 1 equivalent widths are given for all these lines. The first two figures
allow us to compare the effects of microturbulence when varying $`\mathrm{log}g`$, whereas the first and last will be used to compare the effects when varying $`T_{\mathrm{eff}}`$. The other calculated models behave consistently with what is shown in Figs. 1–3 and will not be discussed further. Looking first at Fig. 1 we see that there are two different line groups. The first one is composed of the strong H lines and the relatively weak lines He ii $`\lambda \lambda `$ 4200, 4541. None of these lines is affected by microturbulence, although our highest values are comparable to the thermal velocity of H atoms and well above that of He atoms. The reason is that Stark broadening dominates the profiles, and therefore masks the changes that microturbulence produces. The second group is composed of the He i lines and the strong He ii $`\lambda `$4686 line. He i lines show sensitivity to microturbulence in both core and wings, while the core of He ii $`\lambda `$4686 is desaturated by microturbulence but its effect is masked in the wings by Stark broadening, because, as explained by Smith & Howarth (Smith&Howarth98 (1998)), the effect of microturbulence depends on the steepness of the line wings. When we then compare Figs. 1 and 2 we see that the increased pressure broadening (directly related to the increased density) simply reduces the effect of microturbulence on the line profiles. Thus, as gravity increases, the effect of microturbulence decreases for all considered lines. In the case of the He ii lines this effect is reinforced by the displacement introduced in the He ionization equilibrium, which makes He ii lines weaker. More interesting is the comparison at higher temperatures. Looking at Fig. 1 and Fig. 3 we can see the behaviour of the lines of supergiants with increasing $`T_{\mathrm{eff}}`$. For higher values of $`T_{\mathrm{eff}}`$ marginal effects on the H i and He ii line cores are seen (Fig. 3), but only the He i lines and He ii $`\lambda `$4686 are again really sensitive to microturbulence. The He ii $`\lambda \lambda `$ 4200, 4541 profiles are still insensitive, although they are now much stronger and even have larger equivalent widths than the He i lines, because they are dominated by Stark broadening. The reason why He i line profiles are still sensitive to microturbulence, although they are much weaker, has to be found in the influence of microturbulence on the shape of the absorption profile of weak lines. Thus, we see that He i $`\lambda \lambda `$ 4387, 4922 are not saturated in the whole temperature range. Their equivalent widths increase with increasing microturbulence until, as $`T_{\mathrm{eff}}`$ increases, they become weak enough to have equivalent widths, but not line profiles, independent of microturbulence. This happens at 40 000 K for He i $`\lambda `$ 4387 (see Table 1) and at 45 000 K for He i $`\lambda `$ 4922. For them, Doppler broadening is important enough to let microturbulence shape the profiles appreciably in the whole temperature range. He i $`\lambda `$ 4471 is saturated at $`T_{\mathrm{eff}}`$= 30 000 K, but it desaturates at $`T_{\mathrm{eff}}`$= 35 000 K and starts behaving like the other He i lines. Finally, we point out that all these results also apply to models with $`ϵ=0.20`$, and that they are a natural extension to O stars of the results of McErlean et al. (McErlean98 (1998)) and Smith & Howarth (Smith&Howarth98 (1998)).
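The “standard way” of including microturbulence described in this section amounts to adding $`\xi `$ in quadrature to the thermal speed in the Doppler width. A minimal numerical sketch (ours, not the authors’ code; SI constants) makes the competition with thermal broadening explicit:

```python
import math

K_B = 1.380649e-23    # Boltzmann constant [J/K]
M_U = 1.66053907e-27  # atomic mass unit [kg]
C = 2.99792458e8      # speed of light [m/s]

def doppler_width_angstrom(lambda0, temperature, atomic_weight, xi_kms):
    """Doppler width with microturbulence xi added in quadrature to the
    thermal speed: Delta_lambda = (lambda0 / c) * sqrt(2kT/m + xi^2)."""
    v_squared = 2.0 * K_B * temperature / (atomic_weight * M_U) + (xi_kms * 1e3) ** 2
    return lambda0 * math.sqrt(v_squared) / C

# He I 4471 in a T ~ 25 000 K line-forming region:
print(doppler_width_angstrom(4471.0, 25000.0, 4.0, 0.0))   # ~0.15 A, thermal only
print(doppler_width_angstrom(4471.0, 25000.0, 4.0, 10.0))  # ~0.21 A with xi = 10 km/s
```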
## 3 The influence of microturbulence on the determination of parameters of O stars

In this work we are interested in estimating the changes introduced in the derived stellar parameters by the inclusion of microturbulence in the analysis of O stars, with special attention to its influence on the derived He abundance. As we saw in the previous section, models of low gravity, corresponding to supergiants, are more sensitive to microturbulence, and so we expect supergiants to show the largest changes. Values of microturbulence of 10–12 kms<sup>-1</sup> are found for late O and early B stars (see references in the introduction); we therefore decided to choose a fixed microturbulence of 15 kms<sup>-1</sup> to test its influence on the determination of $`T_{\mathrm{eff}}`$, $`\mathrm{log}g`$ and He abundance. As a first approach to the problem, we take a model spectrum at $`\xi `$=15 kms<sup>-1</sup> as the *observed* spectrum, to which we fit model profiles in a grid around it at $`\xi `$=0 kms<sup>-1</sup>, with parameters differing by 500 to 1000 K in $`T_{\mathrm{eff}}`$, 0.05 to 0.10 in $`\mathrm{log}g`$ and 0.02 to 0.04 in $`ϵ`$ (these small steps are suggested by previous inspection of a larger, coarser grid). To maximize the effect of microturbulence, we consider neither rotational nor instrumental broadening. Taking advantage of the fact that we are using only theoretical profiles (although the one with $`\xi `$= 15 kms<sup>-1</sup> has been adopted as *the observation*) we use a least–squares fit procedure to determine the best fit. For each line of each model in the grid, we calculate the quadratic difference with the *observed* line, and then we add the results for all lines of a given ion. For example, for H we calculate:

$$\chi _\mathrm{i}^2(\mathrm{H})=\sum _{\mathrm{H}\mathrm{lines}}\frac{1}{\mathrm{\Delta }\lambda }\sum _\lambda w_\lambda \left(f_{\lambda ,\mathrm{grid}}-f_{\lambda ,\mathrm{obs}}\right)^2$$

as the result for H for model i of the grid, where $`w_\lambda `$ is the spectral sampling and $`\mathrm{\Delta }\lambda `$ is the wavelength interval containing the whole line. We perform these calculations for all the models in the grid and then normalize these values to their minimum, so that a value of 1.00 identifies the model that best fits the lines of the ion over all temperatures, gravities and He abundances. The calculation is performed in the same way for the He lines, taking He i and He ii lines separately. Finally we define for model i: $`\chi _\mathrm{i}^2`$= $`\chi _\mathrm{i}^2(\mathrm{H})`$ + $`\chi _\mathrm{i}^2`$(He i) + $`\chi _\mathrm{i}^2`$(He ii), so that the best fit is adopted to be that of the model with the minimum value of $`\chi _\mathrm{i}^2`$. We have carried out this exercise with a model with $`T_{\mathrm{eff}}`$=40 000 K, $`\mathrm{log}g`$=3.40, $`ϵ`$=0.10 and $`\xi `$=15 kms<sup>-1</sup> as the *observed* spectrum. In Fig. 4 we have plotted contour levels of $`\chi ^2`$ in the $`T_{\mathrm{eff}}`$–$`\mathrm{log}g`$ plane for $`ϵ`$ = 0.10. We see that, at this He abundance, there are two models that fit the *observation* well, with parameters $`T_{\mathrm{eff}}`$= 40 000 K, $`\mathrm{log}g`$= 3.40 and $`T_{\mathrm{eff}}`$= 42 000 K, $`\mathrm{log}g`$= 3.45. Although the latter model has a slightly lower value of $`\chi _\mathrm{i}^2`$ (5.190), the dispersion in the individual $`\chi _\mathrm{i}^2`$ values is larger, reflecting the fact that, looking at the line fits, we would choose the former as the best compromise.
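A schematic implementation of this fitting criterion (our sketch; the container layout and names are illustrative, and the profiles are assumed to be NumPy arrays sampled per line):

```python
import numpy as np

def chi2_ion(model_lines, obs_lines, w_lambda, delta_lambda):
    """chi^2 for one ion: sum over its lines of
    (1 / Delta_lambda) * sum_lambda w_lambda * (f_grid - f_obs)^2."""
    return sum(np.sum(w * (fg - fo) ** 2) / dl
               for fg, fo, w, dl in zip(model_lines, obs_lines, w_lambda, delta_lambda))

def best_fit(grid, obs, ions=("H", "HeI", "HeII")):
    """grid maps (Teff, logg, eps) -> {ion: line profiles}; obs holds the
    'observed' profiles plus samplings 'w' and intervals 'dl' per ion.
    Each ion's chi^2 is normalized to its minimum over the grid (1.00 marks
    the best model for that ion); the adopted model minimizes the sum."""
    per_ion = {ion: {p: chi2_ion(m[ion], obs[ion]["flux"], obs[ion]["w"], obs[ion]["dl"])
                     for p, m in grid.items()}
               for ion in ions}
    for ion in ions:
        floor = min(per_ion[ion].values())
        per_ion[ion] = {p: v / floor for p, v in per_ion[ion].items()}
    total = {p: sum(per_ion[ion][p] for ion in ions) for p in grid}
    return min(total, key=total.get)
```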
This is an indication of the differences between the two criteria; we should improve our $`\chi ^2`$ fitting criterion in order to match the fitting by eye. In Table 2 we list the results for the best fits at other He abundances. We see immediately that for $`ϵ`$ = 0.06 and 0.14 we find large values of $`\chi ^2`$, indicating a relatively poor fit. In contrast, the models for $`ϵ`$= 0.12, 0.10, 0.08 fit with approximately the same quality (without a more detailed study we cannot say whether the difference in $`\chi ^2`$ is significant). In a fit by eye we would choose the model at $`ϵ`$= 0.08 as the best fit. Actually, this is the model with the lowest value of $`\chi ^2`$, and it has a *lower* He abundance than the *observed* spectrum. The reason is that the larger gravity, rather than the He abundance, mimics the microturbulence effect. In principle, it would also be possible to choose the model with $`ϵ`$= 0.12, in which the higher He abundance corresponds to a lower gravity. Fig. 5 shows the fit of the “best model” to the *observed* spectrum. The absence of rotational and instrumental broadening makes the differences in the He i lines evident. Thus, the conclusion of our exercise is that neglect of microturbulence will slightly change the derived stellar parameters, while keeping them within our standard error box (see next Sect.). The direction in which the parameters move will depend on the criteria used for defining a model as “the best model fit”, such as the relative weight given to different lines, or to the core and wings of a line. Thus the definition of these criteria for the “best model” will be a critical point of any future automatic fitting procedure, which will soon be demanded by new observing capabilities.

## 4 Spectral analysis including microturbulence

Now we want to see the difference we obtain in the derived parameters of real observed O star spectra when we include microturbulence in the calculations. We selected two late-O supergiants, to see how our results compare with previous works, as well as one intermediate and two early ones, in order to cover the whole O spectral range. We chose HD 5 689 in particular, an O6 V fast rotator with a high He overabundance derived in previous works without microturbulence (see Herrero et al., Herrero00 (2000)), to check how microturbulence can modify this strong overabundance. The description of the observations and data reduction can be found in Herrero et al. (Herrero92 (1992), Herrero00 (2000)). To analyse the stellar spectra we first determine the projected rotational velocity of the star (see Herrero et al. Herrero92 (1992) for details). This is an additional parameter that, in fact, reduces the effect of microturbulence on the derivation of stellar parameters. The results of the analyses are listed in Table 3. The fitting procedure and adopted errors of $`\pm `$ 1 500 K in $`T_{\mathrm{eff}}`$, $`\pm `$ 0.1 in $`\mathrm{log}g`$ and $`\pm `$ 0.03 in $`ϵ`$ are explained in Herrero et al. (Herrero99 (1999)). To make the new analyses with microturbulence we construct a model with the same parameters but with $`\xi `$= 15 kms<sup>-1</sup>, and also a small grid around it with changes in $`T_{\mathrm{eff}}`$ of $`\pm `$ 500 to 2000 K, in $`\mathrm{log}g`$ of $`\pm `$ 0.05 to 0.15 dex and in $`ϵ`$ of $`\pm `$ 0.02 to 0.04. These small changes are suggested by the results of the preceding section, and are in fact confirmed by the small differences in the fit we see between the two models at 0 and 15 kms<sup>-1</sup>.
The new best fit is found as explained above, and it gives the new parameters for the star at $`\xi `$=15 kms<sup>-1</sup>. All the stars except HDE 338 926 have been previously analyzed by our group without microturbulence (Herrero et al., Herrero92 (1992), Herrero00 (2000)). In the present analysis, small differences for the stars in common, especially with respect to the first reference, can be found due to the larger weight given here to He i $`\lambda `$4387, rather than to He i $`\lambda `$4922, and to the inclusion of line–blocking in the calculations (the temperatures of the late-type supergiants are slightly hotter in the present analysis). All this also helps to resolve the small difference between He i $`\lambda `$ 4387 and He i $`\lambda `$ 4922 found in Herrero et al. (Herrero92 (1992)) for these late O supergiants. In Table 3 we see the results for the five stars. Changes in the parameters induced by microturbulence are not beyond the standard error box of the analysis, and in particular we see that the He abundance is reduced for four of the stars, but only slightly. In Table 4 we give the values obtained for the radius, mass and luminosity, following the same procedures outlined in Herrero et al. (Herrero92 (1992)). We see that the changes are not significant. (We point out that the values for mass, radius and luminosity given in Table 4 for HD 210 809 and HD 18 409 differ slightly from those given in Herrero et al. (Herrero92 (1992)). These differences are not due to the new parameters, but to a change in the distance moduli: Herrero et al. used the values from Humphreys (hum78 (1978)), while we use the values from Garmany & Stencel (GarS (1992)).) In Figs. 6 to 10 we can see the fits with microturbulence of 0 and 15 kms<sup>-1</sup>. The quality of the fits to individual lines is the same at both values, although the triplet line He i $`\lambda `$ 4471 becomes stronger at 15 kms<sup>-1</sup>, which produces an improved fit, but not to the extent of completely resolving the so–called dilution effect. Sometimes a compromise between He i $`\lambda `$4387 and He i $`\lambda `$4922 is adopted, as in HD 5 689 and HD 210 809 (note that at 15 kms<sup>-1</sup> the first is slightly too weak and the second slightly too strong). With respect to HDE 338 926, which has been analyzed here for the first time, we have to point out that the analysis has been very difficult. The final parameters are extrapolated beyond the models we could converge. However, we are confident that these parameters characterize the star appropriately (or as appropriately as those of the other stars), because the fits with already converged models are reasonably good, and because we are able to fit the star with converged models when we do not consider line–blocking. Thus, it is only the small temperature increase introduced by line–blocking that moves the star beyond the convergence region. The value we obtain for the evolutionary mass, derived from the tracks by Schaller et al. (Schall92 (1992)), is 57.6 $`M_\odot `$ without microturbulence, and 56.8 $`M_\odot `$ with 15 kms<sup>-1</sup>. Comparing with the values in Table 4, it is clear that HDE 338 926 also shows the usual mass discrepancy. Of course, we will have to reanalyze HDE 338 926 in the future with spherical, mass-losing models.
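The quantities in Table 4 follow from standard relations once the radius is fixed by the distance: $`M=gR^2/G`$ from the spectroscopic gravity and $`L=4\pi R^2\sigma T_{\mathrm{eff}}^4`$. A minimal sketch of these relations (ours; not the specific procedure of Herrero et al., and the example numbers are purely illustrative):

```python
import math

G = 6.674e-11      # gravitational constant [m^3 kg^-1 s^-2]
SIGMA = 5.670e-8   # Stefan-Boltzmann constant [W m^-2 K^-4]
R_SUN, M_SUN, L_SUN = 6.957e8, 1.989e30, 3.828e26

def spectroscopic_mass(logg_cgs, radius_rsun):
    """M = g R^2 / G in solar masses (gravity converted from cgs to SI)."""
    g_si = 10.0 ** logg_cgs * 1e-2
    r = radius_rsun * R_SUN
    return g_si * r * r / G / M_SUN

def luminosity(teff, radius_rsun):
    """L = 4 pi R^2 sigma Teff^4 in solar luminosities."""
    r = radius_rsun * R_SUN
    return 4.0 * math.pi * r * r * SIGMA * teff ** 4 / L_SUN

# Illustrative early O supergiant values only (not an entry from Table 4):
print(spectroscopic_mass(3.40, 20.0))   # ~37 solar masses
print(luminosity(40000.0, 20.0))        # ~9e5 solar luminosities
```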
## 5 Discussion

We will not discuss here the real physical nature of microturbulence, nor the validity of the approximation of small-scale turbulent motions assumed when introducing it as just an extra Doppler width, which may not be suitable for the large values we deal with. This is beyond the scope of this work. As explained in Sect. 2, we do not introduce microturbulence in the structure calculations because we think that, having neglected other motions, it is actually not more physically consistent to introduce a turbulent pressure term. We simply accept the necessity of using microturbulence in the analysis of stellar spectra, especially in the determination of metal abundances, which is our final interest. We find that, contrary to our previous considerations, He i lines and He ii $`\lambda `$ 4686 do have profiles sensitive to the usual values of microturbulence found in OB stars. We confirm this for the whole range of parameters describing O and early B spectra, in agreement with McErlean et al. (McErlean98 (1998)) and Smith & Howarth (Smith&Howarth98 (1998)) for early B and late O supergiants, respectively. This sensitivity is due first to the fact that Stark broadening does not completely dominate these profiles, and second to the high values of microturbulence involved, which are comparable to the thermal velocity of He ions. Taking (0.84 $`T_{\mathrm{eff}}`$) as representative of the temperature in the line formation zone, we find that for $`T_{\mathrm{eff}}`$= 30 000 K the thermal velocity $`v_{\mathrm{th}}`$ of He ions is 10 kms<sup>-1</sup>, and for $`T_{\mathrm{eff}}`$= 45 000 K it is 12 kms<sup>-1</sup>. For H ions, with a fourth of the He atomic weight, thermal velocities are two times larger, so thermal broadening and Stark broadening dominate the profile. For all other He ii lines Stark broadening hides the effect of microturbulence. Quantifying the effect of microturbulence on the determination of stellar parameters, we find that the parameters are not changed beyond the standard error box of our analyses. This implies that for stars with high He overabundances, such as HD 5 689 analysed here, the He discrepancy will not be solved by considering microturbulence. This result is also supported by what we obtained in Sect. 3. It seems to contradict Smith & Howarth (Smith&Howarth98 (1998)) and McErlean et al. (McErlean98 (1998)), who state that solar He abundances are found for the supergiants they analyse when microturbulence is considered. However, the situation is still unclear, as an analysis of published results can show. Let us begin with the He overabundances obtained by Herrero et al. (Herrero92 (1992)). Following the argument by Smith & Howarth (Smith&Howarth98 (1998)) and McErlean et al. (McErlean98 (1998)), the preferred use of the strong line He i $`\lambda `$ 4922 in the analyses (slightly more sensitive to microturbulence than He i $`\lambda `$ 4387) would be responsible for the high He abundances found in that work. The derived He overabundances would then be an artifact introduced by the neglect of microturbulence. This argument applies in the cases in which Herrero et al. (Herrero92 (1992)) found a discrepancy between He i $`\lambda `$ 4922 and He i $`\lambda `$ 4387, but not in the rest of the cases. In Fig. 11 we have plotted He abundance versus $`T_{\mathrm{eff}}`$, as derived by Herrero et al. (Herrero92 (1992)), using different symbols for stars for which a discrepancy between these two lines was reported.
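The thermal speeds quoted here follow from $`v_{\mathrm{th}}=\sqrt{2kT/m}`$ evaluated at $`T=0.84T_{\mathrm{eff}}`$; a quick numerical check (ours):

```python
import math

K_B, M_U = 1.380649e-23, 1.66053907e-27  # Boltzmann constant, atomic mass unit (SI)

def v_thermal_kms(teff, atomic_weight, zone_factor=0.84):
    """Most probable thermal speed sqrt(2kT/m), in km/s, at T = 0.84 Teff."""
    return math.sqrt(2.0 * K_B * zone_factor * teff / (atomic_weight * M_U)) / 1e3

for teff in (30000.0, 45000.0):
    print(teff, round(v_thermal_kms(teff, 4.0), 1),  # He: ~10.2 and ~12.5 km/s
          round(v_thermal_kms(teff, 1.0), 1))        # H: twice as large
```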
We can see that the priority given to He i $`\lambda `$ 4922 can affect some of the derived abundances, especially at lower temperatures. To illustrate the situation, let us briefly discuss one of the objects for which Herrero et al. report a difference between He i $`\lambda `$ 4922 and He i $`\lambda `$ 4387: HD 210 809. This star has been reanalyzed here, giving more weight to He i $`\lambda `$ 4387 (contrary to Herrero et al.). The final difference between the He abundance obtained here with a microturbulence of 15 kms<sup>-1</sup> and that obtained by Herrero et al. is 0.04 (our new He abundance being lower). This, however, has to be ascribed to different effects. First, giving more weight to He i $`\lambda `$ 4387 in the absence of microturbulence results in a higher temperature (as already stated by Herrero et al.), which leads to a lower He abundance to keep the fit of the He ii lines. The effect of line blocking, not included in Herrero et al., goes in the same direction. This accounts for a difference of 0.02, and the additional difference of 0.02 is purely due to microturbulence (see Table 3). Comparing our Fig. 9 with Fig. 5 of Herrero et al. (Herrero92 (1992)) we see that the effects considered here help to reduce the discrepancy between the two He i lines. The discrepancy between He i $`\lambda `$ 4387 and He i $`\lambda `$ 4922 found by Herrero et al. (Herrero92 (1992)) is larger at lower temperatures, and thus, around 30 000 K, it cannot be solved by varying $`T_{\mathrm{eff}}`$ or introducing line blocking. Therefore, around or below this temperature, microturbulence is at present the only considered effect that could bring them into agreement. In the literature we can find analyses of early B and late O supergiants including microturbulence (Gies & Lambert, Gies&Lambert92 (1992), Smith & Howarth, Smith&Howarth94 (1994), Smith&Howarth98 (1998), Smith et al., Smithetal98 (1998), McErlean et al., McErlean98 (1998)). In Table 5 we list stars for which parameters have been derived with and without microturbulence. Comparisons in Table 5 have to be made with care, as it mixes results from different authors and different criteria. We see, however, that the changes found here are consistent with those of other authors, i.e., stellar parameters are changed within the uncertainties adopted here. These changes do not follow a clear, systematic pattern (i.e., they do not always go in the same direction when microturbulence is introduced) and thus we conclude that the effect of microturbulence is indeed not larger than present–day uncertainties. An exception to this might be $`\kappa `$ Ori, which shows a large change in He abundance. However, this large reduction of the He abundance found by McErlean et al. (McErlean98 (1998)) as compared to Lennon et al. (Lennon91 (1991)) is accompanied by a large change in the stellar parameters. Even if we attribute the whole change to the effect of microturbulence (and not to a new analysis with new data and more refined calculations), we see that $`\kappa `$ Ori is by far the coolest of the stars in Tables 3 and 5, and thus we expect a larger influence of microturbulence for it. In addition, we note that McErlean et al. (McErlean98 (1998)) do not exclude a larger He abundance for this star. We conclude that the data in the literature do not lead to the conclusion that the He discrepancy in O and early B supergiants is completely due to microturbulence, although microturbulence helps to reduce it.
Except for the discussed reduction of the He abundance, we do not see a clear pattern in the parameter changes in Tables 3 and 5. $`T_{\mathrm{eff}}`$ and $`\mathrm{log}g`$ can both increase or decrease when microturbulence is introduced, and thus we are tempted to conclude that these changes are still dominated by internal inconsistencies in the analyses that appear when we compare values that cannot be distinguished within the adopted uncertainties. Regarding the consistency of the fits to the He i lines, we find that the dilution of He i $`\lambda `$ 4471 is only partially solved, even when line–blocking is considered, as here. The fits to the rest of the He i lines do not improve much either in the stars we analyze here. So the consideration of both microturbulence and line–blocking in the analysis cannot bring the results from triplet and singlet He i lines into complete agreement, though it helps to improve it.

## 6 Conclusions

We study for the first time the effect of microturbulence over the whole O spectral range, from late to early O types. Introducing microturbulence in the solution of the statistical equilibrium and transfer equations, and then in the formal solution (i.e., in the absorption coefficient), we conclude that for higher gravities the effect on the lines is negligible. This, together with the low values of microturbulence usually found for stars of high luminosity class (dwarfs), leads us to conclude that, at a given $`T_{\mathrm{eff}}`$, only O supergiants are sensitive to microturbulence effects. In examining the behaviour with microturbulence of the lines we use in our analysis, we show that only the He i lines and the core of He ii $`\lambda `$ 4686 Å are sensitive to microturbulence. For the He i lines we show that, as should be expected, there is no constant pattern for each line; the behaviour depends on the parameters considered, which determine the strength of the line and its degree of saturation. This invalidates generalizations to the whole O spectral range made from results obtained for just one spectral type. Quantifying the sensitivity of the stellar parameters to microturbulence, we find that the changes induced by a microturbulence of 15 kms<sup>-1</sup> are enclosed within the standard error box of our analyses. We think that the lack of a clear pattern in the changes induced in $`T_{\mathrm{eff}}`$ and $`\mathrm{log}g`$ is simply due to the fact that we are varying the parameters within this error box. We do, however, find a systematic change in $`ϵ`$ towards lower He abundances when microturbulence is introduced. In particular, we find that late O supergiants show a decrease of 0.02–0.04 in $`ϵ`$ (the latter value when including other effects that add to microturbulence), which is in agreement with previous results pointing to the inverse relation between the microturbulence assumed and the He abundance obtained (Smith & Howarth Smith&Howarth98 (1998)). Early types are less sensitive to microturbulence, and might not show a difference in the derived He abundance. Thus microturbulence is not capable of explaining the He discrepancy at all for early O stars, nor is it for late O types with high overabundances. Looking at individual lines, we find that the fits to He i $`\lambda `$ 4471 Å are improved when considering microturbulence, but not to the extent of completely explaining its dilution.
On the other hand, He i $`\lambda `$$`\lambda `$ 4922, 4387 Å are sometimes slightly better and sometimes slightly worse fitted, in the latter case with model cores a bit too strong or a bit too weak, respectively. The rest of the lines retain the same fit quality. The *He i lines problem* is therefore only partially solved by simply considering microturbulence, even with line–blocking included in the model profiles. Our conclusion is therefore that microturbulence affects the derivation of stellar parameters, but its effect is comparable to the adopted uncertainties. Thus it can reduce moderate He overabundances and resolve differences in line fit quality, but it cannot by itself explain large He overabundances in O stars, and we are forced to conclude that these are due to other effects, whether real or caused by artifacts in our analyses. This last point will probably not find a definitive answer until we are able to derive reliable abundances of C, N and O in the atmospheres of O stars that we can correlate with the He abundances.

###### Acknowledgements.

We want to thank Neil McErlean and Danny Lennon for their help, suggestions and many clarifying discussions. AH wants to acknowledge support for this work by the Spanish DGES under project PB97-1438-C02-01.
# Interfering with Interference: a Pancharatnam Phase Polarimeter

## I Introduction

Pancharatnam explored how the phase of polarized light changes as the light passes through a cycle of polarizations. He found that the phase increases by $`\mathrm{\Omega }/2`$, where $`\mathrm{\Omega }`$ is the solid angle that the geodesic path of polarizations subtends on the Poincaré sphere. If the path does not consist of great circles, an additional dynamical phase will develop. Berry developed the corresponding theory for general quantum systems and re-derived Pancharatnam’s result. A series of experiments have been performed which demonstrate the Pancharatnam phase and the related geometric phase. De Vito and Levrero have criticized many of these experiments because they use retarders, which introduce a dynamical component to the phase. Berry & Klein and Hariharan et al. have performed a series of experiments using only polarizers and beam splitters to introduce and measure the geometric phase. In this paper, I describe a simple experiment using polarizers and a double slit to demonstrate the Pancharatnam phase and use this phase to determine the polarization state of the incoming light. If the incoming light is linearly polarized and the polarizers are also linear, the resulting phase is limited to 0 or $`\pi `$. However, elliptically polarized light will result in intermediate values of the phase. This experiment uses a double slit rather than a half-silvered mirror to split the beam, eliminating a possible source of dynamic phase; furthermore, with fewer and less complex optical elements, this experiment may be performed over a wide range of photon energies. Moreover, since the optics are simple, one can analyze the experiment in terms of Maxwell’s equations, illustrating the connection between these equations and the Pancharatnam phase.

## II Experimental Apparatus

Fig. 1 illustrates the experimental setup. RP and FP denote rotatable and fixed linear polarizers, respectively. Each fixed linear polarizer covers one of the two slits and is oriented perpendicular to the other. Each rotatable polarizer is oriented at forty-five degrees to the fixed polarizers to maximize throughput. Incidentally, if the final polarizer is removed, the interference pattern disappears, in accordance with the Fresnel-Arago law. The arrangement is similar to that used by Schmitzer, Klein & Dultz, but instead of a Babinet-Soleil compensator, here the orientations of the polarizers vary the geometric phase. The phase depends on the input and output polarizations of the interferometer. If the two polarizers are aligned, the Pancharatnam phase between the two paths vanishes; if they are orthogonal, the phase is $`\pi `$. The setup is identical to the standard physics demonstration of Young’s double-slit experiment, with the exception that each slit is covered with a polarizing filter; consequently, both the construction and the analysis of the experiment are suitable for a demonstration or a student laboratory experiment. Furthermore, the light follows the same spatial trajectory, independent of the position of the polarizers and the geometric phase observed.

## III The Poincaré Sphere

Understanding how the apparatus works for a general polarization is most simply achieved by tracing the polarization of the light through the system on the Poincaré sphere. The initial polarization from the laser is unknown, but it is depicted in the figures as left circular. The left panel of Fig. 2 depicts the configuration for zero geometric phase.
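Before tracing the figures, note that the phase $`\mathrm{\Omega }/2`$ can be evaluated directly from unit Stokes vectors on the Poincaré sphere; a small sketch (our illustration, using the Oosterom-Strackee signed-area formula for geodesic triangles, with the overall sign convention of Sect. III left aside):

```python
import numpy as np

def triangle_solid_angle(a, b, c):
    """Signed solid angle of the geodesic triangle (a, b, c) on the unit
    sphere (Oosterom-Strackee formula)."""
    num = np.dot(a, np.cross(b, c))
    den = 1.0 + np.dot(a, b) + np.dot(b, c) + np.dot(c, a)
    return 2.0 * np.arctan2(num, den)

def pancharatnam_phase(states):
    """Omega/2 for the closed geodesic cycle through the given unit Stokes
    vectors (the cycle closes from the last state back to the first)."""
    omega = sum(triangle_solid_angle(states[0], states[i], states[i + 1])
                for i in range(1, len(states) - 1))
    return 0.5 * omega

# Horizontal, +45 deg linear, right circular (in one common Stokes convention):
H, D45, RCP = np.eye(3)
print(pancharatnam_phase([H, D45, RCP]))  # octant: Omega = pi/2 -> phase pi/4
```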
As the laser light passes through the first polarizer, its polarization is projected onto the horizontal direction. After passing through the two slits, the polarization is projected onto two orthogonal polarizations oriented at forty-five degrees to the horizontal. Finally, the last polarizer projects the polarization back onto the horizontal. One can form a closed loop by following the path along one leg and returning along the other leg. This closed loop does not enclose any solid angle on the sphere. The right panel of Fig. 2 illustrates the path of the polarization when the two rotatable polarizers are orthogonal. The polarization is now projected onto the vertical first, followed by the two diagonal polarizations and the final horizontal projection. Constructing the closed loop as described earlier yields an area of $`2\pi `$ and a Pancharatnam phase of $`\pi `$ between the two slits. Since a constant phase difference of $`-\pi `$ is equivalent to $`\pi `$, this implementation hides the fact that the area on the sphere is oriented and that consequently the geometric phase may be positive or negative. A third configuration, in which the input polarizer is followed by a quarter-wave plate yielding right-hand circularly polarized input to the interferometer, illustrates this point. The direction in which the loop is traversed determines which slit is designated by $`\varphi _1`$, such that $`\varphi _1-\varphi _2=\mathrm{\Omega }/2`$. The phase difference is given by $`\pi /2`$ if the final polarizer lies clockwise relative to the polarizer behind the first slit, and $`-\pi /2`$ if the final polarizer lies counterclockwise. The converse result holds for left-hand polarized light. If the axis of the polarizer behind the left-hand slit (as one looks toward the screen) lies clockwise of that of the final polarizer, one obtains the result that the fringes will shift to the left (relative to their position for identical input and output polarizations) if the light is left elliptically polarized, and to the right if it is right elliptically polarized. This configuration automatically includes the minus sign present in Pancharatnam’s definition of the geometric phase. The upper and lower panels of Fig. 3 show the fringe patterns produced by the apparatus in the configuration described above for input and output polarizations that are linear and identical, and linear and orthogonal, respectively. The middle panel shows the fringes for left-hand circularly polarized input light with the input polarizer removed.

## IV Measuring the Input Polarization

If the input polarizer is removed, the apparatus can be used to measure the polarization of the light source. The procedure requires four measurements of the fringe positions: two for calibration and two to determine the geometric phases.

1. Locate the positions of the fringes with the input and output polarizers parallel and midway between the polarizers at the slits.

2. Remove the input polarizer and compare the position of the fringes relative to the two previous measurements. Let $`\alpha /(2\pi )`$ be the ratio of the offset of the fringes in step (2) relative to step (1) to the distance between the fringes in step (1). Also, note the direction in which the fringes shift – left for left-circular polarization and right for right-circular polarization.

3. Rotate either all of the polarizers or the light source by forty-five degrees, repeat steps (1) and (2), and denote the resulting ratio by $`\beta /(2\pi )`$.
The left panel of Fig. 4 depicts how the polarization evolves on the Poincaré sphere for the two configurations. It is straightforward to calculate the input polarization using spherical trigonometry by following the right panel of Fig. 4. $`\mathrm{sin}d`$ gives the fraction of circular polarization, and $`e`$ is the angle between the long axis of the polarization ellipse and the vertical axis. Napier’s analogies yield

$$a=\left\{\mathrm{tan}^{-1}\left[\frac{\mathrm{sin}\frac{1}{2}(\alpha -\beta )}{\mathrm{sin}\frac{1}{2}(\alpha +\beta )}\right]+\mathrm{tan}^{-1}\left[\frac{\mathrm{cos}\frac{1}{2}(\alpha -\beta )}{\mathrm{cos}\frac{1}{2}(\alpha +\beta )}\right]\right\}$$ (1)

and the law of sines gives

$$\mathrm{sin}d=\mathrm{sin}a\mathrm{sin}\beta .$$ (2)

Combining these two results yields

$$\mathrm{tan}\frac{e}{2}=\mathrm{tan}\frac{1}{2}(a-d)\frac{\mathrm{sin}\left(\frac{\pi }{4}+\frac{\beta }{2}\right)}{\mathrm{sin}\left(\frac{\pi }{4}-\frac{\beta }{2}\right)}$$ (3)

If the input polarization is linear, one finds that measurements of the fringes can only locate the polarization vector to within forty-five degrees. However, the contrast of the interference pattern constrains the linear polarization further, as well as the fractional polarization of the input light.

## V Conclusions

A variation of Young’s double-slit experiment provides an excellent and simple demonstration of the Pancharatnam phase for polarized light. Furthermore, the observed phase difference between the two slits is simply related to the polarization of the incoming light. The phase difference determines upon which great circle of the Poincaré sphere the polarization lies, and by repeating the measurement after rotating the apparatus, the input polarization can be determined precisely. The distinct advantage of the experiment is the simplicity of the optics: only linear polarizers and a double slit are required. Since the Pancharatnam phase is achromatic, the procedure may be performed over the wide range of photon energies where suitable materials are available. A companion paper discusses the implementation of the experiment in X-rays and possible applications.

## ACKNOWLEDGMENTS

I would like to thank Jackie Hewitt, Lior Burko and Eugene Chiang for useful discussions and to acknowledge a Lee A. DuBridge Postdoctoral Scholarship.
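For completeness, the polarimetric inversion of Sect. IV, Eqs. (1)-(3), transcribes directly into a short routine; a sketch (ours; angles in radians, with `atan2` used for quadrant safety):

```python
import math

def input_polarization(alpha, beta):
    """Invert Eqs. (1)-(3): measured fringe phases alpha, beta ->
    (sin d, e), the circular fraction and the ellipse-axis angle."""
    a = (math.atan2(math.sin(0.5 * (alpha - beta)), math.sin(0.5 * (alpha + beta)))
         + math.atan2(math.cos(0.5 * (alpha - beta)), math.cos(0.5 * (alpha + beta))))
    sin_d = math.sin(a) * math.sin(beta)                   # Eq. (2)
    d = math.asin(max(-1.0, min(1.0, sin_d)))
    tan_half_e = (math.tan(0.5 * (a - d))
                  * math.sin(math.pi / 4 + beta / 2)
                  / math.sin(math.pi / 4 - beta / 2))      # Eq. (3)
    return sin_d, 2.0 * math.atan(tan_half_e)
```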
# SELEX

*Talk given at the Session honoring Leon Lederman at the VII Mexican Workshop on Particles and Fields, Mérida, México, November 10-17, 1999. Proceedings to be published by AIP.*

## The history: E761

Experiment E761, “An Electroweak Enigma: Hyperon Radiative Decays” (proposal:761), was my first encounter with experimental high energy physics. I gained experience with silicon microstrip and wire chamber detectors, magnet spectrometers for precision momentum measurements, data acquisition and trigger systems based on NIM and CAMAC, and reconstruction and analysis packages, and performed high precision physics measurements of the $`\mathrm{\Sigma }^+`$ magnetic moment and of the $`\mathrm{\Sigma }^+`$ and $`\overline{\mathrm{\Sigma }}^-`$ production polarization (am:ptxfspol; am:mmssb; am:ssbpol). This experiment now represents the starting push for the particle physics group at San Luis Potosí.

## The present: SELEX

There are several aspects of SELEX one could discuss, including recent hyperon and charm physics results; I only highlight a few of them. Before talking about physics, I should mention that E781, or SELEX, is the first experiment in which San Luis Potosí is a formal collaborating institution. Two students have obtained their master’s degrees with theses related to SELEX: Ricardo López Fernández, working in the RICH group and analyzing the beam composition using the RICH, and Galileo Domínguez Zacarías, working with the $`K_s^0`$ sample looking into the $`\pi `$ asymmetry (uaslp:trlf; uaslp:tgdz). Personally, I worked on the smart crate controller in the CAMAC setup and on the trigger installation. This collaboration also had an impact at San Luis Potosí: Jürgen Engelfried, a member of SELEX and previously of WA89 at CERN, accepted a position as professor at San Luis Potosí; his expertise brings more life to the local group, helps consolidate it, and opens more physics opportunities, as I discuss in the section “The near future”. SELEX is a new fixed target experiment designed to enhance charm and strange baryon data over meson data. The data taking lasted from summer 1996 to fall 1997. The apparatus includes a hadron beam tagged as $`\pi `$, $`\mathrm{\Sigma }`$ or proton using a TRD detector; a micro-vertex detector; and particle identification using TRD, lead glass, and RICH detectors, all distributed among three magnet spectrometers (proposal:781). SELEX was designed to be a high $`x_F`$ charm baryon spectrometer, and in fact this can be appreciated from the high acceptance at $`x_F`$ $`>`$ 0.5. In the three modes, $`\mathrm{\Lambda }_c^+\to pK^-\pi ^+`$, $`D^0\to K^-\pi ^+`$, and $`D^+\to K^-\pi ^+\pi ^+`$, the acceptance is greater than 6%, and identical for particle and antiparticle decays. The control of the acceptance gives the opportunity to study charm baryon production as a function of $`x_F`$ with very good precision, challenging theoretical models and other experiments (talk:fernanda; fg:ucla; jr:vancuver). Charm baryon and meson lifetime measurements are also under control in the experiment. SELEX was designed with a trigger to enhance all charm baryon production and decay modes; as a result it has the ability to look for unseen decay modes, and SELEX is now reporting the first observation of the Cabibbo-suppressed $`\mathrm{\Xi }_c^+\to pK^-\pi ^+`$ decay (selex:cabibbo). Having a $`\mathrm{\Sigma }^-`$ beam to study charm production also provides, by its very nature, the tool to study properties of the hyperon itself.
At present there are preliminary reports on the $`\mathrm{\Sigma }^-`$ radius and on total cross sections of the $`\mathrm{\Sigma }^-`$ beam on different target materials (uwe:total; mam:inelastic; vk:ados; ivo:hyperon).

## The near future: CKM & Instrumentation

On April 26, 1996, Fermilab invited physicists to a workshop on the use of a 120 GeV/c proton beam in collider and fixed target modes. All Mexican groups had the opportunity to join this activity. The event represented a great opportunity for young Mexican groups, since the enterprises are small in size and offer the chance to participate from the very start of the design and construction, and to lead a project or subproject. Of all the existing Mexican groups, only San Luis Potosí has so far taken up the challenge to participate actively in one of these experiments. CKM, “Charged Kaons at the Main Injector”, a proposal for a precision measurement of the decay $`K^+\to \pi ^+\nu \overline{\nu }`$ and other rare $`K^+`$ processes at Fermilab using the Main Injector, is one of the experiments born after the April 1996 workshop (proposal:ckm). The experiment will measure the branching ratio of the ultra-rare charged kaon decay $`K^+\to \pi ^+\nu \overline{\nu }`$ by observing a large sample of those decays with small background. The physics goal is to measure the magnitude of the Cabibbo-Kobayashi-Maskawa matrix element $`V_{td}`$ with a statistical precision of about 5%, based upon a $`\sim `$ 100 event sample with total backgrounds of less than 10 events. This decay mode is known to be theoretically clean. The only significant theoretical uncertainty in the calculation of this branching ratio is due to the charm contribution. A 10% measurement of the branching ratio will yield a 10% total uncertainty on the magnitude of $`V_{td}`$. In this experiment IF-UASLP is in charge of testing parts and of the whole design of two Ring Imaging Cherenkov counters (RICHes) (plan:erik). Experience with this technology came from the participation of Jürgen Engelfried in the design, construction, operation, and analysis of two previous RICHes, one in WA89 and another in SELEX. Also, in SELEX, Ricardo López Fernández, a graduate student from IF-UASLP, worked on and used the RICH as part of his M.Sc. thesis. CKM marks the near-future experimental enterprise we are working on at IF-UASLP. We are initiating high energy physics instrumentation at IF-UASLP, aimed at present at the RICH technology applied to the CKM experiment.

## HEP: a group at IF-UASLP

Experimental high energy physics evolves through projects, and this is also reflected in the IF-UASLP group, which has seen the passage of experiments E761, WA89, E781 and now CKM. Presently, the group is creating a high energy instrumentation laboratory for detector research and development. The basic idea of the laboratory is to build user-defined detectors for experiments worldwide; right now we have the RICH design and testing for CKM. In 1999, IF-UASLP hired Ruben Flores Mendieta to strengthen particle physics theory and phenomenology research. Looking back, from the beginning of the 80’s to the end of the 90’s, a span of close to 20 years was needed to establish an experimental group at San Luis Potosí; that is a positive result from an initial kick.

## Acknowledgement

This work was partly financed by IF-UASLP and CONACyT.
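The quoted statistical precision is consistent with simple Poisson counting: since the branching ratio scales as $`|V_{td}|^2`$, the relative error on $`|V_{td}|`$ is half that on the branching ratio; a one-line check (ours):

```python
import math

n_signal = 100.0                     # expected K+ -> pi+ nu nubar sample
rel_br = 1.0 / math.sqrt(n_signal)   # Poisson: ~10% on the branching ratio
rel_vtd = 0.5 * rel_br               # BR ~ |Vtd|^2  ->  ~5% on |Vtd|
print(rel_br, rel_vtd)               # 0.1, 0.05
```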
# Probing black hole X-ray binaries with the Keck telescopes

## 1 INTRODUCTION

Zel’dovich and Novikov (1966) were the first to propose the technique which is still in use for “weighing” black holes. They suggested that black holes could be detected indirectly from light emitted through the interaction with a donor star in an X-ray binary system. The motion of the donor star around the black hole would produce a sinusoidal radial velocity curve, which could be detected from the Doppler shifts of the photospheric absorption lines of the donor star. The semi-amplitude ($`K`$) of the curve together with the binary period ($`P`$) determines the mass function of the black hole (a lower limit to its mass), using Kepler’s third law: $`f_x=PK^3/(2\pi G)`$. Indeed, X-ray binaries were found in the late 1960s and the first black-hole candidate, Cyg X-1, in 1971 (Oda et al. 1971). Efforts to measure the mass of Cyg X-1 were affected by uncertainties in the evolution of the massive donor star, and with a low mass function ($`f_x=0.22\pm 0.01M_\odot `$; Bolton 1975) this was not regarded as unequivocal evidence for a black hole (see Herrero et al. 1995 for the most recent work).

## 2 HUNTING FOR BLACK HOLES IN X-RAY NOVAE

In the 1980s the observational effort turned to X-ray novae (XRNs, a sub-group of low-mass X-ray binaries). Unlike classical novae, XRNs are accretion-driven events that show disk outbursts with a typical rise of 8–10 mag in a few days and a subsequent decline over several months. After the XRN has subsided into quiescence, the accretion disk does not dominate the observed flux, rendering the companion star visible. The low-mass companion star allows the mass function of the black hole, a good approximation to the mass in a high-inclination system, to be determined. In the 1990s, X-ray satellites found 6 XRNs with identified companion stars in the optical (Nova Muscae 1991, Cheng et al. 1992; Nova Persei 1992, Casares et al. 1995a and references therein; Nova Sco 1994, Bailyn et al. 1995; Nova Vel 1993, Filippenko et al. 1999; GRO J1719-24, e.g., Ballet et al. 1993; XTE J1550-564, e.g., Smith et al. 1998). The prototype target in the 1980s was A0620–00, but unfortunately its mass function was close to the maximum mass of a neutron star ($`f_x=3.2\pm 0.2M_\odot `$; McClintock and Remillard 1986). It was not until 1992 that the mass function of a black hole candidate, in the XRN 1989 GS 2023+338, was found to be much larger than the maximum mass of a neutron star ($`f_x=6.08\pm 0.06M_\odot `$; Casares et al. 1992). Since then, efforts have been directed toward measuring actual masses, thus producing the first observed mass distribution of black holes (Bailyn et al. 1998; Miller et al. 1998; for the theoretical distribution see Fryer 1999). The determination of the masses of stellar remnants after supernova explosions is essential for an understanding of the late stages of evolution of massive stars. Very recently, the first observational evidence for the progenitor, a supernova or hypernova with a mass $`>30M_\odot `$, that produced the black hole of $`7.0\pm 0.2M_\odot `$ in GRO J1655-40 was found with the Keck-I telescope (from high metal abundances presumably deposited onto the surface of the companion F5IV star by the supernova explosion; Israelian et al. 1999).
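The mass function is a one-line computation; a minimal sketch (ours), using the GS 2023+338 orbital elements from Casares et al. (1992) to reproduce the value quoted above:

```python
import math

G = 6.674e-11      # gravitational constant [m^3 kg^-1 s^-2]
M_SUN = 1.989e30   # solar mass [kg]
DAY = 86400.0      # seconds per day

def mass_function(period_days, k_kms):
    """f_x = P K^3 / (2 pi G), a hard lower limit on the compact-object mass."""
    return period_days * DAY * (k_kms * 1e3) ** 3 / (2.0 * math.pi * G) / M_SUN

# GS 2023+338 (V404 Cyg): P ~ 6.47 d, K ~ 208.5 km/s (Casares et al. 1992)
print(mass_function(6.47, 208.5))  # ~6.08 solar masses, as quoted in the text
```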
## 3 RADIAL VELOCITY CURVES

Utilizing the Doppler shifts of the photospheric lines produced by the orbital motion of the companion star around the black hole, but now with the 10-m Keck-I and Keck-II telescopes, Filippenko and his collaborators have produced the four most accurate mass functions ($`f_x=5.0\pm 0.1M_\odot `$, $`1.2\pm 0.1M_\odot `$, $`4.7\pm 0.2M_\odot `$, $`3.2\pm 0.1M_\odot `$, respectively, for GS 2000+25, GRO J0422+32, Nova Oph 1977 and Nova Vel 1993; Filippenko et al. 1995a, 1995b, 1997, 1999). Figures 1 and 2 show the great improvement that the large aperture of Keck offers in comparison to 4-m-class telescopes in extracting radial velocity curves of the motion of the donor star around the black hole, by cross-correlating main-sequence template spectra with the observed spectra.

## 4 THE MAIN SEQUENCE COMPANION STAR

The line broadening function affecting the absorption lines of the object spectra consists of the convolution of the instrumental profile (full width at half-maximum = 108 km s<sup>-1</sup>) with the companion star’s rotational broadening profile (of width $`\upsilon \mathrm{sin}i`$), with further smearing due to changes in the orbital velocity of the companion star during a given exposure. The exposure time for each object spectrum ($`T_{\mathrm{exp}}\approx `$ 25–40 min) resulted in orbital smearing of the lines of up to $`2\pi K_\mathrm{c}T_{\mathrm{exp}}/P`$, which can range up to 242 km s<sup>-1</sup>; hence, the template spectra were subsequently smeared by the amount corresponding to the orbital motion through convolution with a rectangular profile, and the resulting template spectrum was further broadened from 2 to 150 km s<sup>-1</sup> by convolution with the Gray (1976) rotational profile. We scaled the blurred template spectrum by a factor $`0<f<1`$ to match the absorption-line strengths in the Doppler-corrected average spectrum. Finally, the simulated template spectrum (i.e., smeared and broadened) was subtracted from the Doppler-corrected average spectrum of Nova Oph 1977, and $`\chi ^2`$ was computed from a smoothed version of the residual spectrum. The minimum $`\chi ^2`$ gives the optimal $`\upsilon \mathrm{sin}i`$, $`f`$, and spectral type of the companion star (for more details see Harlaftis et al. 1996, 1997, 1999). Figure 3 summarizes the procedure we follow to deconvolve the main-sequence spectrum from the target spectrum. The spectrum of an M2 V template (BD $`+44^{\circ }2051`$) is shown at the bottom, binned to 124 km s<sup>-1</sup> pixels (= 4 pixels, similar to the instrumental resolution). This template was then treated so that its line profiles simulate those of the GRO J0422+32 spectra. The smearing in radial velocity due to the orbital line broadening during each exposure was applied to individual copies of the M2 template, and these were subsequently averaged using weights identical to those of the corresponding GRO J0422+32 spectra. Next, a rotational broadening profile corresponding to $`\upsilon \mathrm{sin}i=50`$ km s<sup>-1</sup> was applied; the result is the second spectrum from the bottom in Figure 3. The spectrum above it is the Doppler-corrected average of the GRO J0422+32 data in the rest frame of the M2 V template. Finally, the residual spectrum is shown at the top, after 0.61 times the simulated M2 V template ($`f=0.61\pm 0.04`$ for M2; Table 4) was subtracted from the Doppler-shifted average spectrum.
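The smearing estimate above is directly computable; a minimal sketch (ours), using literature orbital elements for GRO J0422+32 ($`K_\mathrm{c}\approx `$ 380 km s<sup>-1</sup>, $`P\approx `$ 5.1 hr, consistent with the $`f_x=1.2M_\odot `$ quoted above):

```python
import math

def orbital_smearing_kms(k_c_kms, t_exp_min, period_hr):
    """Maximum velocity smearing ~ 2 pi K_c T_exp / P during one exposure."""
    return 2.0 * math.pi * k_c_kms * (t_exp_min / 60.0) / period_hr

# A ~31-min exposure on GRO J0422+32 smears by ~242 km/s, the quoted maximum:
print(orbital_smearing_kms(380.0, 31.0, 5.1))
```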
The M-star absorption lines and TiO bands are evident in the Doppler-corrected average, and they are almost completely removed by subtraction of the template spectrum (e.g., the Na I D line). Emission from He I $`\lambda `$5876 becomes prominent after subtraction of the M2 V template, and weak emission from He I $`\lambda `$6678 is also present. Note that there is no evidence for Li I $`\lambda `$6708 absorption, to an equivalent width upper limit of 0.13 Å ($`1\sigma `$) relative to the original continuum, except in GS 2000+25 (see Martín et al. 1994 for lithium in X-ray binaries). ## 5 THE MASS RATIO OF THE BLACK HOLE BINARIES Determination of the mass ratio (from the rotational broadening of the photospheric lines of the companion star) and the inclination (inferred from the ellipsoidal modulations of the companion star), when combined with the mass function, fully determines the system parameters and the masses of the binary components. The mass ratio $`q=M_2/M_1`$ is found by measuring the rotational broadening of the absorption lines of the companion, $`\upsilon \mathrm{sin}i`$, through the relation $$\frac{\upsilon \mathrm{sin}i}{K_c}=0.46\left[(1+q)^2q\right]^{1/3},$$ which is valid since the binary period is so short that the companion star is tidally locked to the black hole. Using the $`\chi ^2`$ optimization technique described in the previous section to extract the rotational broadening of the absorption lines of the donor star, we determined mass ratios for the first time for binaries as faint as 21 mag (Harlaftis et al. 1996, 1997, 1999). The complete results of the analysis of the Keck data are given in Table 1. ## 6 THE ACCRETION DISK In quiescence, the accretion disk has so far largely escaped detection by X-ray satellites, but it can be studied in the optical. Double-peaked Balmer profiles are observed, with “S”-wave components arising either from the companion star (Nova Oph 1977) or from the bright spot (GS 2000+25), and an H$`\alpha `$ emissivity law similar to that seen in dwarf novae is observed. An imaging technique, Doppler tomography, shows the accretion disks in GS 2000+25, Nova Oph 1977 and GRO J0422+32 to be present (Fig. 4). Further, mass transfer from the donor star continues vigorously to the outer disk, as evidenced by the “bright spot”, the impact of the gas stream onto the outer accretion disk in GS 2000+25 (Fig. 4; Harlaftis et al. 1996).
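The relation above is easily inverted numerically for the mass ratio; a short sketch (the measured values plugged in are placeholders, not results from Table 1):

```python
from scipy.optimize import brentq

def mass_ratio(vsini, Kc):
    """Solve v sin i / K_c = 0.46 [(1+q)^2 q]^(1/3) for q = M2/M1,
    valid for a tidally locked, Roche-lobe-filling companion."""
    target = vsini / Kc
    g = lambda q: 0.46 * ((1.0 + q) ** 2 * q) ** (1.0 / 3.0) - target
    return brentq(g, 1e-6, 10.0)   # root-find over a wide bracket

# Placeholder measurements: v sin i = 50 km/s, K_c = 380 km/s -> q ~ 0.02
print(f"q = {mass_ratio(50.0, 380.0):.3f}")
```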
# Water Ice in 2060 Chiron and its Implications for Centaurs and Kuiper Belt Objects ## 1 Introduction The Centaurs are a set of solar system objects whose orbits are confined between those of Jupiter and Neptune. Their planet-crossing orbits imply a short dynamical lifetime ($`10^6`$–$`10^7`$ yr). The current belief is that Centaurs are objects scattered from the Kuiper Belt that may eventually end up in the inner solar system as short-period comets. The first discovered and brightest known Centaur, 2060 Chiron, is relatively well studied. The object is firmly established as a comet, with a weak but persistent coma. It is well documented that Chiron possesses neutral colors (e.g., Hartmann et al. 1990, Luu and Jewitt 1990), and a low albedo of $`0.14_{-0.03}^{+0.06}`$ (Campins et al. 1994). (It must be noted that most of these measurements were made when Chiron clearly exhibited a coma, so the measurements are likely to have been contaminated by dust scattering from the coma. The albedo, in particular, should be viewed as an upper limit). Chiron has a rotation period of ∼6 hr (e.g., Bus et al. 1989) and a photometric amplitude that is modulated by the cometary activity level (Luu and Jewitt 1990, Marcialis and Buratti 1993, Lazzaro et al. 1997). Published optical and near-IR spectra of Chiron show a nearly solar spectrum, varying from slightly blue to completely neutral (Hartmann et al. 1990, Luu 1993, Luu et al. 1994, Davies et al. 1998), and devoid of specific mineralogical features. As a group, the Centaurs display remarkable spectral diversity (e.g., Luu and Jewitt 1996). The Centaur 5145 Pholus is among the reddest bodies in the solar system (Mueller et al. 1992, Fink et al. 1992), and shows absorption features at 2.00 and 2.25 $`\mathrm{\mu m}`$ (see, e.g., Davies et al. 1993a, Luu et al. 1994). Cruikshank et al. (1998) interpreted the 2.0 $`\mathrm{\mu m}`$ feature as due to water ice, and the 2.27 $`\mathrm{\mu m}`$ feature as due to methanol. They derived a best fit to the spectrum which consisted of carbon black and an olivine-tholin-water-methanol mixture. Pholus’s extreme red color and low albedo ($`0.044\pm 0.013`$, Davies et al. 1993b) strongly suggest that long-term irradiation of carbon- and nitrogen-bearing ices has resulted in an organic-rich, dark “irradiation mantle” (e.g., Johnson et al. 1987). The spectral differences between Chiron and Pholus have been attributed to the presence of cometary activity in Chiron and the lack thereof in Pholus (Luu 1993, Luu et al. 1994). A continuous rain of sub-orbital cometary debris falling onto the surface of Chiron may have buried a more primordial irradiation mantle with unirradiated matter ejected from the interior. In this paper we show further evidence supporting this hypothesis, and that scattering effects by coma dust particles also eliminate spectral features seen in solid surface reflection. ## 2 Observations (a) Keck observations. The Keck near-infrared observations were made on UT 1999 April 03, at the $`f`$/25 Cassegrain focus of the Keck I telescope, using the NIRC camera (Matthews and Soifer 1994). The NIRC detector is a 256 $`\times `$ 256 pixel InSb array that can be switched from direct imaging to slit spectroscopy. In imaging mode the pixel scale is 0.15 arcsec per pixel (38” $`\times `$ 38” field of view), and in spectroscopy mode the resolution is $`\lambda /\mathrm{\Delta }\lambda `$ ∼ 100. We used a 0.68” $`\times `$ 38” north-south slit for all spectral observations.
Uneven illumination of the slit and pixel-to-pixel variations were corrected with spectral flat fields obtained from a diffusely illuminated spot inside the dome. Since the target was not visible during spectral observations, Chiron’s position was confirmed by centering it at the location of the slit and taking an image. The slit, grism, and blocking filter were then inserted in the beam for the spectroscopic observation. Spectra were made in pairs dithered along the slit by 13”. We obtained spectra in two different grating positions, covering the JH wavelength region (1.00 - 1.50 $`\mathrm{\mu m}`$) and the HK region (1.4 - 2.5 $`\mathrm{\mu m}`$). Non-sidereal tracking at Keck showed a slight drift with time, so we recentered Chiron in the slit every 15-20 minutes. (b) UKIRT observations. The UKIRT observations were made on UT 1996 Feb 7 and 8 with the CGS4 infrared spectrometer mounted at the Cassegrain focus. The detector was a 256 $`\times `$ 256 pixel InSb array, with a 1.2” per pixel scale in the spatial direction. An optical TV camera fed by a dichroic beam splitter gave us slit viewing capability and thus we were able to guide on the target during all observations. The conditions were photometric and the image quality was ∼1” Full Width at Half Max (FWHM), so we used a 1.2” $`\times `$ 80” slit aligned North-South on the sky at all times. A 75 line per mm grating was used in first order for all observations, yielding a dispersion of 0.0026 $`\mathrm{\mu m}`$/pixel in the H and K band ($`\lambda /\mathrm{\Delta }\lambda `$ ∼ 850). However, the detector was dithered by 1/2 pixel during each observation, so the spectra were oversampled by a factor of 2. The effective spectral coverage in the H band was 1.4 – 2.0 $`\mathrm{\mu m}`$ and in the K band 1.9 – 2.4 $`\mathrm{\mu m}`$. Sky background removal was achieved by nodding the telescope 30” (23 pixels) along the slit. Dark frames and calibration spectra of flat fields and comparison lamps (Ar) were also taken every night. Both the Keck and UKIRT observations were calibrated using stars on the UKIRT Faint Standards list (Casali and Hawarden 1992). At both telescopes, we took care to observe the standard stars at airmasses similar to those of Chiron (airmass difference ≲ 0.10), to ensure proper cancellation of sky lines. The separate reflectance spectra from each night of observation are shown in Fig. 1. ## 3 Discussion ### 3.1 The spectra The Chiron spectra (Fig. 1) show that: (a) Chiron is nearly neutral in the 1.0 – 2.5 $`\mathrm{\mu m}`$ region; (b) there is a subtle but definite absorption feature at 2 $`\mathrm{\mu m}`$ (∼0.35 $`\mathrm{\mu m}`$ wide, ∼10% deep) in spectra from 1996 and 1999, and a marginal absorption feature near 1.5 $`\mathrm{\mu m}`$ in the 1999 spectrum; and (c) the spectral slope and the strength of the 2 $`\mathrm{\mu m}`$ feature change with time. Reflectivity gradients in the JHK region span the range $`S^{}`$ = -2 %/1000 Å to $`S^{}=`$ 1%/1000 Å (Table 2). In Fig. 2 we compare Chiron spectra from 1993 (from Luu et al. 1994) with the present observations. The flat and featureless 2 $`\mathrm{\mu m}`$ spectrum from 1993 stands in sharp contrast with the later spectra. The presence of the 2 $`\mathrm{\mu m}`$ feature in different spectra taken with different telescopes, instruments, and spectral resolutions provides convincing evidence that it is real.
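For reference, the reflectivity gradient $`S^{}`$ quoted above is simply the slope of the normalized reflectance spectrum; a minimal sketch of its computation, assuming the usual convention $`S^{}`$ = (dS/d$`\lambda `$)/⟨S⟩ in %/1000 Å (the wavelength and reflectance arrays below are placeholders):

```python
import numpy as np

def reflectivity_gradient(wav_um, refl):
    """Normalized reflectivity gradient S' = (dS/dlambda)/<S>,
    expressed in %/1000 A, from a least-squares linear fit."""
    wav_A = np.asarray(wav_um) * 1.0e4              # micron -> Angstrom
    slope, _ = np.polyfit(wav_A, refl, 1)           # dS/dlambda per Angstrom
    return slope / np.mean(refl) * 1000.0 * 100.0   # -> %/1000 A

# Placeholder spectrum: reflectance rising gently across the JHK range
wav = np.linspace(1.0, 2.5, 50)                     # microns
refl = 1.0 + 0.05 * (wav - 1.75)
print(f"S' = {reflectivity_gradient(wav, refl):+.2f} %/1000 A")
```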
The 2 $`\mathrm{\mu m}`$ and 1.5 $`\mathrm{\mu m}`$ features are clear signatures of water ice, and the shallowness of the features (compared to that of pure water ice) indicates that this ice is mixed with dark impurities (see Clark and Lucey 1984). We note that the Chiron spectra are very similar to spectra of minerals and water ice (compare the spectra in Fig. 2 with Fig. 14 and 15 of Clark 1981, respectively). In Fig. 3 we show that the Keck Chiron spectrum is well fitted by a model consisting of a linear superposition of a water ice spectrum and an olivine spectrum. The olivine spectrum is needed to provide the required continuum slope, and at the very small grain size used, the spectrum of olivine is essentially featureless. The water ice spectrum was calculated based on the Hapke theory for diffuse reflectance (Hapke 1993) and used a grain diameter of 1 $`\mathrm{\mu m}`$. A description can be found in Roush (1994). However, we caution that the model is non-unique, and due to the many free parameters in the model (e.g., grain albedo, porosity, roughness), the $`1\mathrm{\mu m}`$ grain size should not be taken literally. Similarly, olivine could also be replaced in the fit by other moderately red, featureless absorbers. The 2 $`\mathrm{\mu m}`$ feature in Chiron is clearly time-variable: it was not apparent in 1993 but changed to a depth of 8 - 10% in 1996 and 1999. Chiron’s lightcurve variations can be explained by the dilution of the lightcurve by an optically thin coma (Luu and Jewitt 1990). Based on this model, we estimate that the 1993 coma cross-section was ∼1.5 times larger than in 1996 or now. The likeliest explanation for the time-variability of the $`2\mathrm{\mu m}`$ feature is the degree of cometary activity in Chiron. In 1993, Chiron’s activity level was high (Luu and Jewitt 1993, Lazzaro et al. 1997), resulting in a featureless spectrum dominated by scattering from the coma. By 1996, when the UKIRT observations were made, Chiron’s total brightness had dropped by ∼1 mag to a minimum level (comparable to that of 1983–1985, Lazzaro et al. 1997), leaving spectral contamination by dust at a minimum. ### 3.2 Implications of water ice on Chiron #### 3.2.1 Cometary activity in Centaurs Considering (1) Chiron’s time-variable spectrum, (2) the presence of surface water ice, and (3) Chiron’s persistent cometary activity, we conclude that Chiron’s surface coverage is not dominated by an irradiated mantle but more probably by a layer of cometary debris. Occultation observations suggest that sublimation by supervolatiles (e.g., CO, N<sub>2</sub>) on Chiron occurs in a few localized icy areas (Bus et al. 1996). Dust grains ejected at speeds $`<100`$ m s<sup>-1</sup> (the escape velocity) will reimpact the surface, building a refractory layer which tends to quench sublimation. Nevertheless, the outgassing is still sustained by the sporadic exposure of fresh ice on the surface. Sublimation experiments with cometary analogs illustrate this phenomenon: outgassing produces a dust layer, but fresh icy material can still be periodically exposed by avalanches and new vents created by impacts from large dust particles (Grün et al. 1993). If, in keeping with a Kuiper Belt origin, Chiron once possessed an irradiation mantle, we suspect that it has been either blown off by sublimation or buried under a dust layer thick enough to mask its features. If so, the present low albedo of Chiron would be due to cometary dust particles rather than irradiated material.
In contrast, the lack of cometary activity in the Centaur Pholus is consistent with an encompassing surface coverage by the irradiation mantle – witness the extreme red color and absorption features associated with hydrocarbon materials. Although water ice may exist locally on the surface (Cruikshank et al. 1998), Pholus’s spectral properties are still dominated by the organic irradiated crust. If this hypothesis is correct, cometary activity should be uncommon among Centaurs with very red colors (irradiated material), and more common among those with neutral (ice) colors. As the observational sample of Centaurs grows, this is a simple prediction that can be directly tested. However, it remains unclear why cometary activity was activated on Chiron and not on Pholus, even though the two Centaurs are at similar heliocentric distances. One possibility is that Pholus was recently expelled from the Kuiper Belt and has not yet been heated internally to a degree sufficient to blow off the irradiation mantle, but this hypothesis cannot be easily tested, given the chaotic nature of the orbits of Centaurs. #### 3.2.2 Centaur and Kuiper Belt surfaces We summarize the spectral properties of Centaurs and KBOs in Table 3. Thus far, water ice has been reported in 3 Centaurs (Chiron, Pholus, 1997 CU<sub>26</sub>) and 1 KBO (1996 TO<sub>66</sub>). The existing data are too sparse to establish whether a correlation exists between color and the abundance of water ice among Centaurs and KBOs. However, water ice is present in all three Centaurs for which near-IR spectra are available, and in 1 out of 3 studied KBOs. The preponderance of water ice among Centaurs makes us suspect that the “low” rate of detection of water ice in KBOs has more to do with the faintness of the targets and the resulting low-quality spectra than with the intrinsic water content of KBOs. Considering the existing data and the high cosmochemical abundance of water ice, we predict that water ice is ubiquitous among all objects that originated in the Kuiper Belt, although the amount might vary from one object to another and thus determine the possibility of cometary activity in these bodies. In short, it would be a good idea to re-observe those KBOs which show no apparent water ice feature (1993 SC and 1996 TL<sub>66</sub>) at higher signal-to-noise ratios. Water ice might be present after all. ## 4 Summary 1. The near-infrared reflectance spectrum of Chiron is time-variable: in 1996 and 1999 it shows an absorption feature at 2 $`\mathrm{\mu m}`$ due to water ice. Another absorption feature due to water ice at 1.5 $`\mathrm{\mu m}`$ is also marginally detected in 1999. The features were not present in spectra taken in 1993. 2. Chiron’s time-variable spectrum is consistent with variable dilution by the coma. During periods of low-level outgassing, surface features are revealed. 3. Chiron’s nearly neutral spectrum suggests the surface dominance of a dust layer created from cometary debris, consisting of unirradiated dust particles from the interior. Chiron’s original irradiation mantle has either been blown off or buried under this layer. 4. Chiron is the third Centaur in which water ice has been detected. This trend suggests that water ice is common on the surface of Centaurs. We predict that water ice is ubiquitous in all objects originating in the Kuiper Belt. The surface coverage of this water ice determines its detectability. Note \- As we finished the preparation of this manuscript, we received a preprint by Foster et al.
(1999) in which water ice is independently identified in spectra from 1998. Foster et al. also report an unidentified absorption feature at 2.15 $`\mathrm{\mu m}`$ that is not confirmed in our spectra. Acknowledgements The United Kingdom Infrared Telescope is operated by the Joint Astronomy Centre on behalf of the U.K. Particle Physics and Astronomy Research Council. JXL thanks Ted Roush for his generosity in sharing his software and database, Dale Cruikshank for constructive comments, and Ronnie Hoogerwerf and Jan Kleyna for helpful discussions. This work was partly supported by grants to JXL and DCJ from NASA. FIGURE CAPTIONS Figure 1. Infrared reflectance spectrum of 2060 Chiron, normalized at 2.2 $`\mathrm{\mu m}`$. The date of each spectrum is indicated. The top panel shows the original spectra, while in the bottom panel the 1996 spectra have been smoothed by 3 pixels (the 1999 spectrum remains unsmoothed). There is a clear absorption feature at 2 $`\mathrm{\mu m}`$ in all three spectra, and a very weak absorption feature at 1.5 $`\mathrm{\mu m}`$ in the 1999 spectra. Figure 2. Infrared reflectance spectrum of Chiron from 1993 (from Luu et al. 1994) compared with the 1996 and 1999 spectra. There was no apparent spectral feature in the 1993 spectra. Figure 3. Chiron’s 1996 spectra fitted with a model consisting of a linear superposition of water ice and olivine spectra.
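As an illustration of the linear superposition fit shown in Fig. 3, here is a minimal sketch with synthetic placeholder arrays standing in for the water-ice and olivine component spectra (a real fit would use Hapke-model components and the actual Keck spectrum):

```python
import numpy as np

def two_component_fit(observed, ice, olivine):
    """Least-squares weights (a, b) such that a*ice + b*olivine
    best matches the observed reflectance spectrum."""
    A = np.column_stack([ice, olivine])
    coeffs, *_ = np.linalg.lstsq(A, observed, rcond=None)
    return coeffs, A @ coeffs

# Placeholder component spectra on a common wavelength grid [microns]
wav = np.linspace(1.4, 2.5, 100)
ice = 1.0 - 0.4 * np.exp(-0.5 * ((wav - 2.0) / 0.15) ** 2)   # 2-um ice band
olivine = 0.8 + 0.1 * (wav - 1.4)                            # red, featureless
observed = 0.5 * ice + 0.5 * olivine                         # synthetic "data"

(a, b), model = two_component_fit(observed, ice, olivine)
print(f"ice weight = {a:.2f}, olivine weight = {b:.2f}")
```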
# Constraints to the SSC model for Mkn 501 ## 1. Introduction The study of Blazars has been recently enriched by the detection of very high energy gamma rays (energy in the TeV range) from a handful of nearby sources. This discovery opens the possibility of effectively testing and constraining the radiative mechanisms invoked to explain the emission from Blazars and of investigating the physical conditions in relativistic jets. As discussed in Tavecchio et al. 1998 (hereafter T98), the knowledge of the simultaneous X-ray and TeV spectra of Blazars allows one to uniquely determine the set of physical parameters needed within a homogeneous Synchrotron–Self Compton model with an electron distribution described by a broken power-law. Here we apply the SSC model to recent high quality, simultaneous X-ray and TeV data of the well studied source Mkn 501 and discuss the results. ## 2 The data TeV spectra measured by the CAT team during 1997 were recently reported in Djannati-Atai et al. (1999). In particular the TeV spectra associated with $`Beppo`$SAX observations of April 16 and 7 (reported in Pian et al. 1998) are also discussed. For the large flare of April 16, given the high flux level, it has been possible to obtain a good quality TeV spectrum from a single observation which partially overlaps with the $`Beppo`$SAX observation. On Apr 7, Mkn 501 was less bright and the TeV spectrum is obtained using different observations with similar TeV fluxes and hardness ratios (for more details see Djannati-Atai et al. 1999). We combine these data with the X-ray spectra reported in Pian et al. (1998) for the same days, constructing quasi-simultaneous SEDs at two epochs. The TeV spectra of Mkn 501 are clearly curved. It was suggested that this curvature could be the “footprint” of the absorption of TeV photons by the Infrared Background (IRB, e.g., Konopelko et al. 1999, but see the detailed discussion in Vassiliev 1999). ## 3 Spectral Fits with the SSC model T98 obtained analytical relations for the IC peak frequency in the Klein-Nishina (KN) regime, adopting the step function approximation for the KN cross-section. A comparison between these approximate estimates and the detailed numerical calculations with the full KN cross section shows that the analytical formulae for the IC peak frequency given in T98 can overestimate its value by a factor of 3–10, depending on $`n_2`$, the index of the steeper part of the electron distribution. Therefore, although the analytical estimates are useful as a guideline, it is very important for the study of the TeV spectrum of Blazars to use precise numerical calculations, like those discussed in the following. We reproduced the SED of Mkn 501 (reported in Fig. 1) using the SSC model described in T98 and Chiappetti et al. 1999. In a spherical region with radius R, magnetic field B and Doppler factor $`\delta `$, a population of electrons with energy distribution of the form $`N(\gamma )=K\gamma ^{-n_1}(1+\gamma /\gamma _{break})^{n_1-n_2}`$ emits synchrotron and IC radiation. The IC spectrum is calculated with the full Klein-Nishina cross-section, using the formula derived by Jones (1968). The seed photons for the IC scattering are those produced by the same electrons through the synchrotron mechanism. For the case of the high state of Mkn 501 we considered possible intervening absorption by the IRB, adopting the prescriptions for the low model discussed in Stecker & De Jager (1998).
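A minimal sketch of this smoothly broken power-law electron distribution (the parameter values below are placeholders, not the fit results of Table 1):

```python
import numpy as np

def electron_distribution(gamma, K, n1, n2, gamma_break):
    """Smoothly broken power law N(gamma) = K gamma^-n1 (1 + gamma/gamma_break)^(n1-n2):
    slope -n1 well below the break, steepening to -n2 above it."""
    g = np.asarray(gamma, dtype=float)
    return K * g ** (-n1) * (1.0 + g / gamma_break) ** (n1 - n2)

# Placeholder parameters: n1 = 2, n2 = 4, break at gamma = 1e6
gam = np.logspace(3, 8, 6)
print(electron_distribution(gam, K=1.0, n1=2.0, n2=4.0, gamma_break=1.0e6))
```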
In the calculation reported here we assume a typical radius of the emitting region of $`R\sim 10^{16}`$ cm and a Doppler factor $`\delta =10`$. With this choice the minimum variability timescale is $`\sim 10`$ h (see Tab. 1 for the parameters used). It is useful to calculate the IC emission from electrons in different energy ranges. These “slices” are reported in Fig. 2 (see figure caption for the detailed energy ranges). It is interesting to note that, because of the KN suppression of the high energy emission, the peak of the IC component is produced by electrons with Lorentz factor below $`\gamma _{break}`$, while the peak of the Synchrotron component is produced by electrons at $`\gamma _{break}`$. This effect has important consequences for the study of the X-ray/TeV correlated variability (see also Maraschi et al. 1999). ## 4 Discussion and Conclusions The parameters adopted for the model are reported in Table 1. Our model indicates a rather low value for the magnetic field and a high $`\gamma _{break}`$ (see Tab. 1). The transition from the low state to the high state in Mkn 501 is consistent with an increase of a factor of 2 in both $`\gamma _{break}`$ and B and with almost constant values of $`\delta `$ and $`R`$. These results are in agreement with a similar analysis of the April 16 flare by Bednarek & Protheroe (1999): for a minimum timescale of 2.5 h they found that the TeV spectrum can be well reproduced by $`B\sim 0.03`$ G and $`\delta \sim 15`$. Although some authors (see Konopelko et al. 1999) have recently proposed that the curvature of the TeV spectrum of Mkn 501 provides evidence for absorption by the IRB, the curvature could be intrinsic and related to the curved electron distribution necessary to fit the X-ray data. Our models show that a curved electron distribution is a plausible explanation for the curvature. The introduction of the IRB does not dramatically change the inferred physical parameters. As shown in Fig. 2, the TeV spectra with and without IRB absorption are very similar up to 15 TeV, while at higher energies the power-law spectrum of the unabsorbed case is replaced by an exponential cut-off. Therefore spectral data above 10 TeV are needed in order to understand whether IRB absorption affects Blazar spectra. Finally we note that in the theory of particle acceleration by shocks (for a recent critical discussion see Henri et al. 1999) the maximum Lorentz factor of accelerated electrons, obtained by equating acceleration time and cooling time, is given by $`\gamma _{max}\sim 10^7(B/0.02\mathrm{G})^{-1/2}`$, where we used the value of B we found in the high state of Mkn 501. Therefore our fit suggests that during the high activity states of Mkn 501 the electrons can reach the maximum energy fixed by the balance between cooling and acceleration processes. In states of lower activity the acceleration time could be longer and the condition $`t_{esc}<t_{acc}`$ (where $`t_{esc}`$ and $`t_{acc}`$ are the escape time and the acceleration time respectively) could prevent the electrons from reaching the maximum energy. ## ACKNOWLEDGEMENTS We thank A. Djannati-Atai for sending us the TeV data of Mkn 501. ## REFERENCES Bednarek, W., & Protheroe, R.J. 1999, MNRAS, in press (astro-ph/9902050); Catanese, M., et al. 1997, ApJ, 487, L143; Chiappetti, L., et al. 1999, ApJ, 521, 552; Djannati-Atai, A., et al. 1999, submitted to A&A; Henri, G., et al. 1999, Astropart. Phys., in press (astro-ph/9901051); Jones, F. 1968, Phys. Rev., 167, 1159; Konopelko, A.K., et al. 1999, ApJ, 518, L13; Maraschi, L., et al. 1999b, ApJ Letters, in press; Pian, E., et al. 1998, ApJ, 491, L17; Stecker, F.W., & de Jager, O.C. 1998, A&A, 334, L85; Tavecchio, F., Maraschi, L., & Ghisellini, G. 1998, ApJ, 509, 608; Vassiliev, V.V. 1999, Astropart. Phys., in press; Weekes, T.C., et al. 1996, A&AS, 120, 603.
# 1 Model with 𝒰(1) flavor symmetry ## 1 Model with $`𝒰(1)`$ flavor symmetry We introduce an anomalous $`𝒰(1)`$ flavor symmetry which may arise in effective field theories from strings. The cancellation of its anomalies occurs through the Green-Schwarz mechanism . Due to the anomaly, the Fayet-Iliopoulos term $$\xi \int d^4\theta V_A$$ (13) is always generated , where, in string theory, $`\xi `$ is given by $$\xi =\frac{g_A^2M_P^2}{192\pi ^2}\mathrm{Tr}Q.$$ (14) The $`D_A`$-term will have the form $$\frac{g_A^2}{8}D_A^2=\frac{g_A^2}{8}\left(\mathrm{\Sigma }Q_a|\phi _a|^2+\xi \right)^2,$$ (15) where $`Q_a`$ is the ‘anomalous’ charge of the $`\phi _a`$ superfield. In , the anomalous $`𝒰(1)`$ symmetry was considered as a mediator of SUSY breaking. In , the anomalous Abelian symmetries were exploited as flavor symmetries for a natural understanding of the hierarchies of fermion masses and mixings, while in , various neutrino oscillation scenarios with $`𝒰(1)`$ symmetry were studied. Assuming $`\mathrm{Tr}Q>0`$ ($`\xi >0`$) and introducing a superfield $`X`$ with $`Q_X=-1`$, we can ensure that the cancellation of (15) fixes the VEV of the scalar component of $`X`$ to be $$\langle X\rangle =\sqrt{\xi }.$$ (16) Further, we will assume that $$\frac{\langle X\rangle }{M_P}\equiv ϵ\simeq 0.2.$$ (17) The parameter $`ϵ`$ is an important expansion parameter for understanding the magnitudes of fermion masses and mixings. Starting our considerations with the neutrino sector within the framework of the MSSM (which is more general than some specific GUT model), let us consider the following prescription for the $`𝒰(1)`$ charges $$Q_X=-1,Q_{l_2}=Q_{l_3}=k,Q_{l_1}=k+n,Q_{h_u}=Q_{h_d}=0.$$ (18) In order to obtain non-zero neutrino masses, we introduce two right-handed neutrino superfields $`𝒩_1,𝒩_2`$, which, by a judicious choice of $`𝒰(1)`$ charges (for models in which $`𝒰(1)`$ flavor symmetry plays a crucial role in achieving maximal/large mixings, see refs. ), $$Q_{𝒩_1}=Q_{𝒩_2}=k,$$ (19) will provide a texture similar to (1). From (18), (19), the relevant couplings will be: $$\begin{array}{cc}& \begin{array}{cc}𝒩_1& 𝒩_2\end{array}\\ \begin{array}{c}l_1\\ l_2\\ l_3\end{array}& \left(\begin{array}{cc}ϵ^{2k+n}& ϵ^n\\ ϵ^{2k}& 0\\ ϵ^{2k}& 0\end{array}\right)h_u,\end{array}\begin{array}{cc}& \begin{array}{cc}𝒩_1& 𝒩_2\end{array}\\ \begin{array}{c}𝒩_1\\ 𝒩_2\end{array}& \left(\begin{array}{cc}ϵ^{2k}& 1\\ 1& 0\end{array}\right)M,\end{array}$$ (20) where $`M`$ is some mass scale. Integration of the $`𝒩_{1,2}`$ states yields the neutrino mass matrix $$\widehat{M}_\nu \sim \left(\begin{array}{ccc}ϵ^n& 1& 1\\ 1& 0& 0\\ 1& 0& 0\end{array}\right)m,m=\frac{ϵ^{2k+n}h_u^2}{M},$$ (21) which resembles the texture (1), but differs from it by a nonzero (1,1) entry, and provides the $`\mathrm{\Delta }m_{21}^2`$ splitting.
From (21) and (10) we have for the atmospheric and solar neutrino oscillation parameters (respectively) $$\mathrm{\Delta }m_{32}^2\simeq m_{\mathrm{atm}}^2=m^2\sim 10^{-3}\mathrm{eV}^2,$$ $$𝒜(\nu _\mu \to \nu _\tau )\simeq 1,$$ (22) $$\mathrm{\Delta }m_{21}^2\simeq 2m_{\mathrm{atm}}^2ϵ^n,$$ $$𝒜(\nu _e\to \nu _{\mu ,\tau })=1-𝒪(ϵ^{2n}).$$ (23) We observe that the $`\mathrm{mass}^2`$ splitting for solar neutrinos is expressed in terms of the atmospheric scale $`m_{\mathrm{atm}}`$ and the $`n`$-th power of $`ϵ`$. From (23) we have $$\mathrm{\Delta }m_{21}^2\sim \{\begin{array}{cc}10^{-10}\mathrm{eV}^2\hfill & \text{for }n=10\hfill \\ 10^{-5}\mathrm{eV}^2\hfill & \text{for }n=3\hfill \end{array}$$ (24) Therefore, $`n=10`$ corresponds to vacuum oscillations of solar neutrinos, while $`n=3`$ gives the large angle MSW solution. The MSSM does not constrain $`n`$ to be either $`10`$ or $`3`$, and so both scenarios are possible. To see this, let us consider the charged fermion sector. With the prescription $$Q_{e_3^c}=p,Q_{e_2^c}=p+2,Q_{e_1^c}=p+5-n,$$ (25) the Yukawa couplings for charged leptons have the form $$\begin{array}{cc}& \begin{array}{ccc}e_1^c& e_2^c& e_3^c\end{array}\\ \begin{array}{c}l_1\\ l_2\\ l_3\end{array}& \left(\begin{array}{ccc}ϵ^5& ϵ^{n+2}& ϵ^n\\ ϵ^{5-n}& ϵ^2& 1\\ ϵ^{5-n}& ϵ^2& 1\end{array}\right)ϵ^{p+k}h_d,\end{array}$$ (26) providing the desirable hierarchies $$\lambda _\tau \sim ϵ^{p+k},\lambda _e:\lambda _\mu :\lambda _\tau \sim ϵ^5:ϵ^2:1,$$ (27) with $$\mathrm{tan}\beta \sim ϵ^{p+k}\frac{m_t}{m_b}.$$ (28) As far as the quark sector is concerned, with $$Q_{q_3}=0,Q_{q_2}=2,Q_{q_1}=3,Q_{d_3^c}=Q_{d_2^c}=p+k,$$ $$Q_{d_1^c}=p+k+2,Q_{u_3^c}=0,Q_{u_2^c}=1,Q_{u_1^c}=3,$$ (29) the appropriate Yukawa couplings are $$\begin{array}{cc}& \begin{array}{ccc}d_1^c& d_2^c& d_3^c\end{array}\\ \begin{array}{c}q_1\\ q_2\\ q_3\end{array}& \left(\begin{array}{ccc}ϵ^5& ϵ^3& ϵ^3\\ ϵ^4& ϵ^2& ϵ^2\\ ϵ^2& 1& 1\end{array}\right)ϵ^{p+k}h_d,\end{array}$$ (30) $$\begin{array}{cc}& \begin{array}{ccc}u_1^c& u_2^c& u_3^c\end{array}\\ \begin{array}{c}q_1\\ q_2\\ q_3\end{array}& \left(\begin{array}{ccc}ϵ^6& ϵ^4& ϵ^3\\ ϵ^5& ϵ^3& ϵ^2\\ ϵ^3& ϵ& 1\end{array}\right)h_u,\end{array}$$ (31) yielding $$\lambda _b\sim ϵ^{p+k},\lambda _d:\lambda _s:\lambda _b\sim ϵ^5:ϵ^2:1,$$ (32) $$\lambda _t\sim 1,\lambda _u:\lambda _c:\lambda _t\sim ϵ^6:ϵ^3:1.$$ (33) From (30), (31), for the CKM matrix elements we find $$V_{us}\sim ϵ,V_{cb}\sim ϵ^2,V_{ub}\sim ϵ^3.$$ (34) We see that the MSSM does not fix the values of $`n,p,k`$. However, specific GUTs can be more restrictive. To demonstrate this, we consider the simplest version of $`SU(5)`$ GUT, with three families of $`(10+\overline{5})`$-plets. Due to these unified multiplets: $$Q_q=Q_{e^c}=Q_{u^c}=Q_{10},Q_l=Q_{d^c}=Q_{\overline{5}}.$$ (35) The known hierarchies (34) of the CKM matrix elements now fix the relative charges of the $`10`$-plets, $$Q_{10_3}=0,Q_{10_2}=2,Q_{10_1}=3,$$ (36) while (27), (32) dictate $$Q_{\overline{5}_3}=Q_{\overline{5}_2}=k,Q_{\overline{5}_1}=k+2.$$ (37) Comparing (35)–(37) with (18), (25), (29) we see that the minimal $`SU(5)`$ GUT fixes $`n`$ and $`p`$ to be $$n=2,p=0,$$ (38) From (23) (which now turns out to be predictive since $`n`$ is fixed) we get $$\mathrm{\Delta }m_{21}^2\sim 10^{-4}\mathrm{eV}^2.$$ (39) This value is close to the scale corresponding to the large angle MSW oscillations of the solar neutrinos.
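The scalings in (23), (24), and (39) are easy to check numerically; a short sketch, taking $`m_{\mathrm{atm}}^2\sim 10^{-3}`$ eV² and $`ϵ\simeq 0.2`$ as quoted above:

```python
EPS = 0.2            # U(1) expansion parameter <X>/M_P
M_ATM2 = 1.0e-3      # atmospheric mass-squared scale [eV^2]

def dm21_squared(n):
    """Solar splitting Delta m^2_21 ~ 2 m_atm^2 eps^n, from eq. (23)."""
    return 2.0 * M_ATM2 * EPS ** n

for n in (2, 3, 10):
    print(f"n = {n:2d}:  Dm^2_21 ~ {dm21_squared(n):.1e} eV^2")
# n = 2  gives ~1e-4  eV^2 (large angle MSW, the SU(5) case of eq. 39),
# n = 3  gives ~1e-5  eV^2 (large angle MSW),
# n = 10 gives ~2e-10 eV^2 (vacuum oscillations).
```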
We see that our mechanism within $`SU(5)`$ GUT strongly suggests large angle MSW oscillations for solar neutrinos. The same conclusion can be reached with $`SO(10)`$ GUT, since also in this case the prescription of the $`𝒰(1)`$ charges would exclude the vacuum solution of the solar neutrino problem. Within this framework the large $`\nu _\mu \to \nu _\tau `$ mixing remains unchanged. Let us note that the conclusions presented above are valid if the anomalous $`𝒰(1)`$ only acts as a flavor symmetry and is not tied to SUSY breaking. In several models an anomalous $`𝒰(1)`$ symmetry also acts as a mediator of SUSY breaking . This can be very useful for adequate suppression of FCNC and dimension five nucleon decay . In this type of scenario the soft $`\mathrm{mass}^2`$ for sparticles which have non-zero $`𝒰(1)`$ charges emerges from the non-zero $`D_A`$-term and equals $$m_{\stackrel{~}{\varphi }_i}^2=m_S^2Q_{\varphi _i},$$ (40) where $`m_S`$ is taken to be $`𝒪(10\mathrm{TeV})`$. Therefore, the $`𝒰(1)`$ charges of matter superfields must be positive in order to avoid $`SU(3)_c\times U(1)_{\mathrm{em}}`$ breaking. On the other hand, from (25) we see that the choice $`n=10`$ is excluded (!) \[since $`0\lesssim p+k\lesssim 3`$ ($`1\gtrsim \lambda _{\tau ,b}\gtrsim 10^{-2}`$)\], which means that in this case the vacuum oscillation solution for solar neutrinos is not realized even within the MSSM framework. The cases $`n=2,3`$, which correspond to the large angle MSW solution, are still allowed. The interesting point is that the neutrino oscillation scenario is linked with the SUSY breaking mechanism. In conclusion, we have suggested a scenario for obtaining bi-maximal neutrino mixing. A crucial role is played by an anomalous $`𝒰(1)`$ flavor symmetry for obtaining the simple neutrino mass matrix texture in (1). The atmospheric neutrino puzzle is resolved by large/maximal $`\nu _\mu \to \nu _\tau `$ mixing, while the scenario for large angle $`\nu _e\to \nu _{\mu ,\tau }`$ oscillations is model dependent: within the MSSM, both the large angle vacuum and the large angle MSW oscillations are possible, while the $`SU(5)`$ GUT (and also $`SO(10)`$) predicts the large angle MSW solution. If the anomalous $`𝒰(1)`$ flavor symmetry is also a mediator of SUSY breaking, then the solar neutrino vacuum oscillations are excluded and the large angle MSW solution is responsible for the solar neutrino deficit. Finally, the $`𝒰(1)`$ flavor symmetry also nicely explains the hierarchies between the charged fermion masses and the magnitudes of the CKM matrix elements. This work was supported in part by the DOE under Grant No. DE-FG02-91ER40626.
# Jupiter’s hydrocarbons observed with ISO-SWS: vertical profiles of C₂H₆ and C₂H₂, detection of CH₃C₂H ## 1 Introduction Hydrocarbons in Jupiter are produced in a series of chemical pathways initiated by the photolysis of methane in the upper stratosphere. Vertical transport, mainly turbulent diffusion, redistributes the molecules throughout the stratosphere and down to the troposphere, where they are eventually destroyed. Hydrocarbons, and particularly the most stable of them, are therefore good tracers of the upper atmospheric dynamics. In addition, as they strongly contribute to the atmospheric opacity in the UV and IR, hydrocarbons act as major sources and sinks of heat, thereby participating in the stratospheric dynamics. All these reasons have strongly motivated theoretical studies of the jovian stratospheric photochemistry (Strobel 1969; Yung & Strobel 1980; and most recently Gladstone et al. 1996). Although nowadays very detailed, these models still need to be constrained by observations of minor species. However, prior to the ISO mission, only two molecules, acetylene ($`\mathrm{C}_2\mathrm{H}_2`$) and ethane ($`\mathrm{C}_2\mathrm{H}_6`$), had been detected, except in the auroral zones where several other minor species ($`\mathrm{C}_2\mathrm{H}_4`$, $`\mathrm{C}_3\mathrm{H}_4`$ and $`\mathrm{C}_6\mathrm{H}_6`$) have been observed (Kim et al. 1985). Mean stratospheric mole fractions have been inferred for $`\mathrm{C}_2\mathrm{H}_2`$ and $`\mathrm{C}_2\mathrm{H}_6`$ by various authors, but no precise information on their vertical variations was available. In this paper, we analyse the ISO-SWS spectrum of Jupiter between 7 and 17 $`\mu \mathrm{m}`$, in order to determine the vertical distributions of $`\mathrm{C}_2\mathrm{H}_2`$ and $`\mathrm{C}_2\mathrm{H}_6`$, and to search for more complex (C<sub>3</sub> and C<sub>4</sub>) molecules. Sect. 2 presents the observations. Our analysis of the spectrum is presented in Sect. 3. The results are compared with previous observations and theoretical predictions in Sect. 4. ## 2 Observations Descriptions of the Infrared Space Observatory (ISO) and of the Short Wavelength Spectrometer (SWS) can be found respectively in Kessler et al. (1996) and de Graauw et al. (1996). A preliminary analysis of the Jupiter SWS spectrum can be found in Encrenaz et al. (1996). New ISO-SWS grating observations of Jupiter were obtained on May 23, 1997 UT using the AOT 01 observing mode. These observations have an average spectral resolution of 1500, and range from 2.4 to 45 $`\mu \mathrm{m}`$. However, the useful range is limited to 2.4–17 $`\mu \mathrm{m}`$, due to partial saturation at longer wavelengths. The instrument aperture, $`14\times 20`$ arcsec<sup>2</sup> at $`\lambda <12.5`$ $`\mu \mathrm{m}`$ and $`14\times 27`$ arcsec<sup>2</sup> at $`\lambda >12.5`$ $`\mu \mathrm{m}`$, was centered on the planet with the long axis aligned perpendicular to the ecliptic, thus roughly parallel to the N-S polar axis. It covered latitudes between 30°S and 30°N, and a ±20° longitude range from the central meridian. The absolute flux accuracy is ∼20%. Instrumental fringing generates a spurious signal between 12.5 and 17 $`\mu \mathrm{m}`$, which amounts to ∼10% of the continuum level. This parasitic signal was for the most part removed by fitting the detector relative response function to the observed spectrum and then dividing it out.
Residual fringes were further removed by selective frequency filtering. ## 3 Analysis We analysed the ISO-SWS spectrum using a standard line-by-line radiative transfer code adapted to Jupiter’s conditions. We included the molecular absorptions by $`\mathrm{NH}_3`$, $`\mathrm{CH}_4`$, $`\mathrm{C}_2\mathrm{H}_2`$, $`\mathrm{C}_2\mathrm{H}_6`$, and $`\mathrm{CH}_3\mathrm{C}_2\mathrm{H}`$ using the spectroscopic parameters of the GEISA97 databank (Jacquinet-Husson et al. 1999). We also considered $`\mathrm{C}_4\mathrm{H}_2`$ absorption, using a linelist provided by E. Arié (private communication) and band intensities from Koops et al. (1984). Spectroscopic parameters for the $`\mathrm{H}_2`$ S(1) line were calculated using molecular constants from Jennings et al. (1987) and Reuter & Sirota (1994). The H<sub>2</sub>-He collision-induced continuum was calculated following the work of Borysow et al. (1985, 1988). The $`\mathrm{NH}_3`$ vertical distribution was taken from Fouchet et al. (2000). ### 3.1 Temperature profile We first calculated synthetic spectra in the region of the $`\mathrm{CH}_4`$ $`\nu _4`$ band. For $`\mathrm{CH}_4`$, we used a deep mixing ratio of $`2.1\times 10^{-3}`$ (Niemann et al. 1998) and the vertical profile derived by Drossart et al. (1999) from the $`\mathrm{CH}_4`$ fluorescence emission at 3.3 $`\mu \mathrm{m}`$. The $`\nu _4`$ band allows one to retrieve 4 independent points on the temperature profile between 35 and 1 mbar. We also generated synthetic spectra of the $`\mathrm{H}_2`$ S(1) rotational line at 17 $`\mu \mathrm{m}`$. This line probes a broad atmospheric layer at 3–30 mbar. The ortho-to-para ratio of $`\mathrm{H}_2`$ was assumed to follow local thermodynamical equilibrium. The stratospheric temperature profile was adjusted in order to best match the absolute emission in the $`\mathrm{CH}_4`$ band, and the line-to-continuum ratio of the $`\mathrm{H}_2`$ S(1) line. In practice, starting with the temperature profile measured in-situ by the Galileo Probe (Seiff et al. 1998), it was necessary to cool it by 2 K between 30 and 5 mbar. At pressures lower than 5 mbar, the initial profile was warmed by 4 K up to ∼165 K. The $`\nu _4`$ band is also somewhat sensitive to the temperature around the 10-$`\mu `$bar pressure level. We found that the temperature remains constant within a few degrees between 1 mbar and 1 $`\mu `$bar, as already observed by Seiff et al. (1998). At pressures lower than 1 $`\mu `$bar, we adopted the measurements of Seiff et al. (1998), vertically smoothed in order to remove oscillations due to gravity waves, noting that our measurements are essentially insensitive to this pressure range. The fit to the $`\mathrm{CH}_4`$ and $`\mathrm{H}_2`$ emissions is presented in Fig. 1. The 20% uncertainty on the absolute flux calibration directly results in an uncertainty of $`\pm 2`$ K on the temperature profile inferred from the $`\mathrm{CH}_4`$ emission. This uncertainty partly explains the minor disagreement between the modelled and observed $`\mathrm{H}_2`$ line. Indeed, our model, while giving an optimum fit to the $`\mathrm{CH}_4`$ emission, slightly (5–10%) overpredicts the observed line-to-continuum ratio of the S(1) line. We also note that the $`\mathrm{H}_2`$ ortho-to-para ratio could differ from the thermal equilibrium value, as observed in the troposphere by Conrath et al. (1998).
For example, a synthetic spectrum calculated with a constant para fraction of 0.34, corresponding to the thermal equilibrium value at 115 K, would fully reconcile the $`\mathrm{CH}_4`$ and $`\mathrm{H}_2`$ measurements. ### 3.2 Acetylene The ISO-SWS observations in the vicinity of the $`\mathrm{C}_2\mathrm{H}_2`$ emission at 13.7 $`\mu \mathrm{m}`$ were compared with synthetic spectra obtained with three distinct $`\mathrm{C}_2\mathrm{H}_2`$ vertical distributions (Fig. 2). All three profiles reproduce the emissions due to the P-, Q-, and R-branches of the main $`\nu _5`$ $`\mathrm{C}_2\mathrm{H}_2`$ band (Fig. 3). However, only one profile (Fig. 2, solid line) fits the observed spectrum in the Q-branches of the $`\nu _4+\nu _5-\nu _4`$ band at 13.68 and 13.96 $`\mu \mathrm{m}`$. Indeed, while the $`\nu _5`$ band probes atmospheric levels between 2 and 5 mbar, the hot band sounds warmer, higher levels around 0.3 mbar, allowing us to determine the slope of the $`\mathrm{C}_2\mathrm{H}_2`$ profile between these two regions. Error bars on the $`\mathrm{C}_2\mathrm{H}_2`$ mixing ratio were estimated by taking into account instrumental noise and the uncertainty in the relative strengths of the $`\mathrm{C}_2\mathrm{H}_2`$ lines. The resulting mixing ratios are $`q=(8.9_{-0.6}^{+1.1})\times 10^{-7}`$ at 0.3 mbar and $`q=(1.1_{-0.1}^{+0.2})\times 10^{-7}`$ at 4 mbar, giving a slope $`d\mathrm{ln}q/d\mathrm{ln}P=-0.8\pm 0.1`$. The error on the temperature profile ($`\pm 2`$ K) introduces an additional uncertainty on the $`\mathrm{C}_2\mathrm{H}_2`$ mixing ratios of about 20%. This error, however, applies essentially equally to all pressure levels, and thus leaves the retrieved $`\mathrm{C}_2\mathrm{H}_2`$ profile slope mostly unaffected. ### 3.3 Ethane Similarly to $`\mathrm{C}_2\mathrm{H}_2`$, we compare in Fig. 4 the ISO-SWS spectrum in the $`\mathrm{C}_2\mathrm{H}_6`$ $`\nu _9`$ band with synthetic spectra calculated with three distinct $`\mathrm{C}_2\mathrm{H}_6`$ vertical profiles (Fig. 2). Each of the profiles was designed to reproduce the observed emission in the $`{}^{\mathrm{R}}\mathrm{Q}_0`$ multiplet at 12.16 $`\mu \mathrm{m}`$, which probes the 1-mbar pressure level. The rest of the $`\nu _9`$ band consists of weaker multiplets, which sound deeper levels extending from 1 to 10 mbar. This combination of strong and weak multiplets makes this band sensitive to the $`\mathrm{C}_2\mathrm{H}_6`$ vertical distribution. In addition, the pseudo-continuum level in between the $`\mathrm{C}_2\mathrm{H}_6`$ emissions is also sensitive to the $`\mathrm{C}_2\mathrm{H}_6`$ abundance in the lower stratosphere. Our best-fit model (Fig. 2, solid line) has a slope $`d\mathrm{ln}q/d\mathrm{ln}P=-0.6`$, but the steep-slope model (Fig. 2, dotted line) is also marginally compatible with the observations. This results in a relatively large uncertainty on the slope determination: $`q=(1.0\pm 0.2)\times 10^{-5}`$ at 1 mbar, and $`q=(2.6_{-0.6}^{+0.5})\times 10^{-6}`$ at 10 mbar, giving $`d\mathrm{ln}q/d\mathrm{ln}P=-0.6\pm 0.2`$. An additional error of 25% on $`q`$ comes from the uncertainty on the temperature. Again, it should not affect the retrieved slope. ### 3.4 Methylacetylene and diacetylene The ISO-SWS spectrum exhibits a broad emission near 15.8 $`\mu \mathrm{m}`$, which coincides with the $`\nu _9`$ band of $`\mathrm{CH}_3\mathrm{C}_2\mathrm{H}`$ (Fig. 5). Since the $`\mathrm{CH}_3\mathrm{C}_2\mathrm{H}`$ lines are optically thin, no information on the vertical profile can be derived.
Using a vertical profile similar to that calculated by Gladstone et al. (1996), we found a column density of $`(1.5\pm 0.4)\times 10^{15}`$ molecule cm<sup>-2</sup>. The synthetic spectrum exhibits small-scale structures which are not seen in the observations. This mismatch is probably due to imperfect data reduction. Indeed, the respective frequencies of the fringes and of the $`\mathrm{CH}_3\mathrm{C}_2\mathrm{H}`$ features lie close to each other. It is therefore very difficult to fully remove the former without altering the latter. On the contrary, the broad emission is a low frequency signal and is therefore left unaffected by the frequency filtering. This explanation is admittedly not entirely satisfactory, but given the wavelength match of the emission with the $`\nu _9`$ mode of $`\mathrm{CH}_3\mathrm{C}_2\mathrm{H}`$, and in the absence of any other plausible candidates, we regard the detection of methylacetylene as unambiguous. The $`\nu _8`$ band of $`\mathrm{C}_4\mathrm{H}_2`$ is not detected at 15.92 $`\mu \mathrm{m}`$. We inferred an upper limit of the $`\mathrm{C}_4\mathrm{H}_2`$ column density of $`7\times 10^{13}`$ molecule cm<sup>-2</sup>, using the Gladstone et al. (1996) profile. ## 4 Discussion Bézard et al. (1995) first showed from 13.4-$`\mu \mathrm{m}`$ high-resolution spectroscopy that the $`\mathrm{C}_2\mathrm{H}_2`$ mixing ratio increases with height in the stratosphere, and that most of the acetylene is concentrated above the ∼0.5-mbar level. Their distribution has a mixing ratio of about $`7\times 10^{-7}`$ at 0.3 mbar, in reasonable agreement with our results. More recently, Bétremieux & Yelle (1999), using UV observations, found an average $`\mathrm{C}_2\mathrm{H}_2`$ mixing ratio in the 20–60 mbar range of $`1.5\times 10^{-8}`$, consistent with our value of $`1.9\times 10^{-8}`$, extrapolated to this pressure range. Using height-dependent mixing ratio profiles to analyse high-resolution infrared observations, Sada et al. (1998) derived mixing ratios of $`3.9_{-1.3}^{+1.9}\times 10^{-6}`$ for $`\mathrm{C}_2\mathrm{H}_6`$ at 5 mbar and $`(2.3\pm 0.5)\times 10^{-8}`$ for $`\mathrm{C}_2\mathrm{H}_2`$ at 8 mbar. While the former value exactly agrees with our results, the latter is almost 3 times smaller than that extrapolated downwards from our $`\mathrm{C}_2\mathrm{H}_2`$ profile. Also from infrared observations, Noll et al. (1986), using a slope of $`d\mathrm{ln}q/d\mathrm{ln}P=-0.3`$, found a $`\mathrm{C}_2\mathrm{H}_6`$ mixing ratio of $`7.5\times 10^{-6}`$ at 1 mbar, which compares reasonably well with our measurement at the same pressure level. We also compared our retrieved mixing ratios with those calculated in the photochemical model of Gladstone et al. (1996). For $`\mathrm{C}_2\mathrm{H}_6`$, our results are in excellent agreement with their model both at 1 and 10 mbar (Fig. 2). For $`\mathrm{C}_2\mathrm{H}_2`$, while the agreement is good at 4 mbar, their mixing ratio at 0.3 mbar is higher than ours by a factor of 4. Our $`\mathrm{C}_2\mathrm{H}_2`$ slope is then significantly lower than theirs ($`d\mathrm{ln}q/d\mathrm{ln}P=-0.8\pm 0.1`$ vs. $`d\mathrm{ln}q/d\mathrm{ln}P=-1.2`$). Note that our derived $`\mathrm{C}_2\mathrm{H}_2`$ slope is still steeper than that of $`\mathrm{C}_2\mathrm{H}_6`$ ($`d\mathrm{ln}q/d\mathrm{ln}P=-0.6\pm 0.2`$), as expected.
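These slopes follow directly from pairs of (pressure, mixing ratio) points; as a quick numerical cross-check, a minimal sketch using the values retrieved in Sects. 3.2 and 3.3:

```python
import math

def log_slope(p1, q1, p2, q2):
    """Logarithmic slope d ln q / d ln P between two (P, q) points."""
    return math.log(q2 / q1) / math.log(p2 / p1)

# Retrieved values quoted above (pressures in mbar)
print(f"C2H2: {log_slope(0.3, 8.9e-7, 4.0, 1.1e-7):.2f}")   # ~ -0.8
print(f"C2H6: {log_slope(1.0, 1.0e-5, 10.0, 2.6e-6):.2f}")  # ~ -0.6

# Extrapolating the C2H2 profile to ~40 mbar with slope -0.8
q40 = 1.1e-7 * (40.0 / 4.0) ** (-0.8)
print(f"C2H2 near 40 mbar: {q40:.1e}")   # ~1.7e-8, near the 1.9e-8 quoted
```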
Indeed, $`\mathrm{C}_2\mathrm{H}_6`$, being less subject to photolysis, has a longer lifetime in the jovian stratosphere than $`\mathrm{C}_2\mathrm{H}_2`$ (Gladstone et al. 1996, their Fig. 6). Hydrocarbons are formed from the photolysis of $`\mathrm{CH}_4`$ around the homopause. Small-scale turbulence, parameterized in a photochemical model by the eddy diffusion coefficient ($`K`$), transports them downwards in the stratosphere. This process is approximately modelled for long-lived products by the equation $`K(z)n(z)\mathrm{d}q/\mathrm{d}z=-P(z)`$, where $`n(z)`$ is the number density at altitude $`z`$ and $`P(z)`$ the vertically integrated net production rate above altitude $`z`$. A first hypothesis would attribute the difference between the observed and calculated $`\mathrm{C}_2\mathrm{H}_2`$ abundances to an overestimation of the $`\mathrm{C}_2\mathrm{H}_2`$ production rate $`P(z)`$ in the Gladstone et al. model. Our two abundance measurements at 4 and 0.3 mbar allow us to calculate the mean $`\mathrm{d}q/\mathrm{d}z`$ over this pressure range. Comparing with the value of $`\mathrm{d}q/\mathrm{d}z`$ predicted by Gladstone et al. at 1 mbar, we found that $`P(z)`$ should be decreased by a factor of ∼3. A second explanation would be that Gladstone et al. underestimated the eddy diffusion coefficient $`K`$ by a factor of ∼3 around 1 mbar. This underestimation at 1 mbar should also apply to the level and eddy diffusion coefficient ($`K_H`$) at the homopause. However, it is difficult to directly quantify the changes on the homopause parameters, because of the strong coupling between production rates and homopause level. We simply note that our analysis could imply an increase in $`K_H`$. This is consistent with Drossart et al. (1999), who, analysing $`\mathrm{CH}_4`$ fluorescence, found $`K_H=(7\pm 1)\times 10^6`$ $`\mathrm{cm}^2\mathrm{s}^{-1}`$, higher than the value of 1.4$`\times 10^6`$ assumed in the Gladstone et al. model. In this case, the agreement on the $`\mathrm{C}_2\mathrm{H}_6`$ slope would also imply that the $`\mathrm{C}_2\mathrm{H}_6`$ production rate has been underestimated by Gladstone et al. (1996). In fact, the most direct conclusion of our measurements is that Gladstone et al. have overestimated the $`\mathrm{C}_2\mathrm{H}_2`$/$`\mathrm{C}_2\mathrm{H}_6`$ production rate ratio. The ISO-SWS spectrum enables the first detection of $`\mathrm{CH}_3\mathrm{C}_2\mathrm{H}`$ in the equatorial region of Jupiter. We retrieved a column density of $`(1.5\pm 0.4)\times 10^{15}`$ molecule cm<sup>-2</sup>. Kim et al. (1985) had previously detected $`\mathrm{CH}_3\mathrm{C}_2\mathrm{H}`$ in the north polar auroral zone of Jupiter, and had retrieved a column density of $`(2.8_{-1.1}^{+2.4})\times 10^{16}`$ molecule cm<sup>-2</sup>. At least a large part of the difference is explained by different modelling assumptions. Kim et al. (1985) assumed a uniform vertical distribution throughout the stratosphere and used a temperature profile for the auroral region which is now known to be incorrect (Drossart et al. 1993). For $`\mathrm{C}_4\mathrm{H}_2`$, we found only an upper limit of $`7\times 10^{13}`$ molecule cm<sup>-2</sup>, 65 times lower than the stratospheric column density predicted by Gladstone et al. (4.5$`\times `$10<sup>15</sup> molecule cm<sup>-2</sup>).
This is consistent with their overestimate of $`\mathrm{C}_2\mathrm{H}_2`$ production, since the production of $`\mathrm{C}_4\mathrm{H}_2`$ depends essentially quadratically on the abundance of $`\mathrm{C}_2\mathrm{H}_2`$. As our $`\mathrm{CH}_3\mathrm{C}_2\mathrm{H}`$ column density is 3.5 times larger than calculated by Gladstone et al., we derived a $`\mathrm{CH}_3\mathrm{C}_2\mathrm{H}`$/$`\mathrm{C}_4\mathrm{H}_2`$ ratio larger than 20, while they found a ratio of ∼2. This discrepancy is all the more remarkable as, in the case of Saturn, where both $`\mathrm{CH}_3\mathrm{C}_2\mathrm{H}`$ and $`\mathrm{C}_4\mathrm{H}_2`$ are detected (de Graauw et al. 1997), the photochemical model of Moses et al. (2000) gives, as observed, a $`\mathrm{CH}_3\mathrm{C}_2\mathrm{H}`$/$`\mathrm{C}_4\mathrm{H}_2`$ ratio of about 10. This stresses that the C<sub>3</sub> and C<sub>4</sub> chemistry in Jupiter should be reassessed with a complete photochemical model of the jovian stratosphere. ###### Acknowledgements. This study is based on observations with ISO, an ESA project with instruments funded by ESA Member States (especially the Principal Investigators’ countries: France, Germany, the Netherlands and the United Kingdom), and with participation of ISAS and NASA.
# Interplanetary Network Localization of GRB991208 and the Discovery of its Afterglow ## 1 Introduction Many gamma-ray burst counterparts have now been identified using the rapid, precise localizations available from the BeppoSAX spacecraft, as well as from the Rossi X-Ray Timing Explorer, starting with GRB970228 (Costa et al. 1997; van Paradijs et al. 1997). Such detections occur at a low rate ($`\sim 8\mathrm{y}^{-1}`$), and they have been limited to the long-duration events so far, but they have confirmed the cosmological origin of at least this class of bursts. Since 1977, interplanetary networks of omnidirectional GRB detectors have provided precise triangulations of both short and long bursts at rates up to ∼1/week, but often the networks have been incomplete, or the data return from the interplanetary spacecraft has been slow. The present, 3rd IPN is now complete with Ulysses and NEAR as its distant points (Cline et al. 1999) and, in conjunction with numerous near-Earth spacecraft, can produce precise GRB error boxes within ∼1 d, making them useful for multi-wavelength follow-up observations. Here we present the observations of GRB991208, which was rapidly localized to a small error box, leading to the identification of its radio and optical afterglow, and eventually to the measurement of its redshift. ## 2 IPN Observations GRB 991208 was observed by the Ulysses GRB (Hurley et al. 1992), KONUS-Wind (Aptekar et al. 1995), and NEAR X-ray/Gamma-ray Spectrometer (XGRS: Goldsten et al. 1997) experiments. Ulysses, in heliocentric orbit, NEAR, approaching rendezvous with the asteroid Eros, and Wind were 2176, 937, and 1.5 light-seconds from Earth, respectively. We focus here on the Ulysses and NEAR data; KONUS data will be presented elsewhere. The Ulysses and XGRS light curves are shown in figure 1. Although Ulysses recorded the first 57 s of the burst with 0.03125 s resolution in the triggered mode, the peak of the event occurred slightly later, and we have shown the 0.5 s resolution real-time data in the figure. The XGRS BGO anticoincidence shield is employed as the NEAR burst monitor, and the only time history data available from it are 1 s resolution count rates for the 100–1000 keV energy range. The burst can be characterized by a T<sub>90</sub> duration of 68 s, placing it firmly in the “long” class of bursts (Hurley 1992; Kouveliotou et al. 1993). The event-integrated Ulysses spectrum is well fit between 25 and 150 keV by a power law, thermal bremsstrahlung, or blackbody spectrum. For the following analysis, we adopt the power law, which has a photon index of 1.68 $`\pm `$ 0.19, and a $`\chi ^2`$ of 4.3 for 11 degrees of freedom (figure 2). The 25–100 keV fluence is $`4.9\times 10^{-5}\mathrm{erg}\mathrm{cm}^{-2}`$, with an uncertainty of $`\pm 10\%`$ due to count rate statistics and systematics. Since the XGRS shield provides only very low resolution (40 minutes) spectral data, we can only estimate the fluence in the XGRS energy range from its light curve to be about the equivalent of that observed by Ulysses. Thus the total fluence above 25 keV is $`\sim 10^{-4}\mathrm{erg}\mathrm{cm}^{-2}`$. Bursts with this intensity or greater occur at a rate $`\sim 10\mathrm{y}^{-1}`$. Since the peak of the event occurs when only the 0.5 s Ulysses data are available, we can estimate the peak flux only over this time interval; it is $`5.1\times 10^{-6}\mathrm{erg}\mathrm{cm}^{-2}\mathrm{s}^{-1}`$, 25–100 keV ($`\pm 10\%`$), with a contribution in the 100–1000 keV energy range which is again probably equivalent.
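IPN triangulation rests on a simple geometric relation: a burst arriving with time delay δt between two spacecraft separated by a baseline D must lie on an annulus of half-angle θ about the baseline vector, with cos θ = cδt/D. A minimal sketch (the arrival-time delays used below are hypothetical; the baselines are the light-travel distances quoted above):

```python
import math

def annulus_half_angle(delay_s, baseline_ls):
    """Half-angle (deg) of the IPN annulus about the spacecraft
    baseline vector; working in light-seconds makes c = 1."""
    return math.degrees(math.acos(delay_s / baseline_ls))

# Baselines quoted in the text (light-seconds from Earth)
D_ULYSSES, D_NEAR = 2176.0, 937.0

# Hypothetical time delays, for illustration only
print(f"Ulysses-Earth annulus: {annulus_half_angle(1500.0, D_ULYSSES):.2f} deg")
print(f"NEAR-Earth annulus:    {annulus_half_angle(600.0, D_NEAR):.2f} deg")
# The annulus width scales as the timing uncertainty divided by the
# baseline, so distant spacecraft give narrow rings; intersecting two
# such annuli yields a small error box like the one described below.
```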
A preliminary $`\sim `$ 14 sq. arcmin. IPN error box was circulated $`\sim `$ 44 h after the earth-crossing time of the event (04:36:53 UT: Hurley et al. 1999). The final error box is shown in figure 3, and nests within the preliminary one; it has an area of $`\sim `$ 1.3 sq. arcmin. Its coordinates are given in Table 1. Also shown in figure 3 is the position of the radio counterpart to the burst detected by Frail (1999). We note that this is the first time that a GRB position determined by the 3rd IPN alone (i.e., no Compton Gamma-Ray Observatory BATSE or BeppoSAX observations) has been successfully used for multiwavelength counterpart searches.

## 3 Observations with NRAO Very Large Array

Very Large Array (VLA)<sup>1</sup><sup>1</sup>1The NRAO is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc. observations were begun on 1999 December 10.92 UT, 2.73 days after the gamma-ray burst. In order to image the entire 14 sq. arcmin. initial IPN error box (Hurley et al. 1999) with the VLA at 8.46 GHz, two pointings were required, each with a field-of-view at half power response of 5.3′. The full 100 MHz bandwidth was used in two adjacent 50-MHz bands centered at 8.46 GHz. A single pointing was also made at 4.86 GHz, but the bandwidth was halved in order to image the full 9′ field-of-view without distortion. The flux density scale was tied to the extragalactic source 3C 48 (J0137+331), while the array phase was monitored by switching between the GRB and the phase calibrators J1637+462 (at 8.46 GHz) and J1658+476 (at 4.86 GHz). There were three radio sources inside the initial IPN error box (figure 3). Of these, two were previously known from an earlier survey of this part of the sky (Becker, White & Helfand 1995). The third source, located at (epoch J2000) $`\alpha `$ = $`16^h33^m53.50^s`$ ($`\pm 0.01^s`$), $`\delta `$ = $`+46^{}27^{}20.9^{\prime \prime }`$ ($`\pm 0.1^{\prime \prime }`$), was near the center of the initial IPN error box. On the basis of its position, compactness ($`<0.8^{\prime \prime }`$), and a rising flux density between 4.86 GHz and 8.46 GHz (327 $`\pm `$ 45 $`\mu `$Jy and 707 $`\pm `$ 39 $`\mu `$Jy respectively), Frail (1999) proposed that it was the afterglow of GRB 991208. Despite the proximity of this location to the Sun ($`70\mathrm{°}`$), numerous optical observations were carried out, and this suggestion was quickly confirmed by the independent detection of a coincident optical source, not visible on the Digital Sky Survey (Castro-Tirado et al. 1999). In the week following the afterglow discovery, it was determined that the optical flux faded as a power law with a rather steep temporal decay index $`\alpha \simeq 2.15`$ (where $`F_\nu \propto t^{-\alpha }`$) (Jensen et al. 1999; Garnavich and Noriega-Crespo 1999; Masetti et al. 1999). Optical spectroscopy of emission lines from a presumed host galaxy, if identified with \[OII\] and \[OIII\] features, places GRB 991208 at a redshift $`z=0.707\pm 0.002`$ (Dodonov et al. 1999), making this the third closest GRB with a spectroscopically-measured redshift. GRB 991208 has been no less interesting at radio wavelengths. It has the brightest radio afterglow detected to date and consequently it has been detected and is well-studied between 1 GHz and 350 GHz (Pooley 1999; Shepherd et al. 1999; Bremer et al. 1999).
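As a quick consistency check on the “rising flux density” argument, the two-point spectral index implied by the flux densities quoted above can be computed directly; this is our own sketch, and `spectral_index` is a hypothetical helper, not from the original analysis.

```python
import math

def spectral_index(s1, nu1, s2, nu2):
    """Two-point spectral index alpha, defined by S_nu proportional to nu**alpha."""
    return math.log(s2 / s1) / math.log(nu2 / nu1)

# 327 uJy at 4.86 GHz and 707 uJy at 8.46 GHz, as quoted in the text
alpha = spectral_index(327.0, 4.86, 707.0, 8.46)
print(f"alpha = {alpha:+.2f}")  # ~ +1.4, i.e. a strongly rising (inverted) spectrum
```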
## 4 Discussion

For a 25 – 1000 keV fluence of $`\sim 10^{-4}\mathrm{erg}\mathrm{cm}^{-2}`$ and a redshift z=0.707, the isotropic energy of this burst would have been $`1.3\times 10^{53}\mathrm{erg}`$; for a peak flux of $`\sim 10^{-5}\mathrm{erg}\mathrm{cm}^{-2}\mathrm{s}^{-1}`$ in the same energy range, the isotropic peak luminosity would have been $`1.3\times 10^{52}\mathrm{erg}\mathrm{s}^{-1}`$. These estimates assume $`\mathrm{\Omega }=0.2,\mathrm{\Lambda }=0,\mathrm{and}\mathrm{H}_0=65\mathrm{km}\mathrm{s}^{-1}\mathrm{Mpc}^{-1}`$. Rhoads (1997) has pointed out that one signature of beaming is a steep decay in the afterglow light curve, $`t^{-2}`$. As the initial optical light curve for GRB991208 indeed appears to decay this steeply, the emission may well be beamed, reducing these estimates.

The current IPN now consists of Ulysses and NEAR in interplanetary space, and numerous near-Earth spacecraft such as Wind, BeppoSAX, and the Compton Gamma-Ray Observatory. The Mars Surveyor 2001 Orbiter will join the network in mid-2001. The IPN currently observes $`\sim `$ 1 – 2 GRBs per week and is localizing many of them rapidly to small error boxes. These events tend to be the brighter ones, but apart from this, there is no bias towards any particular event duration; indeed, the IPN generally obtains its smallest error boxes for the short bursts. Neither is there any sun-angle restriction for the event locations, which means that bursts will be detected whose locations are close to the Sun, as this one was, making prompt radio observations of these positions important. The other advantages of radio observations over optical are the longer lifetime of the radio afterglow, the immunity from weather, and the freedom to operate at any part of the diurnal cycle. This should increase the rate of counterpart detections substantially over the next several years.

KH acknowledges support for Ulysses operations under JPL Contract 958056, for IPN operations under NASA LTSA grant NAG5-3500, and for NEAR operations under the NEAR Participating Scientist program. On the Russian side, this work was partially supported by RFBR grant # 99-02-17031. We are grateful to R. Gold and R. McNutt for their assistance with the NEAR spacecraft. RS is supported by NASA grant NCC5-380. We are indebted to T. Sheets for her excellent work on NEAR data reduction. Special thanks also go to the NEAR project office for its support of post-launch changes to XGRS software that made these measurements possible. In particular, we are grateful to John R. Hayes and Susan E. Schneider for writing the GRB software for the XGRS instrument and to Stanley B. Cooper and David S. Tillman for making it possible to get accurate universal time for the NEAR GRB detections.
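For completeness, the energetics quoted in the Discussion can be reproduced with the Mattig luminosity distance appropriate to the $`\mathrm{\Lambda }=0`$ cosmology assumed there. The following is our own sketch, not part of the original analysis.

```python
import math

MPC_CM, C_KMS = 3.0857e24, 2.9979e5   # cm per Mpc, speed of light in km/s

def d_lum(z, omega=0.2, h0=65.0):
    """Luminosity distance (cm), Mattig relation for a Lambda = 0 Friedmann model."""
    q0 = omega / 2.0
    d_mpc = (C_KMS / h0) / q0**2 * (q0 * z + (q0 - 1.0) *
                                    (math.sqrt(1.0 + 2.0 * q0 * z) - 1.0))
    return d_mpc * MPC_CM

def e_iso(fluence, z):
    """Isotropic-equivalent energy (erg); (1+z) converts to the source frame."""
    return 4.0 * math.pi * d_lum(z)**2 * fluence / (1.0 + z)

print(f"E_iso = {e_iso(1e-4, 0.707):.2e} erg")   # ~1.3e53 erg, as quoted above
```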
# Preparation information and optimal decompositions for mixed quantum states

## 1 Introduction

It is a distinctive feature of quantum mechanics that more information is required to prepare an ensemble of nonorthogonal quantum states than can be recovered from the ensemble by measurements. Whereas the von Neumann entropy of the density operator of the ensemble is bounded above by the logarithm of the dimension of Hilbert space, $`\mathrm{log}D`$, the preparation information for a uniform ensemble of pure states is of the same order as $`D`$. An ensemble of quantum states is defined by a list of states together with their probabilities, $`\{\widehat{\rho }_r,p_r\}`$. An ensemble can also be regarded as a decomposition of the average density operator, $`\widehat{\rho }=\sum _rp_r\widehat{\rho }_r`$. Ensembles of quantum states of a system $`𝒮`$ arise in a natural way from the correlations of $`𝒮`$ with an environment $`ℰ`$. Given the total state $`\widehat{\rho }_{\mathrm{total}}`$ of the joint system $`𝒮⊗ℰ`$, any generalized measurement or POVM on $`ℰ`$ induces an ensemble on $`𝒮`$. In this paper, we give a precise definition of the preparation information of a state in the ensemble induced by $`\widehat{\rho }_{\mathrm{total}}`$ and an environment POVM. The concept of preparation information leads naturally to a definition of optimal ensembles or, equivalently, optimal density-operator decompositions. The average preparation information of a state in an optimal ensemble is then a property of $`\widehat{\rho }_{\mathrm{total}}`$ alone (given the split of the total Hilbert space into $`𝒮`$ and $`ℰ`$). The average preparation information characterizes the system-environment correlations by the information about the environment needed to obtain a given amount of information about the system.

Optimal decompositions in the sense defined here have been used for the investigation of optimal quantum trajectories in quantum optics. Quantum trajectories are defined as follows. In a typical quantum-optical experiment, the system consists of selected atoms and field modes inside an optical cavity, whereas the environment consists of the continuum of modes outside the cavity. The time evolution of the cavity state conditional on the results of, e.g., homodyne measurements outside the cavity defines a quantum trajectory. For an alternative concept of optimality see Ref. . The average preparation information for an optimal ensemble has been proposed as a measure of quantum chaos. When a chaotic system interacts with its environment, one loses the ability to predict its time evolution. The preparation information quantifies the amount of information needed about the environment to keep the ability to predict the system state to a given accuracy. In conjunction with Landauer’s principle, this places a fundamental lower limit on the free-energy cost of predicting the time evolution of a dynamical system.

The paper is organized as follows. Section 2 defines the concepts of preparation information and optimal ensembles, and derives some basic properties. In Sec. 3, we illustrate the theory through several examples. Some mathematical details are deferred to Sec. 4.

## 2 Preparation information and optimal ensembles

Let $`D`$ and $`D_ℰ`$ denote the Hilbert-space dimensions of the system $`𝒮`$ and the environment $`ℰ`$, respectively. We will normally assume that $`D_ℰ\gg D`$. Now consider a joint state $`\widehat{\rho }_{\mathrm{total}}`$ on $`𝒮⊗ℰ`$.
The state of the system alone, $`\widehat{\rho }`$, is then obtained by tracing out the environment, $$\widehat{\rho }=\mathrm{tr}_ℰ(\widehat{\rho }_{\mathrm{total}}).$$ (1) The von Neumann entropy of the system is $$H=-\mathrm{tr}(\widehat{\rho }\mathrm{log}\widehat{\rho }),$$ (2) where here and throughout this paper, $`\mathrm{log}`$ denotes the base-2 logarithm. We now perform an arbitrary measurement on the environment $`ℰ`$, described by a POVM, $`\{\widehat{E}_r\}`$, where the $`\widehat{E}_r`$ are positive environment operators such that $$\sum _r\widehat{E}_r=\text{1}\text{l}_ℰ=\text{(environment unit operator).}$$ (3) The probability of obtaining result $`r`$ is given by $$p_r=\mathrm{tr}(\widehat{\rho }_{\mathrm{total}}\widehat{E}_r),$$ (4) and the system state after a measurement that yields result $`r`$ is $$\widehat{\rho }_r=\frac{\mathrm{tr}_ℰ(\widehat{\rho }_{\mathrm{total}}\widehat{E}_r)}{p_r}.$$ (5) By summing over $`r`$ and using the completeness of the POVM, we obtain $$\sum _rp_r\widehat{\rho }_r=\mathrm{tr}_ℰ(\widehat{\rho }_{\mathrm{total}})=\widehat{\rho }.$$ (6) The ensemble $`\{\widehat{\rho }_r,p_r\}`$ forms a decomposition of $`\widehat{\rho }`$.

To characterize the ensemble, we define the system entropy conditional on measurement outcome $`r`$, $$H_r=-\mathrm{tr}(\widehat{\rho }_r\mathrm{log}\widehat{\rho }_r),$$ (7) the average conditional entropy, $$\overline{H}=\sum _rp_rH_r,$$ (8) and the average entropy decrease due to the measurement, $`\mathrm{\Delta }\overline{H}=H-\overline{H}`$. These quantities obey the double inequality $$0\le \mathrm{\Delta }\overline{H}\le H,$$ (9) which follows from the concavity of the von Neumann entropy. The content of the first of these inequalities is that a measurement on the environment will not, on the average, increase the system entropy.

Now let $`\{\widehat{\rho }_r,p_r\}`$ be the ensemble induced by the POVM $`\{\widehat{E}_r\}`$. We denote by $`I(\widehat{\rho }_k|\widehat{\rho }_{\mathrm{total}},\{\widehat{E}_r\})`$ the conditional algorithmic information to specify $`\widehat{\rho }_k`$, given the ensemble (see the references therein). The quantity $`I(\widehat{\rho }_k|\widehat{\rho }_{\mathrm{total}},\{\widehat{E}_r\})`$ defines the preparation information of the state $`\widehat{\rho }_k`$, given the total state $`\widehat{\rho }_{\mathrm{total}}`$ and the POVM. We also define the average preparation information $$\overline{I}(\widehat{\rho }_{\mathrm{total}},\{\widehat{E}_r\})=-\sum _rp_r\mathrm{log}p_r.$$ (10) This definition is justified, because the average algorithmic information can be bounded above and below as follows: $$-\sum _rp_r\mathrm{log}p_r\le \sum _kp_kI(\widehat{\rho }_k|\widehat{\rho }_{\mathrm{total}},\{\widehat{E}_r\})\le -\sum _rp_r\mathrm{log}p_r+1.$$ (11) The average preparation information is never smaller than the average system entropy decrease $`\mathrm{\Delta }\overline{H}`$, $$\overline{I}(\widehat{\rho }_{\mathrm{total}},\{\widehat{E}_r\})\ge \mathrm{\Delta }\overline{H}.$$ (12) This inequality is a consequence of a general theorem about average density operators. The next step is to define a $`\mathrm{\Delta }H`$-decomposition of $`\widehat{\rho }`$ as a decomposition for which $`\mathrm{\Delta }\overline{H}\ge \mathrm{\Delta }H`$, and an optimal $`\mathrm{\Delta }H`$-decomposition of $`\widehat{\rho }`$ as a $`\mathrm{\Delta }H`$-decomposition with minimal average preparation information $`\overline{I}`$.
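Equations (1)–(8) translate directly into linear algebra. The following is a minimal numerical sketch of ours (the function names are our own, not from the paper) that, given $`\widehat{\rho }_{\mathrm{total}}`$ and a projective environment POVM, computes the induced ensemble and the entropies $`H`$, $`\overline{H}`$ and $`\mathrm{\Delta }\overline{H}`$:

```python
import numpy as np

def von_neumann_entropy(rho):
    """H = -tr(rho log2 rho), from the eigenvalues of rho."""
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return float(-np.sum(w * np.log2(w)))

def induced_ensemble(rho_total, povm, dim_sys):
    """Ensemble {(p_r, rho_r)} induced on the system by an environment POVM.

    rho_total -- density matrix on the product space, system index first
    povm      -- list of positive environment operators summing to the identity
    """
    dim_env = rho_total.shape[0] // dim_sys
    rt = rho_total.reshape(dim_sys, dim_env, dim_sys, dim_env)
    ensemble = []
    for E in povm:
        unnorm = np.einsum('aibj,ji->ab', rt, E)   # tr_E[rho_total (1 x E)], Eq. (5)
        p = float(np.real(np.trace(unnorm)))       # Eq. (4)
        if p > 1e-12:
            ensemble.append((p, unnorm / p))
    return ensemble

def entropy_balance(rho_total, povm, dim_sys):
    ens = induced_ensemble(rho_total, povm, dim_sys)
    rho = sum(p * r for p, r in ens)                          # Eq. (6)
    H = von_neumann_entropy(rho)                              # Eq. (2)
    Hbar = sum(p * von_neumann_entropy(r) for p, r in ens)    # Eq. (8)
    return H, Hbar, H - Hbar                                  # Delta H-bar
```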
The average preparation information for an optimal $`\mathrm{\Delta }H`$-decomposition, $$\overline{I}_{\mathrm{min}}=\underset{\mathrm{\Delta }H\text{-decompositions}}{inf}\overline{I},$$ (13) is then a property of $`\widehat{\rho }_{\mathrm{total}}`$, and characterizes the system-environment correlations. (If there is no $`\mathrm{\Delta }H`$-decomposition for which $`\overline{I}`$ is minimal, we will call any decomposition optimal for which $`\overline{I}<\overline{I}_{\mathrm{min}}+ϵ`$ for some given small constant $`ϵ`$.) The quantity $`\overline{I}_{\mathrm{min}}`$ is the information about the environment needed to reduce the system entropy by $`\mathrm{\Delta }H`$. A useful generalization results from taking the infimum in Eq. (13) over a restricted class of POVMs, as in the quantum-optical example of Ref. . This defines ensembles that are optimal with respect to a given class of environment measurements.

## 3 Examples

In this section, $`\{\widehat{P}_k^{ℰ},k=1,\mathrm{},D_ℰ\}`$ denotes a complete set of orthogonal environment projectors. In the three examples discussed below, we will restrict the class of environment measurements to orthogonal projections of the form $$\widehat{E}_r=\sum _{kK_r}\widehat{P}_k^{ℰ},$$ (14) where $`K_r\{1,\mathrm{},D_ℰ\}`$. In all three examples, it seems intuitively clear that ensembles which are optimal with respect to this class of measurements are also, to a good approximation, optimal with respect to the class of all possible environment measurements. We have not, however, been able to prove this statement rigorously.

### 3.1 A trivial example

Here, the system is a qubit, for which the dimension of Hilbert space is $`D=2`$. Let $`|0⟩`$ and $`|1⟩`$ be orthogonal basis states for the qubit, define $`|\psi _1⟩=|0⟩`$, $`|\psi _2⟩=|1⟩`$, $`|\psi _3⟩=\frac{1}{\sqrt{2}}(|0⟩+|1⟩)`$ and $`|\psi _4⟩=\frac{1}{\sqrt{2}}(|0⟩-|1⟩)`$, and let $$\widehat{\rho }_{\mathrm{total}}=\frac{1}{4}\sum _{k=1}^{4}|\psi _k⟩⟨\psi _k|⊗\widehat{P}_k^{ℰ}$$ (15) be the joint density operator of system and environment. The state of the system alone is then given by $$\widehat{\rho }=\mathrm{tr}_ℰ(\widehat{\rho }_{\mathrm{total}})=\frac{1}{2}(|0⟩⟨0|+|1⟩⟨1|),$$ (16) for which the system entropy in the absence of measurements is given by $`H=1`$.

Suppose we want to reduce the system entropy by $`\mathrm{\Delta }H=1`$ bit, i.e., we want the conditional system state to be pure. The only environment measurement achieving this is given by $`\widehat{E}_r=\widehat{P}_r^{ℰ}`$, which results in the unique and therefore optimal $`\mathrm{\Delta }H=1`$ ensemble given by $`\widehat{\rho }_r=|\psi _r⟩⟨\psi _r|`$, $`p_r=1/4`$. The average preparation information for this ensemble is $`\overline{I}=2`$ bits, and hence $`\overline{I}_{\mathrm{min}}=2`$ bits. For a different value of $`\mathrm{\Delta }H`$, consider the ensemble defined by $`\widehat{\rho }_1=\frac{1}{2}(|\psi _1⟩⟨\psi _1|+|\psi _3⟩⟨\psi _3|)`$, $`\widehat{\rho }_2=\frac{1}{2}(|\psi _2⟩⟨\psi _2|+|\psi _4⟩⟨\psi _4|)`$, $`p_1=p_2=1/2`$, which is induced by the POVM $`\widehat{E}_1=\widehat{P}_1^{ℰ}+\widehat{P}_3^{ℰ}`$ and $`\widehat{E}_2=\widehat{P}_2^{ℰ}+\widehat{P}_4^{ℰ}`$. For this ensemble, $`H_1=H_2=-\mathrm{tr}(\widehat{\rho }_1\mathrm{log}\widehat{\rho }_1)\approx 0.81`$, and hence $`\mathrm{\Delta }\overline{H}=H-H_1\approx 0.19`$. The average preparation information is $`\overline{I}=1`$.
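The state (15) and the fine-grained measurement are easy to set up numerically; this small sketch of ours reproduces the reduced state of Eq. (16) and the value $`\overline{I}_{\mathrm{min}}=2`$ bits for $`\mathrm{\Delta }H=1`$:

```python
import numpy as np

ket0, ket1 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
psis = [ket0, ket1, (ket0 + ket1) / np.sqrt(2), (ket0 - ket1) / np.sqrt(2)]

# rho_total of Eq. (15): each qubit state tagged by an orthogonal environment projector
rho_total = sum(0.25 * np.kron(np.outer(psi, psi), np.diag(np.eye(4)[k]))
                for k, psi in enumerate(psis))

# reduced system state, Eq. (16): partial trace over the 4-dimensional environment
rho = rho_total.reshape(2, 4, 2, 4).trace(axis1=1, axis2=3)
print(np.round(rho, 3))              # identity/2, hence H = 1 bit

# fine-grained measurement E_r = P_r: four outcomes with p_r = 1/4 each
I_bar = -4 * (0.25 * np.log2(0.25))
print(f"I_bar = {I_bar:.0f} bits")   # 2 bits of preparation information
```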
It is easy to see that, with respect to our restricted class of measurements, this is an optimal $`\mathrm{\Delta }\overline{H}`$-decomposition, and hence $`\overline{I}_{\mathrm{min}}=1`$. In this example, to obtain 0.19 bits of information about the system, 1 bit of information about the environment is needed.

### 3.2 Random vectors in Hilbert space

In the trivial example considered above, the average preparation information $`\overline{I}_{\mathrm{min}}`$ for an optimal ensemble is significantly larger than the corresponding entropy reduction $`\mathrm{\Delta }\overline{H}`$. In the present subsection, we show that $`\overline{I}_{\mathrm{min}}`$ can vastly exceed $`\mathrm{\Delta }\overline{H}`$. Assume that $`\mathrm{log}D_ℰ\gg \mathrm{log}D`$ and consider $$\widehat{\rho }_{\mathrm{total}}=\frac{1}{D_ℰ}\sum _{k=1}^{D_ℰ}|\psi _k⟩⟨\psi _k|⊗\widehat{P}_k^{ℰ},$$ (17) where the $`|\psi _k⟩`$ are distributed randomly in $`D`$-dimensional (projective) Hilbert space. Here, the system entropy in the absence of measurements is $`H\approx \mathrm{log}D`$. It has been conjectured that states of a similar form arise from the interaction of a chaotic system with a random environment. We will see that the complexity of the resulting system-environment correlations, as quantified by the average preparation information, is very large. This is in marked contrast to the third example discussed below.

Environment measurements of the form (14) correspond to grouping the vectors $`|\psi _k⟩`$ into disjoint groups. We construct an approximation to an optimal measurement by grouping the vectors into Hilbert-space spheres of radius $`\varphi `$. (See Ref. for a detailed argument.) We assume that $`D_ℰ`$ is sufficiently large so that the state vectors in each such sphere fill it randomly. Since all spheres are chosen to be of equal size, the average entropy $`\overline{H}`$ is equal to the entropy of one sphere, i.e., the entropy of a uniform mixture of states within a Hilbert-space sphere of radius $`\varphi `$, given by $$\overline{H}=-\left(1-\frac{D-1}{D}\mathrm{sin}^2\varphi \right)\mathrm{log}\left(1-\frac{D-1}{D}\mathrm{sin}^2\varphi \right)-\frac{D-1}{D}\mathrm{sin}^2\varphi \mathrm{log}\left(\frac{\mathrm{sin}^2\varphi }{D}\right).$$ (18) The volume contained within a sphere of radius $`\varphi `$ in Hilbert space is $`(\mathrm{sin}\varphi )^{2(D-1)}V_D`$, where $`V_D`$ is the total volume of projective Hilbert space. The number of spheres of radius $`\varphi `$ in $`D`$-dimensional Hilbert space is thus $`(\mathrm{sin}\varphi )^{-2(D-1)}`$, so the information needed to specify a particular sphere is $$\overline{I}_{\mathrm{min}}\approx \stackrel{~}{I}_{\mathrm{min}}\stackrel{\mathrm{def}}{=}\mathrm{log}\left((\mathrm{sin}\varphi )^{-2(D-1)}\right)=-(D-1)\mathrm{log}(\mathrm{sin}^2\varphi ).$$ (19) The information $`\stackrel{~}{I}_{\mathrm{min}}`$ slightly underestimates the actual value of $`\overline{I}_{\mathrm{min}}`$, because the perfect grouping into nonoverlapping spheres of the same size assumed by Eq. (19) does not exist. As an example, let us choose a Hilbert-space dimension $`D=101`$ and a radius of $`\varphi \approx 1.025`$. Equations (18,19) then give $`\mathrm{\Delta }\overline{H}=\mathrm{log}D-\overline{H}\approx 1`$ and $`\stackrel{~}{I}_{\mathrm{min}}\approx 45.3`$, which means that here, to obtain 1 bit of information about the system, more than 45 bits of information about the environment are needed. Using Eq. (19) to eliminate $`\varphi `$ from Eq.
(18) gives a complicated expression for $`\stackrel{~}{I}_{\mathrm{min}}`$ as a function of $`\mathrm{\Delta }\overline{H}`$, which is a good approximation to the average preparation information for an optimal $`\mathrm{\Delta }\overline{H}`$ ensemble. Figure 1 shows this function for a Hilbert-space dimension $`D=101`$. To obtain more insight into the properties of this curve, we consider the derivative $$\frac{d\stackrel{~}{I}_{\mathrm{min}}}{d\mathrm{\Delta }\overline{H}}=\frac{D}{\mathrm{sin}^2\varphi \mathrm{ln}(1+D\mathrm{cot}^2\varphi )},$$ (20) which is the marginal tradeoff between information and entropy. For $`\varphi `$ near $`\pi /2`$, so that $`ϵ=\pi /2-\varphi \ll 1`$, the information becomes $`\stackrel{~}{I}_{\mathrm{min}}=(D-1)ϵ^2/\mathrm{ln}2`$, and the derivative (20) can be written as $$\frac{d\stackrel{~}{I}_{\mathrm{min}}}{d\mathrm{\Delta }\overline{H}}\approx \frac{D}{\mathrm{ln}(1+Dϵ^2)},$$ (21) which is proportional to $`D`$ with a slowly varying logarithmic correction. We have thus identified a situation where the average preparation information is of the same order as the dimension of Hilbert space $`D`$, despite the fact that the von Neumann entropy of a state cannot exceed $`\mathrm{log}D`$.

### 3.3 Random coherent states

In this example the system considered is a spin-$`j`$ particle, for which the dimension of Hilbert space is $`D=2j+1`$. As in the preceding section, we assume that the Hilbert-space dimension of the environment is much larger than $`D`$, $`D_ℰ\gg D`$, and consider $$\widehat{\rho }_{\mathrm{total}}=\frac{1}{D_ℰ}\sum _{k=1}^{D_ℰ}|\psi _k⟩⟨\psi _k|⊗\widehat{P}_k^{ℰ},$$ (22) but now we choose the $`|\psi _k⟩`$ to be distributed randomly on the submanifold of angular-momentum coherent states (23). We will see that the resulting complexity of the system-environment correlations, as quantified by the average preparation information, is small. The angular momentum coherent state $`|\theta ,\varphi ⟩`$ can be defined by rotating the $`\widehat{J}_z`$ eigenstate $`|j;j⟩`$ through Euler angles $`\varphi `$ around the $`z`$-axis, and then by $`\theta `$ around the new $`y`$-axis. This gives $$|\theta ,\varphi ⟩=\sum _{m=-j}^{j}|j;m⟩\left(\begin{array}{c}2j\\ j+m\end{array}\right)^{\frac{1}{2}}\mathrm{cos}^{j+m}(\theta /2)\mathrm{sin}^{j-m}(\theta /2)e^{im\varphi }.$$ (23) Each coherent state corresponds to a point on the surface of a three-dimensional sphere. Assuming that $`D_ℰ`$ is sufficiently large, the state of the system alone is $$\widehat{\rho }=\mathrm{tr}_ℰ(\widehat{\rho }_{\mathrm{total}})=\frac{1}{4\pi }\int _0^{2\pi }\int _0^\pi |\theta ,\varphi ⟩⟨\theta ,\varphi |\mathrm{sin}\theta d\theta d\varphi .$$ (24) As in the previous section, environment measurements of the form (14) correspond to grouping the vectors $`|\psi _k⟩`$ into disjoint groups. Approximately optimal measurements correspond to grouping the vectors into approximately equal, compact areas on the surface of the sphere. We choose the areas to be of the form $$\mathrm{\Omega }_r(\mathrm{\Theta })=\{\theta ,\varphi :\mathrm{arccos}[\underset{¯}{n}(\theta ,\varphi )\underset{¯}{n}(\theta _r,\varphi _r)]\le \mathrm{\Theta }\}$$ (25) centered at points $`(\theta _r,\varphi _r)`$.
The corresponding density operators $$\widehat{\rho }_r(\mathrm{\Theta })=\frac{\int _{\mathrm{\Omega }_r(\mathrm{\Theta })}|\theta ,\varphi ⟩⟨\theta ,\varphi |\mathrm{sin}\theta d\theta d\varphi }{\int _{\mathrm{\Omega }_r(\mathrm{\Theta })}\mathrm{sin}\theta d\theta d\varphi },$$ (26) can be used to construct a nearly optimal decomposition of $`\widehat{\rho }`$. The preparation information $`\overline{I}_{\mathrm{min}}`$ is then approximately given by $$\overline{I}_{\mathrm{min}}\approx \stackrel{~}{I}_{\mathrm{min}}\stackrel{\mathrm{def}}{=}\mathrm{log}\frac{4\pi }{2\pi (1-\mathrm{cos}\mathrm{\Theta })},$$ (27) where the denominator is the area of $`\mathrm{\Omega }_r(\mathrm{\Theta })`$. In the following section, we show that $`\widehat{\rho }_r(\mathrm{\Theta })`$, in the coordinates where $`(\theta _r,\varphi _r)=(0,0)`$, can be written in the diagonal form $$\widehat{\rho }_r(\mathrm{\Theta })=\sum _{m=-j}^{j}|j;m⟩\lambda _m^\mathrm{\Theta }⟨j;m|,$$ (28) where $$\lambda _m^\mathrm{\Theta }=\frac{(2j)!\mathrm{sin}^{2(j-m)}\frac{\mathrm{\Theta }}{2}}{(j+m)!(j-m+1)!}F(-j-m,j-m+1,j-m+2,\mathrm{sin}^2\frac{\mathrm{\Theta }}{2}),$$ (29) and where $`F`$ is the hypergeometric function $`{}_{2}{}^{}F_{1}^{}`$. Since all density operators $`\widehat{\rho }_r(\mathrm{\Theta })`$ in the decomposition of $`\widehat{\rho }`$ have the same entropy, the average entropy $`\overline{H}`$ can be written as $$\overline{H}=-\sum _{m=-j}^{j}\lambda _m^\mathrm{\Theta }\mathrm{log}\lambda _m^\mathrm{\Theta }.$$ (30) For the entropy of the system, Eq. (2), in the absence of measurements, $`\widehat{\rho }=\widehat{\rho }_r(\pi )`$, we have (see Eq. 45) $$H=-\sum _{m=-j}^{j}\lambda _m^\pi \mathrm{log}\lambda _m^\pi =\mathrm{log}(2j+1)=\mathrm{log}D.$$ (31)

We can analyse $`\overline{H}`$ in detail for the important special value $`\mathrm{\Theta }=\pi /2`$, for which the measurement corresponds to a grouping into two disjoint hemispheres, and is therefore strictly optimal. For such a grouping, $`\overline{I}_{\mathrm{min}}=1`$. In the next section, we show that the eigenvalues $`\lambda _m^{\pi /2}`$ obey the bounds $`0\le \lambda _m^{\pi /2}\le {\displaystyle \frac{1}{j}}e^{-\sqrt[3]{j}/3},`$ $`m<1-j^{2/3},`$ (32) $`{\displaystyle \frac{2}{2j+1}}(1-e^{-\sqrt[3]{j}/3}-4^{-j})\le \lambda _m^{\pi /2}\le {\displaystyle \frac{1}{j}},`$ $`m>1+j^{2/3}.`$ (33) Using these bounds we have derived the following asymptotic expression for the average entropy: $$\overline{H}=-\sum _{m=-j}^{j}\lambda _m^{\pi /2}\mathrm{log}\lambda _m^{\pi /2}=\mathrm{log}j+O(\sqrt[3]{j}e^{-\sqrt[3]{j}/3}),$$ (34) and hence, in the limit $`j\to \mathrm{}`$, $$\frac{\overline{I}_{\mathrm{min}}}{\mathrm{\Delta }\overline{H}}=\frac{1}{\mathrm{log}(2j+1)-\overline{H}}\to 1.$$ (35) Figure 2 shows a parametric plot of the average preparation information $`\stackrel{~}{I}_{\mathrm{min}}`$ versus the average entropy reduction $`\mathrm{\Delta }\overline{H}=H-\overline{H}`$ for $`j=50`$, i.e., D=101. It can be seen that for moderate values of $`\mathrm{\Delta }\overline{H}`$, $`\stackrel{~}{I}_{\mathrm{min}}\approx \mathrm{\Delta }\overline{H}`$. To reduce the system entropy by 1 bit, not more than approximately 1 bit of information about the environment is needed. This should be compared to the previous example, where the required environment information is of the same order as $`D`$. In the limit of $`\mathrm{\Delta }\overline{H}`$ approaching its maximum value $`H=\mathrm{log}D`$, the information $`\stackrel{~}{I}_{\mathrm{min}}`$ diverges.
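The eigenvalues (29) and the average entropy (30) can be evaluated directly with scipy’s hypergeometric function. The sketch below is ours (it assumes scipy is available); it checks the trace normalization and the near-equality $`\stackrel{~}{I}_{\mathrm{min}}\approx \mathrm{\Delta }\overline{H}`$ for $`j=50`$ and $`\mathrm{\Theta }=\pi /2`$, where one bit of preparation information is spent:

```python
import numpy as np
from scipy.special import hyp2f1, gammaln

def lam(m, j, theta):
    """Eigenvalue lambda_m^Theta of Eq. (29)."""
    s2 = np.sin(theta / 2.0) ** 2
    # log of (2j)! / ((j+m)! (j-m+1)!), kept in log form for large j
    lg = gammaln(2 * j + 1) - gammaln(j + m + 1) - gammaln(j - m + 2)
    return np.exp(lg + (j - m) * np.log(s2)) * hyp2f1(-j - m, j - m + 1, j - m + 2, s2)

j, theta = 50, np.pi / 2              # hemispheres: I_min = 1 bit
lams = np.array([lam(m, j, theta) for m in range(-j, j + 1)])
nz = lams[lams > 0]
Hbar = -np.sum(nz * np.log2(nz))      # Eq. (30)
dH = np.log2(2 * j + 1) - Hbar
print(f"trace = {lams.sum():.6f}, Delta H-bar = {dH:.3f} bits")
```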
This is due to the fact that an infinite amount of information is needed to specify a general state exactly. The complexity of the system-environment correlations is characterized by the slope of the curve for small values of $`\mathrm{\Delta }\overline{H}`$ rather than its asymptotic behaviour for $`\mathrm{\Delta }\overline{H}\to \mathrm{log}D`$.

## 4 Mathematical details

Our task is to calculate the eigenvalues of $`\widehat{\rho }_r(\mathrm{\Theta })`$ given by Eq. (29) and to derive the expression (33) for the case $`\mathrm{\Theta }=\pi /2`$. Choosing the coordinate system such that $`(\theta _r,\varphi _r)=(0,0)`$, we have $$\widehat{\rho }_r(\mathrm{\Theta })=\frac{\int _0^{2\pi }\int _0^\mathrm{\Theta }|\theta ,\varphi ⟩⟨\theta ,\varphi |\mathrm{sin}\theta d\theta d\varphi }{2\pi (1-\mathrm{cos}\mathrm{\Theta })}.$$ (36) Substitution of (23) and subsequent integration over $`\varphi `$ gives $$\widehat{\rho }_r(\mathrm{\Theta })=\frac{4}{1-\mathrm{cos}\mathrm{\Theta }}\sum _{m=-j}^{j}|j;m⟩⟨j;m|\left(\begin{array}{c}2j\\ j+m\end{array}\right)\mathrm{\Lambda }^\mathrm{\Theta }(j+m,j-m),$$ (37) where $$\mathrm{\Lambda }^\mathrm{\Theta }(p,q)=\int _0^{\mathrm{\Theta }/2}\mathrm{cos}^{2p+1}\vartheta \mathrm{sin}^{2q+1}\vartheta d\vartheta .$$ (38) Since $`\widehat{\rho }_r(\mathrm{\Theta })`$ is diagonal in the $`|j;m⟩`$ basis, the task of finding the eigenvalues is equivalent to the problem of evaluating the integral $`\mathrm{\Lambda }^\mathrm{\Theta }(p,q)`$. Consider the integrand $$\mathrm{cos}^{2p+1}\vartheta \mathrm{sin}^{2q+1}\vartheta =-\frac{1}{2}\mathrm{cos}^{2p}\vartheta (1-\mathrm{cos}^2\vartheta )^q\frac{d\mathrm{cos}^2\vartheta }{d\vartheta }.$$ (39) Using the formula $$(1-z)^n=F(-n,b;b;z),$$ (40) where $`F`$ is the hypergeometric function $`{}_{2}{}^{}F_{1}^{}`$, we obtain $$\mathrm{cos}^{2p+1}\vartheta \mathrm{sin}^{2q+1}\vartheta =-\frac{1}{2}(\mathrm{cos}^2\vartheta )^pF(-q,b;b;\mathrm{cos}^2\vartheta )\frac{d\mathrm{cos}^2\vartheta }{d\vartheta }.$$ (41) We now use $$\frac{d^n}{dz^n}[z^{c-1}F(a,b;c;z)]=\frac{\mathrm{\Gamma }(c)}{\mathrm{\Gamma }(c-n)}z^{c-n-1}F(a,b;c-n;z),$$ (42) with $`a=-q`$, $`c=b+1`$ and $`z=\mathrm{cos}^2\vartheta `$ to get $$b(\mathrm{cos}^2\vartheta )^{b-1}F(-q,b;b;\mathrm{cos}^2\vartheta )=\frac{d}{d\mathrm{cos}^2\vartheta }[(\mathrm{cos}^2\vartheta )^bF(-q,b;b+1;\mathrm{cos}^2\vartheta )].$$ (43) Comparing (41) and (43) we see that, choosing $`b=p+1`$, we find $$\mathrm{cos}^{2p+1}\vartheta \mathrm{sin}^{2q+1}\vartheta =-\frac{1}{2(p+1)}\frac{d}{d\vartheta }[\mathrm{cos}^{2p+2}\vartheta F(-q,p+1;p+2;\mathrm{cos}^2\vartheta )],$$ (44) and hence, using $$F(a,b;c;1)=\frac{\mathrm{\Gamma }(c)\mathrm{\Gamma }(c-a-b)}{\mathrm{\Gamma }(c-a)\mathrm{\Gamma }(c-b)},\mathrm{Re}c>\mathrm{Re}(a+b),$$ (45) one obtains $$\mathrm{\Lambda }^\mathrm{\Theta }(p,q)=\frac{p!q!}{2(p+q+1)!}-\frac{\mathrm{cos}^{2p+2}\frac{\mathrm{\Theta }}{2}}{2(p+1)}F(-q,p+1;p+2;\mathrm{cos}^2\frac{\mathrm{\Theta }}{2}),q>-1.$$ (46) To simplify the above equation we return to the definition (38) and split the integration to get $$\mathrm{\Lambda }^\mathrm{\Theta }(p,q)=\int _0^{\frac{\pi }{2}}\mathrm{cos}^{2p+1}\vartheta \mathrm{sin}^{2q+1}\vartheta d\vartheta +\int _{\frac{\pi }{2}}^{\frac{\mathrm{\Theta }}{2}}\mathrm{cos}^{2p+1}\vartheta \mathrm{sin}^{2q+1}\vartheta d\vartheta .$$ (47) The first integral is proportional to the beta function and the second integral can be transformed into $`\mathrm{\Lambda }^{\pi -\mathrm{\Theta }}(q,p)`$ by the substitution $`\vartheta \to \pi /2-\vartheta `$ so that $$\mathrm{\Lambda }^\mathrm{\Theta }
(p,q)=\frac{p!q!}{2(p+q+1)!}-\mathrm{\Lambda }^{\pi -\mathrm{\Theta }}(q,p).$$ (48) Using this formula Eq. (46) can be transformed into $$\mathrm{\Lambda }^\mathrm{\Theta }(p,q)=\frac{\mathrm{sin}^{2q+2}\frac{\mathrm{\Theta }}{2}}{2(q+1)}F(-p,q+1;q+2;\mathrm{sin}^2\frac{\mathrm{\Theta }}{2}).$$ (49) Expression (37) for $`\widehat{\rho }_r(\mathrm{\Theta })`$ can now be rewritten in the compact form of Eq. (28). To investigate the case $`\mathrm{\Theta }=\pi /2`$, we calculate directly $$\mathrm{\Lambda }^{\frac{\pi }{2}}(p,q)=\int _0^{\frac{\pi }{4}}\mathrm{cos}^{2p+1}\vartheta \mathrm{sin}^{2q+1}\vartheta d\vartheta .$$ (50) Substitution of $`t=\mathrm{tan}^2\vartheta `$ gives $$\mathrm{\Lambda }^{\frac{\pi }{2}}(p,q)=\frac{1}{2}\int _0^1t^q(1+t)^{-(p+q+2)}𝑑t.$$ (51) Using the integral representation of the hypergeometric function, $$F(a,b;c;z)=\frac{1}{B(b,c-b)}\int _0^1t^{b-1}(1-t)^{c-b-1}(1-tz)^{-a}𝑑t,\mathrm{Re}c>\mathrm{Re}b>0,$$ (52) we find $$\mathrm{\Lambda }^{\frac{\pi }{2}}(p,q)=\frac{F(p+q+2,q+1;q+2;-1)}{2(q+1)},q>-1,$$ (53) which is a rather compact expression. We can obtain additional insight in the following way. Consider the Gauss formula for the so-called contiguous functions $`F(a,b-1;c;z)`$ and $`F(a,b;c+1;z)`$ $$c(1-z)F(a,b;c;z)-cF(a,b-1;c;z)+(c-a)zF(a,b;c+1;z)=0.$$ (54) For the case $`b=c`$ and $`z=-1`$ we get using (40) $$F(a,b;b+1;-1)=\alpha (b)F(a,b-1;b;-1)+\beta (b),$$ (55) where $$\alpha (b)=\frac{b}{a-b},\text{ }\beta (b)=-2^{1-a}\alpha (b).$$ (56) Iteration of (55) $`b-1`$ times gives $$F(a,b;b+1;-1)=F(a,1;2;-1)\prod _{k=0}^{b-2}\alpha (b-k)+\sum _{s=0}^{b-2}\beta (b-s)\prod _{k=0}^{s-1}\alpha (b-k).$$ (57) Noticing that $$\prod _{k=0}^{s}\alpha (b-k)=\frac{b!(a-b-1)!}{(b-s-1)!(a-b+s)!}$$ (58) and substituting $`x=b-1-s`$, we obtain $$F(a,b;b+1;-1)=\frac{b!(a-b-1)!}{(a-1)!}[(a-1)F(a,1;2;-1)-2^{1-a}\sum _{x=1}^{b-1}\left(\begin{array}{c}a-1\\ x\end{array}\right)].$$ (59) The value of $`F(a,1;2;-1)`$ can be calculated using the integral representation (52) $$F(a,1;2;-1)=\frac{1-2^{1-a}}{a-1},$$ (60) and hence we find $$F(a,b;b+1;-1)=\frac{b!(a-b-1)!}{(a-1)!}[1-2^{1-a}\sum _{x=0}^{b-1}\left(\begin{array}{c}a-1\\ x\end{array}\right)].$$ (61) Using this formula, Eq. (53) can be rewritten as $$\mathrm{\Lambda }^{\frac{\pi }{2}}(p,q)=\frac{1}{2(p+q+1)}\left(\begin{array}{c}p+q\\ p\end{array}\right)^{-1}[1-2^{-(p+q+1)}\sum _{x=0}^{q}\left(\begin{array}{c}p+q+1\\ x\end{array}\right)].$$ (62) Using (48) we have $$\mathrm{\Lambda }^{\frac{\pi }{2}}(p,q)=\frac{2^{-(p+q+2)}}{p+q+1}\left(\begin{array}{c}p+q\\ p\end{array}\right)^{-1}\sum _{x=0}^{p}\left(\begin{array}{c}p+q+1\\ x\end{array}\right),$$ (63) and therefore $$\lambda _m^{\frac{\pi }{2}}=\frac{2}{2j+1}\sum _{x=0}^{j+m}\left(\begin{array}{c}2j+1\\ x\end{array}\right)2^{-(2j+1)}.$$ (64) The sum on the right hand side is the univariate cumulative distribution function $$G(y;n,p)=\sum _{x=0}^{y}P(x;n,p)$$ (65) for the binomial distribution $$P(x;n,p)=\left(\begin{array}{c}n\\ x\end{array}\right)p^x(1-p)^{n-x},$$ (66) where $`n=2j+1`$, $`p=1/2`$ and $`y=j+m`$.
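As a numerical check of Eq. (64) — and hence of the chain (48)–(63) — one can compare the binomial-CDF form against direct quadrature of the defining integral (38); a short sketch of ours, assuming scipy:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import comb
from scipy.stats import binom

def lam_closed(m, j):
    """Eq. (64): lambda_m^{pi/2} = 2/(2j+1) * G(j+m; 2j+1, 1/2)."""
    return 2.0 / (2 * j + 1) * binom.cdf(j + m, 2 * j + 1, 0.5)

def lam_direct(m, j):
    """4 * C(2j, j+m) * Lambda^{pi/2}(j+m, j-m), from Eqs. (37), (38) at Theta = pi/2."""
    p, q = j + m, j - m
    val, _ = quad(lambda t: np.cos(t) ** (2 * p + 1) * np.sin(t) ** (2 * q + 1),
                  0.0, np.pi / 4.0)
    return 4.0 * comb(2 * j, p) * val

j = 10
for m in (-10, -3, 0, 4, 10):
    print(m, f"{lam_closed(m, j):.10f}", f"{lam_direct(m, j):.10f}")  # identical columns
```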
In the limit $`j\to \mathrm{}`$, $`G(y;n,p)`$ approaches the step function $$\underset{j\to \mathrm{}}{lim}G(j+m;2j+1,1/2)=\{\begin{array}{cc}0,\hfill & lim_{j\to \mathrm{}}\frac{m}{j}<0,\hfill \\ \frac{1}{2},\hfill & m=0,\hfill \\ 1,\hfill & lim_{j\to \mathrm{}}\frac{m}{j}>0.\hfill \end{array}$$ (67) To obtain the behaviour for large but finite values of $`j`$, we use the Chernoff bounds $`{\displaystyle \sum _{x<np(1-ϵ)}}P(x;n,p)`$ $`\le `$ $`e^{-ϵ^2np/3},`$ $`{\displaystyle \sum _{x>np(1+ϵ)}}P(x;n,p)`$ $`\le `$ $`e^{-ϵ^2np/3},`$ (68) which are valid for $`0\le ϵ\le 1`$. Choosing $`ϵ=j^{-1/3}`$, we obtain Eq. (33), in agreement with Eq. (67).

## 5 Acknowledgements

The authors profited from discussions with Jens G. Jensen. This work was supported in part by the UK Engineering and Physical Sciences Research Council.
## 1 Introduction

Uncovering the mechanism of symmetry breaking is one of the major tasks of the high energy colliders. Most prominent is the search for the Higgs particle. Within the standard model, $`𝒮ℳ`$, this scalar particle poses the problem of naturalness and its mass is a free parameter. Current data seem to indicate a preference for a light Higgs with a mass that can nicely fit within a supersymmetric version of the $`𝒮ℳ`$. In fact an intermediate mass Higgs, IMH, is one of the most robust predictions of SUSY, since one does not have strict predictions on the large array of the other masses and parameters in this model. Another, perhaps circumstantial, evidence of SUSY is the successful unification of the gauge couplings at some high scale. Add to this the fact that the neutralino can provide a good dark matter candidate, and the popularity of the model is easily explained. Even so the search for the lightest Higgs is not so easy. LEP2, where the Higgs signature is easiest, may unfortunately fall some 20–30 GeV short of being able to cover the full range of the minimal SUSY lightest Higgs mass. Searches at the Tevatron need very good background rejection and in any case need to upgrade the present luminosities quite significantly. At the LHC, most analyses have relied extensively on the two-photon decay of the IMH, either in the dominant inclusive channel through $`gg\to h\to \gamma \gamma `$ or in associated production. Only recently has it been shown that associated production of the Higgs with tops, with the former decaying into $`b\overline{b}`$, can improve the discovery of the Higgs, albeit in the region $`m_h<120`$ GeV.

Unfortunately, until recently, most simulations for Higgs searches have in effect decoupled the rest of the supersymmetric spectrum from the Higgs sector, as in the much advertised ATLAS/CMS $`M_A-\mathrm{tan}\beta `$ plane. This assumption of a very heavy SUSY spectrum cannot be well justified. First, naturalness arguments require that at least some of the SUSY masses be below $`1TeV`$ or even much less. Second, it has been known that relaxing this assumption can have some very important consequences on the Higgs search at the LHC. This is not surprising considering the fact that the most important production channel $`gg\to h`$ is loop induced, as is the main discovery channel $`h\to \gamma \gamma `$. One of the most dramatic effects is that of a light stop with large mixing, which drastically reduces the production rate. Fortunately, when this happens, a careful analysis shows that the Higgs signal can be rescued in a variety of channels that become enhanced or that open up precisely for the same reason that the normal inclusive channel drops, so that in a sense there is a complementarity. For instance with all other sparticles but the stops heavy, one can show that whenever the production rate in the inclusive channel drops, the branching ratio into two photons increases, with the consequence that associated $`Wh/Zh`$ and $`t\overline{t}h`$ production where the Higgs decays into two photons becomes a very efficient means of tracking the Higgs. Moreover associated $`\stackrel{~}{t}_1\stackrel{~}{t}_1h`$ production becomes important through the cascade of the heavier stop $`\stackrel{~}{t}_2`$, $`\stackrel{~}{t}_2\to \stackrel{~}{t}_1h`$. At the same time, since the $`hb\overline{b}`$ coupling is hardly affected, $`t\overline{t}h`$ production could play an important role.
A similar sort of complementarity has also been pointed out in supersymmetric scenarios where the coupling $`hb\overline{b}`$ can be made very small. In our investigation of the effects of light stops with large mixing, all other particles but the stops were assumed rather heavy. It is then important to ask how the overall picture would change had we allowed other sparticles to be relatively light. Considering that the present LEP and Tevatron data preclude the decay of the lightest Higgs into sfermions, the effect of the latter on the properties of the lightest Higgs can only be felt through loops. These effects can therefore be considered as a special case of the stop that we studied at some length, and apart from the sbottom at large $`\mathrm{tan}\beta `$ the effects will be marginal. One can then concentrate on the spin-half gaugino-higgsino sector. In order to extract the salient features that may have an important impact on the Higgs search at the LHC, we leave out in this study the added effects of a light stop.

Compared to the analysis with the stop, this sector does not affect inclusive production or the usual associated production mechanisms. The effect will be limited to the Higgs decay. First, if the charginos are not too heavy, they can contribute at the loop level. We find however, by imposing the present limits on their masses, that this effect is quite small. On the other hand we show that the main effect is due to the possible decay of the Higgs into the lightest neutralinos. This is especially true if one relaxes the usual so-called unification condition between the two gaugino components of the LSP neutralino. Although at LEP an invisible Higgs is not so much of a problem, since it can be easily tagged through the recoiling Z, it is a different matter at the LHC. Few studies have attempted to dig out such an invisible (not necessarily supersymmetric) Higgs at the LHC, in the associated $`Zh(Wh)`$ channel. Even with rather optimistic rejection efficiencies the backgrounds seem too overwhelming. It has also been suggested to use associated $`t\overline{t}h`$ production, but this requires very good $`b`$-tagging efficiencies and a good normalisation of the backgrounds. Recently Ref. looked at how to hunt for an invisible Higgs at the Tevatron. For $`m_h>100`$ GeV a $`5\sigma `$ discovery requires a luminosity in excess of $`30`$ fb<sup>-1</sup>. Compared to the effects of the stop or a vanishing $`hb\overline{b}`$ coupling, where a sort of compensation occurs in other channels, the opening up of the invisible channel reduces all other channels, including the branching ratio into $`b\overline{b}`$.

Previous studies have mainly concentrated, if not on a mSUGRA scenario, then on a scenario based on the mSUGRA-inspired relation between the electroweak gaugino masses, $`M_1,M_2`$. Moreover LEP searches and limits refer essentially to the latter paradigm. In the course of this analysis we had to re-interpret the LEP data in the light of more general scenarios. We therefore had to take recourse to various limits on cross sections rather than absolute limits on some physical parameters quoted in the literature. We have also tried to see whether new mechanisms come to the rescue of the Higgs search at the LHC when the invisible channel becomes substantial and reduces the usual signal significantly.
Much like in our analysis of the stop, where we found that the Higgs could be produced through the decay of the heavier stop into the lighter one, we inquired whether a similar cascade decay from the heavier neutralinos or charginos to their lighter companions can take place. This is known to occur for instance for some mSUGRA points, but its rate is found not to be substantial when an important branching ratio into invisibles occurs. Even if it were substantial it would be difficult to reconstruct the Higgs since again at the end of the chain the Higgs will decay predominantly invisibly. Considering the dire effects of a large invisible branching ratio occurring for rather light neutralinos, we have investigated the astrophysical consequences of such scenarios, specifically the contribution of such light neutralinos to the relic density. We find that these models require rather light slepton masses. In turn, with such light sleptons, neutralino production at LEP2 provides much more restrictive constraints than the chargino cross sections. Taking into account the latter constraints helps rescue some of the Higgs signals.

The paper is organised as follows. In the next section we introduce our notation for the chargino-neutralino sector and make some qualitative remarks concerning the coupling of the lightest Higgs as well as that of the $`Z`$ to this sector. This will help understand some of the features of our results. In section 2 we review the experimental constraints and discuss how these are to be interpreted within a general supersymmetric model. Section 3 presents the results for Higgs detection at the LHC within the assumption of the GUT relation for the gaugino masses. Section 4 analyses the “pathological” cases with a sneutrino almost degenerate with the chargino, leading to lower bounds on the chargino mass. Section 5 analyses how the picture changes when one relaxes the GUT-inspired gaugino masses constraint and the impact of the astrophysical constraints on the models that may jeopardise the Higgs search at the LHC. Section 6 summarises our analysis.

## 2 The physical parameters and the constraints

### 2.1 Physical parameters

When discussing the physics of charginos and neutralinos it is best to start by defining one’s notations and conventions. All our parameters are defined at the electroweak scale. The chargino mass matrix in the gaugino-higgsino basis is defined as $`\left(\begin{array}{cc}M_2& \sqrt{2}M_W\mathrm{cos}\beta \\ \sqrt{2}M_W\mathrm{sin}\beta & \mu \end{array}\right)`$ (2.3) where $`M_2`$ is the soft SUSY breaking mass term for the $`SU(2)`$ gaugino while $`\mu `$ is the so-called higgsino mass parameter whereas $`\mathrm{tan}\beta `$ is the ratio of the vacuum expectation values for the up and down Higgs fields. Likewise the neutralino mass matrix is defined as $`\left(\begin{array}{cccc}M_1& 0& -M_Z\mathrm{sin}\theta _W\mathrm{cos}\beta & M_Z\mathrm{sin}\theta _W\mathrm{sin}\beta \\ 0& M_2& M_Z\mathrm{cos}\theta _W\mathrm{cos}\beta & -M_Z\mathrm{cos}\theta _W\mathrm{sin}\beta \\ -M_Z\mathrm{sin}\theta _W\mathrm{cos}\beta & M_Z\mathrm{cos}\theta _W\mathrm{cos}\beta & 0& -\mu \\ M_Z\mathrm{sin}\theta _W\mathrm{sin}\beta & -M_Z\mathrm{cos}\theta _W\mathrm{sin}\beta & -\mu & 0\end{array}\right)`$ (2.8) where the first entry $`M_1`$ (corresponding to the bino component) is the $`U(1)`$ gaugino mass.
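For orientation, the tree-level spectra following from Eqs. (2.3) and (2.8) are easily obtained numerically. The sketch below is our own illustration (the input values are arbitrary): the physical chargino masses are the singular values of the non-symmetric matrix (2.3), while the real symmetric neutralino matrix (2.8) can simply be eigendecomposed.

```python
import numpy as np

def spectra(M1, M2, mu, tan_beta, MZ=91.19, MW=80.36, sw2=0.231):
    """Tree-level chargino and neutralino masses (GeV) from Eqs. (2.3) and (2.8)."""
    b = np.arctan(tan_beta)
    sb, cb = np.sin(b), np.cos(b)
    sw, cw = np.sqrt(sw2), np.sqrt(1.0 - sw2)

    # chargino matrix, Eq. (2.3); the two physical masses are its singular values
    Xc = np.array([[M2, np.sqrt(2.0) * MW * cb],
                   [np.sqrt(2.0) * MW * sb, mu]])
    m_char = np.sort(np.linalg.svd(Xc, compute_uv=False))

    # neutralino matrix, Eq. (2.8); masses are the moduli of the eigenvalues
    Xn = np.array([[M1, 0.0, -MZ * sw * cb, MZ * sw * sb],
                   [0.0, M2, MZ * cw * cb, -MZ * cw * sb],
                   [-MZ * sw * cb, MZ * cw * cb, 0.0, -mu],
                   [MZ * sw * sb, -MZ * cw * sb, -mu, 0.0]])
    m_neut = np.sort(np.abs(np.linalg.eigvalsh(Xn)))
    return m_char, m_neut

m_char, m_neut = spectra(M1=60.0, M2=120.0, mu=200.0, tan_beta=5.0)
print("charginos  :", np.round(m_char, 1))
print("neutralinos:", np.round(m_neut, 1))
```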
The oft-used gaugino mass unification condition corresponds to the assumption $`M_1={\displaystyle \frac{5}{3}}\mathrm{tan}^2\theta _WM_2\simeq M_2/2`$ (2.9) Then constraints from the charginos alone can be easily translated into constraints on the neutralino sector. Relaxing Eq. 2.9, or removing any relation between $`M_1`$ and $`M_2`$, means that one needs further observables specific to the neutralino sector.

The other parameters that appear in our analysis emerge from the Higgs sector. We base our study on the results and prescription of for the improved two-loop calculations based on the effective Lagrangian<sup>3</sup><sup>3</sup>3There is now a two-loop diagrammatic calculation which is in good agreement with an updated version of the two-loop effective Lagrangian approach. The parameters here are, apart from the ubiquitous $`\mathrm{tan}\beta `$, the mass of the pseudo-scalar Higgs, $`M_A`$, $`A_t`$ the trilinear mixing parameter in the stop sector, as well as $`M_S`$, a supersymmetric scale that may be associated to the scale in the stop sector. Since we want to delimit the problem compared to our previous study on the stop effects, we will set the stop masses (and all other squarks) to $`1TeV`$. We will also be working in the decoupling limit of large $`M_A`$, which we also set at 1 TeV. The lightest Higgs mass is then larger than if we had taken a lower $`M_A`$. As we will see, the most important effect that results in a small branching ratio for the two-photon width is when the invisible decay opens. This occurs if one has enough phase space and therefore if the mass of the Higgs is made as large as possible. Thus for a given $`\mathrm{tan}\beta `$ the effect is maximal for what is called maximal mixing: $`A_t\simeq \sqrt{6}M_S`$ in the implementation of . One would also think that one should make $`\mathrm{tan}\beta `$ large; however, this parameter also controls the masses of the neutralinos, and for the configurations of interest, those leading to the largest drops in the two-photon signal, one needs to keep $`\mathrm{tan}\beta `$ as low as possible to have the lightest neutralino as light as possible.

In principle we would have liked to decouple all other sparticles, specifically sfermions, as stated in the introduction. However slepton (in particular selectron and sneutrino) masses also determine the cross sections and the decay signatures of the charginos and the neutralinos. Therefore, allowing for smaller sfermion masses does not so much directly affect the two-photon width but can relax quite a bit some of the limits on the chargino-neutralino sector, which in turn affect the Higgs search. We thus allow for this kind of indirect dependence on the sfermion mass. Often, especially in the case of neutralinos, LEP analyses set absolute bounds on masses. Ideally, since one is using bounds that are essentially set from the couplings of neutralinos to gauge bosons, to translate to couplings of these neutralinos and charginos to the Higgs, one needs to have access to the full parameter space $`\mu ,\mathrm{tan}\beta ,M_1,M_2`$. Thus absolute bounds are only indicative and it is much more informative to reinterpret the data. In the case of limits set solely from the chargino data, the re-interpretation is quite straightforward since no assumption on the parameters in Eq. 2.3 is made and the limits ensue from $`e^+e^{-}\to \stackrel{~}{\chi }_1^+\stackrel{~}{\chi }_1^{-}`$. Limits on the neutralinos are a bit more involved.
To make some of these points clearer and to help understand some of our results it is worth reviewing the couplings to neutralinos.

### 2.2 Couplings of Neutralinos to the Higgs and $`Z`$

The width of the lightest Higgs to the lightest neutralinos reads $`\mathrm{\Gamma }(h\to \stackrel{~}{\chi }_1^0\stackrel{~}{\chi }_1^0)={\displaystyle \frac{G_FM_W^2m_h}{2\sqrt{2}\pi }}(1-4m_{\stackrel{~}{\chi }_1^0}^2/m_h^2)^{3/2}|C_{h\stackrel{~}{\chi }_1^0\stackrel{~}{\chi }_1^0}|^2`$ (2.10) where $`C_{h\stackrel{~}{\chi }_1^0\stackrel{~}{\chi }_1^0}`$ $`=`$ $`(O_{12}^N-\mathrm{tan}\theta _WO_{11}^N)(\mathrm{sin}\alpha O_{13}^N+\mathrm{cos}\alpha O_{14}^N)`$ (2.11) $`\simeq `$ $`(O_{12}^N-\mathrm{tan}\theta _WO_{11}^N)(\mathrm{sin}\beta O_{14}^N-\mathrm{cos}\beta O_{13}^N)`$ $`O_{ij}^N`$ are the elements of the orthogonal (we assume $`𝒞𝒫`$ conservation) matrix which diagonalizes the neutralino mass matrix. $`\alpha `$ is the angle that enters the diagonalization of the CP-even neutral Higgses, which in the decoupling limit (large $`M_A`$ and ignoring radiative corrections) is trivially related to the angle $`\beta `$. $`|O_{1j}^N|^2`$ defines the composition of the lightest neutralino $`\stackrel{~}{\chi }_1^0`$. For instance $`|O_{11}^N|^2`$ is the bino purity and $`|O_{11}^N|^2+|O_{12}^N|^2`$ is the gaugino purity. It is clear then, apart from phase space, that the LSP has to be a mixture of gaugino and higgsino in order to have a large enough coupling to the Higgs. The same applies for the diagonal coupling of the charginos ($`h\chi _i^{-}\chi _i^+`$).

In Fig. 1 we show the strength $`C_{h\stackrel{~}{\chi }_1^0\stackrel{~}{\chi }_1^0}^2`$ assuming the GUT unification condition between $`M_1`$ and $`M_2`$ for $`\mathrm{tan}\beta =5`$ and $`\mathrm{tan}\beta =15`$. One should note that the coupling is much larger for positive values of $`\mu `$. The largest effect (peak) occurs for small values of $`\mu `$ and $`M_2`$, which however are ruled out by LEP data on the chargino mass. Note also, by comparing the $`\mathrm{tan}\beta =5`$ and $`\mathrm{tan}\beta =15`$ cases in Fig. 1, that especially for $`\mu >0`$, as $`\mathrm{tan}\beta `$ increases the Higgs coupling to the LSP gets smaller. At the same time the neutralino LSP gets heavier. Thus large $`\mathrm{tan}\beta `$ values corresponding to higher Higgs masses will not lead to the largest $`h\to \stackrel{~}{\chi }_1^0\stackrel{~}{\chi }_1^0`$. Similar behaviour is also observed for the coupling of the chargino to the Higgs ($`h\chi _i^{-}\chi _i^+`$): the largest coupling sits in the $`\mu >0`$ and small $`M_2`$ region. However, it turns out that the effect of charginos in the loop never becomes very large. As seen in Fig. 2 for the case with $`M_1=M_2/10`$, the same kind of behaviour persists: $`\mu >0`$ and moderate $`\mathrm{tan}\beta `$ lead to stronger couplings.

On the other hand, the constraints on $`M_2,M_1,\mu `$ which are derived for instance from neutralino production are more sensitive to the higgsino component of the neutralino. Indeed the $`Z`$ coupling to these writes $`Z^0\chi _i^0\chi _j^0\propto (O_{i3}^NO_{j3}^N-O_{i4}^NO_{j4}^N)^2`$ (2.12) Chargino production in $`e^+e^{-}`$ is not as critically dependent on the amount of mixing since both the wino and (charged) higgsino components couple to the $`Z`$ and the photon. Some interference with the t-channel sneutrino exchange may occur in the case of a wino component (i.e. $`|\mu |\gg M_2`$), therefore the kinematic limit can be reached quite easily, except in the situation where the signature of the chargino leads to almost invisible decay products.
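Given the diagonalizing matrix $`O^N`$, Eqs. (2.10)–(2.11) amount to a few lines of code. Below is a sketch of ours in the decoupling limit; the $`O^N`$ row is a made-up illustration, not a fitted point, and the $`M_W^2`$ normalisation of Eq. (2.10) is assumed.

```python
import numpy as np

GF, MW, SW2 = 1.16637e-5, 80.36, 0.231   # GeV^-2, GeV, sin^2(theta_W)

def width_h_lsp(ON_row, m_lsp, mh, tan_beta):
    """Gamma(h -> chi_1^0 chi_1^0) in GeV, Eqs. (2.10)-(2.11), decoupling limit."""
    if mh <= 2.0 * m_lsp:
        return 0.0
    b = np.arctan(tan_beta)
    tw = np.sqrt(SW2 / (1.0 - SW2))
    O11, O12, O13, O14 = ON_row
    C = (O12 - tw * O11) * (np.sin(b) * O14 - np.cos(b) * O13)   # Eq. (2.11)
    phase = (1.0 - 4.0 * m_lsp**2 / mh**2) ** 1.5
    return GF * MW**2 * mh / (2.0 * np.sqrt(2.0) * np.pi) * phase * C**2

# illustrative mixed bino-higgsino LSP (hypothetical, normalized eigenvector row)
row = np.array([0.90, -0.10, 0.35, -0.20])
row /= np.linalg.norm(row)
print(f"Gamma(h -> inv) = {width_h_lsp(row, 35.0, 115.0, 5.0) * 1e3:.1f} MeV")
```

Widths of this order easily compete with the total visible width of an intermediate mass Higgs, which is why the invisible branching ratio can become dominant.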
### 2.3 Accelerator Constraints

This brings us to how we have set the constraints.

• Higgs mass: In the scenarios we are considering, with large $`M_A`$ and large stop masses, the $`ZZh`$ coupling is essentially SM-like and the LEP2 limits on the mass of the SM Higgs apply with little change even for an invisible Higgs. In any case, as discussed earlier, to make the chargino-neutralino effect most dramatic we will always try to maximise the Higgs mass independently of $`\mathrm{tan}\beta `$ by choosing an appropriate $`A_t`$. The LEP2 mass limits are thus always evaded. For $`M_1=M_2/10`$ we stick to $`\mathrm{tan}\beta =5`$; considering that there is always enough phase space for $`h\to \stackrel{~}{\chi }_1^0\stackrel{~}{\chi }_1^0`$, it is sufficient to discuss the case with $`A_t=0`$. For the canonical unification case, the effect of maximising the Higgs mass through $`A_t`$ is crucial.

• Chargino cross section: Typically, when no sparticle is degenerate with the chargino, the lower limit on the chargino mass reaches the LEP2 kinematic limit independently of the exact composition of the chargino and does not depend much on the sneutrino mass, as explained earlier. Latest LEP data give $`m_{\chi _1^+}\ge 94.5GeV.`$ (2.13) Very recent combined preliminary data suggest $`m_{\chi _1^+}\ge 100.5GeV`$. We will also comment on how our results can change by imposing this latter limit.

Degeneracy with the LSP: Even when slepton masses can be large, in which case the chargino cross section is larger, the chargino mass constraint weakens by a few GeV when the lightest chargino and neutralino are almost degenerate. The $`\stackrel{~}{\chi }_1^+\to \stackrel{~}{\chi }_1^0f\overline{f}^{\prime }`$ decay leads to soft “visible” products that are difficult to detect. Recent LEP data have greatly improved the limits in this small $`\mathrm{\Delta }M_{\stackrel{~}{\chi }_1^+\stackrel{~}{\chi }_1^0}`$ mass difference region. However, within the assumption of gaugino mass unification, the highly degenerate case with a light chargino/neutralino occurs in the region $`\mu \ll M_2`$, $`M_2\gtrsim 2`$ TeV. In this region the light (and degenerate) neutralino and chargino are almost purely higgsino and therefore, as seen from Eq. 2.11, do not couple strongly to the Higgs. Their effect on the Higgs invisible width as well as indirectly on the two-photon width is negligible. We will not consider this case.

Degeneracy with a light sneutrino: There is another degeneracy which is of more concern to us. It occurs for small slepton masses that are almost degenerate with the chargino, rendering the dominant two-body decay mode $`\chi ^+\to \stackrel{~}{\nu }l^+`$ undetectable (the three flavours of sneutrinos are also degenerate). When this occurs, for $`\mathrm{\Delta }_{\mathrm{deg}}=m_{\stackrel{~}{\chi }_1^+}-m_{\stackrel{~}{\nu _e}}<3GeV`$, neutralino production is also of no use since the neutralinos will also decay into invisible sneutrinos. Since SU(2) relates the mass of the sneutrinos to that of the left selectrons, the search for the latter will then set a limit on the charginos in this scenario. The explorable mass of the selectron is a few GeV from the LEP2 kinematical limit. In fact the LEP Collaborations make a stronger assumption to relate the mass of the sneutrinos to those of the selectrons. Left and right slepton masses are calculated according to a mSUGRA scenario by taking a common scalar mass, $`m_0`$, defined at the GUT scale.
This gives $`m_{\stackrel{~}{e}_R}^2`$ $`=`$ $`m_0^2+0.15M_{1/2}^2-\mathrm{sin}^2\theta _WD_z`$ $`m_{\stackrel{~}{e}_L}^2`$ $`=`$ $`m_0^2+0.52M_{1/2}^2-(.5-\mathrm{sin}^2\theta _W)D_z`$ $`m_{\stackrel{~}{\nu _e}}^2`$ $`=`$ $`m_0^2+0.52M_{1/2}^2+D_z/2\mathrm{with}`$ $`D_z`$ $`=`$ $`M_Z^2\mathrm{cos}(2\beta )`$ (2.14) where $`M_{1/2}`$ is the common gaugino mass at the GUT scale also, which we can relate to the $`SU(2)`$ gaugino mass as $`M_2\simeq 0.825M_{1/2}`$. With these assumptions, $`m_{\stackrel{~}{e}_R}`$ gives the best limit. One thus arrives at a limit $`m_{\chi _1^+}\simeq m_{\stackrel{~}{\nu _e}}\ge 70GeV(\mathrm{tan}\beta =5).`$ (2.15) The above reduction in the chargino mass limit compared to Eq. 2.13 will have dramatic effects on the Higgs two-photon width. In this very contrived scenario the conclusions we will reach differ significantly from the general case; it will be discussed separately in section 4.

• Neutralino Production and decays: LEP2 also provides a constraint on the mass of the neutralino LSP from the search for a pair of neutralinos, specifically $`e^+e^{-}\to \chi _i^0\chi _j^0`$. This constraint is relevant for the small $`(\mu ,M_2)`$ region and also when we relax the unification condition. We have implemented the neutralino constraint by comparing the cross-section for neutralino production with the tables containing the upper limit on the production cross-section for $`\chi _1^0\chi _j^0`$ obtained by the L3 collaboration at $`\sqrt{s}=189GeV`$. These tables give an upper limit on the cross-section for the full range of kinematically accessible $`\stackrel{~}{\chi }_1^0+\stackrel{~}{\chi }_2^0`$ masses. The limits depend in a non-trivial manner on the masses of the produced particles. Moreover the limits are slightly different depending on whether one assumes purely hadronic final states from the decay of the heavier neutralino or whether one assumes leptonic final states. <sup>4</sup><sup>4</sup>4The combination of the various selections, the leptonic and hadronic ones, is an a priori optimisation using Monte Carlo signal and background events. This optimisation procedure is defined to maximise the signal efficiency (including the leptonic and hadronic branching ratios) and minimise the background contamination. This consists in the minimisation of $`\kappa `$, expressed mathematically by: $`\kappa =\mathrm{\Sigma }_{n=0}^{\mathrm{\infty }}k(b)_nP(b,n)/ϵ`$, where $`k(b)_n`$ is the $`95\%`$ confidence level Bayesian upper limit, $`P(b,n)`$ is the Poisson distribution for $`n`$ events with an expected background of $`b`$ events, and $`ϵ`$ is the signal efficiency including the branching ratios. Under the same assumptions we have also used these tables for setting the upper limit on $`\stackrel{~}{\chi }_1^0\stackrel{~}{\chi }_3^0`$ production. In all models where gaugino mass unification is imposed, the virtual Z decay mode, $`\chi _{2,3}^0\to \stackrel{~}{\chi }_1^0Z^{*}`$, constitutes the main decay mode when the neutralinos are light enough to be accessible at LEP2. In models where the gaugino mass unification is relaxed and very light neutralinos exist, as will be discussed in section “no-unification”, other decay channels may open up, for example $`\stackrel{~}{\chi }_3^0\to \stackrel{~}{\chi }_1^0h`$. The analysis, and hence the derived constraints, is made more complicated if one allows for light sleptons, as will be suggested by cosmology in these models.
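Eq. (2.14) gives the slepton spectrum in closed form once $`m_0`$ and $`M_{1/2}`$ are fixed; the following is a small helper of ours with purely illustrative inputs:

```python
import numpy as np

MZ2, SW2 = 91.19**2, 0.231

def slepton_masses(m0, M2, tan_beta):
    """Slepton masses (GeV) from Eq. (2.14), using M_2 = 0.825 M_1/2."""
    M12sq = (M2 / 0.825) ** 2
    Dz = MZ2 * np.cos(2.0 * np.arctan(tan_beta))   # negative for tan(beta) > 1
    m_eR = np.sqrt(m0**2 + 0.15 * M12sq - SW2 * Dz)
    m_eL = np.sqrt(m0**2 + 0.52 * M12sq - (0.5 - SW2) * Dz)
    m_sn = np.sqrt(m0**2 + 0.52 * M12sq + 0.5 * Dz)
    return m_eR, m_eL, m_sn

m_eR, m_eL, m_sn = slepton_masses(m0=80.0, M2=120.0, tan_beta=5.0)
print(f"e_R = {m_eR:.1f} GeV, e_L = {m_eL:.1f} GeV, sneutrino = {m_sn:.1f} GeV")
```

Note how the D-term makes the sneutrino the lightest of the left-handed states, which is what drives the chargino-sneutrino degeneracy discussed above.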
Though light sleptons enhance the neutralino cross section quite significantly, in the case of left sleptons the efficiency is degraded, because the branching ratio of the heavier neutralino into invisibles (through a three-body decay, or even the two-body decay $`\stackrel{~}{\chi }_2^0\to \nu \stackrel{~}{\nu }`$) may be important. As just discussed, one also needs to take into account the various branching ratios of the neutralinos and charginos. These were also needed when considering production of neutralinos and charginos at the LHC. We have taken into account all two-body and three-body decay modes of the gauginos , including fermionic and bosonic final states, $`\chi _j^0\to \chi _i^0Z,\chi _i^0h,\chi ^\pm W^{\mp },\stackrel{~}{l}l`$ and $`\chi _j^+\to \chi _i^+Z,\chi _1^+h,\chi _j^0W^+,\stackrel{~}{l}l,\stackrel{~}{\nu }\nu `$ . The analytical formulas were checked against the outputs of programs for automatic calculations such as GRACE and COMPHEP . For channels involving a Higgs boson, the radiatively corrected Higgs mass as well as the Higgs couplings, $`\mathrm{sin}\alpha `$, following the same implementation as in , were used. $``$ Invisible width of the $`Z`$ and single photon cross section at LEP2: In the case where we lift the unification condition, which leads to rather small neutralino masses kinematically accessible through $`Z`$ decays, we have imposed the limit on the invisible width of the $`Z`$: $`\mathrm{\Gamma }_{\mathrm{inv}}^Z=\mathrm{\Gamma }(Z\to \stackrel{~}{\chi }_1^0\stackrel{~}{\chi }_1^0)<3MeV`$ (2.16) In view of the limits on the single photon cross section, which can be translated into limits on $`\sigma (e^+e^-\to \stackrel{~}{\chi }_1^0\stackrel{~}{\chi }_1^0\gamma )`$ with cuts on the photon such that $`E_\gamma >5GeV`$ and $`\theta _{\mathrm{beam}\gamma }>10^{}`$, we used $`\sigma (e^+e^-\to \stackrel{~}{\chi }_1^0\stackrel{~}{\chi }_1^0\gamma )<.1pb`$ at $`\sqrt{s}=189GeV`$. In fact L3 gives a limit of $`.3pb`$; foreseeing that similar analyses will be performed by the other collaborations and the results combined, we took $`.1`$pb. However, this constraint turned out not to be of much help. ### 2.4 Cosmological Constraints Scenarios with $`M_1=M_2/10`$ have a very light neutralino LSP into which the Higgs can decay, suppressing quite strongly its visible modes. Accelerator limits still allow for such a possibility. However, it is known that such a very light neutralino LSP can contribute quite substantially to the relic density unless some sfermions are light. In the last few years, constraints on the cosmological parameters that enter the calculation of the relic density have improved substantially. Various observations suggest taking as a benchmark $`\mathrm{\Omega }_\chi h_0^2<.3`$, where we identify $`\mathrm{\Omega }_\chi `$ with the fraction of the critical energy density provided by neutralinos and $`h_0`$ is the Hubble constant in units of 100 km sec<sup>-1</sup> Mpc<sup>-1</sup>. This constraint is quite consistent with limits on the age of the Universe, the measurements of $`h_0`$ , the measurements of the lower multipole moment power spectrum from CMB data and the determination of $`\mathrm{\Omega }_{\mathrm{matter}}`$ from rich clusters; see for reviews. It is also, independently, supported by data from type Ia supernovae indicative of a cosmological constant.
For illustrative purposes, and to show how sensitive one is to this constraint, we will also consider a higher value, $`\mathrm{\Omega }_\chi h_0^2<.6`$, that may be entertained if one relies on some mild assumptions based on the age of the Universe and the CMB result only. In this scenario the calculation of the relic density is rather simple, since one only has to take into account annihilations into the lightest fermions. In keeping with our analysis, we required all squarks to be heavy but allowed the sleptons to be light. To calculate the relic abundance we have relied on a code whose characteristics are outlined in . To help with the discussion we will also give an approximate formula that agrees to better than $`30\%`$ with the results of the full calculation. ### 2.5 LHC Observables The principal observables we are interested in are those related to Higgs production and decay. Since we are only considering the effects of non-coloured particles and are in a regime of large $`M_A`$, all the usual production mechanisms (inclusive direct production through gluon-gluon fusion as well as the associated production $`W(Z)h`$ and $`t\overline{t}h`$) are hardly affected compared to a SM Higgs with the same mass. Contrary to the indirect effects of light stops and/or sbottoms, the main effects we study in this paper affect only the decays of the Higgs. The main signature into photons is affected both by the indirect loop effects of light enough charginos (and in some cases sleptons) and by the possible opening up of the Higgs decay into neutralinos. When the latter is open it leads to the most drastic effects, reducing both the branching into photons and that into $`b\overline{b}`$, hence posing a problem even for the search in $`t\overline{t}h`$ with $`h\to b\overline{b}`$. To quantify the changes in the branching ratios we define, as in , the ratio of the Higgs branching ratio into photons normalised to that of the SM, defined for the same value of the Higgs mass: $`R_{\gamma \gamma }={\displaystyle \frac{BR^{SUSY}(h\to \gamma \gamma )}{BR^{SM}(h\to \gamma \gamma )}}`$ (2.17) Likewise for the branching ratio into $`b\overline{b}`$ $`R_{b\overline{b}}={\displaystyle \frac{BR^{SUSY}(h\to b\overline{b})}{BR^{SM}(h\to b\overline{b})}}`$ (2.18) The latter signature for the Higgs has only recently been analysed within a full ATLAS simulation and found to be very useful for associated $`t\overline{t}h`$ production, but only for $`m_h<120`$GeV. With $`100`$fb<sup>-1</sup> the significance for a SM Higgs with $`m_h=100`$GeV is 6.4, but it drops to only $`3.9`$ for $`m_h=120`$GeV. Since this is the range of Higgs masses that will interest us, we will consider a drop corresponding to $`R_{b\overline{b}}=.7`$ to mean a loss of this signal. As concerns the two-photon signal, we take $`R_{\gamma \gamma }<.6`$ as a benchmark for this range of Higgs masses. This is a middle-of-the-road value between the significances given by ATLAS and the more optimistic CMS simulations. For the computation of the various branching ratios of the Higgs and its couplings we rely on HDECAY , in which the Higgs masses are determined following the two-loop renormalisation group approach.
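The two benchmarks can be expressed as a trivial filter on model points. A minimal sketch, assuming the SUSY and SM branching ratios have already been obtained elsewhere (e.g. from HDECAY); all numbers below are invented placeholders, not results from the paper.

```python
def r_ratio(br_susy, br_sm):
    """R_X = BR_SUSY(h -> X) / BR_SM(h -> X), as in Eqs. 2.17-2.18."""
    return br_susy / br_sm

def classify(r_gamgam, r_bb):
    """Apply the observability benchmarks used in the text for m_h ~ 100-120 GeV."""
    flags = []
    if r_gamgam < 0.6:          # benchmark for the inclusive two-photon signal
        flags.append("two-photon signal likely lost")
    if r_bb < 0.7:              # benchmark for t-tbar-h with h -> b-bbar
        flags.append("t-tbar-h (b-bbar) signal likely lost")
    return flags or ["both signals retained"]

# Placeholder branching ratios for one hypothetical parameter-space point:
print(classify(r_ratio(1.1e-3, 2.0e-3), r_ratio(0.45, 0.75)))
```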
Since appreciable effects in the Higgs search occur for relatively light spectra, the light particles should also be produced at an appreciable rate at the LHC, even though these are electroweak processes. We have calculated, at leading order, all the associated chargino and neutralino cross sections $`pp\to \chi _i^\pm \chi _j^0,i=1,2;j=1,2,3,4`$ (2.19) Neutralino pair production<sup>5</sup><sup>5</sup>5K-factors for these processes have been computed in . $`pp\to \chi _j^0\chi _k^0`$ is much smaller with the heavy squark masses that we assume. These processes have been calculated with the help of CompHEP. For the structure functions we use CTEQ4M at a scale $`Q^2=\widehat{s}/4`$. It is also possible for the heaviest of these neutralinos to cascade into the lighter ones and the lightest Higgs. We have therefore calculated the branching ratios of all the charginos and neutralinos. In principle, other means of neutralino/chargino production are possible through cascade decays of heavy squarks, if these are not too heavy to be produced at the LHC. ## 3 Gaugino masses unified à la GUT ### 3.1 The available parameter space In the case of no degeneracy of the lightest chargino with the sneutrino, the constraint comes essentially from the chargino cross section. With heavy sleptons, neutralino production does not constrain the parameter space any further. Therefore the $`\mathrm{tan}\beta `$-independent limit of Eq. 2.13 applies. All these limits map into the $`M_2-\mu `$ parameter space for a specific $`\mathrm{tan}\beta `$. The available parameter space for $`\mathrm{tan}\beta =5,30`$ is shown in Fig. 3. The absolute limit on the lightest neutralino for $`\mathrm{tan}\beta =5`$ turns out to be: $`m_{\stackrel{~}{\chi }_1^0}\geq 47.5GeV(\mathrm{tan}\beta =5).`$ (3.20) Therefore in the non-degenerate case there is a very small window for the Higgs to decay into neutralinos. At the lower limit on the neutralino mass, the reduction factor brought about by the $`\beta ^3`$ P-wave factor in Eq. 2.10 amounts to about $`.1`$ for $`m_h=109`$GeV. ### 3.2 The $`A_t`$-$`\mathrm{tan}\beta `$ dependence The above mass of the Higgs for $`\mathrm{tan}\beta =5`$ corresponds to vanishing mixing in the stop sector, $`A_t=0`$. Obviously, to maximise the effect of the neutralinos through the opening up of the Higgs decay into neutralinos, one should increase the mass of the Higgs. We have already taken $`M_A=M_S=m_{\stackrel{~}{t}}=m_{\stackrel{~}{g}}=1TeV`$. We can therefore increase $`A_t`$ and $`\mathrm{tan}\beta `$. However, increasing $`\mathrm{tan}\beta `$ also increases the neutralino masses and reduces the $`h\to \stackrel{~}{\chi }_1^0\stackrel{~}{\chi }_1^0`$ coupling, as we discussed earlier. Scanning over $`\mu (>0)`$, $`M_2`$ and $`\mathrm{tan}\beta `$, we show in Fig. 4 the extremal variation of $`R_{\gamma \gamma }`$ as a function of $`\mathrm{tan}\beta `$ for maximal mixing, taking the available constraints into account. We see that the maximum drop is for $`\mathrm{tan}\beta \simeq 5`$. Below this value of $`\mathrm{tan}\beta `$ the Higgs mass is small compared to the neutralino threshold, while above this value the LSP gets heavier “quicker” than does the Higgs. Moreover, the Higgs coupling to the LSP gets weaker as $`\mathrm{tan}\beta `$ increases. On the other hand, the increase $`R_{\gamma \gamma }>1`$ grows with smaller $`\mathrm{tan}\beta `$, but this is mainly due to the loop effects of the charginos. Also, as expected, the variation with $`A_t`$ affects essentially the maximal reduction curve. This said, let us not forget that, especially in the two-photon signal at the LHC, the significance increases with increasing Higgs mass.
One can already conclude, on the basis of Fig. 4 and our benchmark $`R_{\gamma \gamma }>.6`$, that the critical regions are those with moderate $`\mathrm{tan}\beta `$, $`\mathrm{tan}\beta \simeq 5`$, and maximal stop mixing. ### 3.3 The case with $`A_t=0`$ and $`\mathrm{tan}\beta =5`$ We now go into more detail and choose $`\mathrm{tan}\beta =5`$ in the case of no mixing. The results are summarised in Fig. 5. First of all, note that in this scenario the ratio $`R_{\gamma \gamma }`$ can vary by at most $`15\%`$, and that this can be either a slight increase or a slight decrease. Contrary to what we will see for other scenarios, the largest drop occurs for negative values of $`\mu `$ and is due to the contribution of the light charginos to the two-photon width (see also the dependence on $`m_{\stackrel{~}{\chi }_1^+}`$ and $`M_2`$). The sign of $`\mu `$ is also that of the interference between the dominant $`W`$ loop and the chargino loop contribution. A decrease for positive $`\mu `$ is strongly correlated with the opening up of the little window for $`h\to \stackrel{~}{\chi }_1^0\stackrel{~}{\chi }_1^0`$. The latter channel leads to a branching ratio which is at most some $`20\%`$. When this occurs (only for positive $`\mu `$) it will also affect the branching into $`b\overline{b}`$ and thus the channel $`t\overline{t}h\to t\overline{t}b\overline{b}`$. However, with our benchmark for observability of the Higgs in this channel, $`R_{b\overline{b}}>.7`$, the Higgs should still be observed there. At this stage one can conclude that the effect of light charginos/neutralinos, especially in view of the theoretical uncertainty (higher order QCD corrections) in predicting the signal, is very modest. Furthermore, the small window for the Higgs decaying into the LSP will be almost closed, at least at $`\mathrm{tan}\beta =5`$, by an increase of a few GeV in the lower limit on charginos. ### 3.4 The case with maximal $`A_t`$ and $`\mathrm{tan}\beta =5`$ Increasing the mass of the Higgs through as large an $`A_t`$ as possible for the same value of $`\mathrm{tan}\beta `$ changes the picture quite substantially. With our implementation of the corrections to the Higgs mass, the increase is about 10GeV and leaves enough room for $`h\to \stackrel{~}{\chi }_1^0\stackrel{~}{\chi }_1^0`$ in the small $`\mu -M_2`$ region. In this case the two-photon rate and the $`h\to \stackrel{~}{\chi }_1^0\stackrel{~}{\chi }_1^0`$ branching ratio are well correlated, as shown in Fig. 6a, the result of a scan over the parameters $`M_2=50`$-$`300`$ GeV, $`\mu =100`$-$`500`$ GeV for $`\mathrm{tan}\beta =5`$ in the maximal mixing case, $`A_t=2.4`$ TeV. A scan over a wider range, $`M_2<2`$ TeV and $`|\mu |<1`$ TeV, was also performed. The points at larger values of $`M_2`$ and $`\mu `$ all cluster around $`R_{\gamma \gamma }\simeq 1`$, allowing for only a few percent fluctuations. The Higgs branching ratio into neutralinos can reach as much as $`40\%`$, leading to a reduction of $`R_{\gamma \gamma }`$ and $`R_{b\overline{b}}`$ to about $`60\%`$ of their SM values. This means that there might be problems with Higgs detection, especially in the $`t\overline{t}h`$ channel. The contour plots of constant $`h\to \stackrel{~}{\chi }_1^0\stackrel{~}{\chi }_1^0`$ in the $`M_2-\mu `$ plane are displayed in Fig. 6b. It is only in a small region, $`M_2\lesssim 160`$ GeV and $`\mu \lesssim 400`$ GeV, that $`h\to \stackrel{~}{\chi }_1^0\stackrel{~}{\chi }_1^0`$ exceeds 10%.
As the results presented here depend critically on the minimum allowed value of the lightest chargino and neutralino masses (see Figs. 6c-d), it is interesting to enquire about the consequence of an improved lower limit on the chargino mass from the last runs of LEP2. We have therefore imposed the constraint $`m_{\stackrel{~}{\chi }_1^+}\geq 100`$ GeV. As the maximum reduction occurs for the lightest allowed value of the chargino mass, an increase of just a few GeV has a drastic effect: $`R_{\gamma \gamma }`$ is then never reduced below about $`80\%`$ of its SM value. In conclusion, the effect of gauginos/higgsinos on the crucial branching ratios of the Higgs, when one assumes the unification condition and no degeneracy, will only be marginal at the LHC if LEP2 does not observe any charginos or neutralinos before the end of its final run. ### 3.5 Associated chargino and neutralino production at the LHC In our previous study of the effects of light stops on the Higgs search at the LHC, the reduction in the usual two-photon signal was due essentially to a drop in the main production mechanism through gluons and occurred when the stops developed strong couplings to the Higgs. When this occurs, one has, as a lever, large production of stops as well as associated stop-Higgs production, thus recovering a new mechanism for Higgs production. In the present case, uncovering a new effective Higgs production mechanism will be more complicated. First, the effects are due to weakly interacting particles whose cross sections at the LHC are smaller than those for stops. Also, since the largest drops occur when the branching ratio of the Higgs into invisibles is appreciable, even if one triggers Higgs production through charginos and neutralinos , the reconstruction of the Higgs will be more difficult. Nevertheless, one should enquire how large any additional production mechanism can get. In the present scenario, with a common gaugino mass at the GUT scale and no (accidental) degeneracy between the chargino and the sneutrinos, $`R_{\gamma \gamma }`$ (and $`R_{b\overline{b}}`$) being at worst $`.6`$ (for maximal mixing), the Higgs should be discovered in the usual channels. Moreover, since $`Br(h\to b\overline{b})`$ does not drop below about $`.6`$, we could use this signature in the cascade decays of the heavier neutralinos and charginos into the Higgs. Since the reduction in the usual inclusive two-photon channel always occurs in the small $`(M_2,\mu )`$ region, all gauginos are relatively light and therefore have reasonable production rates. In fact, as Fig. 7 shows, the rates are more than reasonable in the parameter space that leads to the largest drops. For instance, with $`M_2=140`$GeV, the cross section for $`\stackrel{~}{\chi }_2^0\chi _1^+`$ is about $`6`$pb and is mildly dependent on $`\mu `$, while production of $`\stackrel{~}{\chi }_4^0\chi _2^+`$ is some $`100`$fb (with $`m_{\stackrel{~}{\chi }_4^0}\simeq 250`$GeV) when $`R_{\gamma \gamma }=.6`$, and decreases quickly with increasing $`\mu `$ (where, however, $`R_{\gamma \gamma }`$ increases). With the first process, considering the rather large cross section, it should be possible, through measurements of the masses and some of the signatures of $`\stackrel{~}{\chi }_2^0`$ and $`\chi _2^+`$, to get some information on the parameters of the neutralinos and charginos<sup>6</sup><sup>6</sup>6See for instance .; we would then know that one might have some difficulty with the Higgs signal in the inclusive channel. As for the latter process, it has more chance of triggering the light Higgs than the former, since in our scenario there is not enough phase space for $`\stackrel{~}{\chi }_2^0\to \stackrel{~}{\chi }_1^0h`$.
The following modes are potentially interesting: $`\chi _4^0\to \chi _{1,2}^0h`$ and $`\chi _2^+\to \chi _1^+h`$. For the former, one obtains as much as a $`25\%`$ branching ratio for $`\stackrel{~}{\chi }_4^0\to h+\mathrm{anything}`$ when $`R_{\gamma \gamma }`$ is lowest, see Fig. 8. Much higher branching ratios are of course possible, but they occur for higher values of $`m_{\stackrel{~}{\chi }_4^0}`$, where there is no danger for Higgs discovery in the usual modes. Less effective, and not always open, is the mode $`\chi _3^0\to \chi _{1,2}^0h`$, where the branching ratio never exceeds a few percent. We are now in a position to fold the different branching ratios of the heavier neutralinos and chargino into the Higgs ($`h`$) with the corresponding cross sections, to obtain the yield of Higgs bosons in these channels. As advertised, for the parameters of interest we see from Fig. 9 that the largest cross sections originate from the decays of the heaviest neutralino $`\stackrel{~}{\chi }_4^0`$, while the chargino also helps. Still, the yield is quite modest, about $`20`$fb. It remains to be seen whether a full simulation with a reduced branching ratio of $`h`$ into b’s can dig out the Higgs signal from such cascade decays. We should make another remark. In , where $`\stackrel{~}{\chi }_2^0\to \stackrel{~}{\chi }_1^0h`$ with $`h\to b\overline{b}`$ is advocated, the neutralinos themselves are produced through cascade decays of gluinos and squarks, which can have large cross sections. In our case we have taken these to be as heavy as $`1`$TeV, and thus their cross section is rather modest. For instance, gluino pair production at the LHC with this mass is about $`.2`$pb. However, without much effect on the decoupling scenario we have assumed, had we taken $`m_{\stackrel{~}{g}}=500`$GeV, which incidentally corresponds to a situation where the gaugino mass unification extends also to $`M_3`$, the gluino cross section jumps to about $`20`$pb. So many gluinos could therefore, through cascade decays, provide an additional source of Higgs bosons. ## 4 Gaugino masses unified à la GUT and degenerate with sleptons In the so-called sneutrino-degenerate case, where charginos can be as light as $`70`$GeV, the absolute lower limit on the neutralino LSP mass is: $`m_{\chi _0}\geq 34.5GeV(\mathrm{tan}\beta =5).`$ (4.21) This lower bound rises by roughly 1GeV for $`\mathrm{tan}\beta =2.5`$ and never goes below $`34`$GeV for larger values of $`\mathrm{tan}\beta `$. We will only study the case with maximal $`A_t`$. ### 4.1 Results Relaxing the chargino mass limit by some $`20`$GeV has quite impressive effects that result in dramatic drops, see Fig. 10. The branching fraction into invisibles can be as large as $`90\%`$. In these situations the Higgs would clearly be difficult to hunt at the LHC in both the two-photon and (associated) $`b\overline{b}`$ channels. As seen from $`R_{\gamma \gamma }`$ vs $`m_{\stackrel{~}{\chi }_1^+}`$, there is an immediate fall for $`m_{\stackrel{~}{\chi }_1^+}<100`$GeV. But then this should be compensated by the production of plenty of charginos and sleptons, while some of the heavier neutralinos and the heavier chargino should still be visible. As indicated by Fig. 7, in this situation all charginos and all neutralinos will be produced with cross sections exceeding $`100`$fb; $`\chi _1^+`$ has a cross section in excess of $`10`$pb! These processes can trigger Higgs production. However, because of the decays into light sleptons, the rates are modest, as seen in Fig. 11. In fact, the largest rates occur when the Higgs has the largest branching ratio into invisibles. These modes will probably not help much.
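The Higgs yields quoted in this and the previous section are obtained by folding production cross sections with cascade branching ratios. A back-of-the-envelope version of that folding, using the representative values quoted in the text for $`\stackrel{~}{\chi }_4^0\chi _2^+`$ (100 fb and a 25% branching into $`h`$); the second channel entry is purely illustrative and not a number from the paper.

```python
# Fold production cross sections with cascade branching ratios into a Higgs yield.
# Representative values only, not a scan over the parameter space.
channels = {
    # process: (sigma in fb, BR(heavy gaugino -> h + anything))
    "chi4_0 chi2_+": (100.0, 0.25),
    "chi3_0 cascade": (150.0, 0.03),   # illustrative: the text says "a few percent" at most
}
yield_fb = sum(sigma * br for sigma, br in channels.values())
print(f"associated Higgs yield ~ {yield_fb:.0f} fb")   # same ballpark as the ~20 fb of Fig. 9
```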
## 5 Relaxing the gaugino mass unification As we have seen, light neutralinos can very much jeopardise Higgs discovery at the LHC. However, in the canonical model with $`M_1\simeq M_2/2`$ and no pathological degeneracy, the effect is never a threat. Basically this is because the almost model-independent limit on the chargino translates into values of $`M_2`$ (hence $`M_1`$) and $`\mu `$ large enough that the neutralinos are not light enough to contribute a large invisible Higgs width. On the other hand, if $`M_1`$ were made much smaller than $`M_2`$, one could make $`m_{\stackrel{~}{\chi }_1^0}`$ small enough without running into conflict with the chargino mass limits. The LSP could then be very light and almost pure bino. To make it couple to the Higgs, though, one still needs some higgsino component, and thus $`\mu `$ should not be too large. The largest couplings occur for the smallest values of $`\mu `$, which are, however, again constrained by the chargino mass limit, for instance. To investigate such scenarios we have studied the case with $`M_1=rM_2\mathrm{with}r=0.1`$ (5.22) and have limited ourselves to the case with $`\mathrm{tan}\beta =5`$. Models with $`r>1`$ would not affect the Higgs phenomenology at the LHC, since their lightest neutralino should have a mass of the order of that of the lightest chargino. LEP data already exclude a contribution of such a neutralino to the invisible width of the Higgs, and therefore the situation is much more favourable than the one we have just studied assuming the usual GUT relation. It is important to stress that the kind of models we investigate in this section is quite plausible. The GUT-scale relation which equates all the gaugino masses at the high scale need not be valid in a more general scheme of SUSY breaking. In fact, even within SUGRA this relation need not hold, since it requires the kinetic terms for the gauge superfields to be the simplest and most minimal possible (diagonal and equal). One can easily arrange for a departure from equality by allowing more general forms for the kinetic terms. In superstring models, although dilaton-dominated SUSY breaking leads to universal gaugino masses, moduli-dominated breaking or a mixture of moduli and dilaton fields also leads to non-universality of the gaugino masses and may or may not (multi-moduli) lead to universal scalar masses. The recent so-called anomaly-mediated SUSY breaking mechanisms are also characterised by non-universal gaugino masses, though most models in the literature lead rather to $`r>1`$, which is of no concern for the Higgs search. With $`r=1/10`$ the main feature is that the neutralino mass spectrum is quite different. Most importantly, the LSP has a mass in the range $`10`$-$`20`$GeV for the cases of interest. Since there is plenty of phase space for the decay of the lightest Higgs into such neutralinos, we will only consider $`A_t=0`$ for the stop mixing. ### 5.1 The available parameter space In the case of heavy sleptons we find that the allowed $`\mu -M_2`$ parameter space is still determined by the chargino mass limit through $`e^+e^-\to \chi _1^+\chi _1^-`$ production. Neutralino pair production, $`\stackrel{~}{\chi }_1^0\stackrel{~}{\chi }_2^0`$ and $`\stackrel{~}{\chi }_1^0\stackrel{~}{\chi }_3^0`$, although kinematically possible, does not squeeze the parameter space further. The contour plot, see Fig. 12, is therefore essentially the same as the one with the GUT relation.
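To see why $`r=1/10`$ yields a 10-20 GeV LSP while respecting the chargino limit, one can diagonalise the tree-level neutralino mass matrix in the (bino, wino, higgsino) basis. The sketch below uses the standard MSSM mass matrix, which is not spelled out in the text; the sample point for $`(M_2,\mu )`$ is arbitrary.

```python
import numpy as np

def neutralino_masses(M1, M2, mu, tan_beta, mz=91.19, sin2_thw=0.231):
    """Eigenvalues of the tree-level MSSM neutralino mass matrix
    in the (bino, wino, H_d higgsino, H_u higgsino) basis, in GeV."""
    sw, cw = np.sqrt(sin2_thw), np.sqrt(1.0 - sin2_thw)
    beta = np.arctan(tan_beta)
    cb, sb = np.cos(beta), np.sin(beta)
    m = np.array([
        [M1,            0.0,           -mz * sw * cb,  mz * sw * sb],
        [0.0,           M2,             mz * cw * cb, -mz * cw * sb],
        [-mz * sw * cb, mz * cw * cb,   0.0,          -mu],
        [ mz * sw * sb, -mz * cw * sb, -mu,            0.0],
    ])
    return np.sort(np.abs(np.linalg.eigvalsh(m)))   # physical masses

M2, mu, tb = 200.0, 250.0, 5.0
print("r = 1/2 :", neutralino_masses(M2 / 2.0, M2, mu, tb).round(1))
print("r = 1/10:", neutralino_masses(M2 / 10.0, M2, mu, tb).round(1))
```

With $`r=1/10`$ the lightest eigenvalue sits near $`M_1`$, i.e. well below the chargino scale set by $`M_2`$ and $`\mu `$, while a small higgsino admixture survives for moderate $`\mu `$.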
Since cosmological arguments will drive us to consider light slepton masses, we show in the same figure, Fig. 12, how the $`\mu -M_2`$ parameter space is squeezed in this case. The squeezing comes from limits on the $`\stackrel{~}{\chi }_1^0\stackrel{~}{\chi }_2^0`$ and $`\stackrel{~}{\chi }_1^0\stackrel{~}{\chi }_3^0`$ cross sections, properly folded with branching ratios in which two-body and three-body decays involving the relatively light sleptons play an important role. In fact, while light sleptons generally enhance the neutralino cross sections, this enhancement can be counterbalanced by the fact that a non-negligible branching ratio into invisible channels can occur for small enough left-selectron masses. In all cases the leptonic final-state signature can be enhanced at the expense of the hadronic signature, which usually has a better efficiency. To illustrate this, we have considered three cases: i) $`m_{\stackrel{~}{e}_R}=100`$GeV with large $`m_{\stackrel{~}{e}_L}`$; ii) $`m_{\stackrel{~}{e}_L}=150`$GeV with large $`m_{\stackrel{~}{e}_R}`$; iii) $`m_{\stackrel{~}{e}_R}=100,m_{\stackrel{~}{e}_L}=150`$GeV. One sees that, with a very mild $`M_2`$ dependence, light right selectrons eliminate the smallest $`|\mu |`$ values that are otherwise still allowed by chargino searches. That the $`\stackrel{~}{e}_R`$ do not cut into the $`M_2`$ values can be understood from the fact that they do not carry any $`SU(2)`$ charge. Since the smallest values of $`\mu `$ are the ones that enhance $`h\to \stackrel{~}{\chi }_1^0\stackrel{~}{\chi }_1^0`$, these limits are important. With light left selectrons the gain with respect to the chargino limit is appreciable and occurs across all $`M_2`$ values, more so for the smallest $`M_2`$ values. When both left and right selectrons are relatively light, one carves out an important region, although this region does not cover all the available neutralino phase space. ### 5.2 Heavy sleptons The main message is that there are some dangerous reductions in the branching ratios of the Higgs, both into photons and into $`b\overline{b}`$, which can be only $`1/5`$ of what they are in the SM, see Fig. 13. These drops are due essentially to a large branching ratio of the Higgs into invisibles. The most dramatic reductions occur for chargino masses at the edge of the LEP2 limits; however, even for chargino masses as high as $`200`$GeV the drop can reach $`60\%`$. In these configurations the lightest chargino and $`\stackrel{~}{\chi }_2^0,\stackrel{~}{\chi }_3^0`$ have a large higgsino component. This explains why, in the $`M_2-\mu `$ plane, the decrease in the ratios is strongly dependent on $`\mu `$. ### 5.3 Cosmological constraint Considering these large reductions and the fact that the LSP is very light, $`10`$-$`20`$GeV, we investigated whether the most dramatic scenarios are not in conflict with a too large relic density<sup>7</sup><sup>7</sup>7Cosmological consequences of non-unified gaugino masses have been investigated in , but not from the perspective followed in this paper.. One knows that for a very light bino LSP the annihilation cross section is dominated by the exchange of the sfermions with the largest hypercharge, that is, the right sleptons. This calls for light (right) sfermions. As a rule of thumb, with all sfermions heavy but the three right sleptons, an approximate requirement is $`m_{\stackrel{~}{l}_R}^2<10^3\sqrt{(\mathrm{\Omega }_\chi h^2)_{\mathrm{max}}}\times m_{\stackrel{~}{\chi }_1^0}.`$ (5.23) with all masses expressed in GeV.
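Inverted as an upper bound on the common right-slepton mass, Eq. 5.23 can be evaluated directly. In the sketch below, the LSP mass of 35 GeV is an illustrative input chosen to reproduce the bounds quoted later in the text, not a value derived here.

```python
import math

def slepton_bound(omega_h2_max, m_lsp):
    """Upper bound on the right-slepton mass from Eq. 5.23 (all masses in GeV)."""
    return math.sqrt(1.0e3 * math.sqrt(omega_h2_max) * m_lsp)

for omega in (0.2, 0.3, 0.6):
    # m_lsp = 35 GeV is an illustrative choice, not a result of the paper
    print(f"Omega h^2 < {omega}: m_slepton_R < {slepton_bound(omega, 35.0):.0f} GeV")
```

This reproduces the ~125, ~140 and ~160 GeV bounds discussed in the text for the three relic-density benchmarks.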
In our case the LSP is not a pure bino; the bino purity is around $`90\%`$ for the worst-case scenarios, otherwise it would not couple to the Higgs. We have therefore relied on a full calculation. We assumed all squarks heavy and took a common mass for the SUSY breaking sfermion mass terms of both left and right sleptons of all three generations, $`m_0`$, defined at the GUT scale, thus assuming unification for the scalar masses. As for the gaugino masses, to obtain $`M_1=M_2/10`$ at the electroweak scale one needs $`\overline{M}_1\simeq \overline{M}_2/5`$ at the GUT scale. $`\overline{M}_2`$ is the $`SU(2)`$ gaugino mass at the GUT scale, which again relates to $`M_2`$ at the electroweak scale as $`M_2\simeq 0.825\overline{M}_2`$. This scheme leads to almost no running of the right slepton mass, since the contribution from the running is of order $`M_1^2`$, while left sleptons have an added $`M_2^2`$ contribution and would then be “much heavier”. Indeed, neglecting Yukawa couplings one may write $`m_{\stackrel{~}{e}_R}^2`$ $`=`$ $`\overline{m}_0^2+0.006\overline{M}_2^2+\mathrm{sin}^2\theta _WD_z`$ $`m_{\stackrel{~}{e}_L}^2`$ $`=`$ $`\overline{m}_0^2+0.48\overline{M}_2^2-(.5-\mathrm{sin}^2\theta _W)D_z`$ $`m_{\stackrel{~}{\nu _e}}^2`$ $`=`$ $`\overline{m}_0^2+0.48\overline{M}_2^2+D_z/2`$ (5.24) Note in passing that Eq. 5.24 can be extended to squarks, and if we take $`M_3=r_3M_2`$ with $`r_3>1`$ at the GUT scale, one could make the squarks “naturally heavy”, as we have assumed. Note also in this respect that, had we not taken the squarks, specifically the stops, sufficiently heavy, we would not have had large enough radiative corrections to the Higgs mass and would have been in conflict with the LEP2 constraint on the Higgs mass. Since the limit on the relic density in these scenarios with $`M_1=M_2/10`$ essentially constrains the right slepton mass, one has an almost direct limit on $`m_0`$. Putting all this together, the parameter space still allowed by requiring that the relic density satisfy $`\mathrm{\Omega }_\chi h^2<.3`$, and by taking into account all the accelerator constraints listed in section 2, is shown in Fig. 14. The most important message is that the sleptons must be lighter than about $`140`$GeV. The approximate rule of thumb given by Eq. 5.23 is therefore quite good and explains the various behaviours in Fig. 14. Had we imposed a stricter constraint, $`\mathrm{\Omega }_\chi h^2<.2`$, we would have obtained $`m_{\stackrel{~}{e}_R}<125`$GeV. Even with the very mild constraint $`\mathrm{\Omega }_\chi h^2<.6`$, right selectron masses are below $`160`$GeV. The same figure also shows the effect of not taking into account the constraint from the LEP2 neutralino cross sections. As expected, the latter cuts out the smallest $`\mu `$ values (and also a bit of the smaller $`M_2`$ values), which not only allow accessible $`\stackrel{~}{\chi }_2^0,\stackrel{~}{\chi }_3^0`$ but also control the amount of higgsino component in $`\stackrel{~}{\chi }_1^0`$, and thus the contribution of $`\stackrel{~}{\chi }_1^0`$ to the invisible decay of $`h`$. We therefore see that the combination of the LEP2 neutralino cross sections with improved constraints from the relic density is important. ### 5.4 Light Sleptons We now allow for light sleptons with masses such that $`m_{\stackrel{~}{l}}>90`$GeV, taking into account all cosmological and accelerator constraints. The masses are calculated according to Eq. 5.24.
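The near-degeneracy of $`m_{\stackrel{~}{e}_R}`$ with $`\overline{m}_0`$ in this scheme is immediate from Eq. 5.24, where the $`\overline{M}_2^2`$ coefficient of the right slepton is tiny. A minimal numerical transcription follows; the sample point is arbitrary and the D-term signs are the standard MSSM ones.

```python
import math

def slepton_masses_nonunified(m0_bar, M2_bar, tan_beta, sin2_thw=0.231, mz=91.19):
    """Slepton masses from Eq. 5.24 in the M1 = M2/10 scheme (all in GeV)."""
    dz = mz**2 * math.cos(2 * math.atan(tan_beta))   # D_z = M_Z^2 cos(2*beta)
    m2_eR = m0_bar**2 + 0.006 * M2_bar**2 + sin2_thw * dz   # almost no running
    m2_eL = m0_bar**2 + 0.48 * M2_bar**2 - (0.5 - sin2_thw) * dz
    m2_snu = m0_bar**2 + 0.48 * M2_bar**2 + dz / 2.0
    return tuple(math.sqrt(x) for x in (m2_eR, m2_eL, m2_snu))

# Sample point (not from the paper): m0_bar = 120 GeV, GUT-scale M2_bar = 250 GeV, tan(beta) = 5
m_eR, m_eL, m_snu = slepton_masses_nonunified(120.0, 250.0, 5.0)
print(f"m_eR = {m_eR:.0f} GeV (~ m0_bar), m_eL = {m_eL:.0f} GeV, m_snu = {m_snu:.0f} GeV")
```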
Although one has somewhat reduced the $`\mu -M_2`$ parameter space, one has also allowed for light sleptons, which contribute indirectly to $`h\to \gamma \gamma `$ besides the light charginos. Right and left charged sleptons of equal masses contribute almost equally and interfere destructively with the dominant $`W`$ loop, hence reducing the $`h\to \gamma \gamma `$ width. Once again large drops are possible, with reduction factors as small as $`.3`$ in both the branching ratio of the Higgs into photons and that into $`b\overline{b}`$. The loop effects of the sleptons are rather marginal compared to the effect of the opening up of the neutralino channels. They account for some $`10`$-$`15\%`$ drop, as can be seen when $`h\to \stackrel{~}{\chi }_1^0\stackrel{~}{\chi }_1^0`$ is closed and by comparing with the heavy-slepton case. ### 5.5 Associated chargino and neutralino production In cases where there are very large reductions in the usual $`b\overline{b}`$ and $`\gamma \gamma `$ signatures of the Higgs, the production of charginos and neutralinos at the LHC is quite large<sup>8</sup><sup>8</sup>8Production of light sleptons, as constrained by cosmology in these scenarios, is on the other hand quite modest at the LHC.. Fig. 16 shows that, for values of $`\mu -M_2`$ where $`R_{\gamma \gamma }`$ is below $`.6`$, all neutralinos and charginos can be produced. For instance, with $`M_2=250`$GeV, the cross section for $`\stackrel{~}{\chi }_4^0\chi _2^+`$ is in excess of $`100`$fb while that for $`\stackrel{~}{\chi }_2^0\chi _1^+`$ is above $`1pb`$. Therefore early observation of these events could probably allow the determination of the parameters of the higgsino-gaugino sector, “sending an early warning signal” that indicates difficulty in the detection of the Higgs. If we now look at the (lightest) Higgs that can be produced through cascade decays in these processes, one sees from Fig. 17 that, essentially through $`\stackrel{~}{\chi }_3^0`$ decays, associated Higgs cross sections of about $`30`$fb are possible. Nonetheless, again, it is in these regions of highest yield that the Higgs has a large branching ratio into invisibles and would be difficult to track. ## 6 Conclusions In a model that assumes the usual common gaugino mass at the GUT scale and where, apart from the charginos and neutralinos, all other supersymmetric particles are heavy, we have shown that current LEP limits on charginos imply that there should be no problem finding the lightest SUSY Higgs at the LHC in the two-photon mode, or even in $`b\overline{b}`$ in the associated $`t\overline{t}h`$ channel. The loop effects of charginos in the two-photon width are small compared to the theoretical uncertainties; they amount to less than about $`15\%`$ and can either increase or decrease the signal. The LEP data in this scenario mean that the decay of the Higgs into invisibles is almost closed. In scenarios “on the fringe”, with a conspiracy between the sneutrino mass and the lightest chargino mass, the Higgs signal can be very much degraded in both the two-photon and the $`b`$ final states. This is because the (invisible) Higgs decay into light neutralinos may become the main decay mode, suppressing all other signatures. This also occurs in models that do not assume the GUT-inspired gaugino mass relation, specifically those where, at the weak scale, the $`U(1)`$ gaugino mass is much smaller than the $`SU(2)`$ gaugino mass. However, we point out that limits from the relic density in these types of models require rather light right-selectron masses.
These in turn contribute quite significantly to the cross section for neutralino production at LEP2, which then constrains the parameter space in the gaugino-higgsino sector where the invisible branching ratio of the Higgs becomes large. Although large reductions in the usual channels are still possible, the combination of LEP2 data and cosmology means that observation of the Higgs signal at the LHC is jeopardised in only a small region of the SUSY parameter space. Moreover, we show that in the scenarios where the drops in the Higgs signals are most dramatic, one is assured of quite healthy associated chargino and neutralino cross sections at the LHC. Some of the heavier of these particles may even trigger Higgs production through a cascade decay into their lighter partners. It remains to be seen whether the Higgs can be observed in these new production channels, considering that it will predominantly have an “invisible” signature. Acknowledgments We would like to thank Sacha Pukhov and Andrei Semenov for help and advice on the use of CompHEP. R.G. acknowledges the hospitality of LAPTH where this work was initiated. This work was done under partial financial support of the Indo-French Collaboration IFCPAR-1701-1 Collider Physics.
# Low-Scale Anomalous U(1) and Decoupling Solution to Supersymmetric Flavor Problem ## Abstract Supersymmetric standard models in which the ultraviolet cut-off scale is only a few orders of magnitude higher than the electroweak scale are considered. The phenomenological consequences of this class of models are expected to be very different from, for example, those of the conventional supergravity scenario. We apply this idea to a model with an anomalous U(1) gauge group and construct a viable model in which some difficulties of the decoupling solution to the supersymmetric flavor problem are ameliorated. preprint: TU-587 What is the fundamental scale of the unified theory of particle physics? Usually it is supposed to be near the Planck scale. In fact this view is supported by the weakly coupled heterotic string theory, where the string scale must be close to the Planck scale . It has recently been recognized, however, that the fundamental scale can be much lower than the Planck scale, even as low as around 1 TeV . In this case, the largeness of the Planck scale is accounted for by extra dimensions with large volume, in which gravity propagates, while the standard model particles have to be confined on a four-dimensional subspace (a 3-brane). Remarkably, this configuration survives various phenomenological constraints . Furthermore, it can be realized in, for example, type I (or more precisely type I’) superstring theory, where the string scale may not be directly related to the Planck scale (see also Ref. for earlier attempts). Even when the fundamental scale itself is close to the Planck scale, a special geometrical configuration of the extra dimension(s), such as an AdS slice, may enable us to obtain a model with an effectively very low cut-off scale on a visible brane where the standard model sector is confined. In this paper, we would like to consider the situation where the fundamental scale, or the ultraviolet cut-off scale, is much lower than the Planck scale but still lies a few orders of magnitude above the electroweak scale, so that low-energy supersymmetry is needed to protect the electroweak scale from radiative corrections. A supersymmetric standard model with such a low-scale cut-off naturally falls into the class of low-scale supersymmetry breaking models. An immediate consequence of this class of models is that the gravitino, the superpartner of the graviton, is much lighter than the other superparticles, and thus tends to be the lightest superparticle. Moreover, soft supersymmetry breaking masses are given at a low energy scale, so that the mass spectrum is in general quite different from that of high-scale supersymmetry breaking models. Inspired by the arguments given above, here we would like to discuss phenomenological implications of supersymmetric standard models with such a low fundamental scale. Specifically, we consider a model with an anomalous $`U(1)_X`$ gauge symmetry . It is well known that an appropriate assignment of the $`U(1)_X`$ charges provides the decoupling solution of the supersymmetric flavor problem . Namely, the squarks and sleptons of the first two generations are assumed to be heavy, thus suppressing flavor changing neutral current (FCNC) processes from superparticle loops, while the squarks and sleptons of the third generation are set at the electroweak scale, i.e. a few hundred GeV: otherwise the large Yukawa couplings of the third generation would generate large radiative corrections to the Higgs mass, and we would lose the very motivation to introduce low-energy supersymmetry.
The decoupling solution is simple and attractive, in particular when symmetries relate the smallness of the masses of the quarks and leptons in the first two generations to the largeness of the masses of the corresponding squarks and sleptons. However, it has been pointed out that there are several difficulties in this scenario. First, although the squarks and sleptons of the first two generations do not influence the running of the Higgs mass at the one-loop level if one ignores the small Yukawa couplings, they do at the two-loop level. Thus they cannot be arbitrarily heavy: rather, their masses are severely limited by naturalness arguments. In fact, this issue was discussed in detail in Ref. , which gave an upper bound of 5 TeV from the condition that the fine tuning should be less than $`10\%`$. The second problem is that the heavy squarks give a negative contribution to the mass squared of the third generation squarks at the two-loop level, driving the mass squared negative and causing color breaking . It turns out that the bound is very severe. In fact the masses of the sfermions in the first two generations must be much smaller than 10 TeV, and thus they are not large enough to solve the FCNC problem, as long as the stop is lighter than about 1 TeV. Finally, with a mass spectrum typical of the decoupling solution, the relic abundance of the lightest superparticle, which is assumed to be a bino-like neutralino, tends to overclose the Universe . What we will do in this paper is first construct a viable model with a low fundamental scale and then show that the problems above are ameliorated in such a framework. Here we should note that the issue of color instability in a scenario where supersymmetry breaking is mediated at a rather low energy scale was discussed in Ref. , but without an explicit model. Also, a different approach to the problem of color instability, adding extra matter multiplets to eliminate the dangerous contributions, has been proposed in Ref. . We begin by describing the model we are considering. The model is similar to that of Dvali and Pomarol (see also Ref. ). Ref. considered a high-scale cut-off theory such as the heterotic string, where the Fayet-Iliopoulos (FI) term for the anomalous $`U(1)_X`$ gauge group is given by $`\xi =\frac{\mathrm{Tr}Qg^2}{192\pi ^2}M_{Pl}^2`$, with $`g`$ the gauge coupling constant. On the contrary, what we will consider is a low-scale cut-off theory with cut-off $`M_{}`$. Here we assume that the standard model sector as well as the $`U(1)_X`$ is confined on a brane-like object such as a D-brane, while the large four-dimensional Planck scale requires the existence of large extra dimensions in which gravity propagates. Then it is natural to expect that the FI term $`\xi `$, if it is non-zero, satisfies $$|\xi |\sim M_{}^2.$$ (1) From now on we adopt the convention $`\xi >0`$. $`M_{}`$ may be identified with the string scale. In type I and type IIB string models, $`\xi `$ can be generated through non-vanishing expectation values of some moduli fields, combined with a generalized Green-Schwarz anomaly cancellation mechanism . As for chiral multiplets, we introduce $`\varphi _+`$ and $`\varphi _{-}`$ with $`U(1)_X`$ charges $`+1`$ and $`-1`$, respectively, and $`y_i`$ with charges $`Q_i`$, which represent the fields of the standard model sector.
Then the $`U(1)_X`$ D-term is written $$D=\xi +|\varphi _+|^2-|\varphi _{-}|^2+\underset{i}{}Q_i|y_i|^2$$ (2) The model also has the following mass term in the superpotential, $$W=m\varphi _+\varphi _{-},$$ (3) besides the superpotential of the standard model sector. Here we assume $`m^2\ll g^2\xi `$. Though it is possible to generate the mass term Eq. (3) dynamically, we will treat it as a given parameter. Note that this does not mean introducing a huge hierarchy into the mass parameters, since all the mass scales of this low-scale theory are not very far from the electroweak scale. By minimizing the scalar potential of the model $$V=\left|\frac{\partial W}{\partial \varphi _+}\right|^2+\left|\frac{\partial W}{\partial \varphi _{-}}\right|^2+\frac{g^2}{2}D^2$$ (4) with $`g`$ being the gauge coupling constant of $`U(1)_X`$, we find the following vacuum expectation values: $`\varphi _+`$ $`=`$ $`0,`$ (5) $`\varphi _{-}`$ $`=`$ $`\sqrt{\xi -{\displaystyle \frac{m^2}{g^2}}},`$ (6) $`F_{\varphi _+}`$ $`=`$ $`m\varphi _{-}=m\sqrt{\xi -{\displaystyle \frac{m^2}{g^2}}},`$ (7) $`F_{\varphi _{-}}`$ $`=`$ $`0,`$ (8) $`D`$ $`=`$ $`{\displaystyle \frac{m^2}{g^2}}`$ (9) Here we have neglected the contributions from the standard model sector, which are assumed to be tiny. In this model the scalar masses in the standard model sector take the following form, given at the cut-off scale $`M_{}`$: $$m_0^2=Qg^2D+m_F^2=Qm^2+m_F^2.$$ (10) The first term is the $`U(1)_X`$ D-term contribution, which is solely controlled by the $`U(1)_X`$ charge $`Q`$. On the other hand, the second term, which represents an F-term contribution, comes from non-renormalizable interactions in the Kähler potential and is sensitive to the physics close to the cut-off scale. In fact, we expect to have the following term in the Kähler potential: $$\frac{\eta _{ij}}{M_{}^2}\varphi _+^{}\varphi _+y_i^{}y_j$$ (11) with numerical coefficients $`\eta _{ij}`$ of order unity or less, which are in general generation dependent.<sup>*</sup><sup>*</sup>*Non-renormalizable interactions involving bulk fields will be suppressed by the four-dimensional Planck mass $`M_{Pl}`$. Eq. (11) yields a (possibly) generation-dependent F-term mass estimated at most as $$m_F^2\sim \frac{F_{\varphi _+}^2}{M_{}^2}\sim m^2\left(\frac{\xi -m^2/g^2}{M_{}^2}\right).$$ (12) Therefore this is potentially a source of FCNC. However, it is always sub-dominant compared to the first term, provided that there is a little hierarchy of one order of magnitude or so between $`\sqrt{\xi -m^2/g^2}`$ and $`M_{}`$: $$ϵ\equiv \frac{\sqrt{\xi -m^2/g^2}}{M_{}}\sim O(10^{-1}).$$ (13) Here it is interesting to mention the mass spectrum of $`\varphi _+`$ and $`\varphi _{-}`$. What happens is that both supersymmetry and the $`U(1)_X`$ gauge symmetry are broken spontaneously. Since the scalar component of $`\varphi _{-}`$ acquires a non-zero vacuum expectation value, its real component has a mass similar to the gauge boson mass, $`\sqrt{g^2\xi -m^2}`$, and its imaginary component becomes the would-be Nambu-Goldstone boson. On the other hand, it is essentially the $`\varphi _+`$ multiplet which is responsible for supersymmetry breaking, and its spinor component is the Goldstino absorbed into the gravitino in the supergravity framework. In this case the mass of the scalar component of the $`\varphi _+`$ multiplet is found to be $`\sqrt{2}m`$, similar in size to the soft supersymmetry breaking masses.
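The stationary point of Eqs. (5)-(9) can be checked numerically by minimising the potential of Eq. (4) in the (φ₊, φ₋) plane, taking the fields real and dropping the standard-model contributions. A minimal sketch in arbitrary units; the parameter values are ours and only need to satisfy m² ≪ g²ξ.

```python
import math
from scipy.optimize import minimize

g, xi, m = 0.8, 1.0, 0.1          # arbitrary units; note m**2 << g**2 * xi

def potential(fields):
    """V = |dW/dphi+|^2 + |dW/dphi-|^2 + (g^2/2) D^2 for W = m phi+ phi-, real fields."""
    p, q = fields                  # phi_plus, phi_minus
    d_term = xi + p**2 - q**2
    return (m * q)**2 + (m * p)**2 + 0.5 * g**2 * d_term**2

res = minimize(potential, x0=[0.3, 0.7])
p, q = res.x
print(f"phi+ = {p:.4f} (expect 0), phi- = {q:.4f} (expect {math.sqrt(xi - m**2 / g**2):.4f})")
print(f"D = {xi + p**2 - q**2:.4f} (expect m^2/g^2 = {m**2 / g**2:.4f})")
```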
Let us now discuss how the low cut-off scale model ameliorates the problems of the decoupling solution. First we consider the color instability. For simplicity, we assign the $`U(1)_X`$ charges of the quarks and leptons in the first two generations to be $`+1`$ and those of the other matter fields to be $`0`$. In this framework, from the cut-off scale $`M_{}`$ down to the mass scale of the sfermions in the first two generations, $`\stackrel{~}{m}_{1,2}`$, the physics can be described by a four-dimensional field theory with the matter content of the minimal supersymmetric standard model (MSSM), and from the $`\stackrel{~}{m}_{1,2}`$ scale down to about 1 TeV it can be described by an effective theory in which the squarks and sleptons of the first two generations are integrated out. A constraint on the soft masses of the third generation at the cut-off scale is obtained by requiring the physical masses to be positive at the electroweak scale. Although the physical masses receive D-term contributions from the $`SU(2)_L\times U(1)_Y`$ gauge interactions and also left-right mixing effects, we neglect them for simplicity (the effects of this neglect are discussed in ). Hereafter we obtain a constraint by requiring the running masses to be positive at the 1 TeV scale, as was done in . The values of the soft masses at the 1 TeV scale are computed by using renormalization group equations (RGEs). In our analysis, we use the two-loop RGEs in the $`\overline{\text{DR}}^{}`$ scheme . For our charge assignment, the RGEs for the soft masses of the third generation, including Yukawa couplings and A-terms at the one-loop level and the heavy sfermion contributions at the two-loop level, are in this scheme $`\mu {\displaystyle \frac{d\stackrel{~}{m}_f^2}{d\mu }}`$ $`=`$ $`-{\displaystyle \frac{2}{\pi }}{\displaystyle \underset{A}{}}\alpha _AC_A^fM_A^2+{\displaystyle \frac{1}{2\pi }}\eta _f^t\alpha _t(\stackrel{~}{m}_{Q_3}^2+\stackrel{~}{m}_{U_3}^2+\stackrel{~}{m}_{H_u}^2+A_t^2)`$ $`+{\displaystyle \frac{1}{2\pi }}\eta _f^b\alpha _b(\stackrel{~}{m}_{Q_3}^2+\stackrel{~}{m}_{D_3}^2+\stackrel{~}{m}_{H_d}^2+A_b^2)+{\displaystyle \frac{1}{2\pi }}\eta _f^\tau \alpha _\tau (\stackrel{~}{m}_{L_3}^2+\stackrel{~}{m}_{E_3}^2+\stackrel{~}{m}_{H_d}^2+A_\tau ^2)`$ $`+{\displaystyle \frac{2}{\pi ^2}}{\displaystyle \underset{A}{}}\alpha _A^2C_A^f\stackrel{~}{m}_{1,2}^2,`$ (16) where $`\alpha _A`$ and $`C_A^f`$ are the gauge couplings and the quadratic Casimirs of $`SU(3)_C`$, $`SU(2)_L`$ and $`U(1)_Y`$ for $`A=3,2,1`$, respectively, and $`\alpha _f=Y_f^2/4\pi (f=t,b,\tau )`$ are the Yukawa couplings, with $`\eta _f^t=(1,2,3)`$ for $`f=Q_3,U_3,H_u`$, $`\eta _f^b=(1,2,3)`$ for $`f=Q_3,D_3,H_d`$, and $`\eta _f^\tau =(1,2,1)`$ for $`f=L_3,E_3,H_d`$, respectively, and zero otherwise. Note that the contribution proportional to the $`U(1)_Y`$ D-term does not appear in (16) because of the boundary conditions for the squark and slepton masses. We solve the RGEs as follows. At the cut-off scale, we assume for simplicity that the gaugino masses satisfy $`M_3(M_{})={\displaystyle \frac{\alpha _3(M_{})}{\alpha _2(M_{})}}M_2(M_{})={\displaystyle \frac{\alpha _3(M_{})}{\alpha _1(M_{})}}M_1(M_{})`$ (17) Below the $`\stackrel{~}{m}_{1,2}`$ scale, the heavy sfermions do not contribute to the running of the couplings and masses. Note that the gaugino masses evolve differently from the gauge coupling constants, but we ignored this deviation, which is not important in our analysis. For the squark sector the gluino contribution dominates the other gaugino contributions, so the constraint on the squark masses is insensitive to this assumption.
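The competition in Eq. (16) between the gaugino term and the dangerous two-loop term of the heavy first two generations can be illustrated by keeping only the SU(3) pieces and integrating in the leading-log approximation with a frozen α₃. This is a deliberately crude sketch of ours (no Yukawas, no finite terms, one gauge group, fixed coupling), not the two-loop computation behind Fig. 1.

```python
import math

def m2_Q3_at_1TeV(m0, M3, m12_heavy, M_star, alpha3=0.09, C3=4.0 / 3.0):
    """Leading-log estimate of the third-generation squark doublet mass^2 at 1 TeV,
    keeping only the SU(3) terms of Eq. (16) with a frozen alpha_3 (masses in GeV)."""
    t = math.log(M_star / 1.0e3)                                    # log of the scale ratio
    gluino = (2.0 / math.pi) * alpha3 * C3 * M3**2 * t              # raises m^2 at low scale
    heavy = (2.0 / math.pi**2) * alpha3**2 * C3 * m12_heavy**2 * t  # lowers m^2 at low scale
    return m0**2 + gluino - heavy

for m12 in (5.0e3, 15.0e3, 25.0e3):   # heavy first-two-generation mass, GeV
    m2 = m2_Q3_at_1TeV(m0=1.0e3, M3=1.0e3, m12_heavy=m12, M_star=1.0e5)
    print(f"m12 = {m12/1e3:.0f} TeV -> m_Q3^2(1 TeV) = {m2/1e6:+.2f} TeV^2")
```

Even this rough estimate shows the mass squared being driven negative for multi-TeV first-two-generation masses, and that a shorter running length (smaller $`M_{}`$) relaxes the bound.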
Because of the charge assignment mentioned above, the squarks and sleptons of the first two generations have soft masses $`\stackrel{~}{m}_{1,2}\simeq m`$ at the cut-off scale. The soft masses of the third generation scalars and of the Higgs bosons receive no contribution from the $`U(1)_X`$ D-term, and their F-term contributions are model dependent. Here, to simplify the analysis, we assume them all universal: $`m_0\sim ϵ\stackrel{~}{m}_{1,2}\sim ϵm`$ at the cut-off scale. We take into account the bottom and tau Yukawa couplings as well as the top Yukawa coupling. In our analysis, we fix the top quark mass in the $`\overline{\text{MS}}`$ scheme, $`m_t^{\overline{MS}}(m_t)`$, to be 167 GeV and that of the bottom quark, $`m_b^{\overline{MS}}(m_b)`$, to be 4.3 GeV. We checked whether our results depend on $`\mathrm{tan}\beta `$ and the $`A`$ parameters and found that the dependence on these parameters is very small. Thus we present results with all $`A`$ parameters zero at the cut-off scale and $`\mathrm{tan}\beta =2`$. We also have to include a finite-term contribution, because the scale at which the initial conditions of the RGEs are given is not much larger than the electroweak scale. We follow Ref. to evaluate this effect, and the result for our case is $`\stackrel{~}{m}_{f,finite}^2(\mu )=-{\displaystyle \frac{1}{\pi ^2}}({\displaystyle \frac{\pi ^2}{3}}-2\mathrm{ln}\left({\displaystyle \frac{\stackrel{~}{m}_{1,2}^2}{\mu ^2}}\right)){\displaystyle \underset{A}{}}\alpha _A^2C_A^f\stackrel{~}{m}_{1,2}^2.`$ (18) At the $`\stackrel{~}{m}_{1,2}`$ scale, we add this contribution to the running mass as a threshold effect. The $`U(1)_Y`$ D-term contribution is absent, as in the case of Eq. (16). Note that the finite contribution (18) differs from that of Ref. ; the difference can be absorbed by a redefinition of the renormalization scale $`\mu `$. In Fig. 1, we show the maximum allowed values of the sfermion masses of the first two generations, $`\stackrel{~}{m}_{1,2}`$, obtained by requiring that the mass squared $`\stackrel{~}{m}_{Q_3}^2`$ be positive at the 1 TeV scale. The horizontal axis represents the running gluino mass at the scale $`\mu =1`$ TeV. Here we take all the scalar masses of the third generation and the soft masses of the Higgs doublets at the cut-off scale, $`\stackrel{~}{m}_f(M_{})`$, to be $`1\text{TeV}`$. We found that the constraints from the positivity requirements on $`\stackrel{~}{m}_{U_3}^2`$ and $`\stackrel{~}{m}_{D_3}^2`$ are similar to that from $`\stackrel{~}{m}_{Q_3}^2`$ presented here; in fact the differences are less than 10%. We consider the cases $`M_{}=100,`$ $`10^3`$, $`10^4`$ and $`10^5`$ TeV. For comparison, we also show the case $`M_{}=10^{16}`$ GeV. The region above each curve is excluded. As $`M_{}`$ decreases, one finds that the allowed region becomes larger, because the RGE effect becomes less significant. Indeed, the finite term dominates when $`M_{}=100`$ TeV. When the gluino mass is, for instance, about 1 TeV, $`\stackrel{~}{m}_{1,2}`$ can be as heavy as about 17 TeV for $`M_{}=100`$ TeV. In this case the contribution to the kaon mass difference $`\mathrm{\Delta }m_K`$ becomes acceptable with the help of a small alignment between the squark mass eigenstates and the interaction eigenstates . The constraint from the positivity requirement on the third generation slepton mass $`\stackrel{~}{m}_{L_3}^2`$ alone is not so strong.
For example, $`\stackrel{~}{m}_{1,2}`$ is required to be smaller than about $`80`$ TeV and $`25`$ TeV for $`M_{}=100`$ TeV and $`M_{}=10^5`$ TeV, respectively, as long as the gluino mass at 1 TeV is smaller than 3 TeV. On the other hand, in the case $`M_{}=10^{16}`$ GeV, $`\stackrel{~}{m}_{1,2}`$ is required to be smaller than about $`13`$ TeV. Next we would like to discuss how the other difficulties are cured in our setting. The point of the naturalness problem discussed in Ref. is that the heavy scalar masses of the first two generations influence the running of the Higgs mass at the two-loop level, requiring fine tuning to obtain the electroweak scale if these masses are very heavy. Now, since the contribution to the running is roughly proportional to the “length” of the running in logarithmic scale, the fine-tuning problem should be relaxed in our low-scale cut-off case, in which the length of running is much shorter than in the high-scale cut-off case discussed in . In our scenario, the gravitino becomes very light, with the estimate $$m_{3/2}=\frac{F_{\varphi _+}}{\sqrt{3}M_{Pl}}\sim m\frac{\sqrt{\xi -m^2/g^2}}{M_{Pl}}\sim 0.1\mathrm{eV}\left(\frac{M_{}}{100\mathrm{T}\mathrm{e}\mathrm{V}}\right)\left(\frac{ϵm}{1\mathrm{T}\mathrm{e}\mathrm{V}}\right).$$ (19) Thus it is likely to be the lightest superparticle (LSP). Then the lightest superpartner in the supersymmetric standard model sector is no longer stable but immediately decays into the gravitino, and hence it is obvious that the overclosure problem of the neutralinos is evaded. In this scenario, the gravitino is stable. Its cosmological implications are discussed in the literature. For a gravitino which weighs much less than 1 keV, the relic abundance is much smaller than the critical density of the Universe, and thus it is cosmologically harmless. See Ref. and references therein for details. We should also note that superparticle signals at collider experiments in our scenario have some characteristic features. The lifetime of the next-to-lightest superparticle (NSP) is roughly of the order $$\tau _{\mathrm{NSP}}\sim 16\pi \frac{F_{\varphi _+}^2}{m_{\mathrm{NSP}}^5}\sim 10^{-17}\mathrm{sec}\left(\frac{100\text{GeV}}{m_{\mathrm{NSP}}}\right)^5\left(\frac{ϵm}{1\text{TeV}}\right)^2\left(\frac{M_{}}{100\text{TeV}}\right)^2,$$ (20) assuming that the decay is a two-body decay. Thus the lifetime is so short that the NSP will decay inside a detector. If the NSP is bino-like, the decay contains a photon and a gravitino which escapes detection. If the NSP is a slepton, most likely a stau, the decay contains a tau lepton and a gravitino. In our scenario, the stop may be the NSP. In this case, the stop decays into a top quark (or a W boson and a bottom quark; for this three-body decay the decay length increases substantially, and an analysis of this case has been given in ) and a gravitino. In either case the signals will be distinguishable from those of the high-scale supersymmetry breaking scenario, where the signals are associated with a massive LSP escaping the detector. Here it should also be noticed that we can probe the heavy mass scale of the first two generations via superoblique corrections , even though these particles cannot be produced directly at near-future colliders.
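Eqs. (19) and (20) are simple enough to evaluate directly; the following sketch reproduces the quoted orders of magnitude for the fiducial inputs of the text (M_* = 100 TeV, εm = 1 TeV, m_NSP = 100 GeV). The reduced Planck mass and the GeV-to-seconds conversion are standard constants.

```python
import math

M_PL = 2.4e18          # reduced Planck mass, GeV
HBAR = 6.58e-25        # GeV * s

def gravitino_mass_eV(m_star, eps_m):
    """Eq. (19): m_3/2 = F / (sqrt(3) M_Pl) with F ~ eps * m * M_* (inputs in GeV)."""
    return (eps_m * m_star) / (math.sqrt(3.0) * M_PL) * 1e9

def nsp_lifetime_s(m_nsp, m_star, eps_m):
    """Eq. (20): tau ~ 16 pi F^2 / m_NSP^5 for a two-body decay into the gravitino."""
    f_term = eps_m * m_star
    return HBAR * 16.0 * math.pi * f_term**2 / m_nsp**5

print(f"m_3/2 ~ {gravitino_mass_eV(1e5, 1e3):.3f} eV")      # same order as the quoted 0.1 eV
print(f"tau_NSP ~ {nsp_lifetime_s(100.0, 1e5, 1e3):.1e} s")  # same order as the quoted 1e-17 s
```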
Here we would like to briefly mention the gaugino masses. In our model we can write the term $$\frac{\varphi _+\varphi _{-}}{M_{}^2}W^\alpha W_\alpha ,$$ (21) where $`W^\alpha `$ is the supersymmetric field strength of a gauge field. It follows from this that the gaugino mass is of the order $$\frac{F_{\varphi _+}\varphi _{-}}{M_{}^2}=ϵ^2m.$$ (22) Recall that the mass of the third generation squarks is $`ϵm`$. The additional suppression factor $`ϵ`$ in Eq. (22) may be compensated by unknown numerical coefficients in front. Note that in the low-scale supersymmetry breaking scenario the contribution to the gaugino mass from the superconformal anomaly is negligible, because it is proportional to the tiny gravitino mass. Before concluding, we comment on the large extra dimensions needed to obtain the large Planck scale in the low string scale scenario. The size $`R`$ of the compact $`n`$-dimensional extra space is given by $$M_{Pl}^2\sim M_{}^{2+n}R^n,$$ (23) or $$R^{-1}\sim M_{}\left(\frac{M_{}}{M_{Pl}}\right)^{2/n}$$ (24) to reproduce the Planck scale $`M_{Pl}`$. Here we have assumed that the compact manifold is isotropic and characterized by the single size $`R`$. To illustrate, let us take $`n=6`$. Then $$R^{-1}\sim 10^2\text{GeV}\left(\frac{M_{}}{10^6\text{GeV}}\right)^{4/3}.$$ (25) The masses of the graviton’s Kaluza-Klein (KK) modes are quantized in units of $`R^{-1}`$. Thus we find that the KK mode masses are at the electroweak scale or higher. This contrasts with the large extra dimension scenario with $`M_{}\sim 1`$ TeV, where $`R^{-1}\sim 10`$ MeV. Since the KK modes have masses of the electroweak scale or so and have interactions similar to those of the graviton, they may affect the cosmological evolution of the early Universe. In particular, they are produced after the inflationary epoch and typically decay around the epoch of big-bang nucleosynthesis. Here we will not go into a detailed discussion, but make some remarks. First, if $`R^{-1}>10^4`$ GeV, the KK modes decay before nucleosynthesis and thus are harmless. On the other hand, for $`10^2\text{GeV}<R^{-1}<10^4`$ GeV, the reheat temperature after inflation must be low to suppress the production of the KK modes. We expect that a reheat temperature of $`10^2`$ GeV will be allowed, since then the production of KK modes with masses heavier than $`10^2`$ GeV is highly suppressed. This is very different from the TeV gravity case, where the reheat temperature is forced to be even smaller than 1 GeV . The higher reheat temperature in our case is an advantage for baryogenesis; in particular, one may be able to use electroweak baryogenesis. It is interesting to mention here that the radius of the extra dimension can be as large as a sub-millimeter for $`n=1`$ and $`M_{}\sim 10^8`$ GeV. This case may be tested in future gravity experiments .
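The size estimates of Eqs. (23)-(25) follow from inverting M_Pl² ∼ M_*^{2+n} R^n. A quick evaluation for the three (n, M_*) choices discussed in the text, using ħc ≈ 1.97×10⁻¹⁴ GeV·cm for the conversion to a length:

```python
HBARC_CM = 1.97e-14    # GeV * cm
M_PL = 2.4e18          # reduced Planck mass, GeV

def kk_scale(n, m_star):
    """Eq. (24): R^{-1} ~ M_* (M_*/M_Pl)^(2/n), in GeV."""
    return m_star * (m_star / M_PL) ** (2.0 / n)

for n, m_star in [(6, 1e6), (6, 1e3), (1, 1e8)]:
    r_inv = kk_scale(n, m_star)
    print(f"n={n}, M_*={m_star:.0e} GeV: R^-1 ~ {r_inv:.2e} GeV, R ~ {HBARC_CM / r_inv:.1e} cm")
```

The three outputs match the cases quoted above: ~10² GeV for n=6 and M_* = 10³ TeV, ~10 MeV for the TeV-scale case, and a sub-millimeter radius for n=1 and M_* ∼ 10⁸ GeV.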
To summarize, we have considered the supersymmetric standard model with an anomalous $`U(1)`$ gauge symmetry when the ultraviolet cut-off scale is not far from the electroweak scale. In our scenario, the Fayet-Iliopoulos D-term is set to be somewhat smaller than the cut-off scale squared. Apart from this, the model is similar to that of Ref. . We applied this model to the decoupling solution of the supersymmetric flavor problem and showed that the difficulties of the solution are less severe than in the conventional high-scale cut-off scenario. The model should be combined with the idea of large extra dimensions to obtain the large Planck scale of the gravitational interaction. We briefly discussed some of the related cosmological issues.

We would like to thank K. Kurosawa and Y. Nomura for valuable discussions and for explaining the works in . MY also thanks Y. Kawamura and H. Nakano for helpful discussions on anomalous U(1) theories in type I and type IIB string models. The work of YY was supported in part by the Grant-in-Aid for Scientific Research from the Ministry of Education, Science, Sports, and Culture of Japan, No. 10740106, and the work of MY by the Grant-in-Aid on Priority Area 707 “Supersymmetry and Unified Theory of Elementary Particles” and by the Grant-in-Aid No. 11640246.
# Nature of phase transitions in a probabilistic cellular automaton with two absorbing states

## 1 Introduction

Probabilistic cellular automata (PCA) have been widely used to model a variety of systems with local interactions in physics, chemistry, biology and the social sciences . Moreover, PCA are simple and interesting models that can be used to investigate fundamental problems in statistical mechanics. Many classical equilibrium spin models can be reformulated as PCA; for example, the kinetic Ising model with parallel heat-bath dynamics is strictly equivalent to a PCA with local parallel dynamics . On the other hand, PCA can be mapped to spin models by expressing the transition probabilities as exponentials of a local energy.

PCA can be used to investigate nonequilibrium phenomena, and in particular the problem of phase transitions in the presence of absorbing states. An absorbing state is represented by a set of configurations from which the system cannot escape, equivalent to an infinite energy well in the language of statistical mechanics. A global absorbing state can originate from one or more local transition probabilities that take the value zero or one, corresponding to some infinite coupling in the local energy .

The Domany-Kinzel (DK) model is a Boolean PCA on a tilted square lattice that has been extensively studied . Let us denote the two possible states of each site by the terms “empty” and “occupied”. In this model a site at time $`t`$ is connected to two sites at time $`t-1`$, constituting its neighborhood. The control parameters of the model are the local transition probabilities that give the probability of having an occupied site at a certain position given the state of its neighborhood. The transition probabilities are symmetric under all permutations of the neighborhood; this property is equivalent to saying that they depend only on the sum of “occupied” sites in the neighborhood, whence the term “totalistic” used to denote this class of automata. In the DK model the transition probability from an empty neighborhood to an occupied state is zero, so the empty configuration is an absorbing state. For small values of the other transition probabilities, any initial configuration evolves to the absorbing state. For larger values, a phase transition to an active phase, represented by an ensemble of partially occupied configurations, is found. The order parameter of this transition is the asymptotic average fraction of occupied sites, which we call the density. The critical properties of this phase transition belong to the directed percolation (DP) universality class (except for one extreme point) , and the DK model is often considered the prototype of this class.

The evolution of this kind of model is the discrete equivalent of the trajectory of a stochastic dynamical system. One can determine the sensitivity with respect to a perturbation by studying the trajectories originating from two initially different configurations (replicas) evolving with the same realization of the stochasticity, e.g. using the same sequence of random numbers. The order parameter here is the asymptotic difference between the two replicas, which we call the damage. It turns out that, inside the active phase, there is a “chaotic” phase in which the trajectories depend on the initial configurations and the damage is different from zero, and a non-chaotic one in which all trajectories eventually synchronize, with the vanishing of the damage.
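As a concrete illustration of the ingredients just described, the following minimal sketch (our own; the function name and parameter choices are not from the literature) simulates the DK rule on the tilted lattice and measures the density:

```python
import random

def dk_density(p1, p2, L=1000, steps=2000, seed=0):
    """Asymptotic density of the Domany-Kinzel PCA.

    p1: probability of occupation given exactly one occupied neighbor
    p2: probability of occupation given two occupied neighbors
    (an empty neighborhood always stays empty: the absorbing state)
    """
    rng = random.Random(seed)
    x = [1] * L                        # start fully occupied
    for _ in range(steps):
        x = [
            # tilted-lattice neighborhood: sites i and i+1 at the previous step
            (rng.random() < (p1, p2)[x[i] + x[(i + 1) % L] - 1])
            if x[i] + x[(i + 1) % L] > 0 else 0
            for i in range(L)
        ]
    return sum(x) / L

# on the directed-site-percolation line p1 = p2 the critical point is
# near 0.705, so:
print(dk_density(0.5, 0.5))   # ~ 0   (absorbing phase)
print(dk_density(0.8, 0.8))   # > 0   (active phase)
```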
In simple models like the DK one, this transition does not depend on the choice of the initial configurations (provided they differ from the absorbing state) or on the initial damage .

It has been conjectured that all second-order phase transitions from an “active” phase to a non-degenerate, quiescent phase (generally represented by an absorbing state) belong to the DP universality class if the order parameter is a scalar and there are no extra symmetries or conservation laws . This has been verified in a wide class of models, even multi-component ones, and in the presence of several asymmetric absorbing states . The damage phase transition has a similar structure. Once synchronized, the two replicas cannot separate, and thus the synchronized state is absorbing. Indeed, numerical simulations show that it is in the DP universality class . Moreover, in the DK model, the damage phase transition can be mapped onto the density one . On the other hand, some models with conserved quantities or symmetric absorbing states belong to a different universality class, called parity conservation (PC) or directed Ising . This universality class appears to be less robust, since it is strictly related to the symmetry of the absorbing states; a slight asymmetry is sufficient to bring the model back to the usual DP class .

An interesting question concerns the simplest one-dimensional PCA model with short-range interactions exhibiting a first-order phase transition. Dickman and Tomé proposed a contact process with spontaneous annihilation, autocatalytic creation by trimers, and hopping. They found a first-order transition for high hopping probability, i.e., in the regime closest to mean field (weaker spatial correlations). Bassler and Browne discussed a model whose phase diagram also presents first- and second-order phase transitions . In it, monomers of three different chemical species can be adsorbed on a one-dimensional surface, and neighboring monomers belonging to different species annihilate instantaneously. The control parameters of the model are the adsorption rates of the monomers. The transition from a saturated to a reactive phase belongs to the DP universality class, while the transition between two saturated phases is discontinuous. The point at which the three phase-transition lines join belongs to the PC universality class. Scaling and fluctuations near first-order phase transitions are also an interesting subject of study , which can profit from the existence of simple models.

In this paper we study a one-dimensional, one-component, totalistic PCA with two absorbing states. It can be considered a natural extension of the DK model to a lattice in which the neighborhood of a site at time $`t`$ contains the site itself and its two nearest neighbors at time $`t-1`$. This space-time lattice arises naturally in the discretization of one-dimensional reaction-diffusion systems. In our model, the transition probability from an empty neighborhood is zero, and that from a completely occupied neighborhood is one. The model therefore has two absorbing states: the completely empty and the completely occupied configurations. The order parameter is again the density; it is zero or one in the two quiescent phases, and assumes other values in the active phase. The system presents a line of symmetry in the phase diagram, on which the two absorbing phases have the same importance. A more detailed illustration of the model can be found in Section 2.
This model can arise as a particular case of nonequilibrium wetting of a surface. In this framework, only a single layer of particles can be adsorbed on the surface. If we assume that particles can be adsorbed or desorbed only near the boundaries of a patch of already adsorbed particles (i.e. when the neighborhood is not homogeneous), then the completely empty and completely occupied configurations are absorbing states.

This totalistic PCA can also be interpreted as a simple model of opinion formation. It assumes that an individual may change his mind according to his own opinion and those of his two nearest neighbors. The role of social pressure is twofold. If there is homogeneity of opinions, individuals cannot disagree (absorbing states); otherwise they can agree or disagree with the majority with a certain probability.

The density phase diagram shows two second-order phase transition curves separating the quiescent phases from the active one, and a first-order transition line between the two quiescent phases, as discussed in Section 3. These curves meet on the line of symmetry at a bicritical point. We use both mean-field approximations and direct numerical simulations. The former simple approximation already gives a qualitatively correct phase diagram. The numerical experiments are partially based on the fragment method . This is a parallel algorithm that implements the evolution rule directly on the bits of one or more computer words, for different values of the control parameters; a minimal sketch is given at the end of this section.

In Section 4, we investigate numerically the second-order phase transitions and find that they belong to the DP universality class. Along the line of symmetry of the model the two absorbing phases are equivalent. In Appendix B we show that on this line one can reformulate the problem in terms of the dynamics of kinks between patches of empty and occupied sites. Since the kinks are created and annihilated in pairs, the dynamics conserves the initial number of kinks modulo two. In this way we can present an exact mapping between a model with symmetric absorbing phases and one with parity conservation. We find that the critical kink dynamics at the bicritical point belongs to the PC universality class.

In Section 5 we study the chaotic phase, using dynamic mean-field techniques (reported in Appendix A) and direct numerical simulations. The location of this phase is similar to that in the DK model: it joins the second-order critical curves at the boundary of the phase diagram.

Our model exhibits a first-order phase transition along the line of symmetry in the upper part of the phase diagram. A first-order transition is usually associated with a hysteresis cycle. It is possible to observe such a phenomenon by adding a small perturbing field to the absorbing states, as discussed in Section 6.

The DP universality class is equivalent to the Reggeon field theory , which in $`d=0`$ corresponds to a quadratic potential with a logarithmic divergence at the origin. The Langevin description for systems in the PC class yields a similar potential, except for irrelevant terms . It has been shown that one can reconstruct the potential from the numerical integration of the Langevin equation, which, however, requires special techniques in the presence of absorbing states . In Section 7 we show how the potential can be reconstructed from actual simulations of a phenomenological model, such as our original cellular automaton or the kink dynamics. In this way we obtain the shape of the potential for a system in the parity conservation universality class.
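The fragment-method sketch announced above follows; it is our own minimal illustration, not the actual code of the numerical experiments. Each bit position of a Python integer carries an independent copy of the lattice evolving with its own pair $`(p_1,p_2)`$, a single random number per site is shared across all bit planes, and the totalistic indicators $`\delta_{\sigma,s}`$ are evaluated with the Boolean decomposition derived in Appendix B:

```python
import random

def fragment_step(x, p1s, p2s, rng=random):
    """One synchronous update of W lattices packed bitwise.

    x    : list of L integers; bit j of x[i] is site i of lattice j
    p1s  : p_1 for each bit plane (length W); p2s likewise; p_0=0, p_3=1
    """
    L, W = len(x), len(p1s)
    out = [0] * L
    for i in range(L):
        a, b, c = x[i - 1], x[i], x[(i + 1) % L]
        xi1 = a ^ b ^ c                       # parity of occupied sites
        xi2 = (a & b) ^ (a & c) ^ (b & c)     # second symmetric polynomial
        xi3 = a & b & c                       # all three occupied
        d1, d2 = xi1 ^ xi3, xi2 ^ xi3         # indicators of sigma = 1, 2
        r = rng.random()                      # one random number per site
        R1 = R2 = 0
        for j in range(W):                    # stochastic bit masks
            R1 |= (r < p1s[j]) << j
            R2 |= (r < p2s[j]) << j
        out[i] = (R1 & d1) | (R2 & d2) | xi3  # sigma = 3 always survives
    return out

# example: 8 bit planes scanning p1 at fixed p2 = 0
# x = fragment_step(x, [0.3 + 0.05 * j for j in range(8)], [0.0] * 8)
```

Scanning `p1s` and `p2s` across the phase diagram lets a single run probe many parameter values at once, which is the point of the method.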
## 2 The model

We describe here a one-dimensional, totalistic, probabilistic cellular automaton with three inputs. The state of the model at time $`t`$ is given by $`𝒙^t=(x_0^t,\dots,x_{L-1}^t)`$ with $`x_i^t\in\{0,1\}`$; $`t=1,2,\dots`$ and $`L`$ is the number of sites. All operations on spatial indices are assumed to be modulo $`L`$ (periodic boundary conditions). For simplicity of notation, we write $`x=x_i^t`$, $`x_-=x_{i-1}^t`$, $`x_+=x_{i+1}^t`$ and $`x^{\prime}=x_i^{t+1}`$. We shall indicate by $`\sigma=x_-+x+x_+`$ the number of occupied cells in the neighborhood. The most general three-input totalistic PCA is defined by the quantities $`p_s`$, which are the conditional probabilities that $`x^{\prime}=1`$ if $`\sigma=s`$. The microscopic dynamics of the model is completely specified by
$$x^{\prime}=\sum_{s=0}^{3}R_s\delta_{\sigma,s}.$$ (1)
In this expression $`R_s`$ is a stochastic binary variable that takes the value 1 with probability $`p_s`$ and 0 with probability $`1-p_s`$, and $`\delta`$ is the Kronecker delta. In practice, we implement $`R_s`$ by extracting a random number $`r_s`$ uniformly distributed between 0 and 1, and setting $`R_s`$ equal to 1 if $`r_s<p_s`$ and 0 otherwise. Eq. (1) implies the use of four random numbers $`r_s`$ for each site. The evolution of a single trajectory is not affected by possible correlations among the $`r_s`$, since only one $`\delta_{\sigma,s}`$ is different from zero. This is not true when computing the simultaneous evolution of two or more replicas using the same noise. If not otherwise stated, we use only one random number for all the $`r_s`$. More discussion of the choice of random numbers can be found in Section 5 and in Appendix B.

With $`p_0=0`$ and $`p_3=1`$ the model presents two quiescent phases: the phase 0 corresponding to the configuration $`x=(0,\dots,0)`$ and the phase 1 corresponding to the configuration $`x=(1,\dots,1)`$. In this case there are two control parameters, $`p_1`$ and $`p_2`$, and the model is symmetric under the changes $`p_1\to 1-p_2`$, $`p_2\to 1-p_1`$ and $`x\to x\oplus 1`$, where $`\oplus`$ is the exclusive disjunction (the sum modulo 2).

## 3 Phase diagram

In order to have a qualitative idea of the behavior of the model, we first study the mean-field approximation. If $`c`$ and $`c^{\prime}`$ denote the densities of occupied sites at times $`t`$ and $`t+1`$ respectively,
$$c^{\prime}=3p_1c(1-c)^2+3p_2c^2(1-c)+c^3.$$ (2)
This map has three fixed points, $`c_0`$, $`c_1`$, and $`c_2`$, given by
$$c_0=0,\qquad c_1=\frac{3p_1-1}{1+3p_1-3p_2},\qquad\text{and}\qquad c_2=1.$$
The asymptotic density assumes one of these values according to the values of the control parameters and the initial state, as we show in Fig. 1. In the square $`1/3<p_1\le 1`$, $`0\le p_2<2/3`$, the only stable fixed point is $`c_1`$. Inside this square, on the segments $`p_2-2/3=m(p_1-1/3)`$ with $`m<0`$, one has $`c_1=1/(1-m)`$. The fixed point $`c_0`$ is stable when $`p_1<1/3`$, and $`c_2`$ is stable when $`p_2>2/3`$. There is a continuous second-order transition from the quiescent phase 0 to the active phase on the segment $`p_1=1/3`$, $`0\le p_2<2/3`$, and another continuous transition from the active phase to the quiescent phase 1 on the segment $`1/3<p_1\le 1`$, $`p_2=2/3`$. In the hatched region of Fig. 1, $`c_0`$ and $`c_2`$ are both stable. Their basins of attraction are, respectively, the semi-open intervals $`[0,c_1)`$ and $`(c_1,1]`$.
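The fixed-point structure of the mean-field map (2) can be checked directly; the short sketch below (our own illustration) iterates the map and compares the asymptotic density with the analytic fixed point $`c_1`$:

```python
def mean_field_step(c, p1, p2):
    """One iteration of the mean-field map, Eq. (2)."""
    return 3 * p1 * c * (1 - c) ** 2 + 3 * p2 * c ** 2 * (1 - c) + c ** 3

def asymptotic_density(c, p1, p2, steps=10000):
    for _ in range(steps):
        c = mean_field_step(c, p1, p2)
    return c

p1, p2 = 0.6, 0.4                        # inside the active square
c1 = (3 * p1 - 1) / (1 + 3 * p1 - 3 * p2)
print(asymptotic_density(0.2, p1, p2))   # converges to c1
print(c1)                                # 0.5 here (on the line p1 + p2 = 1)
```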
Starting from a uniformly distributed random value of $`c`$, as time $`t`$ goes to infinity $`c`$ tends to $`c_0`$ with probability $`c_1`$, and to $`c_2`$ with probability $`1-c_1`$. Since, for $`p_1+p_2=1`$, $`c_1=1/2`$, the segment $`p_1+p_2=1`$, with $`0\le p_1<1/3`$ and $`2/3<p_2\le 1`$, behaves as a first-order transition line between the phase 0 and the phase 1.

In Fig. 2 we show the phase diagram of the model obtained numerically, starting from a random initial state with half of the cells occupied. The scenario is qualitatively the same as predicted by the mean-field analysis. In the vicinity of the point $`(p_1,p_2)=(0,1)`$ we observe a discontinuous transition from $`c=0`$ to $`c=1`$. The two second-order phase-transition curves from the active phase to the quiescent phases are symmetric, and the critical behavior of the order parameter, $`c`$ for the lower curve and $`1-c`$ for the upper one, is the same. Due to the symmetry of the model, the two second-order phase-transition curves meet at a bicritical point $`(p_t,1-p_t)`$, where the first-order phase transition line $`p_1+p_2=1`$, $`p_1<p_t`$, ends. Crossing the second-order phase boundaries on a line parallel to the diagonal $`p_1=p_2`$, the density $`c`$ exhibits two critical transitions, as shown in the inset of Fig. 2. Approaching the bicritical point, the critical region becomes smaller and corrections to scaling increase. Finally, at the bicritical point the two transitions coalesce into a single discontinuous one.

## 4 Critical dynamics and universality classes

We performed standard dynamic Monte Carlo simulations, starting from a single site at the origin out of the nearest absorbing state, and measured the average number of active sites $`N(t)`$, the survival probability $`P(t)`$ and the average square distance from the origin $`R^2(t)`$ (averaged over surviving runs), defined as
$$N(t)=\frac{1}{K}\sum_{k=1}^{K}\sum_i \omega_i^t(k),\qquad
P(t)=\frac{1}{K}\sum_{k=1}^{K}\theta\!\left(\sum_i \omega_i^t(k)\right),\qquad
R^2(t)=\frac{1}{KN(t)}\sum_{k=1}^{K}\sum_i \omega_i^t(k)\,i^2.$$ (3)
In these expressions, $`k`$ labels the different runs and $`K`$ is the total number of runs. The quantity $`\omega_i`$ is $`x_i`$ if the nearest absorbing state is $`\mathrm{𝟎}`$ and $`1-x_i`$ otherwise; $`\theta`$ is the Heaviside step function, which assumes the value 1 if its argument is greater than 0, and the value 0 otherwise. At the critical point one has
$$N(t)\sim t^{\eta},\qquad P(t)\sim t^{-\delta},\qquad R^2(t)\sim t^{z}.$$
At the transition point $`(p_1^*=0.6625(3),\,p_2=0)`$ we get $`\eta=0.308(5)`$, $`\delta=0.160(2)`$ and $`z=1.265(5)`$, in agreement with the best known values for the directed percolation universality class .
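A minimal sketch of this spreading protocol (our own simplified implementation, using a sparse set of occupied sites so that the lattice is effectively infinite):

```python
import random

def spreading(p1, p2, T=2000, runs=200, seed=1):
    """Dynamic MC of Eq. (3): start from one occupied site at the origin
    and measure the average occupation N(t) and the survival P(t)."""
    rng = random.Random(seed)
    N = [0.0] * T
    P = [0.0] * T
    for _ in range(runs):
        occ = {0}
        for t in range(T):
            cand = {j for i in occ for j in (i - 1, i, i + 1)}
            new = set()
            for i in cand:
                s = (i - 1 in occ) + (i in occ) + (i + 1 in occ)
                if rng.random() < (0.0, p1, p2, 1.0)[s]:
                    new.add(i)
            occ = new
            N[t] += len(occ) / runs
            P[t] += bool(occ) / runs
            if not occ:
                break            # absorbing state; later N, P stay zero
    return N, P

# near the DP critical point quoted above (p1* ~ 0.6625 at p2 = 0),
# N(t) should grow roughly as t**0.308 and P(t) decay as t**(-0.16)
```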
Near the bicritical point, on the line $`p_1+p_2=1`$, the two absorbing states have symmetrical weight. We define a kink $`y_i`$ as $`y_i=x_i\oplus x_{i+1}`$. For the computation of the critical properties of the kink dynamics, one has to replace $`\omega_i`$ with $`y_i`$ in Eq. (3). The evolution equation is derived in Appendix B. In the kink dynamics there is only one absorbing state (the empty state), corresponding to either of the two absorbing states $`\mathrm{𝟎}`$ or $`\mathrm{𝟏}`$ of the original model. For $`p_1<p_t`$ the asymptotic value of the density of kinks is zero, and it starts to grow for $`p_1>p_t`$. In models with multiple absorbing states, dynamical exponents may vary with the initial conditions.

Quantities computed only over surviving runs ($`R^2(t)`$) appear to be universal, while others (namely $`P(t)`$ and $`N(t)`$) are not . We performed dynamic Monte Carlo simulations starting either from one or from two kinks. In both cases $`p_t=0.460(2)`$, but the exponents were found to be different. Due to the conservation of the number of kinks modulo two, starting from a single kink one cannot observe the relaxation to the absorbing state, and thus $`\delta=0`$. In this case $`\eta=0.292(5)`$, $`z=1.153(5)`$. On the other hand, starting with two neighboring kinks, we find $`\eta=0.00(2)`$, $`\delta=0.285(5)`$, and $`z=1.18(2)`$. These results are consistent with those found by other authors .

## 5 The chaotic phase

Let us now turn to the sensitivity of the model to a variation of the initial configuration, i.e. to the study of damage spreading or, equivalently, to the location of the chaotic phase. Given two replicas $`x`$ and $`y`$, we define the difference $`w`$ as $`w=x\oplus y`$. The damage $`h`$ is defined as the fraction of sites in which $`w=1`$, i.e. as the Hamming distance per site between the configurations $`x`$ and $`y`$.

The precise location of this phase transition depends on the particular implementation of the stochasticity. Since the sum of occupied cells in the neighborhood of $`x`$ is in general different from that of $`y`$, the evolution equation (1) for the two replicas uses different random numbers $`r_s`$. The correlations among these random numbers affect the location of the chaotic phase boundary . We limit our investigation to the case of maximal correlations, by using just one random number per site, i.e. all $`r_s`$ are the same for all $`s`$ at the same site. This gives the smallest possible chaotic region. Notice that we have to extract a random number for all sites, even if one or both replicas have a neighborhood configuration for which the evolution rule is deterministic ($`\sigma=0`$ or $`\sigma=3`$).

One can write the mean-field equation for the damage by taking into account all the possible local configurations of the two lattices. The evolution equation for the damage depends on the correlations among sites, but in the simplest case we can assume that $`h(t+1)`$ depends only on $`h(t)`$ and $`c(t)`$, the density of occupied sites. In Appendix A we derive the evolution equation for the damage in the mean-field approximation. In Fig. 3 we show the phase diagram of the chaotic phase in this approximation. There is qualitative agreement with the mean-field phase diagram found for the DK model .

In Fig. 4 we show the phase diagram for the damage found numerically by considering the evolution starting from uncorrelated configurations with initial density equal to 0.5. The damaged region is shown in shades of grey. Outside this region small damaged domains appear on the other phase boundaries. This is due either to the divergence of the relaxation time (second-order transitions) or to the fact that a small difference in the initial configuration can drive the system to a different absorbing state (first-order transitions). The chaotic domain near the point $`(p_1,p_2)=(1,0)`$ is stable regardless of the initial density. On the line $`p_2=0`$ the critical points of the density and of the damage coincide at $`p_1^*`$.
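The maximally correlated damage-spreading setup is easy to reproduce; the sketch below (ours, with invented helper names) evolves two replicas with the same random number per site and returns the asymptotic damage:

```python
import random

def damage_run(p1, p2, L=500, T=1000, seed=7):
    """Evolve two replicas with the SAME random number per site and
    return the final fraction of differing sites (the damage h)."""
    rng = random.Random(seed)
    x = [rng.random() < 0.5 for _ in range(L)]
    y = [rng.random() < 0.5 for _ in range(L)]   # independent second replica

    def update(z, i, r):
        s = z[i - 1] + z[i] + z[(i + 1) % L]
        return r < (0.0, p1, p2, 1.0)[s]

    for _ in range(T):
        rs = [rng.random() for _ in range(L)]    # one number per site, shared
        x = [update(x, i, rs[i]) for i in range(L)]
        y = [update(y, i, rs[i]) for i in range(L)]
    return sum(a != b for a, b in zip(x, y)) / L

# near (p1, p2) = (1, 0) the damage stays finite in our runs, while
# deeper in the active phase the replicas synchronize and it decays to 0
print(damage_run(0.95, 0.05))
```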
## 6 First-order phase transition and hysteresis cycle

First-order phase transitions are usually associated with a hysteresis cycle, due to the coexistence of two phases. In the absence of absorbing states, the coexistence of two stable phases at the same values of the parameters is a transient effect in finite systems, due to the presence of fluctuations. To find the hysteresis loop we modify the model slightly by putting $`p_0=1-p_3=\epsilon`$ with $`\epsilon\ll 1`$. In this way the empty and fully occupied configurations are no longer absorbing. This brings the model back into the class of equilibrium models, for which there is no phase transition in one dimension, but metastable states can nevertheless persist for long times. The mean-field equation for the density $`c`$ becomes
$$c^{\prime}=\epsilon(1-c)^3+3p_1c(1-c)^2+3p_2c^2(1-c)+(1-\epsilon)c^3.$$ (4)
We study the asymptotic density as $`p_1`$ and $`p_2`$ move along a line with slope 1 inside the hatched region of Fig. 1. For $`p_1`$ close to zero, Eq. (4) has only one fixed point, which is stable and close to $`\epsilon`$. As $`p_1`$ increases adiabatically (taking $`c`$ at $`t=0`$ equal to the previous value of the fixed point), the asymptotic density will still assume this value, even when two more fixed points appear, one of which is unstable while the other is stable and close to one. Eventually the first fixed point disappears, and the asymptotic density jumps to the stable fixed point close to one. Going backwards along the same line, the asymptotic density stays close to one until that fixed point disappears, and then it jumps back to a small value close to zero. By symmetry, the hysteresis loop is centered around the line $`p_1+p_2=1`$, which we identify as a first-order phase transition line inside the hatched region.

The hysteresis region is found by two methods: the dynamical mean field, which extends the mean-field approximation to blocks of $`l`$ sites , and direct numerical experiments. As stated before, it is necessary to introduce a small perturbation $`\epsilon=p_0=1-p_3`$. We consider lines parallel to the diagonal $`p_1=p_2`$ in the parameter space and increase the values of $`p_1`$ and $`p_2`$ after a given relaxation time $`t_r`$, up to $`p_2=1`$; afterwards the scan is reversed, down to $`p_1=0`$. The hysteresis regions for various values of the parameters are reported in Fig. 5. In the numerical simulations, one can estimate the size of the hysteresis region by starting with the configurations $`\mathrm{𝟎}`$ and $`\mathrm{𝟏}`$ and measuring the size $`d`$ of the region in which the two simulations disagree.
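At the mean-field level the scan just described can be reproduced with a few lines; the sketch below (our illustration; the line offset $`k=0.5`$ is an arbitrary choice that crosses the hatched region) iterates Eq. (4) adiabatically:

```python
def mf_step(c, p1, p2, eps):
    """One iteration of Eq. (4)."""
    return (eps * (1 - c) ** 3 + 3 * p1 * c * (1 - c) ** 2
            + 3 * p2 * c ** 2 * (1 - c) + (1 - eps) * c ** 3)

def hysteresis(k=0.5, eps=1e-3, n=50, relax=20000):
    """Adiabatic scan along p2 = p1 + k, up in p1 and then back down."""
    p1s = [(1 - k) * i / n for i in range(n + 1)]   # p1 from 0 to p2 = 1
    c, up = eps, []
    for p1 in p1s:
        for _ in range(relax):
            c = mf_step(c, p1, p1 + k, eps)
        up.append(c)
    down = []
    for p1 in reversed(p1s):
        for _ in range(relax):
            c = mf_step(c, p1, p1 + k, eps)
        down.append(c)
    return p1s, up, down[::-1]

p1s, up, down = hysteresis()
# the two branches disagree for p1 roughly between 1/6 and 1/3, a loop
# centered on the first-order line p1 + p2 = 1 (here p1 = 0.25)
```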
## 7 Reconstruction of the potential

An important point in the study of systems exhibiting absorbing states is the formulation of a coarse-grained description using a Langevin equation. It is generally accepted that the DP universal behavior is represented by
$$\frac{\partial c(x,t)}{\partial t}=ac(x,t)-bc^2(x,t)+\nabla^2c(x,t)+\sqrt{c(x,t)}\,\alpha(x,t),$$ (5)
where $`c`$ is the density field, $`a`$ and $`b`$ are control parameters and $`\alpha`$ is a Gaussian noise with correlations $`\langle\alpha(x,t)\alpha(x^{\prime},t^{\prime})\rangle=\delta_{x,x^{\prime}}\delta_{t,t^{\prime}}`$. The diffusion coefficient has been absorbed into the parameters $`a`$ and $`b`$ and into the time scale. This equation can be obtained from a mean-field approximation of the evolution equation, keeping only the relevant terms. The state $`c(x,t)=0`$ is clearly stationary, but its absorbing character is given by the balance between the fluctuations, which are of the order of the field itself, and the “potential” part $`ac-bc^2`$ (see also Ref. ).

The role of the absorbing state can be illustrated by taking a sequence of equilibrium models whose energy landscape exhibits the coexistence of an infinitely deep well (the absorbing state) and another broad local minimum (corresponding to the “active”, disordered state), separated by an energy barrier. There is no true stationary active state for such a system (with a finite energy barrier), since there is always a probability of jumping into the absorbing state. However, the system can survive in a metastable active state for time intervals of the order of the inverse of the height of the energy barrier. The parameters controlling the height of the energy barrier are the size of the lattice and the length of the simulation; the equilibrium systems are two-dimensional, with asymmetric interactions in the time direction . In the limiting case of an infinite system, the height of the energy barrier is finite below the transition point and infinite above it, where the only physically relevant state is the disordered one.

It is possible to introduce a zero-dimensional approximation to the model by averaging over time and space, assuming that the system has entered the metastable state. In this approximation the size of the original system enters through the renormalized coefficients $`\overline{a}`$, $`\overline{b}`$:
$$\frac{dc(t)}{dt}=\overline{a}c(t)-\overline{b}c^2(t)+\sqrt{c(t)}\,\alpha(t),$$
where also the time scale has been renormalized. The associated Fokker-Planck equation is
$$\frac{\partial P(c,t)}{\partial t}=-\frac{\partial}{\partial c}\left(\overline{a}c-\overline{b}c^2\right)P(c,t)+\frac{1}{2}\frac{\partial^2}{\partial c^2}\,cP(c,t),$$
where $`P(c,t)`$ is the probability of observing a density $`c`$ at time $`t`$. One possible solution is a $`\delta`$-peak centered at the origin, corresponding to the absorbing state. By considering only those trajectories that do not enter the absorbing state during the observation time, one can impose a detailed balance condition, whose effective agreement with the actual probability distribution has to be checked *a posteriori*. A stationary distribution $`P(c)=\mathrm{exp}(-V(c))`$ corresponds to an effective potential $`V(c)`$ of the form
$$V(c)=\mathrm{log}(c)-2\overline{a}c+\overline{b}c^2.$$
Note that this distribution is not normalizable. One can impose a cutoff at low $`c`$, making $`P(c)`$ normalizable. For finite systems the only stationary solution is the absorbing state. However, by increasing the size of the system one approaches the limit in which the energy barrier is infinitely high, the absorbing state is unreachable, and $`P(c)`$ is the observable distribution.

In order to find the form of the effective potential for spatially extended systems, Muñoz numerically integrated Eq. (5), using the procedure described by Dickman . It is however possible to obtain the shape of the effective potential from the actual simulations, simply by plotting $`V(c)=-\mathrm{log}(P(c))`$ versus $`c`$, where $`c`$ is the density of the configuration. In Fig. 6 we show the profile of the reconstructed potential $`V`$ for some values of $`p`$ around the critical value of the infinite system, $`p_1^*`$, on the line $`q=0`$ (i.e. $`p_2=0`$). We used rather small systems and followed the evolution for a limited amount of time, in order to balance the weight of the $`\delta`$-peak with respect to $`P(c)`$ (which is only metastable). For larger systems the absorbing state is not visible above the transition and dominates below it. On the line $`q=0`$ the model belongs to the DP universality class.
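The histogram-based reconstruction just described is simple to set up; a minimal sketch (our own, with arbitrary system size and run length) accumulates the density of a full-lattice run and plots $`V(c)=-\mathrm{log}(P(c))`$:

```python
import math
import random

def potential_histogram(p1, p2, L=200, T=20000, bins=50, seed=3):
    """Reconstruct V(c) = -log P(c) from the time series of the density."""
    rng = random.Random(seed)
    x = [rng.random() < 0.5 for _ in range(L)]
    counts = [0] * (bins + 1)
    for _ in range(T):
        rs = [rng.random() for _ in range(L)]
        x = [rs[i] < (0.0, p1, p2, 1.0)[x[i - 1] + x[i] + x[(i + 1) % L]]
             for i in range(L)]
        c = sum(x) / L
        counts[int(c * bins)] += 1
        if c in (0.0, 1.0):
            break                        # absorbing state reached; stop
    return [(b / bins, -math.log(n / T)) for b, n in enumerate(counts) if n]

# near p1* ~ 0.6625 (p2 = 0) the minimum of V should broaden, reflecting
# the growth of the density fluctuations at the DP transition
for c, V in potential_histogram(0.70, 0.0):
    print(f"{c:.2f} {V:.2f}")
```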
One can observe that the curve becomes broader in the vicinity of the critical point, in correspondence with the divergence of the critical fluctuations $`\chi\sim|p-p_c|^{-\gamma^{\prime}}`$, $`\gamma^{\prime}=0.54`$ . By repeating the same type of simulations for the kink dynamics (with random initial conditions), we obtain slightly different curves, as shown in Fig. 7. We notice that all curves have roughly the same width. Indeed, the exponent $`\gamma^{\prime}`$ for systems in the PC universality class is believed to be exactly 0 , as given by the scaling relation $`\gamma^{\prime}=d\nu_{\perp}-2\beta`$. Clearly, much more information can be obtained from the knowledge of $`P(c)`$, either by direct numerical simulations or by dynamical mean field through finite-size scaling analysis, as shown for instance in Ref. .

## 8 Discussion and conclusions

We have studied a probabilistic cellular automaton with two absorbing states and two control parameters. It is a simple and natural extension of the Domany-Kinzel (DK) model. Despite its simplicity, it has a rich phase diagram, with two symmetric second-order phase curves that join a first-order line at a bicritical point. The phase diagram and the critical properties of the model were found using several mean-field approximations and numerical simulations. The second-order phase transitions belong to the directed percolation universality class, except for the bicritical point, which belongs to the parity conservation (PC) universality class. The first-order phase transition line was demonstrated by a modification of the model that allows one to observe hysteresis cycles. The model also presents a chaotic phase, analogous to the one present in the DK model. This phase was studied using direct numerical simulations and dynamical mean field.

On the line of symmetry of the model the relevant behavior is given by the kink dynamics. We found a closed expression for the kink evolution rule and studied its critical properties, which belong to the PC universality class. The effective potential governing the coarse-grained evolution for the DP and the PC phases was found through direct simulations, confirming that critical fluctuations diverge at most logarithmically in the PC class.

The phase diagram of our model is qualitatively similar to that of Bassler and Browne (BB) . In both models two critical lines in the DP universality class meet at a bicritical point in the PC universality class and give origin to a first-order transition line. This suggests that the observed behavior has a certain degree of universality. An interesting feature of the BB model is that the absorbing states at the bicritical point are indeed symmetric, but the model does not show any conserved quantity. We have shown that the bicritical dynamics of our model can be exactly formulated either in terms of symmetric states or in terms of kink dynamics, providing an exact correspondence between the presence of conserved quantities and the symmetry of absorbing states. Furthermore, in order to obtain a qualitatively correct mean-field phase diagram of the BB model one has to include correlations between triplets, while the mean-field phase diagram of our model is already correct at the first level of approximation. This suggests that ours is a simpler model, which can be used as a prototype for multi-critical systems.

## Acknowledgements

Helpful and fruitful discussions with Paolo Palmerini, Antonio Politi and Hernán Larralde are acknowledged.
This work benefited from partial financial support from CNR (Italy), CONACYT (Mexico), project IN-116198 DGAPA-UNAM (Mexico) and project G0044-E CONACYT (Mexico). We wish to thank one of the Referees for having pointed out Ref. to us.

## Appendix A Damage spreading in the mean-field approximation

The minimum damage spreading occurs when the two replicas $`𝒙`$ and $`𝒚`$ evolve using maximally correlated random numbers, i.e. when all the $`r_s`$ in Eq. (1) are the same. Let $`w=x\oplus y`$ be the damage at site $`i`$ and time $`t`$. It is also possible to consider $`w`$ as an independent variable and write $`y=x\oplus w`$. We denote $`s=x_-+x+x_+`$, $`s^{\prime}=y_-+y+y_+=(x_-\oplus w_-)+(x\oplus w)+(x_+\oplus w_+)`$ and $`s^{\prime\prime}=w_-+w+w_+`$. The evolution equation for $`h`$, the density of damaged sites $`w`$ at time $`t`$, is obtained by considering all the local configurations $`x_-xx_+`$ and $`w_-ww_+`$ of one replica and of the damage:
$$h^{\prime}=\sum_{\substack{x_-xx_+\\ w_-ww_+}}\pi(c,s,3)\,\pi(h,s^{\prime\prime},3)\left|p_s-p_{s^{\prime}}\right|,$$ (A.1)
where
$$\pi(\alpha,n,m)=\alpha^n(1-\alpha)^{m-n}.$$
In this equation all the sums run from zero to one. The value of $`c`$ is given by Eq. (2). The term $`|p_s-p_{s^{\prime}}|`$ is the probability that $`R_s\oplus R_{s^{\prime}}`$ is one when only one random number is used for the $`r_s`$. The argument of the sum is the probability that $`x^{\prime}\ne y^{\prime}`$. It is possible to rewrite Eq. (A.1) in a different form,
$$h^{\prime}=\sum_{s\,s^{\prime}\,\ell}\binom{m}{s}\binom{s}{\ell}\binom{m-s}{s^{\prime}-\ell}\pi(c,s,m)\,\pi(h,s+s^{\prime}-2\ell,m)\,|p_s-p_{s^{\prime}}|,$$ (A.2)
where $`s`$ and $`s^{\prime}`$ are the same as above, and $`\ell`$ is the overlap between $`x`$ and $`y`$, i.e. $`\ell=(x_-\wedge y_-)+(x\wedge y)+(x_+\wedge y_+)`$ ($`\wedge`$ is the AND operation). Assuming that $`\binom{a}{b}=0`$ if $`b>a`$, $`a<0`$ or $`b<0`$, the sums in (A.2) can run over all non-negative integers. This expression is valid for all totalistic rules with a neighborhood of size $`m`$ (here $`m=3`$). The stationary state of Eq. (A.1) (or Eq. (A.2)) can be found analytically using a symbolic manipulation program. The chaotic transition line is
$$p_2=p_1-\frac{1}{9},$$
with $`1/3<p_1<1`$, $`0<p_2<2/3`$.

## Appendix B Kink dynamics

On the segment $`p_1+p_2=1`$, $`p_1<p_t`$, the order parameter is the number of kinks. The dynamics of the kinks $`y_i=x_i\oplus x_{i+1}`$ (which for ease of notation we write $`y=x\oplus x_+`$) is obtained by taking the exclusive disjunction of $`x^{\prime}=x_i^{t+1}`$ and $`x_+^{\prime}=x_{i+1}^{t+1}`$, given by Eq. (1). In order to obtain a closed expression for the $`y`$, a little Boolean algebra is needed. The totalistic functions $`\delta_{\sigma,s}`$, where $`\sigma=x_-+x+x_+`$, can be expressed in terms of the symmetric polynomials $`\xi^{(j)}`$ of degree $`j`$ . These are
$$\xi^{(1)}=x_-\oplus x\oplus x_+,\qquad
\xi^{(2)}=x_-x\oplus x_-x_+\oplus xx_+,\qquad
\xi^{(3)}=\xi^{(1)}\xi^{(2)}=x_-xx_+.$$
The totalistic functions are given by
$$\delta_{\sigma,1}=\xi^{(1)}\oplus\xi^{(3)},\qquad
\delta_{\sigma,2}=\xi^{(2)}\oplus\xi^{(3)},\qquad
\delta_{\sigma,3}=\xi^{(3)}.$$
In the evolution equation (1), one has $`R_1=1`$ if $`r_1<p_1`$, and $`R_2=1`$ if $`r_2<p_2`$. On the line $`p_1+p_2=1`$ (i.e. $`p_2=1-p_1`$), $`R_2`$ takes the value 1 if $`1-r_2>p_1`$.
Choosing $`1-r_2=r_1`$ (this choice does not affect the dynamics of a single replica), we have $`R_2=R_1\oplus 1`$, and Eq. (1) becomes, after some manipulations,
$$x^{\prime}=R\left(\xi^{(1)}\oplus\xi^{(2)}\right)\oplus\xi^{(2)},$$
where $`R=R_1`$. One can easily check that
$$\xi^{(1)}=y_-\oplus y\oplus x,$$
and
$$\xi^{(2)}=y_-y\oplus x.$$
Finally, we obtain the evolution equation for the $`y`$:
$$\begin{array}{cc}\hfill y^{\prime}&=x^{\prime}\oplus x_+^{\prime}\hfill\\
&=R(y_-\oplus y\oplus y_-y)\oplus R_+(y\oplus y_+\oplus yy_+)\oplus y_-y\oplus yy_+\oplus y\hfill\\
&=R(y_-\vee y)\oplus R_+(y\vee y_+)\oplus y_-y\oplus yy_+\oplus y.\hfill\end{array}$$ (B.3)
In this expression $`\vee`$ denotes the disjunction (OR) operation. The sum modulo 2 (XOR) of all the $`y_i`$ over the lattice is invariant in time, since all repeated terms cancel out ($`a\oplus a=0`$). Note that the kink dynamics uses correlated noise between neighboring sites.
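A direct implementation of the kink rule (B.3) makes the parity conservation explicit; the following sketch is our own illustration (helper names are invented), with the random bit $`R_{i+1}`$ of each site reused by its left neighbor, which is exactly the correlated noise noted above:

```python
import random

def kink_step(y, p1, rng):
    """One update of the kink rule (B.3) on the line p1 + p2 = 1."""
    L = len(y)
    R = [rng.random() < p1 for _ in range(L)]   # one Bernoulli(p1) bit per site
    out = []
    for i in range(L):
        ym, yc, yp = y[i - 1], y[i], y[(i + 1) % L]
        out.append(
            (R[i] & (ym | yc)) ^ (R[(i + 1) % L] & (yc | yp))
            ^ (ym & yc) ^ (yc & yp) ^ yc
        )
    return out

rng = random.Random(11)
y = [False] * 100
y[50] = y[51] = True                    # two kinks: even parity sector
parity = sum(y) % 2
for _ in range(1000):
    y = kink_step(y, 0.46, rng)         # near p_t = 0.460(2)
    assert sum(y) % 2 == parity         # kinks appear/disappear in pairs
print(sum(y))
```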
# Search for Resonances Decaying to e⁺-jet in e⁺p Interactions at HERA

## 1 Introduction

The $`e^+`$-jet mass spectrum in $`e^+p`$ scattering has been investigated with the ZEUS detector at HERA. An excess of events relative to Standard Model expectations has previously been reported by the H1 and ZEUS collaborations in neutral current deep inelastic scattering at high $`x`$ and high $`Q^2`$ . These events contain high-mass $`e^+`$-jet final states. Several models have been discussed as possible sources of these events, including leptoquark production and R-parity-violating squark production . This paper presents an analysis of ZEUS data specifically aimed at searching for high-mass states decaying to $`e^+`$-jet. Candidate events with high transverse energy, an identified final-state positron, and at least one jet are selected. The measured energies ($`E_e^{\prime},E_j`$) and angles of the final-state positron and of the jet with the highest transverse momentum are used to calculate an invariant mass
$$M_{ej}^2=2E_e^{\prime}E_j(1-\mathrm{cos}\xi),$$ (1)
where $`\xi`$ is the angle between the positron and the jet. The angle between the outgoing and incoming positron in the $`e^+`$-jet rest frame, $`\theta^*`$, is also reconstructed using the measured energies and angles. No assumptions about the production process are made in the reconstruction of either $`M_{ej}`$ or $`\theta^*`$. The search was performed using $`47.7`$ pb⁻¹ of data collected in the 1994-1997 running periods.

In the following, expectations from the Standard Model, from leptoquark production and from R-parity-violating squark production are summarized. After a discussion of the experimental conditions, the analysis is described and the $`M_{ej}`$ and $`\mathrm{cos}\theta^*`$ distributions are presented. Since these distributions do not show a clear signal for a narrow resonance, limits on the cross section times branching ratio are extracted for the production of such a state. Limits are also presented in the mass versus coupling plane, which can be applied to leptoquark and squark production.
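Equation (1) depends only on measured energies and angles; a minimal sketch of the computation (our own helper, not from the ZEUS software, taking laboratory polar and azimuthal angles as inputs):

```python
import math

def m_ej(E_e, E_j, theta_e, phi_e, theta_j, phi_j):
    """Invariant mass of Eq. (1), treating positron and jet as massless.

    Angles are laboratory polar/azimuthal angles in radians; xi is the
    opening angle between the positron and jet directions.
    """
    cos_xi = (math.sin(theta_e) * math.sin(theta_j) * math.cos(phi_e - phi_j)
              + math.cos(theta_e) * math.cos(theta_j))
    return math.sqrt(2.0 * E_e * E_j * (1.0 - cos_xi))

# illustrative (made-up) event: a 120 GeV positron at 90 degrees,
# back-to-back in azimuth with a 150 GeV forward jet
print(m_ej(120.0, math.radians(90), 0.0, 150.0, math.radians(30), math.pi))
```

Note the argument order in the example call: `(E_e, theta_e, phi_e, E_j, theta_j, phi_j)` would be the natural grouping; with the signature above the call is `m_ej(120.0, 150.0, math.radians(90), 0.0, math.radians(30), math.pi)`, giving a mass of about 232 GeV for these invented values.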
## 2 Model Expectations

High-mass $`e^+`$-jet pairs, produced in the Standard Model (SM) via neutral current (NC) scattering, form the principal background to the search for heavy states. This process is reviewed first. Leptoquark (LQ) production and squark production in R-parity-violating ($`\not{R}_p`$) supersymmetry are used as examples of physics beyond the SM that could generate the $`e^+`$-jet final state. The diagrams for the NC and LQ processes are shown in Fig. 1. The squark production diagrams are similar to the LQ diagrams, but different decay modes are possible, as discussed below.

### 2.1 Standard Model Expectations

The kinematic variables used to describe the deep inelastic scattering (DIS) reaction
$$e^+p\to e^+X$$
are
$$Q^2=-q^2=-(k-k^{\prime})^2,$$ (2)
$$y=\frac{q\cdot P}{k\cdot P}\qquad\text{and}$$ (3)
$$x=\frac{Q^2}{2\,q\cdot P},$$ (4)
where $`k`$ and $`k^{\prime}`$ are the four-momenta of the incoming and outgoing positron, respectively, and $`P`$ is the four-momentum of the incoming proton. The center-of-mass energy is given by $`s=(k+P)^2\approx(300\,\mathrm{GeV})^2`$. The NC interaction occurs between the positron and a parton (quark) inside the proton (see Fig. 1). The production of the large $`e^+`$-jet masses of interest requires high-$`x`$ partons, where the valence quarks dominate the proton structure.

In leading-order electroweak theory, the cross section for the NC DIS reaction can be expressed as
$$\frac{d^2\sigma(e^+p)}{dx\,dy}=\frac{2\pi\alpha^2}{sx^2y^2}\left[Y_+F_2-Y_-xF_3-y^2F_L\right]$$ (5)
with $`Y_\pm=1\pm(1-y)^2`$ and $`\alpha`$ the fine-structure constant. The contribution from the longitudinal structure function, $`F_L`$, is expected to be negligible in the kinematic range considered here. The $`x`$ dependence of the NC cross section is very steep. In addition to the explicit $`1/x^2`$ factor, the structure functions $`F_2`$ and $`xF_3`$ are dominated at large $`x`$ by valence-quark densities, which fall quickly for $`x>0.3`$. The $`y`$ dependence of the cross section is dominated by the $`1/y^2`$ term; the structure functions vary slowly with $`y`$ at fixed $`x`$. The uncertainty in the NC cross section predicted by Eq. (5) is dominated by the uncertainty in the structure functions (parton densities), and is small, about $`5`$%, in the high-$`x`$ and moderate-$`y`$ ranges of this analysis . The quantity of interest in this paper is the $`e^+`$-jet cross section, which is sensitive to QCD corrections. The uncertainty arising from these corrections has been estimated to be small for this analysis.

For DIS or LQ events produced via the diagrams shown in Fig. 1 (i.e. assuming no QED or QCD radiation), the mass of the $`eq`$ system is related to $`x`$ via
$$M^2=sx$$ (6)
and $`\theta^*`$ is related to $`y`$ via
$$\mathrm{cos}\theta^*=1-2y.$$ (7)
The steeply-falling $`x`$ and $`y`$ dependences of DIS events therefore produce distributions falling sharply with mass and peaking towards $`\mathrm{cos}\theta^*=1`$.

### 2.2 Leptoquark Production and Exchange

Leptoquark production is an example of new physics that could generate high-mass $`e^+`$-jet pairs. The set of leptoquarks with $`SU(3)\times SU(2)\times U(1)`$-invariant couplings has been specified . Only LQs with fermion number $`F=L+3B=0`$ are considered here, where $`L`$ and $`B`$ denote the lepton and baryon number, respectively. These leptoquarks are listed in Table 1, together with some of their properties. The $`F=0`$ LQs have higher cross sections in $`e^+p`$ scattering than in $`e^-p`$ scattering, since in the $`e^+p`$ case a valence quark can fuse with the positron. In principle, additional LQ types can be defined which depend on the generations of the quarks and leptons to which they couple. Only LQs which preserve lepton flavor and which couple to first-generation quarks are considered in this analysis.

As shown in Fig. 1, leptoquark production can generate an s-channel resonance provided $`m_{LQ}<\sqrt{s}`$. Contributions to the $`e^+p`$ cross section would also result from u-channel exchange and from the interference of the LQ diagrams with photon and $`Z^0`$ exchange. The cross section in the presence of a leptoquark can be written as
$$\frac{d^2\sigma(e^+p)}{dx\,dy}=\frac{d^2\sigma^{NC}}{dx\,dy}+\frac{d^2\sigma_{u/NC}^{Int}}{dx\,dy}+\frac{d^2\sigma_{s/NC}^{Int}}{dx\,dy}+\frac{d^2\sigma_u^{LQ}}{dx\,dy}+\frac{d^2\sigma_s^{LQ}}{dx\,dy}.$$ (8)
The first term on the right-hand side of Eq. (8) represents the SM contribution discussed previously. The second (third) term arises from the interference between the SM and the u-channel (s-channel) LQ diagram, and the fourth (fifth) term represents the u-channel (s-channel) LQ diagram alone. The additional contributions to the SM cross section depend on two parameters: $`m_{LQ}`$, and $`\lambda_R`$ or $`\lambda_L`$, the coupling to the $`e_{L,R}^+`$ and the quark.
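Since the discussion of the various terms in Eq. (8) is phrased in terms of $`M`$ and $`\mathrm{cos}\theta^*`$, the mapping of Eqs. (6)-(7) is used repeatedly; a small sketch (our own helper) converts between the two sets of variables:

```python
import math

S = 300.0 ** 2      # HERA center-of-mass energy squared, GeV^2

def to_mass_frame(x, y):
    """Map Bjorken (x, y) to (M, cos(theta*)) via Eqs. (6) and (7)."""
    return math.sqrt(S * x), 1.0 - 2.0 * y

def from_mass_frame(M, cos_ts):
    """Inverse mapping, useful for turning mass windows into (x, y) cuts."""
    return M * M / S, 0.5 * (1.0 - cos_ts)

# a 220 GeV state decaying at cos(theta*) = -0.5 corresponds to:
print(from_mass_frame(220.0, -0.5))     # x ~ 0.54, y = 0.75
```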
Leptoquarks of well-defined helicity ($`\lambda_R\lambda_L=0`$) are assumed for simplicity in the limit-setting procedure, and one species of LQ is assumed to dominate the cross section. The $`\mathrm{cos}\theta^*`$ dependence varies strongly among the different terms: it is flat for scalar-LQ production in the s-channel and for vector-LQ exchange in the u-channel, while it varies as $`(1+\mathrm{cos}\theta^*)^2`$ for vector-LQ production in the s-channel or scalar-LQ exchange in the u-channel. The interference terms produce a $`\mathrm{cos}\theta^*`$ dependence which is steeper, due to the sharply-peaking $`\mathrm{cos}\theta^*`$ distribution in NC DIS. In general, the s-channel term dominates the additional contributions to the SM cross section if $`m_{LQ}<\sqrt{s}`$, the coupling $`\lambda`$ is small, and the LQ is produced from a quark rather than an antiquark. However, there are conditions under which the other terms can become significant, or even dominant , leading to important consequences for the expected mass spectra and decay angular distributions. The u-channel and interference terms cannot produce a resonance peak in the mass spectrum, and the angular distributions from such terms can behave more like those of NC deep inelastic scattering. Limits are presented in this paper for a narrow-width LQ and under conditions for which the s-channel term dominates.

The width of a LQ depends on its spin and decay modes, and is proportional to $`m_{LQ}`$ times the square of the coupling. In the narrow-width approximation, the LQ production cross section is given by integrating the s-channel term :
$$\sigma^{NWA}=(J+1)\frac{\pi}{4s}\lambda^2 q(x_0,\mu)$$ (9)
where $`J`$ is the spin of the LQ and $`q(x_0,\mu)`$ is the quark density evaluated at $`x_0=m_{LQ}^2/s`$ and at the scale $`\mu=m_{LQ}^2`$. In the limit-setting procedure (Sect. 8), this cross section was corrected for the expected QED and QCD (for scalar LQs only) radiative effects. The QCD corrections enhance the cross section by 20-30% for the $`F=0`$ LQs considered here. The effect of QED radiation on the LQ production cross section was calculated and found to decrease the cross section by 5-25% as $`m_{LQ}`$ increases from $`100`$ to $`290`$ GeV.

### 2.3 R-Parity-Violating Squark Production

In the supersymmetry (SUSY) superpotential, $`R`$-parity-violating terms of the form $`\lambda_{ijk}^{\prime}L_L^iQ_L^j\overline{D}_R^k`$ are of particular interest for lepton-hadron collisions. Here, $`L_L`$, $`Q_L`$, and $`\overline{D}_R`$ denote the left-handed lepton and quark doublets and the right-handed down-type quark-singlet chiral superfields, respectively. The indices $`i`$, $`j`$, and $`k`$ label their respective generations. For $`i=1`$, which is the case for $`ep`$ collisions, these operators can lead to $`\stackrel{~}{u}`$- and $`\stackrel{~}{d}`$-type squark production. There are 9 possible production couplings probed in $`e^+p`$ collisions, corresponding to the reactions
$$e^++\overline{u}_j\to\stackrel{~}{\overline{d}}_k,$$ (10)
$$e^++d_k\to\stackrel{~}{u}_j.$$ (11)
For production and decay via the $`\lambda_{1jk}^{\prime}`$ coupling, squarks behave like scalar leptoquarks, and the final state is indistinguishable, event by event, from Standard Model neutral and charged current events. However, as for the scalar leptoquarks, the angular distributions of the final-state lepton and quark will be different, and this fact can be exploited in performing searches.
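The narrow-width cross section of Eq. (9) applies equally to the scalar-LQ and squark cases just discussed, once a quark density is supplied. The sketch below is a toy evaluation only: the valence-like density and its normalization are our own invention, not the CTEQ4 set used in the paper, and the toy density carries no scale dependence:

```python
import math

S = 300.0 ** 2            # GeV^2
GEV2_TO_PB = 3.894e8      # (hbar*c)^2 in pb * GeV^2

def toy_u_valence(x):
    """Toy valence u-quark density, an illustrative stand-in for a real PDF."""
    return 5.0 * x ** -0.5 * (1.0 - x) ** 3

def sigma_nwa(m_lq, lam, J=0, q=toy_u_valence):
    """Narrow-width LQ/squark production cross section, Eq. (9), in pb."""
    x0 = m_lq ** 2 / S
    return (J + 1) * math.pi / (4.0 * S) * lam ** 2 * q(x0) * GEV2_TO_PB

# scalar state, m = 220 GeV, lambda = 0.1 (for comparison, a coupling of
# electromagnetic strength is often quoted as sqrt(4*pi*alpha) ~ 0.3)
print(sigma_nwa(220.0, 0.1))    # O(10) pb with this toy density
```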
Limits derived for scalar-LQ production can then be directly related to limits on squark production and decay via $`\lambda_{ijk}^{\prime}`$. In addition to the Yukawa couplings, gauge couplings also exist, whereby a $`\stackrel{~}{q}`$ can decay by radiating a neutralino or chargino, which can subsequently decay. The final-state signature depends on the properties of the neutralino or chargino. The search for such decay topologies from a squark is outside the scope of this analysis.

## 3 Experimental Conditions

In 1994-97, HERA operated with protons of energy $`E_p=820`$ GeV and positrons of energy $`E_e=27.5`$ GeV. The ZEUS detector is described in detail in . The main components used in the present analysis were the central tracking detector (CTD), positioned in a 1.43 T solenoidal magnetic field, and the uranium-scintillator sampling calorimeter (CAL). The CTD was used to establish an interaction vertex with a typical resolution along (transverse to) the beam direction of $`0.4\,(0.1)`$ cm. It was also used in the positron-finding algorithm, which associates a charged track with an energy deposit in the calorimeter. The CAL was used to measure the positron and hadronic energies. The CAL consists of a forward part (FCAL), a barrel part (BCAL) and a rear part (RCAL), with depths of $`7`$, $`5`$ and $`4`$ interaction lengths, respectively. The FCAL and BCAL are segmented longitudinally into an electromagnetic section (EMC) and two hadronic sections (HAC1,2). The RCAL has one EMC and one HAC section. The cell structure is formed by scintillator tiles; cell sizes range from $`5\times 20`$ cm² (FEMC) to $`24.4\times 35.2`$ cm² at the front face of a BCAL HAC2 cell. The light generated in the scintillator is collected on both sides of the module by wavelength-shifter bars, allowing a coordinate measurement based on knowledge of the attenuation length in the scintillator. The light is converted into an electronic signal by photomultiplier tubes. The cells are arranged into towers consisting of $`4`$ EMC cells, a HAC1 cell and a HAC2 cell (in FCAL and BCAL). The transverse dimensions of the towers in FCAL are $`20\times 20`$ cm². One tower is absent at the center of the FCAL and RCAL to allow space for the passage of the beams. The outer boundary of the inner ring of FCAL towers, used to define a fiducial cut for the jet reconstruction, defines a box of $`60\times 60`$ cm².

Under test-beam conditions, the CAL has energy resolutions of $`\sigma/E=0.18/\sqrt{E}`$ for positrons hitting the center of a calorimeter cell and $`\sigma/E=0.35/\sqrt{E}`$ for single hadrons, where energies are in GeV. In the ZEUS detector, the energy measurement is affected by the energy loss in the material between the interaction point and the calorimeter. For the events selected in this analysis, the positrons predominantly strike the BCAL, while the jets hit the FCAL. The in-situ positron-energy resolution in the BCAL has been determined to average $`\sigma/E=0.32/\sqrt{E}\oplus 0.03`$ (added in quadrature), while the jet-energy resolution in the FCAL averages $`\sigma/E=0.55/\sqrt{E}\oplus 0.02`$. The jet-energy resolution was determined by comparing reconstructed jet energies in the calorimeter with the total energy of the particles in the hadronic final state using Monte Carlo simulation, and therefore includes small contributions from the jet-finding algorithm.
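The quadrature combination implied by the $`\oplus`$ notation can be written out explicitly; a small helper (ours, purely illustrative) evaluates the in-situ figures quoted above:

```python
import math

def frac_resolution(E, stochastic, constant):
    """sigma/E = stochastic/sqrt(E) (+) constant, added in quadrature.

    E in GeV; returns the fractional resolution."""
    return math.hypot(stochastic / math.sqrt(E), constant)

print(frac_resolution(120.0, 0.32, 0.03))   # BCAL positron, ~4.2%
print(frac_resolution(150.0, 0.55, 0.02))   # FCAL jet, ~4.9%
```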
In the reconstruction of the positron and jet energies, corrections were applied for inactive material located in front of the calorimeter and for non-uniformities in the calorimeter response . For the high energies important in this analysis, the overall energy scale is known to $`1`$% for positrons in the BCAL and to $`2`$% for hadrons in the FCAL and BCAL. The electromagnetic energy scale was determined by a comparison with momentum measurements in the central tracking detector (using lower-energy electrons and positrons). Its linearity was checked with energies reconstructed from the double-angle (DA) method . The hadronic energy scales in the FCAL and BCAL were determined by using transverse-momentum balance in NC DIS events. The angular reconstruction was performed using a combination of tracking and calorimeter information. From Monte Carlo studies, the polar-angle resolutions were found to be $`2.5`$ mrad for positrons and approximately $`(220/\sqrt{E}\oplus 4)`$ mrad for jets with energies above $`100`$ GeV.

The luminosity was measured from the rate of the bremsstrahlung process $`e^+p\to e^+p\gamma`$ , and has an uncertainty of $`1.6`$%.

The ZEUS coordinate system is right-handed and centered on the nominal interaction point, with the $`Z`$ axis pointing in the direction of the proton beam (forward) and the $`X`$ axis pointing horizontally toward the center of HERA. The polar angle $`\theta`$ is defined with respect to the $`Z`$ axis.

## 4 Event Selection

The events of interest, with large $`e^+`$-jet mass, contain a final-state positron at a large angle and of much higher energy than that of the incident positron beam, as well as one or more energetic jets. The only important SM source of such events is NC scattering at large $`Q^2`$. Other potential backgrounds, such as high transverse-energy ($`E_T`$) photoproduction, were determined to be negligible. The following requirements selected events of the desired topology:

* A reconstructed event vertex was required in the range $`|Z|<50`$ cm.
* The total transverse energy, $`E_T`$, was required to be at least $`60`$ GeV.
* An identified positron was required, with energy $`E_e^{\prime}>25`$ GeV, located either in the FCAL or in the BCAL. The positron was required to be well contained in the BCAL or FCAL and not to point to the BCAL/FCAL interface, at approximately $`31^{\circ}<\theta<36^{\circ}`$. Positrons within $`1.5`$ cm of the boundary between adjacent BCAL modules, as determined by tracking information, were also discarded, to remove showers developing in the wavelength-shifter bars.
* A hadronic jet with transverse momentum $`P_T^j>15`$ GeV, located in a region of good containment, was required. The jets were reconstructed using the longitudinally-invariant $`k_T`$-clustering algorithm in the inclusive mode . Only jets with a reconstructed centroid outside the inner ring of FCAL towers were considered. In events where multiple jets were reconstructed, the jet with the highest transverse momentum was used. After all cuts, 12% of the events had more than one jet, both in the data and in the Monte Carlo simulation (see below).

The $`E_T`$ cut, the jet-containment cut and the positron-containment cut define the available kinematic region for further analysis, as shown in Fig. 2. The jet-containment cut, in particular, limits the values of $`\mathrm{cos}\theta^*`$ that can be measured at the highest $`e^+`$-jet masses.
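For bookkeeping, the cuts above can be summarized as a simple predicate over an event record. The sketch below is our own illustration: the field names are invented, and the positron and jet containment requirements are collapsed into Boolean flags:

```python
from dataclasses import dataclass

@dataclass
class Event:
    """Minimal event record; field names are ours, not ZEUS software."""
    z_vertex: float        # cm
    e_t: float             # total transverse energy, GeV
    e_positron: float      # corrected positron energy, GeV
    positron_ok: bool      # containment + module-boundary requirements
    pt_jet: float          # highest-P_T jet, GeV
    jet_outside_ring: bool # centroid outside the inner FCAL ring

def passes_selection(ev: Event) -> bool:
    """Selection cuts listed in Section 4."""
    return (abs(ev.z_vertex) < 50.0
            and ev.e_t >= 60.0
            and ev.e_positron > 25.0 and ev.positron_ok
            and ev.pt_jet > 15.0 and ev.jet_outside_ring)

print(passes_selection(Event(-3.1, 95.0, 120.0, True, 80.0, True)))  # True
```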
Because most such events have $`\mathrm{cos}\theta^*`$ near $`1`$, the acceptance for NC DIS events (with $`E_T>60`$ GeV) falls below $`10`$% for masses beyond 220 GeV. In the region allowed by the cuts shown in Fig. 2, the acceptance is typically $`80`$%. A total of $`7103`$ events remained after applying all cuts, compared to $`6949\pm 445`$ events predicted by the NC Monte Carlo simulation, based on the measured luminosity of $`47.7`$ pb⁻¹ (the sources of uncertainty on the expected number of events are described in Sect. 7.1). The $`E_T`$ distributions for data and NC simulation are compared in Fig. 3a. The positron transverse-momentum ($`P_T^e`$) spectrum, the jet transverse-momentum ($`P_T^j`$) spectrum and the ratio $`P_T^e/P_T^h`$, where $`P_T^h`$ is the transverse momentum of the hadronic system, are shown in Figs. 3(b-d), respectively. The missing transverse momentum, $`\not{P}_T`$, and the longitudinal momentum variable, $`E-P_Z`$ , are compared in Figs. 3(e,f). The global properties of the events are well reproduced by the simulation.

## 5 Event Simulation

The SM deep inelastic scattering events were simulated using the HERACLES 4.5.2 program with the DJANGO 6 version 2.4 interface to the hadronization programs. In HERACLES, corrections for initial- and final-state electroweak radiation, vertex and propagator corrections, and two-boson exchange are included. The NC DIS hadronic final state was simulated using the MEPS model of LEPTO 6.5 , which includes order-$`\alpha_S`$ matrix elements and models of higher-order QCD radiation. As a systematic check, the NC final state was also simulated using the color-dipole model of ARIADNE 4.08 . The CTEQ4 parton-distribution set was used to evaluate the expected number of events from NC DIS scattering. The leptoquark events were generated using PYTHIA 6.1 . This program takes into account the finite width of the LQ, but includes only the s-channel diagrams. Initial- and final-state QCD radiation from the quark and the effect of LQ hadronization before decay are taken into account, as are initial- and final-state QED radiation from the positron. The generated events were input into a GEANT-based program which simulated the response of the ZEUS detector. The trigger and offline processing requirements applied to the data were applied to the simulated events. The luminosity of the NC Monte Carlo samples ranges from $`46`$ pb⁻¹ at $`Q^2=400`$ GeV² to $`7.3\times 10^6`$ pb⁻¹ at $`Q^2=50000`$ GeV².

## 6 Mass and $`\theta^*`$ Reconstruction

The mass of each $`e^+`$-jet pair was reconstructed from the measured energies and angles of the positron and jet, as described by Eq. (1). This formula makes no correction for the finite jet mass. Possible mass shifts and the resolutions for resonant lepton-hadron states were estimated with PYTHIA. Narrow scalar-LQ events in the mass range $`150`$-$`290`$ GeV were simulated. The mean mass of the reconstructed events was found to be within 6% of the generated value, while the peak position, as determined by a Gaussian fit, was typically lower than the generated value by only 1%. The average mass resolution, determined from a Gaussian fit to the peak of the reconstructed mass spectrum, ranged from 5.5% to 3% for masses from 150 to 290 GeV. The RMS of the distribution was typically twice as large. The positron scattering angle in the $`e^+`$-jet rest frame, $`\theta^*`$, was reconstructed as the angle between the incoming and outgoing positron directions in this frame.
The positron scattering angle in the $`e^+`$-jet rest frame, $`\theta ^{*}`$, was reconstructed as the angle between the incoming and outgoing positron directions in this frame. These directions were determined by performing a Lorentz transformation using the measured positron and jet energies and angles in the laboratory frame. The resolution in $`\mathrm{cos}\theta ^{*}`$ near $`|\mathrm{cos}\theta ^{*}|=1`$, as determined from a Gaussian fit, was $`0.01`$, degrading to $`0.03`$ as $`|\mathrm{cos}\theta ^{*}|`$ decreases. The shift in $`\mathrm{cos}\theta ^{*}`$ was less than $`0.01`$ for both the NC MC and the leptoquark MC.

In order to determine limits on leptoquark and squark production, the mass of the electron-hadron system was reconstructed by the constrained-mass method. This method reconstructs the $`e^+`$-hadron mass as

$$M_{CJ}=\sqrt{2E_e(E+P_Z)}$$ (12)

where $`(E+P_Z)`$ is the sum of the energy and $`P_Z`$ contributions from the positron and all jets satisfying $`P_T^j>15`$ GeV and pseudorapidity $`\eta _j<3`$ (with the highest $`P_T`$ jet required to be outside the FCAL inner ring). The $`\eta _j`$ cut removes contributions from the proton remnant. The constraints $`\not\!P_T=0`$ and $`E-P_Z=2E_e`$, which are satisfied by fully contained events, have been assumed in arriving at this equation. When using this mass-reconstruction method, events with measured $`E-P_Z<40`$ GeV were removed to avoid large initial-state QED radiation.

The $`M_{CJ}`$ method gave, on average, improved resolution over the $`M_{ej}`$ method for narrow LQ MC events. The improved resolution occurred at smaller $`\mathrm{cos}\theta ^{*}`$ (for $`\mathrm{cos}\theta ^{*}\approx 0`$ the mass resolution determined from a Gaussian fit to the reconstructed mass distribution for $`m_{LQ}=200`$ GeV was about $`1.5`$% in the $`M_{CJ}`$ method and $`3`$% for the $`M_{ej}`$ method); at the larger $`\mathrm{cos}\theta ^{*}`$ values where NC DIS events are concentrated, the resolutions of the two methods were similar (about $`3`$% for $`m_{LQ}=200`$ GeV).

The $`M_{CJ}`$ method relies on constraints which do not necessarily apply to a resonant state whose properties cannot be anticipated in detail. We therefore choose to use the $`M_{ej}`$ method as our primary search method. The $`M_{CJ}`$ method is used in the limit-setting procedure.
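A minimal sketch of the constrained-mass reconstruction of Eq. 12 follows. The dictionary layout for the positron and jets is a hypothetical illustration; the beam energy is the nominal 27.5 GeV HERA positron-beam energy.

```python
import math

E_BEAM = 27.5  # nominal HERA positron beam energy, GeV

def m_cj(positron, jets):
    """Constrained mass, Eq. 12: M_CJ = sqrt(2 E_e (E + P_Z)), summing
    E + P_Z over the positron and jets with P_T > 15 GeV and eta < 3."""
    e_plus_pz = positron["E"] + positron["P_Z"]
    for j in jets:
        if j["P_T"] > 15.0 and j["eta"] < 3.0:   # remnant removed by eta cut
            e_plus_pz += j["E"] + j["P_Z"]
    return math.sqrt(2.0 * E_BEAM * e_plus_pz)

# illustrative event: high-energy positron plus one forward jet
print(m_cj({"E": 70.0, "P_Z": -5.0},
           [{"E": 400.0, "P_T": 40.0, "P_Z": 398.0, "eta": 2.3}]))
```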
## 7 $`M_{ej}`$ and $`\mathrm{cos}\theta ^{*}`$ Distributions

The reconstructed values of $`M_{ej}`$ are plotted versus $`\mathrm{cos}\theta ^{*}`$ for the selected events in Fig. 4. Most of the events are concentrated at large $`\mathrm{cos}\theta ^{*}`$ and small mass, as expected from Standard Model NC scattering. The five events indicated as open circles are from data taken in 1994-96, with total luminosity $`20`$ pb<sup>-1</sup>. They were the subject of a previous publication. In this earlier analysis, the kinematic variables were reconstructed with the DA method. The five events also stand out with the $`M_{ej}`$ reconstruction technique. The average value of $`M_{ej}`$ for these events is $`224`$ GeV, or $`7`$ GeV less than the corresponding mass calculated previously via $`M=\sqrt{sx_{DA}}`$, where $`x_{DA}`$ is the estimator of Bjorken-$`x`$ calculated with the DA method. This mass shift is compatible with expectations based on resolution and initial-state radiation effects. With the present luminosity of $`47.7`$ pb<sup>-1</sup>, 7 events are observed in the region of $`M_{ej}>200`$ GeV and $`\mathrm{cos}\theta ^{*}<0.5`$, where $`5.0`$ events are expected.

The $`M_{ej}`$ spectrum for events with $`M_{ej}>100`$ GeV is shown in Fig. 5a on a logarithmic scale. The high-mass part of the spectrum is shown on a linear scale in the inset. The predicted number of events ($`N^{pred}`$) from NC processes is shown as the histogram. The ratio of the measured mass spectrum to the expectation is shown in Fig. 5b. The shaded band indicates the systematic uncertainty on the expectations.

### 7.1 Systematic Uncertainties

The uncertainty on $`N^{pred}`$ varies with mass from $`7`$% at 100 GeV up to 30% at 250 GeV. The most important uncertainties are on the energy scale and the jet position. The NC DIS cross section given in Eq. 5 (neglecting $`F_L`$) can be rewritten in terms of the $`e^+q`$ invariant mass, $`M`$, and the polar angle of the outgoing struck quark in the laboratory frame, $`\gamma `$:

$$\frac{d^2\sigma (e^+p)}{dMd\gamma }=\frac{32\pi \alpha ^2E_e^2\mathrm{sin}\gamma }{M^5(1-\mathrm{cos}\gamma )^2}\left[Y_+F_2-Y_{-}xF_3\right].$$ (13)

The mass dependence is very steep. In addition to the explicit $`M^5`$ dependence, there is also a strong suppression of high masses implicit in the structure functions. An incorrect energy scale will produce a shift in the mass spectrum and potentially a significant error in the number of expected events at a given mass. The dependence on the quark angle is also steep, approximately $`\gamma ^{-3}`$ at small $`\gamma `$. The number of events passing the jet fiducial cut is therefore strongly dependent on the accuracy of the jet position reconstruction.
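The steepness of Eq. 13 is what makes the energy scale the dominant uncertainty. The toy calculation below keeps only the explicit $`M^5`$ factor (ignoring the additional suppression from the structure functions, so it understates the true sensitivity) and shows how a +2% shift of the mass scale propagates into the number of events above a fixed mass.

```python
# Toy: propagate a +2% mass-scale shift through a falling M^-5 spectrum.
# For dN/dM ~ M^-5 the integral above a threshold scales as m^-4.

def n_above(m_min, scale=1.0):
    """Relative number of events above m_min when all reconstructed
    masses are multiplied by 'scale'."""
    return (m_min / scale) ** -4

for m in (100.0, 210.0):
    change = n_above(m, scale=1.02) / n_above(m) - 1.0
    print(f"M > {m:.0f} GeV: +2% scale shift -> {100 * change:+.1f}% in N")
# ~ +8% at any threshold from the explicit M^-5 factor alone; the structure
# functions make the real spectrum, and hence the sensitivity, steeper still.
```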
The jet fiducial-volume cut requires the highest-$`P_T`$ jet to point outside the inner ring of FCAL towers. Many distributions from data and MC were compared to search for possible systematic biases. The dominant sources of uncertainty are itemized below in order of decreasing importance:

1. Knowledge of the calorimeter energy scales: The scale uncertainties discussed in Sect. 3 are $`1`$% for BCAL positrons and $`2`$% for hadrons, leading to an uncertainty of $`5(18)`$% in $`N^{pred}`$ at $`M_{ej}=100(210)`$ GeV.
2. Uncertainties in the simulation of the hadronic energy flow, including simulation of the proton remnant, the energy flow between the struck quark and proton remnant, and possible detector effects in the innermost calorimeter towers: Many distributions of data and MC were compared and no important systematic differences were found. Figure 6 shows the fraction of the jet energy in the inner ring of FCAL towers associated with the highest-$`P_T`$ jet as a function of $`\eta _j`$. This is shown for all events in Fig. 6a, as well as for those with $`M_{ej}>210`$ GeV in Fig. 6b. For the highest $`\eta _j`$ values considered, this ratio is about $`20`$%. The energy located in the innermost towers of the FCAL and not associated with the highest-$`P_T`$ jet is shown in Figs. 6c,d, and compared to the MC simulation. No large differences are seen between data and MC (the lowest $`\eta `$ bin in Fig. 6d contains only five data events). The innermost towers of the FCAL have a larger uncertainty in the energy scale than the rest of the FCAL owing to their slightly different construction and proximity to the beam. The energy in these cells has been varied by $`\pm 10`$%. As a test of the simulation of the forward energy flow, the ARIADNE MC has been used instead of the LEPTO MC. These tests yielded variations in $`N^{pred}`$ of 13% at $`M_{ej}=210`$ GeV.
3. Uncertainty in the parton density functions: The parton density functions were estimated as in , and led to an uncertainty of 5% in $`N^{pred}`$ at $`M_{ej}=210`$ GeV.
4. Uncertainties in the acceptance: The alignment of the FCAL was determined to better than $`5`$ mm, and various jet position reconstruction algorithms were compared. These studies yielded an uncertainty of 2% in $`N^{pred}`$.
5. Uncertainties in the energy resolution functions: These were studied by comparing tracking information with calorimeter information for individual events, as well as by comparing different reconstruction methods. The MC energies were smeared by additional amounts to represent these uncertainties, leading to a variation of less than $`5`$% in $`N^{pred}`$.

Other uncertainties include positron-finding efficiency, luminosity determination, vertex simulation, multijet production rates, and hadronization simulation. These were found to be small in comparison to the items listed above. The overall systematic uncertainty was obtained by summing the contributions from all these sources in quadrature.

### 7.2 Discussion

The data in Fig. 5 are in good agreement with the SM expectations up to $`M_{ej}\approx 210`$ GeV. Some excess is seen at higher masses. For $`M_{ej}>210`$ GeV, $`49`$ events were observed in the data, while $`24.7\pm 5.6`$ events are expected. A careful study of individual events in this mass region uncovered no signs of reconstruction errors. Rather, the events always contain clear examples of a high-energy positron (typically $`70`$ GeV) near $`90^{\circ}`$ and a high-energy jet (typically $`400`$ GeV) in the forward direction (2 events have a second jet, in accord with NC DIS Monte Carlo expectations). The distributions shown in Fig. 3 for all the data are restricted to the events with $`M_{ej}>210`$ GeV in Fig. 7. Whereas the shapes of the distributions are similar, the data lie systematically above the MC, which is normalized to the integrated luminosity.

The events with large $`M_{ej}`$ have characteristics similar on average to NC DIS events. In particular, the $`\mathrm{cos}\theta ^{*}`$ projection of the events with $`M_{ej}>210`$ GeV is shown in Fig. 8 and compared to the MC expectations for neutral current DIS (solid histogram). The expectations for narrow s-channel scalar and vector LQ production are also shown for comparison. For $`F=0`$ LQs with $`\lambda <1`$, the u-channel and interference terms would not significantly affect these expectations. The shapes of the data and NC MC $`\mathrm{cos}\theta ^{*}`$ distributions are qualitatively similar, peaking at high values of $`\mathrm{cos}\theta ^{*}`$.

In summary, there is some excess of events with $`M_{ej}>210`$ GeV above the Standard Model predictions. The probability of observing such an excess depends strongly on possible systematic biases. The most important of these are biases in the energy scales. As a test, many MC experiments were generated where the jet energy scale was shifted by +2% and the electron energy scale by +1%. A window of width $`3\sigma (M_{ej})`$, where $`\sigma (M_{ej})`$ is the mass resolution at mass $`M_{ej}`$, was moved over the accessible mass range. For each simulated experiment, the number of observed events within the mass window was compared with the nominal expectations as a function of $`M_{ej}`$, seeking the excess which gave the largest statistical significance. The same procedure was applied to the data. As a result, it was found that 5% of the simulated experiments would observe, somewhere in the mass spectrum, an excess of statistical significance at least as large as the one found in the data. The excess is therefore not statistically compelling.
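The sliding-window scan can be sketched as follows. This is a schematic reimplementation under simplifying assumptions: a toy $`M^5`$ spectrum stands in for the NC prediction, the window width is a fixed 10% of the mass (roughly $`3\sigma `$ for a ~3% resolution), and a Gaussian approximation replaces the exact Poisson tail probability.

```python
import numpy as np

rng = np.random.default_rng(0)
N_EXPECTED = 7000.0                      # toy total above 100 GeV

def expected(lo, hi):
    """Toy NC expectation: dN/dM ~ M^-5 above 100 GeV, so
    N(lo, hi) = N_tot * ((100/lo)^4 - (100/hi)^4)."""
    return N_EXPECTED * ((100.0 / lo) ** 4 - (100.0 / hi) ** 4)

windows = [(m, 1.1 * m) for m in np.arange(100.0, 280.0, 5.0)]

def max_significance(masses):
    """Largest local excess over the scan (Gaussian approximation)."""
    best = 0.0
    for lo, hi in windows:
        n_obs = np.count_nonzero((masses >= lo) & (masses < hi))
        n_exp = expected(lo, hi)
        best = max(best, (n_obs - n_exp) / np.sqrt(n_exp))
    return best

# distribution of the largest excess in background-only pseudo-experiments
maxima = []
for _ in range(200):
    n = rng.poisson(N_EXPECTED)
    masses = 100.0 * (1.0 - rng.random(n)) ** -0.25   # inverse-transform M^-5
    maxima.append(max_significance(masses))
# the global p-value is the fraction of maxima exceeding the value seen in data
```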
Furthermore, the events have the characteristics of neutral current scattering. Limits are therefore set on the production of narrow scalar or vector states, as discussed below.

## 8 Limits on Narrow Scalar and Vector States

Limits are set on the production cross section times branching ratio into positron+jet, $`\sigma B`$, for a narrow scalar or vector state. For definiteness, limits on coupling strength versus mass for $`F=0`$ leptoquarks are presented, as well as limits on $`\lambda \sqrt{B}`$ versus mass for scalar states coupling to $`u`$ or $`d`$ quarks, such as $`\not\!R_p`$ squarks. The limits are extracted for $`\lambda \lesssim 1`$, allowing the use of the narrow-width approximation assumed in Eq. 9.

The $`M_{CJ}`$ mass reconstruction method was used to set limits as described in Sect. 6. The positron fiducial cuts were removed since this method is less sensitive to the positron-energy measurement, while the cut $`E-P_Z>40`$ GeV was applied to reduce radiative effects. The mass spectrum reconstructed with this technique is shown in Fig. 9a. In total, 8026 events passed all selection cuts while 7863 events are predicted by the MC.

The leptoquark MC described in Sect. 5 was used to determine the event selection efficiency and the acceptance of the fiducial cuts, as well as to estimate the mass resolution. This MC and the NC background simulation were used to calculate an optimal bin width, $`\mathrm{\Delta }M_{CJ}`$, for each $`M_{CJ}`$, and an optimal $`\mathrm{cos}\theta ^{*}`$ range, $`\mathrm{cos}\theta ^{*}<\mathrm{cos}\theta _{cut}^{*}`$, to obtain on average the best limits on LQ couplings. The bin widths were typically 20 GeV. The values of $`\mathrm{cos}\theta _{cut}^{*}`$ for setting limits range from $`0.5`$ to $`0.9`$ for vector leptoquarks with masses between $`150`$ and $`290`$ GeV, and from $`0.1`$ to $`0.9`$ for scalar leptoquarks in the same mass range. The mass spectrum after applying the optimal $`\mathrm{cos}\theta ^{*}`$ cut for the scalar search is shown in Fig. 9b. No significant deviations from expectations are seen after applying this cut.

The 95% confidence level (CL) limits on $`\sigma B`$ were obtained directly from the observed number of data events with $`\mathrm{cos}\theta ^{*}<\mathrm{cos}\theta _{cut}^{*}`$ in the particular mass window. The procedure described in was extended to include the systematic uncertainties in the numbers of predicted events. This was found to have a negligible effect on the limits. The limits for a narrow scalar or vector state are shown in Fig. 10. These limits lie between $`1`$ and $`0.1`$ pb as the mass increases from $`150`$ to $`290`$ GeV.

The 95% CL exclusion limits for different species of LQ are given in the coupling versus mass plane in Fig. 11. The limits exclude leptoquarks with coupling strength $`\lambda =\sqrt{4\pi \alpha }\approx 0.3`$ for masses up to 280 GeV for specific types of $`F=0`$ leptoquarks. The H1 collaboration has recently published similar limits. In Fig. 11, the ZEUS results are compared to recent limits from OPAL. At LEP, sensitivity to a high-mass LQ arises from effects of virtual LQ exchange on the hadronic cross section. The HERA and LEP limits are complementary to Tevatron limits, which are independent of the coupling $`\lambda _{L,R}`$. The limits by D0 (CDF) extend up to $`225(213)`$ GeV for a scalar LQ with 100% branching ratio to $`eq`$. The D0 limits are shown as vertical lines in Fig. 11. The Tevatron limits for vector LQs are model dependent, but are expected to be considerably higher than for scalar LQs.
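The counting-experiment limit can be made concrete with a short sketch. The numbers below (observed events, background, efficiency) are purely illustrative, and the sketch omits the folding-in of systematic uncertainties mentioned above.

```python
from math import exp

def poisson_cdf(n, mu):
    """P(N <= n) for a Poisson mean mu, by direct summation."""
    term, total = exp(-mu), exp(-mu)
    for k in range(1, n + 1):
        term *= mu / k
        total += term
    return total

def upper_limit(n_obs, background, cl=0.95, step=0.001):
    """Classical 95% CL upper limit on the signal mean s: the smallest s
    with P(N <= n_obs | s + b) <= 1 - cl."""
    s = 0.0
    while poisson_cdf(n_obs, s + background) > 1.0 - cl:
        s += step
    return s

# illustrative only: counts, background and efficiency are made-up numbers
n_obs, b = 3, 2.1
lumi, eff = 47.7, 0.5                 # pb^-1, selection efficiency
print(upper_limit(n_obs, b) / (lumi * eff), "pb  (limit on sigma*B)")
```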
The ZEUS limits presented in Fig. 11 can also be applied to any narrow state which couples to a positron and a $`u`$ or $`d`$ quark and with unknown branching ratio to $`e^+`$-jet(s). These states correspond to the leptoquark types as labelled in the figure. For these states, the limits are on the quantity $`\lambda \sqrt{B}`$. Examples of scalar states for which these limits apply are $`\not\!R_p`$ squarks (e.g., the limit on the $`\stackrel{~}{S}_{1/2}^L(e^+d)`$ LQ can be read as a limit on the $`\lambda _{1j1}^{\prime}`$ R-parity-violating coupling).

## 9 Conclusion

Data from $`47.7`$ pb<sup>-1</sup> of $`e^+p`$ collisions at a center-of-mass energy of 300 GeV have been used to search for a resonance decaying into $`e^+`$-jet. The invariant mass of the $`e^+`$-jet pair was calculated directly from the measured energies and angles of the positron and jet. This approach makes no assumptions about the production mechanism of such a state.

The observed mass spectrum is in good agreement with Standard Model expectations up to $`e^+`$-jet masses of about $`210`$ GeV. Above this mass, some excess is seen. The angular distribution of these events is typical of high-$`Q^2`$ neutral current events and does not give convincing evidence for the presence of a narrow scalar or vector state. By applying restrictions on the decay angle to optimize sensitivity to a narrow state in the presence of NC background, limits have been derived on the cross section times decay branching fraction for a scalar or vector state decaying into positron and jet(s). These limits can be interpreted, for example, as limits on leptoquark or R-parity-violating squark production. Limits on the production of leptoquarks and squarks are presented in the coupling strength versus mass plane. At a coupling strength $`\lambda =0.3`$, new states are ruled out at 95% confidence level for masses between $`150`$ and $`280`$ GeV.

## Acknowledgements

We thank the DESY Directorate for their strong support and encouragement, and the HERA machine group for their diligent efforts. We are grateful for the support of the DESY computing and network services. The design, construction and installation of the ZEUS detector have been made possible by the ingenuity and effort of many people from DESY and home institutes who are not listed as authors. It is also a pleasure to thank W. Buchmüller, R. Rückl and M. Spira for useful discussions.
# A Priori Probability Distribution of the Cosmological Constant

UTTG-01-00

Steven Weinberg<sup>*</sup>

<sup>*</sup>Electronic address: weinberg@physics.utexas.edu

Theory Group, Department of Physics, University of Texas, Austin, TX 78712

Abstract

In calculations of the probability distribution for the cosmological constant, it has been previously assumed that the a priori probability distribution is essentially constant in the very narrow range that is anthropically allowed. This assumption has recently been challenged. Here we identify large classes of theories in which this assumption is justified.

I. INTRODUCTION

In some theories of inflation<sup>1</sup> and of quantum cosmology<sup>2</sup> the observed big bang is just one of an ensemble of expanding regions in which the cosmological constant takes various different values. In such theories there is a probability distribution for the cosmological constant: the probability $`d𝒫(\rho _V)`$ that a scientific society in any of the expanding regions will observe a vacuum energy between $`\rho _V`$ and $`\rho _V+d\rho _V`$ is given by<sup>3,4,5</sup>

$$d𝒫(\rho _V)=𝒫_{\star}(\rho _V)𝒩(\rho _V)d\rho _V,$$ (1)

where $`𝒫_{\star}(\rho _V)d\rho _V`$ is the a priori probability that an expanding region will have a vacuum energy between $`\rho _V`$ and $`\rho _V+d\rho _V`$ (to be precise, weighted with the number of baryons in such regions), and $`𝒩(\rho _V)`$ is proportional to the fraction of baryons that wind up in galaxies. (The constant of proportionality in $`𝒩(\rho _V)`$ is independent of $`\rho _V`$, because once a galaxy is formed the subsequent evolution of its stars, planets, and life is essentially unaffected by the vacuum energy.) The factor $`𝒩(\rho _V)`$ vanishes except for values of $`\rho _V`$ that are very small by the standards of elementary particle physics, because for $`\rho _V`$ large and positive there is a repulsive force that prevents the formation of galaxies<sup>6</sup> and hence of stars, while for $`\rho _V`$ large and negative the universe recollapses too fast for galaxies or stars to form.<sup>7</sup> The fraction of baryons that form galaxies has been calculated<sup>5</sup> for $`\rho _V>0`$ under reasonable astrophysical assumptions.

On the other hand, we know little about the a priori probability distribution $`𝒫_{\star}(\rho _V)`$. However, the range of values of $`\rho _V`$ in which $`𝒩(\rho _V)\ne 0`$ is so narrow compared with the scales of energy density typical of particle physics that it had seemed reasonable in earlier work<sup>4,5</sup> to assume that $`𝒫_{\star}(\rho _V)`$ is constant within this range, so that $`d𝒫(\rho _V)`$ can be calculated as proportional to $`𝒩(\rho _V)d\rho _V`$.
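Under the flat-prior assumption, Eq. (1) reduces to a simple normalization of $`𝒩(\rho _V)`$. The sketch below makes this explicit with a toy window function standing in for the calculated galaxy-formation fraction of reference 5; it is illustrative only.

```python
import numpy as np

# rho_V in units of the present matter density; N_toy is a made-up stand-in
# for the galaxy-formation fraction calculated in reference 5.
rho = np.linspace(0.0, 20.0, 4001)

def N_toy(r):
    """Toy fraction of baryons ending up in galaxies, suppressed at large
    positive rho_V where the repulsion inhibits structure formation."""
    return np.exp(-(r / 3.0) ** 1.5)

weights = N_toy(rho)                      # flat prior: P_star = const
pdf = weights / np.trapz(weights, rho)    # normalized dP/drho_V, Eq. (1)
print("mean observed rho_V in this toy:", np.trapz(rho * pdf, rho))
```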
In an interesting recent article,<sup>8</sup> Garriga and Vilenkin have argued that this assumption (which they call “Weinberg’s conjecture”) is generally not valid. This raises the problem of characterizing those theories in which this assumption is valid and those in which it is not. It is shown in Section II that this assumption is in fact valid for a broad range of theories, in which the different regions are characterized by different values of a scalar field that couples only to itself and gravitation. The deciding factor is how we impose the flatness conditions on the scalar field potential that are needed to ensure that the vacuum energy is now nearly time-independent. If the potential is flat because the scalar field renormalization constant is very large, then the a priori probability distribution of the vacuum energy is essentially constant within the anthropically allowed range, for scalar potentials of generic form. It is also essentially constant for a large class of other potentials. Section III is a digression, showing that the same flatness conditions ensure that the vacuum energy has been roughly constant since the end of inflation. Section IV takes up the sharp peaks in the a priori probability found in theories of quantum cosmology and eternal inflation.

II. SLOWLY ROLLING SCALAR FIELD

One of the possibilities considered by Garriga and Vilenkin is a vacuum energy that depends on a homogeneous scalar field $`\varphi (t)`$ whose present value is governed by some smooth probability distribution. The vacuum energy is

$$\rho _V=V(\varphi )+\frac{1}{2}\dot{\varphi }^2,$$ (2)

and the scalar field time-dependence is given by

$$\ddot{\varphi }+3H\dot{\varphi }=-V^{\prime}(\varphi ),$$ (3)

where $`H(t)`$ is the Hubble fractional expansion rate, $`V(\varphi )`$ is the scalar field potential, dots denote derivatives with respect to time, and primes denote derivatives with respect to $`\varphi `$. Following Garriga and Vilenkin,<sup>8</sup> we assume that at present the scalar field energy appears like a cosmological constant because the field $`\varphi `$ is now nearly constant in time, and that this scalar field energy now dominates the cosmic energy density. For this to make sense it is necessary for the potential $`V(\varphi )`$ to satisfy certain flatness conditions. In the usual treatment of a slowly rolling scalar, one neglects the inertial term $`\ddot{\varphi }`$ in Eq. (3) as well as the kinetic energy term $`\dot{\varphi }^2/2`$ in Eq. (2). With the inertial term neglected, the condition that $`V(\varphi )`$ should change little in a Hubble time $`1/H`$ is that<sup>9</sup>

$$V^{\prime 2}(\varphi )\ll 3H^2|V(\varphi )|.$$ (4)

With the scalar field energy dominating the total cosmic energy density, the Friedmann equation gives

$$|V(\varphi )|\simeq \rho _V\simeq 3H^2/8\pi G,$$ (5)

so Eq. (4) requires

$$\left|V^{\prime}(\varphi )\right|\ll \sqrt{8\pi G}\rho _V.$$ (6)

(The kinetic energy term $`\dot{\varphi }^2/2`$ in Eq. (2) can be neglected under the slightly weaker condition

$$\left|V^{\prime}(\varphi )\right|\ll \sqrt{18H^2\left|V(\varphi )\right|}\simeq \sqrt{48\pi G}\rho _V,$$

which is the flatness condition given by Garriga and Vilenkin.) There is also a bound on the second derivative of the potential, needed in order for the inertial term to be neglected. With the scalar field energy dominating the total cosmic energy density, this condition requires that<sup>9</sup>

$$\left|V^{\prime \prime }(\varphi )\right|\ll 8\pi G\rho _V.$$ (7)

As Garriga and Vilenkin correctly pointed out, the smallness of the slope of $`V(\varphi )`$ means that $`\varphi `$ may vary appreciably even when $`\rho _V\simeq V(\varphi )`$ is restricted to the very narrow anthropically allowed range of values in which galaxy formation is non-negligible. They concluded that it would be possible for the a priori probability $`𝒫_{\star}(\rho _V)`$ to vary appreciably in this range. In particular, Garriga and Vilenkin assumed an a priori probability distribution for $`\varphi `$ that is constant in the anthropically allowed range, in which case the a priori probability distribution for $`\rho _V`$ is

$$𝒫_{\star}(\rho _V)\propto 1/|V^{\prime}(\varphi )|$$ (8)

which they said could vary appreciably in the anthropically allowed range.
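The flatness conditions (6) and (7) are easy to check numerically for any candidate potential. The sketch below does so, in units with $`8\pi G=1`$, for a first-type potential with $`f(x)=1-x^2`$ (anticipating Eq. (9) below); the values of $`V_1`$ and $`\lambda `$ are illustrative only.

```python
import numpy as np

# Units with 8*pi*G = 1.  First-type potential V = V1*(1 - (lam*phi)^2);
# V1 and lam are illustrative numbers only.
V1, lam = 1.0e8, 1.0e-12

dV = lambda phi: -2.0 * V1 * lam ** 2 * phi           # V'(phi)
d2V = lambda phi: -2.0 * V1 * lam ** 2 + 0.0 * phi    # V''(phi)

rho_V = 1.0                                # present vacuum energy, own units
phi0 = np.sqrt(1.0 - rho_V / V1) / lam     # chosen so V(phi0) = rho_V

print("condition (6):", abs(dV(phi0)) / rho_V, "<< 1 ?")
print("condition (7):", abs(d2V(phi0)) / rho_V, "<< 1 ?")
```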
Though possible, this rapid variation is by no means the generic case. As already mentioned, the second as well as the first derivative of the potential must be small, so that the a priori probability density (8) may change little in the anthropically allowed range. It all depends on how the flatness conditions are satisfied. There are two obvious ways that one might try to make the potential sufficiently flat. Potentials of the first type are of the general form

$$V(\varphi )=V_1f(\lambda \varphi ),$$ (9)

where $`V_1`$ is some large energy density, in the range of $`m_W^4`$ to $`G^{-2}`$; the constant $`\lambda `$ is very small; and $`f(x)`$ is some dimensionless function involving no very large or very small parameters. Potentials of the second type are of the general form

$$V(\varphi )=V_1\left[1-ϵg(\lambda \varphi )\right],$$ (10)

where $`V_1`$ is again some large energy density; $`\lambda `$ is here some fixed inverse mass, perhaps of order $`\sqrt{G}`$; now it is $`ϵ`$ instead of $`\lambda `$ that is very small; and $`g(x)`$ is some other dimensionless function involving no very large or very small parameters.

For potentials (9) of the first type, it is always possible to meet all observational conditions by taking $`\lambda `$ sufficiently small, provided that the function $`f(x)`$ has a simple zero at a point $`x=a`$ of order unity, with derivatives at $`a`$ of order unity. Because $`V_1`$ is so large, the present value of $`\lambda \varphi `$ must be very close to the assumed zero $`a`$ of $`f(x)`$. With $`f^{\prime}(a)`$ and $`f^{\prime \prime }(a)`$ of order unity, the flatness conditions (6) and (7) are both satisfied if

$$|\lambda |\ll \left(\frac{\rho _V}{V_1}\right)\sqrt{8\pi G}.$$ (11)

Galaxy formation is only possible for $`|V(\varphi )|`$ less than an upper bound $`V_{\mathrm{max}}`$ of the order of the mass density of the universe at the earliest time of galaxy formation,<sup>6</sup> which in the absence of fine tuning of the cosmological constant is very much less than $`V_1`$. The anthropically allowed range of $`\varphi `$ is therefore given by

$$\mathrm{\Delta }\varphi \equiv |\varphi -a/\lambda |_{\mathrm{max}}=\frac{V_{\mathrm{max}}}{|\lambda f^{\prime}(a)V_1|}.$$ (12)

The fractional change in the a priori probability density $`1/|V^{\prime}(\varphi )|`$ in this range is then

$$\left|\frac{V^{\prime \prime }(\varphi )\mathrm{\Delta }\varphi }{V^{\prime}(\varphi )}\right|=\left|\frac{V_{\mathrm{max}}}{V_1}\right|\left|\frac{f^{\prime \prime }(a)}{f^{\prime 2}(a)}\right|,$$ (13)

with no dependence on $`\lambda `$. As long as the factor $`f^{\prime \prime }(a)/f^{\prime 2}(a)`$ is roughly of order unity the fractional variation (13) in the a priori probability will be very small, as was assumed in references 4 and 5. This reasoning applies to potentials of the form

$$V(\varphi )=V_1\left[1-(\lambda \varphi )^n\right],$$

which, as already noted by Garriga and Vilenkin, lead to an a priori probability distribution that is nearly constant in the anthropically allowed range. (In this case $`a=1`$ and $`f^{\prime \prime }(a)/f^{\prime 2}(a)=(1-n)/n`$.)
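The factor $`f^{\prime \prime }(a)/f^{\prime 2}(a)`$ in Eq. (13) can be evaluated numerically for any candidate $`f`$. The sketch below checks the closed form $`(1-n)/n`$ quoted above for $`f(x)=1-x^n`$ by finite differences.

```python
def factor(f, a, h=1e-6):
    """f''(a)/f'(a)^2 by central finite differences."""
    fp = (f(a + h) - f(a - h)) / (2.0 * h)
    fpp = (f(a + h) - 2.0 * f(a) + f(a - h)) / h ** 2
    return fpp / fp ** 2

for n in (2, 3, 6):
    print(n, factor(lambda x: 1.0 - x ** n, 1.0), "vs", (1.0 - n) / n)
# the finite-difference value reproduces (1 - n)/n, which is of order unity,
# so the variation (13) is suppressed by the tiny ratio V_max/V1
```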
But this reasoning also applies to the “washboard potential” that was taken as a counterexample by Garriga and Vilenkin, which with no loss of generality can be put in the form:

$$V(\varphi )=V_1\left[1+\alpha \lambda \varphi +\beta \mathrm{sin}(\lambda \varphi )\right].$$

The zero point $`a`$ is here determined by the condition

$$1+\alpha a+\beta \mathrm{sin}a=0,$$

and the factor $`f^{\prime \prime }(a)/f^{\prime 2}(a)`$ in Eq. (13) is

$$\frac{f^{\prime \prime }(a)}{f^{\prime 2}(a)}=\frac{-\beta \mathrm{sin}a}{(\alpha +\beta \mathrm{cos}a)^2}.$$

If the flatness condition is satisfied by taking $`\lambda `$ small, with $`\alpha `$ and $`\beta `$ of order unity, as is assumed for potentials of the first kind, then the factor $`f^{\prime \prime }(a)/f^{\prime 2}(a)`$ in Eq. (13) is of order unity unless $`\alpha `$ and $`\beta `$ happen to be chosen so that

$$\left|1+\alpha \mathrm{cos}^{-1}\left(-\frac{\alpha }{\beta }\right)+\beta \sqrt{1-\frac{\alpha ^2}{\beta ^2}}\right|\ll 1.$$

Of course it would be possible to impose this condition on $`\alpha `$ and $`\beta `$, but this is the kind of fine-tuning that would be upset by adding a constant of order $`V_1`$ to the potential. Aside from this exception, for all $`\alpha `$ and $`\beta `$ of order unity the factor $`f^{\prime \prime }(a)/f^{\prime 2}(a)`$ is of order unity, so the washboard potential also yields an a priori probability distribution for the vacuum energy that is flat in the anthropically allowed range.

In contrast, for potentials (10) of the second kind the flatness conditions are not necessarily satisfied no matter how small we take $`ϵ`$. Because the present vacuum energy is much less than $`V_1`$, the present value of $`\varphi `$ must be very close to a value $`\varphi _ϵ`$, satisfying

$$g(\lambda \varphi _ϵ)=1/ϵ.$$ (14)

This requires $`\lambda \varphi _ϵ`$ to be near a singularity of the function $`g(x)`$, perhaps at infinity, so it is not clear in general that such a potential would have small derivatives at $`\lambda \varphi _ϵ`$ for any value of $`ϵ`$. For instance, for an exponential $`g(x)=\mathrm{exp}(x)`$ we have $`\varphi _ϵ=-\mathrm{ln}ϵ/\lambda `$, and $`V^{\prime}(\varphi _ϵ)`$ approaches an $`ϵ`$-independent value proportional to $`\lambda `$, which is not small unless we take $`\lambda `$ very small, in which case we have a potential of the first kind, for which as we have seen the a priori probability density (8) is flat in the anthropically allowed range.

The flatness conditions are satisfied for small $`ϵ`$ if $`g(x)`$ approaches a power $`x^n`$ for $`x\rightarrow \mathrm{\infty }`$. In this case $`\varphi _ϵ`$ goes as $`ϵ^{-1/n}`$, so $`V^{\prime}(\varphi _ϵ)`$ goes as $`ϵ^{1/n}`$ and $`V^{\prime \prime }(\varphi _ϵ)`$ goes as $`ϵ^{2/n}`$, both of which can be made as small as we like by taking $`ϵ`$ sufficiently small. In particular, if the singularity in $`g(x)`$ at $`x=\mathrm{\infty }`$ consists only of poles in $`1/x`$ of various orders up to $`n`$ (as is the case for a polynomial of order $`n`$) then the anthropically allowed range of $`\varphi `$ is

$$\left|\varphi -\varphi _ϵ\right|_{\mathrm{max}}\simeq \frac{V_m}{V_1ϵ|g^{\prime}(\varphi _ϵ)|}\propto ϵ^{-1/n}\left(\frac{V_m}{V_1}\right).$$ (15)

The flatness conditions make this range much greater than the Planck mass, but the fractional change in the a priori probability density (8) in this range is still very small

$$\left|\frac{V^{\prime \prime }(\varphi _ϵ)}{V^{\prime}(\varphi _ϵ)}\right|\left|\varphi -\varphi _ϵ\right|_{\mathrm{max}}\simeq \frac{V_m}{V_1}\ll 1.$$ (16)

To have a large fractional change in the a priori probability distribution in the anthropically allowed range for potentials of the second type that satisfy the flatness conditions, we need a function $`g(x)`$ that goes like a power as $`x\rightarrow \mathrm{\infty }`$, but has a more complicated singularity at $`x=\mathrm{\infty }`$ than just poles in $`1/x`$.
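The $`ϵ`$ scalings just quoted are straightforward to verify for the pure power case $`g(x)=x^n`$. The following sketch does so with illustrative numbers.

```python
V1, lam, n = 1.0, 1.0, 4      # illustrative; g(x) = x**n

for eps in (1e-4, 1e-8, 1e-12):
    x_eps = (1.0 / eps) ** (1.0 / n)                 # g(x_eps) = 1/eps
    phi_eps = x_eps / lam                            # grows like eps**(-1/n)
    dV = -V1 * eps * lam * n * x_eps ** (n - 1)      # V'(phi_eps)
    d2V = -V1 * eps * lam ** 2 * n * (n - 1) * x_eps ** (n - 2)  # V''
    print(f"eps={eps:.0e}: phi~{phi_eps:.1e}, V'~{dV:.1e}, V''~{d2V:.1e}")
# phi_eps ~ eps**(-1/4), V' ~ eps**(1/4), V'' ~ eps**(1/2), as stated
```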
An example is provided by the washboard potential with $`\alpha `$ and $`\beta `$ very small and $`\lambda `$ fixed, the case considered by Garriga and Vilenkin, for which $`g(x)`$ has an essential singularity at $`x=\mathrm{\infty }`$. In summary, the a priori probability is flat in the anthropically allowed range for several large classes of potentials, while it seems to be not flat only in exceptional cases.

It remains to consider whether the small parameters $`\lambda `$ or $`ϵ`$ in potentials respectively of the first or second kind could arise naturally. Garriga and Vilenkin argued that a term in a potential of what we have called the second kind with an over-all factor $`ϵ\ll 1`$ could be naturally produced by instanton effects. On the other hand, for potentials of type 1 a small parameter $`\lambda `$ could be naturally produced by the running of a field-renormalization factor. The field $`\varphi `$ has a conventional “canonical” normalization, as shown by the fact that the term $`\dot{\varphi }^2/2`$ in the vacuum energy (2) and the inertial term $`\ddot{\varphi }`$ in the field equation (3) have coefficients unity. Factors dependent on the ultraviolet cutoff will therefore be associated with external $`\varphi `$-lines. In order for the potential $`V(\varphi )`$ to be expressed in a cut-off independent way in terms of coupling parameters $`g_\mu `$ renormalized at a wave-number scale $`\mu `$, the field $`\varphi `$ must be accompanied with a field-renormalization factor $`Z_\mu ^{-1/2}`$, where $`Z_\mu `$ satisfies a differential equation of the form

$$\mu \frac{dZ_\mu }{d\mu }=\gamma (g_\mu )Z_\mu .$$ (17)

At very large distances, the field $`\varphi `$ will therefore be accompanied with a factor

$$\lambda =Z_0^{-1/2}=\mathrm{exp}\left\{\frac{1}{2}\int _0^\mu \frac{d\mu ^{\prime}}{\mu ^{\prime}}\gamma (g_{\mu ^{\prime}})\right\}Z_\mu ^{-1/2}.$$ (18)

The integral here only has to be reasonably large and negative in order for $`\lambda `$ to be extremely small.

III. SLOW ROLLING IN THE EARLY UNIVERSE

When the cosmic energy density is dominated by the vacuum energy, the flatness conditions (6) and (7) insure that the vacuum energy changes little in a Hubble time. But if the vacuum energy density is nearly time-independent, then from the end of inflation until nearly the present it must have been much smaller than the energy density of matter and radiation, and under these conditions we are not able to neglect the inertial term $`\ddot{\varphi }`$ in Eq. (3). A separate argument is needed to show that the vacuum energy is nearly constant at these early times. This is important because, although there is no observational reason to require $`V(\varphi )`$ to be constant at early times, it must have been less than the energy of radiation at the time of nucleosynthesis in order not to interfere with the successful prediction of light element abundances, and therefore at this time must have been very much less than $`V_1`$, which we have supposed to be at least of order $`m_W^4`$. For potentials (9) of the first kind, this means that $`\varphi `$ must have been very close to its present value at the time of helium synthesis. Also, if $`\varphi `$ at the end of inflation were not the same as $`\varphi `$ at the time of galaxy formation, then a flat a priori distribution for the first would not in general imply a flat a priori distribution for the second.
At times between the end of inflation and the recent past the expansion rate behaved as $`H=\eta /t`$, where $`\eta =2/3`$ or $`\eta =1/2`$ during the eras of matter or radiation dominance, respectively. During this period, Eq. (3) takes the form

$$\ddot{\varphi }+\frac{3\eta }{t}\dot{\varphi }=-V^{\prime}(\varphi ).$$ (19)

If we tentatively assume that $`\varphi `$ is nearly constant, then Eq. (19) gives for its rate of change

$$\dot{\varphi }\simeq -\frac{tV^{\prime}(\varphi )}{1+3\eta }.$$ (20)

The change in the vacuum energy from the end of inflation to the present time $`t_0`$ is therefore

$$\mathrm{\Delta }V\equiv \int _0^{t_0}V^{\prime}(\varphi )\dot{\varphi }𝑑t\simeq -\frac{V^{\prime 2}(\varphi )t_0^2}{2(1+3\eta )}.$$ (21)

The present time is roughly given by $`t_0\simeq \eta \sqrt{3/8\pi G\rho _{V0}}`$, so the fractional change in the vacuum energy density since the end of inflation is

$$\left|\frac{\mathrm{\Delta }V}{\rho _{V0}}\right|\simeq \left(\frac{3\eta ^2}{2(1+3\eta )}\right)\left(\frac{V^{\prime 2}(\varphi )}{8\pi G\rho _{V0}^2}\right),$$ (22)

a subscript zero as usual denoting the present instant. The factor $`3\eta ^2/2(1+3\eta )`$ is of order unity, so the inequality (6) tells us that the change in the vacuum energy during the time since inflation has indeed been much less than its present value.

IV. QUANTUM COSMOLOGY

In some theories of quantum cosmology the wave function of the universe is a superposition of terms, corresponding to universes with different (but time-independent) values for the vacuum energy $`\rho _V`$. It has been argued by Baum<sup>2</sup>, Hawking<sup>2</sup> and Coleman<sup>10</sup> that these terms are weighted with a $`\rho _V`$-dependent factor, that gives an a priori probability distribution with an infinite peak at $`\rho _V=0`$, but this claim has been challenged.<sup>11</sup> As already acknowledged in references 4 and 5, if this peak at $`\rho _V=0`$ is really present, then anthropic considerations are both inapplicable and unnecessary in solving the problem of the cosmological constant. Garriga and Vilenkin<sup>8</sup> have proposed a different sort of infinite peak, arising from a $`\rho _V`$-dependent rate of nucleation of sub-universes operating over an infinite time. Even granting the existence of such a peak, it is not clear that it really leaves a vanishing normalized probability distribution at all other values of $`\rho _V`$. For instance, the nucleation rate might depend on the population of sub-universes already present, in such a way that the peaks in the probability distribution are kept to a finite size. If $`𝒫_{\star}(\rho _V)=0`$ except at the peak, then anthropic considerations are irrelevant and the cosmological constant problem is as bad as ever, since there is no known reason why the peak should occur in the very narrow range of $`\rho _V`$ that is anthropically allowed. On the other hand, if there is a smooth background in addition to a peak outside the anthropically allowed range of $`\rho _V`$ then the peak is irrelevant, because no observers would ever measure such values of $`\rho _V`$. In this case the probability distribution of the cosmological constant can be calculated using the methods of references 4 and 5.

ACKNOWLEDGEMENTS

I am grateful for a useful correspondence with Alex Vilenkin. This research was supported in part by the Robert A. Welch Foundation and NSF Grant PHY-9511632.

REFERENCES

1. A. Vilenkin, Phys. Rev. D27, 2848 (1983); A. D. Linde, Phys. Lett. B175, 395 (1986).
2. E. Baum, Phys. Lett. B133, 185 (1984); S. W. Hawking, in Shelter Island II - Proceedings of the 1983 Shelter Island Conference on Quantum Field Theory and the Fundamental Problems of Physics, ed. R. Jackiw et al. (MIT Press, Cambridge, 1995); Phys. Lett. B134, 403 (1984); S. Coleman, Nucl. Phys. B 307, 867 (1988).
3. An equation of this type was given by A. Vilenkin, Phys. Rev. Lett. 74, 846 (1995); and in Cosmological Constant and the Evolution of the Universe, K. Sato, et al., ed. (Universal Academy Press, Tokyo, 1996) (gr-qc/9512031), but it was not used in a calculation of the mean value or probability distribution of $`\rho _V`$.
4. S. Weinberg, in Critical Dialogs in Cosmology, ed. by N. Turok (World Scientific, Singapore, 1997).
5. H. Martel, P. Shapiro, and S. Weinberg, Ap. J. 492, 29 (1998).
6. S. Weinberg, Phys. Rev. Lett. 59, 2607 (1987).
7. J. D. Barrow and F. J. Tipler, The Anthropic Cosmological Principle (Clarendon Press, Oxford, 1986).
8. J. Garriga and A. Vilenkin, Tufts University preprint astro-ph/9908115, to be published.
9. P. J. Steinhardt and M. S. Turner, Phys. Rev. D29, 2162 (1984).
10. S. Coleman, Nucl. Phys. B 310, 643 (1988).
11. W. Fischler, I. Klebanov, J. Polchinski, and L. Susskind, Nucl. Phys. B237, 157 (1989).
## 1

Bohm’s theory<sup>2</sup> (This is the theory described by David Bohm in 1952. Bohm’s most complete description of his theory is found in Bohm and Hiley (1993).) has become increasingly popular as a nonrelativistic solution to the quantum measurement problem. It makes the same empirical predictions for the statistical distribution of particle configurations as the standard von Neumann-Dirac collapse formulation of quantum mechanics whenever the latter makes unambiguous predictions. Bohm’s theory also treats measuring devices exactly the same way it treats other physical systems. The quantum-mechanical state of a system always evolves in the usual linear, deterministic way, so one does not encounter the problems that arise in collapse formulations of quantum mechanics when one tries to stipulate the conditions under which a collapse occurs. And Bohm’s theory does not require one to postulate branching worlds or disembodied minds or any of the other extravagant assumptions that often accompany no-collapse formulations of quantum mechanics.

While Bohm’s theory avoids many of the problems associated with other formulations of quantum mechanics, it does have its own problems. One problem, it has been argued, is that the particle trajectories it predicts are not the real particle trajectories. This is the surreal trajectories problem. If Bohm’s theory does in fact make false predictions concerning particle trajectories, then this is presumably a serious problem. I will argue, however, that there is no reason to suppose that Bohm’s theory makes false predictions concerning the trajectories of particles. Indeed, I will argue that a good position measuring device need never be mistaken concerning the actual position of a particle at the moment that the particle’s position is in fact recorded. While surreal trajectories are not a problem for Bohm’s theory, the way that it accounts for the results of the surreal trajectories experiments reveals the sense in which it is fundamentally incompatible with relativity, and this is a problem.

## 2

On Bohm’s theory the quantum-mechanical state $`\psi `$ evolves in the usual linear, deterministic way, but one supposes that every particle always has a determinate position and follows a continuous, deterministic trajectory. The motion of a particular particle typically depends on the evolution of $`\psi `$ and the positions of other (perhaps distant) particles. The particle motion is described by an auxiliary dynamics, a dynamics that supplements the usual linear quantum dynamics. In its simplest form, what one might call the minimal version (the version of the theory described by Bell 1987, 127), Bohm’s theory is characterized by the following basic principles:

1. State Description: The complete physical state at a time is given by the wave function $`\psi `$ and the determinate particle configuration $`Q`$.
2. Wave Dynamics: The time evolution of the wave function is given by the usual linear dynamics. In the simplest case, this is just Schrödinger’s equation
$$i\mathrm{}\frac{\partial \psi }{\partial t}=\widehat{H}\psi $$ (1)
More generally, one uses the form of the linear dynamics appropriate to one’s application (as in the spin examples discussed below).
3. Particle Dynamics: The particles move according to
$$\frac{dQ_k}{dt}=\frac{\mathrm{}}{m_k}\frac{\text{Im}(\psi ^{*}\nabla _k\psi )}{\psi ^{*}\psi }\text{ evaluated at }Q$$ (2)
where $`m_k`$ is the mass of particle $`k`$ and $`Q`$ is the current particle configuration.
4. Distribution Postulate: There is a time $`t_0`$ when the epistemic probability density for the configuration $`Q`$ is given by $`\rho (Q,t_0)=|\psi (Q,t_0)|^2`$.

If there are $`N`$ particles, then $`\psi `$ is a function in $`3N`$-dimensional configuration space (three dimensions for the position of each particle), and the current particle configuration $`Q`$ is represented by a single point in configuration space (in configuration space a single point gives the position of every particle). Again, each particle moves in a way that depends on its position, the evolution of the wave function, and the positions of the other particles. Concerning how one should think of the role of the wave function in Bohm’s theory, John Bell once said that “no one can understand this theory until he is willing to think of $`\psi `$ as a real objective field rather than just a ‘probability amplitude.’ Even though it propagates not in $`3`$-space but in $`3N`$-space” (1987, 128). While the ontology suggested by Bell here is at best puzzling, the practical idea behind it is a good one: The best way to picture what the particle dynamics does is to picture the point representing the $`N`$-particle configuration being carried along by the probability currents generated by the linear evolution of the wave function $`\psi `$ in configuration space. Once one has this picture firmly in mind one will understand how Bohm’s theory accounts for quantum-mechanical correlations in the context of the surreal-trajectory experiments and the sense in which the theory is fundamentally incompatible with relativity.

Since the total particle configuration can be thought of as being pushed around by the probability current in configuration space, the probability of the particle configuration being found in a particular region of configuration space changes as the integral of $`|\psi |^2`$ over that region changes. More specifically, the continuity equation

$$\frac{\partial \rho }{\partial t}+\text{div}(\rho v^\psi )=0$$ (3)

is satisfied by the probability density $`\rho =|\psi |^2`$. And this means that if the epistemic probability density for the particle configuration is ever $`|\psi |^2`$, then it will always be $`|\psi |^2`$, unless one makes an observation. That is, if one starts with an epistemic probability density of $`\rho (t_0)=|\psi (t_0)|^2`$, then, given the dynamics, one should update this probability density at time $`t`$ so that $`\rho (t)=|\psi (t)|^2`$. And if one makes an observation, then the epistemic probability density will be given by the system’s effective wave function, the component (in the configuration space representation) of the total wave function that is in fact responsible for the post-measurement time evolution of the system’s configuration.<sup>3</sup> (See Dürr, D., S. Goldstein, and N. Zanghí (1993) for a discussion of the equivariance of the statistical distribution $`\rho `$ and the notion of the effective wave function in Bohm’s theory.) The upshot is that if the distribution postulate is ever satisfied, then the most that one can learn from a measurement is the wave packet that the current particle configuration is associated with and the epistemic probability distribution for the actual configuration over this packet. This is why Bohm’s theory makes the same statistical predictions for particle configurations as the standard collapse formulation of quantum mechanics.
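The guidance dynamics is easy to simulate. The sketch below, a toy one-dimensional example with ħ = m = 1, integrates Eq. (2) for a free spreading Gaussian packet and checks equivariance: a particle starting at x₀ ends at x₀σ(T)/σ₀, keeping its quantile of |ψ|². The closed-form packet, the crude integrator, and the step size are assumptions of the sketch, not part of the theory.

```python
import numpy as np

sigma0 = 1.0   # initial packet width; hbar = m = 1 throughout

def psi(x, t):
    """Free spreading Gaussian packet (standard closed form)."""
    a = 1.0 + 1j * t / (2.0 * sigma0 ** 2)
    return ((2.0 * np.pi * sigma0 ** 2) ** -0.25 * a ** -0.5
            * np.exp(-x ** 2 / (4.0 * sigma0 ** 2 * a)))

def velocity(x, t, h=1e-5):
    """Guidance equation (2): v = Im(psi* dpsi/dx) / |psi|^2."""
    p = psi(x, t)
    dp = (psi(x + h, t) - psi(x - h, t)) / (2.0 * h)
    return (np.conj(p) * dp).imag / abs(p) ** 2

dt, T = 0.01, 5.0
for x0 in (-1.0, 0.5, 2.0):
    x = x0
    for t in np.arange(0.0, T, dt):
        x += dt * velocity(x, t + 0.5 * dt)   # Euler step, mid-interval time
    print(f"x0={x0:+.1f}: numeric {x:+.4f}, "
          f"analytic {x0 * np.sqrt(1.0 + T**2 / (4.0 * sigma0**4)):+.4f}")
```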
While it makes the same statistical predictions as the standard formulation of quantum mechanics, Bohm’s theory is deterministic. More specifically, given the energy properties of a simple closed system, the complete physical state at any time (the wave function and the particle configuration) fully determines the physical state at all other times.<sup>4</sup> (We will say that a closed system is simple if the Hamiltonian is bounded and if the particle configuration always has positive wave function support.) It follows that, given a particular evolution of the wave function, possible trajectories for the configuration of a system can never cross at a time in configuration space. And this feature of Bohm’s theory will prove important later.

Another feature of Bohm’s theory that will prove important later is the special role played by position in accounting for our determinate measurement results. In order for Bohm’s theory to explain why we get the determinate measurement records that we do (which is presumably a precondition for it counting as a solution to the measurement problem), one must suppose, as a basic interpretational principle, that, given the usual quantum mechanical state, making particle positions determinate provides determinate measurement records. Since particle positions are always determinate on Bohm’s theory, this would guarantee determinate measurement records. And, at least on the minimal version of Bohm’s theory, position is the only determinate, noncontextual property that could serve to provide determinate measurement records.<sup>5</sup> (There is a sense in which one might say that some dynamical properties, like momentum, are also noncontextual in Bohm’s theory, but, as we will see, the noncontextual momentum is not the measured momentum.)

The distinction between noncontextual and contextual properties deserves some explanation. Whether a system is found to have a particular contextual property or not typically depends on how one measures the property: one might get the result “Yes” if the contextual property is measured one way and “No” if it is measured another. Consequently, contextual properties are not intrinsic properties of the system to which they are typically ascribed. One might say that contextual properties serve to prop up our talk of those properties that we are used to talking about but which arguably should not count as properties at all in Bohm’s theory. While the language of contextual properties provides a convenient (but often misleading!) way of comparing the predictions of Bohm’s theory with the predictions of other physical theories, the predictions of Bohm’s theory are always ultimately just predictions about the evolution of the wavefunction and the positions of the particles relative to the wavefunction.

The upshot of all this is just that position relative to the wave function, or more precisely configuration relative to the wave function, is ultimately the only property that one can appeal to in the minimal version of Bohm’s theory to explain how it is that we end up with the determinate measurement records we do. And this means that for an interaction to count as a measurement, it must produce a record in terms of the position of something—it must correlate the position of something with some aspect of the quantum-mechanical state of the system being measured. So in order to explain our determinate measurement records on Bohm’s theory one must suppose that all measurement records are ultimately position records, records represented in the relationship between the particle configuration and the wave function. And since Bohm’s theory predicts the right quantum statistics for particle positions relative to the wave function, it predicts the right quantum statistics for our measurement records.<sup>6</sup> (The point here is that making the right statistical predictions concerning particle configurations is not necessarily a sufficient condition for making the right statistical predictions for our measurement records. One needs to make an extra assumption about the relationship between particle configurations and measurement records.)
And since Bohm’s theory predicts the right quantum statistics for particle positions relative to the wave function, it predicts the right quantum statistics for our measurement records.<sup>6</sup><sup>6</sup>6The point here is that making the right statistical predictions concerning particle configurations is not necessarily a sufficient condition for making the right statistical predictions for our measurement records. One needs to make an extra assumption about the relationship between particle configurations and measurement records. ## 3 In their 1992 paper Englert, Scully, Süssman, and Walther (ESSW) argued that the trajectories predicted by Bohm’s theory are not the real trajectories followed by particles, but rather are “surreal”. The worry is that the observed trajectories of particles are not the trajectories that the particles actually follow in Bohm’s theory. And if our observations are reliable and if Bohm’s theory predicts the wrong particle trajectories, then this is presumably a problem for the theory. ESSW describe the surreal trajectories problem in the context of a two-path, delayed-choice interference experiment. John Bell (1980, reprinted in 1987) was perhaps the first to consider such an experiment in the context of Bohm’s theory. Consider an experiment where a spin-$`1/2`$ particle $`P`$ starts at region $`S`$ in a $`z`$-spin up eigenstate, has its wave packet split into an $`x`$-spin up component that travels from $`A`$ to $`A^{}`$ and an $`x`$-spin down component that travels from $`B`$ to $`B^{}`$.<sup>7</sup><sup>7</sup>7Since the standard line is that position is the only observable physical quantity in Bohm’s theory, this does not mean that $`P`$ has a determinate $`z`$-spin; rather, it is just a description of the spin index associated with $`P`$’s effective wave function. \[Figure 1: Crossing-Paths Experiment\] The wave function evolves as follows: Initial state: $$|_z_P|S_P=1/\sqrt{2}(|_x_P+|_x_P)|S_P$$ (4) After the initial wave packet splits: $$1/\sqrt{2}(|_x_P|A_P+|_x_P|B_P)$$ (5) Final state: $$1/\sqrt{2}(|_x_P|A^{}_P+|_x_P|B^{}_P)$$ (6) Bell explained that if one measures the properties of $`P`$ in region $`I`$, then one would observe interference phenomena (in this experiment, for example, one would observe $`z`$-spin up with probability one). The observation of interference phenomena is usually taken to entail that $`P`$ could not have followed path $`A`$ and could not have followed path $`B`$ since, in either case, the probably of observing $`z`$-spin up in $`I`$ would presumably be $`1/2`$ (as predicted by the standard collapse formulation of quantum mechanics). In such a situation, one would say, on the standard view, that $`P`$ followed a superposition of the two trajectories (which, on the standard interpretation of states is supposed to be neither one nor the other nor both trajectories). But according to Bohm’s theory, $`P`$ determinately follows one or the other of the two trajectories: that is, it either determinately follows $`A`$ or it determinately follows $`B`$. On Bohm’s theory, one might say that the interference effects that one observes at $`I`$ are the result of the wave function following both paths. If we do not observe the particle in region $`I`$, then $`P`$ will arrive at one of the two detectors to the right of the interference region: either the one at $`A^{}`$ or the one at $`B^{}`$. 
If it arrives at $`A^{\prime}`$, then one might suppose that the particle traveled path $`A`$; and if it arrives at $`B^{\prime}`$, then one might suppose that it traveled path $`B`$. But such inferences do not work in the standard collapse formulation of quantum mechanics, where (according to the standard eigenvalue-eigenstate link) under these circumstances $`P`$ would have traveled a superposition of the two paths. And such inferences do not work in Bohm’s theory either, but for a very different reason. In Bohm’s theory the particle really does travel one or the other of the two paths, it is just that its trajectory is not what one might at first expect.

In figuring out what trajectory Bohm’s theory predicts, the first thing to note is that, by symmetry, the probability current across the line $`L`$ is always zero.<sup>8</sup> (See Philippidis, Dewdney, and Hiley (1979) for an explicit calculation of the trajectories for a similar experiment. The explicit calculations, of course, show that possible particle trajectories never cross $`L`$. Bell cites this paper at the end of his 1980 paper on delayed-choice experiments in Bohm’s theory.) This means that if $`P`$ starts in the top half of the initial wave packet, then it must move from $`S`$ to $`A`$ to $`I`$ to $`B^{\prime}`$; and if it starts in the bottom half of the initial wave packet, then it must move from $`S`$ to $`B`$ to $`I`$ to $`A^{\prime}`$. That is, whichever path $`P`$ takes, Bohm’s theory predicts that it will bounce when it gets to region $`I`$—in order to follow either trajectory, the particle $`P`$ must accelerate in the field-free region $`I`$. Concerning this odd bouncing behavior Bell says that “it is vital here to put away the classical prejudice that a particle moves in a straight path in ‘field free’ space” (1987, 113).

But certainly, one might object, this is more than a prejudice. After all, this particle bouncing nonsense is a direct violation of the conservation of momentum, and we have very good empirical reasons for supposing that momentum is conserved. Isn’t this alone reason enough to dismiss Bohm’s theory? Put another way, whenever we observe which path the particle in fact travels, if we find it at $`A^{\prime}`$, then we also observed it traveling path $`A`$ and if we find it at $`B^{\prime}`$, then we also observed it traveling path $`B`$. That is, whenever we make the appropriate observations, we never observe the crazy bouncing behavior (or any of the other violations of the conservation of momentum) predicted by Bohmian mechanics. This puzzling situation is the basis for ESSW’s surreal trajectories argument. The argument goes something like this:

Assumption 1 (explicit): Our experimental measurement records tell us that in a two-path interference experiment like that described above each particle either travels from $`A`$ to $`A^{\prime}`$ or from $`B`$ to $`B^{\prime}`$; that is, they never bounce.

Assumption 2 (implicit): Our measurement records reliably tell us where a particle is at the moment the record is made.

Assumption 3 (implicit): One can record which path a particular particle takes without breaking the symmetry in the probability currents that prevents the particle from crossing the line $`L`$.
Conclusion: The trajectory predicted by Bohm’s theory, where the particle bounces, cannot be the particle’s actual trajectory; that is, Bohm trajectories are not real, they are “surreal.” And if the trajectories predicted by Bohm’s theory are not the actual particle trajectories, then Bohm’s theory is false, and this constitutes very good grounds for rejecting it.

Dürr, Fusseder, Goldstein, and Zanghí (DFGZ) immediately responded to defend Bohm’s theory against the surreal trajectories argument:

> In a recent paper \[ESSW (1992)\] it is argued that despite its many virtues—its clarity and simplicity, both conceptual and physical, and the fact that it resolves the notorious conceptual difficulties which plague orthodox quantum theory—BM \[Bohmian mechanics\] itself suffers from a fatal flaw: the trajectories that it defines are “surrealistic”. It must be admitted that this is an intriguing claim, though an open minded advocate of quantum orthodoxy would presumably have preferred the clearer and stronger claim that BM is incompatible with the predictions of quantum theory, so that, despite its virtues, it would not in fact provide an explanation of quantum phenomena. The authors are, however, aware that such a claim would be false. (1993, 1261)

And since Bohm’s theory makes the same predictions as the standard theory of quantum mechanics, DFGZ argue that ESSW cannot possibly provide, as ESSW describe it, “an experimentum crucis which, according to our quantum theoretic prediction, will clearly demonstrate that the reality attributed to Bohm trajectories is rather metaphysical than physical.” And with this DFGZ dismiss ESSW’s argument against Bohmian mechanics:

> On the principle that the suggestions of scientists who propose pointless experiments cannot be relied upon with absolute confidence, with this proposal the \[ESSW\] paper self-destructs: The authors readily agree that the “quantum theoretical predictions” are also the predictions of BM. Thus they should recognize that the \[experimental\] outcome on the basis of which they hope to discredit BM is precisely the outcome predicted by BM. Under the circumstances it would appear prudent for the funding agencies to save their money! (1261)

DFGZ conclude their defense of Bohm’s theory by making a point about the theory-ladenness of talk of particle trajectories and a point about the theory-ladenness of observation itself. But we will return to these two (important) points later, when we have the conceptual tools at hand to make sense of them (Section 5).

In their reply to DFGZ’s comment, ESSW want to make it perfectly clear that they did not anywhere concede that Bohm’s theory had “many virtues” nor did they admit that the orthodox formulation of quantum mechanics was “plagued by notorious conceptual difficulties.” But, for their part, ESSW do seem to concede, as DFGZ insisted, that Bohmian mechanics makes the same empirical predictions as standard quantum mechanics: “Nowhere did we claim that BM makes predictions that differ from those of standard quantum mechanics” (1263).<sup>9</sup> (But ESSW later make the following argument in favor of actually funding the surreal trajectories experiments that they describe: “Funding agencies were and are well advised to support experiments that have probed or would probe the ‘surprises’ of quantum theory. Imagine the (farfetched) situation that the experimenter finds the photon always in the resonator through which the Bohm trajectory passes rather than the one predicted by quantum theory.
Wouldn’t that please the advocates of BM?” (1263–4). This is, of course, very puzzling talk indeed once ESSW concede that Bohmian mechanics makes the same empirical predictions as the standard theory—a proponent of Bohm’s theory would most certainly not be pleased if experiments showed that the standard quantum-mechanical predictions were false, because this would mean that Bohm’s theory was itself false! When ESSW say things like this, it is easy to understand DFGZ’s frustration. Rather than argue that Bohm’s theory made the wrong empirical predictions, ESSW claim that the purpose of their original paper was “to show clearly that the interpretation of the Bohm trajectory—as the real retrodicted history of the [test particle that travels through the interferometer]—is implausible, because this trajectory can be macroscopically at variance with the detected, actual way through the interferometer” (1263). This last clause identifies the detected path with the actual path traveled by the test particle. This is their (implicit) assumption that particle detectors would be reliable (in a perfectly straightforward way) in the delayed-choice interference experiments that they discuss. ESSW conclude, “Irrespective of what can be said in addition, we think that we have done a useful job in demonstrating just how artificial the Bohm trajectories can be” (1264). Again, ESSW’s claim is not that Bohm’s theory makes the wrong empirical predictions, nor is it that the theory is somehow logically inconsistent; rather, they argue (on the implicit assumption that our particle detectors reliably tell us where particles are) that Bohm’s theory makes the wrong predictions for the actual motions of particles—that the predicted particle trajectories are “artificial,” “metaphysical,” and, at best, “implausible.” While I agree with DFGZ that surreal trajectories are not something that a proponent of Bohm’s theory should worry about, the full story is a bit more involved than the sketch given in their comment on ESSW’s paper. In order to get everything straight, let’s return to Bell’s original analysis of the delayed-choice interference experiment in the context of Bohm’s theory. ## 4 Bell’s analysis of the delayed-choice interference experiment provides a good first step in explaining why conservation-of-momentum-violating “surreal” trajectories do not pose a problem for Bohm’s theory. While Bohm’s theory does indeed predict that momentum (in the usual sense) is not conserved in experiments like that described above, Bell explained why one would never detect violations of the conservation of momentum. The short story is this: while the actual momentum (mass times particle velocity) is typically not conserved in Bohm’s theory, the measured momentum (as expressed by the results of what one would ordinarily take to be momentum measurements) is always conserved. In order to detect a momentum-violating bounce in an experiment like that described above, one would have to perform two measurements: one to show which path the particle travels ($`A`$ or $`B`$) and another to show where the particle ends up ($`A^{\prime}`$ or $`B^{\prime}`$). One might then try to show that a particle that travels path $`A`$, say, ends up at $`B^{\prime}`$, and thus violates the conservation of momentum. But one will never observe such a bounce in Bohm’s theory because measuring which path the particle follows will destroy the symmetry in the probability currents that generates the bounce.
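The bounce itself is easy to exhibit numerically. The sketch below is mine, not Bell’s or ESSW’s: it integrates the Bohmian guidance equation $`dx/dt=\mathrm{Im}(\psi ^{\prime}/\psi )`$ for a one-dimensional analogue of the experiment, a superposition of two free Gaussian packets approaching each other. It assumes units $`\mathrm{}=m=1`$, the standard analytic evolution of a free Gaussian packet, and arbitrary illustrative parameter values.

```python
import numpy as np

# Two free Gaussian packets approach each other; by symmetry the
# probability current through x = 0 vanishes at all times, so Bohmian
# trajectories never cross the midline: they "bounce".

sigma0, v0, d = 1.0, 4.0, 10.0   # packet width, speed, initial offset

def packet(x, t, a, v):
    """Free Gaussian packet centred at a + v t (standard analytic evolution)."""
    st = sigma0 * (1.0 + 1j * t / (2.0 * sigma0**2))
    psi = (2.0 * np.pi * st**2) ** (-0.25) * np.exp(
        -(x - a - v * t) ** 2 / (4.0 * sigma0 * st)
        + 1j * (v * (x - a) - 0.5 * v**2 * t))
    dpsi = psi * (-(x - a - v * t) / (2.0 * sigma0 * st) + 1j * v)
    return psi, dpsi

def velocity(x, t):
    """Bohmian guidance velocity Im(psi'/psi) for the superposition."""
    pA, dA = packet(x, t, -d, +v0)   # packet from the 'A' side
    pB, dB = packet(x, t, +d, -v0)   # packet from the 'B' side
    return np.imag((dA + dB) / (pA + pB))

# Integrate trajectories with RK4 for a few starting points in each packet.
dt, nsteps = 0.001, 6000
starts = np.array([-11.0, -10.0, -9.0, 9.0, 10.0, 11.0])
xs = starts.copy()
for n in range(nsteps):
    t = n * dt
    k1 = velocity(xs, t)
    k2 = velocity(xs + 0.5 * dt * k1, t + 0.5 * dt)
    k3 = velocity(xs + 0.5 * dt * k2, t + 0.5 * dt)
    k4 = velocity(xs + dt * k3, t + dt)
    xs += dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

for x0, xf in zip(starts, xs):
    print(f"start {x0:+6.1f} -> end {xf:+6.1f}")   # the sign never changes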
That is, particles only exhibit their crazy bouncing behavior in Bohm’s theory when no one is looking! Suppose (following Bell 1980) that one puts a detector on path $`B`$ designed to correlate the position of a flag with the position of the test particle $`P`$ (see figure 2). More specifically, consider a single flag particle $`F`$ whose position (as represented by the quantum-mechanical state) gets correlated with the position of $`P`$ as follows: (1) if $`P`$ is in an eigenstate of traveling path $`A`$, then $`F`$ remains in an eigenstate of pointing at “No”; and (2) if $`P`$ is in an eigenstate of traveling path $`B`$, then $`F`$ ends up in an eigenstate of pointing at “Yes”. That is, the detector is designed so that the position of $`F`$ will record the path taken by $`P`$. [Figure 2: Experiment where $`F`$’s position is correlated with the position of $`P`$] While this experiment may look like the earlier one, introducing such a detector requires one to tell a very different story than the one told without the detector. Given the nature of the interaction between $`P`$ and $`F`$ and the linearity of the dynamics, if $`P`$ begins in the $`z`$-spin up state (a superposition of $`x`$-spin eigenstates), then the effective wave function of the composite system would evolve as follows: Initial state: $$|\uparrow _z\rangle _P|S\rangle _P|\text{“No”}\rangle _F=|S\rangle _P|\text{“No”}\rangle _F\,1/\sqrt{2}\,(|\uparrow _x\rangle _P+|\downarrow _x\rangle _P)$$ (7) $`P`$’s wave packet splits: $$|\text{“No”}\rangle _F\,1/\sqrt{2}\,(|\uparrow _x\rangle _P|A\rangle _P+|\downarrow _x\rangle _P|B\rangle _P)$$ (8) $`F`$’s position is correlated with the position of $`P`$: $$1/\sqrt{2}\,(|\uparrow _x\rangle _P|A\rangle _P|\text{“No”}\rangle _F+|\downarrow _x\rangle _P|B\rangle _P|\text{“Yes”}\rangle _F)$$ (9) The two wave packets appear to pass through each other in region $`I`$ (but they miss each other in configuration space!): $$1/\sqrt{2}\,(|\uparrow _x\rangle _P|I\rangle _P|\text{“No”}\rangle _F+|\downarrow _x\rangle _P|I\rangle _P|\text{“Yes”}\rangle _F)$$ (10) Final state: $$1/\sqrt{2}\,(|\uparrow _x\rangle _P|A^{\prime}\rangle _P|\text{“No”}\rangle _F+|\downarrow _x\rangle _P|B^{\prime}\rangle _P|\text{“Yes”}\rangle _F)$$ (11) Note that the position of $`F`$ does in fact reliably record where $`P`$ was when the position record was made. Because the wave packets associated with the two possible positions for $`F`$ do not overlap in configuration space, the position correlation between $`P`$ and $`F`$ destroys the symmetry that prevents $`P`$ from crossing $`L`$. While the two wave packets both appear to pass through region $`I`$ at the same time, they in fact miss each other in configuration space. In order to see how $`P`$ and $`F`$ move, consider the evolution of the wave function and the two-particle configuration in configuration space. [Figure 3: The Last Experiment in Configuration Space] If the two-particle configuration starts in the top half of the initial wave packet (as represented in Figure 3), then $`P`$ would move from $`S`$ to $`A`$ to $`I`$ to $`A^{\prime}`$ and $`F`$ would stay at “No”. If the configuration starts in the bottom half of the initial wave packet, then $`P`$ would move from $`S`$ to $`B`$, then $`F`$ would move to “Yes”, then $`P`$ would move from $`B`$ to $`I`$ to $`B^{\prime}`$. That is, regardless of where $`P`$ starts, it will pass through the region $`I`$ without bouncing. Moreover, $`F`$ will record that $`P`$ was on path $`A`$ if and only if $`P`$ ends up at $`A^{\prime}`$, and that $`P`$ was on path $`B`$ if and only if $`P`$ ends up at $`B^{\prime}`$. That is, if one makes a determinate record of $`P`$’s position before $`P`$ gets to $`I`$, then $`P`$ will follow a perfectly natural trajectory, and the record will be reliable. Again, recording the position of $`P`$ destroys the symmetry that prevents $`P`$ from crossing $`L`$.
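The same toy model can be extended to show how the flag breaks the symmetry. In the sketch below the flag is a single configuration-space coordinate $`y`$, and (an assumption made only for simplicity) it is correlated with the two packets from the outset rather than by a mid-flight interaction, which captures the configuration-space geometry of Bell’s story but not its timing; the integration is a crude fixed-step Euler scheme, adequate for illustration.

```python
import numpy as np

# Extend the previous sketch with a flag coordinate y:
#   Psi(x, y, t) = psi_A(x, t) chi(y; "No") + psi_B(x, t) chi(y; "Yes").
# When the flag packets are well separated in y, the wave packets miss each
# other in configuration space and the x-trajectory crosses the midline.

sigma0, v0, d, wy = 1.0, 4.0, 10.0, 0.5

def packet(x, t, a, v):
    st = sigma0 * (1.0 + 1j * t / (2.0 * sigma0**2))
    psi = (2.0 * np.pi * st**2) ** (-0.25) * np.exp(
        -(x - a - v * t) ** 2 / (4.0 * sigma0 * st)
        + 1j * (v * (x - a) - 0.5 * v**2 * t))
    return psi, psi * (-(x - a - v * t) / (2.0 * sigma0 * st) + 1j * v)

def chi(y, b):
    """Real (hence static) flag packet centred on pointer position b."""
    g = np.exp(-(y - b) ** 2 / (4.0 * wy**2))
    return g, g * (-(y - b) / (2.0 * wy**2))

def final_x(sep):
    """Particle starts in the A packet; sep is the 'No'-to-'Yes' distance."""
    x, y, dt = -10.0, 0.0, 0.0005
    for n in range(12000):
        t = n * dt
        pA, dA = packet(x, t, -d, +v0)
        pB, dB = packet(x, t, +d, -v0)
        cN, dN = chi(y, 0.0)
        cY, dY = chi(y, sep)
        Psi = pA * cN + pB * cY
        x += dt * np.imag((dA * cN + dB * cY) / Psi)
        y += dt * np.imag((pA * dN + pB * dY) / Psi)
    return x

print("no record (sep = 0):       x ends at %+.1f" % final_x(0.0))   # bounces
print("position record (sep = 10): x ends at %+.1f" % final_x(10.0)) # crosses
```

With zero separation the two terms overlap everywhere in $`y`$ and the bounce of the previous sketch returns; with a large separation the terms miss each other in configuration space and the trajectory sails straight through, exactly the contrast the story above describes.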
This experiment illustrates why a measurement record is reliable in Bohm’s theory whenever there is a strong correlation between the position of the system being observed and the position of the recording system. And since all measurements are ultimately position measurements on the minimal Bohm’s theory, one might simply conclude that all determinate records produced by strong correlations are reliable in Bohm’s theory and dismiss the surreal-trajectories problem as a problem that was solved by Bell before it was even posed by ESSW. This is not such a bad conclusion, but the right thing to say about surreal trajectories is slightly more subtle. Note that in order to tell a story like the one above, one must record the path taken by the test particle in terms of the position of something. Here the record is in terms of the position of the flag particle $`F`$. It is this position correlation that breaks the symmetry in the probability currents, which then allows the test particle $`P`$ to follow a momentum-conserving trajectory. All it takes is a strong position correlation with even a single particle. And it is this that makes the final position record reliable.<sup>10</sup>10Bell explained that a good measurement record must make a macroscopic difference. He emphasized that a discharged detector is macroscopically different from an undischarged detector. This is also something emphasized by DFGZ (1993, 1262) in order to argue that one would not expect one of ESSW’s detectors to generate a sensible record of which path the test particle followed “until an interaction with a suitable macroscopic device occurs.” But note that all that really matters here is that the wave packets that correspond to different measurement outcomes (in terms of the position of $`F`$) be well-separated in configuration space in the $`F`$-position-record direction. This is not to say that Bell’s (and DFGZ’s) point concerning macroscopic differences is irrelevant. If the flag is a macroscopic system that makes a macroscopic movement, then this will obviously help to provide the wave packet separation required for a reliable record. But while a macroscopic position correlation with a macroscopic system is sufficient, it is not a necessary condition for generating a reliable record in Bohm’s theory. See Aharonov and Vaidman (1996) for a discussion of partial measurements in Bohm’s theory, measurements where the separation between the post-measurement wave packets in configuration space is incomplete. So what would happen if one tried to record the position of $`P`$ in terms of some physical property other than position? This is something that is important to our making sense of the history of the surreal trajectories problem, but it is something that Bell did not consider. ## 5 In order to avoid Bell’s (preemptive) dissolution of the surreal trajectories problem ESSW must have had in mind a different sort of which-path detector than the one considered by Bell. Indeed, the experiments that ESSW describe in their 1992 paper employ detectors that record the path followed by the test particle in the creation of photons. A proponent of Bohm’s theory might point out that since the theory is explicitly nonrelativistic and since the very statement of its auxiliary dynamics requires there to be a fixed number of particles, these experiments are simply outside the domain of the theory.
But perhaps it is possible to capture at least the spirit of ESSW’s experiments with experiments that are well within the domain of the minimal Bohm’s theory.<sup>11</sup>11The experiment below shows what would happen if one tried to record the path taken by the particle in the $`x`$-spin of another particle. Dewdney, Hardy, and Squires (1993) tried to capture the spirit of ESSW’s experiments by showing in graphic detail what would happen if one tried to record the path in terms of energy. Consider what happens when one tries to record $`P`$’s position in something other than position (one might naturally, and quite correctly, object that there is no other quantity in Bohm’s theory that one could use to record $`P`$’s position, but with the aim of trying to revive the surreal trajectories problem, read on). Suppose, for example, that one tries to record $`P`$’s position in a particle $`M`$’s $`x`$-spin: that is, suppose that the interaction between $`P`$ and $`M`$ is such that if $`P`$’s initial effective wave function were an $`x`$-spin up eigenstate, then nothing would happen to $`M`$’s effective wave function; but if $`P`$’s initial effective wave function were an $`x`$-spin down eigenstate, then the spin index of $`M`$’s effective wave function would be flipped from $`x`$-spin up to $`x`$-spin down (since $`x`$-spin is a contextual property in the minimal Bohm’s theory, the value of the $`x`$-spin record depends, as we will see, on how it is read, and one might thus, quite correctly, argue that it is not a record of the position of $`P`$ at all, but read on). In the standard von Neumann-Dirac collapse formulation of quantum mechanics (once a collapse had eliminated one term or the other of the correlated superposition!) one might naturally think of this interaction as recording $`P`$’s position in $`M`$’s $`x`$-spin. On this view, $`M`$ might be thought of as a sort of which-path detector. Continuing with the experimental setup, suppose further that $`M`$’s $`x`$-spin might then be converted to a position record by a detector with a flag particle $`F`$ designed to point at “No” if $`M`$ is in the $`x`$-spin up state and to point at “Yes” if $`M`$ is in the $`x`$-spin down state. The conversion of the $`x`$-spin record (though it will turn out that there is no determinate $`M`$-record until after this conversion is made in the delayed-choice interference experiments!) to a position record here consists in correlating the position of $`F`$ with the $`x`$-spin of $`M`$ (as represented by the quantum-mechanical state). The idea is that if $`P`$ is in an eigenstate of traveling path $`B`$, say, then $`M`$ will record this fact by the $`x`$-spin index on its wave function being flipped to $`x`$-spin down, which is something that might then be converted into a record in terms of the position of $`F`$ through the interaction between $`M`$ and $`F`$. In this case, $`F`$ would move to record the measurement result “Yes”.
[Figure 4: One tries to record the position of $`P`$ in the $`x`$-spin of $`M`$] The effective wave function of the composite system then evolves as follows: Initial state: $$|\uparrow _z\rangle _P|S\rangle _P|\uparrow _x\rangle _M|\text{“No”}\rangle _F=|S\rangle _P|\uparrow _x\rangle _M|\text{“No”}\rangle _F\,1/\sqrt{2}\,(|\uparrow _x\rangle _P+|\downarrow _x\rangle _P)$$ (12) $`P`$’s wave packet is split: $$|\uparrow _x\rangle _M|\text{“No”}\rangle _F\,1/\sqrt{2}\,(|\uparrow _x\rangle _P|A\rangle _P+|\downarrow _x\rangle _P|B\rangle _P)$$ (13) The $`x`$-spin component of $`M`$’s wave packet is correlated to the position of $`P`$: $$|\text{“No”}\rangle _F\,1/\sqrt{2}\,(|\uparrow _x\rangle _P|A\rangle _P|\uparrow _x\rangle _M+|\downarrow _x\rangle _P|B\rangle _P|\downarrow _x\rangle _M)$$ (14) The two wave packets pass through each other in configuration space: $$|\text{“No”}\rangle _F\,1/\sqrt{2}\,(|\uparrow _x\rangle _P|I\rangle _P|\uparrow _x\rangle _M+|\downarrow _x\rangle _P|I\rangle _P|\downarrow _x\rangle _M)$$ (15) Then they separate: $$|\text{“No”}\rangle _F\,1/\sqrt{2}\,(|\uparrow _x\rangle _P|A^{\prime}\rangle _P|\uparrow _x\rangle _M+|\downarrow _x\rangle _P|B^{\prime}\rangle _P|\downarrow _x\rangle _M)$$ (16) Then the position of $`F`$ is correlated to the $`x`$-spin component of $`M`$’s wave packet: $$1/\sqrt{2}\,(|\uparrow _x\rangle _P|A^{\prime}\rangle _P|\uparrow _x\rangle _M|\text{“No”}\rangle _F+|\downarrow _x\rangle _P|B^{\prime}\rangle _P|\downarrow _x\rangle _M|\text{“Yes”}\rangle _F)$$ (17) Note that here the symmetry in the probability current that prevents $`P`$ from crossing $`L`$ is preserved. That is, $`P`$ bounces just as it did in the first experiment we considered. [Figure 5: Last experiment in configuration space] If the three-particle configuration begins in the top half of the initial wave packet (as represented in Figure 5), then $`P`$ will move from $`S`$ to $`A`$ to $`I`$ to $`B^{\prime}`$; then, when $`M`$ and $`F`$ interact, $`F`$ will move to “Yes”. If the three-particle configuration begins in the bottom half of the initial wave packet, then $`P`$ will move from $`S`$ to $`B`$ to $`I`$ to $`A^{\prime}`$ and, when $`M`$ and $`F`$ interact, $`F`$ will stay at “No”. That is, the final position of $`F`$ will be at “No” if $`P`$ traveled along the lower path and it will be at “Yes” if $`P`$ traveled along the upper path. In other words, $`F`$’s final position does not tell us which path $`P`$ followed in the way that it was intended. One might naturally conclude that the which-path detector is fooled by the late measurement, and defend Bohm’s theory against ESSW by denying their implicit assumption that the which-path detectors are reliable.<sup>12</sup>12In their 1993 paper, Dewdney, Hardy, and Squires argue precisely this. There is, however, another way of looking at a delayed-choice interference experiment where one tries to record the path in some property other than position. One might claim both that Bohm’s theory is true and that one’s detectors are perfectly reliable. This, it seems to me, is an option suggested by DFGZ’s discussion of the theory-ladenness of talk of trajectories and of observation itself near the end of their response to ESSW’s original paper. Given their contention that the experiments described by ESSW could provide no empirical reason for rejecting Bohmian mechanics, DFGZ ask the question “So what on earth is going on here?” > The answer appears to be this: The authors [ESSW] distinguish between the Bohm trajectory for the atom and the detected path of the atom. In this regard it would be well to bear in mind that before one can speak coherently about the path of a particle, detected or otherwise, one must have in mind a theoretical framework in terms of which this notion has some meaning. BM provides one such framework, but it should be clear that within this framework the [test particle] can be detected passing only through the slit through which its trajectory in fact passes.
More to the point, within a Bohmian framework it is the very existence of trajectories which allows us to assign some meaning to this talk about detection of paths. (1261–2) It seems that there are two points here. The first point concerns the theory-ladenness of talk about trajectories. On the orthodox formulation of quantum mechanics, there is no matter of fact at all concerning which path the test particle traveled since it simply fails to have any determinate position whatsoever before it is detected. Indeed, insofar as ESSW’s description of the surreal trajectories experiments presupposes that there are determinate particle trajectories, they are presupposing something that is incompatible with the very quantum orthodoxy they seek to defend! The point here is that any talk of determinate trajectories is talk within a theory. A precondition of such talk is that one have a theory where there are determinate trajectories, a theory like Bohmian mechanics. DFGZ’s second point, if I understand it correctly, concerns the theory-ladenness of observation, but this will first require some clarification. Their claim that in Bohmian mechanics a test particle “can be detected passing only through the slit through which its trajectory in fact passes” suggests that they were considering only experiments like those in the last section where the which-path detector indicates in a perfectly straightforward way the path that the test particle in fact followed. But DFGZ do in fact grant that there are situations where Bohm’s theory predicts that a late observation of a which-path detector would find that the detector registers that the test particle traveled one path when it in fact traveled the other. They also grant that this is somewhat surprising. But they explain that “if we have learned anything by now about quantum theory, we should have learned to expect surprises!” And DFGZ maintain that even in such experiments the measurement performed by the which-path detector “can indeed be regarded as a measurement of which path the [test particle] has taken, but one that conveys information which contradicts what naively would have been expected.” DFGZ then draw the moral that “BM, together with the authors [ESSW] of the paper on which we are commenting, does us the service of making it dramatically clear how very dependent upon theory is any talk of measurement or observation” (1262). While it is not entirely clear what DFGZ have in mind, one way to read this is that, contrary to what is later argued by Dewdney, Hardy, and Squires (1993), DFGZ take the which-path detector to be perfectly reliable even in experiments where it “records” that the test particle traveled one path when it in fact (according to Bohm’s theory) traveled the other once one understands what the detector is detecting. On this reading, then, the point here is that since observation is itself a theory-laden notion, what one is detecting can only be determined in the context of a theory that explains what it is that one is detecting. But if this is what DFGZ had in mind, then what exactly does Bohm’s theory tell us that a which-path detector is detecting in the context of a late-measurement experiment? (And is it really possible to tell a plausible story where the detectors are perfectly reliable here?)
Perhaps the easiest answer would be to insist that when one tries to record the path taken by the test particle in a property other than position (in a delayed-choice interference experiment), one’s which-path detector simply works in exactly the opposite way that one would expect. The detector is perfectly reliable—it is just that when it records that the test particle traveled path $`A`$, the detector record (under such circumstances) really means, according to Bohm’s theory, that the test particle in fact traveled path $`B`$; and, similarly, on this view a $`B`$ record means, according to Bohm’s theory, that the test particle traveled path $`A`$. It seems, however, that this cannot be quite right. When one tries to record the path that the test particle traveled in a property other than position (in the delayed-choice interference experiment), there is no determinate record whatsoever (on the minimal Bohm’s theory) before the test particle passes through the interference region $`I`$ because the which-path detector has not yet correlated the position of anything with the position of the test particle. The right thing to say, it seems to me, is that while the which-path detector does not detect anything before one correlates the position of the flag $`F`$ with $`M`$’s $`x`$-spin (on the minimal Bohm’s theory), whenever one makes a determinate record in Bohm’s theory using a device that induces a strong correlation between the measured position of the object system and the position that records the outcome, then that record will be perfectly reliable at the moment the determinate record is made. On this view, there is still a sense in which one can think of the detectors in the delayed-choice interference experiments as being perfectly reliable, but this will take some explaining. As DFGZ suggest, we naturally rely on our best physical theories to tell us what it is that our measuring devices in fact measure, so what does Bohm’s theory tell us about the late measurement of the which-path detector in the delayed-choice interference experiment? Note, again, that there is no determinate record whatsoever before the late measurement. Also note that while the final position of $`F`$ does not tell us where $`P`$ was when $`P`$ interacted with $`M`$, it does reliably tell us where $`P`$ is at the moment that the $`x`$-spin correlation is converted into a determinate measurement record (when the position of $`F`$ is correlated with the $`x`$-spin of $`M`$): if one gets the result “No” ($`x`$-spin up), then the theory tells us that $`P`$ is currently associated with the $`x`$-spin up wave packet wherever that wave packet may be, and if one gets the result “Yes” ($`x`$-spin down), then it tells us that $`P`$ is currently associated with the $`x`$-spin down wave packet wherever that wave packet may be. So this is how it works. Since the only determinate noncontextual records in Bohm’s theory are records in terms of the position of something, there is, strictly speaking, no determinate record of $`P`$’s position until we convert the correlation between the position of $`P`$ and the $`x`$-spin of $`M`$ into a correlation between the position of $`P`$ and the position of $`F`$. And whenever this position correlation is made, we reliably, and nonlocally, generate a record of $`P`$’s position at that moment.
If we wait until after $`P`$ has passed through region $`I`$, then if $`F`$ stays at “No”, this means that $`P`$ is associated with the $`x`$-spin up component, which means that it is at position $`A^{\prime}`$; and if $`F`$ moves to “Yes”, this means that $`P`$ is associated with the $`x`$-spin down component, which means that it is at position $`B^{\prime}`$. The moral is that one cannot use a record in Bohm’s theory to figure out which path $`P`$ took unless one knows how and when the record was made. But note that in this Bohm’s theory is arguably better off than the standard von Neumann-Dirac collapse formulation of quantum mechanics. On the standard eigenvalue-eigenstate link (where a system determinately has a property if and only if it is in an eigenstate of having the property) one can say nothing whatsoever about which trajectory a particle followed since it would typically fail to have any determinate position until it was observed. If one does not worry about the unreliability of retrodiction in the context of the standard collapse theory (and ESSW do not seem to be worried about this!), then I can see no reason at all to worry about it in the context of Bohm’s theory. Further, there is no reason to suppose that Bohmian particle trajectories are not the actual particle trajectories. Nor is there any reason to conclude that our good particle detectors are somehow unreliable. Rather than saying that a detector is fooled by a late measurement, one should, I suggest, say that the late measurement reliably detects the position of the test particle nonlocally. On this view the surreal trajectories experiments simply serve to reveal the special role played by position and, ultimately, the nonlocal structure of Bohm’s theory. As Bell explained, “The fact that the guiding wave, in the general case, propagates not in ordinary three-space, but in a multi-dimensional configuration space is the origin of the notorious ‘nonlocality’ of quantum mechanics. It is a merit of the de Broglie-Bohm version to bring this out so explicitly that it cannot be ignored” (1987, 115). But one should note that it is no subtle sort of nonlocality that is involved in the account of quantum-mechanical correlations here. The configuration space particle dynamics that accounts for the nonlocal correlations in the late-measurement experiments makes Bohm’s theory incompatible with relativity. But there is one more point that I would like to make before turning to a discussion of the relationship between how Bohm’s theory accounts for surreal trajectories and its incompatibility with relativity. As suggested above (on the minimal Bohm’s theory), whenever the position of one system is recorded in the position of another system via a strong correlation between the effective wave functions of the two systems (one that produces the appropriate separation of the wave function in the recording parameter in configuration space), then that record will reliably indicate where the measured particle is at the moment the determinate record is made. It is also the case (on the minimal Bohm’s theory) that all determinate records are ultimately position records. One can only take these facts to provide a solution to the surreal trajectories problem if one allows for Bohm’s theory to tell one something about what one is observing when one observes (or, in somewhat different language, what constitutes a good measuring device). But it seems that this is precisely the sort of thing that one must be willing to do when entertaining a new theoretical option.
One might dogmatically insist on holding to one’s pre-theoretic intuitions concerning what one’s detectors detect come what may, but this would certainly be a methodological mistake. ## 6 Consider again the late-measurement experiment of the last section (see figures 4 and 5). If $`P`$ begins in the top half of the wave function at $`S`$, it will travel path $`A`$ to $`I`$ in the $`x`$-spin up wave packet. That is, before $`P`$ gets to $`I`$, the three-particle configuration will be associated with the $`x`$-spin up component of the wave function in configuration space. And this means that if one converts the spin record into a position record before the two wave packets interfere at $`I`$, one will get the result “No”. But if $`P`$ continues to $`I`$, bounces, and the configuration is picked up by the $`x`$-spin down wave packet, then, since the configuration is now associated with the $`x`$-spin down wave packet, if one now converts the $`x`$-spin record into a position record, one will get the result “Yes”. This means that one might instantaneously determine the value of the converted record at $`B`$ (the record one gets by converting the $`M`$ $`x`$-spin “record” into an $`F`$ position record) by choosing whether or not to interfere the two wave packets at $`I`$. If the two wave packets pass through each other, then $`F`$ will move to “Yes” when the spin record is converted; if not, then $`F`$ will stay at “No” when the spin record is converted. So, if someone at $`I`$ knew which path $`P`$ was on (something, as explained earlier, that is prohibited in Bohm’s theory if the distribution postulate is satisfied), then he or she could use this information to send a superluminal signal to a friend on path $`B`$ by deciding whether or not to interfere the wave packets at $`I`$. But regardless of whether one knows which path $`P`$ is on, the theory predicts (insofar as one is comfortable with the relevant counterfactuals in the context of a deterministic theory) that one can instantaneously affect the result of a measurement of $`M`$ from region $`I`$, and one might take the possibility of superluminal effects here to illustrate the incompatibility of Bohm’s theory and relativity. This incompatibility is more clearly illustrated by considering the role that the temporal order of events plays in Bohm’s theory. Consider the late-measurement experiment one more time. If one converts the spin record before the two wave packets interfere at $`I`$, then one will get the result “No”; and if one converts the spin record after the wave packets interfere, then one will get the result “Yes”. But if the conversion of the spin record and the interference of the wave packets are space-like separated events, then the conversion event occurs before the interference event in some inertial frames and after the interference event in others. So in order to get any empirical predictions whatsoever out of Bohm’s theory for this experiment whenever the conversion and interference events are space-like separated, one must choose a preferred inertial frame that imposes a preferred temporal order on the conversion and interference events. But having to choose a preferred inertial frame here is a direct violation of the basic principles of relativity. This is the sense in which the account that Bohm’s theory provides of the late-measurement experiment is fundamentally incompatible with relativity.
If the distribution postulate is satisfied, then Bohm’s theory makes the same empirical predictions as the standard von Neumann-Dirac formulation of quantum mechanics (whenever the latter makes unambiguous predictions), and the standard quantum statistics do not allow one to send superluminal messages (given the usual quantum statistics, one can prove a no-signaling theorem). So while Bohm’s theory is not Lorentz-covariant, it explains why one would never notice this fact (just as it explains why one would never notice violations in the conservation of momentum). A proponent of Bohm’s theory might argue that nonlocally correlated motions like the correlated motions in the conversion and interference events described above are too weak a relationship to be causal, and that Bohm’s theory thus does not in fact allow for nonlocal causation. While such a conclusion would do nothing to make Bohm’s theory compatible with relativity even if it were granted, I do not think that it should be granted. It seems to me that if any correlated motions should count as causally connected in Bohm’s theory, then nonlocal correlated motions should as well. Nonlocal correlated motions, like local correlated motions (insofar as there are any truly local correlated motions!), are simply the result of the configuration space evolution of the physical state. The point here is that Bohm’s theory handles nonlocal correlated motions precisely the same way that it handles events that one would presumably want to count as causal—like the correlated motion produced between a football and the foot that kicks it. Of course, one might resist the conclusion that nonlocal correlated motions are causally related by denying that there are any causal relationships whatsoever in Bohm’s theory. But this would mean that even those explanations that one gives that look like causal explanations are not, and this seems to me to be putting things the wrong way around. Just as we look to our best theories to tell us how to build good detectors and to explain what it is that they detect, it seems that we should also look to our best theories to tell us something about the nature of causal relations. There is nothing inherently wrong with sitting down and deciding once and for all the necessary and sufficient conditions for events to be causally related. It is just that one risks adopting a notion of causation that is irrelevant to the sort of explanations provided by our best physical theories.<sup>13</sup>13Michael Dickson (1996) has argued that it does not make any sense to ask whether a deterministic theory like Bohm’s theory is local because of the difficulty of supporting counterfactual conditionals in such a theory. He also suggests that the notion of causality may not make sense in such a theory either (1996, 329). I agree that some intuitions concerning what it would mean for a theory to be local or what it would mean for one event to cause another cannot be supported in a deterministic theory, but this does not mean that we can make no sense at all of what it would be for a deterministic theory to be local or for one event to cause another. Indeed, whether Bohm’s theory is Lorentz covariant is a perfectly good sensible question concerning its locality—and it isn’t. But regardless of what one thinks about causality, the particle trajectories predicted by Bohm’s theory depend on one’s choice of inertial frame, which means that the theory is incompatible with the basic principles of relativity.
And this is the real problem. ## 7 It is the configuration space dynamics that makes Bohm’s theory incompatible with relativity. But it is also the instantaneous correlated motion predicted by the configuration space dynamics that explains the quantum-mechanical correlations in Bohm’s theory and makes the theory empirically adequate. And it is the configuration space dynamics that allows one to say that whenever the position of one system is recorded in the position of another system via a strong correlation between the effective wave functions of the two systems, then that record will reliably indicate where the measured system is at the moment the determinate record is made, which, it seems to me, is ultimately the best response to the supposed surreal trajectory problem. But this leaves a proponent of Bohm’s theory with a difficult choice. One might try to find some new way to account for quantum-mechanical correlations, one that does not require a preferred temporal order for space-like separated events where objects exhibit correlated properties. But it should be clear from the configuration-space stories told above that such a theory would have to explain quantum-mechanical correlations in a way that is fundamentally different from the configuration-space way in which they are explained by Bohm’s theory. And, of course, actually finding such an alternative is much easier said than done.<sup>14</sup>14For other discussions of the incompatibility of Bohm’s theory and relativity see Albert (1992) and Arntzenius (1994). For a recent discussion concerning the difficulty in getting a Bohm-like auxiliary quantum dynamics that is compatible with relativity see Dickson and Clifton (1998). Or one might simply drop the requirement of Lorentz covariance as a feature of a satisfactory dynamics and settle for something weaker, perhaps something like apparent Lorentz covariance. But this would be an enormous theoretical sacrifice—presumably one that few physicists would seriously entertain.

REFERENCES

Aharonov, Y. and L. Vaidman: 1996, ‘About Position Measurements which do not show the Bohmian Particle Position,’ in J. T. Cushing et al. (eds) (1996, 141–154).
Albert, D. Z.: 1992, Quantum Mechanics and Experience, Harvard University Press, Cambridge.
Arntzenius, F.: 1994, ‘Relativistic Hidden-Variable Theories?’ Erkenntnis 41, 207–231.
Bacciagaluppi, G. and M. Dickson: 1996, ‘Modal Interpretations with Dynamics,’ in Dieks and Vermaas (eds) (1998).
Barrett, J. A.: 1999, The Quantum Mechanics of Minds and Worlds, Oxford University Press, Oxford.
1996, ‘Empirical Adequacy and the Availability of Reliable Records in Quantum Mechanics,’ Philosophy of Science 63, 49–64.
1995, ‘The Distribution Postulate in Bohm’s Theory,’ Topoi 14, 45–54.
Bell, J. S.: 1987, Speakable and Unspeakable in Quantum Mechanics, Cambridge University Press, Cambridge.
1982, ‘On the Impossible Pilot Wave,’ Foundations of Physics 12, 989–999. Reprinted in Bell (1987, 159–168).
1981, ‘Quantum Mechanics for Cosmologists,’ in Quantum Gravity 2, C. Isham, R. Penrose, and D. Sciama (eds.), Clarendon Press, Oxford, 611–637. Reprinted in Bell (1987, 117–138).
1980, ‘de Broglie–Bohm, Delayed-Choice Double-Slit Experiment, and Density Matrix,’ International Journal of Quantum Chemistry: Quantum Chemistry Symposium 14, 155–159. Reprinted in Bell (1987, 111–116).
1976b, ‘The Measurement Theory of Everett and de Broglie’s Pilot Wave,’ in Quantum Mechanics, Determinism, Causality, and Particles, M. Flato et al. (eds.), D. Reidel, Dordrecht, Holland, 11–17. Reprinted in Bell (1987, 93–99).
Berndl, K., M. Daumer, D. Dürr, S. Goldstein, and N. Zanghì: 1995, ‘A Survey of Bohmian Mechanics,’ Il Nuovo Cimento 110B, n. 5–6, 737–750.
Bohm, D.: 1952, ‘A Suggested Interpretation of the Quantum Theory in Terms of “Hidden” Variables,’ Parts I and II, Physical Review 85, 166–179, 180–193.
Bohm, D. and B. J. Hiley: 1993, The Undivided Universe: An Ontological Interpretation of Quantum Theory, Routledge, London.
Cushing, J. T.: 1996, ‘What Measurement Problem?’ in R. Clifton (ed.) (1996).
1994, Quantum Mechanics: Historical Contingency and the Copenhagen Hegemony, University of Chicago Press, Chicago.
Cushing, J. T., A. Fine, and S. Goldstein (eds): 1996, Bohmian Mechanics and Quantum Theory: An Appraisal, Boston Studies in the Philosophy of Science, vol. 184, Kluwer Academic Publishers, Dordrecht, The Netherlands.
Dewdney, C., L. Hardy, and E. J. Squires: 1993, ‘How Late Measurements of Quantum Trajectories Can Fool a Detector,’ Physics Letters A 184, 6–11.
Dickson, M.: 1996, ‘Is the Bohm Theory Local?’ in J. T. Cushing et al. (eds) (1996, 321–330).
Dickson, M. and R. Clifton: 1998, ‘Lorentz-Invariance in Modal Interpretations,’ in Dieks and Vermaas (eds) (1998), forthcoming.
Dieks, D. G. B. J. and P. E. Vermaas (eds): 1998, The Modal Interpretation of Quantum Mechanics, Kluwer Academic Publishers, forthcoming.
Dirac, P. A. M.: 1958, The Principles of Quantum Mechanics, fourth edition, Clarendon Press, Oxford.
Dürr, D., W. Fusseder, S. Goldstein, and N. Zanghì: 1993, ‘Comment on “Surrealistic Bohm Trajectories”,’ Zeitschrift für Naturforschung 48a, 1261–1262.
Dürr, D., S. Goldstein, and N. Zanghì: 1993, ‘A Global Equilibrium as the Foundation of Quantum Randomness,’ Foundations of Physics 23, no. 5, 721–738.
1992, ‘Quantum Mechanics, Randomness, and Deterministic Reality,’ Physics Letters A 172, 6–12.
1992, ‘Quantum Equilibrium and the Origin of Absolute Uncertainty,’ Journal of Statistical Physics 67, nos. 5–6, 843–907.
Englert, B. G., M. O. Scully, G. Süssmann, and H. Walther: 1993, ‘Reply to Comment on “Surreal Bohm Trajectories”,’ Zeitschrift für Naturforschung 48a, 1263–1264.
1992, ‘Surrealistic Bohm Trajectories,’ Zeitschrift für Naturforschung 47a, 1175–1186.
Maudlin, T.: 1994, Quantum Nonlocality and Relativity, Blackwell, Oxford.
Philippidis, C., C. Dewdney, and B. J. Hiley: 1979, ‘Quantum Interference and the Quantum Potential,’ Il Nuovo Cimento 52B, 15–28.
von Neumann, J.: 1955, Mathematical Foundations of Quantum Mechanics, Princeton University Press, Princeton; translated by R. Beyer from Mathematische Grundlagen der Quantenmechanik, Springer, Berlin, 1932.
# HST Images of Stephan’s Quintet: Star Cluster Candidates in a Compact Group Environment ## 1. Introduction The Hickson Compact Groups (HCG; Hickson 1982) are among the densest concentrations of galaxies in the local universe. These high densities combined with relatively low velocity dispersions, $`\sigma \sim (2-3)\times 10^2`$ km s<sup>-1</sup> (Hickson et al. 1992), make them active sites of strong galaxy interactions. Interactions are believed to initiate bursts of star cluster formation on many scales, from dwarf galaxies along tidal tails to massive star clusters, the progenitors of today’s globular clusters. One group in particular, Stephan’s Quintet (SQ; also known as HCG 92), is notable for evidence of multiple interactions. SQ is comprised of five galaxies: NGC 7317, NGC 7318A and B, NGC 7319 and NGC 7320 (see Fig. 1 for galaxy identifications). Based on multiwavelength observations of the group, NGC 7317 and NGC 7320 show no evidence for recent interactions, unlike the other three galaxies (NGC 7320 is a foreground galaxy). In particular, NGC 7318B shows morphological disruption of spiral structure, and a long tidal tail extends from NGC 7319. The interactions have resulted in recent and ongoing star formation, as evident from $`B-V`$ (Schombert et al. 1990), H$`\alpha `$ (Vílchez & Iglesias-Páramo 1998) and far-infrared (Xu, Sulentic & Tuffs 1999) imaging. Furthermore, in the photometric dwarf galaxy study of Hunsberger, Charlton, & Zaritsky (1996), SQ was identified as hosting the richest known system of tidal dwarf galaxy candidates. From these studies, only the largest star-forming regions were resolved; many of the young stars appeared to be distributed in the diffuse light in the tidal features between the galaxies. High spatial resolution is required to identify star cluster candidates (SCC), which at the distance of SQ ($`z=0.02`$; $`d\sim 66h^{-1}`$ Mpc) are faint point sources on the Wide Field and Planetary Camera 2 (WFPC2). Hubble Space Telescope (HST) imaging was the obvious next step for investigating the full range in scale of massive star formation structure. Furthermore, with these images we could investigate whether star clusters form in diverse environments, from the inner regions of galaxies to tidal debris tens of kiloparsecs from a galaxy center. ## 2. Observations and Data Analysis SQ was observed with the HST WFPC2 in two pointings. The first, on 30 Dec 1998, encompassed NGC 7318A/B and NGC 7319. The second, on 17 Jun 1999, covered the extended tidal tail of NGC 7319. On both occasions, the images were once dithered<sup>1</sup>1Dithering entails offsetting the image position by a half-integer pixel amount in both the $`x`$ and $`y`$ directions in order to increase the effective resolution of the combined image by better sampling the PSF. In this case, we obtained two images in each field and filter. and taken through three wide-band filters: F450W ($`B`$), F569W ($`V`$) and F814W ($`I`$). The exposure times in each field were $`4\times 1700`$ s, $`4\times 800`$ s and $`4\times 500`$ s for $`B`$, $`V`$ and $`I`$, respectively. The data were first processed through the standard HST pipeline. Subsequently, they were cleaned of cosmic rays using the STSDAS task GCOMBINE, followed by the IRAF task COSMICRAYS to remove hot pixels. Fig. 1 shows the $`V`$ band image of both fields combined with the regions of interest labeled. The initial detection of point sources was undertaken using the DAOFIND routine in DAOPHOT (Stetson 1987) with a very low detection threshold.
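As a rough modern illustration of this detection-and-photometry chain (not the actual reduction used here, which relied on DAOPHOT and IRAF), the Python sketch below uses the photutils re-implementations of DAOFIND and aperture photometry; the input file name and the FWHM and threshold values are placeholder assumptions, and the two-aperture $`\mathrm{\Delta }_V`$ point-source cut applied at the end is the one defined in the text below.

```python
import numpy as np
from astropy.stats import sigma_clipped_stats
from photutils.detection import DAOStarFinder
from photutils.aperture import CircularAperture, aperture_photometry

# A 2-D chip image as a numpy array, assumed already pipeline-processed
# and cleaned of cosmic rays; the file name is a placeholder.
data = np.load("wf3_F569W.npy")

# Robust background statistics for setting the detection threshold.
mean, median, std = sigma_clipped_stats(data, sigma=3.0)

# Deliberately low threshold, as in the text; spurious detections are
# culled afterwards by the S/N, FWHM, and two-aperture cuts.
finder = DAOStarFinder(fwhm=2.0, threshold=2.0 * std)
sources = finder(data - median)

positions = np.transpose((sources["xcentroid"], sources["ycentroid"]))
phot_small = aperture_photometry(data - median, CircularAperture(positions, r=0.5))
phot_large = aperture_photometry(data - median, CircularAperture(positions, r=3.0))

# Two-aperture magnitude difference Delta_V = V(0.5 pix) - V(3.0 pix);
# extended sources lose proportionally more light from the small aperture.
ratio = np.asarray(phot_small["aperture_sum"]) / np.asarray(phot_large["aperture_sum"])
delta_v = np.full(ratio.shape, np.inf)
delta_v[ratio > 0] = -2.5 * np.log10(ratio[ratio > 0])

point_like = delta_v < 2.4   # reject extended sources, as in the text
print(f"{point_like.sum()} point-source candidates of {len(sources)} detections")
```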
This low-threshold search produced thousands of sources per chip, and we then performed aperture photometry on all sources. Those sources with $`S/N>3.0`$ that appeared in the images at both dither positions were retained. Sources with $`\mathrm{FWHM}>2.5`$ or $`\mathrm{\Delta }_V>2.4`$<sup>2</sup>2$`\mathrm{\Delta }_V`$ is the difference between the $`V`$ magnitudes calculated with two photometric apertures: one with radius 0.5 pix and the other with radius 3.0 pix. were rejected as extended (Miller et al. 1997). Those point sources with $`V-I>2.0`$ are likely foreground stars, and the remaining sources are considered star cluster candidates (SCC). This sample will clearly contain some foreground stars and background galaxies, but the spatial coincidence of most of the sources with the galaxy bulges and tidal features is evidence that many candidates are legitimate SCC. Approximately 150 sources were found in all three filters; they are plotted in the $`B-V`$ versus $`V-I`$ color-color plot in Fig. 2. In Fig. 3, zoom images of the tidal tail in NGC 7319 and the northern starburst region (NSR) have the SCC marked with circles. For a discussion of the extended sources in the field, see Hunsberger et al. (this proceedings). ## 3. Discussion ### 3.1. Dynamical History of Stephan’s Quintet The diversity of tidal features in SQ is indicative of the complex interaction history in the group. In the dynamical history proposed by Moles, Sulentic, & Márquez (1997; hereafter MSM97), NGC 7320C (out of the frame of Fig. 1 to the northeast) passed through the group a few hundred million years ago, stripping NGC 7319 of much of its HI (Shostak et al. 1984) and inducing the extension of the tidal tail. In addition, gas was deposited in the area that is currently the NSR. This first event would have induced star formation in the environs of NGC 7319 and perhaps triggered the observed Seyfert 2 activity in the nucleus. Two of the four galaxies in Fig. 1, NGC 7319 and NGC 7318A, have radial velocities within 50 km s<sup>-1</sup> of 6600 km s<sup>-1</sup>. A third, NGC 7318B, while apparently interacting with NGC 7318A, has a discordant velocity, $`v=5700`$ km s<sup>-1</sup> (Hickson et al. 1992). This discordant velocity might suggest that NGC 7318B is a foreground galaxy, but that interpretation is inconsistent with the obvious morphological distortion seen in Fig. 1. Instead, in the most recent and ongoing interaction event NGC 7318B is falling into the group for the first time. HI maps of the group show that NGC 7318B still retains the bulk of its gas (Shostak et al. 1984; MSM97), unlike all of the galaxies with concordant velocities. As NGC 7318B approaches SQ, its interstellar medium (ISM) is shocking the gas of the intragroup medium (IGM) in the NSR and along its eastern spiral arm. An extended arc of both radio continuum (van der Hulst & Rots 1981) and X-ray (Pietsch et al. 1997) emission supports the shocked gas scenario, and H$`\alpha `$ emission in the same region at the radial velocity of NGC 7318B indicates that the collision-compressed gas is being converted into stars (MSM97). Hunsberger et al. (1996) also found tidal dwarf galaxy candidates along part of the same structure. ### 3.2. Star Formation History from SCC Colors In all regions with tidal features or galaxies, we identified SCC. From the simulations of Ashman & Zepf (1992) of merger remnants, we expected to find massive young star clusters in the bulges of NGC 7318A and B, but there we only detected point sources with colors consistent with old globular clusters.
This result can be understood if the interaction between NGC 7318A and B is relatively recent, and star formation is just beginning in the outer regions of the galaxies. This picture is consistent with the observations of NGC 7252 (Miller et al. 1997) and the Antennae (Whitmore et al. 1999), which suggest that cluster formation is initiated at large galactic radii and propagates inward over time. In NGC 7319, we do find young SCC in the disk and bulge, supporting the older interaction scenario for the event which stripped it of its gas and pulled out the tidal tail. From our images, it is also clear that star clusters can form outside of galaxies. In the NSR, the star formation is occurring $`\gtrsim 20`$ kpc from the bulge of the nearest galaxy. In addition, we discovered several young star clusters in the tidal tail of NGC 7319. In the color-color plot (Fig. 2), there is a clear distinction between the sources associated with NGC 7318B and those in NGC 7319 and its tidal tail. The most recent star formation is occurring in the NSR and the spiral arms of NGC 7318B; ages of some SCC in those regions are at least as young as 5 Myr. Any intrinsic dust extinction would only cause an overestimate of the ages, as the reddening vector is approximately parallel to the evolutionary tracks at that point. In addition to the youngest SCC in each region, we also observe a spread of ages from old globular cluster candidates (GCC) with ages $`\tau \sim 10^{10}`$ yr to more intermediate-aged SCC, $`\tau \sim 10^8`$ yr. This spread is most apparent in the NSR and along the tidal tail, both regions where extended periods of interaction-induced star formation are reasonable. Furthermore, the presence of the old GCC in the tidal features suggests they were pulled out of their birth galaxies as a result of the interactions. ## 4. Conclusions From HST WFPC2 images, we find $`\sim 150`$ SCC in the environs of SQ. SCC are found both within the bulges of each of the galaxies NGC 7318A/B and NGC 7319, and also in tidal features. The ages deduced from $`B-V`$ versus $`V-I`$ colors of SCC are consistent with the complex interaction scenario outlined by MSM97. Since only old GCC are found in the centers of NGC 7318A/B, this suggests that recent star formation has not yet occurred there. Very young SCC are found along the interaction shock front between the ISM of NGC 7318B and the IGM of SQ, supporting the hypothesis that this is a recent event. The spread of ages in SCC found throughout the field is indicative of recurring episodes of interaction-induced star formation. #### Acknowledgments. We are grateful to A. Kundu and B. Whitmore for sharing their expertise in identifying and analyzing point sources in WFPC2 images. This work was supported by Space Telescope Science Institute under Grant GO–06596.01. ## References

Ashman, K. M. & Zepf, S. E. 1992, ApJ, 384, 50
Bruzual, G. A. & Charlot, S. 1993, ApJ, 405, 538
Hickson, P. 1982, ApJ, 255, 382
Hickson, P., Mendes De Oliveira, C., Huchra, J. P. & Palumbo, G. G. 1992, ApJ, 399, 353
Hunsberger, S. D., Charlton, J. C. & Zaritsky, D. 1996, ApJ, 462, 50
Miller, B. W., Whitmore, B. C., Schweizer, F. & Fall, S. M. 1997, AJ, 114, 2381
Moles, M., Sulentic, J. W., & Márquez, I. 1997, ApJ, 485, L69 (MSM97)
Moles, M., Márquez, I., & Sulentic, J. W. 1998, A&A, 334, 473
Paturel, G., et al. 1997, A&AS, 124, 109
Pietsch, W., Trinchieri, G., Arp, H., & Sulentic, J. W. 1997, A&A, 322, 89
Schombert, J. M., Wallin, J. F. & Struck-Marcell, C. 1990, AJ, 99, 497
Shostak, G. S., Allen, R. J., & Sullivan, W. T. 1984, A&A, 139, 15
Stetson, P. B. 1987, PASP, 99, 191
van der Hulst, J. M. & Rots, A. H. 1981, AJ, 86, 12
Vílchez, J. M. & Iglesias-Páramo, J. 1998, ApJS, 117, 1
Whitmore, B. C., Zhang, Q., Leitherer, C., Fall, S. M., Schweizer, F. & Miller, B. W. 1999, AJ, 118, 1551
Xu, C., Sulentic, J. W. & Tuffs, R. 1999, ApJ, 512, 178

## Discussion

J. Gallagher: What is the spatial distribution of the cluster colors as compared to colors of the more diffuse debris? This might help in investigating differences between cluster formation versus cluster evolution.

S. G.: In general, the diffuse emission between the galaxies has $`B-V`$ colors similar to those of the outer regions of spiral disks. More specifically, in the NSR and along the eastern spiral arm of NGC 7318B, the diffuse light has $`B-V`$ colors between 0.3 and 0.5 (Schombert et al. 1990), as do some regions in the tidal tail. We find young cluster candidates in those regions with similar colors, as well as some with $`B-V<0.3`$.

T. Böker: In your color-color diagram, there are a handful of “clusters” that are not explained by reddening. Do you have any idea what they are?

S. G.: There does appear to be a group of point sources clumped below the evolutionary tracks on the red end of the $`V-I`$ axis. I have investigated each of them, and they do not appear to be part of a distinct population. A few of these sources are quite faint in $`B`$, which could cause some scatter, and there is certainly some contamination from background galaxies and stars.

U. Fritze-von Alvensleben: Where is HI located? Is there any correlation between the absence of HI and the absence of young star clusters?

S. G.: The HI distribution is unusual, as most of the gas in the group is outside of the galaxies. There is as much HI as is typically found in an entire spiral galaxy to the south of NGC 7319, including the tidal tail, and a fair amount in the NSR as well (Shostak et al. 1984). We find young SCC in both of those regions. The disk of NGC 7319 is almost entirely lacking in gas, but we find some young SCC candidates in that galaxy, though they are strung along the spiral arms. The bulge of NGC 7318B still has its HI, and does not appear to contain any young SCC.

G. Meurer: Are any of the centers of the galaxies blue? I suspect the reason that you don’t see any nuclear clusters is that the galaxies are too far away; hence crowding makes them difficult to distinguish.

S. G.: NGC 7318A and B have similar central colors: $`B-V\sim 1.0`$ and $`V-I\sim 1.2`$, which are not particularly blue (though there may be a significant amount of intrinsic reddening). NGC 7319 is bluer, with $`B-V\sim 0.5`$ and $`V-I\sim 1.2`$; those colors are consistent with the Seyfert 2 activity in the nucleus. The complex structure in the center of each of these galaxies would certainly make detecting a nuclear cluster very difficult. However, we find no young SCC within the inner 2–3 kpc even where the light distribution is smooth. In NGC 7319 we do find young SCC within the bulge of the galaxy.
# Dynamics of the decay η → 3π⁰ ## 1 Introduction Chiral perturbation theory offers a consistent description of low energy QCD particularly applicable to meson decays. An outstanding difficulty is the large discrepancy between the predicted and observed rate of the $`\eta \rightarrow 3\pi `$ decay gasser. We report here a measurement of the transition matrix element $`M(\eta \rightarrow 3\pi ^0)^2`$. The density of events on the Dalitz plot has been predicted to be uniform by Gasser and Leutwyler gasser and by Di Vecchia vecchia. More recently a calculation of Kambor, Wiesendanger and Wyler kambor found a density which decreases as the distance from the center of the plot increases. The Dalitz plot density is specified by a single parameter, conventionally denoted as $`\alpha `$. Kambor et al. show that both $`\alpha `$ and the rate $`\mathrm{\Gamma }(\eta \rightarrow 3\pi )`$ are sensitive to a parameter of the theory ($`\overline{c}`$ in their notation). The dependence of $`\alpha `$ and $`\mathrm{\Gamma }`$ on $`\overline{c}`$ is such that a more negative value of $`\alpha `$ reduces the value of $`\mathrm{\Gamma }`$, thereby increasing the disagreement with experiment. Two values of $`\alpha `$ are determined by the authors of kambor, -0.014 and -0.007, the latter containing higher order corrections and leading to a prediction for $`\mathrm{\Gamma }(\eta \rightarrow 3\pi )`$ more consistent with experiment. To second order in the center-of-mass pion energy zemach–cbar, the Dalitz plot density for the decay $`\eta \rightarrow 3\pi ^0`$ is parameterized as $$M^2\propto 1+2\alpha z$$ (1) with $$z=\frac{2}{3}\sum _{i=1}^{3}\left(\frac{3E_i-m_\eta }{m_\eta -3m_{\pi ^0}}\right)^2=\left(\frac{\rho }{\rho _{max}}\right)^2$$ (2) where $`E_i`$ is the energy of the $`i`$’th pion in the $`\eta `$ rest frame, $`\rho `$ is the distance from the center of the Dalitz plot and $`\rho _{max}`$ the largest kinematically allowable value of $`\rho `$. A uniformly populated Dalitz plot (as predicted by gasser and vecchia) gives $`\alpha =0`$. $`\alpha `$ has been measured previously by Baglin et al. baglin, Alde et al. alde, and Abele et al. cbar. Recent results are shown in figure 1 along with the result of this work. Reference cbar finds a value significantly different from zero and in probable disagreement $`(2\sigma )`$ with theoretical expectations. The results reported in baglin and alde are consistent with both zero and the non-zero expectation of kambor. In this work we report on a determination of $`\alpha `$ from the decay $`\eta \rightarrow 3\pi ^0`$ where the $`\eta `$ was produced by 18.3 GeV/c negative pions incident on liquid hydrogen. The data were collected in 1995 by the Brookhaven National Laboratory E852 collaboration. The apparatus has been described previously brabson, teige and consisted of a large, segmented lead glass calorimeter (LGD), charged particle tracking, veto and trigger systems capable of determining the charged particle multiplicity and the total energy deposited in the lead glass. The data used for this analysis were collected with the “all-neutral” trigger, which required no charged particles, an energy deposition greater than 12 GeV in the lead glass and no photons in the downstream photon veto. $`\eta \rightarrow 3\pi ^0`$ decays were selected by requiring exactly 6 reconstructed clusters in the LGD and a visible total energy greater than 16.5 GeV. All combinations of assignments to the hypothesis $`3\pi ^0`$ were tested and if the best $`\chi ^2`$ corresponded to a confidence level greater than 1% the event was considered a candidate.
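Because eq. (1) is linear in $`z`$, $`\alpha `$ can be read off as half the slope-to-intercept ratio of the acceptance- and phase-space-corrected $`z`$ distribution. The sketch below illustrates only the arithmetic of eqs. (1) and (2); it is not the collaboration’s analysis code, and the array layout, binning, and mass values are illustrative assumptions.

```python
import numpy as np

M_ETA, M_PI0 = 0.5479, 0.1350   # approximate PDG masses in GeV

def z_variable(E):
    """Eq. (2): E is an (N, 3) array of pi0 energies in the eta rest frame."""
    term = (3.0 * E - M_ETA) / (M_ETA - 3.0 * M_PI0)
    return (2.0 / 3.0) * np.sum(term**2, axis=1)

def fit_alpha(z_data, z_phase_space, nbins=20):
    """Fit M^2 proportional to 1 + 2 alpha z to the corrected z distribution.

    z_data:        z for selected eta -> 3 pi0 events
    z_phase_space: z for Monte Carlo events generated uniformly on the
                   Dalitz plot and passed through the same cuts
    """
    edges = np.linspace(0.0, 1.0, nbins + 1)
    n_dat, _ = np.histogram(z_data, bins=edges)
    n_mc, _ = np.histogram(z_phase_space, bins=edges)
    keep = (n_dat > 0) & (n_mc > 0)           # use only populated bins
    ratio = n_dat[keep] / n_mc[keep]          # proportional to M^2
    err = ratio * np.sqrt(1.0 / n_dat[keep] + 1.0 / n_mc[keep])
    zc = (0.5 * (edges[:-1] + edges[1:]))[keep]
    # Weighted straight-line fit: ratio = c * (1 + 2 alpha z).
    slope, intercept = np.polyfit(zc, ratio, 1, w=1.0 / err)
    return slope / (2.0 * intercept)

# e.g.:  alpha = fit_alpha(z_variable(E_data), z_variable(E_mc))
```

Here the division by the uniformly generated Monte Carlo histogram removes both the acceptance and the phase-space dependence, as the text below describes.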
A preliminary fit using only the $`\pi ^0`$ mass constraint was performed to allow evaluation of the $`3\pi ^0`$ effective mass. Figure 2 shows the resulting distribution. Events with a mass less than 0.65 $`\mathrm{GeV}/\mathrm{c}^2`$ (corresponding to $`3\sigma `$ from the $`\eta `$ mass) were subjected to a full kinematic fit with the hypothesis $`\pi ^{-}p\to \eta \mathrm{n}`$; $`\eta \to 3\pi ^0`$. A confidence level (CL) cut selected decays for the final analysis. It was also required that no photon have an energy less than 0.25 GeV and that the minimum separation between photons be larger than 9 cm. The effect and purpose of these requirements are discussed below. The final data set contained 87,500 events.

To determine $`\alpha `$, eq. 1 is fitted to the observed distribution of $`z`$, corrected for acceptance and phase space dependence. The acceptance correction was based on a Monte-Carlo simulation, taking into account the apertures of the apparatus and using a large sample of GEANT generated electromagnetic showers. The requirements on the minimum photon energy and the minimum allowable photon separation mentioned above removed any uncertainties in the Monte-Carlo simulation associated with the transverse development of electromagnetic showers and the behavior of low energy photons. The separation requirement also selected events where the nearest two electromagnetic showers were well resolved. The phase space dependence of $`z`$ was removed by dividing the observed distribution by the distribution due to a uniformly populated Dalitz plot. The distribution resulting from removing the phase space dependence and correcting for the acceptance is proportional to $`M^2`$ and is shown in figure 3 along with the result of the fit to eq. 1. The distribution has been numerically scaled to give a proportionality constant of one. Our result is $`\alpha =0.0047\pm 0.0074`$, where the error is statistical only.

Systematic effects have been considered and the important contributors identified. Since $`\alpha `$ is a static property of the $`\eta `$, a measurement of $`\alpha `$ cannot depend on any data selection requirement, for example, momentum transfer, separation of photons in the detector, photon energy detection thresholds, confidence level of fits, etc. It was observed that our measured $`\alpha `$ remained constant as a function of the CL cut down to about CL = 30%, below which $`\alpha `$ began to fall slightly. This is consistent with an increasing contribution from reconstructed events with one or more wrong assignments of a $`\gamma `$ to a $`\pi ^0`$. It was therefore required that an event have a confidence level greater than 30% for this analysis. Studies of the variation of $`\alpha `$ with the value chosen for the CL cut yielded an estimate of $`\pm 0.003`$ for the systematic contribution associated with this cut. The analysis further required that no two photons have reconstructed impact positions closer than 9 cm, a distance denoted by $`\mathrm{\Delta }\mathrm{r}`$. Variation of the value of $`\mathrm{\Delta }\mathrm{r}`$ used in the analysis between 8.5 and 11 cm yielded our estimate of $`\pm 0.002`$ for the contribution to the systematic error due to this effect. The effect of requiring the lowest energy photon to have an energy larger than 0.25 GeV was investigated by removing this requirement (increasing the data set by 1,140 events) and by replacing it with a requirement of 0.5 GeV (decreasing the data set by 15,230 events).
A contribution of $`\pm 0.001`$ to the systematic error was found. For 20% of the events in the data sample it was possible to choose more than one assignment of photons to pions. The analysis presented here chose the assignment with the best confidence level. The effect of this choice was studied by choosing the second best assignment (when possible) and by using all assignments, each with an appropriate weight, to calculate $`\alpha `$. No statistically significant differences were observed. Finally, the data set was divided into two statistically independent halves, by experimental run number and, separately, by range of momentum transfer. In both cases the values determined for $`\alpha `$ were not statistically significantly different, leading us to conclude that these selections do not contribute to our systematic error. All effects considered combine to give our final estimate of $`\pm 0.004`$ for the systematic error.

## 2 Conclusion

We have measured the $`\alpha `$ parameter of the transition matrix element $`M(\eta \to 3\pi ^0)`$ to be $`\alpha =0.005\pm 0.007(\mathrm{stat})\pm 0.004(\mathrm{syst})`$. Our result differs from one of the previous measurements \[cbar\] by slightly more than $`2\sigma `$ but is consistent with several other measurements \[alde, baglin\]. When compared to theoretical expectations, the value of zero favored by Di Vecchia, Gasser and Leutwyler \[gasser, vecchia\] cannot be ruled out. The result reported here is also consistent with the two values (-0.007 and -0.014) reported by Kambor, Wiesendanger and Wyler \[kambor\].
# The host galaxies of luminous radio-quiet quasars

## 1 Introduction

Models of the cosmological evolution of quasars often use galaxy mergers as the primary mechanism for quasar activation and require the mass of the structure within which a quasar is formed as a basic parameter. One step towards testing hypotheses about quasar initiation is to answer the question: Is a quasar’s luminosity correlated with the luminosity of the structure within which it formed? Such a correlation has been shown to exist for low redshift ($`0<z<0.3`$) Seyferts and quasars with luminosities $`M_V\gtrsim -25`$, in that there appears to be a lower limit to the host luminosity which increases with quasar luminosity. However, this limit is poorly defined, particularly for high luminosity quasars, where the strong nuclear component makes it increasingly difficult to find low luminosity hosts.

Recent work has shown that the majority of nearby galaxies have massive dark objects in their cores, which are suggested to be super-massive black holes potentially capable of powering AGN. These studies have also found evidence for a correlation between the mass of the compact object and the luminosity of the spheroidal component of the host. Assuming a link between nuclear luminosity and black hole mass, the average nuclear luminosity emitted by low redshift quasars is expected to increase with host spheroidal luminosity. In light of this prediction there has been a resurgence of interest in host galaxy studies, and recent work has found weak evidence for a correlation in accord with the relations of Magorrian et al. However, this correlation relies on spheroid/disk decomposition for two quasars with low nuclear luminosities, and only a small number of luminous radio-quiet quasars were observed.

Host galaxy properties of AGN are known to be correlated with radio power: radio galaxies tend to be large spheroidal galaxies, while disk galaxies tend to be radio-quiet. Recent evidence suggests that the hosts of radio-loud quasars are also predominantly massive spheroidal galaxies regardless of the nuclear luminosity. However, studies of radio-quiet quasars with luminosities $`M_V\gtrsim -25`$ have shown that the hosts can be dominated by either disk-like or spheroidal components, or can be complex systems of gravitationally interacting components. There is therefore strong justification for studies to see if these luminosity and morphological trends extend to the hosts of more luminous ($`M_V\lesssim -25`$) radio-quiet quasars.

There have been many recent detections of host galaxies in the optical thanks to results from HST, which add to our knowledge from ground-based studies \[1984a, 1984b, 1992, 1990\]. However, quasar hosts often appear significantly disturbed, as if by interaction or merger, which can lead to strong bursts of star formation and significant extended line and blue continuum emission at optical wavelengths which are not indicative of the mass of the underlying host. The nuclear-to-host light ratio in the optical is also typically higher than at longer wavebands. These problems can be circumvented by observing in the infrared, where the contrast between host and nuclear component is improved and the emission associated with starbursting activity is largely absent: the $`K`$ magnitude is a better measurement of the long-lived stellar populations in the host.
Previous observations in the infrared have been successfully used to determine quasar host galaxy luminosities and morphologies (McLeod & Rieke 1994a;b; Dunlop et al. 1993; Taylor et al. 1996). However, recent advances in telescope design, in particular the advent of adaptive optics systems such as the tip-tilt system on UKIRT, produce clearer images of quasars and enable accurate point spread functions (psfs) to be more readily obtained, as differences between successive observations are reduced. Such advances, coupled with improved analysis techniques, mean we are now able to reveal the host galaxies of luminous quasars with $`M_V\lesssim -25`$ using infrared observations.

In order to obtain enough luminous radio-quiet quasars, our sample was forced to cover redshifts $`0.26\le z\le 0.46`$. At such redshifts, with typical seeing, the structure of the host galaxy is hidden in the wings of the psf from the nuclear component. There are two main ways of proceeding: either the psf can be deconvolved from the quasar light to directly observe the host galaxy, or known galaxy profiles can be used to model the hosts, a nuclear component can be added in, and the profiles can be fitted to the data. Because the host galaxies sometimes have disturbed morphologies indicative of violent mergers, it is difficult to assume a form for the galaxy. However, without such modelling it is not easy to determine the contribution of the host to the light from the centre of the quasar, and deconvolution routines tend to produce biased solutions which may alter important features. For the analysis of our quasar sample, an approach is adopted which uses both methods.

Initially (Section 5) the images were restored using a deconvolution algorithm, based on the Clean algorithm, developed for this problem, which will be described elsewhere. This routine was used to reveal the extent to which the ‘nebulosity’ around the point source is disturbed. Deconvolution of the light from two of our quasars reveals violently disturbed host galaxies indicative of close merger events. In the remainder of our sample, the non-nuclear light is more uniformly distributed around the centre of the quasar. We should note that the resolution provided by this deconvolution technique is probably not sufficient to reveal evidence for weak mergers, where the host galaxy is only slightly disturbed.

Where the image-restoration routine revealed approximate elliptical symmetry in the non-nuclear component, 2D galaxy profiles were fitted to the hosts. Analysis of non-interacting, low redshift galaxies has shown that an empirical fit to both disk and spheroidal systems is given by:

$$\mu =\mu _o\mathrm{exp}\left[-\left(\frac{r}{r_o}\right)^{1/\beta }\right].$$ (1)

where $`\mu `$ is the average surface brightness in concentric elliptical annuli around the core, and $`r`$ is the geometric average of the semi-major and semi-minor axes. Model images were carefully created using this profile and were tested against the data using the $`\chi ^2`$ statistic to determine goodness of fit. Five host parameters were required: the half-light radius, integrated luminosity, axial ratio, angle on the sky, and the power-law parameter of the galaxy $`\beta `$; the nuclear-to-host ratio was fitted as well. Section 6 describes the modelling procedure in detail, and in Section 8 the best-fit parameters are presented for the host galaxies.

Much previous work has produced ambiguous results because of a lack of error analysis and insufficient testing of the modelling.
A detailed analysis of the reliability of the 2D modelling method used in this paper has therefore been undertaken and is presented in Section 7. Although hosts are detected in all of our sample, the upper limit of the host luminosity is only usefully constrained for 9 of the 12 quasars modelled (see Section 8.1). Similar analysis of the best-fit $`\beta `$ parameter, which determines the morphology of the host, reveals that this parameter is, unsurprisingly, more poorly constrained than the luminosity. However, we have created Monte-Carlo simulations of images with the same signal-to-noise as the original images (Sections 9 & 10). By analysing these images using exactly the same procedure as for the original data, we find that it is possible to distinguish between disk and spheroidal structure. Unless stated otherwise we have adopted a flat, $`\mathrm{\Lambda }=0`$ cosmological model with $`H_0=50`$ km s<sup>-1</sup> Mpc<sup>-1</sup> and have converted previously published data to this cosmology for ease of comparison.

## 2 The Sample and Observations

We have selected 13 luminous ($`M_V\le -25.0`$) quasars and one less luminous quasar within the redshift range $`0.26\le z\le 0.46`$. The quasars were checked for radio loudness using the NVSS survey. Three of the 14 quasars were detected at 1.4 GHz in this survey (see Table 1), but their flux densities are all $`<10^{24}`$ W Hz<sup>-1</sup> sr<sup>-1</sup> and they are considered part of the radio-quiet population. Three quasars, 0137−010, 0316−346 and 2233+134, were observed at UKIRT before the tip-tilt system was operational, and so these data are not of the same quality as those from subsequent runs. Of the 13 luminous quasars selected, three (0956−073, 1214+180 and 1636+384) have not had previous attempts to measure host magnitudes and morphologies. It is difficult to assess the significance of the host detections claimed for the other quasars, and of the associated parameters, because of the general lack of error analysis in this field and the great potential for systematic errors caused by the requirement for accurate psf measurements. However, individual results from these studies are compared to the results of this paper in Section 8.4.

The observations were all taken using the $`256\times 256`$ pixel InSb array camera IRCAM 3 on the 3.9 m UK Infrared Telescope (UKIRT). The pixel scale is 0.281 arcsec pixel<sup>-1</sup>, which gives a field of view of $`\sim `$72 arcsec. Our sample of quasars was observed during three observing runs in 09/1996, 09/1997 and 05/1998. For the latter two runs the image quality was exceptional, with a consistent FWHM of 0.45 arcsec observed. The $`K`$-band quasar images were taken using a quadrant jitter pattern. This cycled 2 or 4 times through a 4-point mosaic, placing the quasar in each of the quadrants in turn. The actual position of the central value within each quadrant was shifted slightly for each image to reduce the effect of bad pixels. Each image consists of $`\sim `$100 secs of integration time divided into exposures calculated to avoid saturation. The exposures varied between 5-10 secs for the quasars alone, down to 0.2 secs for the quasars with a bright star on the chip which we hoped to use as a psf star. Standard stars from the sample of UKIRT faint standards were observed for photometric calibration between observations of different quasars. All of the images were corrected for the non-linear response of IRCAM 3 using a formula supplied by the telescope support staff.
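In outline, calibration of this kind reduces to deriving a zero point from each standard-star measurement and applying it to counts in the science frames. A minimal sketch follows (ignoring extinction and colour terms; the numbers and names are purely illustrative, not values from these observations):

```python
import numpy as np

def zero_point(std_mag, std_counts, exptime):
    """Photometric zero point from a standard star,
    defined via m = zp - 2.5*log10(counts per second)."""
    return std_mag + 2.5 * np.log10(std_counts / exptime)

def apparent_mag(counts, exptime, zp):
    """Apparent magnitude from background-subtracted counts (adu)."""
    return zp - 2.5 * np.log10(counts / exptime)

# Illustrative numbers only:
zp = zero_point(std_mag=11.3, std_counts=2.4e5, exptime=5.0)
m_K = apparent_mag(counts=8.0e3, exptime=10.0, zp=zp)
```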
## 3 Obtaining the correct PSF

Obtaining an accurate psf is vital to the analysis of the images. With these ground-based observations the psf varies with seeing conditions and telescope pointing. An experimental psf was therefore determined for each of the quasars by observing a nearby bright star. This led to an unbiased, accurate psf without recourse to the quasar images. For three of the quasars, 0956−073, 1214+180 and 1216+069, there was a nearby star which could be placed on the frame with the quasar. This gave an accurate psf measurement with no loss of integration time on the quasar. If required, the position of the quasar for each observation was altered slightly to allow both the quasar and the psf star to be well within the boundaries of the chip. For the remaining quasars the telescope was offset to a nearby bright star, to use as the psf, before and after each quasar integration (which lasted a maximum of 1600 secs). A number of psf measurements were therefore obtained for each night and each quasar. To ensure consistent adaptive optics correction, the properties of the tip-tilt guiding were matched between quasar and psf measurements. To do this, psf stars were selected to enable tip-tilt guiding from a star of a similar magnitude, distance from the object, and position angle to that used for the quasar image. Magnitudes of the stars chosen to provide a psf measurement are given in Table 3.

By examining fine resolution contour plots of the psf images, it was found that the psf was stable over the course of each night, but varied between nights at the telescope and for different telescope pointings. Because of this, the final stacking of psf images was performed with the same weighting between days as for the quasar images (see Section 4). As a test of the effectiveness of this procedure in providing the correct psf, the fit between the measured psf and the image for quasar 1543+489 has been compared to the fit between the same image and psfs measured for different quasars. Fig. 1 shows the radial profile of $`\sigma ^2(\mathrm{image}-\mathrm{psf})`$ calculated in circular annuli of width 0.5 arcsec. Here the psf has been scaled so that the total intensity is the same for both quasar and psf. As can be seen, the psf observed with the quasar image matches the quasar close to its centre better than any other psf. As the core of the psf is undersampled and the sampling between psf and image has not been matched (see Section 6.2 for further discussion of this), this result demonstrates the validity of the psf measuring technique.

## 4 The Data Reduction

The data reduction procedure was optimised to search for low surface brightness extended objects. The same procedure was used for both image and psf data, so no extra differences between measured and actual psf were introduced. In calculating the flat-field for each mosaic, it was decided to ignore all pixels within the quadrant containing the quasar. This ensured that the flat-fielding technique was not biased to remove or curtail extended emission, which could occur if a routine based on pixel values, such as a $`\sigma `$-clipping routine, was used. Outside this quadrant, any areas occupied by bright stars were also removed from the calculation. The sky background level, assumed to be spatially constant, was also calculated ignoring these areas. As the images were undersampled in the central regions, we used a sub-pixel shifting routine to centralise the images before they were median stacked to provide the final composite, as sketched below.
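A minimal sketch of this recentring-and-stacking step, assuming bad pixels have already been replaced, is as follows; scipy's spline-interpolation shift stands in for the actual routine, and the centroid is taken over the whole frame for brevity (in practice a window around the quasar would be used):

```python
import numpy as np
from scipy.ndimage import shift as subpixel_shift

def intensity_centroid(img):
    """Intensity-weighted centre (y, x) of an image."""
    y, x = np.indices(img.shape)
    total = img.sum()
    return (y * img).sum() / total, (x * img).sum() / total

def recentre_and_stack(frames):
    """Shift each frame so its intensity-weighted centre lands on the array
    centre (cubic spline interpolation), then median-stack the results."""
    frames = [np.asarray(f, dtype=float) for f in frames]
    cy = (frames[0].shape[0] - 1) / 2.0
    cx = (frames[0].shape[1] - 1) / 2.0
    shifted = []
    for f in frames:
        y0, x0 = intensity_centroid(f)
        shifted.append(subpixel_shift(f, (cy - y0, cx - x0), order=3))
    return np.median(shifted, axis=0)
```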
In practice, after replacing bad pixel values, the images were shifted using a bicubic spline interpolation routine, in order to equalise their intensity-weighted centres, and were median stacked. Because the psf quality was found to vary from day to day, the final stacking of psf images was performed with the same weighting between days as for the quasar images. Finally, any nearby bright objects in the psf frame were replaced by the average value in an annulus of width 0.5 arcsec around the psf centre, at the distance of the object from that centre. In the quasar frame any nearby objects were noted and blanked out of the error frame, so that they were not included when measuring $`\chi ^2`$ between image and model (see Section 6).

## 5 Determining Simple Morphology

Because of the often disturbed morphology of quasar hosts, it is not possible to immediately assume a form for the galaxy structure. For instance, if the host is involved in a close merger, modelling it with a smooth profile will not provide the correct host luminosity. The extended wings of the psf from the intense nuclear component hide the host galaxy sufficiently that direct observation cannot easily reveal even violently disturbed morphologies. Simply subtracting a multiple of the psf from the centre of the image will reveal some structure, but a deconvolution routine will reveal more. The routine used was a modified Clean algorithm developed for this problem, which will be described elsewhere. The results show that this routine was of sufficient quality to reveal the approximate symmetry of the host on a scale which includes most of the light important for modelling the galaxy. Examination of the deconvolved images revealed a clear distinction between disturbed and symmetric systems. Two of the quasars have morphologies which showed no sign of elliptical symmetry and instead show signs of recent merger. The deconvolved images of these quasars are shown in Fig. 2. From these images a value for the non-nuclear luminosity was obtained by summing the residual light excluding the central pixel. Unfortunately, the non-nuclear structure revealed was not of sufficient quality to be extrapolated into the central region, so the amount of nuclear light which originates in the host galaxy is unknown. Magnitudes obtained from these deconvolved images should therefore be treated as approximate. The structure revealed for these quasars is discussed in Section 8.4. Deconvolving the remaining quasars revealed host galaxies with approximate elliptical symmetry.

## 6 Modelling the Quasar Images

Having determined that the extended structure around a quasar did not show signs of a disturbed morphology indicative of a close merger, and instead revealed approximate elliptical symmetry, the luminosity and morphology of the host galaxy were estimated by fitting model images to the data. A $`\chi ^2`$ minimisation technique described below was used to estimate the goodness of fit of the models.

### 6.1 Producing a model galaxy

In this Section we describe how the empirical galaxy surface brightness profile given by Equation 1 was used to estimate the contribution from the host to the counts in each pixel. This had to be done carefully because of the poor sampling of the images. The profile given by Equation 1 has proved to be an excellent fit to many different types of galaxy, and it is assumed that, if the hosts are not undergoing violent merger, this profile provides a good representation of the galaxy light.
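As a concrete illustration of Equation 1, the sketch below evaluates the unconvolved profile on a pixel grid, using the geometric-mean radius of the elliptical isophote through each pixel; $`\beta =1`$ gives an exponential disk and $`\beta =4`$ a de Vaucouleurs-like spheroid. The names are hypothetical and this is only the raw model: the sampling and psf-convolution subtleties that make the real calculation delicate are described next.

```python
import numpy as np

def galaxy_model(shape, centre, mu0, r0, beta, axial_ratio=1.0, theta=0.0):
    """Surface brightness mu = mu0 * exp[-(r/r0)**(1/beta)] on a pixel grid,
    where r is the geometric mean sqrt(a*b) of the semi-axes of the
    elliptical isophote passing through each pixel."""
    y, x = np.indices(shape, dtype=float)
    dy, dx = y - centre[0], x - centre[1]
    # rotate into the frame of the major/minor axes
    u = dx * np.cos(theta) + dy * np.sin(theta)
    v = -dx * np.sin(theta) + dy * np.cos(theta)
    q = axial_ratio  # b/a <= 1
    r = np.sqrt(q * u ** 2 + v ** 2 / q)
    return mu0 * np.exp(-(r / r0) ** (1.0 / beta))

# e.g. a beta = 1 (disk-like) model on a 64x64 grid:
disk = galaxy_model((64, 64), (31.5, 31.5), mu0=100.0, r0=4.0,
                    beta=1.0, axial_ratio=0.8, theta=0.3)
```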
Before the method is described, it is useful to revise how an image is obtained from the light emitted by the quasar. Initially, the continuous distribution of light is altered by the atmosphere and the optics of the telescope in a way approximately equivalent to convolution with a continuous point spread function. The resulting continuous distribution is sampled by the detector, which integrates the light over each pixel. This is equivalent to convolving the light with a square function of value 1 within a pixel and 0 otherwise, and sampling the resulting distribution assuming uniform response across each pixel. The dithering and subsequent stacking of the images provides another convolution, although by sub-pixel shifting the images prior to stacking, the effective smoothing width of this function is reduced to less than 1 pixel. The whole process can therefore be thought of as convolving the true psf, the quasar light and a narrow smoothing function (of width $`\sim `$1 pixel), and sampling the resulting continuous image on the pixel scale. Because the psf measurements were obtained using exactly the same procedure as the quasar images, the measured psf is the result of a convolution of the true psf with the narrow smoothing function. Convolution is commutative and associative, so this smoothing function is accounted for in the measured psf and further smoothing of the model galaxy is not required. For this reason the unconvolved model galaxy should not be obtained by simply integrating the model profile over each pixel.

Sampling the model galaxy profile onto a grid with spacing equivalent to the pixel scale and convolving with the psf will not produce a correct model galaxy, because of aliasing. In order to limit the aliased signal, the procedure adopted was to extrapolate the psf onto a grid finer than the pixel scale using a sinc function (so that no extra high frequency components are introduced). The surface brightness of the model galaxy was then calculated at each point on a grid of the same size and was convolved with the psf on this grid. To provide the final model, this distribution was subsampled onto the pixel scale. Progressively finer grids were used until the total counts in the sampled model galaxy converged, at which point the majority of the aliased signal is assumed to have been removed. The algorithm adopted used a fine grid with 4$`\times `$ the number of points at each successive step, and was stopped when the average of all the counts differed from that of the previous step by a fraction of less than 0.01.

Unless stated otherwise, all model host luminosities and magnitudes which relate to a 2D profile should be assumed to have been integrated to infinite radius. For the large radius within which such models were fitted to the data, this makes only a small difference to the luminosity. The five parameters of the host galaxy are the geometric radius of the elliptical annulus which contains half of the integrated light, $`R_{1/2}`$; the total integrated host luminosity, $`L_{\mathrm{int}}`$; the projected angle on the sky, $`\alpha `$; the axial ratio, $`a/b`$; and the power law parameter, $`\beta `$. In this paper, the integrated host luminosity is quoted in counts (analogue data units, adu) detected in a 1 sec exposure.

### 6.2 The nuclear component

In principle, adding in the nuclear light is simple: the correct amount is added to the centre of the model galaxy to minimise $`\chi ^2`$ between model and observed images. However, it is important to account for all of the nuclear light.
Differences between the measured and true psf caused by undersampling, seeing variations or effects such as telescope shake must be accounted for, even though the adopted observing strategy has limited some of these. In particular, when a nearby star (as used in Section 7.5) is deconvolved, the resulting light appears not only in the central pixel but in the surrounding pixels as well: if similar components from the nuclear light are not accounted for in the quasar images, the host luminosities and morphologies derived will be wrong. Because of the large peak in both the image and the psf, trying to alter the sampling of the images by extrapolating onto a fine grid and resampling without inducing unwarranted frequency components causes ‘ringing’ in the images which is large enough to affect the results of the modelling. It would be possible to use a different extrapolation technique, but this risks altering the image and measured psf in different ways. Instead, a simpler correction to this problem is adopted: rather than only adding a multiple of the psf to the central pixel, variable multiples of the psf are also added centred on the surrounding 8 pixels.

In the perfect case where the measured psf is accurate, such free parameters make convergence to the minimum $`\chi ^2`$ value slower but do not affect the position of the minimum: the additional components make a negligible contribution to the model. However, suppose there is a discrepancy between the measured and true psf, such that the deconvolved image of the nuclear light consists of a central spike surrounded by corrective components which decrease in magnitude with distance from the spike. Allowing the values of the pixels close to the core to be free parameters in our model will correct these discrepancies, and any light observed originating away from the core will be more likely to be from the host galaxy and not from escaping nuclear light. The opposite is also true: these extra components will also correct psf measurement errors which cause light from the galaxy to be wrongly ascribed to the core (as for quasar 0956−073; see below). Undersampling problems do not affect the modelled galaxy to the same extent, because the galaxy light is more uniformly distributed and discrepancies are smoothed. In particular, the total integrated light measured to be from the host will only be minimally affected (see below).

Quasar 0956−073 was modelled using different numbers of these extra components, and the recovered host parameters are given in Table 2. As expected, the $`\chi ^2`$ value decreases with an increasing number of additive psf components, showing that the fit between model and image is being improved. The recovered parameters for 1 or 9 additive components show moderate differences, but allowing more components makes no further significant change. Because the host luminosity increases for this quasar with increasing numbers of added components, the sum of the extra psf contributions must be negative, which suggests that, for this quasar, the measured psf has a slightly broader central profile than the quasar image. For all of the quasars modelled, the total light within the eight extra psf components was not found to be systematically positive or negative.
If adding 8 extra ‘psf components’ around the core had always resulted in a total positive (or negative) component being subtracted from the quasar, this would have suggested either that these components were removing host light in addition to ‘leaking’ nuclear light, and that the host profile breaks down in a systematic way for these pixels, or that our observing strategy had produced a systematically incorrect psf. For all quasar host galaxy studies, there is no escaping the fundamental problem that the galaxy profile has to be extrapolated into the central region from some radius (to separate host and nuclear light). By adding in these extra components, all we are doing is extrapolating from different distances, and arguing that simply extrapolating only into the central pixel is not necessarily correct for these data. This is because the measured psf is incorrect for the (discrete) deconvolution problem we are trying to solve.

### 6.3 The error frame

Determining the fit between model and image requires an estimate of the relative noise in each pixel, arising both from intrinsic noise in the image and from differences between the measured and true psf. Ideally these errors should be estimated without recourse to the images but, unfortunately, this is impractical for these data. Faced with a similar problem, Taylor et al. estimated the radial error profile by measuring the error in circular annuli of the image after subtracting a psf centred on the quasar and scaled to the same total luminosity. Using both the image and the measured psf in this way allows the error from psf differences to be included in the error frame. However, this model assumes that the host galaxies do not introduce any intrinsic variations within the annuli in which the variance is calculated. Such variations could result either from differences between the radial profile of the host (convolved with the psf) and the psf profile, or from significant deviation from circular host profiles. These effects will be small, because the hosts only contribute a small percentage of the light and deviations from circular hosts are small.

In order to reduce the number of parameters required to calculate the error profile, and hopefully alleviate any damage caused by calculating the error frame from the data, Taylor et al. showed that a function of the form $`\mathrm{log}(\sigma _i)=A\mathrm{exp}\left[-0.5(r/S)^\gamma \right]-B`$, where $`A`$, $`B`$, $`S`$ and $`\gamma `$ are four parameters, provides a good fit to the resulting profile. This function models both the error in the central regions of the image and the Poisson background error outside the core. The four parameters are determined for each quasar by least-squares fitting to the observed error profile. Such a fitting procedure also enables the error to be determined in the central regions, where the gradient is too steep and there are too few points in each annulus to predict the error confidently. In general this function fits the observed error profiles very well, and it is used here (without including a contribution from the host) to estimate the errors in each pixel. Fig. 3 shows the observed and fitted error profiles for quasars 0956−073 and 1543+489, with and without the best-fit model galaxy included in the analysis. The best-fit host around quasar 1543+489 has an axial ratio of 0.89, in contrast to 0.64 for 0956−073. The deviation of the host around 0956−073 from circular symmetry explains why the error profile changes more when this host is included than is the case for quasar 1543+489.
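A sketch of this construction follows: measure the rms of the psf-subtracted image in circular annuli, then fit the four-parameter form quoted above by least squares. scipy's curve_fit stands in here for whatever fitter was actually used, and the names and starting values are illustrative:

```python
import numpy as np
from scipy.optimize import curve_fit

def annular_rms(residual, centre, width=1.0):
    """rms of a psf-subtracted image in circular annuli about `centre` (pixels)."""
    y, x = np.indices(residual.shape)
    r = np.hypot(y - centre[0], x - centre[1])
    radii, rms = [], []
    for lo in np.arange(0.0, r.max(), width):
        sel = (r >= lo) & (r < lo + width)
        if sel.sum() > 10:  # need enough pixels for a stable estimate
            radii.append(lo + 0.5 * width)
            rms.append(residual[sel].std())
    return np.array(radii), np.array(rms)

def log_sigma_model(r, A, B, S, gamma):
    """The four-parameter radial error profile for log(sigma)."""
    return A * np.exp(-0.5 * (r / S) ** gamma) - B

def fit_error_profile(radii, rms, p0=(3.0, 1.0, 2.0, 1.0)):
    """Least-squares fit of the error-profile parameters (A, B, S, gamma)."""
    popt, _ = curve_fit(log_sigma_model, radii, np.log10(rms),
                        p0=p0, maxfev=10000)
    return popt
```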
### 6.4 $`\chi ^2`$ minimisation

The algorithm used to find the global minimum in $`\chi ^2`$ was a multi-dimensional direction set technique based on a method introduced by Powell in 1964. This algorithm requires an initial ‘start point’ from which it works its way downwards until it finds the minimum position. Briefly, the algorithm minimises $`\chi ^2`$ by sequentially adjusting each parameter (i.e. minimising along the axes of the parameter space), and then minimises $`\chi ^2`$ along the vector in parameter space along which the greatest change in $`\chi ^2`$ was made in the previous steps. This procedure is repeated until the algorithm converges. Additionally, for all the quasars it was ascertained that the algorithm had found the correct minimum, and had not erroneously finished early because of a numerical convergence problem, by repeatedly re-running the algorithm from the previous best-fit parameters until the total host luminosity found for successive runs differed by less than 0.1 adu. The testing performed for this algorithm is described in Section 7.2.

For all the results presented in Section 8, the algorithm was started from an initial position in parameter space corresponding to a broad, low luminosity galaxy. This was chosen so that the algorithm avoided straying into the region of parameter space where all of the host light is in the core (i.e. small $`R_{1/2}`$). This is a relatively flat region of the $`\chi ^2`$ surface, and it can therefore take a long time for the algorithm to work its way out of it. All pixels within a radius of 31 pixels (8.7 arcsec) measured from the centre of the quasar were included in the calculation of $`\chi ^2`$. For all of the best-fit host galaxies, the difference between the model host luminosity within this area and the integrated luminosity was negligible, which implies that this area contained all of the important signal.

## 7 Testing the Modelling Procedure

### 7.1 Robustness to the error profile

As a test of the robustness of the best-fit host luminosities to the determination of the error profile, we have modelled our image of quasar 0956−073 using different error profiles. Quasar 0956−073 was chosen for this test because the derived axial ratio of its host is the lowest of any quasar (although the range of values is quite small: Section 8.3). If the galaxy is important in the error frame calculation, the error profile calculated for this quasar as in Section 6.3 should be the most affected by the fact that we are ignoring the host (see Fig. 3). Using an error profile calculated as in Section 6.3 but from the image only (i.e. not subtracting the psf), the integrated host brightness was found to drop from 350.1 adu to 316.6 adu, corresponding to a variation of $`\sim `$0.1 mag. We have also tried re-calculating the error frame from the image minus the best-fit model image (galaxy and nuclear component convolved with the psf), again using the above formula to fit the error frame. Radial profiles of the two error frames are shown in Fig. 3. The best-fit model parameters were used to calculate a new error frame, and we repeated this process until the best-fit host luminosity converged (subsequent iterations altered the integrated host luminosity by less than 0.1 adu). The final best-fit luminosity was found to be 351.5 adu, a negligible difference from the original minimum.

### 7.2 Finding the minima

Obvious tests to perform are that there is only one minimum for each quasar, and that the $`\chi ^2`$ function is well behaved around this point.
Obviously, it is impossible to cover every position in parameter space to check that $`\chi ^2`$ is well behaved and that there are no local minima. However, we have examined the region of parameter space of interest using a variety of techniques and have found no potential problems. The minimisation algorithm is itself designed to cover a large region of parameter space: the algorithm sequentially searches for the minimum along a series of vectors (see Section 6.4 for details), and considers a large number of diverse values along each vector. Re-running the algorithm starting at the best-fit location previously found also tests any minimum along each axis in parameter space, as does the calculation of the error bars described in Section 7.4. The shapes of the surfaces around each minimum are also revealed by this calculation. A test for local minima has been performed for quasar 0956−073 over a larger region of parameter space: the minimisation algorithm was started at a large number of diverse initial host parameters, and no significant change in the best-fit parameters was discovered. Quasar 0956−073 was chosen for this test because its signal-to-noise is typical of our sample. Fig. 4 shows a ‘slice’ through parameter space revealing how smoothly the constrained $`\chi ^2`$ minimum varies with fixed host luminosity for quasar 0956−073. To calculate each point of this curve, all of the parameters except the host luminosity were varied until the constrained $`\chi ^2`$ minimum was reached. The remarkable smoothness of this curve demonstrates both that the global minimum is well pronounced, with the function varying smoothly towards it, and that the minimisation routine is finding the correct minimum at each point: if it were not, a rougher surface would be expected, signifying that the optimum position had not been reached for each host luminosity.

### 7.3 Using the $`\chi ^2`$ statistic

Use of the $`\chi ^2`$ statistic is dependent on the error in each pixel being independent of the errors in the other pixels. This is expected if the errors in the images are dominated by Poisson shot noise. Any large-scale differences between the actual and model host could produce correlated errors, although these would hopefully have been discovered by the analysis of Section 5. It is possible that small-scale discrepancies remain that extend across more than one pixel; however, the relatively large pixel scale works to our advantage by reducing the likelihood of this. The central limit theorem then suggests that the error in each pixel should have an approximately Gaussian distribution. The minimum $`\chi ^2`$ values are highly dependent on the normalisation of the error frames, and cannot directly provide tests of the model fits. The positions of these minima are unaffected by the normalisation of the error frame, as they depend only on relative variations between pixels. Examining the reduced $`\chi ^2`$ values at the minima given in Table 4, we see that the reduced $`\chi ^2`$ is less than 1 for the majority of the quasars, and deduce that the procedure outlined in Section 6.3 slightly over-estimates the error in each pixel. This is as expected, due to the effect of the host galaxy. The confidence intervals calculated in Section 7.4 will therefore be slightly too large, thus providing a moderately pessimistic error analysis. As any nearby companions were excluded when measuring $`\chi ^2`$, the number of pixels used, presented in Table 4, varies between quasars.
For quasar 1214+180, a diffraction spike from a nearby star which ran close to the quasar was also excluded. Unfortunately, the position of the pixels which are not modelled matters more than their number and, for this quasar, the diffraction spike covered a highly important region of pixels. Even though the area covered was small, the modelling suffered greatly.

### 7.4 Calculating error bars on the parameters

Provided that the galaxy model is a good representation of the true underlying host galaxy, that the errors between the model and image are uncorrelated between separate pixels, and that the procedure in Section 6.3 provides approximately the correct error frame (see Section 7.3), it is possible to calculate error bars on the true parameters using the $`\chi ^2`$ statistic. The procedure for doing this is to hold the chosen parameter fixed at a certain value, and minimise over the remaining parameters to find the local minimum in $`\chi ^2`$. The end points of the 68.3% confidence interval on the best-fit parameter are given by the points for which $`\mathrm{\Delta }\chi ^2=\chi ^2-\chi _{\mathrm{min}}^2=1`$, where $`\chi _{\mathrm{min}}^2`$ is the minimum value calculated allowing all parameters to vary. A standard binary search has been used to find the required limits. As well as allowing error bars to be calculated, this procedure enables the parameter space to be examined and any problems for each quasar to be spotted.

In order to match the light from the host galaxies, the integrated host luminosity, $`\beta `$ and $`R_{1/2}`$ are coupled. The determination of the error bars is therefore complicated by the question ‘what limits, if any, should be placed on the parameters being adjusted to find the constrained minima?’. In finding the global minima, all of the parameters are effectively allowed to vary over all space: although bounds are placed on the parameters, they are not reached (except when modelling the star; see Section 7.5). However, at fixed integrated host luminosity these limits are often reached, because the profiles required to optimally match the light do not necessarily have to be those of galaxies. The philosophy adopted is that all the parameters should be allowed to vary except $`\beta `$, on which limits of $`0.25<\beta <6.0`$ should be set to provide some adherence to standard galaxy profiles. For quasar 0956−073, we have examined the required cut through parameter space for the integrated host luminosity, calculated by minimising over all other parameters to obtain each point. The distribution of local $`\chi ^2`$ minima is shown in Fig. 4: the curve displays simple structure, monotonically decreasing to the global minimum from both directions, so we are justified in using the simple $`\mathrm{\Delta }\chi ^2=1`$ cut-off for the error bars. The resulting 68.3% confidence interval for the luminosity is also shown. The value of $`\chi ^2`$ depends on the error frame used, and it is expected that the error bars do so as well. The effect of altering the error frame for quasar 0956−073 has been tested by using the error frame calculated from the image only, as in Section 6.3. Using this error frame, the 68.3% confidence interval on the host magnitude changed slightly from $`-25.11<M_K<-24.87`$ to $`-25.10<M_K<-24.66`$.

### 7.5 Fitting to a normal star

On the same frame as quasar 0043+039 we observed a star of similar signal-to-noise to the quasar.
As a test of the fitting procedure, we decided to see if we could fit a ‘galaxy’ to the star. Starting from an initial position in parameter space corresponding to a broad, low luminosity galaxy, as adopted in all of the modelling, the fit converged to the best-fit parameters given in Table 3. As can be seen, the fitting procedure rolled down the hill towards a host galaxy of very low luminosity. At such low total luminosity the remaining four galaxy parameters are poorly determined: altering these parameters results in a very small change in $`\chi ^2`$. Consequently it is no surprise to find that the best-fit value of $`\beta =6`$ lies at one of the limits set in the modelling procedure.

## 8 Results of the Analysis

### 8.1 Luminosities

For three of the quasars, analysis of how $`\chi ^2`$ varies within the parameter space revealed that the best-fit host luminosity was not well constrained. A host galaxy was established to be present, in that a lower luminosity limit was determined in all cases; however, the maximum light which could have come from the host was not clear, because the shape of the host was not sufficiently resolved. The morphology of the best-fit galaxy at large $`L_{\mathrm{int}}`$ could alter to place the majority of the host light in the central region. This effect could have been avoided by placing limits on $`R_{1/2}`$ or, for instance, using the near-infrared Fundamental Plane, although such upper limits would have been highly dependent on the criteria set. The host luminosity is ultimately limited by the total light in the image, and it is expected that the host luminosities for these quasars do have upper bounds at high values of $`L_{\mathrm{int}}`$, but these high values would not be of any use in determining the actual host light. For the remaining nine quasars, the minima were sufficiently constrained to provide 68.3% confidence intervals. Comparison of the different confidence intervals provided information on the depth of the valleys within which each minimum was found, and hence on the quality of each determination.

Host and nuclear luminosities for our quasars are compared with the results of other studies in Fig. 5. In order to compare with the $`H`$-band host galaxy studies undertaken by McLeod & Rieke (1994a;b), we convert their total (nuclear + host) and host luminosities to the $`K`$-band by applying a single conversion factor to the apparent magnitudes. This then sets the relative normalisation of the $`K`$-band and $`H`$-band samples; conversion to absolute magnitudes is subsequently undertaken in exactly the same way for all of the infrared samples. In a study of the energy distribution of the PG quasars (from which McLeod & Rieke chose their samples), Neugebauer et al. found $`H-K=0.90`$ for the sample of McLeod & Rieke \[1994a\] and $`H-K=0.98`$ for McLeod & Rieke \[1994b\]. In the upper panel of Fig. 5, we adopt these values to convert the total luminosities of the McLeod & Rieke quasars into the $`K`$-band. The light from the galaxy component is assumed to be dominated by an evolved stellar population, the colour of which reddens with increasing redshift. For nearby galaxies $`H-K\simeq 0.25`$, which was used by McLeod & Rieke to convert galaxy absolute magnitudes. However, the apparent $`H-K`$ is dependent on redshift and, at the redshifts of the quasars imaged by McLeod & Rieke (1994a;b), $`H-K\simeq 0.6`$ is expected for an evolved stellar population. This was adopted to convert the McLeod & Rieke galaxy luminosities into the $`K`$-band.
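For reference, the conversion to absolute magnitudes in the adopted flat, $`\mathrm{\Lambda }=0`$ cosmology has a closed form, since flatness with $`\mathrm{\Lambda }=0`$ implies an $`\mathrm{\Omega }=1`$ (Einstein-de Sitter) universe. The sketch below applies it; the K-correction is left as a caller-supplied number (ours is taken from Glazebrook et al. rather than computed here), and the example values are purely illustrative.

```python
import numpy as np

C_KMS = 2.998e5  # speed of light in km/s
H0 = 50.0        # km/s/Mpc, as adopted throughout this paper

def luminosity_distance_mpc(z):
    """d_L = (2c/H0) * [(1 + z) - sqrt(1 + z)] for a flat, Lambda = 0 universe."""
    return (2.0 * C_KMS / H0) * ((1.0 + z) - np.sqrt(1.0 + z))

def absolute_mag(m_app, z, k_corr=0.0):
    """M = m - 5*log10(d_L / 10 pc) - K(z); K(z) is supplied by the caller."""
    d_pc = luminosity_distance_mpc(z) * 1.0e6
    return m_app - 5.0 * np.log10(d_pc / 10.0) - k_corr

# e.g. a host with apparent K = 17.0 at z = 0.35 (illustrative numbers only)
M_K = absolute_mag(17.0, 0.35, k_corr=0.3)
```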
We have also checked the calibration of the McLeod & Rieke samples and of our sample of modelled quasars (with 6 overlapping objects) against the data of Neugebauer et al. The average total quasar luminosities for the subsamples are in good agreement, although individual values vary by up to $`\sim `$0.7 mag, presumably due to intrinsic quasar variability. One quasar (1354+213) was imaged by McLeod & Rieke \[1994b\], by Neugebauer et al. and in our study. Neugebauer et al. derived $`H-K=1.0`$ for this object, which is higher than the $`H-K=0.3`$ derived by combining the McLeod & Rieke $`H`$-band and our $`K`$-band observation. However, the McLeod & Rieke observations and ours were undertaken at different epochs, and the luminosity is not expected to remain constant. The study of Taylor et al. was performed in the $`K`$-band, and the apparent $`K`$-band magnitudes of host and nuclear components were taken directly from that work. The data from the different infrared samples were then converted to absolute magnitudes, applying the $`K`$-correction of Glazebrook et al. for the host galaxy and assuming that the nuclear component follows a standard power law spectrum $`f(\nu )\propto \nu ^{-0.5}`$.

Using the error bars calculated in Section 7.4 to weight the data, the average integrated host galaxy magnitude for our quasars was found to be $`M_K=-25.15\pm 0.04`$. For comparison, when converted for cosmology exactly as our data, the sample of Taylor et al. gives $`M_K=-25.68`$, McLeod & Rieke \[1994a\] $`M_K=-25.42`$ and McLeod & Rieke \[1994b\] $`M_K=-25.68`$. Recent determinations of the $`K`$-band luminosity of an $`L^{}`$ galaxy have resulted in $`M_K^{}=-24.6`$, compared to previous determinations of $`M_K^{}=-24.3`$ and $`M_K^{}=-25.1`$. The Gardner et al. value is plotted in the top panel of Fig. 5. This shows that the average luminosity of our hosts is $`\sim `$1.6 times that of an $`L^{}`$ galaxy. Note that for all three values the derived average luminosity is 1-2 times that of an $`L^{}`$ galaxy, and the conclusions of Section 11.1 are not affected by this choice.

We compare our sample to recent HST $`R`$-band results in the lower panel of Fig. 5, assuming an apparent $`R-K=2.5`$ for the total light from our quasars, based on the average value for the 6 quasars which overlap between our sample and the sample of Neugebauer et al. The $`R-K`$ colour of an evolved stellar population, assumed to dominate the host galaxies, is dependent on the redshift of the source and, for the redshifts of our sample ($`z\simeq 0.35`$), is expected to be $`\sim `$3.5. All the data (including our data after conversion to apparent $`R`$-band magnitudes) presented in the bottom panel of Fig. 5 were adjusted for cosmology assuming that the nuclear component has a spectrum of the form $`f(\nu )\propto \nu ^{-0.5}`$, and the galaxy component $`f(\nu )\propto \nu ^{-1.5}`$.

### 8.2 Morphologies

Morphologies are parametrised by the best-fit value of $`\beta `$: $`\beta =1`$ corresponds to disk-like, and $`\beta =4`$ to spheroidal, profiles. The technique described in Section 7.4 has been used to reveal how well the $`\beta `$ parameter is constrained by the modelling. The result of this analysis is presented in Table 5. As can be seen, the $`\beta `$ parameter is well constrained for fewer quasars than the luminosity, and the $`\chi ^2`$ error bars reveal a highly skewed distribution for the expected true value given the best-fit $`\beta `$ value.
In order to correctly determine the differential probability between disk and spheroidal profiles, we would need to know the relative dispersion of $`\beta `$ for each morphological type. However, by examining the best-fit parameters, the error bars on $`\beta `$, and the shape of the $`\chi ^2`$ surface, on which we have information from the binary search used to find the error bars, we can infer the best-fit morphology for some of the quasars. The suggestion from this is that luminous radio-quiet quasars can exist in hosts dominated by either disk-like or spheroidal components. A histogram of these data is plotted in Fig. 8, where the distribution is compared to that recovered from simulated data with exact $`\beta =1`$ or $`\beta =4`$ profiles.

### 8.3 Axial ratios and angles

Analysis of the parameter space reveals that the axial ratio and projected angle of each host are better constrained than the other parameters. Fig. 6 shows histograms of these parameters for all quasars modelled. The spread of axial ratios is small, with $`a/b=0.79\pm 0.03`$. This is in agreement with the values found by McLure et al., but higher than found by Hooper, Impey & Foltz. The projected angles are uniformly distributed, as expected.

### 8.4 Highlighted results for selected quasars

#### 8.4.1 Quasar 0043+039

The broad-absorption-line (BAL) quasar PG0043+039 has been the subject of two previous studies to determine its host galaxy properties. It was observed in the $`i`$ band by Veron-Cetty & Woltjer, who determined $`M_i=-23.9`$ if the host is a disk-like ($`\beta =1`$) galaxy, or $`M_i=-24.7`$ for a spheroidal ($`\beta =4`$) galaxy. This quasar was also observed using the wide-field camera on HST by Boyce et al., who used a cross-correlation technique to determine that the host was slightly better fitted by a disk galaxy with $`M_V=-21.6`$. We also find that the dominant morphology is disk-like and calculate $`M_K=-25.29`$. The old burst model of Bruzual & Charlot predicts $`V-K=3.3`$, which is consistent with the derived $`V-K=3.7`$.

#### 8.4.2 Quasar 0316−346

This quasar was previously observed using the wide field camera on HST, and the host was found to reveal evidence of a merger, in particular tidal tails extending $`\sim `$20 kpc west of the quasar. Bahcall et al. also provide a 2D fit to the host properties and find that the best-fit host is a disk galaxy with $`M_V=-22.3`$. We also calculate a best-fit disk galaxy and find $`M_K=-25.44`$, giving $`V-K=3.1`$, again consistent with the old burst model of Bruzual & Charlot, which predicts $`V-K=3.3`$.

#### 8.4.3 Quasar 1214+180

There have been no previous attempts to determine the morphology of the host galaxy around this quasar, possibly due to the nearby star which was utilised in this work to obtain an accurate psf. Unfortunately, in our images a diffraction spike from this star passed close to the quasar, reducing the area that could be used to calculate $`\chi ^2`$. Although the modelling converged to give basic galaxy parameters, further analysis of the parameter space revealed that this minimum was not well constrained.

#### 8.4.4 Quasar 1216+069

Our analysis of this quasar benefited because the images were obtained using the tip-tilt system and there is a nearby bright star which was placed on the same frame as the quasar and used to obtain a psf measurement. Previously, ‘nebulosity’ had been observed around this quasar, and a more detailed HST study found a best-fit spheroidal ($`\beta =4`$) galaxy with $`M_V=-22.3`$.
We also find that the most likely host is a large spheroidal galaxy and obtain $`M_K=-25.1`$, giving $`V-K=2.8`$.

#### 8.4.5 Quasar 1354+213

Using a psf subtraction technique, McLeod & Rieke \[1994b\] found a residual host galaxy with $`M_K=-25.6`$ when converted to our cosmology using the $`K`$-correction from Glazebrook et al. and the apparent colour correction $`H-K=0.6`$ (see Section 8.1). Our best-fit host luminosity was $`M_K=-25.2`$. Analysis shows that the luminosity and the $`\beta `$ parameter are both tightly constrained by the modelling, and the best-fit $`\beta =0.73`$ suggests that the host is dominated by a disk component. The rest-frame nuclear-to-host ratio for this quasar is only $`\sim `$6.8 (the apparent nuclear-to-host ratio is $`\sim `$4.6), which explains why the derived parameters have small error bars.

#### 8.4.6 Quasar 1636+384

We are not aware of any previous attempts to determine the luminosity and morphology of the host galaxy of quasar 1636+384. Preliminary deconvolution of the light revealed that the excess, non-central light displayed a morphology greatly disturbed from elliptical symmetry (as shown in Fig. 2). The structure includes an excess of light to the NW of the core, which is interpreted as a merging component, as well as light around the central core which probably originates from the host. From this image it was unclear how to distinguish between the host and the interacting companion, so the luminosity of the host was estimated by summing pixel values excluding the central pixel. This provided an approximate $`K`$-band absolute magnitude of $`M_K=-23.5`$.

#### 8.4.7 Quasar 1700+518

Quasar 1700+518 is a bright BAL quasar of low redshift ($`z=0.29`$). Such low redshift BAL objects are rare and hard to discover, since the broad absorption lines are in the UV, and consequently quasar 1700+518 has received much interest: specific studies of this quasar have been undertaken in many different wavebands. Because of the low redshift and the brightness of the quasar, 1700+518 has also been included in many samples of quasars imaged to obtain details of their host galaxies \[1985, 1994b\], although these have only provided upper limits for the host magnitude. More recent imaging studies have shown that the morphology of the underlying structure consists of a disturbed host, predominantly to the SW of the core, and a close interacting companion to the NE which is most likely a ring galaxy. Deconvolution of the light from this quasar, as shown in Fig. 2, confirms this picture of the structure. Given the disturbed morphology, it is difficult to know how to split the light in the central pixels into nuclear and host components. As for quasar 1636+384, the host luminosity was estimated by summing the counts in the pixels surrounding the central one (ignoring those from the NE companion). There will be errors caused by leakage of light from the nuclear component and by the contribution of the host to the central pixel. An approximate $`K`$-band absolute magnitude of $`-24.9`$ was obtained for the host galaxy and $`-24.4`$ for the NE companion galaxy.

#### 8.4.8 Quasar 2233+134

Both Smith et al. and Veron-Cetty & Woltjer included this quasar in their samples, but both failed to resolve the host galaxy beyond obtaining upper limits for the luminosity. Hutchings & Neff did resolve the host galaxy and found it to be best fitted by a $`\beta =4`$ model, although they did not recover further information about the galaxy.
However, we find that the most probable host has a disk profile and calculate $`M_K=-24.1`$, the lowest-luminosity host modelled. If we constrain the host to have an elliptical profile, the best-fit luminosity becomes $`M_K=-25.9`$, although the half-light radius is very small for this model ($`R_{1/2}=1.5`$ kpc), which places it a long way from the $`K`$-band Fundamental Plane of Pahre, Djorgovski & de Carvalho. If the host parameters are constrained to lie on this plane, then rerunning the modelling gives a best-fit host with $`M_K=-24.6`$. Neither of these changes would be sufficient to alter our conclusions. ## 9 Simulated Data I - Single component galaxies Trying to recover known host parameters from Monte-Carlo simulations of the actual data enables the distribution of recovered parameters, given the true values, to be determined. Note that the error bars calculated using the $`\chi ^2`$ statistic are instead determined from the distribution of possible true values given the data. These two distributions are not necessarily equal. We need to determine the distribution of recovered values in order to answer questions such as 'Are our results biased towards low $`\beta `$ values?'. In view of the distribution of recovered $`\beta `$ values, it was decided to simulate data to match the images of quasars 0956$`-`$073 and 1543+489. These quasars span the distribution of signal-to-noise of all the images, and 2D model fitting revealed evidence for disk-dominated hosts in both cases. Verification of this result is interesting, as recent work has suggested that the hosts of luminous quasars should be dominated by the spheroidal component (see Section 11.2). ### 9.1 Creating the mock data Simulated galaxies were created using the procedure outlined in Section 6.1, and a single $`\delta `$ function was added to the centre of each to create a 'perfect unconvolved model'. The height of the $`\delta `$ function was chosen to match the total signal of the original images. These models were then convolved with the psf measured to match the quasar. Gaussian noise was added with a radially dependent variance, as given by the error profile calculated in Section 6.3 with the best-fit host galaxy included in the calculation. The error profiles used for quasars 0956$`-`$073 and 1543+489 are given by the solid lines in Fig. 3. Differences between the measured and true psf were included in this analysis, and are therefore included in the noise levels added to the simulated data. This noise model assumes that the errors in different pixels are independent (see Section 7.3). We have simulated 100 images with exact disk hosts, and 100 images with exact spheroidal hosts, for each of the two quasars chosen. The true integrated host luminosity was set at 300 adu for the simulated data of both quasars. This conservative value is below the best-fit value obtained from the data for both quasars, providing a stringent test of the modelling. This is particularly true for a $`\beta =4`$ host: constraining $`\beta =4`$ when modelling the observed image would have resulted in a best-fit $`L_{\mathrm{int}}\gg 300`$ adu. The simulated images were analysed using exactly the same 2D modelling procedure described above for the observed data. The range of recovered parameters is analysed below. ### 9.2 Results from the simulated data: luminosities Recovered luminosities, presented in Fig. 7, reveal a skewed distribution, particularly for hosts with exact spheroidal profiles, where the recovered luminosity is biased towards a low value.
This is consistent with the morphology being skewed towards a low $`\beta `$ value (see next Section): if $`\beta `$ is decreased, the luminosity also has to decrease to keep the counts in the outer pixels (those most important for fitting the host) the same. The counts in the centre of the galaxy are less important because of the additional nuclear component, which is adjusted to match the data. The mean and variance of the recovered luminosities are presented in Table 6. Although the error bars reveal the extent of the skewed distribution, the mean is within 10% of the true value for each quasar and morphology. ### 9.3 Results from the simulated data: morphologies The skewed distribution observed in the error bars on the true host $`\beta `$ value is mirrored by the distribution of $`\beta `$ values recovered using the standard modelling procedure described in Section 6. Fig. 8 shows the relative distribution of $`\beta `$ values retrieved from the simulated images. Limits of $`0.25<\beta <8`$ were placed on fitted $`\beta `$ values. For quasar 0956$`-`$073, 16 of the simulated images created with exact spheroidal hosts had recovered $`\beta >8`$. For quasar 1543+489, this number was 14: these values are not included in Fig. 8. The distribution was used to calculate the mean and standard deviation given in Table 6, assuming all fits with $`\beta >8`$ actually had $`\beta =8`$. If the host were a spheroidal galaxy with $`\beta =4`$, the probability of recovering a best-fit value of $`\beta <1`$ would be $`0.03`$ for 0956$`-`$073 and $`0.01`$ for 1543+489: the best-fit values from the images were $`\beta =0.92`$ and $`\beta =0.67`$ respectively. The evidence for the existence of hosts dominated by a disk component therefore appears to be strong. In Fig. 8, the distribution of retrieved $`\beta `$ values for the 12 quasars modelled is also shown. This distribution is inconsistent with the hypothesis that all the hosts are dominated by spheroidal components on the scales probed by these measurements. The histogram is divided to show the probable distribution of morphologies given the options $`\beta =1`$ or $`\beta =4`$. As can be seen, the modelling suggests that approximately half of the hosts are dominated by disk components. ## 10 Simulated Data II - Two component galaxies In order to constrain the potential importance of spheroidal cores in the galaxies found to be dominated by disk-like profiles, we have analysed synthetic quasars created with two host-galaxy components. Using the Fundamental-Plane (FP) relation between $`R_{1/2}`$ and $`L_{\mathrm{int}}`$ found in the $`K`$-band by Pahre et al., we have added extra spheroidal ($`\beta =4`$) components to the recovered best-fit host galaxy of quasar 1543+489. Note that this best-fit host had $`\beta =0.67`$. We have tried the same analysis using $`\beta =1`$ and found no change in the effects produced by the spheroidal core. After adding in the nuclear component and noise, as described in Section 9, we have recovered the best-fit host-galaxy parameters using our single-component modelling. Spheroidal components were added with a variety of different luminosities, and five different realisations of the additional noise component were added to each. The resulting average recovered $`L_{\mathrm{int}}`$ and $`\beta `$ are given in Table 7.
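This construction is easy to reproduce. The sketch below is ours and purely illustrative (the grid size, pixel scale, psf width and all component parameters are invented), and it assumes the $`\beta `$ profiles are the generalized-exponential surface-brightness law $`\mathrm{\Sigma }(r)\mathrm{exp}[-(r/r_0)^{1/\beta }]`$, with $`\beta =1`$ an exponential disk and $`\beta =4`$ the de Vaucouleurs law; it builds a two-component host plus nuclear point source, convolves it with a toy Gaussian psf, and adds noise:
```python
import numpy as np
from scipy.special import gamma, gammaincinv
from scipy.ndimage import gaussian_filter

def beta_profile(r, l_int, r_half, beta):
    # Sigma(r) = Sigma0 * exp[-(r/r0)**(1/beta)], normalised so the image
    # integrates to l_int and half of the light falls inside r_half.
    u_half = gammaincinv(2.0 * beta, 0.5)            # (r_half/r0)**(1/beta)
    r0 = r_half / u_half**beta
    sigma0 = l_int / (2.0 * np.pi * beta * r0**2 * gamma(2.0 * beta))
    return sigma0 * np.exp(-(r / r0)**(1.0 / beta))

npix, scale = 128, 0.29                    # pixels, arcsec per pixel (invented)
y, x = np.mgrid[:npix, :npix] - npix / 2.0
r = np.hypot(x, y) * scale                 # circular radii; ellipticity omitted

disk = beta_profile(r, l_int=300.0, r_half=2.0, beta=0.67)   # disk-like host
bulge = beta_profile(r, l_int=100.0, r_half=0.8, beta=4.0)   # added spheroidal core
model = (disk + bulge) * scale**2          # counts per pixel
model[npix // 2, npix // 2] += 2000.0      # nuclear delta-function component
image = gaussian_filter(model, sigma=0.5 / scale)            # toy Gaussian psf
image += np.random.normal(0.0, 1.0 + 0.05 * np.sqrt(image.clip(0)))  # toy noise
```
Refitting such synthetic images with the single-component model is what yields the recovered $`L_{\mathrm{int}}`$ and $`\beta `$ trends summarized in Table 7.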
Because $`R_{1/2}(\mathrm{spheroidal})`$ and $`L_{\mathrm{int}}(\mathrm{spheroidal})`$ follow a FP relation, the importance of this component is enhanced for large $`L_{\mathrm{int}}(\mathrm{spheroidal})`$ and diminished for small $`L_{\mathrm{int}}(\mathrm{spheroidal})`$. The recovered total luminosity for small spheroidal components is therefore very similar to that of the disk alone. For large spheroidal components, the modelling places an excess of host light in the core in order to simultaneously fit the outer disk-like profile and the inner profile with a single, large $`\beta `$ value. This explains the behaviour of the difference between the actual and recovered $`L_{\mathrm{int}}`$ values. The recovered $`\beta `$ increases monotonically with the increasing luminosity of the spheroidal core, suggesting that the spheroidal core cannot be completely 'hidden' without affecting the best-fit galaxy. This adds to the evidence that the low $`\beta `$ values recovered for some of the quasars imply that they do not contain strong spheroidal components. Note that the recovered host luminosities are approximately correct for recovered values of $`\beta `$ consistent with a host dominated by a disk-like profile. For the quasars which have best-fit hosts dominated by spheroidal components, a disk-like profile at larger radii could have erroneously increased the recovered total host luminosity. However, in order to simultaneously fit these regions, small values of $`R_{1/2}`$ would be required. For the quasars with hosts found to be dominated by spheroidal components, the relatively large values of $`R_{1/2}`$ recovered suggest that such a disk-like component is not present. ## 11 Discussion ### 11.1 Luminosities The integrated host luminosities derived from our $`K`$-band images exhibit a low dispersion around a mean similar to that calculated in studies of less luminous quasars. This is in accord with the work of McLure et al., who also found no evidence for an increasing trend, although they had fewer data points at high nuclear luminosity. Previous HST studies have found evidence that host luminosity increases with nuclear luminosity, although the trend observed in those studies could be due to incorrect removal of the nuclear component: escaping nuclear light which increases in luminosity with the core could be added to the host light. It has recently been stated that the psfs derived by packages such as tinytim, as used by Hooper, Impey & Foltz, deviate from empirical WFPC2 psfs at large radii ($`\gtrsim 2`$ arcsec), due to scattering within the camera, and this could be the reason for an excess of nuclear light at larger radii being mistaken for host light. This excess light could also be the reason why the low axial ratios observed in the Hooper, Impey & Foltz work are not in accord with those derived in McLure et al., or in this $`K`$-band study. The triangular shape of the McLeod & Rieke points in Fig. 5, found for low-redshift ($`0<z<0.3`$) Seyferts and quasars of lower luminosity than those in our sample, has been shown to be in accord with a lower limit to the host luminosity which increases with nuclear luminosity. This cut-off in the host luminosity is equivalent to there being an upper limit to the allowed nuclear-to-host ratios. The triangular shape is not followed by the results of the work presented in this paper, which lie to the right of the McLeod & Rieke points.
The relative positions of the two data sets in this Figure are set by the empirical $`H-K`$ corrections applied to the apparent $`H`$-band data (see Section 8.1 for details). Quasar 1354+213 was included in both our sample and the sample of McLeod & Rieke \[1994b\], and the results of both studies independently suggest a rest-frame nuclear-to-host ratio of 7–9. This places 1354+213 at the right of the triangular shape of the McLeod & Rieke points in Fig. 5, but it has a nuclear-to-host ratio lower than most of the quasars in our sample, and it is therefore to the left of most of our points. We conclude that the limit suggested by McLeod & Rieke must break down for quasars with the highest nuclear luminosities. This is in contrast to recent work by McLeod, Rieke & Storrie-Lombardi, who claim that the lower bound on host luminosity extends to the highest-luminosity quasars, partly based on the discovery of one luminous quasar, 1821+643, which appears to be in a host at $`25L^{*}`$. What should we expect? The hosts of the quasars known to date already extend to about $`2L^{*}`$. Should the hosts of quasars which are ten times more luminous be found in galaxies at $`20L^{*}`$? Our analysis suggests not. This result is highly important for recent quasar models. In particular, the model of Kauffmann & Haehnelt predicts that the upper limit to the nuclear-to-host ratio should extend to quasars such as those imaged in this work. However, this is clearly not the case. A possible fix to their model would be to invoke the scatter of the Magorrian et al. relations to explain high-luminosity quasars (and high-mass black holes) within low-luminosity structures, and to invoke a steeply-declining host mass function to explain the lack of really massive hosts. Further work on this model would then be required, particularly with regard to the revised slope of the high-luminosity tail of the quasar luminosity function. Alternatively, factors other than black-hole mass, such as nuclear obscuration, accretion processes, etc., could be the cause of differing nuclear luminosities within reasonably similar galaxies (with similar black-hole masses). ### 11.2 Morphologies Recent HST results suggest that luminous nuclear emission predominantly arises from hosts with large spheroidal components. The two least luminous radio-quiet quasars imaged by McLure et al. have disk-like structure at radii $`\gtrsim 3`$ arcsec, while the more luminous quasars are completely dominated by spheroidal profiles. Could we be seeing a relationship between host morphology and nuclear luminosity? This is particularly interesting when compared to the black hole mass-spheroid mass and spheroid mass-luminosity relations determined for nearby galaxies by Magorrian et al.: a large black hole, potentially capable of powering a luminous AGN, appears more likely to be present in galaxies with large spheroidal components. Both the results of McLure et al. and the relations of Magorrian et al. suggest that quasars with strong nuclear emission should predominantly exist in hosts with large spheroidal components which dominate any disk-like structure. By careful analysis we have provided evidence that a large fraction of the host galaxies found in this work are dominated by disk-like profiles. However, the most important light for this modelling comes from radii greater than those probed by the HST study, where the disk component, if present, is expected to be strong.
The $`K`$-band images described here are not of sufficient quality for us to resolve the inner region and produce a two-component fit to the host galaxy. This is in contrast to results from HST, where the increased resolution enables the inner region to be resolved, and the spheroidal core of the galaxy becomes more important for modelling with a single $`\beta `$ parameter. By analysing synthetic data, we have been able to show that for hosts where we find a dominant disk-like component, any additional spheroidal component will not result in a large change in the recovered total host luminosities. We have also provided suggestive evidence that the spheroidal cores of these quasars are of relatively low luminosity. Further analysis of both the regions and profiles probed by different studies, and higher-resolution data on the cores of the quasars analysed in this work, would be very interesting and could help to explain the different morphological results of recent host-galaxy studies. ## 12 Conclusions We have presented the results of a deep $`K`$-band imaging study designed to reveal the host galaxies of quasars with higher luminosities than those targeted by previous studies. Extending host-galaxy studies to these quasars was made possible by the stability provided by the tip-tilt adaptive optics system at UKIRT, which enabled accurate psf measurements to be obtained for the deep quasar images. We have been able to resolve host galaxies for all of our sample. The principal conclusion of this study is that the luminous quasars in this sample have host galaxies with luminosities similar to those of quasars of lower total luminosity. Derived nuclear-to-host ratios are therefore larger than those obtained by previous work, and place these quasars beyond the upper limit suggested by studies of quasars with lower total luminosities. Host morphologies are less certain, but there is weak evidence that the hosts of these quasars can be dominated by either disk-like or spheroidal profiles on the scales probed by these images. ## 13 Acknowledgements The United Kingdom Infrared Telescope is operated by the Joint Astronomy Centre on behalf of the U.K. Particle Physics and Astronomy Research Council.
# Winds from accretion disks driven by the radiation and magnetocentrifugal force. ## 1 Introduction Accretion disks are believed to lose mass via powerful outflows in many astrophysical environments such as active galactic nuclei (AGN); many types of interacting binary stars, non-magnetic cataclysmic variables (nMCVs), for instance; and young stellar objects (YSOs). Magnetically driven winds from disks are the favored explanation for the outflows in many of these environments. Blandford & Payne 1982 (see also Pelletier & Pudritz 1992) showed that the centrifugal force can drive a wind from the disk if the poloidal component of the magnetic field, $`𝐁_𝐩`$, makes an angle of $`>30^o`$ with respect to the normal to the disk surface. Generally, centrifugally driven MHD disk winds (magnetocentrifugal winds for short) require the presence of a sufficiently strong, large-scale, ordered magnetic field threading the disk, with the poloidal magnetic field comparable to the toroidal magnetic field, $`|B_\varphi /B_p|<1`$ (e.g., Cannizzo & Pudritz 1988, Pelletier & Pudritz 1992). Additionally, magnetocentrifugal winds require some thermal assistance to flow freely and steadily from the surface of the disk, i.e., to pass through a slow magnetosonic surface (e.g., Blandford & Payne 1982). Many authors have studied magnetocentrifugal winds from a Keplerian disk (e.g., Ouyed & Pudritz 1997, Ustyugova et al. 1999, Krasnopolsky, Li & Blandford 1999, and references therein). These studies are either analytic, looking for stationary, often self-similar solutions, or numerical, looking for both stationary and time-dependent solutions. However, in numerical simulations of magnetocentrifugal winds, the lower boundary is placed between the slow magnetosonic surface and the Alfvén surface. This specification requires setting the mass loss rate in advance (e.g., Bogovalov 1997). Simulations of magnetocentrifugal winds considering the whole disk (even regions below the slow magnetosonic surface) and not requiring an ad hoc mass loss rate are only now becoming feasible. This requires, however, an accurate treatment of the radiation and gas pressure effects, among other physical processes. Thermal expansion and the radiation force have been suggested as other mechanisms that can drive disk winds. These mechanisms can produce powerful winds without the presence of a magnetic field. Winds are likely thermally driven in the X-ray-irradiated accretion disks of systems such as X-ray binaries and AGNs (e.g., Begelman, McKee & Shields 1983, Woods et al. 1996). These winds require the gas temperature to be so high that the gas is not gravitationally bound. In such cases, radiation driving is probably not important because the gas is fully ionized, at least in the hottest regions, and the radiation force is only due to electron scattering. However, in the regions where the gas is cooler and not fully ionized, the radiation force will be enhanced by spectral lines and will play an important role in controlling the dynamics of the flow. In fact, Murray et al. (1995) designed a line-driven disk wind model specifically for AGNs. Radiation-driven disk winds have been extensively modeled (e.g., Pereyra, Kallman & Blondin 1997; Proga, Stone & Drew 1998, hereafter PSD I; Proga 1999; Proga, Stone & Drew 1999, hereafter PSD II). These recent studies showed that radiation pressure due to spectral lines can drive winds from luminous disks. This result had been expected (e.g., Vitello & Shlosman 1988).
These studies, in particular those by PSD I, also showed some unexpected results. For example, the flow is unsteady in cases where the disk luminosity dominates the driving radiation field. Despite the complex structure of the unsteady disk wind, the time-averaged mass loss rate and terminal velocity scale with luminosity, as they do for the steady flows obtained where the radiation is dominated by the central object. To calculate the line force for disk winds, PSD I adopted the method introduced by Castor, Abbott & Klein (1975, hereafter CAK) and further developed by Friend & Abbott (1986) and Pauldrach, Puls & Kudritzki (1986) for one-dimensional radial flows within the context of a wind from hot, luminous OB stars. To extend the CAK method to model multi-dimensional disk winds, it is necessary to accommodate the effects of the three-dimensional velocity field and the direction-dependent intensity. Owocki, Cranmer & Gayley (1996) showed that these effects can lead to qualitatively different results compared to those obtained from a one-dimensional treatment in the case of a rapidly rotating star. The most difficult aspect of calculating the line force due to a disk is the evaluation of the integral involving the velocity gradient tensor over the entire solid angle occupied by the radiating surface. PSD I used an angle-adaptive quadrature to ensure an accurate result. However, computational limitations required that they simplify the integrand, retaining only the dominant terms in the velocity gradient tensor. PSD II generalized the PSD I method and introduced a new quadrature that avoids any simplification of the integrand. This allowed them to evaluate the radiation force for completely arbitrary velocity fields within the context of the CAK formalism. They applied this method to recalculate several disk wind models first discussed in PSD I. These more physically accurate models show that PSD I's more approximate method was very robust. The PSD II calculations predict total mass loss rates and velocities only marginally different from those published in PSD I. Additionally, PSD II find that models which display unsteady behavior in PSD I are also unsteady with the new method. The largest change caused by the new method is in the disk-wind opening angle: winds driven only by the disk radiation are more polar with the new method, while winds driven by the disk and central object radiation are typically more equatorial. In this paper, we numerically check how strong, ordered magnetic fields can change disk winds driven by the line force for a given disk luminosity. We assume that the transport of angular momentum in the disk is dominated by local disk viscosity, for instance due to the local shear instability in a weakly magnetized disk (Balbus & Hawley 1998). Instead of adding the magnetic fields to the PSD II model and solving consistently the equations of MHD, we simply start by adding some of the effects due to the magnetic fields, namely (1) the azimuthal force, so that the wind conserves the specific angular velocity along the streamlines, and (2) the force perpendicular to the field lines, so that the geometry of the disk wind is controlled by the magnetic field. We thus adopt the popular concept that the magnetic field dominates outside the disk and that, at least near the disk surface, one can think of the magnetic field lines as rigid wires that control any flow (e.g., Blandford & Payne 1982; Pelletier & Pudritz 1992).
Such an approach is clearly simplistic and can be applied only to sub-Alfvénic flows. Nevertheless, the results presented here provide a useful exploratory study of line-driven disk winds in the presence of a strong, large-scale magnetic field threading the disk. The organization of the paper is as follows. In Section 2 we describe our numerical calculations; in Section 3 we present our results; in Section 4 we conclude with a brief discussion. ## 2 Method To calculate the structure and evolution of a wind from a disk, we solve the equations of hydrodynamics $$\frac{D\rho }{Dt}+\rho \nabla \cdot 𝐯=0,$$ (1) $$\rho \frac{D𝐯}{Dt}=-\nabla (\rho c_s^2)+\rho 𝐠+\rho 𝐅^{rad}+\rho 𝐅^{mc}$$ (2) where $`\rho `$ is the mass density, $`𝐯`$ the velocity, $`𝐠`$ the gravitational acceleration of the central object, $`𝐅^{rad}`$ the total radiation force per unit mass, and $`𝐅^{mc}`$ the total 'magnetocentrifugal' force. The term with $`𝐅^{mc}`$ ensures that gas flows along an assumed direction and conserves its specific angular velocity. The gas in the wind is isothermal with a sound speed $`c_s`$. We solve these equations in spherical polar coordinates $`(r,\theta ,\varphi )`$. The geometry and assumptions needed to compute the radiation field from the disk and central object are as in PSD II (see also PSD I). The disk is flat, Keplerian, geometrically thin and optically thick. We specify the radiation field of the disk by assuming that the temperature follows the radial profile of the so-called $`\alpha `$-disk (Shakura & Sunyaev 1973), and therefore depends only on the mass accretion rate in the disk, $`\dot{M}_a`$, and the mass and radius of the central object, $`M_{*}`$ and $`r_{*}`$. In particular, the disk luminosity is $`L_D=GM_{*}\dot{M}_a/2r_{*}`$. In models where the central object radiates, we take into account the stellar irradiation of the disk, assuming that the disk re-emits all absorbed energy locally and isotropically. We express the luminosity of the central object in units of the disk luminosity: $`L_{*}=xL_D`$. See PSD I and PSD II for further details. Our numerical algorithm for evaluating the line force is described in PSD II. For a rotating flow, there may be an azimuthal component of the line force even in axisymmetry. However, we set this component of the line force to zero because it is rather weak compared to the other components and is not of great importance (e.g., PSD II). We then assume that the rotational velocity $`v_\varphi `$ is determined by the azimuthal component of the magnetocentrifugal force. The description of our calculation of the 'magnetocentrifugal' force follows. We choose the simplest geometry for the magnetic field and the flow: the poloidal component of the magnetic field, $`𝐁_𝐩`$, and that of the velocity, $`𝐯_p`$, are parallel to one another, and the inclination angle $`i`$ between $`𝐯_p`$ and the normal to the disk midplane is fixed for all locations and times. In other words, we constrain the geometry of the flow to straight cones of pre-specified, radius- and time-independent opening angle. In practice, we impose this geometry in the following way: (i) we evaluate the physical accelerations and the curvature terms in eq. 2, hereafter collectively referred to as $$𝐅^{\prime }=-\frac{1}{\rho }\nabla (\rho c_s^2)+𝐠+𝐅^{rad}+𝐅^i,$$ (3) where $`𝐅^i`$ represents the curvature terms appearing on the left hand side of eq.
2 when $`\frac{D𝐯}{Dt}`$ is expressed in the spherical polar coordinate system (e.g., Shu 1992): $$𝐅^i=\left(\begin{array}{c}\frac{v_\theta ^2+v_\varphi ^2}{r}\\ \frac{v_\varphi ^2}{r}\mathrm{cot}\theta \\ 0\end{array}\right),$$ (4) (ii) we calculate the component of $`𝐅^{\prime }`$ perpendicular to a streamline: $$𝐅_{\perp }^{\prime }=𝚲𝐅^{\prime },$$ (5) where $$𝚲=\left(\begin{array}{ccc}\mathrm{sin}^2(\theta -i)& \mathrm{sin}(\theta -i)\mathrm{cos}(\theta -i)& 0\\ \mathrm{sin}(\theta -i)\mathrm{cos}(\theta -i)& \mathrm{cos}^2(\theta -i)& 0\\ 0& 0& 0\end{array}\right),$$ (6) (iii) and finally we subtract $`𝐅_{\perp }^{\prime }`$ from $`𝐅^{\prime }`$, so that the total remaining force is tangent to the streamline. The disk outflow that conserves its specific angular velocity, $`\mathrm{\Omega }=v_\varphi /(r\mathrm{sin}\theta )`$, along a streamline is simply the flow that corotates with the disk, i.e., $`\mathrm{\Omega }`$ is constant and equal to the disk rotational velocity, $`\mathrm{\Omega }_D=v_\varphi (r_D,90^o)/r_D`$, at the footpoint of the streamline on the disk at the radius $`r_D`$. To ensure corotation of the wind, we introduce the azimuthal force: $$𝐅^t=\left(\begin{array}{c}0\\ 0\\ 2\frac{v_\varphi (v_r\mathrm{sin}\theta +v_\theta \mathrm{cos}\theta )}{r\mathrm{sin}\theta }\end{array}\right).$$ (7) Summarizing, our magnetocentrifugal force is $$𝐅^{mc}=-𝚲𝐅^{\prime }+𝐅^t.$$ (8) As in PSD II, we use the ZEUS-2D code (Stone & Norman 1992) to numerically integrate the hydrodynamical equations (1) and (2). We did not change the transport step in the code, so the changes we made are only to the source step, as outlined above. We explore disk wind models for three cases: (I) the wind corotates with the disk ($`𝐅_{\perp }^{\prime }=0`$ and $`F_\varphi ^t=2v_\varphi (v_r\mathrm{sin}\theta +v_\theta \mathrm{cos}\theta )/(r\mathrm{sin}\theta )`$); (II) the wind geometry is fixed by adopting a fixed inclination angle between the poloidal component of the velocity and the normal to the disk midplane ($`𝐅_{\perp }^{\prime }=𝚲𝐅^{\prime }`$ and $`𝐅^t=0`$); and (III) a combination of (I) and (II) ($`𝐅_{\perp }^{\prime }=𝚲𝐅^{\prime }`$ and $`F_\varphi ^t=2v_\varphi (v_r\mathrm{sin}\theta +v_\theta \mathrm{cos}\theta )/(r\mathrm{sin}\theta )`$). In case I with $`x=0`$, we decided to impose an additional constraint on the radial velocity to produce outward flows. Near the disk midplane, fluctuations of the density and velocity can occur (e.g., PSD I). Additionally, the radial component of the radiation force is negative near the disk midplane and the central object (e.g., Icke 1980, PSD I). Thus $`v_r`$ can be less than zero; gravity can dominate and pull the flow toward the center. To avoid this we apply the condition $`v_r=\mathrm{max}(v_r,-v_\theta \mathrm{cot}\theta )`$. This condition does not change the mass loss rate, but does reduce the wind opening angle. To produce transonic flows in cases II and III, we need to increase the density along the disk midplane, $`\rho _o`$, from $`10^{-9}\mathrm{g}\mathrm{cm}^{-3}`$ (as used by PSD I) up to $`10^{-4}\mathrm{g}\mathrm{cm}^{-3}`$. For low $`\rho _o`$, the wind velocity does not depend on $`\rho _o`$, while the mass loss rate is proportional to $`\rho _o`$ and may be higher than the mass accretion rate. For $`\rho _o>10^{-4}\mathrm{g}\mathrm{cm}^{-3}`$ the mass loss rate is dramatically lower, does not change with $`\rho _o`$, and the flow is subsonic near the disk midplane. These are desired properties of the outflow; we expect gas near the midplane to be nearly in hydrostatic equilibrium.
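For concreteness, steps (i)-(iii) can be written out in a few lines of code. The sketch below is ours and purely illustrative (it is not the actual ZEUS-2D source step, and the sample point at the end uses invented numbers); it evaluates eqs. (5)-(8), as given above, at a single grid point:
```python
import numpy as np

def magnetocentrifugal_force(F_prime, v, r, theta, i):
    """Approximate 'magnetocentrifugal' force of eqs. (5)-(8).
    F_prime, v : (r, theta, phi) components of F' (eq. 3) and of the velocity;
    i          : inclination of the poloidal velocity from the normal
                 to the disk midplane (radians)."""
    s, c = np.sin(theta - i), np.cos(theta - i)
    Lam = np.array([[s * s, s * c, 0.0],
                    [s * c, c * c, 0.0],
                    [0.0,   0.0,   0.0]])   # eq. (6)
    F_perp = Lam @ F_prime                  # eq. (5): part of F' normal to the streamline
    v_r, v_th, v_phi = v
    F_t = np.array([0.0, 0.0,
                    2.0 * v_phi * (v_r * np.sin(theta) + v_th * np.cos(theta))
                    / (r * np.sin(theta))])  # eq. (7): torque enforcing corotation
    return -F_perp + F_t                     # eq. (8): perpendicular part subtracted

# Invented sample point: theta = 60 deg, streamlines inclined 30 deg from the normal.
F_mc = magnetocentrifugal_force(np.array([1.0, -0.5, 0.0]),
                                np.array([0.1, -0.05, 2.0]),
                                r=10.0, theta=np.pi / 3, i=np.pi / 6)
```
Adding the returned force to the right-hand side of eq. (2) cancels the component of $`𝐅^{\prime }`$ normal to the prescribed cone and supplies the azimuthal torque needed for corotation.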
Equally important, we assume that the disk is in a steady state, so the mass loss rate should be lower than the mass accretion rate. In reality, gas near the disk midplane (inside the disk) is in hydrostatic equilibrium for densities lower than those we assumed, because the radiation force is lower as the radiation becomes isotropic. We would like to stress that we treat the region very close to the midplane as a boundary condition and do not attempt to model the disk interior. As in PSD I and PSD II, we calculate disk winds with model parameters suitable for a typical nMCV (see PSD I's Table 1 and our Table 1). We vary the disk and central object luminosities and the inclination angle. We hold all other parameters fixed, in particular the parameters of the CAK force multiplier: $`k=0.2`$, $`\alpha =0.6`$ and $`M_{max}=4400`$ (see PSD I). Nevertheless, we can use our results to predict the wind properties for other parameters and systems – such as AGN – by applying the dimensionless form of the hydrodynamic equations and the scaling formulae as discussed in PSD I and Proga (1999). ## 3 Results PSD I and PSD II showed that radiation-driven winds from a disk fall into two categories: 1) intrinsically unsteady, with large fluctuations in density and velocity, and 2) steady, with smooth density and velocity. The type of flow is set by the geometry of the radiation field, parametrized by $`x`$: if the radiation field is dominated by the disk ($`x<1`$) then the flow is unsteady, and if the radiation is dominated by the central object ($`x>1`$) then the flow is steady. The geometry of the radiation field also determines the geometry of the flow; the wind becomes more polar as $`x`$ decreases. However, the mass-loss rate and terminal velocity are insensitive to the geometry and depend more on the total system luminosity, $`L_D+L_{*}`$. We recalculated some of the PSD II models to check how inclusion of the magnetocentrifugal force changes line-driven disk winds. Figure 1 compares the density in the wind in two models from PSD II where they used a generalized CAK method (top panels), models using PSD II's method but conserving specific angular velocity – our case I (middle panels), and models using PSD II's method, conserving specific angular velocity and having a fixed inclination angle of the streamlines – our case III (bottom panels). The models in case III (bottom panels) have inclination angles approximately equal to those in the PSD II case (top panels). The left column shows the results for a model with $`\dot{M}_a=10^{-8}\mathrm{M}_{\odot }\mathrm{yr}^{-1}`$ and $`x=0`$, while the right column shows the results for $`\dot{M}_a=\pi \times 10^{-8}\mathrm{M}_{\odot }\mathrm{yr}^{-1}`$ and $`x=1`$. The former corresponds to the fiducial $`x=0`$ model, while the latter corresponds to the fiducial $`x=1`$ model discussed in detail in PSD I and PSD II. We start by describing the $`x=0`$ wind model (left column panels). The most striking yet expected difference between PSD II's case (top) and our case I (middle) is a decrease of the wind opening angle, $`\omega `$, from $`50^o`$ to $`15^o`$ when the specific angular momentum conservation is replaced by the specific angular velocity conservation. This result is caused by a much stronger centrifugal force in our case I than in the PSD II case, which is also reflected in an increase of the radial velocity by more than one order of magnitude (Table 1).
Another big difference between the two cases is in the mass loss rate, $`\dot{M}_w`$, which increases from $`5.5\times 10^{-14}\mathrm{M}_{\odot }\mathrm{yr}^{-1}`$ to $`1.3\times 10^{-12}\mathrm{M}_{\odot }\mathrm{yr}^{-1}`$. An unchanged property in these two cases is that the wind is unsteady. The quantitative changes between PSD II's case (top) and case III (bottom) are similar to or smaller than the changes described above (see Table 1). The most pronounced change caused by holding the inclination angle fixed is that the wind becomes steady. However, this is not surprising because models with the fixed, rigid geometry are pseudo-one-dimensional and the flow has one degree of freedom in space. In the PSD I and PSD II models, where the wind geometry is calculated self-consistently, a strong radial radiation force is required to produce a steady outflow. The $`x=1`$ wind model (top right-hand panel of Figure 1) is an example of a steady outflow from PSD II. This model remains steady in the two other cases shown in the figure, and in case I listed in Table 1 (run Ic). The model changes from case to case mainly in the radial and rotational velocities, which increase when the azimuthal force is added (middle and bottom panels). For example, the radial velocity increases from 3500 $`\mathrm{km}\mathrm{s}^{-1}`$ in PSD II's run C (top panel) to 28000 $`\mathrm{km}\mathrm{s}^{-1}`$ in our run IIIc (bottom panel). As in the $`x=0`$ wind model, adding the azimuthal force also reduces the wind opening angle (middle panel). However, the increase in the mass loss rate from PSD II's case (top panel) to our case II (bottom panel) is only a factor of $`2`$. The mass loss rate is practically the same in cases II and III (runs IIc and IIIc). The models presented in Figure 1, both for $`x=0`$ and $`x=1`$, suggest that the increase in the mass loss rate due to conservation of the specific angular velocity decreases with the disk luminosity. Our other results, shown in Table 1, confirm this. For example, the $`x=0`$ model Ib with $`\dot{M}_a=\pi \times 10^{-8}\mathrm{M}_{\odot }\mathrm{yr}^{-1}`$ has $`\dot{M}_w`$ higher than the corresponding PSD II model B by a factor of $`3`$, while the increase of $`\dot{M}_w`$ for the models with $`\dot{M}_a=10^{-8}\mathrm{M}_{\odot }\mathrm{yr}^{-1}`$ is by a factor of $`24`$ (i.e., runs A and Ia). The mass loss rate of the line-driven disk wind also increases upon imposing a wind geometry with $`i>0^o`$, as a comparison of our results in case II with those from PSD II reveals. Once the geometry is fixed, the mass loss rate does not change upon adding the azimuthal force – the corresponding models in our cases II and III have the same $`\dot{M}_w`$. Purely line-driven $`x=0`$ winds have streamlines perpendicular to the disk midplane near the midplane, because there the total horizontal force is negligible in comparison with the vertical force due to lines. This property of disk winds has been assumed in analytic studies of line-driven disk winds (e.g., Vitello & Shlosman 1988). We calculated a model assuming $`i=0^o`$ (run IIa) and find that the mass loss rate is very similar to that of the corresponding PSD II run A. This confirms that the mass loss rate in line-driven disk winds for $`x=0`$ is determined in the regions where the matter still flows perpendicularly to the disk. ## 4 Conclusions and Discussion We have studied winds from accretion disks with very strong, organized magnetic fields and the radiation force due to lines. We use numerical methods to solve the two-dimensional, time-dependent equations of hydrodynamics.
We have accounted for the radiation force using a generalized multidimensional formulation of the Sobolev approximation. To include the effects of the strong magnetic field, we have added to the hydrodynamic equations an approximate magnetocentrifugal force. Our approach to treating the latter allows us to study cases where the wind geometry is controlled entirely by the magnetic field, with the lines of the poloidal component of the field, $`𝐁_p`$, being straight and inclined at a fixed angle to the disk midplane; where the specific angular velocity is conserved along the streamlines due to the magnetic azimuthal force; or both. Our simulations of line-driven disk winds show that inclusion of the azimuthal force $`𝐅^t`$, which makes the wind corotate with the disk, increases the wind mass loss rate significantly only for low disk luminosities. As expected, such winds have much higher velocities than winds with zero azimuthal force. Fixing the wind geometry effectively reduces the flow from two-dimensional to one-dimensional. This in turn stabilizes winds which are unsteady when the geometry is allowed to be determined self-consistently. The inclination angle between the poloidal velocity and the normal to the disk midplane is important: if it is higher than zero, it can significantly increase the mass loss rate for low luminosities and increase the wind velocity for all luminosities. The presence of the azimuthal force does not change the mass loss rate if the geometry is fixed. It is very intriguing that the mass loss rate can be enhanced by the magnetocentrifugal force the most for low luminosities, where the line force is weak, while at the same time the mass loss rate remains a strong function of the disk luminosity for all luminosities. In particular, there is no wind for $`\dot{M}_a<10^{-8}\mathrm{M}_{\odot }\mathrm{yr}^{-1}`$, as in the purely line-driven case (e.g., PSD I; Proga 1999). Model IIa with $`i=0`$ is steady and has the same $`\dot{M}_w`$ as the corresponding PSD II model. The mass loss rate is then the same regardless of the time behavior. This is consistent with PSD I's conclusion that the mass loss rate depends predominantly on the total system luminosity. Drew & Proga (1999) applied results from PSD I, PSD II and Proga (1999) to nMCVs. In particular, they compared the mass loss rates predicted by the models with observational constraints. They concluded that either the mass accretion rates in high-state nMCVs are higher than presently thought by a factor of 2-3, or radiation pressure alone is not quite sufficient to drive the observed hypersonic flows. If the latter were true, then an obvious candidate to assist radiation pressure in these cases is MHD (e.g., Drew & Proga 1999). The increase of the mass loss rate due to the inclusion of a magnetocentrifugal force (our case I) brings our predictions close to the observational estimates for nMCVs. However, we should bear in mind that at the same time the radial velocity of the wind increases above the velocities observed in nMCVs, which are of the order of a few thousand $`\mathrm{km}\mathrm{s}^{-1}`$. The increase of the wind velocity may not be large if the magnetic field is moderately strong and spins up the disk wind only close to its base, where the mass loss rate is determined. Then, farther away from the disk, the wind would gain less or no angular momentum and the velocities would be similar to those in the case without any magnetic field.
Outflows which conserve specific angular velocity have a higher rotational velocity than those which conserve specific angular momentum. The former also have higher terminal velocities, due to the stronger centrifugal force. Additionally, in a wind where the specific angular velocity is conserved, the rotational velocity is comparable to the terminal velocity, while in a wind where the specific angular momentum is conserved, the rotational velocity decreases asymptotically to zero with increasing radius and is therefore much lower than the terminal velocity. Thus we should be able to distinguish these two kinds of winds based on their line profiles. For example, highly rotating winds should produce emission lines much broader than slowly rotating, expanding winds, if we see the disk edge-on. So far we have discussed changes of the line-driven wind caused by the inclusion of the magnetocentrifugal force. However, we can also anticipate some changes in the magnetocentrifugal wind if we add the line force. For example, there may be a difference in the wind collimation caused by the line force. The centrifugal force decollimates the magnetocentrifugal outflow near the disk, as $`i`$ must be greater than $`30^o`$. In such a case the magnetic field collimates the outflow only beyond the Alfvén surface, where the pinch force exerted by the toroidal component of the field operates. In a case with significant line driving, the magnetic field does not have to be inclined at an angle $`>30^o`$. In such a case, outflows are more collimated than pure magnetocentrifugal outflows from the start and may end up more collimated far away from the disk. Let us now discuss some results of other related works. Recently, Ogilvie & Livio (1998) studied a thin accretion disk threaded by a poloidal magnetic field. Their purpose was to determine qualitatively how much thermal assistance is required for the flow to pass through the slow magnetosonic surface. They found that a certain potential difference must be overcome even when $`i>30^o`$, and that thermal energy is not sufficient to launch an outflow from a magnetized disk. Ogilvie & Livio suggested that an additional source of energy, such as coronal heating, may be required. PSD I showed that the radiative line force can drive disk winds. Thus the 'missing' energy may be in the radiation field. Our calculations here show that if the disk luminosity is sufficiently high, the line and magnetocentrifugal forces produce strong transonic winds regardless of the wind geometry. Then the magnetocentrifugal force can assist the line force in producing disk winds – our approach to the problem – or the line force and/or the thermal force can well assist MHD – Ogilvie & Livio's approach (see also Wardle & Königl 1993; Cao & Spruit 1994, for instance). In seeking a steady-state solution of a line-driven disk wind, Vitello & Shlosman (1988) also found that the flow must overcome a potential difference – an increase of the vertical gravity component with height. They suggested that for the radiation force to increase with height above the disk midplane and overcome this potential difference, a very particular variation in the ionization state of the gas is required. There are a number of limitations of our treatment of magnetic fields which are worthy of mention.
Our models do not include the whole richness of MHD, because we approximate the Lorentz force by the 'magnetocentrifugal' force (eqs. 3-8) instead of solving self-consistently the equation of motion and the induction equation. We have included the effects of a magnetic field on the disk winds in a simplistic manner that mimics some effects of a very strong, organized magnetic field with $`B_\varphi <B_p`$ near the disk surface. The magnetic field is treated as a rigid wire that controls the flow geometry outside the disk. More quantitatively, this corresponds to the situation where outside the disk, at least near the disk photosphere, the magnetic field dominates, i.e., the magnetic pressure exceeds the disk gas pressure: $`\beta \equiv 8\pi p_D/B_D^2<1`$. Inside the disk, however, the situation may be different. We assume that the disk is in a steady state that is also stable. In such a case, the regions near the disk midplane are likely supported by the gas pressure, and the magnetic pressure should be smaller than the gas pressure, i.e., $`\beta >1`$; otherwise the disk may be unstable (Stella & Rosner 1984). Using the system parameters adopted here, $`c_s=14\mathrm{km}\mathrm{s}^{-1}`$ and $`\rho _o=10^{-4}\mathrm{g}\mathrm{cm}^{-3}`$, we find that the $`\beta >1`$ condition yields a maximum value for the disk magnetic field strength of the order of $`10^5`$ G. Our assumption that the poloidal magnetic field lines are straight excludes any magnetic tension in the $`(r,\theta )`$ plane. In our cases I and III we assume that the specific angular velocity, $`\mathrm{\Omega }`$, is conserved along a streamline from the footpoint of the line to infinity, or rather to the outer boundary of the computational domain. However, in a steady-state axisymmetric MHD flow, the quantity which is conserved is the total angular momentum per unit mass, which can be written as $$l=\mathrm{\Omega }r^2\mathrm{sin}^2\theta -\frac{r\mathrm{sin}\theta B_\varphi }{4\pi \kappa }=\mathrm{\Omega }_Dr_A^2,$$ (9) where $`r_A`$ is the position of the Alfvén point of the flow on the streamline, and $`\kappa =\rho v_p/B_p`$ is the mass load – another constant along the streamline (e.g., Pelletier & Pudritz 1992). The total angular momentum has contributions from both the flowing rotating gas and the twisted magnetic field. This means that as the material angular momentum, $`\mathrm{\Omega }r^2\mathrm{sin}^2\theta `$, increases due to corotation with the disk, the toroidal component of the magnetic field must increase so that the total angular momentum is conserved. Then our approximation in cases I and III corresponds to the situation where, near the disk, the total angular momentum is dominated by the contribution from the twisted magnetic field, and the Alfvénic surface is beyond our computational domain. In other words, our approximation is valid in the region where the outflow is sub-Alfvénic. Consequently, our approach does not allow us to study collimation of the wind by the magnetic field, because this happens beyond the fast magnetosonic surface, where the flow is super-Alfvénic. PSD II's models and ours in case II correspond to the other extreme case, where the total angular momentum has no contribution from the twisted magnetic field and the material angular momentum is conserved. Our treatment of magnetic fields does not include the effects of the magnetic pressure.
The magnetocentrifugal driving presupposes the existence of a strong poloidal magnetic field comparable to the toroidal magnetic field near the disk surface, $`|B_\varphi /B_p|<1`$. However, when the poloidal magnetic field is weaker than the toroidal magnetic field, $`|B_\varphi /B_p|\gg 1`$, the magnetic pressure may be dynamically important in driving disk winds (e.g., Uchida & Shibata 1985; Pudritz & Norman 1986; Shibata & Uchida 1986; Contopoulos 1995; Kudoh & Shibata 1997; Ouyed & Pudritz 1997 and references therein). For $`|B_\varphi /B_p|\gg 1`$, there is initially a buildup of the toroidal magnetic field by the differential rotation of the disk, which in turn generates the magnetic pressure of the toroidal field. The magnetic pressure then gives rise to a self-starting wind. To produce a steady outflow driven by the magnetic pressure, a steady supply of the advected toroidal magnetic flux at the wind base is needed; otherwise the outflow is likely a transient (e.g., Königl 1993, Contopoulos 1995, Ouyed & Pudritz 1997). However, it is not clear whether the differential rotation of the disk can produce such a supply of the toroidal magnetic flux to match the escape of magnetic flux in the wind, and, even if it does, whether such a system will be stable (e.g., Contopoulos 1995, Ouyed & Pudritz 1997 and references therein). Concluding, we would like to stress that further development of models of radiation-driven winds from disks should take the magnetic field into account, but it is equally important to consider adding the radiation force to models of MHD winds from luminous disks. Both kinds of models are quite well understood now, and if merged they could allow us to study better the disk winds in systems such as cataclysmic variables and AGNs. As we mentioned above, thermal assistance may not be sufficient to launch a wind from a magnetized disk, and we should consider not only coronal heating but also line driving. ACKNOWLEDGEMENTS: We thank John Cannizzo, Janet Drew, Achim Feldmeier, Tim Kallman, Scott Kenyon, Mario Livio, and James Stone for comments on earlier drafts of this paper. We also thank Steven Shore and an anonymous referee for comments that helped us clarify our presentation. This work was performed while the author held a National Research Council Research Associateship at NASA/GSFC. Computations were supported by NASA grant NRA-97-12-055-154. ## REFERENCES Balbus, S.A., & Hawley, J.F. 1998, Rev. Mod. Phys., 70, 1 Begelman M.C., McKee C.F., Shields G.A. 1983, ApJ, 271, 70 Blandford R.D., Payne D.G. 1982, MNRAS, 199, 883 Bogovalov S.V. 1997, A&A, 323, 634 Cannizzo J.K., Pudritz R.E. 1988, ApJ, 327, 840 Cao X., Spruit H.C. 1994, A&A, 287, 80 Castor J.I., Abbott D.C., Klein R.I. 1975, ApJ, 195, 157 (CAK) Contopoulos J. 1995, ApJ, 450, 616 Drew J.E., Proga D., in "Cataclysmic Variables", Symposium in Honour of Brian Warner, Oxford 1999, ed. by P. Charles, A. King, D. O'Donoghue, in press Friend D.B., Abbott D.C. 1986, ApJ, 311, 701 Icke V. 1980, AJ, 85, 329 Königl A. 1993, in "Astrophysical Jets", ed. by D.P. O'Dea (Cambridge: Cambridge Univ. Press), 239 Krasnopolsky R., Li Z.-Y., Blandford R. 1999, ApJ, 526, 631 Kudoh T., Shibata K. 1997, ApJ, 474, 362 Murray N., Chiang J., Grossman S.A., Voit G.M. 1995, ApJ, 451, 498 Ogilvie G.I., Livio M. 1998, ApJ, 499, 329 Ouyed R., Pudritz R.E. 1997, ApJ, 484, 794 Owocki S.P., Cranmer S.R., Gayley K.G. 1996, ApJ, 472, L115 Pauldrach A., Puls J., Kudritzki R.P.
1986, A&A, 164, 86 Pelletier G., Pudritz R.E. 1992, ApJ, 394, 117 Pereyra N.A., Kallman T.R., Blondin J.M. 1997, ApJ, 477, 368 Proga D. 1999, MNRAS, 304, 938 Proga D., Stone J.M., Drew J.E. 1998, MNRAS, 295, 595 (PSD I) Proga D., Stone J.M., Drew J.E. 1999, MNRAS, 310, 476 (PSD II) Pudritz R.E., Norman C.A. 1986, ApJ, 301, 571 Shakura N.I., Sunyaev R.A. 1973, A&A, 24, 337 Shibata K., Uchida Y. 1986, PASJ, 38, 631 Shu, F. 1992, The Physics of Astrophysics, Vol. 2, Gas Dynamics (Mill Valley: University Science Books) Stella L., Rosner R. 1984, ApJ, 277, 312 Stone J.M., Norman M.L. 1992, ApJS, 80, 753 Uchida Y., Shibata K. 1985, PASJ, 37, 515 Ustyugova G.V., Koldoba A.V., Romanova M.M., Chechetkin V.M., Lovelace R.V.E. 1999, ApJ, 516, 221 Vitello P.A.J., Shlosman I. 1988, ApJ, 327, 680 Wardle M., Königl A. 1993, ApJ, 410, 218 Woods D.T., Klein R.I., Castor J.I., McKee C.F., Bell J.B. 1996, ApJ, 461, 767

Table 1. Summary of results for disk winds

| run | $`\dot{M}_a`$ | x | $`\dot{M}_w`$ | $`v_r(10r_{*})`$ | $`\omega `$ or $`(90^o-i)^{*}`$ |
| --- | --- | --- | --- | --- | --- |
| | ($`\mathrm{M}_{\odot }\mathrm{yr}^{-1}`$) | | ($`\mathrm{M}_{\odot }\mathrm{yr}^{-1}`$) | ($`\mathrm{km}\mathrm{s}^{-1}`$) | degrees |
| PSD II | | | | | |
| A | $`10^{-8}`$ | 0 | $`5.5\times 10^{-14}`$ | 900 | 50 |
| B | $`\pi \times 10^{-8}`$ | 0 | $`4.0\times 10^{-12}`$ | 3500 | 60 |
| C | $`\pi \times 10^{-8}`$ | 1 | $`2.1\times 10^{-11}`$ | 3500 | 32 |
| I | | | | | |
| a | $`10^{-8}`$ | 0 | $`1.3\times 10^{-12}`$ | $`15000`$ | 15 |
| b | $`\pi \times 10^{-8}`$ | 0 | $`1.3\times 10^{-11}`$ | 20000 | 38 |
| c | $`\pi \times 10^{-8}`$ | 1 | $`2.4\times 10^{-11}`$ | 32000 | 12 |
| II | | | | | |
| a | $`10^{-8}`$ | 0 | $`6.3\times 10^{-14}`$ | 600 | 90$`^{*}`$ |
| a' | $`10^{-8}`$ | 0 | $`6.3\times 10^{-13}`$ | 1100 | 60$`^{*}`$ |
| c | $`\pi \times 10^{-8}`$ | 1 | $`4.2\times 10^{-11}`$ | 5000 | 30$`^{*}`$ |
| III | | | | | |
| a | $`10^{-8}`$ | 0 | $`6.3\times 10^{-13}`$ | 16000 | 60$`^{*}`$ |
| c | $`\pi \times 10^{-8}`$ | 1 | $`4.2\times 10^{-11}`$ | 28000 | 30$`^{*}`$ |

$`^{*}`$ For all models from PSD II and our models in case I, the last column lists the wind opening angle, $`\omega `$, while for the models in cases II and III the last column lists $`90^o-i`$ (marked with $`^{*}`$), where $`i`$ is the assumed inclination angle between the poloidal component of the velocity and the normal to the disk midplane; note that then $`\omega \le (90^o-i)`$.
# Identification of clusters of companies in stock indices via Potts super-paramagnetic transitions ## Acknowledgments Partial support by OTKA-T029985 is acknowledged with thanks.
# Surgeries on periodic links and homology of periodic 3-manifolds ## Abstract We show that a closed orientable 3-manifold $`M`$ admits an action of $`𝐙_p`$ with fixed point set $`S^1`$ iff $`M`$ can be obtained as the result of surgery on a $`p`$-periodic framed link $`L`$ such that $`𝐙_p`$ acts freely on the components of $`L`$. We prove a similar theorem for free $`𝐙_p`$-actions. As an interesting application, we prove the following, rather unexpected result: for any $`M`$ as above and for any odd prime $`p`$, $`H_1(M,𝐙_p)\ne 𝐙_p`$. We also prove a similar criterion of 2-periodicity for rational homology 3-spheres. 0. Introduction In the early 1960's both Wallace \[Wa\] and Lickorish \[Li\] proved that every closed, connected, orientable 3-manifold may be obtained by surgery on a framed link in $`S^3`$. Thus, link diagrams may be used to depict manifolds. Every manifold has infinitely many different framed link descriptions. However, in the 1970's Kirby \[K\] showed that two framed links determine the same 3-manifold iff they are related by a finite sequence of two specific types of moves. This calculus of framed links, together with the earlier results, gives a classification of 3-manifolds in terms of equivalence classes of framed links. The framed link representation of 3-manifolds has proven to be extremely useful. For instance, most of the new 3-manifold invariants originating from Witten's famous paper \[Wi\] are based on the framed link approach. Therefore, it is always very useful to have some kind of correspondence between certain classes of 3-manifolds and certain classes of framed links. One example of such a correspondence is the classical relationship between the lens spaces and the chain-link diagrams (see, for instance, \[Ro\]). This result, in particular, allowed L. Jeffrey to determine exact formulas for the Witten-Reshetikhin-Turaev invariants of the lens spaces \[Je\]. Another example is the fact that a closed oriented 3-manifold is an integral homology 3-sphere iff it can be obtained by surgery on an algebraically split link with framing numbers $`\pm 1`$ (see \[Mu-1\], \[O-1\]). This relationship plays a key role in many papers on quantum and finite invariants of integral homology 3-spheres (see, for instance, \[O-1\], \[O-2\], \[Mu-1\], \[Mu-2\]). In Section 1 we establish an analogous relationship between periodic 3-manifolds and periodic links. Namely, we prove the following theorem: Theorem 1.1 Let $`p`$ be a prime integer and $`M`$ be a closed oriented 3-manifold. There is an action of the cyclic group $`𝐙_p`$ on $`M`$ with the fixed-point set equal to a circle if and only if there exists a framed $`p`$-periodic link $`L\subset S^3`$ such that $`M`$ is the result of surgery on $`L`$ and $`𝐙_p`$ acts freely on the set of components of $`L`$. A special case of Theorem 1.1, when $`M`$ is a homology sphere, was proven in \[Ka-Pr\]. In the general form the theorem was proven for the first time by the first author in his graduate course Topics in Algebra Situs (The George Washington University, February of 1999). A similar result is obtained for manifolds with free $`𝐙_p`$ actions: Theorem 1.2 Let $`p`$ be a prime integer and $`M`$ be a closed oriented 3-manifold.
There is a free action of the cyclic group $`𝐙_p`$ on $`M`$ iff there exists a framed $`p`$-periodic link $`L\subset S^3`$ admitting a free action of $`𝐙_p`$ on the set of its components such that $`M`$ is the result of surgery on $`L^{\prime }=L\cup \gamma `$, where $`\gamma `$ is the axis of the action, with framing co-prime to $`p`$. In Section 2 we give an interesting application of Theorem 1.1. Namely, we prove the following result: Theorem 2.1 If a closed orientable 3-manifold $`M`$ admits an action of a cyclic group $`𝐙_p`$, where $`p`$ is an odd prime integer and the fixed point set of the action is $`S^1`$, then $`H_1(M;𝐙_p)\ne 𝐙_p`$. Note that this theorem provides a non-trivial criterion for 3-manifolds admitting the described action. The simplest examples of 3-manifolds with $`H_1(M;𝐙_p)=𝐙_p`$ are the lens spaces $`L_{pn,q}`$ (or, more generally, $`(pn/q)`$ Dehn surgeries on knots in $`S^3`$). Theorem 2.1 was first announced as a conjecture (obtained as a result of extensive computations performed with a program written in Mathematica) and partially proven (in the case when the orbit space of the action can be obtained from $`S^3`$ by an integer surgery on a knot) in April of 1999 \[So\]. (It is interesting to mention that the conjecture was influenced by the study of Murakami-Ohtsuki-Okada invariants on periodic 3-manifolds (in turn, our interest in Murakami-Ohtsuki-Okada invariants was sparked by their relation with the second skein module \[P-4\]), but the equation $`MOO_p(M)=\pm G_p^{rkH_1(M;𝐙_p)}`$ eventually led to the more "classical" algebraic topology. Here $`p`$ is an odd prime integer, $`MOO_p`$ is the Murakami-Ohtsuki-Okada invariant parameterized by $`q=e^{2\pi i/p}`$, and $`G_p=\sum _{j\in 𝐙_p}q^{j^2}`$.) Recently (November 1999), Adam Sikora announced a proof of the theorem \[Si\]. In fact, using some classical but involved algebraic topology, he obtained more general results implying our theorem. The surgery presentation of periodic 3-manifolds developed in Section 1 allowed us to find an elementary proof of Theorem 2.1, presented in Section 2. Theorem 2.1 is not true for $`p=2`$; see Remark 2.12. An interesting criterion for 2-periodic rational homology spheres is provided by the following theorem. Theorem 2.2. Let $`M`$ be a rational homology 3-sphere such that the group $`H_1(M;𝐙)`$ does not have elements of order 16. If $`M`$ admits an orientation preserving action of $`𝐙_2`$ with the fixed point set being a circle, then the canonical decomposition of the group $`H_1(M;𝐙)`$ has an even number of terms $`𝐙_2`$, an even number of terms $`𝐙_4`$, and an arbitrary number of terms $`𝐙_8`$. The second author thanks Yongwu Rong and Adam Sikora for useful conversations. When a preliminary version of the paper was ready, we received an e-mail from James Davis saying that he and his student Karl Bloch had also found a proof of Theorem 2.1. 1. Periodic 3-manifolds are surgeries on periodic links We show in this section that $`p`$-periodic closed oriented 3-manifolds can be presented as results of integer surgeries on $`p`$-periodic links. We also show an analogous result for manifolds with a free action of $`𝐙_p`$. Before we prove Theorems 1.1 and 1.2, we need to establish some basic terminology and preliminary lemmas. 1.1 Periodic Links. Definition. By a framed knot $`K`$ we mean a ring $`S^1\times [0,\epsilon ]`$ embedded in $`S^3`$. By the framing of $`K`$ we mean an integer defined as follows.
Recently (November, 1999), Adam Sikora announced a proof of the theorem \[Si\]. In fact, using some classical but involved algebraic topology, he obtained more general results implying our theorem. The surgery presentation of periodic 3-manifolds developed in Section 1 allowed us to find an elementary proof of Theorem 2.1, presented in Section 2. Theorem 2.1 is not true for $`p=2`$, see Remark 2.12. An interesting criterion for 2-periodic rational homology spheres is provided by the following theorem. Theorem 2.2. Let $`M`$ be a rational homology 3-sphere such that the group $`H_1(M;𝐙)`$ does not have elements of order 16. If $`M`$ admits an orientation preserving action of $`𝐙_2`$ with the fixed point set being a circle then the canonical decomposition of the group $`H_1(M;𝐙)`$ has an even number of terms $`𝐙_2`$, an even number of terms $`𝐙_4`$, and an arbitrary number of terms $`𝐙_8`$. The second author thanks Yongwu Rong and Adam Sikora for useful conversations. When a preliminary version of the paper was ready, we received an e-mail from James Davis saying that he and his student Karl Bloch also found a proof for Theorem 2.1. 1. Periodic 3-manifolds are surgeries on periodic links We show in this section that $`p`$-periodic closed oriented 3-manifolds can be presented as results of integer surgeries on $`p`$-periodic links. We show also an analogous result for manifolds with a free action of $`𝐙_p`$. Before we prove Theorems 1.1 and 1.2, we need to establish some basic terminology and preliminary lemmas. 1.1 Periodic Links. Definition. By a framed knot $`K`$ we mean a ring $`S^1\times [0,\epsilon ]`$ embedded in $`S^3`$. By the framing of $`K`$ we mean an integer defined as follows. Let $`V_\epsilon `$ be the $`\epsilon `$-neighborhood of $`K_0=S^1\times \{0\}`$; then $`K_\epsilon =S^1\times \{\epsilon \}`$ is a projection of $`K_0`$ onto $`∂V_\epsilon `$. Let $`P`$ be the projection of $`K_0`$ onto $`∂V_\epsilon `$ which is homologically trivial in $`S^3∖V_\epsilon `$. The framing $`f`$ is defined as the algebraic number of intersections of $`K_\epsilon `$ and $`P`$. A framed link is a collection of non-intersecting framed knots. We will adopt the usual “blackboard” convention for framed link diagrams. Definition. A (framed) link $`L`$ in $`S^3`$ is called $`p`$-periodic if there is a $`𝐙_p`$-action on $`S^3`$, with a circle as a fixed point set, which maps $`L`$ onto itself, and such that $`L`$ is disjoint from the fixed point set. Furthermore, if $`L`$ is an oriented link, one assumes that each generator of $`𝐙_p`$ preserves the orientation of $`L`$ or changes it to the opposite one. By the positive solution of the Smith Conjecture (\[M-B\], \[Th\]) we know that the fixed point set of the action of $`𝐙_p`$ is an unknotted circle and the action is conjugate to an orthogonal action on $`S^3`$. In other words, if we identify $`S^3`$ with $`𝐑^3∪\{\mathrm{\infty }\}`$, then the fixed point set can be assumed to be equal to the “vertical” axis $`z=0`$ together with $`\mathrm{\infty }`$, and a generator $`\phi `$ of $`𝐙_p`$ can be assumed to be the rotation $`\phi (z,t)=(e^{2\pi i/p}z,t)`$, where the coordinates on $`𝐑^3`$ come from the product of the complex plane and the real line $`𝐂\times 𝐑`$. Thus, any (framed) $`p`$-periodic link $`L^p`$ may be represented by a $`\phi `$-invariant diagram, called a $`p`$-periodic diagram (with framing parallel to the projection plane), see Fig. 1. Fig. 1 By the underlying link for $`L^p`$ we will mean the orbit space of the action, that is $`L_{*}=L^p/𝐙_p`$. Lemma 1.3 Let $`p`$ be a prime integer and $`L^p⊂S^3`$ be a (framed) $`p`$-periodic link. The following three conditions are equivalent: 1) $`𝐙_p`$ acts freely on the set of components of $`L^p`$; 2) The linking number of each component of the underlying link $`L_{*}`$ with the axis of rotation is congruent to zero modulo $`p`$; 3) The number of components of $`L^p`$ is $`p`$ times greater than the number of components of $`L_{*}`$. Proof. The equivalence of the conditions 1) and 3) is obvious. To prove that 2) is equivalent to 3), consider the covering projection $`\rho :L^p→L_{*}`$. Let $`l`$ be a component of $`L_{*}`$. The preimage $`\rho ^{-1}(\lambda )`$ of the closed path $`\lambda `$ which traverses $`l`$ exactly once (i.e., $`l`$ with a base point) consists of $`p`$ paths $`\lambda _1,…,\lambda _p`$ in $`L^p`$. The condition $`\text{lk}(l,\gamma )≡0\ (\mathrm{mod}\ p)`$ is equivalent to the condition that each of the $`\lambda _i`$ is closed. Thus, $`l`$ lifts to $`p`$ components in $`L^p`$ iff $`\text{lk}(l,\gamma )≡0\ (\mathrm{mod}\ p)`$. □ Definition. A $`p`$-periodic link $`L^p`$ that satisfies any of the conditions from Lemma 1.3 will be called strongly $`p`$-periodic.
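Conditions (2) and (3) of Lemma 1.3 are easy to play with computationally. The sketch below is a minimal illustration (the function name and the sample linking numbers are ours, purely hypothetical): a component $`l`$ of $`L_{*}`$ lifts to $`p`$ disjoint circles iff $`\text{lk}(l,\gamma )≡0\ (\mathrm{mod}\ p)`$, and otherwise its preimage is a single circle, since $`p`$ is prime:

```python
def components_upstairs(linking_numbers_with_axis, p):
    """Count the components of the p-periodic link L^p lying over L_*.

    Each component l of L_* contributes p components if lk(l, gamma) = 0
    (mod p), and a single connected component otherwise (Lemma 1.3).
    """
    return sum(p if lk % p == 0 else 1 for lk in linking_numbers_with_axis)

# A hypothetical two-component underlying link and p = 3: the first
# component satisfies the strong periodicity condition, the second does not.
print(components_upstairs([3, 1], p=3))  # -> 4, so L^3 is not strongly periodic
```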
1.2 Periodic Manifolds. Definition. A $`3`$-manifold $`M`$ is called $`p`$-periodic if it admits an orientation preserving action of the cyclic group $`𝐙_p`$ with a circle as a fixed point set, and the action is free outside the circle. We can show immediately the easy part of Theorem 1.1. Indeed, consider a strongly $`p`$-periodic framed link $`L^p`$, and let $`M`$ be the 3-manifold obtained by surgery on $`L^p`$. By the definition of a framed $`p`$-periodic link, there is a $`𝐙_p`$ action on $`S^3`$, and on $`S^3∖L^p`$, with a circle $`\gamma `$ as a fixed point set. This action induces a $`𝐙_p`$ action on $`M`$. Moreover, since the action of $`𝐙_p`$ is free on the set of components of $`L^p`$, there are no other fixed points of the action of $`𝐙_p`$ on $`M`$ but the circle $`\gamma `$. To show the difficult part of Theorem 1.1 we first fix some notation. Suppose that $`𝐙_p`$ acts on $`M`$ with the fixed-point set equal to a circle $`\gamma `$. Denote the quotient by $`M_{*}=M/𝐙_p`$, the projection map by $`h:M→M_{*}`$, and $`\gamma _{*}=h(\gamma )`$. Lemma 1.4 The map $`h_{*}:H_1(M)→H_1(M_{*})`$ is an epimorphism. Proof. Let $`x_0∈\gamma `$. Since $`x_0`$ is a fixed point of the action, any loop based at $`h(x_0)`$ lifts to a loop based at $`x_0`$. Thus $`h_\mathrm{\#}:\pi _1(M,x_0)→\pi _1(M_{*},h(x_0))`$ is an epimorphism, and since $`H_1`$ is the abelianization of $`\pi _1`$, the map $`h_{*}`$ is also an epimorphism. □ Notice that the proof works for any finite group action on a manifold with a non-empty fixed point set. Let us recall the Lefschetz duality theorem, which we will use in our proof of Theorem 1.1. First some terminology: a compact connected $`n`$-dimensional manifold $`M`$ is called $`R`$-oriented for a commutative ring with identity $`R`$, if $`H_n(M,∂M;R)=R`$. In particular, any manifold is $`𝐙_2`$-oriented, and an oriented manifold is $`R`$-oriented for any ring $`R`$. For a reference, see \[Sp\]. Theorem 1.5 (Lefschetz) Let $`M`$ be a compact $`n`$-dimensional, $`R`$-oriented manifold. Then there is an isomorphism $`\tau :H^q(M;R)→H_{n-q}(M,∂M;R)`$. Furthermore if $`R`$ is a PID (principal ideal domain) and $`H_{q-1}(M,R)`$ is free then $`H^q(M;R)=Hom(H_q(M;R),R)`$ and for $`\alpha ∈H^q(M;R)`$ and $`c∈H_q(M;R)`$ one has: $`\alpha (c)=alg(c,\tau (\alpha ))`$, where $`alg(c,\tau (\alpha ))∈R`$ is the algebraic intersection number of $`c`$ and $`\tau (\alpha )`$ in $`M`$ ($`alg:H_q(M;R)\times H_{n-q}(M,∂M;R)→R`$). We use the Lefschetz Theorem to show that the covering $`h:M→M_{*}`$ is yielded by a 2-chain whose boundary is a multiple of $`\gamma _{*}`$. Because we work with $`q=1`$, the group $`H_{q-1}(M,R)=H_0(M,R)`$ is free and we can use the intersection number interpretation of the Lefschetz Theorem. Lemma 1.6 Let $`M`$ be a closed orientable $`p`$-periodic 3-manifold. With the notation as before, one has: 1. $`\gamma _{*}=0`$ in $`H_1(M_{*},𝐙_p)`$. 2. There is a 2-chain $`C∈C_2(M_{*},𝐙_p)`$ such that $`∂C≡m\gamma _{*}\ \mathrm{mod}\ p`$ and the covering $`h:(M∖\gamma )→(M_{*}∖\gamma _{*})`$ is yielded by the map $`\varphi _C:H_1(M_{*}∖\gamma _{*})→𝐙_p`$, where $`\varphi _C(K)`$ is the intersection number of $`K`$ with $`C`$ (i.e. for a 1-cycle $`K⊂M_{*}∖\gamma _{*}`$, $`\varphi _C(K)=alg(K,C)`$, well defined $`\mathrm{mod}\ p`$; if $`H_2(M_{*},𝐙)=0`$, then $`alg(K,C)=lk(K,m\gamma _{*})`$, but in general $`alg(K,C)`$ depends on the choice of $`C`$). In particular $`\varphi _C(\mu _{*})=m`$, where $`\mu _{*}`$ is a meridian of $`\gamma _{*}`$. Proof. To work with the Lefschetz Theorem we have to consider compact manifolds. Thus, instead of $`M_{*}∖\gamma _{*}`$ we consider a homotopically equivalent compact manifold $`\widehat{M}_{*}=M_{*}∖int(V_{\gamma _{*}})`$, where $`V_{\gamma _{*}}`$ is a regular neighborhood of $`\gamma _{*}`$ in $`M_{*}`$. Similarly, let $`V_\gamma =h^{-1}(V_{\gamma _{*}})`$ be a $`𝐙_p`$-invariant regular neighborhood of $`\gamma `$ in $`M`$. Let also $`\widehat{M}=M∖int(V_\gamma )`$.
Since $`\widehat{h}:\widehat{M}→\widehat{M}_{*}`$ is a regular covering, it is characterized by an epimorphism $`\pi _1(\widehat{M}_{*})→\pi _1(\widehat{M}_{*})/\pi _1(\widehat{M})=𝐙_p`$ (up to an automorphism of $`𝐙_p`$). Thus, since $`𝐙_p`$ is abelian, $`\widehat{h}`$ is defined by an epimorphism $`\varphi :H_1(\widehat{M}_{*})→𝐙_p`$, where $`\varphi `$ is unique up to an automorphism of $`𝐙_p`$. Let $`\widehat{C}`$ be a 2-cycle representing the element of $`H_2(\widehat{M}_{*},∂\widehat{M}_{*};𝐙_p)`$ dual to the epimorphism $`\varphi `$, that is, such that $`alg(K,\widehat{C})≡\varphi (K)\ (\mathrm{mod}\ p)`$ for any $`K∈H_1(\widehat{M}_{*})`$. We can assume that $`\widehat{C}∩∂\widehat{M}_{*}`$ is a collection of simple noncontractible curves in the torus $`∂\widehat{M}_{*}`$. Finally let $`C`$ be a 2-chain obtained from $`\widehat{C}`$ by adding to $`\widehat{C}`$ annuli connecting the components of $`\widehat{C}∩∂\widehat{M}_{*}`$ with $`\gamma _{*}`$ in $`V_{\gamma _{*}}`$. Thus, the $`C`$ of part (2) is constructed. Let $`\mu _{*}`$ be a meridian of $`\gamma _{*}`$ (or more precisely, of $`∂V_{\gamma _{*}}`$). $`\varphi (\mu _{*})≢0\ (\mathrm{mod}\ p)`$ because $`\gamma _{*}`$ is a branching set of the covering, so the preimage of $`\mu _{*}`$ (under $`h`$) is a connected curve $`\mu `$ (a meridian of $`\gamma `$ in $`M`$), by the definition of a branched covering. Thus, there is $`0<m<p`$ such that $`\varphi (\mu _{*})=m`$. We can conclude also that $`m\gamma _{*}≡∂C`$ mod $`p`$, thus $`\gamma _{*}=0`$ in $`H_1(M_{*};𝐙_p)`$. □ Proof of Theorem 1.1. By the classical result of Wallace (1960) and Lickorish (1962), every closed oriented 3-manifold is a result of a surgery on a framed link in $`S^3`$. In particular $`M_{*}`$ can be represented as a result of surgery on some framed link $`L_\mathrm{\#}`$ in $`S^3`$. Conversely, $`S^3`$ can be obtained as a result of surgery on some framed link $`\widehat{L}_\mathrm{\#}`$ in $`M_{*}`$. We can assume that $`\widehat{L}_\mathrm{\#}`$ satisfies the following conditions (possibly after deforming $`\widehat{L}_\mathrm{\#}`$ by ambient isotopy): 1. $`\gamma _{*}∩\widehat{L}_\mathrm{\#}=∅`$; 2. $`alg(\widehat{L}_\mathrm{\#}^i,C)≡0\ \mathrm{mod}\ p`$, for any component $`\widehat{L}_\mathrm{\#}^i`$ of $`\widehat{L}_\mathrm{\#}`$. An $`\widehat{L}_\mathrm{\#}`$ satisfying the conditions 1-2 can be obtained as follows: Let $`L_\mathrm{\#}⊂S^3`$ be a framed link in $`S^3`$ such that $`M_{*}`$ is a result of surgery on $`L_\mathrm{\#}`$. Let $`\widehat{L}_\mathrm{\#}`$ denote the co-core of the surgery (in our notation, the core of the surgery is the framed surgery link, that is the framed link a regular neighborhood of which is removed in the “drilling” part of the surgery, with framing yielded by the meridian of the attached (“filling”) solid torus; the co-core of the surgery is the core of the “filling” solid torus, with its framing yielded by the meridian of the removed solid torus; the surgery on the co-core link brings back the initial manifold). In particular, $`\widehat{L}_\mathrm{\#}`$ is a framed link in $`M_{*}`$ such that $`S^3`$ is a result of surgery on $`\widehat{L}_\mathrm{\#}`$. By a general position argument, we can make $`\gamma _{*}`$ and $`\widehat{L}_\mathrm{\#}`$ disjoint, but in order to get condition (2) we should do so in a controllable manner. Let $`C`$ be the 2-chain from Lemma 1.6. Let $`\widehat{L}_\mathrm{\#}^i`$ be any component of $`\widehat{L}_\mathrm{\#}`$.
If we change a crossing between $`\widehat{L}_\mathrm{\#}^i`$ and $`\gamma _{*}`$ then the algebraic intersection number $`alg(\widehat{L}_\mathrm{\#}^i,C)`$ changes by $`\pm m`$ mod $`p`$. Thus, by a series of crossing changes we can get $`alg(\widehat{L}_\mathrm{\#}^i,C)≡0\ \mathrm{mod}\ p`$ for any component of $`\widehat{L}_\mathrm{\#}`$, providing condition (2). This implies that we can easily modify $`C`$ (outside $`\gamma _{*}`$) so that $`C∩\widehat{L}_\mathrm{\#}=∅`$. Therefore, $`C`$ survives the surgery (as well as $`\gamma _{*}`$), and in $`S^3`$ it has $`𝐙_p`$-boundary $`m\gamma _{*}`$ and it is disjoint from $`L_\mathrm{\#}`$ (the link in $`S^3`$ being the co-core of the surgery on $`\widehat{L}_\mathrm{\#}`$ in $`M_{*}`$). Thus, $`\mathrm{lk}(L_\mathrm{\#}^i,\gamma _{*})≡0\ \mathrm{mod}\ p`$ for any component $`L_\mathrm{\#}^i`$ of $`L_\mathrm{\#}`$. Now we are ready to unknot $`\gamma _{*}`$ using Kirby calculus (\[K\], \[F-R\]). Choose some orientation on $`\gamma _{*}`$. We can add unlinked components with framing $`\pm 1`$ to $`L_\mathrm{\#}`$ around each crossing of $`\gamma _{*}`$, making sure that arrows on $`\gamma _{*}`$ run opposite ways (i.e. the linking number of $`\gamma _{*}`$ with the new component is zero, see Fig. 2). Use the K-move to change the appropriate crossings and thus to unknot $`\gamma _{*}`$. Thus we trivialized $`\gamma _{*}`$ without compromising conditions (1) and (2). Denote the framed link obtained from the initial link $`L_\mathrm{\#}`$ after the described isotopy and adding the new components by $`L_{*}`$. Notice that each new component that we introduce during the above procedure has linking number $`0`$ with any other component of $`L_{*}`$. Fig. 2 To complete the proof of the theorem, consider the $`p`$-fold cyclic branched covering of $`S^3`$ by $`S^3`$ with branching set $`\gamma _{*}`$. Let $`L`$ denote the preimage of $`L_{*}`$. Notice that $`L`$ is strongly $`p`$-periodic. We claim that the result of performing surgery on $`L`$ is $`𝐙_p`$-homeomorphic to $`M`$. The preimage of each component of $`L_{*}`$ consists of $`p`$ components permuted by a $`𝐙_p`$ action, by Lemma 1.3. Therefore, $`𝐙_p`$ acts on the result $`\stackrel{~}{M}`$ of the surgery on $`S^3`$ along $`L`$, with branch set $`\gamma `$ and quotient $`M_{*}`$, the result of surgery on $`L_{*}`$. The group $`H_1(M_{*}∖(\gamma _{*}∪\widehat{L}_\mathrm{\#}))=H_1(S^3∖(\gamma _{*}∪L_{*}))`$ is generated by $`\mu _{*}`$ (a meridian of $`\gamma _{*}`$) and meridians of the components of $`L_{*}`$, say $`b_1,…,b_k`$. Of course, $`H_1(M_{*}∖\gamma _{*})`$ is also generated by $`\mu _{*},b_1,…,b_k`$. By Lemma 1.6, the covering $`\rho :(M∖\gamma )→(M_{*}∖\gamma _{*})`$ is characterized by the map $`\varphi _C:H_1(M_{*}∖\gamma _{*})→𝐙_p`$ (up to an automorphism of $`𝐙_p`$), where $`\varphi _C(K)`$ is the intersection number of $`K∈H_1(M_{*}∖\gamma _{*})`$ with $`C`$ modulo $`p`$. Similarly, the covering $`\stackrel{~}{\rho }:(\stackrel{~}{M}∖\gamma )→(M_{*}∖\gamma _{*})`$ is characterized by a map $`\varphi _2:H_1(M_{*}∖\gamma _{*})→𝐙_p`$. By our construction, $`\varphi _C(\mu _{*})=m`$ and $`\varphi _C(b_i)=0`$ for every $`1≤i≤k`$. We need to show that $`\varphi _2(\mu _{*})=m^{\prime }`$ for some $`m^{\prime }`$ coprime to $`p`$, and $`\varphi _2(b_i)=0`$ for every $`1≤i≤k`$. This follows from the fact that $`\stackrel{~}{\rho }^{-1}(b_i)`$ consists of $`p`$ loops and $`\stackrel{~}{\rho }^{-1}(\mu _{*})`$ is a single loop. Thus, $`\varphi _2`$ and $`\varphi _C`$ are equivalent up to the automorphism of $`𝐙_p`$ sending $`m^{\prime }`$ to $`m`$.
Therefore, the manifolds $`\stackrel{~}{M}∖\gamma `$ and $`M∖\gamma `$ are $`𝐙_p`$-homeomorphic, with a homeomorphism given by $`g:(\stackrel{~}{M}∖\gamma )→(M∖\gamma )`$ such that $`\stackrel{~}{\rho }=\rho ∘g`$. Notice that $`\stackrel{~}{M}`$ can be obtained from $`\stackrel{~}{M}∖\gamma `$ by attaching a 2-handle along $`\stackrel{~}{\rho }^{-1}(\mu _{*})`$ and then a 3-handle, and $`M`$ can be obtained from $`M∖\gamma `$ by attaching a 2-handle along $`\rho ^{-1}(\mu _{*})`$ and then a 3-handle. Since $`g(\stackrel{~}{\rho }^{-1}(\mu _{*}))=\rho ^{-1}(\mu _{*})`$, the homeomorphism $`g`$ can be extended to a $`𝐙_p`$-homeomorphism $`\widehat{g}:\stackrel{~}{M}→M`$. Our proof of Theorem 1.1 is complete. □ Remark 1.7. Notice that if in the proof of Theorem 1.1 we assumed that the link $`L_\mathrm{\#}`$ was algebraically split then the link $`L_{*}`$ would be algebraically split as well. This remark will be important later in the proofs of Theorems 2.1 and 2.2. As a corollary we obtain a proof of Theorem 1.2. Proof of Theorem 1.2. Consider any $`𝐙_p`$-equivariant knot, say $`\widehat{\gamma }`$, in $`M`$. Let $`V_{\widehat{\gamma }}`$ be a $`𝐙_p`$-equivariant regular neighborhood of $`\widehat{\gamma }`$ in $`M`$ and $`\gamma ^{\prime }`$ a curve on $`∂V_{\widehat{\gamma }}`$ which is also $`𝐙_p`$-equivariant. Notice that $`\gamma ^{\prime }`$ intersects a meridian of $`V_{\widehat{\gamma }}`$ exactly once. Now let $`M^{\prime }`$ be the manifold obtained from $`M`$ by a surgery on $`\widehat{\gamma }`$ with the framing defined by $`\gamma ^{\prime }`$. Let $`\gamma ⊂M^{\prime }`$ be the co-core of the surgery. The $`𝐙_p`$ action on $`M`$ yields the action on $`M^{\prime }`$, and our choice of framing guarantees that $`\gamma `$ is the (only) fixed point set of the action. Thus, we can apply to $`M^{\prime }`$ the previous theorem. This proves that $`M`$ can be obtained by an integer surgery on $`L∪\gamma `$, where $`L`$ is a strongly $`p`$-periodic link. Furthermore, the framing of $`\gamma `$ must be coprime to $`p`$, to ensure that $`𝐙_p`$ acts on the resulting manifold with no fixed points. □ Remark 1.8. We plan to extend Theorem 1.1 to any $`𝐙_p`$ orientation preserving action on a closed 3-manifold $`M`$, and to $`𝐙_{p^k}`$ actions. 2. Homology of periodic 3-manifolds. The main goal of this section is to give elementary proofs of Theorems 2.1 and 2.2 using the surgery presentation of $`p`$-periodic 3-manifolds developed in Section 1. 2.1 Linking matrices of framed strongly $`p`$-periodic links and of algebraically split links. Let $`L`$ be a framed oriented link of $`n`$ components $`l_1,…,l_n`$. The linking matrix of $`L`$ is the matrix $`(a_{ij})_{n\times n}`$ defined by $$a_{ij}=\{\begin{array}{cc}\text{lk}(l_i,l_j)\hfill & \text{if }i≠j\hfill \\ \text{framing of }l_i\hfill & \text{if }i=j\hfill \end{array}$$ Let $`L^p`$ be a framed strongly $`p`$-periodic link and $`L_{*}`$ be the corresponding underlying link. Fix an orientation of $`L_{*}`$ and denote the components of $`L_{*}`$ by $`l_1,…,l_n`$. Consider a $`p`$-periodic diagram of $`L^p`$. Denote the $`p`$ copies of the tangle $`R`$ from the diagram (see Fig. 1) by $`R_1,…,R_p`$ in the clockwise order. Lift the orientation of $`L_{*}`$ to $`L^p`$. By Lemma 1.3, each component $`l_i`$ of $`L_{*}`$ has $`p`$ covering preimages in $`L^p`$. Denote them in a clockwise order by $`l_{i1},…,l_{ip}`$.
By the clockwise order here we mean such an order that if we choose any point $`x∈l_{ij}∩R_j`$ then the corresponding point in $`R_{j+1}`$ will belong to $`l_{i,j+1}`$ (subscripts are treated modulo $`p`$). Now consider the following natural order for the components of $`L^p`$: $$l_{11},…,l_{1p},l_{21},…,l_{2p},…,l_{n1},…,l_{np}.$$ It is not hard to see that with regard to this order, the linking matrix $`A_p`$ for $`L^p`$ is of the following form $$A_p=\left(\begin{array}{cccc}A_{11}& B_{12}& ⋯& B_{1n}\\ B_{21}& A_{22}& ⋯& B_{2n}\\ ⋯& ⋯& ⋯& ⋯\\ B_{n1}& B_{n2}& ⋯& A_{nn}\end{array}\right),$$ where all the blocks are $`p\times p`$, $`A_{ii}`$ is the linking matrix for the sublink consisting of $`l_{i1},…,l_{ip}`$, and $`B_{ij}`$ is the matrix with elements $`b_{ks}^{ij}=\text{lk}(l_{ik},l_{js})`$. Recall that a matrix $`(a_{ij})_{k\times k}`$ is called circulant if $`a_{ij}=a_{i+1,j+1}`$, $`i,j=1,…,k`$ (subscripts mod $`k`$). Proposition 2.3. Every block in $`A_p`$ is a circulant matrix. Proof. Consider $`B_{ij}=(b_{ks}^{ij})_{p\times p}`$. Then $`b_{ks}^{ij}=\text{lk}(l_{ik},l_{js})`$ and $`b_{k+1,s+1}^{ij}=\text{lk}(l_{i,k+1},l_{j,s+1})`$. If one rotates the $`p`$-periodic diagram of $`L^p`$ around the center in the clockwise direction by $`2\pi /p`$ then the pair $`(l_{ik},l_{js})`$ will go into $`(l_{i,k+1},l_{j,s+1})`$, taking subscripts modulo $`p`$. Thus, $`\text{lk}(l_{ik},l_{js})=\text{lk}(l_{i,k+1},l_{j,s+1})`$. If we notice that the framing numbers of $`l_{i1},…,l_{ip}`$ are all the same, then the above argument shows that $`A_{ii}`$ is also circulant for any $`i=1,…,n`$. □ Definition. We will call a (framed) link $`L`$ algebraically split if the linking number between any two components of $`L`$ is zero. A strongly $`p`$-periodic (framed) link $`L^p`$ will be called orbitally separated if the underlying link $`L_{*}`$ is algebraically split. Remark 2.4. It is not hard to see that a strongly $`p`$-periodic link $`L^p`$ is orbitally separated iff any two components of $`L^p`$ that cover different components of $`L_{*}`$ have the linking number equal to zero. Corollary 2.5. It follows from Proposition 2.3 and Remark 2.4 that $`L^p`$ is an orbitally separated link iff all the non-diagonal blocks $`B_{ij}`$ in $`A_p`$ are zero matrices. □
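The block-circulant structure of $`A_p`$ described above is straightforward to experiment with numerically. Below is a minimal sketch in Python; the particular link data are invented for illustration only:

```python
import numpy as np

def circulant(row):
    """k x k circulant matrix with first row `row` (a_{i,j} = a_{i+1,j+1})."""
    row = list(row)
    k = len(row)
    return np.array([row[-i:] + row[:-i] for i in range(k)])

# A made-up strongly 3-periodic link with n = 2 orbit components.
A11 = circulant([2, 1, 1])    # framing 2; the three copies pairwise link once
A22 = circulant([-1, 0, 0])   # framing -1; the three copies are unlinked
B12 = circulant([1, 0, 0])    # linkings between copies of different orbits
A_p = np.block([[A11, B12], [B12.T, A22]])
print(A_p)   # symmetric, and every 3 x 3 block is circulant (Proposition 2.3)
```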
2.2 Nullity of symmetric circulant matrices over $`𝐙_p`$. Circulant matrices are very well studied and a lot is known about them (see, for instance, \[D\]). But, apparently, not much is known about circulant matrices over finite fields (or rings). The following two results provide the key tool for our proof of Theorem 2.1, but they also appear to be interesting from a purely matrix theoretical point of view. Lemma 2.6. If $$A=\left(\begin{array}{ccccc}a_1& a_2& a_3& ⋯& a_n\\ a_n& a_1& a_2& ⋯& a_{n-1}\\ ⋯& ⋯& ⋯& ⋯& ⋯\\ a_2& a_3& a_4& ⋯& a_1\end{array}\right)$$ is a circulant matrix with integer elements then $$\text{det }A=\{\begin{array}{cc}a_1^n+a_2^n+⋯+a_n^n\ (\mathrm{mod}\ n),\hfill & \text{if }n\text{ is odd;}\hfill \\ a_1^n-a_2^n+⋯-a_n^n\ (\mathrm{mod}\ n),\hfill & \text{if }n\text{ is even.}\hfill \end{array}$$ Proof. The determinant of $`A`$ is a sum of $`n!`$ terms. The terms of the form $`a_i^n`$, $`i=1,…,n`$, will be called diagonal. Note that any term different from diagonal appears in the sum exactly $`n`$ times: $$a_{i_1}a_{i_2}a_{i_3}⋯a_{i_n},$$ $$a_{i_2}a_{i_3}⋯a_{i_n}a_{i_1},$$ $$⋯$$ $$a_{i_n}a_{i_1}a_{i_2}⋯a_{i_{n-1}}.$$ Note that the sign for all such terms is the same. To prove this we need to show that the permutations $$\sigma =\left(\begin{array}{ccccc}1& 2& 3& ⋯& n\\ i_1& i_2& i_3& ⋯& i_n\end{array}\right)\text{ and }\sigma ^{\prime }=\left(\begin{array}{ccccc}1& 2& 3& ⋯& n\\ i_n+1& i_1+1& i_2+1& ⋯& i_{n-1}+1\end{array}\right)$$ have the same parity (everything is modulo $`n`$). Obviously, $`\left(\begin{array}{ccccc}1& 2& 3& ⋯& n\\ i_n+1& i_1+1& i_2+1& ⋯& i_{n-1}+1\end{array}\right)=`$ $`\left(\begin{array}{ccccc}2& 3& ⋯& n& 1\\ i_1+1& i_2+1& ⋯& i_{n-1}+1& i_n+1\end{array}\right).`$ The row $`\left(\begin{array}{cccccc}2& 3& 4& ⋯& n& 1\end{array}\right)`$ has $`n-1`$ inversions. The numbers of inversions in $`\left(\begin{array}{cccc}i_1& i_2& ⋯& i_n\end{array}\right)`$ and $`\left(\begin{array}{cccc}i_1+1& i_2+1& ⋯& i_n+1\end{array}\right)`$ also differ by $`n-1`$. Thus the parities of $`\sigma `$ and $`\sigma ^{\prime }`$ are the same. Therefore, the total sum of all non-diagonal terms in $`\text{det }A`$ is 0 mod $`n`$. The result follows. □ Denote the nullity of $`A`$ over $`𝐙_n`$ by $`\text{null}_nA`$. Lemma 2.7. Let $`p`$ be an odd prime integer and $`A`$ be a $`p\times p`$ symmetric circulant matrix over $`𝐙_p`$; then $`\text{null}_pA≠1`$. Proof. Assume $`\text{null}_pA>0`$, i.e. $`a_1^p+2a_2^p+⋯+2a_{\frac{p+1}{2}}^p=a_1+2a_2+⋯+2a_{\frac{p+1}{2}}=0\ (\mathrm{mod}\ p)`$, by Lemma 2.6 (the second equality follows from Fermat’s theorem). After adding all rows to the last one and all columns to the last column we get $$\text{det }A=\text{det}\left(\begin{array}{cccccc}a_1& a_2& a_3& ⋯& a_3& 0\\ a_2& a_1& a_2& ⋯& a_4& 0\\ ⋯& ⋯& ⋯& ⋯& ⋯& ⋯\\ a_3& a_4& a_5& ⋯& a_1& 0\\ 0& 0& 0& ⋯& 0& 0\end{array}\right).$$ Denote the $`i`$th column of the above matrix by $`C_i`$. Then the linear combination $$(p-1)C_1+(p-2)C_2+⋯+2C_{p-2}+C_{p-1}$$ is 0 modulo $`p`$. Indeed, the $`i`$th row of the linear combination is $$(p-1)a_i+(p-2)a_{i+1}+⋯+2a_{i-3}+a_{i-2},$$ all the coefficients and subscripts are modulo $`p`$. It is not hard to see that after the substitution $`a_1=-2a_2-2a_3-⋯-2a_{\frac{p+1}{2}}`$, the coefficient for $`a_k`$ ($`k≠1`$) in the above sum is $$-2(p-i)+(p-(i+k))+(p-(i-k))=0.$$ Thus, if $`\text{det }A=0`$ over $`𝐙_p`$ then $`\text{null}_pA≥2`$. □ Remark 2.8. Lemma 2.7 is not true if $`p=2`$. For instance, the nullity of $`\left(\begin{array}{cc}1& 1\\ 1& 1\end{array}\right)`$ is 1 over $`𝐙_2`$.
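In the spirit of the Mathematica computations that originally suggested Theorem 2.1, Lemmas 2.6 and 2.7 can be verified by brute force for a small prime. The following sketch (ours, with an elementary Gaussian-elimination nullity over $`𝐙_p`$) checks every symmetric $`5\times 5`$ circulant matrix over $`𝐙_5`$, and also reproduces the $`p=2`$ counterexample of Remark 2.8:

```python
import itertools
import numpy as np

def circulant(row):
    row = list(row)
    k = len(row)
    return np.array([row[-i:] + row[:-i] for i in range(k)])

def nullity_mod_p(A, p):
    """Dimension of the kernel of A over the field Z_p (Gaussian elimination)."""
    M = [[int(x) % p for x in row] for row in A]
    n_rows, n_cols, rank = len(M), len(M[0]), 0
    for c in range(n_cols):
        pivot = next((r for r in range(rank, n_rows) if M[r][c]), None)
        if pivot is None:
            continue
        M[rank], M[pivot] = M[pivot], M[rank]
        inv = pow(M[rank][c], -1, p)                  # modular inverse
        M[rank] = [(x * inv) % p for x in M[rank]]
        for r in range(n_rows):
            if r != rank and M[r][c]:
                f = M[r][c]
                M[r] = [(x - f * y) % p for x, y in zip(M[r], M[rank])]
        rank += 1
    return n_cols - rank

p = 5
for half in itertools.product(range(p), repeat=(p + 1) // 2):
    row = list(half) + list(half[1:][::-1])   # palindromic tail <=> symmetric
    A = circulant(row)
    assert round(np.linalg.det(A)) % p == sum(a**p for a in row) % p  # Lemma 2.6
    assert nullity_mod_p(A, p) != 1                                   # Lemma 2.7
assert nullity_mod_p([[1, 1], [1, 1]], 2) == 1    # Remark 2.8: fails for p = 2
print("Lemmas 2.6 and 2.7 verified for all symmetric 5x5 circulants over Z_5")
```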
2.3. Proof of Theorem 2.1. In this section we will prove Theorem 2.1, the main theorem of Section 2. Let $`M`$ be a closed oriented 3-manifold obtained by a Dehn surgery on a framed oriented link $`L`$, and let $`A`$ be the linking matrix of $`L`$. The following fact is well-known. Lemma 2.9. $`\text{null}_pA=\text{rank }H_1(M;𝐙_p)`$. □ Now we are ready to prove an important special case of Theorem 2.1. Proposition 2.10. Let $`p`$ be an odd prime integer. If a closed orientable 3-manifold $`M`$ can be obtained from $`S^3`$ by Dehn surgery on an orbitally separated framed link $`L^p`$ then $`H_1(M;𝐙_p)≇𝐙_p`$. Proof. Assume that $`M`$ can be obtained by Dehn surgery on an orbitally separated framed link $`L^p`$. Let $`A_p`$ be the linking matrix of $`L^p`$, as constructed in Section 2.1. By Corollary 2.5, $`A_p`$ is a block diagonal matrix. Therefore, $`\text{null}_pA_p`$ is equal to the sum $`\text{null}_pA_{11}+⋯+\text{null}_pA_{nn}`$. By Proposition 2.3, each $`A_{ii}`$ is a circulant matrix. Moreover, since $`A_p`$ is a linking matrix, each $`A_{ii}`$ is symmetric. By Lemma 2.7, $`\text{null}_pA_{ii}≠1`$, hence $`\text{null}_pA_p≠1`$. It follows from Lemma 2.9 that $`H_1(M;𝐙_p)≇𝐙_p`$. □ To finish our proof of the main theorem of Section 2 we need the following result (Corollary 2.3 in \[Mu-2\]). Proposition 2.11 (H. Murakami) Fix an odd prime $`r`$. For every connected, closed, oriented 3-manifold $`M`$, there exist lens spaces $`L(n_1,1),…,L(n_k,1)`$ with $`n_i`$ coprime to $`r`$ such that the connected sum $`M\mathrm{\#}L(n_1,1)\mathrm{\#}⋯\mathrm{\#}L(n_k,1)`$ can be obtained by Dehn surgery on an algebraically split link with integer framing. □ Now we are ready to prove Theorem 2.1 in full generality. Proof of Theorem 2.1. Let $`M`$ be a $`p`$-periodic closed oriented 3-manifold and $`M_{*}=M/𝐙_p`$. By Proposition 2.11, there are integers $`n_1,…,n_k`$ coprime to $`p`$ such that the connected sum $`\stackrel{~}{M}_{*}=M_{*}\mathrm{\#}L(n_1,1)\mathrm{\#}⋯\mathrm{\#}L(n_k,1)`$ can be obtained by Dehn surgery on an algebraically split framed link $`\stackrel{~}{L}_{*}`$. Consider $`\stackrel{~}{M}=M\mathrm{\#}pL(n_1,1)\mathrm{\#}⋯\mathrm{\#}pL(n_k,1)`$. Obviously, $`\stackrel{~}{M}`$ is $`p`$-periodic with $`\stackrel{~}{M}_{*}=\stackrel{~}{M}/𝐙_p`$. Moreover, it easily follows from the proof of Theorem 1.1 that $`\stackrel{~}{M}`$ can be obtained using Dehn surgery on an orbitally separated framed link (see Remark 1.7). Therefore, by Proposition 2.10, $`H_1(\stackrel{~}{M};𝐙_p)≇𝐙_p`$. Since the numbers $`n_1,…,n_k`$ are coprime to $`p`$, we have $`H_1(L(n_i,1),𝐙_p)=0`$ for every $`i`$. This implies that $`H_1(M;𝐙_p)=H_1(\stackrel{~}{M};𝐙_p)≇𝐙_p`$. □ 2.4. Orientation preserving $`𝐙_2`$ actions. Proof of Theorem 2.2. Remark 2.12. Theorem 2.1 is not true in the case $`p=2`$. A simple counterexample is $`S^2\times S^1`$. It is interesting to notice that $`S^2\times S^1`$ admits two different orientation preserving actions of $`𝐙_2`$ with the fixed point set being a circle. Indeed, let $`H(1,1)`$ be the negative Hopf link with framing 1 on each component and let $`H(-1,-1)`$ be the negative Hopf link with framing $`-1`$ on each component (see Fig. 3). Dehn surgery on each of these framed links produces $`S^2\times S^1`$. Moreover, permutation of the components defines $`𝐙_2`$ actions on $`S^2\times S^1`$ with a circle as the fixed point set. These two actions are different. In the first case the orbit space of the action $`M_{*}`$ is $`S^2\times S^1`$, and in the second case $`M_{*}=RP^3`$. Furthermore, one can see that in the first case $`𝐙_2`$ acts on $`𝐙=H_1(S^2\times S^1)`$ trivially, and in the second case it sends 1 to $`-1`$. Both of the described actions were studied in \[P-3\]. Fig. 3 An interesting criterion for 2-periodic rational homology 3-spheres is provided by Theorem 2.2. Before we prove it, let us recall that every finite abelian group can be uniquely decomposed into a direct sum of cyclic groups whose orders are powers of prime numbers. Such a decomposition will be called canonical.
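Before the proof, the $`2\times 2`$ computation at its heart can be sketched numerically: the Smith normal form of $`\left(\begin{array}{cc}a& b\\ b& a\end{array}\right)`$ with nonzero determinant is $`\mathrm{diag}(g,|detA|/g)`$ where $`g=\mathrm{gcd}(a,b)`$, so each block contributes $`𝐙_g⊕𝐙_{|detA|/g}`$. The helper below is ours, purely illustrative, and matches the case analysis in the proof:

```python
from math import gcd

def group_from_block(a, b):
    """Invariant factors (g, |det|/g) of the abelian group presented by
    [[a, b], [b, a]], assuming det = a^2 - b^2 != 0 (the rational homology
    sphere case); the group is Z_g (+) Z_{|det|/g}."""
    g = gcd(a, b)
    d = abs(a * a - b * b)
    return (g, d // g)

print(group_from_block(3, 1))  # (1, 8): a, b both odd gives the Z_8 of L(8,3)
print(group_from_block(2, 4))  # (2, 6) = Z_2 (+) Z_2 (+) Z_3: an even count of Z_2
```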
Proof of Theorem 2.2. Let $`M`$ be a rational homology 3-sphere such that $`H_1(M,𝐙)`$ does not have elements of order 16. Assume that $`M`$ admits an orientation-preserving action of $`𝐙_2`$ such that the fixed point set is a circle. As before, let $`M_{*}=M/𝐙_2`$ be the orbit space of the action. By Theorem 1.1, $`M`$ can be obtained using Dehn surgery on a strongly 2-periodic framed link $`L^2`$ with the underlying link $`L_{*}`$. By Lemma 1.4, $`M_{*}`$ is also a rational homology 3-sphere. Therefore, $`M_{*}`$ can be obtained by surgery on an algebraically split framed link (see \[Mu-1\], \[O-1\]). Thus, we may assume that $`L_{*}`$ is algebraically split (see Remark 1.7). By definition $`L^2`$ is orbitally separated, and by Proposition 2.3 and Corollary 2.5 the linking matrix of $`L^2`$ is block diagonal with every block being a $`2\times 2`$ symmetric circulant matrix. Recall that the linking matrix of $`L^2`$ can be considered as a presentation matrix for the abelian group $`H_1(M,𝐙)`$. Therefore, it is enough to show that every finite abelian group $`G`$ presented by a matrix $`A=\left(\begin{array}{cc}a& b\\ b& a\end{array}\right)`$ with $`a,b∈𝐙`$ either has an element of order 16, or has an even number of terms $`𝐙_2`$ and an even number of terms $`𝐙_4`$ in its canonical decomposition; moreover, it is possible that the canonical decomposition of $`G`$ contains only one term of the form $`𝐙_{2^t}`$ for any $`t≥3`$. Denote by $`g`$ the greatest common divisor of $`a`$ and $`b`$. Since $`G`$ is finite, we have $`detA≠0`$ and $`g>0`$. It is easy to see that $`G`$ can be presented by the diagonal matrix $`\stackrel{~}{A}=\left(\begin{array}{cc}g& 0\\ 0& \frac{|detA|}{g}\end{array}\right)`$. Therefore $`G≅𝐙_g⊕𝐙_{|detA|/g}`$ (here by $`𝐙_1`$ we mean the trivial group). If $`a`$ and $`b`$ are both odd then $`g`$ is odd and $`\frac{|detA|}{g}`$ is divisible by 8. If $`a`$ and $`b`$ are both even then $`g`$ is even. Let $`t`$ be the power of 2 in the prime decomposition of $`g`$. We have two different cases: 1) $`\frac{|detA|}{g^2}`$ is odd. Then $`G≅𝐙_{g/2^t}⊕𝐙_{2^t}⊕𝐙_{2^t}⊕𝐙_{2l+1}`$, where $`2l+1=\frac{|detA|}{g2^t}`$. 2) $`\frac{|detA|}{g^2}`$ is even. This means that both $`\frac{a}{g}`$ and $`\frac{b}{g}`$ are odd and therefore $`\frac{|detA|}{g^2}`$ is divisible by $`2^s`$, $`s≥3`$, which implies that $`G≅𝐙_{g/2^t}⊕𝐙_{2^t}⊕𝐙_{2^{s+t}}⊕𝐙_{2h+1}`$, where $`2h+1=\frac{|detA|}{g2^{s+t}}`$. In this case we have an element of order $`2^{s+t}`$, which is at least 16. We are left with the case when one of the numbers $`a`$ and $`b`$ is even and the other one is odd. But in this case $`g`$ is odd and $`|detA|`$ is odd. Therefore, if $`G`$ does not have elements of order 16 then the number of terms $`𝐙_2`$ and the number of terms $`𝐙_4`$ in the canonical decomposition of $`G`$ are both even numbers. To see that it is possible to have a single $`𝐙_8`$ term, consider the case $`a=3`$ and $`b=1`$ (an example of a manifold with such first homology is the lens space $`L(8,3)`$). □ Corollary 2.13 Let $`M`$ be a rational homology 3-sphere such that the group $`H_1(M;𝐙)`$ does not have elements of order 8. If $`M`$ admits an orientation preserving action of $`𝐙_2`$ with the fixed point set being a circle then $`H_1(M;𝐙_2)≅𝐙_2^m`$ for some even integer $`m`$. przytyck@research.circ.gwu.edu sokolov@gwu.edu http://gwu.edu/~sokolov/math\_page/mathematics.htm
# Chemistry in the Envelopes around Massive Young Stars ## 1. Introduction Massive star-forming regions have traditionally been prime targets for astrochemistry owing to their bright molecular lines (e.g., Johansson et al. 1984, Cummins et al. 1986, Irvine et al. 1987, Ohishi 1997). Massive young stellar objects (YSOs) have luminosities of $`10^4-10^6`$ L<sub>⊙</sub> and involve young O- and B-type stars. Because their formation proceeds more rapidly than that of low-mass stars and involves ionizing radiation, substantial chemical differences may be expected. The formation of high mass stars is much less well understood than that of low-mass stars. For example, observational phenomena such as ultracompact H II regions, hot cores, masers and outflows have not yet been linked into a single evolutionary picture. Chemistry may well be an important diagnostic tool in establishing such a sequence. Most of the early work on massive star-forming regions has centered on two sources, Orion–KL and SgrB2. Numerous line surveys at millimeter (e.g., Blake et al. 1987, Turner 1991) and submillimeter (Jewell et al. 1989, Sutton et al. 1991, 1995, Schilke et al. 1997) wavelengths have led to an extensive inventory of molecules through identification of thousands of lines. In addition, the surveys have shown strong chemical variations between different sources. In recent years, new observational tools have allowed a more detailed and systematic study of the envelopes of massive YSOs. Submillimeter observations routinely sample smaller beams (typically 15<sup>′′</sup> vs. 30<sup>′′</sup>–1<sup>′</sup>) and higher critical densities ($`10^6`$ vs. $`10^4`$ cm<sup>-3</sup>) than the earlier work. Moreover, interferometers at 3 and 1 millimeter provide maps with resolutions of $`0.5^{\prime \prime }`$–5<sup>′′</sup>. Finally, ground- and space-based infrared observations allow both the gas and the ices to be sampled (e.g., Evans et al. 1991, van Dishoeck et al. 1999). These observational developments have led to a revival of the study of massive star formation within the last few years. Recent overviews of the physical aspects of high-mass star formation are found in Churchwell (1999) and Garay & Lizano (1999). In this brief review, we will first summarize available observational diagnostics to study the different phases and physical components associated with massive star formation. Subsequently, an overview of recent results on intermediate mass YSOs is given, which are often better characterized than their high-mass counterparts because of their closer distance. Subsequently, we will discuss a specific sample of embedded massive YSOs which have been studied through a combination of infrared and submillimeter data. After illustrating the modeling techniques, we address the question of how the observed chemical variations are related to evolutionary effects, different conditions in the envelope (e.g., $`T`$, mass) or different luminosities of the YSOs. More extensive overviews of the chemical evolution of star-forming regions are given by van Dishoeck & Blake (1998), Hartquist et al. (1998), van Dishoeck & Hogerheijde (1999) and Langer et al. (2000). Schilke et al. (this volume) present high spatial resolution interferometer studies, whereas Macdonald & Thompson (this volume) focus on submillimeter data of hot core/ultracompact H II regions. Ices are discussed by Ehrenfreund & Schutte (this volume).
## 2. Submillimeter and Infrared Diagnostics The majority of molecules are detected at (sub-)millimeter wavelengths, and line surveys highlight the large variations in chemical composition between different YSOs, both within the same parent molecular cloud and between different clouds. The recent 1–3 mm surveys of Sgr B2 (Nummelin et al. 1998, Ohishi & Kaifu 1999) dramatically illustrate the strong variations between various positions (see Figure 1). The North position is typical of ‘hot core’-type spectra, which are rich in lines of saturated organic molecules. This position has also been named the ‘large molecule heimat’ (e.g., Kuan & Snyder 1994, Liu & Snyder 1999). The Middle position has strong SO and SO<sub>2</sub> lines, whereas the Northwest position has a less-crowded spectrum with lines of ions and long carbon chains. A similar differentiation has been observed for three positions in the W 3 giant molecular cloud by Helmich & van Dishoeck (1997), who suggested an evolutionary sequence based on the chemistry. The availability of complete infrared spectra from 2.4–200 $`\mu `$m with the Infrared Space Observatory (ISO) allows complementary variations in infrared features to be studied. Figure 2 shows an example of ISO–SWS and LWS spectra of two objects: Cep A ($`L≈2.4\times 10^4`$ L<sub>⊙</sub>) and S 106 ($`L≈4.2\times 10^4`$ L<sub>⊙</sub>). The Cep A spectrum is characteristic of the deeply embedded phase, in which the silicates and ices in the cold envelope are seen in absorption. The S 106 spectrum is typical of a more evolved massive YSO, with strong atomic and ionic lines in emission and prominent PAH features. A similar sequence has been shown by Ehrenfreund et al. (1998) for a set of southern massive young stars with luminosities up to $`4\times 10^5`$ L<sub>⊙</sub>. The most successful models for explaining these different chemical characteristics involve accretion of species in an icy mantle during the (pre-)collapse phase, followed by grain-surface chemistry and evaporation of ices once the YSO has started to heat its surroundings (e.g., Millar 1997). The evaporated molecules subsequently drive a rapid high-temperature gas-phase chemistry for a period of $`10^4-10^5`$ yr, resulting in complex, saturated organic molecules (e.g., Charnley et al. 1992, 1995; Charnley 1997; Caselli et al. 1993, Viti & Williams 1999). The abundance ratios of species such as CH<sub>3</sub>OCH<sub>3</sub>/CH<sub>3</sub>OH and SO<sub>2</sub>/H<sub>2</sub>S show strong variations with time, and may be used as ‘chemical clocks’ for a period of 5000–30,000 yr since evaporation. Once most of the envelope has cleared, the ultraviolet radiation can escape and forms a photon-dominated region (PDR) at the surrounding cloud material, in which molecules are dissociated into radicals (e.g., HCN → CN) and PAH molecules are excited to produce infrared emission. The (ultra-)compact H II region gives rise to strong ionic lines due to photoionization. Table 1 summarizes the chemical characteristics of the various physical components, together with the observational diagnostics at submillimeter and infrared wavelengths. Within the single-dish submillimeter and ISO beams, many of these components are blended together and interferometer observations will be essential to disentangle them. Nevertheless, the single-dish data are useful because they encompass the entire envelope and highlight the dominant component in the beam. Combined with the above chemical scenario, one may then attempt to establish an evolutionary sequence of the sources.
The physical distinction between the ‘hot core’ and the warm inner envelope listed in Table 1 is currently not clear: does the ‘hot core’ represent a separate physical component or is it simply the inner warm envelope at a different stage of chemical evolution? Even from an observational point of view, there appear to be different types of ‘hot cores’: some of them are internally heated by the young star (e.g., W 3(H<sub>2</sub>O)), whereas others may just be dense clumps of gas heated externally (e.g., the Orion compact ridge). This point will be further discussed in §§4 and 5. Disks are not included in Table 1, because little is known about their chemical characteristics, or even their existence, around high-mass YSOs (see Norris, this volume). ## 3. Intermediate-Mass YSOs Intermediate-mass pre-main sequence stars, in particular the so-called Herbig Ae/Be stars, have received increased observational attention in recent years (see Waters & Waelkens 1998 for a review). These stars have spectral type A or B and show infrared excesses due to circumstellar dust. Typical luminosities are in the range $`10^2-10^4`$ L<sub>⊙</sub>, and several objects have been located within 1 kpc distance. Systematic mapping of CO and the submillimeter continuum of a sample of objects has been performed by Fuente et al. (1998) and Henning et al. (1998). The data show the dispersion of the envelope with time starting from the deeply embedded phase (e.g., LkH$`\alpha `$234) to the intermediate stage of a PDR (e.g., HD 200775 illuminating the reflection nebula NGC 7023) to the more evolved stage where the molecular gas has disappeared completely (e.g., HD 52721). The increasing importance of photodissociation in the chemistry is probed by the increase in the CN/HCN abundance ratio. This ratio has been shown in other high-mass sources to be an excellent tracer of PDRs (e.g., Simon et al. 1997, Jansen et al. 1995). Line surveys of these objects in selected frequency ranges would be useful to investigate their chemical complexity, especially in the embedded phase. ISO-SWS observations of a large sample of Herbig Ae/Be stars have been performed by van den Ancker et al. (2000b,c). In the embedded phase, shock indicators such as \[S I\] 25.2 $`\mu `$m are strong, whereas in the later phases PDR indicators such as PAHs are prominent. An excellent example of this evolutionary sequence is provided by three Herbig Ae stars in the BD+40<sup>o</sup>4124 region ($`d≈1`$ kpc). The data suggest that in the early phases, the heating of the envelope is dominated by shocks, whereas in later phases it is controlled by ultraviolet photons. ISO-LWS data have been obtained for a similar sample of Herbig Ae/Be stars by Lorenzetti et al. (1999) and Giannini et al. (2000), and are summarized by Saraceno et al. (1999). The \[C II\] 158 $`\mu `$m and \[O I\] 63 and 145 $`\mu `$m lines are prominent in many objects and are due primarily to the PDR component in the large LWS beam ($`80^{\prime \prime }`$). High-$`J`$ CO and OH far-infrared lines have been detected in some objects and indicate the presence of a compact, high temperature and density region of ∼1000 AU in size, presumably tracing the inner warm envelope (see Figure 3). Far-infrared lines of H<sub>2</sub>O are seen in low-mass YSO spectra, but are weak or absent in those of intermediate- and high-mass YSOs, with the exception of Orion-KL and SgrB2 (e.g., Harwit et al. 1998, Cernicharo et al. 1997, Wright et al. 2000).
The absence of H<sub>2</sub>O lines in higher-mass objects may be partly due to the larger distance of these objects, resulting in substantial dilution in the LWS beam. However, photodissociation of H<sub>2</sub>O to OH and O by the enhanced ultraviolet radiation may also play a role. In summary, both the submillimeter and infrared diagnostics reveal an evolutionary sequence from the youngest ‘Group I’ objects to ‘Group III’ objects (cf. classification by Fuente et al. 1998), in which the envelope is gradually dispersed. Such a sequence is analogous to the transition from embedded Class 0/I objects to more evolved Class II/III objects in the case of low-mass stars (Adams et al. 1987). The ISO data provide insight into the relative importance of the heating and removal mechanisms of the envelope. At the early stages of intermediate-mass star formation, shocks due to outflows appear to dominate whereas at later stages radiation is more important. ## 4. Embedded, Infrared-Bright Massive YSOs ### 4.1. Sample The availability of complete, high quality ISO spectra for a significant sample of massive young stars provides a unique opportunity to study these sources through a combination of infrared and submillimeter spectroscopy, and further develop these diagnostics. Van der Tak et al. (2000a) have selected a set of ∼10 deeply embedded massive YSOs which are bright at mid-infrared wavelengths (12 $`\mu `$m flux $`>`$ 100 Jy), have luminosities of $`10^3-2\times 10^5`$ L<sub>⊙</sub> and distances $`d≲4`$ kpc. The sources are all in an early evolutionary state (comparable to the ‘Class 0/I’ or ‘Group I’ stages of low- and intermediate-mass stars), as indicated by their weak radio continuum emission and absence of ionic lines and PAH features. In addition to ISO spectra, JCMT submillimeter data and OVRO interferometer observations have been obtained. For most of the objects high spectral resolution ground-based infrared data of CO, <sup>13</sup>CO and H<sub>3</sub><sup>+</sup> are available (Mitchell et al. 1990, Geballe & Oka 1996, McCall et al. 1999), and occasionally H<sub>2</sub> (Lacy et al. 1994, Kulesa et al. 1999). For comparison, 5 infrared-weak sources with similar luminosities are studied at submillimeter wavelengths only. This latter set includes hot cores and ultracompact H II regions such as W 3(H<sub>2</sub>O), IRAS 20126+4104 (Cesaroni et al. 1997, 1999), and NGC 6334 IRS1. ### 4.2. Physical structure of the envelope In order to derive molecular abundances from the observations, a good physical model of the envelope is a prerequisite. Van der Tak et al. (1999, 2000a) outline the techniques used to constrain the temperature and density structure (Figure 4). The total mass within the beam is derived from submillimeter photometry, whereas the size scale of the envelope is constrained from line and continuum maps. The dust opacity has been taken from Ossenkopf & Henning (1994) and yields values for the mass which are consistent with those derived from C<sup>17</sup>O for warm sources where CO is not depleted onto grains. The temperature structure of the dust is calculated taking the observed luminosity of the source, given a power-law density structure (see below). At large distances from the star, the temperature follows the optically thin relation $`r^{-0.4}`$, whereas at smaller distances the dust becomes optically thick at infrared wavelengths and the temperature increases more steeply (see Figure 5).
It is assumed that $`T_{\mathrm{gas}}=T_{\mathrm{dust}}`$, consistent with explicit calculations of the gas and dust temperatures by, e.g., Doty & Neufeld (1997) for these high densities. The continuum data are sensitive to temperature and column density, but not to density. Observations of a molecule with a large dipole moment are needed to subsequently constrain the density structure. One of the best choices is CS and its isotope C<sup>34</sup>S, for which lines ranging from $`J`$=2–1 to 10–9 have been observed. Assuming a power-law density profile $`n(r)=n_o(r/r_o)^{-\alpha }`$, values of $`\alpha `$ can be determined from minimizing $`\chi ^2`$ between the CS line data and excitation models. The radiative transfer in the lines is treated through a Monte-Carlo method. The best fit to the data on the infrared-bright sources is obtained for $`\alpha =1.0-1.5`$, whereas the hot core/compact H II region sample requires higher values, $`\alpha ≈2`$. This derivation assumes that the CS abundance is constant through the envelope; if it increases with higher temperatures, such as may be the case for hot cores, the values of $`\alpha `$ are lowered. Note that the derived values of $`\alpha =1.0-1.5`$ are lower than those found for deeply embedded low-mass objects, where $`\alpha ≈2`$ (e.g., Motte et al. 1998, Hogerheijde et al. 1999). Figure 5 displays the derived temperature and density structure for the source GL 2591, together with the sizes of the JCMT and OVRO beams. While the submillimeter data are weighted toward the colder, outer envelope, the infrared absorption line observations sample a pencil-beam line of sight toward the YSO and are more sensitive to the inner warm (∼1000 K) region. On these small scales, the envelope structure deviates from a radial power law, which decreases the optical depth at near-infrared wavelengths by a factor of ∼3 (van der Tak et al. 1999). For sources for which interferometer data are available, unresolved compact continuum emission is detected on scales of a few thousand AU or less. This emission is clearly enhanced compared with that expected from the inner “tip” of the power-law envelope, and its spectral index indicates optically thick warm dust, most likely in a dense circumstellar shell or disk. The presence of this shell or disk is also indicated by the prevalence of blue-shifted outflowing dense gas without a red-shifted counterpart on $`<10^{\prime \prime }`$ scales.
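The bare bones of such an envelope model are easy to write down. The sketch below is ours and purely illustrative: all parameter values are placeholders rather than fitted results for any source, and the $`T∝r^{-0.4}`$ scaling is only valid in the optically thin outer envelope discussed above:

```python
import numpy as np

MU = 2.3 * 1.67e-24                  # mean gas mass per particle [g] (assumed)
R0, N0, ALPHA = 1.0e17, 1.0e5, 1.25  # reference radius [cm], density [cm^-3], slope
T0 = 40.0                            # dust temperature at R0 [K] (assumed)

def density(r):
    """Power-law density profile n(r) = n0 (r / r0)^(-alpha), in cm^-3."""
    return N0 * (r / R0) ** (-ALPHA)

def temperature(r):
    """Optically thin dust temperature, T proportional to r^(-0.4)."""
    return T0 * (r / R0) ** (-0.4)

def envelope_mass(r_in, r_out, n_steps=4000):
    """Gas mass between r_in and r_out [g]: integrate 4 pi r^2 mu n(r) dr."""
    r = np.logspace(np.log10(r_in), np.log10(r_out), n_steps)
    integrand = 4.0 * np.pi * r**2 * MU * density(r)
    return float(np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(r)))

print(envelope_mass(1e16, 3e17) / 1.99e33, "M_sun inside 3e17 cm")
```

In an actual fit, the slope and normalization would be varied until synthetic CS line ratios and continuum fluxes reproduce the observations.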
### 4.3. Chemical structure: infrared absorption lines The ISO-SWS spectra of the infrared-bright sources show absorption by various gas-phase molecules, in addition to strong features by ices. Molecules such as CO<sub>2</sub> (van Dishoeck et al. 1996, Boonman et al. 1999, 2000a), H<sub>2</sub>O (van Dishoeck & Helmich 1996, Boonman et al. 2000b), CH<sub>4</sub> (Boogert et al. 1998), HCN and C<sub>2</sub>H<sub>2</sub> (Lahuis & van Dishoeck 2000) have been detected (see also van Dishoeck 1998, Dartois et al. 1998). In the infrared, absorption out of all $`J`$-levels is observed in a single spectrum. The excitation temperatures $`T_{\mathrm{ex}}`$ of the various molecules, calculated assuming LTE, range from ≲100 to ∼1000 K between sources, giving direct information on the physical component in which the molecules reside. While CO is well-mixed throughout the envelopes, H<sub>2</sub>O, HCN and C<sub>2</sub>H<sub>2</sub> are enhanced at high temperatures. In contrast, CO<sub>2</sub> seems to avoid the hottest gas. High spectral resolution ground-based data of HCN and C<sub>2</sub>H<sub>2</sub> by Carr et al. (1995) and Lacy et al. (1989) for a few objects suggest line widths of at most a few km s<sup>-1</sup>, excluding an origin in outflowing gas. The abundances of H<sub>2</sub>O, HCN and C<sub>2</sub>H<sub>2</sub> increase by factors of ≳10 with increasing $`T_{\mathrm{ex}}`$ (see Figure 7). The warm H<sub>2</sub>O must be limited to a ≲1000 AU region, since the pure rotational lines are generally not detected in the $`80^{\prime \prime }`$ ISO-LWS beam (Wright et al. 1997). For CO<sub>2</sub>, the abundance variation between sources is less than a factor of 10, and no clear trend with $`T_{\mathrm{ex}}`$ is found. For the same sources, the H<sub>2</sub>O and CO<sub>2</sub> ice abundances show a decrease by an order of magnitude, consistent with evaporation of the ices. However, the gas-phase H<sub>2</sub>O and CO<sub>2</sub> abundances are factors of ∼10 lower than expected if all evaporated molecules stayed in the gas phase, indicating that significant chemical processing occurs after evaporation. More detailed modeling using the source structures derived from submillimeter data is in progress. ### 4.4. Chemical structure: submillimeter emission The JCMT data of the infrared-bright objects show strong lines, but lack the typical crowded ‘hot core’ spectra observed for objects such as W 3(H<sub>2</sub>O) and NGC 6334 IRS1. Complex organics such as CH<sub>3</sub>OCH<sub>3</sub> and CH<sub>3</sub>OCHO are detected in some sources (e.g., GL 2591, NGC 7538 IRS1), but are not as prominent as in the comparison sources. Yet warm gas is clearly present in these objects. Is the ‘hot core’ still too small to be picked up by the single dish beams, or are the abundances of these molecules not (yet) enhanced? To investigate this question, van der Tak et al. (2000b) consider the analysis of two species, H<sub>2</sub>CO and CH<sub>3</sub>OH. Both species have many lines throughout the submillimeter originating from low- and high-lying energy levels. Given the physical structure determined in §4.2, abundance profiles can be constrained. Two extreme, but chemically plausible models are considered: (i) a model with a constant abundance throughout the envelope. This model is motivated by the fact that pure gas-phase reaction schemes do not show large variations in calculated abundances between 20 and 100 K; (ii) a model in which the abundance ‘jumps’ to a higher value at the ice evaporation temperature, $`T_d≈90`$ K. In this model, the abundances in the outer envelope are set at those observed in cold clouds, so that the only free parameter is the amount of abundance increase. It is found that the H<sub>2</sub>CO data can be well fit with a constant abundance of a few $`\times 10^{-9}`$ throughout the envelope. However, the high $`J,K`$ data for CH<sub>3</sub>OH require a jump in its abundance from $`10^{-9}`$ to $`10^{-7}`$ for the warmer sources. This is consistent with the derived excitation temperatures: H<sub>2</sub>CO has a rather narrow range of $`T_{\mathrm{ex}}`$=50–90 K, whereas CH<sub>3</sub>OH shows $`T_{\mathrm{ex}}`$=30–200 K. Moreover, the interferometer maps of CH<sub>3</sub>OH rule out constant abundance models. The jump observed for CH<sub>3</sub>OH is chemically plausible since this molecule is known to be present in icy grain mantles with abundances of 5–40% with respect to H<sub>2</sub>O ice, i.e., $`10^{-7}-10^{-6}`$ w.r.t. H<sub>2</sub>.
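The two abundance models can be encoded in a few lines. The sketch below reuses the toy `temperature` profile from the previous example and the CH<sub>3</sub>OH numbers quoted above; the 90 K evaporation temperature follows the text, and everything else is illustrative:

```python
X_OUT, X_IN, T_EVAP = 1e-9, 1e-7, 90.0   # outer/inner abundance, evaporation T [K]

def abundance_constant(r):
    """Model (i): a single abundance everywhere in the envelope."""
    return X_OUT

def abundance_jump(r):
    """Model (ii): the abundance jumps where ices evaporate (T_dust > 90 K)."""
    return X_IN if temperature(r) > T_EVAP else X_OUT

for r in (1e16, 3e16, 1e17):   # radii in cm
    print(f"r = {r:.0e} cm: T = {temperature(r):5.1f} K, "
          f"x(CH3OH) = {abundance_jump(r):.0e}")
```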
Similar increases in the abundances of organic molecules (e.g., CH<sub>3</sub>OH, C<sub>2</sub>H<sub>3</sub>CN, …) are found with increasing $`T_{\mathrm{dust}}`$ for a set of ‘hot core’ objects by Ikeda & Ohishi (1999). ### 4.5. Comparison with chemical models Both the infrared and submillimeter data show increases in the abundances of various molecules with increasing temperature. Four types of species can be distinguished: (i) ‘passive’ molecules which are formed in the gas phase, freeze out onto grains during the cold (pre-)collapse phase and are released during warm-up without chemical modification (e.g., CO, C<sub>2</sub>H<sub>2</sub>); (ii) molecules which are formed on the grains during the cold phase by surface reactions and are subsequently released into the warm gas (e.g., CH<sub>3</sub>OH); (iii) molecules which are formed in the warm gas by gas-phase reactions with evaporated molecules (e.g., CH<sub>3</sub>OCH<sub>3</sub>); (iv) molecules which are formed in the hot gas by high temperature reactions (e.g., HCN). These types of molecules are associated with characteristic temperatures of (a) $`T_{\mathrm{dust}}<20`$ K, where CO is frozen out; the presence of CO ice is thought to be essential for the formation of CH<sub>3</sub>OH; (b) $`T_{\mathrm{dust}}≈90`$ K, where all ices evaporate on a time scale of $`<10^5`$ yr; and (c) $`T_{\mathrm{gas}}>230`$ K, where gas-phase reactions drive the available atomic oxygen into water through the reactions O + H<sub>2</sub> → OH → H<sub>2</sub>O (Ceccarelli et al. 1996, Charnley 1997). Atomic oxygen is one of the main destroyers of radicals and carbon chains, so that its absence leads to enhanced abundances of species like HCN and HC<sub>3</sub>N in hot gas. Water is abundantly formed on the grains, but the fact that the H<sub>2</sub>O abundance in the hot gas is not as large as that of the ices suggests that H<sub>2</sub>O is broken down to O and OH after evaporation by reactions with H<sub>3</sub><sup>+</sup>. H<sub>2</sub>O can subsequently be reformed in warm gas at temperatures above ∼230 K, but Figure 7 indicates that not all available gas-phase oxygen is driven into H<sub>2</sub>O, as the models suggest. The low abundance of CO<sub>2</sub> in the warm gas is still a puzzle, since evaporation of abundant CO<sub>2</sub> ice is observed. The molecule must be broken down rapidly in the warm gas, with no reformation through the CO + OH → CO<sub>2</sub> reaction (see question by Minh). ### 4.6. Evolution? The objects studied by van der Tak et al. (2000a) are all in an early stage of evolution, when the young stars are still deeply embedded in their collapsing envelope. Nevertheless, even within this narrow evolutionary range, there is ample evidence for physical and chemical differentiation of the sources. This is clearly traced by the increase in the gas/solid ratios, the increase in abundances of several molecules, the decrease in the ice abundances, and the increase of the amount of crystalline ice with increasing temperature (Boogert et al. 2000a). The fact that the various indicators involve different characteristic temperatures ranging from $`<`$50 K (evaporation of apolar ices) to 1000 K ($`T_{\mathrm{ex}}`$ of gas-phase molecules) indicates that the heating is not a local effect, but that ‘global warming’ occurs throughout the envelope.
Moreover, it cannot be a geometrical line-of-sight effect in the mid-infrared data, since the far–infrared continuum (45/100 $`\mu `$m) and submillimeter line data (CH<sub>3</sub>OH) show the same trend. Shocks with different filling factors are excluded for the same reason. Can we relate this ‘global warming’ of the envelope to an evolutionary effect, or is it determined by other factors? The absence of a correlation of the above indicators with luminosity or mass of the source argues against them being the sole controlling factor. The only significant trend is found with the ratio of envelope mass over stellar mass. The physical interpretation of such a relation would be that with time, the envelope is dispersed by the star, resulting in a higher temperature throughout the envelope. ## 5. Outstanding questions and future directions The results discussed here suggest that the observed chemical abundance and temperature variations can indeed be used to trace the evolution of the sources, and that, as in the case of low- and intermediate-mass stars, the dispersion of the envelope plays a crucial role. The combination of infrared and submillimeter diagnostics is very important in the analysis. An important next step would be to use these diagnostics to probe a much wider range of evolutionary stages for high-mass stars, especially in the hot core and (ultra-)compact H II region stages, to develop a more complete scenario of high-mass star formation. The relation between the inner warm envelope and the ‘hot core’ is still uncertain: several objects have been observed which clearly have hot gas and evaporated ices (including CH<sub>3</sub>OH) in their inner regions, but which do not show the typical crowded ‘hot core’ submillimeter spectra. Are these objects just on their way to the ‘hot core’ chemical phase? Or is the ‘hot core’ a separate physical component, e.g., a dense shell at the edge of the expanding hyper-compact H II region due to the pressure from the ionized gas, which is still too small to be picked up by the single-dish beams? In either case, time or evolution plays a role and would constrain the ages of the infrared-bright sources to less than a few $`\times 10^4`$ yr since evaporation. Interferometer data provide evidence for the presence of a separate physical component in the inner 1000 AU, but lack the spatial resolution to distinguish a shell from any remnant disk, for example on kinematic grounds. An important difference between high- and low-mass objects may be the mechanism for the heating and dispersion of their envelopes. For low-mass YSOs, entrainment of material in outflows is the dominant process (Lada 1999). For intermediate-mass stars, outflows are important in the early phase, but ultraviolet radiation becomes dominant in the later stages (see § 3). The situation for high-mass stars is still unclear. The systematic increase in gas/solid ratios and gas-phase abundances point to global heating of the gas and dust, consistent with a radiative mechanism. However, a clear chemical signature of ultraviolet radiation on gas-phase species and ices in the embedded phase has not yet been identified, making it difficult to calibrate its effect. On the other hand, high-mass stars are known to have powerful outflows and winds, but a quantitative comparison between their effectiveness in heating an extended part of the envelope and removing material is still lacking. 
Geometrical effects are more important in less embedded systems, as is the case for low-mass stars, where the circumstellar disk may shield part of the envelope from heating (Boogert et al. 2000b).

To what extent does the chemical evolution picture also apply to low-mass stars? Many of the chemical processes and characteristics listed in Table 1 are also known to occur for low-mass YSOs, but several important diagnostic tools are still lacking. In particular, sensitive mid-infrared spectroscopy is urgently needed to trace the evolution of the ices for low-mass YSOs and to determine gas/solid ratios. Also, molecules as complex as CH<sub>3</sub>OCH<sub>3</sub> and C<sub>2</sub>H<sub>5</sub>CN have not yet been detected toward low-mass YSOs, although the limits are not very stringent (e.g., van Dishoeck et al. 1995). Evaporation of ices clearly occurs in low-mass environments, as evidenced by enhanced abundances of grain-surface molecules in shocks (e.g., Bachiller & Pérez-Gutiérrez 1997), but whether a similar ‘hot core’ chemistry ensues is not yet known. Differences in the H/H<sub>2</sub> ratio and temperature structure in the (pre-)collapse phase may affect the grain-surface chemistry and the ice composition, leading to different abundances of solid CH<sub>3</sub>OH, which is an essential ingredient for building complex molecules.

Future instrumentation with high spatial resolution ($`<1^{\prime \prime }`$) and high sensitivity will be essential to make progress in our understanding of the earliest phase of massive star formation, in particular the SMA and ALMA at submillimeter wavelengths, and SIRTF, SOFIA, FIRST and ultimately NGST at mid- and far-infrared wavelengths.

#### Acknowledgments.

The authors are grateful to G.A. Blake, A.C.A. Boogert, A.M.S. Boonman, P. Ehrenfreund, N.J. Evans, T. Giannini, F. Lahuis, L.G. Mundy, A. Nummelin, W.A. Schutte, A.G.G.M. Tielens, and M.E. van den Ancker for discussions, collaborations and figures. This work was supported by NWO grant 614.41.003.

## References

Adams, F.C., Lada, C.J., & Shu, F.H. 1987, ApJ, 312, 788
Bachiller, R. & Pérez-Gutiérrez, M. 1997, ApJ, 487, L93
Blake, G.A., Sutton, E.C., Masson, C.R., & Phillips, T.G. 1987, ApJ, 315, 621
Boogert, A.C.A., et al. 2000a, A&A, 353, 349
Boogert, A.C.A., et al. 2000b, A&A, submitted
Boogert, A.C.A., Helmich, F.P., van Dishoeck, E.F., Schutte, W.A., Tielens, A.G.G.M., & Whittet, D.C.B. 1998, A&A, 336, 352
Boonman, A.M.S., Wright, C.M., & van Dishoeck, E.F. 1999, in The Physics and Chemistry of the Interstellar Medium, ed. V. Ossenkopf et al. (Herdecke: GCA), p. 275
Boonman, A.M.S., et al. 2000a, in preparation
Boonman, A.M.S., et al. 2000b, in preparation
Carr, J., Evans, N.J., Lacy, J.H., & Zhou, S. 1995, ApJ, 450, 667
Caselli, P., Hasegawa, T.I., & Herbst, E. 1993, ApJ, 408, 548
Ceccarelli, C., Hollenbach, D.J., & Tielens, A.G.G.M. 1996, ApJ, 471, 400
Cernicharo, J., Lim, T., Cox, P., et al. 1997, A&A, 323, L25
Cesaroni, R., Felli, M., Testi, L., Walmsley, C., & Olmi, L. 1997, A&A, 325, 725
Cesaroni, R., Felli, M., Jenness, T., Neri, R., Olmi, L., Robberto, M., Testi, L., & Walmsley, C.M. 1999, A&A, 345, 949
Charnley, S.B. 1997, ApJ, 481, 396
Charnley, S.B. & Kaufman, M.J. 2000, ApJ, 529, L111
Charnley, S.B., Tielens, A.G.G.M., & Millar, T.J. 1992, ApJ, 399, L71
Charnley, S.B., Kress, M.E., Tielens, A.G.G.M., & Millar, T.J. 1995, ApJ, 448, 232
Churchwell, E.B. 1999, in The Origin of Stars and Planetary Systems, eds. C.J. Lada and N.D. Kylafis (Dordrecht: Kluwer), p. 515
Cummins, S.E., Linke, R.A., & Thaddeus, P. 1986, ApJS, 69, 819
Dartois, E., d’Hendecourt, L., Boulanger, F., Jourdain de Muizon, M., Breitfellner, M., Puget, J.-L., & Habing, H.J. 1998, A&A, 331, 651
Doty, S.D. & Neufeld, D.A. 1997, ApJ, 489, 122
Ehrenfreund, P., van Dishoeck, E.F., Burgdorf, M., Cami, J., van Hoof, P., Tielens, A.G.G.M., Schutte, W.A., & Thi, W.F. 1998, Ap&SS, 255, 83
Evans, N.J., Lacy, J.H., & Carr, J.S. 1991, ApJ, 383, 674
Fuente, A., Martín-Pintado, J., Bachiller, R., Neri, R., & Palla, F. 1998, A&A, 334, 253
Garay, G. & Lizano, S. 1999, PASP, 111, 1049
Geballe, T.R. & Oka, T. 1996, Nature, 384, 334
Giannini, T., et al. 2000, A&A, 346, 617
Hartquist, T.W., Caselli, P., Rawlings, J.M.C., Ruffle, D.P., & Williams, D.A. 1998, in The Molecular Astrophysics of Stars and Galaxies, eds. T.W. Hartquist and D.A. Williams (Oxford: OUP), p. 101
Harwit, M., Neufeld, D.A., Melnick, G.J., & Kaufman, M.J. 1998, ApJ, 497, L105
Helmich, F.P. & van Dishoeck, E.F. 1997, A&AS, 124, 205
Henning, Th., Burkert, A., Launhardt, R., Leinert, Ch., & Stecklum, B. 1998, A&A, 336, 565
Hogerheijde, M.R., van Dishoeck, E.F., Salverda, J.M., & Blake, G.A. 1999, ApJ, 513, 350
Ikeda, M. & Ohishi, M. 1999, in IAU Symposium 197 Abstract Book, Astrochemistry: from molecular clouds to planetary systems, p. 171
Irvine, W.M., Goldsmith, P.F., & Hjalmarson, Å. 1987, in Interstellar Processes, eds. D. Hollenbach and H.A. Thronson (Dordrecht: Kluwer), p. 561
Jansen, D.J., Spaans, M., Hogerheijde, M.R., & van Dishoeck, E.F. 1995, A&A, 303, 541
Jewell, P.R., Hollis, J.M., Lovas, F.J., & Snyder, L.E. 1989, ApJS, 70, 833
Johansson, L.E.B., Andersson, C., Elldér, J., et al. 1984, A&A, 130, 227
Kuan, Y.J. & Snyder, L.E. 1994, ApJS, 94, 651
Kulesa, C.A., Black, J.H., & Walker, C.K. 1999, BAAS, 194, 4709
Lacy, J.H., Evans, N.J., Achtermann, J.M., et al. 1989, ApJ, 342, L43
Lacy, J.H., Knacke, R., Geballe, T.R., & Tokunaga, A.T. 1994, ApJ, 428, L69
Lada, C.J. 1999, in The Origin of Stars and Planetary Systems, eds. C.J. Lada and N.D. Kylafis (Dordrecht: Kluwer), p. 143
Lahuis, F. & van Dishoeck, E.F. 2000, A&A, in press
Langer, W.D., van Dishoeck, E.F., Blake, G.A., et al. 2000, in Protostars & Planets IV, eds. V. Mannings, A.P. Boss and S.S. Russell (Tucson: Univ. Arizona), in press
Liu, S.-Y. & Snyder, L.E. 1999, ApJ, 523, 683
Lorenzetti, D., et al. 1999, A&A, 346, 604
McCall, B.J., Geballe, T.R., Hinkle, K.H., & Oka, T. 1999, ApJ, 522, 338
Millar, T.J. 1997, in Molecules in Astrophysics: Probes and Processes, IAU Symposium 178, ed. E.F. van Dishoeck (Dordrecht: Kluwer), p. 75
Mitchell, G.F., Maillard, J.-P., Allen, M., Beer, R., & Belcourt, K. 1990, ApJ, 363, 554
Motte, F., André, P., & Neri, R. 1998, A&A, 336, 150
Nummelin, A., Bergman, P., Hjalmarson, Å., Friberg, P., Irvine, W.M., Millar, T.J., Ohishi, M., & Saito, S. 1998, ApJS, 117, 427
Ohishi, M. 1997, in Molecules in Astrophysics: Probes and Processes, IAU Symposium 178, ed. E.F. van Dishoeck (Dordrecht: Kluwer), p. 61
Ohishi, M. & Kaifu, N. 1999, in IAU Symposium 197 Abstract Book, Astrochemistry: from molecular clouds to planetary systems, p. 143
Ossenkopf, V. & Henning, Th. 1994, A&A, 291, 943
Saraceno, P., Benedettini, M., Di Giorgio, A.M., et al. 1999, in Physics and Chemistry of the Interstellar Medium III, eds. V. Ossenkopf et al. (Berlin: Springer), p. 279
Schilke, P., Groesbeck, T.D., Blake, G.A., & Phillips, T.G. 1997, ApJS, 108, 301
Simon, R., Stutzki, J., Sternberg, A., & Winnewisser, G. 1997, A&A, 327, L9
Sutton, E.C., Jaminet, P.A., Danchi, W.C., & Blake, G.A. 1991, ApJS, 77, 255
Sutton, E.C., Peng, R., Danchi, W.C., et al. 1995, ApJS, 97, 455
Turner, B.E. 1991, ApJS, 76, 617
van den Ancker, M., et al. 2000a, A&A, submitted
van den Ancker, M., et al. 2000b, A&A, in press
van den Ancker, M., et al. 2000c, A&A, submitted
van der Tak, F.F.S., van Dishoeck, E.F., Evans, N.J., Bakker, E.J., & Blake, G.A. 1999, ApJ, 522, 991
van der Tak, F.F.S., van Dishoeck, E.F., Evans, N.J., & Blake, G.A. 2000a, ApJ, in press
van der Tak, F.F.S. & van Dishoeck, E.F. 2000b, A&A, to be submitted
van Dishoeck, E.F. 1998, Far. Disc., 109, 31
van Dishoeck, E.F. & Blake, G.A. 1998, ARAA, 36, 317
van Dishoeck, E.F., Blake, G.A., Jansen, D.J., & Groesbeck, T.D. 1995, ApJ, 447, 760
van Dishoeck, E.F. & Helmich, F.P. 1996, A&A, 315, L177
van Dishoeck, E.F., Helmich, F.P., de Graauw, Th., et al. 1996, A&A, 315, L349
van Dishoeck, E.F. & Hogerheijde, M.R. 1999, in The Origin of Stars and Planetary Systems, eds. C.J. Lada and N.D. Kylafis (Dordrecht: Kluwer), p. 97
van Dishoeck, E.F., et al. 1999, in The Universe as seen by ISO, eds. P. Cox and M.F. Kessler (Noordwijk: ESTEC), ESA SP-427, p. 437
Waters, L.B.F.M. & Waelkens, C. 1998, ARAA, 36, 233
Wright, C.M., van Dishoeck, E.F., Helmich, F.P., Lahuis, F., Boogert, A.C.A., & de Graauw, Th. 1997, in First ISO Workshop on Analytical Spectroscopy, ESA SP-419, p. 37
Wright, C.M., van Dishoeck, E.F., Black, J.H., Feuchtgruber, H., Cernicharo, J., González-Alfonso, E., & de Graauw, Th. 2000, A&A, in press

## Discussion

M. Ohishi: I agree with the point you mentioned, that the chemical differences among hot cores are due to a difference of evolutionary stage. Now we have several well-known hot cores such as Orion KL/S, W 3 IRS5/H<sub>2</sub>O/IRS4, SgrB2 N/M/NW, etc. Can you give us your personal view on the evolutionary differences of these sources?

E.F. van Dishoeck: Van der Tak et al. (2000a) argue that the infrared-bright objects such as W 3 IRS5 represent an earlier evolutionary phase than the hot cores, on the basis of an anti-correlation with the radio continuum. The physical picture is that the ionizing UV radiation and stellar winds push the hottest dust in the inner regions further out, decreasing the temperature of the dust and thus the near-infrared continuum. At the same time, the size of the region which can be ionized is increased. The ‘erosion’ of the envelopes thus occurs from the inside out. For other sources, infrared diagnostics are lacking, so that the situation is less clear. It would be great if chemistry could help to tie down the time scales of the various phases.

W. Irvine: How do you interpret the behavior of the PAH features as a function of evolutionary stage in the sources that you discussed?

E.F. van Dishoeck: The absence of PAH features in the early embedded stage can be due either to a lack of ultraviolet radiation to excite the features or to an absence of the carriers. Manske & Henning (1999, A&A, 349, 907) have argued for the case of Herbig Ae/Be stars that there should be sufficient radiation to excite PAHs in the envelope/disk system, so that the lack is likely due to the absence of the PAHs themselves. Perhaps the PAHs have accreted into the icy mantles at the high densities in the inner envelope and do not evaporate and/or are chemically transformed into other more refractory species on grains.
Alternatively, the region producing ultraviolet radiation (the H II region) may be very small in these massive objects, and the photons may not reach the PAH-rich material or may have a very small beam filling factor. Once the envelope breaks up and ultraviolet radiation can escape to the less dense outer envelope, the PAH features from those regions will appear in spectra taken with large beams.

T. Geballe: You said that there is little evidence of the effect of ultraviolet radiation on solid-state chemistry. Isn’t the 4.6 $`\mu `$m XCN feature a good example of that influence?

E.F. van Dishoeck: The ‘XCN’ feature is indeed the best candidate for tracing the ultraviolet processing of ices. If ascribed to OCN<sup>-</sup>, it likely involves HNCO as a precursor. In the laboratory, HNCO is produced by photochemical reactions of CO and NH<sub>3</sub>, but in the interstellar medium grain-surface chemistry is an alternative possibility which does not necessarily involve ultraviolet radiation (see Ehrenfreund & Schutte, this volume).

Y.C. Minh: Do you have an explanation of the low and constant abundances of CO<sub>2</sub> in the gas phase?

E.F. van Dishoeck: Charnley & Kaufman (2000, ApJ, 529, L111) argue that the evaporated CO<sub>2</sub> is destroyed by reactions with atomic hydrogen at high temperatures in shocks. This is an interesting suggestion, but it needs to be tested against other species such as H<sub>2</sub>O and H<sub>2</sub>S, which can be destroyed by reactions with atomic hydrogen as well. Also, the amount of material in the envelope that can be affected by shocks is not clear.
# The NASA Astrophysics Data System: Overview

## 1 Introduction

The NASA Astrophysics Data System Abstract Service (hereafter ADS, except in section 2) is now a central facility of bibliographic research in astronomy. In a typical month (March 1999) it is used by more than 20,000 individuals, who make $`\sim `$580,000 queries, retrieve $`\sim `$10,000,000 bibliographic entries, read $`\sim `$400,000 abstracts and $`\sim `$110,000 articles, consisting of $`\sim `$1,100,000 pages.

The ADS is a key element in the emerging digital information resource for astronomy, which has been dubbed Urania (Boyce (1996)). The ADS is tightly interconnected with the major journals of astronomy and the major data centers. The present paper serves as an introduction to the system: a description of its history, current status, use, capabilities, and goals. Detailed descriptions of the ADS system are in the companion papers: the design and use of the search engine is in Eichhorn et al. (1999), hereafter SEARCH; the architecture, indexing system, and mirror maintenance is in Accomazzi et al. (1999), hereafter ARCHITECTURE; and the methods we use to maintain and update the database, and to maintain communication with our collaborating data centers and journals (primarily via bibcodes, Schmitz et al. (1995)), are in Grant et al. (1999), hereafter DATA.

In section 2 we discuss the history of the ADS, paying particular attention to the persons and events which were most important to its development. Section 3 briefly discusses the current status of the system, the data it contains, and the hardware, software, and organizational methods we use to maintain and distribute these data. Urania, and especially the ADS role in it, is discussed in section 4. The current capabilities and use of the system are shown in section 5, with section 5.1 showing example queries, and section 5.2 showing how ADS use has changed over time. In section 6 we show how current use varies as a function of the age of an article and the journal it was published in: in 6.1 we develop a multi-component model which accurately describes the whole pattern of article use as a function of age; in 6.2 we compare the similarities and differences of readership information with citation histories; in 6.3 we examine several aspects of the readership pattern for the major journals. Finally, in 7, we estimate the impact of the ADS on astronomy.

## 2 Historical Introduction

The ADS Abstract Service had its beginnings at the conference Astronomy from Large Databases, held in Garching in 1987. There Adorf & Busch (1988) discussed the desirability of building a natural language interface to a set of astronomical abstracts (Astronomy and Astrophysics Abstracts (A&AA) was the model) using software from Information Access Systems, Inc. (IAS; E. Busch was the president of IAS). Watson (1988) discussed the existing abstract services. At this meeting G. Shaw (who was representing IAS) saw the paper by Kurtz (1988), and noticed that the vector space classification methods developed by M.J. Kurtz for the numerical classification of stellar spectra were very similar to those developed by P.G. Ossorio (1966) for the classification (and thus natural language indexing) of text. Ossorio’s methods were the basis of the proposal by Adorf & Busch (1988); Ossorio was the founder of IAS. Shaw suggested Ossorio and Kurtz meet. Also at this conference Squibb & Cheung (1988) presented the NASA plan for an astrophysics data system, and Shaw met G. Squibb.
This meeting of Kurtz and Ossorio took place in January 1988, in Boulder, CO. By the end of the meeting it was clear that the technical difficulties involved in creating an abstract service with a natural language index could be overcome, if the data could become available. A preliminary mathematical solution to the problem was developed, under the assumption that A&AA would be the source of the abstracts. This technique was later called the “statistical Factor Space,” factor analysis being one of the tools used to create the vector space.

Over the next year NASA moved to implement the Squibb & Cheung (1988) plan for the establishment of a network-based, distributed system for access and management of NASA astrophysics data holdings, the Astrophysics Data System. Shaw and Ossorio founded a new company, Ellery Systems, Inc., which obtained the systems integration contract for the ADS. During this time Shaw, Ossorio, Kurtz, and S.S. Murray all spoke often about the abstract service as an integral part of the emerging ADS system, and the abstract service, and Factor Space, became nearly synonymous with the ADS project. No actual work was done to implement the abstract service during this time; Ossorio and Kurtz worked on applying their vector space classification techniques to galaxy morphologies (Ossorio & Kurtz (1989), Kurtz, Mussio & Ossorio (1990)), while Murray used the original, non-statistical, Factor Space methods of Ossorio (1966) to build a small ($`\sim `$40 documents) natural language indexing system for demonstration purposes.

During the next three years the ADS was built (Good (1992)), but without a literature retrieval service, which was listed as a future development. No NASA funds were devoted to the abstract service during this time. Independently Kurtz and Watson set out to obtain the data necessary to build a prototype system; keyword data were received from the IAU (International Astronomical Union) Thesaurus project (Shobbrook & Shobbrook (1992), Shobbrook & Shobbrook (1993)), and from the NASA Scientific and Technical Information (STI) branch (Pinelli (1990)). A breakthrough occurred in mid-1990 when the Astronomische Rechen-Institut graciously provided Watson with magnetic tape copies of the two 1989 volumes of Astronomy and Astrophysics Abstracts. By the end of 1990 Kurtz (1991, 1992) had built a prototype abstract retrieval system, based on the statistical Factor Space.

In April 1991 F. Giovane and C. Pilachowski organized a meeting near Washington, D.C. on “On-Line Literature in Astronomy.” At this meeting Boyce (1991) discussed the desire of the American Astronomical Society (AAS) to publish on-line journals; Kurtz (1991) discussed the prototype system, and pointed out the types of queries which would be made possible if a natural language abstract system were combined with the Strasbourg Data Center’s (CDS) SIMBAD (Egret & Wenger (1988)) database and with the Institute for Scientific Information’s Science Citation Index (Garfield (1979)); and van Steenberg (1991) discussed the desire of the National Space Science Data Center (NSSDC) to create a database of scanned bitmaps of journal articles. Also at this meeting were representatives of NASA’s STI branch, who indicated that they would be willing to provide the abstracts from the STI (often called NASA RECON) abstracts database (Wente (1990)). Near the end of the meeting Murray (1991) outlined the possibilities inherent in the previous talks.
He described a networked data system where a natural language query system for the STI abstracts would work jointly with the CDS/SIMBAD object name index to point astronomers to relevant abstracts, article bitmaps, and electronic journal articles. Save that the World Wide Web (Berners-Lee (1994)) has taken the place of the proprietary network software created for the ADS project by Ellery Systems Inc., and that the ADS has taken over responsibility for the bitmaps from the NSSDC, the current system is essentially identical to the one predicted by Murray (1991).

Following the meeting the NSSDC group (Warnock et al. (1993)) organized the STELAR project, which held a series of meetings where many of the issues involved in electronic journals were discussed, and a consensus was reached on allowed uses of copyrighted journal article bitmaps.

In the spring of 1992 Murray took over the direct management of the ADS project; G. Eichhorn was hired as project manager. The decision was made to proceed forthwith with the development of an abstract service based on the STI abstracts. Because the STI abstract system is structured differently than the A&AA system, the statistical Factor Space was abandoned in favor of a more traditional entropy matching technique (Salton and McGill (1983); see SEARCH). The new system was working with a static database by fall, and was shown at the Astronomical Data Analysis Software and Systems II meeting in Boston (Kurtz et al. (1993)). The production system was released in February 1993, as part of the package of ADS services, still part of the proprietary ADS network system. Abstract Service use quickly became more than half of all ADS use.

By summer 1993 a connection had been made between the ADS and SIMBAD, permitting users to combine natural language subject matter queries with astronomical object name queries (Grant, Kurtz & Eichhorn (1994)). This connection was enabled by the use of the bibcode (see DATA). We believe this is the first time an internet connection was made to permit the routine, simultaneous, real-time interrogation of transatlantically separated scientific databases.

By early 1994 the World Wide Web (Berners-Lee (1994)) had matured to where it was possible to make the ADS Abstract Service available via a web forms interface; this was released in February. Within five weeks of the initial WWW release, use of the Abstract Service quadrupled (from 400 to 1600 users per month).

By the end of 1994 the ADS project had again been restructured, leaving primarily the WWW-based Abstract Service as its principal service. Also the STELAR project at NSSDC ended, and the ADS took over responsibility for creating the database of bitmaps. The first full article bitmaps, which were of Astrophysical Journal Letters articles, were put on-line in December 1994 (Eichhorn et al. (1994)). By the summer of 1995 the bitmaps were current and complete going back ten years. At that time the Electronic ApJ Letters (Boyce (1995)) went on-line. From the start the ADS indexed the EApJL, and pointed to the electronic version. Also from the beginning the reference section of the EApJL pointed (via WWW hyperlinks) to the ADS abstracts for articles referenced in the articles; again this was enabled by the use of the bibcode. Also during this time the NASA STI branch became unable to provide abstracts of the journal articles in astronomy.
In order to continue the abstract service, cooperative arrangements were made with nearly every astronomical research journal, as well as a number of other sources of bibliographic information. DATA describes these arrangements in detail. The next year (1996) saw nearly every astronomy journal which had not already joined into collaboration with ADS join. Also in 1996 the American Astronomical Society purchased the right to use a subset of the Science Citation Index, and gave these data to ADS (Kurtz et al. (1996)).

## 3 The Current System

Currently the ADS system consists of four semi-autonomous (to the user) abstract services covering Astronomy, Instrumentation, Physics, and Astronomy Preprints. Combined, there are nearly 1.5 million abstracts and bibliographic references in the system. The Astronomy Service is by far the most advanced, and accounts for $`\sim `$85% of all ADS use; it ought to be noted, however, that the Instrumentation Service contains more abstracts than Astronomy, and a subset of that service is used by the Society of Photo-Optical Instrumentation Engineers as the basis of the official SPIE publications web site. All of what follows will refer only to the Astronomy service.

### 3.1 Data

Here is a brief overview of the data in the ADS system; a complete description is in DATA.

#### 3.1.1 Abstracts

The ADS began with the abstracts from the NASA STI database; in printed form these abstracts were the union of the International Aerospace Abstracts and the NASA Scientific and Technical Abstracts and Reports (NASA STAR). While the STI branch has had to substantially cut back on their abstracting of the journal literature, we still get abstracts of NASA reports and other materials from them. We now receive basic bibliographic information (title, author, page number) from essentially every journal of astronomy. Most also send us abstracts; some cannot send abstracts, but allow us to scan their journals, and we build abstracts through optical character recognition. Finally we receive some abstracts from the editors of conference proceedings, and from individual authors. There are $`\sim `$500,000 different astronomy articles indexed in the ADS; the database is nearly complete for the major journal articles beginning in 1975.

#### 3.1.2 Bitmaps

The ADS has obtained permission to scan, and make freely available on-line, page images of the back issues of nearly all of the major journals of astronomy. In most cases the bitmaps of current articles are put on-line after a waiting period, to protect the financial integrity of the journal. DATA describes the current status of these efforts. We plan to provide for each collaborating journal, in perpetuity, a database of page images (bitmaps) from volume 1 page 1 to the first issue which the journal considers to be fully on-line as published. This will provide (along with the indexing and the more recent archives held by the journals) a complete electronic digital library of the major literature in astronomy. On a longer term we plan to scan old observatory reports, and defunct journals, to finally have a full historical collection on-line. This work is beginning with a collaboration with the Harvard Preservation Project (Eichhorn et al. (1997); Corbin & Coletti (1995)).

#### 3.1.3 Links

ADS responds to a query with a list of references and a set of hyperlinks showing what data are available for each reference. There are $`\sim `$1.73 million hyperlinks in the ADS, of which $`\sim `$31% are to sources external to the ADS project.
The largest numbers of external links are to SIMBAD, NED, and the electronic journals. A rapidly growing number, although still small in comparison to the others, are to data tables created by the journals and maintained by the CDS and the ADC at Goddard. SEARCH describes the system of hyperlinks in detail.

#### 3.1.4 Citations and References

The use of citation histories is a well known and effective tool for academic research (Garfield (1979)); their inclusion in the ADS has been planned since the conception of the service. In 1996 the AAS purchased a subset of the Science Citation Index from the Institute for Scientific Information, to be used in the ADS; this was updated in 1998. This subset only contains references which were already in the ADS; thus it is seriously incomplete in referring to articles in the non-astronomical literature. This citation information currently spans January 1982 through September 1998.

The electronic journals all have machine-readable, web-accessible reference pages. The ADS points to these with a hyperlink where possible. Several publishers allow us to use these to maintain citation histories; we do this using our reference resolver software (see ARCHITECTURE). The same software is also used by some publishers to check the validity of their references, pre-publication. Additionally we use optical character recognition to create reference and citation lists for the historical literature, after it is scanned (Demleitner et al. (1999)).

#### 3.1.5 Collaboration with CDS/SIMBAD

The Strasbourg Data Center (CDS) has long maintained several of the most important data services for astronomy (e.g. Jung (1971); Jung, Bischoff, & Ochsenbein (1973); Genova et al. (1998)); access to parts of the CDS data via ADS is a key feature of the ADS. ADS users are able to make joint queries of the ADS bibliographic database and the CDS/SIMBAD bibliographic database. When SIMBAD contains information on an object which is referred to in a paper whose reference is returned by ADS, then ADS also returns a pointer to the SIMBAD data. When a paper has a data table which is kept on-line at the CDS, the ADS returns a pointer to it. The CDS-ADS collaboration is at the heart of Urania (section 4). More recently ADS has entered into a collaboration with the NASA/IPAC Extragalactic Database (NED; Helou & Madore (1988), Madore et al. (1992)) which is similar to the SIMBAD portion of the CDS-ADS collaboration.

### 3.2 Search Engine

The basic design assumption behind the search engine, and other user interfaces, is that the user is an expert astronomer. This differs from the majority of information retrieval systems, which assume that the user is a librarian. The default behavior of the system is to return more relevant information, rather than just the most relevant information, assuming that the user can easily separate the wheat from the chaff. In the language of information retrieval this is favoring recall over precision. SEARCH describes the user interface in detail.

### 3.3 Hardware and Software Architecture

The goals of our hardware and software systems are speed of information delivery to the user, and ease of maintainability for the staff. We thus pre-compute many things during our long indexing process for later use by the search engine; we have highly optimized all code which is run by user processes; and we have developed a worldwide network of mirror sites to speed up internet access. ARCHITECTURE describes these systems.
### 3.4 Data Ingest

The basic rule for which books and periodicals the ADS covers is: if it is in the Center for Astrophysics library, it should be in the ADS. We are still some way from fully realizing this goal. We have recently adopted a second rule for inclusion: if it is referenced by an article in a major scholarly journal of astronomy, it should be in the ADS. DATA describes the ADS coverage and ingest procedures.

## 4 Urania

The idea that the internet could be used to link sources of astronomical information into a unified environment is more than a decade old; it was fully expressed in the planning for the old ADS (Squibb & Cheung (1988)) and ESIS (Adorf et al. (1988)) projects. These early attempts were highly data-oriented; their initial goals were the interoperability of different distributed data archives, primarily of space mission data. Astronomical data are highly heterogeneous and complex; essentially every instrument has its quirks, and these must be known and dealt with to reduce and analyze the data. This quirky nature of our data essentially prevented the establishment of standardized tools for data access across data archives.

The new, hyperlink-connected network data system for astronomy is based on the highest level of data abstraction, object names and bibliographic articles, rather than the lowest, the actual observed data in archives. This change in the level of abstraction has permitted the creation of a system of extraordinary power. This new system, still unique amongst the sciences, has been dubbed Urania (Boyce (1996)), for the muse of astronomy.

Conceptually the core of Urania is a distributed cross-indexed list which maintains a concordance of data available at different sites. The ADS maintains a list of sites which provide data organized on an article basis for every bibliographic entry in the ADS database. The CDS maintains a list of articles and positions on the sky for every object in the SIMBAD database. The CDS also provides a name-to-object resolver. The possibility for synergy in combining these two data systems is obvious; they have functioned jointly since 1993.

Surrounding this core, and tightly integrated with it, are many of the most important data resources in astronomy, including the ADS Abstract Service, SIMBAD, the fully electronic journals (currently ApJL, ApJ, ApJS, A&A, A&AS, AJ, PASP, MNRAS, New Astronomy, Nature, and Science), NED, CDS-Vizier, Goddard-ADC, and the ADS Article Service. All these groups actively exchange information with the Urania core; they point their users to it via hyperlinks, and they are pointed to by it. The astronomy journals which are not yet fully electronic, in that they do not support hyperlinked access to the Urania core, also interact with the system. Typically they provide access to page images of the journal, either through PDF files, or bitmaps from the ADS Article Service, or both. Bibliographic information is routinely supplied to the ADS, and the SIMBAD librarians routinely include the articles (along with those of the electronic journals) in the SIMBAD object-article concordance.
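To make the structure of this concordance concrete, the following minimal sketch shows the kind of cross-index the Urania core maintains, with the bibcode as the shared key between the two sides; the bibcodes, object names, and link types used here are invented placeholders, not actual ADS or SIMBAD records.

```python
from collections import defaultdict

# ADS side: for each article (keyed by a 19-character bibcode), the data
# holdings known for it. These entries are hypothetical examples.
ads_links = {
    "1999ApJ...500..100X": ["abstract", "scanned_article", "citations"],
    "1999A&A...300..200Y": ["abstract", "electronic_journal", "data_table"],
}

# CDS/SIMBAD side: for each object name, the articles that discuss it.
object_articles = defaultdict(list)
object_articles["M87"] = ["1999ApJ...500..100X", "1999A&A...300..200Y"]

def holdings_for_object(name):
    """Joint query: the SIMBAD-style object-article concordance on one side,
    the ADS-style article-data concordance on the other, joined on bibcode."""
    return {bib: ads_links.get(bib, []) for bib in object_articles[name]}

print(holdings_for_object("M87"))
```

Each side of the concordance is maintained independently (the ADS its article-based holdings, the CDS its object-article lists); the shared bibliographic key is what lets a single query cross between the two databases.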
While most data archives are not closely connected to the Urania system, there are some exceptions. For example, the National Center for Supercomputing Applications’ Astronomy Digital Image Library (Plante, Crutcher, & Sharpe (1996)) connects with the ADS bibliographic data via links which are papers written about the data in the archive. SIMBAD connects with the High Energy Astrophysics Science Archive Research Center (HEASARC; White (1992)) archive using the position of an object as a search key; HEASARC has an interface which permits several archives to be simultaneously queried (McGlynn & White (1998)); and a new data mining initiative between CDS and the European Southern Observatory (ESO) (Ortiz et al. (1999)) will connect the Vizier tables with the ESO archives. Several archives use the SIMBAD (and in some cases NED) name resolver to permit users to use object name as a proxy for position on the sky; the Space Telescope Science Institute (STScI) Digital Sky Survey (Postman (1996)) is an example. The Space Telescope-European Coordinating Facility archive (Murtagh (1995)) allows ADS queries using the observing proposals as natural language queries, and the Principal Investigator names as authors.

The establishment and maintenance of the Urania core represents a substantial fraction of the ADS service. SEARCH discusses the user interface to the set of hyperlinks, ARCHITECTURE discusses the methods and procedures we use to implement and maintain the links, and DATA discusses the data sharing arrangements we have with other groups, and presents a complete listing of all our data sources.

## 5 Capabilities, Usage Patterns, and Statistics

### 5.1 Examples

The ADS answers about 5,000,000 queries per year, covering a wide range of possible query types, from the simplest (and most popular), “give me all the papers written by (some author),” to complex combinations of natural language described subject matter and bibliometric information. Each query is essentially the sum of simultaneous queries (e.g., an author query and a title query), where the evidence is combined to give a final relevance ranking (e.g., Belkin et al. (1995)). The ADS once supported index term (keyword) queries, but does not currently. This is due to the incompatibility of the old STI (NASA-STI (1988)) keyword system with the keywords assigned by the journals (Abt (1990); A&A (1992); MNRAS (1992)). Work is underway to build a transformation between the two systems (Lee, Dubin & Kurtz (1999); Lee & Dubin (1999)).

Here we show four examples of simple, but sophisticated, queries, to give an indication of what is possible using the system. A detailed description of available query options is in SEARCH. We encourage the reader to perform these queries now, to see how the passage of time has changed the results.

Figure 1 shows how to make the query “what papers are about the metallicity of M87 globular clusters?” This was the first joint query made after the SIMBAD-ADS connection was completed in 1993. There are 1,765 papers on M87 in SIMBAD, NED, or both; there are 6,425 papers which contain the phrase “globular cluster” in ADS; and there are 25,599 papers in ADS containing “metallicity” or a synonym (abundance is an example of a synonym for metallicity). The result, which comes in a couple of seconds, is a list of just those 58 papers desired.

Five different indices are mixed in this query: the SIMBAD object-bibcode index query on M87 is logically OR’d with the NED object-refcode index query for M87, and the ADS phrase index query for “globular cluster” is (following the user’s request) logically AND’d with the ADS word index query on metallicity, where metallicity is replaced by its group of synonyms from the ADS astronomy synonym list (this replacement is under user control).
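In set terms, the combination logic just described amounts to a union over the object indices followed by an intersection with the phrase index and the synonym-expanded word index. A minimal sketch, using invented paper identifiers that do not stand for real ADS records:

```python
# Each index maps a query term to the set of matching paper identifiers.
simbad_m87 = {"paper_a", "paper_b", "paper_c"}   # SIMBAD object-bibcode index
ned_m87 = {"paper_b", "paper_d"}                 # NED object-refcode index
phrase_hits = {"paper_b", "paper_c", "paper_e"}  # ADS phrase index: "globular cluster"

# Synonym expansion of 'metallicity' (under user control in the real system).
word_index = {
    "metallicity": {"paper_b", "paper_e"},
    "abundance": {"paper_b", "paper_c"},
}
word_hits = set().union(*word_index.values())    # OR over the synonym group

# Object lists are OR'd; the phrase and word queries are AND'd with them.
perfect_matches = (simbad_m87 | ned_m87) & phrase_hits & word_hits
print(sorted(perfect_matches))                   # ['paper_b', 'paper_c']
```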
If the user requires a perfect match, then the combination of these simultaneous queries yields the list of 58 papers shown in figure 2. Before the establishment of the Urania core, queries like this were nearly impossible.

Another simple, but very powerful, method for making ADS queries is to use the “Find Similar Abstracts” feature. Essentially this is an extension of the ability to make natural language queries, whereby the user can choose one or more abstracts to become the natural language query. This can be especially useful when one wants to read in depth on a subject, but only knows one or two authors or papers in the field. This is a typical situation for many researchers, but especially for students.

As an example, suppose one is interested in Ben Bromley’s (1994) PhD thesis work. Making an author query on “Bromley” gets a list of his papers, including his thesis. Next one calls up the abstract of the thesis, goes to the bottom of the page, where the “Find Similar Abstracts” feature is found, and clicks the “Send” button. Figure 3 shows the top of the list returned as a result. These are papers listed in order of similarity to Bromley’s (1994) thesis; note that the thesis itself is on top, as it matches itself perfectly. This list is a detailed custom bibliography, selected by subject matter.

As a third example of ADS use, figure 4 shows an intermediate step from the previous example (obtained by clicking on the “Return Query Form” button, replacing the default “Return Query Results,” in the “Find Similar Abstracts” query). Here we make one change from the default setting: we change “Items returned” from the default “Abstracts” to “References.” The result, shown in figure 5, lists all the papers which are referenced in the 50 papers most like Bromley (1994), sorted by the number of times they appear in the 50 reference lists. Thus the paper by Bardeen et al. (1986) appears in 21 reference lists out of 50, the paper by Davis & Peebles (1983) appears in 11 lists out of 50, etc. By this means one has a list of the most cited papers within a very narrowly defined subfield specific to one’s personal interest. We are not aware of any other system which currently allows this capability.

Finally we show a somewhat more complex query in figure 6. Here we modify the basic query (Bromley’s (1994) thesis) by requiring that the papers contain the word “void.” We do this by changing the logic on the text query to “simple logic” and adding “+void” to the query. The papers returned by this query would be very similar to those shown in figure 3, but with all papers which do not contain the word “void” removed. In addition we change “Items returned” to be “Citations,” and increase the number of papers to get the citations for to the top 150 closest matches to the query. The result, shown in figure 7, is those papers which most cite the 150 papers most like Bromley’s (1994) thesis, modified by the requirement that they contain the word “void.” Thus the paper by El-Ad & Piran (1997) cited 26 papers out of the 150, the paper by Rood (1988) cited 19, etc. These are the papers with the most extensive discussions of a user-defined, very narrow subfield. This feature also is unique to the ADS.
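Both of these second-order queries reduce to the same simple aggregation: take the top-N papers matching a query, then count, for each paper in the database, how many of those N reference lists it appears in (the figure 5 ranking), or how many of the N papers its own reference list contains (the figure 7 ranking). A minimal sketch, with invented paper identifiers:

```python
from collections import Counter

def rank_by_occurrence(top_papers, reference_lists):
    """Count, for each cited paper, how many of the top papers' reference
    lists contain it; return the counts in descending order."""
    counts = Counter()
    for paper in top_papers:
        counts.update(set(reference_lists.get(paper, [])))  # one vote per list
    return counts.most_common()

# Hypothetical reference lists for the three best matches to some query.
reference_lists = {
    "match_1": ["bardeen_1986", "davis_1983"],
    "match_2": ["bardeen_1986"],
    "match_3": ["bardeen_1986", "davis_1983", "press_1974"],
}
print(rank_by_occurrence(reference_lists, reference_lists))
# [('bardeen_1986', 3), ('davis_1983', 2), ('press_1974', 1)]
```

Running the same counter over a citation index instead of reference lists (i.e., over who cites each of the top-N papers) gives the figure 7 style ranking.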
### 5.2 Use of the System

In September 1998 ADS users made 440,000 queries, and received 8,000,000 bibliographic references, 75,000 full-text articles, and 275,000 abstracts (130,000 were individually selected; the rest were obtained through a bulk retrieval process, which typically retrieves between one and fifty), as well as citation histories, links to data, and links to other data centers. Of the 75,000 full-text articles accessed through the ADS in September 1998, already 33% were via pointers to the electronic journals. This number increased to 52% in March 1999.

ADS users access and print (either to the screen, or to paper) more actual pages than are printed in the press runs of all but the very largest journals of astronomy. In September 1998, 472,621 page images were downloaded from the ADS archive of scanned bitmaps. About 75% of these were sent directly to a printer, 22% were viewed on the computer screen, and 2% were downloaded into files; FAXing and viewing thumbnail images make up the rest. If the electronic journals provide “pages” of information at the same rate as the ADS archive, per article accessed (slightly more than 10 pages/article accessed), then more than 750,000 “pages” were “printed,” on demand, in September 1998 by ADS users. This is about three times the number of physical pages published in September 1998 by the PASP.

Viewed as an electronic library, the ADS, five years after its inception, provides bibliographic information and services similar to those provided by all the astronomy libraries in the world combined. The Center for Astrophysics Library, an amalgamation of the libraries of the Harvard College Observatory and the Smithsonian Astrophysical Observatory, is one of the largest, most complete, and best managed astronomy libraries in the world. For several years the CfA Library has been keeping records of the number of volumes reshelved, as a proxy for the number of papers read (library users are requested not to reshelve anything themselves). This number has remained steady in recent years, and was 1117 in September 1998 (D.J. Coletti & E.M. Bashinka 1998, personal communication). If the CfA represents 2–3% of the use of astronomy libraries worldwide (the CfA has slightly more than 350 PhDs, the AAS has about 6800 members, the IAU about 8500; CfA users made 2.4% of ADS queries in September 1998, and 5.7% of articles in the ADS Astronomy database with 1998 publication dates had at least one CfA author), and if other astronomers use their libraries at the same rate as astronomers at the CfA, then worldwide there would have been 37,000–56,000 reshelves in September 1998. In September 1998 ADS provided access to 75,000 full-text articles and 130,000 individually selected abstracts, as well as substantial other information; current use of ADS is clearly similar to the sum of all current traditional astronomy library use.

ADS use continues to increase. Figure 8 shows the number of queries made each month to the ADS Abstract Service from April 1993 to September 1998; the dotted straight line represents a yearly doubling, which represents the five-year history reasonably well. Since 1996 use has been increasing at a 17-month doubling rate, shown by the dashed line in the figure.

It is difficult to determine the exact number of ADS users. We track usage by the number of unique “cookies” (a cookie is a unique identifier which WWW providers, in this case the ADS, assign to each user and store on the user’s computer using the browser) which access ADS,
and by the number of unique IP (Internet Protocol) addresses; each machine on the internet has a unique IP address. There are difficulties with each technique. In addition many non-astronomers find ADS through portal sites like Yahoo, which skews the statistics. In September 1998, 10,000 unique cookies accessed the full-text articles, 17,000 made queries, and 30,000 visited the site. 91% of full-text users had cookies, but only 65% of site visitors.

Figure 9 shows the number of unique users who made a query using the ADS each month from April 1993 to September 1998. Before early 1994 users had user names and passwords in the old, proprietary system, and could be counted exactly; after the ADS became available on the WWW, users were defined as unique IP addresses. Note the enormous effect the WWW had on ADS use: a factor of four in the first five weeks. The straight dashed line represents the 17-month doubling period seen recently in the number of queries; the dotted line, which better represents the recent growth, is for a 22-month doubling period. The difference between the two is due to a one-third increase in the mean number of queries per month per user (from 19 to 25) since 1996.

From another perspective, the number of unique IP addresses from a single typical research site (STScI) which access the full-text data in a typical month (September 1998) is 107, the number of unique cookies associated with stsci.edu which access the full-text data is 104, the number of unique IP addresses from STScI which make a query to ADS is 148, and the number of cookies is 140. The number of AAS members listing an STScI address is 145 (J. Johnson, personal communication), and the number of different people listing an STScI address in the Astropersons e-mail compilation (Benn & Martin (1995)) is 195. Those who access the full-text average one article per day; those who make queries average two per day.

We believe nearly all active astronomy researchers, as well as students and affiliated professionals, use the ADS on a regular basis. Most of the recent exponential growth of use of the ADS is due to an increased number of users; this growth cannot last much longer: the 17,000 who made queries in September 1998 are probably the majority of all those who could conceivably want to make a query of the technical astronomy literature.

## 6 How the Astronomical Literature is used

Electronic libraries, because they provide access to the literature on an article basis, can provide direct measures of the use of individual articles. Direct bibliometric studies of article use are rare, and tend to be based on small samples (e.g., Tsay (1998)); most bibliometric studies use indirect measures, particularly citation histories (e.g., Garfield (1979); White & McCain (1989); Line (1993)), as proxies for use.

Astronomy is perhaps unique, in that it already has an integrated electronic information resource (ADS/Urania) which includes electronic access to nearly all the modern journal literature, and which is used by a large fraction of practitioners in the field, worldwide. The combined Urania logs, including the electronic journals and the ADS, probably represent a fair sample of total readership in the field, perhaps even a majority of the readership as well.
In this section we will investigate the use of the astronomy literature as shown by the ADS logs; for articles more than a few months past the publication date they probably represent accurately the use of the astronomy literature. For articles immediately after publication the logs of the electronic journals are the definitive source; this usage pattern is substantially different from the pattern shown in the ADS logs: for example, the half-life for article reads for the electronic Astrophysical Journal is measured in days (E. Owens, 1997, personal communication).

### 6.1 Readership as a Function of Age

The ADS logs provide a direct measure of the readership of individual articles. There are several different ADS logs; here we will use the “data” log. Entries in the data log correspond to individual data items selected from a list which is returned following a query, such as shown in figure 2. Each entry is the result of a user, who can see the authors and title of a paper, choosing to get more information. 61% of these requests are for the abstract, and 34% are for the whole text; 2% are for the citation histories, with several other options making up the rest; SEARCH lists all the options and their use. In what follows we will refer to any request for data as a “read.” By “age” we refer to the time since publication of an article, NOT the time since birth of the astronomer reading the article!

In this subsection we restrict the study to the January 1999 log, and to requests for information about articles published in the largest (in terms of ADS use) eight journals (ApJ, ApJL, ApJS, A&A, A&AS, MNRAS, AJ, PASP; hereafter the Big8). The Big8 represent 62% of the 270,000 entries in the January data log.

Figure 10 shows the number of ADS reads (solid line, left-hand scale) during January 1999 for articles published in the Big8 from 1976 to 1998, and the number of Big8 articles for which at least one data item was requested (dotted line, right-hand scale), on a log-linear plot, binned yearly. The ADS database is 100% complete in titles, and in links to the full text of articles (either to the ADS scans, or directly to the electronic journals), and is 99% complete in article abstracts for the Big8 journal articles published during this 22-year period.

The number of papers published in the Big8 has been increasing at about 4% per year during this 22-year period (Schulman et al. (1997); Abt (1998); figure 11); figure 12 shows the information in figure 10 divided by the number of papers published. The top line shows the mean number of reads per paper, and the bottom line shows the fraction (maximum 1) of papers published for which information was requested.

From 1976 to about 1994 the two lines are nearly parallel; this demonstrates that the change in readership with age is caused mainly by a change in the fraction of papers which are considered interesting enough to be read, not by a change in the number of times an interesting paper is read. Extrapolating the relation seen in the earliest 16 years of figure 12 we find that the fraction of articles interesting enough to be read is $`I=I_0e^{-0.075T}`$, where $`T`$ is the age of the article in years, and $`I_0`$ is about 0.7. Similarly readership declines as $`e^{-0.09T}`$, so the mean number of reads per relevant article is $`M=M_0e^{-0.015T}`$, with $`M_0`$ equal to 2.5 reads per month. For articles between 4 and 22 years old the readership pattern is well fit by $`R=IM`$.
For articles younger than 4 years old the extrapolation of the $`R=IM`$ model substantially underestimates readership. While the fraction of papers read is only about 20% higher than the extrapolation (it could not be more than about 30% higher, at which point essentially all papers would be read), the mean reads per paper is 350% higher. We postulate that there is another mode of readership, which dominates for articles between one month and four years old; we will call this “papers current enough to be read.” If we subtract the $`R=IM`$ model from the data we get the residual of papers current enough to be read. This can be well represented by $`C=C_0e^{-0.85T}`$, where $`C_0`$ is equal to 5 reads per month. Now we have a two-component model for readership (per article published), valid for papers between one month and 22 years old, which is $`R=IM+C`$.

Figure 13 shows how well the model fits the actual readership data for January 1999. The solid line shows the difference between the log of the reads per paper published and the log of the model; the dotted lines show the $`1\sigma `$ errors, estimated using $`\sqrt{N}`$. Clearly the model fits the data well.

While the $`R=IM+C`$ model accounts for the vast majority of ADS use, there are at least two other modes of readership, which we will call “historical” and “new.” The historical mode describes the use of very old articles, and the new mode describes the readership of the current issue of a journal.

The ADS in January 1999 had only one journal which is complete to an early enough time to measure the historical mode, the Astronomical Journal, which is complete from volume 1 in 1849. The data currently available (shown in figure 14) suggest a constant low-level use, independent of time, $`H=H_0`$, where $`H_0`$ is 0.025 reads per month. With the database now being extended to include much of the literature of the past two centuries, this parameterization should improve greatly in the next couple of years.

The new mode represents the readership of the latest issue of a journal. As soon as a journal is issued, either received in the mail or posted electronically, a large number of astronomers scan the table of contents and read the articles of interest. Although ADS has a feature in the Table of Contents page which supports this type of readership, it does not represent a substantial fraction of ADS use. We believe most users do this either with the paper copy, or through the electronic journals directly. We can crudely estimate this mode in the ADS use by examining the daily usage logs following the release of new issues of the Astrophysical Journal. After subtracting the other modes already described we find $`N=N_0e^{-16T}`$, where $`N_0`$ is about 3.5 reads per month. For an accurate description of this mode one would need to analyze the logs of the electronic journals.

Finally we have a four-component model for how the astronomical literature is read, as a function of the age of an article: $`R=N+C+IM+H`$, where the first three terms are exponentials with very different time constants, and the fourth is a low-level constant. ADS use certainly underestimates the amplitude of the $`N`$ term, and may underestimate the amplitude of the $`C`$ term, as there are alternative electronic routes to some of these data.
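For reference, a minimal numerical sketch of this four-component model, using the amplitudes and decay rates quoted above (reads per paper per month, with the article age $`T`$ in years):

```python
import math

def reads_per_paper(T):
    """Four-component readership model R = N + C + I*M + H of Sec. 6.1."""
    N = 3.5 * math.exp(-16.0 * T)     # 'new': current-issue readership
    C = 5.0 * math.exp(-0.85 * T)     # 'current enough to be read'
    I = 0.7 * math.exp(-0.075 * T)    # fraction interesting enough to be read
    M = 2.5 * math.exp(-0.015 * T)    # mean reads per interesting paper
    H = 0.025                         # 'historical': constant low-level use
    return N + C + I * M + H

for age in (0.1, 1.0, 4.0, 10.0, 22.0):
    print(f"T = {age:5.1f} yr: {reads_per_paper(age):6.3f} reads/paper/month")
```

The crossovers between the terms reproduce the behavior described above: $`N`$ dies away within weeks, $`C`$ dominates out to a few years, $`IM`$ carries the decline out to two decades, and $`H`$ sets the floor for the historical literature.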
### 6.2 Comparison of Readership with Citation History

Citation histories have long been used to study the long-term readership of scientific papers (e.g., Burton and Kebler (1960)), with the basic result that the number of citations that a paper receives declines exponentially with the age of the article. While it is often assumed that the pattern of use is similar to the pattern of citation, this has not been conclusively demonstrated. Recently Tsay (1998) has found that the mean use half-life for a set of medical journals was 3.4 years, while the mean citation half-life for the same journals was 6.3 years.

We will compare the use of some of the Big8 journals with their citation histories using two datasets: the ADS data logs for the period from 1 May 1998 to 31 July 1998, and the citation information provided to ADS by the Institute for Scientific Information covering references in articles published during the first nine months of 1998, and only covering references from 1981 to date. ISI does not provide us with the full citation histories; rather, they provide us with pairs of citing and cited journal articles where both are in the ADS database, so the results will systematically underrepresent the citation histories of articles with substantial influence in areas outside astronomy, or where the primary references come from conference proceedings.

Figure 15 compares the citation histories of the Big8 journals with their readership; the scale refers to the citation information (dotted lines), while the readership data (solid lines) have been arbitrarily shifted for comparison. The lower dotted line represents the fraction of Big8 journal articles which were cited during the first nine months of 1998; the upper dotted line represents the mean number of cites per article. The lower solid line shows the mean number of reads per article during the three-month period May-July 1998, shifted by a factor of 19; the upper solid line shows the fraction of Big8 articles read, times 1.8. The number of cites has the same functional form as the fraction of reads, and the fraction of cites has the same form as the number of reads.

This result is perhaps surprising. Except for the most recent year (1997), where the number of cites declined from the year before, the number of cites per article declines with age as $`e^{-0.09T}`$, i.e., proportional to $`IM`$, the long-term declining readership. The citation half-life for these articles, 7.7 years, is longer than the 4.9 years found by Gupta (1990) for the Physical Review, but is consistent with the results of Abt (1981, 1996) of 20–30 year half-lives with no normalization, once one takes the increase in the number of astronomy papers/cites into account (Abt 1981, 1995). The fraction of articles cited, on the other hand, appears to follow the same two-component form as readership, $`R=IM+C`$.

We postulate the following explanation for this behavior. The degree of citability we define as the degree to which a paper would be cited, were it possible; we postulate this is directly proportional to readership: $`D=D_0R`$. The large increase in the fraction of recent papers cited is thus due to the large increase in readership. We define the ability of a paper to be cited to be a steeply increasing function of age, simply because for one paper to cite another it must appear before the second paper is written, refereed, and published: $`A=1-e^{-1.5T}`$. Our model for the mean number of citations a paper receives, $`Z`$, as a function of age is $`Z=Z_0AD`$, or $`Z=Z_0AD_0R`$.
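A minimal sketch of this citation model, combining the publication-lag factor $`A`$ with the long-term readership terms from the previous sketch; the normalization $`Z_0D_0\sim 0.08`$ citations per read is the value quoted in the next paragraph, and the units are nominal:

```python
import math

Z0_D0 = 0.08  # citations per read (product of the two normalizations)

def ability_to_be_cited(T):
    """A = 1 - exp(-1.5 T): a brand-new paper cannot yet have been cited."""
    return 1.0 - math.exp(-1.5 * T)

def readership(T):
    """Long-term readership terms R = IM + C (reads/paper/month)."""
    C = 5.0 * math.exp(-0.85 * T)
    IM = 0.7 * math.exp(-0.075 * T) * 2.5 * math.exp(-0.015 * T)
    return C + IM

def cites_per_paper(T):
    """Z = Z0 * A * D0 * R, with the citability D proportional to R."""
    return Z0_D0 * ability_to_be_cited(T) * readership(T)

for age in (0.25, 0.5, 1.0, 2.0, 5.0, 15.0):
    print(f"T = {age:5.2f} yr: {cites_per_paper(age):.3f}")
```

The model rises to a peak at an age of about a year and then follows the slow $`IM`$-like decline, the rise-then-decline behavior compared with the data in figure 16.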
Figure 16 shows the number of citations per paper as a function of age (thick solid line), the $`Z=Z_0AD_0R`$ model using the actual number of reads per paper for $`R`$ (thin solid line), and the $`Z=Z_0AD_0R`$ model using the $`R=IM+C`$ model for $`R`$ (dotted line). The product of the constants $`Z_0D_0`$ is the number of citations per read; currently this is about 0.08.

The papers which are frequently cited tend also to be frequently read, although the correlation is not very strong. We rank the papers by number of cites/reads during the 1998 periods, and perform a Spearman rank correlation between the 26,988 different Big8 papers cited and the 53,755 papers read (57,340 total); we obtain $`r_{Spearman}=0.35`$. This underestimates the correlation because it excludes papers which were neither cited nor read. Of the 66,392 Big8 papers published between 1982 and 1997, 81% were read in the three-month period using ADS, while 41% were cited during the nine-month period. The probability that a paper was not read declined sharply with the number of times it was cited. Figure 17 shows this; one paper each of the (324, 224, 126) papers which were cited (7, 8, 9) times went unread during the period; none of the 430 papers which were cited 10 or more times went unread.

The relations between the number of cites or reads of a paper and the rank that paper has when ranked by number of cites/reads are identical. If one takes papers published in a single year, both cites and reads follow a Zipf (1949) power law $`n\propto r^{-\alpha }`$ ($`n`$ is the number of reads or cites, and $`r`$ is the rank of the paper with that many reads/cites), where $`\alpha `$ is $`\frac{1}{2}`$; this is the same result Redner (1998) found for citation histories for the physics literature. If papers from all years are taken together and ranked, the power law index flattens identically for both cites and reads to $`\alpha =\frac{1}{3}`$.
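The exponent can be estimated directly from a rank-ordered list of per-paper read (or cite) counts by a least-squares fit in log-log space; the sketch below demonstrates this on synthetic counts generated with $`\alpha =\frac{1}{2}`$ (the counts are invented, not ADS data).

```python
import math

def zipf_alpha(counts_desc):
    """Estimate alpha in n ~ rank**(-alpha) from counts sorted in
    descending order, via least squares on log(n) versus log(rank)."""
    pts = [(math.log(rank), math.log(n))
           for rank, n in enumerate(counts_desc, start=1) if n > 0]
    mx = sum(x for x, _ in pts) / len(pts)
    my = sum(y for _, y in pts) / len(pts)
    slope = (sum((x - mx) * (y - my) for x, y in pts)
             / sum((x - mx) ** 2 for x, _ in pts))
    return -slope  # alpha is minus the log-log slope

synthetic_reads = [100.0 * rank ** -0.5 for rank in range(1, 1001)]
print(f"alpha = {zipf_alpha(synthetic_reads):.3f}")  # alpha = 0.500
```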
The three bi- and tri-monthly journals do not show much deviation from straight lines at one, while the AJ appears to be systematically less cited than it is read. The PASP again shows an increase during the beginning of this decade. Recall that the readership and citation information are from hundreds of thousands of individual decisions made by more than 10,000 astronomers during 1998. Taken together, figures 18, 19, and 20 show the current opinion of astronomers as to the usefulness of articles as a function of journal and publication date. The growth of the AJ, for example, from 6.5% of Big8 articles to 9.5%, has not greatly affected the relative readership or citation rates for the journal. The recent history of the PASP is perhaps the most interesting feature on figures 18, 19, and 20. From 1983 to 1995 the fraction of Big8 papers published by PASP declined from 6% to 3%. This decline is overstated, as PASP published some conference proceeding abstracts during the late 1980s, a practice which ended in 1991; the decline is nevertheless real: PASP published the same number of papers in 1995 as 19 years before, during which time the number of Big8 journal articles doubled. Figure 19 shows two main features: fluctuations and a slow rise. The large fluctuations during the late 80s and early 90s are due to two factors: fluctuations in the number of conference proceeding papers and abstracts; and the influence of Stetson (1987), which was read at twice the rate of the next most read paper from 1987, and four times the next most read PASP paper from that year. The rise in the readership measure during the 1990s is not caused by any known systematic; we believe it represents a real increase in the perceived usefulness of the journal. Figure 20 also shows the influence of Stetson (1987), currently the third most cited article in the ADS database, although now without the addition of the fluctuations in article counts. It also shows the rise in the perceived usefulness per article (this time in the measure of cites per read). Noting that the number of cites per article is the product of figures 19 and 20, the rise in the number of cites per article compared with the Big8 over the period 1989 to 1995 is a factor of three, so that the journal is now at full parity with the Big8. This demonstrates that the policy during this period was one of quality rather than quantity, a policy we dub “shaken, not stirred.”

#### 6.3.2 Loss of relative currency

All Big8 astronomical journals lose currency, the current usefulness of an article, at a rate described by the readership and citation models of 6.1 and 6.2. Any changes in the loss of currency of one journal with respect to the rest of the Big8 should be seen in figure 19 in the form of a relative decrease in readership, as a function of age. Indeed, the changes in the PASP which we have attributed to changes in editorial policy could simply be a substantial loss of relative currency. One of the Big8 journals, the Astrophysical Journal Letters, is intended to lose currency more rapidly than the other journals. Figure 21 shows the relative fraction of articles published (thin solid), articles read (thick solid), and articles cited (dotted) for the ApJL from 1981 to 1997. Except for the period from 1994 to 1997, the curves track each other reasonably well; older ApJL papers are not cited or read any more or less than the Big8 average.
For the more recent papers the cites and reads increase above the fraction published, implying that the journal is in some sense more current than average. In terms of readership this effect is strongly affected by a systematic. During the 3 month period in 1998, most of the 1996 and all of the 1997 issues of MNRAS were not available electronically due to copyright constraints. This dramatically lowered the relative readership of that journal, pushing all the others up. Also, all five journals which were fully electronic during 1997 show increases compared with the AJ and PASP, which were only available as bitmaps. Thus the increase in readership of the ApJL, the pioneer electronic journal (Boyce (1995)), could be due to its superior delivery system, rather than its content.

#### 6.3.3 Local differences in readership rates

Astronomers in different parts of the world read different journals at different rates than the average. Figure 22 shows three typical differences. The three curves show the ratio of readership fractions for a particular subset when compared with the rest of the world; a value of 1 means that there is no difference in relative readership. The thin solid line shows the MNRAS readership ratio for users who access the US site and have IP addresses ending in .uk; it shows that the British read Monthly Notices about 60% more than the world average. The dotted line shows the A&A readership ratio for users of the Strasbourg mirror, and the thick solid line shows the AJ readership ratio for US users with an IP address ending in .edu. They show that Europeans/Americans read A&A/AJ about 20% more than the rest of the world. The ApJ also shows about a 20% increase in the US; the PASJ shows a 300% increase in Japan.

#### 6.3.4 Use of historical literature

The ADS is in the process of putting a large fraction of the astronomical literature of the past two centuries on-line via bitmapped scans. The first nineteenth century journal to be fully on-line is the Astronomical Journal, which became fully available on 1 January 1999. Figure 14 shows the raw readership figures for the first two months of 1999 (US logs only); this shows the current readership of 150 years of the journal. Clearly the back issues are being read; the only year where the journal was published but no paper was read in the two months was 1909, when only 12 papers were published. Also there is a break in the exponential falloff with age for articles published between 1950 and 1960, where approximately twice the expected readership occurred. During this period 94 different users read 283 articles; the biggest user made 13 reads. We have no explanation for this increased use. The only other period where the use is not predicted by the $`C+IM+H`$ model of 6.1 is the first decade of the journal’s existence, perhaps due to curiosity.

## 7 The Impact of the ADS on Astronomy

It is difficult to judge the impact of scientific work. For scientific programs citation histories, personal honors and awards, and the success of students can give a measure of impact. For support type programs these measures do not suffice; the impact of the 200-inch Hale Telescope (Anderson (1948), Rule (1948)) or the 4-meter Mayall Telescope (Crawford (1965)) clearly extends beyond the papers and honors of their respective developers. The impact of large software projects is, if anything, even harder to quantify; the large data reduction environments, like AIPS (Fomalont (1981), Greisen (1998)), MIDAS (Banse et al.
(1983)), or IRAF (Tody (1986)) have transformed astronomy, but how much? The ADS is perhaps unique among large support projects in that a reasonably accurate quantitative estimate of its impact can be made. This is because many of the services the ADS provides are just more efficient methods of doing things astronomers have long done, and found worth the time it took to do them. We will assign to each of several ADS functions a time which is our estimate of the increase in research time which accrues to the researcher by virtue of using that function. Our fundamental measure will be the time saved in obtaining an article via the ADS, which we estimate from the time it takes to go to the library, find the volume, photocopy the article, and return to the office, as 15 minutes. We then estimate that reading an abstract, a reference list, or a citation history saves $`1/3`$ of the full article time, or 5 minutes, and we arbitrarily assign a one minute time savings to each query. We can now estimate the impact of ADS, in terms of FTE (Full Time Equivalent, 2000 hour) research years, by examining the ADS usage logs. We note that about half of the full text articles currently retrieved via the ADS come from the on-line journals, which certainly deserve credit for their work. Also we are ignoring several important (but hard to quantify) aspects of the ADS service, such as links from other web sites (e.g. the HTML journals), the synergy of joint ADS/SIMBAD and ADS/NED queries (e.g. that in figure 1), the bulk retrieval of abstracts and LaTeX formatted references (about 200,000 per month), and the more than 10,000,000 references returned each month. We think that what follows is a reasonable estimate of the impact of the ADS on astronomy, and that the impact of the full Urania collaboration is substantially more. Using the March 1999 worldwide combined ADS logs, there were 113,471 full text articles, 195,026 abstracts (individually selected), 10,663 citation histories, and 3,702 reference pages retrieved, and 582,836 queries made. Using the estimated time savings above, we find that the impact of the ADS on astronomy is 333 FTE research years per year, approximately the same as the entire Harvard-Smithsonian Center for Astrophysics. If we crudely estimate that there are 10,000 FTE research years in astronomy each year, the ADS can be viewed as accounting for 3.33% of astronomy. Currently the ADS contains 27,712 (11,834) articles (refereed articles) in the astronomy database dated 1998, so one way of expressing the impact of the ADS would be 923 (394) articles (refereed articles) per year. While the efficiencies brought about by the technologies inherent in the ADS and Urania are permanent, and will contribute (compounded) to the accelerating pace of discovery in astronomy, one can ask what was gained by being first. Risks were taken in funding the early development and adoption of technologies via the ADS and Urania. Also, had nothing been done, the “winning” technologies would eventually be adopted with very little risk. To judge the payoff we adopt a simple model; we assume that the increase in research efficiency due to the ADS has increased linearly from zero in 1993 to 333 FTE research years in 1999, and that it will decrease linearly to zero over the next six years, after which there will be no difference in the technologies employed. The arithmetic behind these estimates is sketched below.
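As a check on the arithmetic of this section, a minimal sketch (Python). The per-action counts and time savings are those given above; the yearly weighting of the rise-and-fall payoff model is our own assumption, so its total is only approximate:

```python
# Part 1: the ~333 FTE-years/year figure from the March 1999 logs.
MINUTES_PER_FTE_YEAR = 2000 * 60   # one FTE research year = 2000 hours

monthly_actions = [                # (count in March 1999 logs, minutes saved)
    (113471, 15),                  # full text articles retrieved
    (195026, 5),                   # abstracts individually selected
    (10663, 5),                    # citation histories
    (3702, 5),                     # reference pages
    (582836, 1),                   # queries
]
minutes_saved = sum(n * m for n, m in monthly_actions)
fte_per_year = 12 * minutes_saved / MINUTES_PER_FTE_YEAR
print(f"impact: {fte_per_year:.0f} FTE research years per year")   # -> 333

# Part 2: the payoff model -- a linear rise from zero in 1993 to the 1999
# peak, then a linear fall to zero by 2005.  How the endpoint years are
# weighted is an assumption here, so this sum only approximates the 2,332
# FTE research years quoted in the text.
rise = [fte_per_year * (y - 1993) / 6.0 for y in range(1993, 2000)]
fall = [fte_per_year * (2005 - y) / 6.0 for y in range(2000, 2006)]
print(f"summed payoff: roughly {sum(rise) + sum(fall):.0f} FTE research years")
```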
This yields a sum impact from the early creation of the ADS of 2,332 FTE research years, which is 23% of the astronomical research done in a single year, or 6463 (2760) papers (refereed papers). This is surely equal to the impact of the very largest and most successful projects. Doing this analysis for the entire Urania would yield a substantially increased amount.

## 8 Acknowledgments

Peter Ossorio is a pioneer in the field of automated text retrieval; he gave freely of his ideas in the early phase of the project. Geoff Shaw provided the enthusiasm to keep the Abstract Service project going during the long period of no funding. Margaret Geller gave crucial encouragement at the time of the original prototype. Frank Giovane long believed in the possibilities of the Abstract Service, and acted as a friend in high places. Todd Karakashian wrote much of the software at the time of the public release; he left in 1994. Markus Demleitner joined the ADS project in April 1999; he has already produced much of value. There are about a dozen individuals at the Strasbourg Observatory and the Strasbourg Data Center to thank, too many to name individually. The data services provided by them are at the heart of the new astronomy; their collaboration with the ADS has been both very fruitful, and a great joy. Peter Boyce, Evan Owens, and the electronic Astrophysical Journal project staff have had the vision necessary to do things first. Their collaboration has been important to the success of the ADS, and crucial to the success of Urania. Without the long term support from NASA, and Günter Riegler in particular, the ADS would not now exist. We are supported by NASA under Grant NCC5-189.
# Soft Mode Anomalies in the Perovskite Relaxor Pb(Mg1/3Nb2/3)O3

## I Introduction

In the past year two neutron inelastic scattering studies have been carried out in an attempt to elucidate the nature of the lattice dynamics in the relaxor-based systems Pb(Mg<sub>1/3</sub>Nb<sub>2/3</sub>)O<sub>3</sub> (PMN) and Pb(Zn<sub>1/3</sub>Nb<sub>2/3</sub>)<sub>0.92</sub>Ti<sub>0.08</sub>O<sub>3</sub> (PZN-8%PT), which have the complex perovskite structures $`A(B^{\prime }B^{\prime \prime })O_3`$ and $`A(B^{\prime }B^{\prime \prime }B^{\prime \prime \prime })O_3`$, respectively. Of particular interest to both studies was the soft phonon mode that is ubiquitous in ferroelectric and perovskite systems. The displacive phase transition in classical ferroelectric systems such as PbTiO<sub>3</sub>, which has the simple $`ABO_3`$ perovskite structure, is driven by the condensation or softening of a zone-center transverse optic (TO) phonon, i. e. a “soft mode,” that transforms the system from a cubic paraelectric phase to a tetragonal ferroelectric phase. Direct evidence of this soft mode behavior is easily obtained from neutron inelastic scattering measurements made at different temperatures above the Curie temperature $`T_c`$. The top panel of Fig. 1 shows, for example, the phonon dispersion of the lowest-energy TO branch in PbTiO<sub>3</sub> (PT) at 20 K above $`T_c`$. Here one can see that the zone center ($`\zeta =0`$) phonon energy has already dropped to a very low value of 3 meV. As $`T\to T_c`$, the soft mode energy $`\hbar \omega _o\propto (T-T_c)^{1/2}\to 0`$. By contrast, the so-called “relaxor” systems possess a built-in disorder that stifles the ferroelectric transition. Instead of a sharp transition at $`T_c`$, one observes a diffuse phase transition in which the dielectric permittivity $`ϵ`$ exhibits a broad maximum as a function of temperature at a temperature $`T_{max}`$. The disorder in relaxors can be either compositional or frustrated in nature. In the case of PMN and PZN (Z = Zn), the disorder results from the $`B`$-site being occupied by ions of differing valence (either Mg<sup>2+</sup> or Zn<sup>2+</sup>, and Nb<sup>5+</sup>). Hence the randomness of the $`B`$-site cation breaks the translational symmetry of the crystal. Yet despite years of intensive research, the physics of the observed diffuse phase transition is still not well understood. Moreover, it is interesting to note that no definitive evidence for a soft mode has been found in these systems.

Fig. 1. Top - Dispersions of the lowest-energy TO mode and the TA mode in PbTiO<sub>3</sub>, measured just above $`T_c`$ (from ). Bottom - Dispersions of the equivalent modes in PMN measured far above $`T_{max}`$ (from ).

In a series of papers published in 1983, Burns and Dacol proposed an elegant model to describe the disorder intrinsic to relaxor systems. Using measurements of the optic index of refraction on both ceramic samples of (Pb<sub>1-3x/2</sub>La<sub>x</sub>)(Zr<sub>y</sub>Ti<sub>1-y</sub>)O<sub>3</sub> (PLZT) as well as microscopically homogeneous single crystals of PMN and PZN, they demonstrated that a randomly-oriented local polarization $`P_d`$ develops at a well-defined temperature $`T_d`$, often referred to as the Burns temperature, several hundred degrees above the apparent ferroelectric transition temperature $`T_c`$. The spatial extent of these locally polarized regions in the vicinity of $`T_d`$ was conjectured to be ∼ several unit cells, and has given rise to the term “polar micro-regions,” or PMR.
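To put a number on the mean-field soft-mode behavior described above, a minimal sketch (Python). The Curie temperature of PbTiO<sub>3</sub> and the square-root prefactor are our assumptions, with the prefactor chosen to reproduce the ≈3 meV zone-center energy quoted above at 20 K above $`T_c`$:

```python
import math

# Mean-field soft mode: h_bar*omega_0 ~ A*(T - Tc)^(1/2) -> 0 as T -> Tc.
TC = 763.0                   # assumed Curie temperature of PbTiO3 (K)
A = 3.0 / math.sqrt(20.0)    # meV/K^0.5, fixed by the ~3 meV value at Tc + 20 K

for T in (TC + 5, TC + 20, TC + 100, TC + 300):
    e_soft = A * math.sqrt(T - TC)   # soft TO phonon energy in meV
    print(f"T - Tc = {T - TC:5.0f} K  ->  h_bar*omega_0 ~ {e_soft:4.1f} meV")
```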
For PMN, the formation of the PMR occurs at ∼617 K, well above the temperature $`T_{max}`$ = 230 K where the dielectric permittivity reaches a maximum. Recently, using neutron inelastic scattering techniques, we have found striking anomalies in the lowest-energy TO phonon branch (the same branch that goes soft at the zone center at $`T_c`$ in PbTiO<sub>3</sub>) that we speculate are directly caused by these same nanometer-sized PMR.

## II Search for a Soft Mode in PMN

Our phonon measurements on relaxor systems began with PMN at the NIST Center for Neutron Research (NCNR) in 1997. At that time many diffuse scattering studies of PMN using X-rays and neutrons had already been published. However, there were no published neutron inelastic scattering measurements on PMN until the 1999 phonon study by Naberezhnov et al. The bottom panel of Fig. 1 shows neutron scattering data taken by Naberezhnov et al. on PMN exactly analogous to that shown in the top panel for PbTiO<sub>3</sub>, except that the PMN data were taken at 800 K, a temperature that is much higher relative to the transition temperature of PMN, i. e. $`570`$ K above $`T_{max}`$. The neutron scattering measurements presented here were performed at the NCNR using both the BT2 and BT9 triple-axis spectrometers. The (002) reflections of highly-oriented pyrolytic graphite (HOPG) crystals were used to both monochromate and analyze the incident and scattered neutron beams. An HOPG transmission filter was used to eliminate higher-order neutron wavelengths. Inelastic measurements were made by holding the final neutron energy $`E_f`$ fixed at 14.7 meV ($`\lambda _f=2.36`$ Å) while varying the incident neutron energy $`E_i`$. Typical horizontal beam collimations used were 60-40-40-80 and 40-48-48-80. The single crystal of PMN used in this study measures 0.5 cm<sup>3</sup> in volume, and was the identical crystal used by Naberezhnov et al. It was grown using the Czochralski technique described elsewhere. The crystal was mounted onto an aluminum holder and oriented in air with either the cubic \[$`\overline{1}`$10\] or axis vertical. We used two types of scans to collect data. Constant energy (constant-$`E`$) scans were performed by keeping the energy transfer $`\hbar \omega =\mathrm{\Delta }E=E_f-E_i`$ fixed while varying the momentum transfer $`\stackrel{}{Q}`$. Constant-$`\stackrel{}{Q}`$ scans were performed by holding the momentum transfer $`\stackrel{}{Q}=\stackrel{}{k_f}-\stackrel{}{k_i}`$ ($`k=2\pi /\lambda `$) fixed while varying the energy transfer $`\mathrm{\Delta }E`$. Using these scans, the dispersions of both the transverse acoustic (TA) and the lowest-energy transverse optic (TO) phonon modes were mapped out at room temperature (still in the cubic phase, but well below the Burns temperature $`T_d`$ ∼ 617 K).

Fig. 2. Solid dots represent positions of peak scattered neutron intensity taken from constant-$`\stackrel{}{Q}`$ scans at 300 K along the symmetry direction. Vertical bars represent phonon FWHM linewidths in meV. Solid lines are guides to the eye indicating the TA and TO dispersion curves. Shaded area represents region of TO dispersion where constant-$`\stackrel{}{Q}`$ scans showed no well-defined peaks.

In Fig. 2 we plot the positions of the peak in the scattered neutron intensity taken from constant-$`\stackrel{}{Q}`$ scans at 300 K as a function of $`\hbar \omega `$ and $`|\stackrel{}{q}|=k`$.
Here $`\stackrel{}{q}=\stackrel{}{Q}-\stackrel{}{G}`$ is the momentum transfer measured relative to the $`\stackrel{}{G}=(2,0,0)`$ Bragg reflection along the symmetry direction. Limited data were also taken near $`(3,0,0)`$. The lengths of the vertical bars represent the measured phonon peak FWHM linewidths (full width at half maximum) in $`\hbar \omega `$ (meV), and were derived from Gaussian least-squares fits to the constant-$`\stackrel{}{Q}`$ scans. The lowest-energy data points trace out the TA phonon branch along , and solid lines have been drawn through these points as a guide to the eye. We see that the TA dispersion curve is identical to that shown for PMN at 800 K in the bottom panel of Fig. 1.

Fig. 3. Data from constant-$`\stackrel{}{Q}`$ scans taken near (2,0,0) at 300 K. Lines are guides to the eye. The scan at $`q=-0.2`$ r.l.u. shows well-defined TA and TO modes. But at $`q=-0.1`$ r.l.u., only the TA peak is well-defined. The TO mode is strongly overdamped. The inset, however, shows data taken on the same crystal at the same $`q`$ at 880 K in which the TO mode is clearly well-defined (from ).

It is also clear from the dispersion diagram presented in Fig. 2 that our room temperature data show the same TO1 modes at high $`q`$ as those reported at 800 K by Naberezhnov et al. However, the scattering intensities for this mode for small $`q\lesssim 0.15`$ r.l.u. were scarcely above background at $`(2,q,0)`$ as well as at $`(3,q,0)`$. This is evident in Fig. 3, where two constant-$`\stackrel{}{Q}`$ scans, taken near (2,0,0) at 300 K, are shown. For $`q`$ = -0.2 r.l.u. (1 r.l.u. = 2$`\pi `$/a = 1.553 Å<sup>-1</sup>), we observe two well-defined peaks corresponding to scattering from the TA and TO modes. But for $`q`$ = -0.1 r.l.u. only the TA mode is well-defined. The TO mode scattering is weak and broadly distributed in $`q`$. By contrast, the inset of Fig. 3 shows a very prominent peak in the scattering from the TO mode at the same $`q`$ (-0.1 r.l.u.) taken on the same crystal (data from ). The only difference was that these data were taken at much higher temperature, i. e. 880 K. These data remained a puzzle as we could not locate where the scattering intensity had gone, and we were forced to abandon our search for the soft mode for the time being.

## III The Morphotropic Phase Boundary and PZN-8%PT

Our phonon studies of relaxor single crystals were subsequently resumed two years later from a very different perspective. The nearly vertical morphotropic phase boundary (MPB) in Pb(Zr<sub>1-x</sub>Ti<sub>x</sub>)O<sub>3</sub> (PZT), which separates the rhombohedral and tetragonal regions of the PZT phase diagram near a Ti concentration of 50%, had recently been reinvestigated by Noheda et al. There they found a previously unknown monoclinic phase above 300 K that separates the tetragonal and rhombohedral phases. This was an exceedingly important discovery, and was extensively discussed in this conference, because the monoclinic phase forms a natural bridge between the tetragonal and rhombohedral phases, and sheds new light on possible explanations for the enhanced piezoelectric properties observed in PZT ceramics for compositions that lie close to the MPB. Motivated by this result, Gehring, Park and Shirane realized that a very similar MPB boundary exists in $`(1-x)`$PZN-$`x`$PT around $`x=0.08`$. These are solid solutions which exhibit even greater piezoelectric properties than are obtained in PZT ceramics.
Moreover, unlike the case of PZT, they can be grown into large high quality single crystals, ideal for neutron inelastic scattering studies. The soft phonons in PbTiO<sub>3</sub> (PT) had already been thoroughly characterized by Shirane et al. Hence their idea was to study the phonons in $`(1-x)`$PZN-$`x`$PT at higher $`x`$, say 20% PT, and then trace the evolution of the transverse optic modes to 8% PT and PZN as a function of $`x`$. In this way it was hoped that the scattering associated with the missing optic branch at small $`q`$ for PMN could be located. The neutron inelastic measurements on PZN-8%PT were performed at 500 K. By employing a combination of both constant-$`\stackrel{}{Q}`$ and constant-$`E`$ scans, an anomalous enhancement of the scattering cross section was discovered between 0.10 r.l.u. $`<|\stackrel{}{q}|<`$ 0.15 r.l.u. This enhancement was located at a fixed $`q`$ relative to the zone center, and was energy independent over a large range of energy transfer extending from 4 meV to 9 meV. When plotted in the form of a “dispersion” diagram, the TO branch appeared to drop precipitously into the acoustic branch. This resulting TO branch was referred to as a “waterfall” for this reason, and is shown in the inset to Fig. 4. It was conjectured that these waterfalls are caused by the polar micro-regions first demonstrated by Burns and Dacol. The existence of such polarized regions, which are of finite spatial extent, should effectively inhibit the propagation of the ferroelectric TO mode. Moreover, the size of these regions can be estimated as $`2\pi /q`$, which at 500 K corresponds to about 31 Å, or roughly 7 to 8 unit cells. This value is consistent with that put forth in the picture of Burns and Dacol.

Fig. 4. Two constant-E scans measured along at 5 meV and 9 meV at 300 K. These scans demonstrate the presence of the same anomalous scattering that was observed in PZN-8%PT (shown in the inset, from ).

At this point it was natural to ask the question whether or not the same anomalous “waterfall” was present in PMN, as this would provide a natural explanation for the missing low-$`q`$ portion of the optic branch in PMN. So we went back to our old data of PMN taken at room temperature, and we found two constant-$`E`$ scans at 5 and 9 meV which we had taken along the direction from near (2,2,0). These data, shown in the right hand portion of Fig. 4, clearly indicate the presence of an anomalous enhancement of the scattering intensity at a fixed $`q`$, similar to that observed in PZN-8%PT.

## IV Discussion and Interpretation

Naberezhnov et al. identified the normal-looking optic phonon branch at 800 K shown in Fig. 1 as a hard TO1 mode, and not the ferroelectric soft mode, because the $`\stackrel{}{Q}`$-dependence of the associated dynamic structure factor was inconsistent with that expected for ferroelectric fluctuations, i. e. nearly no critical scattering was observed near the (2,2,0) Bragg peak in the vicinity of $`T_{max}`$. On the other hand, the absence of critical fluctuations at (2,2,0) may simply mean that the eigenvectors for PMN are different from those of PbTiO<sub>3</sub>. The lowest polar optic mode is still clearly present and well-defined at 880 K. At lower temperatures, using the same PMN single crystal, we observe an overdamped phonon scattering cross section in addition to this new anomalous scattering at small $`q`$ below $`T_d`$, i. e. the waterfall.
So the proper question to ask is whether or not this TO1 branch is the lowest-energy polar optic mode in PMN. Before this point can be settled uniquely, more neutron measurements will be needed at temperatures both above and below $`T_d`$ to show precisely how the anomalous scattering changes with temperature, that is, whether or not the waterfall evolves into the TO1 branch measured by Naberezhnov et al. at 880 K. For this purpose, constant-$`E`$ scans will be of particular importance since, as we learned in 1997, the waterfall is not readily visible without them. At present, our picture of PMN is that at high temperatures $`T>T_d`$ the system behaves like all other simple perovskites. When the PMR are formed below the Burns temperature, the crystal behaves like a two-phase mixture from a lattice dynamical point of view. The PMR exhibit the anomalous waterfall as found in PZN-8%PT, whereas the non-PMR regions show a gradual change from the regular TO branch to one which is overdamped. These are shown very nicely in the constant-$`\stackrel{}{Q}`$ data of Vakhrushev et al. at $`(3,q,0)`$ between 880 K and 450 K. Consequently we believe that the study of Naberezhnov et al. properly characterized the coupled modes of PMN at small $`q`$, whereas the study of Gehring, Park and Shirane characterized the modes at intermediate $`q`$ in which the highly unusual waterfall was discovered. Future measurements are being planned to determine whether or not an applied electric field can influence the shape of the waterfall.

## V Acknowledgments

We thank L. E. Cross, R. Guo, S.-E. Park, S. Shapiro, N. Takesue, and G. Yong for stimulating discussions. Financial support by the U. S. Department of Energy under contract No. DE-AC02-98CH10886 is acknowledged. Work at the Ioffe Institute was supported by the Russian Fund for Basic Research grant 95-02-04065, and by the National Program “Neutron Scattering Study of Condensed Matter.” We also acknowledge the support of the National Institute of Standards and Technology, U. S. Department of Commerce, in providing the neutron facilities used in this work.
# Ordered and self–disordered dynamics of holes and defects in the one–dimensional complex Ginzburg–Landau equation

## Abstract

We study the dynamics of holes and defects in the 1D complex Ginzburg–Landau equation in ordered and chaotic cases. Ordered hole–defect dynamics occurs when an unstable hole invades a plane wave state and periodically nucleates defects from which new holes are born. The results of a detailed numerical study of these periodic states are incorporated into a simple analytic description of isolated “edge” holes. Extending this description, we obtain a minimal model for general hole–defect dynamics. We show that interactions between the holes and a self–disordered background are essential for the occurrence of spatiotemporal chaos in hole–defect states.

The formation of local structures and the occurrence of spatiotemporal chaos are the most striking features of pattern forming systems. The complex Ginzburg–Landau equation (CGLE)
$$A_t=A+(1+ic_1)\partial _x^2A-(1-ic_3)|A|^2A$$ (1)
provides a particularly rich example of these phenomena. The CGLE describes pattern formation near a Hopf bifurcation and has become a paradigmatic model for the study of spatiotemporal chaos. Defects occur when $`A`$ goes through zero and the complex phase $`\psi :=\mathrm{arg}(A)`$ is no longer defined. In two and higher dimensions, such defects can only disappear via collisions with other defects, and act as long–living seeds for local structures like spirals and scroll waves whose instabilities lead to various chaotic states. For the 1D CGLE, however, defects occur only at isolated points in space–time (see Fig. 1) and intricate dynamics of defects and local hole structures occurs, especially in the so–called intermittent and bi–chaotic regimes. The holes are characterized by a local concentration of phase–gradient $`q:=\partial _x\psi `$ and a depression of $`|A|`$ (hence the name “hole”), and dynamically connect the defects (Fig. 1). We divide these holes into two categories: coherent and incoherent structures.

Coherent structures - By this we mean uniformly propagating structures of the form $`A(x,t)=e^{-i\omega t}\overline{A}(x-vt)`$. Recently, hole solutions of this form called homoclinic holes were obtained. Asymptotically, homoclinic holes connect identical plane waves where $`A\sim e^{i(q_{ex}x-\omega t)}`$. With $`c_1,c_3`$ and $`q_{ex}`$ fixed, unique left moving and unique right moving coherent holes are found. Left (right) moving holes with $`q_{ex}=Q`$ ($`q_{ex}=-Q`$) are related by the left–right $`q\to -q`$ symmetry of the CGLE. Coherent holes have one unstable core mode.

Incoherent structures - In full dynamic states of the CGLE, one does not observe the unstable coherent homoclinic holes, unless one fine–tunes the initial conditions (see Fig. 2d). Instead, evolving incoherent holes that can grow out to defects occur (Fig. 1 and 2b). In this Letter we study the hole → defect and defect → holes dynamical processes of the 1D CGLE. We present a minimal model for hole–defect dynamics that describes the full “interior” spatiotemporal chaotic states of Fig. 1a, where holes propagate into a self–disordered background. Similar “self–replicating” patterns are observed in many other situations, e.g., reaction–diffusion models, film–drag, eutectic growth, forced CGLE and space–time intermittency models.

Hole → defect - Let us consider the short–time evolution of an isolated hole propagating into a plane wave state.
Holes can be seeded from initial conditions like:
$$A=\mathrm{exp}(i[q_{ex}x+(\pi /2)\mathrm{tanh}(\gamma x)]).$$ (2)
The precise form of the initial condition is not important here as long as we have a one–parameter family of localized phase–gradient peaks. This is because the left moving and right moving coherent holes for fixed $`c_1,c_3`$ and $`q_{ex}`$ are each unique and have one unstable mode only. As $`\gamma `$ is varied, three possibilities can arise for the time evolution of the initial peak: evolution towards a defect (as in Fig. 1a), decay, or evolution arbitrarily close to a coherent homoclinic hole (see Fig. 2). The hole propagation velocities are much larger than the typical group velocities in the plane wave states: the holes are thus only sensitive to the leading wave. Their internal, slow dynamics determines their trailing wave. A (nearly) coherent hole will, due to phase conservation, have a trailing wave (nearly) identical to the leading wave (Fig. 2); hence the relevance of the homoclinic holes.

Defect → holes - What dynamics occurs after a defect has been formed? A study of the spatial defect profiles reveals that they consist of a negative and positive phase–gradient peak in close proximity (the early stage of the formation of these two peaks can be seen in Fig. 2b; see also Fig. 4d of ). The negative (positive) phase gradient peak generates a left (right) moving hole. The lifetimes of these holes depend on their parent defect profile (analogous to what we described in Fig. 2) and also on $`c_1,c_3`$ and $`q_{ex}`$. Hence the defects act as seeds for the generation of daughter holes (see also Fig. 1).

Periodic hole-defect states - When an incoherent hole invades a plane wave state and generates defects, stable periodic hole → defect → hole behavior can set in at the edges of the resulting pattern (Fig 1a). The asymptotic period $`\tau `$ of this process depends on $`c_1,c_3`$, the propagation direction and the wavenumber $`q_{ex}`$ of the initial condition only; we focus here on right moving holes. The period $`\tau `$ diverges at a well-defined value of $`q_{ex}=q_{coh}`$ (Fig. 3a). This can be understood in the phase space picture presented in Fig. 2. Suppose we fix $`c_1`$ and $`c_3`$. The edge defects that are generated periodically yield constant initial conditions for their daughter edge holes, similar to fixing $`\gamma `$ in Eq. (2). The period $`\tau `$ will depend on the location of the defect profile with respect to the stable manifold of the coherent hole. When $`q_{ex}`$ is varied, both this manifold and the defect profile may change, and for a certain value of $`q_{ex}`$ which we call $`q_{coh}`$, the defect generates an initial condition precisely on the stable manifold of the coherent hole. The lifetime of the resulting daughter hole then diverges (see Fig. 2d). To substantiate this intuitive picture, we have performed numerics on the dynamics of “edge–holes” invading a plane wave state where $`A\sim e^{i(q_{ex}x-\omega t)}`$. We have performed runs for many different parameters, but will only discuss a representative subset here. Our results indicate that the $`\tau `$ divergence is of the form
$$\tau \simeq -s\mathrm{ln}(q_{ex}-q_{coh})+\tau _0.$$ (3)
This equation, and in particular the value of $`s`$, can be understood by considering the flow near the saddle point shown in Fig 2a. Just after the hole has been formed, it first evolves rapidly along the stable manifold.
Secondly it evolves slowly along the unstable manifold before being shot away towards the next defect. For values of $`q_{ex}`$ close to $`q_{coh}`$, the holes approach the coherent structure fixed point very closely, and $`\tau `$ will be dominated by a regime of exponential growth close to this fixed point. Small changes in $`q_{ex}`$ will have a negligible effect on the duration of the first phase ($`\tau _0`$), but the duration of the second phase will diverge logarithmically as $`-(1/\lambda )\mathrm{ln}(q_{ex}-q_{coh})`$. Here $`\lambda `$, which depends on $`c_1`$ and $`c_3`$, denotes the unstable eigenvalue of the coherent structures at $`q_{ex}=q_{coh}`$. In Table 1 we list some numerically determined values for $`q_{coh}`$, $`1/\lambda `$, and $`s`$. We obtained $`s`$ and $`q_{coh}`$ from a fit of $`\tau `$ to Eq. (3), whereas $`\lambda `$ is obtained from a shooting algorithm, see Ref. . The agreement between $`s`$ and $`1/\lambda `$ is quite satisfactory.

We will now construct a phenomenological model for isolated incoherent holes. (i) We will ignore their early time attraction to the unstable manifold, and think of their location on $`W^\mathrm{U}`$ as an internal degree of freedom, parameterized by the phase–gradient extremum $`q_m`$. (ii) Clearly the model should have an unstable fixed point for values of $`q_m`$ corresponding to coherent holes. We have found that, in good approximation, coherent holes have $`q_m=q_n+gq_{ex}`$, where $`q_n`$ denotes the value of $`q_m`$ for a coherent hole in a $`q_{ex}=0`$ state, and $`g`$ is a negative phenomenological constant. (iii) When approaching a defect, $`q_m`$ diverges as $`(\mathrm{\Delta }t)^{-1}`$; we have confirmed this by accurate numerics (Fig. 3b). An appropriate equation incorporating these three features is
$$\dot{q}_m=\lambda (q_m-(q_n+gq_{ex}))+\mu (q_m-(q_n+gq_{ex}))^2,$$ (4)
where $`g`$ and $`\mu `$ are phenomenological constants. The first term on the RHS of (4) results from the linearization near the coherent fixed point. Nonlinear terms of higher than quadratic order on the RHS of Eq. (4) are ruled out by the $`(\mathrm{\Delta }t)^{-1}`$ divergence of $`q_m`$. Our numerical data for $`\dot{q}_m`$ versus $`q_m`$ indeed shows quadratic behavior for large enough values of $`q_m`$ (Fig. 3c). For smaller values of $`q_m`$, the curves are quite intricate; this corresponds to the rapid early time evolution along the stable manifold not included in model (4). From Eq. (4), it is straightforward to show that the hole lifetime $`\tau `$ (the time taken for $`q_m`$ to diverge) displays the required logarithmic divergence as $`q_{ex}`$ is tuned towards a critical value $`q_{coh}`$.

Disordered dynamics - If the patches away from the holes/defects were simply plane waves with fixed wavenumber, then one would expect, following the arguments given above, quite regular dynamics. The coupling between holes and the background induced by phase conservation becomes the key ingredient to understand disorder in hole–defect dynamics such as shown in Fig 1a. Let us introduce a variable $`\varphi :=\int dx\,q`$ that measures the phase difference across a certain interval. Consider again an edge hole evolving towards a defect. While the peak of the $`q`$-profile grows, the hole creates a dip in its wake (see Fig. 2b) in order to locally conserve $`\varphi `$. Clearly the trailing edge of this incoherent hole is not a perfect plane wave. In the interior of states such as shown in Fig.
1a, unstable holes move back and forth through a background of disordered $`q_{ex}`$ and amplify this disorder. Nevertheless, as we pointed out earlier, the disordering dynamics is sufficiently slow such that the holes remain approximately homoclinic for much of their lives. Although the typical range of values for the disordered $`q_{ex}`$ is small, the hole lifetimes depend on it sensitively. Hence the variation in $`q_{ex}`$ and $`\varphi `$ is sufficient to explain the varying lifetimes found in the interior states such as that shown in Fig 1a. Thus the essence of the spatiotemporal chaotic states here lies in the propagation of unstable local structures in a self–disordered background.

Minimal model - To illustrate our picture of self–disordered dynamics, we will now combine the various hole–defect properties with the left–right symmetry and local phase conservation of the CGLE to form a minimal model of hole–defect dynamics. From our previous analysis, we see that the following hole–defect properties should be incorporated: (i) Incoherent holes propagate either left or right with essentially constant velocity (see Fig. 1a). (ii) For fixed $`c_1,c_3`$, their lifetime depends on the profile of their parent defect, the direction of propagation, and on the wavenumber of the state into which they propagate. (iii) Eq. (4) captures essentially all aspects of the evolution of their internal degree of freedom. When $`q_m`$ diverges, a defect occurs. In our model we will assume that all the defects have the same profile and so act as unique initial conditions for their daughter incoherent holes. While in principle a defect profile could depend on the entire history of the hole which preceded it, for simplicity we have chosen to neglect this. We have observed that for some regions of the $`c_1,c_3`$ parameter space, the defect profiles from the interior spatiotemporal chaotic patterns show a surprising lack of scatter. Therefore we believe that treating the defect profiles as constant, and only including the effect of the background in the hole dynamics, incorporates the essence of the coupling to a disordered background.

We discretize both space and time by coarse-graining, and take a “staggered” type of update rule which is completely specified by the dynamics of a $`2\times 2`$ cell (see Fig. 4a). We put a single variable $`\varphi _i`$ on each site, corresponding to the phase difference across a cell divided by $`2\pi `$. Local phase conservation is implemented by $`\varphi _l^{\prime }+\varphi _r^{\prime }=\varphi _l+\varphi _r`$, where the primed (unprimed) variables refer to values after (before) an update. Holes are represented by active sites where $`|\varphi |>\varphi _t`$; here $`\varphi `$ plays the role of the internal degree of freedom. Inactive sites are those with $`|\varphi |<\varphi _t`$, and they represent the background. The value of the cutoff $`\varphi _t`$ is not very important as long as it is much smaller than typical values of $`\varphi `$ for coherent holes. Here $`\varphi _t`$ is fixed at $`0.15`$. Without loss of generality we force holes with positive (negative) $`\varphi `$ to propagate only from $`\varphi _l`$ ($`\varphi _r`$) to $`\varphi _r^{\prime }`$ ($`\varphi _l^{\prime }`$). Depending on the two incoming states, we have the following three possibilities:

One site active: Without loss of generality we assume that we have a right moving hole. We implement evolution similar to Eq. (4), but neglect the quadratic term of Eq.
(4); even though $`q_m`$ diverges, the local phase difference $`\varphi _m`$ does not diverge near a defect. Hence the finite time divergence of the local phase gradient $`q`$ that signals a defect can be replaced by a cutoff $`\varphi _d`$ for $`\varphi `$. Therefore, when $`\varphi _l<\varphi _d`$, the internal hole coordinate $`\varphi `$ is taken to evolve via $`\varphi _r^{\prime }=\varphi _l+\lambda (\varphi _l-\varphi _n-g\varphi _r)`$. Here $`\lambda `$ sets the time scales and can be taken small (fixed at $`0.1`$). This evolution equation, combined with the local phase conservation, means that an incoherent hole propagating into a perfect laminar state will leave a disordered state in its wake. When $`\varphi _l>\varphi _d`$, a defect occurs and two new holes are generated: $`\varphi _r^{\prime }=\varphi _{ad}`$, and $`\varphi _l^{\prime }=\varphi _d-1-\varphi _{ad}`$. The factor $`-1`$ reflects the change in winding number at a defect.

Both sites inactive: Away from the holes/defects, the relevant dynamics is phase diffusion. This is implemented via: $`\varphi _r^{\prime }=D\varphi _l+(1-D)\varphi _r`$. The value of $`D`$ is fixed at $`0.05`$ and is not very important.

Both sites active: This corresponds to the collision of two oppositely moving holes. Typically this leads to the annihilation of both holes (see Fig. 1a), which we implement here via phase conservation: $`\varphi _r^{\prime }=\varphi _l^{\prime }=(\varphi _l+\varphi _r)/2`$.

The coupling of the holes to their background, $`g`$, should be taken negative (although its precise value is unimportant). For $`g=0`$ the lifetime $`\tau `$ becomes a constant, independent of the $`\varphi `$ of the state into which the holes propagate, and moreover, the dynamical states are regular Sierpinski gaskets (Fig. 4b). Nevertheless, starting from a $`\varphi =0`$ state, the local phase conservation of the hole dynamics leads to a background state with a disordered $`\varphi `$ profile. For $`g<0`$ the coupling to this background leads to disorder as shown in Fig. 4c,d. This illustrates the crucial importance of the coupling between the holes and the self–disordered background. The essential parameters determining the qualitative nature of the overall state are $`\varphi _n`$, $`\varphi _d`$ and $`\varphi _{ad}`$. These parameters determine the amount of phase winding in the core of the $`q_{ex}=0`$ coherent holes ($`\varphi _n`$) and in the new holes generated by defects ($`\varphi _{ad},\varphi _d-1-\varphi _{ad}`$). When varying the CGLE coefficients $`c_1,c_3`$, these parameters change too; for example, $`\varphi _n`$ typically decreases when $`c_1`$ or $`c_3`$ are increased. As a result, for large values of $`c_1`$ and $`c_3`$, $`|\varphi _l^{\prime }|`$ and $`\varphi _r^{\prime }`$ are typically larger than $`\varphi _n`$ so that most “daughter holes” will grow out to form defects and hole-defect chaos spreads (Fig. 4c,d). For sufficiently small values of $`c_1`$ and $`c_3`$, on the other hand, $`\varphi _n`$ is large and both daughter holes will decay. For intermediate values of $`c_1`$ and $`c_3`$ it may occur that $`|\varphi _l^{\prime }|`$ is significantly larger than $`\varphi _r^{\prime }`$, leading to zigzag states (Fig. 4d).

In conclusion, we have studied in detail the dynamics of local structures in the 1D CGLE. We have obtained a quantitative understanding of the edge holes, unraveled the interplay between defects and holes, and put forward a simple model for some of the spatiotemporal chaotic states occurring in the CGLE. M.v.H. acknowledges support from the EU under contract ERBFMBICT 972554.
M.H. acknowledges support from the Niels Bohr Institute, the NSF through the Division of Materials Research, and NSERC of Canada.
# A Comparison of Ultraviolet, Optical, and X-Ray Imagery of Selected Fields in the Cygnus Loop

## 1 Introduction

Because of its large angular size and wide range of shock conditions, the Cygnus Loop is one of the best laboratories for studying the environment and physics of middle-aged supernova remnants (SNR). It covers a huge expanse in the sky (2.8$`\times `$3.5<sup>o</sup>) corresponding to 21.5$`\times `$27 pc, at a newly determined distance of 440 pc (Blair et al. (1999)). The currently accepted view for the Cygnus Loop is that it represents an explosion in a cavity produced by a fairly massive precursor star (cf. Levenson et al. (1998)). The SN shock has been traveling relatively unimpeded for roughly ten parsecs and has only recently begun reaching the denser cavity walls. The size of the cavity implicates a precursor star of type early B. The interaction of the shock with the complex edges of the cavity wall is responsible for the complicated mixture of optical and X-ray emission seen in superposition, and a dazzling variety of optical filament morphologies. Portions of the SN blast wave propagating through the fairly rarefied atomic shell ($`<`$1 cm<sup>-3</sup>) show faint filaments with hydrogen Balmer-line-dominated optical spectra. These filaments represent the position of the primary blast wave and are often termed nonradiative shocks (because radiative losses are unimportant to the dynamics of the shock itself). Ambient gas is swept up and progressively ionized, emitting He II, C IV, N V, and O VI lines in the FUV (Figure 1, bottom spectrum) (Hester, Raymond, & Blair (1994), Raymond et al. (1983)). Balmer-dominated emission arises from the fraction (∼0.3) of neutral hydrogen swept up by the shock that stands some chance of being excited and recombining before it is ionized in the post-shock flow (Chevalier & Raymond (1978); Chevalier, Kirshner, & Raymond (1980)). The Balmer emission is accompanied by hydrogen two-photon events which produce a broad continuum above 1216Å peaking at ∼1420Å (Nussbaumer & Schmutz (1984)). For recombination and for high temperature shocks, the ratio of two-photon emission to Balmer is nearly constant (∼8:1). In slow shocks (∼40 $`\mathrm{km}\mathrm{s}^{-1}`$) in neutral gas, the ratio can be enhanced considerably (Dopita, Binette, & Schwartz (1982)). Balmer-dominated filaments are very smooth and WFPC2 observations by Blair et al. (1999) show that they are exceedingly thin as well—less than one WFC pixel across when seen edge-on, or $`<6\times 10^{14}`$ cm at our assumed distance, in keeping with theoretical predictions (cf. Raymond et al. (1983)). Postshock temperatures reach millions of degrees and the hot material emits copious soft X-rays. The density is low, however, and cooling is very inefficient. With time, as the shock continues to sweep up material, these filaments will be able to start cooling more effectively and will evolve to become radiative filaments. The bright optical filaments in the Cygnus Loop represent radiative shocks in much denser material, such as might be expected in the denser portions of the cavity wall. These shocks are said to be radiative (that is, energy losses from radiation are significant); they have more highly developed cooling and/or recombination zones. The shocked material emits in the lines of a broad range of hot, intermediate, and low temperature ions, depending on the effective ‘age’ of the shock at a given location and the local physical conditions.
For instance, a relatively recent encounter between the shock and a density enhancement (or similarly, a shock that has swept up a fairly low total column of material) may show very strong \[O III\] $`\lambda 5007`$ compared with H$`\alpha `$. This would indicate that the coolest part of the flow, the recombination zone where the Balmer lines become strong, has not yet formed. Such shocks are said to be ‘incomplete’ as the shocked material remains hot and does not yet emit in the lower ionization lines. In contrast, radiative filaments with the full range of ionization (including the low ionization lines) are well approximated by full, steady-flow shock model calculations, such as those of Raymond (1979), Dopita, Binette, & Tuohy (1984), and Hartigan, Raymond, & Hartmann (1987; hereafter HRH). Morphologically, radiative complete filaments lack the smooth grace of nonradiative filaments or even radiative incomplete filaments in some cases (cf. Fesen, Blair, & Kirshner (1982)). The more irregular appearance of these filaments is due partly to inhomogeneities in the shocked clouds themselves, partly to turbulence and/or thermal instabilities that set in during cooling (cf. Innes (1992) and references therein), and partly to several clouds appearing along single lines of sight. Often the emission at a given filament position cannot be characterized by a single shock velocity. Much of the above understanding of shock types and evolutionary stages has been predicated on UV/optical studies of the Cygnus Loop itself. The Cygnus Loop is a veritable laboratory for such studies because of its relative proximity, large angular extent and low foreground extinction (E\[B $`-`$ V\] = 0.08; Fesen, Blair, & Kirshner (1982)), and thus its accessibility across the electromagnetic spectrum. However, because of the range of shock interactions and shock types, coupled with the significant complication of projection effects near the limb of the SNR, great care must be taken in order to obtain a full understanding of what is happening at any given position in the nebular structure. Although FUV spectra are available at a number of individual filament locations from years of observations with IUE and the shuttle-borne Hopkins Ultraviolet Telescope (HUT), the perspective obtainable from FUV imaging has been largely lacking. The Ultraviolet Imaging Telescope (UIT), flown as part of the Astro-1 Space Shuttle mission in 1990, was used to observe a field in the Cygnus Loop through both mid-UV and far-UV (FUV) filters (Cornett et al. (1992)). In this paper, we report on additional FUV observations with UIT obtained during the Astro-2 shuttle mission in 1995. In addition to the field imaged during Astro-1, UIT observed four different regions around the periphery of the Cygnus Loop with a resolution comparable to existing optical and X-ray observations. These fields sample the full range of physical and shock conditions and evolutionary stages in the SNR. We combine these data with existing ground-based optical images and ROSAT HRI X-ray data to obtain new insights into this prototypical SNR and its interaction with its surroundings. In §2 we present the observations obtained with the UIT and review the comparison data sets. In §3 we discuss the spectral content of the UIT filter used in the observations. In §4 we discuss examples of the various kinds of shocks as seen in the UIT fields, and summarize our conclusions in §5.
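Before turning to the observations, the physical scales quoted in this introduction can be checked with a short unit-conversion sketch (Python); the ≈0.1″ WFPC2/WFC pixel scale used in the filament-thickness estimate is our assumption:

```python
import math

PC_IN_CM = 3.086e18          # centimeters per parsec
D_PC = 440.0                 # adopted distance to the Cygnus Loop (pc)

def extent_pc(angle_deg):
    """Physical size (pc) subtended by angle_deg at the adopted distance."""
    return D_PC * math.radians(angle_deg)

# 2.8 x 3.5 degrees on the sky -> physical extent of the remnant
print(f"{extent_pc(2.8):.1f} x {extent_pc(3.5):.1f} pc")      # ~21.5 x 26.9 pc

# An edge-on filament unresolved by one WFC pixel (~0.1 arcsec, assumed):
pixel_cm = math.radians(0.1 / 3600.0) * D_PC * PC_IN_CM
print(f"one pixel ~ {pixel_cm:.1e} cm")   # ~6.6e14 cm; sub-pixel filaments
                                          # are thinner, cf. the <6e14 cm limit
```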
## 2 UIT Observations and Comparison Data

UIT has flown twice on the Space Shuttle as part of the Astro-1 and Astro-2 programs (1990 December 2-10 and 1995 March 2-18). Together with the Hopkins Ultraviolet Telescope (HUT) and the Wisconsin Ultraviolet Photo-Polarimeter Experiment, UIT explored selected UV targets. An f/9 Ritchey-Chretien telescope with a 38 cm aperture and image intensifier systems produced images of circular 40′ fields of view with ∼3″ resolution at field center (depending on pointing stability). Images were recorded on 70mm Eastman Kodak IIa-O film which was developed and digitized at NASA/GSFC and processed into uniform data products. Technical details on the hardware and data processing can be found in Stecher et al. (1992) and Stecher et al. (1997).

Table 1: UIT B5 Filter Observations in the Cygnus Loop

| Position | RA(J2000) | Dec(J2000) | exposure (sec) | Figure |
| --- | --- | --- | --- | --- |
| W cloud | 20:45:38 | +31:06:33 | 1010 | 3 |
| NE nonrad | 20:54:39 | +32:17:29 | 2041 | 4 |
| NE cloud | 20:56:16 | +31:44:34 | 500 | 5 |
| XA region<sup>a</sup> | 20:57:35 | +31:07:28 | 1280 | 6 |
| XA region | 20:57:04 | +31:07:45 | 1151 | 6 |
| XA region | 20:57:22 | +31:04:02 | 1516 | 6 |
| XA region | 20:57:24 | +31:03:51 | 1274 | 6 |
| SE cloud | 20:56:05 | +30:44:01 | 2180 | 7 |

<sup>a</sup> Astro-1 image (cf. Cornett et al. (1992))

Astro/UIT images are among the few examples of FUV images of SNRs, and UIT’s B5 bandpass ($`1450`$Å to $`1800`$Å) encompasses several generally high-excitation and heretofore unmapped lines that are often present in SNR shocks (Figure 1). UIT’s two Astro flights have produced eight FUV images of five different Cygnus Loop fields. Table 1 lists the observation parameters and field locations, which are indicated in Figure 2. We will refer to these fields by the names listed in Table 1. Since all four exposures of the XA region (named by Hester & Cox (1986)) are reasonably deep, we constructed a mosaic of the field using the IRAF IMCOMBINE task, resulting in significantly improved signal-to-noise in the overlapped region of the combined image. (IRAF is distributed by the National Optical Astronomy Observatories, which is operated by the Association of Universities for Research in Astronomy, Inc. (AURA) under cooperative agreement with the National Science Foundation.) In panel c of Figures 3 through 7, we show the five reduced UIT images as observed in the B5 filter bandpass.

UIT images with long exposure times suffer from an instrumental malady dubbed “measles” by the UIT team (Stecher et al. (1997)). Measles manifest themselves as fixed-pattern noise spikes in images with a large sunlight flux, such as long daylight exposures or images of red, very bright sources (e.g. planets or the Moon). This effect is probably produced by visible light passing through pinholes in either the output phosphor of the first stage or the bialkali photocathode of the second stage of the UIT FUV image tube. The Cygnus Loop was a daytime object for both the Astro-1 and Astro-2 flights, but the phenomenon is visible only in some of the longer exposures. Most dramatically, measles are seen in the northeast cloud nonradiative image (Figure 4) as a darkening in the northwest corner; the individual “measles” are spread into a background by the binning used to produce these images. Various approaches to removing the appearance of measles were attempted but none of them have yielded satisfactory results.
In practice, the measles, here arising from daylight sky contamination, affect our analysis only by adding to the background level, so the original images are presented here, “measles” and all. For comparison with our FUV images, we show narrow-band optical images in \[O III\] $`\lambda `$5007 and H$`\alpha `$+\[N II\] (which for simplicity we refer to as H$`\alpha `$) obtained with the Prime Focus Corrector on the 0.8 m telescope at McDonald Observatory (cf. Levenson et al. (1998)). These images, shown in panels a and b of Figures 3 – 7, are aligned and placed on a common scale of 5″ per pixel, which is similar to the FUV resolution of 3″. The optical images have each been processed with a 3-pixel median filter to remove faint stars and stellar residuals. In addition, we show the soft X-ray (0.1–2.4 keV) emission for each field, as observed with the ROSAT High Resolution Imager (HRI) (from Levenson et al. (1997)). The resolution of the HRI imager is 6″ on axis, degrading to 30″ at the edge of each field. As with the optical data, the X-ray images are aligned on a 5″ per pixel scale. The HRI images have additionally been smoothed with a 3-pixel FWHM Gaussian and are shown in panel d of Figures 3 – 7. All images in Figures 3 – 7 are displayed on a logarithmic scale. Figure 8 shows three-color composite images using H$`\alpha `$ as red, B5 as green, and the ROSAT HRI as blue. The color levels have been adjusted for visual appearance, to best show the relative spatial relationships of the different emissions. (The color composite for the Northeast nonradiative region is not shown, since little new information is gained above Figure 4 and because of the adverse effect of the measles.) This will be discussed in more detail below.

## 3 Spectral Content of UIT Images

Figure 1 shows the UIT B5 filter profile superimposed on spectra of typical radiative and nonradiative filaments, as observed by HUT. Unlike typical SNR narrow band images in the optical, the B5 filter is relatively broad and does not isolate a single spectral line, but rather encompasses several strong, moderately high ionization lines that are variable from filament to filament. Cornett et al. (1992) point out that C IV should dominate emission in this bandpass since it is a strong line centered near the filter’s peak throughput, and since shock models predict this result for a range of important velocities (cf. Figure 10 and accompanying discussion). Here we look at this more closely over a larger range of shock velocities, and in particular also discuss the potential complicating effects of hydrogen two-photon recombination continuum emission, shock completeness, and resonance line scattering. Empirical comparisons of IUE and HUT emission line observations can be used to quantify at what level the C IV emission is expected to dominate the line emission detected through the B5 filter. For instance, in the highly radiative XA region (see Figure 7) we have compared a large number of FUV spectra both on and adjacent to bright optical filaments against the throughput curve of B5 (Danforth, Blair, & Raymond 2000; henceforth DBR). This comparison shows that on average the various lines contribute as follows: C IV $`\lambda `$1550, 42%; O III\] $`\lambda `$1665, 27%; He II $`\lambda `$1640, 17%; N IV $`\lambda `$1486, 8%; and 6% from fainter emission lines. Using the HUT observation of Long et al. (1992), we estimate for nonradiative shocks the B5 contributions are more like C IV (60%), He II (28%), and all other species 12%.
## 3 Spectral Content of UIT Images

Figure 1 shows the UIT B5 filter profile superimposed on spectra of typical radiative and nonradiative filaments, as observed by HUT. Unlike typical SNR narrow band images in the optical, the B5 filter is relatively broad and does not isolate a single spectral line, but rather encompasses several strong, moderately high ionization lines that are variable from filament to filament. Cornett et al. (1992) point out that C IV should dominate emission in this bandpass since it is a strong line centered near the filter’s peak throughput, and since shock models predict this result for a range of important velocities (cf. Figure 10 and accompanying discussion). Here we look at this more closely over a larger range of shock velocities, and in particular also discuss the potential complicating effects of hydrogen two-photon recombination continuum emission, shock completeness, and resonance line scattering.

Empirical comparisons of IUE and HUT emission line observations can be used to quantify at what level the C IV emission is expected to dominate the line emission detected through the B5 filter. For instance, in the highly radiative XA region (see Figure 6) we have compared a large number of FUV spectra both on and adjacent to bright optical filaments against the throughput curve of B5 (Danforth, Blair, & Raymond 2000; henceforth DBR). This comparison shows that on average the various lines contribute as follows: C IV $`\lambda `$1550, 42%; O III\] $`\lambda `$1665, 27%; He II $`\lambda `$1640, 17%; N IV $`\lambda `$1486, 8%; and 6% from fainter emission lines. Using the HUT observation of Long et al. (1992), we estimate that for nonradiative shocks the B5 contributions are more like C IV (60%), He II (28%), and all other species 12%.

These percentages are only approximate, of course, and will vary with shock velocity, geometry and a host of other conditions, but they serve to highlight the fact that, while C IV is the strongest contributor to the line emission, it is not the only contributor. In addition, while it is not obvious at the scale of Figure 1, a low level continuum is often seen in IUE and HUT spectra of Cygnus Loop filaments, especially where optical H$`\alpha `$ emission is present and strong. This continuum arises from the hydrogen two-photon process (cf. Osterbrock (1989)). Benvenuti, Dopita, & D’Odorico (1980) note that SNR shocks cause two-photon emission from hydrogen via both collisional excitation and recombination into the $`2^2\mathrm{S}_{1/2}`$ state. The two-photon spectrum arises from a probability distribution of photons that is symmetric about 1/2 the energy of Ly$`\alpha `$ (corresponding to 2431 Å), resulting in a shallow spectral peak near 1420 Å and extending from 1216 Å towards longer wavelengths, throughout the UV and optical region. The expected (integrated) strength of this component is about 8$`\times `$ the H$`\alpha `$ flux, but it is spread over thousands of Angstroms. However, the wide bandpass of the B5 filter detects $`\sim `$15% of the total two-photon flux available, enough to compete with line emission in the bandpass. Further complicating the question, two-photon emission can also be highly variable from filament to filament.

By using signatures from the images and spectra at other wavelengths, we can interpret, at least qualitatively, what is being seen in the UIT images. For instance, Figure 4 shows the NE rim of the SNR. The faint, smooth H$`\alpha `$ filament running along the edge of the X-ray emission is clearly a nonradiative filament. The faint emission seen in the B5 image traces these faint Balmer filaments well and, at this position, does not correlate particularly well with the clumpy \[O III\] emission seen near the middle of the field. This implies a relatively strong contribution from two-photon continuum, although as shown in the bottom spectrum of Figure 1, C IV and He II are also present in the filaments at some level.

As discussed earlier, in radiative filaments, higher ionization lines such as O VI $`\lambda `$1035, N V $`\lambda `$1240, C IV $`\lambda `$1550, and \[O III\] $`\lambda `$5007 become strong first, followed by lower ionization lines like \[S II\] $`\lambda `$6725, \[O I\] $`\lambda `$6300, and the hydrogen Balmer lines. Hence, in filaments that show high optical \[O III\] to H$`\alpha `$ ratios, and are thus incomplete shocks, the B5 content primarily arises from C IV and other line emission. In older, more complete shocks where the optical \[O III\] to H$`\alpha `$ ratios are close to those expected from steady flow shock models, two-photon emission again should compete with the line emission and the B5 flux should arise from both sources. It is difficult to assess these competing effects from Figures 3–7 since the relative intensities of the two optical images are not always obvious, but much of the variation in coloration in Figure 8 for bright radiative filaments is due to the variation in relative amounts of line emission and two-photon continuum contributions to the B5 image.

In Figure 9, we show the XA field as seen with UIT (panel a) and ratio maps of the UIT image against the aligned optical H$`\alpha `$ and \[O III\] images.
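Since these ratio maps are central to what follows, here is a minimal sketch of how such a map can be built from the aligned images (Python; the masking threshold is our own assumption, as the construction details are not specified here).

```python
import numpy as np

def ratio_map(numerator, denominator, floor_frac=0.02):
    """Pixel-by-pixel ratio of two aligned, background-subtracted images
    (e.g. B5 over H-alpha), masking pixels where the denominator is too
    faint for the ratio to be meaningful."""
    floor = floor_frac * np.nanmax(denominator)
    out = np.full(numerator.shape, np.nan)
    ok = denominator > floor
    out[ok] = numerator[ok] / denominator[ok]
    return out

# b5_over_ha   = ratio_map(b5_img, ha_img)    # cf. panel b of Figure 9
# b5_over_oiii = ratio_map(b5_img, oiii_img)  # cf. panel c of Figure 9
```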
Since the ionization energies of C IV (64.5 eV) and O III (54.9 eV) are similar (and to the extent that the B5 image contains a substantial component of C IV emission), we would expect a ratio of B5 to H$`\alpha `$ to show evidence for the transition from incomplete to complete shock filaments. Such a ratio map is shown in panel b of Figure 9, and a systematic pattern is indeed seen. The white filaments, indicative of a relatively low value of the ratio (and hence relatively strong H$`\alpha `$ filaments), tend to lie systematically to the right. These filaments tend to be closer to the center of the SNR, and hence should have had more time (on average) to cool and recombine. Of course, there is significant evidence for projection effects in this complicated field as well. Indeed, one interpretation of Figure 9b is that we are separating some of these projection effects, and are seeing two separate ‘systems’ of filaments that are at differing stages of completeness.

Another way of assessing the expected contributions of line emission and two-photon emission to the B5 flux is by comparing to shock model calculations. We use the equilibrium preionization “E” series shock models of Hartigan, Raymond, & Hartmann (1987, HRH) to investigate variations in spectral contributions to the UIT images as a function of shock velocity. Figure 10 shows how various spectral components are predicted to change in relative intensity as shock velocity increases for this set of planar, complete, steady flow shock models. As expected, the key contributors to the B5 bandpass are indeed C IV and two-photon continuum, although between $`\sim `$100–200 $`\mathrm{km}\mathrm{s}^{-1}`$ these models indicate C IV should dominate. This is quite at odds with ‘ground truth’, as supplied by careful comparisons at the specific locations of IUE and HUT spectra within the UIT fields of view. We note that the two-photon flux per Å in HUT and IUE spectra is low and thus difficult to measure accurately since background levels are poorly known. Even so, it is quite clear from comparisons such as those of Benvenuti, Dopita, & D’Odorico (1980) and Raymond et al. (1988) that nowhere do we see C IV dominate at the level implied by Figure 10. (Indeed, such studies indicate that two-photon should dominate! As will be discussed more thoroughly in §4, Benvenuti, Dopita, & D’Odorico (1980) and others give two-photon fluxes which overwhelm C IV in the B5 band by a factor of 5-10.) Interestingly, consideration of incompleteness effects only serves to exacerbate this discrepancy, since the expected two-photon emission should be weaker or absent. Something else is going on.

That ‘something else’ is apparently resonance line scattering. It has long been suspected that the strong UV resonance lines, like N V $`\lambda `$1240, C II $`\lambda `$1335, and C IV $`\lambda `$1550, are affected by self-absorption along the line of sight, either by local gas within the SNR itself or by the intervening interstellar medium. We can expect significant column depth from the cavity wall of the remnant itself. Since filaments selected for optical/UV observation have tended to be bright, and since many such filaments are edge-on sheets of gas with correspondingly high line of sight column densities (Hester (1987)), the spectral observations are likely affected in a systematic way. While this has been known for some years (Raymond et al. (1981)), the UIT data presented here indicate just how widespread resonance line scattering is in the Cygnus Loop and how significantly the C IV intensity may be reduced by this effect.
Figure 9c shows a ratio map of the B5 image to the \[O III\] optical image of the XA region (cf. Cornett et al. (1992)). Since \[O III\] is a forbidden transition, its optically thin emission is not affected by resonance scattering. The ionization potentials for C IV and \[O III\] are similar, so this ratio should provide some information about resonance scattering, if a significant fraction of the B5 image can be attributed to C IV. Hence, this ratio image shows where resonance line scattering is most important, and provides information on the 3-dimensional structure of regions within the SNR. The B5 image gives the appearance of smaller dynamic range and lower spatial resolution than \[O III\] because we see optically thick radiation from only a short distance into the filaments. The highest saturation (lowest ratios, or light areas in Figure 9c) occurs in the cores of filaments and dense clouds, such as the three regions indicated in Figure 9a. The “spur” filament was studied in detail by Raymond et al. (1988) and is probably an edge-on sheet of gas. The region marked ‘B’ is the turbulent, incomplete shocked cloud observed with HUT during Astro-1 (Blair et al. (1991)). The XA region is also a shocked cloud or finger of dense gas that is likely elongated in our line of sight (cf. Hester & Cox (1986); DBR). What is surprising, however, is the extent to which the light regions in Figure 9c extend beyond the cloud cores into regions of more diffuse emission. This indicates that significant resonance scattering is very widespread in the Cygnus Loop. The diminished C IV flux also boosts the relative importance of two-photon emission in the B5 bandpass and explains the discrepancy between numerous spectral observations and the shock model predictions shown in Figure 10.

UIT’s B5 images are particularly useful in that they sample two important shock physics regimes: the brightest radiative shocks arising in dense clouds and the primary blast wave at the edge of the shell. However, it is evidently difficult to predict the spectral content of B5 images alone without detailed knowledge of the physics of the emitting regions. Nonetheless, B5 images are useful in combination with \[O III\] $`\lambda `$5007 and H$`\alpha `$ images as empirical tools. The image combinations allow us to determine whether C IV or two-photon emission dominates, in two clear-cut cases. 1) In regions where B5 images closely resemble \[O III\] images, the B5 filter is detecting radiative shocks with velocities in the range 100-200 $`\mathrm{km}\mathrm{s}^{-1}`$ and therefore primarily C IV. 2) In regions where B5 images closely resemble H$`\alpha `$, the B5 filter is detecting largely two-photon emission from recombination of hydrogen in radiative shocks or from collisional excitation of hydrogen in nonradiative shocks.
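A crude quantitative stand-in for this visual classification is to correlate the B5 image with each optical tracer over a region of interest; the sketch below (Python) does this with a simple Pearson coefficient. This is our own illustration, not a procedure used in the analysis itself.

```python
import numpy as np

def pearson(u, v):
    """Pearson correlation over the pixels that are finite in both maps."""
    u, v = np.ravel(u), np.ravel(v)
    m = np.isfinite(u) & np.isfinite(v)
    return np.corrcoef(u[m], v[m])[0, 1]

def classify_b5(b5, oiii, ha):
    """Case 1 (B5 resembles [O III]: mainly C IV) versus case 2
    (B5 resembles H-alpha: mainly two-photon), decided by which
    tracer the B5 map correlates with more strongly."""
    r_oiii, r_ha = pearson(b5, oiii), pearson(b5, ha)
    return "C IV-dominated" if r_oiii > r_ha else "two-photon-dominated"
```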
## 4 Discussion

Each of the fields in our study portrays a range of physical conditions and geometries, and hence filament types, seen in projection in many cases. By comparing the UV, X-ray and optical emissions, we can gain new insights into these complexities. In this section, we discuss the spatial relationships between the hot, intermediate and cooler components seen in these images.

### 4.1 The Western Cloud

In the Western Cloud field (Figure 3) the B5 image of the bright north-south filaments resembles the \[O III\] $`\lambda `$5007 image very closely. The filaments are clearly portions of a radiative shock viewed edge-on to our line of sight. The Western Cloud has been studied spectroscopically at optical wavelengths by Miller (1974) and in the FUV by Raymond et al. (1980b). This region shows a case where a cloud is evidently being overrun by a shock, and the cloud is much larger than the scale of the shock. The cloud is elongated in the plane of the sky, with dimensions of perhaps 1$`\times `$10 pc (Levenson et al. (1998)), and represents an interaction roughly 1000 years old (Levenson et al. (1996)). The main north-south radiative filament is bright at all wavelengths, with good detailed correlation between B5 and \[O III\]. H$`\alpha `$ is seen to extend farther to the east, toward the center or ‘behind’ the shock, as is expected in a complete shock stratification. Bright X-rays (Figure 3d) are seen to lag behind the radiative filaments by 1-2′ (0.15 to 0.3 pc). This is indicative of a reverse shock being driven back into the interior material from the dense cloud. This doubly-shocked material shows enhanced brightness of about a factor of 2. From this, Levenson et al. (1996) derive a cloud/ambient density contrast of about 10. Attempts to fit shock models to optical observations of the bright filament have been frustrated by the large \[O III\]/H$`\beta `$ ratio. A shock velocity of 130 $`\mathrm{km}\mathrm{s}^{-1}`$ was found by Raymond et al. (1980b) using IUE line strengths and assuming a slight departure from steady flow and depleted abundances in both C and Si. Raymond et al. (1980b) also note that much of the hydrogen recombination zone predicted by steady flow models is absent, implying that the interaction is fairly young. As seen in the H$`\alpha `$ image, a Balmer-dominated filament projects from the south of the bright radiative filament toward the northwest. Raymond et al. (1980a) find that the optical spectrum of the filament contains nothing but hydrogen Balmer lines. High-resolution observations of the H$`\alpha `$ line (Treffers (1981)) show a broad component and a narrow component, corresponding to the pre- and post-shock conditions in the filament, with a resulting estimated shock velocity of 130-170 $`\mathrm{km}\mathrm{s}^{-1}`$. The filament may be a foreground or background piece of the blast wave not related to the radiative portion of the shock, or a related piece of blast wave that is travelling through the atomic (rather than molecular) component. It is visible in both H$`\alpha `$ (Figure 3a) and B5 (Figure 3c), though generally not in other bands; thus the B5 flux for this filament arises primarily from the two-photon process. There is a small segment of the filament visible in \[O III\] where the shock may be becoming radiative, visible in B5 as a brightening near the southern end of the filament. The X-ray luminosity behind this nonradiative filament is much fainter than that observed to the east of the main radiative filament, since there is no reverse shock associated with the nonradiative filament to boost the brightness (Hester, Raymond, & Blair (1994)). The absence of X-rays to the west of this filament confirms that it represents the actual blast front. As expected, the peak X-ray flux lags behind the H$`\alpha `$ and B5 flux by roughly one arcminute (0.1 pc). This lag represents the “heating time” of gas behind the shock.
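For readers who want to reproduce such numbers, the angular-to-physical conversion and the implied timescale are simple; the sketch below (Python) does both, with the remnant distance left as a parameter. The 440 pc used in the example is the revised estimate of Blair et al. (1999) and is our assumption here, purely for illustration.

```python
import numpy as np

ARCMIN = np.pi / (180.0 * 60.0)   # radians per arcminute
PC_KM = 3.086e13                  # kilometres per parsec
YR_S = 3.156e7                    # seconds per year

def lag_pc(theta_arcmin, d_pc):
    """Projected physical size of an angular offset at distance d_pc."""
    return theta_arcmin * ARCMIN * d_pc

def crossing_time_yr(lag, v_kms):
    """Time for material moving at v_kms to traverse `lag` parsecs."""
    return lag * PC_KM / v_kms / YR_S

# 1 arcmin at an assumed 440 pc, swept at ~170 km/s:
# lag_pc(1.0, 440.0) ~ 0.13 pc; crossing_time_yr(0.13, 170.0) ~ 7e2 yr
```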
A CO cloud is seen just to the south of the Western Cloud field (Scoville et al. (1977)). The presence of CO clearly indicates material with molecular hydrogen at densities of 300-1000 cm$`^{-3}`$. The nonradiative filament runs closely along the $`T_{\mathrm{antenna}}=5`$ K contour of the CO cloud, indicating this shock is moving through the atomic component at this stage, but showing no sign of interaction with the molecular cloud.

### 4.2 Northeast Nonradiative Region

The canonical example of nonradiative filaments in any context lies on the north and northeast rim of the Cygnus Loop. There, smooth Balmer filaments extend counterclockwise from the northern limb (Figure 4), and can be seen prominently in H$`\alpha `$ in Figure 5a. Small portions of this shock system have been extensively studied by Raymond et al. (1983), Blair et al. (1991), Long et al. (1992), Hester, Raymond & Blair (1994), and most recently by Blair et al. (1999). The filaments are clearly visible in H$`\alpha `$ (Figure 4a) as well as B5 (Figure 4c), but invisible along most of their length in \[O III\] (Figure 4b) except for small segments. These segments represent portions of the shock front where a slightly higher density has allowed the shock to become partially radiative. The shocked, T $`\sim 10^6`$ K gas emits in an edge-brightened band of X-rays (Figure 4d). The brightness variations in X-rays confirm that the nonradiative filaments are simply wrinkles in the blast wave presenting larger column densities to our line of sight. Spectroscopic observations of selected locations on the filaments indicate that the B5 filter observes nonradiative filaments as a mixture of C IV and two-photon emission. Long et al. (1992) find an intrinsic ratio of two-photon emission to C IV of 4.3, which gives an observed ratio in B5 of 0.65. Raymond et al. (1983) find fluxes in the same filament which give an observed ratio of 1.6; in a nearby filament, Hester, Raymond & Blair (1994) find a ratio near 2.0. These filaments all have velocities of around 170 $`\mathrm{km}\mathrm{s}^{-1}`$. It is likely that much of the ISM carbon is locked up in grains in the preshock medium, thus boosting the ratio. The system of thin filaments in the NE nonradiative field extends to the south and is visible in H$`\alpha `$ ahead of the radiative Northeast Cloud (Figure 5) discussed below.

### 4.3 The Northeast Cloud

The Northeast Cloud (Figure 5) radiative filaments, south and east of the field discussed above, make up one of the brightest systems in the Cygnus Loop. The interaction of the SN blast wave and the denser cavity wall is most evident at this location. A complex of radiative filaments can be seen, apparently jumbled together along our line of sight, displaying the signs of a complete shock undergoing radiative cooling. The X-ray edge marking the SN blast wave is well separated from the optical and UV filaments, implying a strongly decelerated shock and cooling that has continued for some time. Stratification of different ionic species is evident, with \[O III\] in sharp filaments to the east, and more diffuse H$`\alpha `$ behind (Figure 8b). The Northeast Cloud extends into the southern portions of the NE nonradiative field (Figure 4) as well. However, the exposure time for this FUV image is a factor of four shorter than that in Figure 4c, so the nonradiative filaments are not detected above the background. There are a few UV-bright sections which correspond closely with bright \[O III\] knots.
However, other equally bright \[O III\] knots in the region do not have corresponding FUV knots. This may be evidence for a range of shock velocities, or it may be portions of the shocks that are in transition from nonradiative to radiative conditions. Using IUE spectra, Benvenuti, Dopita, & D’Odorico (1980) measure the two-photon continuum for one of the brightest radiative positions within the NE cloud, with a resulting observed two-photon/C IV ratio of 5.0. Observations of other radiative regions both in the Cygnus Loop and in other SNRs similar in morphology and spectrum give ratios between 1.7 and 10 (Raymond et al. (1988); various unpublished data). Therefore, while conditions vary widely within these shocked regions, spectroscopy indicates that resonance scattering of C IV causes us to see 2-6 times more flux from two-photon emission than from other ions in the field. Yet the B5 morphology of most of the field resembles \[O III\] far more than H$`\alpha `$, contrary to what we would expect if two-photon emission were dominant. The apparent conflict is likely caused by the fact that most lines of sight through this region undoubtedly encounter material with a broad range of physical conditions. Furthermore, the UIT NE cloud exposure is the shortest of our set. Only regions bright in both H$`\alpha `$ and \[O III\] show up in B5.

### 4.4 The XA Field

The XA field (Figure 6) is a complicated region of predominantly radiative filaments, noteworthy because an extremely bright and sharp X-ray edge corresponds closely to a bright knot of UV/visible emission (Hester & Cox (1986)). Indeed, this region is seen to be bright at many wavelengths, including radio (Green (1990); Leahy et al. (1997)) and infrared (Arendt, Dwek, & Leisawitz (1992)). Strong O VI $`\lambda `$1035 emission is seen (Blair et al. (1991)), as well as other high-ionization species: N V, C IV, O III\] (DBR) and \[Ne V\] (Szentgyorgyi et al. (2000)). See DBR for a more detailed analysis of this region. In general, the B5 emission corresponds closely to optical \[O III\]. However, while optical images show a high contrast between the brightest ‘cloud’ regions and others in ‘empty’ space, B5 contrast is lower (Cornett et al. (1992)). This suggests contributions from a high column depth of diffuse C IV and/or two-photon emission. We are either looking at diffuse material through the edge of a cavity wall or are seeing emission from face-on sheets of gas. DBR show evidence that the bright ‘cloud’ in the center is not isolated and may be a density enhancement in the cavity wall or a finger of denser material projecting in from the east. The entire blast wave in the region appears indented from the otherwise circular extent of the SNR (Levenson et al. (1997)), implying that the disturbance is produced by a cloud extended several parsecs in our line of sight. The visible structure is likely the tip of a much larger cloud. Levenson et al. (1998) suggest a density enhancement in the cavity wall, resulting in rapid shock deceleration and accounting for the bright emission. IUE and HUT observations show evidence for a 150 $`\mathrm{km}\mathrm{s}^{-1}`$ cloud shock in the dense core of XA itself (the west-pointing V shape in the center of the field) and a faster, incomplete shock in the more diffuse regions to the north and south (DBR). Two parallel, largely east-west filaments are seen flanking the central ‘cloud’. The X-ray emission is seen to drop off dramatically south of the two long radial filament systems. Blair et al. (1991) report HUT observations of a radiative but incomplete cloud shock directly to the north of XA, marked ‘B’ in Figure 9a.
This region features almost complete cooling, with the exception of H$`\alpha `$ and cooler ions. Raymond et al. (1988) studied the Spur filament and found a completeness gradient along its length. This filament is well-defined in B5 as well as in the optical bands. The XA region is the one region in the Cygnus Loop where preionization is visible ahead of the shock front (Levenson et al. (1998)). This preionization is caused by X-ray flux from the hot, postshock gas ionizing neutral material across the shock front. The emission measure is high enough in this photoionized preshock gas that it is clearly visible as a diffuse patch of emission a few arcminutes to the east of the main XA knot in the center of the field in both H$`\alpha `$ and B5. The B5 flux presumably arises almost entirely from two-photon emission in this case, since no \[O III\] is seen (and hence no strong UV line emission is expected). One unique ability of the B5 filter becomes apparent in the XA region: that of detecting nonradiative shocks in ionized gas. In the X-ray (Figure 6d) we see a bulge of emission to the north and east of the brightest knot (Hester & Cox’s XA region proper). This bulge does not show up in either of the optical bands, but the perimeter is visible in the FUV at the edge of the X-ray emission in Figure 6c. This region has likely been ionized by X-ray flux from the hot post-shock gas. A nonradiative shock is now propagating through it and, lacking a neutral fraction to radiate in H$`\alpha `$, is seen only in high ions such as C IV. This filament is becoming more complete in its southern extremity (the ‘B’ location in Figure 9a) and is emitting in \[O III\] as well. This filament also appears to connect to the nonradiative filament seen in H$`\alpha `$ in the Northeast cloud (Figure 5a).

### 4.5 The Southeast Cloud

The Southeast Cloud (Figure 7) presents an interesting quandary. In the optical it appears as a small patch of radiative emission with a few associated nonradiative filaments. Fesen, Kwitter, & Downes (1992) hypothesize that it represents a small, isolated cloud at a late stage of shock interaction. Indeed, the resemblance to the late-stage numerical models of Bedogni & Woodward (1990) and Stone & Norman (1992) is striking. More recent X-ray analysis (Graham et al. (1995)) suggests that the shocked portion of the southeast cloud is merely the tip of a much larger structure. Indeed, it is probably similar to the Western and Northeastern Clouds, but at an even earlier point in its evolution. Fesen, Kwitter, & Downes (1992) note that the age of the interaction is probably $`4.1\times 10^3`$ years, based on an assumed blast wave velocity. Given the revised distance estimate of Blair et al. (1999), this age becomes $`2.3\times 10^3`$ years.
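The rescaling follows because an age inferred from a fixed shock velocity scales linearly with the assumed distance. If the original estimate assumed the older canonical distance of roughly 770 pc (our assumption; it is not restated here) and the revised distance is about 440 pc, then

$$t_{\mathrm{new}}=t_{\mathrm{old}}\,\frac{d_{\mathrm{new}}}{d_{\mathrm{old}}}\approx 4.1\times 10^3\,\mathrm{yr}\times \frac{440\,\mathrm{pc}}{770\,\mathrm{pc}}\approx 2.3\times 10^3\,\mathrm{yr}.$$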
In H$`\alpha `$ (Figure 7a) we see a set of nonradiative filaments to the southeast of the cloud. These filaments are visible very faintly in B5 (Figure 7c) as well. Given the complete lack of X-ray emission (Figure 7d) to the east, these filaments are the primary blast wave. The fact that these filaments are indented from the circular rim of the SNR implies the blast wave is diffracting around some object much larger than the visible emission and extended along our line of sight (Graham et al. (1995)). Fesen et al. identify a filament segment seen to the west of the SE cloud (visible in both H$`\alpha `$ and our B5 image) as a reverse shock driven back into the shocked medium. The X-ray emission, however, demonstrates that this is instead due to the primary forward-moving blast wave. X-ray enhancement is seen to the west of the cloud, not the east as we would expect from a doubly shocked system. Furthermore, the optical filament is Balmer-dominated, which requires a significant neutral fraction in the pre-shock gas, which would not occur at X-ray-producing temperatures (Graham et al. (1995)). These points suggest that the filament segment seen is a nonradiative piece of the main blast wave not obviously related to the other emission in the area. The relative faintness and lack of definition compared to other nonradiative filaments suggest that it is not quite parallel to our line of sight. Meanwhile, the densest material in the shocked cloud tip has cooled enough to emit in ionic species like \[O III\] (Figure 7b) and C IV. Gas stripping resulting from instabilities in the fluid flow along the edges of the cloud is seen as ‘windblown streamers’ on the north and south, as well as diffuse emission (because of a less favorable viewing angle) to the east. The B5 image shows great detail of the cloud shock and closely resembles the \[O III\] filaments, but with an added “tail” extending to the southeast. The main body of the cloud shock as viewed in B5 is likely composed of C IV and O III\] emission, while the “tail” may be an example of a slow shock in a neutral medium and have an enhanced two-photon flux (Dopita, Binette, & Schwartz (1982)). The shock velocity in the cloud is quoted by Fesen et al. as $`<`$60 $`\mathrm{km}\mathrm{s}^{-1}`$, though this is based on the identification of the western segment as a reverse shock. Given the bright \[O III\] and B5 emission in the cloud shock, it seems more likely that the cloud shock is similar to other structures to the north, where shock velocities are thought to be more nearly 140 $`\mathrm{km}\mathrm{s}^{-1}`$. There is a general increase in signal in the northern half of the SE FUV field (Figure 7c). It is unclear whether this is primarily due to the background “measles” noted in §2 or if this represents diffuse, hot gas emitting C IV, as is seen in the halo around the central knot of XA. There is very faint emission seen in both H$`\alpha `$ and \[O III\] in the area, which could represent a region of more nearly face-on emitting gas.

## 5 Concluding Remarks

The UIT B5 band, although broader than ideal for SNR observations, provides a unique FUV spectral window. Under some conditions, the B5 bandpass provides images of radiative filaments overrun by very high-speed shocks. Under other conditions, B5 observes nonradiative filaments at the extreme front edge of SNR blast waves. Combined with other image and spectral data, the B5 band can provide unique insights into complex, difficult-to-model shock phenomena such as C IV resonance scattering and shock completeness. In nonradiative filaments, B5 flux comes from a mixture of C IV as it ionizes up and two-photon emission from preshock neutral hydrogen. In general, nonradiative filament morphology is very similar in B5 and H$`\alpha `$, implying that two-photon emission, originating in the same regions as H$`\alpha `$, is the primary contributor to the B5 images. One unique capability of B5 imaging is its ability to capture nonradiative shocks in ionized media.
We see one such example in Figure 6c, where a nonradiative shock is faintly seen in C IV and He II. Radiative filaments usually show good correlation between B5 and \[O III\] morphology, suggesting that B5 flux arises in ions with similar excitation energies, such as C IV. Existing models for simple, complete shocks indicate the same origin. However, existing FUV spectra complicate this picture, indicating that these regions should be dominated by two-photon flux, which we would expect to follow more closely the H$`\alpha `$ morphology. Observational selection restricts detailed spectral information to only the very brightest knots and filaments. Presumably, these bright regions also suffer the greatest resonance scattering in C IV $`\lambda `$1550, decreasing its observed flux; in fact, DBR found unexpectedly strong resonance scattering even away from the bright filaments and knots. Despite this, morphological similarities between B5 and \[O III\] in radiative filaments strongly suggest that, at least away from the brightest filaments and cloud cores, B5 flux is dominated by C IV.

#### Acknowledgements

The authors wish to thank John Raymond for valuable discussions and the use of unpublished HUT data. We would also like to thank an anonymous referee for several valuable suggestions, including using FUV images to trace nonradiative filaments through ionized regions. Funding for the UIT project has been through the Spacelab Office at NASA headquarters under project number 440-551.
# X-ray absorption lines in the Seyfert 1 galaxy NGC 5548 discovered with Chandra-LETGS

## 1 Introduction

Low to medium energy resolution X-ray spectra of AGN, such as those obtained by the Rosat or ASCA observatories, showed the presence of warm absorbing material (see references in Kaastra kaastra (1999)). This was deduced from broad-band fits to the continuum, showing a flux deficit at wavelengths shorter than the expected edges of ions such as O vii and O viii. The relation of this warm X-ray absorber to the medium that produces narrow UV absorption lines in C iv or N v is not clear, mainly due to a lack of sufficient constraints in the X-ray band. A major drawback of all previous X-ray studies of AGN has been the low spectral resolution, making it hard to disentangle any emission line features from the surrounding absorption edges, and prohibiting the measurement of Doppler shifts or broadening. With the Chandra spectrometers it is now possible for the first time to obtain high-resolution X-ray spectra of AGN.

## 2 Observations

The present Chandra observations were obtained on December 11/12, 1999, with an effective exposure time of 86400 s. The detector used was the High Resolution Camera (HRC-S) in combination with the Low Energy Transmission Grating (LETG). The spectral resolution of the instrument is about 0.06 Å and almost constant over the entire wavelength range (1.5–180 Å). Event selection and background subtraction were done using the same standard processing as used for the first-light observation of Capella (Brinkman et al. brinkman (2000)). The wavelength scale is currently known to be accurate to within 15 mÅ. The efficiency calibration has not yet been finished, and our efficiency estimates are based upon preflight estimates for wavelengths below 60 Å and on inflight calibration based upon data from Sirius B for longer wavelengths. We estimate that the current effective area is accurate to about 20–30 %, and that it may show large scale systematic variations within those limits, but it does not show significant small scale variations. The observed count spectrum was corrected for higher spectral order contamination by subtracting, at longer wavelengths, the properly scaled count spectrum observed at shorter wavelengths. The spectrum was then converted to flux units by dividing it by the effective area, and by correcting for the galactic absorption of $`1.65\times 10^{24}`$ m$`^{-2}`$ (Nandra et al. nandra (1993)), as well as for the cosmological redshift, for which we took the value of 0.01676 (Crenshaw et al. crenshaw (1999)). The spectrum in the 5–38 Å range is shown in Fig. 1. The continuum is rather smooth; the reality of the large-scale structures cannot be assessed completely at this moment, given our current understanding of the efficiency calibration of the instrument. Nevertheless, there is no indication for the presence of strong O vii or O viii K-shell absorption edges at 16.77 and 14.23 Å, respectively. A more detailed discussion of the spectrum, including the long-wavelength part, will be given in a forthcoming paper, when the full efficiency calibration of the instrument is available.

### 2.1 Absorption lines

The most striking feature of the spectrum is the presence of narrow absorption lines, including the Lyman $`\alpha `$ and $`\beta `$ transitions of H-like C, N, O, Ne and Mg, as well as the 2–1 resonance absorption line of He-like O and Ne.
We have searched the wavelength range of Fig. 1 systematically for absorption and emission lines, and found the lines listed in Table 1. In addition we provide data for some weaker features for which the equivalent widths (or their upper limits) help to constrain models. We give the expected wavelength $`\lambda _0`$, the measured wavelength difference $`\mathrm{\Delta }\lambda \equiv \lambda _0-\lambda _{\mathrm{obs}}/(1+z)`$ (thereby accounting for the cosmological redshift $`z`$), the equivalent line width $`W`$ (determined from a gaussian fit to the line profile), and the proposed line identification. A negative sign before an equivalent width indicates an emission line.

The presence of these absorption lines can be seen as evidence for a warm, absorbing medium in NGC 5548 along the line of sight towards the nucleus. The absorption can be very strong: the core of the C vi Ly$`\alpha `$ line, for example, absorbs some 90 % of the continuum, and this is just a lower limit, since the true line profile is smeared out by the instrument. That the optical depth in some lines is considerable is evidenced by two facts: firstly, the equivalent width ratio of the Ly$`\beta `$ to Ly$`\alpha `$ lines of C vi and O viii is much larger than the ratio of their oscillator strengths (0.079 to 0.417). Secondly, we see absorption features of sodium, despite the fact that the sodium abundance is 20 times smaller than, e.g., the magnesium abundance. All this can be explained if the line cores of the more abundant elements are strongly saturated.

### 2.2 Column densities

Using the observed equivalent width $`W`$ of the absorption lines, it is possible to derive the absorbing column density, assuming a gaussian velocity distribution (standard deviation $`\sigma _\mathrm{v}`$) of the absorbing ions and neglecting the scattered line emission contribution:

$$W=\frac{\lambda \sigma _\mathrm{v}}{c}\int _{\mathrm{\infty }}^{+\mathrm{\infty }}\left[1-\mathrm{exp}\left(-\tau _0\mathrm{e}^{-y^2/2}\right)\right]dy,$$

with $`\tau _0`$ the optical depth of the line at the line center, given by

$$\tau _0=0.106fN_{20}\lambda /\sigma _{\mathrm{v},100}.$$

Here $`f`$ is the oscillator strength, $`\lambda `$ the wavelength in Å, $`\sigma _{\mathrm{v},100}`$ the velocity dispersion in units of 100 km/s and $`N_{20}`$ the column density of the ion in units of $`10^{20}`$ m$`^{-2}`$. Given a value for $`\sigma _\mathrm{v}`$ and the measured equivalent width, these equations yield the column density. For some ions we have more than one absorption line identified, and this allows us to constrain $`\sigma _\mathrm{v}`$. From the O vii, O viii and C vi ions we obtain $`\sigma _\mathrm{v}`$ = 140$`\pm `$30 km/s. Using this value, we derive the column densities of Table 2. We give the column density $`N`$ in logarithmic units. The reason is the relatively large inferred optical depth of some lines, e.g. 70 for the O viii Ly$`\alpha `$ line. This makes the line core saturated, and hence significant changes in the column density lead to only minor changes in the equivalent width.
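The inversion of the equivalent-width relation above for the column density has no closed form, but numerically it is a simple root-finding problem. The following minimal sketch (Python with scipy) reproduces the procedure; the input equivalent width of 0.030 Å is an assumed illustrative number, not a measured value from our Table 1.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

C_KMS = 2.99792458e5  # speed of light [km/s]

def equiv_width(N20, lam, f, sigma_v):
    """Equivalent width [A] for a Gaussian velocity distribution:
    W = (lam*sigma_v/c) * Int[1 - exp(-tau0*exp(-y^2/2))] dy,
    with tau0 = 0.106 * f * N20 * lam / (sigma_v/100)."""
    tau0 = 0.106 * f * N20 * lam / (sigma_v / 100.0)
    integrand = lambda y: 1.0 - np.exp(-tau0 * np.exp(-0.5 * y * y))
    val, _ = quad(integrand, -12.0, 12.0)   # wings negligible beyond |y| ~ 12
    return lam * sigma_v / C_KMS * val

def column_from_width(W, lam, f, sigma_v):
    """Invert the curve of growth for N, in units of 1e20 m^-2."""
    g = lambda logN: equiv_width(10.0 ** logN, lam, f, sigma_v) - W
    return 10.0 ** brentq(g, -6.0, 8.0)

# Illustrative call for O VIII Ly-alpha (18.97 A, f = 0.417), with the
# sigma_v = 140 km/s derived above; W = 0.030 A is an assumed input.
print(column_from_width(0.030, 18.97, 0.417, 140.0))
```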
One of the most striking features of the spectrum is the absence of the oxygen continuum absorption edges that were deduced from low resolution X-ray spectra such as those acquired with Rosat (Nandra et al. nandra (1993)), ASCA (Fabian et al. fabian (1994)) or BeppoSAX (Nicastro et al. nicastro (2000)). The absence is, however, consistent with the column densities derived above from the line absorption. We predict a jump of 11 % at the O viii edge (14.23 Å) and 4 % at the O vii edge (16.77 Å), all within a factor of 2. We can measure any jumps near the edges with an accuracy of about 10 % of the continuum, but we find no evidence for an absorption edge; the data even suggest a small emission edge (radiative recombination continuum) of 10$`\pm `$10 %. The column densities of the other ions for which we have absorption measurements do not lead to significant absorption edges, except for Ne ix and Ne x, which should be at the low side of their column density range in order to avoid significant continuum absorption. Note that the Ne x Ly$`\alpha `$ line has some blending from Fe xvii 4d-2p; taking that into account leads to a somewhat smaller column density.

We have made a set of runs using the XSTAR photoionization code (Kallman & Krolik kallman (1999)), using solar abundances and the spectral shape as given by Mathur et al. (mathur (1995)), normalised to 13 photons m$`^{-2}`$ s$`^{-1}`$ Å$`^{-1}`$ at 20 Å. We obtained a good overall agreement with our measured column densities using a hydrogen column density of $`3\times 10^{25}`$ m$`^{-2}`$ and $`\xi =100\pm 25`$ (in units of $`10^{-9}`$ W m). This column density is comparable to the value derived from earlier ASCA observations (Fabian et al. fabian (1994)). However, our ionization parameter is significantly larger, having most of the oxygen as O viii or O ix. The plasma temperature of the absorber implied by the XSTAR model is $`2\times 10^5`$ K. This low temperature implies that thermal contributions to line broadening ($`\sigma _\mathrm{v}`$) are negligible.

### 2.3 Velocity fields

The absorption lines appear to be blueshifted: the average blueshift of the C, N and O lines is 280$`\pm `$70 km/s. There is some evidence that the lines are broadened in proportion to their wavelengths, indicative of Doppler broadening. Subtracting the instrumental line width ($`\sigma `$ = 0.023 Å) yields for the intrinsic line broadening a width ($`\sigma `$) of 270$`\pm `$100 km/s, somewhat larger than the width of 140 km/s derived from line ratios (previous section). This could indicate that the absorber consists of a few narrow components ($`\sigma \simeq `$140 km/s), with different mean velocities and $`\sigma \simeq `$270 km/s for the ensemble. As an illustration we show the velocity profile of six of the strongest absorption lines in Fig. 2. On the blue side, the line profiles extend out to about 2000 km/s. There is no clear evidence for the presence of an underlying broad emission component for these lines, although for O viii and C vi there appears to be an excess at the red side of the absorption line.

### 2.4 Emission lines

The LETGS spectrum of NGC 5548 shows only a few emission lines. Except for a clear detection of the He-like triplet of O vii, we only identified the forbidden lines of the same triplets of Ne ix and Mg xi; these are marginally detected. Here we focus upon the O vii triplet (Fig. 3). The forbidden ($`f`$) and intercombination ($`i`$) lines are not blueshifted like the absorption lines, but have a marginally significant redshift of 200$`\pm `$130 km/s. The ratio $`i/f`$ can be used as a density diagnostic if the coupling between the upper levels ($`2^3`$S and $`2^3`$P) of $`f`$ and $`i`$ is determined by electron collisions and not by the external radiation field from the central source. For a photon flux of 50 photons m$`^{-2}`$ s$`^{-1}`$ Å$`^{-1}`$ at 1600 Å and the atomic parameters taken from Porquet & Dubau (porquet (2000)), we estimate that this is the case as long as $`n_\mathrm{e}>4\times 10^{14}`$ m$`^{-3}`$.
The ratio $`i/f`$ depends only weakly upon the type of ionization balance: collisional ionization equilibrium (CIE) or photoionization equilibrium (PIE) (Mewe mewe (1999) and Porquet & Dubau porquet (2000)). From the observed value $`i/f`$ of 0.45$`\pm `$0.29 we derive an upper limit to the electron density $`n_\mathrm{e}`$ of $`7\times 10^{16}`$ m$`^{-3}`$. The observed ratio $`G=(i+f)/r`$ is 3.2$`\pm `$1.5, although this value might be somewhat smaller due to overlap of the $`r`$ absorption component. For CIE plasmas, $`G`$ should be of order 1, while for PIE plasmas, values larger than about 4 can be expected (Liedahl liedahl (1999), Porquet & Dubau porquet (2000)). Thus, the O vii triplet probably originates from a photoionized plasma.

Does the emission from the triplet arise from the same medium that absorbs the continuum? Assuming that the absorber has the shape of a thin, spherical shell, we calculate, on the basis of recombination and the absorber parameters derived in Sect. 2.2, emission line intensities that agree within the error bars with the measured intensities. The upper limit for $`n_\mathrm{e}`$ found from the $`i/f`$ ratio then implies a lower limit for the thickness of the shell of $`5\times 10^8`$ m, and a distance from the central source of at least $`8\times 10^{13}`$ m. Thus, both the absorption and emission of the O vii resonance line, as well as the emission from $`i`$ and $`f`$, may originate in the same expanding shell.

## 3 Discussion

For the first time we see narrow absorption lines in the X-ray spectrum of an AGN. However, narrow absorption lines in Seyfert galaxies have been seen before in the UV band. Shull and Sachs (shull (1993)) discovered narrow absorption features in the C iv and N v lines. This was confirmed by Mathur et al. (mathur (1995)) and studied in more detail by Mathur et al. (mathur99 (1999)) and Crenshaw et al. (crenshaw (1999)). These last authors find at least 5 narrow absorption components in the C iv 1550 Å and N v 1240 Å lines. These components are broadened by $`\sigma _\mathrm{v}`$ = 20–80 km/s, somewhat smaller than we find. The rms width of the ensemble of UV absorption lines is 160 km/s for C iv and 260 km/s for N v, consistent with the effective line width of 270$`\pm `$100 km/s that we find from our Chandra data. Also, the average blueshift of the UV absorption lines ($`-390`$ km/s for C iv and $`-490`$ km/s for N v) is only slightly larger than what we find for the C, N and O lines ($`-280\pm 70`$ km/s). Note that our wavelength scale has residual uncertainties of 100–300 km/s for most of our lines. However, the column density of the lithium-like ions C iv and N v as derived by Crenshaw et al. is 100 times smaller than the column density of the corresponding hydrogen-like ions that we find. The difference may be attributed to either time variability (low column density during the UV observations), a high degree of ionization (hydrogenic ions dominating) or a stratified absorber (with UV and X-ray absorption lines originating from different zones). We favour this last possibility. This is supported by the fact that our simulations with XSTAR imply C iv and N v columns that are 100 times smaller than the values measured by Crenshaw et al. Crenshaw & Kraemer (crenshawk (1999)) find that the weakest of the five dynamical components (their number 1) in the UV absorption lines has the highest outflow velocity ($`-1056`$ km/s).
Based upon the N v to C iv ratio, they argue that this component has the highest ionization parameter and could produce the oxygen continuum absorption edges as implied by the ASCA data. Our modelling with XSTAR also predicts column densities of N v and C iv close to the measured values for component 1. But the outflow velocity of the X-ray absorber that we find is significantly smaller than the velocity of UV component 1. However, Mathur et al. (mathur99 (1999)) identify component 3 ($`-540`$ km/s) as the most likely counterpart to the X-ray warm absorber. We conclude that the detailed relation between UV and X-ray absorbers is still an open issue.

###### Acknowledgements.

The Laboratory for Space Research Utrecht is supported financially by NWO, the Netherlands Organization for Scientific Research. Work at LLNL was performed under the auspices of the U.S. Department of Energy, Contract No. W-7405-Eng-48.
# Ion–beam induced current in high–resistance materials

## I Introduction

The irradiation of semiconductors and insulators by ion beams is a tool that is frequently used for studying or modifying the electric properties of these high-resistance materials (see e.g. and references therein). As a result of ion irradiation, heavily doped charged layers can be formed in a sample, with the density of ions in the layer reaching $`10^{20}`$ cm$`^{-3}`$. The formation of strongly nonuniform charge distributions is the standard situation in ion-beam irradiation, while for achieving a uniform volume concentration of implanted ions special tricks are required. Under the influence of ion irradiation, the electric properties of high-resistance materials can be essentially changed, with additional peculiarities resulting from a nonuniform distribution of charge carriers \[6–9\]. High-resistance materials mean, as usual, materials with a very poor concentration of carriers, such as semiconductors, semimetals, or insulators that, being irradiated, acquire conducting properties. The electron transport in such materials can be well described in the semiclassical drift-diffusion approach. The existence of implantation-induced damage can be effectively taken into account by means of characteristic parameters entering the drift-diffusion equations, for instance, by specifying the resistivity or mobility. When an energetic ion strikes a solid surface, there is a probability of electron capture resulting in the neutralization of a part of the implanted ions, which become electrically inactive. However, the neutralized ions can easily be activated again by irradiating the material with laser beams \[12–15\].

In the present paper, we study the electric transport in high-resistance materials with a strongly nonuniform initial distribution of charge carriers, which can be formed by ion-beam irradiation. As follows from the above discussion, it is always possible to prepare such conditions, e.g. employing laser activation \[12–15\], that the injected ions could be the principal charge carriers. For concreteness, we consider below positive ions, though this assumption is not principal and negative ions could be treated on the same footing.

## II Peculiarities of Electric Transport

The transport properties of high-resistance materials, such as extrinsic semiconductors, are usually described by the drift-diffusion approach, consisting of the continuity and Poisson equations, respectively,

$$\frac{\partial \rho }{\partial t}+\stackrel{}{\nabla }\cdot \stackrel{}{j}+\frac{\rho }{\tau }=0,\qquad \epsilon \stackrel{}{\nabla }\cdot \stackrel{}{E}=4\pi \rho ,\qquad (1)$$

where $`\rho =\rho (\stackrel{}{r},t)`$ is the charge density of carriers; $`\stackrel{}{j}=\stackrel{}{j}(\stackrel{}{r},t)`$ is the electric current density; $`\tau `$ is the relaxation time; and $`\epsilon `$ is the dielectric permittivity. The total current density is

$$\stackrel{}{j}_{tot}=\stackrel{}{j}+\frac{\epsilon }{4\pi }\frac{\partial \stackrel{}{E}}{\partial t},\qquad \stackrel{}{j}=\mu \rho \stackrel{}{E}-D\stackrel{}{\nabla }\rho ,\qquad (2)$$

where $`\mu `$ is the mobility of carriers and $`D=(\mu /e)k_BT`$ is the diffusion coefficient, with $`e`$ the carrier charge. The average current through the considered sample is given by the integral

$$\stackrel{}{J}(t)=\frac{1}{V}\int \stackrel{}{j}_{tot}(\stackrel{}{r},t)\,d\stackrel{}{r}\qquad (3)$$

over the sample volume $`V`$. Let us consider a plane sample of thickness $`L`$ and area $`A`$, which is biased with an external constant voltage $`V_0>0`$.
Because of the plane symmetry, the three-dimensional picture will be reduced everywhere below to a one-dimensional description. For what follows, it is convenient to simplify the notation by passing to dimensionless quantities. The return to dimensional quantities can easily be done as follows. We keep in mind that the coordinate $`x`$ is measured in units of the thickness $`L`$ and the time $`t`$ in units of the transit time $`\tau _0`$, so that for returning to the corresponding dimensional variables one has to make the substitution

$$x\to \frac{x}{L},\qquad t\to \frac{t}{\tau _0},\qquad \tau _0\equiv \frac{L^2}{\mu V_0}.$$

For other physical quantities the return to dimensional units is done by accomplishing the following transformations: for the diffusion coefficient,

$$D\to \frac{D}{D_0},\qquad D_0\equiv \mu V_0,$$

for the electric field,

$$E\to \frac{E}{E_0},\qquad E_0\equiv \frac{V_0}{L},$$

for the total accumulated charge,

$$Q\to \frac{Q}{Q_0},\qquad Q_0\equiv \epsilon AE_0=\frac{\epsilon AV_0}{L},$$

for the charge density,

$$\rho \to \frac{\rho }{\rho _0},\qquad \rho _0\equiv \frac{Q_0}{AL}=\frac{\epsilon V_0}{L^2},$$

and for the electric current,

$$j\to \frac{j}{j_0},\qquad j_0\equiv \frac{Q_0}{A\tau _0}=\frac{\epsilon V_0}{L\tau _0},$$

with the same transformation for the average current (3), $`J\to J/j_0`$.

Employing the dimensionless notation, for the plane case considered we have from equations (1)

$$\frac{\partial \rho }{\partial t}+\frac{\partial }{\partial x}(\rho E)-D\frac{\partial ^2\rho }{\partial x^2}+\frac{\rho }{\tau }=0,\qquad \frac{\partial E}{\partial x}=4\pi \rho ,\qquad (4)$$

where the space and time variables are such that

$$0<x<1,\qquad t>0.$$

The total current density (2) reads

$$j_{tot}=\rho E-D\frac{\partial \rho }{\partial x}+\frac{1}{4\pi }\frac{\partial E}{\partial t}.\qquad (5)$$

The condition that the sample is biased with a constant external voltage now writes

$$\int _0^1E(x,t)\,dx=1.\qquad (6)$$

Note that the parameters $`\epsilon `$ and $`\mu `$ do not appear in Eqs. (4) because of the use of dimensionless units.

An initial condition to the continuity equation is defined by the distribution of ions after the irradiation process. The distribution of implanted species can be modelled by the Gaussian form

$$\rho (x,0)=\frac{Q}{Z}\mathrm{exp}\left\{-\frac{(x-a)^2}{2b}\right\},\qquad (7)$$

in which $`0<a<1`$ and

$$Q=\int _0^1\rho (x,0)\,dx,\qquad Z=\int _0^1\mathrm{exp}\left\{-\frac{(x-a)^2}{2b}\right\}dx.$$

The distribution centre, $`a`$, is close to the mean projected range of the ions, although it may not coincide with it exactly. Our aim is to study the behaviour of the electric current through the sample,

$$J(t)=\int _0^1j_{tot}(x,t)\,dx,\qquad (8)$$

as a function of time, when the initial distribution of carriers is given by the form (7). In particular, we shall show that the nonuniformity of the initial distribution may lead to quite unexpected behaviour of the current (8), when it turns against the applied voltage, becoming negative. This will be shown to be the result of the solution of the transport equations (4).

First of all, let us emphasize that the occurrence of the negative current, if any, can be due only to a nonuniform distribution of carriers. Really, the current (8), with the current density (5), can be written as

$$J(t)=\int _0^1\rho (x,t)E(x,t)\,dx+D\left[\rho (0,t)-\rho (1,t)\right].\qquad (9)$$

If $`\rho (x,t)`$ does not depend on $`x`$, then Eq. (9), with the help of the voltage condition (6), immediately shows that the current is positive, since we are dealing with a positive charge density $`\rho `$.
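These relations are straightforward to explore numerically. The sketch below (Python; the grid resolution and the parameter values Q, a, b, D, τ are illustrative assumptions of ours, not values prescribed by the text) builds the initial profile (7), integrates the Poisson equation in (4) subject to the voltage condition (6), and evaluates the initial current from Eq. (9); the analytic conditions under which J(0) turns negative are derived next.

```python
import numpy as np

# Illustrative dimensionless parameters (cf. curve 2 of Fig. 1 below)
M = 401
x = np.linspace(0.0, 1.0, M)
dx = x[1] - x[0]
Q, a, b = 1.0, 0.25, 0.05      # accumulated charge, layer centre, width
D, tau = 0.01, 100.0           # D << 1 and tau >> 1 favour the effect

def gaussian_layer(a, b, Q):
    """Initial profile (7), normalised so that the integral of rho is Q."""
    rho = np.exp(-(x - a) ** 2 / (2.0 * b))
    return Q * rho / np.trapz(rho, x)

def field(rho):
    """Integrate dE/dx = 4*pi*rho from Eq. (4), fixing the integration
    constant by the voltage condition (6): the integral of E over x is 1."""
    Qx = np.concatenate(([0.0], np.cumsum(0.5 * (rho[1:] + rho[:-1]) * dx)))
    E = 4.0 * np.pi * Qx
    return E + (1.0 - np.trapz(E, x))

def current(rho):
    """Average current from Eq. (9)."""
    return np.trapz(rho * field(rho), x) + D * (rho[0] - rho[-1])

print(current(gaussian_layer(a, b, Q)))   # negative for these parameters
```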
As far as the diffusion and relaxation terms in the continuity equation tend to smooth the initial nonuniform distribution of carriers, this tells us that the negative current can happen only at the initial stage of the process, while the charge density $`\rho (x,t)`$ is still essentially nonuniform. The initial stage means that $`t\ll 1`$, when the influence of the diffusion and relaxation terms is yet negligible. In other words, the favouring conditions for the occurrence of the negative electric current are $`D\ll 1`$ and $`\tau \gg 1`$. It is worth recalling here that, according to the discussion given in the Introduction, the influence of the implantation-induced damage is assumed to be taken into account by the corresponding values of parameters in the transport equations, and that the implanted ions are assumed to be prepared as electrically active, which can be done by means of laser irradiation \[12–15\].

To demonstrate explicitly that the negative current really can occur, let us analyse the case of a very narrow layer of injected ions, such that the standard deviation $`b`$, also called the straggling, is small, $`b\ll 1`$. Then distribution (7) is close to

$$\rho (x,0)=Q\delta (x-a).\qquad (10)$$

In the beginning of the process, when $`t\to 0`$, we have from Eqs. (9) and (10)

$$J(0)=QE(a,0).$$

Integrating the second equation in Eq. (4) yields

$$E(a,0)=1+4\pi Q\left(a-\frac{1}{2}\right).$$

With these conditions, the current (8) becomes negative provided that

$$4\pi Q\left(\frac{1}{2}-a\right)>1.\qquad (11)$$

From here, the inequality

$$a<\frac{1}{2}-\frac{1}{4\pi Q}\qquad (12)$$

follows for the location of the ion layer. Since this location is inside the sample, we also have the inequality

$$Q>\frac{1}{2\pi }\qquad (0<a<1)\qquad (13)$$

for the initial charge.

The above analysis demonstrates that the negative current really can occur, provided some special conditions, such as (12) and (13), hold true. To our knowledge, there have been no experimental measurements demonstrating the appearance of such a negative current. Therefore the picture we describe here is a theoretical prediction of a novel effect. The occurrence of this negative current is very sensitive to the characteristics of the irradiated sample as well as to the initial nonuniform distribution of carriers, which suggests the possibility of using this effect for studying the mentioned properties. For instance, the mean projected range of irradiating ions could be measured in this way. This can be realized as follows. Assume that for an ion-irradiated material we observed the occurrence of the negative current $`J(0)`$ at the initial time. Let us compare the observed value $`J(0)`$ with formula (9), where we have to substitute the distribution (7). The peak of the latter is close to the mean projected range of the irradiating ions. With the given $`\rho (x,0)`$, the right-hand side of expression (9) can be explicitly calculated. This is because the electric field satisfying Eq. (4), with the voltage condition (6), can be presented as the functional

$$E(x,t)=1+4\pi \left[Q(x,t)-\int _0^1Q(x,t)\,dx\right]$$

of the density $`\rho (x,t)`$ entering through

$$Q(x,t)=\int _0^x\rho (x^{},t)\,dx^{}.$$

Hence, for a given $`\rho (x,0)`$, the electric field $`E(x,0)`$ is uniquely defined by the above functional. Thus, equating the calculated $`J(0)`$ from formula (9) with the corresponding measured value, we obtain an equation for the dimensionless mean projected range $`a`$.
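Numerically, this amounts to a one-dimensional root search. The sketch below reuses the grid and the routines from the previous sketch, and assumes a hypothetical measured value J(0) = −2 purely for illustration.

```python
from scipy.optimize import brentq

def J0_of_a(a_trial):
    """Calculated initial current (9) for a Gaussian layer (7) centred
    at a_trial, with the width b and the total charge Q held fixed."""
    return current(gaussian_layer(a_trial, b, Q))

J_measured = -2.0   # hypothetical measured initial current
a_fit = brentq(lambda aa: J0_of_a(aa) - J_measured, 0.01, 0.99)
print(a_fit)        # dimensionless mean projected range; lambda = a_fit * L
```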
For example, in the case of a narrow ion layer, we find

$$a=\frac{1}{2}-\frac{1}{4\pi Q}+\frac{J(0)}{4\pi Q^2}.$$

Returning to dimensional units, for the mean projected range $`\lambda =aL`$ we obtain

$$\lambda =\frac{L}{2}-\frac{\epsilon AV_0}{4\pi Q}+\frac{\epsilon A^2L^2}{4\pi \mu Q^2}J(0).\qquad (14)$$

This formula directly relates the mean projected range of the ions, $`\lambda `$, with the known parameters of the system and the measured initial current $`J(0)`$. In deriving Eq. (14), we have only used the fact that the initial distribution of ions is narrow, $`b\ll 1`$, while the current $`J(0)`$ can, in general, be of any sign. The advantage of using the effect of the negative electric current is that the latter always requires a relatively narrow initial distribution. Therefore, as soon as we observe a negative current $`J(0)`$, we can employ formula (14) as a realistic approximation for the ion mean projected range.

When the initial ion layer is not narrow and the current $`J(0)`$ is not necessarily negative, so that the dependence of the formulas on the standard deviation $`b`$ becomes important, then two situations can happen. One is when $`b`$ is known from other experiments. Then, in order to find the mean projected range, one should proceed exactly as described above, equating the measured and calculated currents $`J(0)`$. If the straggling $`b`$ is not known, it can be found in the following way. Assume that at the initial time $`t=0`$ there appears the negative current. Since this is a transient effect, existing only in a finite time interval $`0\le t\le t_0`$, there is some time $`t=t_0`$ when the current changes its sign, becoming normal, i.e. positive, for $`t>t_0`$. At the same time $`t=t_0`$, the current then is zero, $`J(t_0)=0`$. Therefore, the latter equation, with the given experimentally measured $`t_0`$, may serve as an additional condition defining the straggling $`b`$.

It may also happen that the injected ion layer is narrow but located close to the surface, so that $`a\sim b`$. Then formula (14) can be corrected by taking into account the second term in Eq. (9), which gives for the mean projected range

$$\lambda =\frac{L}{2}-\frac{\epsilon A}{4\pi Q}\left[V_0+\frac{AL}{\mu Q}\left(D\mathrm{\Delta }\rho -LJ\right)\right],\qquad (15)$$

where

$$J\equiv J(0),\qquad \mathrm{\Delta }\rho \equiv \rho (0,0)-\rho (1,0).$$

To find the electric current (8) for arbitrary times, we need to solve equations (4), with the voltage condition (6) and the initial condition (7). These should also be complemented by boundary conditions, which can be taken in the Neumann form. We have accomplished such numerical calculations, whose results are presented in the figures; all values are given in dimensionless units, and the conditions $`D\ll 1`$ and $`\tau \gg 1`$, favouring the occurrence of the negative current, are assumed. The figures correspond to the setup explained in detail in the Introduction and specified in the present section.
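A minimal explicit time-stepper for Eqs. (4), (6) and (9), again building on the definitions of the first sketch, might look as follows. The scheme and the step-size choice are crude assumptions of ours (the text does not specify the numerical method beyond the Neumann boundary conditions, which the one-sided edge differences of np.gradient only roughly mimic).

```python
def evolve(rho, t_end):
    """March the continuity equation in (4) forward in time, recording
    the average current J(t) of Eq. (9) at every step."""
    dt = 0.2 * min(dx * dx / D, dx / 8.0)   # conservative explicit step
    t, history = 0.0, []
    while t < t_end:
        E = field(rho)
        drho = (-np.gradient(rho * E, dx)                     # drift
                + D * np.gradient(np.gradient(rho, dx), dx)   # diffusion
                - rho / tau)                                  # relaxation
        rho = np.clip(rho + dt * drho, 0.0, None)
        t += dt
        history.append((t, current(rho)))
    return np.array(history)

# J(t) for the layer of the first sketch; J(0) is negative for these
# parameters (cf. the curves of Figs. 1-5 for the full behaviour).
J_of_t = evolve(gaussian_layer(a, b, Q), t_end=0.5)
```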
When the current $`J(0)`$ is positive and, at the same time, the initial distribution is not narrow, Eqs. (14) and (15) are no longer valid, as expected. Fig. 3 demonstrates the time dependence of the electric current for different values of the charge $`Q`$. Since $`b\ll 1`$, formulas (14) and (15) work well in both cases, for the negative current, as in Fig. 3, as well as for the positive current, as in Fig. 4; the errors are no more than around $`1\%`$. The current in Fig. 4 becomes positive because condition (13) does not hold. When the accumulated charge $`Q`$ is increased so that condition (13) becomes true, the negative current appears provided that condition (12) is satisfied, as is clearly illustrated in Fig. 5. ## III Discussion We have shown that in high-resistance materials irradiated by ion beams an unusual transient effect, a negative electric current, can occur. This effect can be used for studying the characteristics of irradiated materials as well as those of the irradiating ions. For instance, the mean projected range of the ions can be measured; its value is well approximated by formula (15). This new way of measuring the ion mean projected range is certainly not the only one possible, but it can provide additional information when complemented by other experimental methods. Throughout the paper we have been talking about ions, but the same considerations apply to any kind of charged particles, say electrons, positrons, or muons. It is worth stressing that here we have advanced a theoretical proposal for observing and employing a novel effect. As follows from the above discussion, the general prerequisites for realizing it appear achievable. Moreover, there exist so many types of high-resistance materials, such as semiconductors, semimetals, and insulators, so many ways of fabricating such materials with specially designed characteristics, and so many methods of varying the properties of these materials by involving additional external sources, such as electromagnetic fields or laser beams, that preparing the desired conditions for realizing the suggested effect looks quite feasible. Even if this realization is difficult today, it might be accomplished in the future. History shows that practically any effect that is possible in principle sooner or later becomes realizable in experiment. Although our main aim in this paper has been to advance a theoretical proposal for the principal possibility of observing a novel effect, we have also paid attention to the feasibility of realizing it experimentally. In addition to the above discussion, we would like to touch on several other important points that could substantiate the feasibility of observing the considered effect. First, the space charge due to the accumulated ions will act against further implantation, and the ion doses will certainly be limited. At the same time, to observe the effect of negative electric current, one needs to reach a threshold value for the accumulated charge, as in the inequality (13). In order to understand how the latter condition can be satisfied, it is necessary to rewrite it in the corresponding dimensional units introduced at the beginning of Sec. 2, in this case, in units of $`Q_0`$.
This gives us the condition $$2\pi \epsilon A\frac{V_0}{L}Q>1.$$ From here it is evident that this condition can always be satisfied for any given charge $`Q`$, which can be achieved by increasing either the sample area $`A`$ or the applied voltage $`V_0`$. Of course, for a given sample, one has to increase the voltage $`V_0`$. The implanted ions do not form a homogeneous layer, and their distribution may not be exactly Gaussian. In this paper we consider the Gaussian distribution (7), which usually describes the profile of implanted ions well. However, this form is not essential for realizing the negative-current effect. The latter persists for other distributions as well; one needs only that the straggling $`b`$ be less than the projected range $`a`$. For the realization of the effect, it is also not necessary that the ion distribution be centered at the mean projected range of the ions. For simplicity, we have called the distribution center $`a`$ the projected range. If the two do not coincide, the whole analysis remains valid after a slight change of terminology, with $`a`$ called the distribution center. To measure currents through the sample, one has to take into account that near the contacts a space-charge build-up often occurs, caused by electron or hole injection from the contacts. The current due to the carriers injected from the contacts can mask the current produced by the ions. How can one ascertain that the considered effect is caused by the implanted ions? This problem is easily resolved by measuring the current through the sample before ion irradiation. One may then study the influence of the contacts on the resulting electric currents. In this way one can always separate the influence of the contacts from physical effects resulting from ion irradiation. In conclusion, we see no fundamental difficulties in realizing the negative-current effect, and, as follows from the above discussion, the various technical problems seem to be resolvable. Once realized, this effect will provide an additional tool for studying the transport properties of high-resistance materials, as well as for measuring characteristics of the irradiating ions. Acknowledgement We appreciate financial support from the São Paulo State Research Foundation. Figure Captions Fig. 1. Electric current through the sample as a function of time for the parameters $`Q=1,b=0.05`$, and different initial locations of the ion layer: $`a=0.1`$ (curve 1, solid line), $`a=0.25`$ (curve 2, long–dashed line), $`a=0.5`$ (curve 3, short–dashed line), and $`a=0.75`$ (curve 4, dotted line). Fig. 2. The dependence of the electric current on time for $`Q=0.5,a=0.1`$, and varying widths of the initial ion distribution: $`b=0.05`$ (curve 1, solid line), $`b=0.1`$ (curve 2, long–dashed line), and $`b=0.5`$ (curve 3, short–dashed line). Fig. 3. Electric current as a function of time for $`a=0.25,b=0.05`$ and different charges: $`Q=3`$ (curve 1, solid line), $`Q=1`$ (curve 2, long–dashed line), and $`Q=0.5`$ (curve 3, short–dashed line). Fig. 4. The time dependence of the electric current for $`Q=0.1,b=0.05`$, and two different initial locations of the ion distribution: $`a=0.1`$ (curve 1, solid line) and $`a=0.25`$ (curve 2, long–dashed line). Fig. 5. Electric current vs.
time for $`Q=0.5,b=0.1`$, and varying initial locations of the peak of the ion distribution: $`a=0.1`$ (curve 1, solid line), $`a=0.25`$ (curve 2, long–dashed line), $`a=0.5`$ (curve 3, short–dashed line), and $`a=0.75`$ (curve 4, dotted line).
# Can we predict the fate of the Universe? ## I Introduction The issue of the present state, future dynamics and final fate of the Universe, or at least our patch of it, has recently been pushed to the front line of research in cosmology. This is mostly due to observations of high-redshift type Ia supernovae, performed by two independent groups (the “Supernova Cosmology Project” and the “High-Z Supernova Team”), which allowed accurate measurements of the luminosity-redshift relation out to redshifts up to about $`z\sim 1`$ . It should be kept in mind that these measurements are done on the assumption that these supernovae are standard candles, which is by no means demonstrated and could conceivably be wrong. There are concerns about the evolution of these objects and the possible dimming caused by intergalactic dust , but we will ignore these for the purposes of this paper, and assume that the quoted results are correct. The supernovae data, when combined with the ever growing set of CMBR anisotropy observations, strongly suggest an accelerated expansion of the Universe at the present epoch, with cosmological parameters $`\mathrm{\Omega }_\mathrm{\Lambda }\simeq 0.7`$ and $`\mathrm{\Omega }_\mathrm{m}\simeq 0.3`$. A further cause of concern here is the model dependence of the CMBR analysis, but we shall again accept the above results for the purpose of this paper. Taken at face value, these results would seem to show that the universe will necessarily enter an inflationary stage in the near future. However, as pointed out by Starkman, Trodden and Vachaspati , this is not necessarily so. We could be living in a small, sub-horizon bubble, for example. And even if we were indeed inflating, it would not be trivial to demonstrate it. In the above work, these authors looked at the crucial question of ‘How far out must we look to infer that the patch of the universe in which we are living is inflating?’ Their analysis is based on previous work by Vachaspati and Trodden which shows that the onset of inflation can in some sense be identified with the comoving contraction of our minimal anti-trapped surface (MAS). (The MAS of each comoving observer is a sphere centered on that observer, on which the velocity of comoving objects is $`c`$; for the particular case of a homogeneous universe, the MAS has a physical radius $`cH^{-1}`$.) They then argue that if one can confirm cosmic acceleration up to a redshift $`z_{MAS}`$ and detect the contraction of our MAS, then our universe must be inflating. Unfortunately, even if we can do the former (for $`\mathrm{\Omega }_\mathrm{\Lambda }=0.8`$ the required redshift is $`z_{MAS}\simeq 1.8`$), it turns out that there is no way to confirm the latter at present, because the accelerated expansion has not been going on for long enough for the MAS to contract. Only if we had $`\mathrm{\Omega }_\mathrm{\Lambda }\gtrsim 0.96`$ would we be able to demonstrate inflation today. As in the proverbial mathematician’s joke, the method outlined by Starkman et al. provides an answer that is completely accurate but will take a long time to find, and hence is of no immediate use to us. In this paper, however, we will explore a different possibility. Our main aim is to provide what could be called a physicist’s version of the “mathematical” question of Starkman et al. In other words, we are asking, ‘If we can’t know for sure the fate of the universe at present, what is our best guess today?’.
As we will discuss, we can answer this question, although doing so involves some crucial additional assumptions. In order to answer the above question we compare the particle and event horizons. We show that for a flat universe with $`\mathrm{\Omega }_\mathrm{\Lambda }\gtrsim 0.14`$ the particle horizon is greater than the distance to the event horizon, meaning that today we may be able to observe a larger portion of the Universe than that which will ever be able to influence us. We argue that if we find evidence for a constant vacuum density up to a distance from us equal to the event horizon, then our Universe will necessarily enter an inflationary phase in the not too distant future, assuming that the potential of the scalar field which drives inflation is time-independent and that the content of the observable universe will remain ‘frozen’ in comoving coordinates. Note that Starkman et al. argue that inflation can only take place if the vacuum energy dominates the energy density in a region with physical radius not smaller than that of the MAS at that time. However, they did not assume that the content of the observable universe would remain frozen in comoving coordinates, and so they found that the larger the contribution of a cosmological constant to the total density of the Universe, the larger the redshift out to which one has to look in order to infer that our portion of the universe is inflating. This result seems paradoxical, until one realizes that the size of the MAS at a given time does not, by itself, say anything about inflation. The main reason why $`z_{MAS}`$ grows with $`\mathrm{\Omega }_\mathrm{\Lambda }`$ is simply that the scale factor has grown more. The plan of this paper is as follows. In the next section we introduce the various lengthscales that are relevant to our discussion, and provide a qualitative discussion of our test for inflation. We also discuss the assumptions involved and compare our ‘physical’ test with the ‘mathematical’ one recently proposed by Starkman et al. In section III we provide a more quantitative analysis of our criterion. We also discuss in more detail our crucial assumption of an energy-momentum distribution which remains frozen in comoving coordinates. Finally, in section IV we summarize our results and discuss some other outstanding issues. ## II A ‘physical’ test for inflation The dynamical equation which describes the evolution of the scale factor $`a`$ in a Friedmann-Robertson-Walker (FRW) universe containing matter, radiation and a cosmological constant can be written as $$H^2=H_0^2(\mathrm{\Omega }_{m0}a^{-3}+\mathrm{\Omega }_{r0}a^{-4}+\mathrm{\Omega }_{\mathrm{\Lambda }0}+\mathrm{\Omega }_{k0}a^{-2}).$$ (1) where $`H=\dot{a}/a`$ and the density parameters $`\mathrm{\Omega }_m`$, $`\mathrm{\Omega }_r`$ and $`\mathrm{\Omega }_\mathrm{\Lambda }`$ express respectively the densities in matter, radiation and cosmological constant as fractions of the critical density. (A dot represents a derivative with respect to the cosmic time $`t`$; the subscript ‘$`0`$’ means that quantities are to be evaluated at the present epoch, and we have also taken $`a_0=1`$.) Naturally one has $`\mathrm{\Omega }_k=1-\mathrm{\Omega }_m-\mathrm{\Omega }_r-\mathrm{\Omega }_\mathrm{\Lambda }`$.
The distance $`d`$ to a comoving observer at a redshift $`z`$ is given by $$d(z)=c\int _{t(z)}^{t_0}\frac{dt^{\prime }}{a(t^{\prime })}=cH_0^{-1}\int _0^z\frac{dz^{\prime }}{\left[\mathrm{\Omega }_{m0}(1+z^{\prime })^3+\mathrm{\Omega }_{r0}(1+z^{\prime })^4+\mathrm{\Omega }_{\mathrm{\Lambda }0}+\mathrm{\Omega }_{k0}(1+z^{\prime })^2\right]^{1/2}},$$ (2) and is related to the ‘radius’ of the local universe which we can in principle observe today. The distance to the event horizon can be defined as $$d_e=c\int _{t_0}^{\mathrm{\infty }}\frac{dt^{\prime }}{a(t^{\prime })}=cH_0^{-1}\int _1^{\mathrm{\infty }}\frac{da}{(\mathrm{\Omega }_{m0}a+\mathrm{\Omega }_{r0}+\mathrm{\Omega }_{\mathrm{\Lambda }0}a^4+\mathrm{\Omega }_{k0}a^2)^{1/2}},$$ (3) and represents the portion of the Universe which will ever be able to influence us. (In writing the upper integration limit as infinity we are of course assuming that the universe will keep expanding forever; an analogous formal definition could be given for a universe ending in a ‘big crunch’.) On the other hand, the particle horizon, $`d_p`$, is defined by (from eqn. (2)) $$d_p\equiv \underset{z\to \mathrm{\infty }}{lim}d(z),$$ (4) and it represents the maximum distance which we can observe today. If today the distance to the event horizon is smaller than the particle horizon ($`d_e<d_p`$), this means that today we are able to observe a larger portion of the Universe than that which will ever be able to influence us. We can do this if we look at a redshift greater than $`z_{\ast }`$, defined by $$d(z_{\ast })=d_e.$$ (5) In a flat universe, solutions to this equation are only possible for $`z_{\ast }\ge 1`$ and for $`\mathrm{\Omega }_{\mathrm{\Lambda }0}\gtrsim 0.14`$. Hence, assuming that the energy-momentum distribution within the patch of the Universe which we are able to see remains unchanged in comoving coordinates, our Universe will necessarily enter an inflationary phase in the future if there is a uniform vacuum density permeating the Universe up to a redshift $`z_{\ast }`$. This assumption obviously requires some further discussion. One can certainly think of a universe made up of different ‘domains’, each with its own values of the matter and vacuum energy density. Furthermore, by cleverly choosing the field dynamics, one can always get patches with time-varying vacuum energy densities, or patches where the vacuum energy density is non-zero for only short periods. In all such cases, the domain walls separating these patches can certainly have a very complicated dynamics, and in particular it is always possible that a domain wall will suddenly get inside our horizon sometime between the epoch corresponding to our observations and the present day. On the other hand, it should also be pointed out that a certain amount of fine-tuning would be required to have a bubble coming inside our horizon right after we have last observed it. In these circumstances, the best that can be done is to impose constraints on the characteristics of any bubble wall that could plausibly have entered the patch of the universe we are currently able to observe, given that we have so far seen none. We shall analyse this point in a more quantitative manner in the following section.
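For concreteness, eqn. (5) is easy to solve numerically. The following minimal sketch (Python with SciPy; the parameter values and function names are our illustrative choices) computes $`d(z)`$ and $`d_e`$ from eqns. (2) and (3) for a flat universe without radiation and finds the root $`z_{\ast }`$.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

# Distances in units of the Hubble radius c/H0; flat universe, radiation
# neglected.  The density parameters below are illustrative.
Om, OL = 0.3, 0.7

def d(z):    # comoving distance to redshift z, Eq. (2)
    return quad(lambda zp: (Om * (1 + zp)**3 + OL) ** -0.5, 0.0, z)[0]

def d_e():   # event horizon, Eq. (3)
    return quad(lambda a: 1.0 / (a**2 * np.sqrt(Om / a**3 + OL)), 1.0, np.inf)[0]

z_star = brentq(lambda z: d(z) - d_e(), 0.01, 100.0)
print(z_star)
```

For $`\mathrm{\Omega }_m=0.3`$ and $`\mathrm{\Omega }_\mathrm{\Lambda }=0.7`$ this gives $`z_{\ast }\simeq 1.8`$, the value quoted in section III.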
We think that the results obtained in this way, even if less robust from a formal point of view, are intuitively more meaningful than those of Starkman et al., in the sense that, among other things, in this case the minimum redshift $`z_{\ast }`$ out to which one must observe in order to be able to predict an inflationary phase (subject to the conditions mentioned earlier) decreases as $`\mathrm{\Omega }_\mathrm{\Lambda }`$ increases (see Fig. 1). In other words, the larger the present value of the cosmological constant, the easier it should be to notice it. It is perhaps instructive to compare our test with that of Starkman et al. in more detail. Starkman et al. require the contraction of the MAS. Now, in order to see the MAS one has to look at a redshift defined by: $$a(z_{MAS})d(z_{MAS})-cH^{-1}(z_{MAS})=0.$$ (6) This finds the redshift, $`z_{MAS}`$, for which the physical distance to a comoving observer at that redshift, evaluated at the corresponding time $`t_{MAS}`$, is equal to the Hubble radius at that time. However, if the vacuum density already dominates the dynamics of the Universe at the redshift $`z_{\ast }`$, then eqn. (5) reduces to: $$d(z_{\ast })-cH^{-1}(z_{\ast })=0.$$ (7) (recall that $`a_0=1`$), because during inflation the physical size of the event horizon is simply equal to $`cH^{-1}`$ (exactly so only when the vacuum energy density is the sole contributor to the energy density, in which case exponential inflation occurs); this is ultimately the reason for the choice of criterion for inflation by Vachaspati and Trodden. As has been discussed above, eqns. (6) and (7) have totally different solutions ($`z_{\ast }=1`$ while $`z_{MAS}\to \mathrm{\infty }`$). To put it another way, the main difference between our approach and that of Starkman et al. lies in the fact that we assume that the energy-momentum content of the observable Universe does not change significantly in comoving coordinates. This allows us to use the equation of state of the local universe observed at a redshift $`z`$ (looking back at a physical time $`t(z)`$) to infer the equation of state of the local universe at the present time. In the following section, we shall discuss these points in somewhat more detail. We should also point out that if we were to relax the assumption of a co-movingly frozen content of the observable universe, then the equation analogous to (5), specifying the redshift out to which one should look in order to be able to predict the future of the Universe, would be $$d(z_+)=d_e(z_+).$$ (8) This equation has no solution, so a stronger test of this kind is not feasible in practice. ## III Discussion Here we go through some specific aspects of our test in more quantitative detail. To begin with, we have solved eqn. (5) numerically; the numerical results were obtained for choices of cosmological parameters such that $`\mathrm{\Omega }_m+\mathrm{\Omega }_\mathrm{\Lambda }=0.7,1.0,1.3`$, with an additional $`\mathrm{\Omega }_m=0.3`$ case for illustration. We are interested only in a matter–dominated or $`\mathrm{\Lambda }`$–dominated epoch of the evolution of the universe, and therefore we have dropped the radiation density parameter $`\mathrm{\Omega }_{r0}`$ of eqn. (1) in the calculations. These results are displayed in Fig. 1 as a function of $`\mathrm{\Omega }_{\mathrm{\Lambda }0}`$.
The cases with constant total density are shown as solid curves (with the top curve corresponding to the higher value of the density), while the case of fixed $`\mathrm{\Omega }_\mathrm{m}`$ is shown, for comparison purposes, by a dotted curve. As expected, as the universe becomes more $`\mathrm{\Lambda }`$–dominated and/or less matter–dominated, the comoving distance to the event horizon decreases, which is reflected in the decrease of the redshift $`z_{\ast }`$ of a comoving source located at that distance. (Note that as $`\mathrm{\Omega }_{\mathrm{\Lambda }0}`$ is pushed down to zero, the value of $`z_{\ast }`$ tends to infinity, since in such universes an event horizon does not exist.) For the observationally preferred values of $`\mathrm{\Omega }_m=0.3`$ and $`\mathrm{\Omega }_\mathrm{\Lambda }=0.7`$, the required redshift is $`z_{\ast }\simeq 1.8`$. For comparison, the redshift $`z_{MAS}`$ defined by eqn. (6), which is the analogous relevant quantity for the criterion of Starkman et al., is shown, for the same choices of cosmological parameters, in Fig. 2. Note that in this case, as the universe becomes more $`\mathrm{\Lambda }`$–dominated and/or less matter–dominated, the redshift of the MAS will increase. As we have already pointed out, our test is not applicable for very low values of the vacuum energy density, and for intermediate values it requires a higher redshift than $`z_{MAS}`$. However, for high values of the cosmological constant and/or low matter contents, the fact that the universe will be expanding much faster makes the redshift of the MAS increase significantly, and even become larger than $`z_{\ast }`$ for some combinations of cosmological parameters. For the same observationally preferred values of $`\mathrm{\Omega }_m`$ and $`\mathrm{\Omega }_\mathrm{\Lambda }`$ quoted above, the required redshift is $`z_{MAS}\simeq 1.6`$. A comparison of the values of $`z_{\ast }`$ and $`z_{MAS}`$ for the spatially flat model is shown in Fig. 3. We now return to our assumption about the energy-momentum content of the universe, considering the possibility that different regions of space may have different values for the vacuum energy density, separated by domain walls. This means that we are assuming the existence of a scalar field, say $`\varphi `$, which within each region sits in one of a number of possible minima of a time-independent potential. It is obvious that if the potential depends on time, or if the scalar field did not have time to roll to the minimum of the potential, then it is not possible to predict the fate of the universe without knowing more about the particle physics model which determines its dynamics. For simplicity, we shall assume that we live in a spherical domain with constant vacuum energy density (effectively a cosmological constant) that is surrounded by a much larger region in which the vacuum density has a different value; for the present purposes we will assume it to be zero. Note that this is the case in which the dynamics of the wall will be fastest (more on this below). Is it possible that a region with a radius $`d(z)`$, say centred on a nearby observer, can be inside a given domain at the conformal time $`\eta (z)`$, but outside that domain at the present time, $`\eta _0`$? This problem can provide some measure of how good the assumption of a frozen energy-momentum distribution in comoving coordinates is. In other words, is it likely that a domain wall may have entered this region at a redshift smaller than $`z`$?
In order to provide a more quantitative answer to this question, we have performed numerical simulations of domain wall evolution using the PRS algorithm, in which the thickness of the domain walls remains fixed in comoving coordinates for numerical convenience; a fuller description of the simulations can be found in the references. We assume that the domain wall has spherical symmetry, thereby reducing a three-dimensional problem to a one-dimensional one. We perform simulations of this wall in a flat universe on a one-dimensional grid of $`8192`$ points. The comoving grid spacing is $`\mathrm{\Delta }x=c\eta _i`$, where $`\eta _i=1`$ is the conformal time at the beginning of the simulation. The initial comoving radius of the spherical domain was chosen to be $`R=2048\mathrm{\Delta }x`$, and the comoving thickness of the domain wall was set to be $`10\mathrm{\Delta }x`$. In these simulations we neglect the gravitational effect induced by the different domains and domain walls on the dynamics of the universe, and we also do not consider the possibility of an open universe. We must emphasise, however, that both these effects would slow down the defects, thereby helping to justify our assumption of a constant equation of state in comoving coordinates even more. We have obtained the following fit for the radius of the domain wall as a function of the conformal time $`\eta `$ $$R(\eta )=R_i\left(1-\left(\frac{c\eta }{\alpha R_i}\right)^n\right)^{2/n},$$ (9) where $$\alpha =2.5,\qquad n=2.1$$ (10) and $`R_i`$ is the initial comoving radius of the domain wall (with $`R_i\gg c\eta _i`$). This fit is accurate to better than $`5\%`$, except for the final stages of collapse. In a flat universe with no cosmological constant the comoving distance to a comoving object at a redshift $`z`$ is given by $$d(z)=c\eta _0\left(1-1/\sqrt{1+z}\right),$$ (11) whereas the radius of the spherical domain wall can be written as a function of the redshift $`z`$, given its initial radius $`R_i`$, as follows $$R(z)=R_i\left(1-\left(\frac{c\eta _0}{\alpha R_i\sqrt{1+z}}\right)^n\right)^{2/n}.$$ (12) Now, by solving the equation $$R(z=0)=d(z)$$ (13) we can find the initial comoving radius of our domain (in units of the present conformal time, that is, $`R_i(z)/\eta _0`$) which it would be required to have so that its comoving size today equals the comoving distance to an object at a redshift $`z`$. Finally, we can calculate the radius of this domain at the present time $`\eta _0`$ and at the redshift $`z`$ (call the latter $`R_{\mathrm{max}}(z)`$) with the value of $`R_i(z)/\eta _0`$ obtained from the previous equation. In Fig. 4 we plot the value of $`R_{\mathrm{max}}(z)/d(z)`$ as a function of the redshift, $`z`$. If the radius of our domain at a redshift $`z`$ were smaller than $`d(z)`$, the domain wall would be in causal contact with us at the present time and we could in principle detect the gravitational effect both of the domain wall and of the different vacuum density outside our bubble. On the other hand, if the radius of our domain at a redshift $`z`$ were greater than $`R_{max}(z)`$, then it would not have time to enter the sphere of radius $`d(z)`$ before today. When the redshift of the cosmological object we are looking at is small, that is, $`z\to 0`$, its comoving distance from us, $`d(z)`$, is much smaller than the comoving horizon, $`\eta (z)`$, at the time at which the light was emitted.
Consequently, a domain wall with a comoving size equal to $`d(z)`$ at the present time would already have a velocity very close to the speed of light by the redshift $`z`$. It is easy to calculate the maximum comoving size, $`R_{max}`$, which our domain would need to have at the redshift $`z`$, in order for the domain wall to enter a sphere of comoving radius $`d(z)`$ centred on a nearby observer sometime between today and redshift $`z`$. This is simply given by $$\frac{R_{max}(z)}{d(z)}\simeq 2$$ (14) when $`z\to 0`$, because $`d(z)`$ is the distance travelled by light from a redshift $`z`$ until today (see Fig. 4). If we assume that the comoving radius of our bubble at a redshift $`z`$ is larger than $`\eta (z)`$, then it will remain frozen in comoving coordinates until its size becomes smaller than the horizon. This means that in this case the value of $`R_{max}/d(z)`$ is even smaller, approaching $$\frac{R_{max}(z)}{d(z)}\simeq \frac{\alpha ^{-2}(\sqrt{1+4\alpha ^n}-1)^{2/n}}{2^{2/n}},$$ (15) when $`z\to \mathrm{\infty }`$ (see Fig. 4). For a spherical domain we have $$\frac{R_{max}(z)}{d_{\mathrm{\infty }}}\simeq 1.12,$$ (16) where $`d_{\mathrm{\infty }}`$ denotes the $`z\to \mathrm{\infty }`$ limit of $`d(z)`$. We thus see that for the purposes of predicting the fate of the universe it may be plausible to assume a fixed content in comoving coordinates. The above discussion also suggests, in particular, that one may find a posteriori that this is indeed a reasonable assumption if we can observe the dynamical effects of a uniform vacuum density up to a redshift $`z\sim 1`$. ## IV Conclusions We have provided a simple analysis of the use of cosmological observations to infer the state and fate of our patch of the universe. In particular, in the same spirit as Starkman et al., we have discussed possible criteria for inferring the present or future existence of an inflationary epoch in our patch of the universe. We have presented a ‘physical’ criterion for the existence of inflation, and contrasted it with the ‘mathematical’ one that has been introduced previously. Ours has the advantage of being able to provide (in principle) a definite answer at the present epoch, but the disadvantage of ultimately relying on assumptions about the content of the local universe and about field dynamics. We consider our assumptions to be plausible, but we can certainly conceive of (arguably contrived or fine-tuned) mechanisms that would be capable of violating them. ###### Acknowledgements. C. M. is funded by FCT (Portugal) under ‘Programa PRAXIS XXI’ (grant no. PRAXIS XXI/BPD/11769/97). We thank Centro de Astrofísica da Universidade do Porto (CAUP) for the facilities provided.
# Scarred Patterns in Surface Waves ## I Introduction Parametrically forced surface waves arising as a result of the Faraday instability have provided an excellent opportunity to study nonlinear pattern formation. One of the special features of this system is that the system size relative to the basic correlation length can be varied, so that both the large aspect ratio and small aspect ratio limits can be explored. At large aspect ratio, all of the classic ordered patterns have been found, including stripes, hexagons, and squares; additional exotic structures such as quasicrystalline and superlattice patterns have also been found, as well as secondary instabilities giving rise to spatiotemporal chaos. Extensive references can be found in the literature. The case of small aspect ratio has also been studied in rectangular and circular containers. Typically the wave patterns found near onset are either normal modes of the container or symmetrical combinations of these modes. For example, in the circular case the normal modes are Bessel functions of the radius multiplied by sinusoidal functions of the azimuthal angle. The effects of container shape can be either a nuisance or a benefit depending on one’s point of view. One example of the usefulness of considering container geometry is the study by Lane et al. in which the conceptual differences between square symmetry and square geometry were elucidated. On the other hand, finite size effects impede efforts to utilize amplitude equations to describe the wave dynamics. The influence of the container shape is also a fundamental issue in the field of quantum chaos. There is known to be a close correspondence between certain finite quantum systems (or analogous systems supporting classical waves) and their particle (or ray-optic) counterparts. Of particular interest are non-integrable quantum systems with classical counterparts that are chaotic, such as the billiard formed from two semicircles separated by two straight edges. For almost all initial conditions, a particle launched inside such a billiard will exhibit sensitive dependence on initial conditions, the hallmark of chaos. Experimental, numerical and theoretical studies have shown that the statistical behavior of the wavefunctions of the quantum or wave version of this system is distinctly different from the behavior for “integrable” geometries such as a square or circle. Most notably, regions of high amplitude in the wavefunctions, called scars, are found along some of the periodic orbits of the classical counterpart. Effects of this type were explored to a limited extent using parametrically forced surface waves by Blümel et al. Their experiment utilized water as a working fluid and high frequency excitation. They reported observations of “scarlets”, ridge-like structures consistent with a random superposition of plane waves (but not located along periodic orbits). On the other hand, no clear evidence for the simpler “scarred” wavefunctions was given. The possible effects of hydrodynamic nonlinearity on the utility of the ray-optics approach have also not been discussed. Nonlinearity is in principle important, since even infinitesimally above the onset of instability, saturation of the wave amplitude is produced by nonlinear effects.
In this paper we first present observations of the spatial modes of Faraday waves in a finite non-integrable geometry, close to onset, where nonlinearity is as weak as possible and the waves might be usefully described by quasi-classical (ray-optics) methods. A low viscosity fluid in a stadium-shaped container is used for this purpose. Scarred patterns that resemble the computed eigenfunctions of the stadium geometry are clearly evident, but some of the linear eigenfunctions are apparently suppressed. The relative suppression of certain modes has been plausibly explained by Agam and Altshuler in terms of higher dissipation rates near the boundaries for these modes, in comparison with the modes that are observed. We then consider the evolution of the wave patterns as the degree of nonlinearity is increased. Transitions between modes are found at some driving frequencies, along with a general increase in spatial complexity. The scars that are characteristic of the linear eigenmodes are often evident substantially above onset. Finally, we consider the development of spatiotemporal chaos (STC) in the stadium geometry. Though the onset of STC is strongly dependent on the excitation frequency, the boundaries continue to play a large role, leading for example to coherent oscillations between symmetric and asymmetric states, a phenomenon that we study using the concept of pattern entropy. ## II Theoretical background A fluid layer with a free surface is subjected to an oscillatory vertical acceleration of amplitude $`a`$. The surface is flat until a critical acceleration $`a_c`$ is reached, at which point the surface becomes unstable and standing wave patterns are observed that oscillate at half the driving frequency. The threshold acceleration depends on the frequency and the viscosity of the fluid. It is convenient to define a dimensionless driving parameter $`ϵ=(a-a_c)/a_c`$ that measures the departure from onset and hence the degree of nonlinearity. The patterns are time-independent for a range of positive $`ϵ`$, but eventually a secondary instability gives rise to spatiotemporal chaos. In this section, we briefly discuss the linear inviscid theory, the effect of viscosity, and the role of nonlinearity, as they pertain to the present investigation. The linear stability theory for Faraday waves was first developed by Benjamin and Ursell. We summarize it here because the quantum/classical correspondence occurs for linear waves. They started from the Euler (inviscid) equation of motion and the continuity equation for an ideal fluid with a free surface in an oscillating gravitational field, and simplified the equations by retaining only the linear terms appropriate for small amplitude waves. The surface deformation $`h(𝐱,t)`$ as a function of spatial coordinate $`𝐱`$ and time $`t`$ may be written as a superposition of normal modes $`\psi _i(𝐱)`$ with coefficients $`A_i(t)`$: $$h(𝐱,t)=\underset{i}{\sum }A_i(t)\psi _i(𝐱),$$ (1) where $`\psi _i(𝐱)`$ is a complete orthogonal set of eigenfunctions of the Helmholtz equation, $$(\mathrm{}^2+k_i^2)\psi _i(𝐱)=0.$$ (2) The sidewall boundary condition is imposed by setting the normal component of the velocity of the fluid at the wall to zero. This leads to a quantization condition on $`k_i`$ (the wavenumber).
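For a non-integrable boundary such as the stadium, the eigenvalue problem (2) must be solved numerically. The sketch below (Python with NumPy/SciPy; the grid step and the targeted wavenumber are our own illustrative choices, not values taken from this paper) discretizes the Laplacian on a stadium-shaped mask with the container dimensions of Sec. III and extracts a few modes near a chosen wavenumber by shift-invert diagonalization. Dirichlet conditions ($`\psi =0`$ outside the mask) are assumed, in line with the pinned-contact-line modeling mentioned in Sec. III; a finer grid would be needed for quantitatively accurate eigenvalues at these wavenumbers.

```python
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.linalg import eigsh

r, l, h = 3.0, 4.5, 0.05          # cap radius and edge length [cm]; h is our grid step
x = np.arange(-(l / 2 + r), l / 2 + r + h, h)
y = np.arange(-r, r + h, h)
X, Y = np.meshgrid(x, y)
# stadium mask: central rectangle plus two semicircular caps
inside = ((np.abs(X) <= l / 2) & (np.abs(Y) < r)) | \
         ((np.abs(X) > l / 2) & (np.hypot(np.abs(X) - l / 2, Y) < r))

idx = -np.ones(inside.shape, dtype=int)
idx[inside] = np.arange(inside.sum())
A = lil_matrix((inside.sum(), inside.sum()))
for i, j in zip(*np.nonzero(inside)):
    m = idx[i, j]
    A[m, m] = 4.0 / h**2                          # -Laplacian, 5-point stencil
    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        ii, jj = i + di, j + dj
        if (0 <= ii < inside.shape[0] and 0 <= jj < inside.shape[1]
                and inside[ii, jj]):              # psi = 0 outside the mask (Dirichlet)
            A[m, idx[ii, jj]] = -1.0 / h**2

# eigenvalues of the -Laplacian are k_i^2; target modes near k ~ 12 cm^-1
vals, vecs = eigsh(A.tocsc(), k=6, sigma=144.0, which='LM')
print("k_i [cm^-1]:", np.sqrt(vals))
```

Reshaping a column of `vecs` back onto the mask gives a mode pattern that can be compared by eye with the computed eigenstates of Fig. 3.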
In addition, $`k_i`$ satisfies the dispersion relation which relates the frequency of oscillation $`\omega `$ of the fluid to the wavenumber: $$\omega ^2=\mathrm{tanh}(k_id)\left(\frac{\mathrm{\Gamma }}{\rho }k_i^3+gk_i\right),$$ (3) where $`\rho `$ is the fluid density, $`\mathrm{\Gamma }`$ is the surface tension, $`d`$ is the mean fluid depth, and $`g`$ is the gravitational acceleration. In our experiment, the wavenumber is sufficiently large that the surface tension term is much greater than the gravity term. The hyperbolic tangent factor is close to unity since $`k_id`$ is large. The time-dependent amplitudes $`A_i`$ of these normal modes satisfy the Mathieu equation: $$\frac{d^2A_i}{dt^2}+k_i\mathrm{tanh}(k_id)\left(k_i^2\frac{\mathrm{\Gamma }}{\rho }+g-a\mathrm{cos}(\omega t)\right)A_i=0.$$ (4) An instability occurs and the amplitude $`A_i`$ grows exponentially in time when the eigenvalue is in a band (known as the stability tongue) such that the frequency of oscillation of the fluid is half the driving frequency. The instability occurs at arbitrarily small driving amplitude in the absence of viscosity. Damping, which is provided by a number of distinct mechanisms in addition to bulk viscosity, can be included by means of a phenomenological linear damping term, as reviewed in the literature. Though treating damping in this way may not be fully adequate, the main effect is to reduce the width of the stability tongue in parameter space and raise the critical threshold to a finite amplitude. A proper theoretical treatment of instability in the viscous case has been given elsewhere, where the shapes of the computed stability boundaries were presented. If the acceleration $`a`$ is slightly higher than $`a_c`$, all modes in a band ($`𝐤-\mathrm{\Delta }𝐤/2,𝐤+\mathrm{\Delta }𝐤/2`$) are accessible and can be excited. The wavenumber width $`\mathrm{\Delta }k`$ of the stability band for small $`ϵ`$ has been estimated to be: $$\mathrm{\Delta }k=\frac{8\sqrt{2}\rho \nu \omega \sqrt{ϵ}}{3\mathrm{\Gamma }},$$ (5) where $`\nu `$ is the kinematic viscosity of the fluid. For a suitable choice of $`ϵ`$, $`\rho `$, and $`\omega `$, and assuming no interaction between modes, one then expects to find either single-mode patterns or superpositions of a few modes whose thresholds lie in the window ($`𝐤-\mathrm{\Delta }𝐤/2,𝐤+\mathrm{\Delta }𝐤/2`$). The cumulative number of eigenvalues of the Helmholtz equation $`N(k)`$ is related to the geometry and is given by: $$N(k)\simeq \frac{S}{4\pi }k^2\mp \frac{L}{4\pi }k,$$ (6) where $`S`$ is the area and $`L`$ is the perimeter of the stadium, and the negative or positive sign corresponds to Dirichlet or Neumann boundary conditions respectively. At high $`k`$, the perimeter term is negligible compared to the area term. Taking $`ϵ=0.01`$ and using Eqs. (5) and (6) with an area $`S`$ that is appropriate to the experiments reported here, one can estimate that the typical number of accessible modes is about 8 for a driving frequency of 70 Hz. What are the effects of nonlinearity? A nonlinear theory that describes regular Faraday wave patterns in large containers rather well has been given by Zhang and Viñals and Chen and Viñals. In this theory, an evolution equation is determined for the time derivative of the amplitude of a typical Fourier mode $`B_1`$ of the interfacial deformation.
It may be expressed in the form $$\frac{dB_1}{dT}=\alpha B_1-g_0B_1^3-\underset{m\ne 1}{\sum }g(\theta _{m1})B_m^2B_1,$$ (7) where $`T`$ is a slow time variable, the linear term is due to the basic instability, the cubic self-interaction term produces saturation of the wave pattern, and the coupling terms to other modes (which depend on their relative angle $`\theta `$) are also of cubic order. The constants have been computed, and the ratio $`g(\theta )/g_0`$ is of order unity and independent of $`ϵ`$. This implies that coupling effects between the accessible modes may be substantial. The theory was able to explain the striking cascade of 2n-fold patterns discovered by Binks and van de Water. It also explains semi-quantitatively the appearance of striped, square and hexagonal patterns observed in experiments using viscous fluids in large containers. However, the amplitude equation is variational, and is only appropriate near onset. It cannot describe nonuniform patterns, secondary instabilities, or spatiotemporal chaos. An earlier approach that allowed spatially varying patterns was given by Milner. The amplitude equations also ignore the effects of the boundary. For slightly viscous fluids in small containers, a large fraction of the dissipation occurs in the boundary layer and can in fact be the leading cause of dissipation. In work stimulated by the experiments reported here, Agam and Altshuler show that the dissipation near the boundary depends strongly on the nature of the mode. ## III Experimental Apparatus The apparatus is similar to that used by Gluckman et al. Fig. 1 shows a schematic diagram of the experimental setup. The stadium-shaped container, made of Delrin, has the following dimensions: depth $`d=1.25`$ cm, radius of semicircles $`r=3`$ cm, and length of straight edge $`l=4.5`$ cm. The top and bottom plates of the container are made of glass to allow the transmission of light. The fluid is silicone oil of kinematic viscosity $`0.02\mathrm{cm}^2\mathrm{s}^{-1}`$, chosen for its stable surface tension and good wetting characteristics. To minimize meniscus waves, a brim-full boundary condition was prepared by machining a ledge in the boundary at the same height as the fluid; the fluid therefore meets the ledge at $`90^{\circ }`$. By maintaining the fluid under brim-full conditions, the fluid surface is pinned to the ledge and boundary dissipation is reduced. This situation has been modeled as a Dirichlet boundary condition ($`\psi =0`$). The container is rigidly attached to an electromagnetic shaker (Vibration Test Systems Model 40C) and the acceleration is measured with an accelerometer. The apparatus is placed within a temperature-controlled environment. The driving frequency is selected to be greater than 55 Hz, so as to be in the capillary wave limit, but less than 75 Hz, to prevent the density of modes from becoming too high. The patterns are imaged with shadowgraph techniques; the specific implementation has been discussed in depth elsewhere. Light from an expanded and collimated incident beam is collected and imaged onto a CCD (charge-coupled device) video camera via a large collecting lens and the camera lens. The resulting images can be interpreted by considering which rays of light reach the CCD plane after passing through the fluid. The relatively small aperture of the camera lens restricts the rays that reach the CCD. All the rays that leave the fluid surface at an angle, measured from the normal, greater than a critical angle (typically about $`10^{-2}`$ radians) are blocked.
Since the critical angle is so small, light is collected only from the nearly horizontal regions of the wave surface. Therefore, the bright regions in an image correspond to local extrema, or antinodes, of the wave pattern. Images are averaged over one video frame, 1/30 s, which is more than a full cycle of the standing waves. The imaging process is nonlinear in the wave height, but a quantitative model for the measured intensity has been presented and tested previously. ## IV Patterns Near Onset We made a survey of the time-independent wave patterns near onset over the range 55 to 65 Hz by changing the frequency in 0.1 Hz steps. In order to obtain useful statistics for the surface wave patterns, a systematic procedure was followed: for each selected frequency, $`a_c`$ was first measured to within 0.1% and then $`ϵ`$ was raised to 0.01, the smallest value that could be maintained accurately. The threshold $`a_c`$ is $`3.1\mathrm{m}\mathrm{s}^{-2}`$ at $`f=60`$ Hz and increases weakly with frequency. A sequence of images from the survey for driving frequency $`f`$ between 60.1 Hz and 62.8 Hz, with an approximate spacing of 0.4 Hz (i.e. every fourth image), is shown in Fig. 2. This spacing is comparable to the experimentally observed increment (0.3 Hz) typically required to obtain a distinctly different pattern in this frequency range; it is greater than the computed mean level spacing (about 0.1 Hz in this frequency range, as estimated from Eqns. (5) and (6)). Most of the observed patterns show the reflection symmetries of the stadium. Regions of large amplitude are often located along lines that would form periodic orbits of the classical analog. Since these regions are similar to those found in other numerical and experimental investigations, we refer to the patterns containing such enhancements as scarred patterns. We compare the observed patterns with numerically computed eigenstates of the Laplace operator for the stadium geometry, obtained for comparable mean wavenumber using an algorithm due to Heller. Sample computed eigenstates (selected from a large number of distinct patterns) are shown in Fig. 3. We find that some of the computed states resemble observed patterns. On the other hand, a one-to-one correspondence for sequences of eigenstates was definitely not observed. Furthermore, certain computed eigenstates that occur frequently, such as the “whispering gallery” mode (Fig. 3d), were not observed in the full experimental frequency range. Interestingly, most of the observed symmetric patterns resemble one of three basic classes of eigenstates shown in Fig. 3(a-c). For instance, Figs. 2(a,d) resemble the bouncing ball eigenstate, Fig. 3(a); Figs. 2(g,h) are close to the longitudinal eigenstate of Fig. 3(b); and Figs. 2(b,e) are a combination of the longitudinal and bowtie eigenstates of Figs. 3(b,c). It is noteworthy that among the observed patterns are states, such as Fig. 2(c), that do not have the reflection symmetries of the stadium. We refer to these as disordered patterns. Table I summarizes the percentages of the onset patterns that were visually judged to approximate particular computed scarred eigenstates in the frequency range 55 to 65 Hz at $`ϵ=0.01`$. (Visual comparison was used because automated pattern recognition, which we attempted, was not sufficiently reliable.)
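The mode-density estimates quoted above are easy to reproduce. The short calculation below (Python; the surface tension and density are assumed typical values for a low-viscosity silicone oil and are not quoted in this paper) combines the capillary limit of the dispersion relation (3), the bandwidth (5), and the area term of the Weyl estimate (6) for the stadium of Sec. III.

```python
import numpy as np

# Assumed fluid parameters for ~2 cS silicone oil (Gamma in dyn/cm,
# rho in g/cm^3); nu and the container geometry are from Secs. II-III.
Gamma, rho, nu = 19.0, 0.87, 0.02
r_cap, l_edge = 3.0, 4.5
S = np.pi * r_cap**2 + 2 * r_cap * l_edge     # stadium area [cm^2]

f_drive = 60.0                                # driving frequency [Hz]
omega = np.pi * f_drive                       # subharmonic wave response at f/2
k = (omega**2 * rho / Gamma) ** (1.0 / 3.0)   # capillary limit of Eq. (3)

dN_dk = S * k / (2 * np.pi)                   # area term of Eq. (6)
dk_domega = 2 * omega * rho / (3 * Gamma * k**2)
spacing = 1.0 / (dN_dk * dk_domega * np.pi)   # mean mode spacing in f_drive [Hz]

eps = 0.01
dk_band = 8 * np.sqrt(2) * rho * nu * omega * np.sqrt(eps) / (3 * Gamma)  # Eq. (5)
print(spacing, dN_dk * dk_band)
```

With these assumed values the mean level spacing comes out near 0.07 Hz and the number of accessible modes at $`ϵ=0.01`$ is about 7, consistent with the estimates of about 0.1 Hz and about 8 modes cited in the text.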
Since the discovery of scars, there have been a number of theoretical attempts to obtain a quantitative measure for scarring based on eigenstate overlap, Wigner function overlap, and inverse participation ratios for the amplitudes in the vicinity of the scars. To utilize such measures experimentally, the local wave amplitude is required with high accuracy. The shadowgraph technique used here is quantitative but nonlinear, and does not provide this information. Development of a quantitative experimental measure of scarring has proven to be difficult even for linear probes. We use the concept of “pattern entropy” as a tool to classify the patterns. Egolf, Melnikov, and Bodenschatz have applied this concept successfully to measure the complexity of patterns observed in Rayleigh-Bénard convection. The pattern entropy is calculated from the power spectrum of the pattern. If $`P(𝐤)`$ is the normalized two-dimensional power spectrum of the pattern at time $`t`$, then the pattern entropy $`E(t)`$ is defined as: $$E(t)=-\underset{𝐤}{\sum }P(𝐤)\mathrm{ln}(P(𝐤)).$$ (8) Here $`E(t)`$ measures the spectral complexity of a pattern. If the image consists of just one Fourier mode of amplitude unity, then $`E=0`$; otherwise $`E>0`$. To minimize the effects of experimental noise, we sum contributions only in a band of wavenumbers centered at the mean wavenumber of the pattern, with a range of $`\pm 25\%`$. In Table II, the approximate entropy ranges for the various types of patterns observed in the range 55-65 Hz are given. Note that the patterns are not distinguishable solely by their entropy, since some of the ranges overlap. However, the pattern entropy can be useful in studies of time dependence farther above onset, as we show in Sec. VI. ## V Patterns beyond onset Here we examine the evolution of the wave patterns farther from onset, where the interactions between different Fourier components of the waves become increasingly nonlinear and the approximation of Eq. (2) becomes inapplicable. The patterns were observed to be time-independent, while changing adiabatically with $`ϵ`$, for $`ϵ<0.3`$. On the other hand, they become weakly time-dependent for $`ϵ\gtrsim 0.3`$ at most frequencies. The evolution toward time-dependence with increasing $`ϵ`$ depends on the excitation frequency. Three examples of this evolution are shown in Figs. 4-6. For some driving frequencies, the patterns remain reflection-symmetric as $`ϵ`$ is increased, but exhibit transitions from one spatial mode to another prior to the onset of time dependence, as in Fig. 5. In these cases, the transition to spatial disorder (asymmetry) tends to coincide with the onset of spatiotemporal chaos (STC). It is important to note that as $`ϵ`$ increases, the width of the stability tongue grows (see Eq. (5)): for example, at $`f=74.1`$ Hz and $`ϵ=0.2`$ the number of accessible modes of the container is approximately 35. Therefore, the observed mode switching might be a combined effect of the increase in the number of accessible modes and an increase in the degree of nonlinearity that couples them. It is remarkable that the container boundary continues to influence the patterns even at $`ϵ=0.252`$ (Fig. 5d), where nonlinearity clearly plays a major role. The variability of the nonlinear development is evident from examining the examples in Figs. 4-6. In Fig. 4, there is a general increase in complexity with $`ϵ`$, but the dominant mode does not change. In Fig.
6, the near-onset pattern is nearly obliterated even at $`ϵ=0.025`$ by the growing complexity, and the pattern is also distinctly asymmetric, while remaining time-independent. In one instance, a complex but symmetric time-independent pattern was observed at an unusually high driving amplitude of $`ϵ=0.8`$ at a frequency of 65.0 Hz, in a regime where spatiotemporal chaos is usually fully developed. The image shown in Fig. 7 was averaged over 3000 images taken over a period of 5 minutes to test for time-dependence. The lack of blurring demonstrates its time-independence. ## VI Role of ordered states in the regime of spatiotemporal chaos For driving amplitudes just beyond the frequency-dependent onset of spatiotemporal chaos, the time dependence of the pattern is often intermittent; the patterns appear to oscillate between states that are relatively ordered and states that are relatively disordered (see the images in Fig. 8 and a corresponding web-based movie). The power spectra of these typical patterns are also shown in Fig. 8, and indicate the greater complexity of the disordered case, where the power is distributed more uniformly on the ring corresponding to the dominant wavenumber. The time dependence and complexity of the patterns are monitored using two quantitative measures: (i) the rate of change of the pattern $`R(t)`$, and (ii) the entropy $`E(t)`$ as defined in Eq. (8). The rate of change $`R(t)`$ was calculated by subtracting two consecutive images I($`𝐱,t+\mathrm{\Delta }t`$) and I($`𝐱,t`$), separated by a time interval of $`\mathrm{\Delta }t=0.36`$ seconds, and computing $`R(t)`$ according to the following formula: $$R(t)=c\left\langle \left(I(𝐱,t+\mathrm{\Delta }t)-I(𝐱,t)\right)^2\right\rangle ,$$ (9) where $`𝐱`$ is the position, the angular brackets denote a spatial average, and $`c`$ is a constant scaling factor. In Fig. 9(a), a section of the resulting quantity $`R(t)`$ is shown as a function of time for $`f=71.9`$ Hz and $`ϵ=0.55`$. The data have been smoothed by averaging over four adjacent points. At every pronounced minimum of these smoothed data we find that the corresponding pattern is symmetric and appears to have long-range order. At all other times the pattern is asymmetric and disordered. A graph of the corresponding entropy $`E(t)`$ is shown in Fig. 9(b). Examples of the ordered (X) and disordered (Y) states are shown in Fig. 8, where the locations in time are indicated by the symbols X ($`t=124`$ s) and Y ($`t=136`$ s), respectively, in Fig. 9. At X, where the pattern entropy is low, $`R(t)`$ is small, while at Y, where the pattern entropy is high, $`R(t)`$ is large. This correlation between $`R(t)`$ and $`E(t)`$ holds true for most of the other strong peaks and valleys. For higher $`ϵ`$ ($`>0.8`$) the oscillations diminish in strength and uniform STC is observed. In this regime, following Gluckman et al., we obtained the time-averaged pattern by adding 3000 instantaneous images obtained over five minutes. An example of an instantaneous pattern is shown in Fig. 10(a). The resultant average pattern is shown in Fig. 10(b). The average reveals considerable ordered structure, including remnants of the bouncing ball states. This phenomenon is seen at most frequencies and persists up to $`ϵ\approx 1.6`$. Beyond this point, scars were visually absent and the average patterns are locally parallel to the boundary, as observed in circular or square patterns by Gluckman et al.
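For reference, both diagnostics are simple to compute from a recorded image sequence. The sketch below (Python; the array layout and function names are ours) implements the band-limited pattern entropy of Eq. (8) and the rate of change of Eq. (9), assuming `images` is a (frames × height × width) array of shadowgraph intensities.

```python
import numpy as np

def pattern_entropy(img, rel_band=0.25):
    # power spectrum of one frame; the mean is removed to suppress the DC peak
    P = np.abs(np.fft.fftshift(np.fft.fft2(img - img.mean()))) ** 2
    ny, nx = img.shape
    kx, ky = np.meshgrid(np.arange(nx) - nx // 2, np.arange(ny) - ny // 2)
    kr = np.hypot(kx, ky)
    k0 = (kr * P).sum() / P.sum()          # mean wavenumber of the pattern
    band = (kr > (1 - rel_band) * k0) & (kr < (1 + rel_band) * k0)
    p = P[band] / P[band].sum()            # normalized spectrum in the band
    p = p[p > 0]
    return -(p * np.log(p)).sum()          # Eq. (8)

def rate_of_change(images, c=1.0):
    d = np.diff(images.astype(float), axis=0)   # I(x, t+dt) - I(x, t)
    return c * (d ** 2).mean(axis=(1, 2))       # spatial average, Eq. (9)
```

Ordered episodes then appear as simultaneous minima of $`E(t)`$ and $`R(t)`$, as in Fig. 9.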
## VII Discussion In this paper we have discussed the parametrically forced wave patterns formed in a stadium-shaped container holding a low viscosity fluid, as a function of driving frequency and amplitude. The patterns near onset ($`ϵ=0.01`$) were compared to a simple model consisting of linearized equations that reduce to the Helmholtz equation with Dirichlet boundary conditions (see Section II). While a large proportion of the observed patterns resemble the numerically computed eigenstates of the stadium, many of the computed eigenstates (for instance the whispering gallery modes) were not observed, in a scan with sufficient frequency resolution to detect them. The observed patterns may be broadly classified into three categories: (a) bouncing ball patterns, (b) longitudinal patterns, and (c) bowtie patterns, which have high amplitudes near the corresponding periodic orbits. In addition, a significant number of disordered patterns (lacking in symmetry but time-independent) were observed near onset. Furthermore, the observed mode spacing ($`\approx 0.3`$ Hz) is somewhat greater than the mean eigenvalue separation implied by Eq. (6) ($`\approx 0.1`$ Hz). These observations imply that the simplest model is inadequate even close to onset. Recently, Agam and Altshuler have offered an explanation for the selection of modes at onset by considering the stability of the periodic orbits corresponding to the scars. They argue that a threshold for excitation of a particular scarred pattern is given by: $$h>\gamma _b+\gamma _p+\lambda /2,$$ (10) where $`h`$ is proportional to the rate at which energy is pumped into the system (i.e. the driving amplitude), $`\gamma _b`$ is the dissipation rate in the bulk of the fluid, $`\gamma _p`$ is the dissipation near the perimeter, and $`\lambda `$ is the Lyapunov exponent of the ray orbit that predominantly scars the pattern. The bulk dissipation $`\gamma _b=\nu k^2`$ is the same for all the patterns and corresponds to approximately $`2\mathrm{sec}^{-1}`$ in the frequency window used in the experiments. Therefore the appearance of a scarred pattern depends on a combination of the two remaining factors, which are orbit-dependent. Stability of a pattern is favored both by a small Lyapunov exponent of the associated scarring orbit and by small perimeter dissipation $`\gamma _p`$. In the limit of high wavenumber $`k`$ relevant to our experiments, Agam and Altshuler derive an expression for the damping rate of scars due to boundary effects: $$\gamma _p=\frac{\sqrt{\omega \nu /2}}{L}\underset{i}{\sum }\frac{(1-\mathrm{cos}^2(\varphi _i))}{\mathrm{cos}(\varphi _i)},$$ (11) where $`\omega `$ is the angular frequency, $`\nu `$ is the viscosity, $`L`$ is the length of the periodic orbit, and the sum is over all the collision points of the orbit with the boundary, $`\varphi _i`$ being the angle between the orbit and a line perpendicular to the boundary at the collision point. The parameter $`\gamma _p`$ and the Lyapunov exponent have been calculated for most of the shortest periodic orbits and for a long ergodic orbit. The perimeter dissipation $`\gamma _p`$, which can be either smaller or larger than $`\gamma _b`$, varies between $`0.2\mathrm{sec}^{-1}`$ and $`3.0\mathrm{sec}^{-1}`$. The extreme values correspond to patterns scarred by the horizontal orbit and the ergodic orbit, respectively. Most of the features observed in the experiments near onset appear to be captured by Eq. (10).
The bouncing ball orbit is prominent because its Lyapunov exponent is zero, and the longitudinal orbit occurs because of its relatively low perimeter dissipation. The whispering gallery orbits, and others with angles that come close to $`\pi /2`$, have particularly large boundary dissipation and are suppressed. Eq. (10) implies that if one increases the dissipation on the boundary so that it dominates, the longitudinal orbit will be the last to survive. This is precisely what we observe in the experiments when the level of the fluid is lowered, a change that results in higher perimeter dissipation because of motion of the contact line. The theory just cited is also able to account for the observed tendency of one scarred pattern to suppress other nearby eigenmodes through nonlinear interactions, as well as for the existence of some asymmetric patterns. At some driving frequencies the patterns are observed to switch modes as $`ϵ`$ is increased (Fig. 5). This occurs especially when the bowtie state is observed at onset, an observation that may be related to the larger perimeter dissipation and Lyapunov exponent of the bowtie mode. At higher driving amplitude, additional nonlinear effects occur, as indicated by the growth in spatial complexity, and no adequate theoretical treatment exists. However, the onset patterns are often robust, persisting in the presence of increasing spatial complexity. The boundaries remain influential even beyond the onset of time dependence. At sufficiently high $`ϵ`$ ($`\gtrsim 0.2`$), the onset patterns are no longer visible, though they persist in the time average. Strong intermittency in the degree of order of the patterns is observed in the regime of spatiotemporal chaos. Furthermore, the rate of change of the pattern $`R(t)`$ just above the STC onset is strongly correlated with the order as characterized by the entropy $`E(t)`$. The more ordered patterns evolve more slowly in time, a striking observation that remains to be explained. This tendency for ordered patterns to be more stable may be related to the complex but highly symmetric and time-independent pattern observed at atypically large acceleration (Fig. 7). ## VIII Acknowledgments We thank Oded Agam and Boris Altshuler for many useful discussions, and E.J. Heller for software to determine eigenstates of the stadium. Bruce Boyes provided technical help. This work was supported by the National Science Foundation under Grant DMR-9704301. A. K. acknowledges a grant from the Alfred P. Sloan Foundation.
# Small-eccentricity binary pulsars and relativistic gravity

## 1. Introduction

The majority of binary pulsars are found in orbit with a white-dwarf companion. Due to mass transfer in the past, these systems have very small orbital eccentricities and, therefore, neither the relativistic advance of periastron nor the Einstein delay has been measured for any of these binary pulsars. In fact, the only post-Keplerian parameters measured with reasonable accuracy for a small-eccentricity binary pulsar are the two Shapiro parameters in the case of PSR B1855+09 (Kaspi et al. 1994). However, since the orbital period of this system is 12.3 days, the expected gravitational wave damping of the orbital motion is by far too small to be of any importance for timing observations and, consequently, there is no third post-Keplerian parameter which would allow the kind of test conducted in double-neutron-star systems (Damour & Taylor 1992). On the other hand, many alternative theories of gravity, tensor-scalar theories for instance, predict effects that depend strongly on the difference between the gravitational self-energy per unit mass ($`ϵ\equiv E^{\mathrm{grav}}/mc^2`$) of the two masses of a binary system (Will 1993; Damour & Esposito-Farése 1996a,b). While this difference in binding energies is comparably small for double-neutron-star systems, it is large in neutron star–white dwarf systems, since for a white dwarf $`ϵ\sim 10^{-4}`$ while for a 1.4 $`M_{\odot }`$ neutron star $`ϵ\simeq 0.15`$.

## 2. Gravitational dipole radiation

Unlike general relativity, many alternative theories of gravity predict the presence of all radiative multipoles — monopole and dipole, as well as quadrupole and higher multipoles (Will 1993). For binary systems, scalar-tensor theories, for instance, predict a loss of orbital energy which at highest order is dominated by scalar dipole radiation. As a result, the orbital period, $`P_b`$, of a circular binary system should change according to

$$\dot{P}_b^{(\mathrm{dipole})}\simeq -\frac{4\pi ^2G_{*}}{c^3P_b}\frac{m_pm_c}{m_p+m_c}(\alpha _p-\alpha _c)^2,$$ (1)

where $`m_p`$ and $`m_c`$ denote the mass of the pulsar and its companion, respectively, $`G_{*}`$ is the ‘bare’ gravitational constant and $`c`$ the speed of light. The total scalar charge of each star is proportional to its mass and its ‘effective coupling strength’ $`\alpha (ϵ)`$ (Damour & Esposito-Farése 1996b). For a white-dwarf companion $`|\alpha _c|\ll 1`$, and thus the expression $`(\alpha _p-\alpha _c)^2`$ in equation (1) can be of order one if the pulsar develops a significant amount of scalar charge. In this case the gravitational wave damping of the orbit is completely dominated by the emission of gravitational dipole radiation.

PSR J1012+5307 is a 5.3-ms pulsar in a 14.5-h circular orbit with a low-mass white-dwarf companion. Since its discovery in 1993 (Nicastro et al. 1995) this pulsar has been timed on a regular basis using the Jodrell Bank 76-m and the Effelsberg 100-m radio telescopes, sometimes achieving a timing accuracy of 500 ns after just 10 min of integration (Lange et al., this conference). In addition, the white-dwarf companion appears to be relatively bright ($`V=19.6`$) and shows strong Balmer absorption lines. Based on white-dwarf model calculations, a companion mass of $`m_c=0.16\pm 0.02M_{\odot }`$ and a distance of $`840\pm 90`$ pc were derived (van Kerkwijk et al. 1996, Callanan et al. 1998).
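To gauge the size of the effect in Eq. (1), one can evaluate it numerically. The minimal sketch below uses the system parameters quoted for PSR J1012+5307 in the surrounding text, substitutes Newton's constant for the 'bare' constant $`G_{*}`$ (adequate at this level of precision), and takes a trial $`\alpha _p`$ purely for illustration.

```python
import math

G = 6.674e-11        # m^3 kg^-1 s^-2, Newton's constant standing in for G_*
C = 2.998e8          # speed of light, m/s
M_SUN = 1.989e30     # kg

def pdot_dipole(m_p, m_c, p_b, alpha_p, alpha_c=0.0):
    """Dimensionless orbital decay from Eq. (1); masses in solar masses,
    orbital period p_b in seconds."""
    mu = m_p * m_c / (m_p + m_c) * M_SUN
    return -4 * math.pi**2 * G * mu / (C**3 * p_b) * (alpha_p - alpha_c)**2

# PSR J1012+5307: m_p = 1.64, m_c = 0.16, P_b = 14.5 h
print(pdot_dipole(1.64, 0.16, 14.5 * 3600.0, alpha_p=0.02))
# ~ -2e-13: comparable to the measured uncertainty on Pdot_b quoted below,
# which is why the timing data translate into a bound |alpha_p| < ~0.02.
```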
Further, a reliable radial-velocity curve for the white dwarf has been extracted, which, in combination with the pulsar timing information, gave a mass for the pulsar of $`m_p=1.64\pm 0.22M_{\odot }`$. Since $`\dot{P}_b=(0.1\pm 1.8)\times 10^{-13}`$ for this binary system, we find from equation (1)

$$|\alpha _p|<0.02\text{ (95\% C.L.)}$$ (2)

Simulations show that this value should improve by a factor of five within the next three years (Lange et al., in prep.).

## 3. Gravitational Stark effects

### 3.1. Violation of the strong equivalence principle

The strong equivalence principle (SEP) requires the universality of free fall of all objects in an external gravitational field, regardless of their mass, composition and fraction of gravitational self-energy. While all metric theories of gravity share the property of universality of free fall of test particles (weak equivalence principle), many of those considered as realistic alternatives to general relativity predict a violation of the SEP. A violation of the SEP can be understood as an inequality between the gravitational mass, $`m_g`$, and the inertial mass, $`m_i`$, which can be written as a function of $`ϵ`$:

$$m_g/m_i\equiv 1+\delta (ϵ)=1+\eta ϵ+𝒪(ϵ^2).$$ (3)

While the analysis of lunar-laser-ranging data tightly constrains the ‘Nordtvedt parameter’ $`\eta `$ (Müller et al. 1997), it indicates nothing about a violation of the SEP in strong-field regimes, i.e. terms of higher order in $`ϵ`$, due to the smallness of $`ϵ`$ for solar-system bodies. For neutron stars, however, $`ϵ\simeq 0.15`$, and thus binary pulsars with white-dwarf companions ($`ϵ\sim 10^{-4}`$) provide ideal laboratories for testing a violation of the SEP due to nonlinear properties of the gravitational interaction (Damour & Schäfer 1991).

In the case of a violation of the SEP, the eccentricity vector of a small-eccentricity binary-pulsar system exposed to the external gravitational field of the Galaxy, $`𝐠`$, is a superposition of a constant vector $`𝐞_F`$ and a vector $`𝐞_R`$ which is turning in the orbital plane with the rate of the relativistic advance of periastron. The ‘induced’ eccentricity $`𝐞_F`$ points into the direction of the projection of the Galactic acceleration onto the orbital plane, $`𝐠_{\perp }`$, and $`e_F\propto (\delta _p-\delta _c)P_b^2g_{\perp }`$. However, neither the length of $`𝐞_R`$ nor its rotational phase $`\theta `$ are known quantities. We therefore have to proceed as follows. Given a certain $`(\delta _p-\delta _c)\simeq \delta _p`$, i.e. a certain $`e_F`$ for a given binary pulsar, the observed eccentricity $`e`$ sets an upper limit to $`|\theta |`$ which is independent of $`e_R`$: $`\mathrm{sin}|\theta |<e/e_F`$ for $`e<e_F`$, and $`|\theta |\le \pi `$ for $`e\ge e_F`$ (Wex 1997). We now have to calculate an upper limit for $`\theta `$ for every observed small-eccentricity binary pulsar and compare the result with Monte-Carlo simulations of a large number of (cumulative) distributions for the (uniformly distributed) angle $`\theta `$. This way, by counting the number of simulated distributions which are in agreement with the ‘observed’ limits, one obtains the confidence level with which a certain $`\delta _p`$ is excluded. As a safe upper limit for $`|\delta _p|`$ we find

$$|\delta _p|<0.009\text{ (95\% C.L.)}$$ (4)

Note that, in order to calculate $`e_F`$ for a given binary system, we also need the masses of pulsar and companion and the location and orientation of the binary system in the Galaxy.
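The statistical comparison just described can be sketched in a few lines. In the following, the observed eccentricities and induced $`e_F`$ values are stand-in numbers (in reality $`e_F`$ follows from the trial $`\delta _p`$, the orbital period and the Galactic acceleration, with the population assumptions spelled out next); only the logic of the $`\theta `$ test is as in the text.

```python
import numpy as np

rng = np.random.default_rng(1)

e_obs = np.array([1.7e-6, 5.0e-6, 2.0e-5])  # observed eccentricities (made up)
e_f = np.array([4.0e-5, 3.0e-5, 6.0e-5])    # induced e_F for a trial delta_p (made up)

# Upper limit on |theta| for each pulsar, independent of e_R:
theta_max = np.where(e_obs < e_f,
                     np.arcsin(np.clip(e_obs / e_f, 0.0, 1.0)),
                     np.pi)

# Fraction of simulated populations (theta uniform) that are consistent
# with all of the 'observed' limits at once:
n_trials = 100_000
theta_sim = rng.uniform(0.0, np.pi, size=(n_trials, len(e_obs)))
frac = np.all(theta_sim <= theta_max, axis=1).mean()

print(f"consistent fraction: {frac:.5f}")
print("trial delta_p excluded at 95% C.L." if frac < 0.05 else "not excluded")
```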
Where there were no restrictions from timing and optical observations, the pulsar masses were assumed to be uniformly distributed in the range $`1.2M_{\odot }<m_p<2M_{\odot }`$, the companion masses were taken from evolutionary scenarios (Tauris & Savonije 1999), and the pulsar distances were estimated using the Taylor–Cordes model, assuming a typical error of 25% (Taylor & Cordes 1993). Finally, the orientation of the ascending node in the sky, which is an unobservable parameter for all our binary systems, was treated as a variable uniformly distributed between 0 and $`2\pi `$.

### 3.2. Violation of local Lorentz invariance and conservation laws

If gravity is mediated in part by a long-range vector field or by a second tensor field, one expects the global matter distribution in the Universe to select a preferred frame for the gravitational interaction (Will & Nordtvedt 1972). At the post-Newtonian level, gravitational effects associated with such a violation of the local Lorentz invariance of gravity are characterized by two theory-dependent parameters, $`\alpha _1`$ and $`\alpha _2`$. If $`\alpha _1`$ were different from zero, the eccentricity of a binary system which moves with respect to the global matter distribution in the Universe would suffer a secular change similar to that induced by a violation of the SEP. This time, $`|e_F|\propto \alpha _1|m_p-m_c|P_b^{1/3}w_{\perp }`$, where $`𝐰`$ denotes the velocity of the binary system with respect to the preferred frame, i.e. the cosmic microwave background. Again, we can perform a Monte-Carlo analysis as outlined in the previous section to derive the upper limit

$$|\alpha _1|<1.2\times 10^{-4}\text{ (95\% C.L.)}$$ (5)

This limit is slightly better than the limit obtained from lunar-laser-ranging data (Müller et al. 1996) and, more importantly, also holds for strong gravitational-field effects that could occur in the strong-field regions of neutron stars. Due to its small eccentricity, $`e<1.7\times 10^{-6}`$ (95% C.L.), and its high velocity with respect to the cosmic microwave background ($`w\simeq 500`$ km/s), PSR J1012+5307 turns out to be the most important binary system for this kind of analysis (Lange et al., in prep.). While for PSR J1012+5307 the radial velocity of the system is also known, from spectroscopic observations of the white-dwarf companion, for all the other binary pulsars no radial-velocity information is available and we have to assume an isotropic probability distribution for the 3-d velocity.

In theories of gravity which violate the local Lorentz invariance and the momentum conservation law, a rotating self-gravitating body will suffer a self-acceleration given by $`𝐚_{\mathrm{self}}=-\frac{1}{3}\alpha _3ϵ𝐰\times 𝛀`$ (Nordtvedt & Will 1972), where $`\alpha _3`$ is a theory-dependent parameter and $`𝛀`$ denotes the rotational velocity of the body. Again, binary pulsars are ideal probes for this kind of self-acceleration effect (Bell & Damour 1996). A careful analysis analogous to the previous analyses (SEP, local Lorentz invariance) gives

$$|\alpha _3|<1.5\times 10^{-19}\text{ (95\% C.L.)}$$ (6)

Note that the statistical tests for gravitational Stark effects presented here for the first time appropriately take care of selection effects by simulating the whole population, thereby giving the first reliable limits for $`\delta _p`$, $`\alpha _1`$, and $`\alpha _3`$.

#### Acknowledgments.

I am grateful to Kenneth Nordtvedt for pointing out to me the problem of selection effects related to binary-pulsar limits on gravitational Stark effects.
I thank Christoph Lange for numerous valuable discussions.

## References

J. F. Bell & T. Damour: Class. Quantum Grav., 13, 3121 (1996)

P. J. Callanan, P. M. Garnavich & D. Koester: MNRAS, 298, 207 (1998)

T. Damour & G. Esposito-Farése: Phys. Rev. D, 53, 5541 (1996a)

T. Damour & G. Esposito-Farése: Phys. Rev. D, 54, 1474 (1996b)

T. Damour & G. Schäfer: Phys. Rev. Lett., 66, 2550 (1991)

T. Damour & J. H. Taylor: Phys. Rev. D, 45, 1840 (1992)

V. M. Kaspi, J. H. Taylor & M. F. Ryba: ApJ, 428, 713 (1994)

J. Müller, K. Nordtvedt & D. Vokrouhlický: Phys. Rev. D, 54, R5927 (1996)

J. Müller, M. Schneider, K. Nordtvedt & D. Vokrouhlický: in Proceedings of the 8th Marcel Grossmann Meeting, Jerusalem (1997)

L. Nicastro, A. G. Lyne, D. R. Lorimer, P. A. Harrison, M. Bailes & B. D. Skidmore: MNRAS, 273, L68 (1995)

K. Nordtvedt & C. M. Will: ApJ, 177, 775 (1972)

T. M. Tauris & G. J. Savonije: A&A, 350, 928 (1999)

J. H. Taylor & J. M. Cordes: ApJ, 411, 674 (1993)

M. H. van Kerkwijk, P. Bergeron & S. R. Kulkarni: ApJ, 467, L89 (1996)

N. Wex: A&A, 317, 976 (1997)

C. M. Will: Theory and Experiment in Gravitational Physics (Cambridge University Press, Cambridge, 1993)

C. M. Will & K. Nordtvedt: ApJ, 177, 757 (1972)
# Duality between coordinates and Dirac field

## Abstract

The duality between the Cartesian coordinates on the Minkowski space-time and the Dirac field is investigated. Two distinct possibilities to define this duality are shown to exist. In both cases, the equations satisfied by the prepotentials are of second order.

PACS numbers: 03., 03.65.-w, 03.65.Pm

Recently, a duality between the Cartesian coordinates on the Minkowski space-time and the solutions of the Klein-Gordon equation was derived by employing a method to invert the wave functions which was previously used in the context of Seiberg-Witten theory . Several consequences of this duality were analyzed in subsequent studies. Based on it, an equivalence principle was proposed, stating that all physical systems should be related by coordinate transformations . By applying the equivalence principle to the phase space in the Hamilton-Jacobi formulation, it was discovered that the energy quantization of bound states and the tunneling effect can be derived without any assumption about the probability interpretation of the wave function . In this way, a trajectory representation of quantum mechanics that was previously used in can also be obtained. The simplification of the coordinate-field duality at the Planck scale and for Heisenberg’s uncertainty principle was discussed in . In a coordinate-free formulation of gravity was analyzed, while in the coordinate-field duality was extended to curved space-time manifolds, and in a local formulation of gravity in terms of matter fields was proposed. (For recent attempts to formulate gravity and supergravity in terms of quantum objects, see .)

Although the above mentioned results were obtained basically from the coordinate-field duality, one should note that, in the relativistic case, this duality is constructed exclusively for the Klein-Gordon field. In reference it was suggested that the coordinate-field duality should hold for the Dirac field, too. However, this suggestion was never realized in a precise way. In this letter we follow and concern ourselves with investigating the coordinate-field duality in the case of the Dirac field. For definiteness, we work in the four-dimensional Minkowski space-time with the metric signature mostly minus, and we assume that the field is a Dirac spinor. However, the results can be straightforwardly generalized to different space-time dimensions and to other types of spinors.

The basic remark is that the linearly independent solutions of the Dirac equation for the field and its Dirac conjugate can be factorized as follows:

$$\psi _\alpha (k,s;x)=u_\alpha (k,s)\varphi (k,x),\qquad \stackrel{~}{\psi }_\alpha (k,s;x)=v_\alpha (k,s)\stackrel{~}{\varphi }(k,x),$$ (1)

$$\overline{\psi }_\alpha (k,s;x)=\overline{u}_\alpha (k,s)\stackrel{~}{\varphi }(k,x),\qquad \stackrel{~}{\overline{\psi }}_\alpha (k,s;x)=\overline{v}_\alpha (k,s)\varphi (k,x).$$ (2)

Here, $`u_\alpha (k,s)`$ and $`v_\alpha (k,s)`$ are column vectors that span the space of $`spin(1,3)`$ spinors and depend only on the momentum and the spin of the field. The fields $`\varphi (k,x)`$ and $`\stackrel{~}{\varphi }(k,x)`$ are linearly independent solutions of the Klein-Gordon equation, which usually are taken to be wave functions. The Dirac conjugation is denoted by a bar, and the distinction between the two linearly independent solutions is made by a tilde. To simplify the relations, we drop the indices $`k`$ and $`s`$ in what follows.
In order to determine the coordinate-field duality for the Klein-Gordon field, one keeps one coordinate at a time as a variable, say $`x^\mu `$, and treats $`x^\nu `$ with $`\nu \ne \mu `$ as parameters . The corresponding solutions are labelled with an upper index ($`\mu `$), which is a dummy index, e.g. $`\varphi ^{\left(\mu \right)}`$. Then a prepotential $`\mathcal{F}_\varphi ^{\left(\mu \right)}\left[\varphi ^{\left(\mu \right)}\right]`$ is introduced for each pair of linearly independent solutions by

$$\stackrel{~}{\varphi }^{\left(\mu \right)}\equiv \frac{\partial \mathcal{F}_\varphi ^{\left(\mu \right)}\left[\varphi ^{\left(\mu \right)}\right]}{\partial \varphi ^{\left(\mu \right)}}.$$ (3)

In the case of the Dirac field we can follow the same procedure. However, since the dependence on $`x`$ of $`\psi _\alpha `$ and $`\overline{\psi }_\alpha `$ is carried entirely by $`\varphi `$ and $`\stackrel{~}{\varphi }`$ according to (2), there is an ambiguity which does not appear in the case of the Klein-Gordon field: namely, one can use either $`\stackrel{~}{\psi }_\alpha `$ or $`\overline{\psi }_\alpha `$ to define the prepotential for $`\psi _\alpha `$. This ambiguity is due to the fact that the second-order Klein-Gordon equation has been split into two first-order differential equations. The second possibility, which implies mixing the solutions of the Dirac equation for the field and its conjugate, is treated like the first one from the point of view of the second-order differential equation. We analyze both cases.

In the first case we define the prepotential of the Dirac field by the following relation:

$$\stackrel{~}{\psi }_\alpha ^{\left(\mu \right)}\equiv \frac{\partial \mathcal{F}_{\alpha \psi }^{\left(\mu \right)}\left[\psi _\alpha ^{\left(\mu \right)}\right]}{\partial \psi _\alpha ^{\left(\mu \right)}},$$ (4)

for $`\mu =0,1,2,3`$, where $`\stackrel{~}{\psi }_\alpha ^{\left(\mu \right)}`$ are the solutions of the Dirac equation “along the direction $`x^\mu `$”, i.e. with $`x^\nu =`$ const. for $`\nu \ne \mu `$. It is easy to see that $`\psi _\alpha ^{\left(\mu \right)}`$ factorizes as in (2), with $`\varphi ^{\left(\mu \right)}`$ satisfying the corresponding Klein-Gordon equation . In order to express $`x^\mu `$ as a function of $`\psi _\alpha ^{\left(\mu \right)}`$ and $`\mathcal{F}_{\alpha \psi }^{\left(\mu \right)}`$, one has to differentiate the prepotential with respect to the space-time coordinate . However, the usual differentiation does not make sense here, since it involves the product $`v_\alpha u_\alpha `$, which is not defined for two column vectors. To circumvent this difficulty, we define the derivative of the prepotential through a tensor product by

$$\partial _\mu \mathcal{F}_{\alpha \psi }^{\left(\mu \right)}=\frac{\partial \mathcal{F}_{\alpha \psi }^{\left(\mu \right)}}{\partial \psi _\alpha ^{\left(\mu \right)}}\otimes \frac{\partial \psi _\alpha ^{\left(\mu \right)}}{\partial x^\mu }.$$ (5)

The definition (5) takes into account the internal structure of the Dirac field. The tensor product $`v_\alpha \otimes u_\alpha `$ can be identified with an invertible $`4\times 4`$ matrix. By a straightforward computation, one can verify that the duality between the coordinates and the fields is given by the following relation:

$$\frac{\sqrt{2m}}{\hbar }X_{\alpha \alpha }^\mu =\frac{1}{2}\frac{\partial \mathcal{F}_{\alpha \psi }^{\left(\mu \right)}}{\partial \psi _\alpha ^{\left(\mu \right)}}\psi _\alpha ^{\left(\mu \right)}-\mathcal{F}_{\alpha \psi }^{\left(\mu \right)}+C_\alpha ^{\left(\mu \right)},$$ (6)

for $`\mu =0,1,2,3`$. Here $`X_{\alpha \alpha }^\mu =O_{\alpha \alpha }x^\mu `$, with $`O_{\alpha \alpha }=v_\alpha \otimes u_\alpha `$.
The arbitrary function $`C_\alpha ^{\left(\mu \right)}`$ does not depend on $`x^\mu `$, but depends on the parameters $`x^\nu `$ and on the momentum and spin of the field $`\psi _\alpha ^{\left(\mu \right)}`$. In order to completely determine the duality (6), one has to find the differential equation satisfied by the prepotential. The method for doing this was described in . By applying it in the present case, one can show that $`\mathcal{F}_{\alpha \psi }^{\left(\mu \right)}`$ satisfies the following differential equation:

$$i\frac{\sqrt{8m}}{\hbar }\gamma ^\mu O_{\alpha \alpha }\delta _\mu ^2\mathcal{F}_{\alpha \psi }^{\left(\mu \right)}+\left[\stackrel{~}{V}_\alpha ^{\left(\mu \right)}-m\right]\delta _\mu \mathcal{F}_{\alpha \psi }^{\left(\mu \right)}\left(\delta _\mu ^2\mathcal{F}_{\alpha \psi }^{\left(\mu \right)}\psi _\alpha ^{\left(\mu \right)}-\delta _\mu \mathcal{F}_{\alpha \psi }^{\left(\mu \right)}\right)=0,$$ (7)

for $`\mu =0,1,2,3`$, where $`\delta _\mu =\partial /\partial \psi _\alpha ^{\left(\mu \right)}`$. The potential $`\stackrel{~}{V}_\alpha ^{\left(\mu \right)}`$ in (7) is given by

$$\stackrel{~}{V}_\alpha ^{\left(\mu \right)}=\left[i\sum _{\nu \ne \mu }\gamma ^\nu \partial _\nu \stackrel{~}{\psi }_\alpha ^{\left(\mu \right)}\stackrel{~}{\overline{\psi }}_\alpha ^{\left(\mu \right)}\right]|_{x^\nu =\mathrm{const};\nu \ne \mu }.$$ (8)

Some comments are in order now. From (6) we see that the coordinate-field duality gives different “representations” of the coordinate $`x^\mu `$, corresponding to different solutions of the Dirac equation labelled by $`\alpha `$. This is to be expected, since the Dirac field has an inner structure, manifest in the factorization (2). If the matrix $`O_{\alpha \alpha }`$ is invertible, we can express the real coordinates as functions of the spinor fields. In this case, we expect the image $`x^\mu \left[\stackrel{~}{\psi }_\alpha ^{\left(\mu \right)}\right]`$ to be a unique real number. If this is the case, one has to impose a constraint on the prepotentials, which for this system is given by the following relation:

$$\left(O_{11}\right)^{-1}\left[\frac{1}{2}\frac{\partial \mathcal{F}_{1\psi }^{\left(\mu \right)}}{\partial \psi _1^{\left(\mu \right)}}\psi _1^{\left(\mu \right)}-\mathcal{F}_{1\psi }^{\left(\mu \right)}+C_1^{\left(\mu \right)}\right]=\left(O_{22}\right)^{-1}\left[\frac{1}{2}\frac{\partial \mathcal{F}_{2\psi }^{\left(\mu \right)}}{\partial \psi _2^{\left(\mu \right)}}\psi _2^{\left(\mu \right)}-\mathcal{F}_{2\psi }^{\left(\mu \right)}+C_2^{\left(\mu \right)}\right].$$ (9)

Also, due to the fact that the field satisfies a first-order differential equation, the prepotential satisfies a second-order differential equation, (7). We recall that in the Klein-Gordon case the corresponding equation is a third-order one. Another difference from the Klein-Gordon field is that the potential function (8) involves, besides the contribution from the parameters $`x^\nu `$, a solution of the conjugate Dirac equation, which shows that both the Dirac field and the conjugate Dirac field should be considered in the coordinate-field duality. Note that, when the matrix $`O_{\alpha \alpha }`$ is not invertible, although the real coordinates should be the same in all representations, the relation (9) cannot be imposed as a constraint.
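The scalar building block of these relations can be checked explicitly. The sketch below verifies, for a toy pair of linearly independent solutions with constant Wronskian, that the Legendre-type combination on the right-hand side of (6) (and of (12) below) is indeed linear in the coordinate; normalization factors such as $`\sqrt{2m}/\hbar `$ are dropped, and sin/cos stand in for the actual wave functions, so this is only the scalar analogue of the spinor construction.

```python
import sympy as sp

x, k = sp.symbols('x k', positive=True)
phi = sp.sin(k * x)          # toy solution
phi_tilde = sp.cos(k * x)    # linearly independent partner

# Prepotential F[phi] with phi_tilde = dF/dphi, computed along x:
F = sp.integrate(phi_tilde * sp.diff(phi, x), x)

# The combination (1/2)(dF/dphi)*phi - F appearing in (6) and (12):
combo = sp.simplify(sp.Rational(1, 2) * phi_tilde * phi - F)
print(combo)   # -> -k*x/2: linear in x, as the duality requires
```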
In the second case one defines the prepotential through the relation

$$\overline{\psi }_\alpha ^{\left(\mu \right)}\equiv \frac{\partial \mathcal{F}_{\alpha \psi }^{\left(\mu \right)}\left[\psi _\alpha ^{\left(\mu \right)}\right]}{\partial \psi _\alpha ^{\left(\mu \right)}}.$$ (10)

The duality relation obtained from (10) is given by

$$\frac{\sqrt{2m}}{\hbar }x^\mu =\frac{1}{2}\frac{\partial \mathcal{F}_{\alpha \psi }^{\left(\mu \right)}}{\partial \psi _\alpha ^{\left(\mu \right)}}\psi _\alpha ^{\left(\mu \right)}-\mathcal{F}_{\alpha \psi }^{\left(\mu \right)}+C_\alpha ^{\left(\mu \right)}$$ (11)

$$=\frac{1}{2}\frac{\partial \mathcal{F}_\varphi ^{\left(\mu \right)}}{\partial \varphi ^{\left(\mu \right)}}\varphi ^{\left(\mu \right)}-\mathcal{F}_\varphi ^{\left(\mu \right)},$$ (12)

for $`\mu =0,1,2,3`$. In deriving the second equality, we have used the fact that the prepotential depends only on a solution of the Dirac equation with given momentum and spin, and we have used the normalization of $`\overline{u}_\alpha (k,s)u_\alpha (k,s)`$ to one. The relation (12) shows that in the second case the “representation” of the coordinate $`x^\mu `$ in terms of the Dirac field is unique. In order to find the equation for the prepotentials, we have to assume that the matrix $`O_{\alpha \alpha }`$ is invertible. This allows us to write a potential functional along each direction. Then the prepotential defined in (10) satisfies the following differential equation:

$$i\frac{\sqrt{8m}}{\hbar }\delta _\mu ^2\mathcal{F}_{\alpha \psi }^{\left(\mu \right)}\gamma ^\mu +\left[\overline{V}_\alpha ^{\left(\mu \right)}-m\right]\delta _\mu \mathcal{F}_{\alpha \psi }^{\left(\mu \right)}\left(\delta _\mu ^2\mathcal{F}_{\alpha \psi }^{\left(\mu \right)}\psi _\alpha ^{\left(\mu \right)}-\delta _\mu \mathcal{F}_{\alpha \psi }^{\left(\mu \right)}\right)=0,$$ (13)

where the potential $`\overline{V}_\alpha ^{\left(\mu \right)}`$ has the following form:

$$\overline{V}_\alpha ^{\left(\mu \right)}=\left[i\sum _{\nu \ne \mu }\gamma ^\nu \partial _\nu \overline{\psi }_\alpha ^{\left(\mu \right)}O_{\alpha \alpha }^{-1}\psi _\alpha ^{\left(\mu \right)}\right]|_{x^\nu =\mathrm{const};\nu \ne \mu },$$ (14)

where $`O_{\alpha \alpha }=u_\alpha \otimes \overline{u}_\alpha `$. Note that, although the tensor product has disappeared from the duality relation, it still remains in the equation for $`\mathcal{F}_{\alpha \psi }^{\left(\mu \right)}`$, as in the first case. Due to this fact, the potential (14) acts as a tensor product on the next term in (13). However, one has to note that when the matrix $`O_{\alpha \alpha }`$ is not invertible, which is the case for most of the representations of the spinor field, the explicit form of the potential (14) is unknown, and a resolution of the second case along the lines of is problematic and might very well not exist.

The relations (6), (7) and (12), (13) describe the coordinate-field duality and the equations of the prepotentials for the Dirac field in the two cases allowed by the Klein-Gordon equation. When one takes for $`\varphi (k,x)`$ and $`\stackrel{~}{\varphi }(k,x)`$ the corresponding wave functions, the entire dependence of $`x^\mu `$ on the Dirac fields is concentrated in $`\mathcal{F}_{\alpha \psi }^{\left(\mu \right)}`$. Note that in the quantum case the factors $`u_\alpha `$ and $`v_\alpha `$ include anticommuting creation and annihilation operators. Also, since the prepotentials are functionals of fields, they can in principle be either anticommuting or commuting. In both cases the equations (7) and (13) become of first degree, since the prepotentials are polynomials of rank one in the fields.
Then (7) and (13) can be easily solved. Two cases are to be considered. In the first one, $`V_\alpha ^{\left(\mu \right)}=m`$ and the prepotential can be an arbitrary polynomial of rank one in the Dirac field. In the second case, $`V_\alpha ^{\left(\mu \right)}\ne m`$ and the prepotential is an arbitrary constant functional.

Finally, let us discuss the symmetry of the coordinate-field duality. In it was shown that the duality between the coordinate and the wave function in quantum mechanics obeys the modular symmetry $`SL(2,)`$ of the solutions of the Schrödinger equation. In particular, under an $`SL(2,)`$ transformation of the linearly independent solutions, the prepotentials transform as in (16) of . In the case of the Dirac field we have the same situation. Indeed, if we consider the coordinate-field duality defined by (12), one can reproduce (16) of if we transform the linearly independent solutions ($`\psi ,\stackrel{~}{\psi }`$) by $`\left(\begin{array}{cc}A& B\\ C& D\end{array}\right)\in SL(2,)`$ (for the sake of clarity, we omit all the indices). In this case the problem reduces to the Klein-Gordon problem, as can be seen from the second equality in (12). For the first coordinate-field duality, given by (6), one has to introduce the circle product between a row vector and a column vector with Dirac-field components, $`\left(\begin{array}{cc}\mu _1& \mu _2\end{array}\right)`$ and $`\left(\begin{array}{c}\nu _1\\ \nu _2\end{array}\right)`$, defined by the following relation:

$$\left(\begin{array}{cc}\mu _1& \mu _2\end{array}\right)\circ \left(\begin{array}{c}\nu _1\\ \nu _2\end{array}\right)=\mu _1\otimes \nu _1+\mu _2\otimes \nu _2.$$ (15)

The properties of (15) are given by the properties of the tensor product. The product (15) is allowed because there is an ambiguity in defining a product between two vectors which have other vectors as components. Using (15), it is easy to verify that under $`SL(2,)`$ the prepotential $`\mathcal{F}`$ transforms as

$$\delta \mathcal{F}=\frac{1}{2}𝒳\left[G^T\left(\begin{array}{cc}0& 1\\ 1& 0\end{array}\right)G-\left(\begin{array}{cc}0& 1\\ 1& 0\end{array}\right)\right]𝒳,$$ (16)

where $`𝒳`$ is the two-component column vector $`\left(\begin{array}{c}\psi \\ \stackrel{~}{\psi }\end{array}\right)`$.

In conclusion, the coordinate-field duality for the Dirac field has two possible forms, given by (6) and (12), respectively. Both of these forms are compatible with the Klein-Gordon equation. However, in the first case the representation of the Cartesian coordinates through the Dirac field is degenerate, while in the second case it is equivalent to the one obtained from the duality for the Klein-Gordon field. In both cases, the prepotentials transform under the $`SL(2,)`$ group as in (16), with the only difference that in the second case the circle product should be replaced by the usual product. It is important to note that we have used the fact that the matrix $`O_{\alpha \alpha }`$ is invertible in deriving the equations satisfied by the prepotentials. This matrix depends crucially on the linearly independent basis $`u_\alpha `$ and $`v_\alpha `$, as well as on the mapping of the tensor product of these vectors to quadratic matrices. An inappropriate choice of either of these two elements can lead to a non-invertible matrix. Nevertheless, one can always find a suitable basis and define the mapping in such a way that the inverse of $`O_{\alpha \alpha }`$ exists.
The fact that there are several “representations” of the coordinates in terms of fields may bring some simplifications in the study of the coordinate-field duality for supersymmetric systems.

We want to thank M. A. de Andrade, J. A. Helayel-Neto and A. I. Shimabukuro for useful discussions. A.L.G. acknowledges V. S. Timóteo for discussions and CAPES for his fellowship. The work of I.V.V. was supported by a FAPESP postdoc fellowship.
# Entanglement measures and the Hilbert-Schmidt distance

## Abstract

In order to construct a measure of entanglement on the basis of a “distance” between two states, one of the desirable properties is that the “distance” be nonincreasing under every completely positive trace preserving map. Contrary to a recent claim, this letter shows that the Hilbert-Schmidt distance does not have this property.

PACS: 03.67-a

Keywords: entanglement; completely positive maps; operations; Hilbert-Schmidt norm

As classical information arises from the probability correlation between two random variables, quantum information arises from entanglement . Motivated by the finding of an entangled state which does not violate Bell’s inequality, the problem of quantifying entanglement has received increasing interest recently. Vedral et al. proposed three necessary conditions that any measure of entanglement has to satisfy, and showed that if a “distance” between two states has the property that it is nonincreasing under every completely positive trace preserving map (to be referred to as the CP nonexpansive property), the “distance” of a state to the set of disentangled states satisfies their conditions. It has been shown that the quantum relative entropy and the Bures metric have the CP nonexpansive property , and it has been conjectured that so does the Hilbert-Schmidt distance . In the interesting Letter , Witte and Trucks claimed that the Hilbert-Schmidt distance really has the CP nonexpansive property and conjectured that the distance generates a measure of entanglement satisfying even the stronger condition posed later by Vedral and Plenio . However, it can be readily seen that their suggested proof includes a serious gap. In this Letter, it will be shown that, contrary to their claim, the Hilbert-Schmidt distance does not have the CP nonexpansive property, by presenting a counterexample.

Let $`\mathcal{H}=\mathcal{H}_1\otimes \mathcal{H}_2`$ be the Hilbert space of a quantum system consisting of two subsystems with Hilbert spaces $`\mathcal{H}_1`$ and $`\mathcal{H}_2`$. We assume that $`\mathcal{H}_1`$ and $`\mathcal{H}_2`$ have the same finite dimension. We shall consider the notion of entanglement with respect to the above two subsystems. Let $`𝒯`$ be the set of density operators on $`\mathcal{H}`$. The set $`𝒟`$ of disentangled states is the set of all convex combinations of pure tensor product states. There are several requirements that every measure of entanglement, $`E`$, should satisfy :

(E1) $`E(\sigma )=0`$ for all $`\sigma \in 𝒟`$.

(E2) For any family of bounded operators $`\{V_i\}`$ of the form $`V_i=A_i\otimes B_i`$ such that $`\sum _iV_i^{\dagger }V_i=I`$,

(a) $`E(\sum _iV_i\sigma V_i^{\dagger })\le E(\sigma )`$,

(b) $`\sum _i\text{Tr}[V_i\sigma V_i^{\dagger }]E(V_i\sigma V_i^{\dagger }/\text{Tr}[V_i\sigma V_i^{\dagger }])\le E(\sigma )`$.

Condition (E1) ensures that disentangled states have a zero value of entanglement. Condition (E2) ensures that the amount of entanglement does not increase, totally or on average, under so-called purification procedures. Note that (E2-a) implies the following condition:

(E3) $`E(\sigma )=E(U_1\otimes U_2\sigma U_1^{\dagger }\otimes U_2^{\dagger })`$ for all unitary operators $`U_i`$ on $`\mathcal{H}_i`$ for $`i=1,2`$.

Condition (E3) ensures that a local change of basis has no effect on the amount of entanglement.

Vedral et al. proposed the following general construction of the measure of entanglement $`E`$. Let $`D:𝒯\times 𝒯\to 𝐑`$ be a function satisfying the following conditions:

(D1) $`D(\sigma ,\rho )\ge 0`$ and $`D(\sigma ,\sigma )=0`$ for any $`\sigma ,\rho \in 𝒯`$.
(D2) $`D(\mathrm{\Theta }\sigma ,\mathrm{\Theta }\rho )\le D(\sigma ,\rho )`$ for any $`\sigma ,\rho \in 𝒯`$ and for any completely positive trace preserving map $`\mathrm{\Theta }`$ on the space of operators on $`\mathcal{H}`$.

Condition (D1) ensures that $`D`$ has some properties of a “distance”. Condition (D2) ensures that the “distance” does not increase under any nonselective operation. Then, it is shown that the “distance” $`E(\sigma )`$ of a state $`\sigma `$ to the set $`𝒟`$ of disentangled states, defined by

$$E(\sigma )=\underset{\rho \in 𝒟}{inf}D(\sigma ,\rho ),$$ (1)

satisfies conditions (E1) and (E2-a). It is shown that the quantum relative entropy and the Bures metric satisfy (D1) and (D2) , and it is conjectured that the Hilbert-Schmidt distance is a reasonable candidate for a “distance” generating an entanglement measure . Here, the Hilbert-Schmidt distance is defined by

$$D_{HS}(\sigma ,\rho )=\|\sigma -\rho \|_{HS}^2=\text{Tr}[(\sigma -\rho )^2]$$

for all $`\sigma ,\rho \in 𝒯`$, which satisfies (D1) since $`\|\sigma -\rho \|_{HS}`$ is a true metric.

Recently, Witte and Trucks claimed that the Hilbert-Schmidt distance also satisfies (D2) and that the prospective measure of entanglement, $`E_{HS}`$, defined by

$$E_{HS}(\sigma )=\underset{\rho \in 𝒟}{inf}D_{HS}(\sigma ,\rho ),$$

satisfies (E1) and (E2-a). It should be pointed out first that their suggested proof of condition (D2) for $`D_{HS}`$ is not justified. Let $`f`$ be a convex function on $`(0,\mathrm{\infty })`$ and let $`f(0)=0`$. Let $`\mathrm{\Phi }`$ be a trace preserving positive map on the space of operators such that $`\|\mathrm{\Phi }\|\le 1`$. Then, Lindblad’s theorem asserts that for every positive operator $`A`$ we have

$$\text{Tr}[f(\mathrm{\Phi }A)]\le \text{Tr}[f(A)],$$ (2)

where $`f(A)`$ is defined as usual through the spectral resolution of $`A`$. It is suggested that with the help of the above theorem it can be shown that

$$D_{HS}(\mathrm{\Theta }\sigma ,\mathrm{\Theta }\rho )\le D_{HS}(\sigma ,\rho )$$ (3)

by regarding $`D_{HS}`$ as a convex function on $`𝒯_+(\mathcal{H})\times 𝒯_+(\mathcal{H})`$ for all positive mappings $`\mathrm{\Theta }`$. However, it is not clear at all how $`D_{HS}`$ and $`\mathrm{\Theta }`$ satisfy the assumptions of Lindblad’s theorem.

Now, we shall show a counterexample to the claim that $`D_{HS}`$ satisfies condition (D2). Let $`A`$ and $`B`$ be $`4\times 4`$ matrices defined by

$$A=\left(\begin{array}{cccc}0& 0& 0& 0\\ 1& 0& 0& 0\\ 0& 0& 0& 0\\ 0& 0& 1& 0\end{array}\right),B=\left(\begin{array}{cccc}0& 0& 0& 0\\ 0& 1& 0& 0\\ 0& 0& 0& 0\\ 0& 0& 0& 1\end{array}\right).$$

Then we have

$$A^{\dagger }A=\left(\begin{array}{cccc}1& 0& 0& 0\\ 0& 0& 0& 0\\ 0& 0& 1& 0\\ 0& 0& 0& 0\end{array}\right).$$

It follows that $`A^{\dagger }A+B^{\dagger }B=I_4`$ and hence

$$\mathrm{\Theta }\sigma =A\sigma A^{\dagger }+B\sigma B^{\dagger },$$

where $`\sigma `$ is arbitrary, defines a completely positive trace preserving map.
Let $`\sigma `$ and $`\rho `$ be density matrices defined by

$$\sigma =\left(\begin{array}{cccc}1/2& 0& 0& 0\\ 0& 1/2& 0& 0\\ 0& 0& 0& 0\\ 0& 0& 0& 0\end{array}\right),\rho =\left(\begin{array}{cccc}0& 0& 0& 0\\ 0& 0& 0& 0\\ 0& 0& 1/2& 0\\ 0& 0& 0& 1/2\end{array}\right).$$

Then we have

$$(\sigma -\rho )^2=\left(\begin{array}{cccc}1/4& 0& 0& 0\\ 0& 1/4& 0& 0\\ 0& 0& 1/4& 0\\ 0& 0& 0& 1/4\end{array}\right)$$

and hence

$$D_{HS}(\sigma ,\rho )=\mathrm{Tr}[(\sigma -\rho )^2]=1.$$

On the other hand, we have

$$A\sigma A^{\dagger }=\left(\begin{array}{cccc}0& 0& 0& 0\\ 0& 1/2& 0& 0\\ 0& 0& 0& 0\\ 0& 0& 0& 0\end{array}\right),B\sigma B^{\dagger }=\left(\begin{array}{cccc}0& 0& 0& 0\\ 0& 1/2& 0& 0\\ 0& 0& 0& 0\\ 0& 0& 0& 0\end{array}\right),$$

$$A\rho A^{\dagger }=\left(\begin{array}{cccc}0& 0& 0& 0\\ 0& 0& 0& 0\\ 0& 0& 0& 0\\ 0& 0& 0& 1/2\end{array}\right),B\rho B^{\dagger }=\left(\begin{array}{cccc}0& 0& 0& 0\\ 0& 0& 0& 0\\ 0& 0& 0& 0\\ 0& 0& 0& 1/2\end{array}\right).$$

It follows that

$$(\mathrm{\Theta }\sigma -\mathrm{\Theta }\rho )^2=\left(\begin{array}{cccc}0& 0& 0& 0\\ 0& 1& 0& 0\\ 0& 0& 0& 0\\ 0& 0& 0& 1\end{array}\right)$$

and hence

$$D_{HS}(\mathrm{\Theta }\sigma ,\mathrm{\Theta }\rho )=\mathrm{Tr}[(\mathrm{\Theta }\sigma -\mathrm{\Theta }\rho )^2]=2.$$

We therefore conclude

$$D_{HS}(\mathrm{\Theta }\sigma ,\mathrm{\Theta }\rho )>D_{HS}(\sigma ,\rho ).$$

From the above counterexample, we conclude that the inequality

$$D_{HS}(\mathrm{\Theta }\sigma ,\mathrm{\Theta }\rho )\le D_{HS}(\sigma ,\rho )$$

is not generally true for completely positive trace preserving maps $`\mathrm{\Theta }`$. Therefore, it is still quite open whether $`E_{HS}`$ is a good candidate for an entanglement measure or not.

In order to obtain a tight bound for $`D_{HS}(\mathrm{\Theta }\sigma ,\mathrm{\Theta }\rho )`$, we take advantage of Kadison’s inequality : if $`\mathrm{\Phi }`$ is a positive map, then we have

$$\mathrm{\Phi }(A)^2\le \|\mathrm{\Phi }\|\mathrm{\Phi }(A^2)$$ (4)

for all Hermitian $`A`$. Applying the above inequality to the positive trace preserving map $`\mathrm{\Phi }=\mathrm{\Theta }`$ and to $`A=\sigma -\rho `$, we have

$$(\mathrm{\Theta }\sigma -\mathrm{\Theta }\rho )^2\le \|\mathrm{\Theta }\|\mathrm{\Theta }[(\sigma -\rho )^2].$$

By taking the trace of both sides we obtain the following conclusion: for any trace preserving positive map $`\mathrm{\Theta }`$ and any states $`\sigma `$ and $`\rho `$, we have

$$D_{HS}(\mathrm{\Theta }\sigma ,\mathrm{\Theta }\rho )\le \|\mathrm{\Theta }\|D_{HS}(\sigma ,\rho ).$$ (5)

The previous example shows that the bound can be attained, with $`\|\mathrm{\Theta }\|=2`$.

Acknowledgements. I thank V. Vedral and M. Murao for calling my attention to the present problem.
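As a quick numerical cross-check of the counterexample and of the saturation of the bound (5), the sketch below reproduces $`D_{HS}(\sigma ,\rho )=1`$ and $`D_{HS}(\mathrm{\Theta }\sigma ,\mathrm{\Theta }\rho )=2`$ with NumPy.

```python
import numpy as np

A = np.zeros((4, 4)); A[1, 0] = 1.0; A[3, 2] = 1.0
B = np.diag([0.0, 1.0, 0.0, 1.0])
assert np.allclose(A.T @ A + B.T @ B, np.eye(4))   # Theta is trace preserving

def theta_map(rho):
    return A @ rho @ A.T + B @ rho @ B.T

def d_hs(s, r):
    """Hilbert-Schmidt distance Tr[(s - r)^2] for real Hermitian matrices."""
    return float(np.trace((s - r) @ (s - r)))

sigma = np.diag([0.5, 0.5, 0.0, 0.0])
rho = np.diag([0.0, 0.0, 0.5, 0.5])

print(d_hs(sigma, rho))                        # 1.0
print(d_hs(theta_map(sigma), theta_map(rho)))  # 2.0 > 1.0: (D2) fails, and the
# bound D_HS(Theta s, Theta r) <= ||Theta|| D_HS(s, r) is saturated (||Theta|| = 2)
```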
# Emulsion Chamber with Big Radiation Length for Detecting Neutrino Oscillations

## 1 Introduction

Oscillatory transitions among neutrinos of different flavors have emerged as a major topic of particle physics . An accelerator experiment using muon antineutrinos from $`\mu ^+`$ decays at rest over an effective baseline of $`L/E\sim 1`$ m/MeV, LSND at Los Alamos, has reported a positive signal in the channel $`\overline{\nu }_\mu \to \overline{\nu }_e`$ with a probability of $`3\times 10^{-3}`$ . A consistent, albeit less significant, signal was also observed in the CP-conjugate channel $`\nu _\mu \to \nu _e`$ using muon neutrinos from $`\pi ^+`$ decay in flight . When combined with the upper limits imposed by other accelerator and reactor experiments , the LSND data suggest that the $`\overline{\nu }_\mu \to \overline{\nu }_e`$ oscillation is driven by a mass difference squared, $`\mathrm{\Delta }m^2`$, between some 0.3 and 2.3 eV². The transitions $`\overline{\nu }_\mu \to \overline{\nu }_e`$ and $`\nu _\mu \to \nu _e`$ in this region of $`\mathrm{\Delta }m^2`$ will soon be explored with higher sensitivity by the BooNE experiment at Fermilab .

Qualitatively, an analogy with the quark sector suggests that (i) the $`\nu _\tau `$ should pick up the largest contribution of the heaviest mass state $`\nu _3`$, and that (ii) the mixings between the neighboring neutrino flavors (i.e., $`\nu _e`$–$`\nu _\mu `$ and $`\nu _\mu `$–$`\nu _\tau `$) should be the strongest. Therefore, a mass difference as large as $`\mathrm{\Delta }m^2\sim 1`$ eV² should primarily manifest itself in the $`\nu _\mu \to \nu _\tau `$ channel. The existing upper limits on the effective mixing $`\mathrm{sin}^22\theta `$ for the transition $`\nu _\mu \to \nu _\tau `$ largely come from experiments with small effective baselines of $`L/E_\nu \ll 1`$ km/GeV, and are therefore much less compelling for $`\mathrm{\Delta }m^2\sim 1`$ eV² than for larger values of the mass difference. At neutrino energies well below the $`\tau `$ threshold ($`E_\nu <2`$ GeV), the transition $`\nu _\mu \to \nu _\tau `$ in the required region $`L/E_\nu \sim 1`$ km/GeV will be indirectly probed by BooNE through $`\nu _\mu `$ disappearance . However, a convincing $`\nu _\mu \to \nu _\tau `$ signal can only be demonstrated by detecting the CC collisions of the $`\nu _\tau `$ in appearance mode. For this, an appropriate environment is offered by the “medium-baseline” location on Mount Jura in the $`\nu _\mu `$ beam of CERN-SPS ($`L=17`$ km and $`E_\nu =27`$ GeV by flux). The CC collisions of the $`\nu _\tau `$ may be identified either topologically or by detecting the detached vertex of the produced $`\tau `$ in a hybrid-emulsion spectrometer . The latter option is considered in this paper. For the “original” transition $`\nu _\mu \to \nu _e`$ to be observed and investigated in the same experiment, which will have very different systematics compared to either LSND or BooNE , secondary electrons must be efficiently detected and momentum-analyzed.

The atmospheric evidence for neutrino oscillations driven by a mass difference of $`10^{-2}`$–$`10^{-3}`$ eV² will be initially probed by the long-baseline experiments operating in the neutrino beams of proton accelerators . Large angular divergences of these beams dictate that, for the sake of statistics, fine instrumentation of the detector and/or the magnetic analysis be compromised for a multi-kiloton fiducial mass.
Therefore, unraveling the pattern of oscillations in this mass region will probably require a much more intense beam, generated by a muon storage ring with a straight section that points towards the detector . Assuming a $`\mu ^{-}`$ ring, the beam consists entirely of muon neutrinos and electron antineutrinos whose energy spectra and angular spreads are precisely known. Since the original beam contains neither $`\overline{\nu }_\mu `$ nor $`\nu _e`$, the parent neutrino can be identified by the sign of the lepton produced by the oscillated neutrino: negative and positive leptons are unambiguous signatures of the $`\nu _\mu `$ and $`\overline{\nu }_e`$ parents, respectively. Therefore, the transitions $`\nu _\mu \to \nu _e`$, $`\nu _\mu \to \nu _\tau `$, $`\overline{\nu }_e\to \overline{\nu }_\mu `$, and $`\overline{\nu }_e\to \overline{\nu }_\tau `$ can be tagged independently. The unique properties of this neutrino beam may warrant a finely instrumented detector of relatively small mass that would be sensitive to a broad range of neutrino transitions driven by a mass difference of $`10^{-2}`$–$`10^{-3}`$ eV². For this, several conditions are obligatory. Selecting the $`\nu _\tau `$ and $`\overline{\nu }_\tau `$ collisions by the detached vertex of the $`\tau `$ is only possible in a hybrid-emulsion apparatus. In order to suppress the background to $`\tau `$ decays arising from anticharm production in CC collisions of the $`\overline{\nu }_e`$, the detector should identify electrons as reliably as muons. For the transitions $`\nu _\mu \to \nu _\tau `$ and $`\overline{\nu }_e\to \overline{\nu }_\tau `$ to be reliably discriminated, all charged secondaries (and not just muons) should be sign-selected and momentum-analyzed in the detector. The latter will also provide extra kinematic handles for further suppressing the background to $`\tau `$ decays (such as the $`p_\mathrm{T}`$ of a decay particle with respect to the $`\tau `$ direction, and the transverse imbalance of the event as a whole with respect to the incident neutrino).

In this paper, we propose a conceptual scheme of a hybrid-emulsion spectrometer for detecting and identifying the neutrinos of different flavors by their CC collisions, as required for probing various channels of neutrino oscillations. The design emphasizes detection of $`\tau `$ leptons by detached vertices, reliable identification and sign-selection of electrons, good spectrometry for all charged secondaries, and reconstruction of secondary photons and $`\pi ^0`$ mesons. We also estimate the performance of the proposed apparatus in a medium-baseline experiment using the neutrino beam of the proton machine CERN-SPS, and in a long-baseline experiment in the neutrino beam of a muon storage ring.

## 2 The detector

Building on the ideas put forth by A. Ereditato, K. Niwa, and P. Strolin , we implement the principle of the “emulsion cloud chamber”: layers of thin emulsion are only used as a tracker for events occurring in passive material. But in contrast with , we aim at constructing a distributed target with low density and large radiation length , so that muons, hadrons, and electrons alike can be momentum-analyzed by curvature inside the target itself. Accordingly, the target should largely consist of a low-Z material like aluminum, or carbon in the form of carbon-fiber composite. Apart from narrow gaps instrumented with drift chambers that provide an electronic “blueprint” of the event, the target is built as a compact homogeneous volume in an ambient magnetic field.
Owing to the relatively weak multiple scattering in low-Z material, the successive layers of thin emulsion may act as an “emulsion spectrometer” for analyzing the momenta of charged secondaries by curvature in a magnetic field. In untangling the topologies of neutrino events, the detector will operate very much like a bubble chamber. The proposed apparatus is therefore referred to as the Emulsion Bubble Chamber, or EBC.

Specified below is a tentative structure of the distributed target that has been assumed in our simulations of detector response. (The actual design, including the choice of low-Z material, must of course be carefully optimized for a particular experiment.) The fine structure of the target is depicted in Fig. 1. The 6-mm-thick basic element of the structure is formed by a 1-mm plate of passive material, by an emulsion–plastic–emulsion sheet (referred to as the ES) with a total thickness of 0.2 mm, and by a relatively large drift space of 4.8 mm in which $`\tau `$ decays are selected. In our tentative design, the passive plate is largely carbon (960 $`\mu `$m), but also includes a thin layer of a denser substance (40 $`\mu `$m of copper) in order to boost the geometric acceptance. The ES is a 100-$`\mu `$m sheet of transparent plastic coated on both sides with 50-$`\mu `$m layers of emulsion. The medium formed by successive elements has a mean density of $`\rho =0.49`$ g/cm³ and an effective radiation length of $`X_0=52.6`$ cm. (These values of $`\rho `$ and $`X_0`$ are very similar to those of a bubble chamber with neon-hydrogen filling .) Note that the element does not feature a second ES downstream of the gap, as originally foreseen in : the idea is to detect the kink or the trident using track segments in the ESs of two successive elements, as allowed by the relatively weak Coulomb scattering in the low-Z material of the intervening passive plate. On the technical side, the air gap can be created by thin and rigid “bristles” on the upstream face of the carbon-composite plate, manufactured in a “brush-like” form. (This is only possible because we have just one ES per element.) The positions of the thin bristles will be tabulated, and secondary vertices that match these positions will be dropped. Another technical option for the drift space is 5-mm-thick paper honeycomb.

For a $`\tau `$ that emerged from the passive plate and suffered a one-prong or a three-prong decay in the gap, the decay vertex can be reconstructed from the track segments in two successive ESs. Whenever the fitted secondary vertex lies within the intervening passive plate, the candidate event must be dropped because of the high background from reinteractions. In principle, a $`\tau `$ that decayed before reaching the gap can be detected by impact parameter, but this possibility is not considered here. By adding a thin layer of denser material (copper) downstream of the low-Z material (carbon), we slightly compromise the radiation length for geometric acceptance: thereby, the fraction of $`\tau `$ leptons that reach the air gap is effectively increased. That the proportion of copper events is boosted by the geometric effect is illustrated by Fig. 2. Here and below, the spectrum of incident $`\tau `$ neutrinos is assumed to be proportional to the spectrum of muon neutrinos from CERN-SPS .
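The quoted effective-medium parameters can be checked with a back-of-envelope layer average over one element. In the sketch below, the densities and radiation lengths of the individual materials are generic handbook values that we assume for the estimate, so the output reproduces $`\rho =0.49`$ g/cm³ and $`X_0=52.6`$ cm only approximately (to within roughly 10%, depending on the exact composite and emulsion properties).

```python
# One 6-mm element: (thickness cm, density g/cm^3, radiation length cm)
layers = [
    (0.096, 2.0,  21.4),    # carbon-fiber composite plate (assumed values)
    (0.004, 8.96, 1.44),    # copper layer
    (0.010, 3.8,  2.9),     # two 50-um emulsion coatings
    (0.010, 1.2,  34.0),    # 100-um plastic base (assumed values)
    (0.480, 0.0,  3.0e4),   # air drift gap (negligible contribution)
]

t_tot = sum(t for t, _, _ in layers)                      # 0.6 cm per element
rho_eff = sum(t * rho for t, rho, _ in layers) / t_tot
x0_eff = t_tot / sum(t / x0 for t, _, x0 in layers)
print(f"rho_eff ~ {rho_eff:.2f} g/cm^3, X0_eff ~ {x0_eff:.0f} cm")
```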
For the tracks to be found in emulsion at the scanning stage, planes of electronic detectors must be inserted in the continuous structure formed by the basic elements. These detectors should allow a crude on-line reconstruction of the event as a whole, and therefore they must have sufficient spatial resolution, provide angular information, and be spaced by less than one radiation length $`X_0`$. In our tentative design, a stack of 30 elements forms a “module” with a total thickness of $`0.34X_0`$, and 4-cm-wide gaps between adjacent modules are instrumented with multisampling drift chambers. In a medium- or long-baseline experiment, where the occupancy of the ESs will be relatively low, the accuracy of the “electronic” reconstruction should be sufficient for unambiguously finding an energetic track in emulsion. Therefore, one can scan back along a stiff track, starting from the downstream ES of the module in which the collision occurred. Once the layer of origin is reached, a few successive ESs of the nearby elements must be fully scanned over relatively small areas, towards finding the stubs of all other tracks associated with the primary vertex. To refine the alignment of the ESs, a sufficient number of stiff muon tracks must be fully reconstructed in emulsion. This first stage of emulsion scanning will yield a relatively small number of events featuring decay signatures (either a kink or a trident in the air gap just downstream of the primary vertex). The second stage will be to scan down all tracks emerging from the primary vertex and to find and measure the conversions of secondary photons in the distributed target. This will allow us to analyze the momenta of the secondaries in the “emulsion spectrometer” and to identify electrons by the change of curvature and by the emission of brems. We foresee that the scanning of emulsion will only start after the full period of detector exposure.

## 3 Spectrometry for charged particles and photons

The response of the EBC detector is simulated using GEANT. In fitting a track, we assume that each tracker traversed (either an ES or a drift chamber) provides two spatial points, with resolutions of 2 and 150 $`\mu `$m, respectively. Because of multiple scattering, the fit is only marginally sensitive to increasing the spatial error in the ES from 2 up to 8 $`\mu `$m. A uniform magnetic field of 0.7 Tesla, normal to the beam direction, is assumed throughout. (Using a stronger field would boost the performance of the spectrometer, but may be impracticable for a big air-core magnet that will house a multiton EBC.)

When found in emulsion, the track of an electron can be identified by the variation of curvature due to energy losses in successive layers of the target. These losses must be accounted for in estimating the momentum by curvature in a magnetic field. For this, we use a simple algorithm that should be treated as preliminary and is not fully realistic. Namely, we fit a restricted segment of the $`e^{-}`$ track over which the actual loss of energy does not exceed 30%. Additionally, this segment is required to cross no less than five ESs. We compute an “ideal trajectory” for a given value of electron momentum $`p_e`$, to which the observed trajectory is then fitted, using GEANT. In doing so, we switch off multiple scattering and radiation losses, but instead reduce the momentum “by hand” in each layer of the target by the same amount $`\mathrm{\Delta }p`$, which is treated as an empirical parameter. The value of $`\mathrm{\Delta }p`$ is selected so as to obtain (i) an unbiased estimate of the electron momentum ($`p_e^{\mathrm{meas}}/p_e^{\mathrm{true}}\simeq 1`$) and (ii) a reasonable value of $`\chi ^2`$.
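A schematic of this fitting procedure (not the GEANT-based one actually used, just its logic) is sketched below: the trial trajectory is bent layer by layer according to the current momentum, which is reduced by a fixed $`\mathrm{\Delta }p`$ in every element, and the trial $`p_e`$ is chosen by minimizing $`\chi ^2`$ against the measured ES points. All numerical values here are assumptions.

```python
import numpy as np
from scipy.optimize import minimize_scalar

B_FIELD = 0.7        # tesla
DZ = 0.6e-2          # element pitch (m)
DP = 0.05            # assumed per-layer momentum loss, the 'by hand' dp (GeV)

def model_positions(p0, n_layers):
    """Transverse position at each ES for a track entering on-axis."""
    x, slope, p = 0.0, 0.0, p0
    xs = []
    for _ in range(n_layers):
        slope += 0.3 * B_FIELD * DZ / p   # bend per layer (p in GeV/c)
        x += slope * DZ
        xs.append(x)
        p = max(p - DP, 0.1)              # reduce momentum 'by hand'
    return np.array(xs)

def chi2(p0, measured, sigma=2.0e-6):     # 2-um ES resolution
    return float(np.sum((model_positions(p0, len(measured)) - measured) ** 2)
                 / sigma ** 2)

# Pseudo-data for a 4-GeV electron, then the one-parameter momentum fit:
measured = model_positions(4.0, 40) + np.random.default_rng(0).normal(0.0, 2.0e-6, 40)
fit = minimize_scalar(chi2, bounds=(0.5, 20.0), args=(measured,), method='bounded')
print(f"fitted p_e ~ {fit.x:.2f} GeV (true value was 4.0)")
```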
For definiteness, we use a Monte-Carlo template of electrons with $`p_e>1`$ GeV originating from simulated decays $`\tau ^{-}\to e^{-}\nu \overline{\nu }`$ (see further). For these electrons, the mean length of the selected track segment is close to 30 cm (see Fig. 3), which proves to be sufficient for analyzing the momentum in the “emulsion spectrometer” formed by successive ESs of the distributed target. Plotted in Fig. 4 is the ratio between the fitted and true momenta, $`p_e^{\mathrm{meas}}/p_e^{\mathrm{true}}`$, for the properly fitted electrons ($`\chi ^2<3`$). The same ratio is then separately shown for two regions of electron momentum: $`1<p_e<5`$ GeV and $`p_e>5`$ GeV. We estimate that, of all electrons with $`p_e>1`$ GeV, nearly 90% can be reliably detected, identified, and sign-selected (that is, reconstructed with $`\chi ^2<3`$ and $`\delta p_e/p_e<0.40`$). On average, the $`e^{-}`$ momentum is measured to a precision of some 11%. Treating the length of the track segment as a free parameter of the fit, which is a more realistic approach not attempted in this paper, will further improve the momentum resolution and boost the fraction of sign-selected electrons. Good spectrometry for electrons will allow EBC to detect and discriminate the CC collisions of electron neutrinos and antineutrinos and, at the same time, to select electronic decays of the $`\tau `$ almost as efficiently as the muonic ones. A comparison between the two leptonic modes will provide an important handle on the self-consistency of any $`\tau `$ signal.

The spectrometry of muons and charged pions is much less affected by energy loss in matter, but for pions the momentum resolution is slightly degraded by hadronic reinteractions. For pions with $`p_\pi >1`$ GeV originating from the simulated decays $`\tau ^{-}\to \pi ^{-}\nu `$, Fig. 5 shows the ratio between the fitted and true momenta, $`p_\pi ^{\mathrm{meas}}/p_\pi ^{\mathrm{true}}`$. The mean uncertainty on the pion momentum is seen to be close to 7%.

Neutral pions can be reconstructed from photon conversions in the distributed target. (Here, we assume that the potential length in the detector is much larger than $`X_0=52.6`$ cm.) In estimating the energy of a photon that has converted in the target, we count only those conversion electrons that have fired at least one drift chamber and, therefore, can be found in emulsion and then momentum-analyzed by curvature. We also assume that the primary vertex has already been fitted, so that the direction of the photon is precisely known. For illustration, we consider the $`\pi ^0`$ mesons originating from the decay $`\tau ^{-}\to \pi ^{-}\pi ^0\nu `$. The measured invariant mass of the two detected photons from $`\pi ^0\to \gamma \gamma `$, as plotted in Fig. 6, shows a distinct $`\pi ^0`$ signal. The actual size of the mass window for selecting the $`\pi ^0`$ candidates will be dictated by the level of combinatorial background in a particular analysis; for purely illustrative purposes, we assume a mass window of $`115<m_{\gamma \gamma }<155`$ MeV. The detection efficiency for $`\pi ^0`$ mesons estimated in this way is close to 0.26.
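The invariant-mass step of the $`\pi ^0`$ reconstruction is simple enough to spell out. In the sketch below, the photon energies and directions are invented inputs, and the 115–155 MeV window is the one quoted above.

```python
import numpy as np

def mgg(e1, n1, e2, n2):
    """Two-photon invariant mass in GeV; n1, n2 are direction vectors."""
    cos12 = float(np.dot(n1, n2) / (np.linalg.norm(n1) * np.linalg.norm(n2)))
    return np.sqrt(2.0 * e1 * e2 * (1.0 - cos12))

e1, e2 = 2.1, 0.8                    # photon energies (GeV), made up
n1 = np.array([0.02, 0.00, 1.0])     # directions relative to the beam axis
n2 = np.array([-0.08, 0.05, 1.0])

m = mgg(e1, n1, e2, n2) * 1000.0     # MeV
print(f"m_gg = {m:.0f} MeV ->", "pi0 candidate" if 115 < m < 155 else "reject")
```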
## 4 Detecting the leptonic and semileptonic decays of the $`\tau `$

For the $`\tau `$ leptons emitted in either deep-inelastic or quasielastic $`\nu _\tau N`$ collisions, we generate the two leptonic decays, $`\tau ^{-}\to \mu ^{-}\nu \overline{\nu }`$ and $`\tau ^{-}\to e^{-}\nu \overline{\nu }`$, and three semileptonic (quasi-)two-body decays: $`\tau ^{-}\to \pi ^{-}\nu `$, $`\tau ^{-}\to \pi ^{-}\pi ^0\nu `$, which is mediated by the resonance $`\rho ^{-}\to \pi ^{-}\pi ^0`$, and $`\tau ^{-}\to \pi ^{-}\pi ^+\pi ^{-}\nu `$, which is mediated by the resonance $`a_1^{-}\to \pi ^{-}\pi ^+\pi ^{-}`$. The threshold effect for $`\tau `$ production in neutrino–nucleon collisions and the polarization of the $`\tau `$, which affects the angular distribution of decay products in the $`\tau `$ frame, are accounted for. Further details on our $`\tau `$ generator can be found in . For definiteness, we again assume that the $`\tau `$ neutrinos incident on EBC have the same energy spectrum as the muon neutrinos from CERN-SPS ($`E_\nu =27`$ GeV by flux), and that the distributed target is magnetized by a uniform field of 0.7 Tesla perpendicular to the beam direction. The decay products of the $`\tau `$ are propagated through the target and then reconstructed from hits in emulsion, as explained in the previous section. In estimating the detection efficiencies for different decay channels of the $`\tau `$, we adopt the (quasi-realistic) selection criteria listed below (a compact encoding of these cuts is sketched after the list).

* The $`\tau `$ must have decayed in the drift gap. This is necessary for reconstructing the detached vertex from track segments in the upstream and downstream ESs.
* The momentum of each charged daughter must exceed 1 GeV, and its emission angle must lie within 400 mrad of the beam direction. These selections are suggested by the fact that soft and broad-angle tracks are poorly reconstructed in emulsion.
* For a one-prong decay to a charged daughter $`d`$ (either a muon, an electron, or a pion), the kink angle must be sufficiently large: $`\theta _{\tau d}>20`$ mrad. This lower cut reflects the experimental uncertainty on the kink angle, which is close to 5 mrad.
* All charged daughters of the $`\tau `$ must be momentum-analyzed by curvature and reliably sign-selected (that is, reconstructed in the detector with $`\chi ^2<3`$ and $`\delta p/p<0.40`$). This will fix the sign of the $`\tau `$ and, on the other hand, will allow kinematic handles for rejecting the decays of charmed and strange particles.
* In a one-prong decay, the $`p_\mathrm{T}`$ of the charged daughter with respect to the $`\tau `$ direction must exceed 250 MeV. This is aimed at rejecting the decays of strange particles.

For those $`\tau `$ decay channels that have actually been simulated in the detector, the estimated detection efficiencies (or acceptances) are listed in Table 1. (Note that here we do not require the $`\pi ^0`$ from $`\tau ^{-}\to \pi ^{-}\pi ^0\nu `$ to be reconstructed.) For these decay channels of the $`\tau `$, the acceptance-weighted branching fractions add up to some 0.28. Approximately accounting for the other one-prong and three-prong channels, we estimate that nearly 32% of all $`\tau `$ leptons produced in the passive material will be detected in EBC by visible kinks or tridents.
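A compact encoding of the cuts above, together with the transverse-mass variable introduced next, might look as follows; the candidate values in the example are invented, with the pion's $`p_\mathrm{T}`$ placed near the kinematic endpoint of $`\tau ^{-}\to \pi ^{-}\nu `$.

```python
import math

def passes_one_prong(p, angle_to_beam, kink_angle, pt_wrt_tau,
                     chi2, dp_over_p, decayed_in_gap=True):
    """Selection criteria for one-prong tau decays, as listed above."""
    return (decayed_in_gap
            and p > 1.0                  # daughter momentum, GeV
            and angle_to_beam < 0.400    # rad
            and kink_angle > 0.020       # rad
            and chi2 < 3.0
            and dp_over_p < 0.40
            and pt_wrt_tau > 0.250)      # GeV

def m_transverse(m_h, pt):
    """M_T = sqrt(m_h^2 + p_T^2) + p_T for a decay tau -> h nu."""
    return math.sqrt(m_h ** 2 + pt ** 2) + pt

print(passes_one_prong(p=4.2, angle_to_beam=0.12, kink_angle=0.035,
                       pt_wrt_tau=0.88, chi2=1.4, dp_over_p=0.07))  # True
print(f"M_T = {m_transverse(0.1396, 0.88):.2f} GeV")  # 1.77, just below m_tau
```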
In a decay $`\tau ^-\to h^-\nu `$, the “transverse mass” is defined as $`M_\mathrm{T}=\sqrt{m_h^2+p_\mathrm{T}^2}+p_\mathrm{T}`$, where $`m_h`$ and $`p_\mathrm{T}`$ are the mass of the $`h^-`$ and its transverse momentum with respect to the $`\tau `$ direction. The two-body kinematics dictate that the unsmeared $`M_\mathrm{T}`$ distribution should reveal a very distinctive peak just below $`M_\mathrm{T}^{\mathrm{max}}=m_\tau `$, see the upper plots of Fig. 7. The $`M_\mathrm{T}`$ technique for identifying massive parents by two-body decays in emulsion was proven by discriminating the relatively rare decay $`D_s^+(1968)\to \mu ^+\nu `$ against a heavy background from other decays of charm . Observing the high-$`M_\mathrm{T}`$ peak in a detector requires good spectrometry of charged pions and, for the decay $`\tau ^-\to \pi ^-\pi ^0\nu `$ in particular, good reconstruction of $`\pi ^0`$ mesons. The bottom plots of Fig. 7 show the smeared $`M_\mathrm{T}`$ distributions for the reconstructed decays $`\tau ^-\to \pi ^-\nu `$, $`\tau ^-\to \pi ^-\pi ^0\nu `$, and $`\tau ^-\to \pi ^-\pi ^+\pi ^-\nu `$ (the $`\pi ^0`$ for $`\tau ^-\to \pi ^-\pi ^0\nu `$ has been selected in the mass window $`115<m_{\gamma \gamma }<155`$ MeV, see Fig. 6). The Jacobian cusps of the original $`M_\mathrm{T}`$ distributions largely survive the apparatus smearings of EBC and, therefore, will provide distinctive signatures of the $`\tau `$.
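A short numerical check of the transverse-mass construction (Python; the mass values are standard inputs we supply, not quantities from the simulation) confirms that the kinematic endpoint of $`M_\mathrm{T}`$ sits exactly at $`m_\tau `$, which is what produces the Jacobian peak:

```python
import math

M_TAU, M_PI = 1.777, 0.13957   # GeV (approximate PDG values)

def transverse_mass(m_h, pt):
    """M_T = sqrt(m_h^2 + pT^2) + pT, with pT w.r.t. the tau direction."""
    return math.sqrt(m_h**2 + pt**2) + pt

def pt_in_two_body(m_parent, m_h, cos_theta_star):
    """pT of h in the decay parent -> h + nu for a given decay angle."""
    p_star = (m_parent**2 - m_h**2) / (2.0 * m_parent)  # h momentum, tau frame
    return p_star * math.sqrt(1.0 - cos_theta_star**2)

# At cos(theta*) = 0 the pion pT is maximal and M_T reaches m_tau exactly:
# sqrt(m_h^2 + p*^2) + p* = E* + p* = m_tau for a two-body decay.
pt_max = pt_in_two_body(M_TAU, M_PI, 0.0)
print(f"pT_max = {pt_max:.3f} GeV, "
      f"M_T at endpoint = {transverse_mass(M_PI, pt_max):.3f} GeV "
      f"(m_tau = {M_TAU} GeV)")
```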
## 5 Sensitivity to neutrino oscillations

Neutrino oscillations driven by $`\mathrm{\Delta }m^2\sim 1`$ eV<sup>2</sup> can be efficiently probed at a location on Mount Jura near CERN that is irradiated by the existing wide-band $`\nu _\mu `$ beam of the CERN-SPS accelerator ($`E_\nu =27`$ GeV by flux) over a “medium” baseline of 17 km . At this location, an EBC detector can be deployed in the existing magnet of the NOMAD detector, which delivers a magnetic field of up to 0.7 Tesla. In the large internal volume of this magnet, $`3.5\times 3.5\times 7.5`$ m<sup>3</sup>, one might deploy a distributed target with a volume of $`3.0\times 3.0\times 6.6`$ m<sup>3</sup> consisting of 30 modules (see Section 2). The target has a total thickness of 10 radiation lengths, and contains nearly 20 tons of passive material and 3 tons of standard emulsion (or a lesser amount of diluted emulsion, which may be warranted by the relatively low occupancy at a medium-baseline location). The magnetic volume will also house a few drift chambers downstream of the target. As photon conversions and electrons will be efficiently detected in the distributed target, no extra electromagnetic calorimeter is foreseen. The design of the muon system is not discussed in this paper. At the Jura location, the rate of $`\nu _\mu `$-induced CC collisions has been estimated as 843 events per ton of target per $`10^{19}`$ protons delivered by the CERN-SPS. Assuming $`10^{20}`$ delivered protons, which corresponds to 3–4 years of operation, the proposed detector will collect nearly $`1.7\times 10^5`$ CC events. Even if the probability of the $`\nu _\mu \to \nu _\tau `$ transition is as small as 0.3% , we will detect some 80 $`\tau `$ events with a negligibly small background from the decays of strange and anticharm particles. (This prediction assumes a $`\tau `$ detection efficiency of 32%, as estimated above for the same beam, and takes into account the threshold effect in $`\tau `$ production.) Alternatively, a zero signal will allow us to exclude (at 90% C.L.) an area of the parameter plane ($`\mathrm{sin}^22\theta _{\mu \tau }`$, $`\mathrm{\Delta }m_{\mu \tau }^2`$) that is depicted in Fig. 8. As demonstrated above, EBC will very efficiently identify, sign-select, and momentum-analyze prompt electrons from CC collisions of electron neutrinos and antineutrinos. The energy of an incident $`\nu _e`$ ($`\overline{\nu }_e`$) will be estimated to better than 10%. If at the Jura site the transition $`\nu _\mu \to \nu _e`$ indeed occurs with a probability of some 0.003, as in LSND , then in an exposure of $`10^{20}`$ protons on target we will detect and reconstruct some 460 $`\nu _eN\to e^-X`$ events due to oscillated neutrinos against a background of nearly 860 CC events due to the original $`\nu _e`$ component of the beam . As the original $`\nu _e`$ component of the beam is substantially harder than the $`\nu _\mu `$ component, a $`\nu _\mu \to \nu _e`$ signal will effectively reduce the mean energy of the $`\nu _eN\to e^-X`$ events. A very intense and collimated beam from a muon storage ring will allow long-baseline experiments with relatively small but finely instrumented detectors. For purely illustrative purposes, we assume an EBC detector as sketched above (20 tons of passive material plus 3 tons of standard emulsion) that is irradiated by a $`\mu ^-`$ ring with parameters as foreseen in (that is, $`7.5\times 10^{20}`$ injected muons per year of operation and a straight section that amounts to 25% of the ring circumference). We also assume a “nominal” distance of 732 km (either from CERN to Gran Sasso or from Fermilab to Soudan), which results in effective baselines of $`L/E_\nu \sim 20`$ and 10 km/GeV for the stored-muon energies $`E_\mu =50`$ and 100 GeV, respectively. Then, given unpolarized muons with $`E_\mu =50`$ (100) GeV in the ring, in the absence of oscillations the detector will annually collect some $`5.5\times 10^3`$ ($`4.4\times 10^4`$) and $`2.4\times 10^3`$ ($`1.9\times 10^4`$) CC collisions of muon neutrinos and electron antineutrinos, respectively. Note that the expected event rate for $`E_\mu =100`$ GeV is as high as in the CERN-SPS beam at the Jura location. Taken together, the data on atmospheric and reactor neutrinos favor a $`\nu _\mu \to \nu _\tau `$ transition with almost maximal mixing and with a mass difference in the range $`\mathrm{\Delta }m_{\mu \tau }^2\sim 10^{-2}`$–$`10^{-3}`$ eV<sup>2</sup> . Assuming for definiteness $`\mathrm{sin}^22\theta _{\mu \tau }=1`$ and $`\mathrm{\Delta }m_{\mu \tau }^2=5\times 10^{-3}`$ eV<sup>2</sup>, we estimate that in three years of operation a 20-ton EBC will detect a signal of some 50 (170) $`\tau ^-`$ leptons for $`E_\mu =50`$ (100) GeV, with a negligible background of $`\overline{\nu }_e`$-produced anticharm. (Note that the $`E_\mu `$ dependence of the $`\tau `$ signal largely reflects the increase of $`\sigma (\nu _\tau N\to \tau ^-X)`$ with neutrino energy: in the region $`\mathrm{\Delta }m^2`$ (eV<sup>2</sup>) $`\ll E_\nu /L`$ (GeV/km), the flux of oscillated neutrinos is virtually independent of $`E_\mu `$.) Alternatively, a zero $`\tau ^-`$ signal will effectively exclude a large area of parameter space for the transition $`\nu _\mu \to \nu _\tau `$, see Fig. 8. The transition $`\overline{\nu }_e\to \overline{\nu }_\tau `$ will be simultaneously probed with a comparable sensitivity, and the two transitions will be reliably discriminated by the charge of the $`\tau `$.
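For orientation, a small Python sketch reproduces the order of magnitude of the quoted numbers from the standard two-flavor oscillation formula. The ~0.5 threshold-suppression factor for $`\tau `$ production is our assumption, introduced only to close the arithmetic; it is not a number quoted in the text.

```python
import math

def p_osc(sin2_2theta, dm2_ev2, L_km, E_GeV):
    """Standard two-flavor vacuum oscillation probability."""
    return sin2_2theta * math.sin(1.267 * dm2_ev2 * L_km / E_GeV) ** 2

# Medium baseline (Jura): 1.7e5 CC events x 0.003 transition probability
# x 0.32 tau efficiency, times an ASSUMED ~0.5 suppression from the tau
# production threshold, gives the right order of the quoted ~80 tau events.
print(1.7e5 * 0.003 * 0.32 * 0.5)        # ~80

# Long baseline (muon ring): probability at the atmospheric parameters
# assumed in the text, evaluated at L/E ~ 20 km/GeV (E ~ 36.6 GeV, L = 732 km).
print(p_osc(1.0, 5e-3, 732.0, 36.6))     # ~1.6e-2
```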
Using the detached-vertex information, the transitions $`\nu _\mu \to \nu _e`$ and $`\overline{\nu }_e\to \overline{\nu }_\mu `$ will be separated from the transitions $`\nu _\mu \to \nu _\tau `$ and $`\overline{\nu }_e\to \overline{\nu }_\tau `$ followed by the leptonic decays $`\tau ^-\to e^-\nu \overline{\nu }`$ and $`\tau ^+\to \mu ^+\nu \overline{\nu }`$, respectively. We may conclude that, in this experimental environment, even a relatively small EBC detector will efficiently probe the entire pattern of neutrino oscillations in the region $`\mathrm{\Delta }m^2\sim 10^{-2}`$–$`10^{-3}`$ eV<sup>2</sup>.

## 6 Summary

A conceptual detector scheme is proposed for studying various channels of neutrino oscillations. The hybrid-emulsion spectrometer will detect and discriminate the neutrinos of different flavors by their CC collisions. The design emphasizes the detection of $`\tau `$ leptons by detached vertices, the identification and sign-selection of electrons, and spectrometry for all charged particles and photons. A distributed target is formed by layers of low-Z material, emulsion–plastic–emulsion sheets, and air gaps in which $`\tau `$ decays are detected. Target modules with a mean density of 0.49 g/cm<sup>3</sup> and a radiation length of 52.6 cm, similar to those of a bubble chamber with a neon–hydrogen filling, alternate with multisampling drift chambers that provide electronic tracking in real time. The tracks of charged secondaries, including electrons, are momentum-analyzed by curvature in the magnetic field using hits in successive thin layers of emulsion and in the drift chambers. Electrons are identified by the change of curvature and by the emission of bremsstrahlung. Photons are detected and analyzed through conversions in the distributed target, and neutral pions are reconstructed. The $`\tau `$ leptons are efficiently detected and sign-selected in all major decay channels, including $`\tau ^-\to e^-\nu \overline{\nu }`$. At a medium-baseline location on Mount Jura in the existing neutrino beam of the CERN-SPS proton machine, the detector will be sensitive to both the $`\nu _\mu \to \nu _\tau `$ and $`\nu _\mu \to \nu _e`$ transitions in the mass-difference region $`\mathrm{\Delta }m^2\sim 1`$ eV<sup>2</sup>, as suggested by the results of LSND. At a long-baseline location in the neutrino beam of a muon storage ring, even a relatively small spectrometer of the proposed type will efficiently probe the entire pattern of neutrino oscillations in the region $`\mathrm{\Delta }m^2\sim 10^{-2}`$–$`10^{-3}`$ eV<sup>2</sup> that is suggested by the data on atmospheric neutrinos. This work was supported in part by the CRDF foundation (grant RP2-127) and by the Russian Foundation for Fundamental Research (grant 98-02-17108). Figure Captions Fig. 1. Schematic of the assumed fine structure of the target, showing the 1-mm-thick carbon–copper plates (960 + 40 $`\mu `$m), the emulsion–plastic–emulsion sheets (50 + 100 + 50 $`\mu `$m), and the drift space in which $`\tau `$ decays are selected (4800 $`\mu `$m). Fig. 2. The depth of the primary vertex in the carbon–copper plate for all $`\tau `$ events (a) and for those events in which the $`\tau `$ has decayed in the drift space (b). Here and in subsequent figures, the energy spectrum of incident $`\tau `$ neutrinos is assumed to be proportional to the spectrum of muon neutrinos from the CERN-SPS accelerator. Fig. 3. The length along the beam direction of the track segment used for analyzing the $`e^-`$ momentum by curvature (see text). Fig. 4.
The ratio between the fitted and true momenta of the electron from $`\tau ^-\to e^-\nu \overline{\nu }`$, $`R=p_e^{\mathrm{meas}}/p_e^{\mathrm{true}}`$, for $`p_e>1`$ GeV (a), for $`1<p_e<5`$ GeV (b), and for $`p_e>5`$ GeV (c). Fig. 5. The ratio between the fitted and true momenta, $`R=p_\pi ^{\mathrm{meas}}/p_\pi ^{\mathrm{true}}`$, for the $`\pi ^-`$ with $`p_\pi >1`$ GeV originating from the decay $`\tau ^-\to \pi ^-\nu `$. Fig. 6. For the decay $`\tau ^-\to \pi ^-\pi ^0\nu `$ followed by $`\pi ^0\to \gamma \gamma `$, the measured invariant mass of the two photons that have been detected by conversions in the distributed target. Fig. 7. Transverse mass $`M_\mathrm{T}=\sqrt{m_h^2+p_\mathrm{T}^2}+p_\mathrm{T}`$ for the (quasi-)two-body decays $`\tau ^-\to h^-\nu `$ with $`h^-=\pi ^-`$ (left-hand column), $`h^-=a_1^-\to \pi ^-\pi ^+\pi ^-`$ (middle column), and $`h^-=\rho ^-\to \pi ^-\pi ^0`$ (right-hand column). The unsmeared $`M_\mathrm{T}`$ distributions for all events in each channel prior to any selections are shown in the top row. The smeared distributions for detected events are shown in the bottom row. Fig. 8. Null-limit sensitivity to the $`\nu _\mu \to \nu _\tau `$ transition (at 90% C.L.) of a 20-ton EBC detector deployed at 17 km from the CERN-SPS and at 732 km from a muon storage ring with $`E_\mu =50`$ and 100 GeV, assuming 10<sup>20</sup> protons delivered by the CERN-SPS and $`2.2\times 10^{21}`$ negative muons injected into a ring with a straight section of 25%. The shaded area on the right is the region of parameter space for $`\nu _\mu \to \nu _\tau `$ suggested by a combined analysis of Kamiokande and Super-Kamiokande data . Also illustrated are the best upper limits on $`\mathrm{sin}^22\theta _{\mu \tau }`$ for $`\mathrm{\Delta }m_{\mu \tau }^2<10`$ eV<sup>2</sup>, as imposed by E531 and CDHS .
## 1 Introduction

It is about ten years now that the first theoretical work suggesting a strong enhancement around the $`2m_\pi `$ threshold of the in-medium ‘sigma’-meson mass distribution was published . The threshold enhancement originates from the in-matter p-wave renormalization of the two pions to which the ‘bare sigma’ meson of about 1 GeV decays. The existence of such a bare sigma meson has in the meantime been confirmed by quenched lattice calculations . Its decay into two pions leads to a strong hybridization of the sigma meson, and finally only a broad bump of about 600 MeV width survives at considerably lower energy. The $`\pi \pi `$ interaction constants in the completely phenomenological model of ref. had been adjusted to reproduce the experimental $`\pi \pi `$ phase shift in the scalar-isoscalar channel. Since the latter rises right from threshold with a considerable positive slope, it signals substantial attraction among low-energy pions. Pions are, however, too light to be bound by the interaction. On the other hand, the p-wave renormalization of the pions in the nuclear medium (nucleon- and delta-hole couplings) increases their kinetic mass and thus suppresses the kinetic energy. As a consequence, binding of the two pions is favored at high density. Before this occurs, a strong accumulation of strength close to the $`2m_\pi `$ threshold takes place. Even if, with increasing density, the mass distribution receives contributions from below the $`2m_\pi `$ threshold, this does not lead to a sharp two-pion bound state. Various decay channels, such as $`\pi N`$, $`\pi \mathrm{\Delta }`$, or $`NN`$, render it broad . To describe the subthreshold behavior of the scalar-isoscalar strength function in accordance with the requirements of chiral symmetry, it turned out later that the $`\pi \pi `$ scattering needs to be treated with chiral models, such as the linear or non-linear sigma model, rather than with an empirical parametrization of the s-wave phase shift. In addition, a formalism is called for which respects chiral symmetry beyond the tree level . Otherwise the in-medium effects become so strong that they lead to unreasonable pion-pair condensation . In the meantime, s-wave pion-pion correlations have attracted a significant amount of attention both on the theoretical and the experimental sides. This is primarily related to the fact that these studies are of relevance for the evolution of the chiral condensate and its fluctuations with increasing density. The enhancement of the in-medium $`\pi \pi `$ correlations close to and below the vacuum threshold has recently been confirmed by an independent calculation within the non-linear sigma model. Also on the experimental side important progress has been made. The CHAOS collaboration has studied the $`A(\pi ,2\pi )`$ knock-out reaction, and the invariant-mass measurements of the outgoing pions reveal a strong low-mass enhancement which seems to corroborate the theoretical predictions. Performing elaborate reaction calculations, including initial- and final-state absorption, Vicente-Vacas and Oset have claimed that the theory outlined above underestimates the measured $`\pi \pi `$ mass enhancement. This claim may be partly questioned, since the reaction theory calls for the inclusion of in-medium pion pairs with a finite total three-momentum, which so far have been propagated only in back-to-back kinematics ($`\stackrel{}{q}=0`$).
It was shown in that allowing for finite three-momenta of the pion pair leads to a further increase of the $`\pi ^+\pi ^-`$ invariant-mass distribution near threshold. On the other hand, Hatsuda et al. have argued that the partial restoration of chiral symmetry in nuclear matter, which leads to a dropping of the $`\sigma `$-meson mass , induces effects similar to those of the standard many-body correlations mentioned above. It is therefore natural to study the combination of both effects.

## 2 The Model

As a model for $`\pi \pi `$ scattering we consider the linear sigma model treated in leading order of the $`1/N`$ expansion . The scattering matrix can then be cast in the following form $$T_{ab,cd}(s)=\delta _{ab}\delta _{cd}\frac{D_\pi ^{-1}(s)-D_\sigma ^{-1}(s)}{3\sigma ^2}\frac{D_\sigma (s)}{D_\pi (s)},$$ (1) where $`s`$ is the Mandelstam variable. In Eq. (1), $`D_\pi (s)`$ and $`D_\sigma (s)`$ denote the full pion and sigma propagators, respectively, while $`\sigma `$ is the sigma condensate. In the soft-pion limit, the expression in Eq. (1) reduces to a Ward identity which links the $`\pi \pi `$ four-point function to the $`\pi `$ and $`\sigma `$ two-point functions as well as to the $`\sigma `$ one-point function. To this order, the pion propagator and the sigma condensate are obtained from a Hartree-Bogoliubov approximation . In terms of the pion mass $`m_\pi `$ and the decay constant $`f_\pi `$, they are given as $$D_\pi (s)=\frac{1}{s-m_\pi ^2},\qquad f_\pi =\sqrt{3}\sigma .$$ (2) The sigma meson, on the other hand, is obtained from the Random-Phase Approximation (RPA) involving $`\pi \pi `$ scattering and reads $$D_\sigma (s)=\left[s-m_\sigma ^2-\frac{2\lambda ^4\sigma ^2\mathrm{\Sigma }_{\pi \pi }(s)}{1-\lambda ^2\mathrm{\Sigma }_{\pi \pi }(s)}\right]^{-1},$$ (3) where $`\mathrm{\Sigma }_{\pi \pi }(s)`$ is the $`\pi \pi `$ self-energy, regularized by means of a form factor which is used as a fit function and allows one to reproduce the experimental $`\pi \pi `$ phase shifts. The coupling constant $`\lambda ^2`$ denotes the bare quartic coupling of the linear $`\sigma `$-model, related to the mean-field pion mass $`m_\pi `$, the sigma mass $`m_\sigma `$, and the condensate $`\sigma `$ via the mean-field saturated Ward identity $$m_\sigma ^2=m_\pi ^2+2\lambda ^2\sigma ^2.$$ (4) It is clear from the above that the sigma-meson propagator in this approach is correctly defined, since it satisfies a hierarchy of Ward identities. In cold nuclear matter the pion couples dominantly to $`\mathrm{\Delta }h`$, $`ph`$, as well as to $`2p2h`$ excitations, which are in turn renormalized by means of repulsive nuclear short-range correlations (see for details). Since the pion is a (near) Goldstone mode, its in-medium s-wave renormalization does not induce appreciable changes. The sigma meson, on the other hand, is not protected by chiral symmetry from a large s-wave renormalization, resulting in a density-dependent mass modification . We extract an approximate density dependence at the mean-field level from the (leading-order) density dependence of the condensate. Indeed, from Eq. (4) it is clear that the density dependence of the sigma-meson mass is essentially dictated by the density dependence of the condensate.
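A minimal numerical sketch of the mean-field relations (2) and (4) follows (Python). The input masses are illustrative round numbers (the bare sigma of about 1 GeV mentioned in the Introduction), not fitted values of the model:

```python
import math

m_pi, f_pi = 0.138, 0.093      # GeV; standard vacuum values
m_sigma = 1.0                  # GeV; bare sigma mass (assumed, see text)

sigma = f_pi / math.sqrt(3.0)  # sigma condensate from Eq. (2)
# Quartic coupling from the Ward identity (4): m_s^2 = m_pi^2 + 2*lam2*sigma^2
lam2 = (m_sigma**2 - m_pi**2) / (2.0 * sigma**2)
print(f"sigma = {sigma:.4f} GeV, lambda^2 = {lam2:.1f}")
```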
Thus, for densities below and around nuclear saturation density, $`\rho _0`$, we take for the in-medium sigma-meson mass the simple ansatz $$m_\sigma (\rho )=m_\sigma \left(1-\alpha \frac{\rho }{\rho _0}\right),$$ (5) where $`\rho `$ is the nuclear-matter density and $`m_\sigma `$ is the vacuum $`\sigma `$-meson mass. Such a density dependence arises very naturally in the linear sigma model from the tad-pole graph, in which the sigma meson couples directly to the nuclear density. In this density dependence was investigated quantitatively, and a value of $`\alpha `$ in the range of 0.2 to 0.3 was found. These are also the values which we will use. It should be mentioned at this point that between the (s-wave) density renormalization via the tad-pole of the bare sigma-meson mass in the linear sigma model (as we mentioned already, such a bare sigma exists in quenched QCD lattice calculations ) and the in-medium p-wave renormalization of the pion there exists no double counting. Besides the different partial waves involved, a simple inspection of the corresponding Feynman graphs shows that the two density effects are of totally different microscopic origin.

## 3 Results

The result for the invariant-mass distribution $`\mathrm{Im}D_\sigma (E_{\pi \pi })`$, as calculated from Eq. (3) using the in-medium mass (5), is shown in Fig. 1 at saturation density. One observes a dramatic downward shift of the mass distribution as compared to the vacuum. The low-energy enhancement, already present without the sigma-mass modification ($`\alpha =0`$) and induced by the density dependence of the pion loop, is strongly reinforced once the in-medium $`\sigma `$-meson mass is included. For $`\alpha =0.2`$ and $`\alpha =0.3`$ the peak height is increased by a factor of 2 and 4, respectively. Similarly, a sizable effect can be noticed in the imaginary part of the T-matrix, as shown in Fig. 2, which might be sufficient to explain the findings of the CHAOS collaboration . Further work in this direction is in progress.
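As a quick numerical illustration of the ansatz (5) with the two values of $`\alpha `$ used above (same assumed 1 GeV vacuum mass as in the previous sketch):

```python
# Evaluate Eq. (5) at nuclear saturation density, rho/rho_0 = 1.
m_sigma = 1.0   # GeV, assumed vacuum value
for alpha in (0.2, 0.3):
    m_in_medium = m_sigma * (1.0 - alpha * 1.0)
    print(f"alpha = {alpha}: m_sigma(rho_0) = {m_in_medium:.2f} GeV")
# alpha = 0.2 -> 0.80 GeV; alpha = 0.3 -> 0.70 GeV
```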
## 4 Discussion and Conclusion

Let us further comment on and discuss the above results. They clearly indicate that there exists a strong conspiracy between the s- and p-wave in-medium renormalizations of the hybrid ‘sigma meson’. Both effects induce a strong shift of the strength towards and even below the $`2m_\pi `$ threshold. However, one has to worry about opposing effects. For example, it may happen that vertex corrections, usually a source of repulsion and not taken into account in this work, weaken the effects. They seem, however, to be of minor importance, as was recently shown by Chanfray et al. . More care should also be taken in properly incorporating Pauli blocking when renormalizing the pion pairs in matter, although preliminary investigations have shown that this effect is weak. In conclusion, we have shown that a dropping sigma-meson mass, linked to the partial restoration of chiral symmetry in nuclear matter, further enhances the build-up of the previously found $`\pi \pi `$ strength in the $`I=J=0`$ channel. This scenario holds in the linear sigma model but is likely to hold true in the non-linear sigma model as well, since the tad-pole graph which renormalizes the bare sigma-meson mass in the linear sigma model has its equivalent in the non-linear version. Whether our findings are linked to similar ones found recently in the experiment by Bonutti et al. is an exciting possibility which, however, must still be consolidated by refined reaction calculations. Last but not least, there remains the challenge of confirming the results of Bonutti et al. in other experiments. A promising reaction will be the $`A(\gamma ,2\pi )`$ reaction off nuclei with varying mass number, since the average density probed increases from low to high mass numbers and since photons, in contrast to pions, probe the nuclear interior. Existing data from MAMI Mainz are waiting to be analyzed! We hope to have shown in this contribution that, in what concerns chiral symmetry, the scalar-isoscalar channel is at least as interesting and promising as the vector-isovector channel (rho meson). Acknowledgements: Fruitful and stimulating discussions with N. Grion, W. Nörenberg, E. Oset, M. Vicente-Vacas, T. Walcher, and W. Weise are appreciated.
# Substructure of the Nucleon in the Chiral Quark Model∗

## Abstract

The spin and orbital angular momentum carried by the different quark flavors in the nucleon are calculated in the SU(3) chiral quark model with symmetry breaking. A similar calculation is also performed for the other octet and decuplet baryons. Furthermore, the flavor and spin contents of the charm and anti-charm quarks are predicted in the SU(4) symmetry-breaking chiral quark model. preprint: INPP-UVA-00-01 <sup>∗</sup> Plenary talk presented at the Circum-Pan-Pacific RIKEN Symposium on ‘High Energy Spin Physics’, RIKEN, Wako, Japan, November 3-6, 1999. <sup>+</sup> E-mail address: xs3e@virginia.edu

I. Introduction

One of the important tasks in hadron physics is to reveal the internal structure of the nucleon. This includes the study of the flavor, spin, and orbital angular momentum shared by the quarks and gluons in the nucleon. These quantities determine the basic properties of the nucleon: its spin, magnetic moment, axial coupling constant, elastic form factors, and deep-inelastic structure functions. The polarized deep-inelastic scattering (DIS) data indicate that the quark spin contributes only about one third of the nucleon spin, or even less. A natural and interesting question is: where is the missing spin? Intuitively, and also from quantum chromodynamics (QCD) , the nucleon spin can be decomposed into the quark and gluon contributions $$\frac{1}{2}=<J_z>_{q+\overline{q}}+<J_z>_G=\frac{1}{2}\mathrm{\Delta }\mathrm{\Sigma }+<L_z>_{q+\overline{q}}+<J_z>_G,$$ $`(1)`$ where $`\mathrm{\Delta }\mathrm{\Sigma }=\sum _q[\mathrm{\Delta }q+\mathrm{\Delta }\overline{q}]`$ and $`<L_z>_{q+\overline{q}}`$ are the total helicity and orbital angular momentum carried by quarks and antiquarks, and $`<J_z>_G`$ is the gluon angular momentum. The smallness of $`\frac{1}{2}\mathrm{\Delta }\mathrm{\Sigma }`$ implies that the missing part should be contributed either by the quark orbital motion or by the gluon angular momentum. Most recently, it has been shown that $`<J_z>_{q+\overline{q}}`$ might be measured in the deep virtual Compton scattering process , and one may then obtain the quark orbital angular momentum from the difference $`<J_z>_{q+\overline{q}}-\frac{1}{2}\mathrm{\Delta }\mathrm{\Sigma }`$. Hence the study of the quark spin and orbital angular momentum is important and interesting both experimentally and theoretically. In the naive quark model, $`<L_z>_{q+\overline{q}}=0`$ and $`<L_z>_G=0`$. In the bag model , $`\frac{1}{2}\mathrm{\Delta }\mathrm{\Sigma }\simeq 0.39`$ and $`<L_z>_q\simeq 0.11`$, while in the Skyrme model , $`\mathrm{\Delta }G=\mathrm{\Delta }\mathrm{\Sigma }=0`$ and $`<L_z>=\frac{1}{2}`$. Most recently, Casu and Sehgal have shown that, to fit the baryon magnetic moments and the polarized DIS data, a large collective orbital angular momentum $`<L_z>`$, which contributes almost $`80\%`$ of the nucleon spin, is needed. Hence the question of how much of the nucleon spin comes from the quark orbital motion remains.
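A trivial closure check of the decomposition (1) for the model numbers just quoted (Python; treating the gluon term as the remainder is our bookkeeping convention, not a statement of the cited works):

```python
# Bag model: (1/2)DSigma ~ 0.39 and <Lz>_q ~ 0.11 already sum to 1/2,
# leaving essentially nothing for the gluon term.
half_delta_sigma, l_z = 0.39, 0.11
print(half_delta_sigma + l_z)   # 0.50

# Skyrme model: DSigma = 0, all of the spin is orbital.
half_delta_sigma, l_z = 0.0, 0.5
print(half_delta_sigma + l_z)   # 0.50
```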
II. SU(3) Chiral Quark Model

(a) Basic Assumptions

The effective interaction Lagrangian for the SU(3) chiral quark model is $$L_I=g_8\overline{q}\left(\begin{array}{ccc}G_u^0& \pi ^+& \sqrt{ϵ}K^+\\ \pi ^{-}& G_d^0& \sqrt{ϵ}K^0\\ \sqrt{ϵ}K^{-}& \sqrt{ϵ}\overline{K}^0& G_s^0\end{array}\right)q,$$ $`(2a)`$ where $`G_{u(d)}^0`$ and $`G_s^0`$ are defined as $$G_{u(d)}^0=+(-)\pi ^0/\sqrt{2}+\sqrt{ϵ_\eta }\eta ^0/\sqrt{6}+\zeta ^{\prime }\eta ^0/\sqrt{3},\qquad G_s^0=-\sqrt{ϵ_\eta }\eta ^0/\sqrt{6}+\zeta ^{\prime }\eta ^0/\sqrt{3}.$$ $`(2b)`$ The breaking effects are explicitly included. Here $`a\propto |g_8|^2`$ denotes the transition probability of the chiral fluctuation or splitting $`u(d)\to d(u)+\pi ^{+(-)}`$, and $`ϵa`$ denotes the probability of $`u(d)\to s+K^{+(0)}`$. Similar definitions are used for $`ϵ_\eta a`$ and $`\zeta ^2a`$. If the breaking is dominated by the mass-suppression effect, one reasonably expects $`0\le \zeta ^2a<ϵ_\eta a\le ϵa\le a`$. The basic assumptions of the chiral quark model are: (i) the quark flavor, spin, and orbital contents of the nucleon are determined by its valence quark structure and all possible chiral fluctuations, whose probabilities depend on the interaction Lagrangian (2); (ii) the coupling between the quark and the Goldstone boson is rather weak, so that one can treat the fluctuation $`q\to q^{\prime }+\mathrm{GB}`$ as a small perturbation ($`a\simeq 0.10`$–0.15) and the contributions from higher-order fluctuations can be neglected; and (iii) the quark spin-flip interaction dominates the splitting process $`q\to q^{\prime }+\mathrm{GB}`$. The latter can be related to the picture given by the instanton model , hence the spin-nonflip interaction is suppressed. Based upon these assumptions, the quark flips its spin and changes (or maintains) its flavor by emitting a charged (or neutral) Goldstone boson. The light-quark sea asymmetry $`\overline{u}<\overline{d}`$ is attributed to the existing flavor asymmetry of the valence quark numbers (two valence $`u`$ quarks and one valence $`d`$ quark) in the proton. On the other hand, the quark spin reduction is due to the spin dilution in the chiral splitting processes. Furthermore, the quark spin component changes by one unit of angular momentum, $`(s_z)_f-(s_z)_i=+1`$ or $`-1`$, due to the spin flip in the fluctuation with GB emission. Angular momentum conservation requires the same amount of change of the orbital angular momentum but with opposite sign, i.e. $`(L_z)_f-(L_z)_i=-1`$ or $`+1`$. This induced orbital motion is distributed among the quarks and antiquarks, and compensates the spin reduction in the chiral splitting.

(b) Quark Spin Contents in the Nucleon

The spin-weighted quark contents are $$\mathrm{\Delta }u^p=\frac{4}{5}\mathrm{\Delta }_3-a,\qquad \mathrm{\Delta }d^p=-\frac{1}{5}\mathrm{\Delta }_3-a,\qquad \mathrm{\Delta }s^p=-ϵa,$$ $`(3a)`$ where $`\mathrm{\Delta }_3=\frac{5}{3}[1-a(ϵ+2f)]`$ and $`f\equiv \frac{1}{2}+\frac{ϵ_\eta }{6}+\frac{\zeta ^2}{3}`$. The total quark spin content in the proton is $$\frac{1}{2}\mathrm{\Delta }\mathrm{\Sigma }^p=\frac{1}{2}(\mathrm{\Delta }u^p+\mathrm{\Delta }d^p+\mathrm{\Delta }s^p)=\frac{1}{2}-a(1+ϵ+f)\equiv \frac{1}{2}-a\xi _1,$$ $`(3b)`$ where the notation $`\xi _1\equiv 1+ϵ+f`$ has been introduced. A special feature of the chiral quark model is that all the spin-weighted antiquark contents vanish, $$\mathrm{\Delta }\overline{q}=0.$$ $`(3c)`$ Hence $`(\mathrm{\Delta }q)_{sea}\ne \mathrm{\Delta }\overline{q}`$, which differs from the predictions for a sea quark and antiquark pair produced by a gluon (see the discussion in ).
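The relations (3a)-(3b) can be evaluated directly. The sketch below (Python) uses the parameter values quoted in part (d) below, with $`ϵ_\eta `$ assumed equal to $`ϵ`$ (as in the SU(4) section), and reproduces the two fit inputs:

```python
a, eps, eps_eta, zeta2 = 0.145, 0.46, 0.46, 0.10   # eps_eta = eps assumed

f = 0.5 + eps_eta / 6.0 + zeta2 / 3.0
delta3 = (5.0 / 3.0) * (1.0 - a * (eps + 2.0 * f))

du = 0.8 * delta3 - a        # Delta u^p, Eq. (3a)
dd = -0.2 * delta3 - a       # Delta d^p
ds = -eps * a                # Delta s^p
print(f"Du - Dd      = {du - dd:.3f}  (input 1.26)")
print(f"Du + Dd - 2Ds = {du + dd - 2*ds:.3f}  (input 0.60)")
print(f"(1/2)DSigma  = {0.5 * (du + dd + ds):.3f} "
      f"= 1/2 - a*xi_1 = {0.5 - a * (1 + eps + f):.3f}")   # Eq. (3b)
```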
(c) Quark Orbital Momentum in the Nucleon

The orbital angular momentum produced in the splitting $`q_{\uparrow }\to q_{\downarrow }^{\prime }+\mathrm{GB}`$ is shared by the recoil quark ($`q^{\prime }`$) and the Goldstone boson (GB). Defining $`2\kappa `$ as the fraction of the orbital angular momentum carried by the GB, the fraction carried by the recoil quark is $`1-2\kappa `$. We assume that the fraction $`2\kappa `$ is equally shared by the quark and antiquark in the GB, and we call $`\kappa `$ the partition factor, which satisfies $`0<\kappa <1/2`$. For $`\kappa =1/3`$, the three particles (the recoil quark and the quark and antiquark in the GB) equally share the induced orbital angular momentum. For the proton, we obtain $$<L_z>_q^p\equiv <L_z>_{u+d+s}^p=(1-\kappa )\xi _1a,$$ $`(4a)`$ $$<L_z>_{\overline{q}}^p\equiv <L_z>_{\overline{u}+\overline{d}+\overline{s}}^p=\kappa \xi _1a,$$ $`(4b)`$ and $$<L_z>_{q+\overline{q}}^p\equiv <L_z>_q^p+<L_z>_{\overline{q}}^p=\xi _1a.$$ $`(4c)`$ The orbital angular momentum of each quark flavor may depend on the partition factor $`\kappa `$, but the total orbital angular momentum (4c) is independent of $`\kappa `$. Furthermore, the amount $`\xi _1a`$ is just the same as the total spin reduction in (3b), and the sum of (4c) and (3b) gives $$<J_z>_{q+\overline{q}}^p=\frac{1}{2}\mathrm{\Delta }\mathrm{\Sigma }_{q+\overline{q}}^p+<L_z>_{q+\overline{q}}^p=\frac{1}{2}.$$ $`(4d)`$ In the chiral fluctuations, the missing part of the quark spin is transferred into the orbital motion of the quarks and antiquarks. The amount of quark spin reduction $`a(1+ϵ+f)`$ in (3b) is canceled by the equal increase of the quark orbital angular momentum in (4c), and the total angular momentum of the nucleon is unchanged.

(d) Parameters

The model parameters are determined by three inputs: $`\mathrm{\Delta }u-\mathrm{\Delta }d=1.26`$, $`\mathrm{\Delta }u+\mathrm{\Delta }d-2\mathrm{\Delta }s=0.60`$, and $`\overline{d}-\overline{u}=0.143`$. The result is: $`a=0.145`$, $`ϵ=0.46`$, and $`\zeta ^2=0.10`$. The orbital angular momenta carried by the different quark flavors are listed in Table I. We plot the orbital angular momenta carried by quarks and antiquarks in the proton as functions of $`\kappa `$ in Fig. 1. Using the parameter set given above, $`<L_z>_{q+\overline{q}}^p\simeq 0.30`$, i.e. nearly $`60\%`$ of the proton spin comes from the orbital motion of quarks and antiquarks, while $`40\%`$ is contributed by the quark and antiquark spins. A comparison of our result with other models is given in Fig. 2. The extension to other baryons and the application to the baryon magnetic moments were discussed in . It has been shown that although the chiral-quark-model result for the magnetic moments seems to be better than the nonrelativistic quark model result, there is no significant difference between them. This is because the positive orbital contribution to the magnetic moment partly cancels the negative contribution given by the quark spin reduction. This cancellation was also discussed in . Hence the magnetic moment might not be a good observable with which to manifest the quark orbital contribution.
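A companion sketch for Eqs. (4a)-(4c), showing the $`\kappa `$ dependence of the individual terms and the $`\kappa `$-independent total $`\xi _1a\simeq 0.30`$:

```python
a, eps, f = 0.145, 0.46, 0.61   # f as evaluated in the previous sketch
xi1 = 1.0 + eps + f             # = 2.07

for kappa in (0.0, 1.0 / 3.0, 0.5):
    lz_q    = (1.0 - kappa) * xi1 * a   # quarks, Eq. (4a)
    lz_qbar = kappa * xi1 * a           # antiquarks, Eq. (4b)
    print(f"kappa = {kappa:.2f}: <Lz>_q = {lz_q:.3f}, "
          f"<Lz>_qbar = {lz_qbar:.3f}, total = {lz_q + lz_qbar:.3f}")
# The total, xi1*a ~ 0.30, together with (1/2)DSigma ~ 0.20 from the
# previous sketch, restores Jz = 1/2 as in Eq. (4d).
```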
III. SU(4) Chiral Quark Model

The effective interaction Lagrangian in the SU(4) case is $$L_I=g_{15}\overline{q}\left(\begin{array}{cccc}G_u^0& \pi ^+& \sqrt{ϵ}K^+& \sqrt{ϵ_c}\overline{D}^0\\ \pi ^{-}& G_d^0& \sqrt{ϵ}K^0& \sqrt{ϵ_c}D^{-}\\ \sqrt{ϵ}K^{-}& \sqrt{ϵ}\overline{K}^0& G_s^0& \sqrt{ϵ_c}D_s^{-}\\ \sqrt{ϵ_c}D^0& \sqrt{ϵ_c}D^+& \sqrt{ϵ_c}D_s^+& G_c^0\end{array}\right)q,$$ $`(6)`$ where $`G_{u(d)}^0`$ and $`G_s^0`$ are defined similarly to (2b), but with an additional $`ϵ_c`$ term, and $`G_c^0=\zeta ^{\prime }\frac{\sqrt{3}\eta ^0}{4}+\sqrt{ϵ_c}\frac{3\eta _c}{4}`$, with $`\eta _c=(c\overline{c})`$. In the SU(4) chiral quark model, the charm and anticharm quarks are produced nonperturbatively, and they are ‘intrinsic’. The intrinsic charm helicity $`\mathrm{\Delta }c`$ is nonzero and definitely negative. To estimate the size of $`\mathrm{\Delta }c`$ and of the other intrinsic charm contributions, we use the same parameter set ($`a=0.145`$, $`ϵ\simeq ϵ_\eta =0.46`$, $`\zeta ^2=0.10`$) as in the SU(3) case and leave $`ϵ_c`$ as a variable; the other quark flavor and helicity contents can then be expressed as functions of $`ϵ_c`$. We found that $`ϵ_c\simeq 0.1`$–0.3 describes the data satisfactorily. Our model results, the data, and theoretical predictions from other approaches are listed in Table II and Table III, respectively. Several remarks are in order: (1) our result, $`2\overline{c}/(q+\overline{q})\simeq 3.7\%`$, agrees with that given in and with the earlier number given in . But the result given in is much smaller (0.5$`\%`$) than ours. (2) Our prediction $`\mathrm{\Delta }c=-0.029\pm 0.015`$ is very close to the result $`\mathrm{\Delta }c=-0.020\pm 0.005`$ given in the instanton QCD vacuum model . However, the size of $`\mathrm{\Delta }c`$ given in is about two orders of magnitude smaller than ours. (3) We plot the ratio $`\mathrm{\Delta }c/\mathrm{\Delta }\mathrm{\Sigma }`$ as a function of $`ϵ_c`$ in Fig. 3. Our result $`\mathrm{\Delta }c/\mathrm{\Delta }\mathrm{\Sigma }\simeq -0.084\pm 0.046`$ agrees well with the prediction given in and is also not inconsistent with the result given in . To summarize, the chiral quark model with a few parameters can explain well many existing data on the properties of the nucleon: (1) the strong flavor asymmetry of the light antiquark sea, $`\overline{u}<\overline{d}`$; (2) a nonzero strange quark content, $`<\overline{s}s>\ne 0`$; (3) a small sum of the quark spins, $`<s_z>_{q+\overline{q}}\simeq 0.1`$–0.2; (4) unpolarized sea antiquarks, $`\mathrm{\Delta }\overline{q}\simeq 0`$ ($`q=u,d,\mathrm{}`$); (5) nonzero and negative polarizations of the sea quarks, $`\mathrm{\Delta }q_{sea}<0`$; (6) an orbital angular momentum of the sea quarks parallel to the proton spin; and (7) a small amount of intrinsic charm with a negative $`\mathrm{\Delta }c`$ in the proton, as predicted by the SU(4) chiral quark model. Points (1)-(4) are consistent with the data, and (5)-(7) could be tested by future experiments. Acknowledgments The author would like to thank S. Brodsky, P. K. Kabir and H. J. Weber for useful comments and suggestions. This work was supported in part by the U.S. DOE Grant No. DE-FG02-96ER-40950, the Institute of Nuclear and Particle Physics, University of Virginia, and the Commonwealth of Virginia.
# The Light Elements Be and B as Stellar Chronometers in the Early Galaxy ## 1. Introduction It has become clear, from a number of lines of recent evidence, that the early evolution of the Galaxy is best thought of as a stochastic process. Within the first 0.5-1 Gyr following the start of the star formation process, chemical enrichment does not operate within a well-mixed uniform environment, as was assumed in the simple one-zone models that were commonly used in past treatments of this problem. Rather, the very first generations of stars are expected to have their abundances of heavy elements set by local conditions, which are likely to have been dominated by the yields from individual SNeII. The seeds of this paradigm shift can be found in the observations, interpretations, and speculations of McWilliam et al. (1995), Audouze & Silk (1995), and Ryan, Norris, & Beers (1996). Models which attempt to incorporate these ideas into a predictive formalism have been put forward by Tsujimoto, Shigeyama, & Yoshii (1999; hereafter TSY), and Argast et al. (2000). Although they differ in the details of their implementation, and in a number of their assumptions, both of these models rely on the idea of enhanced star formation in the high-density shells of SN remnants, and the interaction of these shells of enriched material with a local ISM. The predictions which result are similar as well: (1) Both models are capable of reproducing the observed distributions of abundance (e.g., \[Fe/H\]) for stars in the tail of the halo metallicity distribution function (Laird et al. 1988; Ryan & Norris 1991; Beers 1999), and (2) Both models predict that the abundances of heavy elements, such as Fe, are not expected to show strong correlations with the ages of the first stars, at least up until an enrichment level on the order of \[Fe/H\] $`\simeq 2.0`$ is reached, i.e., at the time when mixing on a Galactic scale becomes possible (roughly 1 Gyr following the initiation of star formation). Suzuki, Yoshii, & Kajino (1999; hereafter SYK, see also Suzuki, Yoshii, & Kajino, this volume) have extended the SN-induced chemical evolution model of TSY to include predictions of the evolution of the light element species <sup>9</sup>Be, <sup>10</sup>B, and <sup>11</sup>B, based on secondary processes involving spallative reactions with Galactic Cosmic Rays (hereafter GCRs). Recently, Suzuki, Yoshii, & Beers (2000) have considered the extension of this model to the prediction of <sup>6</sup>Li and <sup>7</sup>Li, and demonstrate that they naturally reproduce the recently detected slope in the abundance of Li in extremely metal-poor stars noted by Ryan, Norris, & Beers (1999; see also Ryan, this volume). It is particularly encouraging that the same stochastic star-formation models which reproduce the observed trends of some (but not all) heavy elements, such as Eu, Fe, etc., also yield predictions of the light-element abundance distributions that match the available observations quite well, with a minimum of parameter tweaking. In this contribution we summarize one of the more interesting predictions of the TSY/SYK class of models, that the abundances of the light elements Be and B (hereafter, BeB) might be useful as stellar chronometers in the early Galaxy (a time when the heavy element “age-metallicity” relationships are not operating due to the lack of global mixing).
It appears possible that, with refinement of the modeling, and adequate testing, observations of BeB for metal-poor stars may provide a chronometer with “time resolution” on scales of tens of Myrs. ## 2. The Essence of the Model In this section we would like to briefly explain our model of SN-induced star formation and chemical evolution. After formation of the very FIRST generation of (Pop. III) stars, with atmospheres containing gas of primordial abundance, the most massive of these stars exhaust their core H, and explode as SNeII. Following the explosion a shock is formed, because the velocity of the ejected material exceeds the local sound speed. Behind the shock the swept-up ambient material in the ISM accumulates to form a high-density shell. This shell cools in the later stages of the lifetime of a given SN remnant (SNR) and is a suitable site for the star formation process to occur. The SNR shells are expected to be distributed randomly throughout the early and rapidly evolving halo, and the shells do not easily merge with one another because of the large available volume. As a result, each SNR keeps its identity and the stars which form there reflect the abundances of material generated by their “parent” SN. TSY present this model, and describe the input assumptions, in more quantitative detail. Figure 1 provides a cartoon illustration of the processes which we discuss herein. One of the most important results of the TSY model is that stellar metallicity, especially \[Fe/H\], cannot be employed as an age indicator at these early epochs. Thus, to consider the expected elemental abundances of the metal-poor stars which form at a given time, a distribution of stellar abundances must be constructed, rather than adopting a global average abundance under the assumption that the gas of the ISM is well mixed. SYK constructed such a model, coupled with the model of SN-induced chemical evolution, which considers the evolution of the light elements. SYK proposed that GCRs arise from the mixture of elements of individual SN ejecta and their swept-up ISM, with the acceleration being due to the shock formed in the SNR. GCRs originating from SNeII propagate faster than the material trapped in the clouds of gas making up the early halo. As a result, GCRs are expected to achieve uniformity throughout the halo faster than the general ISM, with its patchy structure. It follows that the abundances of BeB, which are mainly produced by spallation processes of CNO elements involving GCRs, are expected to exhibit a much tighter correlation with time than those of heavy elements, synthesized through stellar evolution and SN explosions. We note that alternative models for the origin of spallative nucleosynthesis products have been developed which rely on the existence of spatially correlated SNeII in superbubbles of the early ISM (see Parizot & Drury 1999, and this volume). The superbubble model predicts a locally homogeneous production of both heavy and light elements, and the variety of stellar abundances which are observed are explained by the differing diffusion processes of metal-rich (\[Fe/H\] $`\simeq 1`$) shells swept up by the bubble and mixed with a metal-poor (\[Fe/H\] $`\simeq 4`$) ISM. Tests of the “isolated” SN models vs. the superbubble models are expected to be conducted in the near future.
## 3. Abundance Predictions of the Model Figure 2 shows the predicted behavior of the abundance of \[Fe/H\], log(Be/H), and log(B/H), as a function of time, over the first 0.6 Gyrs of the evolution of the early Galaxy, according to the model of SYK. At any given time (note that “zero time” is set by the onset of star formation, not the beginning of the Universe) the range of observed BeB is substantially less than that of Fe, owing to the global nature of light element production. For example, at time 0.2 Gyrs, the expected stellar \[Fe/H\] extends over a range of 50, while that of log(BeB/H) is on the order of 3–7. During early epochs Fe is produced only by SNeII, and most of the Fe observed in stars formed in SNR shells originates from that contributed by the parent SN, because of the uniformly low Fe abundance in the ISM at that time. Thus, the expected \[Fe/H\] of stars born at that time will exhibit a rather large range, reflecting differences in Fe yields associated with the different masses of the progenitor stars. On the other hand, according to the SYK model, most of the BeB is produced by spallation reactions of CNO nuclei involving globally transported GCRs. The observed abundances of BeB in metal-poor stars which formed at this time should reflect the global nature of their production, and the correlation between time and BeB abundance is expected to be much better than that found for heavier species. In Table 1, we use the predictions from SYK, and the stellar abundance data from Boesgaard et al. (1999) for Be, to put forward “bold” estimates of stellar ages (since the onset of star formation). We note that these numbers are meant to be indicative, not definitive, predictions, as further tests of the model and its underlying assumptions still remain to be carried out. We have ordered the table according to estimated (Be) time since the onset of star formation in the early Galaxy. It is interesting to consider the implications of this strong age-abundance relationship for individual stars which have been noted in the literature as having “peculiar” BeB (or <sup>7</sup>Li for that matter) abundances, at least as compared to otherwise similar stars of the same \[Fe/H\], T<sub>eff</sub>, and log g. The “twins” G64-12 and G64-37 have been noted as one example of stars with very low metallicity, and apparently similar T<sub>eff</sub> and log g, which nevertheless exhibit rather different abundances of <sup>7</sup>Li. Could this difference be accounted for by a difference in AGE of these stars? Answering this question is of great importance, and hopefully will be resolved in the near future. ## 4. Can we Test This Model? Yes, but it will take some hard work. Obviously, if there exists an independent method with which to verify the relative age determinations predicted by this model, that would be ideal. Fortunately, there have been numerous refinements in models of stellar atmospheres, and their interpretation, which may make this feasible (see Fuhrmann 2000). In order to apply the methods described by Fuhrmann, one requires high-resolution, high-S/N spectroscopy of individual stars. It is imperative that the present-generation 8m telescopes (VLT, SUBARU, GEMINI, HET) obtain these data, so that this, and other related questions, may be addressed with the best possible information. Another feasible test would be to compare the abundances of BeB with \[Fe/H\], and other heavy elements, for a large sample of stars with \[Fe/H\] $`<2.0`$.
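Purely to illustrate the mechanics of such a chronometer, the snippet below inverts a monotonic log(Be/H)-age relation by linear interpolation. We stress that the grid values are made-up placeholders, not the SYK model curve and not the Boesgaard et al. (1999) data:

```python
import bisect

# Hypothetical, strictly illustrative grid: log(Be/H) vs. time since the
# onset of star formation. Replace with the actual SYK model curve.
log_be  = [-13.5, -13.0, -12.5, -12.0]   # placeholder abundances (dex)
age_gyr = [0.05, 0.15, 0.30, 0.55]       # placeholder times (Gyr)

def age_from_be(x):
    """Linearly interpolate the age at an observed log(Be/H) = x."""
    i = bisect.bisect_left(log_be, x)
    if i == 0 or i == len(log_be):
        raise ValueError("outside model grid")
    t = (x - log_be[i - 1]) / (log_be[i] - log_be[i - 1])
    return age_gyr[i - 1] + t * (age_gyr[i] - age_gyr[i - 1])

print(f"{age_from_be(-13.2):.2f} Gyr")   # example star: 0.11 Gyr
```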
If the superbubble model is the correct interpretation, with an implied locally homogeneous production of the light elements, then one might expect to find correlations between the abundances of various heavy element species (including those other than Fe and O) and BeB. Simultaneous observations of light and heavy elements for stars of extremely low abundance are planned with all the major 8m telescopes, so it should not be too long before a sufficiently large sample to carry out this test is obtained. One can also seek, as we have, confirmatory evidence in the predicted behavior of <sup>7</sup>Li vs. \[Fe/H\] (Suzuki et al. 2000). ## 5. Other Uses for This Model If the model we have considered here can be shown to be correct, there are several new avenues of investigation which are immediately opened. For example, if one were able to “age rank” stars on the basis of their BeB abundances, one could refine alternative production mechanisms for the light element Li which are not driven by GCR spallation, including the SN $`\nu `$-process and/or production via a giant-branch Cameron-Fowler mechanism (see Castilho et al., this volume), in stellar flares, etc. Furthermore, since BeB nuclei are more difficult to burn than Li nuclei, one could imagine a powerful test for the extent to which depletion of Li has operated in metal-poor dwarfs, with important implications for the Li constraint on Big Bang Nucleosynthesis (BBN). Realistic modeling of BeB evolution at early epochs may also help distinguish between predictions of standard BBN, non-standard BBN, and the accretion hypothesis (see Yoshii, Mathews, & Kajino 1995). An age ranking of metal-poor stars based on their BeB abundances, in combination with measurements of their alpha, iron-peak, and neutron-capture elements, would open the door for an unraveling of the mass spectrum of the progenitors of first-generation SNeII, and allow one to obtain direct constraints on their elemental yields as a function of mass, a key component of models of early nucleosynthesis. ### Acknowledgments. TCB expresses gratitude to the IAU for support which enabled his attendance at this meeting, and acknowledges partial support from the National Science Foundation under grant AST 95-29454. TCB also wishes to express his congratulations to the LOC and SOC for a well-run, scientifically stimulating, and marvelously located meeting. YY acknowledges a Grant-in-Aid from the Center of Excellence (COE), 10CE2002, awarded by the Ministry of Education, Science, and Culture, Japan. ## References Argast, D., Samland, M., Gerhard, O.E., & Thielemann, F.-K. 2000, A&A, in press Audouze, J., & Silk, J. 1995, ApJ, 451, L49 Beers, T.C. 1999, in Third Stromlo Symposium: The Galactic Halo, eds. B. Gibson, T. Axelrod, & M. Putman, (ASP, San Francisco), 165, p. 206 Boesgaard, A.M., Deliyannis, C.P., King, J.R., Ryan, S.G., Vogt, S.S., & Beers, T.C. 1999, AJ, 117, 1549 Fuhrmann, K. 2000, in The First Stars, Proceedings of the Second MPA/ESO Workshop, eds. A. Weiss, T. Abel, & V. Hill (Springer, Heidelberg), in press Laird, J.B., Carney, B.W., Rupen, M.P., & Latham, D.W. 1988, AJ, 96, 1908 McWilliam, A., Preston, G.W., Sneden, C., & Searle, L. 1995, AJ, 109, 2757 Ryan, S.G., Norris, J.E., & Beers, T.C. 1996, ApJ, 471, 254 Ryan, S.G., Norris, J.E., & Beers, T.C. 1999, ApJ, 523, 654 Ryan, S.G., & Norris, J.E. 1991, AJ, 101, 1865 Suzuki, T.K., Yoshii, Y., & Kajino, T. 1999, ApJ, 522, L125 (SYK) Suzuki, T.K., Yoshii, Y., & Beers, T.C.
2000, ApJ, submitted Tsujimoto, T., Shigeyama, T., & Yoshii, Y. 1999, ApJ, 519, L63 (TSY) Yoshii, Y., Mathews, G.J., & Kajino, T. 1995, ApJ, 447, 184
# 5 cm OH MASERS AS DIAGNOSTICS OF PHYSICAL CONDITIONS IN STAR-FORMING REGIONS

KONSTANTINOS G. PAVLAKIS<sup>1,2,3</sup> & NIKOLAOS D. KYLAFIS<sup>2,3</sup> <sup>1</sup>University of Leeds, Department of Physics and Astronomy, Woodhouse Lane, Leeds LS2 9JT <sup>2</sup>University of Crete, Physics Department, 714 09 Heraklion, Crete, Greece <sup>3</sup>Foundation for Research and Technology-Hellas, P.O. Box 1527, 711 10 Heraklion, Crete, Greece pavlakis@ast.leeds.ac.uk, kylafis@physics.uch.gr

ABSTRACT We demonstrate that the observed characteristics of the 5 cm OH masers in star-forming regions can be explained with the same model and the same parameters as the 18 cm and the 6 cm OH masers. In our already published study of the 18 cm and the 6 cm OH masers in star-forming regions we had examined the pumping of the 5 cm masers, but did not report the results we had found because of some missing collision rate coefficients, which in principle could be important. The recently published observations of the 5 cm masers of OH encourage us to report our old calculations along with some new ones that we have performed. These calculations, in agreement with the observations, reveal the main lines at 5 cm as strong masers, the 6049 MHz satellite line as a weak maser, and the 6017 MHz satellite line as never inverted for reasonable values of the parameters. Subject headings: ISM: molecules — masers — molecular processes — radiative transfer — stars: formation 1. INTRODUCTION The OH molecules in star-forming regions are rich in maser emission. They exhibit: a) Four maser lines (at 18 cm) in the ground state $`{}^{2}\mathrm{\Pi }_{3/2}`$, $`J=3/2`$ (e.g., Gaume & Mutel 1987; Cohen, Baart, & Jonas 1988; for reviews see Reid & Moran 1981; Cohen 1989; Elitzur 1992). b) Three maser lines (at 5 cm) in the first excited state $`{}^{2}\mathrm{\Pi }_{3/2}`$, $`J=5/2`$ (Knowles, Caswell, & Goss 1976; Guilloteau et al. 1984; Smits 1994; Caswell & Vaile 1995; Baudry et al. 1997; Desmurs et al. 1998; Desmurs & Baudry 1998). c) Three maser lines (at 6 cm) in the next level $`{}^{2}\mathrm{\Pi }_{1/2}`$, $`J=1/2`$ (Gardner & Martin Pintado 1983; Gardner & Whiteoak 1983; Palmer, Gardner, & Whiteoak 1984; Gardner, Whiteoak, & Palmer 1987; Baudry et al. 1988; Baudry & Diamond 1991; Cohen, Masheder, & Walker 1991). d) One maser line (at 2 cm) in the level $`{}^{2}\mathrm{\Pi }_{3/2}`$, $`J=7/2`$ (Turner, Palmer, & Zuckerman 1970; Baudry et al. 1981; Baudry & Diamond 1998). In our recently reported calculations (Pavlakis & Kylafis 1996a, hereafter Paper I; Pavlakis & Kylafis 1996b, hereafter Paper II) we explained theoretically the observed characteristics of the 18 cm and the 6 cm maser lines of OH. Naturally, we had also computed the maser emission of the 5 cm lines of OH, but we decided to “not show or discuss the maser lines in the excited state $`{}^{2}\mathrm{\Pi }_{3/2}`$, $`J=5/2`$ because this state is directly connected with the state $`{}^{2}\mathrm{\Pi }_{1/2}`$, $`J=7/2`$, which is not included in our calculations. We urge quantum chemists to compute collision rate coefficients for as many transitions as possible” (Paper I). Soon after our calculations were published, Baudry et al. (1997) reported an extensive study of the 5 cm maser lines of OH in star-forming regions. To our surprise, our already performed calculations explain the observed characteristics.
This probably means that, for the temperatures between 100 and 200 K that are thought appropriate for OH maser regions, the missing collision rate coefficients are not important for the 5 cm masers. Thus, we are encouraged to publish our results. Of course, when the missing collision rate coefficients are computed, it will be reassuring to show that they are indeed not important for the 5 cm masers. In §2 we discuss briefly the model that we used, in §3 we present the results of the calculations, in §4 we compare our calculations with the observations, and in §5 we present our conclusions. 2. MODEL Our model is the same as that in Papers I and II. Not only this, but the values of the parameters are exactly the same. Thus, no parameters are adjusted for any qualitative or quantitative explanation of the observations. The maser regions are modeled as cylinders of length $`l=5\times 10^{15}`$ cm and diameter $`d=10^{15}`$ cm. The characteristic bulk velocity in the maser region is denoted by $`V`$, and the assumed velocity field there is given by $`\stackrel{}{v}=[V/(d/2)]\rho \widehat{\rho }+(V/l)z\widehat{z}`$, where ($`\rho ,z`$) are the cylindrical coordinates, and $`\widehat{\rho }`$ and $`\widehat{z}`$ are the corresponding unit vectors. The fractional abundances of OH and ortho-H<sub>2</sub> with respect to the density of H<sub>2</sub> molecules in the maser region are denoted by $`f_{\mathrm{OH}}`$ and $`f_{\mathrm{ortho}\mathrm{H}_2}`$, respectively, while the density of H<sub>2</sub> molecules and the kinetic temperature there are denoted by $`n_{\mathrm{H}_2}`$ and $`T_{\mathrm{H}_2}`$, respectively. Finally, the brightness temperature of the maser lines is denoted by $`T_{br}`$, the dilution factor of the far-infrared radiation field by $`W`$ (see eq. of Paper II), the dust optical depth parameter by $`p`$ (see eq. of Paper II), and the dust temperature by $`T_d`$. In addition to the exploration of the parameters used in Papers I and II, we also explore here the effects of the fractional abundance of OH. In the figures of this paper the key is as follows: the brightness temperature of the 6049 MHz transition is shown as a solid line, that of the 6035 MHz transition by a dotted line, that of the 6017 MHz by a dashed line, and that of the 6031 MHz by a dot-dashed line.
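For concreteness, the assumed geometry and velocity law of the model can be written down directly (Python; the numbers are exactly those quoted above):

```python
L_CYL = 5e15   # cm, cylinder length l
D_CYL = 1e15   # cm, cylinder diameter d

def velocity(rho, z, V):
    """Velocity field v = [V/(d/2)]*rho*rho_hat + (V/l)*z*z_hat.
    Returns (v_rho, v_z) in km/s for cylindrical (rho, z) in cm and a
    characteristic bulk velocity V in km/s."""
    return (V / (D_CYL / 2.0) * rho, V / L_CYL * z)

# Both components reach the characteristic value V at the cylinder
# surface (rho = d/2) and at its end (z = l):
print(velocity(D_CYL / 2.0, L_CYL, 0.6))   # -> (0.6, 0.6) for V = 0.6 km/s
```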
3. CALCULATIONS AND PRESENTATION OF RESULTS

3.1. Collisions Only

For kinetic temperatures $`100<T_{\mathrm{H}_2}<200`$ K, which are thought to prevail in H II/OH maser regions, there are several locally (i.e., thermally) overlapping lines of OH. Nevertheless, it is interesting to look at calculations that take into account collisions only, in order to see what their effects are on the pumping of OH molecules (see also Paper I). Interestingly, we have found that for temperatures $`T_{\mathrm{H}_2}<150`$ K, which are highly likely for H II/OH regions, the effects of locally overlapping lines on the pumping of the 5 cm maser lines are insignificant, and their inclusion changes the results of our calculations by less than a factor of two. Thus, if there are no large velocity gradients in the maser regions and the external FIR field is weak, collisions alone determine the 5 cm maser emission of OH at $`T_{\mathrm{H}_2}<150`$ K.

We have found that collisions alone are unable to invert the main lines at 5 cm. For $`f_{\mathrm{ortho-H}_2}=1`$ and $`f_{\mathrm{OH}}=10^{-5}`$ only the 6049 MHz satellite line is masing, for hydrogen densities $`2\times 10^5<n_{\mathrm{H}_2}<7\times 10^6`$ cm<sup>-3</sup>. The peak of the brightness temperature occurs at $`n_{\mathrm{H}_2}\sim 10^6`$ cm<sup>-3</sup>, and it is $`T_{br}\sim 10^9`$ K for kinetic temperatures above $`T_{\mathrm{H}_2}=100`$ K (see Figure 1 below and the discussion in the next subsection). As $`f_{\mathrm{ortho-H}_2}`$ decreases, the brightness temperature of the 6049 MHz line decreases faster than exponentially. For $`f_{\mathrm{ortho-H}_2}=0.5`$ the peak of the brightness temperature is $`T_{br}=6\times 10^6`$ K, and it is at the limits of detectability. For values of $`f_{\mathrm{ortho-H}_2}`$ below 0.2 the inversion disappears and no 5 cm line shows inversion.

3.2. Collisions and Local Line Overlap

The effects of collisions and local line overlap cannot be separated. It is simply good fortune that for temperatures up to about 150 K the effects of collisions dominate those of local line overlap. Assuming that large velocity gradients and a significant FIR radiation field are absent in the maser regions (see below for their effects), we have computed the 5 cm OH maser emission as a function of $`n_{\mathrm{H}_2}`$ taking into account both collisions and local line overlap. For $`T_{\mathrm{H}_2}=150`$ K, $`f_{\mathrm{ortho-H}_2}=1`$, $`f_{\mathrm{OH}}=10^{-5}`$ and $`V=0.6`$ km s<sup>-1</sup> (for which we do not have any non-locally overlapping lines), we show in Figure 1 the brightness temperature $`T_{br}`$ of the 6049 MHz line (the only inverted OH line at 5 cm) as a function of $`n_{\mathrm{H}_2}`$. This is a quite strong maser line, with a peak brightness temperature $`T_{br}=6\times 10^8`$ K.

As the temperature $`T_{\mathrm{H}_2}`$ increases further, more and more pairs of lines overlap locally and their degree of overlap also increases. For temperatures up to 170 K, local overlap causes only quantitative (not qualitative) changes in the results. The peak brightness temperature decreases with increasing kinetic temperature, and the range of densities over which inversion occurs also decreases. For $`T_{\mathrm{H}_2}=170`$ K, the peak $`T_{br}`$ of the 6049 MHz maser line falls to $`10^7`$ K and inversion occurs for $`10^5<n_{\mathrm{H}_2}<10^6`$ cm<sup>-3</sup>. Above $`T_{\mathrm{H}_2}=170`$ K, the effects of local line overlap introduce qualitative changes. The 6049 MHz line continues to weaken, while the 6035 MHz line now appears. As the temperature approaches 200 K, there are fifteen pairs and one triple of locally overlapping lines. Figure 2 shows $`T_{br}`$ as a function of $`n_{\mathrm{H}_2}`$ for $`T_{\mathrm{H}_2}=200`$ K, $`f_{\mathrm{ortho-H}_2}=1`$, $`f_{\mathrm{OH}}=10^{-5}`$ and $`V=0.6`$ km s<sup>-1</sup>. At this relatively high temperature, the peak $`T_{br}`$ of the 6049 MHz maser line is only $`10^6`$ K, while for the 6035 MHz main line, $`T_{br}\sim 10^9`$ K at $`n_{\mathrm{H}_2}\sim 10^7`$ cm<sup>-3</sup>.

When $`f_{\mathrm{ortho-H}_2}=0`$, no OH 5 cm maser line appears for kinetic temperatures lower than 170 K. For $`T_{\mathrm{H}_2}=200`$ K, $`f_{\mathrm{ortho-H}_2}=0`$, $`f_{\mathrm{OH}}=10^{-5}`$ and $`V=0.6`$ km s<sup>-1</sup> the results are shown in Figure 3. The peak $`T_{br}`$ of the 6035 MHz line is one order of magnitude stronger than that for $`f_{\mathrm{ortho-H}_2}=1`$, but the 6049 MHz line is absent. Thus, as with the 1720 MHz maser line (see Paper I), the 6049 MHz maser line could be a diagnostic (but see below) of the abundance of ortho-H<sub>2</sub> in maser regions.
The fractional abundance of OH in star-forming regions is probably not independent of density. To explore this possibility we have computed models with $`f_{\mathrm{OH}}=10^{-6}`$ (the results are not shown in a Figure). No qualitative changes occur in comparison with the results for $`f_{\mathrm{OH}}=10^{-5}`$ (see Figures 2 and 3). The 6035 MHz line is of the same intensity as in Figures 2 and 3, but it is inverted at densities a factor of three higher. The 6049 MHz line is reduced in intensity to the point of being unobservable ($`T_{br}<10^3`$ K). For this line also, the inversion occurs at densities a factor of three higher than those in Figure 2.

3.3. Collisions, Local and Non-local Line Overlap

In addition to collisions and local line overlap, we now take into account the effects of non-local line overlap (for details see Paper II). In order to avoid showing a multitude of models, we take the representative value of 150 K for $`T_{\mathrm{H}_2}`$. For $`f_{\mathrm{ortho-H}_2}=1`$ and $`f_{\mathrm{OH}}=10^{-5}`$, we start with a characteristic velocity $`V=1`$ km s<sup>-1</sup>, for which the effects of non-local line overlap are already important. Figure 4 shows the brightness temperature of the masing lines as a function of $`n_{\mathrm{H}_2}`$. Comparing Figure 4 with Figure 1 we see the dramatic effects that non-local line overlap has on the maser transitions. For relatively high hydrogen densities, two 5 cm lines are inverted: the 6035 MHz main line, with a high brightness temperature, and the 6017 MHz satellite line.

Increasing the characteristic velocity to $`V=2`$ km s<sup>-1</sup>, but keeping the rest of the parameters the same, results in a significant reduction of the 6049 MHz line (Figure 5). For $`n_{\mathrm{H}_2}\sim 10^8`$ cm<sup>-3</sup>, the 6031 MHz main line and the 6017 MHz satellite one are inverted with high brightness temperatures. At even higher densities the 6035 MHz main line is inverted, the 6017 MHz line remains strongly inverted, while the 6031 MHz one is suppressed. A further increase of the velocity to $`V=3`$ km s<sup>-1</sup> results in the complete disappearance of the 6049 MHz line (Figure 6).

As in the previous subsection, a significant reduction of the abundance of ortho-H<sub>2</sub> results in the disappearance of the 6049 MHz line as a maser line. This is true for $`V=1`$, 2 or 3 km s<sup>-1</sup>. As a characteristic example we show the case of $`f_{\mathrm{ortho-H}_2}=0`$, $`f_{\mathrm{OH}}=10^{-5}`$ and $`V=2`$ km s<sup>-1</sup> (Figure 7). Below $`n_{\mathrm{H}_2}\sim 3\times 10^7`$ cm<sup>-3</sup> no maser line appears. Finally, a reduction of $`f_{\mathrm{OH}}`$ by an order of magnitude has the general result of significantly reducing the brightness temperature of the 6049 MHz line ($`T_{br}<10^4`$ K), as was also seen in the previous subsection. Furthermore, our calculations have shown that $`f_{\mathrm{OH}}=10^{-6}`$ results in destroying the inversion of all 5 cm maser lines at relatively high densities ($`n_{\mathrm{H}_2}>4\times 10^7`$ cm<sup>-3</sup>).

3.4. Effects of a FIR Radiation Field

From the calculations presented so far, it is evident that the 5 cm main lines of OH are never inverted together for densities thought to prevail in star-forming regions (i.e., $`n_{\mathrm{H}_2}<\mathrm{few}\times 10^7`$ cm<sup>-3</sup>) when a far infrared (FIR) radiation field is absent.
In this subsection we will demonstrate that a FIR radiation field is necessary to reproduce the observed features of the 5 cm lines and their correlations with the ground state $`^2\Pi_{3/2}`$, $`J=3/2`$ and the excited state $`^2\Pi_{1/2}`$, $`J=1/2`$ OH masers. For $`T_{\mathrm{H}_2}=150`$ K, $`f_{\mathrm{OH}}=10^{-5}`$, $`f_{\mathrm{ortho-H}_2}=1`$, $`V=1`$ km s<sup>-1</sup> and dilution factor $`W=0.01`$ (see Paper II), the main lines at 5 cm are inverted at low densities ($`n_{\mathrm{H}_2}<\mathrm{few}\times 10^7`$ cm<sup>-3</sup>) when $`T_d>T_{\mathrm{H}_2}`$. When $`T_d<T_{\mathrm{H}_2}`$ (see Figures 8a and 8b) the results are similar (i.e., differences of less than a factor of 2 in $`T_{br}`$) to those of Figure 4, where there was no external FIR radiation field. However, when $`T_d>T_{\mathrm{H}_2}`$ (see Figures 8c and 8d), the main lines at 6035 and 6031 MHz make their appearance as masers.

Increasing the strength of the FIR radiation field by taking $`W=0.1`$ has dramatic effects on the 5 cm lines of OH. Figures 9a - 9d show $`T_{br}`$ of the maser lines as a function of $`n_{\mathrm{H}_2}`$ for $`T_{\mathrm{H}_2}=150`$ K, $`f_{\mathrm{OH}}=10^{-5}`$, $`f_{\mathrm{ortho-H}_2}=1`$, $`V=1`$ km s<sup>-1</sup> and dilution factor $`W=0.1`$. Both 5 cm main lines are masing, with the 6035 MHz line stronger than the 6031 MHz one in the range of densities where both lines are inverted. The satellite line at 6049 MHz is also masing, but the other satellite line at 6017 MHz is never inverted for $`n_{\mathrm{H}_2}<\mathrm{few}\times 10^7`$ cm<sup>-3</sup>. Remarkably, the abundance of ortho-H<sub>2</sub> causes no changes as to which 5 cm lines of OH are masers. Figures 10a - 10d are made with $`f_{\mathrm{ortho-H}_2}=0`$ and the rest of the parameters the same as in Figures 9a - 9d, respectively.

What has dramatic effects on the pumping of the 5 cm lines is the characteristic velocity. For $`V=2`$ km s<sup>-1</sup> and $`V=3`$ km s<sup>-1</sup>, with the rest of the parameters the same as in Figures 9a - 9d, the results are shown in Figures 11a - 11d and 12a - 12d, respectively. As is clear from these Figures, an increase of $`V`$ (i.e., an increase of non-local overlap) causes suppression of the 5 cm main lines. The 6035 MHz line is not inverted at all. The 6031 MHz line either is not inverted (compare Figs. 9a and 12a) or is much weaker (compare Figs. 9d and 12d), depending on the strength of the FIR field. The reader should notice the competition between the FIR field, which inverts the main lines (as we go from a to d in Figs. 9, 11, and 12 the FIR field increases), and the non-local overlap, which suppresses the inversion (as we go from Fig. 9 to Fig. 11 and then to Fig. 12 the non-local overlap increases).

For completeness, we take one of our cases that agrees qualitatively well with the observational data, namely the case presented in Figure 9c, and explore the effects of the abundance of OH. For $`f_{\mathrm{OH}}=10^{-4}`$ and $`f_{\mathrm{OH}}=10^{-6}`$ the results are shown in Figures 13 and 14, respectively. The abundance of OH introduces only quantitative changes. An enhanced abundance of OH increases the brightness temperature of the 6049 MHz maser line, while a reduced abundance has the opposite effect. The main lines at 6035 and 6031 MHz remain essentially unaffected.
Since the effects of the abundance of OH on the ground state $`^2\Pi_{3/2}`$, $`J=3/2`$ and the excited state $`^2\Pi_{1/2}`$, $`J=1/2`$ were not investigated in Paper II, we show in Figure 15 the case of $`f_{\mathrm{OH}}=10^{-6}`$ with all the other parameters the same as in Figure 6c of Paper II.

4. COMPARISON WITH OBSERVATIONS

Since the original discovery of emission from the $`^2\Pi_{3/2}`$, $`J=5/2`$ state (Yen et al. 1969), many surveys have been made for the detection of 5 cm maser emission toward a variety of sources (Knowles et al. 1976; Guilloteau et al. 1984; Smits 1994). Caswell and Vaile (1995) surveyed for 6035 MHz masers in 208 OH sources with peak 1665 MHz flux density greater than 0.8 Jy. Only 35 masers at 6035 MHz were detected, a result that agrees well with our calculations. Since these observations were made with a single dish, and the authors have not proven that any of the 1665-6035 MHz “pairs” come from the same region, these observations must be interpreted solely as a tendency of the 1665 MHz line to be inverted more easily than the 6035 MHz one. Our results qualitatively agree with this. The 1665 MHz line (see Paper II) is inverted in a much broader range of densities, velocity fields and strengths of a FIR field than the 6035 MHz line.

For the rest of this section, let us restrict ourselves to our results in the presence of a FIR field, with $`V<1.5`$ km s<sup>-1</sup> and $`n_{\mathrm{H}_2}<10^7`$ cm<sup>-3</sup>. The 6035 MHz line is weaker than the 1665 MHz one and is inverted in a range of densities which is a subset of the range of densities over which the 1665 MHz line is inverted. As the FIR field gets stronger, this subregion becomes broader and the 6035 MHz line tends to be inverted in the same range of densities as the 1665 MHz line. By taking also into account our result that the stronger the FIR radiation field is, the stronger the 1665 and 6035 MHz masers are, and assuming that both lines come from the same region, our models are in qualitative agreement with the observational result of Caswell and Vaile (1995) that the greater the peak 1665 MHz maser intensity, the greater the detection rate of 6035 MHz masers.

An extensive search for all four maser lines at 5 cm has been made by Baudry et al. (1997) toward 265 strong FIR sources, and the general observed characteristics of these 5 cm masers can be explained by our calculations. Their observations show (see also Desmurs et al. 1998) that the main-line masers at 6035 MHz, in the $`^2\Pi_{3/2}`$, $`J=5/2`$ state of OH, are generally stronger and more common than those at 6031 MHz in H II/OH regions. Nevertheless, the 6031 MHz line is frequently observed to be masing. Strong 5 cm satellite-line masers are not observed in the $`J=5/2`$ state of OH. The 6017 MHz line is often found in absorption, while the other satellite line at 6049 MHz is observed in weak emission which could correspond to low-gain masers. Our theoretical calculations are in good qualitative agreement with these observations. The 6017 MHz line is never inverted in our calculations, and the 6049 MHz line is weak in a wide range of parameters thought to prevail in star-forming regions. Our calculations show that a combination of a FIR radiation field, collisions and line overlap is necessary to reproduce the general features of 6 GHz H II/OH masers.
Nevertheless, simultaneous or nearly simultaneous VLBI observations at 1.6 GHz, 4.7 GHz and 6 GHz are necessary to restrict the range of parameters for the inversion of these masers, and a search for a correlation between 5 cm maser strength and FIR radiation field strength would be important to establish whether FIR radiation is indeed essential for the inversion of these masers.

5. SUMMARY AND CONCLUSIONS

We have performed a detailed, systematic study of OH maser pumping in order to invert the problem and infer, from the OH maser observations, the physical conditions in H II/OH regions. This was partially accomplished in Papers I and II. With the present study of the 5 cm masers of OH, the predictions of our model are:

1) When strong 5 cm maser main lines are seen, the FIR radiation field must be strong there, i.e., a high value of $`W`$ or $`p`$ or $`T_d`$ or a combination of them.

2) Inversion of both main lines at 5 cm requires relatively small velocity gradients. For $`V<1`$ km s<sup>-1</sup> and a FIR radiation field present, these lines are always seen. If these lines are seen together in the same spatial region, the 1665 MHz OH ground-state main-line maser will also be observed in the same region, while there is a high probability that the other ground-state main-line maser, at 1667 MHz, will be observed too (see Paper II).

3) When the 6031 MHz maser line is observed in a region where there is no detection of a 6035 MHz maser, the 1665 MHz ground-state line is inverted in the same spatial region. This situation is most likely indicative of relatively large velocity gradients ($`V>1`$ km s<sup>-1</sup>).

4) When the 6049 MHz maser line is seen as a strong line (say, as strong as the 5 cm main lines are typically seen), then $`f_{\mathrm{OH}}>10^{-5}`$.

5) We predict that maser spots showing very strong 18 cm main lines should exhibit 5 cm maser main lines also. This may have already been seen (Caswell & Vaile 1995), but VLBI observations are needed to confirm or reject our prediction.

6) We also predict that 18 cm maser main lines with $`V>2`$ km s<sup>-1</sup> will not be accompanied by 5 cm maser main lines.

This research has been supported in part by a grant from the General Secretariat of Research and Technology of Greece and a Training and Mobility of Researchers Fellowship of the European Union under contract No ERBFMBICT972277.

REFERENCES

Baudry, A., & Diamond, P. J. 1991, A&A, 247, 551
Baudry, A., & Diamond, P. J. 1998, A&A, 331, 697
Baudry, A., Desmurs, J. F., Wilson, T. L., & Cohen, R. J. 1997, A&A, 325, 255
Baudry, A., Diamond, P. J., Booth, R. S., Graham, D., & Walmsley, C. M. 1988, A&A, 201, 105
Baudry, A., Walmsley, C. M., Winnberg, A., & Wilson, T. L. 1981, A&A, 102, 287
Caswell, J. L., & Vaile, R. A. 1995, MNRAS, 273, 328
Cohen, R. J. 1989, Rep. Prog. Phys., 52, 881
Cohen, R. J., Baart, E. E., & Jonas, J. L. 1988, MNRAS, 231, 205
Cohen, R. J., Masheder, M., & Walker, R. N. F. 1991, MNRAS, 250, 611
Desmurs, J. F., & Baudry, A. 1998, A&A, in press
Desmurs, J. F., Baudry, A., Wilson, T. L., Cohen, R. J., & Tofani, G. 1998, A&A, 334, 1085
Elitzur, M. 1992, ARA&A, 30, 75
Gardner, F. F., & Martin-Pintado, J. 1983, A&A, 121, 265
Gardner, F. F., & Whiteoak, J. B. 1983, MNRAS, 205, 297
Gardner, F. F., Whiteoak, J. B., & Palmer, P. 1987, MNRAS, 225, 469
Gaume, R. A., & Mutel, R. L. 1987, ApJS, 65, 193
Guilloteau, S., Baudry, A., Walmsley, C. M., Wilson, T. L., & Winnberg, A. 1984, A&A, 131, 45
Knowles, S. H., Caswell, J. L., & Goss, W. M. 1976, MNRAS, 175, 537
Palmer, P., Gardner, F. F., & Whiteoak, J. B. 1984, MNRAS, 211, 41P
Pavlakis, K. G., & Kylafis, N. D. 1996a, ApJ, 467, 300 (Paper I)
Pavlakis, K. G., & Kylafis, N. D. 1996b, ApJ, 467, 309 (Paper II)
Reid, M. J., & Moran, J. M. 1981, ARA&A, 19, 231
Smits, D. P. 1994, MNRAS, 269, L11
Turner, B. E., Palmer, P., & Zuckerman, B. 1970, ApJ, 160, L125
Yen, J. L., Zuckerman, B., Palmer, P., & Penfield, H. 1969, ApJ, 156, L27

FIGURE CAPTIONS

FIG. 1.— Brightness temperature $`T_{br}`$ as a function of density $`n_{\mathrm{H}_2}`$ for $`T_{\mathrm{H}_2}=150`$ K, $`f_{\mathrm{ortho-H}_2}=1`$, $`f_{\mathrm{OH}}=10^{-5}`$ and $`V=0.6`$ km s<sup>-1</sup>. The values of the other parameters are given in §2.
FIG. 2.— Same as in Figure 1, but for $`T_{\mathrm{H}_2}=200`$ K.
FIG. 3.— Same as in Figure 2, but for $`f_{\mathrm{ortho-H}_2}=0`$.
FIG. 4.— Brightness temperature $`T_{br}`$ as a function of density $`n_{\mathrm{H}_2}`$ for $`T_{\mathrm{H}_2}=150`$ K, $`f_{\mathrm{ortho-H}_2}=1`$, $`f_{\mathrm{OH}}=10^{-5}`$ and $`V=1`$ km s<sup>-1</sup>. The values of the other parameters are given in §2.
FIG. 5.— Same as in Figure 4, but for $`V=2`$ km s<sup>-1</sup>.
FIG. 6.— Same as in Figure 4, but for $`V=3`$ km s<sup>-1</sup>.
FIG. 7.— Same as in Figure 5, but for $`f_{\mathrm{ortho-H}_2}=0`$.
FIG. 8a.— Brightness temperature $`T_{br}`$ as a function of density $`n_{\mathrm{H}_2}`$ for $`T_{\mathrm{H}_2}=150`$ K, $`f_{\mathrm{ortho-H}_2}=1`$, $`f_{\mathrm{OH}}=10^{-5}`$, $`V=1`$ km s<sup>-1</sup>, $`T_d=100`$ K, $`p=1`$ and $`W=0.01`$. The values of the other parameters are given in §2.
FIG. 8b.— Same as in Figure 8a, but for $`p=2`$.
FIG. 8c.— Same as in Figure 8a, but for $`T_d=200`$ K.
FIG. 8d.— Same as in Figure 8b, but for $`T_d=200`$ K.
FIG. 9a.— Same as in Figure 8a, but for $`W=0.1`$.
FIG. 9b.— Same as in Figure 8b, but for $`W=0.1`$.
FIG. 9c.— Same as in Figure 8c, but for $`W=0.1`$.
FIG. 9d.— Same as in Figure 8d, but for $`W=0.1`$.
FIG. 10a.— Same as in Figure 9a, but for $`f_{\mathrm{ortho-H}_2}=0`$.
FIG. 10b.— Same as in Figure 9b, but for $`f_{\mathrm{ortho-H}_2}=0`$.
FIG. 10c.— Same as in Figure 9c, but for $`f_{\mathrm{ortho-H}_2}=0`$.
FIG. 10d.— Same as in Figure 9d, but for $`f_{\mathrm{ortho-H}_2}=0`$.
FIG. 11a.— Same as in Figure 9a, but for $`V=2`$ km s<sup>-1</sup>.
FIG. 11b.— Same as in Figure 9b, but for $`V=2`$ km s<sup>-1</sup>.
FIG. 11c.— Same as in Figure 9c, but for $`V=2`$ km s<sup>-1</sup>.
FIG. 11d.— Same as in Figure 9d, but for $`V=2`$ km s<sup>-1</sup>.
FIG. 12a.— Same as in Figure 9a, but for $`V=3`$ km s<sup>-1</sup>.
FIG. 12b.— Same as in Figure 9b, but for $`V=3`$ km s<sup>-1</sup>.
FIG. 12c.— Same as in Figure 9c, but for $`V=3`$ km s<sup>-1</sup>.
FIG. 12d.— Same as in Figure 9d, but for $`V=3`$ km s<sup>-1</sup>.
FIG. 13.— Same as Figure 9c, but for $`f_{\mathrm{OH}}=10^{-4}`$.
FIG. 14.— Same as Figure 9c, but for $`f_{\mathrm{OH}}=10^{-6}`$.
FIG. 15.— Same as Figure 6c of Paper II, but for $`f_{\mathrm{OH}}=10^{-6}`$.
# Time inhomogeneous Fokker-Planck equation for wave distributions in the abelian sandpile model

## Abstract

The time and size distributions of the waves of topplings in the Abelian sandpile model are expressed as the first-arrival-at-the-origin distribution for a scale invariant, time inhomogeneous Fokker-Planck equation. Assuming a linear conjecture for the time inhomogeneity exponent as a function of the loop erased random walk (LERW) critical exponent, suggested by numerical results, this approach allows one to estimate the lower critical dimension of the model and the exact value of the critical exponent for LERW in three dimensions. The avalanche size distribution in two dimensions is found to be the difference between two close power laws.

05.10.Gg, 04.40.-a

The abelian sandpile model (ASM) was introduced by Bak, Tang, and Wiesenfeld as a minimal description for natural phenomena characterized by intermittent time evolution through events, called avalanches, which have scale invariant properties. The model is defined on a hypercubic lattice whose sites can accommodate a variable, positive number of grains. With a uniform distribution a site is chosen and its number of grains is increased by one. If its total number of grains exceeds a given critical threshold $`z_{max}`$, the nearest-neighbor sites increase their number of grains by one and the initial site loses the corresponding number of grains. Then the newly updated sites are checked for stability, until there are no more unstable states. This event is an avalanche, and the analytical properties of its distribution are still an open question (a toy implementation of these toppling rules is sketched at the end of this introduction). Analytical approaches rely on the algebraic properties of the toppling rules and on the decomposition of avalanches into simpler events called waves, which are related to spanning trees on a lattice. A wave of topplings is simply obtained by restraining the initial site, from where an avalanche was initiated, to topple again only after all the other unstable sites have relaxed. Recently it was shown that the wave distribution satisfies the finite size scaling ansatz, with critical exponents deduced from the equivalence between waves and spanning trees .

In this Letter we present numerical evidence that the critical exponents of the wave size and time distributions, which were deduced from geometrical considerations in , can be related to the parameters of a scale invariant Fokker-Planck equation (FPE) in any dimension. In this way we make a connection between the geometrical properties of the waves and an evolution equation. As further confirmation of the validity of a FPE description for the abelian sandpile model, we show that the scaling behavior of the average number of unstable sites as a function of time is predicted by the FPE. Using the expressions for the critical exponents deduced in together with those inferred from the FPE approach, we are able to find the lower critical dimension of the abelian sandpile model and, as an extra benefit, the exact value for the loop erased random walk (LERW) critical exponent $`\nu `$ in three dimensions. In two dimensions, the case in which the distribution of the last wave is known , we can compute the asymptotic behavior of the avalanche distribution, with the result that it has the form of a difference between two close power laws. This has been previously proposed as an explanation for the failure of the finite size scaling approach .
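As mentioned above, here is a toy implementation of the toppling rules (ours; the lattice size, threshold and number of grains are illustrative assumptions, not values taken from this Letter). It drives a small two-dimensional sandpile with open boundaries and records the number of topplings per added grain.

```python
# Minimal 2D abelian sandpile: a site topples when it holds more than
# z_max = 3 grains, sending one grain to each of its 4 neighbors; grains
# dropped over the edge leave the system (open boundaries).
import numpy as np

rng = np.random.default_rng(0)
L = 32
z = np.zeros((L, L), dtype=int)

def relax(z):
    """Topple until all sites are stable; return the number of topplings."""
    size = 0
    while True:
        unstable = np.argwhere(z > 3)
        if len(unstable) == 0:
            return size              # order of topplings is irrelevant (abelian)
        for i, j in unstable:
            z[i, j] -= 4
            size += 1
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < L and 0 <= nj < L:
                    z[ni, nj] += 1   # grains at the boundary are lost

sizes = []
for _ in range(20000):
    i, j = rng.integers(L, size=2)
    z[i, j] += 1                     # drop one grain at a random site
    sizes.append(relax(z))
# after a transient, the nonzero entries of `sizes` sample the avalanche-size law
```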
The abelian sandpile model is, from its definition, a Markovian process whose states are specified by the lattice configuration. Once the initial point has been chosen randomly, the dynamics of relaxation is deterministic, with the evolution determined by the initial configuration and the toppling rule. We consider a coarse-grained description of the sandpile evolution. Instead of a complete description, involving the number of grains at each site of the lattice, we use as variable the total number of unstable sites which exist at a given time after an avalanche has started, irrespective of the configuration in which the sandpile is. This description is stochastic, since the transition between two states with a given number of unstable sites depends on the configuration of the sandpile, which is now taken randomly. Let us consider for this process the evolution equation describing the transition between states with different numbers of unstable sites: $`P(t+1,n)=\sum_{n^{\prime}}W(t;n,n^{\prime})P(t,n^{\prime})`$, where $`W(t;n,n^{\prime})`$ is obtained by averaging over all the transitions between the configurations with the given numbers of unstable sites $`n,n^{\prime}`$ at a given time $`t`$. The transition probability $`W(t;n,n^{\prime})`$ may depend on time, since the configuration of the unstable sites also depends on time; i.e., at the first step of a wave the unstable sites are some of the nearest neighbors of the initial site, eventually moving away.

In this formulation the wave is equivalent to a particle performing a discrete random walk on the positive semi-axis with transition probabilities depending on the particle position. A wave event is a first-arrival-at-the-origin problem, for the wave stops when all the sites, except the initial one, are stable. This random walk is close to a diffusion process, since the number of unstable sites varies with bounded steps, $`x(t+1)<2Dx(t)`$, where $`x(t)`$ is the number of unstable sites and $`D`$ is the lattice dimension. From this analogy we expect that the distribution of first arrival at the origin, that is, the wave duration, has a power-law form, as does the first-return distribution of the simple diffusion process. If we take the lattice size to infinity and the time unit to zero in an appropriate way, the discrete Markov chain can be cast into a diffusion equation via the Kramers-Moyal expansion . Here we shall not propose an explicit way to construct the FPE for the ASM; instead we use the fact that in the stationary state the sandpile is at criticality, and we shall investigate the general FPE which yields critical behavior and has diffusion and drift coefficients behaving similarly to the sandpile model. Generically the FPE has the form $`\partial_t p=-\partial_x[v(x,t)p]+(1/2)\partial_{xx}^2[D_2(x,t)p]`$, where $`p(x,t)`$ is the probability density of the number of unstable sites, $`x`$ is the number of unstable sites, and $`t`$ is the time since the wave has started. The drift coefficient $`v(x,t)`$ and the diffusion coefficient $`D_2(x,t)`$ are obtained by taking the continuum limit of the local first-order moment $`\sum_j(x_i-x_j)W(i,j;t)`$ and of the local second-order moment $`\sum_j(x_i-x_j)^2W(i,j;t)`$ . Numerically we have found that the discrete diffusion coefficient $`D_2(x,t)`$ depends linearly on the number of unstable sites $`x`$. The slope depends on the dimension of the lattice, but does not depend on the lattice size or on the geometric condition (bulk or boundary wave) (see Fig. 1).
Also, we have found that the discrete drift coefficient $`v(x)`$ tends to a constant as the size of the lattice grows (Fig. 1). The finite size effects affect the drift coefficient for the bulk waves at any value of the number of unstable sites, since the transition to states with a larger number of unstable sites is less probable when the wave takes place near the boundary, while the statistics collect all the waves. The simplest Fokker-Planck equation satisfying the scale invariance assumption and the numerical behavior of the discrete coefficients $`D_2(x,t)`$ and $`v(x,t)`$ is

$$\partial_t p(x,t)=-\partial_x[vt^{-\alpha}p(x,t)]+\partial_{xx}^2[D_2xt^{-\alpha}p(x,t)]$$ (1)

where $`v,D_2,\alpha `$ are constants. The initial condition for the above equation is $`p(x,t=t_0)=\delta(x-x_0)`$. Since we are interested in the time and size distributions of waves, which are first-arrival events, we set an absorbing boundary condition at the origin, $`p(x=0,t)=0`$, for the wave stops when the number of unstable sites, apart from the initial one, is zero. The above differential equation is invariant under the scale transformation $`x\to bx`$, $`t\to b^{1/(1-\alpha)}t`$. We observe that we can eliminate the parameter $`D_2`$ by the variable change $`x\to x/D_2`$, which turns $`v`$ into $`v/D_2`$. Using a standard approach one can find the asymptotic behavior of the first arrival at the origin for Eq. (1): $`P_t(t)\sim t^{-\tau_t}`$, $`t\gg 1`$, with

$$\tau_t=1+(1-\alpha)|1-v|.$$ (2)

The second critical exponent we are interested in describes the size distribution of waves. The size of the wave is the sum of the numbers of unstable sites until the wave stops; in the continuous formulation we have $`s(t)=\int_{t_0}^{t}dt^{\prime}\,x(t^{\prime})+x(t_0)`$. We make the observation that the variable $`s`$ is a monotonic function of time, as $`\dot{s}=x(t)>0`$. The relation between the two variables can be found using the average relation $`\langle\dot{s}(t)\rangle=\langle x(t)\rangle`$. Multiplying Eq. (1) by $`x`$ and integrating over $`x`$ we obtain $`\langle x(t)\rangle\sim t^{1-\alpha}`$, $`t\gg 1`$, where the average was normalized to the probability of surviving until the moment $`t`$, $`\int dx\,p(x,t)`$. Hence $`s\sim t^{2-\alpha}`$, and we can use the variable $`s`$ in the time distribution of waves: $`t^{-\tau_t}dt\sim s^{-\tau_t/(2-\alpha)}s^{\frac{1}{2-\alpha}-1}ds=s^{-\tau_a}ds`$ for large $`t`$ and $`s`$, where

$$\tau_a=1+\frac{(1-\alpha)}{2-\alpha}|1-v|.$$ (3)

This result can be checked easily by Monte Carlo simulation of a random walk constructed to be the discrete version of Eq. (1) or of equivalent ones.
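A minimal version of such a Monte Carlo check could look as follows (our sketch; the step size, ensemble size and cutoff are illustrative). It integrates an Euler-Maruyama discretization of Eq. (1) with $`D_2=1`$ and an absorbing boundary at the origin, using the $`D=2`$ boundary-wave values $`\alpha=2/5`$ and $`v=-1/3`$ that follow from Eqs. (2)-(5); the survival probability of the walkers should then decay as $`t^{-(\tau_t-1)}`$ with $`\tau_t=1.8`$.

```python
# Euler-Maruyama discretization of Eq. (1):
#   dx = v t^{-alpha} dt + sqrt(2 x t^{-alpha}) dW,   absorbed at x = 0.
import numpy as np

rng = np.random.default_rng(1)
alpha, v = 0.4, -1.0 / 3.0        # D = 2 boundary waves: tau_t = 1 + (1-alpha)|1-v| = 1.8
dt, nwalk, tmax = 0.01, 20000, 200.0

x = np.ones(nwalk)                # initial condition x0 = 1 at t0 = 1
t_hit = np.full(nwalk, np.inf)    # first-arrival times (inf = still alive)
t = 1.0
while t < tmax and np.isinf(t_hit).any():
    alive = np.isinf(t_hit)
    xa = x[alive]
    x[alive] = (xa + v * t**(-alpha) * dt
                + np.sqrt(2.0 * xa * t**(-alpha) * dt) * rng.standard_normal(xa.size))
    t += dt
    t_hit[alive & (x <= 0.0)] = t  # walkers that just crossed the origin

# the survival fraction P(t_hit > t) should fall off as t^{-(tau_t - 1)} = t^{-0.8}
print("predicted tau_t =", 1.0 + (1.0 - alpha) * abs(1.0 - v))
```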
In Table I we show the values of the critical exponents $`\tau_a,\tau_t`$ taken from ; using Eqs. (2,3) we have computed the values of the parameters $`v`$ and $`\alpha `$. We observe that $`\alpha `$ has the same value for bulk and boundary waves, and that the time inhomogeneity disappears at the critical dimension, $`\alpha=0`$ for $`D=4`$. Thus the exponent $`\alpha `$ can be interpreted as a measure of the correlation among the unstable sites below the critical dimension. Also, for $`D=2`$ we note that $`\alpha `$ has the same numerical value as the critical exponent characterizing the decay of the autocorrelation function of waves found in . One more hint that the exponent $`\alpha `$ is related to the correlation on the lattice is that it depends linearly on the loop erased random walk critical exponent $`\nu `$, which has the values $`\nu=4/5`$ ($`D=2`$), $`\nu=0.616`$ ($`D=3`$), $`\nu=1/2`$ ($`D\ge 4`$) . Indeed, one can see easily that the relation

$$\alpha=\frac{4}{3}\left(\nu-\frac{1}{2}\right)$$ (4)

holds exactly in $`D=2`$ and $`D=4`$ and has an error of $`0.003`$ for $`D=3`$, a case in which the critical exponent $`\nu `$ is only known from numerical simulation . In the following we shall use Eq. (4), which holds for both bulk and boundary waves, as a conjecture for further exploration of Table I. The fact that $`\alpha `$ has the same value for both bulk and boundary waves can be checked numerically using the first two moments $`m_n(t)=\int x^n p(x,t)\,dx`$ of the solution of Eq. (1). Integrating Eq. (1) over $`x`$ and using the absorbing boundary condition at the origin, we obtain $`m_1(t)/m_0(t)\sim t^{1-\alpha}`$, which is independent of $`v`$. Numerical estimation of the above ratio in $`D=2,3,4`$, presented in Fig. (2), shows an excellent agreement with the predicted values (the error is less than $`0.01`$). It has been shown in that for waves the critical exponents for the size and time distributions are given by

$$\begin{array}{c}\tau_a=2-\frac{1+\sigma}{d_f}\hfill \\ \tau_t=1+(d_f-1-\sigma)\nu\hfill \end{array}$$ (5)

where $`d_f`$ is the fractal dimension of the wave, which has the value of the Euclidean dimension for $`D=2,3,4`$ and the value $`4`$ for higher dimensions; $`\sigma=1,0`$ for bulk and boundary waves, respectively. Using the above proposed conjecture, Eq. (4), and Eqs. (2,3,5), from $`(2-\alpha)(\tau_a-1)=\tau_t-1`$ we obtain

$$d_f=\frac{8}{3}\frac{1}{\nu}-\frac{4}{3}.$$ (6)

This relation shows that the minimum critical dimension of the abelian sandpile model is $`4/3`$, since the maximum value of $`\nu `$ is $`1`$. This result is in agreement with the observation that in $`D=1`$ the scaling in the abelian sandpile model breaks down . An additional benefit of the above relation is that it gives the value of the critical exponent for the loop erased random walk in three dimensions. Indeed, if we put $`d_f=3`$, and keep in mind that $`d_f=D`$ for $`D<4`$, we get $`\nu_{D=3}=8/13=0.615384\ldots`$, in perfect agreement with the numerical value found in . In fact, assuming the conjecture Eq. (4) and identifying $`D=d_f`$, Eq. (6) yields a relation between the LERW critical exponent and the space dimensionality. Ktitarev et al. argue that all critical exponents of the abelian sandpile are determined by the critical exponent $`\nu `$. In order to complete this program we need the relation between the drift coefficient $`v`$ and $`\nu `$, and a relation between the drift coefficients $`v`$ for the bulk and boundary waves. Using Eq. (5) for $`\tau_t`$ together with Eqs. (2,3) we obtain

$$\frac{|1-v_{\text{bulk}}|}{|1-v_{\text{boundary}}|}=\frac{d_f-2}{d_f-1}.$$ (7)

We need one more relation to connect the coefficients $`\alpha `$ and $`v`$ (bulk or boundary) with $`\nu `$. This can be obtained by using, for example, Eqs. (3,4,5), from which we extract

$$v_{\text{bulk}}=\{\begin{array}{cc}\frac{6\nu-3}{5-4\nu}\hfill & \text{if }\nu\le 4/5\hfill \\ \frac{13-14\nu}{5-4\nu}\hfill & \text{if }\nu>4/5\hfill \end{array}.$$ (8)

We make the observation that Eqs. (4,7,8) hold exactly in $`D=3`$ if we use the deduced value $`\nu=8/13`$; see Table I. At this point, we can conclude that we have found a self-consistent description of the critical properties of the time and size distributions of waves in the ASM.
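The self-consistency claimed here is easy to verify with exact rational arithmetic. The short script below (ours) checks that, for the three values of $`\nu `$ quoted above, our reading of Eqs. (2)-(6) closes: the conjecture (4) and the relation (6) reproduce $`d_f=D`$, and the exponents of Eq. (5) satisfy $`(2-\alpha)(\tau_a-1)=\tau_t-1`$ for both bulk and boundary waves.

```python
from fractions import Fraction as F

cases = {2: F(4, 5), 3: F(8, 13), 4: F(1, 2)}      # D: LERW exponent nu

for D, nu in cases.items():
    alpha = F(4, 3) * (nu - F(1, 2))                # Eq. (4)
    d_f = F(8, 3) / nu - F(4, 3)                    # Eq. (6)
    assert d_f == D
    for sigma in (1, 0):                            # bulk, boundary
        tau_a = 2 - F(1 + sigma) / d_f              # Eq. (5)
        tau_t = 1 + (d_f - 1 - sigma) * nu          # Eq. (5)
        one_minus_v = (tau_t - 1) / (1 - alpha)     # |1 - v| from Eq. (2)
        # Eq. (3) must reproduce the same size exponent:
        assert tau_a == 1 + (1 - alpha) / (2 - alpha) * one_minus_v

print("Eqs. (2)-(6) are mutually consistent for D = 2, 3, 4")
```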
The exponent $`\alpha `$ captures the lattice correlation and the drift $`v`$ controls the boundary condition of the wave, both of them being functions of the loop erased random walk critical exponent $`\nu `$ through Eqs. (4,7,8). Now we are in a position to compute the asymptotic behavior of the avalanches in $`D=2`$ using the FPE. In this description an avalanche is the sum of a random number of waves. The waves are statistically independent, being recurrent events . This assumption might appear to be in contradiction with the analysis of , but in fact in this approach the correlation is already included in the time inhomogeneity. When a wave of size $`s`$ touches the origin, it has the probability $`p_d(s)`$ to die, thus also concluding the avalanche, or a new wave can start with the probability $`(1-p_d(s))`$. We choose the probability for the wave to die as $`p_d(s)=s^{-3/8}\ln s`$, so as to recover asymptotically the probability for the last wave: $`p_w(s)p_d(s)=p_{lw}(s)\sim s^{-11/8}`$ . The avalanche distribution can then be written as $`p_a(s)=p_w(s)p_d(s)+\int_0^s ds^{\prime}\,p_w(s^{\prime})(1-p_d(s^{\prime}))p_w(s-s^{\prime})p_d(s-s^{\prime})+\ldots`$ We can sum the previous series after a Laplace transform in $`s`$, and we have

$$p_a(\lambda)=\frac{p_{lw}(\lambda)}{1-((1-p_d)p_w)_\lambda}.$$ (9)

Applying again the Tauberian theorem, we find that asymptotically the avalanche size distribution behaves like

$$p_a(s)\simeq C_1(s\ln s)^{-1}+C_2s^{-\frac{11}{8}}.$$ (10)

This kind of behavior has been proposed previously in the literature . The fact that $`1<11/8<2`$ makes it difficult to obtain the ‘pure’ dominant behavior. From a numerical fit we obtain $`C_2/C_1\approx 0.25`$; therefore the correction term $`C_2s^{-11/8}`$ remains non-negligible compared to $`C_1(s\ln s)^{-1}`$ for $`s\lesssim 10^6`$ (a numerical illustration is given at the end of this Letter). Thus, the FPE approach predicts that the avalanche distribution in the bulk must have the same asymptotic behavior as the waves for very large values of $`s`$, provided that the statistics exclude the avalanches which are affected by the boundary.

In conclusion, we have used numerical hints to propose a FPE for the time and size distributions of the waves in the ASM. In this approach a wave is a first-return event, and the asymptotic properties of its distributions (time and size) are described by the first-return probabilities of a time inhomogeneous FPE; the time inhomogeneity appears below the critical dimension $`D=4`$. Furthermore, this approach yields an analytical expression for the asymptotic behavior of the avalanche distribution in $`D=2`$ which goes beyond the finite size scaling hypothesis and is in agreement with recent results . Using the relations for the critical exponents $`\tau_a`$, $`\tau_t`$ deduced in together with the relations found through the FPE approach and the conjecture (4), we then propose an explicit dependence of the critical exponents $`\tau_a`$, $`\tau_t`$ on the critical exponent $`\nu `$ of the LERW (via $`\alpha `$ and $`v`$). A bonus of this approach is that it yields the value of the lower critical dimension, $`4/3`$, for the ASM and the exact value $`\nu=8/13`$ in $`D=3`$ for the LERW.

The author thanks H. B. Geyer, F. Scholtz, L. Boonaazier and A. van Biljon for useful discussions and H. B. Geyer for a critical reading of the manuscript.
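As promised above, a quick numerical illustration (ours; only the ratio $`C_2/C_1\approx 0.25`$ is taken from the fit quoted in the text) of how slowly the two terms of Eq. (10) separate:

```python
# Compare the two terms of Eq. (10), p_a(s) ~ C1 (s ln s)^-1 + C2 s^{-11/8},
# with C2/C1 = 0.25 as obtained from the fit.
import numpy as np

C1, C2 = 1.0, 0.25
for s in (1e2, 1e4, 1e6, 1e8):
    t1 = C1 / (s * np.log(s))
    t2 = C2 * s ** (-11.0 / 8.0)
    print(f"s = {s:.0e}:  term2/term1 = {t2 / t1:.3f}")
# the subleading term still contributes ~20% at s = 1e2 and ~2% at s = 1e6,
# dropping below the percent level only for s well beyond 1e6
```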
# Phenomenological aspects of kaon photoproduction on the nucleon

## 1 INTRODUCTION

A wealth of new high-statistics data on elementary kaon photo- and electroproduction has recently become available in three isospin channels. Along with some new progress on the theoretical side, this has made the field of kaon electromagnetic production one of considerable interest. New models, spanning from chiral perturbation theory to the relatively simple isobar approach, have been proposed in recent years, as the SAPHIR collaboration made their precise data publicly available. Because the error bars are sufficiently small, an interesting structure can be resolved in the $`K^+\mathrm{\Lambda }`$ total cross section. This leads to a critical question as to whether the structure comes from less-known resonances, or whether other reaction channels start to open at the corresponding energy. In this paper we discuss some phenomenological aspects which can be investigated by means of the isobar model. The model has been constructed by including the three states that have been found to have significant decay widths into the $`K\mathrm{\Lambda }`$ and $`K\mathrm{\Sigma }`$ channels, the $`S_{11}`$(1650), $`P_{11}`$(1710), and $`P_{13}`$(1720) resonances, and by fitting all elementary data through the adjustment of some free parameters, which are known as coupling constants.

## 2 THE MODEL AND SOME PHENOMENOLOGICAL ASPECTS

### 2.1 Hadronic Form Factors

Previous analyses of kaon photoproduction have never included a form factor at the hadronic vertex. However, since most of the present isobaric models diverge at higher energies, the need for such hadronic form factors has been known for a long time. Furthermore, it has been demonstrated that models which give a good description of the $`(\gamma ,K^+)`$ data can give unrealistically large predictions for the $`(\gamma ,K^0)`$ channels . It is well known that incorporating a hadronic form factor helps alleviate this divergence and, simultaneously, leads to a problem with gauge invariance, since not every diagram in the Born terms retains gauge invariance by itself. (Since the resonance terms are individually gauge invariant, the discussion will be limited to the Born terms.) The question of gauge invariance is actually one of the central issues in dynamical descriptions of how photons interact with hadronic systems. While there is usually no problem at the tree level with bare, point-like particles, the problem becomes very complicated once the electromagnetic interaction is consistently incorporated within the full complexity of a strongly-interacting hadronic system.

In previous work we studied the influence of hadronic form factors on kaon production by multiplying the whole amplitude with an overall, monopole form factor $`F(t)`$, i.e.,

$$M_{\mathrm{fi}}=[M_\mathrm{B}(s,t,u)+M_\mathrm{R}(s,t,u)]F(t),$$ (1)

where the subscripts B and R refer to the Born and resonance terms, to simulate the average effect of the fact that nucleons are not point-like. In spite of its success in suppressing the divergence and avoiding the problem of gauge invariance, this ad hoc fashion does not have any microscopic foundation. In order to restore gauge invariance properly, one needs to construct additional current contributions beyond the usual Feynman diagrams to cancel the gauge-violating terms. One of the most widely used methods is due to Ohta .
For kaon photoproduction off the nucleon, Ohta’s prescription amounts to dropping all strong-interaction form factors for all gauge-violating electric current contributions in the Born terms. Symbolically, this may be written as

$$M_\mathrm{B}(s,t,u)=M_\mathrm{B}^{\mathrm{mag.}}[s,t,u,F(s),F(t),F(u)]+M_\mathrm{B}^{\mathrm{elec.}}(s,t,u).$$ (2)

The recipe, however, does not completely solve the problem of divergence, since the electric terms have no suppression and, therefore, can increase violently as a function of the coupling constants. As shown in Ref. , even at coupling constant values accepted by SU(3) symmetry, Ohta’s recipe already yields a very large $`\chi^2`$. On the other hand, Haberzettl has put forward a comprehensive treatment of gauge invariance in meson photoproduction. This includes a prescription for restoring gauge invariance in situations where one cannot handle the full complexity of the problem and therefore must resort to some approximations. In our language, this method can be translated as

$$M_\mathrm{B}(s,t,u)=M_\mathrm{B}^{\mathrm{mag.}}[s,t,u,F(s),F(t),F(u)]+M_\mathrm{B}^{\mathrm{elec.}}(s,t,u)\widehat{F}(s,t,u),$$ (3)

with $`\widehat{F}(s,t,u)=a_1F(s)+a_2F(t)+a_3F(u)`$ and $`a_1+a_2+a_3=1`$. Clearly, Haberzettl’s method removes Ohta’s problem through the additional form factor in the electric terms. By fitting to the kaon photoproduction data we found the method proposed by Haberzettl to be superior to Ohta’s, since the former can provide a reasonable description of the data using values for the leading coupling constants close to the SU(3) prediction. Such couplings cannot be accommodated in Ohta’s method, due to the absence of a hadronic form factor in the electric current contribution.

### 2.2 The anomalous magnetic moment of the nucleon

One of the important ground-state properties of the nucleon is the anomalous magnetic moment, which exists as a direct consequence of its internal structure. More than 30 years ago Gerasimov, and independently Drell and Hearn, proposed that this ground-state property is related to the nucleon’s resonance spectrum by a sum rule, which was then called the Gerasimov-Drell-Hearn (GDH) sum rule . In the limit of the photon point, the sum rule may be written as

$$\frac{\kappa_N^2}{4}=\frac{m_N^2}{8\pi^2\alpha}\int_0^{\infty}\frac{d\nu}{\nu}[\sigma_{1/2}(\nu)-\sigma_{3/2}(\nu)],$$ (4)

where $`\sigma_{3/2}`$ and $`\sigma_{1/2}`$ denote the cross sections for the possible combinations of the nucleon and photon spins. An experiment with polarized beam and target has been performed at MAMI with photon energies up to 850 MeV, and the data are being analyzed . Using higher photon energies, experiments have been planned at ELSA and JLab. For practical purposes, instead of Eq. (4) we use

$$\kappa_N^2=\frac{m_N^2}{\pi^2\alpha}\int_{\nu_{\mathrm{thr}}}^{\nu_{\mathrm{max}}}\frac{d\nu}{\nu}\sigma_{TT^{\prime}},$$ (5)

where $`\sigma_{TT^{\prime}}`$ denotes the cross section with polarized real photon and target. In terms of polarization observables this cross section corresponds to the double polarization $`E`$ .
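A minimal sketch of how the truncated integral of Eq. (5) can be evaluated numerically is given below. The cross-section table is a purely hypothetical placeholder (the actual $`\sigma_{TT^{\prime}}`$ must come from the model described in this paper); the threshold value corresponds roughly to the $`K^+\mathrm{\Lambda }`$ photoproduction threshold.

```python
# Evaluate kappa_N^2 = m_N^2/(pi^2 alpha) * Int_{nu_thr}^{nu_max} dnu/nu sigma_TT'
# by simple trapezoidal quadrature.  sigma_TT' below is a dummy shape only.
import numpy as np

M_N = 0.938272                 # nucleon mass (GeV)
ALPHA_EM = 1.0 / 137.036       # fine-structure constant
GEV2_PER_MUB = 1.0 / 389.379   # 1 microbarn = 1/389.379 GeV^-2

nu = np.linspace(0.911, 2.2, 400)                    # photon lab energy (GeV)
sigma_tt = 0.5 * np.exp(-((nu - 1.1) / 0.3) ** 2)    # microbarn (hypothetical!)

integral = np.trapz(sigma_tt * GEV2_PER_MUB / nu, nu)    # dimensionless
kappa2 = M_N**2 / (np.pi**2 * ALPHA_EM) * integral
print(f"truncated kappa_N^2 contribution = {kappa2:.4f}")
```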
Since there are no data available for $`\sigma_{TT^{\prime}}`$, previous work approximated Eq. (5) with

$$\kappa_N^2\lesssim\frac{m_N^2}{\pi^2\alpha}\int_{\nu_{\mathrm{thr}}}^{\nu_{\mathrm{max}}}\frac{d\nu}{\nu}\sigma_T,$$ (6)

to estimate the upper bound of the contributions, where $`\sigma_T`$ represents the total cross section. To calculate Eqs. (5) and (6) we use our elementary operator with $`\nu_{\mathrm{max}}=2.2`$ GeV. The result is shown in Table 1. Our calculation yields values of $`\kappa_p^2(K)=-0.063`$ and $`\kappa_n^2(K)=0.031`$, or $`|\kappa_p(K)|/\kappa_p\approx 0.14`$ and $`\kappa_n(K)/\kappa_n\approx 0.094`$. This shows that the kaon-hyperon final-state contributions to the proton’s and neutron’s magnetic moments are very small. An interesting feature is that our calculation yields a negative contribution for $`\kappa_p^2(K)`$ and a positive contribution for $`\kappa_n^2(K)`$, which is consistent with the result of Karliner’s work .

### 2.3 Investigation of missing resonances

A brief inspection of the particle data book reveals that less than 40% of the predicted nucleon resonances are observed in $`\pi N\to\pi N`$ scattering experiments. Quark model studies have suggested that those “missing” resonances may couple strongly to other channels, such as the $`K\mathrm{\Lambda }`$ and $`K\mathrm{\Sigma }`$ channels . Interestingly, the new SAPHIR total cross section data for the $`p(\gamma ,K^+)\mathrm{\Lambda }`$ channel, shown in Fig. 1, indicate for the first time a structure around $`W=1900`$ MeV. Using the current isobar model, we investigate this structure.
## 3 EXTENSION TO HIGHER ENERGIES Extending the model to the higher energy regime requires a non-trivial task, since the Born terms increase rapidly as a function of energy. As shown in Fig. 2, even the hadronic form factors are unable to suppress the cross sections for the energy region above 2 GeV as demanded by the data. Especially in the case of $`K^0\mathrm{\Sigma }^+`$ production, where the predicted cross section starts to monotonically increase at this point. However, in order to explore the higher-lying nucleon resonances or to account for higher energies contributions to the GDH integral, an isobar model which also properly work at higher photon energies would be demanded. In Ref. it has been shown that the contributions from the $`t`$-channel resonances are responsible for the divergence of the cross section, thus indicating that the Regge propagator should be used instead of the usual Feynman propagator. While a proper reggeization of the model is considerably complicated and the study is still underway, we investigate here only the qualitative effects of using Regge propagators in the model. Following Ref. , we multiply the Feynman propagators $`1/(tm_K^{}^2)`$ of the $`K^{}(892)`$ and $`K_1(1270)`$ resonances in the operator with a factor of $`P_{\mathrm{Regge}}(tm_K^{}^2)`$, where $`P_{\mathrm{Regge}}`$ indicates the Regge propagator given in Ref. . For the $`K^{}`$ intermediate state it has the form $`P_{\mathrm{Regge}}`$ $`=`$ $`{\displaystyle \frac{s^{\alpha _K^{}(t)1}}{\mathrm{sin}[\pi \alpha _K^{}(t)]}}e^{i\pi \alpha _K^{}(t)}{\displaystyle \frac{\pi \alpha _K^{}^{}}{\mathrm{\Gamma }[\pi \alpha _K^{}(t)]}},`$ (7) where $`\alpha (t)=\alpha _0+\alpha ^{}t`$ denotes the corresponding trajectory. Equation (7) clearly reduces to the Feynman propagator in the limit of $`tm_K^{}^2`$, thus approximating the low energy behavior of the amplitude. The model is then refitted to kaon photoproduction data and the result is shown in Fig. 2, where we compare the isobar model with and without reggeization. Obviously, Regge propagators strongly suppress the cross section at high energies and, therefore, yield a better explanation of data at this energy regime. For the $`K^0\mathrm{\Sigma }^+`$ process, the use of Regge propagators seems to give more flexibility in reproducing the cross section data. This cannot be achieved without reggeization, since the high energy behavior of both $`t`$-channel resonances is less controllable by the hadronic form factors. However, since the data for the $`K^0\mathrm{\Sigma }^+`$ channel shown in Fig. 2 are still preliminary , we have to wait before any further conclusion can be drawn. In future we will include the high energy data in the fit and investigate the model in the transition between medium and high energy regions.
## 1 Introduction

The celebrated AdS/CFT conjecture opened, two years ago, an important path to study strongly coupled gauge theories from an unexpected supergravity framework. With this new technology, string theorists have been able to describe a variety of properties of gauge theories unreachable from the field theory point of view (see ). All this work was supported by the conformal invariance of the $`\mathcal{N}=4`$ SYM on the D$`3`$-brane, represented in supergravity by the conformal nature of the $`AdS_5\times S^5`$ factor of the corresponding background. This symmetry is a strong constraint on the system that has allowed a good description of it.

The question we approach in this paper goes in the direction of the extension of Maldacena’s conjecture to non-conformal branes. Some work on the subject has been done . In particular, we will focus on the compatibility of a string symmetry, T-duality, with the holographic conjecture. There have been some approaches to the problem, and it seems that at least part of the phase space of branes on tori has been constructed . Our first interest in this paper is the clarification of such results. We will argue that T-duality does not affect the holographic description of D-brane dynamics, and that we can map any physical property of the system from one brane system to the other. We will especially focus on the renormalization of classical parameters and discuss how it must be understood. We will also explore the corrections to the dynamics due to the presence of compact dimensions. This analysis will allow us to present a detailed description of different parts of the phase space. In particular, we describe an intermediate phase between two T-dual systems that is exclusively produced by finite size effects. Finally, we will explore the dynamics of Wilson loops in this kind of background. The analysis of this system will be done with the same philosophy as in the previous sections. We will study two different quark-antiquark configurations and see how T-duality and finite size effects change their properties.

## 2 The background solution.

We are interested in obtaining the complete background solution for wrapped D-branes and their T-duals, that is, unwrapped D-branes moving in a transverse space that is compactified in at least one direction. We shall compute one (the latter) and deduce the other with the help of the Buscher rules for T-duality. For definiteness, let us concentrate on the case of D2-branes with a transverse space of the form $`R^6\times S^1`$. It is easier if we adopt the multi-centered image, which consists in considering the circle as the whole $`R`$ line with the identification $`x\sim x+2\pi R`$. This means that the compactified metric is equivalent to the one induced by an infinite number of parallel and equally-spaced D2-branes. Of course, the solutions cannot be linearly added, because supergravity is not at all a linear theory. Happily enough, however, we can write everything in terms of a function, usually called the harmonic function, that does behave linearly . All physical fields, in particular the metric and the dilaton, are non-linear functionals of it. The general $`p`$-brane solution is

$$ds^2=f^{-1/2}(-dt^2+dx_i^2)+f^{1/2}dy_j^2$$ (1)

with

$$e^{\varphi}=g_sf^{\frac{3-p}{4}}$$ (2)

where $`i`$ labels the directions parallel to the brane and $`f`$ is the harmonic function, which depends on the transverse coordinates and on the radius of the compact direction.
This function $`f`$ obeys a Laplace equation whose solution, for a group of $`N`$ superposed D2-branes, is

$$f(r)=1+\frac{g_sNl_s^5d_2}{r^5}$$ (3)

where $`d_2=\frac{(2\pi)^5\mathrm{\Gamma }(7/2)}{10\pi^{7/2}}`$. The multi-centered solution is

$$f(r)=1+\underset{n=-\infty}{\overset{\infty}{\sum}}\frac{g_sNl_s^5d_2}{\left[r_{\perp}^2+\left(r_{\parallel}+2\pi nR\right)^2\right]^{5/2}}$$ (4)

The subscripts separate the coordinate that parameterizes the circle ($`r_{\parallel}`$) from the other transverse ones. Now we take the decoupling limit to see if we get something similar to an AdS space. We must define $`u_{\perp}:=r_{\perp}/l_s^2`$, $`u_{\parallel}:=r_{\parallel}/l_s^2`$, and also $`\mathcal{R}:=R/l_s^2`$. We take $`l_s`$ and all other lengths to zero while keeping the $`u`$ and $`\mathcal{R}`$ variables finite. The first two definitions follow the usual infrared limit that is necessary to decouple the open strings attached to the branes from the closed strings. In the supergravity, this separates the fields moving in the Minkowskian geometry very far from the horizon from the fields very near it. The meaning of the definition of $`\mathcal{R}`$ is not the same; $`r`$ and $`u`$ are variables and the limit is a restriction of their values, while the radius is a constant, and what we are doing is choosing the value of $`R`$ in such a way that $`\mathcal{R}`$ is finite (and much smaller than $`l_s^{-1}`$). We will explain the reason for this choice later. The only relevant magnitudes in the new geometry will be the ones defined above. We obtain

$$f(u_{\perp},u_{\parallel},\mathcal{R})=\frac{g_sNd_2}{l_s^5}\underset{n=-\infty}{\overset{\infty}{\sum}}\left[u_{\perp}^2+\left(u_{\parallel}+2\pi n\mathcal{R}\right)^2\right]^{-5/2}$$ (5)

It is useful to use spherical coordinates in the six-dimensional transverse space and an angular coordinate for the compact circle. This way $`\mathcal{R}`$ can explicitly appear in the metric. The solution then becomes

$$ds^2=f^{-1/2}(-dt^2+dx_i^2)+f^{1/2}\mathcal{R}^2d\theta^2+f^{1/2}du_{\perp}^2+u_{\perp}^2f^{1/2}d\mathrm{\Omega }_5^2$$ (6)

where $`\theta=u_{\parallel}/\mathcal{R}`$ is dimensionless and has periodicity $`2\pi `$. The three longitudinal coordinates and $`u_{\perp}`$ form a kind of AdS space over which a sphere and a circle, both with variable radii, are fibered. The T-dual metric is obtained with the substitution

$$f^{1/2}\mathcal{R}^2d\theta^2\;\longrightarrow\;f^{-1/2}\mathcal{R}^{-2}d\psi^2$$ (7)

The way we have dealt with the solution does not seem to be symmetric with respect to the duality. We have constructed the solution deforming the uncompactified solution of the D2-branes, but in the limit $`\mathcal{R}\to 0`$ we should recover the D3-brane case. There should, then, exist an expression that deforms the D3-brane solution adding the contribution of the closed strings wound around the compact Dirichlet direction, instead of the contribution of the mirror images, as we have done. It can be obtained by a Poisson resummation of the function $`f`$:

$$f(u_{\perp},\theta,\mathcal{R})=\frac{g_sNd_2}{l_s^5}\underset{n=-\infty}{\overset{\infty}{\sum}}\left[u_{\perp}^2+\mathcal{R}^2\left(\theta+2\pi n\right)^2\right]^{-5/2}=\frac{g_sNd_3}{l_s^5\mathcal{R}}u_{\perp}^{-4}+\underset{n=1}{\overset{\infty}{\sum}}\frac{g_sNd_3n^2}{l_s^5\mathcal{R}^3}u_{\perp}^{-2}\mathrm{cos}\left(n\theta\right)K_2\left(\frac{nu_{\perp}}{\mathcal{R}}\right)$$ (8)

where $`d_3=4\pi `$. The first term of the sum is the solution for the D3-brane with coupling $`g_s^{\prime}=g_s/(l_s\mathcal{R})`$. Notice that the natural transition point is $`u_{\perp}\sim\mathcal{R}`$.
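The two forms of the harmonic function in Eq. (8), and the location of the transition point, can be checked numerically. In the sketch below (ours; the overall constant $`g_sNd_2/l_s^5`$ is stripped off, and all scales are set by $`\mathcal{R}`$) the direct image sum is compared with its Bessel-resummed form, whose tower decays exponentially once $`u_{\perp}`$ exceeds a few times $`\mathcal{R}`$.

```python
# Compare the image sum over D2-branes with the Poisson-resummed (Bessel)
# form of Eq. (8); both should agree to machine precision.
import numpy as np
from scipy.special import kv   # modified Bessel function K_nu

R = 1.0                        # compactification scale (script R in the text)

def f_images(u_perp, theta, nmax=2000):
    n = np.arange(-nmax, nmax + 1)
    return np.sum((u_perp**2 + R**2 * (theta + 2 * np.pi * n)**2) ** -2.5)

def f_resummed(u_perp, theta, nmax=60):
    d2, d3 = 6 * np.pi**2, 4 * np.pi          # note d3 = (2/(3 pi)) d2
    zero_mode = (d3 / d2) / (R * u_perp**4)   # the D3-like term
    n = np.arange(1, nmax + 1)
    tower = (d3 / d2) / (R**3 * u_perp**2) * np.sum(
        n**2 * np.cos(n * theta) * kv(2, n * u_perp / R))
    return zero_mode + tower

for u in (0.3, 1.0, 3.0):
    print(f"u_perp = {u}: images = {f_images(u, 0.7):.6e}, "
          f"resummed = {f_resummed(u, 0.7):.6e}")
# the Bessel tower decays like exp(-u_perp / R), so well beyond u_perp ~ R
# the harmonic function is effectively that of a D3-brane
```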
By ‘transition point’ we mean the value of the variables where the zero mode (first term) of one of the series begins to be a bad approximation, as more and more terms need to be added to reach a given accuracy, and the zero mode of the other series begins to be a good approximation. The physical meaning of this resummation is something similar to the Gregory-Laflamme localization transition of black holes. Here we do not have a horizon and, therefore, neither do we have a definite size. However, at a certain transverse distance (in $u_\perp$) from the center, the D2-brane solution has a width in the longitudinal direction ($u_\parallel$) that can define a typical size, precisely of order $\mathcal{R}$. When the distance to the center is larger than that, the metric is basically constant with respect to $u_\parallel$. In the picture where the circle is emulated by a one-dimensional lattice, an observer beyond the point $u_\perp=\mathcal{R}$ sees a nearly continuous distribution of branes in the compact direction. Any computation obtained as a sum of contributions of each individual D2-brane has to be resummed. We expect to relate this soft transition to T-duality, so it is somewhat surprising that $\alpha'$ does not appear in the condition. The explanation is as follows. There are two different reasons to prefer a theory instead of its T-dual. In the case of closed strings, there are two series of energy levels in the spectrum, windings and momenta. If the winding modes are much lighter than the momenta, it is necessary to resum the perturbative series and, therefore, it is more useful to use the T-dual theory. When $R=\sqrt{\alpha'}$, both kinds of modes have the same energy, and that is why this is the limit of usefulness of each theory. On the other hand, when only open strings are involved, windings and momenta cannot coexist. Only when there is another length (or energy) scale, like the separation of branes, is there a reason to prefer one of the two theories. If the gaps in the discrete spectrum are much smaller than the energy scale set by the separation of the branes, many of them will contribute to a typical process and it is better to use the theory where they are momenta and can be easily integrated. Otherwise, one should use the other one. The range where the transition happens is around $R=Y$, if $Y$ is the separation between the branes. This second possibility is the one we have found in this section.

## 3 The gauge theories.

Let us now see which field theory is described by this geometry. It is the one that appears when we take the same limit in the open string model of the same group of D-branes. The modes of the string that do not decouple are the massless ones, whose dynamics is governed by the SYM<sub>2+1</sub>, and the strings that wind a finite number of times around the compact direction before both ends attach to the D2-brane. Again, we should use the multi-centered image, which is the clearest. The field theory is a $U(N\times\infty)$ SYM in $2+1$ dimensions. $N$ is the number of parallel and superposed D2-branes; it must be large in order for the conjecture to work well. The infinity that multiplies that $N$ can be substituted by any number large enough to describe all the windings of the system. The scalar with index in the compact direction takes the following expectation value,
$$X^i=2\pi R\,\mathrm{blockdiag}\{\dots,-\mathrm{Id}_{N\times N},0_{N\times N},\mathrm{Id}_{N\times N},\dots\}$$ (9)
which breaks the gauge symmetry to the subgroup $U(N)^{\infty}$.
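A minimal numerical illustration of this symmetry breaking (with hypothetical values $N=2$ and a five-image truncation of the infinite tower) is the following sketch. The VEV (9) places the image stacks at $x=2\pi nR$; generators commuting with $X^i$ are block diagonal, giving the unbroken $U(N)^W$, while the off-diagonal blocks acquire masses set by the brane separations — the winding spectrum:

```python
import numpy as np
from scipy.linalg import block_diag

N, W = 2, 5           # N branes per stack, W images kept of the infinite tower
R = 1.0               # illustrative compactification radius (units of l_s = 1)

# eq. (9): the scalar VEV places the image stacks at x = 2*pi*n*R, n = -2..2
blocks = [2 * np.pi * n * R * np.eye(N) for n in range(-(W // 2), W // 2 + 1)]
X = block_diag(*blocks)

positions = np.unique(np.diag(X))
print("image positions:", positions)          # multiples of 2*pi*R
# off-diagonal blocks connect different stacks and get masses equal to the
# separations between them (the open-string winding modes)
masses = sorted({abs(a - b) for a in positions for b in positions if a != b})
print("off-diagonal (winding) masses:", np.round(masses, 3))
```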
There is a T-dual description of this system that interchanges windings and momenta, which are, in this case, Higgs masses and Kaluza-Klein momenta. It is the SYM<sub>3+1</sub> compactified on a circle of radius $\tilde{R}$ (which equals $\alpha'/R=\mathcal{R}^{-1}$). The isomorphism identifies the fields of the $N\times N$ boxes which are, say, $M$ places away from the diagonal with the $N\times N$ fields that complete the adjoint representation of the SYM<sub>3+1</sub> with KK momentum $M/\tilde{R}$ ($=M\mathcal{R}$). There are two ways to write the action of this theory: as a $(3+1)$-dimensional theory with a compact direction, or as a $(2+1)$-dimensional theory with an infinite number of fields whose masses are integer multiples of a basic one. When the radius $\tilde{R}$ is small or, better, when the typical energy of the experiment is smaller than the first KK level, the lower-dimensional representation is more suitable, while in the opposite case one should use the $(3+1)$-dimensional action. Both actions reproduce the same physics and, therefore, give the same results. Renormalized parameters are functions of $f$; for example,
$$g_{YM}=e^{\varphi/2}=g_s^{1/2}f^{1/8}$$ (10)
The interpretation of the sum we have written in formula (8) is that it is the way the masses originated by the compactification add different corrections to the renormalization of the two uncompactified theories and interpolate between them in a continuous manner. There is, however, one subtlety. The interpolation between the two behaviours of the function $f$ is soft, but in the solution for both the dilaton (the gauge coupling) and the metric there is a discrete jump: in the first case there is a different power of $f$, and in the second there is an inversion of one of the components. The meaning of this is that the two different actions that describe the field theory do it in terms of two kinds of fields. The lower- and higher-dimensional fields have different dimensions and therefore do not scale equally (footnote: As usual, computing two-point functions of fields reduced on the transverse sphere, one finds that the weights of the $(3+1)$-dimensional fields correspond to masses on the $3$-brane background. For the $(2+1)$-dimensional fields we should reduce the fields of type IIA supergravity on $S^1\times S^5$, and their masses should account for their weight. In this case both the supergravity and gauge theory fields fall into representations of the $SU(4)\times U(1)$ R-symmetry group.). In particular, the gauge coupling of the $(3+1)$-dimensional theory is dimensionless and, classically, it is independent of the scale; on the other hand, in the $(2+1)$-dimensional theory, the coupling is a length to the $-1/2$ power and it is the dimensionless combination ($g_{YM}\sqrt{u}$) that depends on the scale. In principle, both descriptions are not only equivalent but, in fact, exactly the same. In both cases loops take the form of sums over a discrete number which we can call momentum or winding, but it is the same thing. So the picture is that when, for example, the energy is very low, the effective coupling is that of the D2-brane theory. If we are interested in higher energies, we reach a region in phase space where an increasing number of addends in the function $f$ contribute significantly, and when the energies are clearly higher than $\mathcal{R}$, the series can be resummed to obtain a new good zero mode.
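This interpolation of the effective coupling can be made concrete with a small numerical sketch (illustrative values for $g_s$, $N$ and $\mathcal{R}$, in units with $l_s=1$, using the resummed series written in (8) and the identification (10)). The local logarithmic slope of $g_{YM}(u)$ drifts from $-5/8$, the D2-brane behaviour, to $-1/2$, the conformal D3-brane behaviour:

```python
import numpy as np
from scipy.special import kn

g_s, N, Rc = 1.0, 1.0e4, 1.0    # illustrative values, in units with l_s = 1

def f(u, theta=0.0, n_max=400):
    """Resummed harmonic function of eq. (8): zero mode plus Bessel tower."""
    zero = u**-4 / Rc
    n = np.arange(1, n_max + 1)
    tower = np.sum(n**2 * np.cos(n * theta) * kn(2, n * u / Rc)) / (Rc**3 * u**2)
    return 4.0 * np.pi * g_s * N * (zero + tower)    # d_3 = 4*pi

def g_ym(u):
    return np.sqrt(g_s) * f(u) ** 0.125              # eq. (10)

# local logarithmic slope: -5/8 deep in the D2 zone, -1/2 in the D3 zone
for u in (0.05, 0.2, 1.0, 5.0, 50.0):
    eps = 1e-5
    slope = np.log(g_ym(u * (1 + eps)) / g_ym(u)) / np.log(1 + eps)
    print(f"u = {u:6.2f}   d(log g)/d(log u) = {slope:+.3f}")
```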
In terms of fields, this resummation can be seen as the redefinition of the lower-dimensional fields in terms of the $(3+1)$-dimensional ones:
$$A^i(x_3,x_j)=\sum_{n=-\infty}^{\infty}X_n(x_j)e^{inx_3/\tilde{R}}$$ (11)
This change of variables is not trivial at all when one considers the renormalized theories, because $R$, like any other parameter appearing in the Lagrangian, acquires a dependence on the scale $u$ that we should introduce in the Fourier expansion. This is a consequence of the renormalization of the masses of the KK modes. The redefinition affects the coupling constant as it does in any dimensional reduction:
$$\left(g_{YM}\right)^2=\frac{\left(g'_{YM}\right)^2}{R_{\text{ren}}(u)}$$ (12)
This means that the behaviour under renormalization of both theories can be very different. In our case, using the correspondence, one finds
$$u_\perp^{-5/8}\stackrel{u\ll\mathcal{R}}{\longleftarrow}\;g_{YM}\;\stackrel{u\gg\mathcal{R}}{\longrightarrow}u_\perp^{-1/2}$$ (13)
while $g'_{YM}$ is independent of the scale. This is due to the fact that the scale dependence of $g_{YM}$ and that of $R_{\text{ren}}(u)$ exactly cancel. In fact, the behaviour $g_{YM}\sim u_\perp^{-1/2}$ is conformal because the dimensionless coupling in two dimensions, $g_{YM}\sqrt{u_\perp}$, is indeed independent of the scale. It is interesting to notice that in this case the map that relates both sides of the correspondence is a little more complex. Indeed, the renormalized magnitudes depend on three parameters ($u_\perp$, $\theta$ and $\mathcal{R}$) instead of one. From the original correspondence for the D2-brane case we know that $u_\perp$ is the $\mu$ parameter, the typical energy of the experiment with which we are testing the system. The reason why $\mathcal{R}$ appears on the D2-brane side is that the renormalization given by the supergravity uses a mass-dependent scheme. The massive fields contribute to the renormalized quantities in such a way that, when the energies ($\mu$) are smaller than their mass, they naturally decouple, because the terms of the series related to them tend to zero in that limit. In our case, those terms are Lorentzian-like. In a mass-independent scheme, the terms are added by hand when necessary. The translation would be
$$\left[u_\perp^2+\mathcal{R}^2\left(\theta+2\pi n\right)^2\right]^{-5/2}\longrightarrow u_\perp^{-5}\,\Theta\left(u_\perp-\mathcal{R}\left|\theta+2\pi n\right|\right)$$ (14)
where $\Theta$ is a Heaviside step function. Although the Lorentzian-like function tends to a constant, not to zero, when its argument is small, it is possible to neglect it because it is much smaller than the divergent zero mode term. The variable $\theta$ contributes to the masses because, from the point of view of the field theory, it is the expectation value of the scalar field that is used as a test; this breaks the gauge symmetry and changes the masses of some fields. In the picture with the D3-branes there are no masses but discrete momenta. We know that in mass-independent schemes compactification does not affect the renormalization, and no dependence on the radius appears. The reason is that mass-independent renormalization adds the minimum counterterms to the Feynman diagrams needed to cancel the ultraviolet divergences. Obviously, compactifying does not affect the ultraviolet behaviour at all, because it is a purely infrared phenomenon. However, if one wants to observe how the infrared effects act on the effective coupling, one should use a radius-dependent scheme, and that is exactly what the supergravity result is.
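The threshold decoupling contrast between the two schemes can be sketched numerically (a hedged toy comparison with arbitrary values; the step function of (14) is implemented with `numpy.heaviside`). Both representations count essentially the same light modes once $u_\perp$ is well above or below each mass:

```python
import numpy as np

u, Rc, theta = 3.0, 1.0, 0.3
n = np.arange(-50, 51)
masses = Rc * np.abs(theta + 2 * np.pi * n)      # masses of the broken modes

# mass-dependent scheme: the smooth, Lorentzian-like terms of eq. (5)
smooth = (u**2 + masses**2) ** -2.5
# mass-independent translation, eq. (14): sharp Heaviside thresholds
sharp = u**-5 * np.heaviside(u - masses, 0.0)

print("modes below the scale u:", int(np.count_nonzero(sharp)))
print("smooth sum:", smooth.sum(), "   step-function sum:", sharp.sum())
```

Modes far above the scale contribute negligibly in the smooth scheme and exactly zero in the sharp one, so the two sums track each other; only the modes near threshold are treated differently.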
Indeed, the first term is the one given by the minimum counterterms, and the rest are exponentially suppressed when the energy is larger than the inverse radius. In this case $\theta$ is related to the Wilson line. The precise identification is
$$A^i=\frac{\mathcal{R}}{2\pi}\theta$$ (15)
The reason why it appears as a renormalization parameter is that the interaction, and therefore the coupling constant, is affected by its presence. Its effects are periodic, which is why they appear as a Fourier expansion, and they depend on the proportion between $u_\perp$ and $\mathcal{R}$. If the radius is very small, then the exponential asymptotic behaviour of the Bessel functions makes the contribution of the Wilson line negligible; but if the radius is large, then the function $f$ develops a clear maximum at $\theta=0$, and so does the coupling. This is due to the fact that the Wilson line breaks the gauge symmetry and gives masses to the intermediate (off-diagonal) bosons. In the infrared, when $u_\perp$ is much smaller than $\mathcal{R}$, these masses can forbid any interaction; that is why the effective coupling can tend to zero, and it is maximal when the Wilson line is not present at all ($\theta=0$).

## 4 Effects of renormalization on the phase space.

As we have written just above, not only the coupling changes with the ‘distance’ $u$: the radii of both the five-sphere and the circle also change, and it is interesting to analyze the meaning of this. Results related to this section are in . Let us start with the circle (footnote: We are very grateful to María Suárez for her help on the topics covered in this section.). In the supergravity, $\mathcal{R}$ is a parameter related to the value of the radius measured by a faraway observer situated where the metric is Minkowskian. The limit we have taken has completely disconnected that observer from the system we want to study, and the physical radius is a function given by
$$R_{\text{phys}}(u_\perp,\theta,\mathcal{R})=\alpha'\mathcal{R}f^{1/4}(u_\perp,\theta,\mathcal{R})$$ (16)
for the D2-brane case, and the inverse for the D3-brane one. The interpretation of this is direct. In any field theory all parameters, including radii, can be renormalized. The expression above tells us how the physical, renormalized radius depends on the scale. This can also be seen as the renormalization of the masses of the Kaluza-Klein modes. Let us now discuss the low energy spectrum of the two (IIA and IIB) T-dual string background solutions that we have. We are interested in particular in the energies of winding modes and discrete momenta. Classically, if we choose $\mathcal{R}=R/l_s^2$ to be an order one magnitude, then the energy of the IIA windings is also order one ($2\pi m\mathcal{R}$), while that of the momenta is large ($\sim\mathcal{R}^{-1}l_s^{-2}$) and decouples. If we now look at the physical radius, because the function $f$ runs from $0$ to infinity, this is not true at all points of space. In fact, there is a point where $R_{\text{phys}}(u_\perp,\theta,\mathcal{R})=l_s$, beyond which momenta begin to be the lightest modes and must be taken into account while windings decouple. This is the usual phenomenon that sets the limit of utility of two T-dual closed string theories, as we have written before. In this case, this soft transition always happens when $u\gg\mathcal{R}$, that is, in the far ultraviolet of the Yang-Mills theories. Explicitly,
$$R_{\text{phys}}(u,\theta,\mathcal{R})=l_s\quad\text{if}\quad u_\perp\approx d_3^{1/4}(g_sN)^{1/4}l_s^{-1/4}\mathcal{R}^{3/4}$$ (17)
That limiting value for $u$ is much larger than $\mathcal{R}$ because $(g_sN)/(\mathcal{R}l_s)$ tends to infinity in order for the conjecture to work.
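The crossing (17) can be located numerically with the zero-mode form of $f$ (a hedged sketch with illustrative values satisfying $g_sN\gg 1$, in units where $\alpha'=l_s^2$):

```python
import numpy as np
from scipy.optimize import brentq

l_s, g_s, N, Rc = 1.0, 0.1, 1.0e6, 1.0    # illustrative values with g_s*N >> 1
d3 = 4 * np.pi

def R_phys(u):
    # zero-mode (D3-zone) approximation of f, valid at u >> Rc
    f = d3 * g_s * N / (l_s**5 * Rc * u**4)
    return l_s**2 * Rc * f**0.25          # eq. (16), with alpha' = l_s**2

u_star = brentq(lambda u: R_phys(u) - l_s, 1.0, 1.0e6)
print("numerical crossing :", u_star)
print("eq. (17) estimate  :", (d3 * g_s * N)**0.25 * Rc**0.75 / l_s**0.25)
```

Both evaluations give $u_\star\simeq 33\gg\mathcal{R}$ for these values, illustrating that the T-duality boundary indeed sits far above the resummation point $u\sim\mathcal{R}$.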
For this reason, this transition always occurs where $f$ can be well approximated by
$$f(u_\perp,\theta,\mathcal{R})=\frac{d_3g_sN}{l_s^5\mathcal{R}u_\perp^4}$$ (18)
which is the harmonic function of a D3-brane in an open space. When $u_\perp$ is beyond both transition points, in the ultraviolet, the D3-brane solution is exact and there is no finite size effect. When it is smaller ($u\ll\mathcal{R}$), it is the D2-brane solution that describes the problem well. When
$$\mathcal{R}<u<(g_sN)^{1/4}l_s^{-1/4}\mathcal{R}^{3/4}$$ (19)
there is an intermediate case. One can use the supergravity solution of the D3-brane, but the size of the circle is such that the momenta are light enough to be considered continuous, while the windings (which are string corrections) are even lighter and have to be added. On the other hand, one could use T-duality and find the solution in terms of D2-branes; then windings are heavier than momenta, as they should be, but the harmonic function must still be resummed. In the picture where the circle is seen as $\mathbb{R}/\mathbb{Z}$, the solution is that of a continuous set of D2-branes filling one of the perpendicular directions. The question now is how we can interpret this transition from the point of view of the Yang-Mills theories. The guides are group theory and the identification of $R_{\text{phys}}(u)$ as the renormalized value of $\mathcal{R}$. The supergravity fields that have discrete momentum in the IIB theory (or winding in the IIA one) clearly correspond to the Kaluza-Klein modes of the SYM<sub>3+1</sub> (or the massive off-diagonal modes in the $U(N\times\infty)$ SYM<sub>2+1</sub>). The relation is the charge under the $U(1)$ symmetry of the circle. Compactifying a Yang-Mills theory on a circle has several consequences: the momenta become discrete, and the component of the gauge field polarized in that direction acquires a periodicity $A^i\sim A^i+\frac{1}{R}$; besides, that field can have a vacuum expectation value (Wilson line). The periodicity affects the momentum conjugate to the field (the electric field in that direction), making it discrete with gap $2\pi R$. The electric field has energy proportional to $\vec{E}^{\,2}$. The renormalization of $\mathcal{R}$ makes this energy spectrum identical to that of the windings around the compact direction (in the IIB 3-brane supergravity). We can be more precise. Let us design an experiment with an observer in the IIA supergravity theory moving in the direction of the circle, that is, one D2-brane, separated from the rest, falling towards the central bundle of $N$ D2-branes. From the point of view of the $U(N+1)$ SYM<sub>2+1</sub>, the movement is described by a time-dependent vacuum expectation value of the scalar that breaks $U(N+1)$ into $U(N)\times U(1)$. This represents the change of the coordinate of the D2-brane as it moves. With the usual identification, that velocity is related, in the $U(N+1)$ SYM<sub>3+1</sub>, to the value of a background electric field. As we have taken our observer to be a D2-brane, its movement is Galilean, and that is why the energy–electric-field dispersion relation is quadratic. With this we learn a bit about the meaning of closed string T-duality for the correspondence. If in ordinary string theory it can be seen as a competition between momenta and windings, here it can also be seen as a competition between two different spectra. In perturbative gauge theories there are two kinds of excitations: the perturbative fields and the solitons that represent changes in the vacuum.
In our case the vacuum is parameterized by the expectation values of the scalars. Usually, unable to do exact computations, one uses the Born-Oppenheimer approximation and considers the solitons as ‘slow modes’ and the perturbative fields as ‘fast modes’. Their dynamics have such different frequencies, or time scales, that they completely decouple, and one can solve them independently. In our case, the D-branes are so heavy that their movements and those of the open strings are independent. We have found that when a Yang-Mills theory (at least of the kind we are studying) is compactified, depending on the value of the energy in terms of the radius and the other parameters ($g_s$ and $N$), both types of modes (polarized in the compact direction) can interchange the role of being the lightest. When the energy is high, the slow modes are very heavy while the fast ones are light; but below the transition point written in (17), the situation is the opposite. To describe the intermediate phase we can either use the $U(N)$ SYM<sub>3+1</sub> with one compact direction, light KK modes, but even lighter electric fields, or (better, we would say) the $U(N\times\infty)$ SYM<sub>2+1</sub> with the gauge symmetry slightly broken by some light off-diagonal modes. Once we have dealt with the radius, let us now see what happens to the couplings. They are
$$g_{IIB}(u_\perp,u_\parallel,\mathcal{R})=g_s^{IIB}\quad\text{and}\quad g_{IIA}(u_\perp,u_\parallel,\mathcal{R})=f^{1/4}g_s^{IIA}=f^{1/4}g_s^{IIB}\mathcal{R}l_s$$ (20)
As the IIB string theory is S-selfdual, we can choose $g_s^{IIB}$ to be smaller than one. However, we cannot prevent $g_{IIA}$ from becoming larger than one in some region of the target space, namely when
$$u<\left(g_s^{IIA}\right)^{5/4}N^{1/4}\mathcal{R}^{-1/4}l_s^{-5/4}.$$ (21)
There, it is necessary to consider corrections from M theory. Another limit is imposed by the variable radius of the sphere. When it is too small, the curvature can exceed the string scale and then it is also necessary to add corrections. This is not different from the usual uncompactified cases, so we shall not discuss it further. In Fig. 1 we show the effect of the Wilson line (the $\theta$ parameter) on the phase space. The line that separates the two phases — the one describable in terms of D3-branes and the other in terms of the continuous set of D2-branes — is not a straight line. The Wilson line extends or reduces each phase depending on its value. More to the left, we could draw a second wavy line, parallel to this one, signalling the appearance of an M2-brane phase where the coupling constant is larger than one. The phase with D2-branes in an open space ($u_\perp<\mathcal{R}$) may or may not exist. It does exist if the M2-brane phase appears precisely when $u_\perp<\mathcal{R}$. The line of phases is, therefore: wrapped D3-brane, continuous set of D2-branes, D2-brane in open space (not always), and M2-brane. The straight line drawn in the figure is the result obtained with the D3-brane neglecting all finite volume effects.

## 5 Wilson loops

In this section we will analyze Wilson loops in theories with a compactified dimension. As in the rest of our work, we will deal with the D$2$- and D$3$-brane systems. We will focus our attention on those features coming from the presence of a compactified dimension. Let us recall how we can introduce static quarks in the gauge theories living on the brane. We begin with $N+1$ branes and break the $U(N+1)$ gauge theory by taking one brane to infinity.
The open strings connecting the separated brane to the others have infinite mass and transform in the fundamental (antifundamental) representation of the gauge group. These are the external quarks. From the worldvolume point of view these sources produce solitonic deformations of the worldvolume, because the $N$ branes are pulled by the infinitely long open strings attached to the separated one. The shape of this deformation is represented, on the supergravity side, by the world-sheet of a string ending on the boundary of the background manifold. Using this prescription we can establish a concrete correspondence between the quark-antiquark potential and the classical action of the string:
$$<W(C)>_{Gauge}=A(L)e^{-TE(L)}=<e^{-S}>_{Sugra}$$ (22)
where $C$ is the Wilson contour and $L$ is the distance from one source to the other. The string action is computed as
$$S=\frac{1}{2\pi\alpha'}\int dxdt\sqrt{\det G_{mn}\partial_aX^m\partial_bX^n}$$ (23)
where $G_{mn}$ is the Euclidean background metric in the loop directions. In order to compute the classical action we should find the string configuration that minimizes the world-sheet area. We are interested in finite size effects due to the compact dimension. We will investigate how to compute the Wilson loop in the D$2$-brane case and compare it with the result obtained in the D$3$-brane theory. There are several situations to be considered, and we will describe the T-duality map for each case. These situations are related to different loop geometries. Suppose a quark-antiquark pair in the D$2$-brane theory separated in a world-volume direction. Remember that in this case the boundary theory has an $SO(6)\times U(1)$ R-symmetry that corresponds to rotational symmetry on the supergravity side. This allows us to take quarks at different points on the $S^5\times S^1$ transverse manifold and introduce an angular difference between the open strings connecting the separated branes and the $N$ central ones. We will begin with the simplest configuration, which corresponds to taking the angular difference to zero. Using the metric in (6) we can see that the classical world-sheet action is given by
$$S=\frac{T}{2\pi\alpha'}\int dx\sqrt{G_{tt}G_{xx}+G_{tt}G_{uu}(\partial_xu)^2}$$ (24)
where we have integrated the temporal variable assuming a static configuration for the loop. We can compute the worldsheet area in (24) using a general result as, for example, in . These general formulas allow us to establish that, if the Nambu-Goto action is written as
$$S=\frac{T}{2\pi\alpha'}\int dx\sqrt{A(u)(\partial_xu)^2+\frac{B(u)}{\lambda^4}}$$ (25)
then the physical magnitudes that we are interested in can be written
$$L=2\lambda^2\sqrt{B(u_0)}\int_{u_0}^{\infty}du\sqrt{\frac{A(u)}{B(u)\left(B(u)-B(u_0)\right)}}$$
$$E=\frac{1}{\alpha'\pi}\int_{u_0}^{\infty}du\left[\sqrt{\frac{A(u)B(u)}{B(u)-B(u_0)}}-\sqrt{A(u)}\right]-\frac{1}{\alpha'\pi}\int_{u_{min}}^{u_0}du\sqrt{A(u)}$$ (26)
where $u_0$ is the point given by $u(x=0)=u_0$, and $u_{min}$ is a geometrical minimal value of $u$ given, for example, by the presence of a singularity in the background manifold. We can now compute the Wilson loop in the D$2$-brane theory.
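Before specializing, the quadrature (26) can be set up once and for all numerically. The sketch below is a hedged illustration (the function name `loop_observables` and the choice $\lambda=\alpha'=1$ are ours, not the paper's); it is then sanity-checked on the conformal $B\sim u^4$ case, where doubling $u_0$ halves $L$ and doubles $-E$, i.e. $E\sim -1/L$:

```python
import numpy as np
from scipy.integrate import quad

def loop_observables(A, B, u0, u_min=0.0, u_max=np.inf):
    """Numerical version of eq. (26) for generic A(u), B(u); the constants
    lambda and alpha' are set to one (they only rescale L and E)."""
    B0 = B(u0)
    # the integrands have an integrable 1/sqrt endpoint singularity at u0;
    # quad copes with it, possibly emitting an accuracy warning
    L = 2 * np.sqrt(B0) * quad(
        lambda u: np.sqrt(A(u) / (B(u) * (B(u) - B0))), u0, u_max)[0]
    E = (1 / np.pi) * quad(
        lambda u: np.sqrt(A(u) * B(u) / (B(u) - B0)) - np.sqrt(A(u)),
        u0, u_max)[0]
    E -= (1 / np.pi) * quad(lambda u: np.sqrt(A(u)), u_min, u0)[0]
    return L, E

A = lambda u: 1.0
for u0 in (1.0, 2.0):
    print(u0, loop_observables(A, lambda u: u**4, u0))
```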
Using the action in (24) and expressing it in terms of the functions in (26), we can write
$$A(u_\perp)=G_{uu}G_{tt}=\alpha'^2$$
$$B(u_\perp,u_\parallel,\mathcal{R})=\lambda^4G_{xx}G_{tt}=\alpha'^2H(u_\perp,u_\parallel,\mathcal{R})^{-1}$$ (27)
where $\lambda^4=d_2g_{YM}^2N$ and $H(u_\perp,u_\parallel,\mathcal{R})$ is the sum over multicenter solutions in the first line of (8). Finally one obtains
$$L=2\lambda^2\int_{u_0}^{\infty}du\,\frac{H(u,u_\parallel,\mathcal{R})}{\sqrt{H(u_0,u_\parallel,\mathcal{R})-H(u,u_\parallel,\mathcal{R})}}$$
$$E_{qq}=\frac{1}{\pi}\int_{u_0}^{\infty}du\,\sqrt{\frac{H(u_0,u_\parallel,\mathcal{R})}{H(u_0,u_\parallel,\mathcal{R})-H(u,u_\parallel,\mathcal{R})}}-\frac{1}{\pi}\int_{u_0}^{\infty}du.$$ (28)
The final goal of our computation is to obtain $E$ in terms of $L$. It does not seem possible to obtain this result analytically, but it is possible to do it numerically. We can compute both integrals in terms of the variable $u_0$ and then extract a numerical representation of $E=E(L)$. We recover the known analytic result for the Wilson loop in the limit $\mathcal{R}\to\infty$, which corresponds to the D$2$-brane on a decompactified background. The complete T-dual map for this quark-antiquark computation is obtained by computing the Wilson loop from the D$3$-brane point of view. The method is exactly the same. In this case we will use the Poisson-resummed expression for the harmonic function, which is given in the second line of (8). We can write it as
$$B(u_\perp)=\lambda^4f^{-1}=\alpha'^2u_\perp^4\,g(\tilde{R},u_\perp,u_\parallel)^{-1}$$ (29)
where now $\lambda^4=d_3Ng_s'$. Using this parameterization of the metric and the usual procedure we can express, as we did for the D$2$-brane, the energy of the Wilson loop and the separation between the quarks in terms of $u_0=u_\perp(x=0)$:
$$L(u_0,\mathcal{R},u_\parallel)=2\lambda^2u_0^2\int_{u_0}^{\infty}\frac{du}{u^4}\,\frac{g(u)}{\sqrt{g(u_0)-g(u)\left(\frac{u_0}{u}\right)^4}}$$
$$E_{qq}(u_0,\mathcal{R},u_\parallel)=\frac{1}{\pi}\int_{u_0}^{\infty}du\,\sqrt{\frac{g(u_0)}{g(u_0)-g(u)\left(\frac{u_0}{u}\right)^4}}-\frac{1}{\pi}\int_{u_0}^{\infty}du$$ (30)
where the last integral eliminates the usual divergence coming from the quark masses. We can recover the standard D$3$-brane result by simply taking the radius parameter $\tilde{R}$ to infinity, thus reducing the complete series in $g(\tilde{R},u_\perp,u_\parallel)$ to its zero mode. The first evidence that comes out directly from the results in (28) and (30) is that both solutions are exactly the same, but expressed in T-dual variables. This fact simply reflects that, for this simple configuration of the Wilson loop, T-duality does not affect any parameter defining the system. More concretely, what we have seen is that the effective metric used to compute the string worldsheet area remains unchanged under Buscher's transformation rules. Consequently, they give the same result for the quark-antiquark potentials in the D$2$- and D$3$-brane theories. In Fig. 2 we have plotted our result. We have normalized the variables in the integrals, working with dimensionless parameters. The choice has been such that the results can be written in terms of $\mathcal{E}=E_{qq}\tilde{R}$ as a function of $\mathcal{L}=L\tilde{R}^{-1}$. Assuming the usual functional form
$$\mathcal{E}\sim\mathcal{L}^{a(\mathcal{L})}$$ (31)
we can use the variable $a(\mathcal{L})$ to see how the quark-antiquark potential goes from the unwrapped D$3$-brane behavior, $a(\mathcal{L})=-1$, to that expected for the D$2$-brane on a decompactified background, where $a(\mathcal{L})=-2/3$ . The complete result is shown in Fig. 2.
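The two limiting exponents can be checked with the quadrature sketch given above (again a hedged illustration, reusing `loop_observables` with all couplings set to one: $B\propto u^4$ for the unwrapped D3-brane and $B\propto u^5$ for the decompactified D2-brane):

```python
# reusing loop_observables and A from the previous sketch
for name, B in (("D3", lambda u: u**4), ("D2", lambda u: u**5)):
    (L1, E1), (L2, E2) = (loop_observables(A, B, u0) for u0 in (1.0, 2.0))
    a = np.log(E2 / E1) / np.log(L2 / L1)
    print(name, " a =", round(a, 3))      # -> -1 and -2/3
```

Since $E\propto u_0$ in both cases while $L\propto u_0^{-1}$ (D3) or $L\propto u_0^{-3/2}$ (D2), the extracted slopes are exactly the $-1$ and $-2/3$ quoted in the text; the compactified case interpolates between them.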
We see that when the distance between the quarks is large compared with the compactification radius, $L\gg\tilde{R}$, the system feels a small compact dimension, so the propagator of the gauge fields going from one quark to the other does not include any KK mode. In this case we approach the T-dual system, that is, the D$2$-brane. When the radius is large, or the quarks are very close to each other, $\tilde{R}\gg L$, the KK modes are very light and all of them must be taken into account. Then the system approaches the unwrapped D$3$-brane one. Let us now explore the other loop geometry we are interested in. Apart from the spatial separation between the quarks, one can consider that they have a different ‘flavour’, that is, that, taken as vectors in the R-symmetry space $S^1\times S^5$, the quarks point in different directions. This affects the Wilson loop and, therefore, the potential. We will concentrate on phase differences on the circle, which is the new parameter here. The angular difference on the sphere does not play any role in the T-duality map. The way to calculate in this case is the same as before, but adding some new terms. Again, we have to compute the area of the world-sheet, which is given by the Nambu-Goto action. The configuration is such that
$$X^i(x,\tau)=xL$$
$$\theta(1,\tau)-\theta(0,\tau)=\Delta\theta$$ (32)
$$T=\tau$$ (33)
and the configuration is static, so nothing depends on $\tau$. The form of the world-sheet in the target space is the same as before, except that the spatial sides of the rectangle are not exactly in the $i^{\text{th}}$ direction but in a general direction in the $(X^i,\theta)$ plane. The action is given by
$$S=\frac{1}{2\pi\alpha'}\int dx\sqrt{G_{xx}G_{\tau\tau}+G_{uu}G_{\tau\tau}(\partial_xu)^2+G_{\tau\tau}G_{\theta\theta}(\partial_x\theta)^2}$$ (34)
We would like to compare this expression with its T-dual. In order to do that, we first have to look for a configuration that is the T-dual of this one. It is important to notice that the magnitude dual to an angle on one circle is not another angle on the dual circle, because that would mean that movements, and therefore momenta, on both circles would be related by the duality, which is not true. A string stretched between two points placed a distance apart on the circle of the D2-brane solution has an energy related to its fractional winding number; so we expect the T-dual configuration to carry some fractional momentum in the IIB supergravity. From the gauge theory point of view, the R-symmetry separation of the quarks on the D$2$-brane should be represented by a pair of quarks, living on the T-dual D$3$-brane, moving with different momenta in the compact direction. The main difference in the string worldsheet that describes the Wilson loop in supergravity is that the compact scalar must carry momentum, so instead of the static configuration we should have
$$X^i(x,\tau)=xL$$
$$\psi(x,T_{\text{max}})-\psi(x,0)=\Delta\psi$$ (35)
$$T=\tau$$ (36)
Now the action is
$$S=\frac{1}{2\pi\alpha'}\int dxd\tau\sqrt{G_{xx}G_{\tau\tau}+G_{uu}G_{\tau\tau}(\partial_xu)^2+G_{\tau\tau}G_{\psi\psi}(\partial_\tau\psi)^2+G_{uu}G_{xx}(\partial_xu)^2(\partial_\tau\psi)^2}$$ (37)
Both actions are quite different. One integrates fields in one dimension and the other in two; besides, the second has a term which is quartic in the velocities. Here we will show that both actions describe the dynamics of T-dual systems. The procedure we will use assumes that, similarly to what happens in the string sigma model , these two actions are related by a canonical transformation.
The Hamiltonians of the systems are readily computed to give
$$H_{D2}=\sqrt{G_{xx}G_{\tau\tau}+\frac{G_{xx}}{G_{\theta\theta}}P_\theta^2+\frac{G_{xx}}{G_{uu}}P_u^2}$$ (38)
and
$$H_{D3}=\sqrt{G_{xx}G_{\tau\tau}-G_{xx}G_{\psi\psi}(\partial_\tau\psi)^2+\frac{G_{xx}}{G_{uu}}P_u^2}$$ (39)
Now we simply see that the canonical transformation should be
$$P_\theta=i\,\partial_\tau\psi$$ (40)
with no change in the remaining coordinates and momenta. There is a small difference between the transformation in (40) and those in ; here we have simply adapted the rule in  to the Euclidean case. Imposing the equality of the Hamiltonians $H_{D2}$ and $H_{D3}$, we recover Buscher's transformation rule for the metric,
$$G_{\psi\psi}=\frac{1}{G_{\theta\theta}}$$ (41)
which finally shows that the worldsheet configurations we described above are T-dual.

## 6 Conclusions

In this work we have studied the dynamics of D-branes sitting on backgrounds with toroidally compactified dimensions. Concretely, we focused our analysis on the D$3$- and D$2$-branes with one compactified dimension. All our results are straightforwardly extensible to more general situations. We worked in the framework of the Maldacena duality, trying to clarify how T-duality enters the holographic conjecture. Our principal interest has been the analysis of finite size effects on the dynamics of the systems. We studied their influence on the dynamics of the brane and then, by T-duality, we showed how the corresponding corrections appear in the dual system. The analysis started with a detailed study of the background geometry of a D$2$-brane with a compactified transverse direction. In order to describe the effect of the compact circle we used the multicentered solution of IIA supergravity. We explained precisely how the near horizon limit has to be taken. The background is described by a series of harmonic-function terms, eq. (8), expressing the presence of an infinite tower of winding modes. We found a transition point, $u_\perp\sim\mathcal{R}$, beyond which the accuracy of any truncation of the series becomes ever worse. Physically, this means that at this point finite size effects become very important. It is then preferable to use a Poisson resummation formula for the series. This does not mean that we are making a T-duality transformation: in fact the system now behaves as a continuous distribution of branes along the compact direction, still staying in type IIA string theory. We are, in some sense, forced to make use of T-duality and go to the three-brane system when the physical radius, $R_{\text{phys}}$, is smaller than the string scale. We showed that this point is always reached at values of $u_\perp$ larger than $\mathcal{R}$, so the system has already passed through the continuous-distribution phase described above. On the other hand, we see that at these energies finite size effects are irrelevant in the T-dual system. In sections 3 and 4 we presented what can be learned about the gauge theory from supergravity. We know that the coordinate dependence of the expectation values of background fields corresponds to the renormalization group flow of the corresponding quantities in the gauge theory. We saw that in our cases the renormalization results coming from supergravity appear in a mass-dependent renormalization scheme. This is due to the fact that, in order to obtain information about finite size effects, we should include all the infrared degrees of freedom of the theory.
Finally we showed that the angular separation of a test D$2$-brane from the $N$-brane source is mapped onto the D$3$-brane system as a Wilson line. In this case the test object sees a constant gauge field on the source. The AdS/SYM correspondence allows the description of the strong 't Hooft coupling regime of the gauge theories when the string coupling is small. In the case of non-conformal brane configurations this establishes some limits on the region of phase space describable in terms of supergravity. In our work we dealt with two non-conformal systems. In the case of the D$2$-brane the renormalization shows up in the running of the string coupling and of the physical radius of the compact dimension. The latter reflects the change of the masses of the winding modes. The D$3$-brane case is more subtle. In order to see the running of the coupling one cannot use the ten-dimensional string coupling; one must use the coupling of the T-dual D$2$-brane. The explanation of this requirement can be seen from pure gauge theory arguments. When we compactify one of the three-brane coordinates and express our fields in terms of KK modes in lower dimensions, we really deal with the D$2$-brane, whose coupling constant is the T-dual of the initial one. Another way to see the non-conformal nature of the wrapped three-brane, using fields in three dimensions, is to look at the energy dependence of their masses. Finally we studied the dynamics of Wilson loops in these systems. We presented two possible configurations of the loop. They correspond to two different geometries of the string worldsheet that describes the quark-antiquark interaction in supergravity. The simplest one describes a pair of quarks at a distance $L$ in a noncompact worldvolume direction and with the same position in the angular directions. This configuration corresponds to a static string worldsheet. We studied the evolution of the properties of the system in terms of the supergravity parameters. We saw how the system goes from a pure D$2$-brane loop to the three-brane one. Our computation allowed us to study how the presence of a compact direction affects the system. The other quark-antiquark configuration we considered includes an angular separation, on the circle, between the infinite strings describing the static quarks of the two-brane. In this case we showed that T-duality converts this static system into a time-dependent one on the three-brane. Concretely, it corresponds to quarks moving along the compact worldvolume direction with different momenta. We constructed this configuration from physical arguments and finally showed that it is exactly the same as the initial one, but expressed in canonically transformed variables.

## 7 Acknowledgments

We are very grateful to M. Suárez, M.A.R. Osorio, J. F. Barbón, P. Silva, A. Nieto and V. Di Clemente for enlightening conversations. The work of M.L.M. is supported by an M.E.C. grant under the FP97 project.
# Strong, Ultra-narrow Peaks of Longitudinal and Hall Resistances in the Regime of Breakdown of the Quantum Hall Effect

## Abstract

With unusually slow and high-resolution sweeps of magnetic field, strong, ultra-narrow (width down to $100\mu\mathrm{T}$) resistance peaks are observed in the regime of breakdown of the quantum Hall effect. The peaks depend on the directions and even the history of the magnetic field sweeps, indicating the involvement of a very slow physical process. Such a process and the sharp peaks are, however, not predicted by existing theories. We also find a clear connection between the resistance peaks and nuclear spin polarization.

The integer quantum Hall effect (QHE) is a most remarkable phenomenon of the two-dimensional electron system (2DES), in which the Hall resistance is quantized to $h/ie^2$ while the longitudinal resistance nearly vanishes ($h$ is Planck's constant, $e$ the electron charge, and $i$ an integer). To employ the QHE for the resistance standard, it is desirable to apply a high current through a Hall bar. However, it was discovered early on that the QHE breaks down if the current reaches a critical value, $I_c$. Extensive investigations were thereafter performed to study the origin of the breakdown. So far, most studies have focused on factors that influence the critical current, in particular around even filling factors. A number of models have been proposed, such as inter-Landau-level scattering and the superheating process. However, the exact mechanism responsible for the breakdown is still under debate. Here, we report on measurements of the differential longitudinal and Hall resistances $R_{xx}$ and $R_{xy}$ (the derivative of voltage with respect to the total applied current) at high injected currents close to $I_c$. With unusually slow, high-resolution sweeps of the magnetic field $B$, ultra-narrow $R_{xx}$ peaks (width down to $100\mu\mathrm{T}$) are observed. The peak values exceed the resistances at the surrounding magnetic fields by a factor of 36. While no substantial change in $R_{xy}$ is noticed around the odd filling factor $\nu=3$, strong, sharp peaks also appear on the $R_{xy}$ curves for $\nu=2$ and 4. We find the peaks to be sensitively dependent on the directions and even the history of the $B$ sweeps. This indicates that a physical process with a very large time constant is involved, orders of magnitude longer than what may be predicted by the existing models for the QHE breakdown. While many disordered electronic systems have recently been found to exhibit very slow relaxations, to our knowledge the unusually slow physical process reported here has never been observed in the integer QHE regime. Interestingly, some aspects of our experimental observations are similar to the anomalous resistance peaks discovered in the fractional QHE regime, while other aspects are clearly different. We will also show that the sharp resistance peaks are influenced by nuclear spin flips. Furthermore, we present a model which qualitatively explains the different aspects of our observations. We use two GaAs/AlGaAs modulation-doped heterostructures (wafer I and wafer II) with carrier densities of $n_s=3.7$ and $3.5\times 10^{15}\mathrm{m}^{-2}$ and mobilities of $\mu=59$ and $130\mathrm{m}^2/\mathrm{Vs}$ at 0.3 K, respectively.
A modulation-doped $\mathrm{In}_{0.75}\mathrm{Ga}_{0.25}$As/InP structure ($n_s=2.8\times 10^{15}\mathrm{m}^{-2}$, $\mu=22\mathrm{m}^2/\mathrm{Vs}$) is also studied. For all these wafers, $I_c$ is found to scale linearly with the device width. The experiments are performed in a $^3$He refrigerator at 0.3 K. Hall devices with different widths (from 43 to $200\mu\mathrm{m}$) and different geometries are investigated using a standard lock-in technique at a frequency of 17 Hz. Together with a 5 nA ac current, large dc currents, $I_{dc}$, are sent through the sample to drive the 2DES close to the regime of breakdown of the QHE. Qualitatively similar behavior is observed in all the samples fabricated from the different material systems. We report here on measurements performed on a Hall bar made from wafer I. The inset of Fig. 1(a) shows the curves of the differential resistances $R_{xx}$ and $R_{xy}$ as a function of $B$ around $\nu=3$ and at a dc current close to, but below, the critical current $I_c=11\mu\mathrm{A}$. The Hall bar has a width of $43\mu\mathrm{m}$ and five pairs of voltage probes, as is schematically shown in the inset of Fig. 1(b). The $B$ sweep is at a "normal" speed of 0.14 T/min and the curves are "as expected", i.e. $R_{xx}$ nearly vanishes and $R_{xy}=h/3e^2$ within a $B$ range (the dissipationless regime) that is narrower than that at $I_{dc}=0$. However, on reducing the sweep speed and increasing the magnetic field resolution, the two $R_{xx}$ peaks at the left and right edges of the dissipationless regime become successively higher and narrower. Furthermore, the curves of the upward and downward sweeps become increasingly different. Figure 1(a) shows the differential resistances around the left edge of the dissipationless regime at a sweep speed of $0.13\mathrm{mT}/\mathrm{min}$ and a sweep step of 0.000015 T ($15\mu\mathrm{T}$), which is the resolution of our magnet system. The arrows on the curves indicate the sweep directions. While the downward sweeps show only very small changes in $R_{xy}$, strong peaks are observed on the $R_{xx}$ curve. The narrower peak has a full width at half maximum (FWHM) of only $100\mu\mathrm{T}$. The resistance value at the peak is almost four times as high as the value at the Hall plateau and about 36 times higher than the $R_{xx}$ value on the lower $B$ side. For lower magnetic fields, $R_{xx}$ is found to remain virtually constant. When sweeping upwards from 5.03 T, however, $R_{xx}$ remains at this constant value (no peak structures) until it suddenly drops to zero at about 5.048 T. The behavior is thus totally different from the hysteresis effect of the breakdown of the QHE, where only a shift in the magnetic field position is observed. We have simultaneously measured $R_{xx}$ using different segments of the Hall bar. Figure 1(b) shows the results of a downward sweep within 2 mT, using probes 1 and 2, 2 and 3, and 4 and 5. We obtain almost identical strong, narrow resistance peaks from the different parts of the Hall bar. For instance, it can be seen in Fig. 1(b) that the peak on the higher field side has a fine structure, which can be seen on all three traces. This rules out the possibility that our observations are due to local breakdown induced by inhomogeneities of the sample. Further studies of the fine structure, however, require a magnet system with a better resolution.
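As a quick, hedged consistency check (using only the quoted density of wafer I, not data from the paper), the fields at which these features appear convert to filling factors via $\nu=n_sh/(eB)$:

```python
# filling factors from the quoted fields and the density of wafer I
h, e = 6.62607e-34, 1.60218e-19
n_s = 3.7e15                       # m^-2

for B in (5.04, 5.048, 5.165):     # fields around the nu = 3 plateau
    print(f"B = {B:6.3f} T  ->  nu = {n_s * h / (e * B):.2f}")
```

The values land within a few percent of $\nu=3$, as expected for the edges of the dissipationless regime discussed here.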
The behavior of $R_{xx}$ and $R_{xy}$ at the higher-$B$ edge of the dissipationless regime is very similar to that shown in Fig. 1. There, an upward sweep results in sharp $R_{xx}$ peaks, while a downward sweep shows only a sudden drop to zero. If $I_{dc}$ is decreased, the height of the $R_{xx}$ peaks is reduced, while their width increases. Furthermore, there is less difference between the $R_{xx}$ curves of upward and downward sweeps. Figure 2 shows the $R_{xx}$ traces obtained from different segments of the Hall bar at a lower current, $I_{dc}=9.5\mu$A. The $B$ range corresponds to the right edge of the dissipationless regime around $\nu=3$. Five successive sweeps [Figs. (a)–(e)] are made back and forth between $5.160$ T and $5.175$ T at a speed of 0.3 mT/min. The curves are plotted only in the range between 5.1625 T and 5.1690 T for clarity. The change in the peak position of about 3 mT with sweep direction is most likely due to hysteresis of the magnet system. Although each sweep takes about one hour, the peak structure changes only gradually, indicating the involvement of a very slow physical process. We have noticed the following points. First, curves obtained in the same sweep direction, such as Figs. (a), (c), and (e) or Figs. (b) and (d), are similar. Second, the greater the number of sweeps made, the less the difference between the $R_{xx}$ curves of upward and downward sweeps. This can already be seen from the increased similarity between Figs. (d) and (e), and is even clearer in later sweeps (not shown here). Third, an increasing number of peaks and fine structures is obtained as more sweeps are made. This rules out any trivial heating effects, as heating is expected to smear out fine structures. Although strong peaks are observed on the $R_{xx}$ curves, the Hall resistance around $\nu=3$ shows only small changes, as can be seen in Fig. 1(a). The behavior of $R_{xy}$ around the even filling factors $\nu=2$ and 4 is, however, totally different. This suggests that the phenomenon is connected with the spin of the 2DES. Figure 3 shows three $R_{xx}$ traces taken from different segments of the Hall bar and one $R_{xy}$ curve around $\nu=2$. The dc current is $24\mu$A, which is about $I_c/2$ at this filling factor. The $B$ range is centered at the right edge of the dissipationless regime. In contrast to the results for odd filling factors [see Fig. 1(a)], an equally strong, narrow peak (FWHM below 3 mT) forms on the $R_{xy}$ trace as on the $R_{xx}$ traces. The peak value is more than five times higher than the Hall plateau value $h/2e^2$. It can be observed that $R_{xx}$ becomes negative on the higher-$B$ side of the peaks in Fig. 3. A dc measurement of the longitudinal resistance is shown in the inset. Obviously, dc resistances can be quite different from differential resistances in the nonlinear regime. This is the reason why no anomalous behavior is observed in the dc measurement at $8.1354$ T, where sharp peaks form on the differential resistance curves. In fact, we do not see any unusual behavior of the dc resistance at other $B$ values either. The general features reported here are observed at all filling factors at sufficiently high magnetic fields and in all the Hall bars and wafers studied. Thus, the above phenomena seem to be generic to the 2DES. The fine structures are, however, very difficult to reproduce fully in different samples.
This is, at least in part, due to the fact that the fine structures are extremely sensitive to the exact $I_{dc}$ used, the sweep speed, the starting point of the sweeps, the history, etc. While many disordered electronic systems are characterized by very slow relaxations, to our knowledge the unusually slow physical process described above has never been observed in the integer QHE regime. It is orders of magnitude slower than the time scale of the instabilities in the regime of the QHE breakdown . The existing models for the breakdown of the QHE, such as inter-Landau-level scattering and electron superheating, do not predict any physical process with a time constant larger than microseconds. Interestingly, we have noticed that certain aspects of our observations, such as the long time constant, the strong $R_{xx}$ peaks, and the current dependence, are similar to the recently discovered anomalous resistance peaks in the fractional QHE regime at $\nu=\frac{2}{3}$ and $\frac{3}{5}$. However, other aspects are different, such as the existence of fine structures, the strong $R_{xy}$ peaks, the much sharper peaks (more than three orders of magnitude narrower), etc. Very recently, the peaks observed in Ref.  were found to be influenced by the nuclear spin polarization. We have also performed nuclear magnetic resonance (NMR) experiments on $^{75}$As, $^{69}$Ga, and $^{71}$Ga. A typical result for $^{75}$As is shown by the lower inset in Fig. 1(b). The splitting of the line is, however, threefold, which is different from the fourfold splitting observed in Ref. . Furthermore, we observe resonance peaks rather than dips as in Ref. . While the above NMR response is strong in the GaAs/AlGaAs samples, so far no clear observation has been obtained in our InGaAs/InP samples. One reason might be the comparatively low mobility of those samples. In the following, we present a model which qualitatively explains the different aspects of our observations. In the $B$ range of a dissipationless regime, the bulk of the Hall bar is actually insulating. In the single-particle picture, if $B$ is sufficiently high, each Landau level is split into two well-separated, spin-polarized levels with a degeneracy proportional to $B$. Therefore, a change in $B$ will induce a redistribution of the electrons in the bulk of the Hall bar (denoted "bulk electrons") among the Landau levels, i.e. some electrons need to have their energies changed and their spins flipped in order to achieve equilibrium. However, as the bulk electrons have no effective interaction with either the electrons at the edge or the electron reservoirs (the ohmic contacts) in the dissipationless regime, the scatterings required to flip the spins and change the energies are virtually absent. The redistribution among the single-particle Landau levels is thus not possible. This means that the bulk electrons can be far from the "normal equilibrium" (the equilibrated distribution among the single-particle Landau levels) inside the dissipationless regime if $B$ is changed. To the best of our knowledge, no study has been carried out on how these electrons redistribute in energy and spin space in such a "nonequilibrium" situation. As it is not possible for the bulk electrons to redistribute among the single-particle Landau levels, effects such as electron-electron interactions must take place.
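To get a feeling for the numbers involved in this picture (an illustrative estimate of ours, not a result from the measurements): since the degeneracy of each spin-split Landau level is $eB/h$ per unit area, a field change $\Delta B$ would require a redistribution of $\Delta n=e\Delta B/h$ electrons per unit area to restore equilibrium:

```python
# electrons per unit area that would need to be redistributed among the
# spin-split levels when B changes: Delta n = e * Delta B / h
h, e = 6.62607e-34, 1.60218e-19
n_s = 3.7e15                                   # m^-2, wafer I

for dB in (15e-6, 100e-6, 1e-3):               # sweep step, peak width, 1 mT
    dn = e * dB / h
    print(f"Delta B = {dB*1e6:7.1f} uT -> Delta n = {dn:.2e} m^-2"
          f"  ({dn/n_s:.1e} of n_s)")
```

Even a single 15 $\mu$T sweep step corresponds to only $\sim 10^{-6}$ of $n_s$, so a very weak bulk-edge coupling is indeed enough to leave the bulk stranded far from equilibrium over long times.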
We speculate that the real distribution maintains some order, which means that the electrons might rearrange to form "mini-gaps" and "mini-bands" in the energy and spin distribution. When the 2DES starts to enter the dissipative regime, where the bulk-edge interactions are still considerably weak, we expect the electrons in the mini-bands to be affected. Each time a mini-band starts to participate in the scattering process, a differential resistance peak is observed. In this picture, the multiple resistance peaks and the fine structure reflect the mini-band structure of the nonequilibrium distribution of the bulk electrons. One may speculate that a similar nonequilibrium distribution also forms in the fractional QHE regime, which might likewise give rise to resistance peaks. If a strong current is applied to the Hall bar, the large Hall electric field will substantially enhance the interaction between the electrons at the edge and those in the bulk, and therefore give rise to much stronger and sharper resistance peaks, in agreement with our experimental observations. Note that the ranges of $B$ in which the resistance peaks and fine structures are observed lie only slightly outside the dissipationless regime. Therefore, the scattering between electrons in the bulk and electrons at the edge is expected to be rather weak. In addition, since the bulk area of a Hall bar is fairly large, the time constant of the equilibration can be very long, which explains the slow physical process indicated especially in Fig. 2. The details of the distribution of the nonequilibrium electrons, and thereby of the mini-bands, depend on the initial $B$ position, the sweep direction, and the sweep speed. This thus explains the observed strong dependence of the resistance peaks and fine structures on the experimental history. The observed NMR resistance peaks shown in the inset of Fig. 1(b) also support our model. Via the hyperfine interaction, an electron spin can flip with a simultaneous flop of a nuclear spin, which can be induced by, for example, applying NMR rf signals. Because in our model the lack of electron spin-flip scattering is the reason for the nonequilibrium distribution of the bulk electrons, the additional electron spin-flip scattering induced by the NMR signals will reduce the degree of nonequilibrium. This leads to an increased scattering probability from edge to bulk, which is detected as an increase of the resistance, as shown in the lower inset of Fig. 1(b). The threefold splitting is most likely caused by the electric quadrupole interaction, which is possible in our sample, where large electric field gradients are expected. This connection to nuclear spins is in line with earlier observations of the importance of nuclear spin polarization in experiments on the 2DES. Dynamical nuclear polarization has been observed as Overhauser shifts in electrically detected spin resonance experiments and, e.g., in the time dependence of current-voltage characteristics in transport experiments in which spin-polarized electrons were injected . Also, our results, although obtained in a different physical regime, are similar to the recent findings in Ref. 20. This may imply that a similar scattering mechanism is involved in the two different regimes. To conclude, unexpectedly strong, ultra-narrow resistance peaks and fine structures have been observed in the regime of breakdown of the QHE. The studies reveal the involvement of a very slow physical process, which is not predicted by existing models.
We also show a clear connection between the sharp peaks and the nuclear spin polarization. Furthermore, we have presented a model that emphasizes the important role of the nonequilibrium distribution of bulk electrons and qualitatively explains the observed phenomena. We acknowledge useful discussions with H. Q. Xu and technical support by A. Svensson and H. Persson. This work was supported by the Swedish Natural Science Research Council.
## Acknowledgement

I am very much indebted to Prof. I. Shapiro and Prof. R. Ramos for helpful discussions on the subject of this paper. Financial support from CNPq and FAPESP is gratefully acknowledged.
# Problems encountered in the Hipparcos variable stars analysis

## 1. Introduction

The variable star analysis of the Hipparcos photometric data was an iterative process in interaction with the FAST and NDAC data reduction consortia. We started with data extracts from FAST, then from NDAC, and finally worked on the whole merged data set, that is, on the 118 204 time series. The variable star results are a beautiful by-product of the mission, and it was clear that they had to be published at the same time as the astrometric results (ESA 1997). The time available for the photometric analysis was therefore short in order to meet the deadlines, and the teams were put under strong pressure. The approach was thus to produce a robust analysis, restricted to statistically well-confirmed variables, leaving suspected variables and ambiguous cases for further analysis. Because several instrumental problems were identified and solved, the variable star study definitely improved the overall quality of the Hipparcos photometry now available on the CD-ROMs.

## 2. Hipparcos main-mission photometry

Although the telescope diameter is small (29 cm), Hipparcos achieved a high photometric precision in the wide $Hp$ band (335 to 895 nm), thanks to the chosen time allocation strategy and to the frequent on-orbit photometric calibrations, making use of a large set of standard stars. The time allocated to a star observation was adapted to its magnitude in order to homogenize the astrometric precision. The photometric reduction was made in time slices of 10 hours, called reduced great circles (RGC). During that time interval, the satellite scanned a closed strip on the sky, measuring about 2600 stars, among them 600 standard stars. The FAST and NDAC consortia reduced the photometry independently. They had to map the time evolution of the spatial and chromatic response of the detection chains for both fields of view (FOV), the preceding and the following. In Fig. 1, a zero-point problem between the FOV magnitude scales is shown, before and after its correction. The light of the star was modulated by a grid for astrometric purposes. The transmitted signal was modeled by a Fourier series with 5 parameters. From this model two estimates of the intensity were made: the first, measuring the integrated signal (the "DC mode"), was robust to duplicity but more dependent on the background; the second, measuring the amplitude of the modulation (the "AC mode"), was sensitive to duplicity but not to the background (cf. van Leeuwen et al. 1997). These two estimates and their accuracies are given in the Epoch Photometry Annex (CD-ROM 2) and in the Epoch Photometry Annex Extension (CD-ROM 3).

## 3. Noise and magnitude

The precision of the magnitude is a function of the magnitude itself. In addition, for a star of constant magnitude, the errors may be variable. The data are therefore heteroscedastic. The correlation between the error and the magnitude may generate some problems. For instance, the weighted mean cannot be used to estimate the central value of the magnitude distribution if the amplitude is large, as for the Miras. The global loss of precision as the satellite aged (Eyer & Grenon 2000) also needs to be considered. The usual period search algorithms are also sensitive to the inhomogeneity of the data.

### 3.1. Quoted transit errors

During a single transit, a star was measured 9 times on average; the transit error $\sigma_{Hp}$ was derived in a first approximation from the spread of these measurements.
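As a hedged illustration of this first-approximation estimate (the helper below is hypothetical and not the consortia's actual estimator), the per-transit error can be taken as the standard error of the mean of the roughly 9 within-transit samples:

```python
import numpy as np

def transit_error(samples):
    """First-approximation transit error: standard error of the mean of the
    ~9 within-transit measurements (hypothetical helper, for illustration)."""
    samples = np.asarray(samples, dtype=float)
    return samples.std(ddof=1) / np.sqrt(samples.size)

# nine simulated sub-samples of one transit, in magnitudes
rng = np.random.default_rng(1)
transit = 8.50 + 0.01 * rng.standard_normal(9)
print(f"Hp = {transit.mean():.4f} +/- {transit_error(transit):.4f} mag")
```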
This estimation, however, did not include offsets which might have affected a whole transit, e.g., due to a mispointing or to the superposition of a star from the other FOV. In our first analysis, an empirical law was determined to correct for the underestimation of the transit errors, otherwise the number of candidate variable stars did not appear credible. During the phase of data merging, ad-hoc corrections were computed (Evans 1995). The errors were studied with different methods, comparing first the “average” error estimated from the $`\sigma _{Hp}`$ with the dispersion of the measurements in $`Hp`$. Another study by Eyer & Genton (1999) examined the quoted errors using variograms; it showed a good general agreement with the Evans results, with some mild underestimations at faint magnitudes and some mild overestimations at bright magnitudes in Evans’ approach.

### 3.2. Time sampling

The time sampling was determined by the satellite rotation speed and by the scanning law, which were optimized to reach the most uniform astrometric precision over the whole celestial sphere. The total number of transits per star is mainly a function of the ecliptic latitude. The time intervals between successive transits are 20-108-20-etc… minutes. The transits form groups which are separated by about one month, but the number of consecutive measurements, as well as the time separation between groups of transits, can vary strongly from one star to another.

### 3.3. Chromatic aging

The irradiation by cosmic particles reduced the optical transmission with time. This aging was chromatic; it was worse than expected because the satellite had to cross the two van Allen belts twice per orbit. Furthermore, the satellite was operational during a maximum of solar activity. For instance, the magnitude loss over 3.3 years was 0.8 mag for the bluest stars and only 0.15 mag for the reddest. The aging of the image dissector tubes was not uniform and differed between the two FOVs; therefore the aging corrections had to be calibrated as functions of the star location on the grid, for each FOV.

#### Magnitude trends:

An odd effect of the chromatic aging was the production of magnitude trends in the $`Hp`$ time series. As the transmission loss was colour dependent, the magnitude correction had to be a function of the star colour. A colour index, monotonically growing with the effective wavelength of the $`Hp`$ band, had to be evaluated from heterogeneous sources. The precision of the equivalent $`V-I`$ was highly variable. For stars with a “bad” $`V-I`$ colour, the magnitude correction was erroneous and produced a trend. A colour bluer than the true one generates a spurious increase of the luminosity with time. An example of a trend is given in Fig. 2.

#### Selection of trends:

Stars like Be stars may also show quasi-linear trends over the mission duration. The identification of spurious trends was iterative. LPVs, showing Gaussian residuals when modeled with a trend on top of their semi-periodic light-curve, were sorted first. But there was much more diversity in the data showing trends, true or spurious. So we used an Abbe test (Eyer & Grenon 2000) for a global detection. Stars with an Abbe test close to 1, or with a large trend, or with very long periods, were flagged. Stars with possible envelopes were not retained. After visual inspection of the time series by Grenon, the number of stars selected by these different procedures was 2412.

#### Correction of the star colour:

The amplitude of the magnitude drift was used to correct the star colour. Indeed, if a time series shows a trend $`\alpha `$ which may be imputed to an incorrect initial colour, there is a possibility to recover the true star colour by the relation:

$$(V-I)_{new}=(V-I)_{old}-14290\,\alpha $$

where $`\alpha `$ is expressed in magnitudes per day. Every selected case was investigated to decide whether the trend could be a consequence of an incorrect colour; 965 $`V-I`$ indices were corrected this way with certainty, and the origin of the errors on the colours was traced back.
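A minimal sketch of the two diagnostics used above, the standard Abbe statistic and the colour-correction relation (function names and example numbers are ours):

```python
import numpy as np

def abbe(x):
    """Abbe statistic: mean squared successive difference over twice the
    variance. Expectation is ~1 for purely uncorrelated noise; strong
    deviations flag correlated behaviour such as smooth trends."""
    x = np.asarray(x, dtype=float)
    return np.sum(np.diff(x) ** 2) / (2.0 * np.sum((x - x.mean()) ** 2))

def corrected_colour(vi_old, alpha):
    """(V-I)_new = (V-I)_old - 14290 * alpha, with alpha in mag/day."""
    return vi_old - 14290.0 * alpha

# Example: a trend of 2e-5 mag/day shifts V-I by ~0.29 mag
print(corrected_colour(1.20, 2.0e-5))
```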
## 4. Outlying values

When studying variable stars, outlying values and anomalous data distributions are of great interest. In particular, it is important to distinguish outliers of instrumental origin from those due to stellar physical phenomena. Some stars show luminosity changes on very short time scales. For Algol eclipsing binaries, the duration of the eclipse is short with respect to the period. With non-continuous time sampling, eclipses may appear as isolated low-luminosity points. UV Ceti stars show strong bursts in the U band on very short time scales. However, because of the width of the $`Hp`$ band, the photospheric flux in the redder part of the band largely dominates that of the burst, with the result that no burst was detected with certainty in the M dwarfs. In Fig. 3 we present two cases of outlying values of instrumental origin.

### 4.1. Instrumental outliers: Mispointing effect

The pointing precision of the satellite was normally better than 1 arcsec. However, after Earth or Moon eclipses, and especially near the end of the mission when most gyroscopes were faulty, the problems of mispointing were more acute. The radius of the photocathode was 15 arcsec, with a lower sensitivity towards the edge. An inaccurate pointing induced a loss of counted photons, leading to dimmer points in the time series. The problems with extended objects were even worse, depending on their sizes. A similar situation happened with visual double systems when the separation was around 10 arcsec. In this case the target was either the primary or the photocenter of the system. From time to time the companion was on the edge of, or outside, the FOV, diminishing the amount of collected light. Even when the two components were measured alternately, the star not being measured might sometimes have entered the FOV, producing a luminosity excess mimicking a burst.

### 4.2. Instrumental outliers: Light pollution

A neighbouring star could contaminate the observed star, although most of the identified cases were rejected during the Input Catalogue compilation. The perturbing star was possibly a real neighbour or, more often, a star belonging to the other FOV. Several configurations are possible:

* A star from the other FOV was added to the measured star. This is called a superposition effect. Stars in the Galactic plane were more often perturbed because of the higher star density.
* The perturbing star was very bright and caused scattered light in the detection chain. This veiling glare could be felt even if the disturbing star was further than 15 arcsec away.
* When the separation was greater than 15 arcsec, the two effects of pollution and mispointing could produce high flux values for the dim component.

Fig. 4 shows the correlation between the asymmetry of the time series for double systems and the angular separation $`\rho `$. The asymmetry is positive for dimmer outlying values, and negative for pollution by the primary (bright outlying values).
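The asymmetry statistic of Fig. 4 is not specified in detail here; a simple stand-in, assuming it behaves like the sample skewness of the magnitude distribution, might look as follows (in magnitudes, dimmer outliers give a positive tail):

```python
import numpy as np

def asymmetry(mags):
    """Sample skewness of a magnitude time series.
    Positive: tail towards dimmer (larger) magnitudes, e.g. mispointing;
    negative: tail towards brighter values, e.g. pollution by a primary."""
    m = np.asarray(mags, dtype=float)
    d = m - m.mean()
    return np.mean(d ** 3) / np.mean(d ** 2) ** 1.5
```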
### 4.3. Selection of outliers

The problems caused by the outliers were very acute at the beginning of the analysis; we therefore had to take drastic measures before searching for periods and amplitudes. We removed:

* all measurements with a non-null flag;
* the end of the mission, if the dispersion of the data before day 8883 was smaller than 0.3 (the data for large-amplitude variables were kept up to the end of the mission);
* high luminosity values, if there was a magnitude jump between consecutive transits;
* bad-quality measurements with transit errors higher than $`ϵ(DC)=0.0005\times 10^{0.167Hp_{DC}}+0.0014`$;
* temporarily, one or two outliers, to check their impact on the result of an analysis based on the truncated time series.

This removal represents a reduction by 6% of the number of measurements with non-zero flag. Suspect transits from the analysis of outliers were transmitted to the reduction consortia, who flagged them according to the origin of the disturbance.

## 5. Alias and spurious periods

The spectral window produces spurious periods when it is convolved with the true spectrum. As a result, spurious periods around 0.09 d were frequently found for long-period or irregular variables, as well as for stars showing magnitude trends. Periods of about 5 d for SR variables turned out to be nearly all spurious. With the Hipparcos time sampling, the spectral window changes from one star to another, and the alias effect had to be studied on a per-star basis. A 58-day periodicity was found in many time series when applying the period search algorithm. This period corresponds to the time interval between consecutive measurements of double systems under the same angle with respect to the modulating grid (the modulated signal is higher when the components are parallel to the grid).

## 6. Advice about the use of the data

We want to stress that caution should be exercised in handling the epoch photometry, especially when the signal-to-noise ratio is low or when the number of retained measurements is small. The effects of multiperiodicity are generally very tricky. The spectral window should be investigated in detail, and periods near sampling frequencies should be treated with care, in particular in the range 5 to 20 d, where the Hipparcos photometry has the weakest detection capability. The selection of photometric data can be made according to their flags, the estimates of the transit errors and the background intensities. In case of doubt about outlying data, a look at the magnitude difference $`AC-DC`$ will reveal problems related to duplicity and image superpositions, since the amplitude of the modulated signal is reduced in the case of sources misaligned with respect to the grid orientation. The contents of the opposite FOV can be investigated thanks to their published positions. It is suggested to correct rather than to eliminate data, since a loss of information might bias the statistics. For an example of a successful selection procedure applied to the data of HIP 115510, see Lampens et al. (1999).

## 7. Conclusion

Performing accurate photometry in space is not free from problems. The same is true for the data analysis. Once the origin of the encountered problems is identified, it is possible to cope with them and to determine precisely the domains of validity of the algorithms for the search of periods and amplitudes. Globally, the combination of quantity and quality delivered by this mission for the study of variability has no equivalent up to now.
## References

ESA 1997, The Hipparcos and Tycho Catalogues, ESA SP-1200
Evans, D.W. 1995, Hipparcos Photometry Merging Report, RGO/NDAC 95.01
Eyer, L., & Genton, M.G. 1999, A&AS, 136, 421
Eyer, L., & Grenon, M. 2000, in preparation
Lampens, P., van Camp, M., & Sinachopoulos, D. 1999, A&A, accepted
van Leeuwen, F., Evans, D.W., Grenon, M., Grossmann, V., Mignard, F., & Perryman, M.A.C. 1997, A&A, 323, L61
# Metal induced gap states and Schottky barrier heights at non–reactive GaN/noble metal interfaces

## I Introduction

GaN has certainly been one of the most studied compounds in the last few years, mainly because of both interesting optical properties and remarkable thermal stability, which render this semiconductor particularly suitable for important technological applications. As is well known, however, device performance depends on good metallic contacts, and so the study of Schottky barrier heights (SBH) in GaN/metal systems is of great relevance: as an example, the performance of GaN–based laser diodes is still limited by the difficulty in making low-resistance ohmic contacts. In this regard, surface reactivity and the presence of interface states are also seen to play a relevant role in Schottky barrier formation. In a previous work, we investigated the GaN/Al system, which is considered to be a reactive interface due to the Ga–Al exchange reaction driven by AlN formation at the immediate interface; we studied the ideal interface, as well as the effects on the interface properties of some defects (such as atomic swaps and Ga<sub>x</sub>Al<sub>1-x</sub>N intralayers) at the initial stages of the SBH formation. In the present work, we report results of ab-initio calculations for GaN/M interfaces (with M = Ag, Au), which are considered to be non–reactive, and compare the results (such as metal induced gap states (MIGS) and SBH) with those obtained for GaN/Al, in order to understand the dependence of the relevant electronic properties on the deposited metal. The interest in studying noble metal contacts resides in understanding the effect of the $`d`$ states on the Fermi level position, which has been thought to be relevant in the case of GaAs interfaces. Moreover, we investigate the role of the atomic positions in the interface region in determining the final SBH values: starting from GaN/Al, line–ups differing by as much as 0.80 eV can be produced by changing the interface N–metal interplanar distance from its equilibrium value to that corresponding to the GaN–Ag interface; as a result, the Schottky barrier height is brought to a value very close to that obtained for GaN/Ag. This leads to the conclusion that strain effects, mainly affecting the magnitude of the interface dipole, play a major role in determining the final SBH at the GaN/metal interface.

## II Technical details

The calculations were performed using the all–electron full-potential linearized augmented plane wave (FLAPW) method within density functional theory in the local density approximation (LDA). We used a basis set of plane waves with wave vector up to $`K_{max}`$ = 3.9 a.u., leading to about 2200 basis functions; for the potential and the charge density we used an angular momentum expansion up to $`l_{max}`$ = 6. Tests performed by increasing $`l_{max}`$ up to 8 showed changes in the Schottky barrier height of less than 0.03 eV. The Brillouin zone sampling was performed using 10 special $`k`$–points according to the Monkhorst-Pack scheme. The muffin-tin radii, $`R_{MT}`$, for Au and Ag were chosen equal to 2.1 a.u., while for Ga and N we used $`R_{MT}`$ = 1.96 and 1.65 a.u., respectively. We considered supercells containing 15 GaN layers (8 N and 7 Ga atoms) and 9 metal layers; tests performed on the cell dimensions have shown that bulk conditions are well recovered far from the interface using this 15+9 layer cell size (see discussion below).
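As an illustration of the Brillouin-zone sampling mentioned above, a minimal sketch of Monkhorst-Pack grid generation in fractional coordinates (this toy generator does not perform the symmetry reduction that leads to the 10 special points actually used; grid dimensions below are arbitrary):

```python
import itertools
import numpy as np

def monkhorst_pack(q1, q2, q3):
    """Fractional k-point coordinates u_r = (2r - q - 1) / (2q), r = 1..q,
    along each reciprocal lattice vector (Monkhorst & Pack 1976)."""
    def axis(q):
        return [(2.0 * r - q - 1.0) / (2.0 * q) for r in range(1, q + 1)]
    return np.array(list(itertools.product(axis(q1), axis(q2), axis(q3))))

kpts = monkhorst_pack(4, 4, 2)   # 32 points before symmetry reduction
print(len(kpts), kpts[0])
```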
## III Structural properties

GaN is well known to show polytypism between the zincblende and the wurtzite phases, so that either one can easily be stabilized; we therefore concentrate on ordered N–terminated zincblende interfaces, in order to avoid the contribution of spontaneous polarization effects inside GaN that might contribute to the Schottky barrier height. Our goal is in fact to investigate the role played by the different metals in determining the position of the Fermi level within the semiconductor band gap. We considered the metal as grown epitaxially on a GaN substrate ($`a_{subs}`$ = $`a_{GaN}=4.482`$ Å). Given the bulk lattice constants of the three metals, $`a_{\mathrm{Al}}=4.05`$ Å, $`a_{\mathrm{Ag}}=4.09`$ Å and $`a_{\mathrm{Au}}=4.08`$ Å, all the metals considered show a quite large mismatch with the GaN substrate, ranging from 8.8 % in the case of fcc–Ag up to 9.6 % in the case of fcc–Al. In all cases, their lattice constants are smaller than that of the substrate, which implies that appreciable bond-length relaxations are expected for the metal overlayers. We calculated the most stable structures, assuming pseudomorphic growth conditions and a geometry in which the metal atoms simply replace the Ga atoms on their fcc sites, using total energy minimization and the ab initio forces calculated on each atomic site to find the equilibrium values of the interface Ga-N, N-M and M-M interplanar distances. In–plane relaxations, as well as the possibility of in-plane reconstruction of the GaN surface, before or during metal deposition, were neglected. In this work, in fact, we are mostly interested in studying the effect of the metal overlayer on the interface GaN/M electronic properties, rather than in determining the structural configurations that may occur experimentally. Our structural data are reported in Table I. Due to a refinement of our previous calculations on the GaN/Al system, the data listed in Table I for this system differ from those already published; the largest change occurs for the Al–Al bulk interplanar distance. For clarity, we report our previous results in parentheses in this same Table. Let us focus on the structural rearrangement of Al, Ag and Au on the GaN substrate and consider some relevant interplanar distances (i.e. distances between atomic planes along the growth direction). A comparison between the free-electron-like case of Al and the behavior of the noble metals shows that none of the metals considered seems to alter appreciably the bond length at the interface semiconductor layer, so that the interplanar distance remains close to its bulk value (1.12 Å); the largest deviation, found in the case of Ag, still gives a reduction of the Ga–N interface distance by less than 5%. A larger difference is found for the interface nitrogen–metal interplanar distance: the N–Al distance is smaller than those corresponding to the Ag and Au structures by about 20 $`\%`$. This can be related to the different bonding at the interface in the different cases, as will be further discussed later on. For both Al and Au we find that the interplanar metal–metal distance, $`d_{int}^{MM}`$, increases compared to the bulk, leading to a bond length larger than equilibrium just at the interface layer. This effect is far more evident in the case of Al and can be related to a weakening of the $`s`$–type metallic Al–Al bond due to a partial $`sp`$ hybridization of the interface aluminum in the Al–N covalent bond. We find that already in the sub–interface layers the forces are very small, thereby indicating that the metals quickly recover their strained bulk–tetragonal bond lengths. In the last column of Table I, we report the interplanar distances calculated according to the macroscopic theory of elasticity (MTE) for the tetragonal metal strained to match the GaN substrate, using the bulk elastic constants and the equilibrium bond lengths as input parameters. We recall that in all cases considered the mismatch is rather large (about 9%), so that we are probably outside the range of validity of the MTE. In fact, the discrepancies between the optimized interplanar distances within the bulk regions (namely, $`d_{bulk}^{MM}`$) and those predicted by MTE range from 1% in the case of Au up to 8% in the Al case. As expected, the bond-length distances on the metal bulk side are very similar for all the metals considered, since their equilibrium lattice constants are very close.
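A minimal sketch of the two numbers used above, the lattice mismatch and the MTE estimate of the tetragonal interplanar distance (the elastic constants below are approximate literature values for Al, inserted only to exercise the function; they are not the inputs used in Table I):

```python
def mismatch(a_subs, a_metal):
    """Lattice mismatch relative to the substrate, (a_subs - a_M) / a_subs."""
    return (a_subs - a_metal) / a_subs

def d_mte(a_metal, a_subs, c11, c12):
    """MTE interplanar spacing of an fcc(001) metal strained to a_subs:
    in-plane strain eps_par = (a_subs - a_M)/a_M, tetragonal response
    eps_perp = -2 (C12/C11) eps_par; unstrained (001) spacing is a_M/2."""
    eps_par = (a_subs - a_metal) / a_metal
    eps_perp = -2.0 * (c12 / c11) * eps_par
    return 0.5 * a_metal * (1.0 + eps_perp)

a_gan = 4.482
for name, a_m in [("Al", 4.05), ("Ag", 4.09), ("Au", 4.08)]:
    print(f"{name}: mismatch = {100 * mismatch(a_gan, a_m):.2f} %")
# Approximate bulk Al elastic constants (GPa), for illustration only:
print(f"d_MTE(Al) ~ {d_mte(4.05, 4.482, c11=107.0, c12=61.0):.2f} A")
```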
## IV MIGS: the noble metal case and comparison with the GaN/Al system

We compare in Fig. 1 the atomic site–projected partial density of states (PDOS) for the GaN/Au and GaN/Al interfaces and for inner N and M atoms, taking the valence band maximum (VBM) of the inner N PDOS as the zero of the energy scale (vertical arrows denote the position of $`E_F`$). The GaN/Ag PDOS is very similar to that of GaN/Au and is therefore not shown. In order to demonstrate the presence of the MIGS and the strong effect of the metal deposition on the interface semiconducting atoms, we show as reference the PDOS of the same atoms (N and metal) in the corresponding bulk compounds (i.e. zincblende GaN and bulk metal). The PDOS for the interface N and Au atoms - essentially due to $`p`$ and $`d`$ states, respectively - (Fig. 1 (c) and (d)) show peaks with a quite high density of states in the GaN band gap energy region (i.e., between 0 and 1.8 eV). We recall here that LDA strongly underestimates the GaN band gap, whose experimental value is $`E_{gap}^{expt}`$ = 3.39 eV; these states are seen to disappear in the inner bulk atom (Fig. 1 (a) and (b)). Therefore, the presence of the metal affects only the semiconductor layers closer to the interface; already in the second layer (not shown) the MIGS decrease appreciably and the DOS gets very close to its bulk shape. Bulk conditions are perfectly recovered in the inner layers (see the practically overlapping lines in Fig. 1 (a) and (e)), showing that the supercell dimensions are sufficient for our purposes. In particular, we notice that the LDA GaN gap is recovered in the PDOS of the inner N atoms. Apart from the presence of MIGS, the PDOS for both the N and Au interface atoms shows strong differences relative to the bulk, much larger than in the GaN/Al case, where the most relevant difference consists in the general modulation which brings the Al PDOS from the free–electron square–root–like behavior closer to the PDOS of bulk AlN. The reason for this behavior is that in GaN/Au the Au $`5d`$ states are occupied and interact strongly with the N $`p`$ states, which are also filled. As a result, the antibonding states rise in energy above the semiconductor VBM and form the peaks at around 1.5 eV, just in proximity to $`E_F`$. Such features are absent in the GaN/Al case (Fig. 1 (g)); they are present in the PDOS of the interface N and Au atoms (Fig. 1 (c)), completely disappearing on atoms far from the junction inside GaN.
The presence of a peak at $`E_F`$ might indicate a tendency towards an instability, probably leading to an in–plane reconstruction with the possible introduction of defects. The spatial location of the charge density corresponding to the peak around $`E_F`$ is shown in Fig. 2: these states, which have a clear anti–bonding character between N and Au, are mainly localized in the interface region, with a resonant behavior inside the Au region (not shown) and a negligible charge density in the GaN region. A similar situation occurs in the GaN/Ag system (not shown) and was also reported for a GaAs/Ag interface, whereas it is completely absent in GaN/Al-type structures due to the lack of $`d`$–states in this system. The overall shape of the interface N PDOS shows a depletion of states in the region from $`-4`$ to $`-1`$ eV, and a peak around $`-5`$ eV, presumably representing the bonding partners of the structures around $`E_F`$ and around the GaN VBM. This corresponds to a degradation of the $`sp^3`$ bonding environment around the interface N. On the basis of this discussion, we can try to give an explanation for the much larger distances $`d_{int}^{NAu}`$ and $`d_{int}^{NAg}`$, compared with the Al case. In fact, the N-Al interface bond is similar to the one in bulk AlN, which provides its stability. The N-Au bond, on the other hand, mixes filled states and pushes towards higher energies the antibonding combinations (with a large anion (N) contribution) around $`E_F`$ (but mostly below it). This is consistent with the smaller amount of charge present inside the interface N atom, to be discussed later. In this scenario, decreasing the N-Au (Ag) distance would not further stabilize the structure. In order to investigate the spatial dispersion of the occupied gap states, we show in Fig. 3 the macroscopic average of the MIGS charge density in GaN/Al (solid line), GaN/Au (dotted line) and GaN/Ag (dot-dashed line). As already pointed out for GaN/Al, the presence of the metal affects almost exclusively the interface semiconductor layer; the MIGS decay exponentially, approaching zero inside the semiconductor. Due to the different density of states distributions of Al and Au (or Ag) for energies close to $`E_F`$, and the consequent different positions of $`E_F`$ with respect to the GaN VBM, the total integrated MIGS charge in the whole cell (metal side included) is larger in the free–electron metal–like case; however, the behavior of these states as a function of the distance from the junction is overall very similar in the three interfaces. Actually, we find from Fig. 3 a very similar behavior, which can be extrapolated with an exponential, leading to the same decay length (see Ref. for details) for both GaN/Au and GaN/Ag, estimated to be $`\lambda `$ = 2.0 $`\pm `$ 0.1 Å, which is close to the value obtained for the GaN/Al system ($`\lambda \approx `$ 1.9 Å). Therefore, even though the energy dispersion of the MIGS is somewhat different in the free–electron–like and noble metal interfaces (see Fig. 1), they are equally well screened on the semiconductor side within 1–2 layers, thus showing that, to a good approximation, $`\lambda `$ is a GaN bulk property.
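A minimal sketch of how such a decay length can be extracted, fitting an exponential to the planar-averaged MIGS charge density as a function of distance from the interface (the data array here is synthetic, for illustration only):

```python
import numpy as np

# Synthetic planar-averaged MIGS density (arbitrary units) vs distance (A)
z = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
rho = 0.8 * np.exp(-z / 2.0) + 0.01 * np.random.default_rng(0).normal(size=z.size)

# Fit log(rho) = log(rho0) - z / lambda with a linear least-squares fit
mask = rho > 0
slope, intercept = np.polyfit(z[mask], np.log(rho[mask]), 1)
lam = -1.0 / slope
print(f"decay length lambda ~ {lam:.2f} A")   # ~2 A by construction
```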
We plot in Fig. 4 the binding energies of the Ga and N $`1s`$ core levels with respect to $`E_F`$ (panels (a) and (b), respectively) and the difference between the MT charges in the GaN/M superlattices (SL) and in GaN bulk (Fig. 4 (c) and (d) for the Ga and N atoms, respectively), as a function of the distance from the interface. As already discussed in Ref. , it is clear that in the Al case the charge rearrangement at the interface causes a small effect (about 0.15 eV) on the interface N core level and negligible effects on the other semiconductor atoms. On the other hand, in the noble metal case the interface effect is much stronger: the GaN interface layer shows core level binding energies differing by about 0.4 (0.5) eV for Ag (Au) from the values of the inner bulk-like atoms. Moreover, the core level bending in going from the bulk towards the interface follows an opposite trend in the noble and free-electron–like metals. This behaviour is consistent with the trend of the valence charge inside the atomic spheres: the interface nitrogen atom shows a charge depletion (enhancement) with respect to the bulk atoms in the noble metal (Al) case. As discussed above, the presence of a peak (see Fig. 1) in the GaN gap energy region, with antibonding N $`p`$–Au $`d`$ character, might be the cause of this charge rearrangement and of the resulting chemical shift observed in the core level profiles.

## V Schottky barrier heights

To calculate the values of the SBH, we adopt the usual procedure which takes core levels as reference energies. In particular, the potential discontinuity can be expressed as the sum of two terms: $`\mathrm{\Phi }_B=\mathrm{\Delta }b+\mathrm{\Delta }E_b`$, where $`\mathrm{\Delta }b`$ and $`\mathrm{\Delta }E_b`$ denote an interface and a bulk contribution, respectively. We evaluate $`\mathrm{\Delta }b`$ by taking the difference of the Ga $`1s`$ and the noble–metal $`1s`$ core level energies in the superlattice: $`\mathrm{\Delta }b=E_{1s}^{Ga}-E_{1s}^{NM}`$. On the other hand, the bulk contribution can be evaluated from separate calculations for bulk GaN and the noble metal, calculating the difference between the binding energies of the same $`1s`$ levels considered above: $`\mathrm{\Delta }E_b=(E_{VBM}^{GaN}-E_{1s}^{Ga})-(E_F^{NM}-E_{1s}^{NM})`$. The $`p`$-type SBH values obtained, shown in Table II, include a spin–orbit perturbation $`\mathrm{\Delta }_{SO}^{GaN}\approx `$ 0.1 eV, but do not include quasi–particle corrections. Note that, as already pointed out, this refinement in the calculation of the GaN/Al system gives a $`4\%`$ difference in the N-Al interface distance (which changes the core level alignment at the interface) and a large difference in the tetragonal bulk Al interplanar distance. This last quantity affects the Al core level binding energies used in the evaluation of the final SBH, considerably changing the bulk contribution, which is essentially related to the absolute deformation potential of the Fermi level in bulk Al. As a result, we obtain an SBH value different from the one previously published ($`\mathrm{\Phi }_B`$ = 1.12 eV). The values shown in Table II are in good agreement (within 0.1-0.2 eV) with those calculated from the density of states, obtained by considering the SBH as the energy distance between $`E_F`$ (see vertical arrows in Fig. 1) and the top of the valence band of the PDOS corresponding to the inner semiconductor layer inside the bulk region of the superlattice (the energy zero in Fig. 1). We note that the SBH values in the noble–metal case are lower than the SBH at the GaN/Al interface ($`\mathrm{\Phi }_{B_p}(GaN/Al)`$ = 1.51 eV).
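A minimal numerical sketch of this core-level alignment procedure (all energies below are hypothetical placeholders, inserted only to show how the two contributions combine; they are not the values behind Table II):

```python
def schottky_barrier(e1s_ga_sl, e1s_nm_sl, e_vbm_minus_e1s_ga, e_f_minus_e1s_nm):
    """Phi_B = Delta_b + Delta_E_b, with
    Delta_b  = E_1s(Ga, superlattice) - E_1s(noble metal, superlattice)
    Delta_Eb = (E_VBM - E_1s(Ga))_bulk GaN - (E_F - E_1s(NM))_bulk metal."""
    delta_b = e1s_ga_sl - e1s_nm_sl
    delta_eb = e_vbm_minus_e1s_ga - e_f_minus_e1s_nm
    return delta_b + delta_eb

# Hypothetical 1s energies (eV), chosen only so the pieces are visible:
print(schottky_barrier(-10366.0, -80724.0, 10361.2, 80718.3))  # ~0.9 eV
```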
To better understand this SBH reduction, we should consider that the Al and noble metal interfaces differ in two main respects: (i) the different chemical species of the metal overlayer and (ii) different structural properties, $`i.e.`$ different bond lengths at the interface, which comprise, in all respects, an interfacial strain contribution. In particular, from inspection of Table I it is evident that the Ag and Au structures show very similar $`d_{int}^{NM}`$ and $`d_{int}^{MM}`$ interplanar distances, which are at variance with those for Al. In order to separate the chemical from the strain contribution, we evaluate the SBH for three different structures that can be regarded as intermediate steps necessary to bring the GaN/Al structure to match the GaN/Ag one perfectly. We show in Table III the interface interplanar distances and final SBH values for the equilibrium GaN/Al and GaN/Ag systems and for the three intermediate interfaces. In the first structure (Step I), the interplanar $`d_{int}^{NM}`$ distance of the GaN/Al SL is taken equal to that minimized for the GaN/Ag SL ($`d_{int}^{NM}`$ = 1.32 Å): the SBH is reduced from 1.5 eV to 0.76 eV (this surprising result will be discussed in detail later on). As a second step (Step II), we change $`d_{int}^{MM}`$ to recover that calculated for the GaN/Ag structure: $`d_{int}^{MM}`$ = 1.60 Å. The SBH is remarkably less sensitive to this parameter, giving only a 0.04 eV change in the potential barrier, which brings the SBH to about 0.80 eV. This is expected, since the metal efficiently screens out the perturbation generated by atomic displacements: indeed, the dynamical effective charge, related to the dipole induced by a unit displacement of the atoms, is zero inside a metal. In the third structure (Step III), the Ga-N interface distance is brought to its value in the GaN/Ag superlattice, $`d_{int}^{GaN}`$ = 1.07 Å, and we have a system where Al atoms perfectly replace Ag in GaN/Ag. Although this last structural change is very small (about 4 $`\%`$), this interplanar distance turns out to be very important for the final potential line–up, due to the incomplete screening on the semiconductor side and the large N effective charge. As a result, the SBH changes appreciably ($`\mathrm{\Phi }_B`$ = 0.71 eV). This final result is quite close (within 0.2 eV) to the value found for the real GaN/Ag interface, showing that the interface strain apparently plays a more important role than the bare chemical contribution. Perhaps the most surprising result of our tests is the very strong dependence of the SBH on the interface N–Al distance, whose variation represents the largest contribution to the difference between the GaN/Al and GaN/Ag (Au) SBHs. Test calculations have shown an almost perfectly linear behaviour of $`\mathrm{\Phi }_B`$ against $`d_{int}^{NAl}`$, leading to an Al effective charge $`Z_L^{*}=0.08`$ ($`Z_L^{*}=Z_T^{*}/ϵ_{\mathrm{\infty }}`$, where $`Z_T^{*}`$ is the Born dynamical charge and $`ϵ_{\mathrm{\infty }}`$ is the electronic static dielectric constant). This result is in sharp contrast with the case of GaAs/Al , where no significant changes were found for small elongations of the As-metal interface distance, resulting in $`Z_L^{*}\approx `$ 0 - an almost perfectly metallic behavior. As a further test, we performed calculations on GaAs/Al, similar to those of Ruini et al., confirming their results both in terms of the SBH values and of their behavior as a function of $`d_{int}^{AsAl}`$.
A possible explanation of the striking difference between the GaN/Al and GaAs/Al systems can be provided by an analysis of the relevant physical parameters involved. The MIGS decay length is larger for GaAs/Al ($`\lambda _{GaN}\approx `$ 2 Å and $`\lambda _{GaAs}\approx `$ 3 Å), therefore providing a more extended region with metallic behavior inside the semiconductor; in addition, the density of states at the Fermi level, $`N(E_F)`$, is larger for GaAs/Al (see Fig. 5), giving rise to a smaller (by a factor of $`\approx 1.5`$) Thomas-Fermi screening length $`\lambda _{TF}`$ in this system. Still, the values of the MIGS decay length and of $`N(E_F)`$ are not very different between GaAs and GaN, so that it is not at all obvious to expect such a different behavior of the SBH in the two compounds. To understand these results, we can use elementary electrostatic arguments. If we make a few rough assumptions, such as a simple Yukawa–like screened potential, and consider a metallic behavior inside the semiconductor up to distances of the order of $`\lambda `$, the MIGS decay length, we find that the potential difference across the metal/semiconductor junction induced by displacements of the Al interface atom scales with the factor $`e^{-k_{TF}\lambda }`$; the ratio between the two systems is then $`e^{k_{TF}^{GaAs}\lambda _{GaAs}-k_{TF}^{GaN}\lambda _{GaN}}`$. If we now estimate the values of the Thomas–Fermi screening wavevectors, $`k_{TF}`$, using the GaN/Al and GaAs/Al superlattice values of $`N(E_F)`$, we are led to the conclusion that a unit displacement of the interface Al atoms produces a potential change across the interface which is roughly 7 times larger in GaN/Al than in GaAs/Al. Considering the crudeness of this model, such an estimate is in satisfactory agreement with the first–principles results, which indicate a factor of $`\approx 10`$ for the same ratio. In other words, GaAs screens out almost perfectly all the structural changes in the interface region (namely, displacements of the interface Al atoms), while the same is not true for GaN. In order to further investigate the effects of the $`d_{int}^{NM}`$ change, we compare the core levels and MT charges for the equilibrium GaN/Al interface and the Step I system. Recall that these systems are exactly equal, except for the difference in the interface nitrogen–metal distance: $`d_{int}^{NM}`$ = 1.11 Å and $`d_{int}^{NM}`$ = 1.32 Å for the equilibrium and Step I structures, respectively. As in Fig. 4, we show in Fig. 6 the trends of the core level binding energies (panels (a) and (b) for the Ga and N atoms, respectively) and the difference between the MT charges and their values in the bulk compounds (panels (c) and (d) for the Ga and N atoms, respectively), as a function of the atomic distance from the interface. In terms of core levels and charge transfer within the MT spheres, a comparison with Fig. 4 shows that the Step I system behaves very much like the GaN/Ag structure, but very differently from the equilibrium GaN/Al interface. Therefore, the electronic charge distribution around the interface N and Al atoms has a strong dependence on the interplanar distance and is able to reduce the SBH by as much as 0.75 eV, bringing the GaN/Al Step I SBH to within only 0.16 eV of the GaN/Ag SBH. The dispersion of the SBH values seems to exclude a Fermi level pinning in the GaN case, as experimentally confirmed by the large spread of values reported in the literature for the SBH between GaN and different metals. In particular, let us recall those obtained for $`n`$-GaN/Ag and $`n`$-GaN/Au: $`\mathrm{\Phi }_{B_p}^{expt}(n\text{-}GaN/Ag)`$ = 2.7 eV and $`\mathrm{\Phi }_{B_p}^{expt}(n\text{-}GaN/Au)\approx `$ 2.4 eV and 2.2 eV.
On the other hand, recent photoemission measurements performed for Au deposited on $`p`$-type GaN show that the Fermi level is stabilized around 1 eV above the $`p`$-GaN VBM, in apparent good agreement with our calculated value ($`\mathrm{\Phi }_B`$ = 1.08 eV). The disagreement between some of these values and our calculated ones is certainly related to the different conditions of the GaN surface (which is ideal in our calculations and subject to different preparations in the experimental case); moreover, we considered the GaN zincblende structure and oriented interfaces, whereas all the experimental samples are grown on wurtzite GaN. We note finally that the spread of experimental values found for GaN/metal interfaces has not been observed for the GaAs/metal systems, where a Fermi level pinning was experimentally reported. This can be understood in terms of the much more efficient screening provided by GaAs with respect to GaN, as discussed above. In fact, the main difference between the noble metal and Al systems is the length of the anion–metal bond, which gives rise to large variations of the interface dipole: this electrostatic contribution is poorly screened in GaN and therefore contributes considerably to the final SBH value. In addition, it has to be observed that GaAs matches the Au, Ag and Al lattice constants much better (within 2$`\%`$), so that the interface strain does not play a crucial role in this case.

## VI Conclusions

We have performed FLAPW calculations for ordered GaN/Ag and GaN/Au interfaces, mainly focusing on the electronic properties and comparing our results with those obtained from previous calculations for the GaN/Al interface. Our calculations show that there is an appreciable density of MIGS at the noble metal interfaces considered (even higher than in the GaN/Al case); however, the presence of the gap states is relevant in the interface layer only, being strongly reduced already in the sub–interface layer. We estimate the MIGS decay length to be $`\lambda \approx `$ 2.0 Å for all the metals considered (Al, Ag and Au). The SBH values ($`\mathrm{\Phi }_{B_p}(GaN/Ag)`$ = 0.87 eV and $`\mathrm{\Phi }_{B_p}(GaN/Au)`$ = 1.08 eV) are significantly smaller than the value obtained in the GaN/Al case ($`\mathrm{\Phi }_{B_p}(GaN/Al)`$ = 1.51 eV); we have demonstrated that the appreciable SBH reduction in going from the free–electron to the noble–metal case is mostly due to structural effects. In particular, the distance between the last N and the first metal layer plays a critical role in dictating the final SBH value, in contrast with what is found in GaAs/Al, where previous, as well as our present, calculations showed negligible effects of this same structural parameter. We found that the largest structural differences between the various GaN/M interfaces considered are related to this distance, mainly determined by the different bonding nature between N and free–electron–like or noble metals. Finally, we were able to show, at least for our perfectly ordered abrupt interfaces, that the lack of Fermi level pinning in GaN can be understood in terms of electrostatic effects related to variations of the interface anion–metal dipole: these effects are not properly screened in GaN, so that they contribute considerably to the final potential line–up at the interface.

## VII ACKNOWLEDGEMENTS

We gratefully acknowledge useful discussions with Prof. R. Resta and Dr. A. Ruini.
Work in L’Aquila and Cagliari was supported by grants of computer time at the CINECA supercomputing center (Bologna, Italy) through the Istituto Nazionale di Fisica della Materia (INFM). Work at Northwestern University was supported by the U.S. National Science Foundation through the Northwestern Materials Research Center.
# The Polytropic Equation of State of Interstellar Gas Clouds

## 1 Introduction

A fundamental problem in the theory of star formation is the physical structure of molecular gas clouds, such as dense cores and Bok globules, from which (low-mass) stars are formed. Early work on the thermal and chemical balance of interstellar clouds was performed by de Jong, Dalgarno & Boland (1980) and Falgarone & Puget (1985). A framework for active regions such as Orion has been developed by Tielens & Hollenbach (1985). The above work has been extended and refined by many authors (see Hollenbach & Tielens 1999 for a review), driven to a large extent by recent advances in observational techniques. Examples of such observational progress include the catalog of 248 optically selected Bok globules of Clemens & Barvainis (1988), the NH<sub>3</sub> survey of isolated cores in Taurus by Benson & Myers (1989, and references therein), the measurement of non-thermal line widths in regions surrounding ammonia cores (Fuller & Myers 1992, Goodman et al. 1998), and the use of spectroscopy of various atoms and molecules as temperature and density diagnostics of different types of interstellar clouds, as summarized in van Dishoeck (1997). Despite this wealth of observational data, the nature of their equation of state (EOS) remains a major theoretical problem, one that concerns the stability and collapse of these molecular clouds. The stiffness of the EOS can be largely responsible for the resulting density probability function of interstellar gas in the turbulent ISM (Scalo et al. 1998; Passot & Vazquez-Semadeni 1998). In particular, the value of $`\gamma `$ affects the densities that can be attained behind shocks (Vazquez-Semadeni, Passot & Pouquet 1996), the fraction of gas driven to high densities by turbulent interactions (cf. Scalo et al. 1998), and the stability of clouds being overrun by shocks (e.g. Tohline, Bodenheimer & Christodoulou 1987; Foster & Boss 1996). In the present paper we concentrate on the effects of the EOS on the fragmentation of individual collapsing clouds. Before embarking on an analysis of the EOS of interstellar gas clouds, say of the form $`P=P(\rho ,T_\mathrm{g},T_\mathrm{d},V,B,I,C)`$ for pressure $`P`$, density $`\rho `$, gas temperature $`T_\mathrm{g}`$, dust temperature $`T_\mathrm{d}`$, velocity $`V`$, magnetic field $`B`$, radiation intensity $`I`$, and chemical composition $`C`$, one should assess which of the possible physical quantities entering the EOS dominates in the parameter ranges under study. The importance of $`\rho `$ and $`T_\mathrm{g}`$ is self-explanatory. The presence of warm dust can play a role, since it can heat the gas through gas-grain coupling and radiatively through optically thick molecular lines (Takahashi, Hollenbach & Silk 1983). The velocity field, even when it does not contribute to the turbulent pressure, strongly influences the optical depth of atomic and molecular cooling lines, and hence the thermal balance of the medium. The magnetic field plays an important role in the support of interstellar gas (McKee 1999). The presence of ultraviolet or other hard radiation sources strongly influences the chemical, thermal and ionization balance of the ambient medium, and hence the overall EOS. Finally, the chemical composition of the gas also depends on the metallicity, since it is the abundance of atoms and molecules which influences the local cooling rate.
In the present work, a polytropic equation of state, $`P=K\rho ^\gamma `$, is assumed to describe the physical state of the gas. The exponent $`\gamma `$ is considered to be a function $`\gamma (T_\mathrm{g},T_\mathrm{d},V,I,C)`$. That is, the magnetic pressure is assumed not to be dominant in the support of the model cloud. The radiation field $`I`$, in addition to the radiation emitted by the dust grains, is assumed to be given by the cosmic ray ionization rate. In other words, the visual extinction of the model cloud is assumed to be sufficient to shield the gas from any ultraviolet (stellar) sources. We note that $`\gamma `$ is the logarithmic derivative of $`P`$ with respect to $`\rho `$. That is, since $`P\propto \rho T`$, it follows that $`\gamma =1+\frac{d\mathrm{log}T}{d\mathrm{log}\rho }`$. Through implicit differentiation, the above derivative is related to the logarithmic derivatives of the heating and cooling functions, which can be quite complex. It is this dependence of $`\gamma `$ on the details of heating and cooling that is the main topic of the present paper. In the remainder of the paper, a useful guide is the density dependence of the heating and cooling rates. If the density dependence of the heating rate is steeper (shallower) than that of the cooling rate, then $`\gamma `$ is larger (smaller) than unity. A similar result holds for the kinetic temperature. Here, the cosmic ray heating rate is always linear in the density and independent of temperature, whereas gas-grain heating and cooling is quadratic in the density and proportional to $`T_\mathrm{g}^{1/2}(T_\mathrm{g}-T_\mathrm{d})`$. Optically thin, subthermally excited cooling lines have a quadratic density dependence, which becomes linear in the thermalized (high density) limit and even effectively sub-linear when radiative trapping (optical depth effects) becomes important. Finally, collisional de-excitation rates are characterized by powers of the temperature of the order of $`0.5-1.5`$, whereas excitation rates are modified by an exponential factor which is of the order of unity for kinetic temperatures larger than the pertaining excitation temperatures. In general, our results apply mostly to dense and/or well shielded molecular clouds/cores, where the pressure appears to be close to thermal, with only a small additional contribution to the line widths from turbulent motions, and where the ionization fraction is small (Goodman et al. 1998, and references therein). That is, we assume that the clouds are outside the regime where the so-called Larson (1981) laws apply, which indicate an increase in the non-thermal line width with length scale. One should note that polytropic models can also be extended into the regime where there is a significant contribution to the pressure from turbulence and magnetic fields. The polytropic temperature $`T`$ should then be interpreted as representing the total velocity dispersion $`\sigma `$, i.e., $`T\propto \sigma ^2`$ (Maloney 1988). A complete description of polytropic models, including composite ones for non-thermal pressure effects and bulk properties like mass, radius and density contrast, is presented in Curry & McKee (1999). The aim here is to study the thermal regime and to investigate the specific effects of metallicity, radiative transfer and internal sources on the stiffness of the resulting local EOS.
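Since $`\gamma `$ enters only through the logarithmic slope of the equilibrium temperature, it can be extracted numerically from any tabulated $`T(\rho )`$ relation. A minimal sketch (illustrative only; the $`T(\rho )`$ law below is a made-up power law, not one of our model results):

```python
import numpy as np

def local_gamma(rho, temp):
    """gamma(rho) = 1 + d log T / d log rho, from a tabulated T(rho)."""
    return 1.0 + np.gradient(np.log(temp), np.log(rho))

# Made-up equilibrium law T ~ rho^(-0.3), so gamma should come out ~0.7
rho = np.logspace(2, 6, 50)           # cm^-3
temp = 30.0 * (rho / 1.0e4) ** -0.3   # K
print(local_gamma(rho, temp)[:3])
```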
## 2 Model Description

The results presented here were obtained by application of the numerical code of Spaans (1996), described in detail in Spaans & Norman (1997), Spaans & van Dishoeck (1997), and Spaans & Carollo (1998). The interested reader is referred to these papers for a description of the underlying algorithm. The code has been specifically designed to solve large chemical networks, with a self-consistent treatment of the thermal balance for all heating and cooling processes known to be of importance, and to be computable with some sense of rigor, in the interstellar medium (Spaans & Ehrenfreund 1999). Care has to be taken in the treatment of line trapping for the atomic and molecular cooling lines which dominate the thermal balance. Collisional de-excitation and radiative trapping are most prominent in the density range $`10^3-10^4`$ cm<sup>-3</sup> (Scalo et al. 1998). Our results were found to be in good agreement with those of Neufeld et al. (1995) for their, and our, adopted linear velocity gradient model (see further details below). The adopted chemical network is based on the UMIST compilation (Millar, Farquhar & Willacy 1997) and is well suited for low temperature dense molecular clouds, with the latest rates for the important neutral-neutral reactions. The radiative transfer is solved by means of a Monte Carlo approach (Spaans & van Langevelde 1992), with checks provided by an escape probability method for large line optical depths. The thermal balance includes heating by cosmic rays, absorption of infrared photons by H<sub>2</sub>O molecules and their subsequent collisional de-excitation, and gas-grain heating. The cooling includes the coupling between gas and dust grains (Hollenbach & McKee 1989), atomic lines of all metals, molecular lines, and all major isotopes, as described in Spaans & Norman (1997) and Spaans et al. (1994), and also in Neufeld, Lepp & Melnick (1995) for the regime of very large line optical depth. All level populations are computed in statistical equilibrium at a relative accuracy, also imposed on the ambient line radiation field, of no less than $`10^{-3}`$. The combined chemical and thermal balance is required to obey a convergence criterion between successive iterations of 0.5%. For the low metallicity computations, the relative contributions to the total cooling by H<sub>2</sub> and HD can become significant. Since the value of $`\gamma `$ depends on the logarithmic derivative of the cooling rate with respect to temperature and density, the end result can become quite sensitive to the (quantum mechanical) treatment of H-H<sub>2</sub>, H<sub>2</sub>-H<sub>2</sub> and He-H<sub>2</sub> collisional processes. The main differences between calculations of H-H<sub>2</sub> and H<sub>2</sub>-H<sub>2</sub> collisions are due to different potential surfaces and to whether reactive collisions, i.e., the exchange of an H atom, are included (important above 1000 K). The shape of the potential surface is quite important at the lower temperatures, where rotational quantum effects dominate. This situation has improved greatly over recent years (cf. Forrey et al. 1997). Nevertheless, we have restricted our calculations to $`Z\ge 0.01Z_{\odot }`$. In this regime, there are other atomic and molecular coolants which still contribute, at the 30% level, diminishing the relative importance of the aforementioned sensitivities. As an additional check, we have performed runs with the recent le Bourlot et al. (1999) H<sub>2</sub> rates. We have found no significant changes in $`\gamma `$ for our adopted metallicities.
Finally, for zero metallicity gas, and in the absence of HD cooling, we found (albeit tentatively, given the above caveats) that $`\gamma `$ is always less than unity. The inclusion of HD restricts this result to densities less than $`10^5`$ cm<sup>-3</sup>.

## 3 The Initial Mass Function

Before discussing the results of our computations, we want to outline how the value of $`\gamma `$ could be related to the process of star formation, i.e., how a large value of $`\gamma `$ might suppress the formation of stars. The polytropic equation of state of interstellar gas clouds may be relevant to determinations of stellar masses. Our starting point is to pose the question: is there a characteristic stellar mass? Observationally, the answer appears to be positive. The IMF, $`dN/dM\propto m^{-1-x}`$, for the number $`N`$ of stars per unit of mass $`M`$, has $`x>1`$ above $`1\mathrm{M}_{\odot }`$. Below $`0.3\mathrm{M}_{\odot }`$, virtually all data suggest a flattening to $`-1\lesssim x\lesssim 0`$. The critical value that defines a characteristic mass is $`x=1`$, for a logarithmic divergence of the total mass toward decreasing mass, and there is no doubt that there is a transition from $`x>1`$ to $`x<1`$ in the range $`0.2-1\mathrm{M}_{\odot }`$. Hence there is a characteristic stellar mass. Whether this varies with environment is uncertain, but $`m_{\mathrm{char}}\approx 0.3\mathrm{M}_{\odot }`$ fits most data (cf. Scalo 1998). The status of theoretical discussions of a critical mass often centers on the thermal Jeans mass, $`m_\mathrm{J}\propto T^{3/2}\rho ^{-1/2}`$, generalized to include external pressure, $`m_{\mathrm{J},\mathrm{P}}\propto T^2P^{-1/2}`$, turbulence, $`m_{\mathrm{J},\mathrm{t}}\propto \mathrm{\Delta }V^4P^{-1/2}`$, or magnetic flux support, $`m_{\mathrm{J},\mathrm{B}}\propto B^3\rho ^{-2}`$ (cf. Elmegreen 1999, Larson 1998). The generalized Jeans scale most likely applies to the masses of atomic and molecular clouds or to self-gravitating clumps. However, the fact that the Jeans mass in diffuse atomic or molecular clouds in the interstellar medium generally exceeds stellar masses by a large factor, apart from the very coldest clumps, argues that the physics of the IMF is more complicated than Jeans instability. The Jeans mass is a fragmentation scale according to linear theory, and three-dimensional simulations indeed find that clump masses are described by the Jeans scale. Nevertheless, simulations cannot currently resolve the scales of individual protostars on solar mass scales. Analytic arguments for opacity-limited fragmentation, on the contrary, find scales of $`0.01\mathrm{M}_{\odot }`$, far less than the characteristic stellar mass scale. All of this seems to suggest that the Jeans mass is not very relevant to the IMF, notwithstanding the Elmegreen (1999) and Larson (1998) results. If it were, of course, the critical polytropic index for fragmentation to occur is 4/3. This has the following significance: one can write $`m_\mathrm{J}\propto \rho ^{(3/2)(\gamma -4/3)}`$. As collapse occurs, smaller and smaller masses fragment, and one plausibly ends up with a power-law IMF only if $`\gamma <4/3`$. This argument could apply to the clump mass function within molecular clouds. The physics of nonlinear fragmentation is complicated, and this is at least a necessary condition for a power law IMF to continue to very small mass scales (compared to the initial Jeans mass). But what really determines stellar masses?
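As a worked check of the scaling just quoted, inserting the polytropic relation into the thermal Jeans mass gives:

```latex
% Insert T \propto \rho^{\gamma-1} into m_J \propto T^{3/2}\rho^{-1/2}:
m_J \;\propto\; \rho^{\frac{3}{2}(\gamma-1)-\frac{1}{2}}
    \;=\; \rho^{\frac{3\gamma-4}{2}}
    \;=\; \rho^{\frac{3}{2}\left(\gamma-\frac{4}{3}\right)},
% so m_J decreases with increasing density (continued fragmentation)
% precisely when \gamma < 4/3.
```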
Physical processes that must play a role in determining the characteristic stellar mass scale include clump coagulation (Allen & Bastien 1995), clump interactions (Klessen, Burkert & Bate 1998) and protostellar outflows (Nakano, Hasegawa & Norman 1995; Silk 1995) that halt accretion. One conjecture is that feedback from protostars plays a crucial role in limiting accretion. The latter depends crucially on the accretion rate onto protostellar cores (various authors relate inflow to outflow rates on energetic grounds), which in turn depends sensitively on the turbulent velocity. The maximum embedded young stellar object luminosity correlates with the NH<sub>3</sub> core line width $`\mathrm{\Delta }V`$ (Myers & Fuller 1993) and with clump mass (Kawamura et al. 1998) over a wide range. Myers & Fuller (1993) concluded that variations in the initial velocity dispersion from core to core are a key determinant of the stellar mass formed from such a core. Also, Fuller & Myers (1992) concluded that the physical basis of the line width-size relations (the so-called Larson laws) is part of the initial conditions for star formation. The theoretical justification incorporates disk physics and, at least conceptually, magnetic fields (Adams & Fatuzzo 1996; Silk 1995). So if the above holds (theoretically and observationally; both aspects are relevant), then one might argue that the characteristic mass of a young stellar object depends primarily on $`\mathrm{\Delta }V`$. The latter is determined by the kinetic temperature if the thermal line width dominates in a prestellar core. This situation holds for low mass star formation, but less so for the high mass cores, where the nonthermal contribution to the line width (i.e. turbulent or magnetic) can be as large as 50-80% (cf. Myers & Fuller 1993). In any case, $`\gamma =1`$ appears the most natural critical value of interest. If $`\gamma =1`$ is indeed a critical value in this sense, there is the possibility that the primordial EOS with a stiffer polytropic index, $`\gamma >1`$, leads to a peaked IMF, i.e. one biased toward massive stars, whereas current star formation, where the molecular gas is characterized by an EOS with $`\gamma <1`$, results in runaway fragmentation and hence a power law IMF.

## 4 Results

The model results (extensive tables can be obtained from MS) are obtained for self-gravitating, quiescent spherical clouds with total column densities per unit of velocity given by

$$N^{\mathrm{SIS}}(\mathrm{H}_2)=5.1\times 10^{19}\,n(\mathrm{H}_2)^{1/2}\ \mathrm{cm}^{-2}\ \mathrm{per}\ \mathrm{km}\,\mathrm{s}^{-1},$$ (1)

(the singular isothermal sphere value; Neufeld et al. 1995), which sets the optical depth parameter for any species; total hydrogen densities $`n_\mathrm{H}=10^2-10^6`$ cm<sup>-3</sup>; metallicities $`Z=1-0.01Z_{\odot }`$ in solar units; and a possible homogeneous background infrared radiation (IR) field given by 100 K dust grains. The total visual extinction through the region containing the model cloud is fixed at $`A_\mathrm{V}=100`$ mag, although this is only important for determining the dust emission optical depth in the IR background case. Models are also constructed for a constant velocity gradient field of 3 km s<sup>-1</sup> pc<sup>-1</sup>, which is a factor of three larger than the value adopted in Goldsmith & Langer (1978), corresponding to an optical depth parameter of

$$N^{\mathrm{VG}}(\mathrm{H}_2)=1.0\times 10^{18}\,n(\mathrm{H}_2)\ \mathrm{cm}^{-2}\ \mathrm{per}\ \mathrm{km}\,\mathrm{s}^{-1}.$$ (2)

It has been shown by Neufeld & Kaufman (1993) that this choice violates the virial theorem in the limit of high density. Therefore, (1) is interpreted to apply to a (statistically) static cloud supported by thermal pressure, whereas (2) should be seen as representative of a dynamic medium in a state of infall or outflow. The cosmic ray ionization rate is taken equal to $`\zeta _{\mathrm{CR}}=3\times 10^{-17}`$ s<sup>-1</sup>.
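A quick side-by-side evaluation of the two optical-depth parameters of Eqs. (1) and (2) (a minimal sketch; function names are ours):

```python
import numpy as np

def n_sis(n_h2):
    """Eq. (1): SIS column density per unit velocity, cm^-2 per km/s."""
    return 5.1e19 * np.sqrt(n_h2)

def n_vg(n_h2):
    """Eq. (2): constant velocity gradient (3 km/s/pc) parameter."""
    return 1.0e18 * n_h2

for n in (1e2, 1e4, 1e6):   # H2 densities in cm^-3
    print(f"n = {n:.0e}: N_SIS = {n_sis(n):.2e}, N_VG = {n_vg(n):.2e}")
```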
### 4.1 Variations in $`\gamma `$: Competition between Heating and Cooling Processes

Figures 1 and 2 present results which do not include an IR background radiation field, i.e. little or no ongoing star formation and no warm CMB. Figure 1 displays the results for a quiescent velocity field with a velocity dispersion of $`\mathrm{\Delta }V=0.3`$ km s<sup>-1</sup> for H<sub>2</sub>. Figure 2 is for the corresponding case of a velocity gradient of 3 km s<sup>-1</sup> pc<sup>-1</sup>. The value of $`\gamma `$ becomes of the order of unity for low metallicities, $`Z<0.03Z_{\odot }`$, and densities $`n_\mathrm{H}<10^5`$ cm<sup>-3</sup>. The reason is that the cooling of the ambient medium becomes dominated by H<sub>2</sub> and HD (Norman & Spaans 1997), which emit optically thin line radiation. At low densities, the level populations are not thermalized and the cooling rate has a density dependence which is steeper than linear and typically quadratic, whereas the cosmic ray heating rate is linear in the density. At higher densities, gas-grain cooling becomes important. This has a quadratic density dependence, and the equation of state softens in the absence of any embedded sources (Scalo et al. 1998), although, for small metallicities, to a lesser extent due to the lower dust abundance. Even though no computations were performed beyond densities of $`3\times 10^6`$ cm<sup>-3</sup>, the gas-grain coupling at very large densities is so strong that the gas temperature follows the grain temperature, and the value of $`\gamma `$ will increase again and attain a value of unity if embedded protostars are present and the ambient radiation field is not coupled to the local gas density (see below). The high-$`\gamma `$ region between $`10^3`$ and $`10^4`$ cm<sup>-3</sup> is the result of line trapping and collisional de-excitation. Line trapping causes the cooling rate to depend on the density with an effective power less than unity, due to absorption and subsequent collisional de-excitation, while the cosmic ray heating rate is always linear in the density, independent of the kinetic temperature. It should be emphasized that the effect of radiative trapping is stronger for the quiescent velocity field, but it also depends crucially on the specific implementation that we have chosen. A truly turbulent velocity field (Kegel, Piehler & Albrecht 1993), i.e. one with a finite correlation length, may lead to a quite different value of $`\gamma `$ in the density range $`10^3-10^4`$ cm<sup>-3</sup>.
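To illustrate the competition described in this subsection, one can solve a toy thermal balance in which cosmic ray heating (linear in density) is matched by a cooling law with an adjustable density exponent. Everything below (the heating yield, the cooling exponents and the normalization) is purely illustrative and is not taken from our code:

```python
import numpy as np

ZETA = 3.0e-17        # s^-1, cosmic-ray ionization rate (as in the text)
Q = 20.0 * 1.6e-12    # erg per ionization -- an assumed heating yield

def equilibrium_T(n, beta, alpha=3.0, lam0=1.0e-35):
    """Toy balance zeta*Q*n = lam0 * n**beta * T**alpha, solved for T(n).
    lam0 is an arbitrary normalization; only the slope of T(n) matters."""
    return (ZETA * Q * n / (lam0 * n**beta)) ** (1.0 / alpha)

n = np.logspace(2, 6, 30)
for beta in (2.0, 1.0):   # subthermal vs thermalized cooling
    T = equilibrium_T(n, beta)
    gamma = 1.0 + np.gradient(np.log(T), np.log(n))
    print(f"beta = {beta}: gamma ~ {gamma.mean():.2f}")
# beta = 2 (cooling steeper than heating) gives gamma ~ 0.67 < 1;
# beta = 1 (same density dependence as the heating) gives gamma = 1.
```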
The reason is that the abundant presence of water leads to heating through absorption of IR radiation and subsequent collisional de-excitation (Takahashi et al. 1983) for densities between $`3\times 10^3`$ and $`3\times 10^4`$ cm<sup>-3</sup>. Temperature effects play a role here in the determination of the line opacities, but this is essentially a heating term that is between linear and quadratic in the ambient density. Since this heating rate is roughly proportional to the abundance of H<sub>2</sub>O, it is negligible for metallicities $`Z<0.1Z_{\odot }`$. At very high densities, $`n_\mathrm{H}>3\times 10^6`$ cm<sup>-3</sup>, the gas and dust become completely coupled, and $`\gamma `$ approaches unity since the gas temperature will simply follow the dust temperature. This only follows provided that the strength of the IR radiation field is independent of the ambient density. This point will be re-addressed in the next subsection on starburst galaxies. Note also that $`\gamma `$ decreases to unity from above for densities larger than $`10^5`$ cm<sup>-3</sup> because of the high temperature of the FIR background. This causes the dust to heat the gas and results in a relatively stiff EOS, even for low metallicities where H<sub>2</sub> and HD dominate. This is because at these high densities the overall cooling rates are at their thermodynamic limit, roughly linear in the density, while the dust-gas heating rate goes like the square of the density. Note that for $`T<200`$ K, HD quickly becomes the dominant coolant of pristine gas. The lower equilibrium temperature of the gas facilitates the thermalization of the level populations. Finally, for densities larger than $`3\times 10^5`$ cm<sup>-3</sup> radiative trapping becomes important for HD, leading to a density dependence of its cooling rate with an effective power less than unity. Even though no results are shown for intermediate (30-60 K) temperature dust backgrounds, $`\gamma `$ first decreases to somewhat below unity around $`10^5`$ cm<sup>-3</sup> in these cases, if the metallicity is less than $`0.1Z_{\odot }`$. It then approaches unity from below for the reasons given above. Finally, in the limit of a large velocity gradient, H<sub>2</sub>O plays a minor role and its heating (or cooling) does not strongly influence the thermal balance of the gas (Takahashi et al. 1983).

Since an IR background is naturally present in the form of the CMB for redshifts larger than $`z\sim 10`$–$`30`$, one can speculate on the nature of very early star formation. That is, any dense shielded region which has been enriched by metals through the first supernova explosions, viewed here as the product of the very first population III stars, will become more stable in the presence of an intense CMB background. Conversely, metal-poor regions will retain a polytropic exponent of $`\sim 1`$. Therefore, high redshift star formation may exhibit the counter-intuitive property that metal enrichment and a warm CMB together “halt” the process of star formation for a metallicity $`Z_\mathrm{c}>0.1Z_{\odot }`$, even though the gas is intrinsically capable of cooling efficiently (Spaans & Norman 1997). Once the CMB temperature drops below $`\sim 40`$ K, at $`z\sim 15`$, any dense, $`n_\mathrm{H}>10^4`$ cm<sup>-3</sup>, cloud of high or even modest metallicity rapidly develops a $`\gamma `$-value smaller than unity and is prone to collapse. In effect, although more detailed calculations are required to substantiate this, the CMB may play an integral part in determining the epoch of efficient early star formation.
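The redshift quoted for the $`\sim 40`$ K threshold can be checked with the standard scaling $`T_{\mathrm{CMB}}(z)=2.73(1+z)`$ K (a relation we add here; it is not written out in the text):

```python
# Invert T_CMB(z) = T0 * (1 + z) for the ~40 K threshold quoted above.
T0 = 2.73          # present-day CMB temperature in K
T_crit = 40.0
z = T_crit / T0 - 1.0
print(f"T_CMB drops below {T_crit:.0f} K at z ~ {z:.1f}")  # ~13.7, i.e. z ~ 15
```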
One also sees that the primordial IMF can change strongly as one goes from a $`\gamma >1`$ to a $`\gamma <1`$ EOS. In the former regime one expects only large density excursions to collapse, leading to an IMF biased toward massive stars.

### 4.3 Starburst Galaxies

A similar phenomenon can occur in luminous (IR) starburst galaxies in that warm, 50–200 K, opaque dust hardens the polytropic equation of state of a nuclear starburst region. The system Arp 220 is a possible example, being optically thick at 100 $`\mu `$m. From the results presented here, one concludes that such IR luminous opaque systems have an IMF which is quite shallow, favoring high mass star formation. This may in fact be desirable to maintain the required power of such galaxies through supernova explosions (cf. Doane & Mathews 1993). Note though that in these extreme systems there is a strong coupling between the ambient radiation field and the star formation rate $`R`$, and hence with the gas density. Typical Schmidt star formation laws adopt an observationally motivated value of 1–2 for the dependence of $`R`$ on gas density (Kennicutt 1998) in star-forming spiral galaxies, with a best fit value of $`r=1.4`$. Since the resulting dust temperature depends on the $`1/5`$ power of the ultraviolet energy density (Tielens & Hollenbach 1985), the effective $`\gamma `$ for $`r=2`$ ($`r=1`$) is 1.4 (1.2), consistent with the above results (see the sketch at the end of this section).

## 5 Discussion

The polytropic equation of state of interstellar gas clouds has been computed for a range of physical parameters, and its relation to the IMF, high redshift star formation, and starburst galaxies has been investigated. A much needed extension of the results presented here lies in MHD, and particularly turbulence. The magnetic and velocity fields are important for pressure support away from the most centrally condensed part of a molecular cloud core, and radiative transfer effects in a turbulent medium are non-trivial. More to the point, the polytropic EOS indices computed here, with their relevance to the IMF as discussed in Section 4, are to be applied to a medium with a given cloud density probability distribution, as investigated in Scalo et al. (1998). Such a cloud density probability distribution can only be derived from detailed hydrodynamic simulations which include magnetic pressure and Coriolis forces. Furthermore, $`\gamma `$ can also affect, in a way still to be investigated, star formation and the underlying IMF in other scenarios like turbulence-induced condensation or shocked-cloud induced star formation. We would like to mention as well the work by Ogino, Tomisaka & Nakamura (1999), which shows that the mass accretion rate in their polytropic simulations of collapsing clouds depends strongly on whether $`\gamma `$ is taken to be 0.8 or 1.2. Extensions in terms of multiple polytropic models for the core and envelope of a molecular cloud, such as discussed in Curry & McKee (1999), are another avenue to explore for the bulk properties of interstellar clouds. Caveats in the approach presented here are grain mantle evaporation, which can influence the abundance of H<sub>2</sub>O and hence the heating rate in the presence of an IR background; uncertainties in the chemical reaction rates of neutral-neutral reactions at low temperatures; and details of the freeze-out onto grains of molecular species.
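The starburst scaling quoted in Section 4.3 can be spelled out in a two-line sketch (ours): with $`T_{\mathrm{dust}}\propto u_{\mathrm{UV}}^{1/5}`$ and a Schmidt law $`R\propto n^r`$, gas locked to the dust obeys $`P\propto nT\propto n^{1+r/5}`$, i.e. an effective $`\gamma =1+r/5`$.

```python
# Effective polytropic index for a dust-coupled starburst medium:
# T ~ u_UV^(1/5), u_UV ~ R ~ n^r  =>  P ~ n T ~ n^(1 + r/5).
def gamma_eff(r):
    return 1.0 + r / 5.0

for r in (1.0, 1.4, 2.0):  # Schmidt-law exponents; r = 1.4 is the best fit
    print(f"r = {r:.1f} -> gamma_eff = {gamma_eff(r):.2f}")
# r = 2 gives 1.4 and r = 1 gives 1.2, matching the values in the text.
```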
We hope that through a synthesis of the approach adopted here and those of others mentioned above, one can address the question of cloud stability and collapse in a more general (cosmological) framework. We are grateful to the referee, John Scalo, for his constructive comments, which improved the presentation of this paper, and his emphasis on the computation of the zero metallicity case. Discussions with Ewine van Dishoeck and John Black on the collisional cross sections of H<sub>2</sub> are also gratefully acknowledged. MS is supported by NASA through grant HF-01101.01-97A, awarded by the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., for NASA under contract NAS 5-26555. J.S. has been supported in part by grants from NSF and NASA at UC Berkeley.
# Prospects for photon blockade in four level systems in the N configuration with more than one atom.

## I Introduction

Recent work by Imamoğlu et al suggested a promising scheme for observing photon blockade in a highly non-linear cavity, where the change in the Kerr nonlinearity due to single photon effects was enough to make the cavity non-resonant with modes of more than one photon. As described in their work, such a device could work as a single-photon turnstile, similar to that realised recently by Kim et al in a semiconductor junction, which might be useful for quantum computation and the generation of non-classical light fields. The work by Imamoğlu et al was based on the use of the adiabatic elimination procedure, and later studies showed that the breakdown of the procedure in the high dispersion limit leads to prohibitive restrictions on the parameter space where photon blockade could be observed in a multi-atom system. Further to this, Werner and Imamoğlu and Rebić et al suggested that these problems could be overcome by using a system with a single atom. The question of observing photon blockade in multi-atom systems is of more than just theoretical interest. Possible schemes for observing photon blockade depend on directing a low flux atomic beam through a high finesse cavity, and this implies that there will be an uncertainty in the number of atoms present in the cavity at any given time. We show that the photon blockade, in certain circumstances, remains strong even for a fluctuation in the atom number. Recent work has highlighted the importance of employing a mutual detuning between the cavity and the semiclassical coupling field to shift the many atom degenerate state out of resonance. Our work builds on this idea and presents a detailed atom-cavity dressed state calculation showing the parameter regimes which offer the best prospects for the observation of photon blockade in one, two and three atom systems. We show that current technology should allow the construction of a system exhibiting photon blockade. We suggest that photon blockade can be best realised, not in a MOT as originally envisaged by Imamoğlu et al, but by sending a low atom current beam through a cavity of the style realised by Hood et al and Münstermann et al.

Photon blockade in a cavity can be explained quite simply. An external field drives a cavity that is resonant when there are zero or one photon in the cavity and non-resonant for two (or more) photons. This can be achieved by introducing a medium into the cavity exhibiting a large Kerr non-linearity, which alters the refractive index as a function of the intensity. In general, one would expect to find large non-linearities in systems exhibiting electromagnetically induced transparency.

## II Dressed states of the atom-cavity system

Consider a four level atom as depicted in figure 1. The energy levels are labeled in order of increasing energy as $`|a\rangle `$, $`|b\rangle `$, $`|c\rangle `$ and $`|d\rangle `$ with associated energies of $`\hbar \omega _a`$, $`\hbar \omega _b`$, $`\hbar \omega _c`$ and $`\hbar \omega _d`$ respectively and transition frequencies $`\omega _{\alpha \beta }=\omega _\alpha -\omega _\beta `$ where $`\alpha ,\beta =a,b,c,d`$. The $`|b\rangle \leftrightarrow |c\rangle `$ transition is driven by a strong classical coupling field, with frequency $`\omega _{\text{class}}`$ and Rabi frequency $`\mathrm{\Omega }`$, detuned from the $`|b\rangle \leftrightarrow |c\rangle `$ transition by an amount $`\delta _{cb}=\omega _{\text{class}}-\omega _{cb}`$.
The atoms are in a cavity with resonance frequency $`\omega _{\text{cav}}`$ which is detuned from the $`|a\rangle \leftrightarrow |c\rangle `$ transition by an amount $`\delta _{ca}=\omega _{\text{cav}}-\omega _{ca}`$, detuned from the $`|b\rangle \leftrightarrow |d\rangle `$ transition by an amount $`\mathrm{\Delta }=\omega _{\text{cav}}-\omega _{db}`$ and not interacting with the $`|b\rangle \leftrightarrow |c\rangle `$ transition. The detunings $`\delta _{cb}`$ and $`\delta _{ca}`$ are set equal to ensure that the $`|a\rangle \leftrightarrow |b\rangle `$ transition is driven by a two photon resonance. We therefore define the mutual detuning $`\delta =\delta _{cb}=\delta _{ca}`$. The cavity is driven by an additional classical field with frequency $`\omega _e=\omega _{\text{cav}}`$ and power $`P`$. We analyse the effect of this field by examining the dressed states of the atom-cavity system. This configuration of fields and atomic levels is called the N configuration and has been considered previously. The cavity linewidth is $`\mathrm{\Gamma }_{\text{cav}}`$. The atom-cavity mode coupling is $`g_{\alpha _1\alpha _2}=\left(\omega _{\alpha _1\alpha _2}/2\hbar ϵ_0V_{\text{cav}}\right)^{1/2}\mu _{\alpha _1\alpha _2}`$ where $`\alpha _1`$ and $`\alpha _2`$ correspond to atomic levels, $`\mu _{\alpha _1\alpha _2}`$ is the electric dipole moment of the transition, $`V_{\text{cav}}`$ is the cavity volume and $`ϵ_0`$ is the permittivity of free space. The Hamiltonian for the system with $`N`$ atoms in the frame rotating at the cavity resonance frequency in the rotating-wave approximation is

$$\frac{\widehat{H}}{\hbar }=-i\stackrel{~}{\mathrm{\Gamma }}_c\underset{j=1}{\overset{N}{\sum }}\widehat{\sigma }_{cc}^j-i\stackrel{~}{\mathrm{\Gamma }}_d\underset{j=1}{\overset{N}{\sum }}\widehat{\sigma }_{dd}^j+\underset{j=1}{\overset{N}{\sum }}\mathrm{\Omega }\left(\widehat{\sigma }_{cb}^j+\widehat{\sigma }_{bc}^j\right)+\underset{j=1}{\overset{N}{\sum }}g_{ac}\left(\widehat{a}\widehat{\sigma }_{ca}^j+\widehat{a}^{\dagger }\widehat{\sigma }_{ac}^j\right)+\underset{j=1}{\overset{N}{\sum }}g_{bd}\left(\widehat{a}\widehat{\sigma }_{db}^j+\widehat{a}^{\dagger }\widehat{\sigma }_{bd}^j\right)-i\mathrm{\Gamma }_{\text{cav}}\widehat{a}^{\dagger }\widehat{a}$$ (2)

where $`\stackrel{~}{\mathrm{\Gamma }}_c=\mathrm{\Gamma }_c+i\delta `$, $`\stackrel{~}{\mathrm{\Gamma }}_d=\mathrm{\Gamma }_d+i\mathrm{\Delta }`$, $`\mathrm{\Gamma }_\alpha `$ is the decay rate from atomic state $`|\alpha \rangle `$, $`\widehat{a}`$ ($`\widehat{a}^{\dagger }`$) is the cavity photon annihilation (creation) operator and $`\widehat{\sigma }_{\alpha _1\alpha _2}^j`$ is the atomic operator $`|\alpha _1\rangle \langle \alpha _2|`$ acting on atom $`j`$. We find the dressed states by diagonalizing $`\widehat{H}`$. Fortunately, $`\widehat{H}`$ is block diagonal in the bare state basis. The $`n`$th block can be identified by starting with the state $`|a,a,\mathrm{},n\rangle `$, representing all the atoms in state $`|a\rangle `$ and the cavity field in the $`n`$ photon state $`|n\rangle `$, and finding the closed set of states coupled to $`|a,a,\mathrm{},n\rangle `$ by $`\widehat{H}`$. Diagonalizing this block gives the $`n`$ quanta manifold of dressed states. We first consider the case of a single atom in the cavity. The zero quanta manifold consists solely of the state $`|a,0\rangle `$.
The one quantum manifold is spanned by the states $`|a,1\rangle `$, $`|b,0\rangle `$ and $`|c,0\rangle `$. The corresponding block of $`\widehat{H}/\hbar `$ can be written in matrix form in this basis as

$$\frac{\widehat{H}_1^{(1)}}{\hbar }=\left[\begin{array}{ccc}-i\mathrm{\Gamma }_{\text{cav}}& 0& g_{ac}\\ 0& 0& \mathrm{\Omega }\\ g_{ac}& \mathrm{\Omega }& -i\stackrel{~}{\mathrm{\Gamma }}_c\end{array}\right]$$ (3)

where the superscript on $`\widehat{H}`$ refers to the number of atoms and the subscript to the number of quanta in the system. In order to simplify the expressions for eigenvalues and eigenvectors, we assume $`\mathrm{\Gamma }_{\text{cav}}=0`$. The figures which follow, however, have been generated using non-zero values of $`\mathrm{\Gamma }_{\text{cav}}`$. Diagonalising the matrix $`\widehat{H}_1^{(1)}/\hbar `$ with $`\mathrm{\Gamma }_{\text{cav}}=0`$ gives the dressed state energies

$`\mathcal{E}_+=\left(-i\stackrel{~}{\mathrm{\Gamma }}_c+\sqrt{-\stackrel{~}{\mathrm{\Gamma }}_c^2+4\left(\mathrm{\Omega }^2+g_{ac}^2\right)}\right)/2`$

$`\mathcal{E}_0=0`$

$`\mathcal{E}_{-}=\left(-i\stackrel{~}{\mathrm{\Gamma }}_c-\sqrt{-\stackrel{~}{\mathrm{\Gamma }}_c^2+4\left(\mathrm{\Omega }^2+g_{ac}^2\right)}\right)/2`$

and corresponding dressed states $`|D_+\rangle `$, $`|D_0\rangle `$ and $`|D_{-}\rangle `$ respectively. In this form, the real part of the eigenvalue corresponds to the state energy and the imaginary part to the width of the state. These eigenstates form the well known Mollow triplet and are presented in figure 2(a) as a function of the scaled mutual detuning, $`\delta /\mathrm{\Omega }`$, with $`\mathrm{\Gamma }_c/\mathrm{\Omega }=0.1`$, $`\mathrm{\Gamma }_{\text{cav}}/\mathrm{\Omega }=0.01`$ and $`g_{ac}/\mathrm{\Omega }=1`$. It is important to express the form of the central dressed state, $`|D_0\rangle `$, which is

$`|D_0\rangle =\frac{\mathrm{\Omega }}{\sqrt{\mathrm{\Omega }^2+g_{ac}^2}}|a,1\rangle -\frac{g_{ac}}{\sqrt{\mathrm{\Omega }^2+g_{ac}^2}}|b,0\rangle .`$

Photon blockade will occur in this dressed-state picture when the cavity driving field resonantly couples the zero to one quantum manifolds, and only weakly couples the one and two quanta manifolds. We can gauge the extent of these couplings by treating each transition driven by the cavity driving field as a separate, closed two-state system. This approach will break down when multiple states are excited simultaneously. However, in situations where the photon blockade effect occurs, the number of dressed states that are significantly occupied will be minimal and our two-state model should give a reasonably accurate picture of the degree of excitation of each transition. The effect of the cavity driving field on the cavity-atom system can be treated by including the additional term $`\hbar \beta (\widehat{a}+\widehat{a}^{\dagger })`$ on the right hand side of equation 2, where $`\beta =\sqrt{P\mathrm{\Gamma }_{\text{cav}}T^2/\left(4\hbar \omega _{\text{cav}}\right)}`$ is the external field-cavity mode coupling strength for a cavity mirror transmittance of $`T`$. In our two state model the cavity driving field drives transitions between lower and upper states, $`|L\rangle `$ and $`|U\rangle `$, in the $`n`$ and $`n+1`$ quanta manifolds respectively.
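The eigenvalues above are easy to verify numerically. The sketch below (ours; all rates in units of $`\mathrm{\Omega }`$) diagonalizes the matrix of Eq. (3) and compares with the closed-form Mollow-triplet energies, with $`\mathrm{\Gamma }_{\text{cav}}=0`$ as assumed there.

```python
import numpy as np

Omega, g_ac, Gamma_c, delta = 1.0, 1.0, 0.1, 0.0
Gt = Gamma_c + 1j * delta                     # Gamma-tilde_c

# One-atom, one-quantum block of Eq. (3) with Gamma_cav = 0.
H1 = np.array([[0.0,   0.0,   g_ac],
               [0.0,   0.0,   Omega],
               [g_ac,  Omega, -1j * Gt]])

root = np.sqrt(4 * (Omega**2 + g_ac**2) - Gt**2 + 0j)
analytic = np.array([0.0, (-1j * Gt + root) / 2, (-1j * Gt - root) / 2])

print(np.sort_complex(np.linalg.eigvals(H1)))
print(np.sort_complex(analytic))              # the two lists agree
```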
The effective Rabi frequency of the transition is given by $`\mathrm{\Omega }_e=\left|\beta \langle L|\widehat{a}|U\rangle \right|`$. Under these conditions, the steady state population of $`|U\rangle `$ is given by

$`\rho _{\text{exc}}=\frac{\mathrm{\Omega }_e^2}{2\mathrm{\Omega }_e^2+\mathrm{\Delta }_e^2+\mathrm{\Gamma }_U^2}`$

where $`\mathrm{\Delta }_e`$ is the detuning of the external cavity driving field from the $`|L\rangle \leftrightarrow |U\rangle `$ transition and $`\mathrm{\Gamma }_U`$ the decay rate of state $`|U\rangle `$, assumed to take population from $`|U\rangle `$ to $`|L\rangle `$. Note that we have ignored the decay rate from $`|L\rangle `$. We denote the maximum value of $`\rho _{\text{exc}}`$ over all transitions from a given lower state to all possible upper states in the $`n`$ quantum manifold as $`\rho _{\text{exc}}^{\left(n\right)}`$. For ideal photon blockade, we require $`\rho _{\text{exc}}^{\left(1\right)}\simeq 0.5`$ for the transition from the ground state, $`|L\rangle =|a,a,\mathrm{},0\rangle =|G_0\rangle `$, to the maximally coupled one quantum dressed state $`|G_1\rangle `$. For the case that $`|G_1\rangle =|D_0\rangle `$, we note that $`\mathrm{\Gamma }_U`$ will be small, because $`|D_0\rangle `$ contains no proportion of atomic state $`|c\rangle `$. Thus it is possible to inject a single quantum of energy into the atom-cavity system for modest values of $`\beta `$. Ideal blockade also requires that $`\rho _{\text{exc}}^{\left(2\right)}`$ be negligible for transitions between $`|L\rangle =|G_1\rangle `$ and states $`|U\rangle `$ of the two quantum manifold. We next consider the two atom, one quantum manifold of states. In this case, the basis states are $`|a,a,1\rangle `$, $`|a,b,0\rangle `$, $`|a,c,0\rangle `$, $`|b,a,0\rangle `$ and $`|c,a,0\rangle `$ and the corresponding block of $`\widehat{H}/\hbar `$ can be written in matrix form as

$$\frac{\widehat{H}_1^{(2)}}{\hbar }=\left[\begin{array}{ccccc}-i\mathrm{\Gamma }_{\text{cav}}& 0& g_{ac}& 0& g_{ac}\\ 0& 0& \mathrm{\Omega }& 0& 0\\ g_{ac}& \mathrm{\Omega }& -i\stackrel{~}{\mathrm{\Gamma }}_c& 0& 0\\ 0& 0& 0& 0& \mathrm{\Omega }\\ g_{ac}& 0& 0& \mathrm{\Omega }& -i\stackrel{~}{\mathrm{\Gamma }}_c\end{array}\right]$$ (4)

Diagonalising $`\widehat{H}_1^{(2)}/\hbar `$ yields the eigenvalues

$`\mathcal{E}_{+2}=\left(-i\stackrel{~}{\mathrm{\Gamma }}_c+\sqrt{-\stackrel{~}{\mathrm{\Gamma }}_c^2+4\left(\mathrm{\Omega }^2+2g_{ac}^2\right)}\right)/2`$

$`\mathcal{E}_{+1}=\left(-i\stackrel{~}{\mathrm{\Gamma }}_c+\sqrt{-\stackrel{~}{\mathrm{\Gamma }}_c^2+4\mathrm{\Omega }^2}\right)/2`$

$`\mathcal{E}_0=0`$

$`\mathcal{E}_{-1}=\left(-i\stackrel{~}{\mathrm{\Gamma }}_c-\sqrt{-\stackrel{~}{\mathrm{\Gamma }}_c^2+4\mathrm{\Omega }^2}\right)/2`$

$`\mathcal{E}_{-2}=\left(-i\stackrel{~}{\mathrm{\Gamma }}_c-\sqrt{-\stackrel{~}{\mathrm{\Gamma }}_c^2+4\left(\mathrm{\Omega }^2+2g_{ac}^2\right)}\right)/2`$

with associated dressed states $`|D_{+2}\rangle `$, $`|D_{+1}\rangle `$, $`|D_0\rangle `$, $`|D_{-1}\rangle `$ and $`|D_{-2}\rangle `$. The dressed state energies are plotted in figure 2(b) for the same conditions as in figure 2(a). There are some important similarities between the spectrum of eigenstates for the one atom and two atom cases. In each case there is a state with zero energy, indicating that transitions from the zero to the one quantum manifold are possible for a cavity driving field tuned to the cavity resonance $`\omega _{\text{cav}}`$. The states which are anti-crossing in each manifold are asymptotic to the lines $`\mathcal{E}/\mathrm{\Omega }=0`$ and $`\mathcal{E}/\mathrm{\Omega }=\delta /\mathrm{\Omega }`$ with the point of closest approach being at $`\delta /\mathrm{\Omega }=0`$.
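Returning briefly to the two-state estimate introduced above: a few sample evaluations (our sketch; the numbers are illustrative, not the paper's) show how the blockade criterion $`\rho _{\text{exc}}^{(2)}\ll 0.5`$ is met once the $`1\rightarrow 2`$ quanta step is pushed off resonance.

```python
# Steady-state upper-level population of the closed two-state model:
# rho_exc = Omega_e^2 / (2 Omega_e^2 + Delta_e^2 + Gamma_U^2).
def rho_exc(Omega_e, Delta_e, Gamma_U):
    return Omega_e**2 / (2 * Omega_e**2 + Delta_e**2 + Gamma_U**2)

print(rho_exc(1.0, 0.0, 0.0))    # 0.5: resonant, narrow upper state
print(rho_exc(1.0, 5.0, 1.0))    # ~0.036: detuned 1 -> 2 step
print(rho_exc(1.0, 10.0, 1.0))   # ~0.0097: deep blockade regime
```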
It is also important to realise that although there are five distinct eigenstates, only three of them will couple to the ground state of the atom-cavity system, i.e. the matrix element $`\langle a,0|\widehat{a}|D_N\rangle `$ is non-zero only for $`N=0`$, $`\pm 2`$. For this reason only the optically active states $`|D_{+2}\rangle `$, $`|D_0\rangle `$ and $`|D_{-2}\rangle `$ are plotted in figure 2(b). We have also solved the analogous three and four atom Hamiltonians and we summarise our results for the eigenstates in each case as

$`\mathcal{E}_{+2}=\left(-i\stackrel{~}{\mathrm{\Gamma }}_c+\sqrt{-\stackrel{~}{\mathrm{\Gamma }}_c^2+4\left(\mathrm{\Omega }^2+Ng_{ac}^2\right)}\right)/2`$ (5)

$`\mathcal{E}_{+1}=\left(-i\stackrel{~}{\mathrm{\Gamma }}_c+\sqrt{-\stackrel{~}{\mathrm{\Gamma }}_c^2+4\mathrm{\Omega }^2}\right)/2`$ (6)

$`\mathcal{E}_0=0`$ (7)

$`\mathcal{E}_{-1}=\left(-i\stackrel{~}{\mathrm{\Gamma }}_c-\sqrt{-\stackrel{~}{\mathrm{\Gamma }}_c^2+4\mathrm{\Omega }^2}\right)/2`$ (8)

$`\mathcal{E}_{-2}=\left(-i\stackrel{~}{\mathrm{\Gamma }}_c-\sqrt{-\stackrel{~}{\mathrm{\Gamma }}_c^2+4\left(\mathrm{\Omega }^2+Ng_{ac}^2\right)}\right)/2`$ (9)

where $`N=1,2,3,4`$ indicates the number of atoms in the cavity. The degeneracies of the eigenvalues presented in equations (5)-(9) are interesting to observe, namely the $`\mathcal{E}_{+2}`$, $`\mathcal{E}_0`$ and $`\mathcal{E}_{-2}`$ values are all non-degenerate, whilst the $`\mathcal{E}_{+1}`$ and $`\mathcal{E}_{-1}`$ values are $`\left(N-1\right)`$ fold degenerate. (The latter values, $`\mathcal{E}_{+1}`$ and $`\mathcal{E}_{-1}`$, do not occur for $`N=1`$.) The presence of the zero eigenvalue, $`\mathcal{E}_0`$, indicates that it is possible to inject one photon into the atom-cavity system with a cavity driving field tuned to $`\omega _{\text{cav}}`$. Now we consider the two quanta manifold for a cavity containing one atom. This manifold is spanned by the states $`|a,2\rangle `$, $`|b,1\rangle `$, $`|c,1\rangle `$ and $`|d,0\rangle `$ and the corresponding block of $`\widehat{H}/\hbar `$ in matrix form is

$$\frac{\widehat{H}_2^{(1)}}{\hbar }=\left[\begin{array}{cccc}-2i\mathrm{\Gamma }_{\text{cav}}& 0& \sqrt{2}g_{ac}& 0\\ 0& -i\mathrm{\Gamma }_{\text{cav}}& \mathrm{\Omega }& g_{bd}\\ \sqrt{2}g_{ac}& \mathrm{\Omega }& -i\left(\mathrm{\Gamma }_{\text{cav}}+\stackrel{~}{\mathrm{\Gamma }}_c\right)& 0\\ 0& g_{bd}& 0& -i\stackrel{~}{\mathrm{\Gamma }}_d\end{array}\right]$$

In order to observe photon blockade in such a system, it is essential that none of the eigenstates of $`\widehat{H}_2^{(1)}/\hbar `$ are resonantly coupled by the cavity driving field to the occupied states of the one quantum manifold. To achieve this, we require $`\rho _{\text{exc}}^{\left(2\right)}\ll 0.5`$; the smaller $`\rho _{\text{exc}}^{\left(2\right)}`$, the greater the degree of photon blockade. A plot showing the eigenenergies of this system with associated linewidths, $`\mathrm{\Gamma }_x`$ where $`x`$ is the dressed state under consideration, is presented in figures 3(a) and 3(b). In each case $`\mathrm{\Delta }/\mathrm{\Omega }=2`$, $`\mathrm{\Gamma }_c/\mathrm{\Omega }=\mathrm{\Gamma }_d/\mathrm{\Omega }=0.1`$, $`\mathrm{\Gamma }_{\text{cav}}/\mathrm{\Omega }=0.01`$ and $`g_{ac}/\mathrm{\Omega }=1`$; in figure 3(a) $`g_{bd}/\mathrm{\Omega }=1`$ and the energies are plotted as a function of $`\delta /\mathrm{\Omega }`$, whereas in figure 3(b) $`\delta /\mathrm{\Omega }=0`$ and the energies are plotted as a function of $`g_{bd}/\mathrm{\Omega }`$.
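As a numerical companion to the block $`\widehat{H}_2^{(1)}`$ just written down (and to figure 3), this sketch (ours, in units of $`\mathrm{\Omega }`$) extracts the eigenvalue of smallest real magnitude; its displacement away from zero is the anharmonicity that detunes the two-photon step.

```python
import numpy as np

Omega, g_ac, g_bd = 1.0, 1.0, 1.0
Gamma_c, Gamma_d, Gamma_cav = 0.1, 0.1, 0.01
delta, Delta = 0.0, 2.0
Gt_c, Gt_d = Gamma_c + 1j * delta, Gamma_d + 1j * Delta

# One-atom, two-quanta block (basis |a,2>, |b,1>, |c,1>, |d,0>).
H2 = np.array(
    [[-2j * Gamma_cav,    0.0,              np.sqrt(2) * g_ac,        0.0],
     [0.0,               -1j * Gamma_cav,   Omega,                    g_bd],
     [np.sqrt(2) * g_ac,  Omega,           -1j * (Gamma_cav + Gt_c),  0.0],
     [0.0,                g_bd,             0.0,                     -1j * Gt_d]])

ev = np.linalg.eigvals(H2)
e0 = ev[np.argmin(np.abs(ev.real))]
print(f"eigenvalue nearest the cavity resonance: {e0:.3f}")  # Re(e0) != 0
```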
The important feature to recognise from these two traces is the shift of the smallest magnitude energy state away from zero, indicating (as highlighted previously) that photon blockade will indeed be possible in the one atom case. Similarly, for the two atom case, the basis states of the two quanta manifold are $`|a,a,2\rangle `$, $`|a,b,1\rangle `$, $`|a,c,1\rangle `$, $`|b,a,1\rangle `$, $`|c,a,1\rangle `$, $`|a,d,0\rangle `$, $`|b,b,0\rangle `$, $`|b,c,0\rangle `$, $`|c,b,0\rangle `$, $`|c,c,0\rangle `$, and $`|d,a,0\rangle `$ where the notation is $`|\text{atom 1, atom 2, cavity field}\rangle `$. In matrix form, the corresponding block of the Hamiltonian is

$$\frac{\widehat{H}_2^{(2)}}{\hbar }=\left[\begin{array}{ccccccccccc}-2i\mathrm{\Gamma }_{\text{cav}}& 0& \sqrt{2}g_{ac}& 0& \sqrt{2}g_{ac}& 0& 0& 0& 0& 0& 0\\ 0& -i\mathrm{\Gamma }_{\text{cav}}& \mathrm{\Omega }& 0& 0& g_{bd}& 0& 0& g_{ac}& 0& 0\\ \sqrt{2}g_{ac}& \mathrm{\Omega }& -i\mathrm{\Gamma }_t& 0& 0& 0& 0& 0& 0& g_{ac}& 0\\ 0& 0& 0& -i\mathrm{\Gamma }_{\text{cav}}& \mathrm{\Omega }& 0& 0& g_{ac}& 0& 0& g_{bd}\\ \sqrt{2}g_{ac}& 0& 0& \mathrm{\Omega }& -i\mathrm{\Gamma }_t& 0& 0& 0& 0& g_{ac}& 0\\ 0& g_{bd}& 0& 0& 0& -i\stackrel{~}{\mathrm{\Gamma }}_d& 0& 0& 0& 0& 0\\ 0& 0& 0& 0& 0& 0& 0& \mathrm{\Omega }& \mathrm{\Omega }& 0& 0\\ 0& 0& 0& g_{ac}& 0& 0& \mathrm{\Omega }& -i\stackrel{~}{\mathrm{\Gamma }}_c& 0& \mathrm{\Omega }& 0\\ 0& g_{ac}& 0& 0& 0& 0& \mathrm{\Omega }& 0& -i\stackrel{~}{\mathrm{\Gamma }}_c& \mathrm{\Omega }& 0\\ 0& 0& g_{ac}& 0& g_{ac}& 0& 0& \mathrm{\Omega }& \mathrm{\Omega }& -i\mathrm{\Gamma }_t& 0\\ 0& 0& 0& g_{bd}& 0& 0& 0& 0& 0& 0& -i\stackrel{~}{\mathrm{\Gamma }}_d\end{array}\right]$$

where $`\mathrm{\Gamma }_t=\mathrm{\Gamma }_{\text{cav}}+\stackrel{~}{\mathrm{\Gamma }}_c`$. The energy eigenvalues of $`\widehat{H}_2^{(2)}/\hbar `$ are presented in figures 4(a) and 4(b). As in the two atom, single quantum case there are optically inactive states, and these have been removed from the figures. Figure 4(a) was generated for $`\mathrm{\Delta }/\mathrm{\Omega }=2`$, $`\delta /\mathrm{\Omega }=0`$, $`\mathrm{\Gamma }_c/\mathrm{\Omega }=\mathrm{\Gamma }_d/\mathrm{\Omega }=0.1`$, $`\mathrm{\Gamma }_{\text{cav}}/\mathrm{\Omega }=0.01`$ and $`g_{ac}/\mathrm{\Omega }=1`$. As can be seen, for this case there is a zero eigenvalue, so we would expect resonant coupling from the single quantum manifold to the two quantum manifold by a cavity driving field tuned to $`\omega _{\text{cav}}`$ and hence photon blockade would not be observed in this case. However, in figure 4(b) we illustrate the effect of introducing a small mutual detuning, $`\delta `$, from state $`|c\rangle `$. The parameters used were $`\mathrm{\Delta }/\mathrm{\Omega }=2`$, $`\delta /\mathrm{\Omega }=0.5`$, $`\mathrm{\Gamma }_c/\mathrm{\Omega }=\mathrm{\Gamma }_d/\mathrm{\Omega }=0.1`$, $`\mathrm{\Gamma }_{\text{cav}}/\mathrm{\Omega }=0.01`$ and $`g_{ac}/\mathrm{\Omega }=1`$. It is important to observe the shift of the smallest magnitude eigenvalue from zero. Although this shift is less than for the corresponding single atom case, it is still feasible to consider building a cavity to observe this photon blockade. This point will be elaborated on in the conclusions.

## III Photon blockade in a realistic system

Of critical importance to the experimental observation of photon blockade in atomic systems is the question of how to trap single or small numbers of atoms within a small, high finesse optical cavity.
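Since the experimental atom number will fluctuate, it is worth recalling how the optically active side resonances of Eqs. (5)-(9) move with $`N`$; the sketch below (ours, with the decay rates dropped for clarity) evaluates the $`\sqrt{\mathrm{\Omega }^2+Ng_{ac}^2}`$ scaling that the off-resonant scheme of Section IV exploits.

```python
import numpy as np

# Real part of E_{+2} from Eqs. (5)-(9) at delta = 0, Gamma terms ignored.
Omega, g_ac = 1.0, 1.0
for N in (1, 2, 3, 4):
    print(f"N = {N}: E_+2 = {np.sqrt(Omega**2 + N * g_ac**2):.3f} Omega")
```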
We know of no demonstration of continuous trapping; however, recent experiments conducted by Hood et al and Münstermann et al have shown that it is possible to have a cavity with extremely low atom fluxes passing through it. This was achieved in the former case by allowing atoms to fall from a leaky MOT into the cavity, and in the latter by directing atoms out of a MOT and into the cavity. One would expect such atomic ejections to be stochastic in nature and, as a consequence, the probability of having a certain number of atoms in the cavity should follow Poissonian statistics. In order to build a device which uses photon blockade, it will therefore be necessary to ensure that significant photon blockade will be observed over as wide a range of atomic numbers as possible; otherwise the photon blockade could be lost and the performance of the device degraded. A complication in the experimental realisation of photon blockade is that the parameters chosen in the theoretical plots shown above were assumed to be independent variables. In general this will not be the case unless great care is taken in the preparation of an experiment. Specifically, for transitions in an alkali vapour which might realise the N configuration, one would expect the dipole moments of all relevant transitions to be of the same order, with the atom-cavity coupling set solely by the cavity volume, a parameter shared by both the $`|a\rangle \leftrightarrow |c\rangle `$ and $`|b\rangle \leftrightarrow |d\rangle `$ transitions. One would therefore expect $`g_{ac}\simeq g_{bd}=g`$. Also, because of the shared cavity, the detuning parameters are not independent, so if we assume that $`\omega _{ca}-\omega _{db}=\delta _\omega `$ then we find that $`\mathrm{\Delta }=\delta +\delta _\omega `$. These extra considerations will be included in the analysis to follow, which concentrates on experimentally realisable effects rather than the general theoretical demonstration presented above. We start by investigating photon blockade using the parameters $`\mathrm{\Gamma }_{\text{cav}}/\mathrm{\Omega }=0.01`$, $`\mathrm{\Gamma }_c/\mathrm{\Omega }=\mathrm{\Gamma }_d/\mathrm{\Omega }=0.1`$, $`\delta _\omega /\mathrm{\Omega }=0`$ and $`\beta /\mathrm{\Omega }=1`$. For these values $`\rho _{\text{exc}}^{\left(1\right)}\simeq 0.5`$, as is expected due to the presence of the strongly absorbing state at the cavity resonance. In figure 5(a) we show a pseudo-colour plot of $`\rho _{\text{exc}}^{\left(2\right)}`$ as a function of $`\delta /\mathrm{\Omega }`$ and $`g/\mathrm{\Omega }`$ for the one atom two quanta case, where colour is indicative of the value of $`\rho _{\text{exc}}^{\left(2\right)}`$, blue being 0 and red being 0.5. The graph clearly shows $`\rho _{\text{exc}}^{\left(2\right)}`$ decreasing monotonically with $`g/\mathrm{\Omega }`$, indicating that the effectiveness of photon blockade correspondingly increases. This is expected, as a large $`g`$ will give rise to a highly nonlinear system. Also note that $`\rho _{\text{exc}}^{\left(2\right)}`$ increases as $`\left|\delta /\mathrm{\Omega }\right|`$ increases, indicating that the photon blockade is a resonance phenomenon. In figure 5(b) we show the analogous $`\rho _{\text{exc}}^{\left(2\right)}`$ plot for two atoms. In accord with the preliminary results which suggested that photon blockade was not possible in the multi-atom system, we find $`\rho _{\text{exc}}^{\left(2\right)}\simeq 0.5`$ in the vicinity of $`\delta /\mathrm{\Omega }=0`$, implying that photon blockade will not be observable for these choices of parameters.
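For the Poissonian atom-number statistics mentioned above, a quick sketch (ours; the mean occupancy $`\overline{n}=0.3`$ is an assumed illustrative value, not a measured one) shows the weights a blockade device would have to tolerate:

```python
from math import exp, factorial

nbar = 0.3                       # assumed mean number of atoms in the cavity
for N in range(4):
    p = exp(-nbar) * nbar**N / factorial(N)
    print(f"P({N} atoms) = {p:.3f}")
# Empty-cavity and one-atom events dominate; two- and three-atom events
# are rare but non-negligible, which is why multi-atom blockade matters.
```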
However, by increasing the mutual detuning and employing a modest atom-cavity coupling, regions of strong photon blockade are observable, typified by the minimum recorded value on figure 5(b) of $`\rho _{\text{exc}}^{\left(2\right)}=0.0087`$. The three atom case is shown in figure 5(c) for the same parameter regime as the one and two atom cases. The form of $`\rho _{\text{exc}}^{\left(2\right)}`$ is similar to that of the two atom case, with the same overall structure but narrower regions where photon blockade should be observable; this is shown by the blue region of figure 5(c) being smaller than that of figure 5(b). Again very low values of $`\rho _{\text{exc}}^{\left(2\right)}`$ were obtained, with a minimum recorded value of $`\rho _{\text{exc}}^{\left(2\right)}=0.0070`$. To use photon blockade as a tool for new quantum devices it is necessary to identify real systems in which these effects may be observed. As an example of what may be achieved with the current state of the art, we take parameters from a recent experimental paper and make some minor assumptions about how they might apply to atoms falling through a high finesse cavity. It should certainly be possible to achieve a coupling field Rabi frequency of $`\mathrm{\Omega }=10`$ MHz and an external field-cavity coupling strength of $`\beta =1`$ MHz, and Hood et al achieve $`g=120`$ MHz and $`\mathrm{\Gamma }_{\text{cav}}=40`$ MHz. For a system based on transitions in the <sup>87</sup>Rb D<sub>2</sub> line we may assume $`\delta _\omega =6600`$ MHz and $`\mathrm{\Gamma }_c=\mathrm{\Gamma }_d=17.8`$ MHz. We ignore the Zeeman magnetic sublevels since these form a simple, effective three level $`\mathrm{\Lambda }`$ system (for states $`|a\rangle `$, $`|b\rangle `$ and $`|c\rangle `$) for the situation under consideration. Assuming equal rates of radiative decay, these translate into our system as $`g/\mathrm{\Omega }=12`$, $`\mathrm{\Gamma }_c/\mathrm{\Omega }=\mathrm{\Gamma }_d/\mathrm{\Omega }=1.78`$, $`\mathrm{\Gamma }_{\text{cav}}/\mathrm{\Omega }=4`$, $`\delta _\omega /\mathrm{\Omega }=660`$ and $`\beta /\mathrm{\Omega }=0.3`$. In figure 6(a) we present a plot of $`\rho _{\text{exc}}^{\left(1\right)}`$ as a function of $`g/\mathrm{\Omega }`$ and $`\delta /\mathrm{\Omega }`$ for one atom in the cavity, with the plots showing $`\rho _{\text{exc}}^{\left(2\right)}`$ in (b) and (c) for one and two atoms respectively. There are some important features to note in figures 6. In 6(a) $`\rho _{\text{exc}}^{\left(1\right)}`$ increases monotonically with $`g/\mathrm{\Omega }`$ and, for the scale used, is independent of $`\delta /\mathrm{\Omega }`$. The value of $`\rho _{\text{exc}}^{\left(1\right)}`$ is qualitatively very similar for one, two and three atoms, with the values of $`\rho _{\text{exc}}^{\left(1\right)}`$ slightly increasing as the number of atoms increases. As an example, for $`g/\mathrm{\Omega }=12`$, $`\rho _{\text{exc}}^{\left(1\right)}=0.3099`$, $`0.3824`$ and $`0.4148`$ for one, two and three atoms respectively. Traces 6(b) and (c) show $`\rho _{\text{exc}}^{\left(2\right)}`$ for the one and two atom cases respectively. The value of $`\rho _{\text{exc}}^{\left(2\right)}`$ for the one atom case is very small across the entire parameter space, indicating that photon blockade should be easy to observe for a single atom. Of interest is the resonance in the vicinity of $`\delta =-\delta _\omega `$, which will be discussed below.
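For bookkeeping, the quoted <sup>87</sup>Rb numbers can be reduced to units of $`\mathrm{\Omega }`$ in a few lines (our sketch; parameter names are ours):

```python
Omega = 10.0                                  # coupling Rabi frequency, MHz
params_MHz = {"g": 120.0, "Gamma_cav": 40.0,
              "Gamma_c": 17.8, "Gamma_d": 17.8, "delta_omega": 6600.0}
for name, value in params_MHz.items():
    print(f"{name}/Omega = {value / Omega:g}")
# g/Omega = 12, Gamma_cav/Omega = 4, Gamma_c/Omega = 1.78, delta_omega/Omega = 660
```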
There is an extra resonance in the vicinity of $`\delta =0`$ for the one atom case, although this is not present to the same extent in the two and three atom cases. For two atoms, and also for three atoms (not shown), the overall values (away from the resonances) of $`\rho _{\text{exc}}^{\left(2\right)}`$ appear to increase with the number of atoms and are much larger than for the one atom case. Qualitatively, $`\rho _{\text{exc}}^{\left(2\right)}`$ for two and three atoms are extremely similar, with maximum values along the line $`g/\mathrm{\Omega }=12`$ of $`\rho _{\text{exc}}^{\left(2\right)}=0.2244`$ and $`0.3092`$ for two and three atoms respectively. These observations would appear to agree with the intuitive idea that photon blockade would be more difficult in multi-atom systems and that there is a qualitative difference between the single atom case and multi-atom cases. The results for $`\rho _{\text{exc}}^{\left(2\right)}`$ in the vicinity of $`\delta /\mathrm{\Omega }=-660`$ are shown in figure 7. The significance of this region is that we have $`\delta =-\delta _\omega `$, so that the cavity is now resonant with the $`|b\rangle \leftrightarrow |d\rangle `$ transition. To our knowledge, this situation has not been previously explored, and the consequences it has for photon blockade are significant. In figure 7 we present $`\rho _{\text{exc}}^{\left(2\right)}`$ for one, two and three atoms in plots (a), (b) and (c) respectively. The behaviour of $`\rho _{\text{exc}}^{\left(1\right)}`$ shows no resonance phenomena and is described above; it is only by considering $`\rho _{\text{exc}}^{\left(2\right)}`$ that the resonance is observed. In 7(a) we see generally small values for $`\rho _{\text{exc}}^{\left(2\right)}`$, indicating that photon blockade should be observable. With the exception of the ‘shelf’ for $`g/\mathrm{\Omega }\lesssim 1`$, $`\rho _{\text{exc}}^{\left(2\right)}`$ has a chevron shape similar to that observed in the ideal case, shown in figure 5(a), although with very much smaller values. In figures 7(b) and (c), we observe a roughly triangular region of low $`\rho _{\text{exc}}^{\left(2\right)}`$, superimposed on the background of $`\rho _{\text{exc}}^{\left(2\right)}`$ noted earlier. Values of $`\rho _{\text{exc}}^{\left(2\right)}<0.01`$ are present where $`\rho _{\text{exc}}^{\left(1\right)}>0.35`$ for both the two and three atom cases. This suggests that the non-linearity of the system is very large about this resonance. Further investigations of this resonance will be done with increasing number of atoms to show how robust the photon blockade will be, since any relaxation of the requirement for a low number of atoms will enhance the prospects for experimental verification of this effect.

## IV Photon blockade with an off resonant cavity

The discussions above have used a system where the cavity is driven resonantly by the external field. This has a significant drawback in trying to realise a continuously operating device inasmuch as, when there are no atoms in the cavity, the cavity will absorb photons from the external field. To overcome this using the on-resonance configuration, one must run the experiment in a pulsed mode to ensure that there are no photons in the cavity prior to atoms entering the cavity. There is, however, another alternative. This occurs when the cavity driving field is not tuned to the cavity resonance, but instead is tuned to a side resonance of the one atom dressed system.
In this case there will be strong coupling between the external driving field and the cavity when there is one atom in the cavity and no coupling when there are zero atoms in the cavity. By studying the form of the dressed state eigenvalues given in equation 9, it is clear that the eigenvalues depend on the number of atoms, so that it should be possible to choose $`g_{ac}`$ such that the one-quantum manifold is coupled into only when there is exactly one atom in the cavity. These conditions are best met when both $`\delta `$ and $`\mathrm{\Omega }`$ are small compared to $`g_{ac}`$, to ensure the maximum shift in eigenvalues with number of atoms.

## V Conclusions

We have presented detailed calculations which show the parameter space where photon blockade in one, two and three atom systems should be observable. By considering transitions on the <sup>87</sup>Rb D<sub>2</sub> line, we have suggested that photon blockade should be observable using current state of the art technology, such as has been recently demonstrated. We have pointed out that strong photon blockade should be possible when the cavity resonance is tuned to the $`|b\rangle \leftrightarrow |d\rangle `$ transition, despite a large mutual detuning, $`\delta `$. This resonance may relax the requirements on the energy levels of the atomic species under consideration. We gratefully acknowledge financial support from the EPSRC and useful discussions with Ole Steuernagel (University of Hertfordshire), Derek Richards (The Open University), Danny Segal and Almut Beige (Imperial College).

## VI Figures

Figure 1: Energy level diagram for the four level N system. The atomic levels are labelled in order of increasing energy as $`|a\rangle `$, $`|b\rangle `$, $`|c\rangle `$, and $`|d\rangle `$ with energies $`\hbar \omega _a`$, $`\hbar \omega _b`$, $`\hbar \omega _c`$ and $`\hbar \omega _d`$ respectively. A strong classical coupling field with frequency $`\omega _{\text{class}}`$ and Rabi frequency $`\mathrm{\Omega }`$ is applied to the $`|b\rangle \leftrightarrow |c\rangle `$ transition and detuned from it by an amount $`\delta =\omega _{cb}-\omega _{\text{class}}`$. The atoms are placed in a cavity with resonance frequency $`\omega _{\text{cav}}`$ which is detuned from the $`|a\rangle \leftrightarrow |c\rangle `$ transition by $`\delta `$ and from the $`|b\rangle \leftrightarrow |d\rangle `$ transition by $`\mathrm{\Delta }=\omega _{db}-\omega _{\text{cav}}`$. The cavity is driven resonantly by an external classical driving field with external field-cavity coupling strength $`\beta `$ and frequency $`\omega _{\text{cav}}`$.

Figure 2: Eigenvalues of the one quantum manifold as a function of the scaled mutual detuning $`\delta /\mathrm{\Omega }`$ for one and two atoms in the cavity. In each case the parameters used were $`g_{ac}/\mathrm{\Omega }=1`$, $`\mathrm{\Gamma }_{\text{cav}}/\mathrm{\Omega }=0.01`$ and $`\mathrm{\Gamma }_c/\mathrm{\Omega }=0.1`$. In figure 2(a) the spectrum for a single atom interacting with a single cavity photon traces out the well known Mollow triplet. In figure 2(b) we present the analogous trace for two atoms instead of one, where the optically inactive dressed states $`|D_{+1}\rangle `$ and $`|D_{-1}\rangle `$ have been removed.

Figure 3: Eigenvalues of the two quanta manifold for a single atom. Parameters used were the same as those in figure 2, but with the addition of $`\mathrm{\Gamma }_d/\mathrm{\Omega }=0.1`$ and $`\mathrm{\Delta }/\mathrm{\Omega }=2`$.
Figure 3(a) has $`g_{bd}/\mathrm{\Omega }=1`$ and the energy eigenvalues are plotted as a function of scaled mutual detuning, whilst figure 3(b) has $`\delta /\mathrm{\Omega }=0`$ and the energy eigenvalues are plotted as a function of $`g_{bd}/\mathrm{\Omega }`$.

Figure 4: Eigenvalues of the two quanta manifold for two atoms in the cavity with the parameters $`\mathrm{\Delta }/\mathrm{\Omega }=2`$, $`\mathrm{\Gamma }_c/\mathrm{\Omega }=\mathrm{\Gamma }_d/\mathrm{\Omega }=0.1`$, $`\mathrm{\Gamma }_{\text{cav}}/\mathrm{\Omega }=0.01`$ and $`g_{ac}/\mathrm{\Omega }=1`$, as a function of $`g_{bd}/\mathrm{\Omega }`$. In 4(a) the mutual detuning $`\delta /\mathrm{\Omega }=0`$ and no photon blockade is observed; in 4(b) a small mutual detuning of $`\delta /\mathrm{\Omega }=0.5`$ is applied and the photon blockade is restored for a cavity driving field tuned to $`\omega _{\text{cav}}`$. Note that the optically inactive states have been removed in these traces also.

Figure 5: $`\rho _{\text{exc}}^{\left(2\right)}`$ plotted as a function of scaled detuning ($`\delta /\mathrm{\Omega }`$) and scaled coupling ($`g/\mathrm{\Omega }`$) for one atom (a), two atoms (b) and three atoms (c). The colour of the plot shows the value of $`\rho _{\text{exc}}^{\left(2\right)}`$, with blue being zero and red 0.5. The parameters chosen for these figures were $`\mathrm{\Gamma }_{\text{cav}}/\mathrm{\Omega }=0.01`$, $`\mathrm{\Gamma }_c/\mathrm{\Omega }=\mathrm{\Gamma }_d/\mathrm{\Omega }=0.1`$, $`\delta _\omega /\mathrm{\Omega }=0`$ and $`\beta /\mathrm{\Omega }=1`$. Note that over this parameter range, $`\rho _{\text{exc}}^{\left(1\right)}\simeq 0.5`$.

Figure 6: Pseudo-colour plots of $`\rho _{\text{exc}}^{\left(1\right)}`$ and $`\rho _{\text{exc}}^{\left(2\right)}`$. The parameters chosen were those which could be expected in a realistic experiment involving <sup>87</sup>Rb atoms. These parameters were $`\mathrm{\Gamma }_c/\mathrm{\Omega }=\mathrm{\Gamma }_d/\mathrm{\Omega }=1.78`$, $`\mathrm{\Gamma }_{\text{cav}}/\mathrm{\Omega }=4`$, $`\delta _\omega /\mathrm{\Omega }=660`$ and $`\beta /\mathrm{\Omega }=0.3`$. Figure (a) corresponds to $`\rho _{\text{exc}}^{\left(1\right)}`$ for one atom, whilst (b) and (c) correspond to $`\rho _{\text{exc}}^{\left(2\right)}`$ for one and two atoms respectively. Note that the colour scales for each figure vary according to the maximum values of $`\rho _{\text{exc}}`$.

Figure 7: Pseudo-colour plots of $`\rho _{\text{exc}}^{\left(2\right)}`$ in the vicinity of the resonance $`\delta =-\delta _\omega `$ for one (a), two (b) and three (c) atoms. The parameters used were the same as in figure 6.
# Test of quantum nonlocality for cavity fields

## I Introduction

Entangled states have been at the focus of discussions in quantum information theory encompassing quantum teleportation, computing, and cryptography. Two-body entanglement allows diverse measurement schemes which can admit tests of quantum nonlocality. Using the atom-field interaction in a high-$`Q`$ cavity we can produce quantum entanglement between cavity fields, between atoms, and between a cavity and an atom. An entangled pair of atoms has been experimentally generated using cavity QED (quantum electrodynamics). The entanglement of atoms and fields in the cavity can be utilized towards a realization of the controlled-NOT gate for quantum computation. A pair of atoms can be prepared in an entangled state using the atom-field interaction in a high-$`Q`$ cavity. The interaction of a single two-level atom with a cavity field brings about entanglement of the atom and the cavity field. If the atom does not decay into other internal states after it comes out from the cavity, the entanglement will survive for a long time and it can be transferred to a second atom interacting with the cavity field. The violation of Bell's inequality can be tested by the joint measurement of atomic states. There are proposals to entangle fields in two spatially separated cavities using the atom-field interaction. A two-level atom in its excited state passes sequentially through two resonant single-mode vacuum cavities and is found to be in its ground state after the second-cavity interaction. If the interaction with the first cavity is equivalent to a $`\pi /2`$ vacuum pulse and the second-cavity interaction to a $`\pi `$ pulse, then the atom could have deposited a photon either in the first cavity or in the second, so that the final state $`|\mathrm{\Psi }_f\rangle `$ of the two-cavity field is

$$|\mathrm{\Psi }_f\rangle =\frac{1}{\sqrt{2}}(|1,0\rangle +\text{e}^{i\phi }|0,1\rangle )$$ (1)

where $`|1,0\rangle `$ denotes one photon in the first cavity and none in the second, and $`|0,1\rangle `$ vice versa. Using the entangled cavity field (1), an unknown atomic quantum state can also be teleported. A three-level atom can act as a quantum switch to couple a vacuum cavity with an external classical coherent field. If the atom is in its intermediate state before it enters the cavity, the AC Stark shift between the intermediate and upper-most states brings about the resonant coupling of the cavity with the external field, so that the external field can be fed into the cavity. If the atom is initially in its lower-most state, the atom is unable to switch on the coupling between the cavity and the external field. Using the Ramsey interference and atomic quantum switch, Davidovich et al. suggested a coherent state entanglement $`|\mathrm{\Psi }_c\rangle `$ between two separate cavities: $`|\mathrm{\Psi }_c\rangle =B_1|\alpha ,0\rangle +B_2|0,\alpha \rangle `$, where $`|\alpha ,0\rangle `$ denotes the first cavity in the coherent state of amplitude $`\alpha `$ and the second in the vacuum.

In this paper, we are interested in a test of nonlocality for the entangled field prepared in the spatially separated cavities. Despite the suggestions on the production of entangled cavity fields, the test of quantum entanglement, i.e., the measurement of the violation of Bell's inequality for the entangled cavity fields, has not been studied. To test quantum nonlocality, we first couple classical driving fields with the cavities where a nonlocal state is prepared.
Two independent two-level atoms are then sent through respective cavities to interact off-resonantly with the cavity fields. The atomic states are measured after the interaction. Bell's inequality can be tested by the joint probabilities of the two-level atoms being in their excited or ground states. We find that quantum nonlocality can also be tested using a single atom sequentially interacting with the two cavities. We also consider potential experimental errors. The atoms normally have a Gaussian velocity distribution with a normalized standard deviation of less than 5%. We show that, even with the experimental errors caused by the velocity distribution, the test can be feasible.

## II Bell's inequality by parity measurement

It is important to choose the type of measurement variables when testing nonlocality for a given state. Banaszek and Wódkiewicz considered even and odd parities of the field state as the measurement variables, where a state is defined to be in the even (odd) parity if the state has even (odd) numbers of photons. The even and odd parity operators, $`\widehat{O}_E`$ and $`\widehat{O}_O`$, are the projection operators to measure the probabilities of the field having even and odd numbers of photons, respectively:

$$\widehat{O}_E=\underset{n=0}{\overset{\mathrm{}}{\sum }}|2n\rangle \langle 2n|;\widehat{O}_O=\underset{n=0}{\overset{\mathrm{}}{\sum }}|2n+1\rangle \langle 2n+1|.$$ (2)

To test the quantum nonlocality for the field state of the modes $`a`$ and $`b`$, we define the quantum correlation operator based on the joint parity measurements:

$$\widehat{\mathrm{\Pi }}^{ab}(\alpha ,\beta )=\widehat{\mathrm{\Pi }}_E^a(\alpha )\widehat{\mathrm{\Pi }}_E^b(\beta )-\widehat{\mathrm{\Pi }}_E^a(\alpha )\widehat{\mathrm{\Pi }}_O^b(\beta )-\widehat{\mathrm{\Pi }}_O^a(\alpha )\widehat{\mathrm{\Pi }}_E^b(\beta )+\widehat{\mathrm{\Pi }}_O^a(\alpha )\widehat{\mathrm{\Pi }}_O^b(\beta )$$ (3)

where the superscripts $`a`$ and $`b`$ denote the field modes and the displaced parity operator, $`\widehat{\mathrm{\Pi }}_{E,O}(\alpha )`$, is defined as

$$\widehat{\mathrm{\Pi }}_{E,O}(\alpha )=\widehat{D}(\alpha )\widehat{O}_{E,O}\widehat{D}^{\dagger }(\alpha ).$$ (4)

The displacement operator $`\widehat{D}(\alpha )`$ displaces a state by $`\alpha `$ in phase space. The displaced parity operator acts like a rotated spin projection operator in the spin measurement. We can easily derive that the local hidden variable theory imposes the following Bell's inequality:

$$|B(\alpha ,\beta )|\equiv |\langle \widehat{\mathrm{\Pi }}^{ab}(0,0)\rangle +\langle \widehat{\mathrm{\Pi }}^{ab}(0,\beta )\rangle +\langle \widehat{\mathrm{\Pi }}^{ab}(\alpha ,0)\rangle -\langle \widehat{\mathrm{\Pi }}^{ab}(\alpha ,\beta )\rangle |\le 2$$ (5)

where we call $`B(\alpha ,\beta )`$ the Bell function.

## III Parity measurement in cavity QED

Englert et al. proposed an experiment to determine the parity of the field in a high-$`Q`$ single-mode cavity. Let us consider a far-off resonant interaction of a two-level atom with a single-mode cavity field. If the detuning $`\mathrm{\Delta }=\omega _o-\omega `$ of the atomic transition frequency $`\omega _o`$ from the cavity-field frequency $`\omega `$ is much larger than the Rabi frequency $`\mathrm{\Omega }`$, there is no energy exchange between the atom and the field but the relative phase of the atomic states changes due to the AC Stark shift.
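Before turning to the measurement scheme, Eqs. (2)-(5) can be exercised directly. The sketch below (ours; the Fock-space truncation `N` and all names are our choices) evaluates the Bell function for the one-photon entangled state of Eq. (1) with $`\phi =0`$, using displaced parity operators in a truncated Fock basis.

```python
import numpy as np
from scipy.linalg import expm

N = 20                                          # Fock truncation per mode
a = np.diag(np.sqrt(np.arange(1, N)), 1)        # annihilation operator
P = np.diag((-1.0) ** np.arange(N))             # parity (-1)^n

def Pi(alpha):                                  # displaced parity D P D^dag
    D = expm(alpha * a.conj().T - np.conj(alpha) * a)
    return D @ P @ D.conj().T

psi = np.zeros(N * N)                           # (|1,0> + |0,1>)/sqrt(2)
psi[1 * N + 0] = psi[0 * N + 1] = 1 / np.sqrt(2)

def corr(al, be):                               # <Pi_a(al) Pi_b(be)>
    return float(np.real(psi @ np.kron(Pi(al), Pi(be)) @ psi))

al = be = np.sqrt(0.1)
B = corr(0, 0) + corr(0, be) + corr(al, 0) - corr(al, be)
print(B)    # about -2.18, so |B| > 2 and the inequality (5) is violated
```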
The change of the phase depends on the number of photons in the cavity and on the state of the atom:

$`|e,\psi _f\rangle \rightarrow \mathrm{exp}[i\mathrm{\Theta }(\widehat{n}+1)]|e,\psi _f\rangle ,`$ (6)

$`|g,\psi _f\rangle \rightarrow \mathrm{exp}[-i\mathrm{\Theta }(\widehat{n})]|g,\psi _f\rangle `$ (7)

where the atom-field state $`|e,\psi _f\rangle `$ denotes the atom in its excited state and the cavity field in $`|\psi _f\rangle `$. The phase $`\mathrm{\Theta }(\widehat{n})`$ is a function of the number of photons in the cavity and of the atom-field coupling time and strength. If a $`\pi /2`$ pulse is applied on a two-level atom before it enters the cavity, the atom initially in its excited state transits to a superposition state without changing the cavity field:

$$|e,\psi _f\rangle \rightarrow \frac{1}{\sqrt{2}}[|e,\psi _f\rangle +i\text{e}^{i\varphi _1}|g,\psi _f\rangle ],$$ (8)

where $`\varphi _1`$ is determined by the phase of the pulse field. The atom then passes through a cavity and undergoes an off-resonant interaction with the cavity field, where the atom-field coupling function is selected to be

$$\mathrm{\Theta }(\widehat{n})=\frac{\pi }{2}\widehat{n}.$$ (9)

After the atom comes out from the cavity, the atom is made to interact with the second $`\pi /2`$ pulse. If the phases of the first and the second pulses are chosen to satisfy the following relation

$$i\text{e}^{i(\varphi _1-\varphi _2)}=-1,$$ (10)

the atom-field state after the second $`\pi /2`$ pulse is

$$\frac{1}{2}\left\{i[1-(-1)^{\widehat{n}}]|e,\psi _f\rangle +\text{e}^{i\varphi _2}[1+(-1)^{\widehat{n}}]|g,\psi _f\rangle \right\}.$$ (11)

If the atom is detected in its excited state, the field has the odd parity. If the atom is detected in its ground state, the field has the even parity. The atom tells us the parity of the field.

## IV Quantum nonlocality of cavity fields

To test quantum nonlocality of the field $`|\psi _f^{ab}\rangle `$ prepared in two spatially separated single-mode cavities, we use two two-level atoms as shown in Fig. 1(a). In this paper we assume that the mode structures of the cavities are identical and the atoms are independent identical atoms. The atoms are labelled as $`a`$ and $`b`$ to interact, respectively, with the fields in the cavities $`a`$ and $`b`$. Each atom is initially prepared in its excited state and sequentially passes through interaction zones of the first $`\pi /2`$ pulse, the cavity field, and the second $`\pi /2`$ pulse. The atoms-field state then becomes

$$|e_a,e_b\rangle |\psi _f^{ab}\rangle \rightarrow |\phi (\widehat{n}_a,\widehat{n}_b)\rangle |\psi _f^{ab}\rangle $$ (12)

where $`|\phi (\widehat{n}_a,\widehat{n}_b)\rangle `$ is the atomic state with the weights of the field operators ($`\widehat{n}_a`$ and $`\widehat{n}_b`$ are number operators of the fields in the cavities $`a`$ and $`b`$):

$`|\phi (\widehat{n}_a,\widehat{n}_b)\rangle =a(\widehat{n}_a)a(\widehat{n}_b)|e_a,e_b\rangle +a(\widehat{n}_a)b(\widehat{n}_b)|e_a,g_b\rangle `$ (13)

$`+b(\widehat{n}_a)a(\widehat{n}_b)|g_a,e_b\rangle +b(\widehat{n}_a)b(\widehat{n}_b)|g_a,g_b\rangle `$ (14)

where $`a(\widehat{n})=[e^{i\mathrm{\Theta }(\widehat{n}+1)}-e^{i(\varphi _1-\varphi _2)}e^{-i\mathrm{\Theta }(\widehat{n})}]/2`$ and $`b(\widehat{n})=ie^{i\varphi _2}[e^{i\mathrm{\Theta }(\widehat{n}+1)}+e^{i(\varphi _1-\varphi _2)}e^{-i\mathrm{\Theta }(\widehat{n})}]/2`$.
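The single-atom parity meter of Eqs. (6)-(11) can be checked with a small numerical sketch (ours; the phase choice $`\varphi _1=\pi /2`$, $`\varphi _2=0`$ is one solution of Eq. (10)): for every photon number the atom exits in $`|e\rangle `$ exactly when $`n`$ is odd.

```python
import numpy as np

phi1, phi2 = np.pi / 2, 0.0        # satisfies i exp[i(phi1 - phi2)] = -1

def pulse(phi):                    # pi/2 pulse in the (|e>, |g>) basis
    return np.array([[1.0, 1j * np.exp(-1j * phi)],
                     [1j * np.exp(1j * phi), 1.0]]) / np.sqrt(2)

for n in range(5):                 # photon number in the cavity
    shift = np.diag([np.exp(1j * np.pi * (n + 1) / 2),    # |e>: +Theta(n+1)
                     np.exp(-1j * np.pi * n / 2)])        # |g>: -Theta(n)
    c = pulse(phi2) @ shift @ pulse(phi1) @ np.array([1.0, 0.0])
    print(n, f"P_e = {abs(c[0])**2:.2f}")   # 1 for odd n, 0 for even n
```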
Choosing appropriate conditions for the atom-field couplings and pulse phases as shown in (9) and (10), the atoms-field state becomes

$`\frac{1}{4}\left\{i[1-(-1)^{\widehat{n}_a}]|e_a\rangle +\text{e}^{i\varphi _2}[1+(-1)^{\widehat{n}_a}]|g_a\rangle \right\}`$

$`\times \left\{i[1-(-1)^{\widehat{n}_b}]|e_b\rangle +\text{e}^{i\varphi _2}[1+(-1)^{\widehat{n}_b}]|g_b\rangle \right\}|\psi _f^{ab}\rangle .`$ (16)

If the atoms are jointly found in their excited states then we know that both the cavities are in the odd parity states. The joint probability $`P_{ee}`$ of the atoms being in their excited states is related to the expectation value of the following joint parity operator:

$$P_{ee}=\langle \widehat{\mathrm{\Pi }}_O^a(0)\widehat{\mathrm{\Pi }}_O^b(0)\rangle .$$ (17)

In Eqs. (3) and (5), it is seen that we need to know the joint parities of the displaced original fields to test quantum nonlocality. To displace the cavity field, external stable fields are coupled to the cavities as shown in Fig. 1(a). After a nonlocal field state is prepared in the cavities, we couple the cavities with the external fields to displace the original nonlocal field, then send two independent atoms through the respective cavities. The $`\pi /2`$ pulses are applied to the atoms before and after the cavity interaction to provide Ramsey interference effects. The atomic states are detected after the second $`\pi /2`$ pulses. $`P_{ee}(\alpha ,\beta )`$ denotes the joint probability of the atoms being in their excited states when the original fields in the cavities $`a`$ and $`b`$ are displaced by $`\alpha `$ and $`\beta `$, respectively. The expectation value of the quantum correlation operator in (3) is obtained from the joint probabilities:

$$\langle \widehat{\mathrm{\Pi }}^{ab}(\alpha ,\beta )\rangle =P_{ee}(\alpha ,\beta )-P_{eg}(\alpha ,\beta )-P_{ge}(\alpha ,\beta )+P_{gg}(\alpha ,\beta ).$$ (18)

If there are any displacement factors $`\alpha `$ and $`\beta `$ which result in the violation of Bell's inequality in Eq. (5), the field originally prepared in the cavities is quantum-mechanically nonlocal. After a closer look at Eq. (3), we find that we do not need the individual parity of each cavity field to test the inequality (5); we need only the parity of the total field. Instead of sending two atoms to the cavities, we now send a single two-level atom sequentially through the cavities as shown in Fig. 1(b). The atom is initially prepared in its excited state and undergoes $`\pi /2`$-pulse interactions before and after the cavity interactions. The atom-field coupling strength is selected to satisfy Eq. (9) and the phases, $`\varphi _a`$ and $`\varphi _b`$, of the two $`\pi /2`$ pulses are chosen such that $`\mathrm{exp}[i(\varphi _a-\varphi _b)]=1`$; then the atom-field state becomes

$$|e,\psi _f^{ab}\rangle \rightarrow \frac{1}{2}[1+(-1)^{\widehat{n}_a+\widehat{n}_b}]|e,\psi _f^{ab}\rangle +\frac{i}{2}[1-(-1)^{\widehat{n}_a+\widehat{n}_b}]|g,\psi _f^{ab}\rangle .$$ (19)

The external stable fields are taken to be coupled with the cavities to displace the cavity fields. The probability $`P_e(\alpha ,\beta )`$ of the atom being in its excited state, after having passed the displaced cavity fields and $`\pi /2`$ pulses, is the expectation value of the parity operators:

$$P_e(\alpha ,\beta )=\langle \widehat{\mathrm{\Pi }}_O^a(\alpha )\widehat{\mathrm{\Pi }}_O^b(\beta )\rangle +\langle \widehat{\mathrm{\Pi }}_E^a(\alpha )\widehat{\mathrm{\Pi }}_E^b(\beta )\rangle $$ (20)

where $`\alpha `$, $`\beta `$ denote the displacements of the fields in the cavities $`a`$ and $`b`$.
Similarly, the probability $`P_g(\alpha ,\beta )`$ of the atom being in its ground state is related to the odd parity of the total field. The expectation value of the quantum correlation function operator in Eq. (3) is then simply $$\langle \widehat{\mathrm{\Pi }}^{ab}(\alpha ,\beta )\rangle =P_e(\alpha ,\beta )-P_g(\alpha ,\beta ).$$ (21) This does not tell us the parity of each mode, but only the parity of the total field, which is enough to test the violation of Bell’s inequality. ## V Remarks We have suggested a simple way to test quantum nonlocality of cavity fields by measuring the states of atoms after their interaction with the cavity fields. The test does not require any numerical post-processing of the measured data: the difference in the probabilities of a single two-level atom being in its excited and ground states is directly related to the test of quantum nonlocality. In fact, the same measurement can also be used to reconstruct the two-mode Wigner function, as the mean parity of the field is proportional to it: $$W(\alpha ,\beta )=(2/\pi )^2\langle \widehat{\mathrm{\Pi }}^{ab}(\alpha ,\beta )\rangle .$$ (22) Experimental errors can easily arise from fluctuations in the atom-field coupling strength and interaction time. The atom-field coupling function $`\mathrm{\Theta }(\widehat{n})`$ depends on the mode structure of the cavity field and on the time during which the atom interacts with the cavity field. Because the atomic velocity fluctuates, the interaction time is subject to experimental error. Another error source may be the $`\pi /2`$ pulse operation. We analyze the possibility of measuring the violation of quantum nonlocality in a realistic experimental situation. The test of quantum nonlocality using the two-atom scheme in Fig. 1(a) is considered first, with potential experimental errors. The error in the atom-field coupling function is denoted by $`\mathrm{\Delta }\mathrm{\Theta }(\widehat{n})`$, which is the departure of the experimental value $`\mathrm{\Theta }(\widehat{n})`$ from the required value $`\mathrm{\Theta }_0(\widehat{n})`$: $$\mathrm{\Delta }\mathrm{\Theta }(\widehat{n})=\mathrm{\Theta }(\widehat{n})-\mathrm{\Theta }_0(\widehat{n})=\delta \widehat{n}$$ (23) where $`\mathrm{\Theta }_0(\widehat{n})=(\pi /2)\widehat{n}`$. The relative phases of the atomic states given by the $`\pi /2`$-pulse interactions are also subject to experimental errors. We take the phase error $`\mathrm{\Delta }\varphi `$ as $$\mathrm{\Delta }\varphi =(\varphi _2-\varphi _1)-\varphi _0;i\mathrm{exp}(i\varphi _0)=1.$$ (24) Note that the atomic state measurement is equivalent to the parity measurement as in Eq. (17) only when $`\mathrm{\Theta }(\widehat{n})=\mathrm{\Theta }_0(\widehat{n})`$ and $`\varphi _2-\varphi _1=\varphi _0`$. The errors in the atom-field coupling and in the phases of the $`\pi /2`$ pulses bring about a departure $`\mathrm{\Delta }\mathrm{\Pi }^{ab}(\alpha ,\beta )`$ of the joint atomic state probabilities from the expectation value of the parity operators in Eq. (18).
The mean error of $`\mathrm{\Delta }\mathrm{\Pi }^{ab}(\alpha ,\beta )`$ is calculated up to second order in $`\delta `$ and $`\mathrm{\Delta }\varphi `$ as follows $`\mathrm{\Delta }\mathrm{\Pi }^{ab}(\alpha ,\beta )`$ $`\equiv `$ $`P_{ee}(\alpha ,\beta )-P_{eg}(\alpha ,\beta )-P_{ge}(\alpha ,\beta )+P_{gg}(\alpha ,\beta )-\langle \widehat{\mathrm{\Pi }}^{ab}(\alpha ,\beta )\rangle `$ (25) $`=`$ $`-2\langle \psi _f^{ab}|\widehat{\mathrm{\Pi }}^{ab}(\alpha ,\beta )\left[\mathrm{\Delta }(\widehat{n}_a(\alpha ))+\mathrm{\Delta }(\widehat{n}_b(\beta ))\right]|\psi _f^{ab}\rangle `$ (26) where $$\mathrm{\Delta }(\widehat{n}_{a,b}(\alpha ))=\frac{1}{4}[2\widehat{n}_{a,b}(\alpha )+1]^2\delta ^2+\frac{1}{2}[2\widehat{n}_{a,b}(\alpha )+1]\delta \mathrm{\Delta }\varphi +\frac{1}{4}(\mathrm{\Delta }\varphi )^2$$ (27) and $`\widehat{n}_{a,b}(\alpha )=\widehat{D}^{\dagger }(\alpha )\widehat{n}_{a,b}\widehat{D}(\alpha )`$ are the displaced number operators for the field modes in the cavities $`a`$ and $`b`$. The mean error $`\mathrm{\Delta }B(\alpha ,\beta )`$ of the Bell function measurement in (5) is given by $$\mathrm{\Delta }B(\alpha ,\beta )=\mathrm{\Delta }\mathrm{\Pi }^{ab}(0,0)+\mathrm{\Delta }\mathrm{\Pi }^{ab}(\alpha ,0)+\mathrm{\Delta }\mathrm{\Pi }^{ab}(0,\beta )-\mathrm{\Delta }\mathrm{\Pi }^{ab}(\alpha ,\beta ).$$ (28) As an illustration of the experimental errors, consider the explicit example of the quantum nonlocal field (1). For simplicity, we take the phase factor to be zero, i.e., $`\phi =0`$. We know from an earlier work that Bell’s inequality is maximally violated, with $`B\simeq 2.19`$, when $`\alpha =-\beta `$ and $`|\alpha |^2\simeq 0.1`$. Substituting $`|\mathrm{\Psi }_f\rangle `$ of Eq. (1) into $`|\psi _f^{ab}\rangle `$ of Eq. (25), $`\mathrm{\Delta }\mathrm{\Pi }^{ab}(\alpha ,\beta )`$ is $`\mathrm{\Delta }\mathrm{\Pi }^{ab}(\alpha ,\beta )`$ $`\simeq `$ $`-2\langle \psi _f^{ab}|\widehat{\mathrm{\Pi }}^{ab}(\alpha ,\beta )|\psi _f^{ab}\rangle \langle \psi _f^{ab}|\left(\mathrm{\Delta }(\widehat{n}_a(\alpha ))+\mathrm{\Delta }(\widehat{n}_b(\beta ))\right)|\psi _f^{ab}\rangle `$ (29) $`=`$ $`-2\langle \widehat{\mathrm{\Pi }}^{ab}(\alpha ,\beta )\rangle \left\{c_1(\alpha ,\beta )\delta ^2+c_2(\alpha ,\beta )\delta \mathrm{\Delta }\varphi +(\mathrm{\Delta }\varphi )^2\right\}`$ (30) where the mean field approximation has been used. The expectation value $`\langle \widehat{\mathrm{\Pi }}^{ab}(\alpha ,\beta )\rangle =(2|\alpha -\beta |^2-1)e^{-2(|\alpha |^2+|\beta |^2)}`$, and the parameters $`c_1(\alpha ,\beta )=2(|\alpha |^4+|\beta |^4)+\frac{13}{2}(|\alpha |^2+|\beta |^2)+5`$, and $`c_2(\alpha ,\beta )=2(|\alpha |^2+|\beta |^2)+4`$. The mean error $`\mathrm{\Delta }B`$ for $`\alpha =-\beta \simeq \sqrt{0.1}`$ is given by $$\mathrm{\Delta }B\simeq 10.2\delta ^2+8.1\delta \mathrm{\Delta }\varphi +2.0(\mathrm{\Delta }\varphi )^2.$$ (31) The probing atoms normally have a Gaussian velocity distribution, which causes the errors $`\delta `$ and $`\mathrm{\Delta }\varphi `$. When the ensemble average over atoms is taken, the second term in Eq. (31) vanishes, and the first and third terms contribute to degrade the value of the Bell function. For the test of nonlocality using single atoms, as shown in Fig. 1(b), the mean error $`\mathrm{\Delta }B`$ is similarly obtained as $$\mathrm{\Delta }B\simeq 16.3\delta ^2+8.2\delta \mathrm{\Delta }\varphi +1.0(\mathrm{\Delta }\varphi )^2.$$ (32) This is slightly larger than the error (31) for the two-atom scheme. The error enhancement in the single-atom scheme is due to the fact that the experimental errors accumulate as the atom passes through the two cavities.
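The 5% figure quoted in the next paragraph can be reproduced from Eqs. (31) and (32) under a simple assumption: since the coupling phase is proportional to the interaction time, a fractional velocity (hence timing) spread eps translates into a coupling error of roughly delta = (pi/2) eps per photon. This is a hedged order-of-magnitude sketch, not the authors' error analysis.

```python
import numpy as np

# Rough error budget from Eqs. (31) and (32). Assumption of this sketch:
# Theta is proportional to the interaction time, so a fractional velocity
# error eps gives delta ~ (pi/2)*eps per photon; the delta*Delta_phi cross
# term averages to zero for a zero-mean Gaussian spread, as noted above.

eps = 0.05                                # 5% velocity standard deviation
delta2 = (np.pi / 2.0 * eps) ** 2         # <delta^2>

print(f"two-atom scheme:    dB ~ {10.2 * delta2:.3f}")   # ~0.06
print(f"single-atom scheme: dB ~ {16.3 * delta2:.3f}")   # ~0.10
# Residual pi/2-pulse phase jitter would add 2.0*<dphi^2> (two-atom) or
# 1.0*<dphi^2> (single-atom) on top of these coupling-error terms.
```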
For the two-atom scheme, the total error is the sum of the errors incurred in each atom's interaction with its cavity field and $`\pi /2`$ pulses. We find that when the standard deviation of the atomic velocity distribution is 5%, $`\mathrm{\Delta }B\simeq 0.06`$ for the two-atom scheme and $`\mathrm{\Delta }B\simeq 0.10`$ for the single-atom scheme, which still allows the observation of the violation of Bell’s inequality. If the $`Q`$ factor of a cavity is high, the cavity must be nearly closed. When an atom passes through the cavity walls, it can lose the phase information of its atomic states, so that the scheme suggested in this paper cannot be used. However, Nogues et al. recently suggested a way to implement the $`\pi /2`$-pulse and cavity-field interactions inside the cavity. If this scheme is applied, the atomic phase information can be kept without sacrificing the $`Q`$ value. ###### Acknowledgements. M.S.K. thanks Professor Walther for discussions and hospitality at the Max-Planck-Institut für Quantenoptik, where a part of this work was carried out. This work was supported by the BK21 grant (D-0055) of the Korean Ministry of Education.
# Search for molecular gas in HVCs by HCO+ absorption ## 1 Introduction Since their discovery by Muller, Oort, & Raimond (1963), HI clouds moving with velocities that cannot be explained by differential Galactic rotation (often exceeding it by 100 km s<sup>-1</sup>) have been the target of numerous surveys. Many of these high-velocity clouds (HVCs) are located at intermediate and high Galactic latitudes (Giovanelli, Verschuur, & Cram 1973; Mathewson, Schwarz, & Murray 1977; Wakker & van Woerden 1991) and do not appear to have any connection with the gas in the Galactic disk. HVCs are often found in large HI complexes with angular sizes of 10–90 degrees. They cover from 10 to 37% of the sky, depending on the sensitivity of the studies (Murphy et al 1995). The origin of HVCs is still unclear, mainly because the distances to the individual complexes are in most cases unknown. They could be cold gas corresponding to the return flow in a Galactic fountain (e.g. Houck & Bregman 1990), or gas left over from the formation of the Galaxy. Some HVCs probably belong to the tidal gas streams torn from the Magellanic Clouds by the Milky Way (Mathewson et al 1974, Putman & Gibson 1999). Several authors have already explored the possibility that HVCs are infalling primordial gas and have associated them with the Local Group (see Wakker & van Woerden 1997 for a thorough review). Recently Blitz et al. (1999) re-examined this hypothesis and, simulating the dynamical evolution of the Local Group galaxies, used up-to-date HI maps of HVCs to show that the HVCs are consistent with a dynamical model of infall of the ISM onto the Local Group. As such, they would represent the building blocks of the galaxies in the Local Group and provide fuel for star formation in the disk of the Milky Way. In their model, HVCs contain altogether 10<sup>11</sup> M<sub>☉</sub> of neutral gas. From their stability analysis, they conclude that there is roughly 10 times more dark matter than luminous gas within each HVC, and that these could correspond to the mini-halos which are able to accumulate baryons and can gather into filaments (e.g. Bond et al 1988, Babul & Rees 1992, Kepner et al 1997). HVCs would therefore be related to the hierarchical structure of the Universe (see e.g. Katz et al 1996), and to the gas seen in absorption towards quasars (Lyman-$`\alpha `$ forest and Lyman-limit lines). However, Giovanelli (1981) has pointed out that the velocity distribution of HVCs does not match that of the Local Group, but favors instead an association with the Magellanic Stream, the most obvious tidal feature of the interaction between the Galaxy and the Magellanic Clouds. The discrepancy with the results of Blitz et al. (1999) comes from the fact that the latter authors did not consider all observed HVCs, but only a selection of them. Maps of the brightest HVC complexes have revealed the existence of unresolved structure at 10 arcmin resolution, which was further resolved into high-density cloud cores at 1 arcmin resolution (Giovanelli & Haynes 1977; Wakker & Schwarz 1991; Wakker & van Woerden 1991). More generally, HVCs follow the fractal structure observed in the whole interstellar medium (Vogelaar & Wakker, 1994). The HI column densities in these cores are estimated to be several times 10<sup>20</sup> cm<sup>-2</sup>, and their temperatures are generally between 30 and 300 K. The central densities of individual clouds can reach $`>`$ 80 cm<sup>-3</sup> D<sub>kpc</sub><sup>-1</sup>, where D<sub>kpc</sub> is their distance in kpc.
Depending on the actual distance, which remains poorly determined, these conditions make the HVC cores possible sites of star formation. HVCs have in fact been considered good candidates for the source of young Population I stars at large distances from the Galactic plane (see e.g. Sasselov 1993). Attempts to measure the spin temperature of atomic hydrogen through 21cm absorption in front of background continuum sources have often resulted only in upper limits (Colgan et al. 1990, Mebold et al. 1991). A few detections have been reported (Payne et al. 1980, Wakker et al. 1991, Akeson & Blitz, 1999), with inferred spin temperatures as low as 36 K, but in general most HVCs must have spin temperatures larger than 200 K or be very clumpy. Most of our current knowledge of the gaseous content of HVCs comes from HI observations. Efforts to detect molecular hydrogen through CO emission lines have so far been unsuccessful (Wakker et al 1997), since the sub-solar metallicity and/or low density of HVCs make direct CO emission line detection very difficult. However, optical absorption lines have shown that HVCs are not completely devoid of heavy elements (e.g. Robertson et al 1991, Lu et al 1994, Keenan et al 1995, Wakker & van Woerden 1997). The lines detected are from SiII, CII, FeII, or CIV, but the strongest are from MgII (Savage et al. 1993, Bowen & Blades 1993, Sembach et al. 1995, 1998). Metallicity studies of HVCs have been carried out to constrain their origin. If HVCs result from a Galactic fountain, their metallicity should be solar, while if they were associated with the Magellanic Clouds or with intergalactic clouds, it could be even less than 0.1 solar. In fact, the determined abundances are around 0.1 solar, but with large uncertainties, because of saturated lines or dust depletion (Sembach & Savage 1996). This average metallicity is compatible with a Local-Group infall model, since X-ray observations have revealed abundances of 0.1 solar in poor groups (Davis et al 1996), and even 0.3 solar in intra-cluster gas (Renzini 1997). If HVCs were local analogues of Lyman-limit absorbing clouds, background QSOs could enable us to detect molecules in absorption, a task considerably easier than detecting their emission. Absorption has recently been reconfirmed as a very powerful tool in the millimetric range (Lucas & Liszt 1996, hereafter LL96; Combes & Wiklind 1996). A mm molecular detection would advance our knowledge of HVCs, their physical conditions, and their possible origin. A first attempt has been made in this domain by Akeson & Blitz (1999), with only negative results, using the BIMA and OVRO interferometers. Here we report on HCO<sup>+</sup>(1-0) absorption line observations, made in the southern sky with the single-dish 15m ESO-SEST telescope, and in the northern sky with the IRAM interferometer. The choice of HCO<sup>+</sup> is justified because, due to its large dipole moment, it is generally not excited in diffuse media (the critical density for excitation is of the order of 10<sup>7</sup> cm<sup>-3</sup>); therefore confusion with emission is not a problem, contrary to the CO(1-0) line, which requires an interferometer to resolve out the emission. Furthermore, the HCO<sup>+</sup> absorption survey carried out by Lucas & Liszt (1994, 1996) revealed more and wider absorption lines than in CO, suggesting that it might be a better tracer.
In that study, the derived abundance HCO<sup>+</sup>/H<sub>2</sub> was surprisingly large, of the order of 6$`\times `$10<sup>-9</sup>, and sometimes even an order of magnitude larger. The details of the observations are presented in Section 2. Section 3 summarizes the results, which are then discussed in Section 4. ## 2 Observations The observations in the southern sky were carried out with the 15m antenna of ESO-SEST in La Silla (Chile), during 6-10 November 1999. We used two SIS receivers at 3 mm and 2 mm simultaneously, to observe the HCO<sup>+</sup>(1-0) and CS(3-2) lines at 89.188523 and 146.969033 GHz respectively. Although CS is less abundant, it would have been interesting to have both molecules in case of detection. The HPBW were 57” and 34”, and the main-beam efficiencies were $`\eta _{\mathrm{mb}}=T_\mathrm{A}^{*}/T_{\mathrm{mb}}`$=0.75 and 0.66 respectively at the two mentioned frequencies. The backends were acousto-optic spectrometers (AOS), both high-resolution (HRS) and low-resolution (LRS). The corresponding channel spacings (velocity resolutions) were 0.144 (0.268) and 0.087 (0.163) km s<sup>-1</sup> at HCO<sup>+</sup> and CS for the HRS, and 2.32 (4.7) and 1.4 (2.8) km s<sup>-1</sup> respectively for the LRS. The number of channels was 1000 and 1440 for the high- and low-resolution backends respectively, so that the velocity coverage was 140 and 3352 km s<sup>-1</sup> at high and low resolution for the HCO<sup>+</sup> line, and 85 and 2057 km s<sup>-1</sup> for CS. Since all backends were centered on the expected HVC velocity, there was no problem detecting any galactic line around V=0 km s<sup>-1</sup> at low resolution, but such lines were most of the time outside the range covered at high resolution. We have retrieved the HCO<sup>+</sup> absorptions already reported by Lucas & Liszt (1996) with the IRAM interferometer. In cases where dilution in large velocity channels made the detection problematic, we shifted the high-resolution backend to zero velocity to verify the detection. Pointing was corrected regularly (every 2 hours) using known SiO masers (and the 3 mm receiver retuned accordingly). The weather was clear throughout the run, and typical system temperatures were 180 K in both the 3 and 2 mm ranges. The observing procedure was dual beam switching at high frequency (6 Hz) between positions at the same elevation in the sky (beam throw of 2’ 27”), to eliminate atmospheric variations. Each source was observed on average for 2 hours, reaching about 3.5 mK of rms noise in channels of 1.4 MHz (4.7 km s<sup>-1</sup> at HCO<sup>+</sup>). Because the absorption lines were expected to be narrow (even though HCO<sup>+</sup> absorption lines are the widest mm lines, and therefore the most favorable for detection), there was a concern that line profiles might be diluted in the low-resolution spectrograph. The technique nevertheless proved valuable, allowing us to retrieve the HCO<sup>+</sup>(1-0) absorptions already detected by LL96 for galactic clouds, near zero velocity (see below). We also wished to check that the HCO<sup>+</sup>(1-0) absorption was not hampered by emission. In one of the sources (2251+158) we observed an offset position (with a beam throw of 12’ for the beam switch) to check for emission. Emission was not detected at the same signal-to-noise at which the absorption was clearly present. The observations in the northern sky were carried out with the IRAM interferometer on Plateau de Bure (France), during July, October and November 1999.
The interferometer data were taken in the standard D configuration (see Guilloteau et al. 1992). The array comprised four 15-m telescopes. The receivers were 3-mm SIS, giving a typical system temperature of 150–200 K. One of the sources, 3C 454.3, was used itself as a phase reference, while bandpass and amplitude calibrations were done using sources such as 3C 273, MWC 349, 1823+568, 2145+067. The data reduction was made with the standard CLIC software. The synthesized beams are of the order of 6$`\times `$8 arcsec. The auto-correlator was used in three overlapping configurations of bandwidth 160, 80 and 40 MHz, with respective resolutions of 2.5, 0.625 and 0.156 MHz, giving a maximum velocity resolution of 0.5 km s<sup>-1</sup>. The largest bandwidth observed corresponds to 540 km s<sup>-1</sup>. ## 3 Results Our sources were selected from the strong millimeter quasars seen in projection onto one of the 17 HVC complexes identified in the summary map of Wakker & van Woerden (1991). Table 1 displays the characteristics of the sources observed, the name and expected velocity of the HVCs along each line of sight, following the notation of the HVC catalogue of Wakker and van Woerden (1991), as well as the measured continuum flux in Jy at 3 and 2 mm, for all 24 sources observed at SEST at the date of observation (6–10 November 1999). The 4 sources observed with the IRAM interferometer (also in Table 1) were selected to have MgII detected in absorption by Savage et al. (1993) at very high velocities. The HI in emission at these high velocities was measured in the NRAO-43m HI survey towards 143 quasars of Lockman & Savage (1995). The detection of MgII ensures the presence of a minimum metallicity along these lines of sight (larger than 0.32 solar in 3C 454.3, for instance), favorable for the detection of HCO<sup>+</sup>. The positions of the observed sources are plotted in Figure 1 over an HI map of HVCs in Aitoff projection. Note that one source (3C 454.3) was observed both with IRAM and with SEST. The spectra were then normalized to the continuum flux, and the 3$`\sigma `$ upper limit of the optical depth $`\tau `$ in absorption was computed in each case assuming that the surface filling factor is $`f`$=1, i.e., that the absorbing molecular material completely covers the extent of the mm continuum source in projection. If $`T_{cont}`$ is the observed continuum antenna temperature of the background source, and $`T_{abs}`$ the amplitude in temperature of the absorption signal, then the optical depth is: $$\tau =-ln(1-\frac{T_{abs}}{fT_{cont}})$$ The 3$`\sigma `$ upper limits of $`\frac{T_{abs}}{T_{cont}}`$, as measured in 0.56 km s<sup>-1</sup> channels, are listed in Table 2. The total column density of HCO<sup>+</sup>, observed in absorption between the upper and lower levels $`u`$ and $`l`$ with an optical depth $`\tau `$ at the center of an observed line of width $`\mathrm{\Delta }v`$ at half-power, is: $$N_{\text{HCO}\text{+}}=\frac{8\pi }{c^3}f(T_x)\frac{\nu ^3\int \tau dv}{g_uA_u},$$ where $`\nu `$ is the frequency of the transition, $`g_u`$ the statistical weight of the upper level (= 2 J<sub>u</sub>+1), $`A_u`$ the Einstein coefficient of the transition (here $`A_u`$ = 3$`\times `$10<sup>-5</sup> s<sup>-1</sup>), $`T_x`$ the excitation temperature, and $$f(T_x)=\frac{Q(T_x)exp(-E_l/kT_x)}{1-exp(-h\nu /kT_x)}$$ where $`Q(T_x)`$ is the partition function.
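As an aside (not part of the original paper), the relation above is straightforward to evaluate. The sketch below uses the excitation temperature and line width adopted in the next paragraph; the HCO<sup>+</sup> rotational constant B<sub>0</sub> entering the partition function is taken from standard line catalogues and is an assumption here.

```python
import numpy as np

# Evaluate N(HCO+) from the absorption relation above (CGS units).
# Assumed input: HCO+ rotational constant B0 ~ 44.594 GHz (J=1-0 at 2*B0);
# A_u, g_u, T_x and dv follow the values quoted in the text.

h, k, c = 6.626e-27, 1.381e-16, 2.998e10     # erg s, erg/K, cm/s
nu  = 89.188523e9        # Hz, HCO+(1-0)
B0  = 44.594e9           # Hz, rotational constant (assumed)
A_u = 3.0e-5             # s^-1, Einstein coefficient (text)
g_u = 3.0                # 2 J_u + 1 for J_u = 1
T_x = 3.0                # K, excitation temperature ~ cosmic background

def Q(T, jmax=20):
    """Rotational partition function of a linear molecule."""
    j = np.arange(jmax)
    return np.sum((2 * j + 1) * np.exp(-h * B0 * j * (j + 1) / (k * T)))

def N_hcop(tau, dv_kms=1.1):
    f = Q(T_x) / (1.0 - np.exp(-h * nu / (k * T_x)))   # E_l = 0 for J = 0
    integral = tau * dv_kms * 1.0e5                    # int tau dv, cm/s
    return 8 * np.pi * nu**3 / c**3 * f * integral / (g_u * A_u)

print(f"N(HCO+) for tau = 0.1 : {N_hcop(0.1):.2e} cm^-2")   # ~2e11 cm^-2
```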
We assumed statistical equilibrium and an excitation temperature close to the cosmic background temperature, i.e. $`T_x`$ = 3 K, because of the large critical density needed to excite HCO<sup>+</sup>. We also expect very narrow linewidths, comparable to those of the detected lines, and adopted $`dv`$ = 1.1 km s<sup>-1</sup> to derive the upper limits in Table 2. The HCO<sup>+</sup>/H<sub>2</sub> abundance was conservatively taken as 6$`\times `$10<sup>-9</sup>, but it must be kept in mind that it could be more than an order of magnitude higher (e.g. Lucas & Liszt 1994, Wiklind & Combes 1997), and therefore the derived H<sub>2</sub> column densities could be correspondingly lower. However, if the metallicity is lower than solar, the corrections should go in the opposite sense. An indicative HI column density of the high-velocity gas was also estimated using the HI surveys of Stark et al (1992) and Hartmann & Burton (1997) for the northern hemisphere, and Bajaja et al. (1985) for our 5 southernmost sources. This column density is rather uncertain, since it has been smoothed over large areas (at least half a degree), while the material that could appear in absorption extends over milli-arcseconds, similar to the background millimeter sources. At small scales the HI column density could be much higher than the values presented. Figure 2 shows some of the low-velocity detections. All of them had already been discovered by Lucas & Liszt (1996), except the one at 1923+210, which is new. In the low-resolution backends, though, they are barely resolved and have somewhat reduced peak intensity. To check this, we re-tuned to their central velocity and centered the high-resolution spectrograph at V ≈ 0. In the HRS AOS the absorptions had indeed stronger intensities, and were in all cases compatible with previously reported values (the integration time for the retuned spectra, though, was not enough to obtain a high S/N). Towards the source 1923+210, a tentative detection at the expected HVC velocity was obtained, as shown in Figure 3. It is very narrow, and was detected clearly only in the HRS backend. Akeson and Blitz (1999) have recently reported upper limits in CO absorption with the BIMA and OVRO interferometers towards 7 continuum sources. We have no sources in common, and therefore we increase the statistical significance of the upper limits. They also searched for HI absorption in HVCs with the VLA, with positive results only in the gas associated with the outer arm of our Galaxy. They conclude that true HVCs are very weak HI and molecular absorbers. HI absorption in HVCs has been searched for several times, with single dishes or interferometers, without much success (Payne et al. 1980, Colgan et al. 1990, Wakker et al. 1991). The fact that only ∼5% of the lines of sight towards HVCs show HI absorption, while this frequency reaches 100% for normal galactic gas (Dickey et al. 1983), sheds some light on the physical structure of the clouds. ## 4 Discussion and Conclusion ### 4.1 The diverse nature of HVCs It is still difficult to tackle the problem of the origin of HVCs, since they may be diverse in nature, with possibly different formation histories. The first idea, that they are clouds infalling towards the Milky Way (Oort, 1966), was proposed because only negative-velocity clouds had been discovered at that epoch. However, since differential galactic rotation makes an important contribution to those velocities, and the tangential velocity of the HVCs is also unknown, it is difficult to ascertain their true three-dimensional motion.
Now that an almost equal number of clouds is detected at positive high velocities, and this is also true in the various reference frames (e.g. Wakker 1991), it is believed that there must be other explanations for at least some of them. Some HVC complexes have been proven to be associated with the Galaxy, being very close (less than 10 kpc), through absorption detected in front of Galactic halo stars (e.g. Danly et al. 1993, van Woerden et al. 1999). This supported a Galactic origin for some of the HVCs, and more specifically the Galactic fountain model, in which gas is ejected into the halo by star formation feedback mechanisms. The main problems with this scenario are the low metallicity observed along most lines of sight (even though it could explain why some clouds have nearly solar metallicity, e.g. Richter et al 1999), and the extreme velocities sometimes reached (up to -450 km s<sup>-1</sup> on the negative side). Alternatively, HVCs could be transient structures, now becoming bound to the Galaxy: an ensemble or complex of HVCs has been identified as tidal debris from the Magellanic Clouds (Mathewson et al. 1974), and is called the Magellanic Stream. A few of them could come from tidal debris of interactions with other dwarf galaxies, but the corresponding dwarf galaxies either have not been identified yet, or have already been dispersed (e.g. Mirabel & Cohen 1979). A key point is to consider whether HVCs are transient structures or coherent and self-gravitating. With only their column density observed in HI, to be self-gravitating they should be at least at 1 Mpc distance, and on average at 10 Mpc, therefore outside the Local Group (Oort, 1966). This distance can be reduced by a factor of 10 if the total mass of HVCs is taken to be 10 times their HI mass, composed mostly of dark matter (Blitz et al. 1999). Since the specific kinetic energy scales as M/R, which is proportional to the distance, the clouds would then have to be only at about 1 Mpc, and would consequently be part of the Local Group. The main difficulty with this model of mini-haloes merging with our Galaxy is that other high-velocity clouds should have been observed around or in between external galaxies, which is not the case (Giovanelli 1981, Zwaan et al. 1997, Banks et al. 1999). Moreover, the derived typical dimensions and masses of these systems do not correspond to any observed object or any extrapolation of known mass ranges (they have dwarf-galaxy masses, but are very extended in size). Another possibility, however, could be that HVCs represent gas left over from the formation of the Galaxy: they would form a system out of equilibrium, but bound to the Galaxy, now raining down onto the Galactic disk. This would explain several observed characteristics, such as the 4-10 kpc distance determined for some complexes, and the low metallicity of many of them (e.g. Wakker et al. 1999). ### 4.2 How many molecular absorptions could be expected? Given the low HI column densities reported in Table 2, it could seem hopeless to detect molecular absorption with the present sensitivities. But this is true only if the HVC gas is homogeneous. In fact, from previous HI observations, we expect that, like the gas in the galactic disk, the HVC gas is composed of several components, with clumps of much higher column densities, detectable only at higher spatial resolution. This information can be deduced from existing HI observations in emission and absorption towards HVCs.
The optical thickness $`\tau `$ is related to the column density N(HI) (cm<sup>-2</sup>), the spin temperature T<sub>s</sub> (K) and the FWHM velocity width $`\mathrm{\Delta }V`$ (km s<sup>-1</sup>) by $$\tau =\frac{N(HI)}{1.8\times 10^{20}}\left(\frac{50}{T_s}\right)\left(\frac{20}{\mathrm{\Delta }V}\right)$$ Since the emission profile extends over at least 20 km s<sup>-1</sup>, and the spin temperature has been determined to be at least 50 K, if there were only one component the derived optical depth would have to be very low, $`\tau <0.1`$. Absorption and emission studies in the Galactic plane have shown, though, that there is indeed more than one component. If there were only one, both emission and absorption profiles would look similar, which is not what is observed. In fact, there are at least two components: a warm diffuse inter-clump medium, and cloudlets which are cold, narrow in velocity, and more optically thick (Garwood & Dickey 1989). If the surface filling factors of the two components are f<sub>1</sub> and f<sub>2</sub>, for spin temperatures T<sub>1</sub> and T<sub>2</sub> and optical depths $`\tau _1`$ and $`\tau _2`$, the observed ratio between the absorption depth $`\frac{T_{abs}}{T_{cont}}`$ and the antenna temperature of the emission $`T_{em}`$ is $$\frac{T_{abs}}{T_{cont}T_{em}}=\frac{f_1(1-e^{-\tau _1})+f_2(1-e^{-\tau _2})}{T_1f_1(1-e^{-\tau _1})+T_2f_2(1-e^{-\tau _2})}$$ In this formula it is obvious that each component is weighted according to its mass if it is optically thin (i.e. weight $`f\tau `$), but considerably less if $`\tau >>1`$. Therefore, if the cold component is optically thick, the derived spin temperature will be overestimated (in fact, the mass of the cold component would itself be underestimated, since the absorption often saturates, while the emission, dominated by the warm gas, saturates less). Are HVC clouds of the same nature as normal low-velocity galactic clouds, and do they have a similar small-scale structure? For normal HI clouds, Payne et al. (1983) have determined (from absorption/emission comparisons) that the weighted average spin temperature decreases as the optical depth increases (cf. the line in Figure 4). There is only a small scatter in this relation, which means that if the cold and optically thick component is made of cloudlets, their size must be smaller than that of the background continuum sources, i.e. a fraction of an arcmin, and their number must be correspondingly large (N $`>`$ 100). We do not see the situation where absorption features are deep and rare (statistically, the T<sub>s</sub>-$`\tau `$ relation would still hold, but with a large scatter). The data favor a model in which the cloudlets are quite small ($`<`$ 0.1 pc) and numerous, so that they are weighted according to their surface filling factor $`f`$ both in emission and in absorption for any line of sight. The existence of such small-scale structure is also confirmed by VLBI HI absorption (Faison et al. 1998), where sizes down to ∼20 AU are detected. It is also very likely that the structure, apart from these smallest fragments, has no particular scale. If it is a fractal, there is, statistically, a correlation between the optical depths at different scales, with a limited scatter. This would explain the observed correlation between emission and absorption at different scales.
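The two-phase weighting implied by this ratio is easy to explore numerically. A minimal sketch (the component parameters below are purely illustrative, not fitted to any data in this paper):

```python
import numpy as np

# Effective (emission/absorption-derived) spin temperature of a multi-phase
# medium, from the ratio given above: each phase enters with weight
# f*(1 - exp(-tau)). Component parameters are illustrative only.

def t_spin_eff(f, T, tau):
    f, T, tau = map(np.asarray, (f, T, tau))
    w = f * (1.0 - np.exp(-tau))
    return np.sum(w * T) / np.sum(w)

# warm diffuse inter-clump gas + cold, optically thicker cloudlets
f   = [0.9, 0.1]        # surface filling factors
T   = [8000.0, 50.0]    # spin temperatures (K)
tau = [0.01, 1.0]       # optical depths

print(f"effective T_s = {t_spin_eff(f, T, tau):.0f} K")   # ~1040 K
# The cold cloudlets dominate the absorption weight while the warm gas
# dominates the emission, so the derived T_s lies far above the cold-phase
# temperature; once tau_2 >> 1 the cold weight saturates at f_2 and the
# cold mass is correspondingly underestimated.
```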
When the emission/absorption measurements for HVCs are considered (cf. Figure 4), the lower limits on the derived spin temperature are compatible with the detections for the normal low-velocity gas and with their T<sub>s</sub>-$`\tau `$ relation. The spin temperature of HVCs is a weighted mean of the warm and cold components, with the same mixture as the one observed in normal low-velocity clouds of the galactic plane. The difficulty in finding HI absorption in HVCs then reflects only their low average column density, and not a different physical structure. As for molecular absorption, the column densities required for a detection are even larger. The corresponding scales must be smaller, and consequently the corresponding line-widths narrower. This is observed for the low-velocity absorptions, where the detected line-widths are as narrow as 0.6 km s<sup>-1</sup> (LL96). The detected optical depths are larger on average, and the probability of underestimating the column density is higher, because of saturation. (One should note, though, that the apparent optical depth is low because of spatial and velocity dilution.) The probability of detecting HCO<sup>+</sup>(1-0) absorption has been estimated to be 30% of that of 21cm absorption for galactic clouds (LL96). The low probability of finding an HCO<sup>+</sup>(1-0) absorption observed here is therefore expected. In addition, there could be a column density threshold for self-shielding against photo-dissociation, which hampers molecular observations. This threshold has been estimated for CO emission at N(H<sub>2</sub>) ∼ 4$`\times `$10<sup>20</sup> cm<sup>-2</sup> (or, equivalently, of the order of 10<sup>12</sup> cm<sup>-2</sup> for HCO<sup>+</sup>). If our tentative detection is confirmed, then given the low metallicity of the HVCs, it will support the existence of clumps of high column densities in this medium. The fact that molecular absorption is more frequent, relative to emission, than atomic absorption is related to the excitation mechanism. Large critical densities are required to excite molecules above the cosmic background, and this is particularly true for HCO<sup>+</sup>. This makes absorption techniques the most promising way to probe the molecular content of HVCs in the future. ### 4.3 Physical nature and distance of the gas Since HVCs appear to follow in projection the same fractal properties as the normal low-velocity gas (Vogelaar & Wakker 1994), it is interesting to develop an insight into their distance from their size-linewidth relation. It is now well established that clouds in the interstellar medium (either molecular or atomic) are distributed according to a self-similar hierarchical structure, characteristic of a fractal (Falgarone et al. 1991; Stanimirovic et al. 1999; Westpfahl et al. 1999). Such a scaling relation between mass and size can also lead to a relation between size and line-width, provided that the structures are virialized. In particular, clouds at all scales obey a power-law relation between size $`R`$ and line-width or velocity dispersion $`\sigma `$: $$\sigma \propto R^q$$ with $`q`$ between 0.35 and 0.5 (e.g. Larson 1981, Scalo 1985, Solomon et al 1987). We have plotted this relation, together with the sizes and line-widths of the 65 well-defined isolated HVCs catalogued by Braun & Burton (1999); a small numerical illustration of the distance sensitivity of this relation is sketched below.
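The announced illustration converts an angular size to a linear size at an assumed distance and predicts the virial line-width from the size-linewidth relation. The normalization σ ≈ 0.72 (R/pc)<sup>0.5</sup> km s<sup>-1</sup> is the Solomon et al. (1987) value for Galactic molecular clouds, used here purely for orientation.

```python
import numpy as np

# Size-linewidth consistency check for an HVC of angular size theta placed
# at distance D. Illustrative normalization sigma = 0.72 (R/pc)^0.5 km/s
# (Solomon et al. 1987), q = 0.5; not a fit to the Braun & Burton sample.

def fwhm_kms(theta_deg, D_pc, sigma0=0.72, q=0.5):
    R = D_pc * np.radians(theta_deg) / 2.0    # cloud radius in pc
    sigma = sigma0 * R**q                     # 1D velocity dispersion
    return 2.355 * sigma                      # Gaussian FWHM

for D in (2.0e4, 1.0e6):                      # 20 kpc versus 1 Mpc
    print(f"D = {D/1e3:6.0f} kpc -> FWHM = {fwhm_kms(1.0, D):6.1f} km/s")
# A 1-degree cloud at 20 kpc gives ~22 km/s, close to observed HVC widths;
# at 1 Mpc the same cloud would have to show ~160 km/s wide profiles.
```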
The Braun & Burton catalog is expected to be free from galactic contamination, as well as from blending along the line of sight, because of the isolation criterion imposed in the selection. To compute the cloud sizes, two distances were assumed, either 20 kpc or 1 Mpc, and the geometrical mean between the major and minor axes was calculated. At a distance of 20 kpc, the clouds fall on the relation corresponding to Giant Clouds in the Galaxy. Note that a large scatter is expected, since this choice of distance is only an order-of-magnitude estimate, and the HVCs are certainly not all at the same distance. This set of clouds was determined with a spatial resolution of half a degree. Clouds determined at higher spatial resolution have correspondingly narrower line-widths. In Figure 5 we have also plotted the characteristics of clouds determined with the Westerbork interferometer, at 1 arcmin resolution (Wakker & Schwarz 1991). They also fit the galactic-cloud relation on average, if their distance is chosen to be 20 kpc, again with a large scatter since individual distances are not known. In fact, HI observations at different spatial resolutions emphasize particular scales of the hierarchical structure (see e.g. the 10’ resolution observations by Giovanelli & Haynes 1977). This hierarchical structure is similar to what is observed for “normal” galactic clouds, in the sense that the same fraction, 20-30%, of the single-dish flux is retrieved in the interferometer data (Wakker & Schwarz 1991). The size-linewidth relation has been widely observed up to 100 pc in size, the largest size for self-gravitating clouds in the Galaxy, and it might appear questionable to extend it to larger scales, where the gas would not be self-gravitating. At these scales, the gas is bound into larger self-gravitating structures, including stars or dark matter. However, even then, the gas should trace the gravitational potential of the bound structure it is embedded in, and share the corresponding velocity dispersion; such a relation is observed, for instance, in the form of the Tully-Fisher relation in galaxies. The main point is that the gas should show emission velocity profiles that grow wider with distance, if it belongs to a remote self-gravitating system. The observed profile width is therefore a distance indicator. ### 4.4 Conclusion We have searched for HCO<sup>+</sup>(1-0) absorption towards 27 high-velocity clouds, in front of remote radio-loud quasars. The technique is efficient, since we detect the known absorption due to low-velocity galactic clouds towards our low-latitude sources. Only one tentative HVC detection is reported. If confirmed, it indicates the presence of small-scale cloudlets, at low excitation and high column densities, extending to smaller scales the hierarchical structure already observed in the atomic component. When this hierarchical structure is compared to the one observed for low-velocity galactic clouds, a good fit to the self-similar relation between sizes and line-widths is obtained if the HVCs are on average at a distance of 20 kpc. Since mm molecular absorption is expected to be more frequently detectable than emission for these low column density HVCs, this absorption technique appears promising to probe the molecular component of HVCs, already directly detected through UV H<sub>2</sub> absorption lines (Richter et al 1999). ###### Acknowledgements. We are very grateful to R. Giovanelli for useful comments on the manuscript, to B. Wakker and F.
Mirabel for stimulating discussions, and an anonymous referee for constructive criticism. We also thank the SEST staff for their kind help during the observations, and Raphael Moreno and Robert Lucas for their assistance in the IRAM interferometer data reduction.
# The model of the ideal crystal as a criterion for evaluating the approximate equations of liquids ## Introduction The structure and thermodynamics of both dense gases and liquids are described by the pair correlation function $`h_{12}`$, which accounts for the collective effects in dense media. At present, about 20 approximate integral equations of the Ornstein-Zernike (OZ) type, connecting the direct $`C_{12}`$ and pair $`h_{12}`$ correlation functions, have been proposed for this description. The accuracy of each such approximation cannot be evaluated a priori, and the physical meaning of the underlying approximations has not yet been fully clarified. In this situation, the most accurate approximations are usually selected by comparison with the results of numerical experiments; such a comparison, however, does not reveal the physical meaning of the approximations. The OZ equation is normally used for spatially homogeneous and isotropic media, since in that case the two-particle distribution function (as well as the pair correlation function) depends only on the mutual distance between the particle centers $`r_{12}`$, which simplifies all calculations considerably. There exists, however, a more general form of the OZ equation for spatially inhomogeneous, anisotropic media. It has not yet been widely used for more demanding problems; one can mention only its use for the description of the liquid-solid phase transition. That work employed the Martynov-Sarkisov approximation, which gives the best agreement with the data of numerical experiments for the hard-sphere system. In the present work we suggest using the generalized OZ equation to describe the ideal crystal at $`T=0`$. In this limit the pair correlation function $`h_{12}`$ takes the form of a Dirac $`\delta `$ function. As a result, a linear integral equation for the direct correlation function is obtained, which has a simple analytical solution. This solution should reproduce the results known from crystal physics for the ideal crystal. The limit transition to the model of the ideal crystal may therefore be considered a physical criterion for evaluating the accuracy of the approximations used in the physics of liquids. ## 1 Initial equations In earlier works it was shown that the hierarchy of Bogolubov-Born-Green-Kirkwood-Yvon (BBGKY) equations can be recast as a system of two equations $$\omega _1=n\int G_2S_{12}d(2)+\mathrm{ln}a$$ (1) $$h_{12}=C_{12}+n\int G_3C_{13}h_{23}d(3)$$ (2) for two unknown functions: the one-particle distribution function $`G_1(\stackrel{}{r}_1)=\mathrm{exp}(\omega _1(\stackrel{}{r}_1))`$ and the two-particle distribution function $$G_{ij}(\stackrel{}{r}_i,\stackrel{}{r}_j)=G_iG_j(1+h_{ij}),$$ (3) $$h_{ij}=-1+\mathrm{exp}(-\frac{\mathrm{\Phi }_{ij}}{\mathrm{\Theta }}+\mathrm{\Omega }_{ij}(\stackrel{}{r}_i,\stackrel{}{r}_j)),$$ (4) where $`n=N/V`$ is the density, $`\mathrm{\Phi }_{ij}`$ the molecular interaction potential, $`\mathrm{\Theta }=kT`$ the temperature, and $`d(i)=d\stackrel{}{r}_i`$ a volume integral element of particle $`i`$. All other distribution functions are expressed through $`G_1`$ and $`G_{12}`$.
The direct correlation functions $`S_{ij}`$ and $`C_{ij}`$ in equations (1) and (2) are expressed explicitly through infinite series of integrals over products of the pair correlation functions $`h_{ij}`$, $$S_{ij}=h_{ij}-\mathrm{\Omega }_{ij}-\frac{1}{2}h_{ij}(\mathrm{\Omega }_{ij}+\frac{1}{6}M_{ij}^{(1)}),$$ (5) $$C_{ij}=h_{ij}-\mathrm{\Omega }_{ij}+\frac{1}{2}M_{ij}^{(2)},$$ (6) in which $$M_{ij}^{(k)}=n^2\int \int G_3G_4h_{i3}h_{i4}h_{34}h_{3j}h_{4j}d(3)d(4)+\mathrm{},k=1,2$$ (7) the so-called bridge functionals, are infinite sums of irreducible diagrams. For spatially homogeneous and isotropic systems (dense gases and liquids), where $`\omega _1=0`$, $`G_1=1`$ and $`G_{12}=G_{12}(\stackrel{}{r}_{12})`$, equation (1) is a definition of the logarithm of the activity factor, $`\mathrm{ln}a`$. Equation (2), which describes the short-range order in liquids, reduces to the well-known OZ relation $$h_{12}=C_{12}+n\int C_{13}h_{23}d(3),$$ (8) which is the basis of the modern theory of liquids and melts. To close relation (8) it is necessary to adopt an approximation connecting the direct correlation function $`C_{12}(\stackrel{}{r}_{12})`$ with the pair correlation function $`h_{12}(\stackrel{}{r}_{12})`$. ## 2 The ideal crystal The one-particle distribution function has the meaning of the local density in a laboratory coordinate system. For spatially homogeneous isotropic media it equals unity, since no external field is present. For the crystalline state the one-particle distribution function differs from unity even in the absence of an external field. To describe such a system one introduces, following Bogolubov, an external field which fixes the position of the crystal in space, and then lets this field tend to zero. The one-particle distribution function then describes the local density distribution with respect to a fixed coordinate system that specifies the position of the crystal as a whole. As the origin of such a coordinate system it is convenient to choose one of the lattice particles, conventionally assigned the number zero. The one-particle distribution function $`G_1=\mathrm{exp}(\omega _1)`$ of the crystal can therefore be represented as the periodic function $$G_1(\stackrel{}{r}_1)=\underset{\stackrel{}{k}}{\sum }G_\stackrel{}{k}\mathrm{exp}(i\stackrel{}{k}\stackrel{}{r}_1),$$ (9) For the ideal crystal at $`T=0`$ all the Fourier components $`G_\stackrel{}{k}`$ are equal, and $`G_1`$ is a superposition of three-dimensional Dirac $`\delta `$ functions $$G_1(\stackrel{}{r}_1)=\underset{\stackrel{}{r}_n}{\sum }\delta (\stackrel{}{r}_1-\stackrel{}{r}_n)$$ (10) where the summation runs over the crystal lattice nodes $`\stackrel{}{r}_n`$, whose coordinates must themselves be determined. The mutual arrangement of the lattice nodes (that is, the lattice period and the type of the crystal system) is defined by the molecular interaction potential. On the other hand, the potential $`\mathrm{\Phi }_{12}`$ enters the pair correlation function only in the combination $`\frac{\mathrm{\Phi }_{12}}{\mathrm{\Theta }}`$. Let us consider the form of the function $`h_{12}`$ at $`T\to 0`$. Let the molecular interaction be described by the Lennard-Jones potential $$\mathrm{\Phi }_{12}=4\epsilon [(\frac{\sigma }{r})^{12}-(\frac{\sigma }{r})^6],$$ (11) where $`\sigma `$ is the characteristic size of a molecule. The potential $`\mathrm{\Phi }_{12}`$ has a minimum at the point $`r_0=\sigma \sqrt[6]{2}`$. Therefore the Boltzmann factor $`\mathrm{exp}(-\frac{\mathrm{\Phi }_{12}}{\mathrm{\Theta }})`$ develops, as $`T\to 0`$, a $`\delta `$-shaped maximum at this point (a quick numerical check of this sharpening is sketched below).
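The sharpening just described is easy to confirm in a few lines (an illustration added here in reduced Lennard-Jones units, not part of the original paper): a harmonic expansion of the potential about its minimum predicts that the width of the normalized Boltzmann peak shrinks as the square root of the temperature.

```python
import numpy as np

# Width of the Boltzmann factor exp(-Phi/Theta) for the Lennard-Jones
# potential, in reduced units (sigma = 1, epsilon = 1). The peak sits at
# r0 = 2**(1/6) and its FWHM shrinks ~ sqrt(Theta) as Theta -> 0, which is
# why h12 approaches the delta function delta(r12 - r0).

r = np.linspace(0.8, 2.0, 200001)
phi = 4.0 * (r**-12 - r**-6)

def fwhm(theta):
    w = np.exp(-phi / theta)
    w /= w.max()
    above = r[w >= 0.5]
    return above[-1] - above[0]

for theta in (0.5, 0.05, 0.005):
    print(f"Theta = {theta:5.3f}  FWHM = {fwhm(theta):.4f}  "
          f"FWHM/sqrt(Theta) = {fwhm(theta)/np.sqrt(theta):.3f}")
# The last column is roughly constant, confirming the sqrt(Theta) scaling.
```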
Because of this $`\delta `$-like sharpening one can neglect the thermal potential $`\omega _{12}`$ describing the short-range order, so that $`h_{12}`$ is approximated by the one-dimensional Dirac function $`\delta (r_{12}-r_0)`$. As a result, at $`T=0`$ and with regard to (10), we obtain the following equation for the direct correlation function $$C_{12}^{(2)}+n\underset{\stackrel{}{r}_n}{\sum }C_{12}^{(2)}(\stackrel{}{r}_1,\stackrel{}{r}_n)\delta (r_{2n}-r_0)=\delta (r_{12}-r_0).$$ (12) We note that (12) is a linear equation, whose solution is sought in the form $$C_{12}=\alpha \delta (r_{12}-r_0),$$ (13) where $`\alpha `$ is a coefficient to be determined. Substitution of $`C_{12}`$ into (12) results in the following expression $$\delta (r_{12}-r_0)=\alpha \delta (r_{12}-r_0)+n\alpha \underset{\stackrel{}{r}_n}{\sum }\delta (\stackrel{}{r}_1-\stackrel{}{r}_n-r_0)\delta (\stackrel{}{r}_2-\stackrel{}{r}_n-r_0).$$ (14) Particles 1 and 2 are nearest neighbours, and the summation runs over those nodes which are nearest neighbours of both the first and the second particle; let the number of such nodes be $`N_0`$. Using (14) we get $$\alpha =\frac{1}{1+nN_0},r_{12}=r_0,r_{13}=r_0,r_{23}=r_0$$ (15) Relation (15) defines the crystal structure (the lattice period and the type of the crystal system). To determine the crystal system we should calculate the number $`N_0`$. At $`T=0`$ the system is densely packed, and there are two types of lattices with dense packing: the face-centered cubic and the hexagonal close-packed lattices. Their structure is formed by successive layers of densely packed planes, and every particle has twelve nearest neighbours. This structure type is connected with the long-range order in the crystal. But in any substance (including the crystal) there is also short-range order, which in this case is described by the second term on the right-hand side of (14). Let us take an arbitrary pair of neighbouring particles lying in the plane $`(111)`$. For both types of lattice this pair has four common nearest neighbours: two particles in the plane (111) and one particle in each adjoining plane. If particles 1 and 2 lie in two neighbouring planes, the situation is different: for the face-centered cubic lattice $`N_0`$ is 4, as before, but for the hexagonal lattice $`N_0`$ is 3 (a numerical check of these counts is sketched below). Thus $`N_0=`$const only for the face-centered cubic lattice, and since the pair of neighbouring particles was chosen arbitrarily, this indicates that the face-centered cubic lattice is realized. This corresponds to the experimental data for crystals of inert gases. However, for the Lennard-Jones potential the hexagonal structure has a lower energy than the face-centered cubic one, and therefore it is this lattice that should be realized; the problem of the crystal system is thus not yet fully solved in this paper. Now we show that solutions (10) and (13) satisfy equation (1). The direct correlation function $`C_{12}^{(1)}`$, neglecting two-particle correlations $`\omega _{12}`$, equals $$C_{12}^{(1)}=C_{12}^{(2)}=\alpha \delta (r_{12}-r_0).$$ (16) Substituting this expression into (1) we get $$\omega _1=n\alpha \underset{\stackrel{}{r}_n}{\sum }\delta (\stackrel{}{r}_1-\stackrel{}{r}_n-r_0)+\mathrm{ln}a.$$ (17) Let $`a=1`$; performing the summation over $`\stackrel{}{r}_{12}`$ as in (14), we obtain $$\omega _1=n\alpha N_0\delta (\stackrel{}{r}_1-\stackrel{}{R}_0-r_0)+\mathrm{ln}a.$$ (18) where $`\stackrel{}{R}_0`$ is the radius vector of the particle placed at the origin of the coordinate system. Since the coordinates of all the nodes are now known, $`\omega _1`$ is a given function and $`G_1`$ can be written in the form (10).
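Returning to the neighbour counts used above (N<sub>0</sub> = 4 for any nearest-neighbour pair in the fcc lattice; N<sub>0</sub> = 4 or 3 in the hcp lattice depending on the orientation of the pair), they can be verified by brute force. The following sketch is an illustration added here, not part of the original derivation:

```python
import numpy as np
from itertools import product

# Brute-force check of N0: the number of common nearest neighbours of a
# nearest-neighbour pair, for the fcc and ideal hcp lattices.
# The nearest-neighbour distance is set to 1 in both cases.

def fcc_points(m=3):
    prim = np.sqrt(2.0) * np.array([[0, .5, .5], [.5, 0, .5], [.5, .5, 0]])
    return np.array([np.dot(v, prim)
                     for v in product(range(-m, m + 1), repeat=3)])

def hcp_points(m=3):
    a1 = np.array([1.0, 0.0, 0.0])
    a2 = np.array([0.5, np.sqrt(3.0) / 2.0, 0.0])
    a3 = np.array([0.0, 0.0, np.sqrt(8.0 / 3.0)])   # ideal c/a ratio
    basis = [np.zeros(3), (a1 + a2) / 3.0 + a3 / 2.0]
    return np.array([i * a1 + j * a2 + k * a3 + b
                     for i, j, k in product(range(-m, m + 1), repeat=3)
                     for b in basis])

def n0_values(pts, tol=1e-6):
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    nn = np.abs(d - 1.0) < tol             # nearest-neighbour adjacency
    centre = int(np.argmin(np.linalg.norm(pts, axis=1)))
    return sorted({int(np.sum(nn[centre] & nn[j]))
                   for j in np.where(nn[centre])[0]})

print("fcc N0:", n0_values(fcc_points()))   # [4]
print("hcp N0:", n0_values(hcp_points()))   # [3, 4]
```

Every nearest-neighbour pair in the fcc lattice indeed shares exactly 4 common nearest neighbours, while the hcp lattice yields both 3 and 4, in line with the argument above.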
The complexity of the problems treated in condensed-matter physics forces one to use various physically motivated approximations. In particular, the self-consistent field approximation is widely used in the physics of plasmas and solids. In the physics of liquids the superposition approximation is well known; it is used to express the triple distribution function through two-particle ones. It is the superposition approximation that corresponds to the hypernetted-chain equation, which results from (2) when all the irreducible diagrams are neglected. Summing such diagrams explicitly is practically unrealizable. Physically this is connected with the fact that at finite temperatures one must take into account the collective effects produced by molecular motion. That is why it is necessary to offer a physically meaningful criterion for evaluating the contribution of the irreducible diagrams. The limit transition to the ideal crystal at $`T=0`$ provides such a criterion: in this case the collective effects connected with molecular motion can be neglected. As a result, a limiting relation for the direct correlation function $`C_{12}`$, which effectively includes the contribution of the irreducible diagrams, can be obtained. Among the many proposed approximations relating $`C_{12}`$ and $`h_{12}`$, the most physically valid is the one that satisfies this limiting relation. ## Conclusion The solution of the generalized Ornstein-Zernike equation is obtained for the ideal crystal at $`T=0`$. In this case the pair correlation function $`h_{12}`$ degenerates into a Dirac $`\delta `$ function, and for the direct correlation function $`C_{12}`$ a linear integral equation is found which has an analytical solution. By using it, the main problem of the physics of liquids, the establishment of a closure between the direct and pair correlation functions, can be addressed. Although the problem of the state of aggregation remains open in general, the method developed here can be used as a criterion for evaluating closures: those leading to the correct solution for the ideal crystal at $`T=0`$ are the ones with physical meaning.
# First-principles calculations of hot-electron lifetimes in metals ## I Introduction Low-energy excited electrons in metals, with energies larger than $`0.5\mathrm{eV}`$ above the Fermi energy, experience strong electron-electron (e-e) scattering processes. Although inelastic lifetimes of these so-called hot electrons have been investigated for many years on the basis of the free-electron-gas (FEG) model of the solid, time-resolved two-photon photoemission (TR-2PPE) experiments have shown the key role that band-structure effects may play in the decay mechanism. First-principles calculations of hot-electron lifetimes that fully include the band structure of the solid have been reported only very recently for aluminum and copper. These calculations show that actual lifetimes are the result of a delicate balance between localization, density of states, screening, and Fermi-surface topology, even in the case of a free-electron-like metal such as aluminum. In this paper, we report first-principles calculations of inelastic lifetimes of excited electrons in a variety of real metals. We start with free-electron-like trivalent (Al) and divalent (Mg) metals, and then focus on divalent Be and the role that $`d`$ electrons play in a noble metal like Cu. First, we expand the one-electron Bloch states in a plane-wave basis, and solve the Kohn-Sham equation of density-functional theory (DFT) by invoking the local-density approximation (LDA) for exchange and correlation (XC). The electron-ion interaction is described by means of non-local, norm-conserving ionic pseudopotentials, and we use the one-electron Bloch states to evaluate the screened Coulomb interaction in the random-phase approximation (RPA). We finally evaluate the lifetime of an excited Bloch state from the knowledge of the imaginary part of the electron self-energy, which we compute within the GW approximation of many-body theory. Our calculations indicate that scattering rates may strongly depend, for a given electron energy, on the direction of the wave vector of the initial state. Also, average lifetimes, as obtained by averaging over all wave vectors and bands with the same energy, are found to deviate considerably from those derived for a FEG. The rest of this paper is organized as follows: Explicit expressions for the electron decay rate in periodic crystals are derived in section II, within the GW approximation of many-body theory. Calculated inelastic lifetimes of hot electrons in Al, Mg, Be, and Cu are presented in section III, and the conclusions are given in section IV. Atomic units are used throughout, i.e., $`e^2=\mathrm{\hbar }=m_e=1`$. ## II Theory Take an inhomogeneous electron system. In the framework of many-body theory, the damping rate $`\tau _i^{-1}`$ of an excited electron in the state $`\varphi _i(𝐫)`$ with energy $`E_i`$ is obtained from the knowledge of the imaginary part of the electron self-energy, $`\mathrm{\Sigma }(𝐫,𝐫^{};E_i)`$, as $$\tau _i^{-1}=2\int d𝐫\int d𝐫^{}\varphi _i^{*}(𝐫)\mathrm{Im}\mathrm{\Sigma }(𝐫,𝐫^{};E_i)\varphi _i(𝐫^{}).$$ (1) In the GW approximation, one considers only the first-order term in a series expansion of the self-energy in terms of the screened Coulomb interaction: $$\mathrm{\Sigma }(𝐫,𝐫^{};E_i)=\frac{i}{2\pi }\int dEG(𝐫,𝐫^{};E_i-E)W(𝐫,𝐫^{};E),$$ (2) where $`G(𝐫,𝐫^{};E_i-E)`$ represents the one-particle Green function and $`W(𝐫,𝐫^{};E)`$ is the time-ordered screened Coulomb interaction.
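Before the band-structure machinery is developed, it is convenient to record the FEG benchmark against which the results below are compared: the Quinn-Ferrell formula, quoted later as Eq. (23), can be evaluated in a few lines. In this sketch the r<sub>s</sub> value for Al is the one used in this paper, while the values for Mg, Be, and Cu are standard free-electron-density values and are an assumption here.

```python
# Quinn-Ferrell FEG lifetime, tau = 263 r_s^{-5/2} (E - E_F)^{-2} fs eV^2
# (Eq. (23) below). A quick baseline for the metals discussed in this paper.
# r_s values other than Al's 2.07 are standard free-electron values.

def tau_qf_fs(E_minus_EF_eV, r_s):
    return 263.0 * r_s ** -2.5 * E_minus_EF_eV ** -2.0

for metal, r_s in (("Al", 2.07), ("Mg", 2.66), ("Be", 1.87), ("Cu", 2.67)):
    taus = ", ".join(f"{tau_qf_fs(E, r_s):6.1f} fs at {E} eV"
                     for E in (1.0, 2.0, 3.0))
    print(f"{metal} (r_s = {r_s}): {taus}")
# e.g. Al: ~43 fs at 1 eV in this high-density-limit formula; the paper's
# band-structure result for Al lies ~35% below the full FEG GW curve.
```

The evaluation of Eqs. (1) and (2) for a periodic crystal now proceeds as follows.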
After replacing the Green function ($`G`$) by the zeroth-order approximation ($`G^0`$), the imaginary part of the self-energy can be evaluated explicitly: $$\mathrm{Im}\mathrm{\Sigma }(𝐫,𝐫^{};E_i)=\underset{f}{\sum }\varphi _f^{*}(𝐫^{})\mathrm{Im}W(𝐫,𝐫^{};\omega )\varphi _f(𝐫),$$ (3) where $`\omega =E_i-E_f`$ represents the energy transfer, the sum is extended over a complete set of final states $`\varphi _f(𝐫)`$ with energy $`E_f`$ ($`E_F\le E_f\le E_i`$), $`E_F`$ is the Fermi energy, and $`W(𝐫,𝐫^{};\omega )=`$ $`v(𝐫-𝐫^{})+{\displaystyle \int d𝐫_1\int d𝐫_2v(𝐫-𝐫_1)}`$ (6) $`\times \chi (𝐫_1,𝐫_2;\omega )v(𝐫_2-𝐫^{}).`$ Here, $`v(𝐫-𝐫^{})`$ represents the bare Coulomb interaction, and $`\chi (𝐫,𝐫^{};\omega )`$ is the density-density correlation function of the solid. In the framework of time-dependent density-functional theory (TDDFT), the density-density correlation function satisfies the integral equation $`\chi (𝐫,𝐫^{};\omega )`$ $`=\chi ^0(𝐫,𝐫^{};\omega )+{\displaystyle \int d𝐫_1\int d𝐫_2\chi ^0(𝐫,𝐫_1;\omega )}`$ (9) $`\times [v(𝐫_1-𝐫_2)+K^{xc}(𝐫_1,𝐫_2;\omega )]\chi (𝐫_2,𝐫^{};\omega ),`$ where $`\chi ^0(𝐫,𝐫^{};\omega )`$ is the density-density correlation function of noninteracting Kohn-Sham electrons, as described by the solutions of the time-dependent counterpart of the Kohn-Sham equation. In practice, these amplitudes are approximated by standard LDA wave functions. The kernel $`K^{xc}(𝐫_1,𝐫_2;\omega )`$, which accounts for the reduction in the e-e interaction due to the existence of short-range exchange-correlation (XC) effects, is obtained from the knowledge of the XC energy functional. In the RPA, this kernel is taken to be zero. For periodic crystals, one may introduce the following Fourier expansion for the screened interaction of Eq. (6): $`W(𝐫,𝐫^{};\omega )=`$ $`{\displaystyle \frac{1}{\mathrm{\Omega }}}{\displaystyle \underset{𝐪}{\overset{BZ}{\sum }}}{\displaystyle \underset{𝐆,𝐆^{}}{\sum }}\mathrm{e}^{\mathrm{i}(𝐪+𝐆)𝐫}\mathrm{e}^{-\mathrm{i}(𝐪+𝐆^{})𝐫^{}}`$ (12) $`\times v_𝐆(𝐪)ϵ_{𝐆,𝐆^{}}^{-1}(𝐪,\omega ),`$ where the first sum is extended over the first Brillouin zone (BZ), $`𝐆`$ and $`𝐆^{}`$ are reciprocal lattice vectors, $`\mathrm{\Omega }`$ is the normalization volume, $`v_𝐆(𝐪)`$ represent the Fourier coefficients of the bare Coulomb interaction, and $`ϵ_{𝐆,𝐆^{}}^{-1}(𝐪,\omega )`$ are the Fourier coefficients of the inverse dielectric function, $$ϵ_{𝐆,𝐆^{}}^{-1}(𝐪,\omega )=\delta _{𝐆,𝐆^{}}+\chi _{𝐆,𝐆^{}}(𝐪,\omega )v_𝐆^{}(𝐪).$$ (13) Within RPA, $$ϵ_{𝐆,𝐆^{}}(𝐪,\omega )=\delta _{𝐆,𝐆^{}}-\chi _{𝐆,𝐆^{}}^0(𝐪,\omega )v_𝐆^{}(𝐪),$$ (14) where $`\chi _{𝐆,𝐆^{}}^0(𝐪,\omega )`$ are the Fourier coefficients of the density-density correlation function of non-interacting Kohn-Sham electrons (see, e.g., Ref.). After introduction of the Fourier representation of Eq. (12) into Eq.
(3), and in the limit that the volume of the system $`\mathrm{\Omega }`$ becomes infinite, one finds the following expression for the damping rate of an electron in the state $`\varphi _{𝐤,n_i}(𝐫)`$ with energy $`E_{𝐤,n_i}`$: $`\tau _i^{-1}={\displaystyle \frac{1}{\pi ^2}}{\displaystyle \underset{f}{\sum }}{\displaystyle \int _{\mathrm{BZ}}}d𝐪{\displaystyle \underset{𝐆,𝐆^{}}{\sum }}`$ $`{\displaystyle \frac{B_{if}^{}(𝐪+𝐆)B_{if}(𝐪+𝐆^{})}{\left|𝐪+𝐆\right|^2}}`$ (17) $`\times \mathrm{Im}\left[-ϵ_{𝐆,𝐆^{}}^{-1}(𝐪,\omega )\right],`$ where $`\omega =E_{𝐤,n_i}-E_{𝐤-𝐪,n_f}`$, and $$B_{if}(𝐪+𝐆)=\int d𝐫\,\varphi _{𝐤,n_i}^{}(𝐫)\,\mathrm{e}^{-\mathrm{i}(𝐪+𝐆)𝐫}\,\varphi _{𝐤-𝐪,n_f}(𝐫).$$ (18) Couplings of the wave vector $`𝐪+𝐆`$ to wave vectors $`𝐪+𝐆^{}`$ with $`𝐆\ne 𝐆^{}`$ appear as a consequence of the existence of electron-density variations in real solids. If these terms, representing the so-called crystalline local-field effects, are neglected, one can write $$\tau _i^{-1}=\frac{1}{\pi ^2}\underset{f}{\sum }\int _{\mathrm{BZ}}d𝐪\underset{𝐆}{\sum }\frac{\left|B_{if}(𝐪+𝐆)\right|^2}{\left|𝐪+𝐆\right|^2}\frac{\mathrm{Im}\left[ϵ_{𝐆,𝐆}(𝐪,\omega )\right]}{|ϵ_{𝐆,𝐆}(𝐪,\omega )|^2}.$$ (19) Within RPA, Eq. (14) yields $`\mathrm{Im}\left[ϵ_{𝐆,𝐆}(𝐪,\omega )\right]={\displaystyle \frac{2\pi v_𝐆(𝐪)}{\mathrm{\Omega }}}{\displaystyle \underset{𝐤}{\overset{BZ}{\sum }}}{\displaystyle \underset{n,n^{}}{\sum }}(f_{𝐤,n}-f_{𝐤+𝐪,n^{}})`$ (20) $`\times |\langle \varphi _{𝐤,n}|e^{-\mathrm{i}(𝐪+𝐆)𝐫}|\varphi _{𝐤+𝐪,n^{}}\rangle |^2\,\delta (\omega -E_{𝐤+𝐪,n^{}}+E_{𝐤,n}).`$ Hence, the imaginary part of $`ϵ_{𝐆,𝐆}(𝐪,\omega )`$ represents a measure of the number of states available for real transitions involving a given momentum and energy transfer $`𝐪+𝐆`$ and $`\omega `$, respectively, which is renormalized by the coupling between initial and final states. The factor $`\left|ϵ_{𝐆,𝐆}(𝐪,\omega )\right|^2`$ in Eq. (19) accounts for the screening in the interaction with the probe electron. Initial and final states of the probe electron enter through the coefficients $`B_{if}(𝐪+𝐆)`$. If all one-electron Bloch states entering both the coefficients $`B_{if}(𝐪+𝐆)`$ and the dielectric function $`ϵ_{𝐆,𝐆^{}}(𝐪,\omega )`$ were represented by plane waves, then Eqs. (17) and (19) would exactly coincide with the GW scattering rate of excited electrons in a FEG, as obtained by Quinn and Ferrell and by Ritchie. For hot electrons with energies very near the Fermi level ($`E_i\to E_F`$) this result yields, in the high-density limit ($`r_s\ll 1`$), the well-known formula of Quinn and Ferrell, $$\tau _i^{QF}=263\,r_s^{-5/2}\,(E_i-E_F)^{-2}\,\mathrm{eV}^2\,\mathrm{fs}.$$ (23) For a detailed discussion of the range of validity of this approach, see Ref.. We note that the decay rate $`\tau _i^{-1}`$ of hot electrons in periodic crystals depends on both the wave vector $`𝐤`$ and the band index $`n_i`$ of the initial Bloch state. Nevertheless, we also define $`\tau ^{-1}(E)`$ as the average of $`\tau ^{-1}(𝐤,n)`$ over all wave vectors and bands with the same energy in the irreducible wedge of the Brillouin zone (IBZ). Decay rates of hot electrons lying outside the IBZ are obtained by simply using the symmetry property $`\tau ^{-1}(S𝐤,n)=\tau ^{-1}(𝐤,n)`$, where $`S`$ represents an operator of the point group of the crystal. For the evaluation of the polarizability $`\chi _{𝐆,𝐆^{}}^0(𝐪,\omega )`$ and the coefficients $`B_{if}(𝐪+𝐆)`$, Eq.
(18), we use the self-consistent LDA eigenfunctions of the one-electron Kohn-Sham hamiltonian of DFT, which we first expand in a plane-wave basis, $$\varphi _{𝐤,n}(𝐫)=\frac{1}{\sqrt{\mathrm{\Omega }}}\underset{𝐆}{\sum }u_{𝐤,n}(𝐆)\,\mathrm{e}^{\mathrm{i}(𝐤+𝐆)𝐫}.$$ (24) The electron-ion interaction is described by means of non-local, norm-conserving ionic pseudopotentials, and the XC potential is obtained in the LDA with use of the Perdew-Zunger parametrization of the XC energy of Ceperley and Alder. Well-converged results have been found with the introduction in Eq. (24) of kinetic-energy cutoffs of $`12`$, $`6`$, and $`20\mathrm{Ry}`$ for Al, Mg, and Be, respectively. In the case of Cu, all $`4s^1`$ and $`3d^{10}`$ Bloch states have been kept as valence electrons in the pseudopotential generation, and an energy cutoff as large as $`75\mathrm{Ry}`$ has been required, thereby keeping $`\sim 900`$ plane waves in the expansion of Eq. (24). Though all-electron schemes, such as the full-potential linearized augmented plane-wave (LAPW) method, are expected to be better suited for the description of the response of localized $`d`$ electrons, the plane-wave pseudopotential approach has already been successfully incorporated in the description of the dynamical response of copper. Samplings over the BZ required for the evaluation of both the dielectric matrix and the hot-electron decay rate have been performed on Monkhorst-Pack (MP) meshes: $`20\times 20\times 20`$ for Al, $`24\times 24\times 10`$ for Mg, $`24\times 24\times 16`$ for Be, and $`16\times 16\times 16`$ for Cu. For the hot-electron energies under study ($`E-E_F\sim 0.5-4.0\,\mathrm{eV}`$), the inclusion of up to 40 bands has been required, and the sum in Eq. (19) has been extended over $`15`$ $`𝐆`$ vectors of the reciprocal lattice, the magnitude of the maximum momentum transfer $`𝐪+𝐆`$ being well over the upper limit of $`2q_F`$ ($`q_F`$ is the Fermi momentum). For the evaluation of hot-electron lifetimes from Eq. (17), with full inclusion of crystalline local-field effects, dielectric matrices as large as $`40\times 40`$ have been considered. ## III Results and discussion ### A Aluminum Due to the free-electron-like nature of the energy bands of face-centered cubic (fcc) aluminum \[see Fig. 1\], a simple metal with no $`d`$ bands, the impact of the band structure on the electronic excitations had been presumed for many years to be small. However, X-ray measurements and careful first-principles calculations of the dynamical density-response of this material have shown that band-structure effects cannot be neglected. Full band-structure calculations of the electronic stopping power of Al for slow ions have shown that the energy loss is $`\sim 7\%`$ larger than that of a FEG. Our present calculations indicate that actual hot-electron lifetimes in Al are $`\sim 35\%`$ smaller than those of electrons in a FEG. Our ab initio GW-RPA calculation of the average lifetime $`\tau (E)`$ of hot electrons in Al, as obtained from Eq. (17) with full inclusion of crystalline local-field effects, is presented in Fig. 2 by solid circles. The GW-RPA lifetime of hot electrons in a FEG with the electron density equal to that of valence electrons in Al ($`r_s=2.07`$) is exhibited in the same figure, by a solid line. Our calculations indicate that the lifetime of hot electrons in Al is, within RPA, smaller than that of electrons in a FEG with $`r_s=2.07`$ by a factor of $`0.65`$. We have performed band-structure calculations of Eq. (17) with and without \[see also Eq. (19)\] the inclusion of crystalline local-field corrections, and have found that these corrections are negligible for the electron energies under study. This is an expected result, since the Al crystal presents neither strong density gradients nor preferred electron-density directions (bondings). In order to understand the origin of band-structure effects on hot-electron lifetimes in Al, we first focus on the role that the band structure plays in the creation of electron-hole (e-h) pairs. To this end, we evaluate hot-electron lifetimes from either Eq. (17) or Eq. (19) by replacing the electron initial and final states in $`|B_{if}(𝐪+𝐆)|^2`$ by plane waves (plane-wave calculation). The result we obtain with full inclusion of the band structure of the crystal in the evaluation of $`\mathrm{Im}ϵ_{𝐆,𝐆^{}}^{-1}(𝐪,\omega )`$ is represented in Fig. 2 by open triangles. Due to the splitting of the band structure over the Fermi level, new channels are opened for e-h pair production, and band-structure effects tend, therefore, to decrease the lifetime of very-low-energy electrons by $`\sim 5\%`$, as in the case of slow ions. In the case of moving ions, differences between actual decay rates and those obtained in a FEG only enter through the so-called energy-loss matrix, $`\mathrm{Im}[ϵ_{𝐆,𝐆^{}}^{-1}(𝐪,\omega )]`$. However, hot-electron decay rates are also sensitive to the actual initial and final states entering the coefficients $`B_{if}(𝐪+𝐆)`$. Differences between our full (solid circles) and plane-wave (open triangles) calculations come from this sensitivity of hot-electron initial and final states to the band structure of the crystal, showing that the splitting of the band structure over the Fermi level now plays a major role in lowering the hot-electron lifetime. Scaled lifetimes, $`\tau (E)\times (E-E_F)^2`$, of hot electrons in Al, as obtained from our full band-structure calculation (solid circles) and from the FEG model with $`r_s=2.07`$ (solid line), are represented in the inset of Fig. 2. In the limit $`E\to E_F`$, the available phase space for real transitions is simply $`E-E_F`$, which yields the $`(E-E_F)^{-2}`$ quadratic scaling of the lifetime at very low electron energies in a FEG, as predicted by Eq. (23) (dashed line). However, as the energy increases, momentum and energy conservation prevent the available phase space from being as large as $`E-E_F`$, and the lifetime of electrons in a FEG departs, therefore, from the $`(E-E_F)^{-2}`$ scaling. For the energies under study, band-structure effects in Al are found to be nearly energy-independent; hence, our calculated lifetimes are found to scale approximately as in the case of electrons in a FEG, and they depart slightly, therefore, from the $`(E-E_F)^{-2}`$ scaling. The agreement at $`E-E_F\sim 3\,\mathrm{eV}`$ between actual lifetimes and those predicted by Eq. (23) is simply due to the nearly complete compensation, at these energies, between the departure of this formula from full free-electron-gas RPA calculations and band-structure effects. Although the energy bands of Al show an overall similarity with the fcc free-electron band structure, in the vicinity of the W-point there are large differences between the two cases \[see Fig. 1\]. At this point, the free-electron parabola opens an energy gap around the Fermi level and splits along the $`WX`$ direction into two bands ($`Z_1`$ and $`Z_4`$) with energies over the Fermi level. We have calculated lifetimes of hot electrons in these bands, with the wave vector along the $`WX`$ direction. The results of these calculations are exhibited in Fig.
2, as a function of energy, by dotted ($`Z_1`$) and long-dashed lines ($`Z_4`$). Although hot electrons in the $`Z_1`$ band have one more channel available to decay along the $`WX`$ direction, the band gap at the $`W`$-point around the Fermi level results in hot electrons living longer on the $`Z_1`$ than on the $`Z_4`$ band. When the hot-electron energy is well above the Fermi level ($`E-E_F>4\,\mathrm{eV}`$), both $`Z_1`$ and $`Z_4`$ lifetimes nearly coincide with the average values represented by solid circles. We have evaluated hot-electron lifetimes along other directions of the wave vector, and have found that differences between these results and average lifetimes are not larger than those obtained along the $`WX`$ direction. This is in disagreement with the calculations reported by Schöne et al. In particular, the bending of the hot-electron lifetime along the WL direction at $`\sim 1\,\mathrm{eV}`$ reported in Ref. is not present in our calculations. The origin of this discrepancy is the crossing near the Fermi level between bands $`Q_+`$ and $`Q_{-}`$ along the WL direction reported in Ref.. This crossing is absent in the present \[see Fig. 1\] and previous self-consistent band-structure calculations, which all show that at the $`W`$-point of the Al band structure the level $`W_2^{\prime }`$ is below $`W_1`$. ### B Magnesium In Fig. 3 we show the band structure of hexagonal close-packed (hcp) magnesium. There is a close resemblance for energies $`E<E_F`$ between this band structure and that of free electrons, though the free-electron parabola now splits near the Fermi level along certain symmetry directions. As a result, the energy-loss function, $`\mathrm{Im}[ϵ_{𝐆,𝐆^{}}^{-1}(𝐪,\omega )]`$, of this material is reasonably well described within a free-electron model, and band-structure effects on hot-electron lifetimes enter mainly, as in the case of Al, through the sensitivity of hot-electron initial and final states to the band structure of the crystal. Our ab initio calculation of the average lifetime $`\tau (E)`$ of hot electrons in Mg, as obtained from Eq. (17) with full inclusion of crystalline local-field effects, is presented in Fig. 4 by solid circles, together with the lifetime of hot electrons in a FEG with the electron density equal to that of valence electrons in Mg ($`r_s=2.66`$). As in the case of Al, we have found that local-field corrections are negligible for the electron energies under study. Scaled lifetimes of hot electrons in Mg, as obtained from our full band-structure calculations (solid circles) and from the FEG model with $`r_s=2.66`$ (solid line), are represented in the inset of Fig. 4. We note that actual lifetimes in this material scale with energy approximately as in the case of electrons in a FEG, thereby slightly deviating from the $`(E-E_F)^{-2}`$ scaling predicted by Eq. (23) (dashed line). Because of the splitting of the band structure over the Fermi level, new decay channels are opened, not present in the case of a FEG, and band-structure effects tend, therefore, to decrease the lifetime of hot electrons in Mg by a factor of $`0.75`$. Since the splitting of the band structure of Mg is not as pronounced as that of Al, the departure of actual lifetimes from those of electrons in a FEG is found to be smaller in Mg than in Al. ### C Beryllium Among the so-called simple metals, with no $`d`$ bands, beryllium presents distinctive features in that its band structure \[see Fig. 5\] exhibits the largest departure from free-electron behaviour. Both Be and Mg have hcp crystal structure and two conduction electrons per atom. Nevertheless, the electronic structure of Be is qualitatively different from that of Mg, the Be band gaps at points $`\mathrm{\Gamma }`$, $`H`$, and $`L`$ being much larger than those in Mg. Also, the Be band gaps are located on both sides of the Fermi level, and the density of states (DOS) of this material falls to a sharp minimum near the Fermi level. Our band-structure calculation of the average lifetime $`\tau (E)`$ of hot electrons in Be is shown in Fig. 6 by solid circles, as obtained from Eq. (17). The lifetime of hot electrons in a FEG with the electron density equal to that of valence electrons in Be ($`r_s=1.87`$) is represented in the same figure by a solid line. In the inset, the corresponding scaled calculations are plotted, together with the results obtained from Eq. (23) (dashed line). It can be seen that large deviations from the FEG calculation occur for electron energies near the Fermi level ($`E-E_F<3\,\mathrm{eV}`$), especially at $`E-E_F\sim 1.4`$ and $`1.8\,\mathrm{eV}`$, where the presence of band gaps at points $`\mathrm{\Gamma }`$ and $`L`$ plays a key role. We note that the deep departure from free-electron behaviour of the beryllium DOS near the Fermi level tends to increase the inelastic lifetime of all excited Bloch states. Furthermore, actual lifetimes strongly deviate from the $`(E-E_F)^{-2}`$ scaling predicted within Fermi-liquid theory. This deviation comes from the contribution to the average lifetime due to Bloch states near the points $`\mathrm{\Gamma }`$ and $`L`$ with energies close to the energy gap. At the $`\mathrm{\Gamma }`$-point, the free-electron parabola opens a wide energy gap around the Fermi level and splits along the $`\mathrm{\Gamma }K`$ and $`\mathrm{\Gamma }M`$ directions into two bands, $`T_2/T_4`$ and $`\mathrm{\Sigma }_1/\mathrm{\Sigma }_3`$, respectively. The results of our calculated lifetimes of hot electrons in bands $`T_2`$ and $`T_4`$, with the wave vector along the $`\mathrm{\Gamma }K`$ direction, are plotted in Fig. 7 by short-dashed and long-dashed lines, respectively. For comparison, the average lifetimes of hot electrons in real beryllium and in a FEG with $`r_s=1.87`$ are also plotted in this figure, by solid circles and by a solid line, respectively. At very low electron energies ($`E-E_F<1\,\mathrm{eV}`$), interband transitions yield lifetimes of hot electrons in the $`T_2`$ band that are below those of electrons in a FEG, as in the case of Al and Mg. However, at higher energies the coupling with lower-lying flat bands becomes small, and lifetimes of electrons in this ($`T_2`$) band are found to be above the FEG prediction. Lifetimes of hot electrons in the $`T_4`$ band are also, at very low electron energies ($`E-E_F<1.4\,\mathrm{eV}`$), below those of electrons in a FEG. Nevertheless, at energies of $`\sim 1.4\,\mathrm{eV}`$ the presence of the band gap at $`\mathrm{\Gamma }`$ yields very long lifetimes, especially at the level $`\mathrm{\Gamma }_4^{}`$. We have calculated hot-electron lifetimes in bands $`\mathrm{\Sigma }_1`$ and $`\mathrm{\Sigma }_3`$, with the wave vector along the $`\mathrm{\Gamma }M`$ direction, and have found results that are similar to those obtained for electrons in bands $`T_2`$ and $`T_4`$. Calculations of the lifetime of hot electrons in the $`\mathrm{\Delta }_2`$ band, with the wave vector along the $`\mathrm{\Gamma }A`$ direction, are also shown in Fig. 7 (dotted line).
Though this is not a flat band, the presence of the gap at the $`\mathrm{\Gamma }`$-point near the Fermi level results in hot electrons close to the level $`\mathrm{\Gamma }_4^{}`$ living much longer than in the case of a FEG. The combined contribution from hot electrons in bands $`T_4`$, $`\mathrm{\Sigma }_1`$, and $`\mathrm{\Delta }_2`$ is the origin of the enhanced average lifetime (solid circles) at $`E-E_F\sim 1.4\,\mathrm{eV}`$, which corresponds to the energy of the level $`\mathrm{\Gamma }_4^{}`$ at the $`\mathrm{\Gamma }`$-point. A wide band gap is also opened at the $`L`$-point, which gives rise to the enhanced average lifetime at $`E-E_F\sim 1.8\,\mathrm{eV}`$. We have therefore plotted in Fig. 8 the results of our calculated lifetimes of hot electrons in the $`S_1`$ band, with the wave vector along the $`LH`$ direction (dotted line), together with the average lifetime of hot electrons in real Be (solid circles) and in a FEG with $`r_s=1.87`$ (solid line). As in the case of hot electrons in bands $`T_4`$ and $`\mathrm{\Sigma }_1`$, the presence of the band gap at the $`L`$-point yields a very long average lifetime, but now at $`E-E_F\sim 1.8\,\mathrm{eV}`$. A similar behaviour is obtained near the $`L`$-point for electrons in the $`R_1R_3`$ band along the $`LA`$ direction, and both bands, $`S_1`$ and $`R_1R_3`$, contribute to the enhanced average lifetime at $`E-E_F\sim 1.8\,\mathrm{eV}`$. In Fig. 8 we have also represented calculations of the lifetime of hot electrons in the $`S_1^{}`$ band along the $`HA`$ direction (dashed line). At low energies ($`E-E_F<2\,\mathrm{eV}`$), the presence of the gap at the $`H`$-point leads to hot-electron lifetimes along the $`HA`$ direction that are longer than those of electrons in a FEG, but the departure from free-electron behaviour at the $`H`$-point is not as pronounced as at the $`\mathrm{\Gamma }`$ or $`L`$ points. At higher energies, the $`S_1^{}`$ band shows great similarity with the corresponding hcp free-electron band, and lifetimes nearly coincide, therefore, with those obtained within the FEG model. ### D Copper Copper, the metal most widely studied by TR-2PPE, is a noble metal with entirely filled $`3d`$-like bands. In Fig. 9 we show the energy bands of this fcc crystal. We see a profound difference between the band structure of Cu and that of free electrons. Slightly below the Fermi level, at $`E-E_F\simeq -2\,\mathrm{eV}`$, we have $`d`$ bands capable of holding 10 electrons per atom, the one remaining electron being in a free-electron-like band below and above the $`d`$ bands. Hence, a combined description of both delocalized $`4s^1`$ and localized $`3d^{10}`$ electrons is needed to address the actual electronic response of this material. The results presented below have been found by keeping all $`4s^1`$ and $`3d^{10}`$ Bloch states as valence electrons in the pseudopotential generation. Band-structure GW-RPA calculations of the average lifetime $`\tau (E)`$ of hot electrons in Cu are exhibited in Fig. 10 by solid circles, as obtained from Eq. (17) with full inclusion of crystalline local-field effects. The lifetime of hot electrons in a FEG with the electron density equal to that of $`4s^1`$ electrons in Cu ($`r_s=2.67`$) is represented by a solid line. These calculations indicate that the lifetime of hot electrons in Cu is, within RPA, larger than that of electrons in a FEG with $`r_s=2.67`$, this enhancement varying from a factor of $`\sim 2.5`$ near the Fermi level ($`E-E_F=1.0\,\mathrm{eV}`$) to a factor of $`\sim 1.5`$ for $`E-E_F=3.5\,\mathrm{eV}`$.
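The FEG reference values quoted throughout (solid lines in Figs. 2, 4, 6, and 10) reduce, at low energies, to the Quinn-Ferrell limit of Eq. (23), where the scaled lifetime is energy independent and depends only on $`r_s`$. The following minimal Python sketch (ours, not part of the original calculations) evaluates this limit for the densities used in this paper:

```python
# Quinn-Ferrell low-energy limit of the FEG hot-electron lifetime, Eq. (23):
# tau_QF = 263 * r_s**(-5/2) * (E - E_F)**(-2), tau in fs, E - E_F in eV.

def tau_qf(r_s: float, e_minus_ef: float) -> float:
    """Hot-electron lifetime (fs) for a FEG of density parameter r_s."""
    return 263.0 * r_s ** (-2.5) * e_minus_ef ** (-2.0)

if __name__ == "__main__":
    # Valence-electron densities considered in the text.
    for name, rs in {"Al": 2.07, "Mg": 2.66, "Be": 1.87, "Cu (4s)": 2.67}.items():
        scaled = 263.0 * rs ** (-2.5)  # tau * (E - E_F)**2, energy independent here
        print(f"{name:8s} r_s = {rs:4.2f}: tau*(E-E_F)^2 = {scaled:5.1f} fs eV^2")
```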
In order to investigate the role that localized $`d`$ bands play in the decay mechanism of hot electrons, we have also used an ab initio pseudopotential with the $`3d`$ shell assigned to the core. The result of this calculation, displayed in Fig. 10 by a dotted line, shows that it nearly coincides with the FEG calculation; thus, $`d`$-band states play a key role in the hot-electron decay. We have performed band-structure calculations of Eq. (17) with and without \[see also Eq. (19)\] the inclusion of crystalline local-field corrections, and have found that these corrections are negligible for $`E-E_F>1.5\,\mathrm{eV}`$, while for energies very near the Fermi level neglecting these corrections results in an overestimation of the lifetime of less than $`5\%`$. Therefore, differences between our full band-structure calculations (solid circles) and FEG calculations (solid line) come from the actual DOS available for real excitations, localization, additional screening, and Fermi-surface topology. First of all, we focus on the role that both the DOS and the coupling between Bloch states participating in the creation of e-h pairs, i.e., localization \[see Eq. (20)\], play in the hot-electron decay mechanism. Hence, we neglect crystalline local-field effects and present the result of evaluating hot-electron lifetimes from Eq. (19) by replacing initial and final states in $`|B_{if}(𝐪+𝐆)|^2`$ by plane waves and the dielectric function in $`\left|ϵ_{𝐆,𝐆}(𝐪,\omega )\right|^2`$ by that of a FEG with $`r_s=2.67`$. If we further replaced $`\mathrm{Im}\left[ϵ_{𝐆,𝐆}(𝐪,\omega )\right]`$ by that of a FEG, then we would obtain the FEG calculation represented by a solid line. The impact of the actual DOS below the Fermi level may be described by simply replacing the one-electron Bloch states in Eq. (20) by plane waves but keeping the actual number of states available for real excitations. The result of this calculation is represented in Fig. 10 by a dashed line. This result is very close to that reported by Ogawa et al., though these authors approximated the FEG dielectric function in $`\left|ϵ_{𝐆,𝐆}(𝐪,\omega )\right|^2`$ within the static Thomas-Fermi model. It is clear from Fig. 10 that the actual DOS available for real transitions yields lifetimes that are shorter than those obtained in a FEG model, especially for $`E-E_F>2\,\mathrm{eV}`$, due to the opening of the $`d`$-band scattering channel, which dominates the DOS at energies $`\sim 2\,\mathrm{eV}`$ below the Fermi level. However, if one takes into account, within a full description of the band structure of the crystal in the evaluation of $`\mathrm{Im}\left[ϵ_{𝐆,𝐆}(𝐪,\omega )\right]`$, the actual coupling between initial and final states available for real transitions, then one obtains hot-electron lifetimes which lie, at very low electron energies ($`E-E_F<2.5\,\mathrm{eV}`$), just above the FEG curve \[see open circles in Fig. 1 of Ref.\]. This enhancement of the lifetime, even at energies below the opening of the $`d`$-band scattering channel, is due to the fact that states just below the Fermi level have a small but significant $`d`$ component, thus being more localized than pure $`sp`$ states. The combined effect of DOS and localization, which enters through the imaginary part of the dielectric matrix $`\mathrm{Im}\left[ϵ_{𝐆,𝐆^{}}(𝐪,\omega )\right]`$, increases the lifetime of hot electrons with energies $`E-E_F<2.5\,\mathrm{eV}`$ \[see open circles in Fig. 1 of Ref.\]. As for the departure of hot-electron initial and final states from free-electron behaviour, entering through the coefficients $`B_{if}(𝐪+𝐆)`$, we have found that it yields hot-electron lifetimes that are strongly directionally dependent, Fermi-surface shape effects tending to decrease the average inelastic lifetime of very-low-energy electrons ($`E-E_F<2.5\,\mathrm{eV}`$) \[see Ref.\]. Furthermore, the combined effect of DOS and localization, on the one hand, and Fermi-surface shape effects, on the other hand, nearly compensate. Consequently, large differences between hot-electron lifetimes in real Cu and in a FEG with $`r_s=2.67`$ are mainly due to a major contribution from $`d`$ electrons participating in the screening of electron-electron interactions, which is accounted for through the factor $`\left|ϵ_{𝐆,𝐆}(𝐪,\omega )\right|^2`$ in Eq. (19). The Fermi surface of Cu is greatly flattened in certain regions, showing a pronounced neck in the $`\mathrm{\Gamma }L`$ direction. Thus, the isotropy of hot-electron lifetimes in a FEG disappears in this material. While flattening of the Fermi surface along the $`\mathrm{\Gamma }K`$ direction is found to decrease the hot-electron lifetime by an amount that varies from $`\sim 15\%`$ near the Fermi level ($`E-E_F=1\,\mathrm{eV}`$) to $`\sim 5\%`$ for $`E-E_F=3.5\,\mathrm{eV}`$ \[see also Ref.\], the lifetime of hot electrons with the wave vector along the necks of the Fermi surface, in the $`\mathrm{\Gamma }L`$ direction, is found to be much longer than the average lifetime. We have calculated hot-electron lifetimes in the $`\mathrm{\Lambda }_1`$ band, with the wave vector along the $`\mathrm{\Gamma }L`$ direction, and have found the lifetime of hot electrons at the $`L_1`$ level with $`E-E_F=4.2\,\mathrm{eV}`$ to be longer than the average lifetime at this energy by $`\sim 80\%`$. A comparison between our calculated hot-electron lifetimes in Cu and those determined from the most recent TR-2PPE experiments was presented in Ref.. At $`E-E_F<2\,\mathrm{eV}`$, our calculations are close to the lifetimes recently measured by Knoesel et al. in the very-low-energy range. At larger electron energies, good agreement between our band-structure calculations and experiment is obtained for Cu(110), the only surface with no band gap in the $`𝐤_{\parallel }=0`$ direction. ## IV Conclusions We have presented full GW-RPA band-structure calculations of the inelastic lifetime of hot electrons in Al, Mg, Be, and Cu, and have demonstrated that decay rates of low-energy excited electrons strongly depend on the details of the electronic band structure. Though the dependence of hot-electron lifetimes in Al and Mg on the direction of the wave vector has been found not to be large, in the case of Be and Cu hot-electron lifetimes have been found to be strongly directionally dependent. Furthermore, very long lifetimes at certain points of the BZ in Be yield average lifetimes in this material which strongly deviate from the $`(E-E_F)^{-2}`$ scaling predicted within Fermi-liquid theory. As far as band-structure effects on hot-electron energies and wave functions are concerned, we have found that both the splitting of the band structure and the presence of band gaps over the Fermi level play an important role in the e-e decay mechanism. In Al and Mg, splitting of the band structure is found to yield electron lifetimes that are smaller than those of electrons in a FEG.
On the other hand, large deviations of the Be band structure from the free-electron model along certain symmetry directions near the Fermi level result in a strong directional dependence of hot-electron lifetimes in this material. As for the presence of band-structure effects on the creation of e-h pairs, there are contributions from the actual DOS available for real transitions, from localization, i.e., the actual coupling between electron and hole states, and from screening. The combined effect of DOS and localization is found not to be large, even in the case of a noble metal like Cu with $`d`$ bands. However, large differences between hot-electron lifetimes in Cu and in a FEG with the electron density equal to that of valence ($`4s^1`$) electrons are found to be due to a major contribution from $`d`$ electrons participating in the screening of e-e interactions. Crystalline local-field corrections in these materials have been found to be small for the hot-electron energies under study. ## V Acknowledgments We would like to thank A. G. Eguiluz for stimulating discussions. We also acknowledge partial support by the University of the Basque Country, the Basque Hezkuntza, Unibertsitate eta Ikerketa Saila, and the Spanish Ministerio de Educación y Cultura.
# ISO far-infrared observations of rich galaxy clusters Based on observations with ISO, an ESA project with instruments funded by ESA member states (especially the PI countries: France, Germany, the Netherlands, and the United Kingdom) and with the participation of ISAS and NASA ## 1 Introduction The first paper in this series (Hansen et al. han99 (1999), paper i) presented infrared data for the Abell 2670 cluster. We identified 3 far-infrared sources apparently related to star-forming galaxies in the cluster. The present paper concerns the rich cluster Sérsic 159-03. The central part of the Sérsic 159-03 cluster was mapped by the Infrared Space Observatory (ISO) satellite, using the PHT-C camera (Lemke et al. lem96 (1996)) at $`60\mu \mathrm{m}`$, $`100\mu \mathrm{m}`$, $`135\mu \mathrm{m}`$, and $`200\mu \mathrm{m}`$. The observations were performed twice with slightly different position angles, which gives an opportunity to do independent detections and to study possible instrumental effects. The Sérsic 159-03 cluster (Abell S1101, z=0.0564) is of richness class 0, Bautz-Morgan type iii with a central dominant cD galaxy (Abell et al. abe89 (1989)). A cooling flow is present, and Allen and Fabian (all97 (1997)) found a mass deposition rate of $`\dot{\mathrm{M}}=231_{-10}^{+11}\mathrm{M}_{\mathrm{\odot }}\mathrm{yr}^{-1}`$ from ROSAT PSPC data. The cooling flow is centered on the cD galaxy, which exhibits nebular line emission. Crawford and Fabian (cra92 (1992)) obtained optical spectra and found from line-ratio diagrams that the ratios obtained along the slit bridged the gap between class i and class ii in the scheme of Heckman et al. (hec89 (1989)). Their spectra had position angle $`90\mathrm{°}`$. West of the center they discovered a detached filament of emission having extreme class ii characteristics. They argued that the different line ratios are due to changes in ionization properties. Below in Fig. 4 we show the extent of the nebular emission. In a subsequent paper Crawford and Fabian (cra93 (1993)) included IUE data to obtain the optical-ultraviolet continuum. They reported that a strong Ly$`\alpha `$ line is present in the IUE spectrum. ## 2 Observations ### 2.1 The ISO data A rectangular area centered on the cD galaxy of Sérsic 159-03 was mapped by ISO on May 7, 1996, during revolution 173. The projected Z-axis of the spacecraft had a position angle of $`54\stackrel{\mathrm{°}}{.}4`$ on the sky (measured from north through east). The observation was repeated June 4, 1996, during revolution 200, but this time with position angle $`69\stackrel{\mathrm{°}}{.}5`$. The observing mode was PHT 32 as for Abell 2670 (paper i). The 9-pixel C100 detector was used for $`60\mu \mathrm{m}`$ and $`100\mu \mathrm{m}`$ to map an area of $`10\stackrel{\mathrm{\prime }}{.}0\times 3\stackrel{\mathrm{\prime }}{.}8`$. For $`135\mu \mathrm{m}`$ and $`200\mu \mathrm{m}`$ the 4-pixel C200 detector was applied to cover a mapped area of $`11\stackrel{\mathrm{\prime }}{.}0\times 4\stackrel{\mathrm{\prime }}{.}6`$. The target-dedicated times were 1467 seconds for C100 and 1852 seconds for C200. As described in paper i we apply the ISOPHOT Interactive Analysis software<sup>1</sup><sup>1</sup>1The ISOPHOT data presented in this paper were reduced using PIA, which is a joint development by the ESA Astrophysics Division and the ISOPHOT consortium. (PIA) for the reduction work. We also perform parallel reductions using our own least-squares reduction procedure (LSQ, cf. paper i). Although LSQ does not use sophisticated methods to correct for various effects – e.g. glitches from cosmic rays are simply discarded – we find it valuable for comparisons with the PIA reductions when evaluating the reality of features visible in the frames. The conclusion is that the PIA-reduced images presented here (Fig. 1) do not contain noticeable artifacts from glitches. As in paper i we present the data maps with pixel sizes $`15^{\prime \prime }\times 46^{\prime \prime }`$ for C100 and $`30^{\prime \prime }\times 92^{\prime \prime }`$ for C200, but the instrumental resolution is only about $`50^{\prime \prime }`$ for C100 and $`95^{\prime \prime }`$ for C200 (paper i). The uncertainty of the maps increases towards the left and right borders due to the way the mapping was performed. ### 2.2 Optical data Optical imaging and spectroscopy were performed September 1996 using the DFOSC instrument on the Danish 1.54m telescope at La Silla. The field around the central cD galaxy is shown in Fig. 2. The image was obtained by adding exposures in B (45 min), V (30 min), and Gunn I (30 min). The distribution of B-I colour for a $`70^{\prime \prime }\times 70^{\prime \prime }`$ area covering the central parts of the dominant cD galaxy is given in Fig. 3. In order to image the distribution of the nebular emission we obtained narrow-band exposures through a filter ($`\lambda 6908`$, FWHM = 98 Å, 1 hour) covering the redshifted H$`\alpha `$+\[N ii\] lines and an off-band filter ($`\lambda 6801`$, FWHM = 98 Å, 1 hour). After scaling and subtraction a H$`\alpha `$+\[N ii\] image is obtained. The central part of this image is shown in Fig. 4. Details about the spectroscopy are found in Table 1. The slit was positioned on the cD nucleus with two different position angles. P.A.=$`270\mathrm{°}`$ covers the western filament, and P.A.=$`21\mathrm{°}`$ passes the object in the upper left corner of Fig. 4 and covers the jet-like emission to the northeast and southwest. ## 3 Results The general brightness distribution in the maps is described most easily for the C200 maps. The $`135\mu \mathrm{m}`$ and $`200\mu \mathrm{m}`$ maps are rather similar. An enhancement is seen at the center in all four maps, concordant with the position of the cD. A maximum is present in the upper left corners. After rotating the revolution 200 maps $`15\mathrm{°}`$ into coincidence with the rev. 173 maps, we find these maxima to overlap, suggesting the presence of one or more real sources. Similarly there are maxima in the upper right corners. Their positions and relative brightness in the maps can be understood if a source is present in the upper right corner of the rev. 200 maps, but just outside the rev. 173 field. A third characteristic feature is the brightness minimum to the lower left (i.e. south) of the center of the C200 maps. Again, when we compare the maps after rotation, the reality of this minimum is confirmed. We conclude that the brightness distribution seen in the C200 maps is real. The C100 maps have the advantage of better resolution, which improves the possibility of identifying optical counterparts. However, the reality of the peaks in the $`100\mu \mathrm{m}`$ maps is not convincing when the maps are compared after rotation. Generally the peaks occur at different locations. Even the central source is doubtful: The rev. 200 map shows a weak enhancement slightly displaced to the right of the center, but the rev. 173 map shows a minimum at the same location. A comparison between the $`60\mu \mathrm{m}`$ maps is more successful. Both show a central enhancement (C100-1), although slightly displaced to the right (north) in the rev. 173 map.
The maximum brightness (object C100-2) occurs in both maps near the upper left corners, and the peaks overlap after rotation. In Fig. 2 the approximate positions of overlap are marked by numbers for the off-center sources. The rev. 200 map has a peak (C100-3) in the upper right corner which may be related to the source present in the C200 maps. Furthermore, the peak (C100-4) in the right part of the rev. 200 $`60\mu \mathrm{m}`$ map overlaps with an enhancement in the rev. 173 map. There are disagreements as well, however. The peak evident in the rev. 173 map below C100-4 (confirmed by the LSQ reductions) is not visible in the rev. 200 map. We conclude that the $`60\mu \mathrm{m}`$ sources C100-1, C100-2, C100-3, and C100-4 are likely to be real, but that the present reduction software still produces artifacts, calling for caution in the interpretation. In paper i we found that aperture photometry of the faint sources suffers significantly from the uncertainty in the evaluation of the background level. We therefore prefer to position, scale and subtract the PSF from the maps. The success in removing the source is then evaluated by eye. By varying the scaling we estimate the maximum and minimum acceptable flux. The median and its deviation from the limits are given in Table 2 for our identified infrared sources. We assume that the two sources in the upper corners of the C200 maps are identical to C100-2 and C100-3. The reality of C100-1 at $`100\mu \mathrm{m}`$ may be questionable. C100-3 is outside the field in the rev. 173 map. ## 4 Discussion ### 4.1 The cD galaxy The central infrared source, C100-1, is detected in all maps except at $`100\mu \mathrm{m}`$. The measured fluxes in the two independent observations also agree within the limits. We therefore regard the source as real. A comparison with the list of Jura et al. (jur87 (1987)) shows that the luminosity of Sérsic 159-03 at $`60\mu \mathrm{m}`$ is larger than that of other early-type galaxies detected by IRAS by an order of magnitude or more, except the extraordinarily bright galaxy NGC 1275, which is the center of the Perseus cluster cooling flow, and which is undergoing an encounter with another galaxy (e.g. Nørgaard-Nielsen et al., noe93 (1993)). In a previous paper (Hansen et al. han95 (1995)) we presented a model for the infrared emission from Hydra A measured by IRAS. We assumed that most of the mass cooling out of the cluster gas ends up in low-mass stars forming in the flow. We further assumed that dust grains were able to grow in the cool pre-stellar clouds, converting a fraction $`y`$ of the mass into grains. If the mechanism is effective we expect $`y\sim 1`$%. After a star has formed, the remaining material is dispersed in the hot cluster gas. If a fraction $`f`$ is recycled to the hot phase, a dust mass of $`y\times f\times \dot{\mathrm{M}}`$ is continuously injected into the cluster gas. A priori we expect $`f`$ to be approximately 1–50%. The grains are destroyed by sputtering on a time scale $`\tau _\mathrm{d}`$, and a steady state is obtained. At any time a dust mass of $`\mathrm{M}_\mathrm{d}=y\times f\times \dot{\mathrm{M}}\times \tau _\mathrm{d}`$ is present. The grains are heated by hot electrons (in the inner galaxy the photon field may also be important), and the infrared emission can be evaluated. The present data do not allow testing of more elaborate models having radial distributions of e.g. the dust temperature. We therefore only make a simple estimate using mean values.
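To give a feel for the numbers, the steady-state dust mass implied by $`\mathrm{M}_\mathrm{d}=y\times f\times \dot{\mathrm{M}}\times \tau _\mathrm{d}`$ can be evaluated directly; the short sketch below is our illustration only, where the sputtering time scale $`\tau _\mathrm{d}`$ is an assumed round number and the values of $`y`$ and $`f`$ anticipate those adopted in the fit discussed next:

```python
# Steady-state dust mass in the cooling flow: M_d = y * f * Mdot * tau_d.
# Only Mdot (231 Msun/yr, Allen & Fabian 1997) is taken from the text;
# y and f follow Sect. 4.1, and tau_d is an assumed illustrative value.

M_DOT = 231.0    # mass deposition rate [Msun / yr]
Y = 0.01         # fraction of the cooled mass converted into grains
F = 0.02         # fraction of the material recycled to the hot phase
TAU_D = 1.0e7    # grain sputtering time scale [yr] (assumption)

m_dust = Y * F * M_DOT * TAU_D
print(f"steady-state dust mass ~ {m_dust:.1e} Msun")   # ~ 4.6e5 Msun
```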
For Hydra A we found that $`y=1\%`$ and $`f=11\%`$ reproduced the observed IRAS flux. In Table 3, giving calculated fluxes, we repeat the calculations for Sérsic 159-03, but with $`f`$ reduced to 2%. Considering the crude model and the uncertainty of the measurements, we find the agreement with the observed values in Table 2 satisfactory. This result has some significance although $`f`$ has been used as a free parameter to obtain concordance. If a value of $`f`$ much larger than unity had been necessary to fit the observations, the model would have had to be rejected. Also, a value significantly lower than 1% would have made the model unconvincing. A possible disagreement with the model is, however, the small extent of the source. One would expect the infrared emission to show some distribution within the cooling radius, which is $`1\stackrel{\mathrm{\prime }}{.}89`$. Although the resolution at $`60\mu \mathrm{m}`$ is $`50^{\prime \prime }`$, C100-1 is indistinguishable from a point source in all our measurements. The reason could be that (1) the star formation is concentrated to the center (as seems to be the case for Hydra A, see Hansen et al. han95 (1995)), (2) the model does not apply, or (3) instrumental effects prevent detection of a faint, extended distribution of FIR emission. Alternative possibilities are that C100-1 is related to the active nucleus, as inferred from the presence of a radio source (Large et al. lar81 (1981), Wright et al. wri94 (1994)), or that dust has been introduced into the system by a recent merger event. A hint may be that all three measurable images of the revolution 173 maps show a tendency to be displaced from the center by $`10^{\prime \prime }`$ to the north, where nebular line emission is seen (Fig. 4). The cD galaxy shows no signs of dust lanes, but exhibits a constant distribution in colour (Fig. 3). There are, however, two objects in the upper left part of Fig. 3 which are bluer than the cD. The brightest and bluest of these looks disturbed, possibly due to tidal interaction. The spectra taken with P.A. = $`21\mathrm{°}`$ cover the object and contain emission lines. The emission is weak in Fig. 4 because the lines are shifted away from the peak transmission of the filter. Relative to the cD we find the velocity of the object to be $`+1800\pm 200\,\mathrm{km}\,\mathrm{s}^{-1}`$. The galaxy may have plunged through the cD and may contain young stars. The origin of the optical filaments in Fig. 4 is a puzzle. It may be material captured in mergers, related to radio plasma, or connected to the cooling flow. The relative velocities do not support any particular model. The velocities have been measured from our spectra, and they are quite low, as seen from Table 4. Donahue and Voit (don93 (1993)) obtained spectra of the nuclear emission from the Sérsic 159-03 cD galaxy. They argued that the lack of \[Ca ii\] $`\lambda 7291`$ emission indicates that Ca is depleted onto dust grains. We have added all our spectra of the center together, and likewise all spectra of the filaments. No \[Ca ii\] emission was visible in either of the two resulting spectra. We then shifted the \[N ii\]$`\lambda 6583`$ line to the expected position of \[Ca ii\] and added the shifted line after scaling with various constants. In this way we find that no \[Ca ii\] emission stronger than 0.20 times \[N ii\]$`\lambda 6583`$ is present. Figure 1 of Donahue and Voit (don93 (1993)) predicts (from ionization calculations) that this ratio should never be smaller than 0.24.
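The upper limit above was obtained by shifting the \[N ii\]$`\lambda 6583`$ profile to the expected \[Ca ii\] position, scaling it, and judging at which scale an added line would have been detected. A toy version of this test on a synthetic spectrum (all numbers below are placeholders, not the DFOSC data) may clarify the procedure:

```python
import numpy as np

# Toy [Ca II] upper-limit test: add a scaled template line at the redshifted
# [Ca II] 7291 position to a synthetic noisy spectrum, and ask at which scale
# it becomes a clear (several-sigma) detection. Placeholder numbers only.

rng = np.random.default_rng(1)
wave = np.linspace(7600.0, 7800.0, 800)             # observed-frame grid [A]
noise = 0.01
spectrum = 1.0 + rng.normal(0.0, noise, wave.size)  # flat continuum + noise

def line(w, center, flux, sigma=3.0):
    return flux / (sigma * np.sqrt(2.0 * np.pi)) * np.exp(-0.5 * ((w - center) / sigma) ** 2)

nii_flux = 1.0                        # adopted [N II] 6583 flux (arbitrary units)
caii = 7291.0 * (1.0 + 0.0564)        # redshifted [Ca II] position, ~7702 A

for ratio in (0.05, 0.10, 0.20, 0.40):
    trial = spectrum + line(wave, caii, ratio * nii_flux)
    peak = trial[np.abs(wave - caii) < 6.0].max() - 1.0
    print(f"[Ca II]/[N II] = {ratio:.2f}: peak = {peak / noise:.1f} sigma")
```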
Although marginal compared to the case of Hydra A, the discrepancy can be explained by the condensation of Ca onto grains, in accord with Donahue and Voit's result. The presence of dust in the nebular gas does not necessarily exclude that it originates from the cooling cluster gas. Dust may grow in dense, cool clouds in connection with star formation. For the nebular gas in Hydra A, Donahue and Voit (don93 (1993)) found a much tighter limit on the \[Ca ii\] line, strongly suggesting the presence of dust. In Hydra A the nebular gas is concentrated in a central disk-like structure of several kpc where vigorous star formation has taken place, and Hansen et al. (han95 (1995)) argue that it is a result of the cooling flow (see also McNamara, mcn95 (1995)). In Sérsic 159-03 the extended nature of the filaments and the presence of the blue, star-forming object are more in favour of a merger scenario, however. ### 4.2 Off-center infrared sources There are no striking optical identifications for the off-center sources. The position of C100-2 is relatively well determined by the overlap of the two observations. The nearest object visible in Fig. 2 is $`0\stackrel{\mathrm{\prime }}{.}5`$ to the south-west, is unresolved and of blue colour. It is not a known QSO (no QSO is closer than $`30^{\prime \prime }`$ in the NASA/IPAC Extragalactic Database<sup>2</sup><sup>2</sup>2The NASA/IPAC Extragalactic Database (NED) is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration ), and it is just outside the overlap of the two observations. There are several faint optical objects in the area of C100-3, but none shows up in our data with characteristics favouring it as a candidate. The difficulties in pointing out candidates are even more pronounced for C100-4, which agrees poorly with the nearest faint objects in Fig. 2. However, C100-4 is also the most uncertain of the sources, as it is only visible at $`60\mu \mathrm{m}`$. ## 5 Conclusion The availability of two observations covering essentially the same field at several wavelengths allows us to identify 4 faint ($`\sim 0.1`$ Jy) far-infrared sources with some confidence. A central source, C100-1, is attributed to the cD galaxy, which contains optical filaments, but our optical images do not reveal significant evidence of dust lanes. The fluxes measured for C100-1 are of the same order of magnitude as expected from dust related to star formation in the cooling flow. For the non-central sources we cannot point out any particular optical candidates, in contrast to the results from the Abell 2670 field (paper i), where galaxies with enhanced star formation were found coincident with the infrared sources. ###### Acknowledgements. This work has been supported by The Danish Board for Astronomical Research.
## 1 Introduction Kurt Haller has pursued fundamental issues in gauge theories. He has especially emphasized the importance of gauge-invariant formulations of quantum electrodynamics and of quantum chromodynamics and of the implementation of Gauss' law in these theories. Kurt uses operator methods rather than the more popular path-integral methods; this provides a good alternative to the usual point of view. I am happy to dedicate this paper to Kurt Haller. Although there is a long history of the study of charged particles in gauge theories, there is still concern that the issue is not understood fully. For example, in a recent paper the authors state “the battle is not yet won and one could, in a provocative way, summarize the situation by saying that the question ‘what is an electron in QED’ is still open.” The purpose of this pedagogical paper is to reconsider a very simple model of QED that illustrates one facet of the infrared issues connected with charged particles. At the end I will discuss what may carry over to more realistic models of QED as well as to QED itself. The model is due to W. Pauli and M. Fierz. They studied a simple, exactly soluble model of a single nonrelativistic electron in QED. P. Blanchard studied this model in the interaction picture and showed that the transition operator is well-defined for finite times, but exhibits an infrared divergence for $`t\to \pm \mathrm{}`$. This model exhibits in an extremely simple context the divergence associated with the infinite number of soft photons associated with a charged particle. The utility of this model is that explicit formulas valid to all orders in $`e`$ can be found for the transformation between the perturbative states and fields and the physical ones. I reconsider this model using the exact eigenstates of the complete Hamiltonian. I find that the Heisenberg field of the model at finite times creates states that, because of the infrared divergence, are orthogonal to the exact eigenstates. When the Hamiltonian is expressed in terms of a field that creates the exact single-particle eigenstates associated with an electron in this model, the infrared divergence disappears. I suggest that at least part of the solution to the infrared problem is to replace the original Heisenberg field by one that creates the electron with the divergent part of its soft photon cloud. There are three effects associated with the massless quanta in gauge theories: (1) the infinite number of low-momentum quanta associated with a charged particle, (2) the collinearity of low-momentum quanta, and (3) the long-range interaction due to the exchange of massless quanta between charged particles. Here I study only the first of these effects. F. Bloch and A. Nordsieck were the first to show how to remove infrared divergences by summing the cross sections for scattering into final states that have a charged particle together with any number of soft photons. D.R. Yennie, S.C. Frautschi and H. Suura gave a general discussion of the removal of the divergences in the cross sections. P.P. Kulish and L.D. Faddeev showed how to remove the divergences in the S-matrix. I want to show how to remove the divergences in the operator solution of the Pauli-Fierz model using modified asymptotic fields. In the charged single-particle model discussed here, the physical field and the modified asymptotic field are the same. First I find the exact single-particle eigenstates of the model, which contain coherent states of soft perturbative photons.
Secondly, I solve the operator equations of motion of the model and find that the Heisenberg field acts in a space that is orthogonal to the space of the exact eigenstates. Thirdly, I find fields that diagonalize the Hamiltonian. Finally I discuss which of the properties of this model can be expected to be relevant to more realistic models of QED as well as to QED itself. The exact eigenstates in this model should have the usual orthonormality properties, and thus the operators that create and annihilate the exact eigenstates should have the usual free-field commutation relations. This is guaranteed since the transformation between these two sets of fields is formally unitary. Thus, following Blanchard and Contopanagos and Einhorn, I assume that the original field operators act in a Hilbert space associated with coherent states that is unitarily inequivalent to the usual Fock space, while the physical fields that create the particles together with their soft photon clouds are free fields acting in the usual Fock space. From the point of view of the usual canonical formalism this is a controversial choice. The implication of this choice is that in gauge theories, and perhaps in other theories with massless particles or fields, the states of relevance to observation are very distant from the states created by smeared polynomials in the fields in the original Hamiltonian or Lagrangian. We already know that in field theory the interacting fields create much more than just the particle with which they are associated. Further, in theories with infinite field strength renormalization there is an infinite multiplicative constant relating the unrenormalized fields, which naively obey canonical commutation relations, and the renormalized fields, whose equal-time commutators are ill-defined. The coherent-state operator for soft photons that relates the original fields of the Hamiltonian to the fields that make the single-particle eigenstates in the Pauli-Fierz model can be viewed as an operator-valued analog of the field strength renormalization. Another purpose of this paper is to find the proper extension of the variational principle proposed earlier to gauge theories. I will argue that in the Pauli-Fierz model, one should first transform to the fields that incorporate the divergent part of the soft photon cloud associated with the electron in the exact single-particle eigenstates and then apply the non-gauge-theory form of the variational principle. ## 2 Exact eigenstates in the Pauli-Fierz model The Pauli-Fierz model is nonrelativistic quantum electrodynamics in the transverse gauge with the following approximations: (1) momentum conservation between the photon and the electron is neglected<sup>2</sup><sup>2</sup>2This approximation is also made in , (2) the term quadratic in the vector potential is dropped, and (3) the Coulomb interaction between electrons is dropped. So the model is a single electron interacting with massless transverse photons. The Hamiltonian is $`H`$ $`=`$ $`{\displaystyle \int d^3x\,\psi ^{}(x)\left(-\frac{1}{2m}\nabla ^2+V(x)\right)\psi (x)}+{\displaystyle \underset{s=1}{\overset{2}{\sum }}}{\displaystyle \int d^3k\,k\,a_s^{}(k)a_s(k)}`$ (1) $`-{\displaystyle \frac{e}{m}}{\displaystyle \underset{s=1}{\overset{2}{\sum }}}{\displaystyle \int d^3p\,\psi ^{}(p)\psi (p)\int \frac{d^3k}{\sqrt{2k}}\stackrel{~}{\rho }(k)\,pe_s(k)[a_s(k)+a_s^{}(k)]},`$ where $`\stackrel{~}{\rho }(k)`$ is a smooth function that prevents ultraviolet divergences but does not affect infrared divergences; in particular $`\stackrel{~}{\rho }(0)=1`$, and the fields are at time $`0`$.
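Because the interaction in Eq. (1) is linear in the photon operators, each photon mode is simply a displaced harmonic oscillator, and the coherent-state structure of the exact eigenstates derived below can be checked in a single-mode caricature, $`H=\omega a^{}a-g(a+a^{})`$, whose ground state is the coherent state with amplitude $`g/\omega `$ and energy $`-g^2/\omega `$. A minimal numerical check (our illustration; the truncation dimension and couplings are arbitrary choices):

```python
import math
import numpy as np

# Single-mode caricature of Eq. (1): H = w a'a - g (a + a'). Displacing
# a -> a + g/w diagonalizes H; the ground state is the coherent state
# |g/w> with energy -g**2/w, mirroring Eqs. (2)-(3) one mode at a time.

N, w, g = 60, 1.0, 0.3                        # truncation, frequency, coupling
a = np.diag(np.sqrt(np.arange(1.0, N)), k=1)  # annihilation operator
H = w * a.T @ a - g * (a + a.T)

evals, evecs = np.linalg.eigh(H)
alpha = g / w
coherent = np.array([math.exp(-alpha**2 / 2) * alpha**n / math.sqrt(math.factorial(n))
                     for n in range(N)])

print("E_0 numeric :", evals[0])              # close to -g**2/w = -0.09
print("E_0 analytic:", -g**2 / w)
print("overlap |<coherent|ground>| :", abs(coherent @ evecs[:, 0]))  # ~ 1
```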
For much of the discussion I will suppress the external potential $`V`$. It is straightforward to see that the exact single-electron eigenstates are $$|P\rangle =\sqrt{Z(p)}\mathrm{exp}\{\frac{e}{m}\underset{s=1}{\overset{2}{\sum }}\int d^3k\frac{\stackrel{~}{\rho }(k)}{\sqrt{2k}k}pe_s(k)a_s^{}(k)\}\psi ^{}(p)|0\rangle ,$$ (2) where $`\psi (p)`$ is the Fourier transform of $`\psi (x)`$, with energy $$\left[\frac{1}{2m}-\frac{2e^2}{3m^2}\int _0^{\mathrm{\infty }}dk\,\stackrel{~}{\rho }^2(k)\right]p^2\equiv \frac{p^2}{2m(\mathrm{\infty })}.$$ (3) The field strength renormalization factor is $$\frac{1}{Z(p)}=\mathrm{exp}\left[\frac{e^2p^2}{m^2}\int _0^{\mathrm{\infty }}\frac{dk}{k}\stackrel{~}{\rho }^2(k)\right]$$ (4) At first sight the transformation between $`|P\rangle `$ and $`\psi ^{}(p)|0\rangle `$ does not seem to be even formally unitary. This first impression is misleading and is connected with the fact that annihilation operators disappear when acting on the vacuum. In fact the transformation between these states is $$U=\mathrm{exp}\{\frac{e}{m}\int d^3p\,\psi ^{}(p)\psi (p)\int d^3k\frac{\stackrel{~}{\rho }(k)}{\sqrt{2k}k}\underset{s}{\sum }pe_s(k)(a_s^{}(k)-a_s(k))\}.$$ (5) Using the Baker-Hausdorff theorem, we find $`U`$ $`=`$ $`\mathrm{exp}\{-{\displaystyle \frac{e^2p^2}{2m^2}}{\displaystyle \int _0^{\mathrm{\infty }}}{\displaystyle \frac{dk\,\stackrel{~}{\rho }^2(k)}{k}}\}\mathrm{exp}\{{\displaystyle \frac{e}{m}}{\displaystyle \int \frac{d^3k\,\stackrel{~}{\rho }(k)}{\sqrt{2k}k}pe_s(k)a_s^{}(k)}\}`$ (6) $`\times \mathrm{exp}\{-{\displaystyle \frac{e}{m}}{\displaystyle \int \frac{d^3k\,\stackrel{~}{\rho }(k)}{\sqrt{2k}k}pe_s(k)a_s(k)}\}`$ Clearly when acting on the vacuum the factor with the annihilation operators becomes one and the remaining factors are not unitary. If one works in the algebra of operators rather than in the Hilbert space, one does not lose the annihilation parts of operators. ## 3 Calculation of the Heisenberg operator The Heisenberg equations of motion are $$i\partial _t\psi ^{}(p,t)=-\frac{p^2}{2m}\psi ^{}(p,t)+\frac{e}{m}\underset{s=1}{\overset{2}{\sum }}\int \frac{d^3k}{\sqrt{2k}}\stackrel{~}{\rho }(k)\psi ^{}(p,t)pe_s(k)(a_s(k,t)+a_s^{}(k,t))$$ (7) $$i\partial _ta_s^{}(k,t)=-ka_s^{}(k,t)+\frac{e}{m}\int \frac{d^3p}{\sqrt{2k}}\stackrel{~}{\rho }(k)\psi ^{}(p,t)\psi (p,t)pe_s(k)$$ (8) Since the Hamiltonian commutes with the product $`\psi ^{}(p,t)\psi (p,t)`$, this product is conserved and the time can be dropped in the product when it appears in Eq.(8). Then the equation for $`a^{}`$ can be solved exactly, $$a_s^{}(k,t)=e^{ikt}a_s^{}(k)-i\frac{e}{m}\int \frac{d^3p}{\sqrt{2k}}pe_s(k)\psi ^{}(p)\psi (p)\stackrel{~}{\rho }(k)\,t.$$ (9) The term in $`a^{}`$ linear in $`t`$ is connected with the neglect of momentum conservation in the model. This linear term cancels in the sum $`a^{}+a`$ that appears in the equation of motion for $`\psi ^{}`$.
The solution for $`\mathrm{\Psi }^{}`$ is $$\mathrm{\Psi }^{}(p,t)=C\,\mathrm{exp}\{i\frac{p^2}{2m}t-\frac{e}{m}\underset{s=1}{\overset{2}{\sum }}\int \frac{d^3k}{\sqrt{2k}k}\stackrel{~}{\rho }(k)pe_s(k)[e^{ikt}a_s^{}(k)-e^{-ikt}a_s(k)]\}.$$ (10) Requiring $`\mathrm{\Psi }^{}(p,0)=\psi ^{}(p)`$ gives $$\mathrm{\Psi }^{}(p,t)=\psi ^{}(p)\,\mathrm{exp}\{i\frac{p^2}{2m}t-\frac{e}{m}\underset{s=1}{\overset{2}{\sum }}\int d^3k[f_s(p,k,t)a_s^{}(k)-f_s^{}(p,k,t)a_s(k)]\},$$ (11) where $$f_s(p,k,t)=\frac{\stackrel{~}{\rho }(k)}{\sqrt{2k}k}pe_s(k)(e^{ikt}-1).$$ (12) The expectation of the number operator $`N=\underset{s}{\sum }\int d^3k\,a_s^{}(k)a_s(k)`$ for photons in the one-electron state is $$<N(t)>=\frac{e^2}{m^2}\underset{s}{\sum }\int d^3k|f_s(p,k,t)|^2=\frac{2e^2}{m^2}\underset{s}{\sum }\int \frac{d^3k}{k^3}\stackrel{~}{\rho }^2(k)(pe_s(k))^2\mathrm{sin}^2\frac{kt}{2}.$$ (13) This is the same expression that Blanchard calls $`N^2(t)`$. As Blanchard shows, for all finite $`t`$, $`<N(t)>`$ is finite, but for $`t\to \pm \mathrm{}`$, $`<N(t)>`$ diverges as $`\mathrm{log}t`$. Thus for all finite $`t`$ there is a finite expectation value for the number of soft photons in the cloud associated with the Heisenberg field of the electron, but for $`t\to \pm \mathrm{}`$ the number diverges. A divergent value of $`<N>`$ is the signature for a space of vectors in an inequivalent representation of the commutation relations that is orthogonal to the original Fock space. The fact that the large-$`t`$ limit leads out of the usual Fock space has been identified as the cause of infrared divergences by several authors, starting with K.O. Friedrichs and Blanchard. The expectation value of the number operator also diverges in the exact eigenstates, Eq.(2). Thus the Heisenberg field acting on the vacuum creates states that are orthogonal to all the exact electron eigenstates in the model. This suggests that in this model one should introduce a field that makes the physical single-particle state with its attached soft photons. In the single charged-particle sector this should lead to diagonalization of the Hamiltonian. ## 4 Diagonalization of the Hamiltonian We can invert the transformation $`U`$ given in Eq.(5) to express the Hamiltonian in terms of “physical” operators that create the charged particles together with their soft photon clouds. We introduce the physical photon operators (and their adjoints), $`A(k)=Ua(k)U^{}`$, $$A_s(k)=a_s(k)-\frac{e}{m}\int d^3p\,\psi ^{}(p)\psi (p)f_s(p,k)$$ (14) as well as the transformed electron operators (and their adjoints), $`\mathrm{\Psi }(p)=U\psi (p)U^{}`$, $$\mathrm{\Psi }(p)=\mathrm{exp}\{-\frac{e}{m}\underset{s}{\sum }\int d^3k[f_s(p,k)a_s^{}(k)-f_s^{}(p,k)a_s(k)]\}\psi (p).$$ (15) In terms of the physical electron and photon field operators the Hamiltonian is $`H=\int d^3x\,\mathrm{\Psi }^{}(x)\left(-{\displaystyle \frac{1}{2m(\mathrm{\infty })}}\nabla ^2+V(x)\right)\mathrm{\Psi }(x)+{\displaystyle \underset{s=1}{\overset{2}{\sum }}}{\displaystyle \int d^3k\,kA_s^{}(k)A_s(k)}`$ $`+{\displaystyle \underset{s=1}{\overset{2}{\sum }}}{\displaystyle \int d^3k\int d^3p\int d^3p^{}\,f_s^{}(p,k)f_s(p^{},k)\mathrm{\Psi }^{}(p^{})\mathrm{\Psi }^{}(p)\mathrm{\Psi }(p^{})\mathrm{\Psi }(p)}.`$ (16) The bilinear terms in $`\mathrm{\Psi }`$ correspond to scattering of the electron together with its soft photon cloud in the external potential $`V`$. No infrared divergences or other infrared effects are visible. The model is taken to be a one-electron model, so we ignore the quartic terms in $`\mathrm{\Psi }(p)`$.
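The logarithmic divergence of $`<N(t)>`$ quoted after Eq. (13) is easy to check numerically. In the sketch below (ours; a sharp cutoff $`K`$ stands in for $`\stackrel{~}{\rho }(k)`$, and angular factors and prefactors are suppressed), successive decades in $`t`$ add a constant $`\frac{1}{2}\mathrm{log}10`$ to the soft-photon number:

```python
import numpy as np
from scipy.special import sici

# <N(t)> reduced to Integral_0^K dk sin^2(kt/2)/k, i.e. (1/2) Cin(K t),
# with Cin(x) = gamma + log(x) - Ci(x). Finite for every finite t, but
# growing like (1/2) log t as t -> infinity: the infrared divergence.

K = 1.0  # sharp ultraviolet cutoff standing in for rho(k)

def n_of_t(t: float) -> float:
    _, ci = sici(K * t)
    return 0.5 * (np.log(K * t) + np.euler_gamma - ci)

for t in (1e1, 1e2, 1e3, 1e4):
    print(f"t = {t:8.0f}: <N(t)> ~ {n_of_t(t):.3f}")  # steps of ~1.151 per decade
```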
This result is similar to very old work on clothed operators in simple models of field theory , except in full QED we would propose to incorporate only the divergent part of the soft photons in the electron Heisenberg field, rather than eliminating all the trilinear terms in the original Hamiltonian. ## 5 Condition for the operator variational principle In this model, the physical fields diagonalize the single-particle Hamiltonian. The asymptotic fields will be those that occur in scattering from the fixed potential $`V(x)`$. Thus the condition to be imposed on the Hamiltonian in the variational principle based on choosing elements from the algebra of asymptotic fields should be to make the Hamiltonian as close to the free field Hamiltonian as possible. This condition is the analog of the condition chosen in theories without massless particles or fields. ## 6 What is relevant to QED? Some issues suggested by this simple model may carry over to QED. The first is that one should express the observables of the theory in terms of a charged field that, acting on the vacuum, creates a state that is not orthogonal to the asymptotic states. The charged field should contain the soft photons in the neighborhood of zero energy that are responsible for the infrared divergence in the number of perturbative photons attached to the charged particle. This can be done by a formally unitary transformation. The same unitary transformation should be applied to the photon field. Secondly, as suggested by Contopanagos and Einhorn , one can take the charged states with their clouds of soft coherent state photons to be in Fock space and the perturbative states to be in an inequivalent representation of the commutation relations. In the present model, if one introduces the clothed electron operators in the theory, the infrared divergences disappear and, in the absence of an external potential, the Hamiltonian has free field form because the physical photons decouple from the physical electrons. This of course will not happen in QED, nor will there be a discrete mass associated with the single charged particle. Since the work of B. Schroer on “infraparticles” we have known that particles that interact with massless quanta cannot have discrete mass. Since in the present model the single-particle Hamiltonian is diagonalized by the physical field, the variational principle is trivial in this model. When the massless quanta have been absorbed, the variational principle proposed earlier for the case where there are no massless particles or fields can be applied. This also will not apply to QED and shows that significant modifications must be made in this variational principle before it can be useful in studying gauge theories. ## 7 Qualitative remarks Divergent field strength renormalization occurs both due to ultraviolet effects and to infrared effects. In the case of ultraviolet divergences, one introduces the renormalized fields, for example $`\psi _{ren}(p)=Z^{-1/2}\psi (p)`$, where the multiplicative factor is fixed by requiring the coefficient of the $`\psi ^{as}(p)`$ term in the Haag expansion to be one; see references in . This, together with other renormalization effects, makes the Haag expansion finite. By contrast, in the case of infrared divergences (suppressing non-infrared effects), one should not introduce a field strength renormalization; rather one should exponentiate the soft photons of near zero energy into a new operator-valued multiplicative factor multiplying the charged field, as we have done in Eq.(15). 
In the Pauli-Fierz model this explicitly removes the infrared divergences and produces a charged field that, acting on the vacuum, creates a state that is not orthogonal to all the asymptotic states. One can hope that in QED at least part of the infrared divergence can be removed using the physical charged field and that this field acting on the vacuum will produce asymptotic states. One could argue (Xiang-Dong Ji, private communication) that given a Hamiltonian that has no trilinear term one could introduce trilinear terms in the transformed Hamiltonian and could then arrange to have any form of infrared divergence. My point of view is that, although highly simplified, this model does come from QED and does capture the type of soft photon infrared divergence that occurs in QED. ## 8 Outlook for future work The idea of introducing physical charged fields that contain the infrared divergent part of the soft photons should be tried in more realistic models, such as the asymptotic Hamiltonian of Kulish and Faddeev , as well as in other models that have the same infrared divergences as QED. Acknowledgements I am happy to thank Ted Jacobson, Xiang-Dong Ji and especially Ching-Hung Woo for helpful discussions.
no-problem/0002/cond-mat0002226.html
ar5iv
text
# Signatures of spin pairing in a quantum dot in the Coulomb blockade regime ## Abstract Coulomb blockade resonances are measured in a GaAs quantum dot in which both shape deformations and interactions are small. The parametric evolution of the Coulomb blockade peaks shows a pronounced pair correlation in both position and amplitude, which is interpreted as spin pairing. As a consequence, the nearest-neighbor distribution of peak spacings can be well approximated by a smeared bimodal Wigner surmise, provided that interactions which go beyond the constant interaction model are taken into account . Recently, the Coulomb blockade (CB) of electronic transport through quantum dots, defined in two-dimensional electron gases in semiconductor heterostructures, has been of considerable interest . One reason is that such dots are model systems to investigate the interplay between chaos and electron-electron (e-e) interactions. Here, a key feature is the distribution of nearest-neighbor Coulomb blockade peak spacings (NNS), which random matrix theory (RMT) predicts to follow a bimodal Wigner surmise $`P(s)`$ for a non-interacting quantum dot of chaotic shape, i.e. $$P(s)=\frac{1}{2}\left[\delta (s)+P^\beta (s)\right]$$ (1) $`P^\beta (s)`$ is the Wigner surmise for the corresponding Gaussian ensemble, i.e. $`\beta =1`$ for systems with time inversion symmetry (Gaussian orthogonal ensemble - GOE), and $`\beta =2`$ when time inversion symmetry is broken (Gaussian unitary ensemble - GUE). The peak spacing s is measured in units of the average spin-degenerate energy level spacing $`\mathrm{\Delta }=2\pi \hbar ^2/(m^{\ast }A)`$, where $`m^{\ast }`$ denotes the effective mass, and A the dot area. The $`\delta `$-function in $`P(s)`$ takes the spin degeneracy into account. RMT further predicts the standard deviation for $`P(s)`$ to be $`\sigma =0.62`$ for $`\beta =1`$, and $`\sigma =0.58`$ for $`\beta =2`$, respectively . The comparison to experimental data is made by applying the constant-interaction model , which allows one to separate the constant single-electron charging energy from the fluctuating energies of the levels inside the dot. In disagreement with the predictions of RMT, the experimentally obtained NNS distributions are usually best described by a single Gaussian with enhanced values of $`\sigma `$ . The data thus look as if spins are absent, although in Ref. , a spin pair has been observed. This apparent absence of spins and the different shape of $`P(s)`$ have triggered a great deal of recent theoretical work. One possible explanation is additional e-e interactions inside the dot , which lead to “scrambling” of the energy spectrum and can be characterized by the interaction parameter $`r_s`$, defined as the ratio between the Coulomb interaction of two electrons at their average spatial separation, and the Fermi energy. It is theoretically expected that the NNS distribution becomes Gaussian due to e-e interactions, and that $`\sigma `$ increases for $`r_s\gtrsim 2`$ . However, all experiments so far have been carried out in a regime where an increase in $`\sigma `$ is not expected, i.e. in samples with $`0.93\lesssim r_s\lesssim 1.35`$ , with the exception of Ref. , where $`r_s=2.1`$. Gate-voltage induced shape deformations of the dot can modify the NNS distribution as well. The deformation can be described by a parameter $`x`$, which corresponds to the distance between avoided crossings induced by the deformation, measured in units of the CB peak spacing. 
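As an aside, the standard deviations quoted above for the bimodal surmise, $`\sigma =0.62`$ (GOE) and $`\sigma =0.58`$ (GUE), can be reproduced by direct sampling of Eq. (1). The sketch below is illustrative and not part of the experiment: the Wigner branch is normalized to unit mean spacing (so the bimodal mean is $`\overline{s}=0.5`$), using inverse-CDF sampling for $`\beta =1`$ and a Gamma-variate identity for $`\beta =2`$.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2_000_000

def sample_bimodal(beta):
    # Eq. (1): with probability 1/2 the spacing is 0 (spin degeneracy),
    # otherwise it is drawn from the Wigner surmise with unit mean.
    zero = rng.random(n) < 0.5
    if beta == 1:
        # GOE surmise P(s) = (pi/2) s exp(-pi s^2/4): inverse-CDF sampling.
        w = np.sqrt(-4.0 / np.pi * np.log(rng.random(n)))
    else:
        # GUE surmise P(s) = (32/pi^2) s^2 exp(-4 s^2/pi):
        # s^2 is Gamma-distributed with shape 3/2 and scale pi/4.
        w = np.sqrt(rng.gamma(1.5, np.pi / 4.0, size=n))
    return np.where(zero, 0.0, w)

for beta in (1, 2):
    s = sample_bimodal(beta)
    print(f"beta={beta}: mean={s.mean():.3f} (expect 0.5), "
          f"sigma={s.std():.3f} (expect {0.62 if beta == 1 else 0.58})")
```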
For $`x\lesssim 1`$, the NNS distribution of partly uncorrelated energy spectra is measured, resulting again in a Gaussian shape with enhanced $`\sigma `$. Whether shape deformations or interactions dominate the shape of the NNS distribution is not clear, although there is experimental evidence that $`x<1`$ and interactions are more important. Here, we report measurements on a quantum dot in which shape deformations as well as $`r_s`$ are reduced. We observe a pronounced pairwise correlation of both position and amplitude of the Coulomb blockade resonances, which is sometimes interrupted by kinks in the parametric evolution, among other features. We interpret the pairing as a spin signature: the energies of two states belonging to the same spatial wave function with opposite spin differ by an average interaction energy $`\overline{\xi }`$, which fluctuates with a standard deviation of $`\sigma _\xi `$, both of which are of the order of $`\mathrm{\Delta }`$. We conclude that in previous experiments, spin pairing was difficult to observe because it was frequently destroyed by avoided level crossings. Furthermore, we suggest that the measured NNS distribution can be fitted to a modified bimodal Wigner surmise, with $`\overline{\xi }`$ and $`\sigma _\xi `$ as fit parameters. The sample is a shallow Ga[Al]As heterostructure with a two-dimensional electron gas (2DEG) $`34`$ nm below the surface. The quantum dot is defined by local oxidation with an atomic force microscope (inset in Fig. 1(a)). The lithographic dot area is 280 nm x 280 nm. The dot can be tuned by voltages applied to a homogeneous top gate and to the planar gates I and II. In order to reduce $`r_s`$ as much as possible, we chose a heterostructure with a high electron density, further increased by a top gate voltage of $`+100`$ mV to $`n_e=5.9\times 10^{15}`$ m<sup>-2</sup>. This results in $`r_s=0.72`$, which is smaller than in all previous experiments. Additional screening is provided by the top gate . The sample was mounted in the mixing chamber of a <sup>3</sup>He/<sup>4</sup>He-dilution refrigerator with a base temperature of 90 mK. The mobility of the cooled 2DEG was $`93`$ m<sup>2</sup>/Vs. A DC bias voltage of 10 $`\mu V`$ was applied across the dot, and the current was measured with a resolution of 500 fA. From capacitance measurements, we find the electronic dot area A=190 nm x 190 nm (the depletion length in such devices can be smaller than in structures defined by top gates) . The single-electron charging energy is $`E_c`$ = 1.25 meV and the spin-degenerate level spacing $`\mathrm{\Delta }`$ = 200 $`\mu eV`$. The measurements have been carried out in the weak coupling regime, $`\hbar \mathrm{\Gamma }\ll k_BT\ll \mathrm{\Delta }`$. Here, $`\mathrm{\Gamma }`$ denotes the coupling of the dot to source and drain. The conductance G was measured as a function of the voltage $`V_I`$ applied to the planar gate I (see inset in Fig. 1(a)). Magnetic fields B applied perpendicular to the sample surface and $`V_{II}`$ were used as parameters. The observed CB oscillations (Fig. 1(a)) are fitted to a thermally broadened line shape, i.e., $`G(V_I)=G_{max}cosh^{-2}(\eta V_I/2k_BT)`$ , yielding an electron temperature of T=120 mK, as well as the positions and amplitudes of the peaks. Here, $`\eta =0.11`$ eV/V is the lever arm. Fig. 1(b) shows the peak spacing $`\mathrm{\Delta }V_I`$ as a function of $`V_I`$. 
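A minimal sketch of the peak-fitting step just described: the thermally broadened lineshape $`G_{max}cosh^{-2}(\eta (V_I-V_0)/2k_BT)`$ is fitted to a single resonance to extract position, amplitude and electron temperature. The code below is illustrative rather than the analysis actually used; the synthetic data, the noise level and the peak position $`V_0`$ are invented, while the lever arm $`\eta =0.11`$ eV/V is taken from the text.

```python
import numpy as np
from scipy.optimize import curve_fit

KB = 8.617e-5          # Boltzmann constant in eV/K
ETA = 0.11             # lever arm in eV/V (from the text)

def cb_peak(V, G_max, V0, T):
    # Thermally broadened Coulomb-blockade resonance.
    return G_max * np.cosh(ETA * (V - V0) / (2.0 * KB * T)) ** -2

# Synthetic resonance (hypothetical values) at T = 120 mK plus noise.
rng = np.random.default_rng(1)
V = np.linspace(-1.5e-3, 1.5e-3, 301)                  # gate voltage in V
G = cb_peak(V, 1.0, 0.0, 0.120) + 0.01 * rng.normal(size=V.size)

popt, pcov = curve_fit(cb_peak, V, G, p0=(0.8, 2e-4, 0.2))
G_fit, V0_fit, T_fit = popt
print(f"G_max={G_fit:.3f}, V0={V0_fit*1e3:.3f} mV, T={T_fit*1e3:.0f} mK")
# The fitted T is the electron temperature; the text quotes 120 mK.
```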
Compared to conventional dots defined by top gates, we find a much smaller variation of the average peak spacing as $`V_I`$ is tuned, although the fluctuation of individual spacings is 15% of $`E_c`$. A linear fit gives a slope of $`\mathrm{\Delta }V_I/V_I=6.7\times 10^{-4}`$. Hence, the capacitance between the dot and gate I varies only by 3% over the whole scan range, as compared to, for example, a factor of 3 in Ref. . This indicates that tuning gate I or II predominantly changes the energy of the conduction band bottom, while the dot is only slightly deformed. By applying the method of Ref. to a hard-wall confinement, we estimate $`x\approx 0.15`$ for our dot as a lower limit. In Fig. 2(a), five consecutive CB peaks are shown as a function of B. A pronounced pairwise correlation of both amplitude and peak position is observed (peak b correlates with peak c, and peak d with peak e, respectively). Observation of a pairing has been reported previously and interpreted as spin pairing , but has not been further investigated. We interpret this parametric pair correlation in terms of a model recently developed by Baranger et al. . The constant interaction model is used to subtract $`E_c`$ from the peak spacings. The remaining individual energy separations equal $`\mathrm{\Delta }/2`$ on average and reflect the fluctuating level separations inside the quantum dot, which consist of two parts. We assume that two paired peaks belong to the same spatial wave function, labelled by i, of opposite spin, and are split by an interaction energy $`\xi _i`$, while the energy of consecutive states with different orbital wave functions differs by $`\mathrm{\Delta }_i-\xi _i`$. Since the separations between the two levels of equal spin of spin pairs i and (i+1), $`\mathrm{\Delta }_i`$, and possibly also $`\xi _i`$, vary as a function of B, levels may cross and the ground state of the dot can be either a singlet or a triplet state. At the singlet-triplet transitions, kinks in the parametric peak evolution occur and the pair correlation is interrupted. We can identify such kinks in our data, among other features. Fig. 2(b) shows the amplitudes of peaks c, d and e. The correlation between peaks d and e is very strong around B=0. For 0.4T$`<`$B$`<`$0.61T, this correlation is interrupted, while the amplitudes of peaks c and e are correlated instead. In this regime, correlated kinks in the evolution of peaks c and d are observed (Fig. 2(c)). In Fig. 2(d), a possible corresponding scenario for the parametric dependence of the energy levels is sketched: (left) two avoided crossings occur between level pair i and level pair i+1 . This leads to the positions of peaks c, d, and e as sketched in Fig. 2(d), right, corresponding to the difference in energy upon changing the electron number in the dot. Consequently, positions and amplitudes of peaks c and e should be correlated in 0.4T$`<`$B$`<`$0.61T, as observed. Note that this correlation is interrupted around B=0.5T, possibly due to the influence of another energy level. Also, $`\xi _{de}`$ is not constant over the full range of B. While $`\xi _{de}\approx 0.05\mathrm{\Delta }`$ for B$`<`$0.22T, the positions of peaks d and e are not detectable in 0.22T$`<`$B$`<`$0.32T, since their amplitudes vanish. As the peaks reappear, $`\xi _{de}`$ has jumped to $`\xi _{de}^{\prime }\approx 0.25\mathrm{\Delta }`$. We speculate that possibly a level crossing has occurred in the regime where the amplitudes are suppressed, and hence for B$`<`$0.22T, a different level pair is at the Fermi energy than for B$`>`$0.32T. 
In addition, we note that although $`\xi `$ fluctuates as B is varied, a systematic change of $`\xi `$ with B is not observed, which indicates that Zeeman splitting plays a minor role. From the data of Fig. 2, we estimate the average interaction energy to be $`\overline{\xi }\approx 0.5\mathrm{\Delta }`$ by averaging over all peaks and magnetic fields. Baranger et al. have estimated $`\overline{\xi }\approx 0.6\mathrm{\Delta }`$ for $`r_s=1`$. Hence, our findings can be considered to be in agreement with existing theory, while we are not aware of a theoretical prediction for $`\sigma _\xi `$. From the above phenomenology, we conclude that in dots with stronger shape deformations, and hence more level crossings, or in dots with larger $`r_s`$ (and thus larger $`\overline{\xi }`$), the spin pairing is frequently interrupted and difficult to detect. Also, the Kondo effect can occur in neighboring peaks when spin pairing is interrupted. We proceed by discussing the effect of spin pairing on the NNS distributions. In Fig. 3, the measured histograms of the normalized NNS distributions for GOE (a) and GUE (b) are shown. The ensemble statistics have been obtained by measuring $`G(V_I)`$, and either by changing the magnetic flux by one flux quantum $`\varphi _o=\frac{h}{e}`$ through the dot (GUE), or by stepping $`V_{II}`$ in units of one CB period (GOE). Each individual $`V_I`$-sweep contains 15 CB resonances in the low coupling regime. The total number of peak spacings used is 120 for GOE, and 210 for GUE, respectively. The individual level spacings $`s`$ in units of $`\mathrm{\Delta }`$ are obtained by using the fit of Fig. 1(b); their expectation value is $`\overline{s}`$=0.5. Both histograms are asymmetric and show no evident bimodal structure. By including the effect of spin pairing in the statistics, however, we can interpret them as modified bimodal distributions: (i) The $`\delta `$-function in the non-interacting NNS distribution $`P(s)`$ with the expectation value of $`\overline{s_\delta }`$=0 (eq. (1)) is shifted to $`\overline{s_\delta }=\overline{\xi ^{\prime }}`$ and, as a reasonable assumption , broadened according to a Gaussian distribution with the standard deviation $`\sigma _\xi ^{\prime }`$. Here, $`\xi ^{\prime }`$ denotes the interaction energy in units of $`\mathrm{\Delta }`$. (ii) Since one level of a spin pair i is shifted upwards in energy by $`\xi _i`$, the separation between the upper level of spin pair i and the lower level of pair (i+1) is given by $`\mathrm{\Delta }_i`$-$`\xi _i`$. Consequently, $`P^\beta (s)`$ in eq. (1) is shifted to $`\overline{s_{P^\beta }}`$=1-$`\overline{\xi ^{\prime }}`$ and convoluted with the Gaussian distribution function of $`\xi ^{\prime }`$. Combining these two components, the modified NNS distribution reads $`P_{int}^\beta (\overline{\xi ^{\prime }},\sigma _\xi ^{\prime })=`$ $$\frac{1}{\sqrt{2\pi }\sigma _\xi ^{\prime }}\left\{exp\left[-\frac{(s-\overline{\xi ^{\prime }})^2}{2{\sigma _\xi ^{\prime }}^2}\right]+exp\left[-\frac{s^2}{2{\sigma _\xi ^{\prime }}^2}\right]\times P^\beta (s+\xi ^{\prime })\right\}$$ (2) Here, the “$`\times `$” denotes the convolution. Since $`\mathrm{\Delta }`$ is determined by the dot size and the material parameters, we can fit $`P_{int}^\beta (\overline{\xi ^{\prime }},\sigma _\xi ^{\prime })`$ to the measured NNS distribution with the two fit parameters $`\overline{\xi ^{\prime }}`$ and $`\sigma _\xi ^{\prime }`$ (Fig. 3). We obtain $`\overline{\xi ^{\prime }}=0.65`$ and $`\sigma _\xi ^{\prime }=0.35`$ for GOE, as well as $`\overline{\xi ^{\prime }}=0.53`$ and $`\sigma _\xi ^{\prime }=0.34`$ for GUE. 
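A sketch of how Eq. (2) can be evaluated numerically is given below. It is not the original fit code: it builds the two branches on a grid, giving each branch the weight 1/2 implied by Eq. (1) (our normalization assumption), and performs the Gaussian convolution of the Wigner branch as a discrete sum; the result could then be handed to any least-squares routine together with a measured histogram.

```python
import numpy as np

def wigner(s, beta):
    # Wigner surmise with unit mean spacing.
    s = np.asarray(s, dtype=float)
    if beta == 1:
        p = 0.5 * np.pi * s * np.exp(-0.25 * np.pi * s**2)
    else:
        p = (32.0 / np.pi**2) * s**2 * np.exp(-4.0 * s**2 / np.pi)
    return np.where(s > 0, p, 0.0)

def gauss(x, mu, sig):
    return np.exp(-0.5 * ((x - mu) / sig) ** 2) / (np.sqrt(2 * np.pi) * sig)

def p_int(s, xi_bar, sig_xi, beta):
    # Eq. (2): delta branch smeared into a Gaussian at xi_bar, plus the
    # Wigner branch shifted by -xi and convolved with the same Gaussian.
    # The 1/2 branch weights are our assumption (cf. Eq. (1)).
    xi, dxi = np.linspace(xi_bar - 6 * sig_xi, xi_bar + 6 * sig_xi, 601,
                          retstep=True)
    conv = np.sum(gauss(xi, xi_bar, sig_xi)[None, :] *
                  wigner(s[:, None] + xi[None, :], beta), axis=1) * dxi
    return 0.5 * gauss(s, xi_bar, sig_xi) + 0.5 * conv

s = np.linspace(-1.0, 4.0, 2001)
for beta, xi_bar, sig_xi in ((1, 0.65, 0.35), (2, 0.53, 0.34)):
    p = p_int(s, xi_bar, sig_xi, beta)
    print(f"beta={beta}: normalization = {np.trapz(p, s):.3f}")  # ~1
```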
Hence, we find that $`\overline{\xi }`$ is higher for GOE than for GUE, which is in agreement with the theoretical prediction . The fluctuation of $`\xi `$ is found to be independent of the Gaussian ensemble, and does not vary continuously with B. In these fits, we have assumed that two electrons are always successively filled into one spatial wave function, i.e. we have neglected situations in which $`\xi _i>\mathrm{\Delta }_i`$. Inclusion of avoided crossings would require more clearly pronounced kinks than those in our data (sometimes the pair correlation is lost while a kink is not clearly visible). More experiments, as well as theoretical work, are necessary to investigate this dependence, also with respect to fluctuations in $`\xi `$ with B. Finally, we consider how our data can be modelled when complete absence of spin pairing is assumed. In this case, the mean level spacing would be $`\mathrm{\Delta }/2`$. Comparing a correspondingly normalized Wigner surmise to our data gives an extremely poor result (inset in Fig. 3): the measured NNS distribution appears too wide by a factor of $`\approx 2`$. In summary, we have observed spin pairing effects in a quantum dot that, compared to the dots investigated in earlier experiments, is rigid and has reduced electron-electron interactions. The spin pairing persists as a magnetic field is varied, but is interrupted by kinks as well as other structures in the parametric evolution of the Coulomb blockade peaks. We have extracted the average interaction energy $`\overline{\xi }`$ between states of identical spatial wave functions but opposite spin. Furthermore, we explain the measured distributions of nearest neighbor spacings as being composed of the two branches of a modified, bimodal Wigner-Dyson distribution, which takes $`\overline{\xi }`$ and its fluctuation into account. It is a pleasure to thank H. U. Baranger, E. Mucciolo, K. Richter, and F. Simmel for stimulating conversations and discussions. Financial support from the Schweizerischer Nationalfonds is gratefully acknowledged.
no-problem/0002/hep-ph0002130.html
ar5iv
text
# The three-leptons signature from resonant sneutrino production at the LHC ## 1 Introduction In extensions of the Minimal Supersymmetric Standard Model (MSSM) where the so-called R-parity symmetry is violated, the superpotential contains some additional trilinear couplings which offer the opportunity to singly produce supersymmetric (SUSY) particles as resonances. The analysis of resonant SUSY particle production allows an easier determination of these R-parity violating (R̸<sub>p</sub>) couplings than the displaced-vertex analysis for the Lightest Supersymmetric Particle (LSP) decay, which is difficult experimentally, especially at hadron colliders. In this paper, we study the sensitivity provided by the ATLAS detector at the LHC to singly produced charginos via the $`\lambda _{211}^{\prime }`$ coupling, the main contribution coming from the resonant process $`pp\to \stackrel{~}{\nu }_\mu \to \stackrel{~}{\chi }_1^\pm \mu ^{\mp }`$. At hadron colliders, due to the continuous energy distribution of the colliding partons, the resonance can be probed over a wide mass range. We have chosen to concentrate on $`\lambda _{ijk}^{\prime }L_iQ_jD_k^c`$ interactions since $`\lambda _{ijk}^{\prime \prime }U_i^cD_j^cD_k^c`$ couplings lead to multijet final states with large QCD background. Besides, we focus on $`\lambda _{211}^{\prime }`$ since it corresponds to first generation quarks for the colliding partons and it is not severely constrained by low energy experiments: $`\lambda _{211}^{\prime }<0.09`$ (for $`\stackrel{~}{m}=100`$ GeV) . We consider the cascade decay leading to the three-lepton signature, namely $`\stackrel{~}{\chi }_1^\pm \to \stackrel{~}{\chi }_1^0l_p^\pm \nu _p`$ (with $`l_p=e,\mu `$), $`\stackrel{~}{\chi }_1^0\to \mu u\overline{d},\overline{\mu }\overline{u}d`$. The main motivation lies in the low Standard Model background for this three-lepton final state. The considered branching ratios are typically of order $`B(\stackrel{~}{\chi }_1^\pm \to \stackrel{~}{\chi }_1^0l_p^\pm \nu _p)\approx 22\%`$ (for $`m_{\stackrel{~}{l}},m_{\stackrel{~}{q}},m_{\stackrel{~}{\chi }_2^0}>m_{\stackrel{~}{\chi }_1^\pm }`$) and $`B(\stackrel{~}{\chi }_1^0\to \mu ud)\approx 40\%-70\%`$. ## 2 Mass reconstruction The clean final state, with only two hadronic jets, three leptons and a neutrino, allows the reconstruction of the $`\stackrel{~}{\nu }`$ decay chain and the measurement of the $`\stackrel{~}{\chi }_1^0`$, $`\stackrel{~}{\chi }_1^\pm `$ and $`\stackrel{~}{\nu }_\mu `$ masses. We perform the full analysis for the following point of the MSSM: $`M_1=75`$ GeV, $`M_2=150`$ GeV, $`\mu =200`$ GeV, $`\mathrm{tan}\beta =1.5`$, $`A_t=A_b=A_\tau =0`$, $`m_{\stackrel{~}{f}}=300`$ GeV and for $`\lambda _{211}^{\prime }`$=0.09. For this set of MSSM parameters, the mass spectrum is: $`m_{\stackrel{~}{\chi }_1^0}=79.9`$ GeV, $`m_{\stackrel{~}{\chi }_1^\pm }=162.3`$ GeV and the total cross-section for the three-lepton production is 3.1 pb, corresponding to $`\sim 100000`$ events for the standard integrated luminosity of 30 fb<sup>-1</sup> expected within the first three years of LHC data taking. The single chargino production has been calculated analytically and implemented in a version of the SUSYGEN Monte Carlo modified to include the generation of $`pp`$ processes. The generated signal events were processed through the program ATLFAST , a parameterised simulation of the ATLAS detector response. 
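As a quick back-of-envelope check of the quoted statistics (not a calculation from the paper): with the 3.1 pb three-lepton cross-section and 30 fb<sup>-1</sup> of integrated luminosity,

```python
# Hypothetical sanity check of the quoted signal statistics.
sigma_pb = 3.1          # three-lepton production cross-section (pb)
lumi_fb = 30.0          # integrated luminosity (fb^-1)
n_events = sigma_pb * 1e3 * lumi_fb    # 1 pb = 10^3 fb
print(f"expected events: {n_events:.0f}")   # ~93000, i.e. ~1e5 as quoted
```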
First, we impose the following loose selection cuts in order to select the considered final state and to reduce the Standard Model (SM) background (see Section 3.1): (a) Exactly three isolated leptons with $`p_T^1>20`$ GeV, $`p_T^{2,3}>10`$ GeV and $`|\eta |<2.5`$, (b) At least two of the three leptons must be muons, (c) Exactly two jets with $`p_T>15`$ GeV, (d) The invariant mass of any $`\mu ^+\mu ^{}`$ pair must lie outside $`\pm 6.5`$ GeV of the $`Z`$ mass. The three leptons come in the following flavour-sign configurations (+ charge conjugates): (1) $`\mu ^{}e^+\mu ^+`$ (2) $`\mu ^{}e^+\mu ^{}`$ (3) $`\mu ^{}\mu ^+\mu ^+`$ (4) $`\mu ^{}\mu ^+\mu ^{}`$, where the first lepton comes from the $`\stackrel{~}{\nu }_\mu `$, the second one from the $`W`$ and the third one from the $`\stackrel{~}{\chi }_1^0`$ decay. As a starting point for the analysis, we focus on configuration (1), where the muon produced in the $`\stackrel{~}{\chi }_1^0`$ decay is unambiguously identified as the one with the same sign as the electron. The distribution of the $`\mu `$-jet-jet invariant mass exhibits a clear peak over a combinatorial background, shown on the left side of Figure 1. After combinatorial background subtraction (right of Figure 1), an approximately Gaussian peak is left, from which the $`\stackrel{~}{\chi }_1^0`$ mass can be measured with a statistical error of $`\sim 100`$ MeV. The combinatorial background is due to events where one jet from the $`\stackrel{~}{\chi }_1^0`$ decay is lost and a jet from initial state radiation is used in the combination; its importance is reduced for heavier sneutrinos or neutralinos. Once the position of the $`\stackrel{~}{\chi }_1^0`$ mass peak is known, the reconstructed $`\stackrel{~}{\chi }_1^0`$ statistics can be increased by also considering signatures (2), (3) and (4), and by choosing as the $`\stackrel{~}{\chi }_1^0`$ candidate the muon-jet-jet combination which gives the invariant mass nearest to the peak measured previously using event sample (1). For further reconstruction, we define as $`\stackrel{~}{\chi }_1^0`$ candidates the $`\mu `$-jet-jet combinations with an invariant mass within 12 GeV of the measured $`\stackrel{~}{\chi }_1^0`$ peak, yielding a total statistics of 6750 events for signatures (1) to (4) for an integrated luminosity of 30 fb<sup>-1</sup>. For $`\stackrel{~}{\chi }_1^\pm `$ reconstruction we consider only configurations (1) and (2), for which the charged lepton from the $`W`$ decay is unambiguously identified as the electron. The longitudinal momentum of the neutrino from the $`W`$ decay is calculated from the missing transverse momentum of the event ($`p_T^\nu `$) and by constraining the electron-neutrino invariant mass to the $`W`$ mass. The resulting neutrino longitudinal momentum has a twofold ambiguity. We therefore build the invariant $`W\stackrel{~}{\chi }_1^0`$ mass candidate using both solutions for the $`W`$ boson momentum. The observed peak, represented on the left side of Figure 2, can be fitted with a Gaussian shape with a width of $`\sim 6`$ GeV. Only the solution yielding the $`\stackrel{~}{\chi }_1^\pm `$ mass nearer to the measured mass peak is retained, and the $`\stackrel{~}{\chi }_1^\pm `$ candidates are defined as the combinations with an invariant mass within 15 GeV of the peak, corresponding to a statistics of 2700 events. Finally, the sneutrino mass is reconstructed by taking the invariant mass of the $`\stackrel{~}{\chi }_1^\pm `$ candidate and the leftover muon (Figure 2, right). 
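The $`W`$-mass constraint used above leads to a quadratic equation for the longitudinal neutrino momentum, which is the origin of the twofold ambiguity. A self-contained sketch of the standard solution (not the analysis code actually used), for a massless electron and neutrino, is:

```python
import numpy as np

M_W = 80.4  # GeV, W boson mass used as the constraint

def neutrino_pz(pl, met):
    """Solve m_W^2 = (p_l + p_nu)^2 for the neutrino p_z.

    pl  : (px, py, pz) of the charged lepton (assumed massless)
    met : (px, py) missing transverse momentum, taken as the neutrino pT
    Returns the two solutions; when the discriminant is negative the
    common convention of keeping only the real part is applied.
    """
    plx, ply, plz = pl
    el = np.sqrt(plx**2 + ply**2 + plz**2)
    ptl2 = plx**2 + ply**2
    mu = 0.5 * M_W**2 + plx * met[0] + ply * met[1]
    disc = mu**2 - ptl2 * (met[0]**2 + met[1]**2)
    root = np.sqrt(disc) if disc >= 0.0 else 0.0
    return ((mu * plz + el * root) / ptl2,
            (mu * plz - el * root) / ptl2)

# Example with invented momenta (GeV):
print(neutrino_pz((30.0, 10.0, 25.0), (40.0, -5.0)))
```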
The $`\stackrel{~}{\nu }`$ mass peak has a width of $`\sim 10`$ GeV and 2550 events are counted within 25 GeV of the measured peak. ## 3 Analysis reach ### 3.1 Standard Model background We consider the following SM processes for the evaluation of the background to the three-lepton signature: (1) $`\overline{t}t`$ production, followed by $`t\to Wb`$, where the two $`W`$ and one of the $`b`$ quarks decay leptonically, (2) $`WZ`$ production, where both bosons decay leptonically, (3) $`Wt`$ production, (4) $`Wbb`$ production, (5) $`Zb`$ production. These backgrounds were generated with the PYTHIA Monte Carlo , and the ONETOP parton level generator , and passed through the ATLFAST package . We apply to the background events the loose selection cuts described in Section 2, and in addition we reject the three same-sign muon configurations, which are never generated by our signal. The background to the sneutrino decay signal is calculated by considering the events with a $`\mu `$-jet-jet invariant mass in an interval of $`\pm 15`$ GeV around the $`\stackrel{~}{\chi }_1^0`$ peak measured for the signal. In order to optimise the signal-to-background ratio, only events containing three muons (configurations (3) and (4)), which are less likely in the Standard Model, are considered. In each event two combinations, corresponding to the two same-sign muons, can be used for the $`\stackrel{~}{\chi }_1^0`$ reconstruction. Both configurations are used when counting the number of events in the peak. In most cases, however, the difference in mass between the two combinations is such that they do not appear in the same peak region. ### 3.2 Supersymmetric background The pair production of SUSY particles through standard $`R_p`$-conserving processes represents another source of background. A study based on the HERWIG 6.0 Monte Carlo has shown that the SUSY events surviving the cuts described in Section 3.1 are mainly from $`pp\to \stackrel{~}{\chi }+X`$ reactions ($`\stackrel{~}{\chi }`$ being either a chargino or a neutralino and $`X`$ any other SUSY particle), and that the SUSY background decreases as the $`\stackrel{~}{\chi }^\pm `$ and $`\stackrel{~}{\chi }^0`$ masses increase. This behaviour is due to the combination of two effects: the $`\stackrel{~}{\chi }+X`$ production cross-section decreases with increasing $`\stackrel{~}{\chi }`$ mass, and the probability of losing two of the four jets from the decays of the two $`\stackrel{~}{\chi }_1^0`$ in the event becomes smaller as the $`\stackrel{~}{\chi }^\pm `$ and $`\stackrel{~}{\chi }_1^0`$ masses increase. The SUSY background is only significant for $`\stackrel{~}{\chi }_1^\pm `$ masses lower than $`\sim 200`$ GeV. Besides, it can be assumed that the $`\stackrel{~}{\chi }_1^0`$ mass will be derived from inclusive $`\stackrel{~}{\chi }_1^0`$ reconstruction in SUSY pair production, as shown in and . Hence, even in the cases where a significant $`\stackrel{~}{\chi }_1^0`$ peak cannot be observed above the SUSY background, we can proceed to the further steps in the kinematic reconstruction. The strong kinematic constraint obtained by requiring both the correct $`\stackrel{~}{\chi }_1^0`$ mass and a peak structure in the $`\stackrel{~}{\chi }_1^0W`$ invariant mass will then allow the single sneutrino production to be separated from other SUSY processes. Therefore, only the Standard Model background is considered in the evaluation of the analysis reach presented below. 
### 3.3 Reach in the mSUGRA parameter space In Figure 3, we show the regions of the $`m_0-m_{1/2}`$ plane where the signal significance exceeds 5 $`\sigma `$ ($`\frac{S}{\sqrt{B}}>5`$, with $`S`$ the signal and $`B`$ the SM background) after the set of cuts described in Section 3.1 has been applied, within the mSUGRA model. The full mass reconstruction analysis of Section 2 is possible only above the dashed line parallel to the $`m_0`$ axis. Below this line the decay $`\stackrel{~}{\chi }_1^\pm \to \stackrel{~}{\chi }_1^0W^\pm `$ is kinematically closed, and the $`W`$ mass constraint cannot be applied to reconstruct the neutrino longitudinal momentum. The basic feature in Figure 3 is a decrease of the sensitivity to $`\lambda _{211}^{\prime }`$ as $`m_0`$ increases. This is due to a decrease of the partonic luminosity as $`m_{\stackrel{~}{\nu }}`$ increases. The sensitivity to $`\lambda _{211}^{\prime }`$ is also observed to decrease as $`m_{\stackrel{~}{\chi }_1^\pm }`$ approaches $`m_{\stackrel{~}{\nu }}`$. There are two reasons. First, in this region the phase space factor of the decay $`\stackrel{~}{\nu }\to \stackrel{~}{\chi }_1^\pm \mu ^{\mp }`$ following the resonant sneutrino production is suppressed, thus reducing the branching fraction. Secondly, as the $`\stackrel{~}{\nu }_\mu `$ and the $`\stackrel{~}{\chi }_1^\pm `$ become nearly degenerate, the muon from the decay becomes on average softer, and its $`p_T`$ can fall below the analysis requirements. In the region $`m_{\stackrel{~}{\chi }_1^\pm }>m_{\stackrel{~}{\nu }}`$, shown as a hatched region in the upper left of the plots, the resonant sneutrino production contribution vanishes and there is essentially no sensitivity to $`\lambda _{211}^{\prime }`$. Finally, the sensitivity vanishes for low values of $`m_{1/2}`$. This region, below the LEP 200 kinematic limit for $`\stackrel{~}{\chi }_1^\pm `$ detection, corresponds to low values of the $`\stackrel{~}{\chi }_1^0`$ mass. In this situation the two jets from the $`\stackrel{~}{\chi }_1^0`$ decay are soft, and one of them is often below the transverse momentum requirement, or they are reconstructed as a single jet. For high $`\mathrm{tan}\beta `$, the three-lepton signature is still present, but it may be produced through the decay chain $`\stackrel{~}{\chi }_1^\pm \to \stackrel{~}{\tau }_1\nu _\tau `$, followed by $`\stackrel{~}{\tau }_1\to \tau \stackrel{~}{\chi }_1^0`$. The full kinematic reconstruction becomes very difficult, but the signal efficiency is essentially unaffected, as long as the mass difference between the lightest $`\stackrel{~}{\tau }`$ and the $`\stackrel{~}{\chi }_1^0`$ is larger than $`\sim 50`$ GeV. For a smaller mass difference the charged lepton coming from the $`\tau `$ decay is often rejected by the analysis cuts. ## 4 Conclusion In conclusion, we have shown that if minimal supersymmetry with R-parity violation is realised in nature, the three-lepton signature from resonant sneutrino production will be a privileged channel for the precise measurement of sparticle masses in a model-independent way, as well as for testing a broad region of the mSUGRA parameter space. This signature can lead to a high sensitivity to the $`\lambda _{211}^{\prime }`$ coupling and should also make it possible to probe an unexplored range of values for many other R̸<sub>p</sub> couplings of the type $`\lambda _{1jk}^{\prime }`$ and $`\lambda _{2jk}^{\prime }`$.
no-problem/0002/cond-mat0002031.html
ar5iv
text
# Charge-density waves in the Hubbard chain: evidence for $`4k_F`$ instability ## Abstract Charge density waves in the Hubbard chain are studied by means of finite-temperature Quantum Monte Carlo simulations and Lanczos diagonalizations for the ground state. We present results both for the charge susceptibilities and for the charge structure factor at densities $`\rho =1/6`$ and 1/3; for $`\rho =1/2`$ (quarter filled) we only present results for the charge structure factor. The data are consistent with a $`4k_F`$ instability dominating over the $`2k_F`$ one, at least for sufficiently large values of the Coulomb repulsion, $`U`$. This can only be reconciled with the Luttinger liquid analyses if the amplitude of the $`2k_F`$ contribution vanishes above some $`U^{\ast }(\rho )`$. Charge-density waves (CDW's) are present in a variety of strongly correlated electron systems, ranging from quasi-one-dimensional organic conductors to the more recently discovered manganites. In order to understand the influence of CDW formation on magnetic order and transport properties, a crucial issue is to establish the period of charge modulation in the ground state. This period, in turn, is primarily determined by the interplay between electron-phonon and electron-electron couplings. For the specific example of quasi-one-dimensional organic conductors with a quarter-filled band, both period-2 and period-4 modulations have been observed; these correspond, respectively, to $`4k_F`$ and $`2k_F`$, where $`k_F=\pi \rho /2`$ is the Fermi wave vector for a density $`\rho `$ of free electrons on a periodic lattice. The use of simplified effective models capturing the basic physical ingredients should therefore be extremely helpful in predicting the dominant instability. In this context, the Hubbard model can be thought of as a limiting case (of vanishing electron-phonon interaction), in which the influence of electronic correlations on CDW modulation can be monitored. However, even for this simplest possible model there has been a disagreement between analyses of the continuum (Luttinger liquid) version, and early finite temperature (world-line) quantum Monte Carlo (QMC) simulations. According to the former, the large-distance behaviour of the charge density correlation function is given by $$\langle n(x)n(0)\rangle =-\frac{K_\rho }{(\pi x)^2}+A_1\frac{\mathrm{cos}(2k_Fx)}{x^{1+K_\rho }\mathrm{ln}^{3/2}x}+A_2\frac{\mathrm{cos}(4k_Fx)}{x^{4K_\rho }},$$ (1) where the amplitudes $`A_1`$ and $`A_2`$, and the exponent $`K_\rho `$ are interaction- and density-dependent parameters; for repulsive interactions $`\frac{1}{2}\le K_\rho <1`$, so that charge correlations are expected to be dominated by the $`2k_F`$ term. By contrast, the simulations pointed towards $`4k_F`$ being the dominant correlation. Nonetheless, based on a Renormalization Group (RG) analysis, it was argued that the $`2k_F`$ instability should eventually dominate over the $`4k_F`$ one at sufficiently low temperatures. For infinite coupling the system becomes effectively a spinless fermion problem with a $`4k_F`$ instability, and there is no disagreement in this case. Since present day computational capabilities allow one to reach much lower temperatures and larger system sizes than before, a numerical reanalysis of the model is certainly in order. Our purpose here is to present the results of such a study. 
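To make the dominance statement explicit (a restatement of Eq. (1), not new material): the $`2k_F`$ term decays as $`x^{-(1+K_\rho )}`$ up to the logarithm, and the $`4k_F`$ term as $`x^{-4K_\rho }`$, so the $`2k_F`$ term decays more slowly whenever $`1+K_\rho <4K_\rho `$, i.e. $`K_\rho >1/3`$, which holds throughout the repulsive range. A short numerical check:

```python
import numpy as np

# Decay exponents of the 2k_F and 4k_F terms in Eq. (1).
print("K_rho  2kF exponent  4kF exponent  2kF decays slower?")
for k in np.linspace(0.5, 1.0, 6):       # repulsive Hubbard range
    print(f"{k:.2f}   {1 + k:.2f}          {4 * k:.2f}          {1 + k < 4 * k}")
```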
The Hubbard Hamiltonian reads $$H=-t\underset{i,\sigma }{\sum }\left(c_{i\sigma }^{\dagger }c_{i+1\sigma }^{}+\mathrm{H}.\mathrm{c}.\right)+U\underset{i}{\sum }n_{i\uparrow }n_{i\downarrow },$$ (2) where, in standard notation, $`U`$ is the on-site Coulomb repulsion; the hopping integral sets the energy scale, so we take $`t=1`$ throughout this paper. We probe finite temperature properties through determinantal QMC simulations for the grand-canonical version of (2), $`H-\mu \widehat{N}`$, where $`\widehat{N}\equiv \underset{i\sigma }{\sum }n_{i\sigma }`$; the chemical potential $`\mu `$ is adjusted to yield the desired particle density. This analysis is supplemented by zero temperature calculations: the ground state of Eq. (2), for finite lattices of $`N_s`$ sites with periodic boundary conditions, is obtained through the Lanczos algorithm, in the subspace of fixed particle density (canonical ensemble). The signature of a CDW instability is a peak at $`q=q^{\ast }`$ in the zero-temperature limit of the charge-density susceptibility, $$N(q)=\frac{1}{N_s}\int _0^\beta d\tau \underset{i,\ell }{\sum }\langle n_i(\tau )n_{i+\ell }(0)\rangle e^{iq\ell },$$ (3) where the imaginary-time dependence of the operators is given by $`n_i(\tau )\equiv e^{\tau H}n_ie^{-\tau H}`$, with $`n_i=n_{i\uparrow }+n_{i\downarrow }`$. We recall that in simulations ‘time’ is discretized in intervals $`\mathrm{\Delta }\tau `$, such that the size along this direction is $`L=\beta /\mathrm{\Delta }\tau `$. The CDW instability should also show up as a cusp, again at $`q=q^{\ast }`$, in the zero-temperature charge-density structure factor, $$C(q)=\frac{1}{N_s}\underset{i,\ell }{\sum }\langle 0|n_in_{i+\ell }|0\rangle e^{iq\ell },$$ (4) where $`|0\rangle `$ is the ground state. Let us first discuss results for an electronic density $`\rho =1/6`$, for which we found that the fermionic determinants in QMC simulations do not suffer from the ‘minus-sign problem’; this allowed us to reach inverse temperatures as large as $`\beta =25`$. From Fig. 1 we see that the $`4k_F`$ charge susceptibility appears to be increasing with decreasing temperature, at a rate faster than that at $`2k_F`$, especially for $`U=6`$ and 9. The data in Fig. 1 were obtained for $`\mathrm{\Delta }\tau =0.125`$, but we have explicitly tested other values to ensure they do not change significantly as $`\mathrm{\Delta }\tau \to 0`$; each datum point involves typically 20,000 QMC sweeps over all time slices. Further, in order to check if this increase is limited by finite-size or finite-temperature effects, we performed additional simulations on a ‘square space-time lattice’, i.e., we set $`N_s=L`$; the result is displayed in Fig. 2 for $`N_s=L\le 96`$. While the charge susceptibility at $`2k_F`$ seems to saturate as $`N_s`$ increases, the one at $`4k_F`$ still scales with $`\mathrm{ln}N_s`$, up to the largest sizes considered. Thus, in spite of the very low temperatures reached, we were still unable to find indications of a crossover temperature below which the system is dominated by the $`2k_F`$ instability. Also, since $`N_s\propto 1/T`$ for the data in Fig. 2, the $`4k_F`$ charge susceptibility grows logarithmically as the temperature is lowered in this range, similarly to the infinite coupling limit. We now discuss the charge structure factor at zero temperature, as obtained from Lanczos diagonalizations, still for $`\rho =1/6`$. Figure 3 shows $`C(q)`$ for several values of $`U`$; for clarity, the curves are successively displaced. For the free case, $`U=0`$, we see a sharp plateau beginning at $`q=2k_F=\pi /6`$, which is the signature of the Peierls instability. 
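The $`U=0`$ statement can be verified exactly, since for free fermions Wick's theorem reduces $`C(q)`$ to the one-body propagator $`G(\ell )=(1/N_s)\underset{k\,occ}{\sum }e^{ik\ell }`$. The sketch below is an illustrative check, with $`N_s=36`$ chosen so that $`\rho =1/6`$ gives a closed shell per spin; it reproduces the sharp plateau setting in at $`q=2k_F=\pi /6`$:

```python
import numpy as np

Ns, rho = 36, 1.0 / 6.0
n_sigma = int(Ns * rho / 2)              # 3 electrons per spin (closed shell)

m = np.arange(Ns)
order = np.argsort(np.minimum(m, Ns - m), kind="stable")
k_occ = 2 * np.pi * m[order[:n_sigma]] / Ns     # lowest-|k| momenta

l = np.arange(Ns)
G = np.exp(1j * np.outer(k_occ, l)).sum(axis=0) / Ns   # <c^dag_i c_{i+l}>
# Wick's theorem: connected density correlation (factor 2 for spin).
c_l = 2 * ((l == 0) * G[0].real - np.abs(G) ** 2)

for mq in range(1, Ns // 2 + 1):
    q = 2 * np.pi * mq / Ns
    Cq = np.real(np.sum(c_l * np.exp(1j * q * l)))
    print(f"q = {mq:2d} * 2pi/{Ns}   C(q) = {Cq:.4f}")
# C(q) rises linearly and is flat at 2*n_sigma/Ns = 1/6 for q >= 2k_F = pi/6:
# the free-fermion (Peierls) plateau quoted in the text.
```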
This behaviour is quite different from the one observed for $`U\ne 0`$: though somewhat rounded for $`U=2`$ and 4, the plateaux now start at $`q=4k_F=\pi /3`$. It is instructive to examine how these roundings evolve with system size. In Fig. 4, we single out data for $`C(q)`$ with $`U=3`$ and 12, and sizes $`N_s=12`$ and 24. For each value of $`U`$, the data below and above $`4k_F`$ respectively move down and up as $`N_s`$ increases, thus sharpening the cusp; in addition, the position of the latter shows no tendency to shift away from $`4k_F`$. Thus, the Lanczos results are consistent with those from QMC simulations, in the sense that a $`4k_F`$ instability is dominant, already for moderate values of $`U`$. At this point, it is worth pointing out that the charge structure factor for the Hubbard chain with second neighbour hopping has been calculated through density matrix renormalization group (DMRG). Though the aim of that work was to extract $`K_\rho `$ from the slope of $`C(q)`$ at $`q=0`$, if one reinterprets those data along the lines discussed here, a predominance of the $`4k_F`$ instability can be clearly inferred for the largest values of $`U`$ shown in Fig. 11 of Ref. . DMRG calculations for the two-leg Hubbard ladder have also led to $`4k_F`$-like charge correlations. We now change the band filling to $`\rho =1/3`$; the average sign of the fermionic determinant is $`\simeq 0.9`$ in the worst cases, thus posing no problems for the resulting averages. Figure 5 shows QMC data for the charge susceptibilities with $`U=6`$ and $`U=8`$. For $`U=6`$ an upturn at lower temperatures seems to be setting in for $`4k_F`$, while the $`2k_F`$ data show no noticeable change in growth rate. On the other hand, for $`U=8`$ the $`4k_F`$ susceptibility grows unequivocally faster with $`\beta `$ than the one at $`2k_F`$; see Fig. 5. The corresponding Lanczos data for the charge structure factor are shown in Fig. 6 and, similarly to Fig. 3, the cusp at $`q=4k_F=2\pi /3`$ gets visibly sharper as $`U`$ increases. Unfortunately, for larger band fillings the ‘minus-sign problem’ prevents us from reaching very low temperatures. Nonetheless, down to the lowest temperatures probed with acceptable average signs of the fermionic determinant (i.e., $`\langle \mathrm{sign}\rangle \gtrsim 0.7`$ at $`T\simeq 1/20`$), no indications of a $`2k_F`$ peak dominating the charge susceptibility were found for $`\rho =1/2`$ or 3/4. Accordingly, the charge structure factor at $`T=0`$, calculated through Lanczos diagonalizations on a 16-site chain at quarter filling, shown in Fig. 7, confirms the previous patterns: there is some rounding near $`q=4k_F=\pi `$, which sharpens as $`U`$ increases, consistent with a $`4k_F`$ instability setting in for finite $`U`$'s. In summary, for all band fillings examined, the charge instability seems to be characterized by a $`4k_F`$ modulation, rather than by $`2k_F`$, at least for $`U`$ greater than some $`U^{\ast }(\rho )`$. The question of how these findings can be reconciled with the analyses of the continuum model still remains. Since the Lanczos data have been obtained at zero temperature, and the QMC simulations reached much lower temperatures than before, the scenario of a temperature-driven crossover now seems unlikely; it should be recalled that this crossover was predicted based on a weak coupling RG analysis. We therefore envisage the following scenario. 
While analyses of the Luttinger liquid have so far provided detailed insight into the behaviour of the exponent $`K_\rho `$, little is known about the dependence of the amplitude $`A_1`$ of the $`2k_F`$ contribution [see Eq. (1)] on $`\rho `$ and on the coupling constant. Our results may be indicating that, for fixed density $`\rho `$, $`A_1\to 0`$ very fast with increasing $`U`$, either exponentially or, less likely, as $`\left(U^{\ast }(\rho )-U\right)^\psi `$ for $`U\to U^{\ast }(\rho )`$, with $`\psi >1`$; a crude examination of the roundings near $`4k_F`$ in the structure factors is consistent with $`U^{\ast }`$ growing with $`\rho `$. An alternative scenario could be that $`2k_F`$ charge correlations in the lattice model suffer from unusually slow finite-size effects, thus preventing any present-day numerical calculation from detecting their predominance over the $`4k_F`$ ones; however, since one would need effects slower than those suggested by the logarithmic ‘correction’ in Eq. (1), this scenario is less appealing. Therefore, the presence of a $`4k_F`$ instability can be made compatible with the Luttinger liquid picture through the behaviour of the amplitude of the $`2k_F`$ contribution. We hope our results stimulate more extensive work, both on the Luttinger liquid and on the lattice model, in order to extract a quantitative behaviour for the amplitude $`A_1(\rho ,U)`$. ###### Acknowledgements. The authors are grateful to H. Ghosh, A. L. Malvezzi and D. J. Scalapino for useful discussions, to S. L. A. de Queiroz and R. T. Scalettar for comments on the manuscript, and to R. Bechara Muniz for computational assistance. Financial support from the Brazilian Agencies FAPERJ, FINEP, CNPq and CAPES is also gratefully acknowledged. The authors are also grateful to Centro de Supercomputação da Universidade Federal do Rio Grande do Sul for the use of the Cray T94, and to the Instituto de Física at Universidade Federal Fluminense, where this work was initiated.
no-problem/0002/astro-ph0002140.html
ar5iv
text
# A strong jet/cloud interaction in the Seyfert galaxy IC 5063: VLBI observations ## 1 Introduction The study of the effects of interactions between the radio plasma ejected from an active nucleus and the interstellar medium (ISM) of the host galaxy is presently attracting a lot of interest. In particular, Seyfert and high-redshift radio galaxies appear to be the kind of objects where the effects of such interactions can be very important. These effects can range from shaping the morphology of the gas in the ISM (with the radio plasma sweeping up material as it advances), to the ionisation of the gas itself. While there is little doubt about the presence of such interactions in objects like Seyferts or high-$`z`$ radio galaxies, the actual importance of these effects in determining the overall characteristics of these sources is still a matter of debate. In some Seyfert galaxies the morphological association between the radio plasma and the optical line-emitting clouds, as well as the presence of disturbed kinematics in these clouds, is striking. In particular, the narrow-line regions (NLR) in Seyfert galaxies (i.e. regions of highly ionised, kinematically complicated gas emission that occupy the central area – up to $`\sim 1`$ kpc from the nucleus) often appear to form a ‘cocoon’ around the radio continuum emission (see e.g. Wilson 1997 for a review; Capetti et al. 1996; Falcke, Wilson & Simpson (1998) and references therein). Moreover, outflow phenomena are observed in the warm gas of several Seyfert galaxies (see Aoki et al. 1996 for a summary). Thus, the NLRs represent some of the best examples of regions where interaction between the local ISM and the radio plasma takes place and can be studied in detail. The situation appears to be different for the atomic hydrogen. Observations of the H I 21-cm line, in absorption, can trace the distribution of this gas in front of the brightest radio components, which are usually observed in the central region of Seyferts (of kpc or sub-kpc size, i.e. co-spatial with the NLRs). Thus, the study of the distribution and kinematics of the cold component of the circumnuclear ISM can nicely complement the optical data. Although H I absorption has been detected in a number of Seyfert galaxies (e.g. NGC 4151, Pedlar et al. 1992; NGC 5929, Cole et al. 1998; Mkn 6, Gallimore et al. 1998; see also Brinks & Mundell 1996 and Gallimore et al. 1999 and references therein), most of the investigated objects show single localised H I absorption components that can be explained as rotating, inclined disks or rings aligned with the outer galaxy disk (Gallimore et al. 1999), and only very seldom as gas in a parsec-scale circumnuclear torus (NGC 4151, Mundell et al. 1995). However, more complex H I absorption profiles that cannot be explained by the above mechanism have been observed in at least one Seyfert galaxy, IC 5063. Australia Telescope Compact Array (ATCA) observations of this galaxy (Morganti, Oosterloo & Tsvetanov 1998, hereafter M98) have revealed a very interesting absorption system with velocities up to $`\sim 700`$ km s<sup>-1</sup> blue-shifted with respect to the systemic velocity. In this object, unlike in other Seyfert galaxies, at least some of the observed H I absorption originates from regions of interaction between the radio plasma and the ISM, producing an outflow of the neutral gas. 
This object, therefore, poses a number of interesting questions, such as: where is the interaction occurring, what are the physical conditions, and why is such an interaction not seen more often in the neutral gas of other Seyfert galaxies? Previous H I observations were limited by low spatial resolution. In this paper we present the results from new VLBI observations aimed at investigating the nuclear radio structure in more detail and at locating where the complex H I absorption observed with the ATCA actually occurs. Throughout the paper we adopt a Hubble constant of $`H_0=50`$ km s<sup>-1</sup> Mpc<sup>-1</sup>, so that 1 arcsec corresponds to 0.32 kpc at the redshift of IC 5063. ## 2 Summary of the properties of IC 5063 IC 5063 is a nearby ($`z=0.0110`$) early-type galaxy that hosts a Seyfert 2 nucleus and is particularly strong at radio wavelengths ($`P_{1.4\mathrm{GHz}}=6.3\times 10^{23}`$ W Hz<sup>-1</sup>). This object has been recently studied, both in radio continuum at 8 GHz and in the 21-cm line of H I, using the ATCA (M98). In the continuum, on the arcsecond scale, we find a linear triple structure (see Fig. 1) of about 4 arcsec size ($`\sim 1.3`$ kpc), which shows a close spatial correlation with the optical ionised gas, very similar in nature to what is observed in several other Seyfert galaxies (see e.g. Wilson 1997), indicating that the radio plasma is important in shaping the NLR. In the H I 21-cm line, apart from detecting the emission from the large-scale disk of IC 5063, very broad ($`\sim 700`$ km s<sup>-1</sup>), mainly blue-shifted absorption was detected against the central continuum source. These line observations could only be obtained with $`\sim 7`$ arcsec resolution, the highest resolution achievable with the ATCA at this wavelength. This resolution is too low to resolve the linear continuum structure detected in the 8-GHz continuum image. However, what makes this absorption particularly interesting is that we were able to conclude (by a careful analysis of the data; see M98 for the detailed discussion) that at least the most blue-shifted absorption is likely to originate against the western (and brighter) radio knot, and not against the central radio feature seen at 8 GHz. The large, blue-shifted velocities observed in the absorption profile make it very unlikely that these motions have a gravitational origin (the most blue-shifted H I emission associated with the large-scale H I disk occurs at roughly $`-300`$ km s<sup>-1</sup> with respect to the systemic velocity), and they are more likely to be connected to a fast outflow of the ISM caused by an interaction with the radio plasma. The identification of the central radio feature as the core, and hence the conclusion that the absorption occurs against the western lobe, is an important element in interpreting the nature of the absorption detected in IC 5063. In the literature, the core of this galaxy has sometimes been identified with the bright western knot (Bransford et al. 1998); however, in our opinion there is compelling evidence that the identification of M98 is correct. The superposition of the 8-GHz radio image with an optical WFPC2 image available from the HST public archive and with a ground-based narrow-band [O III] image suggests that the nucleus coincides with the central radio knot (see Figs 3 and 4 from M98). Although, as usual, there is some freedom in aligning the WFPC2 image with the 8-GHz radio image, aligning the western radio knot with the nucleus would require too large a shift. 
Given that the WFPC2 image was taken through the F606W filter, it contains the bright emission lines of [O III]$`\lambda 5007`$, H$`\alpha `$ and [N II]$`\lambda \lambda 6548,6584`$, and it gives a good idea of the morphology of the ionised gas. By aligning the nucleus with the central radio knot, a good overall correspondence between the radio morphology and the bright region of optical emission lines is obtained, both in the WFPC2 image and the ground-based [O III] image, similar in nature to what is observed in many other Seyfert galaxies. Choosing this alignment, the western radio knot falls right on top of a very bright, unresolved spike in the WFPC2 image, i.e. the western radio knot would also have a counterpart in the WFPC2 image. The filamentary morphology of the ionised gas in the region just around this spike is suggestive of an interaction between the radio plasma and the ISM, and the identification of the western radio lobe with this feature seems natural. Using optical spectroscopy, Wagner & Appenzeller (1989) found off-centre, blue-shifted broad emission lines, with widths similar to that of the detected H I absorption, at a position 1-2 arcsec west of the nucleus, i.e. coincident with the spike seen in the WFPC2 image. This also suggests that at this position a violent interaction is occurring. The identification of the core with the central radio knot has been recently confirmed by Kulkarni et al. (1998) from NICMOS images. Three well resolved knots were detected in the emission lines of [Fe II], Pa$`\alpha `$ and H<sub>2</sub>. This emission-line structure shows a direct correspondence with the radio continuum structure. In broad-band near-IR images they detected a very red point source coincident with the central source seen in the emission lines, consistent with previous suggestions of a dust-obscured active nucleus. The strong [Fe II] and H<sub>2</sub> emission is usually interpreted as evidence for fast shocks, and the direct correspondence between these regions and the radio emission suggests that shocks associated with the radio jet play a role in the excitation of the emission-line knot. The same authors found an asymmetry in the H<sub>2</sub> distribution, with the eastern lobe showing much weaker emission than the western lobe. This asymmetry can be explained, e.g., if an excess of molecular gas is present on the western side (for example, if the radio jet has struck a molecular cloud). In the optical, IC 5063 shows a very high-excitation emission-line spectrum (including [Fe VII]$`\lambda \lambda `$5721, 6087; Colina, Sparks & Macchetto 1991). The high-excitation lines are detected within 1-1.5 arcsec on both sides of the nucleus, about the distance between the radio core and both the lobes. These lines indicate the presence of a powerful and hard ionising continuum in the general area of the nucleus and the radio knots in IC 5063. We have estimated (M98) the energy flux in the radio plasma to be an order of magnitude smaller than the energy flux emitted in emission lines. The shocks associated with the jet-ISM interaction are, therefore, unlikely to account for the overall ionisation, and the NLR must be, at least partly, photoionised by the nucleus, unless the lobe plasma contains a significant thermal component (Bicknell et al. 1998). 
## 3 VLBI observations
IC 5063 was observed with the Australian Long Baseline Array (LBA), initially in continuum at 13 cm (2.3 GHz), followed by spectral-line observations at the frequency corresponding to the redshifted H I. The 13-cm observations in June 1996 comprised five stations: Parkes (64 m), Mopra (22 m), the Australia Telescope Compact Array (5$`\times `$22-m dishes as a tied array), the Mount Pleasant 26-m antenna of the University of Tasmania, and the Tidbinbilla 70-m antenna of the Canberra Deep Space Communications Complex (CDSCC) near Canberra. The observations used the S2 recording system to record a single 16-MHz band in right-circular polarisation and were correlated at the LBA S2 VLBI correlator of the Australia Telescope National Facility at Marsfield, Sydney. The 13-cm data were edited and calibrated using the AIPS processing system. After this, the data were exported to DIFMAP (Shepherd 1997) for model fitting and imaging. The final image is presented in Fig. 2 and was made with uniform weighting. Although the observations were not phase-referenced, absolute position calibration for the 13-cm LBA image was extracted from the delay and rate data, allowing the position of the radio image to be fixed at the $`\sim `$0.1 arcsec level in each coordinate, adequate for registration with other images. The 21-cm observations were made in September 1997 at the redshifted H I frequency of 1407 MHz, recording 16-MHz bandwidths in each circular polarisation. The same array was used, except for the Tidbinbilla 70-m antenna, which has no 21-cm capability. Correlation was in spectral-line mode with 256 spectral channels on each baseline and polarisation. The editing and part of the calibration of the 21-cm line data were done in AIPS, and the data were then transferred to MIRIAD (Sault, Teuben & Wright 1995) for the bandpass calibration. The calibration of the bandpass was done using additional observations of the strong calibrators PKS 1921–293 and PKS 0537–441. Problems were encountered at Mopra which limited the usefulness of those data. It proved impossible to image the source from the final dataset, so a simpler analysis using the time-averaged baseline spectra was employed instead.
## 4 The sub-kpc structure
### 4.1 The radio continuum morphology
The final 13-cm image, shown in Fig. 2, has a beam of $`56\times 15`$ mas in position angle (p.a.) $`40^{\circ }`$. The r.m.s. noise is $`\sim 0.7`$ mJy beam<sup>-1</sup>. The total flux is 210 mJy. Because of the high accuracy of the astrometry of this VLBI image, we know that the observed structure corresponds (as expected) to the brighter, western lobe observed in the 8-GHz ATCA image (see Fig. 2). It is therefore situated at about 0.6 kpc from the nucleus. The image shows that the lobe has a relatively bright peak ($`\sim 77`$ mJy beam<sup>-1</sup>) and some extended emission to the north-east in p.a. $`40^{\circ }`$, with a total size of about 50 mas (or $`\sim 16`$ pc). This p.a. is quite different from that of the arcsecond-scale structure seen in the ATCA 8-GHz data (p.a. $`295^{\circ }`$), so there appears to be structure perpendicular to the main radio axis. This kind of distortion is often seen in the radio structure of Seyfert galaxies (e.g. Falcke et al. 1998) and could perhaps result from the interaction of the radio plasma with the environment. From our data, a brightness temperature of $`\mathrm{T}_\mathrm{B}\sim 10^7`$ K can be inferred for the VLBI source.
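The quoted brightness temperature can be cross-checked with the Rayleigh-Jeans relation. The sketch below is our own estimate, assuming the 77 mJy beam<sup>-1</sup> peak fills the 56 × 15 mas Gaussian beam; it returns about $`2\times 10^7`$ K, consistent with the value quoted above.

```python
import math

# Brightness temperature from S = 2 k T_B (nu/c)^2 Omega, with a Gaussian
# beam solid angle Omega = pi * theta_maj * theta_min / (4 ln 2).
k = 1.381e-23                    # Boltzmann constant [J/K]
c = 2.998e8                      # speed of light [m/s]
S = 77e-3 * 1e-26                # 77 mJy/beam in W m^-2 Hz^-1
nu = 2.3e9                       # 13 cm -> 2.3 GHz
mas = math.pi / (180 * 3600e3)   # one milliarcsecond in radians
omega = math.pi * (56 * mas) * (15 * mas) / (4 * math.log(2))

T_B = S * c**2 / (2 * k * nu**2 * omega)
print(f"T_B ~ {T_B:.1e} K")      # ~2e7 K
```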
This brightness temperature is several orders of magnitude less than the values seen in milliarcsecond AGN cores or inner (pc-scale) jets, which typically have brightness temperatures between $`10^9`$ and $`10^{11}`$ K. However, this temperature is quite commonly found for radio knots detected in Seyfert galaxies (e.g. knot C in NGC 1068; Roy et al. 1998). Unfortunately, we do not have a spectral index for this region on the VLBI scale. The overall spectral index inferred from the ATCA 8.6- and 1.4-GHz images is steep, $`\alpha \simeq -1`$, and indeed consistent with a radio lobe or jet. However, unless a detailed multi-frequency spectral-index study can be carried out, it is difficult to draw conclusions from this result alone, given the complexity often observed in the spectral index of the central regions of Seyfert galaxies. In summary, we can conclude that the radio morphology, the spectrum and the brightness temperature of the VLBI source are consistent with what is expected for a radio lobe.
### 4.2 The H I absorption
As mentioned above, because useful 21-cm line data could be obtained only on the Parkes-ATCA baseline, we present only a time-integrated spectral profile of the H I on this baseline. These data correspond to a spatial scale of about 0.1 arcsec. Fig. 3 shows the continuum-weighted H I absorption profile. Heliocentric, optical velocities are used. For comparison, the spectrum obtained from the previous ATCA observations (with much lower spatial resolution) is superimposed (dashed line). In Fig. 3 we have also indicated the range of velocities measured for the H I emission of the large-scale disk of IC 5063, as well as the systemic velocity of the galaxy of $`3400`$ km s<sup>-1</sup>, as derived from the kinematics of the H I emission. The r.m.s. limit to the optical depth is $`\sim 0.3`$%. Fig. 3 shows that a strong absorption signal is detected against the VLBI source. Since the 13-cm data showed that the VLBI source corresponds to the western radio lobe, these data now confirm what was believed to be the case from the ATCA data, namely that the absorption is occurring against the western radio lobe. Fig. 4 shows the same data as Fig. 3, except that both profiles have been normalised to the same optical depth for the most blue-shifted component. Figs 3 and 4 show quite clearly that the shape of the absorption profile obtained at the high resolution of the VLBI data is quite different in character from that obtained with the ATCA. While in the ATCA data the absorption is relatively uniform in velocity, in the VLBI spectrum the most blue-shifted component is clearly the dominant one. This shows that the most blue-shifted absorption is occurring against a compact radio source, while the absorption at lower velocities is against a more diffuse source. Component $`A`$ has a central velocity of 2786 km s<sup>-1</sup>, over 600 km s<sup>-1</sup> blue-shifted with respect to the systemic velocity (3400 km s<sup>-1</sup>), with its bluest wing extending to about 2650 km s<sup>-1</sup>, or –750 km s<sup>-1</sup> relative to the systemic velocity. Component $`A`$ corresponds to the most blue-shifted component found in the ATCA profile, as illustrated in Fig. 4. At slightly less blue-shifted velocities, but still outside the range of velocities observed in emission, the VLBI data show a second component ($`B`$).
The absorption with velocities within the range of the H I emission, as detected in the ATCA profile, is only partly detected in the VLBI spectrum, with component $`C`$. No absorption is detected in the velocity range 3000-3200 km s<sup>-1</sup>. Hence, the absorption seen in the ATCA data at velocities above 3000 km s<sup>-1</sup> has become much less prominent compared to the more blue-shifted absorption. Note that this effect is probably even stronger than the data show, since the low resolution of the ATCA will have caused some filling of the absorption by emission from the H I disk, so the 'true' absorption is likely to be stronger at these velocities. The ATCA spectrum also showed a faint red-shifted absorption component that is perhaps also detected in the VLBI spectrum. The column density $`N_{\mathrm{HI}}`$ of the obscuring neutral hydrogen is given by $`N_{\mathrm{HI}}=1.823\times 10^{18}\,T_\mathrm{s}\int \tau \,dv`$ cm<sup>-2</sup>, where $`T_\mathrm{s}`$ is the spin temperature of the neutral hydrogen. Assuming a spin temperature of 100 K, we derive a column density of $`1.7\times 10^{21}`$ atoms cm<sup>-2</sup> for components $`A`$ and $`B`$, and a column density of $`2.5\times 10^{20}`$ atoms cm<sup>-2</sup> for component $`C`$. The main source of uncertainty in the derived column density comes from the assumed value of the spin temperature. The presence of a strong continuum source near the H I gas can make the radiative excitation of the H I hyperfine levels dominate over the usually more important collisional excitation (see e.g. Bahcall & Ekers 1969). Gallimore et al. (1999) argue that the H I causing the absorption against Seyfert jets is in general at too high densities ($`\gtrsim 10^5`$ cm<sup>-3</sup>) for these effects to be relevant. However, the argument used by Gallimore et al. applies to H I in pressure equilibrium with the NLR, while the absorbing gas in Seyferts in general is not co-spatial with the NLR but lies at larger radii. Because of this, the density of the absorbing gas is lower; but these regions are also further removed from the central engine, and the spin temperature approaches the kinetic temperature at much lower densities. In our model for IC 5063, the H I causing the most blue-shifted absorption is the skin of a molecular cloud that is being stripped off by the jet (see also §5). Given that typical densities in molecular clouds are in the range $`10^4`$-$`10^6`$ cm<sup>-3</sup>, this sets an upper limit to the density of the absorbing gas; but given the large velocities involved, the actual density of the blue-shifted gas could be substantially lower. The effects of the local radiation field on the excitation of the hyperfine line were already discussed by M98 for the case of IC 5063, where it was concluded that these effects are perhaps important. The column density derived by Kulkarni et al. from the NICMOS observations ($`N_{\mathrm{HI}}\sim 5\times 10^{21}`$ atoms cm<sup>-2</sup>) is slightly higher than our estimate based on a $`T_{\mathrm{spin}}`$ of 100 K, suggesting that the spin temperature is perhaps somewhat higher than 100 K. For $`T_{\mathrm{spin}}=100`$ K the derived column density is much lower than the value of $`10^{23}`$ atoms cm<sup>-2</sup> found from X-ray data (Koyama et al. 1992).
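The column-density relation above is straightforward to evaluate. In the sketch below the velocity-integrated optical depths are back-solved from the quoted columns (they are not listed explicitly in the text), so they should be read as illustrative inputs only.

```python
# N_HI = 1.823e18 * T_s * Int(tau dv)  [cm^-2, with velocities in km/s]
def column_density(tau_dv_kms, T_spin=100.0):
    """H I column density for a given velocity-integrated optical depth."""
    return 1.823e18 * T_spin * tau_dv_kms

print(f"{column_density(9.3):.2e}")   # ~1.7e21 cm^-2: components A + B
print(f"{column_density(1.4):.2e}")   # ~2.5e20 cm^-2: component C
# The result is linear in T_spin: a spin temperature of 300 K would
# triple the inferred columns.
```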
## 5 H I, H<sub>2</sub> and radio plasma: a possible scenario for the interaction
Summarising, the main result from our new observations is that, with the improved spatial resolution, the absorption at velocities outside the range allowed by the rotational kinematics of the large-scale H I disk has become much stronger, while the absorption in the range of velocities of the H I disk has become much less prominent. From the 13-cm radio continuum VLBI data we have been able to image only the western part of the source observed by the ATCA, while the remaining structure is resolved out. All this confirms and completes the picture we derived from the ATCA data, namely that a strong interaction between the radio plasma and the ISM is occurring at the position of the western radio lobe. Fig. 5 gives a schematic diagram of what we believe is happening in the western lobe of IC 5063. Following the results from NICMOS (Kulkarni et al. 1998), the asymmetry observed in the brightness of the H<sub>2</sub> (the western side brighter than the eastern side) is likely explained by an excess of molecular gas on the western side. Thus, the radio plasma ejected from the nucleus appears to interact directly with such a molecular cloud. Because of this interaction, the jet is drilling a hole in the dense ISM, sweeping up the gas and forming a cocoon-like structure around the radio lobe, in which the gas moves at high speed and an outflow is created. The increased ultraviolet radiation due to the shocks generated by the interaction can dissociate part of the molecular gas, creating neutral hydrogen, or even ionised gas if the UV continuum produced by the shocks is hard and powerful enough. The region of ionised gas would correspond to the part of the cocoon closest to where the interaction is occurring, possibly comprising both the shocked gas and the precursor. The complex kinematics observed in the optical emission lines from this region (Wagner & Appenzeller 1989) are consistent with this. As for the neutral gas, we only observe the component in front of the radio continuum and therefore, as an effect of the outflow produced by the interaction, we only observe the blue-shifted component. The most blue-shifted component will be seen against the hot spot, where the interaction is most intense. Somewhat away from this location, the H I will be driven out by the expanding cocoon, but since this is away from the hot spot, it will occur at lower velocities. Moreover, the radio continuum emission from this region is also more extended compared to the small hot spot. Hence the VLBI observations do not detect the absorption at lower velocities, but only the highest velocities against the hot spot (as illustrated in Fig. 5). The origin of the H<sub>2</sub> emission can be related to UV radiation or to shocks (Draine, Roberge & Dalgarno 1983; Sternberg & Dalgarno 1989). Although we are not able to distinguish between these two mechanisms, this scenario suggests that there should be a strong shock component in IC 5063. The H<sub>2</sub> emission observed by NICMOS would therefore come from the very dense region of the molecular cloud (again due to the compression of the gas associated with the interaction). As noted above, not all the components observed in the ATCA data are also visible in the VLBI data.
A possible explanation for this is that the components missing from the VLBI H I absorption are against continuum emission that is resolved out in our VLBI data, indicating that the cocoon of shocked gas is quite extended and covers at least the whole western radio lobe. Alternatively, part of the absorption undetected in the VLBI spectrum could be due to the large-scale disk associated with the dust lane, also seen in H I emission, although the continuity of the ATCA absorption profile does not suggest this. By looking at the velocity field derived for this disk from the H I emission observations (see Fig. 5 in M98), we can see that the western side is the approaching side, and therefore shows blue-shifted velocities relative to systemic. This means that the large-scale disk, lying in front of the hot spot, could be responsible for component $`C`$, but it cannot explain the weak red-shifted component unless non-circular motions are present in the foreground gas associated with the dust lane. In M98 we hypothesised that the red-shifted component could be associated with a nuclear torus/disk, motivated by the fact that the width of the red-shifted component appeared to be similar to that of the CO profile observed by Wiklind et al. (1995). However, the continuum visibilities associated with the 21-cm data show no indication of a nuclear component (and, extrapolating the arcsecond-scale data, the core flux is probably too weak to be detected, let alone to produce absorption), so this hypothesis has to be ruled out. A final possibility is that, apart from the bulk outflow, turbulent motions produced in the shocked region give rise to clouds with red-shifted velocities.
## 6 Comparison with other Seyfert galaxies
How does IC 5063 compare with other Seyfert galaxies? The results on IC 5063 confirm the more general result obtained by Gallimore et al. (1999) for a sample of Seyfert galaxies: the H I absorption is not occurring against the core, and the absorbing gas in Seyferts does not trace (except for NGC 4151) the pc-scale gas. In the galaxies studied by Gallimore et al., the absorption occurs a few hundred parsec from the core and is caused by the inner regions of, or gas associated with, the large-scale H I disks. This is also the case in IC 5063. The important difference is that in IC 5063 the jet is strongly and physically interacting with this H I disk, causing the fast outflow observed in the absorbing material. This makes IC 5063 unique. Only component $`C`$ would exactly fit the scenario proposed by Gallimore et al.: it is quite likely a gas cloud at large radius (given its column density), unrelated to the interaction, projected in front of the radio hot spot. One obvious question is why this kind of absorption (i.e. broad, blue-shifted absorption) is so rare. Are the physical conditions in IC 5063 rare, or is there an observational bias? Some arguments suggest that IC 5063 is a special case. IC 5063 is a very strong radio emitter compared to other Seyfert galaxies. Most of the strong radio flux of IC 5063 is produced in the western radio knot, indicating that the interaction is particularly strong. Also the fact that the western radio knot is much brighter than the eastern one indicates that the conditions near the western lobe are special.
It has been noted that IC 5063 belongs to a group of “radio-excess infrared galaxies” (Roy & Norris 1997), objects that could represent active galactic nuclei hosted in an unusual environment, or perhaps dust-enshrouded quasars or their progenitors. It appears that the jet-cloud interaction in IC 5063 is particularly strong. This would make IC 5063 a very suitable object for further detailed studies of jet-cloud interactions in Seyfert galaxies. One factor is of course that, in order to create the strong interaction and the very broad absorption, the jet has to lie more or less in the plane of the H I disk. Only then can the jet have a strong interaction with the ISM. The orientation of the AGN in Seyferts is not correlated with that of the large-scale disk, so the effects seen in IC 5063 should occur in only a minority of cases. On the other hand, interactions between the radio plasma and the ISM are common in Seyferts, given that in many Seyfert galaxies very large velocity widths of the optical emission lines are observed in the NLR (e.g. Aoki et al. 1996 and references therein). Perhaps the high sensitivity in $`\tau `$ of our observations also plays a role. IC 5063 is a strong radio source compared to other Seyfert galaxies, which are between 10 and 100 times weaker at radio wavelengths. Only H I absorption with much higher optical depth could therefore be observed against those objects. For example, the Seyfert 2 galaxy NGC 5929 shows a striking morphological similarity (both in the optical and radio) with IC 5063. However, the peak of the radio emission in NGC 5929 is only 24 mJy beam<sup>-1</sup>, so in this object absorption of a few per cent would not be detectable at the noise level of the current observations (Cole et al. 1998). For almost all the galaxies in the sample studied by Gallimore et al. (1999), the sensitivity was not sufficient to detect faint, broad absorption like that in IC 5063. Moreover, in order to detect broad profiles at the level seen in IC 5063 even in strong sources, a good spectral dynamic range is required, which is not always easy to obtain (e.g. NGC 1068; Gallimore et al. 1999). It is quite possible that more cases like IC 5063 will be found if more sensitive observations are performed.
## 7 Conclusions
Using the Australian Long Baseline Array, we have detected a compact radio source of about 50 mas (or $`\sim 16`$ pc) in size (at 13 cm) in the Seyfert galaxy IC 5063. Because of the high positional accuracy of these measurements, we can unambiguously identify this radio knot with the western radio lobe. The hot spot is extended in a direction almost perpendicular to the radio jet. In the 21-cm line observations, we detect absorption that is very much blue-shifted ($`\sim `$700 km s<sup>-1</sup>) with respect to the systemic velocity. Together with the 13-cm observations, this confirms that the H I absorption is not taking place against the core, but against the western radio knot. At the position of the western radio knot a very strong interaction must be occurring between the radio jet and the ISM. Various arguments suggest that this interaction is particularly strong compared to other Seyfert galaxies. This makes IC 5063 a good candidate for studying the physics of jet-cloud interactions in Seyfert galaxies. The H I absorption characteristics of IC 5063 are only partially consistent with other absorption studies of Seyfert galaxies: the major absorption component is occurring against the bright radio knot, offset a few hundred parsec from the core.
While there are indications that the absorbing material is associated with the large-scale H I disk, it is clearly (and violently) disturbed by the passage of the jet. We suspect that more sensitive observations may reveal similar absorption profiles in other Seyfert galaxies with fainter radio sources. We wish to thank the referee, Jack Gallimore, for his useful comments.
# Evaluation of the quantitative prediction of a trend reversal on the Japanese stock market in 1999.
## Abstract
In January 1999, the authors published a quantitative prediction that the Nikkei index should recover from its 14-year low in January 1999 and reach $`20500`$ a year later. The purpose of the present paper is to evaluate the performance of this specific prediction as well as the underlying model: the forecast, performed at a time when the Nikkei was at its lowest (as we can now judge in hindsight), has correctly captured the change of trend as well as the quantitative evolution of the Nikkei index since its inception. As the change of trend from sluggish to recovery was estimated quite unlikely by many observers at that time, a Bayesian analysis shows that a skeptical (resp. neutral) Bayesian sees her prior belief in our model amplified into a posterior belief $`19`$ times larger (resp. reaching the $`95\%`$ level).
keywords: Stock market; Log-periodic oscillations; scale invariance; prediction; Gold; Nikkei; Herding behaviour.
Following the general guidelines proposed in , the authors made public in January 1999, through the Los Alamos preprint server, a quantitative prediction stating that the Nikkei index should recover from its 14-year low (actually 13232.74 on 5 Jan 1999) and reach $`20500`$ a year later, corresponding to an increase in the index of $`50\%`$. Furthermore, this prediction was mentioned in a wide-circulation journal which appeared in May 1999 . Specifically, based on a third-order “Landau” expansion $$\frac{dF\left(\tau \right)}{d\mathrm{log}\tau }=\alpha F\left(\tau \right)+\beta |F\left(\tau \right)|^2F\left(\tau \right)+\gamma |F\left(\tau \right)|^4F\left(\tau \right)+\mathrm{\dots }$$ (1) in terms of $`\tau \equiv t-t_c`$, where $`t_c=`$ 31 Dec. 1989 is the time of the all-time high of the Nikkei index, the authors arrived at the equation $$\mathrm{log}\left(p(t)\right)\simeq A^{\prime }+\frac{\tau ^\alpha }{\sqrt{1+\left(\frac{\tau }{\mathrm{\Delta }_t}\right)^{2\alpha }+\left(\frac{\tau }{\mathrm{\Delta }_t^{\prime }}\right)^{4\alpha }}}$$ $$\left\{B^{\prime }+C^{\prime }\mathrm{cos}\left[\omega \mathrm{log}\tau +\frac{\mathrm{\Delta }_\omega }{2\alpha }\mathrm{log}\left(1+\left(\frac{\tau }{\mathrm{\Delta }_t}\right)^{2\alpha }\right)+\frac{\mathrm{\Delta }_\omega ^{\prime }}{4\alpha }\mathrm{log}\left(1+\left(\frac{\tau }{\mathrm{\Delta }_t^{\prime }}\right)^{4\alpha }\right)+\varphi \right]\right\},$$ (2) describing the time evolution of the Nikkei index $`p(t)`$. Equation (2) was then fitted to the Nikkei index in the time interval from the beginning of 1990 to the end of 1998, i.e., a total of 9 years. Extending the curve beyond 1998 thus provided us with a quantitative prediction for the future evolution of the index. In figure 2, we compare the actual and predicted evolution of the Nikkei over 1999. We see that not only did the Nikkei experience a trend reversal as predicted, but it has also followed the quantitative prediction with rather impressive precision. It is important to note that the error between the curve and the data has not grown after the last point used in the fit. This tells us that the prediction has performed well so far. Furthermore, since the relative error between the fit and the data is within $`\pm 2\%`$ over a time period of 10 years, not only has the prediction performed well, but so has the underlying model.
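For concreteness, equation (2) is easy to code up as a function. The sketch below is purely illustrative: the parameter values in the example call are placeholders, since the fitted values are not reproduced in this excerpt.

```python
import numpy as np

def log_price(t, tc, A, B, C, alpha, omega, dt1, dt2, dw1, dw2, phi):
    """Third-order log-periodic formula of Eq. (2) for log(p(t))."""
    tau = t - tc                         # time since the Dec. 1989 peak
    corr = 1 + (tau / dt1) ** (2 * alpha) + (tau / dt2) ** (4 * alpha)
    phase = (omega * np.log(tau)
             + dw1 / (2 * alpha) * np.log(1 + (tau / dt1) ** (2 * alpha))
             + dw2 / (4 * alpha) * np.log(1 + (tau / dt2) ** (4 * alpha))
             + phi)
    return A + tau ** alpha / np.sqrt(corr) * (B + C * np.cos(phase))

# Example call with made-up parameters, evaluated 9 years after t_c:
print(log_price(1999.0, 1990.0, 9.9, -0.3, 0.05,
                0.5, 5.0, 4.0, 8.0, 1.0, 1.0, 0.0))
```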
This analysis represents the correct quantitative evaluation of the performance of the model, as well as of its predictive power on the Nikkei index, over a quite impressive time span of 10 years. We wish to stress that the fulfillment of our prediction is even more remarkable than the comparison between the curve and the data indicates. This is because it included a change of trend: at the time when the prediction was issued, the market was declining and showed no tendency to increase. Many economists were at that time very pessimistic and could not envision when Japan and its market would rebound. For instance, the well-known economist P. Krugman wrote on July 14, 1998 in the Shizuoka Shimbun, at the time of the banking scandal: “the central problem with Japan right now is that there just is not enough demand to go around - that consumers and corporations are saving too much and borrowing too little… So seizing these banks and putting them under more responsible management is, if anything, going to further reduce spending; it certainly will not in and of itself stimulate the economy… But at best this will get the economy back to where it was a year or two ago - that is, depressed, but not actually plunging.” Then, in the Financial Times of January 20th, 1999, P. Krugman wrote in an article entitled “Japan heads for the edge” the following: “…the story is starting to look like a tragedy. A great economy, which does not deserve or need to be in a slump at all, is heading for the edge of the cliff – and its drivers refuse to turn the wheel.” In a poll of thirty economists performed by Reuters (the major news and finance data provider in the world) in October 1998, only two economists predicted growth for the fiscal year of 1998-99. For the year 1999-2000 the prediction was a meager 0.1% growth. This majority of economists said that “a vicious cycle in the economy was unlikely to disappear any time soon as they expected little help from the government’s economic stimulus measures… Economists blamed moribund domestic demand, falling prices, weak capital spending and problems in the bad-loan laden banking sector for dragging down the economy.” Nevertheless, we predicted a $`50\%`$ increase of the market over the following 12 months, assuming that the Nikkei would stay within the error bars of the fit. At the time of writing (3 February 2000), the market is up by $`49.5\%`$ and the error between the prediction and the data has not increased (see figure 2). Prediction of trend reversals is notoriously difficult and unreliable, especially in the linear framework of auto-regressive models used in standard economic analyses. The present nonlinear framework is well adapted to the forecasting of changes of trend, which constitute by far the most difficult challenge posed to forecasters. Here, we refer to our prediction of a trend reversal within the strict confines of equation (2): trends are limited periods of time during which the oscillatory behaviour shown in figure 2 is monotonous. A change of trend thus corresponds to crossing a local maximum or minimum of the oscillations. We report one case. In the standard “frequentist” approach to probability and to the establishment of statistical confidence, this bears essentially no weight and should be discarded as story telling. We are convinced that the “frequentist” approach is unsuitable for assessing the quality of such a unique experiment as the one presented here, the prediction of a global financial indicator, and that the correct framework is Bayesian.
Within the Bayesian framework, the probability that the hypothesis is correct given the data can be estimated, whereas this is excluded by construction in the standard “frequentist” formulation, in which one can only calculate the probability that the null hypothesis is wrong, not that the alternative hypothesis is correct (see also for recent introductory discussions). Bayes’ theorem states that $$P(H_i|D)=\frac{P(D|H_i)\times P(H_i)}{\sum _jP(D|H_j)P(H_j)}.$$ (3) where the sum in the denominator runs over all the different conflicting hypotheses. In words, equation (3) states that the probability that hypothesis $`H_i`$ is correct given the data $`D`$ is proportional to the probability $`P(D|H_i)`$ of the data given the hypothesis $`H_i`$, multiplied by the prior belief $`P(H_i)`$ in the hypothesis $`H_i`$ and divided by the probability of the data. In the present context, we use only the two hypotheses $`H_1`$ and $`H_2`$, namely that our prediction of a trend reversal is correct or that it is wrong. For the data, we take the change of trend from bearish to bullish. We now want to estimate whether the fulfillment of our prediction was a “lucky” one. We quantify the general atmosphere of disbelief that Japan would recover by the value $`P(D|H_2)=5\%`$ for the probability that the Nikkei would change trend if our model is wrong. We assign the classical confidence level of $`P(D|H_1)=95\%`$ to the probability that the Nikkei would change trend if our model is correct. Let us consider a skeptical Bayesian with prior probability (or belief) $`P(H_1)=10^{-n}`$, $`n\geq 1`$, that our model is correct. From (3), we get $$P(H_1|D)=\frac{0.95\times 10^{-n}}{0.95\times 10^{-n}+0.05\times (1-10^{-n})}.$$ (4) For $`n=1`$, we see that her posterior belief in our model is amplified compared to her prior belief by a factor $`\simeq 7`$, corresponding to $`P(H_1|D)\simeq 70\%`$. For $`n=2`$, the amplification factor is $`\simeq 16`$ and hence $`P(H_1|D)\simeq 16\%`$. For large $`n`$ (a very skeptical Bayesian), her posterior belief in our model is amplified compared to her prior belief by a factor $`0.95/0.05=19`$. Alternatively, consider a neutral Bayesian with prior belief $`P(H_1)=1/2`$, i.e., one who a priori considers it equally likely that our model is correct or wrong. In this case, her prior belief is changed into the posterior belief $$P(H_1|D)=\frac{0.95\times \frac{1}{2}}{0.95\times \frac{1}{2}+0.05\times \frac{1}{2}}=95\%.$$ (5) This means that this single case is enough to convince the neutral Bayesian. We stress that this specific application of Bayes’ theorem deals only with a small part of the model, i.e., the trend reversal. It does not establish the significance of the quantitative description of 10 years of data (of which the last one was unknown at the time of the prediction) by the proposed model within a relative error of $`\pm 2\%`$. A question that remains is how far into the future the Japanese stock market will continue to follow equation (2). Obviously, the Nikkei index must “break away” at some point in the future, even if there are no changes in the overall behaviour and the underlying model thus remains valid. The reason is that the prediction was made using a third-order expansion. This means that, as the parameter $`\tau =t-t_c`$ in equation (2) continues to increase, this approximation becomes worse and worse, and a fourth-order term should be included. Presently, we are not ready to present the derivation of such an equation.
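The numbers quoted above follow from a two-line computation. A minimal sketch with the stated likelihoods $`P(D|H_1)=0.95`$ and $`P(D|H_2)=0.05`$:

```python
def posterior(prior, p_d_h1=0.95, p_d_h2=0.05):
    """Two-hypothesis Bayes update, Eq. (3)."""
    return p_d_h1 * prior / (p_d_h1 * prior + p_d_h2 * (1 - prior))

for prior in (0.1, 0.01, 0.5):
    post = posterior(prior)
    print(f"prior {prior:4.2f} -> posterior {post:.2f} "
          f"(amplification {post / prior:.1f})")
# prior 0.10 -> posterior 0.68 (amplification 6.8), the factor ~7 above
# prior 0.01 -> posterior 0.16 (amplification 16.1)
# prior 0.50 -> posterior 0.95 (amplification 1.9)
```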
Furthermore, we expect the numerical difficulties involved in fitting an even more complex equation than equation (2) to be considerable. Last, we would like to bring to the reader’s attention that it is not only bearish markets that can occasionally be described by the framework underlying equation (2); in fact, bullish markets exhibit such changes of regime even more frequently, see . Acknowledgement The authors wish to thank D. Stauffer for his encouragement both with respect to the original work of as well as the present re-evaluation.
The $`q`$-state Potts model, defined by the lattice Hamiltonian $$H=-J\sum _{\langle x,y\rangle }\delta _{s(x),s(y)},$$ where the spin variable $`s(x)`$ assumes $`q`$ different values (colours), continues to be of fundamental importance in the description of a large variety of critical phenomena, ranging from ferromagnetism to percolation and adsorbed monolayers. It was shown by Baxter that in two dimensions the ferromagnetic model undergoes a phase transition which is continuous for $`q\leq 4`$ and first order otherwise. The Coulomb gas and conformal field theory (CFT) later provided a complete description of the second-order phase transition line, which turned out to correspond to CFTs with central charge $`c\leq 1`$, the value $`c=1`$ corresponding to the end point $`q=4`$. More recently, integrable field theory has led to new results for the scaling limit of the off-critical model for $`q\leq 4`$. Concerning the first-order transition for $`q>4`$, several exact lattice results – internal energy, magnetisation, correlation length – are known, but progress through field-theoretic methods is generally prevented by the absence of a scaling limit. When $`q`$ approaches 4 from above, however, the correlation length at $`T_c`$ diverges and a continuum description in terms of a massive quantum field theory should be possible. The identification and solution of the quantum field theory describing the limit $`q\to 4^+`$ at $`T_c`$, as well as the determination of the universal critical quantities, are the subject of this note. The correlation length at criticality of the lattice model is known to behave as $$\xi \sim ae^{\pi ^2/\sqrt{q-4}}$$ (1) as $`q\to 4^+`$, where $`a`$ is proportional to the lattice spacing, so that the continuum limit corresponds to taking the limits $`a\to 0`$, $`q\to 4^+`$ in such a way that $`\xi `$ remains finite. The presence of an essential singularity rather than a power-law divergence in Eq. (1) is characteristic of a perturbing operator which is only marginally relevant (scaling dimension 2) – except that in the original description of the Potts model $`q`$ is a parameter which should be invariant along RG trajectories, and therefore such an interpretation needs to be treated with some care. In fact, we know that at $`q=4`$ there is such a marginal field $`\psi `$: it was shown by Nienhuis et al. to correspond to the fugacity for vacancies in the lattice model. When $`\psi <0`$ the transition is second order, with logarithmic modifications to scaling arising from the marginal irrelevance of $`\psi `$, but when $`\psi >0`$, the transition is first order with a correlation length at the transition which diverges as $`\psi \to 0^+`$ like $$\xi \sim ae^{1/b\psi }$$ (2) where $`b`$ is a constant. The main point is that the scaling limits in which $`a\to 0`$ with $`\xi `$ fixed are identical in these two cases. This may be seen, for example, from the general structure of the RG equations near $`q=4`$ and $`\psi =0`$. Based on the analysis of Nienhuis et al., Cardy, Nauenberg and Scalapino argued that these have, to lowest order, the general form $`d\psi /dl`$ $`=`$ $`b\psi ^2+a(q-4)+O(\psi ^3,(q-4)\psi ,(q-4)^2)`$ (3) $`dt/dl`$ $`=`$ $`(y_t+c_t\psi )t+O(\psi ^2t,(q-4)t,t^2)`$ (4) $`dh/dl`$ $`=`$ $`(y_h+c_h\psi )h+O(\psi ^2h,(q-4)h,th)`$ (5) where $`t`$ is the deviation from the critical temperature and $`h`$ is a symmetry-breaking field. $`a`$, $`b`$, $`c_t`$ and $`c_h`$ are all constants, certain combinations of which are universal.
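Before integrating these equations, it is instructive to see numerically how Eq. (3) produces the essential singularity of Eq. (1): the RG “time” needed for $`\psi `$ to grow from near zero to $`O(1)`$, and hence $`\mathrm{ln}\xi `$, scales as $`(q-4)^{-1/2}`$. In the sketch below, $`a=b=1`$ are arbitrary illustrative choices, not fitted values.

```python
import numpy as np
from scipy.integrate import quad

a, b = 1.0, 1.0

def l_tilde(q, psi0=0.0, psi1=1.0):
    """RG time for psi to flow from psi0 to psi1 under Eq. (3)."""
    integrand = lambda psi: 1.0 / (b * psi**2 + a * (q - 4.0))
    return quad(integrand, psi0, psi1)[0]

for q in (4.01, 4.001, 4.0001):
    lt = l_tilde(q)
    print(f"q = {q}: l_tilde = {lt:7.1f}, "
          f"l_tilde*sqrt(q-4) = {lt * np.sqrt(q - 4.0):.3f}")
# The product l_tilde*sqrt(q-4) tends to a constant (pi/2 for a = b = 1),
# so ln(xi) ~ l_tilde diverges as const/sqrt(q-4), as in Eq. (1).
```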
An important feature of (4,5) is that, to lowest nontrivial order, they do not involve $`q`$. This is because, as may be seen from the first equation, $`q-4`$ is effectively $`O(\psi ^2)`$. Integrating (3) up to a value $`\tilde{l}`$ such that $`\psi (\tilde{l})=O(1)`$ then gives results for the correlation length $`\xi \sim e^{\tilde{l}}`$ in agreement with (1,2) in the two cases $`q>4`$ and $`q=4`$, $`\psi >0`$. The various thermodynamic quantities are then found by integrating the other equations up to $`\tilde{l}\sim \mathrm{ln}\xi `$. To the order stated, the results will be identical in the two cases when expressed in terms of $`\xi `$. Therefore the scaling limits are identical. Another way of understanding this is through the mapping of the lattice Potts model to a height model and thence to a Coulomb gas or sine-Gordon theory. From this point of view, $`q`$ is merely a parameter identified with a certain function of the coupling constant conventionally called $`\beta `$. $`q=4`$ corresponds to $`\beta ^2=8\pi `$, at which point the operator corresponding to $`\psi `$ becomes marginal. Within the standard RG picture of the sine-Gordon model, both $`\beta `$ and $`\psi `$ have non-trivial marginal flows. However, the scaling limit, corresponding to the massive sine-Gordon theory at $`\beta ^2=8\pi `$, is unique, and therefore describes both the cases $`q\to 4^+`$ and $`\psi \to 0^+`$ at $`q=4`$. Having made this observation, we may take over the results of Ref. , in which it was pointed out that the scaling limit of the massive $`q=4`$ theory is integrable, and in which the scattering theory and form factors were determined. In our case, the construction of the scattering theory goes as follows. Along the first-order phase transition line, $`q`$ ordered ground states are degenerate with the disordered ground state. The field theory describing the scaling limit $`q\to 4^+`$ has 4 ordered vacuum states $`\mathrm{\Omega }_i`$, $`i=1,\mathrm{\dots },4`$. Invariance under colour permutations implies that, in the order-parameter space, they lie at the vertices of a tetrahedron having the disordered vacuum $`\mathrm{\Omega }_0`$ at its center. The elementary excitations of the scattering theory are stable kinks $`K_{0i}(\theta )`$, $`K_{i0}(\theta )`$ interpolating between the center of the tetrahedron and the $`i`$-th vertex, and vice versa. We denote by $`\theta `$ the rapidity variable parameterising the on-shell momenta as $`(p^0,p^1)=(m\mathrm{cosh}\theta ,m\mathrm{sinh}\theta )`$. The mass of the kinks $`m\sim \xi ^{-1}`$ measures the deviation from the conformal point $`q=4`$. The space-time trajectory of a kink on the plane draws a domain wall separating a coloured phase from the disordered one. The space of asymptotic states is made of multi-kink sequences in which adjacent vacuum indices belonging to different kinks have to coincide. For example, up to possible bound states, the lightest excitation interpolating between two ordered vacua is $`K_{i0}(\theta _1)K_{0j}(\theta _2)`$. The factorisation of multi-kink processes reduces the scattering problem to the determination of the two-kink amplitudes. Colour permutation symmetry allows only for the four elementary processes depicted in Fig. 1. The four amplitudes can be determined as a solution of the requirements of unitarity (crucially simplified by the absence of particle production), crossing and factorisation. The scattering amplitudes of Fig.
1 are given in and read $`A_0(\theta )=\frac{e^{i\gamma \theta }}{2}\frac{2i\pi -\theta }{i\pi -\theta }S_0(\theta ),`$ (6) $`A_1(\theta )=\frac{e^{i\gamma \theta }}{2}\frac{\theta }{i\pi -\theta }S_0(\theta ),`$ (7) $`B_0(\theta )=e^{i\gamma \theta }\frac{i\pi +\theta }{i\pi -\theta }S_0(\theta ),`$ (8) $`B_1(\theta )=e^{i\gamma \theta }S_0(\theta ),`$ (9) where $`\theta `$ is the rapidity difference of the two kinks, $`\gamma =\frac{1}{\pi }\mathrm{ln}2`$, and $$S_0(\theta )=\frac{\Gamma \left(\frac{1}{2}+\frac{\theta }{2i\pi }\right)\Gamma \left(-\frac{\theta }{2i\pi }\right)}{\Gamma \left(\frac{1}{2}-\frac{\theta }{2i\pi }\right)\Gamma \left(\frac{\theta }{2i\pi }\right)}=-\mathrm{exp}\left\{i\int _0^{\infty }\frac{dx}{x}\frac{e^{-\frac{x}{2}}}{\mathrm{cosh}\frac{x}{2}}\mathrm{sin}\frac{x\theta }{\pi }\right\}.$$ (10) The absence of poles in the physical strip Im$`\theta \in (0,\pi )`$ ensures that there are no bound states and that the four amplitudes above completely determine the scattering theory. This $`S`$-matrix shares evident analytic similarities with that of the $`SU(2)`$-invariant Thirring model. As a matter of fact, the latter is a realisation of the same perturbed CFT on a different particle basis (an $`SU(2)`$ doublet rather than our kinks). The possibility for a single perturbed CFT to be invariant under different symmetry groups and to describe different universality classes is discussed, for example, in Ref. . Making contact with the thermodynamics requires the computation of correlation functions. In our $`S`$-matrix framework these are obtained as spectral series summing over all multi-kink intermediate states. Neglecting terms of order $`e^{-4m|x|}`$ in this large-distance expansion, we approximate the (connected) two-point correlator of a scalar operator $`\mathrm{\Phi }(x)`$ as $$\langle \mathrm{\Omega }_0|\mathrm{\Phi }(x)\mathrm{\Phi }(0)|\mathrm{\Omega }_0\rangle \simeq \sum _{i=1}^4\int _{\theta _1>\theta _2}\frac{d\theta _1}{2\pi }\frac{d\theta _2}{2\pi }|F_{0i}^\mathrm{\Phi }(\theta _1-\theta _2)|^2e^{-|x|E_2},$$ (11) in the disordered phase, and $$\langle \mathrm{\Omega }_i|\mathrm{\Phi }(x)\mathrm{\Phi }(0)|\mathrm{\Omega }_i\rangle \simeq \int _{\theta _1>\theta _2}\frac{d\theta _1}{2\pi }\frac{d\theta _2}{2\pi }|F_{i0}^\mathrm{\Phi }(\theta _1-\theta _2)|^2e^{-|x|E_2},$$ (12) in the $`i`$-th ordered phase. Here $`E_2=m(\mathrm{cosh}\theta _1+\mathrm{cosh}\theta _2)`$ is the energy of the two-kink asymptotic state and we introduced the two-kink form factors $`F_{0i}^\mathrm{\Phi }(\theta _1-\theta _2)=\langle \mathrm{\Omega }_0|\mathrm{\Phi }(0)|K_{0i}(\theta _1)K_{i0}(\theta _2)\rangle ,`$ (13) $`F_{i0}^\mathrm{\Phi }(\theta _1-\theta _2)=\langle \mathrm{\Omega }_i|\mathrm{\Phi }(0)|K_{i0}(\theta _1)K_{0i}(\theta _2)\rangle .`$ (14) The operators of interest for us are the spin $`\sigma _j(x)=\delta _{s(x),j}-1/q`$, and the energy $`\epsilon (x)=\sum _y\delta _{s(x),s(y)}`$, whose scaling dimensions around the $`q=4`$ fixed point are $`X_\sigma =1/8`$ and $`X_\epsilon =1/2`$. Some consequences for the physics of the coexisting phases follow immediately from the structure of the scattering theory. The ‘true’ correlation length $`\xi `$ is determined by the large-distance decay of the spin-spin correlator as $`\langle \sigma _j(x)\sigma _j(0)\rangle \sim e^{-|x|/\xi }`$. Then it follows from (11,12) that $$\xi _o=\xi _d=1/2m$$ (15) (here and below the subscript $`o`$ ($`d`$) denotes quantities computed in the ordered (disordered) phase).
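As a quick sanity check on Eq. (10): real analyticity implies $`|S_0(\theta )|=1`$ for real rapidities (unitarity), with $`S_0(0)=-1`$. The sketch below evaluates the Γ-function form numerically.

```python
import mpmath as mp

def S0(theta):
    """Gamma-function form of S_0(theta), Eq. (10)."""
    u = theta / (2j * mp.pi)
    return (mp.gamma(0.5 + u) * mp.gamma(-u)) / \
           (mp.gamma(0.5 - u) * mp.gamma(u))

for theta in (1e-8, 0.5, 1.0, 3.0):
    s = S0(theta)
    print(f"theta = {theta}: |S0| = {float(abs(s)):.6f}, "
          f"Re S0 = {float(s.real):+.4f}")
# |S0| = 1 for all real theta, and S0 -> -1 as theta -> 0.
```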
Numerical simulations and large-$`q`$ expansions suggest that the phase independence of $`\xi `$ holds true for all $`q>4`$. Since the interfacial tension between two coexisting phases is given by the total mass of the lightest excitation interpolating between them, we also have $`\sigma _{od}=m`$ and $`\sigma _{od}=\sigma _{oo}/2`$. The latter result is known to hold for all $`q>4`$. The relations of the other interesting thermodynamic quantities (spontaneous magnetisation $`M`$, latent heat $`L`$, susceptibility $`\chi `$, specific heat $`C`$, second-moment correlation length $`\xi _{2nd}`$) to the connected correlators of $`\sigma _j`$ and $`\epsilon `$, and their behaviour as $`q\to 4^+`$, are $`M=\langle \mathrm{\Omega }_j|\sigma _j|\mathrm{\Omega }_j\rangle \simeq B\xi ^{-1/8},`$ (16) $`L=\langle \epsilon \rangle _d-\langle \epsilon \rangle _o\simeq \mathcal{L}\xi ^{-1/2},`$ (17) $`\chi =\int d^2x\langle \sigma _j(x)\sigma _j(0)\rangle \simeq \mathrm{\Gamma }_{o,d}\xi ^{7/4},`$ (18) $`C=\int d^2x\langle \epsilon (x)\epsilon (0)\rangle \simeq A_{o,d}\xi ,`$ $`\xi _{2nd}^2=\frac{1}{4\chi }\int d^2x|x|^2\langle \sigma _j(x)\sigma _j(0)\rangle \simeq (f_{o,d}\xi )^2.`$ (19) For the ordered case, the two-point correlators of $`\sigma _j`$ entering $`\chi `$ and $`\xi _{2nd}`$ are computed on the vacuum $`|\mathrm{\Omega }_j\rangle `$. Since $`\epsilon (x)`$ is odd under the duality transformation exchanging the low- and high-temperature phases, we have $$\langle \epsilon \rangle _d=-\langle \epsilon \rangle _o,\qquad A_d=A_o.$$ (20) The critical amplitudes are normalisation-dependent, but can be combined into a series of universal ratios characterising the scaling limit. We can evaluate the critical amplitudes by integrating the two-particle approximations (11,12) of the correlators. What we need to know are the two-kink form factors of the operators $`\epsilon `$ and $`\sigma _j`$. Once again the result is contained in Ref. and reads $`F_{i0,0i}^\epsilon (\theta )=i\mathcal{L}\frac{e^{\pm \frac{\gamma }{2}(\pi +i\theta )}}{\theta -i\pi }F_0(\theta ),`$ (21) $`F_{i0,0i}^{\sigma _j}(\theta )=M\frac{4\delta _{ij}-1}{6\mathrm{\Upsilon }_+(i\pi )}\frac{e^{\pm \frac{\gamma }{2}(\pi +i\theta )}}{\mathrm{cosh}\frac{\theta }{2}}\mathrm{\Upsilon }_\pm (\theta )F_0(\theta ),`$ (22) with $`\mathrm{\Upsilon }_-(\theta )=\mathrm{\Upsilon }_+(\theta +2i\pi )`$, $$\mathrm{\Upsilon }_+(\theta )=\mathrm{exp}\left\{2\int _0^{\infty }\frac{dx}{x}\frac{e^{-x}}{\mathrm{sinh}2x}\mathrm{sin}^2\left[(2i\pi -\theta )\frac{x}{2\pi }\right]\right\},$$ (23) $$F_0(\theta )=-i\mathrm{sinh}\frac{\theta }{2}\mathrm{exp}\left\{\int _0^{\infty }\frac{dx}{x}\frac{e^{-\frac{x}{2}}}{\mathrm{cosh}\frac{x}{2}}\frac{\mathrm{sin}^2\left[(i\pi -\theta )\frac{x}{2\pi }\right]}{\mathrm{sinh}x}\right\}.$$ (24) The results we obtain for the universal amplitude ratios are given in Table 1 and compared with those following from the combination of the exact and series lattice results for the amplitudes.

| | Field theory | Lattice |
| --- | --- | --- |
| $`f_d`$ | $`0.6744`$ | $`0.673(8)`$ |
| $`f_o/f_d`$ | $`0.9340`$ | $`0.935(5)`$ |
| $`\mathrm{\Gamma }_d/\mathrm{\Gamma }_o`$ | $`1.1406`$ | $`1.19(5)`$ |
| $`A_d/\mathcal{L}^2`$ | $`0.1047`$ | $`0.105(3)`$ |
| $`\mathrm{\Gamma }_d/B^2`$ | $`0.06607`$ | $`0.0656(15)`$ |

Table 1. Universal amplitude ratios for the $`q`$-state Potts model at $`T=T_c`$, $`q\to 4^+`$. The field-theoretical results are obtained within the two-particle approximation. The accuracy exhibited by the two-particle approximation does not come as a surprise, since it is a known common feature of this kind of computation within integrable field theory.
In the present case the accuracy is enhanced by the low scaling dimensions of the spin and energy operators, which lead to mild singularities of their correlators and hence to a small contribution of short distances to the integrals. We estimate that the errors on our values for the amplitude ratios do not exceed order 0.1%. The scaling limit we have discussed so far corresponds to $`q\to 4^+`$. At $`q=5`$, however, the correlation length is still some 2500 times the lattice spacing, and this suggests that our results for $`q\to 4^+`$ could still provide the basis for an approximate description. For a generic value of $`q>4`$, the model has $`q+1`$ degenerate ground states at the transition point and the elementary excitations are $`2q`$ kinks going from the disordered vacuum to the $`q`$ ordered vacua, and vice versa. If the correlation length is sufficiently large, an approximate scaling should still hold and it then makes sense to keep for the physical quantities the parameterisations (16)-(19), namely a power of the correlation length times an amplitude. It is easy to see, however, that in the present case we have to allow for a $`q`$-dependence of the amplitudes. Consider in fact the correlator $$G_{\alpha j}(x)=\langle \mathrm{\Omega }_\alpha |\sigma _j(x)\sigma _j(0)|\mathrm{\Omega }_\alpha \rangle ,\alpha =0,i;i,j=1,\mathrm{\dots },q.$$ (25) Its two-particle approximation in the disordered phase is $$G_{0j}(x)\simeq \sum _{i=1}^q|F_{0i}^{\sigma _j}|^2e^{-E_2|x|}=\frac{q}{q-1}|F_{0j}^{\sigma _j}|^2e^{-E_2|x|},$$ (26) where integration over momenta is understood. The last equality follows from colour symmetry and $`\sum _j\sigma _j=0`$, which imply $$F_{0i,i0}^{\sigma _j}=\frac{q\delta _{ij}-1}{q-1}F_{0j,j0}^{\sigma _j}.$$ (27) When integrating the correlator to obtain the amplitude of, say, the susceptibility in the disordered phase, the form factors computed at $`q=4`$ should give a good approximation as long as $`\xi `$ is large. The explicit factor $`q/(q-1)`$ dictated by the number of intermediate states and by symmetry, however, has to be taken into account, and is expected to determine the main deviation from the $`q\to 4^+`$ value of the amplitude. Following the same reasoning, the susceptibility amplitude in the ordered phase should be essentially constant in $`q`$, since there is only one intermediate state. More generally, we are led to expect that the ratios listed in Table 2 are approximately constant in $`q`$ for $`\xi `$ large enough that the continuum description is accurate. Their values determined from the results of the large-$`q`$ expansion, reported in the Table, seem to confirm our picture. Our field-theory results for $`R_1`$ and $`R_2`$ as $`q\to 4^+`$ are 0.855 and 0.0579, respectively.

| q | $`4^+`$ | 5 | 10 |
| --- | --- | --- | --- |
| $`\xi `$ (lattice units) | | 2512.24 | 10.559 |
| $`\xi _{2nd}^{(d)}/\xi `$ | $`0.673(8)`$ | $`0.671(3)`$ | $`0.6587(1)`$ |
| $`\xi _{2nd}^{(o)}/\xi _{2nd}^{(d)}`$ | $`0.935(5)`$ | $`0.934(7)`$ | $`0.9579(2)`$ |
| $`R_1=\frac{q-1}{q}\chi _d/\chi _o`$ | $`0.89(4)`$ | $`0.810(5)`$ | $`0.80399(1)`$ |
| $`R_2=\chi _o/(M\xi )^2`$ | $`0.0550(6)`$ | $`0.0589(2)`$ | $`0.05784(1)`$ |

Table 2. Values obtained by combining the exact results of Refs. and the large-$`q`$ expansions of Ref. . Let us conclude this note by considering the ‘transverse’ susceptibility $`\chi _T`$, obtained by integrating $`G_{ij}(x)`$ ($`i\neq j`$) rather than $`G_{jj}(x)`$.
From (27), in the two-particle approximation $$\frac{\chi _T}{\chi _o}\simeq \frac{1}{(q-1)^2}.$$ (28) This result is basically a consequence of the nature of the elementary excitations, and is expected to hold as a good approximation for all $`q>4`$ at $`T_c`$, as long as $`\xi \gg a`$. We are not aware of lattice results on $`\chi _T`$ for comparison. In summary, we have shown that the limit $`q\to 4^+`$ in the Potts model defines an integrable massive field theory, whose $`S`$-matrix and form factors may be computed exactly. The results for integrated correlation functions are in excellent agreement with lattice-based numerical results. This shows how methods of continuum field theory are not restricted to the description of second-order transitions only.
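As a closing aside, the colour-counting factors entering Eqs. (26)-(28) can be checked symbolically; a minimal sketch:

```python
import sympy as sp

# Symbolic check of the colour-counting factors in Eqs. (26)-(28).
# From Eq. (27), F_{0i} = (q*delta_ij - 1)/(q - 1) * F_{0j}.
q = sp.symbols('q', positive=True)

# Sum over the q intermediate channels in Eq. (26): one channel with
# i = j (coefficient 1) and q-1 channels with coefficient -1/(q-1).
chi_d_factor = 1 + (q - 1) * (-1 / (q - 1))**2
print(sp.simplify(chi_d_factor))        # q/(q - 1), as in Eq. (26)

# Transverse-to-longitudinal ratio of Eq. (28): a single off-diagonal
# channel with coefficient -1/(q-1), squared.
chi_T_ratio = (-1 / (q - 1))**2
print(sp.simplify(chi_T_ratio))         # (q - 1)**(-2), as in Eq. (28)
```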
# Active-Sterile neutrino oscillations in the early Universe and the atmospheric neutrino anomaly
## 1 Severe BBN constraints in the two-neutrino mixing scenario
The atmospheric neutrino data are nicely explained in terms of $`\nu _\mu \to \nu _\alpha `$ oscillations. Even though recent results favour the solution $`\alpha =\tau `$, the possibility that $`\nu _\alpha `$ is a sterile neutrino is still not completely excluded by Earth-based experiments. In any case, it is an interesting issue to know whether the BBN bound is able to rule out the $`\nu _\mu \to \nu _s`$ solution. Neutrino oscillations are potentially able to thermalize the sterile neutrino, which would then contribute as a fourth neutrino species to the expansion rate, modifying the standard BBN results. The recent indications from quasar absorbers for low values of the deuterium abundance ($`D/H\simeq 3.4\times 10^{-5}`$) suggest a BBN bound $`\mathrm{\Delta }N_\nu ^{\mathrm{eff}}<0.9(0.6)`$ at 99.7% (95.4%) c.l. . In this case the solution $`\nu _\mu \to \nu _s`$ to the atmospheric neutrino data is definitely ruled out, as shown in the left panel of figure 1. In this figure the constraints in the $`\mathrm{sin}^22\theta _0`$–$`|\delta m^2|`$ plane have been calculated analytically in the effective kinetic approach (static approximation) developed by Foot and Volkas . They are in good agreement with the numerical calculations . Moreover, they are valid under the assumption that the effective total lepton number $`L\equiv 2L_{\nu _\mu }+L_{\nu _\tau }+L_{\nu _e}-(1/2)B_n`$ starts and remains negligible. The quantities $`Q_X\equiv (n_X-n_{\overline{X}})/n_\gamma `$ ($`Q=L`$ or $`B`$) are the asymmetries of the particle species $`X`$. For positive $`\delta m^2`$ this picture is correct, but for negative $`\delta m^2`$ (we define $`\delta m^2\equiv m_2^2-m_1^2`$, with $`m_1`$ ($`m_2`$) the eigenvalue of the mass eigenstate coinciding with the active (sterile) neutrino interaction eigenstate for zero mixing), even though one starts from a situation in which $`L`$ is initially negligible ($`L\sim 10^{-6}`$), at a critical temperature $`T_c\simeq 18\,\mathrm{MeV}\,|\delta m^2|^{1/6}`$ this can undergo a phase of rapid growth, first exponential and afterwards power-law . The presence of a large lepton number suppresses the sterile neutrino production, and thus it is legitimate to ask whether taking into account the generation of lepton number can relax the constraints. The answer is negative. First, the growth occurs only for very small mixing angles, so that in any case the BBN bound for the atmospheric neutrino solution cannot be evaded; moreover, even for small mixing angles, it occurs too late, when the sterile neutrinos have already mostly been produced. Therefore the constraints cannot be significantly relaxed, and the account of lepton-number generation is ineffective in a simple two-neutrino mixing scenario.
## 2 Three-neutrino mixing mechanism to evade the BBN bound
Assuming that the sterile neutrino is also slightly mixed with a heavier tau neutrino, a lepton number $`L`$ can be generated during the $`\nu _\tau \to \nu _s`$ oscillations and can afterwards suppress the sterile neutrino production during the $`\nu _\mu \to \nu _s`$ oscillations . Neutrino oscillations are maximally active at $`T_c\sim |\delta m^2|^{\frac{1}{6}}`$. In this case we have two different $`T_c`$, associated with the two different $`\delta m^2`$, which we indicate with $`\delta m_\mu ^2`$ and $`\delta m_\tau ^2`$.
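The critical temperature quoted above is a simple power law; the sketch below evaluates it for a few mass splittings, assuming (as is conventional, though not stated explicitly here) that $`|\delta m^2|`$ is expressed in eV<sup>2</sup>.

```python
def T_c_MeV(dm2_eV2):
    """Critical temperature T_c ~ 18 MeV |dm^2|^(1/6), with dm^2 in eV^2."""
    return 18.0 * abs(dm2_eV2) ** (1.0 / 6.0)

for dm2 in (1e-3, 1.0, 100.0):
    print(f"|dm^2| = {dm2:g} eV^2 -> T_c ~ {T_c_MeV(dm2):.1f} MeV")
# 1e-3 eV^2 -> ~5.7 MeV;  1 eV^2 -> 18 MeV;  100 eV^2 -> ~38.8 MeV
```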
Therefore, to have a generation of lepton number before the sterile neutrino production, one has to impose that $`\delta m_\tau ^2>\delta m_\mu ^2`$. Moreover, it is clear that one also has to require that no significant sterile neutrino production occurs already during the $`\nu _\tau \to \nu _s`$ oscillations themselves. Therefore the constraints discussed previously in the two-neutrino mixing scenario must be imposed on the mixing parameters of the tau neutrino (dotted line in the right panel of figure 1). These conditions are still not sufficient, and things are made more complicated by the fact that the two different neutrino oscillations are mutually dependent. While the lepton number is produced by $`\nu _\tau \to \nu _s`$, it also has the effect of raising the temperature $`T_c`$ for the $`\nu _\mu \to \nu _s`$ oscillations, which therefore start earlier and contribute to the rate of growth of the lepton number, but with a destructive contribution. If this counter-effect is dominant, the lepton number first stops growing and is then completely destroyed. To avoid this situation, the condition $`\delta m_\tau ^2>\delta m_\mu ^2`$ must be satisfied by a large margin. However, structure formation arguments do not allow a tau neutrino mass much higher than a few eV, and thus the lower limit on $`\delta m_\tau ^2`$ must not be much larger than about $`10\mathrm{eV}^2`$. This can be determined analytically in the effective kinetic approach. The rate of change of the total lepton number $`L`$ is simply given by $`dL/dt=2dL_{\nu _\mu }/dt+dL_{\nu _\tau }/dt`$. The first term is the contribution from $`\nu _\mu \to \nu _s`$ and always destroys $`|L|`$, while the second is the contribution from $`\nu _\tau \to \nu _s`$, which at the critical temperature drives the growth of $`|L|`$. It is possible to show that $`|dL_{\nu _\alpha }/dt|=k_\alpha \mathrm{sin}^2\theta _\alpha |\delta m_\alpha ^2|`$, where $`k_\alpha `$ is a function of time that does not depend on the mixing parameters. The numerical analysis shows that if the lepton number stops growing even once, its fate is to be destroyed; otherwise it can grow up to a final value able to suppress the sterile neutrino production. In this way, the condition that one has to impose for the lepton number to grow is simply that $`d|L|/dt>0`$ at all times. This is equivalent to imposing the condition $`|\delta m_\tau ^2|>\sqrt{C}|\delta m_\mu ^2|/\sqrt{s_\tau ^2}`$, where $`C`$ is the maximum of the ratio $`k_\mu /k_\tau `$ during the evolution of the lepton number. In the right panel of figure 1, the dot-dashed line corresponds to $`|\delta m_\mu ^2|=10^{-3}\mathrm{eV}^2`$ and $`C=28`$, the value that gives the best fit to the numerical result obtained using the static approximation (dashed line). The thick solid line is the result of a numerical calculation in which the full quantum kinetic equations have been used .
## 3 Chaotic generation of lepton domains?
The three-neutrino mixing mechanism is independent of the final sign of the lepton number. If, however, the final sign is sensitive to small fluctuations, one can imagine that different points of the early Universe develop a different sign and that a chaotic generation of lepton domains occurs . In this case one should calculate the additional sterile neutrino production arising from those neutrinos that, crossing the boundaries of lepton domains, encounter a new resonance. This additional production could spoil the evasion of the BBN bound . Is a chaotic generation of lepton domains possible?
A definitive answer to this difficult problem can be obtained only by performing the full quantum kinetic calculations including momentum dependence . The results show that the sign is fully determined for a large choice of mixing parameters, and only in a restricted region can the numerical analysis not be conclusive at present. This region is indicated by the thin solid line in the right panel of the figure. It is evident that, even assuming that a chaotic generation of lepton domains occurs in this region, leading to a sterile neutrino overproduction, the allowed region for the three neutrino mixing mechanism still includes values of $`\delta m^2\simeq 100\mathrm{eV}^2`$, corresponding to a tau neutrino mass of a few eV. Thus the conclusion is that cosmology cannot exclude the solution $`\nu _\mu \rightarrow \nu _s`$ to the atmospheric neutrino anomaly. ## Acknowledgments I wish to thank Robert Foot, Paolo Lipari, Maurizio Lusignoli and Ray Volkas for the collaboration and the encouragement during the period of my thesis; A.D. Dolgov, K. Enqvist, K. Jedamzik, K. Kainulainen, S. Pastor and A. Sorri for nice discussions during the meeting; and the organizers for a great conference.
# Phenotypical Behavior and Evolutionary Slavery ## 1 Introduction A problem that has always puzzled evolutionary game theorists is the amount of observed cooperation among individuals of the same species, or even belonging to different species, even when it would be harmful for each individual to cooperate. Animals warn their companions about the approach of a predator, shouting to their comrades and thereby attracting attention to themselves, increasing the chances that the predator will notice them and choose them as its next meal. Animals seem to help their fellows more often than would be expected in these situations, and two solutions have been offered to this problem. The difficulty is that, even if cooperation is the best solution for all the individuals, it is not stable in a Prisoner's Dilemma situation. Any mutant that decided not to cooperate would see its success increase: the other animals still help it, but it does not run the risk of attracting the predators to itself by shouting when it could. However, the structure that is really playing these games and ”learning” a way to improve its strategy is not the individuals, who are mortal and, no matter how successful they become, will disappear. It is their genetic code that is continuously changing, adapting to new circumstances or simply because it has found a better way to do things. This idea was first proposed by Richard Dawkins . Hamilton proposed that, for a rare gene to survive, it would make sense to cooperate with our siblings, as they would have a 50% chance of carrying that same gene. The percentage falls quickly, down to a 1/8 chance between first cousins, so this cooperation is something that could happen inside families, among close relatives. Our own survival would still be more important than that of our relatives, but we could warn them about danger, as long as that would make their survival, along with that of our own genes, much more likely. In extreme cases, like an ant colony where all ants share exactly the same genetic code, the advantage to gene survival gained from cooperation would be even stronger, and the behavior of soldier ants, who never reproduce and die without hesitation to protect their colony, would make even more sense. Axelrod suggested the second solution to the dilemma in the form of a strategy. He created several programs that would compete inside a computer, in an environment where the Prisoner's Dilemma would arise, and gave each of them a strategy with which to try to defeat the other competing programs. He found that the program that cooperated with programs that had previously cooperated with it, and defected otherwise, would systematically win over the non-cooperating strategies. The reason is quite simple. When facing a non-cooperative algorithm, this algorithm would not risk itself and would not cooperate, getting the best return available. However, at least when facing copies of itself, both would cooperate, allowing for the best overall performance of the individuals. Therefore, the theoretical possibilities for cooperation, so far, seemed to be limited true cooperation among members of a family, and the possible adoption of a strategy that makes us cooperative only when the individual we are interacting with has not failed to cooperate with us in the past. In this paper, I propose a third possible source of cooperation among individuals of a species.
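A minimal round-robin in the spirit of Axelrod's computer tournament, described above, makes this mechanism concrete. The sketch below is only illustrative: the payoff values (5, 3, 1, 0), the 200-round match length and the two competing strategies are my own choices, not taken from Axelrod's actual tournament.

```python
# Toy Axelrod-style tournament: tit-for-tat versus always-defect.
# Payoffs use the standard illustrative values T, R, P, S = 5, 3, 1, 0.
PAYOFF = {('C', 'C'): 3, ('C', 'D'): 0, ('D', 'C'): 5, ('D', 'D'): 1}

def tit_for_tat(opponent_history):
    # cooperate first, then copy the opponent's previous move
    return opponent_history[-1] if opponent_history else 'C'

def always_defect(opponent_history):
    return 'D'

def match(s1, s2, rounds=200):
    h1, h2, score1 = [], [], 0
    for _ in range(rounds):
        m1, m2 = s1(h2), s2(h1)     # each strategy sees the other's history
        score1 += PAYOFF[(m1, m2)]
        h1.append(m1); h2.append(m2)
    return score1

players = [tit_for_tat, always_defect]
totals = {p.__name__: sum(match(p, q) for q in players) for p in players}
print(totals)   # {'tit_for_tat': 799, 'always_defect': 404}
```

Tit-for-tat loses slightly in the direct encounter (199 against 204) but wins the tournament, almost entirely because of the mutual cooperation it sustains against a copy of itself, which is exactly the point made in the paragraph above.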
The key to such cooperation lies in the way genes influence the actual behavior of an individual. So far, it has been assumed in the literature that having a specific genetic code forces the adoption of a specific strategy . But it doesn't have to be that way. A mutant gene could come with instructions such that a percentage of the individuals possessing it behave in one way, while the others behave differently. There might be some sort of trigger determining which individuals will follow which strategy, like the position of the individual in some hierarchy, or anything as trivial as having been born on a cold or a warm day. What makes the mutation proposed here a possible winner is not how this decision is taken, although for specific problems there might be best answers, but the fact that some individuals with exactly the same code will, at some point, decide to act one way or the other. This possibility of different behaviors associated with the same gene, that is, the appearance of phenotypical properties not dictated by the gene, will, as we will see, open the possibility of individuals who exist merely to serve their peers, while the others have a more comfortable life; hence the use of the name evolutionary slavery in the title of this paper. ## 2 The Prisoner's Dilemma and its Strategies The Prisoner's Dilemma arises in a very simple game between two players. In the example above, where an individual must decide whether or not to warn its relative about the arrival of a predator, possibly calling the attention of the hunter to itself, let's say that, on average, it has a 10% chance of being killed if it and its companion both cooperate, 50% if neither of them does, and 90% if it cooperates when its partner does not, while the non-cooperative individual always stays safe when its partner cooperates. This can be represented in matrix form as below, with rows corresponding to the individual's own choice (first row: do not cooperate; second row: cooperate), columns to the partner's choice in the same order, and entries giving the individual's probability of death: $$\left(\begin{array}{cc}0.5& 0\\ 0.9& 0.1\end{array}\right)$$ (1) There are some known good strategies, in the sense that your decision doesn't have to be to never cooperate. As mentioned, those strategies are to cooperate only with your very close relatives, or to cooperate with everybody except those who have failed to cooperate with you before. Both are good answers to the Dilemma, and they can also be used together, so that you would always cooperate with a very close relative, regardless of their past actions, and cooperate with your friends as long as they cooperate with you. It is quite possible that we won't find other fixed strategies that are not variations on these two themes and that still work well in an evolutionary dynamics, not leading to extinction. However, a small change in the way we look at the problem immediately opens a new possibility. So far, we have always considered that when an individual has a specific genetic code, the strategy he will choose is already completely determined by that code, as if it were set in stone. In other words, we are imposing that the strategy one adopts has a purely genetic component, leaving nothing to the phenotype. Biologists know very well that the same genes can lead to different external manifestations, different phenotypes. Therefore, there is really no reason to assume that to a specific gene there should correspond a specific strategy.
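A one-shot check of the dilemma structure of Eq. (1) can be written in a few lines; this is only a sketch, with the move labels 'C' (warn) and 'D' (stay silent) introduced here for convenience.

```python
# Death probabilities from Eq. (1): my move first, partner's move second.
death = {('D', 'D'): 0.5, ('D', 'C'): 0.0,
         ('C', 'D'): 0.9, ('C', 'C'): 0.1}

for partner in ('C', 'D'):
    best = min(('C', 'D'), key=lambda me: death[(me, partner)])
    print(partner, best)   # best reply is 'D' against either partner move

# Defection dominates in a single encounter, yet mutual cooperation
# (0.1 each) is far better for both than mutual defection (0.5 each):
# precisely the Prisoner's Dilemma tension described in the text.
```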
As we will see, a mutant gene that allows different strategies to be associated with it could also flourish in a world where everybody else never cooperates, or even in situations where everybody else is playing tit-for-tat. Now imagine that our species is well adapted to its environment, meaning that it uses a strategy as good as everyone else's. It has means to identify its close relatives reasonably well, and the genes make sure that cooperation happens in that case. Tit-for-tat may or may not be used by everybody; if it is, just assume our species uses it as well. Now, a mutation occurs, and the new gene determines that part of the individuals who carry it behave in a certain way, while the rest of the population behaves according to different rules. We are not concerned at this point with what triggers the individual decision about which behavior manifests in each case; we will return to this point later. The two possible phenotypes of that gene differ in the strategy they assume when facing someone who will probably have the same genetic code (a close relative, something the species already knows how to recognize). When meeting strangers, they still act the same way, using the well-tested strategy their species developed. In the internal relations that lead to the Prisoner's Dilemma, part of the population, which I will call the leaders, for reasons that will become clear very soon, never cooperates, while the other part, the servants, cooperate every time. This way, a leader always gets the maximum benefit when interacting with a servant, while the servant has to bear the burden of the worst result in return. The only advantage for a servant is the fact that the other servants will always cooperate with him. For the gene, if the proportion of leaders and servants is right, the advantage obtained by the leaders is far stronger than the problems caused by their non-cooperative attitude. Against a population that never cooperates, this strategy may even improve the fitness of the servants, for the right parameters, as what they lose from the non-cooperation of the leaders may be compensated by the cooperation of the other servants. The reason is quite easy to see. Suppose that half of the population carrying the mutant gene consists of leaders and the other half of servants. The interaction with individuals who do not have the gene is not altered, and there everybody always gets a 0.5 on average, just as the non-mutants get every time. Among themselves, the servants get a 0.1 half of the time and a 0.9 the other half, for the same 0.5 average. If the penalty for cooperating with a non-cooperator were just a little smaller, or if the result obtained by two non-cooperating individuals were a little worse, the servant population would, on average, have an evolutionary advantage when compared with the non-mutant gene. In any case, it is easy to see that the non-cooperative population has no protection at all against an invasion by this genetic strategy. For a percentage $`ϵ`$ of the total population having the mutant gene, the average result for the non-mutants doesn't change from $`0.5`$. The mutants, however, get a $$0.5-\frac{ϵ}{8}$$ (2) result, which is always better than what the non-mutant population gets, for every value of $`ϵ`$ (recall that these entries are probabilities of death, so a lower value is better). Let's see how this works in the general case of an invasion by a fraction $`ϵ`$ of mutants, where there is a probability $`p`$ that a specific mutant is a leader (and, of course, $`1-p`$ that he is a servant).
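A quick numerical check of the invasion arithmetic above (a sketch; the 50/50 leader-servant split and the matrix entries of Eq. (1) are as in the text):

```python
# Average death probability of a mutant-gene carrier when a fraction eps of
# the population carries the gene, with leader probability p (here p = 1/2).
def mutant_average(eps, p=0.5):
    leader  = (1-eps)*0.5 + eps*(p*0.5 + (1-p)*0.0)   # leaders never cooperate
    servant = (1-eps)*0.5 + eps*(p*0.9 + (1-p)*0.1)   # servants always cooperate
    return p*leader + (1-p)*servant

for eps in (0.1, 0.5, 1.0):
    print(eps, mutant_average(eps), 0.5 - eps/8)   # the two columns agree
```

The closed form $`0.5-ϵ/8`$ drops linearly below the non-mutant value of 0.5, confirming that the mutant gene invades for any $`ϵ>0`$.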
The Prisoner's Dilemma takes here the general form $$\left(\begin{array}{cc}a& c\\ d& b\end{array}\right)$$ (3) where $`c>b>a>d`$. That is, if you don't cooperate and your opponent does, you get $`c`$ and he gets $`d`$; if both cooperate, both get $`b`$; and if neither does, both get $`a`$. In this case, a non-cooperative population always gets $`a`$ as a result of its actions, no matter whether its members interact with mutants or with non-mutants. The case of the mutants, however, has to be divided into two parts: the gain obtained by the leaders, $`G_l`$, and the gain of the servants, $`G_s`$. These are given by $`G_l=\left(1-ϵ\right)a+ϵ\left[pa+\left(1-p\right)c\right]`$ and $`G_s=\left(1-ϵ\right)a+ϵ\left[pd+\left(1-p\right)b\right]`$. The average gain $`G`$ for the gene is then given by $$G=pG_l+\left(1-p\right)G_s=a\left(1-ϵ\right)+ϵS$$ where $$S=\left[p^2\left(a+b-c-d\right)+p\left(c+d-2b\right)+b\right]$$ If we have no leaders ($`p=0`$), we have $`G=a(1-ϵ)+bϵ`$. That is clearly better than the result for non-mutants, $`a`$, reflecting the fact that it is always good for a gene to create cooperation within the close family, or, in other words, with itself. If that kind of cooperation were already the rule, our newly created mutant population would also be identified as family by its close relatives, and the result would change to $`G=b`$: no improvement at all, but also no worsening, as is obvious, since everybody is cooperating with everybody, regardless of their genetic codes. When we have only leaders ($`p=1`$), we obtain $`G=a`$, as nobody is cooperating. In a population where nobody cooperates, that would again mean no difference at all. If cooperation with your family is already the common strategy, we get $`G=b(1-ϵ)+aϵ`$, which is actually worse than the result for the non-mutants. If you have only leaders and no servants, the strategy is useless, as expected. For the appearance of some leaders to bring some advantage over the only-servants case, we must have $`G`$ increasing as $`p`$ moves away from $`0`$, or, in other words, $$\left(\frac{\partial G}{\partial p}\right)_{p=0}>0$$ which means $`c+d>2b`$. Therefore, we see that the strategy of acquiring leaders only pays off when the average of $`c`$ and $`d`$ is higher than the result one gets from the cooperative strategy, which makes sense. When we have a Prisoner's Dilemma whose parameters obey the relation $`c+d>2b`$, the appearance of a gene with two phenotypes, servants and leaders, is possible. A population of non-cooperative individuals has no barrier against such an invasion. Let's now turn to the problem of invading a population where cooperation already happens. We have seen that, in this case, an invasion by a genetic code having only leaders cannot happen, as the result for such genes is actually worse than for those genes that do cooperate within their families. In this case, it is a good idea to improve the mutant strategy. If a leader can recognize who in his close family is also a leader (using any kind of cues, like smell or body posture; the details of this recognition are not important to the discussion) and who is a servant, he might decide to withhold cooperation only from the servants and cooperate with the other leaders inside his own family. As that changes his results from the interaction with the other leaders from $`a`$ to $`b`$, this is actually an improved version that could easily replace the strategy we were discussing so far in all the situations where that strategy was a winner.
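The algebra above is short enough to verify symbolically; the sketch below just differentiates the bracket $`S`$ of the text (a check, not new content).

```python
import sympy as sp

a, b, c, d, p = sp.symbols('a b c d p')
S = p**2*(a + b - c - d) + p*(c + d - 2*b) + b   # bracket entering G

print(sp.diff(S, p).subs(p, 0))                  # -> c + d - 2*b
print(sp.simplify(S.subs(p, 0)))                 # -> b  (only servants)
print(sp.simplify(S.subs(p, 1)))                 # -> a  (only leaders)
```

The derivative at $`p=0`$ is positive exactly when $`c+d>2b`$, which is the condition stated above, and the two endpoint values reproduce the pure-servant and pure-leader limits.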
It is always useful for the leaders to cooperate inside their family. In this case, we have $`G_l=\left(1-ϵ\right)b+ϵ\left[pb+\left(1-p\right)c\right]`$ and $`G_s=\left(1-ϵ\right)b+ϵ\left[pd+\left(1-p\right)b\right]`$, so that $$G=b\left(1-ϵ\right)+ϵS$$ where $$S=\left[p^2\left(2b-c-d\right)+p\left(c+d-2b\right)+b\right]$$ Here we have $`G(p=0)=G(p=1)=b`$; that is, for a population with only leaders or only servants, the result is exactly the same as for the non-mutant population. If once more we have $`c+d>2b`$, then for any $`p`$ such that $`0<p<1`$ we have $`G>b`$. We have a maximum of $`G`$ at $`p=1/2`$; therefore, we see that the population will eventually mutate to a point where the numbers of leaders and servants are equal, as that point gives the highest gains. Against a non-mutant population playing tit-for-tat, the outer strategy of the mutant gene just has to be tit-for-tat, like everybody else's. This way, the gene makes sure that its carriers keep getting cooperation from the individuals outside their family. In this case, both leaders and servants should keep playing tit-for-tat, as they did before their gene mutated. However, inside the family, we have seen that the division into leaders and servants, for the right parameters of the Prisoner's Dilemma, brings a better result than simple cooperation. This way, the mutant gene can also invade an environment where everybody plays tit-for-tat and prosper there. The equality in the number of leaders and servants means one very specific thing: that, in a Prisoner's Dilemma game, the best genetic reply is to have as many individuals who increase their fitness as individuals who have their fitness decreased. When the population is divided into these two groups, asymmetric situations will arise, as we will discuss in the next section, and it is not clear right now whether $`p=1/2`$ would always be the best reply. ## 3 Leaders and Servants in the General Two Players Case The result obtained in the last section was first developed as an alternative solution to the Prisoner's Dilemma, but that is not its only application. Let's now examine another type of game: a non-symmetric game with complete information, where a dominant strategy exists. Non-symmetric games can easily happen among members of the same species, especially if the members occupy different positions, such as males and females, parents and children, or, if the leader-servant solution was developed previously, leaders and servants. So far, it was believed that all genes should choose the dominant strategy, as it would be foolish to do otherwise, according to traditional Evolutionary Game Theory. That is not true anymore once we introduce leaders and servants. Let's see how. Two players, A and B, compete with each other, player B choosing the row and player A the column. Their payoffs are given by the pair $$A=\left(\begin{array}{cc}2& 10\\ 4& 3\end{array}\right)$$ $$B=\left(\begin{array}{cc}1& 1\\ 2& 3\end{array}\right)$$ Here, it is clear that player B will choose the second row, as no matter what A does, it is his best choice. Knowing this, either by rational analysis or simply by observing that B always does that, A is limited to choosing within the second-row payoffs, and it will clearly pick the first column, so that the total result will be, for (A,B), (4,2). We have here an average result of 3.
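A short numerical check of this equilibrium (a sketch; the iterated-dominance shortcut below works here only because B's second row strictly dominates):

```python
import numpy as np

# Asymmetric game of Section 3: B picks the row, A picks the column.
A = np.array([[2, 10],
              [4,  3]])    # payoffs to player A
B = np.array([[1, 1],
              [2, 3]])     # payoffs to player B

row = int(np.argmax(B.min(axis=1)))   # B's dominant row: row 1 (payoffs 2, 3)
col = int(np.argmax(A[row]))          # A's best reply within that row: column 0
print((A[row, col], B[row, col]))     # -> (4, 2), average 3

print(A[0, 1], B[0, 1])               # for comparison, the (row 0, column 1)
                                      # cell gives (10, 1), average 5.5
```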
However, were the two players to choose the first row and the second column, the average would go up to 5.5, thanks to the very good result A would be getting. Therefore, if these numbers represent, for example, the average number of surviving offspring A and B will have when facing this asymmetric situation, it is a good strategy for a gene to make its carriers choose the first row and second column whenever faced with this decision. Again, we are supposing that A and B can determine, with reasonable success, that they share this same gene, or in other words, that they are close relatives, something we will go on assuming in this paper. Here, if the gene of player B changes so that he no longer cooperates in letting A get the better result, B will prosper initially; but as his descendants form their own family, where the members always take the dominant strategy, they lose the advantage our mutant gene had and, as a consequence, evolution works to take them out of the scenario. The same dynamics can happen for a number of other games, including games with multiple Nash equilibria where one of the equilibria is clearly more favorable to the gene than the other one. A point that should be very clear is that being a leader does not necessarily have anything to do with being the one who gives the orders or makes the decisions. Being a leader, in this context, means just that other beings from your family will sacrifice their own fitness in order to increase yours. They might even be the ones responsible for the decision making, and you nothing more than a reproductive machine. The point of view here is not a subjective one, but an evolutionary one, where all that matters is whether you will leave fertile children or not. Evolution is not concerned with issues like freedom or happiness, unless they represent some change in your fitness. ## 4 Flexibility, Specialization and Multi-Cellular Organisms An important genetic decision concerns the strategies used to decide who will play the role of leader and who will be the servant. More specifically, it is interesting to ask whether these positions should be perpetual. It is not clear whether a leader should remain a leader for his entire life, as it may happen that it is evolutionarily better for the roles to be traded. A gene that, once the decision about which role an individual plays was made, would mark the leaders as leaders for life, and likewise the servants, would be less flexible and possibly less efficient than one that allows a change of ranks. Flexibility seems a good idea, at first. The flexible gene has to develop a way to determine when it is convenient to change the hierarchical positions, based on the actual information the individual has access to and, maybe, on the amount of resources already invested in the old leaders. In the extreme case, supposing a well-tuned mechanism of change, no change would ever be needed and this strategy would cause none, being exactly as efficient as the non-flexible variety; on all other occasions, the flexible gene would use its flexibility to improve its chances. In human societies, we see new leaders taking the place of old ones. That may be this effect of exchanging the positions of leaders and servants at work, or simply the rise of a new generation, as the older individuals have their fitness decreased by age and it is useful for the species to put other leaders in their places.
This way of doing things should lead to changes in individual characteristics associated with a change in hierarchical position. This effect can be a good way to determine whether the leaders are turning into servants or are just being replaced by new leaders. If the old leader, when demoted, loses his regal posture and seems weaker, as happens with some of them, we are actually seeing his position being reverted. The opposite is hardly true: the only people who ever make it into leader positions seem to be those who were already leaders of smaller groups. However, good fortune can change positions, bringing power and/or money to a servant; in these cases, the opposite change, from servant to leader, is to be expected. On the other hand, very rigid structures, where no changes at all are possible, allow a better specialization. A servant who is always a servant can specialize his work and get a better result for his family and genetic code, as in an ant colony. The gene could mutate to include an instruction that, if the individual is turned into a servant, his development happens according to one blueprint, while if he is a leader, his development to adulthood is changed to fit his different hierarchical position. If specialization wins, this can lead to the appearance of large social structures and multicellular organisms. It has already been pointed out that the fact that all the beings in a community evolving towards a single organism share the same genetic code makes cooperation a good idea, as that gene has its survival chances improved. Here, not only does cooperation happen, but some individuals use strategies that decrease their fitness in order to improve their leaders' fitness. In the limiting situation, the servants might give up completely on having descendants of their own. The leaders make up for this by having descendants who are servants and others who are leaders, keeping the population with the same distribution as before. If the servants start contributing very few descendants to the next generations, they soon stop having any evolutionary utility other than increasing their leaders' fitness. That way, unicellular beings may have found, at some stage of their evolution, that creating infertile descendants was a very good idea. Those beings would be programmed for some specific task, like protection, improved food gathering or anything else, whose function was to serve the ones responsible for procreation. That can lead to cells that are experts in their tasks, unable to survive on their own, but that contribute to the whole. In this sense, we could understand the whole body of cells that makes up a multicellular being as slaves to the sexual cells, which are the ones that really make copies of themselves, and of everybody else, of course. The genetic code survives and alters itself exactly through that reproduction. And, of course, the sexual cells, the leaders in the framework developed above, need to create new servants (actually, the other parts of the body), or else they would fail in their task of getting reproduced. When the dynamics between flexibility and specialization comes to a stable solution, we have the creation of a new kind of being, a multicellular being, with no inner struggles about who should get the better fitness. The servant cells' fitness can decrease to the point where they reproduce only to repair damaged tissues, and they don't even try to get an evolutionary advantage for themselves.
The same process can work with multicellular entities, and we will have structures like ant colonies, which are actually one single being from an evolutionary point of view. The question of why some ants work only as soldiers, never reproducing, living just to be sacrificed, does not really pose a problem from an evolutionary point of view. Those soldiers do not reproduce, so their fitness is not an issue when determining any type of evolutionary success. During the process in which they became servants, their fitness was decreasing and becoming ever less important for the species as a whole; therefore, it was only to be expected that they turned into expendable servants. Both forces, for flexibility and for specialization, will compete, and the structure most appropriate to the situation will win. It is quite possible, from what we see in nature, that rigidity has a tendency to win, forming new individuals, superorganisms composed of the smaller organisms working together as one. It has won in the multicellular case: we see very few examples of colonies of microorganisms working together with fertile leaders and servants present, while superorganisms consisting of very specialized servant cells working to improve the fitness of a few cells are very common and all around us, like ourselves, or every animal or plant that we find. Specialization has won among the social insects. Among us, the struggle between the two forces still exists, but this may be just a transient phase. Only the future can tell. ## 5 Some Comments and Possible Applications In this section I will allow myself to speculate on the possible explanations of natural phenomena provided by this result. I ask the reader to forgive me if she thinks I have gone too far; my only defense is that I do believe that what I will say here is, if not a true description of Nature, at least very reasonable and likely. A first important warning is that one should be very careful when trying to identify situations where leaders and servants have appeared, because these terms were used here in a very strict sense. Servants are not individuals who obey the leaders, but individuals who adopt strategies that decrease their fitness in order to increase their leaders' fitness. In this sense, there are situations where leaders and servants might, according to our definition, be identified in positions reversed from those of leaders and servants in everyday language. We know that it can be a good decision for a gene to spend some of its carriers to further the survivability of other carriers. We have seen how this mechanism works, how it can be the right evolutionary thing to do to sacrifice some individuals for a greater ”good”. It cannot be stressed enough that this does not mean any of us must agree with this genetic moral. The point is just that moving resources from the servants to the leaders, when facing a problem with the appropriate parameters, can be a winning evolutionary strategy. Therefore, living beings can have inside them some kind of structure that makes this type of slavery not only possible, but even desirable to the servants. This slavery is not necessarily of the type we humans are used to recognizing as such, of course. All this can lead us to some speculation on how this genetic strategy can actually be seen working in nature and in ourselves. It seems to me that much of our society could have been built around such an idea.
In the old days, our communities were much smaller, and people from the same tribe were probably our relatives. Therefore, it was a good genetic strategy to divide human society into leaders and servants and to base our decision on whether someone belonged to our family on the fact that they lived in the same tribe we did. We have inherent abilities to recognize hierarchy and our position in it, and we expect to be treated accordingly. When a person is low in the hierarchy (a servant), it accepts abuses it wouldn't accept if its position were higher (a leader). We have always heard that power corrupts. Well, when you are a leader, you expect to be treated like one; that is part of the strategy. For example, when a family has two children, the parents may decide to invest more in one of them and simply ignore the other, if that would make the family fitness higher. The servant child will be expected to work and actually help feed and fulfill the needs of the leader, so that the leader child will have even better chances: more food, more protection, more access to suitable members of the opposite sex. When detecting leaders, human beings seem to use other clues besides posture to determine someone's hierarchical position. More specifically, we use clothing to represent our position, and this seems to be a trait common to many cultures. Therefore, it is not so strange that people worry far more than would seem reasonable about fashion, and tend to prefer clothing that is more expensive just because it is more expensive, not because it is better or more difficult to make. If clothing is something people will use to determine whether we are leaders or servants, and to decide whether or not to cooperate with us, it is not strange at all that people, from a very young age, seem so irrationally attracted to clothes. That can be not just a way to show which group you belong to, but also a strategy to be seen as a leader. The teenage struggle to be accepted by one's peers becomes now even more desperate. If all teenagers played tit-for-tat in their social relationships, they would not have many problems among themselves. But that's not the point of the game; the point is to establish oneself as a leader, as someone the servants must cooperate with without expecting cooperation in return. Therefore, problems are to be expected at some point before adult life, some period when, if left to themselves, humans will fight each other, physically or verbally, trying by all means to make themselves leaders and those they don't like servants. And we all know what happens when one is a teenager. The same effect should be expected around organizations, and this could be checked: organizations seen as leaders, with prestige, should be able to attract workers with smaller wages for the same task than declining organizations, as people want to be seen as leaders. It is also very interesting to see how many organizations make it a very clear point to tell people they are servants as soon as they are accepted as part of the organization. Universities, with the treatment freshmen get from their just slightly older fellows, and military organizations: the examples are many. Make the man know he is a servant and he will cooperate, even when the organization, or its leaders, do not. Another area where this effect can happen is in the field of ideas and in politics. Some ideas are known to send their followers down paths that actually lower their fitness, but the ideas still flourish.
A priest who is told he can't have sexual relationships, much less children, or a terrorist who dies for the cause: both have their fitness terribly diminished by these strategies. However, as we do have the ability to become servants, they can take these paths. In this case, their paths might serve somehow to further the goals of the ideas they are enslaved to, but they can also serve to increase the fitness of the other members of the man's tribe, be it a religious community or a military party. Our leaders do think of the lives of their soldiers as something expendable in order to reach their objectives, objectives that make them more powerful, possibly allowing their genes, or the dominant genes in their country, to dominate and spread over greater areas, thereby increasing the fitness of their own genetic code. We have a great sense of hierarchy; we are constantly gauging our position in the social hierarchy, and we tend to behave according to it. Our judgement may differ from that accepted by official society, as in the case of a mob leader, but even there, there is a hierarchy to be respected, and it is always clear who the leaders are. We instinctively tell, in our body stance and in the clothes we wear, where in that hierarchy we are, and that knowledge is used by everybody to decide how to behave around us. In the old days, the people we would normally meet and call friends belonged to our tribe and were probably our relatives; working with them and playing leaders and servants was then a good choice. Nowadays we carry the same genes, but most people we meet are not family. Our systems for deciding with whom we might expect to share a servant-leader relationship do not work as well anymore and can be exploited by our bosses and by religious and political leaders. It is no surprise to me that one of the most common practices in these fields is to make you feel like you are part of a family, because people not only cooperate with their families, they also accept servant roles within them and make sacrifices they wouldn't make otherwise. This has other possible applications. Police officers are notorious, to a greater or smaller extent in several parts of the world and/or periods of history, for abusing their functions. Political leaders have done even worse, some considering that nothing was more important than their personal desires. In both cases, we see people in positions that give them power using this power in a non-cooperative way, expecting that the servants will obey them. The more power given to a person, the more she requires from her followers, and the more corrupt she becomes. What could be happening here is simply that this person, having learned she is the supreme leader, knows instinctively that she must not cooperate with the servants; that can be the best genetic strategy. Therefore, it is our own genes that make us corrupt as we grow more and more powerful. It is probably more than a coincidence that, as culture changed and we started telling our leaders and policemen that they are not the real leaders, but servants of the rest of society, their behavior changed towards a more cooperative one. They still have power and the desire to use it and become leaders but, as society does not treat them as leaders, the problems with abuses decrease. Turning our attention back to psychological problems, it is interesting to ask how we notice and classify who is a leader and who is a servant.
Posture is a good guess, as are the cues we can give with our behavior. Other, subtler channels may be at work as well, and research to identify them could lead to interesting discoveries. This dynamics can also be used to explain the apparently irrational success of self-help literature and movements. If you can convince people to behave as leaders, and the world reacts by treating them that way, it is very likely that their lives will get better, as more people start cooperating with them. The downside is that their improvement happens with servants sacrificing themselves for the new leaders; this way, they are not really making the world a better place, just exploiting it and making sure they get the best share of it. Aggression inside families can be a means to determine leadership. As we have seen, it is exactly inside a family that the leader-servant strategy is expected to work best, all other cases being just weaker variations, based on error-prone instinctive methods of determining family. What we have is that violence can be a means for the leader to establish his dominance. Servants can't be violent with their leaders, as that would worsen their leaders' fitness, but, under the right circumstances, leaders can be violent and the servants will still serve them. When someone makes himself a leader using brute force, it would be reasonable to expect the other person to do something, if he were acting according to a dominant strategy for his personal interests. Taking the problem to the police, changing relationships, fighting back: all are better personal strategies than just accepting the violence. However, when servants recognize their position as servants, their instincts are expected to work towards keeping them in that position. Therefore, it should be very hard to convince a victim of aggression to act against her aggressor when this happens inside a family. In the economic arena, this type of dynamics is also known to happen. Conglomerates can force their component companies to take bad individual decisions, especially when dealing with companies from the same group, so that, on the whole, the conglomerate will profit more. As long as the group ends up better off, they would never care about making one of their companies worse. ## 6 Conclusion We have seen that when genes are playing their games, it is not always in their best interest that the individuals playing their parts always answer the problem with a dominant strategy. There are situations where it is genetically preferable to have individuals making decisions against their best interests, so that the overall survivability of the gene gets increased. As long as the servants have a means to adequately determine who are the leaders carrying the same genes as them, they can use that information to increase the leaders' fitness, even when it decreases their own. This solution is not evolutionarily unstable, provided the individuals adopting it belong to the same family. The major consequence of this is not only that cooperation becomes more likely, helping to explain how it is so common in our world, but also that there will be times, even in non-symmetric games, when a dominant strategy for the gene carriers is not the best answer from the gene's point of view.
# Phenomenology of the radion in the Randall-Sundrum scenario at colliders ## Abstract Phenomenology of a radion ($`\varphi `$) that stabilizes the modulus in the Randall-Sundrum scenario is considered. The radion couples to the trace of the energy-momentum tensor of the standard model (SM) with a strength suppressed only by a new scale ($`\mathrm{\Lambda }_\varphi `$) of the order of the electroweak scale. In particular, the effective coupling of the radion to two gluons is enhanced by the trace anomaly of QCD. Therefore, its production cross section at hadron colliders could be enhanced, and the dominant decay mode of a relatively light radion is $`\varphi \rightarrow gg`$, unlike the SM Higgs boson case. We also present constraints on the mass $`m_\varphi `$ and the new scale $`\mathrm{\Lambda }_\varphi `$ from the Higgs search limit at LEP and from the perturbative unitarity bound. preprint: KAIST–TH 00/1 DESY 00–030 It is one of the problems of the standard model (SM) to stabilize the electroweak scale relative to the Planck scale under quantum corrections, which is known as the gauge hierarchy problem. Traditionally, there have been basically two avenues to solve this problem: (i) electroweak gauge symmetry is spontaneously broken by some new strong interactions (technicolor or its relatives), or (ii) there is supersymmetry (SUSY), spontaneously broken in a hidden sector, with the superpartners of SM particles having masses around the electroweak scale, $`O(100-1000)`$ GeV. However, new mechanisms based on the developments in superstring and M theories including D-branes have been suggested by Randall and Sundrum . If our world is confined to a three-dimensional brane and the warp factor in the Randall-Sundrum (RS) theory is much smaller than 1, then loop corrections cannot destroy the mass hierarchy derived from the relation $`v=e^{-kr_c\pi }v_0`$, where $`v_0`$ is the VEV of the Higgs field ($`\sim O(M_P)`$) in the 5-dimensional RS theory, $`e^{-kr_c\pi }`$ is the warp factor, and $`v`$ is the VEV of the Higgs field ($`\simeq 246`$ GeV) in the 4-dimensional effective theory obtained from the RS theory by a kind of dimensional reduction. Especially, the extra-dimensional subspace need not be a circle $`S^1`$ as in the Kaluza-Klein theory , and in that case it is crucial to have a mechanism to stabilize the modulus. One such mechanism was recently proposed by Goldberger and Wise , and also by Csáki et al. . In such a case, the modulus (or the radion $`\varphi `$ from now on) is likely to be lighter than the lowest Kaluza-Klein excitations of bulk fields. Also, its couplings to the SM fields are completely determined by general covariance in the four-dimensional spacetime, as shown in Eq. (1) below. If this scenario is realized in nature, the radion could be its first signature, and it is therefore important to determine the radion phenomenology at current and future colliders, which is the purpose of this work. Some related issues were addressed in Ref. . In the following, we first recapitulate the interaction Lagrangian for a single radion and the SM fields, and calculate the decay rates and the branching ratios of the radion into SM particles. Then the perturbative unitarity bounds on the radion mass $`m_\varphi `$ and $`\mathrm{\Lambda }_\varphi `$ are considered. Current bounds from the SM Higgs search can be easily translated into corresponding bounds on the radion, which we show in brief.
Then the radion production cross sections at next linear colliders (NLC's) and hadron colliders such as the Tevatron and LHC are calculated, and our results are summarized at the end. The interaction of the radion with the SM fields at the electroweak scale is dictated by 4-dimensional general covariance, and is described by the following effective Lagrangian : $$\mathcal{L}_{\mathrm{int}}=\frac{\varphi }{\mathrm{\Lambda }_\varphi }T_\mu ^\mu (\mathrm{SM})+\mathrm{}$$ (1) where $`\mathrm{\Lambda }_\varphi =\langle \varphi \rangle \sim O(v)`$. The radion becomes massive after the modulus stabilization, and its mass $`m_\varphi `$ is a free parameter of the order of the electroweak scale. Therefore, two parameters, $`\mathrm{\Lambda }_\varphi `$ and $`m_\varphi `$, are required in order to discuss the production and decays of the radion in various settings. The couplings of the radion to the SM fields look like those of the SM Higgs, except for the replacement $`v\rightarrow \mathrm{\Lambda }_\varphi `$. However, there is one important thing to be noticed: the quantum corrections to the trace of the energy-momentum tensor lead to the trace anomaly, which generates additional effective couplings of the radion to gluons or photons beyond the usual loop contributions. These trace anomaly contributions lead to distinct signatures of the radion compared to the SM Higgs boson. The trace of the energy-momentum tensor of the SM fields at tree level is easily derived: $$T_\mu ^\mu (\mathrm{SM})^{\mathrm{tree}}=\left[\sum _fm_f\overline{f}f-2m_W^2W_\mu ^+W^{-\mu }-m_Z^2Z_\mu Z^\mu +\left(2m_h^2h^2-\partial _\mu h\partial ^\mu h\right)+\mathrm{}\right]$$ (2) where we have shown only the terms with two SM fields, since we will discuss two-body decay rates of the radion into SM particles, except for the gauge bosons, whose virtual states are also considered. The couplings between the radion and a fermion pair or a weak gauge boson pair are simply related to the corresponding SM Higgs couplings through a rescaling: $`g_{\varphi f\overline{f}}=g_{hf\overline{f}}^{\mathrm{SM}}v/\mathrm{\Lambda }_\varphi `$, and so on. On the other hand, the $`\varphi hh`$ coupling is more complicated than the SM $`hhh`$ coupling. There is a momentum-dependent part from the derivatives acting on the Higgs field, and this term can grow as the radion mass gets larger, or as the CM energy gets larger in hadroproduction of the radion. It may lead to a violation of perturbative unitarity, which will be addressed after we discuss the decay rates of the radion. Finally, the $`h\varphi \varphi `$ coupling can be described by $$\mathcal{L}_{\mathrm{int}}(h\varphi ^2)=\frac{v}{\mathrm{\Lambda }_\varphi ^2}\varphi ^2\left[\frac{1}{2}\partial ^2h+\frac{m_h^2}{2}h\right]$$ (3) which might lead to an additional Higgs decay $`h\rightarrow \varphi \varphi `$, if this mode is kinematically allowed, thereby enlarging the Higgs width compared to the SM case. However, this coupling actually vanishes upon using the equation of motion for the Higgs field $`h`$. This is also in accord with the fact that the radion couples to the trace of the energy-momentum tensor, so that there should be no $`h`$-$`\varphi `$ mixing after field redefinitions in terms of physical fields.
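As a quick illustration of the rescaling just quoted (nothing beyond Eqs. (1) and (2) is used), the fermionic term of the trace gives the radion a Yukawa-like vertex $$g_{\varphi f\overline{f}}=\frac{m_f}{\mathrm{\Lambda }_\varphi }=\frac{m_f}{v}\frac{v}{\mathrm{\Lambda }_\varphi }=g_{hf\overline{f}}^{\mathrm{SM}}\frac{v}{\mathrm{\Lambda }_\varphi },$$ since the SM Yukawa coupling is $`g_{hf\overline{f}}^{\mathrm{SM}}=m_f/v`$; the same pattern holds for the $`W`$ and $`Z`$ mass terms.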
In addition to the tree-level $`T_\mu ^\mu (\mathrm{SM})^{\mathrm{tree}}`$, there is also the trace anomaly term for the gauge fields: $$T_\mu ^\mu (\mathrm{SM})^{\mathrm{anom}}=\sum _{G=\mathrm{SU}(3)_C,\mathrm{}}\frac{\beta _G(g_G)}{2g_G}\mathrm{tr}(F_{\mu \nu }^GF^{G\mu \nu }),$$ (4) where $`F_{\mu \nu }^G`$ is the field strength tensor of the gauge group $`G`$ with generator(s) satisfying $`\mathrm{tr}(t_G^at_G^b)=\delta ^{ab}`$, and $`\beta _G`$ is the beta function of the corresponding gauge group. The trace anomaly term couples to the parameter of conformal transformations on our 3-brane, and the radion $`\varphi `$ plays the same role as that parameter, since it enters through the warp factor in the 5-dimensional RS metric . Therefore, the parameter associated with the conformal transformation is identified with the radion field $`\varphi `$, and as a result the radion acquires a coupling to the trace anomaly term. For the QCD sector, as an example, one has $$\frac{\beta _{QCD}}{2g_s}=-\left(11-\frac{2}{3}n_f\right)\frac{\alpha _s}{8\pi }\equiv -\frac{\alpha _s}{8\pi }b_{QCD},$$ (5) where $`n_f=6`$ is the number of active quark flavors. There are also counterparts in the $`SU(2)\times U(1)`$ sector. This trace anomaly has an important phenomenological consequence: for a relatively light radion, the dominant decay mode will not be $`\varphi \rightarrow b\overline{b}`$, as for the SM Higgs, but $`\varphi \rightarrow gg`$. Using the above interaction Lagrangian, it is straightforward to calculate the decay rates and branching ratios of the radion $`\varphi `$ into $`f\overline{f},W^+W^-,Z^0Z^0,gg`$ and $`hh`$. $$\mathrm{\Gamma }(\varphi \rightarrow f\overline{f})=N_c\frac{m_f^2m_\varphi }{8\pi \mathrm{\Lambda }_\varphi ^2}\left(1-x_f\right)^{3/2},$$ (6) $$\mathrm{\Gamma }(\varphi \rightarrow W^+W^-)=\frac{m_\varphi ^3}{16\pi \mathrm{\Lambda }_\varphi ^2}\sqrt{1-x_W}(1-x_W+\frac{3}{4}x_W^2),$$ (7) $$\mathrm{\Gamma }(\varphi \rightarrow ZZ)=\frac{m_\varphi ^3}{32\pi \mathrm{\Lambda }_\varphi ^2}\sqrt{1-x_Z}(1-x_Z+\frac{3}{4}x_Z^2),$$ (8) $$\mathrm{\Gamma }(\varphi \rightarrow hh)=\frac{m_\varphi ^3}{32\pi \mathrm{\Lambda }_\varphi ^2}\sqrt{1-x_h}(1+\frac{x_h}{2})^2,$$ (9) $$\mathrm{\Gamma }(\varphi \rightarrow gg)=\frac{\alpha _s^2m_\varphi ^3}{32\pi ^3\mathrm{\Lambda }_\varphi ^2}\left|b_{QCD}+\sum _qI_q(x_q)\right|^2,$$ (10) where $`x_{f,W,Z,h}=4m_{f,W,Z,h}^2/m_\varphi ^2`$, and $`I(z)=z[1+(1-z)f(z)]`$ with $$f(z)=-\frac{1}{2}\int _0^1\frac{dy}{y}\mathrm{ln}[1-\frac{4}{z}y(1-y)]=\{\begin{array}{cc}\mathrm{arcsin}^2(1/\sqrt{z}),& z\ge 1,\hfill \\ -\frac{1}{4}\left[\mathrm{ln}\left(\frac{1+\sqrt{1-z}}{1-\sqrt{1-z}}\right)-i\pi \right]^2,& z\le 1.\hfill \end{array}$$ (11) Note that as $`m_t\rightarrow \mathrm{\infty }`$, the loop function approaches $`I(x_t)\rightarrow 2/3`$, so that the top quark effect decouples and one is left with $`b_{QCD}`$ for $`n_f=5`$. For $`\varphi \rightarrow WW,ZZ`$, we have ignored the $`SU(2)_L\times U(1)_Y`$ anomaly, since these couplings are allowed already at tree level, unlike the $`\varphi gg`$ or $`\varphi \gamma \gamma `$ couplings. This should be a good approximation for a relatively light radion. Using the above results, we show the decay rate of the radion and the branching ratio for each channel available at a given $`m_\varphi `$ in Figs. 1 and 2.
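A numerical sketch of the two key partial widths, Eqs. (6) and (10), shows the anomaly-driven dominance of the two-gluon mode for a light radion. The values of $`\alpha _s`$ and the quark masses below are rough illustrative inputs chosen here, not the ones used for the figures, and only the top loop is kept in $`\sum _qI_q`$ (light-quark loop functions are negligible).

```python
import numpy as np

v   = 246.0          # electroweak VEV [GeV]; we take Lambda_phi = v as in the text
Lam = v
alphas, m_b, m_t = 0.118, 3.0, 175.0   # illustrative inputs (masses in GeV)
b_QCD = 11.0 - (2.0/3.0)*6             # n_f = 6, Eq. (5): b_QCD = 7

def f(z):
    """Loop function of Eq. (11), z = 4 m_q^2 / m_phi^2."""
    if z >= 1.0:
        return np.arcsin(1.0/np.sqrt(z))**2
    r = np.sqrt(1.0 - z)
    return -0.25*(np.log((1.0+r)/(1.0-r)) - 1j*np.pi)**2

def I(z):
    return z*(1.0 + (1.0-z)*f(z))

def width_gg(m):                        # Eq. (10), keeping only the top loop
    return alphas**2*m**3/(32*np.pi**3*Lam**2)*abs(b_QCD + I(4*m_t**2/m**2))**2

def width_bb(m, Nc=3):                  # Eq. (6)
    x = 4*m_b**2/m**2
    return Nc*m_b**2*m/(8*np.pi*Lam**2)*(1.0 - x)**1.5

m_phi = 100.0
print(width_gg(m_phi), width_bb(m_phi))   # ~14 MeV vs ~1.8 MeV: gg dominates
```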
In the numerical analysis, we use $`\mathrm{\Lambda }_\varphi =v=246`$ GeV and $`m_h=150`$ GeV, and also include the QCD corrections. The decay rates for other values of $`\mathrm{\Lambda }_\varphi `$ can be obtained through the scaling $`\mathrm{\Gamma }(\mathrm{\Lambda }_\varphi )=(v/\mathrm{\Lambda }_\varphi )^2\mathrm{\Gamma }(\mathrm{\Lambda }_\varphi =v)`$: the decay rate scales as $`(v/\mathrm{\Lambda }_\varphi )^2`$, but the branching ratios are independent of $`\mathrm{\Lambda }_\varphi `$. In Fig. 1, we also show the decay rate of the SM Higgs boson with the same mass as $`\varphi `$. We note that a light radion with $`\mathrm{\Lambda }_\varphi =v`$ could be a much broader resonance than the SM Higgs, even if $`m_\varphi \lesssim 2m_W`$. This is because the dominant decay mode is $`\varphi \rightarrow gg`$ (see Fig. 2), unlike for the SM Higgs, for which the $`b\overline{b}`$ final state is the dominant decay mode. This phenomenon is a purely quantum field theoretical effect: the enhanced $`\varphi gg`$ coupling due to the trace anomaly. For a heavier radion, it turns out that $`\varphi \rightarrow VV`$ with $`V=W`$ or $`Z`$ dominates the other decay modes once it is kinematically allowed. The branching ratio for $`\varphi \rightarrow hh`$ can also be appreciable if this channel is kinematically open. This is one of the places where the difference between the SM Higgs and the radion comes in. If $`\mathrm{\Lambda }_\varphi \gg v`$, the radion would be a narrow resonance and should be easily observed as a peak in the two-jet or $`WW(ZZ)`$ final states. Especially, $`\varphi \rightarrow ZZ\rightarrow (l\overline{l})(l^{}\overline{l^{}})`$ will be a gold-plated mode for detecting the radion, as in the case of the SM Higgs. Even in this channel, one can easily distinguish the radion from the SM Higgs by the difference in their decay widths. Perturbative unitarity can be violated (as in the SM) in $`V_LV_L\rightarrow V_LV_L`$ or $`hh\rightarrow V_LV_L`$ scattering, etc. Here we consider $`hh\rightarrow hh`$, since the $`\varphi hh`$ coupling scales like $`s/\mathrm{\Lambda }_\varphi `$ for large $`s\equiv (p_{h_1}+p_{h_2})^2`$. The tree-level amplitude for this process is $$\mathcal{M}(hh\rightarrow hh)=\frac{1}{\mathrm{\Lambda }_\varphi ^2}\left(\frac{s^2}{s-m_\varphi ^2}+\frac{t^2}{t-m_\varphi ^2}+\frac{u^2}{u-m_\varphi ^2}\right)-36\lambda ^2v^2\left(\frac{1}{s-m_h^2}+\frac{1}{t-m_h^2}+\frac{1}{u-m_h^2}\right)-6\lambda ,$$ (17) where $`\lambda `$ is the Higgs quartic coupling, and $`s+t+u=4m_h^2`$. Projecting out the $`J=0`$ partial wave component ($`a_0`$) and imposing the partial wave unitarity condition $`|a_0|^2\le \mathrm{Im}(a_0)`$ (i.e. $`|\mathrm{Re}(a_0)|\le 1/2`$), we get the following relation among $`m_h,v,m_\varphi `$ and $`\mathrm{\Lambda }_\varphi `$, for $`s\gg m_h^2,m_\varphi ^2`$: $$\left|\frac{2m_h^2+m_\varphi ^2}{8\pi \mathrm{\Lambda }_\varphi ^2}+\frac{3\lambda }{8\pi }\right|\le \frac{1}{2}.$$ (18) This bound is shown in the lower three curves of Fig. 3. We note that perturbative unitarity is broken for relatively small $`\mathrm{\Lambda }_\varphi \lesssim 130(300)`$ GeV for $`m_\varphi \simeq 200`$ GeV (1 TeV). Therefore, the tree-level results should be taken with care in this range of $`\mathrm{\Lambda }_\varphi `$ for a given radion mass. At $`e^+e^-`$ colliders, the main production mechanism for the radion $`\varphi `$ is the same as for the SM Higgs boson: the radion-strahlung off a $`Z`$ and $`WW`$ fusion, the latter of which becomes dominant at larger CM energies .
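The large-$`s`$ bound of Eq. (18) is easy to evaluate; the sketch below solves it for the minimal $`\mathrm{\Lambda }_\varphi `$. The quartic coupling is fixed here via the tree-level convention $`\lambda =m_h^2/(2v^2)`$, which the text does not spell out, so this is an assumption of the sketch.

```python
import numpy as np

v, m_h = 246.0, 150.0
lam = m_h**2/(2*v**2)                  # assumed tree-level Higgs quartic coupling

def lambda_phi_min(m_phi):
    # Eq. (18) rearranged: Lambda^2 >= (2 m_h^2 + m_phi^2)/(4 pi - 3 lam)
    return np.sqrt((2*m_h**2 + m_phi**2)/(4*np.pi - 3*lam))

for m_phi in (200.0, 1000.0):
    print(m_phi, lambda_phi_min(m_phi))   # ~84 GeV and ~295 GeV
```

The 1 TeV value reproduces the quoted $`\mathrm{\Lambda }_\varphi \simeq 300`$ GeV; the milder number at $`m_\varphi =200`$ GeV presumably reflects the fact that the curves of Fig. 3 are not computed in the strict large-$`s`$ limit.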
Again we neglect the anomaly contributions here. Since both of these processes are obtained by rescaling the SM Higgs production rates, we can use the current search limits on the Higgs boson to get bounds on the radion. With the data from the L3 collaboration , we show the constraints on $`\mathrm{\Lambda }_\varphi `$ and $`m_\varphi `$ in the left three curves of Fig. 3. Since the L3 data are for $`\sqrt{s}=189`$ GeV and the mass of the $`Z`$ boson is about 91 GeV, the kinematically allowed mass for a scalar particle produced in association with a $`Z`$ is about 98 GeV. If the mass of the scalar particle is larger than 98 GeV, the production cross section vanishes. Therefore, if $`m_\varphi `$ is larger than 98 GeV, there is no constraint on $`\mathrm{\Lambda }_\varphi `$. Likewise, the forbidden region in the $`m_\varphi `$-$`\mathrm{\Lambda }_\varphi `$ plane is not changed for $`m_h\gtrsim 98`$ GeV, because there is then no Higgs contribution to the constraint. The radion production cross sections at NLC's and the corresponding curves of constant production cross section in the ($`\mathrm{\Lambda }_\varphi `$, $`m_\varphi `$) plane are shown in Fig. 4 and Fig. 5, respectively. We have chosen three different CM energies for the NLC's: $`\sqrt{s}=500`$ GeV, 700 GeV and 1 TeV. We observe that a relatively light radion ($`m_\varphi \lesssim 500`$ GeV) with $`\mathrm{\Lambda }_\varphi \sim v`$ (up to $`\sim 1`$ TeV) could be probed at NLC's if one can achieve high enough luminosity, since the production cross section in this region is less than a picobarn. The production cross section of the radion at hadron colliders is given by gluon fusion into the radion through quark loop diagrams, as in the case of Higgs boson production, and also through the trace anomaly term, Eq. (4), which is not present in the case of the SM Higgs boson: $$\sigma (pp(\mathrm{or}p\overline{p})\rightarrow \varphi X)=K\widehat{\sigma }_{\mathrm{LO}}(gg\rightarrow \varphi )\int _\tau ^1\frac{\tau }{x}g(x,Q^2)g(\tau /x,Q^2)dx,$$ (19) where $`\tau \equiv m_\varphi ^2/s`$ and $`\sqrt{s}`$ is the CM energy of the hadron collider ($`\sqrt{s}=2`$ TeV and 14 TeV for the Tevatron and LHC, respectively). The $`K`$ factor includes the QCD corrections, and we set $`K=1.5`$. The parton-level cross section for $`gg\rightarrow \varphi `$ is given by $$\widehat{\sigma }_{\mathrm{LO}}(gg\rightarrow \varphi )=\frac{\alpha _s^2(Q)}{256\pi \mathrm{\Lambda }_\varphi ^2}\left|b_{QCD}+\sum _qI_q(x_q)\right|^2,$$ (20) where $`I(z)`$ is given in Eq. (11). For the gluon distribution function, we use the CTEQ5L parton distribution functions . In Fig. 6, we show the radion production cross sections at the Tevatron and LHC as functions of $`m_\varphi `$ for $`\mathrm{\Lambda }_\varphi =v`$. We set the renormalization scale $`Q=m_\varphi `$, as shown in the figure. When we vary the scale $`Q`$ between $`m_\varphi /2`$ and $`2m_\varphi `$, the production cross section changes by about $`+30\%`$ to $`-20\%`$. The production cross section scales as $`(v/\mathrm{\Lambda }_\varphi )^2`$, as before. Compared to SM Higgs boson production, one clearly observes that the trace anomaly can enhance the hadroproduction of the radion enormously. As for the SM Higgs boson, there is a good possibility of observing the radion up to a mass of $`\sim 1`$ TeV if $`\mathrm{\Lambda }_\varphi \sim v`$. For a smaller $`\mathrm{\Lambda }_\varphi `$, the cross section becomes larger, but the radion becomes much broader, and it becomes more difficult to find such a scalar.
For a larger $`\mathrm{\Lambda }_\varphi `$, the situation is reversed: a smaller production cross section, but a narrower resonance, which is easier to detect. In any case, one has to keep in mind that perturbative unitarity may be violated in the low-$`\mathrm{\Lambda }_\varphi `$ region. In summary, we presented the collider phenomenology of the radion, which was suggested as a means of stabilizing the modulus in the Randall-Sundrum scenario. Unlike other similar scenarios solving the hierarchy problem, where the radion or the Kaluza-Klein modes are heavy and/or very weakly coupled to the SM fields, the radion discussed by Goldberger and Wise can have sizable interactions with the SM, suppressed by only one power of the electroweak scale. The radion phenomenology is very similar to that of the SM Higgs boson, up to a simple rescaling of couplings by $`v/\mathrm{\Lambda }_\varphi `$, except that its couplings to two gluons or two photons are enhanced by the trace anomaly. Also, the $`\varphi hh`$ coupling can be substantially larger than the corresponding triple Higgs coupling, and it increases as $`m_\varphi `$ or the CM energy grows. We discussed the various branching ratios and the decay rate of the radion, and the possibility of discovering it at linear or hadron colliders. Unlike the SM Higgs boson, a relatively light radion decays dominantly into two gluons, not into the $`b\overline{b}`$ pair, and this makes the radion substantially broader than the SM Higgs if $`\mathrm{\Lambda }_\varphi =v`$. A heavier radion decays into $`WW,ZZ`$ pairs, with some fraction into two Higgs bosons if kinematically allowed. Depending on $`\mathrm{\Lambda }_\varphi `$, the radion can be either broad or narrow, leading to larger (smaller) production rates at hadron colliders but harder (easier) detectability. One may also be able to probe some regions of $`(m_\varphi ,\mathrm{\Lambda }_\varphi )`$ if enough luminosity is achieved. It would be exciting to search for such a scalar particle, which interacts with the trace of the energy-momentum tensor of the SM, at current and future colliders. Finally, although we considered the radion in the Randall-Sundrum-Goldberger-Wise scenario, our study applies equally to any scalar particle that couples to $`T_\mu ^\mu (\mathrm{SM})`$. ###### Acknowledgements. The work of SB, PK and HSL was supported by grant No. 1999-2-111-002-5 from the interdisciplinary Research program of the KOSEF and the BK21 project of the Ministry of Education. JL is supported by the Alexander von Humboldt Foundation. Notes Added: While we were completing this work, there appeared two papers that consider the same subjects as we do. The paper by Giudice et al. includes a direct coupling between the radion and the Ricci scalar, parameterized by $`\xi `$: $`\mathcal{L}_{\mathrm{int}}=\xi R\varphi ^2/2`$. Our case corresponds to $`\xi =0`$ in Ref. . The usefulness of the mode $`\varphi \rightarrow \gamma \gamma `$ at hadron colliders was emphasized in Ref. . The qualitative conclusions of these papers, where they overlap with ours, are the same.
no-problem/0002/nucl-th0002055.html
ar5iv
text
# Moments of inertia for multi-quasiparticle configurations ## I Introduction The moments of inertia of deformed atomic nuclei at low spins are a factor of two or three smaller than the rigid-body value. The reduction is attributed to the strong pair correlations, because nuclei in the ground state are in a superfluid condensed state. Angular momentum is generated either by rotating the deformed superfluid or by breaking Cooper pairs from this condensed state. In order to reach high spins, an increasing number of Cooper pairs are broken, which reduces and finally quenches the pair condensate. It has often been stated that after this transition the moments of inertia should reach the rigid-body value. However, this conjecture is based on the consideration of two special cases: (i) the limit of large particle number, where the nuclear shell structure becomes unimportant; and (ii) nucleons in a harmonic oscillator potential at its equilibrium deformation. In particular, for independent particles in a harmonic oscillator potential well, the moment of inertia takes exactly the rigid-body value in the limit of vanishing angular velocity, in any combination of stationary single-particle states, provided the total energy is stationary with respect to volume-conserving variations of the equipotential ellipsoids. At finite angular velocity, the condition for the rigid-body value is that the second moments of the density distribution should have the ratio of the squares of the axes of the oscillator-plus-centrifugal equipotential ellipsoids, which is not exactly equivalent to a stationary energy. These results have led to the expectation that in real nuclei the moment of inertia would not be very different from the rigid-body value if the pairing is quenched. This expectation was substantiated by early studies, like, for example, that in Ref. , of more realistic single-nucleon potentials, which seemed to indicate that permitting the nuclear system to relax to an equilibrium shape generally tends to reduce deviations from the rigid-body moment of inertia due to shell structure. The validity of the aforementioned conjecture for the real nuclear potential remains, however, a continuing subject of investigation with new theoretical and experimental techniques, and so does the related question of the current distribution in a rotating nucleus. Systematic deviations from rigid-like behavior at high angular velocity have been demonstrated for transitional nuclei in the $`A\sim 110`$ region and discussed for superdeformed nuclei. In the present study we address the inertial behavior of a different class of nuclear excitations: high-seniority states in well-deformed $`A\sim 180`$ nuclei (see also the earlier work by Andersson et al. ). Recently, rotational bands have been observed in, for example, <sup>178</sup>Hf, <sup>178</sup>W and <sup>179</sup>W that are built on configurations with up to four broken pairs (that is, seniority eight) and high $`K`$ values, where $`K`$ is the angular momentum with respect to the body-fixed deformation axis. It is found (see the empirical data in Fig. 1) that the moments of inertia are substantially below the rigid-body value. Furthermore, some bands deviate from the linear dependence of the angular momentum on the angular velocity expected for the strong coupling of quasiparticles to the deformed field.
These features can be explained by the persistence of pair correlations in the Lipkin-Nogami pairing model, combined with the assumption that the zero-pairing limit would result in rigid-like rotation. However, since the latter assumption is not self-evident, a microscopic determination of the moment of inertia in the zero-pairing limit is required. In the present work it is demonstrated, through tilted-axis-cranking (TAC) calculations, that the main experimental features can be understood by assuming that nucleons move in a rotating mean field with no pairing. The preliminary results of Ref. are extended. ## II The tilted-axis-cranking model To describe the high-$`K`$ rotational bands, involving many unpaired nucleons and predominant magnetic dipole transitions, the tilted-axis-cranking approach is employed. When pairing is neglected, the nuclear state $`|\omega \rangle `$ considered is a uniformly rotating Slater determinant which is an eigenstate of the “Routhian” $$h^{}=h_{\text{def}}(\epsilon _2,\epsilon _4)-\omega (j_1\mathrm{sin}\vartheta +j_3\mathrm{cos}\vartheta ),$$ (1) where $`h_{\text{def}}`$ is the Hamiltonian of independent nucleons in a deformed potential, $`\omega `$ is the angular velocity, $`j_1`$ and $`j_3`$ are the components of the angular momentum with respect to the principal axes 1 and 3 (symmetry axis) of the deformed potential, and $`\vartheta `$ is the angle between the angular velocity and the 3-axis. The total Routhian $`E^{}`$ is obtained by applying the Strutinsky renormalization to the energy of the non-rotating system $`E_0`$. This kind of approach has turned out to be a quite reliable calculation scheme in the case of standard cranking. Thus we have $$E^{}(\omega ,\vartheta ,\epsilon _2,\epsilon _4)=E_{LD}(\epsilon _2,\epsilon _4)-\stackrel{~}{E}+\langle \omega |h^{}|\omega \rangle .$$ (2) By means of Strutinsky averaging, the smooth energy $`\stackrel{~}{E}`$ is calculated from the non-rotating single-nucleon energies, obtained from the Hamiltonian $`h_{\text{def}}(\epsilon _2,\epsilon _4)`$. The orientation angle $`\vartheta `$ is found by requiring the total angular momentum $`\stackrel{}{J}=\langle \omega |\stackrel{}{j}|\omega \rangle `$ to be parallel to $`\stackrel{}{\omega }`$. This makes $`E^{}`$ a minimum with respect to $`\vartheta `$. In the case of the high-$`K`$ bands we are interested in, the rotational axis is “tilted”, i.e. it does not coincide with one of the principal axes of the deformed potential ($`\vartheta \ne 90^{\circ}\text{ or }0^{\circ}`$). The equilibrium shape is found by minimizing $`E^{}`$ with respect to the deformation parameters $`\epsilon _2`$ and $`\epsilon _4`$ of the potential. The calculated angular momentum $`J(\omega )=\sqrt{J_1^2+J_3^2}`$ is compared with the corresponding experimental function, which is constructed by the standard procedure: in terms of the energy levels $`E(I)`$ of a $`\mathrm{\Delta }I=1`$ rotational band, where $`I`$ denotes the angular momentum quantum number, one sets $`\omega (J)=E(I)-E(I-1)`$ for $`J=I`$. For a given observed band, this defines a discrete set of empirical pairs of $`J`$ and $`\omega `$, from which the experimental function $`J(\omega )`$ is obtained by interpolation. (Taking $`\omega (J)`$ at $`J=(I-\frac{1}{2})+\frac{1}{2}=I`$ simulates an RPA correction to the Hartree-Fock energy.)
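As a concrete illustration of this standard procedure, the short Python sketch below synthesizes a $`\mathrm{\Delta }I=1`$ band from the strong-coupling expression $`E(I)=(I(I+1)-K^2)/2𝒥`$ used later in Sec. IV, applies $`\omega (J)=E(I)-E(I-1)`$ at $`J=I`$ together with the decomposition $`J_1=\sqrt{J^2-K^2}`$, $`\omega _1=\omega \sqrt{1-(K/J)^2}`$ used below (Eq. (3)), and recovers the input moment of inertia with zero alignment. The $`K`$ and $`𝒥_R`$ values are illustrative, not fitted to data.

```python
import numpy as np

# Synthesize a Delta-I = 1 band from E(I) = (I(I+1) - K^2)/(2*J_R), then
# apply omega(J) = E(I) - E(I-1) at J = I and the (J1, omega1) geometry.
# A linear fit of J1 against omega1 should return the input J_R with i = 0.
K, J_R = 25.0, 55.0                         # hbar and hbar^2/MeV, illustrative
I = np.arange(26.0, 41.0)                   # spins of the band members
E = (I * (I + 1.0) - K ** 2) / (2.0 * J_R)  # level energies (MeV)

J = I[1:]
omega = E[1:] - E[:-1]                      # omega(J) at J = I
J1 = np.sqrt(J ** 2 - K ** 2)
omega1 = omega * np.sqrt(1.0 - (K / J) ** 2)

slope, icpt = np.polyfit(omega1, J1, 1)
print(f"fitted J_R = {slope:.1f} hbar^2/MeV, alignment i = {icpt:.2f} hbar")
```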
## III Single-nucleon Hamiltonian and deformations In the present calculation for the nuclei <sup>178</sup>Hf, <sup>178</sup>W and <sup>179</sup>W, the modified oscillator form of the Hamiltonian $`h_{\text{def}}`$ was adopted. For the combinations of single-nucleon states listed in Table I, the equilibrium shape at zero angular velocity was determined. Most of the configurations in <sup>178</sup>W and <sup>179</sup>W were found to have equilibrium values of the quadrupole deformation $`\epsilon _2`$ and hexadecapole deformation $`\epsilon _4`$ (see Ref. ) close to $`\epsilon _2=0.23`$ and $`\epsilon _4=0.02`$. Only the $`K^\pi =45/2^{-}`$ and $`25^+`$ configurations, which have a proton in the $`1h_{9/2}`$ state, have somewhat larger equilibrium deformations, given approximately by $`\epsilon _2=0.25`$ and $`\epsilon _4=0.015`$. In the $`K^\pi =16^+`$ configuration in <sup>178</sup>Hf, the equilibrium shape has $`\epsilon _2=0.22`$ and $`\epsilon _4=0.05`$. These values of the shape parameters were used in the following calculations. The difference between the deformation of the $`K^\pi =45/2^{-}`$ and $`25^+`$ configurations and that of the other configurations in <sup>178</sup>W and <sup>179</sup>W changes the rigid-body moment of inertia by 6%. For the $`K^\pi =45/2^{-}`$ and $`25^+`$ configurations, we studied the change of equilibrium shape as a function of the angular velocity. In the relevant interval of $`\omega `$, the variation of $`\epsilon _2`$ stays below 0.005 and that of $`\epsilon _4`$ is negligible. This corresponds to a 2% variation of the rigid-body moment of inertia. ## IV Results and discussion Figure 1 shows the calculated and empirical functions $`J(\omega )`$ for the configurations in Table I except those with $`K^\pi =7^{-}`$ and $`15^+`$. A close correspondence between calculation and data is apparent from this figure. This includes recent data for a $`K^\pi =30^+`$ band in <sup>178</sup>W. It is also evident that the moments of inertia are considerably smaller than the rigid-body value, which is about 85 MeV<sup>-1</sup> for these masses and shapes. The typical empirical moment of inertia is about 55 MeV<sup>-1</sup>. The $`K^\pi =45/2^{-}`$ and $`25^+`$ bands are discussed later. This strong deviation from the behavior of the moment of inertia in the limit of large particle number may be understood from the details of the shell structure at prolate deformation. The upper and middle parts of the 50–82 proton and 82–126 neutron shells, where the Fermi levels are situated in these nuclei (with $`Z=72,74`$ and $`N=104`$–$`106`$), have a concentration of orbitals that are strongly coupled to the deformed potential. This inhibits the generation of total angular momentum by alignment of the angular momenta of the individual nucleons with the 1-axis. The result is a moment of inertia that is smaller than the average. The effect is illustrated by Fig. 2, which shows that, in comparison with the weakly coupled $`1h_{9/2}`$ proton orbital, the angular momentum of the strongly coupled orbitals tends to stay closely aligned with the 3-axis, and for some orbitals even slightly antialigned with the 1-axis. In contrast, moments of inertia above the average are expected for nuclei with Fermi levels in the lower parts of the major shells. Such a variation was actually found through the 82–126 neutron shell in the detailed calculations of Ref. .
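Two of the numbers quoted above are easy to reproduce. The sketch below first estimates the rigid-body moment of inertia as $`(2/5)Am_NR^2`$ with a leading-order deformation correction (the radius constant $`r_0=1.2`$ fm and the factor $`1+0.31\epsilon _2`$ are textbook assumptions, not taken from this paper), and then diagonalizes a cranked single-$`j`$ Routhian $`\kappa j_3^2-\omega _1j_1`$ to show why a strongly coupled orbital keeps its angular momentum along the 3-axis while a weakly coupled one aligns with the 1-axis, as in Fig. 2. The value of $`\kappa `$ is schematic.

```python
import numpy as np

HBARC, M_N = 197.327, 938.9                      # MeV fm, MeV/c^2

def rigid_inertia(A, eps2):
    """(2/5) A m R^2 with a leading-order deformation correction."""
    R = 1.2 * A ** (1.0 / 3.0)                   # fm
    return 0.4 * A * M_N * R ** 2 / HBARC ** 2 * (1.0 + 0.31 * eps2)

print(f"rigid-body value: {rigid_inertia(178, 0.23):.0f} MeV^-1")   # ~ 84

def j_matrices(j):
    """j1 and j3 in the |j m> basis, m = -j ... j."""
    m = np.arange(-j, j + 1.0)
    jp = np.diag(np.sqrt(j * (j + 1.0) - m[:-1] * (m[:-1] + 1.0)), -1)
    return 0.5 * (jp + jp.T), np.diag(m)

def j1_alignment(j, kappa, omega1):
    """<j1> in the lowest eigenstate of kappa*j3^2 - omega1*j1."""
    j1, j3 = j_matrices(j)
    _, v = np.linalg.eigh(kappa * j3 @ j3 - omega1 * j1)
    psi = v[:, 0]
    return psi @ j1 @ psi

# kappa > 0 puts m = +/-1/2 lowest (weakly coupled, h9/2-like);
# kappa < 0 puts m = +/-9/2 lowest (strongly coupled, high-K orbitals).
for kappa in (0.2, -0.2):
    print(kappa, [round(j1_alignment(4.5, kappa, w), 2) for w in (0.05, 0.2, 0.4)])
```

The weakly coupled case develops a large $`j_1`$ already at small $`\omega _1`$, while the strongly coupled case aligns only gradually, mirroring the pattern of Fig. 2.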
With increasing deformation, the shell structure, and hence its contribution to the moment of inertia, is progressively damped. Less pronounced deviations from the rigid-body value are therefore expected for superdeformed nuclei. It should be noted that the substantial deviations from the rigid-body moment of inertia seen in Fig. 1 occur at the calculated equilibrium shape of each configuration. A similar experience applies to the magnetic susceptibility of small metal clusters, which have a flat-bottom single-particle potential like that of atomic nuclei. The deviation from the rigid-body moment of inertia reflects a non-rigid flow of mass in the rotational states. Such intrinsic mass currents have been discussed for atomic nuclei by several authors, as well as for small metal clusters. The behavior of the $`K^\pi =16^+`$, $`39/2^+`$, $`22^{-}`$ and $`30^+`$ bands is well described in terms of a constant moment of inertia for each configuration, with a value of about 55 MeV<sup>-1</sup>. Such a constant moment of inertia corresponds to the familiar expression for the energy levels in a rotational band built on a strongly coupled intrinsic state, $`E(I)=(I(I+1)-K^2)/2𝒥`$, and it indicates a collective origin of the angular momentum with respect to the 1-axis. The $`K^\pi =45/2^{-}`$ and $`25^+`$ bands show a totally different behavior, with a large up-curvature of the function $`J(\omega )`$. Asymptotically, in the limit of large angular velocity, the moments of inertia approach values similar to those of the other bands. As discussed in Ref. , this behavior results from the presence of a $`1h_{9/2}`$ proton orbital in the configurations of the $`K^\pi =45/2^{-}`$ and $`25^+`$ bands. In fact, as the component $`\omega _1`$ of the angular velocity becomes finite, this weakly coupled orbital, which intrudes from the $`Z=82`$–$`126`$ spherical shell, immediately aligns its angular momentum with the 1-axis, thus making a significant contribution to the component $`J_1`$ of the total angular momentum on the 1-axis. The situation is illustrated in Fig. 2. The functions $`J_1(\omega _1)`$ actually calculated for these two bands are shown in Fig. 3. Corresponding empirical functions were extracted from the data by assuming, in close accordance with what is calculated, that $`J_3`$ is constant and equal to $`K`$, i.e. $$J_1=\sqrt{J^2-K^2},\quad \omega _1=\omega \sqrt{1-(K/J)^2}.$$ (3) In the empirical range of $`\omega _1`$, both the calculated and the measured functions are seen to be fairly linear, and extrapolating these parts of the curves to $`\omega _1=0`$ yields the common value $`J_1=2.8\pm 0.5`$ (cf. Fig. 3). In order to see how the behavior of the $`K^\pi =45/2^{-}`$ and $`25^+`$ bands seen in Fig. 1 may emerge from this picture, consider an idealized scenario where the $`1h_{9/2}`$ proton orbital makes a constant contribution $`i`$ to $`J_1`$, and all other orbitals together make a constant contribution to $`J_3`$ equal to $`K`$ and a contribution to $`J_1`$ equal to $`𝒥_R\omega _1`$, where $`𝒥_R`$ is a constant. (Such a schematic model is discussed in more detail in Sec. V C.) With $`J_1=𝒥_R\omega _1+i`$ and (3), we have $$\omega =\left(1-\frac{i}{\sqrt{J^2-K^2}}\right)\omega _{\text{sc}},\quad \omega _{\text{sc}}=\frac{J}{𝒥_R},$$ (4) whence $`\omega `$ is seen to become smaller than the frequency $`\omega _{\text{sc}}`$ for strong coupling ($`i=0`$). In the calculation, there is a gradual increase of the contribution $`i`$ to $`J_1`$ of the $`1h_{9/2}`$ proton orbital towards its maximum of 9/2.
Thus, the assumption above of a constant $`i`$ was too schematic. The calculated curves show a slight down-curvature due to the saturation of $`i`$. The absence of a similar down-curvature in the data might be the result of a counteracting non-linearity of the remaining, collective, part of $`J_1`$. In that case, the present calculation does not get this part quite right and overestimates the collective moment of inertia by about 5 MeV<sup>-1</sup>. Nevertheless, these considerations show that (i) the essential difference of behavior, induced by the alignment with the 1-axis of the angular momentum of the $`1h_{9/2}`$ proton, can be well understood, and (ii) the collective part $`𝒥_R`$ is about 55 MeV<sup>-1</sup>, as for the other bands. ## V Additional investigations The calculations for zero pairing, and their comparison with experimental data, constitute the principal outcome of this work. However, it is also instructive to investigate some finite-pairing effects and other model assumptions. ### A Static pairing Pairing is taken into account by including the pair field in the quasiparticle Routhian $$h^{}=h_{\text{def}}+\mathrm{\Delta }(P^++P)-\lambda N-\omega (j_1\mathrm{sin}\vartheta +j_3\mathrm{cos}\vartheta ),$$ (5) where $`P^+`$ is the monopole pair operator and $`N`$ is the particle number. In order to keep the notation simple, we do not distinguish between the proton and neutron parts of the pair field. The rotating deformed state is obtained by replacing the Slater determinant by the quasiparticle configuration $`|\omega \rangle `$, which is the eigenstate of (5). The vector $`\stackrel{}{J}`$ is equal to $`\langle \omega |\stackrel{}{j}|\omega \rangle `$ with this new state $`|\omega \rangle `$. The chemical potential $`\lambda `$ is fixed by requiring $`\langle \omega |N|\omega \rangle `$ to be equal to the actual particle number, and the pair gap $`\mathrm{\Delta }`$ by the self-consistency condition $`\mathrm{\Delta }=G\langle \omega |P|\omega \rangle `$. For $`\mathrm{\Delta }=0`$, this formalism is equivalent to the previous one. The pairing force constants $`G_n`$ and $`G_p`$ were determined by the condition that the pair gaps in the nuclear ground state should be equal to the empirical odd-even mass differences. It is well known from previous studies (for instance, Ref. ) that with increasing angular velocity the pair gaps and chemical potentials change their values essentially stepwise with the successive breaking of Cooper pairs. Since a detailed description of the paired state is not the concern of this paper, the chemical potentials and pair gaps were kept constant for each configuration as long as no pair breaking was encountered. The pair gaps determined at the band heads are listed in Table I. For most of the configurations, they are seen to vanish. Exceptions are the $`K^\pi =7^{-}`$ and $`15^+`$ states. These have a common neutron configuration with one broken Cooper pair, which leaves a reduced but finite neutron pair gap. The $`K^\pi =7^{-}`$ state furthermore has the ground-state proton configuration and hence the ground-state proton pair gap. The proton configuration of the $`K^\pi =15^+`$ state is found in the calculation to be just on the border of having a static proton pair field. Small variations of $`G_p`$ about the value obtained by adjustment to the odd-even mass difference in fact cause $`\mathrm{\Delta }_p`$ to vary between 0 and 0.5 MeV. For the calculations, we have chosen $`\mathrm{\Delta }_p=0`$, as also listed in Table I. This gives a good agreement with the measured function $`J(\omega )`$.
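The self-consistency loop just described is simple to make explicit. The sketch below iterates the gap equation $`\mathrm{\Delta }=G\langle P\rangle `$ for a schematic picket-fence spectrum of doubly degenerate levels; the spacing, pairing strength and filling are illustrative stand-ins, not the $`G_n`$, $`G_p`$ fitted to odd-even mass differences in the paper. For a half-filled symmetric spectrum the chemical potential sits at the midpoint, so the particle-number condition is satisfied automatically.

```python
import numpy as np

# Schematic BCS gap equation Delta = G * sum_k Delta/(2 E_k), with
# E_k = sqrt((e_k - lambda)^2 + Delta^2), for doubly degenerate levels.
e = 0.5 * np.arange(12)            # single-particle energies (MeV), made up
lam = e.mean()                     # chemical potential at half filling
G = 0.4                            # pairing strength (MeV), illustrative

Delta = 1.0                        # starting guess (MeV)
for _ in range(200):
    E = np.sqrt((e - lam) ** 2 + Delta ** 2)
    Delta_new = 0.5 * G * np.sum(Delta / E)    # Delta = G <P>
    if abs(Delta_new - Delta) < 1e-12:
        break
    Delta = Delta_new

v2 = 0.5 * (1.0 - (e - lam) / E)   # BCS occupation numbers
print(f"Delta = {Delta:.3f} MeV, <N> = {2.0 * v2.sum():.1f}")
```

Blocking levels near the Fermi surface (removing them from the sum) visibly reduces the converged gap, which is the mechanism invoked above for the high-$`K`$ configurations.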
Figure 4 shows the functions $`J(\omega )`$ calculated for the $`K^\pi =7^{-}`$ and $`15^+`$ bands. Both of them are seen to bend upwards near $`\omega =0.35`$ MeV. This is because, by breaking a Cooper pair, two neutrons in $`1i_{13/2}`$ orbitals align their angular momenta with the 1-axis. For $`\omega \gtrsim 0.4`$ MeV, a vanishing pair gap is calculated for this neutron configuration. Therefore, in the figure we let the curves calculated with the band-head neutron pair gap join, for $`\omega \gtrsim 0.4`$ MeV, those calculated with $`\mathrm{\Delta }_n=0`$. These are about 2 units below the measured curves in this range of $`\omega `$. We could not find a reason for the discrepancy. This pair breaking is of the type known as a $`BC`$ crossing (see, for example, Ref. ). As also seen from Fig. 4, no similar upbends arise in the case $`\mathrm{\Delta }_n=0`$. This conforms to the general experience that a static pair field is required for band crossings of the types $`AB`$, $`BC`$, etc. Thus, the presence of upbends in the data is evidence for a static neutron pair field in these bands. ### B Pair fluctuations Near the critical point of the vanishing of the static pair gap, large fluctuations of the pair field, known as dynamic pair correlations, are expected. Dynamic pair correlations are taken into account in an approximate way by the Lipkin-Nogami correction for the fluctuation of particle number in the BCS state (see Ref. and references therein). For several configurations, we made the Lipkin-Nogami calculation at the band head. The resulting Lipkin-Nogami pair gaps are also shown in Table I. With these gaps, $`J(\omega )`$ was calculated as in the case of static pairing (see Sec. V A), except that the chemical potentials were adjusted with the angular velocity so as to keep the correct expectation values of the proton and neutron numbers. The calculated functions $`J(\omega )`$ for the $`K^\pi =16^+`$ and $`25^+`$ bands shown in Fig. 5 are representative of the results. It is seen that, relative to the calculation without pairing, the pair fields produced by the Lipkin-Nogami pair gaps make only minor corrections to the angular momentum (of the order of 1 unit), which do not improve the agreement with experiment. Thus, pair fluctuations appear to be inessential for the explanation of the observed deviations from the rigid value of the moments of inertia at high values of $`K`$. This result may seem to be at variance with the investigation of low-$`K`$ bands in Refs. . There it was found that, at frequencies where the static pair gap is zero, the pair fluctuations reduce the angular momentum by 3–4 units in the yrast band of even-even nuclei. The different sensitivity to the pair correlations may be understood as follows. In order to generate the angular momentum along the 3-axis (high $`K`$), several pairs are broken. This blocks the affected single-particle states from taking part in the pair correlations. However, it is just the contribution of these particles near the Fermi surface which is most sensitive to the pair correlations. In the case of the yrast bands of the even-even nuclei, only one neutron pair ($`1i_{13/2}`$) is broken. Consequently these bands are more sensitive to the pair fluctuations. This argument is consistent with Refs. , where it was found that in bands with two broken pairs (odd-$`A`$ nuclei and negative-parity bands in even-even nuclei) the pair fluctuations reduce the angular momentum by only 1–2 units.
Hence, only the low-$`K`$ bands are suited to studying the influence of the pair fluctuations on the moments of inertia. ### C A particle-rotor model calculation It was seen in Sec. IV that the behavior of the $`K^\pi =45/2^{-}`$ and $`25^+`$ bands at low angular velocity is largely determined by a single proton in a $`1h_{9/2}`$ orbital. The behavior was qualitatively explained in terms of a particle-rotor model where all nucleons except the $`1h_{9/2}`$ proton are assumed to make a constant contribution to $`J_3`$ equal to $`K_R=K-\frac{1}{2}`$ and a contribution to $`J_1`$ equal to $`𝒥_R\omega _1`$, where $`𝒥_R`$ is a constant. This situation may be further analyzed by calculating the quantal states of this model. In particular, we address the question of whether the deviation between the experiment and the calculation in the upper panel of Fig. 3 is related to the violation of angular momentum conservation in the TAC model. A quantal treatment of the system of a particle coupled to a $`K_R\ne 0`$ rotor was given previously in Ref. . The coupling of the $`1h_{9/2}`$ proton to the deformed core is treated in a schematic way. The particle space is restricted to a multiplet of angular-momentum eigenstates with quantum number $`j=9/2`$, and $`h_{\text{def}}`$, acting on the single proton, is taken to be a quadratic function of $`j_3`$. The coefficient of this quadratic function is chosen so as to reproduce the splitting of the $`1h_{9/2}`$ proton level found for the full Hamiltonian $`h_{\text{def}}`$ at the deformations of the $`K^\pi =45/2^{-}`$ and $`25^+`$ band heads (see Sec. III). The particle-rotor problem can be treated in the semiclassical TAC approximation. The details are described in Ref. . The function $`J_1(\omega _1)`$ of the $`K^\pi =25^+`$ band thus calculated with $`𝒥_R=55`$ MeV<sup>-1</sup> is shown in the lower panel of Fig. 3. It is seen that the schematic model reproduces the result of the full TAC calculation, seen in the upper panel, very closely. The result of the exact quantal treatment of the same particle-rotor model is also shown in the lower panel of Fig. 3. In order to generate the plot, the quantal energies are treated like empirical ones (see Secs. II and IV). The quantal calculation conforms better to the data than the TAC approximation in producing a more linear function $`J_1(\omega _1)`$. However, extrapolating this function from the empirical range of $`\omega _1`$ to $`\omega _1=0`$ yields $`J_1=3.5`$, which is significantly larger than the empirical value $`J_1=2.8`$. The different behaviors of the quantal particle-rotor model and the TAC approximation to it arise essentially from replacing the recoil energy $`(j_1^2+j_2^2)/2𝒥_R`$ by $`j_1^2/2𝒥_R`$. While the former is approximately a constant, the latter acts as a potential that hinders the increase of $`j_1`$. Contrary to the quantal model, the cranking model was seen to reproduce the extrapolated value of $`J_1`$ found empirically for the $`K^\pi =45/2^{-}`$ and $`25^+`$ bands. Thus, the nuclear system does not seem to absorb the recoil angular momentum of the $`1h_{9/2}`$ proton into just a single degree of freedom, as assumed in the quantal particle-rotor model. The present study does not provide an answer to the interesting question: how can the experimental curve $`J_1(\omega _1)`$ be so strikingly linear while the alignment is far from being complete? ### D How important is tilting the cranking axis?
In the standard principal-axis cranking (PAC) model, $`\omega _3=0`$ is assumed. Thus, one obtains a function $`J_1(\omega _1)`$. A corresponding empirical function is extracted from the data by combining the TAC geometry with the assumption $`J_3=K=\text{constant}`$ for a rotational band with band-head angular-momentum quantum number $`K`$. What makes the essential difference between the PAC and TAC models is thus the term $`\omega _3j_3`$ in the Routhian (1) of the latter. This term violates the invariance under rotation by the angle $`180^{\circ}`$ about the 1-axis, whose eigenvalue is the “signature”. In the PAC model, a “favored” and an “unfavored” function $`J_1(\omega _1)`$, where the latter is the larger, are associated with a configuration with $`K\ne 0`$. These functions have opposite signature and correspond to two separate level sequences with $`\mathrm{\Delta }I=2`$. Derivatives $`dJ_1/d\omega _1`$ for the $`K^\pi =25^+`$ and $`30^+`$ bands calculated in both models are compared with the corresponding empirical data in Fig. 6. The derivative is seen to depend much more strongly on $`\omega _1`$ in the PAC model than in the TAC model. Furthermore, the PAC calculation shows a substantial signature splitting. Since neither of these features is seen in the data, it must be concluded that the term $`\omega _3j_3`$ in the Routhian (1) is significant for the description of these high-$`K`$ bands. The difference between the PAC and TAC results is larger for the $`K^\pi =30^+`$ band than for the $`25^+`$ band. This is due to the smaller deformation $`\epsilon _2`$ of the former. ## VI Conclusion It has been shown quantitatively how the moments of inertia in the zero-pairing limit may be substantially lower than the rigid-body value, indicating the presence of mass currents of quantal origin in the body-fixed frame of reference. Lower-than-rigid moments of inertia are both calculated and observed systematically for rotational bands in <sup>178</sup>Hf, <sup>178</sup>W and <sup>179</sup>W, where the neutron and proton Fermi levels are in the mid-to-upper portions of their respective shells. The analysis of a number of high-seniority bands shows that they behave as if the nuclei rotate in the unpaired state. The limited sensitivity of the calculated multi-quasiparticle rotational motion to pair gaps in the range 0–50% of their full value suggests that moments of inertia of high-$`K`$ bands may not be significantly affected by dynamic pair correlations. ###### Acknowledgements. This work is supported in part by the UK Engineering and Physical Sciences Research Council and by DOE grant DE-FG02-95ER40934.
no-problem/0002/nucl-th0002065.html
ar5iv
text
# Effect of Cluster Formation on Isospin Asymmetry in the Liquid-Gas Phase Transition Region ## Abstract Abstract: Nuclear matter within the liquid-gas phase transition region is investigated in a mean-field two-component Fermi-gas model. Following largely analytic considerations, it is shown that: (1) due to the density dependence of the asymmetry energy, some of the neutron excess from the high-density phase could be expelled into the low-density region; (2) formation of clusters in the gas phase tends to counteract this trend, making the gas phase more liquid-like and reducing the asymmetry in the gas phase. The flow of asymmetry between the spectator and midrapidity regions in reactions is discussed, and a possible inversion of the flow direction is indicated. preprint: Effect of Cluster Formation on Isospin Asymmetry in the Liquid-Gas Phase Transition Region One interesting possibility in heavy-ion collisions at intermediate energy is the occurrence of a liquid-gas phase transition. Many recent papers have addressed this possibility from different perspectives. Following an elementary consideration, it is easy to envisage a first-order phase transition in infinite nuclear matter. Müller and Serot first pointed out the importance of isospin for the liquid-gas phase transition. The additional isospin degree of freedom relaxes the system and makes the transition second order. Isospin observables can generally be used to extract a variety of information from heavy-ion collisions; see, for example, the review by Li et al. . Some recent data analyses have tried to explore isospin observables and to relate them to a possible occurrence of the phase transition. One focus of interest in connection with the phase transition is the midrapidity region in intermediate-energy heavy-ion collisions. In simulations of semiperipheral collisions, the formation of a low-density neck region is observed that likely contributes to the midrapidity emission. The low-density region in contact with high-density regions (the projectile and target) opens up the possibility of a liquid-gas phase coexistence and phase conversion. In a dynamical simulation with the Boltzmann-Uehling-Uhlenbeck equation, Sobotka et al. observed neutron enrichment in the low-density neck region. However, a high $`n/p`$ ratio (much higher than in the composite system) was found when counting only free nucleons in the neck region, i.e. excluding nucleons in clusters. The paper argued that the symmetric clusters (deuterons and alphas) contributed much to the enrichment of neutrons in the neck region. The specific results were purely numerical in nature. In this paper we discuss the isospin asymmetry in the phase transition region in a heavy-ion collision and the effect of clusters on that asymmetry. We first follow crude statistical arguments, and then construct a model illustrating the same ideas. In the general discussion, let us first allow no cluster formation in an isospin-equilibrated heavy-ion reaction. For a given temperature and density, a large isospin asymmetry will increase the total energy, which is unfavorable. In a dense phase, the extra energy for maintaining the same asymmetry will be much larger than in a dilute phase. Thus, if a dilute phase is in isospin equilibrium with a dense phase, the asymmetry in the dilute phase will be larger.
For the scenario of a neck region bordered by dense regions in a heavy-ion collision, the $`n/p`$ ratio in the liquid phase is close to that of the whole system, while the $`n/p`$ ratio in the gas phase could be much larger than in the composite system. Next, we consider letting clusters form in the gas phase. Then the available phase space for the liquid does not change, while the phase space for the gas phase increases. The added phase space, which corresponds to clusters, has an $`n/p`$ mean value lower than the old phase space for the gas. From a statistical equal-partition point of view, partition in the new liquid and gas phase space will drive the whole gas phase toward symmetry. If the percentage of clusters is small, however, then there is essentially not much change in the phase space distribution, and the asymmetry in the gas phase excluding clusters should not change much. Now let us build a simple model and show how isospin equilibrates between the two phases. We may start with a two-component non-interacting Fermi gas of neutrons and protons, and represent the interaction by an energy density consistent with the empirical nuclear equation of state (EOS). For simplicity we assume no temperature dependence for the interaction energy, and the Coulomb interaction is not considered here. The total free energy of the system is then a sum of the free energies of two non-interacting Fermi gases and of a density-dependent nuclear potential energy. For a single phase at temperature $`T=0`$ and density $`\rho `$, the free energy per nucleon may be written as: $$f=F/A=a_1\left(\rho /\rho _0\right)^{2/3}+a_2\left(\rho /\rho _0\right)+a_3\left(\rho /\rho _0\right)^{\sigma -1}+\left(a_4\left(\rho /\rho _0\right)^{2/3}+a_5\left(\rho /\rho _0\right)\right)y^2$$ (1) where $`F`$ is the total free energy, $`\rho _0`$ is the normal density and $`y`$ is the asymmetry parameter, $`y=(N-Z)/(N+Z)`$. For the moment, it is assumed that $`y\ll 1`$. The $`\left(\rho /\rho _0\right)^{2/3}`$ terms come from the non-interacting Fermi gas. The terms $`a_2\left(\rho /\rho _0\right)+a_3\left(\rho /\rho _0\right)^{\sigma -1}`$ are associated with a simple parameterization of the nuclear EOS. As we are only concerned with the isospin asymmetry in the liquid-gas phase transition, details of the parameterization do not affect our later discussion (though the exact numerical results may change). Given that the interaction generally contributes to the asymmetry energy, we adopt a simple parameterization in Eq. (1) for that contribution, of the form $`a_5\left(\rho /\rho _0\right)y^2`$. At $`T>0`$, the free energy cannot be written in a simple analytic form, but we can still expand the free energy per nucleon about $`y=0`$. This expansion yields the net free energy of the form: $$f=F/A=f_0+f_y=f_0+Cy^2$$ (2) where $`f_0`$ and $`C`$ are functions of both temperature and density. The second term on the r.h.s. of (2) is due to the isospin asymmetry and may be called the asymmetric free energy. Since our model is symmetric with respect to proton-neutron interchange, the expansion of the free energy contains no odd powers of $`y`$. In our numerical calculation, we use $`\rho _0=0.16`$ fm<sup>-3</sup>, $`\sigma =2.1612`$, $`a_2=-183.05`$ MeV, $`a_3=144.95`$ MeV, $`a_5=11.72`$ MeV, and at $`T=0`$, $`a_1=22.10`$ MeV and $`a_4=12.28`$ MeV ($`a_4+a_5\approx 25`$ MeV could be obtained from optical potential analysis or from the mass formula); note that these values give $`a_1+a_2+a_3=-16`$ MeV, the empirical binding energy per nucleon at saturation. Numerical analysis indicates that a quadratic form in $`y`$ is adequate up to almost $`y=1`$ (a similar conclusion has been reached in Ref. ).
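The kinetic part of the coefficient $`C`$ can be checked directly: at $`T=0`$ the Fermi-gas energy contains the factor $`\frac{1}{2}[(1+y)^{5/3}+(1-y)^{5/3}]\approx 1+\frac{5}{9}y^2`$, so $`C_{kin}=E_F/3`$, which reproduces the quoted $`a_4`$ at normal density. Below is a short NumPy sketch of this check and of $`C(\rho ,T=0)`$ from Eq. (2); the physical constants are standard values.

```python
import numpy as np

HBARC, M_N = 197.327, 938.9                      # MeV fm, MeV

# T = 0 two-component Fermi gas: C_kin = E_F/3 should equal a4 = 12.28 MeV.
rho0 = 0.16                                      # fm^-3
kF = (1.5 * np.pi ** 2 * rho0) ** (1.0 / 3.0)    # 4 spin-isospin states
EF = (HBARC * kF) ** 2 / (2.0 * M_N)
print(f"E_F = {EF:.1f} MeV  ->  a4 = E_F/3 = {EF / 3.0:.2f} MeV")

def C(u, a4=EF / 3.0, a5=11.72):
    """Asymmetry coefficient C(rho, T = 0) of Eq. (2); u = rho/rho0."""
    return a4 * u ** (2.0 / 3.0) + a5 * u

print([round(C(u), 1) for u in (0.1, 0.5, 1.0)])  # C grows with density
```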
Figure 1 shows the calculated asymmetry coefficient $`C`$ as a function of density and temperature in the Fermi gas. The general trend is that $`C`$ increases with increasing density and temperature. Therefore, at a given temperature, a dense phase will need more extra energy to maintain a given asymmetry than a dilute phase. Now we can consider a system that has two phases, liquid and gas, in contact with each other. The total free energy will be a sum of the free energies of the two phases. Let us assume that mechanical and thermal equilibrium has been achieved, so that we only need to consider the isospin equilibrium between the two phases. Keeping the total asymmetry of the system fixed, we vary the asymmetry in the liquid and gas phases to minimize the total free energy. This yields the equilibrium condition: $$C_ly_l=C_gy_g$$ (3) Here $`C_l`$ and $`C_g`$ denote the asymmetric free energy coefficients in the liquid phase and in the gas phase. At a given temperature, the liquid phase is denser than the gas phase, and since the coefficient $`C`$ is a monotonically increasing function of density, $`C_l>C_g`$. Thus, the asymmetry in the gas phase $`y_g`$ is always larger than that in the liquid phase $`y_l`$. To characterize the relative asymmetry of the two phases, we may define the isospin asymmetry amplification ratio: $$R=C_l/C_g=y_g/y_l$$ (4) Figure 2 displays $`R`$ vs. temperature for the phases in equilibrium. For our model calculation, the ratio $`R`$ always stays larger than $`1`$, which means that the gas phase will always have a higher neutron content than the liquid phase. Notably, the amplification ratio is independent of the net isospin asymmetry of the whole system. The ratio $`R`$ decreases as temperature increases, so that a large $`n/p`$ ratio in the gas phase is more easily reached at a low temperature. In the case of a nonequilibrium process, Eq. (3) is still of use due to its variational origin. If a local equilibrium assumption is met, i.e. statistical variables are still valid locally, then Eq. (3) tells us the direction of development of the system. A gradient of the asymmetry coefficient could result in a net flow of isospin asymmetry, which tries to restore the isospin equilibrium condition, Eq. (3). The flow direction is along the steepest decrease of the asymmetry coefficient $`C`$. If there is a gradient of density in a nonequilibrium system, and hence a gradient of the asymmetry coefficient (see Fig. 1), then there could be a flow of isospin asymmetry in the system, directed toward the low-density region. We know that, if the nucleon density is not too low, the mean-field description is quite good. But when the density is low, particle-particle correlations become important, and the validity of a mean-field description worsens. One way to incorporate particle correlations is to allow for the formation of clusters in the system (as is done in the BUU calculations ). Since clusters are in practice only important for the gas phase, we will allow clusters only there and no clusters in the liquid phase at all. To further simplify the discussion, we shall adopt a droplet model for the clusters (as used by Goodman and many others). We will assume that droplets have the same properties as the liquid phase, that is, the same density and asymmetry coefficient; for the present discussion we shall ignore the surface energy term.
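Condition (3), together with conservation of the total asymmetry, fixes the partition uniquely; a minimal sketch follows (the mass fraction in the gas and the coefficient ratio are made-up inputs, not values from Fig. 1):

```python
def equilibrium_split(y_tot, x_g, C_l, C_g):
    """Minimize x_l*C_l*y_l**2 + x_g*C_g*y_g**2 subject to
    x_l*y_l + x_g*y_g = y_tot; stationarity gives Eq. (3), C_l*y_l = C_g*y_g."""
    x_l = 1.0 - x_g
    y_l = y_tot / (x_l + x_g * C_l / C_g)
    return y_l, (C_l / C_g) * y_l              # (y_liquid, y_gas)

# dense phase twice as 'stiff' in isospin, 20% of the nucleons in the gas:
y_l, y_g = equilibrium_split(0.16, 0.2, C_l=2.0, C_g=1.0)
print(f"y_l = {y_l:.3f}, y_g = {y_g:.3f}")     # the gas is neutron-enriched
```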
Suppose the average size of the droplets is $`A`$, and the asymmetry in terms of average proton and neutron numbers in the droplets is $`y_d`$. The density of nucleons in clusters may be represented as $`\rho _d=\alpha \rho `$ and that of free nucleons as $`\rho _f=(1-\alpha )\rho `$, where $`\rho `$ is the density of the gas phase. The asymmetric free energy of the new (free nucleons + droplets) gas phase is: $$f_y=\left(1-\alpha \right)C_fy_f^2+\alpha C_dy_d^2.$$ (5) Here, the subscripts $`f`$ and $`d`$ refer to free nucleons and droplets, respectively. To get the isospin equilibrium condition, we can carry out a similar variation of the asymmetry parameters in the liquid, the free-nucleon gas, and the droplets, as before, obtaining: $$y_d=y_l,\quad \mathrm{and}\quad y_f/y_l=C_l/C_f.$$ (6) As the density of the gas phase is low, we may use the ideal-gas EOS $`p=\rho T`$ for clusters in a calculation. Adding clusters will necessarily decrease $`\rho _f`$ in order to satisfy the mechanical equilibrium condition. However, in Fig. 1 we can see that $`C`$ decreases only slightly as density decreases. To first order, we can take $`C_f\approx C_g`$, so that $`y_f`$ is nearly the same as in the old gas phase. Overall, the asymmetry of the new gas phase is: $$y=\alpha y_d+\left(1-\alpha \right)y_f.$$ (7) This may be compared to the asymmetry of the old gas phase, $`y_g\approx y_f`$, which is much larger than $`y_d=y_l`$. It is clear that the more droplets are added to the gas phase, the more it looks like the liquid phase. The amplification ratio now is: $$R=y/y_l=\alpha +\left(1-\alpha \right)C_l/C_f\approx \alpha +\left(1-\alpha \right)R_0,$$ (8) where $`R_0=C_l/C_g>1`$. The case $`\alpha =0`$ corresponds to no cluster formation in the reaction, and the isospin amplification ratio then reaches its maximum $`R_0`$. The gas phase then acquires the largest possible net asymmetry at a given temperature. On the other hand, $`\alpha =1.0`$ corresponds to a gas phase with only clusters and the same net asymmetry as the liquid phase. Figure 3 shows the decrease of the amplification factor $`R`$ as a function of the cluster concentration $`\alpha `$. As we add more clusters, the low-density gas phase will need more energy for the same isospin asymmetry, comparable with that of the liquid phase. As a result, the density and asymmetry in the clustered gas will both approach those in the liquid phase. Short of simple tools to estimate typical relative numbers of free neutrons, free protons and clusters in the gas phase in a reaction, we may seek help from experiments. Different regions of velocity space are generally believed to reveal characteristics of different sources, such as the midrapidity particle source for the low-density neck region. Several intermediate-energy experiments have pointed to a neutron-rich midrapidity source in peripheral heavy-ion collisions. Sobotka et al. measured neutron and $`{}^{4}He`$ emission from a midrapidity source formed in mid-central $`{}^{129}Xe+{}^{120}Sn`$ collisions at 40 MeV/nucleon. They compared their results with the results of the INDRA collaboration for the same system and gave a quantitative description of the midrapidity source. About half of the charged particles from this source are $`{}^{4}He`$ and only $`\sim 10\%`$ are free protons. Similar results have been obtained in other papers. The number of neutrons is approximately the same as the number of charged particles, or $`\sim 10`$ times the number of protons in this source.
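Equation (8) makes the dilution of the amplification by clusters explicit. The sketch below implements it, together with the conversion between $`N/Z`$ and $`y`$ needed for the comparison with data that follows; the value $`R_0=2`$ used in the example is purely illustrative.

```python
def y_from_nz(nz):
    """Asymmetry parameter y = (N - Z)/(N + Z) from an N/Z ratio."""
    return (nz - 1.0) / (nz + 1.0)

def R_clustered(alpha, R0):
    """Amplification ratio of Eq. (8), alpha + (1 - alpha)*R0,
    under the approximation C_f ~ C_g argued above."""
    return alpha + (1.0 - alpha) * R0

print([round(R_clustered(a, 2.0), 2) for a in (0.0, 0.4, 0.8)])  # 2.0 -> 1.2
print(round(y_from_nz(1.65) / y_from_nz(1.39), 2))               # ~ 1.5
```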
If we take the average cluster size in the midrapidity region as about 5, then the percentage of nucleons inside clusters will be $`\alpha \approx 80\%`$. The $`N/Z`$ ratio for the midrapidity source is found to be higher than for the full system. Thus the midrapidity source has $`(N/Z)_{mid}\approx 1.65`$ or $`y_{mid}\approx 0.25`$, while the system has $`(N/Z)_{sys}\approx 1.39`$ or $`y_{sys}\approx 0.16`$. The asymmetry amplification ratio is then $`R\approx 1.5`$. For a midrapidity source formed in peripheral heavy-ion collisions at similar energy, a fully consistent comparison of different experiments is not easy. Nevertheless, a comparison of the peripheral data from also suggests a high cluster concentration and a high $`n/p`$ ratio for the free neutrons and protons. In our model calculation, Fig. 3 shows that for a cluster concentration $`\alpha `$ as high as $`80\%`$, the asymmetry amplification ratio $`R`$ will decrease by more than a half compared with the nonclustered gas phase. This large decrease of $`R`$ strongly limits the isospin asymmetry in the gas phase when the asymmetry in the liquid phase is fixed. Sobotka et al. extracted the temperature of the midrapidity source as $`6`$–$`7`$ MeV. For this temperature and the cluster concentration $`\alpha \approx 80\%`$, we can read off from Fig. 3 the corresponding equilibrium value $`R\approx 1.9`$–$`2.1`$. This value is higher than the $`R\approx 1.5`$ extracted from the experiment, which means that the system only achieved partial isospin equilibrium and the asymmetry amplification in the gas phase did not reach its full value. While this kind of equilibrium consideration generally gives some limits on the importance of cluster formation for the isospin asymmetry in the liquid-gas phase transition region, the development of isospin asymmetry in a heavy-ion collision is essentially a nonequilibrium process which deserves more thorough investigation than can be accommodated in this letter, possibly incorporating simulations. So we shall only give some general discussion of the possible development of the isospin asymmetry in the system. Because of the transient nature of a heavy-ion collision, the development of isospin equilibrium could depend on two time scales. One time scale is for the separation of the midrapidity source from the remaining sources, and the other is for isospin equilibration. At high enough energy, the three sources separate quickly before isospin equilibration can set in between the sources. The isospin asymmetry is then determined by the reaction geometry and the isospin content of the target and projectile. Isospin equilibration and cluster formation operate only as post-reorganization processes, changing only the isospin asymmetry of free nucleons and clusters within individual sources. The large isospin asymmetry for free nucleons could be the result of clusterization in the low-density phase, with clusters taking over the role of the liquid phase, consistent with the arguments of Sobotka et al. . From our previous discussion, the ratio $`R`$ in Fig. 2 sets an upper limit to the asymmetry of the free nucleons in the midrapidity source. On the other hand, if the energy is low enough, partial isospin equilibrium will set in before the different sources separate from each other, and the reaction scenario becomes more complex. As the two heavy ions collide with each other, the initial compression of the participants produces a dense phase in the center, while the two spectators remain less dense. As the asymmetry coefficient for the dense phase is larger than for the less dense phase (cf. Fig.
1), at the interfaces between the two spectator regions and the participant region there could be a local gradient of the asymmetry coefficient, following the density gradient from the center out to the two spectators. From our arguments following Eq. (3), we know that an isospin asymmetry flow could then appear, directed out to the two spectators. As the compression stage ends, the central region begins to expand and the density drops; the asymmetry coefficient also drops as a result. When the gradient of the asymmetry coefficient changes direction, the flow of isospin asymmetry changes direction too. Cluster formation in the central region counteracts the decrease of the asymmetry coefficient, and thus delays the change of flow direction. Further development of the system separates the three sources, and the net isospin asymmetries of the different sources do not change after the separation. But clusterization still plays a role in changing the isospin asymmetry of free nucleons within individual sources. Since dynamical simulations suggest a much longer expansion time than compression time, we can expect the isospin asymmetry flow to the midrapidity region to dominate. This could give rise to an enhanced asymmetry in the midrapidity region. The experiments also suggest a neutron-rich midrapidity source, which is consistent with the present picture. In conclusion, we have investigated the isospin asymmetry in the nuclear liquid-gas phase-transition region. In the framework of a two-component Fermi gas with a parameterized interaction, under the assumption of isospin equilibrium, we found that a neutron enrichment in the gas phase is due to the density-dependent part of the asymmetry energy. Meeting the isospin equilibrium condition, Eq. (3), drives extra neutrons out to the low-density phase. The formation of clusters, which have an average asymmetry smaller than the gas phase, will make the gas phase more liquid-like and counteract the neutron enrichment in the gas phase. The $`{}^{4}He`$ clusters will be the most important due to their predominance in the neck region. Based on the isospin equilibrium requirement, an isospin asymmetry flow was suggested if there exists a local density gradient in heavy-ion collisions. Since the midrapidity region undergoes compression and expansion, we also suggested a possible change of the direction of the isospin asymmetry flow during the evolution of the system. ###### Acknowledgements. This work was partially supported by the National Science Foundation under Grant PHY-9605207.
no-problem/0002/astro-ph0002388.html
ar5iv
text
# Star Formation and Chemical Evolution of Lyman-Break Galaxies ## 1 Introduction The Lyman-break technique (e.g. Steidel, Pettini & Hamilton 1995) has now proved very successful in finding large numbers of star-forming galaxies at redshift $`z\sim 3`$ (e.g. Steidel et al. 1996, 1999b). The observed number density and clustering properties of Lyman-break galaxies (hereafter LBGs; Steidel et al. 1998; Giavalisco et al. 1998; Adelberger et al. 1998) are best explained by assuming that they are associated with the most massive haloes at $`z\sim 3`$ predicted in hierarchical models of structure formation (Mo & Fukugita 1996; Baugh, Cole & Frenk 1998; Mo, Mao & White 1998b; Coles et al. 1998; Governato et al. 1998; Jing 1998; Jing & Suto 1998; Katz et al. 1998; Kauffmann et al. 1998; Moscardini et al. 1998; Peacock et al. 1998; Wechsler et al. 1998). This assumption provides a framework for predicting a variety of other observations for the LBG population. Steidel et al. (1999b and references therein) gave a good summary of recent studies of this population, including the luminosity functions, luminosity densities, color distribution, star formation rates, clustering properties, and the differential evolution. Assuming that LBGs form when gas in dark haloes settles into rotationally supported discs or, in the case where the angular momentum of the gas is small, settles at the self-gravitating radius, Mo, Mao & White (1998b) predict sizes, kinematics, star formation rates and halo masses for LBGs, and find that the model predictions are consistent with the current (rather limited) observational data. Steidel et al. (1999a) suggest that the total integrated UV luminosity densities of LBGs are quite similar between redshifts 3 and 4, although the slope of their luminosity function might change substantially at the faint end. Furthermore, Steidel et al. (1999b) suggest that a “typical” LBG has a star formation rate of about $`65h_{50}^{-2}\mathrm{M}_{\odot}\mathrm{yr}^{-1}`$ for $`\mathrm{\Omega }_0=1`$ and that the star formation time scale is of the order of 1 Gyr, based on their values of E(B-V), as pointed out by Pettini et al. (1997b) after adopting the reddening law of Calzetti (1997). Recently, Friaca & Terlevich (1999) used their chemodynamical model to propose that an early stage (the first Gyr) of intense star formation in the evolution of massive spheroids could be identified with LBGs. However, Sawicki & Yee (1998) argued that LBGs could be very young stellar populations with ages less than 0.2 Gyr, based on the broadband optical and IR spectral energy distributions. This is also supported by the work of Ouchi & Yamada (1999) based on the expected sub-mm emission and dust properties. It is worth noting that the assumptions about the intrinsic LBG spectral shape and the reddening curve play important roles in these results. In this paper, we study how star formation and chemical enrichment may have proceeded in the LBG population. As we will demonstrate in Section 2, the observed star formation rate at $`z\sim 3`$ requires a self-regulating process to keep up the gas supply for a sufficiently long time. We will show (in Section 2) that such a process can be achieved by the balance between the energy feedback from star formation and gas cooling. Model predictions for the LBG population and further discussion of the results are presented in Section 3; a brief summary is given in Section 4.
As an illustration, we show theoretical results for a CDM model with cosmological density parameter $`\mathrm{\Omega }_0=0.3`$ and cosmological constant $`\mathrm{\Omega }_\mathrm{\Lambda }=0.7`$. The power spectrum is assumed to be that given in Bardeen et al. (1986), with shape parameter $`\mathrm{\Gamma }=0.2`$ and normalization $`\sigma _8=1.0`$. We denote the mass fraction in baryons by $`f_\mathrm{B}=\mathrm{\Omega }_\mathrm{B}/\mathrm{\Omega }_0`$, where $`\mathrm{\Omega }_\mathrm{B}`$ is the cosmic baryonic density parameter. According to cosmic nucleosynthesis, the currently favoured value of $`\mathrm{\Omega }_\mathrm{B}`$ is $`\mathrm{\Omega }_\mathrm{B}\approx 0.019h^{-2}`$ (Burles & Tytler 1998), where $`h`$ is the present Hubble constant in units of 100 $`\mathrm{km}\mathrm{s}^{-1}\mathrm{Mpc}^{-1}`$, and so $`f_\mathrm{B}\approx 0.063h^{-2}`$. Whenever a numerical value of $`h`$ is needed, we take $`h=0.7`$. We also define the parameter $`t_{\ast}`$ as the time scale for star formation in the LBG population throughout the paper. ## 2 Models ### 2.1 Galaxy Formation In this paper, we use the galaxy formation scenario described in Mo, Mao & White (1998a, hereafter MMWa) to model the LBG population. In this scenario, central galaxies are assumed to form in dark matter haloes when the collapse of protogalactic gas is halted either by its angular momentum, or by fragmentation as it becomes self-gravitating (see Mo, Mao & White 1998b, hereafter MMWb, for details). As described in MMWb, the observed properties of LBGs can be well reproduced if they are assumed to be the central galaxies formed in the most massive haloes with relatively small spins at $`z\sim 3`$. As in MMWb, we assume that gas in a dark halo initially settles into a disk with an exponential surface density profile. When the collapsing gas is arrested by its spin, the central gas surface density and the scale length of the exponential disk are $$\mathrm{\Sigma }_0\approx 380h\mathrm{M}_{\odot}\mathrm{pc}^{-2}\left(\frac{m_\mathrm{d}}{0.05}\right)\left(\frac{\lambda }{0.05}\right)^{-2}\left(\frac{V_c}{250\mathrm{k}\mathrm{m}\mathrm{s}^{-1}}\right)\left[\frac{H(z)}{H_0}\right],$$ (1) and $$R_\mathrm{d}\approx 8.8h^{-1}\mathrm{kpc}\left(\frac{\lambda }{0.05}\right)\left(\frac{V_c}{250\mathrm{k}\mathrm{m}\mathrm{s}^{-1}}\right)\left[\frac{H(z)}{H_0}\right]^{-1},$$ (2) where $`m_\mathrm{d}`$ is the fraction of the halo mass that settles into the disk, $`V_c`$ is the circular velocity of the halo, $`\lambda `$ is the dimensionless spin parameter, $`H(z)`$ is the Hubble constant at redshift $`z`$, and $`H_0`$ is its present value (see MMWa for details). Since $`H(z)`$ increases with $`z`$, for a given $`V_c`$ disks are less massive and smaller but have a higher surface density at higher redshift. When $`\lambda `$ is low and $`m_\mathrm{d}`$ is high, the collapsing gas will become self-gravitating and fragment to form stars before it settles into a rotationally supported disk. In this case, we take an effective spin $`\lambda =m_\mathrm{d}`$ in calculating $`\mathrm{\Sigma }_0`$ and $`R_\mathrm{d}`$. We take the empirical law of the star formation rate (SFR) from Kennicutt (1998) to model the star formation in high-redshift disks: $$\mathrm{\Sigma }_{\mathrm{SFR}}=a\left(\frac{\mathrm{\Sigma }_{\mathrm{gas}}}{\mathrm{M}_{\odot}\mathrm{pc}^{-2}}\right)^b\mathrm{M}_{\odot}\mathrm{yr}^{-1}\mathrm{pc}^{-2},$$ (3) where $$a=2.5\times 10^{-10},\quad b=1.4,$$ (4) respectively. Here $`\mathrm{\Sigma }_{\mathrm{SFR}}`$ is the SFR per unit area and $`\mathrm{\Sigma }_{\mathrm{gas}}`$ is the gas surface density.
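Combining equations (1)-(4) numerically for a fiducial halo at $`z=3`$ gives the order of magnitude of the star formation rates discussed below. In this sketch, $`H(z)/H_0`$ follows from the adopted cosmology, and integrating the Kennicutt law over an exponential disk gives the closed form $`2\pi a\mathrm{\Sigma }_0^bR_\mathrm{d}^2/b^2`$ (anticipating equation (7) of Section 2.2); the exponent signs are the ones reconstructed above.

```python
import numpy as np

def hubble_ratio(z, Om0=0.3, OmL=0.7):
    """H(z)/H0 for a flat LCDM cosmology."""
    return np.sqrt(Om0 * (1.0 + z) ** 3 + OmL)

def disk_sfr(Vc=250.0, lam=0.05, md=0.05, z=3.0, h=0.7):
    """Disk-integrated SFR (Msun/yr) from Eqs. (1)-(4)."""
    Hz = hubble_ratio(z)
    Sigma0 = 380.0 * h * (md / 0.05) * (lam / 0.05) ** -2 \
             * (Vc / 250.0) * Hz                        # Msun / pc^2
    Rd = 8.8e3 / h * (lam / 0.05) * (Vc / 250.0) / Hz   # pc
    a, b = 2.5e-10, 1.4
    return 2.0 * np.pi * a * Sigma0 ** b * Rd ** 2 / b ** 2

print(f"fiducial SFR at z = 3: {disk_sfr():.0f} Msun/yr")   # ~ 10^2
```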
Note that this star formation law was derived by averaging the star formation rate and cold gas density over large areas of spiral disks and over starburst regions (Kennicutt 1998). We will apply this law differentially on a disk and also take into account the Toomre instability criterion for star formation (Toomre 1964; see also Binney & Tremaine 1987). For a given cosmogonic model, the mass function of dark matter haloes at redshift $`z`$ can be estimated from the Press-Schechter formalism (Press & Schechter 1974): $$\mathrm{d}N=\sqrt{\frac{2}{\pi }}\frac{\rho _0}{M}\frac{\delta _c(z)}{\mathrm{\Delta }(R)}\left|\frac{\mathrm{d}\mathrm{ln}\mathrm{\Delta }(R)}{\mathrm{d}\mathrm{ln}M}\right|\mathrm{exp}\left[-\frac{\delta _c^2(z)}{2\mathrm{\Delta }^2(R)}\right]\frac{\mathrm{d}M}{M},$$ (5) where $`\delta _c(z)=\delta _c(0)(1+z)g(0)/g(z)`$ with $`g(z)`$ being the linear growth factor at $`z`$ and $`\delta _c(0)\approx 1.686`$, and $`\mathrm{\Delta }(R)`$ is the linear $`rms`$ mass fluctuation in top-hat windows of radius $`R`$, which is related to the halo mass $`M`$ by $`M=(4\pi /3)\overline{\rho }_0R^3`$, with $`\overline{\rho }_0`$ being the mean mass density of the universe at $`z=0`$. The halo mass $`M`$ is related to the halo circular velocity $`V_c`$ by $`M=V_c^3/[10GH(z)]`$. A detailed description of the PS formalism and the related cosmogonic issues can be found in the Appendix of MMWa. From the Press-Schechter formalism and the $`\lambda `$-distribution, which is a log-normal function with mean $`\overline{\mathrm{ln}\lambda }=\mathrm{ln}0.05`$ and dispersion $`\sigma _{\mathrm{ln}\lambda }=0.5`$ (see the equation in MMWa), we can generate Monte Carlo samples of the halo distribution in the $`V_c`$-$`\lambda `$ plane at a given redshift and, using the star formation law outlined above, assign a star formation rate to each halo. As in MMWb, we select LBGs as the galaxies with the highest star formation rates, such that the comoving number density of LBGs is equal to the observed value, $`N_{\mathrm{LBG}}=2.4\times 10^{-3}h^3\mathrm{Mpc}^{-3}`$ for the assumed cosmology at $`z=3`$, as given in Adelberger et al. (1998). It is worth noting here that our model selection of LBGs does not take dust extinction into account. This implies that the contribution of the dust is assumed to be uniform, but in fact it could be very different from galaxy to galaxy. So, our selection of LBGs may not have a one-to-one correspondence with the observed LBGs (Baugh et al. 1999), but the selection should be correct on average. ### 2.2 Cooling-Regulated Star Formation What regulates the amount of star-forming gas in a dark halo? In the standard hierarchical scenario of galaxy formation (e.g. White & Rees 1978; White & Frenk 1991, hereafter WF), gas in a dark matter halo is assumed to be shock heated to the virial temperature, $$T=2.24\times 10^6\mathrm{K}\left(\frac{V_c}{250\mathrm{k}\mathrm{m}\mathrm{s}^{-1}}\right)^2,$$ (6) as the halo collapses and virializes. The hot gas then cools and settles into the halo centre to form stars. As suggested in WF, the amount of cold gas available for star formation in a dark halo is limited either by gas infall or by gas cooling, depending on the mass of the halo. For the massive haloes ($`V_c\gtrsim 200\mathrm{k}\mathrm{m}\mathrm{s}^{-1}`$) we are interested in here, the gas cooling rate is smaller than the gas-infall rate, and the supply of star-forming gas is limited by cooling (see WF for details). It is therefore likely that gas cooling is the main process that constantly regulates the SFR in LBGs.
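A quick evaluation of equation (6) confirms that the haloes of interest sit inside the temperature window $`5\times 10^5\mathrm{K}\lesssim T\lesssim 2\times 10^7\mathrm{K}`$ for which the constant cooling function adopted below applies:

```python
def T_virial(Vc):
    """Virial temperature of Eq. (6); Vc in km/s, result in K."""
    return 2.24e6 * (Vc / 250.0) ** 2

for Vc in (200.0, 250.0, 300.0):
    print(f"Vc = {Vc:.0f} km/s  ->  T = {T_virial(Vc):.2e} K")
# all of order 10^6 K, well inside the cooling-function window quoted below
```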
To make a quantitative assessment, let us compare the different rates involved in the problem. Using equations (1)-(4) we can write the SFR as $$\dot{M}_{\ast }=\frac{2\pi a\mathrm{\Sigma }_0^bR_\mathrm{d}^2}{b^2}\approx 2.33\times 10^2h^{-0.6}\left(\frac{m_\mathrm{d}}{0.05}\right)^{1.4}\left(\frac{\lambda }{0.05}\right)^{-0.8}\left(\frac{V_c}{250\,\mathrm{km}\,\mathrm{s}^{-1}}\right)^{3.4}\left[\frac{H(z)}{H_0}\right]^{-0.6}\mathrm{M}_{\odot }\,\mathrm{yr}^{-1},$$ (7) where $`m_\mathrm{d}`$ now denotes the current gas content of the disk. The rate at which gas is consumed by star formation is therefore $$\dot{M}_{\mathrm{SFR}}=(1-R_\mathrm{r})\dot{M}_{\ast },$$ (8) where $`R_\mathrm{r}`$ is the fraction of stellar mass returned to the ISM; we take $`R_\mathrm{r}=0.3`$ for a Salpeter IMF (e.g. Madau et al. 1998). According to WF, the heating rate due to supernova explosions under the approximation of instantaneous recycling can be written as $$\frac{\mathrm{d}E}{\mathrm{d}t}=ϵ_0\dot{M}_{\ast }(700\,\mathrm{km}\,\mathrm{s}^{-1})^2,$$ (9) where $`ϵ_0`$ is an efficiency parameter which is still very uncertain. We take it to be $`0.02`$, as in WF. The rate at which gas is heated up (to the virial temperature) is therefore $$\dot{M}_{\mathrm{heat}}=\frac{0.8}{V_c^2}\frac{\mathrm{d}E}{\mathrm{d}t},$$ (10) which has the same form as equation (9) of Kauffmann (1996; see also Somerville 1997). At $`z=3`$ and for the cosmology considered here, this rate can be written as $$\dot{M}_{\mathrm{heat}}\approx 29.2h^{-0.6}\left(\frac{m_\mathrm{d}}{0.05}\right)^{1.4}\left(\frac{\lambda }{0.05}\right)^{-0.8}\left(\frac{V_c}{250\,\mathrm{km}\,\mathrm{s}^{-1}}\right)^{1.4}\left[\frac{H(z)}{H_0}\right]^{-0.6}\mathrm{M}_{\odot }\,\mathrm{yr}^{-1}.$$ (11) Comparing this equation with equations (7) and (8), we find that the rate of gas consumption due to star formation is much larger than the rate of gas heating for LBG haloes. Because LBGs are hosted by massive haloes with large circular velocities $`V_\mathrm{c}`$, these haloes are cooling dominated, as the detailed calculation below confirms. Following WF we define a mass cooling rate by $$\dot{M}_{\mathrm{cool}}=4\pi \rho _{\mathrm{gas}}(r_{\mathrm{cool}})r_{\mathrm{cool}}^2\frac{\mathrm{d}r_{\mathrm{cool}}}{\mathrm{d}t},$$ (12) where $`r_{\mathrm{cool}}`$ is the cooling radius and $`\rho _{\mathrm{gas}}`$ is the density profile of the hot gas in the halo. For simplicity, we assume that $`\rho _{\mathrm{gas}}(r)=f_\mathrm{B}V_c^2/(4\pi Gr^2)`$, and we define $`r_{\mathrm{cool}}`$ to be the radius at which the cooling time equals the age of the universe, which is similar to the time interval between major mergers of haloes (Lacey & Cole 1994). The halo mass density here is thus assumed to be isothermal, whereas MMWb used the NFW profile (Navarro, Frenk & White 1997). Because the difference between the cooling rates resulting from these two choices of density profile is small (Zhao et al. 1999), and the main goal here is to test whether cooling-regulated star formation is viable, the adoption of the isothermal profile does not affect the final result very much. Under this definition, gas within the cooling radius can cool effectively before the halo merges into a larger system, where it may be heated up to the new virial temperature if it has not been converted into stars.
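A direct numerical comparison of the two rates is simple; the sketch below is our own check, using the prefactors of eqs. (7) and (11) with $`h=0.7`$ and $`H(z)/H_0\approx 4.5`$ at $`z=3`$, and makes the dominance of gas consumption explicit.

```python
def mdot_star(m_d=0.05, lam=0.05, Vc=250.0, h=0.7, Hz=4.46):
    """SFR of eq. (7) in M_sun/yr."""
    return 2.33e2 * h**-0.6 * (m_d / 0.05)**1.4 \
           * (lam / 0.05)**-0.8 * (Vc / 250.0)**3.4 * Hz**-0.6

def mdot_heat(m_d=0.05, lam=0.05, Vc=250.0, h=0.7, Hz=4.46):
    """Supernova reheating rate of eq. (11) in M_sun/yr."""
    return 29.2 * h**-0.6 * (m_d / 0.05)**1.4 \
           * (lam / 0.05)**-0.8 * (Vc / 250.0)**1.4 * Hz**-0.6

for Vc in (200.0, 250.0, 300.0):
    print(Vc, mdot_star(Vc=Vc) / mdot_heat(Vc=Vc))
# The ratio is (233/29.2)(Vc/250)^2 ~ 5-12 for LBG-sized haloes, so
# star formation, not supernova heating, is the dominant gas sink.
```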
Using the cooling function given by Binney & Tremaine (1987), $`\mathrm{\Lambda }\approx 10^{-23}\,\mathrm{erg}\,\mathrm{cm}^3\,\mathrm{s}^{-1}`$ in the range $`5\times 10^5\,\mathrm{K}\lesssim T\lesssim 2\times 10^7\,\mathrm{K}`$ (and assuming gas of primordial composition), the mass cooling rate can then be written as $$\dot{M}_{\mathrm{cool}}\approx 49.8h^{1/2}\left(\frac{V_c}{250\,\mathrm{km}\,\mathrm{s}^{-1}}\right)^2\left(\frac{f_\mathrm{B}}{0.1}\right)^{3/2}\mathrm{M}_{\odot }\,\mathrm{yr}^{-1}.$$ (13) If $`\dot{M}_{\ast }`$ is smaller than $`\dot{M}_{\mathrm{cool}}`$, cold gas will accumulate in the halo centre and lead to a higher star formation rate. If, on the other hand, $`\dot{M}_{\ast }>\dot{M}_{\mathrm{cool}}`$, the amount of cold gas will be reduced by star formation and supernova heating, leading to a lower star formation rate. We therefore assume that there is a rough balance among these three rates: $$\dot{M}_{\mathrm{cool}}\approx \dot{M}_{\mathrm{heat}}+\left(1-R_\mathrm{r}\right)\dot{M}_{\ast }.$$ (14) It should be noted that cooling-regulated star formation is only a reasonable hypothesis, and the real situation must be much more complicated. For example, during a major merger of galactic haloes, the amount of gas that can cool must be much larger than that given by the cooling argument, and star formation may occur in a short burst (e.g. Mihos & Hernquist 1996). However, such bursts are not expected to dominate the observed LBG population, because of their brief lifetimes. Thus, star formation rates in the majority of LBGs are expected to be regulated by equation (14) on average. As shown in MMWb, to match the observed number density of LBGs, the median value of $`V_c`$ is about $`300\,\mathrm{km}\,\mathrm{s}^{-1}`$ in the present cosmogony. The typical star formation rate is of the order of $`100\,\mathrm{M}_{\odot }\,\mathrm{yr}^{-1}`$. This is not very different from the observed star formation rates, although dust extinction in the observations may be difficult to quantify. Figure 1 shows the value of $`m_\mathrm{d}`$ required by the balance condition (14) as a function of halo circular velocity, assuming $`f_\mathrm{B}=0.1`$ and that the left-hand side of equation (14) exactly equals the right-hand side. Results are shown for two choices of spin parameter, $`\lambda =0.035`$ and 0.08, corresponding to the 50 and 90 percent points of the $`\lambda `$ distribution for the LBG population (MMWb). As one can see, for the majority of LBG hosts, gas cooling indeed regulates the value of $`m_\mathrm{d}`$ to the range from 0.02 to 0.04. We can therefore reasonably choose $`m_\mathrm{d}=0.03`$ for the LBG population, as MMWb did. Since the cooling time is approximately the age of the universe at $`z\approx 3`$, cooling regulation ensures that star formation at the predicted rate can last over a large fraction of a Hubble time. ## 3 MODEL PREDICTIONS FOR THE LBG POPULATION Since the cooling regulation discussed above gives specific predictions of how star formation may have proceeded in LBGs, we now use this model to predict the properties of the LBG population. The condition in equation (14) implies that the star formation rate in a disk is equal to the rate of gas infall (due to a balance between cooling and heating).
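The balance condition can also be solved numerically for $`m_\mathrm{d}`$; the following self-contained sketch (bisection on eq. (14), with the prefactors of eqs. (7), (11) and (13), $`h=0.7`$ and $`H(z)/H_0\approx 4.5`$) reproduces the $`m_\mathrm{d}\approx 0.02`$-$`0.04`$ range quoted above.

```python
def rates(m_d, lam, Vc, h=0.7, Hz=4.46):
    """(SFR, heating, cooling) in M_sun/yr from eqs. (7), (11), (13)."""
    common = h**-0.6 * (m_d / 0.05)**1.4 * (lam / 0.05)**-0.8 * Hz**-0.6
    star = 2.33e2 * common * (Vc / 250.0)**3.4
    heat = 29.2 * common * (Vc / 250.0)**1.4
    cool = 49.8 * h**0.5 * (Vc / 250.0)**2          # f_B = 0.1
    return star, heat, cool

def balance_md(lam, Vc, R_r=0.3):
    lo, hi = 1e-4, 1.0
    for _ in range(60):                              # bisection on eq. (14)
        mid = 0.5 * (lo + hi)
        star, heat, cool = rates(mid, lam, Vc)
        lo, hi = (mid, hi) if heat + (1 - R_r) * star < cool else (lo, mid)
    return 0.5 * (lo + hi)

for lam in (0.035, 0.08):
    print(lam, [round(balance_md(lam, Vc), 3) for Vc in (250.0, 300.0, 350.0)])
```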
Thus the evolution of the gas in the disk of an LBG host halo is described by the standard chemical evolution model with infall rate equal to star formation rate; i.e., the newly infalling gas is distributed radially over the disk in an exponential form with scale length $`R_\mathrm{d}/b\approx 0.7R_\mathrm{d}`$, and the amount of reheated gas removed decreases with increasing radius, following the decreasing SFR. Under the instantaneous recycling approximation (Tinsley 1980), the gas metallicity $`Z`$ is given by $$Z=y(1-e^{-\nu })+Z_i,\qquad \nu =\frac{\mathrm{\Sigma }_{\mathrm{tot}}}{\mathrm{\Sigma }_{\mathrm{gas}}}-1,$$ (15) where $`Z_i`$ is the initial metallicity of the infalling gas, $`y`$ is the stellar chemical yield, $`\mathrm{\Sigma }_{\mathrm{gas}}`$ is the gas surface density (which is kept constant by gas infall) and $`\mathrm{\Sigma }_{\mathrm{tot}}`$ is the total mass surface density, which increases as star formation proceeds: $$\frac{\mathrm{d}\mathrm{\Sigma }_{\mathrm{tot}}}{\mathrm{d}t}=(1-R_\mathrm{r})\mathrm{\Sigma }_{\mathrm{SFR}}.$$ (16) Here the enrichment of the hot halo gas is not taken into account, because the mass of metals ejected into the halo by supernovae is small compared with that of the primordial gas. ### 3.1 Individual Objects Figure 2 shows the star formation rate as a function of halo circular velocity $`V_c`$ and spin parameter $`\lambda `$. As expected, the predicted SFR increases with $`V_c`$ but decreases with $`\lambda `$. As we can see from the figure, if we define systems with $`\mathrm{SFR}\gtrsim 40\,\mathrm{M}_{\odot }\,\mathrm{yr}^{-1}`$ (which matches the SFRs of the observed LBG population) to be LBGs, the majority of their host haloes must have $`V_c\gtrsim 200\,\mathrm{km}\,\mathrm{s}^{-1}`$ and hence be cooling dominated. This result is the same as that obtained by MMWb from the observed number density and clustering of LBGs; the star formation rate based on the cooling argument is thus also consistent with the observed number density and clustering. Because the SFR is higher in a system with smaller $`\lambda `$, the LBG population is biased towards haloes with small spins, but given the relatively narrow $`\lambda `$ distribution, this bias is not very strong. The predicted metallicity gradients of individual disks are shown in Figure 3 for two choices of the star formation time scale, $`t_{\ast }=0.5`$ Gyr and 1 Gyr, where we assume that $`y=Z_{\odot }`$ and $`Z_\mathrm{i}=0`$ in order to make the predictions easy to compare with observations. The metallicity gradients are negative in all cases. When radius is measured in disk scale lengths, the predicted metallicity depends weakly on $`V_c`$ but strongly on $`\lambda `$, and is higher for a longer star formation time. As one can see from equation (15), the largest metallicity in the model is $`Z=Z_i+y`$. This metallicity can be achieved in the inner part of compact disks (with small $`\lambda `$) when the star formation time $`t_{\ast }\gtrsim 1`$ Gyr. The metallicity drops by a factor of $`\approx 2`$ from its central value at $`R\approx 3R_\mathrm{d}`$. ### 3.2 LBG Population Since the distributions of haloes with respect to $`V_c`$ and $`\lambda `$ are known, we can generate Monte-Carlo samples of the halo distribution in the $`V_c`$-$`\lambda `$ plane at any given redshift. We can then use the galaxy formation model (MMWb) discussed above to transform the halo population into an LBG population, again selecting the objects with the highest SFRs as outlined in Sec. 2. We define the typical metallicity of a galaxy as the one at its effective radius.
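Before turning to the population distributions, the enrichment history implied by eqs. (15)-(16) can be sketched in a few lines. Because $`\mathrm{\Sigma }_{\mathrm{gas}}`$ is held constant by infall, eq. (16) integrates in closed form; the surface density used below is an illustrative central value, not a model prediction.

```python
import math

def metallicity(t_yr, Sigma_gas=1.0e3, y=1.0, Z_i=0.0, R_r=0.3,
                a=2.5e-10, b=1.4):
    """Gas metallicity after time t_yr, from eqs. (15)-(16)."""
    Sigma_tot = Sigma_gas + (1 - R_r) * a * Sigma_gas**b * t_yr  # eq. (16)
    nu = Sigma_tot / Sigma_gas - 1.0
    return Z_i + y * (1.0 - math.exp(-nu))                       # eq. (15)

for t in (0.5e9, 1.0e9):
    print(f"t_* = {t/1e9:.1f} Gyr -> (Z - Z_i)/y = {metallicity(t):.2f}")
# The saturation of Z at Z_i + y for long star formation times is
# immediate from the exponential in eq. (15).
```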
Figure 4 shows the distribution of this metallicity for the two choices of star formation time, $`t_{\ast }=0.5`$ Gyr and 1 Gyr. As in Figure 3, we have assumed that $`y=Z_{\odot }`$ and $`Z_\mathrm{i}=0`$ in order to make the predictions easy to compare with observations. The median values of $`(Z-Z_i)/y`$ are 0.60 and 0.84 for $`t_{\ast }=0.5`$ Gyr and 1 Gyr, respectively. The sharp truncation at $`(Z-Z_i)/y=1`$ is due to the fact that this quantity has a maximum value of 1 in the present chemical evolution model. It can be inferred from Figure 3 that the range in $`(Z-Z_i)/y`$ decreases with increasing star formation time. Thus, if gas infall lasts for a long enough time, the distribution in $`(Z-Z_i)/y`$ will become very narrow near 1, and all LBGs will have metallicity $`Z=Z_i+y`$. According to the work of Tinsley (1980) and Maeder (1992), the stellar yield $`y`$ is of the order of $`Z_{\odot }`$ for the Salpeter IMF. If we adopt a stellar yield $`y\approx 0.5Z_{\odot }`$ and $`Z_i=0.01Z_{\odot }`$, and if LBGs are not short bursts (e.g. $`t_{\ast }\gtrsim 0.5`$ Gyr), then their metallicity will be $`Z\approx 0.2Z_{\odot }`$, which is similar to that proposed by Pettini (1999). The predicted distribution of effective radii for the LBG population is shown in Figure 5. The distribution is similar to that of MMWb. The predicted range is $`1.0\lesssim R_{\mathrm{eff}}\lesssim 5.0h^{-1}\,\mathrm{kpc}`$, with a median value of 2.5 $`h^{-1}`$ kpc. Note that the effective radii in the cooling-regulated model are independent of the star formation time $`t_{\ast }`$ and of $`m_\mathrm{d}`$. The model prediction is in agreement with the observational results of Pettini et al. (1998), Lowenthal et al. (1997) and Giavalisco et al. (1996) mentioned above. The predicted SFR distribution of LBGs, shown in Figure 6, also resembles the prediction of MMWb, apart from slight differences. The median value is 180 $`\mathrm{M}_{\odot }\,\mathrm{yr}^{-1}`$, and the distribution spans the range from 100 to 500 $`\mathrm{M}_{\odot }\,\mathrm{yr}^{-1}`$. To compare with observations, we have to take into account the effect of dust. If we apply an average factor of 3 for dust extinction, the predictions closely match the values derived from infrared observations by Pettini et al. (1998), although there might exist rare LBGs with very high SFR. ### 3.3 Contribution To The Soft X-ray and UV Background Since the virial temperatures of LBG haloes are quite high, in the range $`10^6`$–$`10^7`$ K, significant soft X-ray and hard UV emission may be produced as the halo hot gas cools. It is therefore interesting to examine whether the LBG population can make a substantial contribution to the soft X-ray and UV backgrounds. The dominant cooling mechanism for hot gas with temperature $`\gtrsim 10^6`$ K is thermal bremsstrahlung. The bremsstrahlung emissivity is given by (e.g., Peebles 1993) $$j_\nu =5.4\times 10^{-39}n_e^2T^{-1/2}e^{-h\nu /kT}\,\mathrm{erg}\,\mathrm{cm}^{-3}\,\mathrm{s}^{-1}\,\mathrm{ster}^{-1}\,\mathrm{Hz}^{-1},$$ (17) where $`n_e`$ (in $`\mathrm{cm}^{-3}`$) is the electron density and $`T`$ (in K) is the temperature given by equation (6). The total power emitted per unit volume is $$J=1.42\times 10^{-27}T^{1/2}n_e^2\,\mathrm{erg}\,\mathrm{cm}^{-3}\,\mathrm{s}^{-1}.$$ (18) We write the total luminosity $`L_\mathrm{b}`$ in thermal bremsstrahlung as $$L_\mathrm{b}=\beta \dot{M}_{\mathrm{cool}}V_c^2,$$ (19) and we take $`\beta =2.5`$ here, as in WF, so that $`L_\mathrm{b}`$ is equal to the initial thermal energy in the cooling gas.
Note that the value of $`\beta `$ is quite uncertain, because it depends on the detailed density and temperature profiles of the hot gas. Substituting equation (13) into the above equation, we obtain the total soft X-ray luminosity of an LBG, $$L_{\mathrm{sx}}(V_c)\approx 4.1\times 10^{40}f_{\mathrm{soft}}\left(\frac{V_c}{250\,\mathrm{km}\,\mathrm{s}^{-1}}\right)^4\left(\frac{f_\mathrm{B}}{0.1}\right)^{3/2}\mathrm{erg}\,\mathrm{s}^{-1},$$ (20) where $$f_{\mathrm{soft}}=\frac{1}{kT}\int _{0.5(1+z)}^{2(1+z)}e^{-E/kT}dE$$ (21) is the fraction of the total energy that falls into the ROSAT soft X-ray (0.5-2 keV) band. The contribution of the LBG population to the soft X-ray background is then $$\rho _{\mathrm{sx}}=\int dV_c\int dV_{\mathrm{com}}\frac{n(z)L_{\mathrm{sx}}}{4\pi d_L^2}\approx 5.7\times 10^{-8}\left(\frac{f_\mathrm{B}}{0.1}\right)^{3/2}\mathrm{erg}\,\mathrm{s}^{-1}\,\mathrm{cm}^{-2},$$ (22) where $`n(z)`$ is the comoving number density of LBG haloes as a function of redshift $`z`$, $`dV_{\mathrm{com}}`$ is the differential comoving volume between $`z`$ and $`z+dz`$, and $`d_L`$ is the luminosity distance. The integral over $`V_\mathrm{c}`$ sums over all selected LBGs, i.e. over the objects with the highest SFRs. We have integrated over the redshift range from 3 to 4, where the number density of LBGs is nearly constant (Steidel 1998a,b). This contribution should be compared with the value derived from the ROSAT observations (Hasinger et al. 1998) in the 0.5-2 keV band, $$\rho _{\mathrm{sx}}\approx 2.4\times 10^{-7}\,\mathrm{erg}\,\mathrm{s}^{-1}\,\mathrm{cm}^{-2}.$$ (23) As we can see, the soft X-ray contribution from LBGs could be a substantial fraction (about 20%) of the total soft X-ray background. Similarly we can calculate the contribution of LBGs to the UV background at $`z=3`$. Evaluating the UV background at 4 Ryd (1 Ryd = 13.6 eV) by a nearly identical procedure, we find that $$i_{4\mathrm{R}\mathrm{y}\mathrm{d}}\approx 2.4\times 10^{-24}\left(\frac{f_\mathrm{B}}{0.1}\right)^{3/2}\mathrm{erg}\,\mathrm{s}^{-1}\,\mathrm{cm}^{-2}\,\mathrm{Hz}^{-1}\,\mathrm{ster}^{-1},$$ (24) which is much smaller than the UV background from AGNs, $`i_{4\mathrm{R}\mathrm{y}\mathrm{d}}\approx 10^{-22}\,\mathrm{erg}\,\mathrm{s}^{-1}\,\mathrm{cm}^{-2}\,\mathrm{Hz}^{-1}\,\mathrm{ster}^{-1}`$ (e.g. Miralda-Escude & Ostriker 1990). ### 3.4 Contribution to the Total Metals Based on recent observational results on the cosmic star formation history, Pettini (1999) predicted the total mass of metals produced by $`z=2.5`$. Combining the contributions of all observed populations, he argued that there seems to be a very serious “missing metal” problem: the predicted total is much higher than the observed one. It is therefore interesting to evaluate the total metals produced by LBGs in our model. Using our selection of LBGs as the galaxies with the highest SFRs and the chemical evolution model of Sec. 3.2, we can calculate the total metal density produced by the LBG population at $`z=3`$ from their observed comoving number density, $`N_{\mathrm{LBG}}=2.4\times 10^{-3}h^3\,\mathrm{Mpc}^{-3}`$ for the assumed cosmology (Adelberger et al. 1998). Defining $`\mathrm{\Omega }_\mathrm{Z}`$ as the metal density relative to the critical density, we find that $`\mathrm{\Omega }_\mathrm{Z}`$ of LBGs is $`0.19\mathrm{\Omega }_\mathrm{B}\times y`$ and $`0.29\mathrm{\Omega }_\mathrm{B}\times y`$ for star formation times of 0.5 Gyr and 1 Gyr, respectively, where $`y`$ is the stellar yield as above. Because the virial temperatures of LBG haloes are very high, a significant fraction of these metals should be in the hot phase.
Comparing our results with the value estimated by Pettini (1999), $`0.08\mathrm{\Omega }_\mathrm{B}\times Z_{\odot }`$ (with the cosmogony taken into account), we find that there is no “missing metal” problem in our model. ### 3.5 LBGs and Damped Lyman-Alpha Systems Damped Lyman-alpha systems (DLSs) are another population of objects that can be observed at redshifts similar to those of LBGs. The DLSs are selected according to their high HI column density ($`>10^{20.3}\,\mathrm{cm}^{-2}`$), and are believed to be either high-redshift thick disk galaxies (Prochaska & Wolfe 1998) or merging protogalactic clumps (Haehnelt, Steinmetz & Rauch 1998). In either case, to match the observed abundance of DLSs, most DLSs should have circular velocities between $`50\,\mathrm{km}\,\mathrm{s}^{-1}`$ and $`200\,\mathrm{km}\,\mathrm{s}^{-1}`$, much smaller than the median circular velocity of LBGs ($`\approx 300\,\mathrm{km}\,\mathrm{s}^{-1}`$). Based on the PS formalism (equation (5)) and the disk galaxy formation scenario suggested by MMWa (equations (1) and (2)), we can estimate, taking random inclinations into account, that the absorbing cross-section contributed by LBGs (again selected as the galaxies with the highest SFRs) amounts to only about 5% of the total absorption cross-section. This means that only a very small fraction of DLSs can be identified as LBGs. The physical connection between LBGs and DLSs is still unclear, although the recent observation of Moller & Warren (1998) using $`HST`$ indicates that some DLSs could be associated with LBGs. In Figure 7, we show the predicted metallicity distribution for the subset of DLSs which can be observed as LBGs. Again, we have assumed that $`y=Z_{\odot }`$ and $`Z_\mathrm{i}=0`$ to make the predictions easier to compare with observations. As can be seen, these DLSs generally have lower metallicity than LBGs, because they are biased towards the outer regions of the host galaxies, where the star formation activity is reduced. Notice, however, that the metallicity of these DLSs could still be higher than that of most DLSs at the same redshift, which typically have metallicity $`\sim 0.1Z_{\odot }`$ (Pettini et al. 1997a). ## 4 SUMMARY In this paper, we have examined the star formation and chemical enrichment in Lyman break galaxies, assuming them to be the central galaxies of massive haloes at $`z\approx 3`$ and using simple chemical evolution models. We found that gas cooling in dark haloes provides a natural process which regulates the amount of star-forming gas. The predicted star formation rates and effective radii are consistent with observations. The metallicity of the gas associated with an LBG is roughly equal to the chemical yield, of the order of $`Z_{\odot }`$ for a Salpeter IMF. Because of the relatively long star-formation time, the colours of these galaxies should be redder than those of short starbursts. It is not clear whether this prediction is consistent with the current, rather limited, observations, because the interpretation of the observational data depends strongly on the adopted dust reddening. Stringent constraints can be obtained when the full spectral information of the LBG population is carefully analyzed. The model predicts a marked radial metallicity gradient in an LBG, with the gas in the outer region having lower metallicity. As a result, the metallicities of the damped Lyman-alpha absorption systems expected from the LBG population are lower than those of the LBGs themselves, although high metallicity is expected for the small number of sightlines going through the central regions of an LBG.
At the same time, the contribution of our model LBGs to the total metal budget is roughly consistent with that inferred from the observed cosmic star formation history; i.e., there may be no so-called “missing metal” problem, although more than half of the metals could be in the hot phase. Finally, a prediction of our model is that LBG haloes are filled with hot gas. As a result, these galaxies may make a non-negligible contribution to the soft X-ray background. The contribution of LBGs to the ionizing UV background is found to be small. There are two basic assumptions in our work. One is that the LBG population is in one-to-one association with the most massive haloes generated from the PS formalism, as in MMWb; the other is that the timescale of star formation for the LBG population is of the order of 1 Gyr, as suggested by Steidel et al. (1999a,b, 1995). However, Baugh et al. (1999) have recently argued that the clustering properties of LBGs predicted under the first, simple assumption are in conflict with the results of more detailed semi-analytic models. Likewise, the second assumption leads to difficulty in reproducing the redshift evolution of bright galaxies (Kolatt et al. 1999). More detailed modelling by Somerville (1997) suggests that collisional starbursts could be an important effect in understanding LBGs. Further observations are therefore required to investigate the intrinsic properties of LBGs. ## Acknowledgments This project is partly supported by the Chinese National Natural Foundation. I thank Dr. S. Mao, Dr. H. J. Mo and Prof. S. D. M. White for detailed discussions, and the anonymous referee for useful help.
# Hidden Source of High-Energy Neutrinos in Collapsing Galactic Nucleus ## 1 Introduction: Hidden Sources High energy (HE) neutrino radiation from astrophysical sources is accompanied by other types of radiation, most notably by HE gamma-radiation. This HE gamma-radiation can be used to put an upper limit on the neutrino flux emitted from a source. For example, if neutrinos are produced due to the interaction of HE protons with low energy photons in extragalactic space or in sources transparent to gamma-radiation, the upper limit on the diffuse neutrino flux $`I_\nu (E)`$ can be derived from the e-m cascade radiation. This radiation is produced in collisions with photons of the microwave radiation $`\gamma _{bb}`$, such as $`\gamma +\gamma _{bb}\to e^++e^{-}`$, $`e+\gamma _{bb}\to e^{}+\gamma ^{}`$ etc. These cascade processes transfer the energy density released in high energy photons, $`\omega _\gamma `$, into the energy density of the remnant cascade photons, $`\omega _{cas}`$. These photons get into the observed energy range $`100`$ MeV–$`10`$ GeV, and their energy density is limited by recent EGRET observations as $`\omega _{cas}\lesssim 2\times 10^{-6}\,\mathrm{eV}\,\mathrm{cm}^{-3}`$. Introducing the energy density of neutrinos with individual energies higher than $`E`$, $`\omega _\nu (>E)`$, it is easy to obtain the following chain of inequalities (reading from left to right): $$\omega _{cas}>\omega _\nu (>E)=\frac{4\pi }{c}\int _E^{\infty }EI_\nu (E)dE>\frac{4\pi }{c}E\int _E^{\infty }I_\nu (E)dE=\frac{4\pi }{c}EI_\nu (>E).$$ (1) Now the upper limit on the integral HE neutrino flux can be written down as $$I_\nu (>E)<\frac{c}{4\pi }\frac{\omega _{cas}}{E}=4.8\times 10^3E_{eV}^{-1}\text{ cm}^{-2}\text{s}^{-1}\text{sr}^{-1}.$$ (2) However, there can be sources where the accompanying electromagnetic radiation, such as gamma and X-rays, is absorbed. They are called “hidden sources”. Several models of hidden sources have been discussed in the literature. * A Young SN Shell during the time $`t_\nu \sim 10^3`$–$`10^4`$ s is opaque to all radiation but neutrinos. * A Thorne-Zytkow Star, a binary with a pulsar submerged into a red giant, can emit HE neutrinos while all kinds of e-m radiation are absorbed by the red giant component. * A Cocooned Massive Black Hole (MBH) in AGN is an example of an AGN as a hidden source: e-m radiation is absorbed in a cocoon around the massive black hole. * An AGN with a Standing Shock in the vicinity of a MBH can produce a large flux of HE neutrinos with relatively weak X-ray radiation. In this paper we propose a new model of a hidden source which can operate in a galactic nucleus at the pre-AGN phase, i.e. prior to MBH formation in it. The MBH in AGN is formed through the dynamical evolution of a central stellar cluster, resulting in a secular contraction of the cluster and its final collapse. The first stage of this evolution is accompanied by collisions and destruction of normal stars in the evolving cluster, when the virial velocities of the constituent stars become high enough. The compact stars (neutron stars and black holes) survive this stage, and their population continues to contract, being surrounded by the massive envelope composed of the gas from the destroyed normal stars. The pre-AGN phase corresponds to a nearly collapsing central cluster of compact stars in the galactic nucleus. Repeating fireballs produced by the continuing collisions of compact stars in this very dense cluster result in the formation of a rarefied cavity in the massive gas envelope.
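The numerical content of the bound (2) is easy to check; the short sketch below is our own verification, using $`\omega _{cas}=2\times 10^{-6}\,\mathrm{eV}\,\mathrm{cm}^{-3}`$, and it recovers the quoted prefactor.

```python
import math

c = 3.0e10                   # cm/s
omega_cas = 2.0e-6           # eV/cm^3, EGRET cascade limit
for E_eV in (1.0, 1.0e6, 1.0e12):
    I_max = (c / (4.0 * math.pi)) * omega_cas / E_eV   # eq. (2)
    print(f"E > {E_eV:.0e} eV:  I_nu < {I_max:.1e} cm^-2 s^-1 sr^-1")
# At E = 1 eV this gives 4.8e3, the prefactor of eq. (2); the bound
# scales as 1/E toward higher energies.
```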
Particles accelerated in this cavity interact with the gas in the envelope and produce HE neutrinos. The accompanying gamma-radiation can be fully absorbed in the case of a thick envelope (matter depth $`X_{env}\sim 10^4\,\mathrm{g}\,\mathrm{cm}^{-2}`$). The proposed source is short-lived (lifetime $`t_s\sim 10`$ years) and very powerful: the neutrino luminosity exceeds the Eddington limit for e-m radiation. ## 2 The Model We consider in the following the basic features of the formation of a short-lived, extremely powerful hidden source of HE neutrinos in the process of dynamical evolution of the central stellar cluster in a typical galactic nucleus. ### 2.1 Dynamical Evolution of Galactic Nucleus The dynamical evolution of dense central stellar clusters in galactic nuclei is accompanied by a secular growth of the velocity dispersion $`v`$ of the constituent stars or, equivalently, by the growth of the central gravitational potential. This process is terminated by the formation of the MBH, when the velocity dispersion of the stars approaches the speed of light (see for a review e.g. and references therein). On its way to MBH formation a dense galactic nucleus inevitably proceeds through the stellar collision phase of evolution, when most normal stars in the cluster are disrupted in mutual collisions. The necessary condition for the collisional destruction of normal stars with mass $`m_{\ast }`$ and radius $`r_{\ast }`$ in a cluster of identical stars with velocity dispersion $`v`$ is $$v>v_p,$$ (3) where $$v_p=\left(\frac{2Gm_{\ast }}{r_{\ast }}\right)^{1/2}\approx 6.2\times 10^2\left(\frac{m_{\ast }}{\mathrm{M}_{\odot }}\right)^{1/2}\left(\frac{r_{\ast }}{R_{\odot }}\right)^{-1/2}\text{ km s}^{-1}$$ (4) is the escape (parabolic) velocity from the surface of a constituent normal star. Under the inequality (3), the kinetic energy of a colliding star is in general greater than its gravitational binding energy. If $`v>v_p`$, the normal stars are eventually disrupted in mutual collisions or in collisions with the extremely compact stellar remnants, i.e. with neutron stars (NSs) or stellar mass black holes. Only these compact stellar remnants survive through the stellar-destruction phase of evolution ($`v=v_p`$) and form a self-gravitating core. We shall refer for simplicity to this core as the NS cluster. Meanwhile the remnants of the disrupted normal stars form a gravitationally bound massive gas envelope in which the NS cluster is submerged. The virial radius of this envelope is $$R_{env}=\frac{GM_{env}}{2v_p^2}=\frac{1}{4}\frac{M_{env}}{m_{\ast }}r_{\ast }\approx 0.56M_8\left(\frac{m_{\ast }}{\mathrm{M}_{\odot }}\right)^{-1}\left(\frac{r_{\ast }}{R_{\odot }}\right)\text{ pc},$$ (5) where $`M_{env}=10^8M_8\,\mathrm{M}_{\odot }`$ is the corresponding mass of the envelope. The gas from the disrupted normal stars composes the major part of the progenitor central stellar cluster in the galactic nucleus, so the natural range for the total mass of the envelope is the same as the typical range for the mass of a central stellar cluster in the galactic nucleus, $`M_{env}=10^7`$–$`10^8\,\mathrm{M}_{\odot }`$. The envelope radius $`R_{env}`$ is given by the virial radius of the central cluster in the galactic nucleus at the moment of evolution corresponding to normal-star destruction, i.e. $`v=v_p`$. The mean number density of gas in the envelope is $$n_{env}=\frac{3}{4\pi }\frac{1}{R_{env}^3}\frac{M_{env}}{m_p}\approx 5.4\times 10^9M_8^{-2}\left(\frac{m_{\ast }}{\mathrm{M}_{\odot }}\right)^3\left(\frac{r_{\ast }}{R_{\odot }}\right)^{-3}\text{ cm}^{-3},$$ (6) where $`m_p`$ is the proton mass.
The column density of the envelope is $$X_{env}=m_pn_{env}R_{env}\approx 1.6\times 10^4M_8^{-1}\left(\frac{m_{\ast }}{\mathrm{M}_{\odot }}\right)^2\left(\frac{r_{\ast }}{R_{\odot }}\right)^{-2}\text{ g cm}^{-2}.$$ (7) Such an envelope completely absorbs electromagnetic radiation and HE particles outgoing from the interior, except neutrinos and gravitational waves. The column density is smaller for more massive envelopes. We assume that the energy release due to star collisions supports the gas in the cluster in (quasi-)dynamical equilibrium. This implies the equilibrium temperature $`T_{eq}`$ of the gas, $$T_{eq}\approx \frac{m_p}{6k}\frac{GM_{env}}{R_{env}}\approx 1.6\times 10^7\text{ K}.$$ (8) The thermal velocity of gas particles at the equilibrium temperature is of the order of the escape velocity from the surface of a normal star. If $`T\gtrsim T_{eq}`$ the gas outflows from the cluster with the sound speed $$v_s=\left(2\gamma \frac{kT}{m_p}\right)^{1/2}\approx 600\left(\frac{T}{T_{eq}}\right)^{1/2}\text{ km/s},$$ (9) where $`\gamma `$ is the adiabatic index ($`\gamma =5/3`$ for hydrogen). If $`T\ll T_{eq}`$ the gas collapses to the core. ### 2.2 Dense Cluster of Stellar Remnants As was discussed above, the dense NS cluster survives inside the massive envelope of the post-stellar-destruction galactic nucleus. The total mass of this cluster is $`1`$–$`10`$ % of the total mass of the progenitor galactic nucleus and hence of the massive envelope, i.e. $`M\approx (0.01`$–$`0.1)M_{env}`$. We will use the term ‘evolved galactic nucleus’ for this cluster of NSs, assuming that (i) $`v>v_p`$ and (ii) the (two-body) relaxation time in the cluster is much less than the age of the host galaxy. Under the last condition the cluster has enough time for essential dynamical evolution. For example, the relaxation time of stars inside the central parsec of the Milky Way galaxy is $`t_r\sim 10^7`$–$`10^8`$ years. The further dynamical evolution of the evolved cluster is terminated by the dynamical collapse to a MBH. We consider in the following an evolved central cluster of NSs with identical masses $`m=1.4\,\mathrm{M}_{\odot }`$. This evolved cluster of NSs is submerged deep in the massive gas envelope remaining after the previous evolution epoch of a typical normal galactic nucleus. Let $`N=M/m=10^6N_6`$ be the total number of NSs in the cluster. The virial radius of this cluster is $$R=\frac{GNm}{2v^2}=\frac{1}{4}\left(\frac{c}{v}\right)^2Nr_g\approx 1.0\times 10^{13}N_6\left(\frac{v}{0.1c}\right)^{-2}\text{ cm},$$ (10) where $`r_g=2Gm/c^2`$ is the gravitational radius of a NS. For $`N\approx 10^6`$ and $`v\approx 0.1c`$ one has a nearly collapsing cluster with a virial size of $`\sim 1`$ AU. The characteristic times are (i) the dynamical time $`t_{dyn}=R/v=(1/4)N(c/v)^3r_g/c\approx 0.95N_6(v/0.1c)^{-3}`$ hour and (ii) the evolution (two-body relaxation) time of the NS cluster $`t_{rel}\approx 0.1(N/\mathrm{ln}N)t_{dyn}\approx 19N_6^2(v/0.1c)^{-3}`$ years. In general $`t_{rel}\gg t_{dyn}`$, if $`N\gg 1`$. This evolution time determines the duration of the active phase of the hidden source considered below, as $`t_s\approx t_{rel}\sim 10`$ years. ### 2.3 Fireballs in Cluster The most important feature of our model is a secularly growing rate of accidental NS collisions in the evolving cluster, accompanied by large energy release.
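For orientation, the cluster scales of eq. (10) can be evaluated directly; the sketch below (assuming $`N=10^6`$ neutron stars of $`1.4\,\mathrm{M}_{\odot }`$ and $`v=0.1c`$) reproduces the quoted virial size of about 1 AU and the dynamical time of about an hour.

```python
import math

G, c, Msun = 6.67e-8, 3.0e10, 2.0e33        # cgs
m_ns = 1.4 * Msun

def cluster_scales(N=1.0e6, v_over_c=0.1):
    v = v_over_c * c
    R = G * N * m_ns / (2.0 * v**2)         # virial radius, eq. (10)
    t_dyn = R / v                           # crossing time
    return R, t_dyn

R, t_dyn = cluster_scales()
print(f"R = {R:.2e} cm (~{R/1.5e13:.1f} AU), t_dyn = {t_dyn/3600:.2f} hr")
```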
The corresponding rate of NS collisions in the cluster (with the gravitational radiation losses taken into account) is $$\dot{N}_c=9\sqrt{2}\left(\frac{v}{c}\right)^{17/7}\frac{c}{R}=36\sqrt{2}\left(\frac{v}{c}\right)^{31/7}\frac{1}{N}\frac{c}{r_g}\approx 4.4\times 10^3N_6^{-1}\left(\frac{v}{0.1c}\right)^{31/7}\text{ yr}^{-1}.$$ (11) The time between two successive NS collisions is $$t_c=\dot{N}_c^{-1}=\frac{1}{9\sqrt{2}}\left(\frac{c}{v}\right)^{10/7}t_{dyn}=\frac{1}{36\sqrt{2}}\left(\frac{c}{v}\right)^{31/7}N\frac{r_g}{c}\approx 7.3\times 10^3N_6\left(\frac{v}{0.1c}\right)^{-31/7}\text{ s}.$$ (12) Note that the number of NS collisions, given by $`\dot{N_c}t_s`$, comprises only a small fraction, about $`1\%`$, of the total number of NSs in the cluster by the time of the onset of dynamical collapse of the whole cluster into a MBH. The merging of two NSs in a collision is similar to the merging of a tight binary: the NSs approach each other by spiralling down due to gravitational wave radiation and then coalesce, producing an ultrarelativistic photon-lepton fireball, which we assume to be spherically symmetric. The energy of one fireball is $`E_0=E_{52}\times 10^{52}`$ ergs, and the total energy release in the form of fireballs during the lifetime of the hidden source, $`t_s\approx 10`$ yr, is $$E_{tot}\approx \dot{N_c}E_0t_s\approx 4\times 10^{56}\text{ ergs},$$ (13) where $`\dot{N_c}`$ is the NS collision rate. The physics of fireballs has been extensively elaborated, especially in recent years, for the modeling of cosmological gamma-ray bursts (GRBs) (for a review see e.g. and references therein). The newborn fireball expands with relativistic velocity, corresponding to a Lorentz factor $`\mathrm{\Gamma }_f\gg 1`$. The relevant parameter of a fireball is the total baryonic mass $$M_0=E_0/\eta c^2\approx 5.6\times 10^{-6}E_{52}\eta _3^{-1}M_{\odot },$$ (14) where the baryon-loading mass parameter is $`\eta =10^3\eta _3`$. The maximum possible Lorentz factor of the expanding fireball is $`\mathrm{\Gamma }_f=\eta +1`$ during the matter-dominated phase of fireball expansion. During the initial phase of expansion, starting from the radius of the ‘inner engine’ $`R_0\sim 10^6`$–$`10^7`$ cm, the fireball Lorentz factor increases as $`\mathrm{\Gamma }\propto r`$, until it is saturated at the maximum value $`\mathrm{\Gamma }_f=\eta \gg 1`$ at the radius $`R_\eta =R_0\eta `$ (see e.g. ). Internal shocks will take place around $`R_{sh}=R_0\eta ^2`$, if the fireball is inhomogeneous and the velocity is not a monotonic function of radius, e.g. due to considerable emission fluctuations of the inner engine. The fireball expands with the constant Lorentz factor $`\mathrm{\Gamma }=\eta `$ at $`R>R_\eta `$ until it sweeps up the mass $`M_0/\eta `$ of ambient gas and loses half of its initial momentum. At this moment ($`R=R_\gamma `$) the deceleration stage starts. The interaction of the fireball with the ambient gas determines the length of its relativistic expansion. In our case the fireball propagates through the massive envelope with a mean gas number density $`n_{env}=\rho _{env}/m_p=n_9\times 10^9\,\mathrm{cm}^{-3}`$, as follows from Eq. (6). The fireball expands with $`\mathrm{\Gamma }\gg 1`$ up to the distance determined by the Sedov length $$l_S=\left(\frac{3}{4\pi }\frac{E_0}{\rho _{env}c^2}\right)^{1/3}\approx 1.2\times 10^{15}n_9^{-1/3}E_{52}^{1/3}\text{ cm}.$$ (15) The fireball becomes mildly relativistic at radius $`r=l_S`$ due to sweeping up gas from the envelope with mass $`M_0\eta `$.
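The collision statistics and the resulting energetics follow from eqs. (11)-(13) and (15); a short numerical sketch, with $`r_g\approx 4.1\times 10^5`$ cm for a $`1.4\,\mathrm{M}_{\odot }`$ NS, $`E_0=10^{52}`$ ergs, $`t_s=10`$ yr and $`n_9=1`$:

```python
import math

c, yr = 3.0e10, 3.15e7
r_g = 4.1e5                                   # cm, 2Gm/c^2 for 1.4 Msun

def collision_rate(N=1.0e6, v_over_c=0.1):
    """NS collisions per second, eq. (11)."""
    return 36.0 * math.sqrt(2.0) * v_over_c**(31.0 / 7.0) / N * (c / r_g)

Ndot = collision_rate()
E_tot = Ndot * 1.0e52 * 10.0 * yr             # eq. (13)
rho_env = 1.0e9 * 1.67e-24                    # g/cm^3 for n_9 = 1
l_S = (3.0 * 1.0e52 / (4.0 * math.pi * rho_env * c**2))**(1.0 / 3.0)

print(f"N_c_dot = {Ndot*yr:.1e}/yr, t_c = {1/Ndot:.1e} s,"
      f" E_tot = {E_tot:.1e} erg, l_S = {l_S:.1e} cm")
```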
The radius $`r=l_S`$ is the end point of the ultrarelativistic fireball expansion phase. Far beyond the Sedov length ($`r\gg l_S`$) there is a non-relativistic Newtonian shock driven by the decelerated fireball. Its radius $`R(t)`$ obeys the Sedov–Taylor self-similar solution, with $`R(t)=(E_0t^2/\rho _{env})^{1/5}`$. The corresponding shock expansion velocity is $`u=(2/5)[l_S/R(t)]^{3/2}c\ll c`$. ### 2.4 Cavity and Shocks We show here that relativistic fireballs from a dense central cluster of NSs produce a dynamically supported rarefied cavity deep inside the massive gaseous envelope. The first fireball sweeps out the gas from the envelope, producing a cavity with radius $`l_S`$. This cavity expands due to the subsequent fireballs, which propagate first in the rarefied cavity and then hit the boundary, pushing it further. Each fireball hitting the dense envelope is preceded by a shock. Propagating through the envelope, the shock sweeps up the gas ahead of it and gradually decelerates. The swept-out gas forms a thin shell with a density profile given by the Sedov self-similar solution. The next fireball hits this thin shell when it has decelerated down to non-relativistic velocity. Moving in the envelope, the shell accumulates more gas, retaining the same density profile, and then it is hit by the next fireball again. After a number of collisions the shell becomes massive, and the successive fireball impacts do not change its velocity appreciably. In this regime one can consider the propagation of a massive non-relativistic thin shell with a shock (density discontinuity) ahead of it. The shock speed $`v_{sh}`$ is connected with the velocity $`v_g`$ of the gas behind it as $`v_{sh}=(\gamma +1)v_g/2`$, where $`\gamma `$ is the adiabatic index. The density perturbation in the envelope propagates as a shock as long as $`v_{sh}`$ remains higher than the sound speed $`v_s`$. In the considered case, the shock dissipates in the middle of the envelope. For the Sedov solution the shock velocity changes with distance $`r`$ as $$v_{sh}(r)=\frac{2}{5}\alpha _S^{-1}\left(\frac{E_{sh}}{\rho _{env}}\right)^{1/2}r^{-3/2},$$ (16) where $`\alpha _S`$ is the constant of the self-similar Sedov solution; when radiative pressure dominates, $`\alpha _S=0.894`$. The other quantities in Eq. (16) are $`E_{sh}=(1/2)E_{tot}=2\times 10^{56}`$ ergs, the energy of the shock, which includes kinetic and thermal energy (half of the total energy is transformed into particles accelerated in the cavity), and $`\rho _{env}`$, the density of the envelope given by Eq. (6). From Eqs. (16) and (9) it follows that $`v_{sh}>v_s`$ holds at distances $`r\lesssim 1\times 10^{18}\text{ cm}\approx 0.6R_{env}`$, i.e. the shock does not reach the outer surface of the envelope. In fact the latter conclusion follows already from energy conservation. The gravitational energy of the envelope $`V_0`$ is $$V_0=\kappa \frac{GM_{env}^2}{R_{env}}>9.2\times 10^{56}\text{ ergs},$$ (17) where $`\kappa `$ depends on the radial profile of the gas density in the envelope $`\rho (r)`$, and changes from 3/5 to 1. This energy is higher than the total injected energy $`E_{tot}\approx 4\times 10^{56}`$ ergs, and thus the system remains gravitationally bound. When a shock reaches the boundary of the envelope, the gas distribution changes there. It takes the form of a thin shell with gravitational energy $`V_e=GM_{env}^2/2R_{env}`$.
For the shock to reach the surface of the envelope, the total energy release must satisfy the relation $`E_{tot}>V_0-V_e`$, where the minimum value of $`V_0-V_e`$ is $`1.5\times 10^{56}`$ ergs, for $`\kappa =3/5`$. Actually $`E_{tot}`$ must be higher because (i) part of the injected energy goes to heat, (ii) a more realistic value is $`\kappa \approx 1`$, and (iii) the shell still has a non-zero velocity when the shock disappears, once gravitational braking is taken into account. We thus conclude that for $`E_{tot}\approx 4\times 10^{56}`$ ergs the shock dissipates inside the envelope. The cavity radius grows with time. For the stage when the shell moves non-relativistically, the cavity radius, calculated as the distance to the shell in the Sedov solution, is $$R_{cav}(t)=\left(\frac{E_{sh}}{\alpha _S\rho _{env}}\right)^{1/5}t^{2/5}.$$ (18) At the end of the active phase of the hidden source, $`t_s\approx 10`$ yr, the radius of the cavity reaches $`3\times 10^{17}`$ cm, thus remaining much less than $`R_{env}`$. The cavity is filled by direct and reverse relativistic shocks from fireballs. Reverse shocks are produced by decelerated fireballs, most notably when they hit the boundary of the cavity. The expanding fireballs inside the cavity have the shape of thin shells and are separated by the distance $$R_c=ct_c\approx 2.2\times 10^{14}N_6\left(\frac{v}{0.1c}\right)^{-31/7}\text{ cm}.$$ (19) The gas between two fireballs is swept up by the preceding one. The total number of fireballs existing in the cavity simultaneously is $`N_f\approx R_{cav}/R_c\gg 1`$, and this number grows with time as $`t^{2/5}`$. Shocks generated by the repeating fireballs are ultrarelativistic inside the cavity and mildly relativistic in the envelope near the inner boundary. Collisions of multiple shocks in the cavity, as well as inside the fireballs, produce a strongly turbulized medium favorable for the generation of magnetic fields and particle acceleration. ## 3 High Energy Particles in Cavity There are three regions where acceleration of particles takes place. (i) The NS cluster, where fireballs collide, producing a turbulent medium with large magnetic field. This region has a small size, of the order of the virial radius of the cluster $`R\sim 10^{13}`$ cm, and we neglect its contribution to the production of accelerated particles. (ii) The region at the boundary between the cavity and the envelope. During the active period of the hidden source, $`t_s\approx 10`$ yr, the fireballs hit this region, heating and turbulizing it. A large equipartition magnetic field is created here. This boundary region has density $`\rho \approx \rho _{env}`$, radius $`R\approx R_{cav}`$ and width $`\mathrm{\Delta }<0.1R_{cav}`$. (iii) Most of the cavity volume is occupied by fireballs, separated by the distance $`R_c`$. Due to collisions of internal shocks, the gas in a fireball is turbulized and an equipartition magnetic field is generated. In all three cases the Fermi II acceleration mechanism operates. For all three sites we assume the existence of an equipartition magnetic field, induced by turbulence and the dynamo mechanism: $$\frac{H^2}{8\pi }\approx \frac{\rho u_t^2}{2},$$ (20) where $`\rho `$ and $`u_t`$ are the gas density and the velocity of turbulent motions in the gas, respectively. Since the turbulence is caused by shocks, the shock spectrum of turbulence $`F_k\propto k^{-2}`$ is valid, where $`k`$ is the wave number. Assuming an equipartition magnetic field on each scale $`l\sim 1/k`$, $`H_l^2\propto kF_k`$, one obtains the distribution of magnetic fields over the scales as $$H_l/H_0=(l/l_0)^{1/2},$$ (21) where $`l_0`$ is the maximum scale, with the coherent field $`H_0`$ there.
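The cavity growth of eq. (18) and the dissipation radius implied by eqs. (16) and (9) can be checked numerically; the sketch below uses $`E_{sh}=2\times 10^{56}`$ ergs, $`n_9=1`$ and $`v_s\approx 600`$ km/s, and agrees with the quoted numbers to within the factor-of-two level expected from these order-of-magnitude inputs.

```python
import math

yr = 3.15e7
alpha_S, E_sh = 0.894, 2.0e56                 # erg
rho_env, v_s = 1.0e9 * 1.67e-24, 6.0e7        # g/cm^3, cm/s

R_cav = (E_sh / (alpha_S * rho_env))**0.2 * (10.0 * yr)**0.4     # eq. (18)

# Radius where v_sh of eq. (16) drops to the sound speed of eq. (9):
r_diss = ((2.0 / (5.0 * alpha_S))
          * math.sqrt(E_sh / rho_env) / v_s)**(2.0 / 3.0)

print(f"R_cav(10 yr) = {R_cav:.1e} cm, shock dies at r = {r_diss:.1e} cm")
```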
The maximum energy of accelerated particles is given by $$E_{max}\approx eH_0l_0$$ (22) with an acceleration time $$t_{acc}\approx \frac{l_0}{c}\left(\frac{c}{v}\right)^2.$$ (23) For the turbulent shell at the boundary between the cavity and the envelope, assuming mildly relativistic turbulence $`u_t\sim c`$ and $`\rho \approx \rho _{env}`$, we obtain $`H_{eq}=4\times 10^3`$ G. The maximum acceleration energy is $`E_{max}=2\times 10^{21}`$ eV, if the coherence length of the magnetic field $`l_0`$ is given by the Sedov length $`l_S`$, and the acceleration time is $`t_{acc}=4\times 10^4E_{52}^{1/3}n_9^{-1/3}`$ s. The typical time of energy losses, determined by $`pp`$-collisions, is much longer than $`t_{acc}`$, and does not prevent acceleration to the $`E_{max}`$ given above: $$t_{pp}=\left(\frac{1}{E}\frac{dE}{dt}\right)^{-1}=\frac{1}{f_p\sigma _{pp}n_{env}c}\approx 2\times 10^6n_9^{-1}\text{ s},$$ (24) where $`f_p\approx 0.5`$ is the fraction of energy lost by a HE proton in one collision, $`\sigma _{pp}`$ is the cross-section of $`pp`$-interaction, and $`n_{env}`$ is the gas number density in the boundary turbulent shell. The turbulence in a fireball is produced by collisions of internal shells, and a natural scale for the coherence length $`l_0`$ is the width of the internal shell in the local frame, $`\delta ^{}`$. The maximum energy in the laboratory frame is $`E_{max}\approx eH_{eq}^{}\delta `$, where $`\delta `$ is the corresponding width in the laboratory frame. Since $`H^{}\propto 1/R`$ and $`\delta \propto R`$, the maximum energy does not change with time, and can be estimated as in Ref. , $`E_{max}\sim 3\times 10^{20}`$ eV. Note that in our case a fireball propagates in a very low-density medium. The gas left in the cavity by the preceding fireball, as well as high energy particles escaping from it, are accelerated by the next fireball by a factor $`\mathrm{\Gamma }_f^2`$ at each collision. This $`\mathrm{\Gamma }^2`$-mechanism of acceleration works only in the pre-hydrodynamic regime of fireball expansion; after the hydrodynamic stage is reached, the $`\mathrm{\Gamma }^2`$-mechanism ceases. We thus conclude that both the efficiency and the maximum acceleration energy are very high. We assume that a fireball transfers half of its energy to accelerated particles. ## 4 Neutrino Production and Detection Particles accelerated in the cavity interact with the gas in the envelope, producing a high energy neutrino flux. We assume that about half of the total power of the source $`L_{tot}`$ is converted into the energy of accelerated particles, $`L_p\approx 7\times 10^{47}`$ erg/s. As estimated in Section 2.1, the column density of the envelope varies from $`X_{env}\approx 10^2\,\mathrm{g}\,\mathrm{cm}^{-2}`$ (for a very heavy envelope) up to $`X_{env}\approx 10^4\,\mathrm{g}\,\mathrm{cm}^{-2}`$ (for an envelope with mass $`M\approx 10^8M_{\odot }`$). Taking into account the magnetic field, one concludes that accelerated protons lose a large fraction of their energy in the envelope. The charged pions produced in $`pp`$-collisions, with Lorentz factors up to $`\mathrm{\Gamma }_c\approx 1/(\sigma _{\pi N}n_{env}c\tau _\pi )\approx 4\times 10^{13}n_9^{-1}`$, freely decay in the envelope (here $`\sigma _{\pi N}\approx 3\times 10^{-26}\,\mathrm{cm}^2`$ is the $`\pi N`$-cross-section, $`\tau _\pi `$ is the lifetime of a charged pion, and $`n_{env}=10^9n_9\,\mathrm{cm}^{-3}`$ is the number density of gas in the envelope). We assume an $`E^{-2}`$ spectrum of accelerated protons, $$Q_p(E)=\frac{L_p}{\zeta E^2},$$ (25) where $`\zeta =\mathrm{ln}(E_{max}/E_{min})\approx 20`$–$`30`$.
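For the boundary shell the numbers quoted above follow directly from eqs. (20) and (22)-(24); a compact check, with $`u_t\approx c`$, $`l_0=l_S`$, $`n_9=1`$ and $`\sigma _{pp}\approx 3\times 10^{-26}`$ cm<sup>2</sup>:

```python
import math

c = 3.0e10
rho = 1.0e9 * 1.67e-24
H_eq = math.sqrt(4.0 * math.pi * rho * c**2)   # eq. (20) with u_t ~ c
l_0, e_esu = 1.2e15, 4.8e-10                   # Sedov length; electron charge

E_max_eV = e_esu * H_eq * l_0 / 1.6e-12        # eq. (22), erg -> eV
t_acc = l_0 / c                                # eq. (23) with v ~ c
t_pp = 1.0 / (0.5 * 3.0e-26 * 1.0e9 * c)       # eq. (24)

print(f"H_eq = {H_eq:.1e} G, E_max = {E_max_eV:.1e} eV")
print(f"t_acc = {t_acc:.1e} s  <<  t_pp = {t_pp:.1e} s")
```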
Protons transfer about half of their energy to high energy neutrinos through the decays of pions, $`L_\nu \approx (2/3)(3/4)L_p`$, and thus the production rate of $`\nu _\mu +\overline{\nu }_\mu `$ neutrinos is $$Q_{\nu _\mu +\overline{\nu }_\mu }(>E)=\frac{L_p}{4\zeta E^2}.$$ (26) Crossing the Earth, these neutrinos create deep underground an equilibrium flux of muons, which can be calculated as $$F_\mu (>E)=\frac{\sigma _0N_A}{b_\mu }Y_\mu (E_\mu )\frac{L_p}{4\zeta E_\mu }\frac{1}{4\pi r^2},$$ (27) where the normalization cross-section is $`\sigma _0=1\times 10^{-34}\,\mathrm{cm}^2`$, $`N_A=6\times 10^{23}`$ is the Avogadro number, $`b_\mu =4\times 10^{-6}\,\mathrm{cm}^2/\mathrm{g}`$ is the rate of muon energy losses, and $`Y_\mu (E)`$ is the integral muon moment of $`\nu _\mu N`$ interaction (see e.g. ). The most effective energy of muon detection is $`E_\mu \sim 1`$ TeV. The rate of muon events in an underground detector with effective area $`S`$ at distance $`r`$ from the source is given by $$\dot{N}(\nu _\mu )=F_\mu S\approx 70\left(\frac{L_p}{10^{48}\text{ erg s}^{-1}}\right)\left(\frac{S}{1\text{ km}^2}\right)\left(\frac{r}{10^3\text{ Mpc}}\right)^{-2}\text{yr}^{-1}.$$ (28) Thus, we expect about 10 muons per year from the source at a distance of $`10^3`$ Mpc. ## 5 Accompanying Radiation We shall consider below the HE gamma-ray radiation produced by accelerated particles and the thermalized infrared radiation from the envelope. As far as HE gamma-ray radiation is concerned, two cases will be considered: (i) a thin envelope with $`X_{env}\approx 10^2\,\mathrm{g}\,\mathrm{cm}^{-2}`$ and (ii) a thick envelope with $`X_{env}\approx 10^4\,\mathrm{g}\,\mathrm{cm}^{-2}`$. In the latter case the HE gamma-ray radiation is absorbed. ### 5.1 Gamma-Ray Radiation Apart from high energy neutrinos, the discussed source can emit HE gamma-radiation through $`\pi ^0\to 2\gamma `$ decays and synchrotron radiation of the electrons. In the case of the thick envelope, with $`X_{env}\approx 10^4\,\mathrm{g}\,\mathrm{cm}^{-2}`$, most HE photons are absorbed in the envelope (the characteristic absorption length is the radiation length $`X_{rad}\approx 60\,\mathrm{g}\,\mathrm{cm}^{-2}`$). In the case of the thin envelope, $`X_{env}\approx 100\,\mathrm{g}\,\mathrm{cm}^{-2}`$, HE gamma-radiation emerges from the source. The production rate of synchrotron photons can be readily calculated as $$dQ_{syn}=\frac{dE_e}{E_\gamma }Q_e(>E_e),$$ (29) where $`E_e`$ and $`E_\gamma `$ are the energies of the electron and the emitted photon, respectively. Using $`Q_e(>E_e)=L_e/(\zeta E_e)`$ and $`E_\gamma =k_{syn}(H)E_e^2`$, where $`k_{syn}`$ is the coefficient of synchrotron production, one obtains $$Q_{syn}(E_\gamma )=\frac{1}{12}\frac{L_p}{\zeta E_\gamma ^2}.$$ (30) Note that the production rate given by Eq. (30) does not depend on the magnetic field. Adding the contribution from $`\pi ^0\to 2\gamma `$ decays, one obtains $$Q_\gamma (E_\gamma )=\frac{5}{12}\frac{L_p}{\zeta E_\gamma ^2},$$ (31) and the flux at $`E_\gamma \approx 1`$ GeV at a source distance $`r=1\times 10^3`$ Mpc is $$F_\gamma (>E_\gamma )=\frac{5}{12}\left(\frac{1}{4\pi r^2}\right)\frac{L_p}{\zeta E_\gamma }\approx 2.2\times 10^{-8}\left(\frac{L_p}{10^{47}\text{ erg s}^{-1}}\right)\left(\frac{r}{10^3\text{ Mpc}}\right)^{-2}\text{ cm}^{-2}\text{ s}^{-1},$$ (32) i.e. the source is detectable by EGRET. ### 5.2 Infrared and Optical Radiation Hitting the envelope, fireballs dissipate part of their kinetic energy in the envelope in the form of low-energy e-m radiation.
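The detection rate of eq. (28) is a pure scaling law; the helper below encodes it. With the source luminosity $`L_p\approx 7\times 10^{47}`$ erg/s it returns a few tens of events per year at $`10^3`$ Mpc, consistent to order of magnitude with the $`\approx 10`$ per year quoted above (the residual factor sits in $`Y_\mu (E_\mu )`$ and the chosen muon threshold).

```python
def muon_rate(L_p=7.0e47, S_km2=1.0, r_Mpc=1.0e3):
    """Muon events per year in a detector of area S, eq. (28)."""
    return 70.0 * (L_p / 1.0e48) * S_km2 * (r_Mpc / 1.0e3)**-2

print(f"{muon_rate():.0f} muons/yr for a km^2 detector at 1e3 Mpc")
```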
This radiation is thermalized in the optically thick envelope and then re-emitted in the form of black-body radiation from the surface of the envelope. It appears much later than the HE neutrino and gamma-radiation. The thermalized radiation diffuses through the envelope with a diffusion coefficient $`D\approx cl_{dif}`$, where the diffusion length is $`l_{dif}=1/(\sigma _Tn_{env})`$ and $`\sigma _T`$ is the Thompson cross-section. The mean time of radiation diffusion through the envelope of radius $`R_{env}`$ is $$t_d\approx \frac{R_{env}^2}{D}\approx 1\times 10^4\text{ yr},$$ (33) independently of the envelope mass. This diffusion time is to be compared with the duration of the active phase, $`t_s\approx 10`$ years, and with the light-crossing time $`R_{env}/c\approx 2`$ years. Since the produced burst is very short, $`t_s\approx 10`$ yr, the arrival times of photons at the surface of the envelope have a distribution with a dispersion $`\sigma \approx t_d`$. The average surface black body luminosity is then $$\overline{L}_{bb}\approx E_{tot}/t_d\approx 1\times 10^{45}\text{ erg/s},$$ (34) with the peak luminosity being somewhat higher. The temperature of this radiation corresponds to the IR range, $$T_{bb}=\left(\frac{\overline{L}_{bb}}{4\pi R_{env}^2\sigma _{SB}}\right)^{1/4}\approx 8.4\times 10^2\text{ K}.$$ (35) Thus, $`\sim 10^4`$ years after the neutrino burst, the hidden source will be seen in the sky as a luminous IR source. Since we consider a model of the production of high-luminosity AGN, the object is typically expected to be at high redshift $`z`$. Its visible magnitude is $$m=-2.5\mathrm{log}\left(\frac{L_{bb}H_0^2}{16\pi c^2(z+1-\sqrt{1+z})^2f_0}\right),$$ (36) where $`H_0=100h`$ $`\mathrm{km}\,\mathrm{s}^{-1}\,\mathrm{Mpc}^{-1}`$ is the Hubble constant and the flux $`f_0=2.48\times 10^{-5}\,\mathrm{erg}\,\mathrm{cm}^{-2}\,\mathrm{s}^{-1}`$. For $`L_{bb}=1\times 10^{45}`$ erg/s, $`z=3`$ and $`h=0.6`$, the visible magnitude of the IR source is $`m=22.7`$. Such a faint source is not easy to identify in e.g. the IRAS catalogue as a powerful source, because for the redshift determination it is necessary to detect the optical lines from the host galaxy, which are very weak at the assumed redshift $`z=3`$. Non-thermal optical radiation can also be produced by HE proton-induced pion decays in the outer part of the envelope, but its luminosity is very small. Most probably such a source would be classified as one of the numerous non-identified weak IR sources. ### 5.3 Duration of activity and the number of sources As was indicated in Section 2.2, the duration of the active phase $`t_s`$ is determined by the relaxation time of the NS cluster: $`t_s\approx t_{rel}\approx 10`$–20 yr. This stage appears only once during the lifetime of a galaxy, prior to MBH formation. If one assumes that the galactic nucleus afterwards turns into an AGN, the total number of hidden sources in the Universe can be estimated as $$N_{HS}\approx \frac{4}{3}\pi (3ct_0)^3n_{AGN}t_s/t_{AGN},$$ (37) where $`\frac{4}{3}\pi (3ct_0)^3`$ is the cosmological volume inside the horizon of radius $`3ct_0`$, $`n_{AGN}`$ is the number density of AGNs and $`t_{AGN}`$ is the AGN lifetime. The ratio $`t_s/t_{AGN}`$ gives the probability for an AGN to be observed at the hidden-source stage, if this short stage ($`t_s\approx 10`$ yr) is included in the much longer ($`t_{AGN}`$) AGN stage and, for the purpose of the estimate, is treated as a random episode in the AGN history. The estimates of $`n_{AGN}`$ and $`t_{AGN}`$ for different populations of AGNs result in $`N_{HS}\sim 10`$–$`100`$. ## 6 Conclusions The dynamical evolution of the central stellar cluster in a galactic nucleus results in the collisional destruction of the constituent normal stars and in the production of a massive gas envelope.
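Equations (33)-(36) chain together into the IR appearance of the source; the sketch below evaluates them for the fiducial envelope ($`R_{env}\approx 1.7\times 10^{18}`$ cm, $`n_{env}\approx 5.4\times 10^9\,\mathrm{cm}^{-3}`$, $`E_{tot}=4\times 10^{56}`$ ergs, $`h=0.6`$, $`z=3`$) and recovers $`t_d\approx 10^4`$ yr, $`T_{bb}\approx 8\times 10^2`$ K and $`m\approx 22.7`$.

```python
import math

c, yr = 3.0e10, 3.15e7
sigma_T, sigma_SB = 6.65e-25, 5.67e-5
R_env, n_env, E_tot = 1.74e18, 5.4e9, 4.0e56

t_d = R_env**2 * sigma_T * n_env / c                          # eq. (33)
L_bb = E_tot / t_d                                            # eq. (34)
T_bb = (L_bb / (4.0 * math.pi * R_env**2 * sigma_SB))**0.25   # eq. (35)

z, f0 = 3.0, 2.48e-5
H0 = 60.0 * 1.0e5 / 3.086e24                                  # s^-1 (h = 0.6)
m = -2.5 * math.log10(L_bb * H0**2 /
        (16.0 * math.pi * c**2 * (z + 1 - math.sqrt(1 + z))**2 * f0))  # (36)

print(f"t_d = {t_d/yr:.1e} yr, L_bb = {L_bb:.1e} erg/s,"
      f" T_bb = {T_bb:.0f} K, m = {m:.1f}")
```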
The surviving subsystem of NSs submerges deep into this envelope. The rapidly repeating fireballs caused by NS collisions in the central stellar cluster produce a rarefied cavity inside the massive envelope. Colliding shocks generate turbulence inside the fireballs and in the cavity, and particles are accelerated by the Fermi II mechanism. These particles are then re-accelerated by the $`\mathrm{\Gamma }^2`$-mechanism in collisions with relativistic shocks and fireballs. All high energy particles, except neutrinos, can be completely absorbed in a thick envelope. In this case the considered source is an example of a powerful hidden source of HE neutrinos. The prediction of the high energy gamma-ray flux depends on the thickness of the envelope. In the case of a thick envelope, $`X_{env}\approx 10^4\,\mathrm{g}\,\mathrm{cm}^{-2}`$, the HE gamma-radiation is absorbed. When the envelope is thin, $`X_{env}\approx 10^2\,\mathrm{g}\,\mathrm{cm}^{-2}`$, the gamma-ray radiation from $`\pi ^0\to 2\gamma `$ decays and from synchrotron radiation of the secondary electrons can be observed by EGRET, and marginally by the Whipple detector at $`E_\gamma \sim 1`$ TeV. In all cases the thickness of the envelope is much larger than the Thompson thickness ($`x_T\approx 3\,\mathrm{g}\,\mathrm{cm}^{-2}`$), and this condition provides the absorption of X-rays and low energy gamma-rays. A hidden source is to be seen as a bright IR source but, due to the slow diffusion through the envelope, this radiation appears $`\sim 10^4`$ years after the phase of neutrino activity. During the period of neutrino activity the IR luminosity is the same as before it. The considered source is a precursor of the most powerful AGNs, and therefore most of these sources are expected to be at the same redshifts as AGNs. The luminosity $`L_{IR}\sim 10^{45}`$–$`10^{46}`$ erg/s is not unusual for powerful IR sources from the IRAS catalogue. The maximum observed luminosity exceeds $`1\times 10^{48}`$ erg/s, and there are many sources with luminosity $`10^{45}`$–$`10^{46}`$ erg/s. Moreover, for most of the hidden sources the distance cannot be determined, and thus they fall into the category of faint non-identified IR sources. Later these hidden sources turn into usual powerful AGNs, and thus the number of hidden sources is restricted by the total number of these AGNs. In our model the shock is fully absorbed in the envelope. Since the total energy release $`E_{tot}`$ is less than the gravitational energy of the envelope, $`E_{grav}\sim GM_{env}^2/R_{env}`$, the system remains gravitationally bound, and in the end the envelope will collapse into a black hole or an accretion disc. The expected duration of neutrino activity for a hidden source is $`\sim 10`$ yr, and the total number of hidden sources in the horizon volume ranges from a few up to $`\sim 100`$, within the uncertainties of the estimates. An underground neutrino detector with an effective area $`S\approx 1`$ km<sup>2</sup> will observe $`\sim 10`$ muons per year with energies $`E_\mu \gtrsim 1`$ TeV from this hidden source. Acknowledgments: We are grateful to Bohdan Hnatyk for useful discussions. This work was supported in part by the INTAS through grant No. 99-1065. One of the authors (VID) is grateful to the staff of Laboratori Nazionali del Gran Sasso for hospitality during his visit.
# A Note on the Symmetric Powers of the Standard Representation of $`S_n`$ ## Abstract In this paper, we prove that the dimension of the space spanned by the characters of the symmetric powers of the standard $`n`$-dimensional representation of $`S_n`$ is asymptotic to $`n^2/2`$. This is proved by using generating functions to obtain formulas for upper and lower bounds, both asymptotic to $`n^2/2`$, for this dimension. In particular, for $`n\ge 7`$, these characters do not span the full space of class functions on $`S_n`$. ## Notation Let $`P(n)`$ denote the number of (unordered) partitions of $`n`$ into positive integers, and let $`\varphi `$ denote the Euler totient function. Let $`V`$ be the standard $`n`$-dimensional representation of $`S_n`$, so that $`V=\mathbb{C}e_1\oplus \cdots \oplus \mathbb{C}e_n`$ with $`\sigma (e_i)=e_{\sigma i}`$ for $`\sigma \in S_n`$. Let $`S^NV`$ denote the $`N^{\mathrm{th}}`$ symmetric power of $`V`$, and let $`\chi _N:S_n\to \mathbb{C}`$ denote its character. Finally, let $`D(n)`$ denote the dimension of the space of class functions on $`S_n`$ spanned by all the $`\chi _N`$, $`N\ge 0`$. ## 1. Preliminaries Our aim in this paper is to investigate the numbers $`D(n)`$. It is a fundamental problem of invariant theory to decompose the character of the symmetric powers of an irreducible representation of a finite group (or more generally a reductive group). A special case with a nice theory is the reflection representation of a finite Coxeter group. This is essentially what we are looking at. (The defining representation of $`S_n`$ consists of the direct sum of the reflection representation and the trivial representation. This trivial summand has no significant effect on the theory.) In this context it seems natural to ask: what is the dimension of the space spanned by the symmetric powers? Moreover, decomposing the symmetric powers of the character of an irreducible representation of $`S_n`$ is an example of the operation of *inner plethysm* \[1, Exer. 7.74\], so we are also obtaining some new information related to this operation. We begin with: ###### Lemma 1.1. Let $`\lambda =(\lambda _1,\ldots ,\lambda _k)`$ be a partition of $`n`$ (which we denote by $`\lambda \vdash n`$), and suppose $`\sigma \in S_n`$ is a $`\lambda `$-cycle. Then $`\chi _N(\sigma )`$ is equal to the number of solutions $`(x_1,\ldots ,x_k)`$ in nonnegative integers to the equation $`\lambda _1x_1+\cdots +\lambda _kx_k=N`$. ###### Proof. Suppose without loss of generality that $`\sigma =(1\,2\,\cdots \,\lambda _1)(\lambda _1+1\,\cdots \,\lambda _1+\lambda _2)\cdots (\lambda _1+\cdots +\lambda _{k-1}+1\,\cdots \,n)`$. Consider a basis vector $`e_1^{c_1}\cdots e_n^{c_n}`$ of $`S^NV`$, so that $`c_1+\cdots +c_n=N`$ with each $`c_i\ge 0`$. This vector is fixed by $`\sigma `$ if and only if $`c_1=\cdots =c_{\lambda _1}`$, $`c_{\lambda _1+1}=\cdots =c_{\lambda _1+\lambda _2}`$ and so on. Since $`\chi _N(\sigma )`$ equals the number of basis vectors fixed by $`\sigma `$, the lemma follows. ∎ It seems difficult to work directly with the $`\chi _N`$’s; fortunately, it is not too hard to restate the problem in more concrete terms. Given a partition $`\lambda =(\lambda _1,\ldots ,\lambda _k)`$ of $`n`$, define (1) $$f_\lambda (q)=\frac{1}{\left(1-q^{\lambda _1}\right)\cdots \left(1-q^{\lambda _k}\right)}.$$ Next, define $`F_n\subseteq \mathbb{C}[[q]]`$ to be the complex vector space spanned by all of these $`f_\lambda (q)`$’s. We have: ###### Proposition 1.2. $`\mathrm{dim}F_n=D(n)`$. ###### Proof.
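Lemma 1.1 and the generating functions $`f_\lambda (q)`$ are easy to cross-check by machine. The sketch below (plain Python, written for this note; the partition $`(3,2,2)`$ of $`7`$ is just an example) counts fixed monomials directly and compares with the power series coefficients of $`f_\lambda (q)`$:

```python
from itertools import product

def chi_by_counting(lam, N):
    """Number of solutions of lam_1*x_1 + ... + lam_k*x_k = N, x_i >= 0,
    i.e. chi_N at a lambda-cycle by Lemma 1.1."""
    total = 0
    for xs in product(*(range(N // p + 1) for p in lam)):
        if sum(p * x for p, x in zip(lam, xs)) == N:
            total += 1
    return total

def f_lambda_coeffs(lam, N):
    """Coefficients of q^0..q^N in f_lambda(q) = prod_i 1/(1 - q^{lam_i})."""
    coeffs = [1] + [0] * N
    for p in lam:                        # multiply by 1/(1 - q^p)
        for n in range(p, N + 1):
            coeffs[n] += coeffs[n - p]
    return coeffs

lam = (3, 2, 2)
print([chi_by_counting(lam, N) for N in range(8)])   # [1, 0, 2, 1, 3, 2, 5, 3]
print(f_lambda_coeffs(lam, 7))                       # the identical list
```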
It seems difficult to work directly with the $`\chi _N`$’s; fortunately, it is not too hard to restate the problem in more concrete terms. Given a partition $`\lambda =(\lambda _1,\dots ,\lambda _k)`$ of $`n`$, define (1) $$f_\lambda (q)=\frac{1}{\left(1-q^{\lambda _1}\right)\cdots \left(1-q^{\lambda _k}\right)}.$$ Next, define $`F_n\subseteq \mathbb{C}[[q]]`$ to be the complex vector space spanned by all of these $`f_\lambda (q)`$’s. We have: ###### Proposition 1.2. $`\mathrm{dim}F_n=D(n)`$. ###### Proof. Consider the table of the characters $`\chi _N`$; we are interested in the dimension of the row-span of this table. Since the dimension of the row-span of a matrix is equal to the dimension of its column-span, we can equally well study the dimension of the space spanned by the columns of the table. By the preceding lemma, the $`N^{\mathrm{th}}`$ entry of the column corresponding to the $`\lambda `$-cycles is equal to the number of nonnegative integer solutions to the equation $`\lambda _1x_1+\cdots +\lambda _kx_k=N`$. Consequently, one easily verifies that $`f_\lambda (q)`$ is the generating function for the entries of the column corresponding to the $`\lambda `$-cycles. The dimension of the column-span of our table is therefore equal to $`\mathrm{dim}F_n`$, and the proposition is proved. ∎ ## 2. Upper Bounds on $`D(n)`$ Our basic strategy for computing upper bounds for $`\mathrm{dim}F_n`$ is to put all the generating functions $`f_\lambda (q)`$ over a common denominator; then the dimension of their span is bounded above by $`1`$ plus the degree of their numerators. For example, one can see without much difficulty that $`(1-q)(1-q^2)\cdots (1-q^n)`$ is the least common multiple of the denominators of the $`f_\lambda (q)`$’s. Putting all of the $`f_\lambda (q)`$’s over this common denominator, their numerators then have degree $`n(n+1)/2-n`$, which proves (2) $$D(n)\le \frac{n(n-1)}{2}+1.$$ By modifying this strategy carefully, it is possible to find a somewhat better bound. Observe that the denominator of each of our $`f_\lambda `$’s is (up to sign change) a product of cyclotomic polynomials. In fact, the power of the $`j^{\mathrm{th}}`$ cyclotomic polynomial $`\mathrm{\Phi }_j(q)`$ dividing the denominator of $`f_\lambda (q)`$ is precisely equal to the number of $`\lambda _i`$’s which are divisible by $`j`$. It follows that $`\mathrm{\Phi }_j(q)`$ divides the denominator of $`f_\lambda (q)`$ at most $`\lfloor n/j\rfloor `$ times, and the partitions $`\lambda `$ for which this upper bound is achieved are precisely the $`P\left(n-j\lfloor n/j\rfloor \right)`$ partitions of $`n`$ which contain $`\lfloor n/j\rfloor `$ copies of $`j`$. Let $`S_j`$ be the collection of $`f_\lambda `$’s corresponding to these $`P\left(n-j\lfloor n/j\rfloor \right)`$ partitions. One sees immediately that the dimension of the space spanned by the functions in $`S_j`$ is just $`D\left(n-j\lfloor n/j\rfloor \right)`$: in fact, the functions in this space are exactly $`1/(1-q^j)^{\lfloor n/j\rfloor }`$ times the functions in $`F_{n-j\lfloor n/j\rfloor }`$. Now the power of $`\mathrm{\Phi }_j(q)`$ in the least common multiple of the denominators of all of the $`f_\lambda (q)`$’s excluding those in $`S_j`$ is only $`\lfloor n/j\rfloor -1`$, so the degree of this common denominator is only $`n(n+1)/2-\varphi (j)`$. Therefore, as in the first paragraph of this section, the dimension of the space spanned by all of the $`f_\lambda `$’s except those in $`S_j`$ is at most $`n(n-1)/2+1-\varphi (j)`$; since the dimension spanned by the functions in $`S_j`$ is $`D\left(n-j\lfloor n/j\rfloor \right)`$, we have proved the upper bound $$D(n)\le \frac{n(n-1)}{2}+1-\varphi (j)+D\left(n-j\lfloor n/j\rfloor \right).$$ If it happens that $`D\left(n-j\lfloor n/j\rfloor \right)<\varphi (j)`$, then this upper bound is an improvement on our original upper bound. If we repeat this process, this time simultaneously excluding the sets $`S_j`$ for all of the $`j`$’s which gave us an improved upper bound in the above argument, we find that we have proved: ###### Proposition 2.1.
$$D(n)\le \frac{n(n-1)}{2}+1-\sum _{j=1}^{n}\mathrm{max}\left(0,\varphi (j)-D\left(n-j\lfloor n/j\rfloor \right)\right).$$ Finally, we obtain an upper bound for $`D(n)`$ which does not depend on other values of $`D(\cdot )`$: ###### Corollary 2.2. Recursively define $`U(0)=1`$ and $$U(n)=\frac{n(n-1)}{2}+1-\sum _{j=1}^{n}\mathrm{max}\left(0,\varphi (j)-U\left(n-j\lfloor n/j\rfloor \right)\right).$$ Then $`D(n)\le U(n)`$. ###### Proof. We proceed by induction on $`n`$. Equality certainly holds for $`n=0`$. For larger $`n`$, the inductive hypothesis shows that $`D\left(n-j\lfloor n/j\rfloor \right)\le U\left(n-j\lfloor n/j\rfloor \right)`$ when $`j>0`$, and so $$D(n)\le \frac{n(n-1)}{2}+1-\sum _{j=1}^{n}\mathrm{max}\left(0,\varphi (j)-D\left(n-j\lfloor n/j\rfloor \right)\right)\le \frac{n(n-1)}{2}+1-\sum _{j=1}^{n}\mathrm{max}\left(0,\varphi (j)-U\left(n-j\lfloor n/j\rfloor \right)\right)=U(n).$$ ∎ Below is a table of values of $`D(n)`$ and $`U(n)`$ for $`n\le 23`$, calculated in Maple, with $`P(n)`$ and our first estimate $`\frac{n(n-1)}{2}+1`$ provided for contrast. Note that in the range $`1\le n\le 23`$, we have $`D(n)=U(n)`$ except for $`n=19,20`$, when $`U(n)-D(n)=1`$. Is it true, for instance, that $$\frac{n(n-1)}{2}+1-\sum _{j=1}^{n}\mathrm{max}\left(0,\varphi (j)-D\left(n-j\lfloor n/j\rfloor \right)\right)-D(n)$$ is bounded as $`n\to \infty `$? ###### Example 1. The first dimension where $`D(n)<P(n)`$ is $`n=7`$, and it is easy then to show that $`D(n)<P(n)`$ for all $`n\ge 7`$. The difference $`P(7)-D(7)=2`$ arises from the following two relations: $$\frac{4}{(1-x^2)^2(1-x)^3}=\frac{3}{(1-x^3)(1-x)^4}+\frac{1}{(1-x^3)(1-x^2)^2}$$ and $$\frac{3}{(1-x^3)(1-x^2)(1-x)^2}=\frac{2}{(1-x^4)(1-x)^3}+\frac{1}{(1-x^4)(1-x^3)}.$$ The first relation, for example, says that if $`\chi `$ is a linear combination of $`\chi _N`$’s, then $$4\chi ((2,2)\text{-cycle})=3\chi (3\text{-cycle})+\chi ((3,2,2)\text{-cycle}).$$ Alternately, it tells us that for any $`N\ge 0`$, four times the number of nonnegative integral solutions to $`2x_1+2x_2+x_3+x_4+x_5=N`$ is equal to three times the number of such solutions to $`3x_1+x_2+x_3+x_4+x_5=N`$ plus the number of such solutions to $`3x_1+2x_2+2x_3=N`$.
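As an illustrative aside (ours, not from the original paper), both columns of the table can be reproduced with a few lines of Python rather than Maple: $`D(n)`$ as the rank of the matrix of truncated Taylor coefficients of the $`f_\lambda `$’s (truncating past degree $`n(n+1)/2`$ suffices, since over the common denominator a vanishing linear combination has numerator degree at most $`n(n+1)/2-n`$), and $`U(n)`$ by the recursion of Corollary 2.2. A minimal sketch:

```python
from fractions import Fraction
from functools import lru_cache
from math import gcd

def partitions(n, maxpart=None):
    """All partitions of n as non-increasing tuples."""
    maxpart = n if maxpart is None else maxpart
    if n == 0:
        yield ()
        return
    for head in range(min(n, maxpart), 0, -1):
        for tail in partitions(n - head, head):
            yield (head,) + tail

def phi(j):
    """Euler totient function."""
    return sum(1 for i in range(1, j + 1) if gcd(i, j) == 1)

def series(lam, terms):
    """Taylor coefficients of f_lambda(q) = prod_i 1/(1 - q^{lam_i})."""
    c = [0] * terms
    c[0] = 1
    for part in lam:                      # multiply the series by 1/(1 - q^part)
        for i in range(part, terms):
            c[i] += c[i - part]
    return c

def D(n):
    """D(n) = dim F_n, computed as a matrix rank over the rationals."""
    terms = n * (n + 1) // 2 + 1
    rows = [[Fraction(x) for x in series(lam, terms)] for lam in partitions(n)]
    rank, col = 0, 0
    while rows and col < terms:
        pivot = next((r for r in rows if r[col] != 0), None)
        if pivot is None:
            col += 1
            continue
        rows.remove(pivot)
        rows = [[a - (r[col] / pivot[col]) * b for a, b in zip(r, pivot)]
                for r in rows]
        rank, col = rank + 1, col + 1
    return rank

@lru_cache(maxsize=None)
def U(n):
    """Upper bound of Corollary 2.2."""
    if n == 0:
        return 1
    return (n * (n - 1) // 2 + 1
            - sum(max(0, phi(j) - U(n - j * (n // j))) for j in range(1, n + 1)))

for n in range(1, 9):
    print(n, D(n), U(n))   # at n = 7: D(7) = P(7) - 2 = 13, as in Example 1
```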
## 3. Lower Bounds on $`D(n)`$ Let $`\lambda =(\lambda _1,\dots ,\lambda _k)\vdash n`$. The rational function $`f_\lambda (q)`$ of equation (1) can be written as $$f_\lambda (q)=p_\lambda (1,q,q^2,\dots ),$$ where $`p_\lambda `$ denotes a power sum symmetric function. (See \[1, Ch. 7\] for the necessary background on symmetric functions.) Since the $`p_\lambda `$ for $`\lambda \vdash n`$ form a basis for the vector space (say over $`\mathbb{C}`$) $`\mathrm{\Lambda }^n`$ of all homogeneous symmetric functions of degree $`n`$ \[1, Cor. 7.7.2\], it follows that if $`\{u_\lambda \}_{\lambda \vdash n}`$ is any basis for $`\mathrm{\Lambda }^n`$ then $$D(n)=\mathrm{dim}\,\mathrm{span}_{\mathbb{C}}\{u_\lambda (1,q,q^2,\dots ):\lambda \vdash n\}.$$ In particular, let $`u_\lambda =e_\lambda `$, the elementary symmetric function indexed by $`\lambda `$. Define $$d(\lambda )=\sum _i\binom{\lambda _i}{2}.$$ According to \[1, Prop. 7.8.3\], we have $$e_\lambda (1,q,q^2,\dots )=\frac{q^{d(\lambda )}}{\prod _i(1-q)(1-q^2)\cdots (1-q^{\lambda _i})}.$$ Since power series of different degrees (where the *degree* of a power series is the exponent of its first nonzero term) are linearly independent, we obtain from Proposition 1.2 the following result. ###### Proposition 3.1. Let $`E(n)`$ denote the number of distinct integers $`d(\lambda )`$, where $`\lambda `$ ranges over all partitions of $`n`$. Then $`D(n)\ge E(n)`$. Note. We could also use the basis $`s_\lambda `$ of Schur functions instead of $`e_\lambda `$, since by \[1, Cor. 7.21.3\] the degree of the power series $`s_\lambda (1,q,q^2,\dots )`$ is $`d(\lambda ^{})`$, where $`\lambda ^{}`$ denotes the conjugate partition to $`\lambda `$. Define $`G(n)+1`$ to be the least positive integer that cannot be written in the form $`\sum _i\binom{\lambda _i}{2}`$, where $`\lambda \vdash n`$. Thus all integers $`1,2,\dots ,G(n)`$ can be so represented, so $`D(n)\ge E(n)\ge G(n)`$. We can obtain a relatively tractable lower bound for $`G(n)`$, as follows. For a positive integer $`m`$, write (uniquely) (3) $$m=\binom{k_1}{2}+\binom{k_2}{2}+\cdots +\binom{k_r}{2},$$ where $`k_1\ge k_2\ge \cdots \ge k_r\ge 2`$ and $`k_1,k_2,\dots `$ are chosen successively as large as possible so that $$m-\binom{k_1}{2}-\binom{k_2}{2}-\cdots -\binom{k_i}{2}\ge 0$$ for all $`1\le i\le r`$. For instance, $`26=\binom{7}{2}+\binom{3}{2}+\binom{2}{2}+\binom{2}{2}`$. Define $`\nu (m)=k_1+k_2+\cdots +k_r`$. Suppose that $`\nu (m)\le n`$ for all $`m\le N`$. Then if $`m\le N`$ we can write $`m=\binom{k_1}{2}+\cdots +\binom{k_r}{2}`$ so that $`k_1+\cdots +k_r\le n`$. Hence if $`\lambda =(k_1,\dots ,k_r,1^{n-\sum k_i})`$ (where $`1^s`$ denotes $`s`$ parts equal to 1), then $`\lambda `$ is a partition of $`n`$ for which $`\sum _i\binom{\lambda _i}{2}=m`$. It follows that if $`\nu (m)\le n`$ for all $`m\le N`$ then $`G(n)\ge N`$. Hence if we define $`H(n)`$ to be the largest integer $`N`$ for which $`\nu (m)\le n`$ whenever $`m\le N`$, then we have established the string of inequalities (4) $$D(n)\ge E(n)\ge G(n)\ge H(n).$$ Here is a table of values of these numbers for $`1\le n\le 23`$. Note that $`D(n)`$ appears to be close to $`E(n+1)`$. We don’t have any theoretical explanation of this observation. ###### Proposition 3.2. We have (5) $$\nu (m)\le \sqrt{2m}+3m^{1/4}$$ for all $`m\ge 405`$. ###### Proof. The proof is by induction on $`m`$. It can be checked with a computer that equation (5) is true for $`405\le m\le 50000`$. Now assume that $`M>50000`$ and that (5) holds for $`405\le m<M`$. Let $`p=p_M`$ be the unique positive integer satisfying $$\binom{p}{2}\le M<\binom{p+1}{2}.$$ Thus $`p`$ is just the integer $`k_1`$ of equation (3). Explicitly we have $$p_M=\left\lfloor \frac{1+\sqrt{8M+1}}{2}\right\rfloor .$$ By the definition of $`\nu (M)`$ we have $$\nu (M)=p_M+\nu \left(M-\binom{p_M}{2}\right).$$ It can be checked that the maximum value of $`\nu (m)`$ for $`m<405`$ is $`\nu (404)=42`$. Set $`q_M=(1+\sqrt{8M+1})/2`$. Since $`M-\binom{p_M}{2}\le p_M\le q_M`$, by the induction hypothesis we have $$\nu (M)\le q_M+\mathrm{max}(42,\sqrt{2q_M}+3q_M^{1/4}).$$ It is routine to check that when $`M>50000`$ the right hand side is less than $`\sqrt{2M}+3M^{1/4}`$, and the proof follows. ∎
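The greedy decomposition behind $`\nu (m)`$ and the resulting lower bound $`H(n)`$ are straightforward to compute. The following Python sketch (added here for illustration; the paper's own checks were done by computer, as noted above) implements both:

```python
def nu(m):
    """nu(m) from equation (3): greedily subtract the largest binomial
    C(k,2) <= remaining m (with k >= 2) and sum up the k's."""
    total = 0
    while m > 0:
        k = 2
        while (k + 1) * k // 2 <= m:   # grow k while C(k+1,2) still fits
            k += 1
        total += k
        m -= k * (k - 1) // 2
    return total

def H(n):
    """Largest N such that nu(m) <= n for every m <= N."""
    N = 0
    while nu(N + 1) <= n:
        N += 1
    return N

assert nu(26) == 7 + 3 + 2 + 2        # 26 = C(7,2) + C(3,2) + C(2,2) + C(2,2)
assert max(nu(m) for m in range(1, 405)) == nu(404) == 42
print([H(n) for n in range(5, 12)])
```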
###### Proposition 3.3. There exists a constant $`c>0`$ such that $$H(n)\ge \frac{n^2}{2}-cn^{3/2}$$ for all $`n\ge 1`$. ###### Proof. From the definition of $`H(n)`$ and Proposition 3.2 (and the fact that the right-hand side of equation (5) is increasing), along with the inequality $`\nu (m)\le 42`$ for $`m\le 404`$ (note that $`42\approx \sqrt{2\cdot 405}+3\cdot 405^{1/4}`$), it follows that $$H\left(\sqrt{2m}+3m^{1/4}\right)\ge m$$ for $`m>404`$. For $`n`$ sufficiently large, we can evidently choose $`m`$ maximal such that $`\sqrt{2m}+3m^{1/4}\le n`$, so $`H(n)\ge m`$. Since $`\sqrt{2m}+3m^{1/4}+1>n`$, an application of the quadratic formula (again for $`n`$ sufficiently large) shows $$m^{1/4}\ge \frac{-3+\sqrt{9+4\sqrt{2}(n-1)}}{2\sqrt{2}},$$ from which the result follows without difficulty. ∎ Since we have established both upper bounds (equation (2)) and lower bounds (equation (4) and Proposition 3.3) for $`D(n)`$ asymptotic to $`n^2/2`$, we obtain the following corollary. ###### Corollary 3.4. There holds the asymptotic formula $`D(n)\sim \frac{1}{2}n^2`$.
# Realistic shell-model calculations for proton-rich N=50 isotones ## I Introduction In recent years, we have studied a number of nuclei around doubly magic <sup>100</sup>Sn, <sup>132</sup>Sn, and <sup>208</sup>Pb within the framework of the shell model employing realistic effective interactions derived from the meson-theoretic Bonn-A nucleon-nucleon ($`NN`$) potential. We have focused attention on nuclei with few valence particles or holes, since they provide the best testing ground for the basic ingredients of shell-model calculations, especially as regards the matrix elements of the two-body effective interaction. The main motivation for carrying out this extensive program of calculations was to try to assess the role of realistic effective interactions in the shell-model approach to the description of nuclear structure properties. The results of our calculations have so far turned out to be in remarkably good agreement with experiment for all the nuclei considered, providing evidence that realistic effective interactions are able to describe with quantitative accuracy the spectroscopic properties of complex nuclei. In this connection, it is worth noting that these results are considerably better than those obtained in earlier works for the light $`s`$-$`d`$ nuclei. While in the <sup>132</sup>Sn and <sup>208</sup>Pb regions we have studied nuclei with both valence particles and holes, around $`A=100`$ we have only considered the light Sn isotopes, namely we have not gone below the 50-82 shell. The study of nuclei lacking a few nucleons with respect to <sup>100</sup>Sn, which is the heaviest $`N=Z`$ doubly magic nucleus, is of course of great relevance from the shell-model point of view. Nuclei of this kind, however, lie well away from the valley of stability and experimental information on their spectroscopic properties is still very scanty. In this context, the proton-rich $`N=50`$ isotones are of special interest. In fact, while the development of radioactive ion beams opens up the prospect of spectroscopic studies of a number of <sup>100</sup>Sn neighbors, use of large multidetector $`\gamma `$-ray arrays is already providing new experimental data for these singly magic nuclei. In particular, four excited states in <sup>98</sup>Cd, two proton holes from <sup>100</sup>Sn, have been recently identified in an in-beam spectroscopy experiment. On the above grounds, we found it very interesting to include in our program of calculations the proton-rich $`N=50`$ isotones <sup>98</sup>Cd, <sup>97</sup>Ag, and <sup>96</sup>Pd (some preliminary results have already been presented in Ref. ). Actually, the motivation for the present study is twofold. On the one hand, these nuclei with two, three, and four holes in the $`Z=28`$–50 shell offer the opportunity to test our realistic effective interaction for nuclei below <sup>100</sup>Sn. On the other hand, the success achieved by our previous calculations on medium- and heavy-mass nuclei encourages us to make predictions which may stimulate, and be helpful to, future experiments. The $`N=50`$ isotones have long been the subject of theoretical interest. In most of the shell-model calculations performed in the last two decades (earlier references can be found in Ref. ), however, attention has been focused on the lighter isotones up to mass 95 or 96. In this context, we may mention the extensive study of the $`N=50`$ isotones from <sup>82</sup>Ge up to <sup>96</sup>Pd performed some ten years ago by Ji and Wildenthal.
In that work <sup>78</sup>Ni was considered as a closed core and an empirical effective Hamiltonian was obtained by fitting the two-body matrix elements and the single-particle energies to approximately 170 experimental data. Actually, the low-energy spectra of <sup>98</sup>Cd and <sup>97</sup>Ag have been predicted only in the work of Ref. , where the protons were assumed to fill solely the $`0g_{9/2}`$ and $`1p_{1/2}`$ levels. In all previous calculations empirical two-body matrix elements have been used, an exception being the work of Ref. , where the effective interaction was derived from the Sussex interaction. To our knowledge, the present calculations are the first ones where the two-body effective interaction has been derived from a modern free nucleon-nucleon potential. Before closing this section we should remark that, at variance with our previous calculations in the <sup>132</sup>Sn and <sup>208</sup>Pb regions, we had to face here the problem of choosing a set of single proton-hole energies without much guidance from experiment. In fact, while no spectroscopic data are yet available for the single-hole valence nucleus <sup>99</sup>In, only little relevant information is provided by the observed spectra of <sup>98</sup>Cd and <sup>97</sup>Ag. We will come back to this important point later. The paper is organized as follows. In Sec. II we give an outline of our calculations and describe in detail how we have determined the single-hole energies. Our results are presented and compared with the experimental data in Sec. III, where we also comment on the results of Ref. . Section IV presents a summary of our conclusions. ## II Outline of calculations Our effective interaction $`V_{\mathrm{eff}}`$ was derived from the Bonn-A free nucleon-nucleon ($`NN`$) potential using a $`G`$-matrix formalism, including renormalizations from both core polarization and folded diagrams. Since we have assumed <sup>100</sup>Sn as a closed core, protons are treated as valence holes, which implies the derivation of a hole-hole effective interaction. We have chosen the Pauli exclusion operator $`Q_2`$ in the $`G`$-matrix equation, $$G(\omega )=V+VQ_2\frac{1}{\omega -Q_2TQ_2}Q_2G(\omega ),$$ (1) as specified by ($`n_1,n_2,n_3`$) = (11, 21, 55) for both neutron and proton orbits. Here $`V`$ represents the $`NN`$ potential, $`T`$ denotes the two-nucleon kinetic energy, and $`\omega `$ is the so-called starting energy. We employ a matrix inversion method to calculate the above $`G`$ matrix in an essentially exact way. For the harmonic oscillator parameter $`\hbar \omega `$ we adopt the value 8.5 MeV, as given by the expression $`\hbar \omega =45A^{-1/3}-25A^{-2/3}`$ for $`A=100`$. Using the above $`G`$ matrix we then calculate the so-called $`\widehat{Q}`$ box, which is composed of irreducible valence-linked diagrams up to second order in $`G`$. These are just the seven one- and two-body diagrams considered in Ref. . Since we are dealing with external hole lines, the calculation of the $`\widehat{Q}`$-box diagrams is somewhat different from that usual for particles.
For example, the familiar core-polarization diagram $`G_{3p1h}`$ becomes $$\langle ab;J|G_{3h1p}|cd;J\rangle =\frac{(-1)^{j_a+j_b+j_c+j_d}}{\widehat{J}}\sum _{J'}\widehat{J'}\sum _{ph}(-1)^{j_p-j_h+J'}X\left(\begin{array}{ccc}j_c& j_d& J\\ j_a& j_b& J\\ J'& J'& 0\end{array}\right)\times \frac{1}{\omega -(ϵ_p-ϵ_h-ϵ_b-ϵ_c)}\langle c\overline{h}|G(\omega _1)J'J'|a\overline{p}\rangle \langle p\overline{d}|G(\omega _2)J'J'|h\overline{b}\rangle ,$$ (2) where $`\widehat{x}=(2x+1)^{1/2}`$ and the off-shell energy variables are: $`\omega _1=\omega +ϵ_h+ϵ_a+ϵ_b+ϵ_c`$ and $`\omega _2=\omega +ϵ_h+ϵ_b+ϵ_c+ϵ_d`$. The $`ϵ`$’s are the unperturbed single-particle energies. $`X`$ is the standard normalized 9-$`j`$ symbol. The cross-coupled $`G`$-matrix elements, on the right side of Eq. (2), are related to the usual direct-coupled ones by a simple transformation, as in Ref.. The effective interaction, which is energy independent, can be schematically written in operator form as: $$V_{\mathrm{eff}}=\widehat{Q}-\widehat{Q}'\int \widehat{Q}+\widehat{Q}'\int \widehat{Q}\int \widehat{Q}-\widehat{Q}'\int \widehat{Q}\int \widehat{Q}\int \widehat{Q}+\cdots ,$$ (3) where the integral sign represents a generalized folding operation. $`\widehat{Q}'`$ is obtained from $`\widehat{Q}`$ by removing terms of first order in the reaction matrix $`G`$. After the $`\widehat{Q}`$ box is calculated, $`V_{\mathrm{eff}}`$ is then obtained by summing up the folded-diagram series of Eq. (3) to all orders using the Lee-Suzuki iteration method. As regards the electromagnetic observables, these have been calculated by making use of effective operators which take into account core-polarization effects. More precisely, by using a diagrammatic description as in Ref. , we have only included first-order diagrams in $`G`$. This implies that folded-diagram renormalizations are not needed.
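As an aside, the model-space decoupling that the Lee-Suzuki construction above achieves can be illustrated numerically on a toy problem. The sketch below is ours, not the authors' machinery: it builds the decoupling operator $`\omega `$ directly from exact eigenvectors of a random Hermitian matrix rather than by the iterative scheme, and contains nothing of the $`G`$-matrix formalism. It shows how a (non-Hermitian) effective Hamiltonian confined to a small model space reproduces a chosen subset of exact eigenvalues:

```python
import numpy as np

rng = np.random.default_rng(0)
dim, d = 6, 2                              # full-space and model-space dimensions
H = rng.normal(size=(dim, dim))
H = (H + H.T) / 2                          # Hermitian stand-in for H0 + V

evals, evecs = np.linalg.eigh(H)
Vp, Vq = evecs[:d, :d], evecs[d:, :d]      # P- and Q-space components of the
                                           # d lowest eigenvectors

# Decoupling operator omega (maps P to Q); Vp is invertible for a generic H.
omega = Vq @ np.linalg.inv(Vp)

# Effective Hamiltonian acting in the model space only: P H P + P H Q omega.
Heff = H[:d, :d] + H[:d, d:] @ omega

print(np.sort(np.linalg.eigvals(Heff).real))   # matches the d lowest eigenvalues
print(evals[:d])
```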
Let us now come to the single-hole (SH) energies. As already mentioned in the Introduction, no spectroscopic information on <sup>99</sup>In is available. To obtain information on the location of the SH levels we have therefore resorted to an analysis of the spectra of the lighter $`N=50`$ isotones. Of course, most relevant to this analysis are those states which are predominantly of one-hole nature. Actually, <sup>91</sup>Nb is the first isotone where a state of this kind has been unambiguously identified for each of the four SH levels. More precisely, no states with a firm $`\frac{3}{2}^-`$ or $`\frac{5}{2}^-`$ assignment are reported for the heavier isotones, while at least one $`\frac{1}{2}^-`$ and one $`\frac{9}{2}^+`$ state have been identified up to <sup>95</sup>Rh. In <sup>97</sup>Ag there is only one $`\frac{9}{2}^+`$ state, which is the ground state. From the above it is clear that, if one wants to determine the SH energies by reproducing the observed one-hole states, calculations up to nine valence holes have to be carried out. It is to be expected, however, that a set of SH energies determined in this way may not be the most appropriate for calculations where few valence holes are considered. In fact, as is well known, significant changes in the nuclear mean field may occur when moving away from closed shells. In addition, an effective two-hole interaction derived by considering <sup>100</sup>Sn as a closed core may not be quite adequate for systems with several valence holes as, in these cases, many-body correlations are likely to come into play. In this situation, we have tried to determine the SH energies $`ϵ_{p_{1/2}}`$, $`ϵ_{p_{3/2}}`$, and $`ϵ_{f_{5/2}}`$ relative to the $`g_{9/2}`$ level, which has long been recognized to be the lowest-lying one, from an analysis of the spectra of the <sup>100</sup>Sn neighbors <sup>98</sup>Cd, <sup>97</sup>Ag, and <sup>96</sup>Pd, with two to four proton holes in the $`N=28`$–50 shell. We have found that (i) the energies of all the excited levels in <sup>98</sup>Cd and <sup>97</sup>Ag, which have an experimental counterpart, are quite insensitive to the position of the $`p_{3/2}`$ and $`f_{5/2}`$ orbits; (ii) the ground-state energies of all three nuclei, as well as the seniority-two states $`J^\pi =2^+,4^+,6^+,`$ and $`8^+`$ in <sup>96</sup>Pd, depend practically only on the sum of the energies of these two levels, $`ϵ=ϵ_{p_{3/2}}+ϵ_{f_{5/2}}`$. It turns out that all the considered experimental spectra are well described overall by fixing $`ϵ`$ at 5.2 MeV. More precisely, only the $`2^+`$ states in the two even isotones and the $`\frac{13}{2}^+`$ and the $`10^+`$ states in <sup>97</sup>Ag and <sup>96</sup>Pd show a rather large discrepancy. To eliminate this discrepancy a much larger value of $`ϵ`$ should be used, namely about 10 MeV. This value, however, would produce a significant downshift of all other levels. In addition, as we shall see in the following, it would be at variance with an empirical analysis of the one-hole states in $`N=50`$ isotones. It may also be mentioned that the energies of the $`2^+`$ states, as well as those of the $`\frac{13}{2}^+`$ and $`10^+`$ states, are all strongly dependent on the two-body matrix element $`\langle g_{9/2}^2\,J^\pi =2^+|V_{\mathrm{eff}}|g_{9/2}^2\,J^\pi =2^+\rangle `$. In this context, we should recall that also for the light Sn isotopes our calculations with the Bonn-A potential produced $`2^+`$ excitation energies somewhat higher than the observed values. As for the $`p_{1/2}`$ level, two states are sensitive to its position. They are the $`\frac{17}{2}^-`$ and $`5^-`$ states in <sup>97</sup>Ag and <sup>96</sup>Pd, respectively. We find that their experimental energies are very well reproduced by our calculations for $`ϵ_{p_{1/2}}=0.7`$ MeV. We have verified that this choice is rather independent of the value of $`ϵ`$. For instance, increasing $`ϵ`$ by about 2 MeV brings $`ϵ_{p_{1/2}}`$ up to only 0.9 MeV. From the above findings it appears that the SH energies $`ϵ_{p_{3/2}}`$ and $`ϵ_{f_{5/2}}`$ cannot be determined individually from the experimental data for <sup>98</sup>Cd, <sup>97</sup>Ag, and <sup>96</sup>Pd presently available. To obtain an estimate for these two $`ϵ`$’s, we have made a linear extrapolation of the energies of the $`\frac{3}{2}^-`$ and $`\frac{5}{2}^-`$ one-hole states observed in <sup>89</sup>Y, <sup>91</sup>Nb, and <sup>93</sup>Tc. Actually, states of this kind have been unambiguously identified only in <sup>89</sup>Y and <sup>91</sup>Nb. In particular, in the latter nucleus two $`\frac{3}{2}^-`$ states have been observed which exhaust almost all the $`p_{3/2}`$ strength. In our extrapolation, however, we have also included the experimental data relative to <sup>93</sup>Tc, according to the indications of Ref. . In this work the level at 2.1 MeV is identified as an $`l=3`$, $`J=\frac{5}{2}`$ state while plausible arguments are given favoring the $`\frac{3}{2}^-`$ assignment to the two states observed at 1.5 and 1.8 MeV.
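A linear extrapolation of this kind is trivial to reproduce. In the sketch below (illustrative only), the <sup>93</sup>Tc energies are taken from the text, while the <sup>89</sup>Y and <sup>91</sup>Nb entries are hypothetical placeholders, not data; with the actual level energies the procedure gives the values quoted next:

```python
import numpy as np

Z = np.array([39, 41, 43])   # proton numbers of 89Y, 91Nb, 93Tc; 99In has Z = 49

# Excitation energies (MeV) of the 3/2- and 5/2- one-hole states. Only the 93Tc
# values (last entries: the ~1.5-1.8 MeV centroid and 2.1 MeV) are quoted in the
# text; the 89Y and 91Nb entries are hypothetical stand-ins. With the actual
# level energies the extrapolation yields about 2 and 3 MeV (see below).
E = {"3/2-": np.array([1.30, 1.50, 1.65]),
     "5/2-": np.array([1.75, 1.95, 2.10])}

for label, energies in E.items():
    slope, intercept = np.polyfit(Z, energies, 1)   # least-squares line E(Z)
    print(f"{label} extrapolated to Z = 49: {slope * 49 + intercept:.1f} MeV")
```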
The above procedure yields the values of about 2 and 3 MeV for the $`\frac{3}{2}^-`$ and $`\frac{5}{2}^-`$ SH energies in <sup>99</sup>In. Owing to the uncertainty inherent in such a derivation, these values should be taken only as a reasonable estimate. In support of this procedure, however, speaks the fact that for the $`p_{1/2}`$ level it yields $`ϵ_{p_{1/2}}=0.8`$ MeV. On the above grounds, we have adopted for the SH energies the following values (in MeV): $`ϵ_{g_{9/2}}=0.0`$, $`ϵ_{p_{1/2}}=0.7`$, $`ϵ_{p_{3/2}}=2.1`$, and $`ϵ_{f_{5/2}}=3.1`$. It should be pointed out that these values are quite different from those adopted by other authors. In particular, the SH energies determined in Ref. are higher than ours, the difference ranging from more than 1 MeV for $`ϵ_{p_{1/2}}`$ and $`ϵ_{p_{3/2}}`$ to 3.2 MeV for $`ϵ_{f_{5/2}}`$. ## III Results and Comparison with experiment We present here the results of our calculations for <sup>98</sup>Cd, <sup>97</sup>Ag, and <sup>96</sup>Pd. They have been obtained by using the OXBASH shell-model code. The experimental and theoretical spectra are compared in Figs. 1, 2, and 3, where we report all the experimental levels, except the $`13^+`$ and $`15^+`$ states observed at 6.7 and 7.0 MeV in <sup>96</sup>Pd, which cannot be constructed in our model space. In the calculated spectra only those yrast states which are candidates for the observed levels are reported. A complete list of excitation energies up to 5, 3, and 4 MeV is given in Tables I-III for <sup>98</sup>Cd, <sup>97</sup>Ag, and <sup>96</sup>Pd, respectively. From Figs. 1-3 we see that our results are in very good agreement with experiment. A measure of the quality of the agreement is given by the rms deviation $`\sigma =\{(1/N_d)\sum _i[E_{\mathrm{exp}}(i)-E_{\mathrm{calc}}(i)]^2\}^{1/2}`$ (where $`N_d`$ is the number of data), which takes the values 107, 108, and 122 keV for <sup>98</sup>Cd, <sup>97</sup>Ag, and <sup>96</sup>Pd, respectively. As was already discussed in Sec. II, the main point of disagreement is the position of the $`2^+`$ state in both the even isotones, as well as that of the $`\frac{13}{2}^+`$ and $`10^+`$ states in <sup>97</sup>Ag and <sup>96</sup>Pd. In fact, the discrepancy between theory and experiment for the energies of these four states goes from 140 to 263 keV while it is less than 100 keV for all other states. As regards the structure of the states having an experimental counterpart, we find that the positive-parity states in all three nuclei are dominated by the $`g_{9/2}^n`$ configuration, while the negative parity ones are practically of pure $`g_{9/2}^{(n-1)}p_{1/2}^1`$ character. In <sup>98</sup>Cd and <sup>97</sup>Ag only the ground states receive a significant contribution from configurations other than the dominant one, the percentage being about 20% in both nuclei. As for <sup>96</sup>Pd, the wave functions of the ground state and the first four excited states are even less pure. In fact, the percentage of the $`g_{9/2}^4`$ configuration reaches at most 81% for the $`4_1^+`$, $`6_1^+`$, and $`8_1^+`$ states, being only 64% for the ground state. Note that in these states, as well as in the ground states of <sup>98</sup>Cd and <sup>97</sup>Ag, a significant percentage of the $`g_{9/2}^{(n-2)}p_{3/2}^2`$ and $`g_{9/2}^{(n-2)}f_{5/2}^2`$ configurations is present. In particular, in the ground state of <sup>96</sup>Pd the percentage of each of these two configurations is 9%.
From Figs. 1-3 we see that rather little experimental information is presently available for <sup>98</sup>Cd, <sup>97</sup>Ag, and <sup>96</sup>Pd. Much richer spectra, however, are predicted by the theory. It is therefore interesting to discuss in some detail our predictions, in the hope that they may be verified in a not too distant future. As regards <sup>98</sup>Cd, it may be seen from Table I that, just above the first four excited states having an experimental counterpart, we find three states with $`J^\pi =4^-,5^-`$, and $`0^+`$, the first two being the members of the doublet $`g_{9/2}^1p_{1/2}^1`$ and the third one arising from the configuration $`p_{1/2}^2`$. The position of the $`5^-`$ state is quite consistent with the experimental information available for the two lighter even isotones. In fact, in <sup>96</sup>Pd and <sup>94</sup>Ru a $`5^-`$ state has been observed at 2.65 and 2.62 MeV, respectively. Between 3.8 and 5 MeV we find all the members of the $`g_{9/2}^1p_{3/2}^1`$ multiplet and the $`7^-`$ state arising from the $`g_{9/2}^1f_{5/2}^1`$ configuration. In this energy interval is also located the $`2^+`$ state of the $`p_{1/2}^1p_{3/2}^1`$ configuration. In Table II all the excitation energies up to 3 MeV are reported for <sup>97</sup>Ag. Below this energy we find all the states arising from the configurations $`g_{9/2}^3`$, $`g_{9/2}^2p_{1/2}^1`$, and $`g_{9/2}^1p_{1/2}^2`$ as well as the two seniority-one states of the $`g_{9/2}^2p_{3/2}^1`$ and $`g_{9/2}^2f_{5/2}^1`$ configurations. In particular, we predict as first excited state a seniority-one $`\frac{1}{2}^-`$ state at about 0.5 MeV. This prediction is in agreement with the experimental findings for the lighter isotones. Furthermore, it should be mentioned that our first $`\frac{5}{2}^-`$ state is essentially a pure seniority-three $`g_{9/2}^2p_{1/2}^1`$ state while almost all the $`l=3`$ one-hole strength is concentrated in the second one at 2.6 MeV. On the other hand, we find that the $`p_{3/2}`$ strength is almost equally distributed between the first and second $`\frac{3}{2}^-`$ states. As for <sup>96</sup>Pd, only 2 out of the 25 states which we predict up to 4 MeV (see Table III) arise from configurations other than $`g_{9/2}^4`$ and $`g_{9/2}^3p_{1/2}^1`$. They are the $`J^\pi =0_2^+`$ and $`3_1^-`$ states with a $`g_{9/2}^2p_{1/2}^2`$ and $`g_{9/2}^3p_{3/2}^1`$ dominant component, respectively. From the above discussion it is evident that some of our predictions are closely related to the values adopted for $`ϵ_{p_{3/2}}`$ and $`ϵ_{f_{5/2}}`$. For instance, as shown before, we find that the wave functions of several states in the three considered isotones contain non-negligible components outside the ($`g_{9/2}`$,$`p_{1/2}`$) space. This indicates that a two-level model space would not be adequate even for the description of the heavier $`N=50`$ isotones. We also predict the absence of a pronounced gap above the $`0_2^+`$ state in the spectrum of <sup>98</sup>Cd as well as rather low-lying one-hole $`\frac{3}{2}^-`$ and $`\frac{5}{2}^-`$ states in <sup>97</sup>Ag. This makes it clear that, in the absence of a spectroscopic study of <sup>99</sup>In, the discovery of new selected levels in <sup>98</sup>Cd and <sup>97</sup>Ag represents the best source of information on the SH spectrum. To conclude this discussion, a further comment is in order.
As it occurs for the $`\frac{13}{2}^+`$ state in <sup>97</sup>Ag and the $`2^+`$ and $`10^+`$ states in <sup>96</sup>Pd, we expect that the calculated excitation energies of all other states in these two nuclei arising from the $`2^+`$ state of <sup>98</sup>Cd may be somewhat overestimated (by 200-300 keV). This is the case, for instance, of the $`(\frac{5}{2}^+)_1`$, $`(\frac{7}{2}^+)_1`$, $`(\frac{9}{2}^+)_2`$, and $`(\frac{11}{2}^+)_1`$ states in <sup>97</sup>Ag. Let us now come to the electromagnetic observables. The effective operators needed for the calculation have been derived as described in Sec. II. Experimental information on electromagnetic properties in proton-rich $`N=50`$ isotones is very scanty. The measured $`E2`$ transition rates are compared with the calculated values in Table IV, where we also report our predicted $`B(E2)`$ values for all the states having an experimental counterpart. As regards the $`B(E2;8^+\to 6^+)`$ in <sup>98</sup>Cd, the two different measured values result from the experiments of Refs. and , where this nucleus was produced by a fusion-evaporation reaction and by fragmentation of a <sup>106</sup>Cd beam, respectively. While there are some doubts about both these values, the one in Ref. , which corresponds to a proton effective charge appreciably larger than $`e`$, is consistent with that measured for <sup>96</sup>Pd. From Table IV we see that the agreement between experiment and theory for the $`B(E2;8^+\to 6^+)`$ and $`B(E2;6^+\to 4^+)`$ in <sup>96</sup>Pd is quite satisfactory, the calculated values being only slightly smaller than the observed ones. As for <sup>98</sup>Cd, the calculated $`B(E2;8^+\to 6^+)`$ value agrees with the result of Ref. within the error bars. It is worth noting that our results do not differ significantly from those obtained using an effective proton charge $`e_p^{\mathrm{eff}}=1.35e`$. As regards the magnetic observables, only the magnetic moment of the $`8^+`$ state in <sup>96</sup>Pd is known. The measured value is $`10.97\pm 0.06`$ nm, to be compared with the calculated value of 10.54 nm. We have already mentioned in the Introduction that several calculations have been performed to study the shell-model structure of the $`N=50`$ isotones. We only comment here on the calculation of Ref. where, assuming <sup>100</sup>Sn as a closed core, the two-hole effective interaction was derived by using the Sussex matrix elements in a perturbation scheme up to second order without folded-diagram renormalization. As pointed out in Sec. II, the adopted SH energies, as determined from a least-squares fit to the energies of the $`N=50`$, $`37\le Z\le 44`$ nuclei, are much higher than ours. In that work, however, attention was focused on nuclei with $`Z=34`$–46 and no results were given for <sup>98</sup>Cd and <sup>97</sup>Ag, for which experimental information has become available only in more recent times. We have therefore found it interesting to perform calculations for these two nuclei using the effective interaction and the SH energies of Ref. . We have also calculated a more complete spectrum of <sup>96</sup>Pd than that given there. Hereafter we shall refer to these calculations as Sussex (SUX) calculations. The $`\sigma `$ value relative to the SUX calculations for <sup>98</sup>Cd, <sup>97</sup>Ag, and <sup>96</sup>Pd turns out to be 84, 353, and 218 keV, respectively. More explicitly, the experimental position of the positive-parity states is well reproduced.
In particular, the calculated energies of the $`2^+`$ states in both even isotones, as well as those of the $`\frac{13}{2}^+`$ and $`10^+`$ states in <sup>97</sup>Ag and <sup>96</sup>Pd, come closer to the experimental values than those obtained from our calculations. For all other positive-parity states the agreement with experiment obtained from SUX and our calculations is comparable. On the other hand, the $`\frac{17}{2}^-`$ and $`5^-`$ states in <sup>97</sup>Ag and <sup>96</sup>Pd lie 704 and 560 keV above the experimental ones, respectively, and the excitation energies of the first $`5^-`$ and $`\frac{1}{2}^-`$ states in <sup>98</sup>Cd and <sup>97</sup>Ag are predicted to be about 3.5 and 1.4 MeV, which are not consistent with the experimental information available for the lighter isotones. Furthermore, for <sup>97</sup>Ag the $`p_{3/2}`$ and $`f_{5/2}`$ strengths are concentrated in the second $`\frac{3}{2}^-`$ and the second $`\frac{5}{2}^-`$ states, which are predicted to lie at about 3 and 4 MeV, respectively. These values are more than 1 MeV higher than ours, which come quite close to those obtained by extrapolating the energies of these states from the lighter isotones. From the above we feel that the values of the SH energies adopted in Ref. are overestimated. As regards the $`p_{1/2}`$ level, this conclusion is strongly supported by the fact that, as mentioned above, the calculated energies of the $`\frac{17}{2}^-`$ and $`5^-`$ states in <sup>97</sup>Ag and <sup>96</sup>Pd, which are substantially dependent on the position of this level, are largely overestimated. On the other hand, we have verified that decreasing the values of the SH energies is not sufficient to improve the agreement between theory and experiment on the whole. In fact, while this produces a lowering of the negative-parity states it moves the positive-parity states up to too high an energy. This latter effect, however, might be compensated by taking into account folded-diagram renormalization, which produces in general a compression of the spectra. In this context, we may mention that the authors of Ref. say that the folded-diagram renormalization would have worked against the outcome of their calculations. Our preceding remarks indicate that this conclusion could have been turned round had they made a different choice of the SH energies. ## IV Summary and conclusions In this paper, we have performed a shell-model study of the $`N=50`$ isotones <sup>98</sup>Cd, <sup>97</sup>Ag, and <sup>96</sup>Pd employing a two-hole effective interaction derived from the Bonn-A nucleon-nucleon potential by means of a $`G`$-matrix folded-diagram approach. We have shown that all the experimental data available for these nuclei are well reproduced by our calculations. In addition, some of our predictions, namely the existence of a $`5^-`$ state in <sup>98</sup>Cd and a $`\frac{1}{2}^-`$ state in <sup>97</sup>Ag at 2.7 and 0.5 MeV, respectively, are strongly supported by the experimental information existing for the lighter isotones. This work is framed in the context of an extensive program of calculations aimed at assessing just how accurate a description of nuclear structure properties can be provided by an effective interaction derived from the $`NN`$ potential.
The quality of the results presented here turns out to be comparable to that obtained in the <sup>132</sup>Sn and <sup>208</sup>Pb regions where, however, there is a substantially larger body of experimental data with which to compare the calculated spectroscopic properties. In particular, we emphasize that the experimental information existing for the $`N=50`$ isotones provides only little guidance to the choice of the SH energies, which renders it a difficult task. In this situation, our choice has been based on an analysis of the spectra of <sup>98</sup>Cd, <sup>97</sup>Ag, and <sup>96</sup>Pd and on the values of the experimental $`\frac{3}{2}^{}`$ and $`\frac{5}{2}^{}`$ single-hole energies in <sup>89</sup>Y, <sup>91</sup>Nb, and <sup>93</sup>Tc. We feel, however, that the unavoidable uncertainty in the adopted SH energies should not be so severe as to make our findings questionable. In conclusion, we are confident that the present work may be of stimulus and help towards new experimental studies of the proton-rich $`N=50`$ isotones. ###### Acknowledgements. This work was supported in part by the Italian Ministero dell’Università e della Ricerca Scientifica e Tecnologica (MURST) and by the U.S. Grant No. DE-FG02-88ER40388. NI thanks the European Social Fund for financial support.
# The Variation of Gas Mass Distribution in Galaxy Clusters: Effects of Preheating and Shocks ## 1 Introduction Correlations among physical quantities of clusters of galaxies are very useful tools for studying the formation of clusters and cosmological parameters. Recently, we have found that clusters at low redshifts ($`z\lesssim 0.1`$) form a plane (the fundamental plane) in the three dimensional space represented by their core structures, that is, the central gas density $`\rho _{\mathrm{gas},0}`$, core radius $`r_c`$, and X-ray temperature $`T_{\mathrm{gas}}`$ (Fujita and Takahara, 1999a, Paper I). On the other hand, a simple theoretical model of cluster formation predicts that clusters should be characterized by the virial density $`\rho _{\mathrm{vir}}`$ (or the collapse redshift $`z_{\mathrm{coll}}`$) and the virial mass $`M_{\mathrm{vir}}`$ (Fujita and Takahara 1999b, Paper II). Thus, assuming the similarity of the dark matter distributions, clusters should form a plane in the three dimensional space of the dark matter density in the core $`\rho _{\mathrm{DM},\mathrm{c}}`$, the core radius of dark matter distribution $`r_{\mathrm{DM},\mathrm{c}}`$, and the virial temperature $`T_{\mathrm{vir}}`$<sup>1</sup> (<sup>1</sup>In Paper I, we used the terms 'virial density', 'virial radius', and 'virial mass' to denote the dark matter density in the core, the core radius of dark matter distribution, and the core mass, respectively. This is because we assumed that the dark matter density in the core is proportional to the average dark matter density over the whole cluster (Paper II). To avoid possible confusions, here we use the term 'dark matter', and the term 'virial' will be used to represent spatially averaged quantities of gravitational matter (mostly dark matter) within the virialized region.). However, the relations between the two planes are not simple; for example, it is found that $`\rho _{\mathrm{gas},0}`$ is not proportional to $`\rho _{\mathrm{DM},\mathrm{c}}`$. In Paper I, we found that the ratio $`\rho _{\mathrm{gas},0}/\rho _{\mathrm{DM},\mathrm{c}}`$ is not constant but obeys the relation $`\rho _{\mathrm{gas},0}/\rho _{\mathrm{DM},\mathrm{c}}\propto \rho _{\mathrm{DM},\mathrm{c}}^{0.1}M_{\mathrm{DM},\mathrm{c}}^{0.4}`$, where $`M_{\mathrm{DM},\mathrm{c}}`$ is the core mass. This raises the question of how the segregation between gas and dark matter occurs. In the hierarchical structure formation, dark halos are expected to obey scaling relations. In fact, numerical simulations suggest that the density distribution in dark halos takes a universal form, as claimed by Navarro et al. (1996, 1997). On a cluster scale, it can be approximated by $`\rho _{\mathrm{DM}}(r)\propto r^{-2}`$ for $`r\lesssim 1`$ Mpc, where detailed X-ray observations have been done (Makino et al., 1998). On the contrary, observations show that the slope of the density profile of the hot diffuse intracluster gas has a range of values. Radial surface brightness profiles of X-ray emission are often fitted with the conventional $`\beta `$ model as $$I(R)=\frac{I_0}{(1+R^2/r_c^2)^{3\beta _{\mathrm{obs}}-1/2}},$$ (1) where $`\beta _{\mathrm{obs}}`$ is the slope parameter (Cavaliere and Fusco-Femiano, 1978). If the intracluster gas is isothermal, equation (1) corresponds to the gas density profile of $$\rho _{\mathrm{gas}}(r)=\frac{\rho _{\mathrm{gas},0}}{(1+r^2/r_c^2)^{3\beta _{\mathrm{obs}}/2}}.$$ (2) Observations show that the slope parameter takes a range $`\beta _{\mathrm{obs}}\sim 0.4`$–1 (Jones and Forman, 1984, 1999).
This means that for $`r\gg r_\mathrm{c}`$, the density profiles range from $`r^{-1.2}`$ to $`r^{-3}`$, which are more diverse than those of dark matter. Moreover, observations show that the clusters with large $`r_c`$ and $`T_{\mathrm{gas}}`$ tend to have large $`\beta _{\mathrm{obs}}`$ (e.g. Neumann and Arnaud 1999; Horner et al. 1999; Jones and Forman 1999). Since the average gas fraction of clusters within radii much larger than $`r_\mathrm{c}`$ should be universal and the dark matter distribution of clusters is also universal, the variation of $`\beta _{\mathrm{obs}}`$ is expected to correlate with that of the gas fraction in the core region. In other words, the gas fraction at the core is not the same as that of the whole cluster and is not proportional to the dark matter density. This fact must be taken into account when we discuss cosmological parameters using observational X-ray data. Since the emissivity of X-ray gas is proportional to $`\rho _{\mathrm{gas}}^2`$, most of the X-ray emission of a cluster comes from the central region where $`\rho _{\mathrm{gas}}`$ is large. Although in Papers I and II we did not take account of the effects of $`\beta _{\mathrm{obs}}`$, we did find, by analyzing the X-ray emission, that the gas mass fraction in the core region is diverse. In this paper, we reanalyze the data taking account of $`\beta _{\mathrm{obs}}`$ and discuss the relation between core and global gas mass fractions. We will also show that major conclusions on the fundamental relations are not changed. The variation of gas mass fraction itself has been investigated by several authors (e.g. Ettori and Fabian, 1999; Arnaud and Evrard, 1999). Ettori and Fabian (1999) argue that it is partially explained if the dark matter has a significant baryonic component. Another possible explanation of the diverse gas distributions is that intracluster gas had already been heated before the collapse into the cluster; the energetic winds generated by supernovae are one possible mechanism to increase gas entropy (e.g. Dekel and Silk, 1986; Mihara and Takahara, 1994). In fact, Ponman et al. (1999) find that the entropy of the intracluster gas near the center of clusters is higher than can be explained by gravitational collapse alone. In order to estimate the effect of the preheating on intracluster gas, we must take account of shocks forming when the gas collapses into the cluster; they supply additional entropy to the gas. Cavaliere et al. (1997, 1998, 1999) have investigated both effects and predicted the relation between X-ray luminosities and temperatures ($`L_\mathrm{X}`$–$`T_{\mathrm{gas}}`$ relation). They predicted that the gas distributions of poor clusters are flatter than those of rich clusters, which results in a steeper slope of the $`L_\mathrm{X}`$–$`T_{\mathrm{gas}}`$ relation for poor clusters. This is generally consistent with the observations. It is an interesting issue to investigate whether this scenario provides a natural explanation for the observed dispersion of gas mass fraction in the cluster core and whether it reproduces, within our general theoretical framework, the X-ray fundamental plane we found in Paper I. In order to clarify what determines the gas distribution, we construct as simple a model as possible. Although many authors have studied the preheating of clusters (Kaiser 1991; Evrard and Henry 1991; Metzler and Evrard 1994; Balogh et al. 1999; Kay and Bower 1999; Wu et al.
1999; Valageas and Silk 1999), this is the first study to consider the influence of the preheating and shocks on the fundamental plane and the two-parameter family nature of clusters, paying explicit attention to the difference between the collapse redshift $`z_{\mathrm{coll}}`$ and the observed redshift $`z_{\mathrm{obs}}`$ of clusters. In §2, we explain the model of dark matter potential and shock heating of intracluster gas. In §3, we use the model to predict $`\beta _{\mathrm{obs}}`$–$`T_{\mathrm{gas}}`$ and $`\beta _{\mathrm{obs}}`$–$`r_\mathrm{c}`$ relations, and the fundamental plane and band of clusters. The predictions are compared with observations. ## 2 Models ### 2.1 Dark Matter Potential In order to predict the relations among parameters describing a dark matter potential, we use a spherical collapse model (Tomita, 1969; Gunn and Gott, 1972). Although the details are described in Paper II, we summarize them here for convenience. The virial density of a cluster $`\rho _{\mathrm{vir}}`$ at the time of the cluster collapse ($`z_{\mathrm{coll}}`$) is $`\mathrm{\Delta }_c`$ times the critical density of the universe at $`z=z_{\mathrm{coll}}`$. It is given by $$\rho _{\mathrm{vir}}=\mathrm{\Delta }_c\rho _{\mathrm{crit}}(z_{\mathrm{coll}})=\mathrm{\Delta }_c\rho _{\mathrm{crit},0}E(z_{\mathrm{coll}})^2=\mathrm{\Delta }_c\rho _{\mathrm{crit},0}\frac{\mathrm{\Omega }_0(1+z_{\mathrm{coll}})^3}{\mathrm{\Omega }(z_{\mathrm{coll}})},$$ (3) where $`\mathrm{\Omega }(z)`$ is the cosmological density parameter, and $`E(z)^2=\mathrm{\Omega }_0(1+z)^3/\mathrm{\Omega }(z)`$, where we do not take account of the cosmological constant. The index 0 refers to the values at $`z=0`$. Note that the redshift-dependent Hubble constant can be written as $`H(z)=100hE(z)\,\mathrm{km}\,\mathrm{s}^{-1}\,\mathrm{Mpc}^{-1}`$. We adopt $`h=0.5`$ for numerical values. In practice, we use the fitting formula of Bryan and Norman (1998) for the virial density: $$\mathrm{\Delta }_c=18\pi ^2+60x-32x^2,$$ (4) where $`x=\mathrm{\Omega }(z_{\mathrm{coll}})-1`$. It is convenient to relate the collapse time in the spherical model with the density contrast calculated by the linear theory. We define the critical density contrast $`\delta _c`$ that is the value, extrapolated to the present time ($`t=t_0`$) using linear theory, of the overdensity which collapses at $`t=t_{\mathrm{coll}}`$ in the exact spherical model. It is given by $$\delta _c(t_{\mathrm{coll}})=\frac{3}{2}D(t_0)\left[1+\left(\frac{t_\mathrm{\Omega }}{t_{\mathrm{coll}}}\right)^{2/3}\right]\quad (\mathrm{\Omega }_0<1),$$ (5) $$\delta _c(t_{\mathrm{coll}})=\frac{3(12\pi )^{2/3}}{20}\left(\frac{t_0}{t_{\mathrm{coll}}}\right)^{2/3}\quad (\mathrm{\Omega }_0=1)$$ (6) (Lacey and Cole, 1993), where $`D(t)`$ is the linear growth factor given by equation (A13) of Lacey and Cole (1993) and $`t_\mathrm{\Omega }=\pi H_0^{-1}\mathrm{\Omega }_0(1-\mathrm{\Omega }_0)^{-3/2}`$. For a power-law initial fluctuation spectrum $`P(k)\propto k^n`$, the rms amplitude of the linear mass fluctuations in a sphere containing an average mass $`M`$ at a given time is $`\delta \propto M^{-(n+3)/6}`$. Thus, the virial mass of clusters which collapse at $`t_{\mathrm{coll}}`$ is related to that at $`t_0`$ as $$M_{\mathrm{vir}}(t_{\mathrm{coll}})=M_{\mathrm{vir},0}\left[\frac{\delta _c(t_{\mathrm{coll}})}{\delta _c(t_0)}\right]^{-6/(n+3)}.$$ (7) Here, $`M_{\mathrm{vir},0}(=M_{\mathrm{vir}}[t_0])`$ is regarded as a variable because the actual amplitude of initial fluctuations has a distribution.
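For concreteness, equations (3), (4), and (7) are easy to evaluate numerically. The following Python sketch is ours, for illustration only; it assumes a $`\mathrm{\Lambda }=0`$ universe, as in the text, and specializes equation (7) to $`\mathrm{\Omega }_0=1`$, where $`\delta _c(t_{\mathrm{coll}})/\delta _c(t_0)=1+z_{\mathrm{coll}}`$:

```python
import numpy as np

def Omega(z, Omega0):
    """Density parameter at redshift z for a Lambda = 0 universe."""
    E2 = Omega0 * (1 + z)**3 + (1 - Omega0) * (1 + z)**2
    return Omega0 * (1 + z)**3 / E2

def Delta_c(z_coll, Omega0):
    """Bryan & Norman (1998) fit, equation (4), with x = Omega(z_coll) - 1."""
    x = Omega(z_coll, Omega0) - 1
    return 18 * np.pi**2 + 60 * x - 32 * x**2

def M_vir_EdS(z_coll, M_vir0, n=-1):
    """Equation (7) for Omega0 = 1, where delta_c(t_coll)/delta_c(t0) = 1 + z_coll."""
    return M_vir0 * (1 + z_coll)**(-6 / (n + 3))

print(Delta_c(0.0, 1.0))      # 18 pi^2 ~ 178
print(M_vir_EdS(1.0, 1e15))   # a z_coll = 1 cluster is 8x less massive for n = -1
```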
We relate $`t=t_{\mathrm{coll}}`$ to the collapse or formation redshift $`z_{\mathrm{coll}}`$, which depends on cosmological parameters. Thus, $`M_{\mathrm{vir}}`$ is a function of $`z_{\mathrm{coll}}`$ as well as $`M_{\mathrm{vir},0}`$. This means that for a given mass scale $`M_{\mathrm{vir}}`$, the amplitude of initial fluctuations takes a range of values, and spheres containing a mass of $`M_{\mathrm{vir}}`$ collapse at a range of redshifts. In the following, the slope of the spectrum is fixed at $`n=-1`$. It is typical of the scenario of standard cold dark matter for a cluster mass range, and is consistent with observations as shown in Paper II. The virial radius and temperature of a cluster are then calculated by $$r_{\mathrm{vir}}=\left(\frac{3M_{\mathrm{vir}}}{4\pi \rho _{\mathrm{vir}}}\right)^{1/3},$$ (8) $$T_{\mathrm{vir}}=\gamma \frac{\mu m_\mathrm{H}}{3k_\mathrm{B}}\frac{GM_{\mathrm{vir}}}{r_{\mathrm{vir}}},$$ (9) where $`\mu (=0.6)`$ is the mean molecular weight, $`m_\mathrm{H}`$ is the hydrogen mass, $`k_\mathrm{B}`$ is the Boltzmann constant, $`G`$ is the gravitational constant, and $`\gamma `$ is a fudge factor which typically ranges between 1 and 1.5. In Paper II, we adopted the value $`\gamma =1`$. Note that the value of $`\gamma `$ is applied only to dark matter, but not to gas, because we do not assume $`T_{\mathrm{gas}}=T_{\mathrm{vir}}`$ here. We emphasize that $`M_{\mathrm{vir}}`$, $`\rho _{\mathrm{vir}}`$, and $`r_{\mathrm{vir}}`$ are the virial mass, density, and radius at the time of the cluster collapse, respectively. ### 2.2 Shocks and Hydrostatic Equilibrium To study the effect of preheating, we here adopt a very simple model as a first step. When a cluster collapses, we expect that a shock wave forms and the infalling gas is heated. In order to derive the postshock temperature, we use a shock model of Cavaliere et al. (1998). For a given preshock temperature $`T_1`$, the postshock temperature $`T_2`$ can be calculated from the Rankine-Hugoniot relations. Assuming that the shock is strong and that the shock front forms in the vicinity of $`r_{\mathrm{vir}}`$, it is approximately given by $$k_\mathrm{B}T_2=\frac{\varphi (r_{\mathrm{vir}})}{3}+\frac{3}{2}k_\mathrm{B}T_1$$ (10) (Cavaliere et al., 1998), where $`\varphi (r)`$ is the potential at $`r`$. According to the virial theorem and the continuity when $`T_1`$ approaches zero, we should take $`\varphi (r_{\mathrm{vir}})/3=k_\mathrm{B}T_{\mathrm{vir}}`$. For $`r<r_{\mathrm{vir}}`$, we assume that the gas is isothermal and hydrostatic, and that the matter accretion after the cluster collapse does not change the structure of the central region of the cluster significantly, as confirmed by numerical simulations (e.g. Takizawa and Mineshige 1998). It is to be noted that even if the density profile of dark matter is represented by the universal profile (Navarro et al., 1996, 1997), it is not inconsistent with the isothermal $`\beta `$ model of gas represented by equation (2) (Makino et al. 1998; Eke et al. 1998) within the scope of present observations.
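Putting equations (3), (8), (9), and (10) together gives the virial scales and the postshock temperature directly. A minimal sketch in cgs units (again ours, reusing `Omega` and `Delta_c` from the sketch above; the printed check values are approximate):

```python
import numpy as np

G_N   = 6.674e-8        # gravitational constant, cgs
k_B   = 1.381e-16       # erg/K
m_H   = 1.673e-24       # g
Mpc   = 3.086e24        # cm
M_sun = 1.989e33        # g
mu, gamma_f, h = 0.6, 1.0, 0.5
H0 = 100 * h * 1e5 / Mpc                    # s^-1
rho_crit0 = 3 * H0**2 / (8 * np.pi * G_N)   # g cm^-3

def virial_and_shock(M_vir, z_coll, Omega0, kT1_keV):
    """Equations (3), (8), (9), and (10): r_vir, kT_vir, and postshock kT2."""
    rho_vir = (Delta_c(z_coll, Omega0) * rho_crit0
               * Omega0 * (1 + z_coll)**3 / Omega(z_coll, Omega0))
    r_vir = (3 * M_vir / (4 * np.pi * rho_vir))**(1.0 / 3.0)
    kT_vir = gamma_f * mu * m_H * G_N * M_vir / (3 * r_vir) / 1.602e-9  # keV
    kT2 = kT_vir + 1.5 * kT1_keV            # kT2 = kT_vir + (3/2) kT1
    return r_vir / Mpc, kT_vir, kT2

# ~ (2.7 Mpc, 3.3 keV, 4.1 keV) for a 1e15 M_sun cluster collapsing at z = 0:
print(virial_and_shock(1e15 * M_sun, 0.0, 1.0, 0.5))
```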
Under these assumptions, the gas temperature in the inner region of a cluster is $`T_{\mathrm{gas}}=T_2`$, and the mass within $`r`$ of the cluster center, $`M_{\mathrm{DM}}`$, is related to the density profile of intracluster gas, $`\rho _{\mathrm{gas}}`$, by $$M_{\mathrm{DM}}(r)=-\frac{k_\mathrm{B}T_{\mathrm{gas}}}{\mu m_\mathrm{H}G}r\frac{d\mathrm{ln}\rho _{\mathrm{gas}}}{d\mathrm{ln}r}.$$ (11) Since $`M_{\mathrm{DM}}(r_{\mathrm{vir}})=M_{\mathrm{vir}}`$, equations (9) and (11) yield $$T_{\mathrm{vir}}=-\frac{\gamma }{3}T_{\mathrm{gas}}\frac{d\mathrm{ln}\rho _{\mathrm{gas}}}{d\mathrm{ln}r}|_{r=r_{\mathrm{vir}}}.$$ (12) Defining $`\beta =T_{\mathrm{vir}}/T_{\mathrm{gas}}`$, the gas density profile is thus given by $$\rho _{\mathrm{gas}}(r)\propto r^{-3\beta /\gamma },$$ (13) as long as ($`d\mathrm{ln}\rho _{\mathrm{gas}}/d\mathrm{ln}r`$) is nearly constant. Equation (10) shows that in this model $`\beta `$ is a function of only $`T_{\mathrm{vir}}`$ when $`T_1`$ is regarded as an external parameter, that is, $$\beta =\frac{T_{\mathrm{vir}}}{T_{\mathrm{vir}}+(3/2)T_1}.$$ (14) Since $`T_{\mathrm{vir}}=T_{\mathrm{gas}}\beta `$, equation (14) is written as $$\beta =\frac{T_{\mathrm{gas}}-(3/2)T_1}{T_{\mathrm{gas}}}.$$ (15) Thus, the $`\beta `$–$`T_{\mathrm{gas}}`$ relation can be used to determine $`T_1`$ by comparing with the observation. Since both $`T_{\mathrm{vir}}`$ and $`r_{\mathrm{vir}}`$ are two-parameter families of $`z_{\mathrm{coll}}`$ and $`M_{\mathrm{vir},0}`$ (equations , , and ), equation (14) shows that $`\beta `$ can be represented by $`r_{\mathrm{vir}}`$ as $`\beta =\beta (r_{\mathrm{vir}},M_{\mathrm{vir},0})`$, if $`T_1`$ is specified. Recent numerical simulations suggest that the structure of the central region of clusters is related to $`z_{\mathrm{coll}}`$ (Navarro et al., 1997), and in particular $`r_{\mathrm{DM},\mathrm{c}}`$ is proportional to $`r_{\mathrm{vir}}`$ (Salvador-Solé et al., 1998; Makino et al., 1998). Therefore, if we assume that $`r_{\mathrm{DM},\mathrm{c}}=r_\mathrm{c}`$ and that $`r_{\mathrm{vir}}/r_\mathrm{c}`$ is constant as in Paper II, $`T_1`$ can also be determined by comparing the theoretical prediction of the $`\beta `$–$`r_\mathrm{c}`$ relation with the observation. Since a spherical collapse model predicts $`r_{\mathrm{vir}}(z_{\mathrm{coll}}=0)\approx 4`$ Mpc and observations show that $`r_\mathrm{c}(z_{\mathrm{coll}}=0)\approx 0.5`$ Mpc (Figure 1b in Paper II), we adopt $`r_{\mathrm{vir}}/r_\mathrm{c}=8`$ from now on. Thus, we obtain $`\beta =\beta (8r_\mathrm{c}[z_{\mathrm{coll}},M_{\mathrm{vir},0}],M_{\mathrm{vir},0})`$. ## 3 Results and Discussion ### 3.1 $`\beta `$–$`T_{\mathrm{gas}}`$ and $`\beta `$–$`r_\mathrm{c}`$ Relations Using the model constructed in §2.2, we predict relations between $`\beta `$ and $`T_{\mathrm{gas}}`$, and between $`\beta `$ and $`r_\mathrm{c}`$. If $`T_1`$ is mainly determined by the energetic winds generated in the forming galaxies or quasars before the formation of clusters, $`T_1`$ should be constant if subsequent adiabatic heating or cooling is neglected. However, if, besides the winds, the gravitational energy released in the subclusters, which later merged into the observed clusters, contributes to $`T_1`$, we expect that $`T_1`$ has a distribution produced by different merging histories. In order to determine the distribution in detail, we must calculate the merging histories by Monte Carlo realizations as Cavaliere et al. (1997, 1998) did. In this study, however, we consider the scatter by investigating a range of $`T_1`$ for simplicity.
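Equations (13)-(15) translate directly into code. A short sketch (ours) of the mapping from temperatures to the slope of the gas profile:

```python
def beta_from_Tvir(kT_vir, kT1):
    """Equation (14): beta = T_vir / (T_vir + (3/2) T1), temperatures in keV."""
    return kT_vir / (kT_vir + 1.5 * kT1)

def beta_from_Tgas(kT_gas, kT1):
    """Equation (15): beta = (T_gas - (3/2) T1) / T_gas."""
    return (kT_gas - 1.5 * kT1) / kT_gas

def outer_slope(beta, gamma_f=1.0):
    """Equation (13): rho_gas ~ r^(-3 beta / gamma) outside the core."""
    return -3.0 * beta / gamma_f

for kT in (3.0, 6.0, 10.0):          # preheating flattens cool clusters most
    b = beta_from_Tgas(kT, kT1=1.0)
    print(f"kT_gas = {kT} keV: beta = {b:.2f}, slope = {outer_slope(b):.2f}")
```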
We show in Figure 1 the $`\beta `$-$`T_{\mathrm{gas}}`$ relation for $`T_1=0.5`$, 1, and 2 keV. The observational data are overlaid. Since equation (2) is approximated by $`\rho _{\mathrm{gas}}(r)\propto r^{-3\beta _{\mathrm{obs}}}`$ for $`r\gg r_\mathrm{c}`$, the relation $$\beta =\gamma \beta _{\mathrm{obs}}$$ (16) is obtained by comparison with relation (13). Thus, in the following figures, the observed values of $`\beta _{\mathrm{obs}}`$ are converted by relation (16). In Figure 1 we assumed $`\gamma =1`$. As the data, we use only relatively hot ($`T_{\mathrm{gas}}\gtrsim 3`$ keV) and low redshift ($`z\lesssim 0.1`$) clusters obtained by Mohr et al. (1999) and Peres et al. (1998). Instead of $`\beta _{\mathrm{obs}}`$, Peres et al. (1998) present velocity dispersions corresponding to the gravitational potential well, $`\sigma _{\mathrm{deproj}}`$, derived with the deprojection method, ignoring velocity-dispersion anisotropies and gradients. Thus, for these data we assume that $`k_\mathrm{B}T_{\mathrm{vir}}=\mu m_\mathrm{H}\sigma _{\mathrm{deproj}}^2`$ and define $`\beta `$ as $`T_{\mathrm{vir}}/T_{\mathrm{gas}}`$. Figure 1 shows that the observational data are consistent with $`0.5\lesssim T_1\lesssim 2`$ keV, but it seems that a single value of $`T_1`$ does not represent the range of data. The preheating ($`T_1>0`$) is expected to reduce $`\beta `$ for clusters with small $`T_{\mathrm{gas}}`$ (Figure 1). At first glance, no correlation between $`\beta `$ and $`T_{\mathrm{gas}}`$ is apparent observationally in this temperature range. However, some reports on the existence of a weak correlation have been made when clusters with lower $`T_{\mathrm{gas}}`$ are included (e.g. Horner et al. 1999). Thus, our prediction is not inconsistent with the observations. As discussed in §2.2, the $`\beta `$-$`r_\mathrm{c}`$ relation is represented by the two parameters $`z_{\mathrm{coll}}`$ and $`M_{\mathrm{vir},0}`$, for a given value of $`T_1`$. The results are shown in Figure 2 for $`\gamma =1`$. Figures 2a and 2b are for $`\mathrm{\Omega }_0=1`$ and 0.2, respectively. For comparison, we also present observational data (Mohr et al., 1999; Peres et al., 1998). As in Paper I, for the data of Mohr et al. (1999) we use here only the component of surface brightness reflecting the global structure of clusters, although the central component (the so-called cooling flow component) may also have formed in the scenario of hierarchical clustering (Fujita and Takahara, 2000). The mass $`M_{\mathrm{vir},0}`$ corresponds to the mass of clusters collapsed at $`z\sim 0`$ and takes a range of values owing to the dispersion of the initial density fluctuations of the universe. Since observations and numerical simulations show $`M_{\mathrm{vir},0}\sim 10^{15}\mathrm{M}_\odot `$ (Evrard et al., 1996), the observational data are expected to lie between the two lines of $`M_{\mathrm{vir},0}=5\times 10^{14}\mathrm{M}_\odot `$ (arc BC) and $`M_{\mathrm{vir},0}=5\times 10^{15}\mathrm{M}_\odot `$ (arc AD) for fixed $`T_1`$ in Figure 2. Note that the distribution of $`M_{\mathrm{vir},0}`$ is degenerate along the lines in Figure 1. In Figure 2, the positions along the arcs AD and BC indicate the formation redshifts of the clusters. When $`\mathrm{\Omega }_0=1`$, most of the observed clusters should have collapsed at $`z\sim 0`$ because clusters continue growing even at $`z=0`$ (Peebles, 1980). Thus, the cluster data are expected to be distributed along the part of the lines close to the point of $`z_{\mathrm{coll}}=0`$ (segment AB). 
In fact, calculations by Lacey and Cole (1993) and Kitayama and Suto (1996) show that if $`\mathrm{\Omega }_0=1`$, most present-day clusters ($`M_{\mathrm{vir}}\sim 10^{14}`$–$`10^{15}\mathrm{M}_\odot `$) should have formed in the range $`z_{\mathrm{coll}}\lesssim 0.5`$ (parallelogram ABCD in Figure 2a). In contrast, when $`\mathrm{\Omega }_0=0.2`$, the growth rate of clusters decreases and cluster formation gradually ceases at $`z\sim 1/\mathrm{\Omega }_0-1`$ (Peebles, 1980). Thus, in Figure 2b, cluster data are expected to be distributed between the points of $`z_{\mathrm{coll}}=0`$ (segment AB) and $`z_{\mathrm{coll}}=1/\mathrm{\Omega }_0-1`$ (segment CD) and should have a two-dimensional distribution (parallelogram ABCD). Thus, compared with the observations, the models in Figure 2 show that $`T_1\sim 1`$ keV and $`\mathrm{\Omega }_0<1`$ are preferred. The latter result is quite consistent with that of Paper II, where we found that the $`T_{\mathrm{gas}}`$-$`r_\mathrm{c}`$ relation suggests $`\mathrm{\Omega }_0<1`$. Since $`\beta `$ is related to $`T_{\mathrm{gas}}`$ by equation (15), the $`\beta `$-$`r_\mathrm{c}`$ relation is equivalent to the $`T_{\mathrm{gas}}`$-$`r_\mathrm{c}`$ relation for a fixed value of $`T_1`$. Note that in Figure 2 the predicted regions corresponding to different $`T_1`$ overlap each other; this implies that the position of a cluster in Figure 2 does not uniquely correspond to $`T_1`$, in contrast to Figure 1. For a given $`\beta `$ and $`r_\mathrm{c}`$, larger $`T_1`$ corresponds to larger $`M_{\mathrm{vir},0}`$ or a larger amplitude of the initial fluctuation. The dispersion of $`T_1`$ appears to be caused by gravitational heating in the subclusters that later merge into the cluster ($`T_{\mathrm{gas}}\gtrsim 3`$ keV) at the time of the cluster formation. In fact, Figure 2b shows that observed clusters are situated close to the line of $`M_{\mathrm{vir},0}=5\times 10^{15}\mathrm{M}_\odot `$ (arc AD) when $`T_1\sim 2`$ keV, while they are situated close to the line of $`M_{\mathrm{vir},0}=5\times 10^{14}\mathrm{M}_\odot `$ (arc BC) when $`0.5<T_1<1`$ keV. Moreover, Figure 1 suggests that clusters with large $`T_{\mathrm{gas}}`$ favor large $`T_1`$. These trends may reflect that clusters with larger (smaller) $`M_{\mathrm{vir},0}`$ or $`T_{\mathrm{gas}}`$ tend to have more (less) massive progenitors with larger (smaller) $`T_1`$, although these are only loose tendencies, and we need more samples and more refined models to obtain a definite conclusion. Note that gravitational heating in subclusters is itself a self-similar process and does not modify self-similar scaling relations such as the luminosity-temperature relation (e.g. Eke et al., 1998). Thus, additional entropy beyond that expected from the purely gravitational assembly of a cluster must be injected into the gas. Valageas and Silk (1999) investigate the entropy evolution of the intergalactic medium (IGM) and find that clusters with $`T_{\mathrm{vir}}\lesssim 0.5`$ keV are affected by the additional entropy when it is generated by quasar heating. This is because the additional entropy is comparable to the entropy generated by gravitational collapse of the clusters. In other words, the adiabatic compression of the gas from the preheated IGM alone can heat the gas up to $`T_{\mathrm{ad},\mathrm{cl}}\sim 0.5`$ keV. Therefore, in addition to the gravitational processes in subclusters, the preheating may contribute significantly to $`T_1`$, whose lower bound is given by $`T_{\mathrm{ad},\mathrm{cl}}`$. 
If $`T_{\mathrm{ad},\mathrm{cl}}\sim 0.5`$ keV, this is consistent with our result (Figures 1 and 2). Valageas and Silk (1999) also investigate the case when only supernova heating is taken into account and quasar heating is ignored. The result is $`T_{\mathrm{ad},\mathrm{cl}}<0.1`$ keV. In this case, the effect of preheating is small and we expect that $`\beta `$ depends little on $`T_{\mathrm{gas}}`$ and $`r_\mathrm{c}`$, although $`\beta `$ would have a scatter owing to differences in merging history. This is inconsistent with the observations. The insufficient power of supernova heating is also suggested by Wu et al. (1999) (but see Loewenstein, 1999). Another possible source of heating is shocks forming at higher redshift on the largest scales, such as filaments and sheets. Cen and Ostriker (1999) indicate that most baryons at low redshift should have a temperature in the range $`10^5`$–$`10^7`$ K. The relatively large value of $`T_1`$ may reflect this temperature. We also investigate the case of $`\gamma =1.2`$ and $`\mathrm{\Omega }_0=0.2`$, which is presented in Figure 3. In this case, the model of $`T_1=0.5`$ keV is preferred, especially for the data obtained by Mohr et al. (1999). This means that $`\gamma `$ and $`T_1`$ are correlated and they cannot be determined independently. However, a model with $`\gamma >1.2`$ is inappropriate because $`\beta =\gamma \beta _{\mathrm{obs}}`$ exceeds unity for some observational data, while relation (14) or (15) limits $`\beta `$ to values less than one. If a cluster is not isothermal, the temperature in the central region $`T_{\mathrm{gas}}`$ should be larger than $`T_2`$ (Cavaliere et al., 1998). In this case, the discrepancy between the model and the observations is more significant. Thus, it seems difficult to construct a model that predicts $`\beta >1`$. ### 3.2 The Fundamental Band and Plane It is interesting to investigate whether the gas distribution in clusters derived above is consistent with the observations of the central gas fraction, and with the fundamental band and plane we found in Paper I. The shapes of the band and plane are also related to the origin of the observed relation $`L_\mathrm{X}\propto T_{\mathrm{gas}}^3`$ (Paper I). We did not explore the origin of the variation of the central gas mass fraction in previous papers, where $`\beta _{\mathrm{obs}}`$ was regarded as constant. Below, we will show that this variation is related to the variation of $`\beta `$. From relation (13), the gas density at the cluster core is approximately given by $$\rho _{\mathrm{gas},0}=\rho _{\mathrm{gas}}(r_{\mathrm{vir}})(r_{\mathrm{vir}}/r_c)^{3\beta /\gamma },$$ (17) where $`r_{\mathrm{vir}}`$ and $`\beta (T_{\mathrm{vir}},T_1)`$ are functions of $`z_{\mathrm{coll}}`$ and $`M_{\mathrm{vir},0}`$ (§2), and $`\rho _{\mathrm{gas}}(r)`$ is the gas density at radius $`r`$ from the cluster center. We assume that the profile of dark matter is isothermal ($`\rho _{\mathrm{DM}}\propto r^{-2}`$) at least for $`r_\mathrm{c}\lesssim r\lesssim r_{\mathrm{vir}}`$, and $`\rho _{\mathrm{DM},\mathrm{c}}=64\rho _{\mathrm{vir}}`$. Moreover, we assume that the average gas fraction within radius $`r_{\mathrm{vir}}`$ is $`f_{\mathrm{gas}}(r_{\mathrm{vir}})=0.25`$ regardless of $`z_{\mathrm{coll}}`$ and $`M_{\mathrm{vir},0}`$. This value of $`f_{\mathrm{gas}}`$ is nearly the largest gas mass fraction of observed clusters (e.g. David et al. 1995; Ettori and Fabian 1999). 
On these assumptions, the central gas density and the gas fraction at the cluster core are respectively given by $`\rho _{\mathrm{gas},0}`$ $`=`$ $`\left(1-{\displaystyle \frac{\beta }{\gamma }}\right)f_{\mathrm{gas}}\rho _{\mathrm{vir}}(z_{\mathrm{coll}})\left({\displaystyle \frac{r_{\mathrm{vir}}}{r_\mathrm{c}}}\right)^{3\beta /\gamma }`$ (18) $`=`$ $`0.25\left(1-{\displaystyle \frac{\beta }{\gamma }}\right)\rho _{\mathrm{vir}}(z_{\mathrm{coll}})\,8^{3\beta /\gamma }`$ and $$f_{\mathrm{gas}}(0)=0.25\left(1-\frac{\beta }{\gamma }\right)8^{3(\beta /\gamma )-2},$$ (19) where $`f_{\mathrm{gas}}(0)\equiv \rho _{\mathrm{gas},0}/\rho _{\mathrm{DM},\mathrm{c}}`$ is the gas fraction at the cluster center. The above equations are valid when $`\beta <\gamma `$. Note that in Paper II, we derived the central gas density according to the relation $`\rho _{\mathrm{gas},0}\propto \rho _{\mathrm{vir}}f_{\mathrm{gas}}(0)`$, in which $`f_{\mathrm{gas}}(0)`$ is determined separately from the observations (in Papers I and II, we assumed that $`T_{\mathrm{gas}}=T_{\mathrm{vir}}`$ and did not take account of the variation of $`\beta _{\mathrm{obs}}`$ when deriving $`f_{\mathrm{gas}}(0)`$ from observations of $`\rho _{\mathrm{gas},0}`$, $`r_\mathrm{c}`$, and $`T_{\mathrm{gas}}`$). In contrast, in equation (18), we derive $`\rho _{\mathrm{gas},0}`$ assuming that $`f_{\mathrm{gas}}(r_{\mathrm{vir}})=\mathrm{constant}`$. The above model values (equations 18 and 19) can be obtained from observational data. Using equations (8), (9), and (16) we obtain $$\rho _{\mathrm{vir}}=\frac{9k_\mathrm{B}T_{\mathrm{gas}}}{4\pi G\mu m_\mathrm{H}}\frac{\beta _{\mathrm{obs}}}{(8r_\mathrm{c})^2},$$ (20) where we used the relations $`T_{\mathrm{vir}}=\beta T_{\mathrm{gas}}`$ and $`r_\mathrm{c}=r_{\mathrm{vir}}/8`$. Thus, using equation (16), the right-hand side of equation (18) can be written as $$\rho _{\mathrm{gas},0}^{\mathrm{model}}\equiv 0.25\beta _{\mathrm{obs}}(1-\beta _{\mathrm{obs}})\frac{9k_\mathrm{B}T_{\mathrm{gas}}}{4\pi G\mu m_\mathrm{H}}\frac{8^{3\beta _{\mathrm{obs}}}}{(8r_\mathrm{c})^2}.$$ (21) Hence, $`\rho _{\mathrm{gas},0}^{\mathrm{model}}`$ can be derived from the observable quantities $`r_\mathrm{c}`$, $`T_{\mathrm{gas}}`$, and $`\beta _{\mathrm{obs}}`$. Figure 4 displays a plot of $`\rho _{\mathrm{gas},0}`$ and $`\rho _{\mathrm{gas},0}^{\mathrm{model}}`$ based on the data obtained by Mohr et al. (1999). Note that Peres et al. (1998) do not present $`\rho _{\mathrm{gas},0}`$. We do not show the uncertainties of $`\rho _{\mathrm{gas},0}^{\mathrm{model}}`$ to avoid complexity. Here we use only the $`\rho _{\mathrm{gas},0}`$ corresponding to the global cluster component, as we did in Paper I. Figure 4 shows that $`\rho _{\mathrm{gas},0}`$ agrees well with $`\rho _{\mathrm{gas},0}^{\mathrm{model}}`$, although $`\rho _{\mathrm{gas},0}`$ is slightly smaller than $`\rho _{\mathrm{gas},0}^{\mathrm{model}}`$ for clusters with large $`\rho _{\mathrm{gas},0}^{\mathrm{model}}`$. Thus, we conclude that the variation of $`f_{\mathrm{gas}}(0)`$ is due to that of the slope parameter $`\beta `$ of the gas distribution within $`r_{\mathrm{vir}}`$. One possible reason for the slight disagreement between $`\rho _{\mathrm{gas},0}^{\mathrm{model}}`$ and $`\rho _{\mathrm{gas},0}`$ is an uncertainty in the value of $`f_{\mathrm{gas}}(r_{\mathrm{vir}})`$. Another is the influence of the central excess emission of clusters. 
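As an aside before turning to that second effect, equation (21) is straightforward to evaluate from catalogued quantities; the sketch below assumes cgs units internally and purely illustrative input numbers.

```python
# Sketch of equation (21): rho_gas,0 predicted from (r_c, T_gas, beta_obs).
import math

G, k_B, m_H, keV = 6.674e-8, 1.381e-16, 1.673e-24, 1.602e-9   # cgs
MPC, mu = 3.086e24, 0.6

def rho_gas0_model(r_c_Mpc, T_gas_keV, beta_obs):
    r_c = r_c_Mpc * MPC            # core radius [cm]
    T   = T_gas_keV * keV / k_B    # gas temperature [K]
    pref = 9.0 * k_B * T / (4.0 * math.pi * G * mu * m_H)
    return (0.25 * beta_obs * (1.0 - beta_obs)
            * pref * 8.0 ** (3.0 * beta_obs) / (8.0 * r_c) ** 2)

# Illustrative inputs: r_c = 0.25 Mpc, T_gas = 6 keV, beta_obs = 0.7
print(rho_gas0_model(0.25, 6.0, 0.7))   # ~ 1e-26 g cm^-3
```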
When the distance to a cluster is relatively large, the central and global surface brightness components may not be distinguished even if the two components exist. In this case, the cluster may be regarded as having only a global component. However, when the central emission is strong, the fitting of the surface brightness profile by one component may be affected by the central emission and may give a smaller core radius than the real one. This may make $`\rho _{\mathrm{gas},0}^{\mathrm{model}}`$ large for such clusters. In fact, clusters with $`\rho _{\mathrm{gas},0}^{\mathrm{model}}>3\times 10^{-26}\mathrm{g}\,\mathrm{cm}^{-3}`$ are regarded by Mohr et al. (1999) as having only one (global) component of surface brightness. Note that the core radii derived by Peres et al. (1998) may be less affected by the central emission because, for all clusters they investigate, they take account of cooling flows and the gravitation of central cluster galaxies, which are responsible for the central emission (Figures 2 and 5b). We present the theoretically predicted relations among $`\rho _{\mathrm{gas},0}`$, $`r_{\mathrm{vir}}`$, and $`T_{\mathrm{gas}}`$ in Figure 5. Although these relations were presented in Paper II using the observed relation between $`f_{\mathrm{gas}}(0)`$ and $`M_{\mathrm{DM},\mathrm{c}}`$, here we plot them by directly using $`\beta `$. For the lines in Figure 5, we use the relation $`T_{\mathrm{gas}}=T_{\mathrm{vir}}/\beta `$. For comparison, we plot the observational data in the catalogues of Mohr et al. (1999) and Peres et al. (1998). For the data, we use the relation $`r_{\mathrm{vir}}=8r_\mathrm{c}`$. Figure 5 shows that our model reproduces well the band distribution of the observational data in ($`\rho _{\mathrm{gas},0},r_\mathrm{c},T_{\mathrm{gas}}`$)-space. Moreover, our model can explain the planar distribution of the observational data. In Paper I, we found that the observational data satisfy the relation of the fundamental plane, $`\rho _{\mathrm{gas},0}^{0.47}r_\mathrm{c}^{0.65}T_{\mathrm{gas}}^{0.60}\sim \mathrm{constant}`$. For $`M_{\mathrm{vir},0}\sim 10^{15}\mathrm{M}_\odot `$ and $`z_{\mathrm{coll}}\lesssim 2`$, our model with $`\mathrm{\Omega }_0=0.2`$ and $`T_1=1`$ keV predicts the plane $`\rho _{\mathrm{gas},0}^{0.32}r_\mathrm{c}^{0.64}T_{\mathrm{gas}}^{0.70}\sim \mathrm{constant}`$, which is approximately consistent with the observation. Note that the index of $`\rho _{\mathrm{gas},0}`$ is somewhat smaller than the observed value considering the uncertainty ($`\sim 0.1`$), which may be related to the slight disagreement between $`\rho _{\mathrm{gas},0}^{\mathrm{model}}`$ and $`\rho _{\mathrm{gas},0}`$ (Figure 4). The plane is represented by the two parameters, $`z_{\mathrm{coll}}`$ and $`M_{\mathrm{vir},0}`$, as discussed in §2. Since the cross section of the fundamental plane corresponds to the observed $`L_\mathrm{X}`$-$`T_{\mathrm{gas}}`$ relation, and the fundamental plane corresponds to the observed dependence of $`f_{\mathrm{gas}}(0)`$ on $`\rho _{\mathrm{DM},\mathrm{c}}`$ and $`M_{\mathrm{DM},\mathrm{c}}`$ (Paper I), our model can also reproduce these relations. These results strengthen our interpretation that the difference of gas distribution among clusters is caused by heating of the gas before the cluster collapse and by shock heating at the time of the cluster collapse (equations 18 and 19). ## 4 Conclusions We have investigated the influence of heating before cluster collapse and of shocks during cluster formation on the gas distribution in the central region of clusters of galaxies. 
We assumed that the core structure has not changed much since the formation of a cluster. Using a spherical collapse model of a dark halo and a simple shock model, we predict the relations among the slope of the gas distribution $`\beta `$, the gas temperature $`T_{\mathrm{gas}}`$, and the core radius $`r_\mathrm{c}`$ of clusters. By comparing them with observations of relatively hot ($`\gtrsim 3`$ keV) and low redshift clusters, we find that the temperature of the preheated gas that collapsed into the clusters is about 0.5–2 keV. Since this temperature is higher than that predicted by a supernova preheating model, it may reflect heating by quasars or gravitational heating on the largest scales at high redshift. Moreover, gravitational heating in the subclusters assembled when the clusters formed also seems to affect the temperature of the preheated gas and to produce the dispersion in the preheating temperature. Assuming that the global gas mass fraction of clusters is constant, we predict that the gas mass fraction in the core region of clusters should vary with $`\beta `$ through a simple law, which is shown to be consistent with the observations. Thus, we conclude that the variation of the gas mass fraction in the cluster core is due to the shock heating of preheated gas. Furthermore, we have confirmed that the observed fundamental plane and band of clusters are reproduced by the model even when the effects of preheating are taken into account. Thus, the major conclusions about cluster formation and cosmology obtained in our previous papers are not changed. We thank A. C. Edge, C. S. Frenk, and T. Kodama for useful discussions. This work was supported in part by the JSPS Research Fellowship for Young Scientists.
# Predictions on the number of variable stars for the GAIA space mission and for surveys such as the ground-based International Liquid Mirror Telescope ## 1. The total number of variable stars A first estimate of the total number of variable stars observable by GAIA was made by Eyer (1999). The star population used came from the star-count model of Figueras et al. (1999), and the variability detection threshold was derived from the Hipparcos survey results. With the new specifications of the GAIA mission, about 1 billion stars (up to mag G$`<`$20) are expected to be observed, with about 18 million variable stars, including about 5 million “classic” periodic variables. Very different star counts are obtained according to the extinction laws used (Figueras, private communication). Since the quality of the GAIA photometry in crowded fields is still uncertain, we cannot discuss here the number of variables in dense clusters and galaxies. About 2 to 3 million eclipsing binaries will be observed, but their detection probability will be studied in detail in the future. About 300 000 stars with rotation-induced variability can be expected as well. ## 2. The methods For a specific interval of V-I, we computed the proportion of variables in the Hipparcos survey and applied that rate to the number of stars obtained from the Figueras model (method A). Surface densities were calculated, either from the Hipparcos parallaxes or from the specific properties of the stars; we integrated and removed the stars behind the bulge (method B). We extrapolated the GCVS data (Kholopov et al. 1998), assuming detection completeness up to a certain magnitude and a magnitude limit for the population beyond which no more stars are present (method C). We also analysed the detection rates of the microlensing surveys (when available) and scanned the literature. ## 3. Pulsating variables Methods A and C estimate the number of $`\beta `$ Cephei stars to be about 3000. 15 000 SPB variables will be detected according to method A. Applying methods A and C gave about the same estimate for $`\delta `$ Scuti stars: 60 000. However, it will be very difficult to analyse the very reddened low-amplitude variables. Method B yields even higher numbers, up to 240 000 $`\delta `$ Scuti stars. Starting from the total number of RR Lyrae given by Suntzeff et al. (1991), we arrive at 70 000 observable RR Lyrae (method B). Using the OGLE and MACHO detection rates, we expect 15 000 to 40 000 RR Lyrae in the bulge. All galactic Cepheids are within the observational range of GAIA, if not too obscured by interstellar extinction. Results of recent deep surveys confirm the early estimates of a total of 2000 to 8000 Cepheids. With the help of the Fernie database (1995), we obtained (method B) a density of 15-20 Cepheids/$`\text{kpc}^2`$, leading to an estimate of 5 200–6 900 observable stars. Early estimates gave in total 200 000 Mira and related long-period variables in the Galaxy. With 500 Miras/$`\text{kpc}^2`$, 140 000 to 170 000 Miras will be observable. Method B gave us a density of 250-350 semi-regular variables/$`\text{kpc}^2`$, or a total of 100 000 observable SR stars. We plan to calculate and analyse all categories of variable stars in more detail to arrive at reliable estimates of all observable variable stars in the Galaxy. ## 4. Variable stars in deep surveys An example: The International Liquid Mirror Telescope (ILMT). 
(see `http://vela.astro.ulg.ac.be/themes/telins/lmt/index_e.html`) An international group of institutions is actively interested in developing a 4-m class liquid mirror telescope. If the field of view of the ILMT includes fields near the galactic center, and if all stars from R magnitude 17 up to 20 can be measured with high precision ($`\sim `$0.01 mag), the project will yield a unique time series of about 2 million stars, with a total of 500 measurements of each star during 5 years. About 10 000 new variable stars can be expected, including 6 000 faint eclipsing binaries, 200 RR Lyrae, and 300 long-period variables. ## References Eyer, L. 1999, Balt. Ast., 8, 321 Fernie, J.D., Beattie, B., Evans, N.R., & Seager, S. 1995, IBVS 4148 Figueras, F., et al. 1999, Balt. Ast., 8, 291 GAIA: `http://astro.estec.esa.nl/SA-general/Projects/GAIA/` HIPPARCOS: Hipparcos and Tycho catalogues, ESA SP-1200 Kholopov, P.N., et al. 1998, GCVS, 4th Edition Suntzeff, N.B., Kinman, T.D., & Kraft, R.P. 1991, ApJ, 367, 528
## 1 Introduction The study of $`g`$-factors of subatomic particles can trace its roots back to the 1921 paper by Stern, which was a proposal to study space quantization with an apparatus now called a “Stern-Gerlach apparatus”. By 1924 the famous experiments had been done, and a review paper was written summarizing their results. Their final conclusion, that “to within 10% the magnetic moment of the electron was one Bohr magneton”, meant in modern language that the $`g`$-value of the electron was 2, where the gyromagnetic ratio $`g`$ is the proportionality constant between the magnetic moment and the spin, $$\vec{\mu }=g\left(\frac{e}{2m}\right)\vec{s}.$$ (1) The discovery that $`g_e\ne 2`$, and the calculation by Schwinger predicting that (to first order) the radiative correction to $`g_e`$ was $`\alpha /\pi `$, were important early steps in the development of Quantum Electrodynamics (see Fig. 1). The long lifetime of the muon permits a precision measurement of its anomalous moment at the ppm level. The muon magnetic moment is given by $$\mu _\mu =(1+a_\mu )\frac{e\hbar }{2m_\mu }\quad \mathrm{where}\quad a_\mu =\frac{(g-2)}{2}.$$ (2) The electron anomalous magnetic moment has been measured to a few parts per billion, and can be completely described by the QED of electrons and photons to eighth order, $`(\frac{\alpha }{\pi })^4`$. The contributions of virtual muons, tauons, etc. enter at the few ppb level. The calculation of the electron anomalous moment is limited by the knowledge of the fine-structure constant. With the reliability of modern QED calculations, Kinoshita has turned things around and has used the electron $`g`$-value measurement to give the best value for $`\alpha `$. The relative contribution of heavier particles to the muon anomaly scales as $`(m_\mu /m_e)^2`$, and the famous CERN experiment, which obtained a relative error on $`a_\mu `$ of $`\pm 7.3`$ parts per million (ppm), easily observed the predicted $`\sim 60`$ ppm contribution of virtual hadrons. In 1984 efforts began to make a new measurement of the muon anomalous moment to a precision of $`\pm 0.35`$ ppm, which would represent a 5 standard deviation observation of the electroweak contribution, and would also be sensitive to contributions from “new physics” such as muon substructure or supersymmetry. ## 2 Theoretical Contributions to $`(g-2)`$ The standard model value of $`a_\mu `$ is given by $`a_\mu (\mathrm{SM})=a_\mu (\mathrm{QED})+a_\mu (\mathrm{hadronic})+a_\mu (\mathrm{weak})`$, and any contribution from new physics would be reflected in a measured value which did not agree with the standard model. Comparison of the measurements and calculations of the electron $`g`$ value gives one great confidence in our understanding of QED to the level needed for muon $`(g-2)`$. Taking the value of $`\alpha `$ from the electron $`(g-2)`$ yields the total QED contribution $`a_\mu (\mathrm{QED})=\mathrm{116\; 584\; 705.7}(1.8)(0.5)\times 10^{-11}`$. The hadronic contribution to $`(g-2)`$ cannot be calculated directly, but must be determined from data. The first-order hadronic vacuum polarization dominates the uncertainty in the theoretical value of $`a_\mu `$, since it is calculated using dispersion theory and data from $`e^+e^{-}\rightarrow \mathrm{hadrons}`$ and hadronic $`\tau `$ decay as input. The various order hadronic contributions are shown in Fig. 2. Diagrams for hadroproduction and hadronic $`\tau `$ decay are shown in Fig. 3. 
The most precise determination of the first-order hadronic contribution is $`a_\mu (\mathrm{had};1)=6924(62)\times 10^{-11}`$, which is $`59.39\pm 0.53`$ ppm of $`a_\mu `$, but there is continuing discussion of the use of CVC and the $`\tau `$-decay data. The higher-order contribution is $`a_\mu (\mathrm{had};2)=-101(6)\times 10^{-11}`$. The hadronic light-by-light scattering shown in Fig. 4 has now been calculated by two groups, using essentially the same model, and agreement is found: $`a_\mu (\mathrm{had};\mathrm{lbl})=-85(32)\times 10^{-11}`$. However, the two groups disagree on the uncertainty of the calculation, and I have taken the larger of the two quoted errors. The uncertainty in this contribution could be reduced substantially by the appropriate calculation on the lattice, and perhaps by other additional calculations as well. The total hadronic contribution is given by $`a_\mu (\mathrm{had};1+2+\mathrm{lbl})=6738(70)\times 10^{-11}`$, which is $`57.79\pm 0.60`$ ppm of $`a_\mu `$, with an uncertainty dominated by the uncertainty on the first-order hadronic vacuum polarization. It is precisely this contribution which is being addressed by the programs to measure $`R(s)`$ at BES and the Budker Institute. We look forward to additional high quality data from these experiments, from DAPHNE, as well as $`\tau `$-decay data from CLEO, to further reduce the uncertainty on the hadronic contribution. The standard model electroweak contribution arises from the diagrams shown in Fig. 5 (the standard model Higgs contribution is negligible). The single-loop $`W`$ and $`Z`$ contributions were calculated by a number of authors shortly after the standard model was developed. The result is $`a_\mu (\mathrm{weak};1)=195\times 10^{-11}`$, or 1.7 ppm of $`a_\mu `$. Partial calculations of the two-loop electroweak contributions indicated that they might not be small. The full calculation, which was later confirmed independently, showed that the total first- and second-order weak contribution was 20% less than the first-order result. The result is $`a_\mu (\mathrm{weak};1+2)=151(4)\times 10^{-11}`$, which is $`1.30\pm 0.03`$ ppm of $`a_\mu `$. The standard model prediction for $`a_\mu `$ is $`a_\mu (\mathrm{SM})=(\mathrm{116\; 591\; 594.7}\pm 70)\times 10^{-11}`$ ($`\pm 0.60`$ ppm). A great deal has been written about the possible contribution to the muon $`(g-2)`$ value from non-standard-model physics. Just as proton substructure produces a $`g`$-value which is not equal to two, muon substructure would also contribute to the anomalous moment, the critical issue being the scale of the substructure. A standard model value for $`(g-2)`$ at the 0.35 ppm level would restrict the substructure scale to around 5 TeV. In Fig. 5(a) the triple gauge vertex $`WW\gamma `$ appears, and it is through this diagram that the muon $`(g-2)`$ obtains its sensitivity to $`W`$ substructure and anomalous gauge couplings. The combined sensitivity of LEP1, LEP2 and $`(g-2)`$, and the unique contribution which $`(g-2)`$ makes in constraining the existence of such couplings, is described by Renard et al. Supersymmetry has become a serious candidate for physics beyond the standard model. The SUSY contribution is shown in Fig. 6. 
In the case of large $`\mathrm{tan}\beta `$, the chargino diagram dominates and the contribution to $`(g-2)`$ from SUSY is given by $$a_\mu (\mathrm{SUSY})\simeq \frac{\alpha }{8\pi \mathrm{sin}^2\theta _W}\frac{m_\mu ^2}{\stackrel{~}{m}^2}\mathrm{tan}\beta \simeq 140\times 10^{-11}\left(\frac{100\mathrm{GeV}}{\stackrel{~}{m}}\right)^2\mathrm{tan}\beta .$$ (3) The goal of E821 is to reach a precision of $`\pm 40\times 10^{-11}`$ ($`\pm 0.35`$ ppm), so the factor of 140 above corresponds to 1.2 ppm. For $`\stackrel{~}{m}=750`$ GeV and $`\mathrm{tan}\beta =40`$, $`a_\mu (\mathrm{SUSY})=100\times 10^{-11}`$. ## 3 The New $`(g-2)`$ Experiment For polarized muons moving in a uniform magnetic field $`\vec{B}`$ which is perpendicular to the muon spin direction and to the plane of the orbit, and with an electric quadrupole field $`\vec{E}`$ for vertical focusing, the difference angular frequency, $`\omega _a`$, between the spin precession frequency $`\omega _s`$ and the cyclotron frequency $`\omega _c`$ is given by $$\vec{\omega }_a=-\frac{e}{m}\left[a_\mu \vec{B}-\left(a_\mu -\frac{1}{\gamma ^2-1}\right)\vec{\beta }\times \vec{E}\right].$$ (4) The dependence of $`\omega _a`$ on the electric field is eliminated by storing muons with the “magic” $`\gamma _\mu `$=29.3, which corresponds to a muon momentum $`p_\mu `$ = 3.09 GeV/$`c`$. Hence measurement of $`\omega _a`$ and of $`B`$ determines $`a_\mu `$. At the magic gamma, the muon lifetime is $`\gamma \tau =64.4`$ $`\mu `$s, the $`(g-2)`$ precession period is 4.37 $`\mu `$s, and for the central orbit radius of 7.11 m the cyclotron period is 149 ns. The storage ring magnet is a superferric 700 ton, 14 m diameter circular “C”-magnet, with the opening facing inward towards the ring center. The field is excited by three 14 m diameter superconducting coils which carry $`5.2`$ kA from a low voltage power supply to produce the $`1.45`$ T magnetic field. The short-term field stability over several AGS cycles is better than 0.1 ppm. The magnetic field which enters in Eq. 4 is the average field seen by the muon distribution. Since direct injection of muons does not uniformly fill the phase space, we used a tracking code to calculate the distribution of muons in the storage ring. The radial distribution obtained from this tracking code was compared with the distribution obtained from observing the beam debunching in the ring at early times. The two distributions agreed quite well. In the 1999 run, a straw-tube array was operational at one detector location, which provided information on the decay positron trajectories coming out of the storage region. These data will permit us to reconstruct directly the muon spatial distribution in one section of the ring. In 1998, direct muon injection into the storage ring was employed for the first time. The AGS performance, the beamline and the inflector magnet were as described in Carey et al., and except for the muon injection many of the experimental details are the same as described there. The positive muon beam with the magic momentum is formed by collecting the highest energy muons from pion decay in a 72 m long decay section of our beamline, which results in a muon polarization of 96%. The flux incident on the inflector magnet was $`2\times 10^6`$ per fill of the ring. The 10 mrad kick needed to put the muon beam onto a stable orbit was achieved with pulsed currents, since the usual magnetic kicker techniques would spoil the precision magnetic field. 
Three pulse-forming networks powered three identical 1.7 m long kicker sections consisting of parallel plates on either side of the beam. Current flowed down one side, crossed over, and flowed back up the other side. The kicker plate geometry and composition were chosen to minimize eddy currents, and the eddy-current effect on the total field seen by the muons was less than 0.1 ppm 20 $`\mu `$s after injection. The current pulse, which was formed by an under-damped LCR circuit, had a peak current of 4100 A and a pulse base width of 400 ns. Since the cyclotron period of the muon beam from the AGS was 149 ns, the beam was kicked several times before the kicker pulse died out. With muon injection, the number of detected positrons per hour was increased by an order of magnitude over pion injection. Thus the use of a muon beam an order of magnitude less intense than the pion beam resulted in a substantial increase in stored muons per fill of the ring, with the injection-related background reduced by about a factor of 50. Positrons from the in-flight decay $`\mu ^+\rightarrow e^+\nu _e\overline{\nu }_\mu `$ are detected with Pb-scintillating fiber calorimeters placed symmetrically at 24 positions around the inside of the storage ring. The decay positron time spectrum is $$N_0e^{-t/\gamma \tau }\left[1+A(E)\mathrm{cos}\left(\omega _at+\varphi (E)\right)\right].$$ (5) The normalization constant $`N_0`$ and the parity-violating asymmetry parameter $`A(E)`$ depend on the energy threshold placed on the positrons. The fractional statistical error on $`\omega _a`$ is proportional to $`A^{-1}N_e^{-1/2}`$, where $`N_e`$ is the number of decay positrons detected above some energy threshold. For an energy threshold of 1.8 GeV, we measure $`A`$ to be 0.34, consistent with its theoretical value, which we attribute to the good calorimeter energy resolution ($`\sigma /E=10\%`$ at 1 GeV) and a scalloped vacuum chamber which minimizes pre-showering before the positrons reach the calorimeters. The photomultiplier tubes of the calorimeter were gated off before injection; when gated on, they recovered to 90% pulse height in $`\sim 400`$ ns and reached full operating gain in several $`\mu `$s. With the reduced flash following injection it was possible to begin counting as soon as 5 $`\mu `$s after injection in the region of the ring 180° around from the injection point. The calorimeter pulses were continuously sampled by custom 400 MHz waveform digitizers (WFDs), which provided both timing and energy information for the positrons. Both the NMR and WFD clocks were phase-locked to the same LORAN-C frequency signal. The waveforms were zero-suppressed, and stored in memory in the WFD until the end of the AGS cycle. Between AGS acceleration cycles the WFD data were written to tape for off-line analysis, as were the calorimeter calibration data and the magnetic field data. A laser/LED calibration system was used to monitor calorimeter time and gain shifts during the data-collection period. Early-to-late timing shifts over the first 200 $`\mu `$s were on average less than 20 ps, which is needed to keep systematic timing errors smaller than 0.1 ppm. For the offline analysis, the detector response (waveform shape) to positrons was determined from our data for each calorimeter. These shapes were then fit to all pulses in the data to determine a time, an amplitude, and a width parameter for each pulse. Time histograms were formed for each detector. 
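As an illustration of this step, the sketch below fits the five-parameter function of equation (5) to a synthetic time histogram; the data and starting values are invented for the example, and the real analysis involves far more care (pile-up, gain and timing stability, etc.).

```python
# Sketch: five-parameter fit of equation (5) to a (synthetic) time histogram.
import numpy as np
from scipy.optimize import curve_fit

def five_param(t, N0, tau, A, omega, phi):
    """Equation (5): N0 * exp(-t/tau) * (1 + A*cos(omega*t + phi))."""
    return N0 * np.exp(-t / tau) * (1.0 + A * np.cos(omega * t + phi))

rng = np.random.default_rng(0)
t = np.arange(5.0, 600.0, 0.5)                           # time after injection [us]
truth = (5.0e4, 64.4, 0.34, 2.0 * np.pi / 4.37, 0.5)     # illustrative values
counts = rng.poisson(five_param(t, *truth)).astype(float)

popt, pcov = curve_fit(five_param, t, counts,
                       p0=[4.0e4, 60.0, 0.3, 1.4, 0.0],
                       sigma=np.sqrt(np.maximum(counts, 1.0)))
print("omega_a =", popt[3], "+/-", np.sqrt(pcov[3, 3]), "rad/us")
```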
These independent data sets were analyzed separately and were in agreement ($`\chi ^2/\nu =17.2/20`$). A completely blind analysis was performed on the data. Arbitrary offsets were put on the muon frequency and the proton frequency from the NMR probes. Each offset was known by one person, making it impossible to determine the actual value of $`a_\mu `$. Only when the analyses of both the magnetic field and $`\omega _a`$ were completed were the offsets removed and the new value of $`a_\mu `$ determined. After the offsets were removed, it was necessary to make two corrections to the frequency obtained from the fitting. For muons with the “magic” momentum, $`\omega _a`$ is not affected by the electric field. For the ensemble of muons in our storage ring there is a small electric field correction to $`\omega _a`$, since not all muons are at the magic momentum. There is also a pitch correction because of the vertical betatron oscillations. The sum of these two corrections for these data is ($`0.9\pm 0.2`$) ppm. The dominant systematic errors which were reported in our first measurement have been completely eliminated. The remaining systematic errors are under study. With many of them approximated by upper limits, one obtains a total systematic error of $`\sim 1`$ ppm, with the systematic error assigned to the magnetic field of 0.5 ppm. Since the study of systematic errors is a source of much continuing work, we have chosen not to present a detailed list at this time. We do wish to note the substantial improvement in the magnetic field quality which has been obtained by additional shimming. In Fig. 7 we show the average magnetic field from the 1997 and 1998 runs. The field uniformity over the storage aperture in 1998 was almost an order of magnitude better than was obtained in the CERN experiment. ## 4 New Results One month after the Symposium, we finished our analysis of the 1998 data and obtained a new result at the precision of $`\pm 5`$ ppm. Our experiment measures the frequency ratio $`R=\omega _a/\omega _p`$, where $`\omega _p`$ is the free proton NMR frequency in our magnetic field. Including the pitch and electric field corrections, we obtain $`R=\mathrm{3.707\; 201}(19)\times 10^{-3}`$, where the 5 ppm error includes a 1 ppm systematic error estimate. We obtain $`a_{\mu ^+}`$ from $`a_{\mu ^+}=R/(\lambda -R)`$ $`=\mathrm{116\; 591\; 91}(59)\times 10^{-10}`$, in which $`\lambda =\mu _\mu /\mu _p=\mathrm{3.183\; 345\; 39}(10)`$. This new result is in good agreement with the mean of the CERN measurements for $`a_{\mu ^+}`$ and $`a_{\mu ^-}`$, and with our previous measurement of $`a_{\mu ^+}`$, which are tabulated below. Assuming CPT symmetry, the weighted mean of the four measurements gives a new world average of $`a_\mu =\mathrm{116\; 592\; 10}(46)\times 10^{-10}`$ $`(\pm 3.9\mathrm{ppm})`$, which agrees with the standard model to within one standard deviation. These results are displayed graphically in Fig. 8. Also shown is the projected error from the 1999 data, and the $`\pm 0.35`$ ppm goal of E821. ## 5 Outlook and Conclusions The standard model has been remarkably successful in describing a wide range of phenomena, including the entire body of results from LEP. The new result from E821, which has a precision of 5 parts per million, lowers the uncertainty on our knowledge of $`(g-2)_\mu `$ to 3.9 ppm. 
At that level there is good agreement with the standard model, but there is still room for the observation of a value at the level of one ppm or better which could agree with the previous measurements and disagree with the standard model. We expect our one ppm result to be available before the end of 2000. This experiment is very much a work in progress. The initial design of E821 was made with a systematic error budget of 0.12 ppm. Many of the improvements which were made to the CERN technique have indeed worked, making it straightforward to obtain a systematic error less than one ppm. While we have been learning about our systematic errors from the beginning of the first pion injection run, with the sub-ppm statistics available with muon injection we are now challenged to push the analysis of systematic errors to the limit. Our ultimate goal is a statistical answer at the 0.3 ppm level, with systematic errors at perhaps half this level. Our work thus far seems to indicate that this level of systematic error will be possible. A new inflector magnet has been installed in the ring, which has a fringe field in the storage region a factor of 5 less than the old inflector had. This will improve our ability to map the field everywhere and will help to reduce the uncertainty on our knowledge of the field. The principal magnetic field issue remaining is our ability to track the field with time, and to understand the full calibration of the NMR probes to a few tenths of a ppm. The other principal challenge is how to handle pile-up in the detectors without discarding an unreasonable portion of the data set. The dominant analysis effort now is to understand this effect. It has been 20 years since the CERN experiment presented its final report, which verified the hadronic contribution at the eight standard deviation level but left it to a future experiment to verify the electroweak contribution. The new Brookhaven experiment is now approaching that goal. Within the next few years either the standard model value will be confirmed, or evidence of a new contribution to the muon $`(g-2)`$ will be discovered. I wish to acknowledge the efforts of the many collaborators who have worked on $`(g-2)`$ over the past 15 years. The steady support of the funding agencies (US and abroad) and the Brookhaven Laboratory management was essential to our reaching this point. I wish to thank R. Carey, D. Hertzog, K. Jungmann, V. Hughes, J. Miller, Y. Semertzidis and E. Sichtermann for their comments on this manuscript.
# High-resolution simulations and visualization of protoplanetary disks ## 1. The Method Extremely small temporal and spatial scales involved in the problem of accretion onto a protoplanet necessitate the use of nonuniform discretization in the vicinity of the accretor. In our study we used the adaptive mesh refinement (AMR) method combined with a high-resolution Godunov-type advection scheme (amra, Plewa & Müller 2000). The AMR discretization scheme follows the approach of Berger and Colella (1989). The computational domain is covered by a set of completely nested patches occupying successive levels, which form a refinement hierarchy. As one moves toward higher levels, the numerical resolution increases by a prescribed integer factor (separate for every direction). The net flow of material between patches at different levels is carefully accounted for in order to preserve the conservation properties of the hydrodynamical equations. Boundary data for child patches are either obtained by parabolic two-dimensional conservative interpolation of parental data or set according to prescribed boundary conditions. The hydrodynamical equations are solved with the help of the Direct Eulerian Piecewise-Parabolic Method (PPMDE) of Colella & Woodward (1984), as implemented in the herakles solver (Plewa & Müller 2000). Simulations have been done in spherical polar coordinates in a frame of reference corotating with the protoplanet. herakles guarantees exact conservation of angular momentum, which is particularly important in numerical modeling of disk accretion problems. The use of its multifluid option with tracer materials distributed within the disk (not presented here) makes it possible to identify the origin of the material accreted onto the protoplanet. The amra code is written purely in FORTRAN 77 and has been successfully used on both vector supercomputers and superscalar cache-oriented workstations. Its parallelization on shared-memory machines exploits microtasking (through the use of vendor-specific directives) or the OpenMP standard. ## 2. Simulation setup The computational domain extends from 0.25 to 2.5 radii of the planet’s orbit. We employ 7 levels with refinement ratios ranging from (2,4) to (4,4). The base level contains the protoplanetary (circumstellar) disk while the 7th level contains the planet and its immediate vicinity. The base grid consists of $`128\times 128`$ cells uniformly distributed in $`r`$ and $`\theta `$. The effective resolution at the 7th level is $`131072\times 524288`$ in $`r`$ and $`\theta `$, respectively. The topmost five levels are schematically shown in Figure 1. White lines are boundaries of the patches. There are 1, 1, 1, 1, 12, 4 and 49 patches at levels 1-7, respectively. The structure of the grid at level 7 is shown in Figure 2f, with individual cell boundaries drawn with white lines (the dark blue circle shows the size of the planet). ## 3. Physical model The simulation is initialized with a Keplerian disk. Originally the disk has a mass of 0.01 M$`_\odot `$, a constant $`h/r`$ ratio of 0.05, and surface density proportional to $`r^{-1/2}`$. The temperature is a fixed function of $`r`$ throughout the simulation. There is no explicit viscosity in the disk. At the outer and inner boundary of the base grid the gas is allowed to flow freely out of the computational domain. No inflow is allowed. The accretion onto the planet is accounted for in a very simplified way. 
At every time step the mean value of the density within two planetary radii is calculated, and whenever it is higher than a preset value, the excess gas is removed. At $`t=0`$ a planet of one Jupiter mass is inserted into the disk on a circular orbit. The radius of the orbit and the mass of the planet remain constant throughout the simulation. The disk is allowed to evolve for 100 planetary orbits. A gap is cleared in it, and a secondary, circumplanetary disk is formed. The sequence of surface plots in Figure 2 shows the final structure of both disks (the surface density distribution is displayed). The red peak in Figure 2a is the unresolved image of the very dense circumplanetary disk. We have been able for the first time to see the details of the latter (Figures 2c-d). The streams of gas flowing across the gap from the left and right edges of the frame (light blue) collide with the outer part of the circumplanetary disk. The collision regions (green wedges) bear a strong resemblance to hot spots in cataclysmic binaries. In every region two strong shock waves are excited, one of them propagating into the stream, and the other into the disk. The shocked gas flows from the collision region along a loosely wound spiral towards the planet (Figure 2e). This picture is significantly more detailed than the one recently published by Lubow, Seibert, & Artymowicz (2000). Streamlines of the flow around the planet are shown in Figure 3, and they are in good agreement with those of Lubow et al. Our simulation is of a preliminary nature, and its sole purpose is to demonstrate the capabilities of amra. Currently, we are improving the physics of the model. One of the problems we are going to attack is the accurate calculation of the gravitational torque exerted by the disk on the planet in the phase preceding gap formation. ## 4. Visualization To visualize the complicated amra output, we have chosen the AVS/Express environment for visual programming. It allows the user to quickly build simple applications employing standard library modules. Advanced users can develop their own, highly specialized modules and applications. Our amra-visualization application (visa) is partly based on modules written by Favre, Walder, & Follini (1998), which have been substantially modified, and partly on our own modules. A screenshot of visa is shown in Figure 4. The panel and the viewer are contained in the two topmost windows, while the bottom window contains the AVS/Express programming platform. Currently we are able to read the AMR data, extract components, perform mathematical operations on data sets and coordinates, extract any subset of levels or patches, and apply to them various visualization techniques (e.g. 2-D plot, surface plot, isolines, slice). Streamlines can also be calculated. The application is still under development, and new options are being added. ### Acknowledgments. This research is supported by the Polish Committee for Scientific Research through the grant 2.P03D.004.13. ## References Berger, M. J., & Colella, P. 1989, J. Comput. Phys., 82, 64 Colella, P., & Woodward, P.R. 1984, J. Comput. Phys., 54, 174 Plewa, T., & Müller, E. 2000, Comp. Phys. Commun. (in preparation) Lubow, S.H., Seibert, M., & Artymowicz, P. 2000, ApJ (astro-ph/9910404) Favre, J. M., Walder, R., & Follini, D. 1998, in Proceedings, 40th Cray User Group Conference, Stuttgart, Germany (June 1998)
# An 11.6 Micron Keck Search For Exo-Zodiacal Dust ## 1 INTRODUCTION Our sun is surrounded by a disk of warm ($`>`$150 K) “zodiacal” dust that radiates most of its thermal energy at 10–30 microns. This zodiacal dust is produced largely in the inner part of the solar system by collisions in the asteroid belt (Dermott et al. 1992) and cometary outgassing (Liou and Zook 1996). Zodiacal dust is interesting as a general feature of planetary systems, and as an indicator of the presence of larger bodies which supply it; dust orbiting a few AU from a star is quickly removed as it loses angular momentum to Poynting-Robertson drag (Robertson 1937). Understanding the extra-solar analogs of zodiacal dust may also be crucial in the search for extra-solar planets (Beichman et al. 1996), since exo-zodiacal dust in a planetary system could easily outshine the planets and make them much harder to detect. The best current upper limits for the existence of exo-zodiacal dust disks come from IRAS measurements of 12 and 25 micron excesses above photospheric emission. Seen from a nearby star, solar system zodiacal dust would create only a $`10^{-4}`$ excess over the sun’s photospheric emission at 20 microns. IRAS measurements, however, have typical measurement errors of 5 percent (Moshir et al. 1992) and display systematic offsets of a similar magnitude when they are compared to other photometry (Cohen et al. 1996). If there were a solar-type zodiacal disk with 1000 times the density of the disk around the sun around Tau Ceti, the nearest G star, the excess infrared emission would barely exceed the formal 68% confidence intervals of the IRAS photometry. Moreover, all photometric detection schemes of this sort are limited by how accurately the star’s mid-infrared photospheric emission is known. For farther, fainter stars than Tau Ceti, inferring the presence of dust from the IRAS data becomes still harder. The detection of faint exo-zodiacal dust emission is more feasible if one can resolve the dust-emitting region. The high resolution and dynamic range needed for these observations will generally require large interferometers like the Keck Interferometer, the Large Binocular Telescope, and the Very Large Telescope Interferometer. But it is already possible to resolve the zodiacal dust mid-infrared emitting regions of the nearest stars. A 10-meter telescope operating at 12 microns has a diffraction-limited resolution of 0.25 arc seconds, corresponding, for example, to a transverse distance of 2 AU at 8 parsecs. We have begun a search for zodiacal dust around the nearest stars using the mid-infrared imaging capabilities of the Long Wavelength Spectrometer (LWS) (Jones & Puetter 1993) on the W. M. Keck telescope. The large aperture of the telescope allows us to make spatially resolved images of the zodiacal dust 11.6 micron emitting region around the stars, so that we can look for dust emission above the wings of the point-spread function (PSF) rather than as a tiny photometric excess against the photosphere. We present here the results of two nights of observations, and compare them with a simple model of exo-zodiacal thermal emission to place upper limits on the amount of dust present in the systems we observed. 
With the object on-axis, we took a series of frames lasting 0.8 ms each, chopping the secondary mirror between the object and blank sky 8 arcseconds to the north at a frequency of 10 Hz. Then we nodded the primary mirror for the next series of frames so that the sky was on-axis and the object off-axis. We repeated this process for 3 nods over a period of 5 minutes, for an on-source integration time of 1.1 minutes, and a typical noise of 2 mJy in one 0.11 by 0.11 arcsecond pixel due to the thermal background. The seeing was poor both nights, up to 2 arc seconds in the visible. To measure the atmosphere-telescope transfer function, we made similar observations of seven distant, luminous calibrator stars near our targets on the sky, alternating between target and calibrator every 5–10 minutes. We increased our frame-rate for the second night of observations so that we could compensate for the seeing using speckle analysis. Figure 1 shows a cut through a single 84 ms exposure of Altair on August 4, compared to an Airy function representing the diffraction-limited PSF of a filled 10-meter aperture at 11.6 microns. The cores of the images are diffraction-limited, but the wings are sensitive to the instantaneous seeing, making speckle analysis necessary. Table 1 provides a summary of our observations. We flat-fielded the images by comparing the response of each pixel to the response of a reference pixel near the center of the detector. First we plotted the data number (DN) recorded by a given pixel against the DN in the reference pixel for all the frames in each run. Since the response of each pixel is approximately linear over the dynamic range of our observations and most of the signal is sky background, which varies with time but is uniform across the chip, the plotted points for each pixel describe a straight line; if all the pixels had the same response, the slope of each line would equal 1. We divided each pixel’s DN by the actual slope of its response curve relative to the reference pixel, effectively matching all pixels to the reference pixel. We then interpolated over bad pixels, frame by frame. To compensate for the differences in the thermal background between the two nod positions, we averaged together all the on-axis sky frames to measure the on-axis thermal background and subtracted this average from all of the on-axis frames—both object and sky. We used the same procedure to correct the off-axis frames. Next, we chose subframes of 32 by 32 pixels on each image, centered on the star (or for sky frames, the location of the star in an adjacent object frame), and processed these according to classical speckle analysis (Labeyrie 1970). We Fourier transformed them, and summed the power spectra, yielding a sky power spectrum and an object power spectrum for each series. Then we azimuthally averaged the power spectra in the u-v plane—that is, we averaged over all the frequency vectors of a given magnitude, $`\sqrt{u^2+v^2}`$. This azimuthal averaging corrects for the rotation of the focal plane of the alt-az-mounted Keck telescope with respect to the sky. We then subtracted from every object power spectrum the corresponding sky power spectrum and divided each corrected target power spectrum by the corrected power spectrum of a calibrator star observed in the same manner as the target star immediately before or after the target star. 
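Schematically, the reduction chain just described amounts to a few array operations; the sketch below assumes the 32 by 32 subframes are available as NumPy arrays of shape (nframes, 32, 32), and every name in it is illustrative.

```python
# Sketch of the classical speckle reduction described above (assumed shapes).
import numpy as np

def mean_power_spectrum(frames):
    """Average |FFT|^2 over a series of short-exposure subframes."""
    return np.mean(np.abs(np.fft.fft2(frames)) ** 2, axis=0)

def azimuthal_average(power):
    """Average over all u-v frequency vectors of a given magnitude."""
    n = power.shape[0]
    u, v = np.meshgrid(np.fft.fftfreq(n), np.fft.fftfreq(n), indexing="ij")
    radius = np.hypot(u, v).ravel()
    bins = np.linspace(0.0, radius.max(), n // 2 + 1)
    idx = np.digitize(radius, bins)
    return np.array([power.ravel()[idx == i].mean()
                     for i in range(1, len(bins))])

# With obj, obj_sky, cal, cal_sky the azimuthally averaged mean power
# spectra of the object, its sky, the calibrator, and its sky:
#   calibrated = (obj - obj_sky) / (cal - cal_sky)
# which should be flat (a delta function's power spectrum) if unresolved.
```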
Figure 2 shows an azimuthally-averaged power spectrum of Altair and the corresponding sky power spectrum, compared with a power spectrum of the calibrator Gamma Aquilae and its corresponding sky power. We then averaged all the calibrated power spectra for a given target. If the object and calibrator are both unresolved, the average calibrated power spectrum should be the power spectrum of the delta function: a constant. We found that the pixels along the u and v axes of the power spectra were often contaminated by noise artifacts from the detector amplifiers, so we masked them out. Figures 3 and 4 show the calibrated azimuthally-averaged power spectra for our target stars. To compare different power spectra from the same target, we normalized each azimuthally-averaged power spectrum so that the geometric mean of the first 10 data points in each spectrum equals 1. For Altair and 61 Cygni A and B we had more than three pairs of target and calibrator observations, i.e. calibrated power spectra, so we show the average of all the spectra and error bars representing the 68% confidence interval for each datum, estimated from the variation among the individual power spectra. The error is primarily due to differences in the atmosphere-telescope transfer function between object and calibrator. None of the calibrated power spectra deviate from a straight line by more than a typical error; all the targets are unresolved to the accuracy of our measurements.

## 3 DISCUSSION

To interpret our observations we compared them to models of the IR emission from the solar zodiacal cloud. We constructed a model for exo-zodiacal emission based on the smooth component of the Kelsall et al. (1998) model of the solar system zodiacal cloud as seen by COBE/DIRBE, with emissivity $`\epsilon \propto r^{-0.34}`$ and a temperature $`T=286\,\mathrm{K}\,r^{-0.467}L^{0.234}`$, where $`r`$ is the distance from the star in AU, and $`L`$ is the luminosity of the star in terms of $`L_{\odot}`$. For a dust cloud consisting entirely of a single kind of dust particle of a given size and albedo, the $`L`$ exponent in the expression for the temperature is simply $`-1/2`$ times the $`r`$ exponent (Backman & Paresce 1993). The physics of the innermost part of the solar zodiacal dust is complicated (see Mann & MacQueen 1993), but our results are not sensitive to the details, because the hottest dust is too close to the star for us to resolve. We assume that the dust sublimates at a temperature of 1500 K, and allow this assumption to define the inner radius of the disk. We set the outer radius of the model to 3 AU, the heliocentric distance of the inner edge of our own main asteroid belt. Our conclusions are not sensitive to this assumption; decreasing the outer radius to 2 AU or increasing it to infinity makes a negligible difference in the visibility of the model, even for A stars.

The assumed surface density profile, however, does make a difference. A collisionless cloud of dust in approximately circular orbits spiraling into a star due to Poynting-Robertson drag that is steadily replenished at its outer edge attains an equilibrium surface density that is independent of radius (Wyatt and Whipple 1950, Briggs 1962). Models that fit data from the Helios space probes (Leinert et al. 1981), the fit by Kelsall et al. (1998) to the COBE/DIRBE measurements and Good’s (1997) revised fit to the IRAS data all have surface densities that go roughly as $`r^{-0.4}`$. This distribution appears to continue all the way in to the solar corona (MacQueen & Greely 1995).
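To make the model concrete, a minimal sketch follows (our own illustration; the power laws are the fits quoted above, the 1500 K sublimation temperature and the 3 AU outer edge are the stated assumptions, and the stellar luminosity in the usage line is a placeholder):

```python
import numpy as np

H, C, KB = 6.626e-34, 2.998e8, 1.381e-23   # SI constants
NU = C / 11.6e-6                            # observing frequency [Hz]

def temperature(r_au, L):
    """DIRBE-based fit quoted in the text: T = 286 K r^-0.467 L^0.234."""
    return 286.0 * r_au ** -0.467 * L ** 0.234

def inner_radius(L, T_subl=1500.0):
    """Radius [AU] where T(r) reaches the assumed 1500 K sublimation point."""
    return (286.0 * L ** 0.234 / T_subl) ** (1.0 / 0.467)

def planck_nu(T):
    """Planck function B_nu at 11.6 microns [W m^-2 Hz^-1 sr^-1]."""
    return 2.0 * H * NU ** 3 / C ** 2 / np.expm1(H * NU / (KB * T))

def brightness_profile(L, sigma_1au, n=200):
    """Relative face-on surface brightness, Sigma(r) * B_nu(T(r)),
    with the roughly r^-0.4 surface density discussed above."""
    r = np.linspace(inner_radius(L), 3.0, n)   # out to the assumed 3 AU edge
    return r, sigma_1au * r ** -0.4 * planck_nu(temperature(r, L))

r, b = brightness_profile(L=10.6, sigma_1au=1.0)  # L roughly Altair (assumed)
print(f"inner edge: {r[0]:.3f} AU")
```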
We find that in general, if we assume an $`r^\alpha `$ surface density profile, our upper limit for the 1 AU density of a given disk scales roughly as $`10^{-\alpha /2}`$; disks with more dust towards the outer edge of the 11.6 micron emitting region are easier to resolve. Likewise, the assumed temperature profile strongly affects our upper limits. Unfortunately, we know little about the temperature profile of the solar zodiacal cloud. COBE/DIRBE and IRAS only probed the dust thermal emission near 1 AU, and Helios measured the solar system cloud in scattered light, which does not indicate the dust temperature. We found that a dust cloud model with the IRAS temperature profile ($`T=266\,\mathrm{K}\,r^{-0.359}L^{0.180}`$) was much easier to resolve than the model based on DIRBE measurements that we present here, especially for G and K stars.

To compare the models with the observations, we synthesized high resolution images of the model disks at an inclination of 30 degrees. We calculated the IR flux of the stars from the blackbody function, and obtained the parallaxes of the stars from the Hipparcos Catalog (ESA 1997). We inferred stellar radii and effective temperatures for each star from the literature and checked them by comparing the blackbody fluxes to spectral energy distributions based on photometry from the SIMBAD database (Egret et al. 1991). For Altair and Vega, we use the interferometrically measured angular diameters (Hanbury Brown et al. 1974); they are 2.98 ± 0.14 mas and 3.24 mas, respectively. Stellar fluxes typically disagree with fitted blackbody curves by $`10\%`$ in the mid-infrared (Engelke 1990), but our method does not require precise photometry, and the blackbody numbers suffice for determining conservative upper limits. We computed the power spectra of the images, and normalized them just like the observed power spectra. In Figures 3 and 4, the azimuthally-averaged power spectra for our target stars are compared to the extrapolated COBE/DIRBE model at a range of model surface densities. Disks with masses as high as $`10^3`$ times the mass of the solar disk will suffer collisional depletion in their inner regions, so they are unlikely to have the same structure as the solar disk. By neglecting this effect we are being conservative in our mass limits. The density of the densest model disk consistent with the data in each case is listed in Table 1.

### Altair

Our best upper limit is for Altair (spectral type A7, distance 5.1 pc); with 11 pairs of object and calibrator observations we were able to rule out a solar-type disk with a few times $`10^3`$ the density of our zodiacal cloud. Such a disk would have been marginally detectable by IRAS as a photometric excess.

### Vega

IRAS detected no infrared excess in Vega’s spectral energy distribution at 12 microns, with an uncertainty of 0.8 Jy. This may be due to a central void in the disk interior to about 26 AU (Backman & Paresce 1993). Aumann et al. (1984) suggested that Vega (A0, 7.8 pc) could have a hot grain component (500 K) with up to $`10^{-3}`$ of the grain area of the observed component and not violate this limit. The apparent upward trend in the visibility data may be a symptom of resolved flux in the calibrator stars. We have only 3 object/calibrator pairs for Vega, not enough to test this hypothesis. Our upper limit is a solar-type disk with approximately $`3\times 10^3`$ times the density of the solar disk. This disk would have a $`\sim `$500 K emitting area of $`10^{24}\,\mathrm{cm}^2`$, about $`10^{-3}`$ of the grain area of the observed component.
### 61 Cygni A and B

Though 61 Cygni is close to the galactic plane and surrounded by cool cirrus emission, Backman, Gillett and Low (1986) identified an IRAS point source with this binary system and deduced a far-infrared excess not unlike Vega’s. The color temperature of the excess suggests the presence of dust at distances $`>15`$ AU from either star. However these stars are dim (spectral types K5 and K7) and the region of the disk hot enough to emit strongly at 11.6 microns is close to the star and difficult to resolve; assuming the COBE/DIRBE model, we could not have detected a solar-type dust disk around either of these objects at any density, and assuming the IRAS model, only if it had $`10^5`$ times the density of the solar disk.

### 70 Oph B

70 Oph is a binary (types K0 and K4) with a separation of 24 pixels (2.6 arcsec). We were able to assemble a power spectrum for B from 9 object/calibrator pairs, but the image of A fell on a part of the LWS chip that suffered from many bad pixels and was unusable. The image of A may also have been distorted by off-axis effects. 70 Oph B, like 61 Cygni A and B, is dim, making any dust around it cool and hard to detect at 11.6 microns.

### $`\tau `$ Ceti

IRAS could have barely detected a disk with $`1000`$ times the emitting area of the solar disk around Tau Ceti (G8, 3.6 pc), the nearest G star. We have only three object/calibrator pairs for this object, not enough data to improve on this limit.

We are grateful to Dana Backman, Alycia Weinberger, Keith Matthews and Eric Gaidos for helpful discussions, and to Keith Matthews and Shri Kulkarni for assistance with the observations. This research has made use of the Simbad database, operated at CDS, Strasbourg, France. The observations reported here were obtained at the W. M. Keck Observatory, which is operated by the California Association for Research in Astronomy, a scientific partnership among the California Institute of Technology, the University of California, and the National Aeronautics and Space Administration. It was made possible by the generous financial support of the W. M. Keck Foundation.
# Ionizing radiation in Smoothed Particle Hydrodynamics

## 1 Introduction

Smoothed Particle Hydrodynamics (sph) has become a numerical method widely used for addressing problems related to fluid flows in astrophysics. Due to its Lagrangian nature it is especially well suited for applications involving variations by many orders of magnitude in density. Examples of this type of application are simulations of the collapse of molecular clouds and the formation of a stellar cluster, as performed by Klessen, Burkert & Bate [Klessen et al. 1998]. A comparison between grid based methods and sph was performed by Burkert, Bate & Bodenheimer [Burkert et al. 1996] and Bate & Burkert [Bate & Burkert 1997]. They applied both methods to the numerically demanding problem of gravitational collapse and fragmentation of a slightly perturbed rotating cloud core with an $`r^{-2}`$ density profile. Both methods yielded the same qualitative results. Bate [Bate 1998] performed the first calculation which followed the collapse of a molecular cloud core in 3 dimensions down to a protostellar object in hydrodynamical equilibrium, thus spanning 17 (!) orders of magnitude in density. Other applications include accretion processes in massive circumbinary disks [Bonnell & Bate 1994, Bate & Bonnell 1997], the collapse of cloud cores induced by shock waves [Vanhala & Cameron 1998] or colliding clumps [Bhattal et al. 1998], the precession of accretion disks in binary systems [Larwood & Papaloizou 1997], the dynamical behaviour of massive protostellar disks [Nelson et al. 1998] and the formation of large scale structure and galaxies in the early universe [Steinmetz 1996].

A variety of physical processes are at work in the interstellar medium, like magnetic fields, radiation or thermal conductivity, necessitating their inclusion in numerical codes. This has already been achieved to a large extent in grid based methods like the magneto-hydrodynamics codes zeus [Stone & Norman 1992] or nirvana [Ziegler, Yorke & Kaisig 1996], or codes including effects of IR and UV radiation [Yorke & Kaisig 1995, Sonnhalter, Preibisch & Yorke 1995, Richling & Yorke 1998]. In contrast, the addition of physical processes to sph codes is just at its beginning. Extensions achieved so far are sophisticated equations of state (e.g. Vanhala et al. 1998) and self-gravity. Some efforts were made to make sph faster and more accurate. The introduction of tree algorithms [Barnes & Hut 1989, Press 1986, Benz et al. 1990] and the use of GRAvity PipE (grape), a hardware device for fast computation of the gravitational N-body forces [Umemura et al. 1993, Steinmetz 1996], helped reduce the numerical effort for the gravitational force calculation and the determination of the nearest neighbours for each particle. Inutsuka [Inutsuka 1995] presented a Godunov-like solver for the Eulerian equations in sph, thus enhancing the numerical treatment of shocks. The introduction of gravitational periodic boundaries [Hernquist, Bouchet & Suto 1991, Klessen 1997] allows the treatment of fragmentation and turbulence in molecular clouds without global collapse. The timestep problem which arises during isothermal collapse calculations at high densities is circumvented by the formation of sink particles, which replace the innermost parts of the collapsing clump by a single particle that accumulates the infalling mass and momenta [Bate, Bonnell & Price 1995].
The strength of sph lies in its Lagrangian nature, which makes it especially attractive for problems involving gravitational collapse and star formation. Applications such as those by Klessen et al. [Klessen 1997], which deal with the collapse and fragmentation of molecular clouds, neglect the feedback processes of newly born stars, which act on their parental cloud through stellar winds, outflows and ionization. This simplification may be justified as long as the simulations deal with collapse on timescales smaller than $`\sim `$1 Myr, on which single and binary stars or T Tau-like clusters are formed [Efremov & Elmegreen 1998]. The case is different for larger timescales, on which OB subgroups and associations are formed. Neglecting feedback in these cases can lead to unphysical results, like a star formation efficiency of 100 per cent, since in the purely isothermal case all material will sooner or later be accreted onto the evolving protostellar cores. This is in strong contradiction to observations, which estimate a global star formation efficiency for ordinary molecular clouds of order 10 per cent [Wilking & Lada 1985]. Another possible effect of feedback is the induction of star formation due to the compression of cloud material by shock waves and ionization fronts.

In this paper we discuss the implementation of the effects of ionizing UV radiation from massive stars into sph calculations as a first step towards performing more realistic collapse calculations on the scales where OB stars are formed. This will in future applications allow us to assess questions like: How does the process of ionization by massive stars change the stellar initial mass function? What are the implications for the star formation efficiency? Can star formation be induced by ionization, and if yes, what are the time scales and the parameter space for which induced star formation can be expected? These questions will be discussed in subsequent papers.

## 2 Physical problem

We incorporate the effects of ionizing radiation from hot stellar photospheres into sph by dividing the problem into three major substeps:

1. calculation of the UV radiation field by solving the time-independent, non-relativistic equation of radiative transfer,
2. determination of the ionization and recombination rates from the local radiation field, density and ionization fraction,
3. advancing the ionization state of the particles in time by solving the time-dependent ionization rate equation.

### 2.1 Calculation of the UV radiation field

Given a planar infall of ionizing photons from a distant source onto the border of the volume of interest with a flux $`J_0`$ Lyman continuum photons per time and square area, the resulting photon flux inside this volume is given by

$$J(s)=J_0\exp\left(-\tau \left(s\right)\right),$$

where $`\tau (s)`$ is the optical depth for the ionizing photons along the line of sight parallel to the infall direction of the photons, and $`s`$ is the distance from the border of the integration volume along the line of sight:

$$\tau (s)=\int _0^s\left[\overline{\kappa }\left(s^{\prime }\right)+\kappa _\mathrm{d}\left(s^{\prime }\right)\right]\,ds^{\prime }.$$ (1)

We neglect the effect of ‘photon hardening’, i.e.
the stronger absorption of weaker photons, and use an ‘effective’ absorption coefficient $`\overline{\kappa }`$, the mean of $`\kappa _\nu `$ over frequency, weighted by the spectrum of the source $`S_\nu `$:

$$\overline{\kappa }=n_\mathrm{H}\overline{\sigma }=n_\mathrm{H}\frac{\int S_\nu ^{(\mathrm{i})}\sigma _\nu \,d\nu }{S_{\mathrm{tot}}^{(\mathrm{i})}},$$ (2)

where $`\sigma _\nu `$ denotes the ionization cross section of hydrogen in the ground state and $`n_\mathrm{H}`$ the particle density of the H atoms.

The role of dust in Hii regions and its effect on ionizing radiation is still very uncertain [Feldt et al. 1998]. If dust is present, it will partially absorb UV photons, heat up and reemit the energy in the IR regime. Its first order effect can be included easily under the assumption of a homogeneous distribution of the dust in the Hii region. The corresponding contribution to the optical depth can be incorporated by adding the dust absorption coefficient at the Lyman border $`\kappa _\mathrm{d}`$ to the absorption coefficient in Eq. 1. $`\kappa _\mathrm{d}`$ depends on the dust model used and is usually determined using Mie theory for grains with given distributions in size and shape. In this paper, we set $`\kappa _\mathrm{d}`$ to zero throughout.

We also neglect the diffuse field of Lyman continuum photons, which are produced by recombinations of electrons into the ground level and which themselves possess sufficient energy for ionizing other H atoms. A thorough treatment of this radiation can only be achieved by detailed radiative transfer calculations as proposed e.g. by Yorke & Kaisig [Yorke & Kaisig 1995]. Instead we use the assumption of the validity of the ‘on the spot’ approximation as follows: due to the fact that the spectrum of the Lyman recombination photons as well as the ionization cross section is strongly peaked at the Lyman border, a small amount of H atoms in the ionized region is sufficient to make the medium optically thick for the Lyman recombination photons. This leads to the absorption of these photons in the immediate vicinity of their creation sites. As the creation of one photon is related to the creation of one H atom, its absorption leads to the destruction of one H atom. Thus the net effect of these photons on the local ionization structure is zero. This assumption breaks down in regions next to OB stars, where due to the high UV flux the density of H atoms is not sufficient to make the medium optically thick to Lyman continuum photons. Next to ionization fronts, where the density of H atoms is much higher, the ‘on the spot’ assumption is nevertheless a good approximation. For further details refer to Yorke [Yorke 1988].

### 2.2 Ionization and recombination rates

The ionization rate $`\mathcal{I}`$ in the medium is given by the sinks of the UV radiation field, since every ionization leads to the absorption of one UV photon:

$$\mathcal{I}=n_\mathrm{H}\overline{\sigma }J=-\nabla \cdot 𝐉,$$ (3)

where $`𝐉=J\widehat{𝐞}_s`$ is the flux vector in the direction $`\widehat{𝐞}_s`$ of the line of sight. The recombination rate $`\mathcal{R}`$ can be estimated as:

$$\mathcal{R}=n_\mathrm{e}^2\alpha _\mathrm{B}=n^2x^2\alpha _\mathrm{B},$$ (4)

with $`n`$ being the particle density of H atoms and protons together, $`n_\mathrm{e}`$ the particle density of free electrons, $`x=n_\mathrm{e}/n`$ the ionization fraction and $`\alpha _\mathrm{B}`$ the effective recombination coefficient under the assumption of validity of the ‘on the spot’ approximation.
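Read off directly, Eqs. (3) and (4) give the local rates per unit volume as two one-line functions. This is a sketch in our own notation; the numerical value of $`\alpha _\mathrm{B}`$ anticipates the one adopted in Sect. 4.3:

```python
ALPHA_B = 2.7e-13   # effective ('on the spot') recombination coeff. [cm^3 s^-1]

def ionization_rate(J, n, x, sigma_bar):
    """Eq. (3): I = n_H * sigma_bar * J, with n_H = (1 - x) n neutral atoms."""
    return (1.0 - x) * n * sigma_bar * J

def recombination_rate(n, x, alpha_b=ALPHA_B):
    """Eq. (4): R = n_e^2 * alpha_B = (x n)^2 * alpha_B."""
    return (x * n) ** 2 * alpha_b
```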
The recombination coefficient $`\alpha `$ is given as the sum over the individual recombination coefficients $`\alpha _n`$, where the electron ends up in the atomic level $`n`$:

$$\alpha =\sum _n\alpha _n.$$ (5)

Under the assumption of the ‘on the spot’ approximation recombinations into the ground level do not lead to any net effect and thus $`\alpha _1`$ can be neglected in Eq. 5. The resulting net recombination rate which is used in Eq. 4 is commonly called $`\alpha _\mathrm{B}`$ after the nomenclature introduced by Baker & Menzel [Baker & Menzel 1962]:

$$\alpha _\mathrm{B}=\sum _{n=2}^{\mathrm{\infty }}\alpha _n.$$

### 2.3 Ionization rate equation

Knowing the ionization and recombination rates, $`\mathcal{I}`$ and $`\mathcal{R}`$, the ionization fraction can be calculated from the ionization rate equation. The time dependence of the ionization fraction in the frame comoving with the corresponding particle, i.e. its Lagrangian formulation, is given by:

$$\frac{\mathrm{d}n_\mathrm{e}}{\mathrm{d}t}=\mathcal{I}-\mathcal{R}.$$ (6)

### 2.4 Modeling the source

Since the spectral distribution of the UV radiation emitted by the photospheres of intermediate to high mass stars is very uncertain, we assume a black body with an effective temperature $`T_{\ast }`$.

## 3 Numerical treatment

We developed two different methods for the numerical treatment of time dependent ionization in the sph calculations. Both have in common the method of finding paths from the ionizing source to the particles, along which the optical depth for the Lyman continuum photons can be calculated. They differ in the way the ionization rate is determined given the radiation field. Method A uses the sph formalism to calculate the divergence of the radiation field in Eq. 3. In method B we adopt a different approach also used in grid methods, where we derive the ionization rate from the difference in the numbers of photons entering and leaving a particle.

### 3.1 Finding the evaluation points on the path towards the source

First, we specify the position, the rate of ionizing photons $`S_{\mathrm{tot}}`$ and $`\overline{\sigma }`$ (from Eq. 2) of the source. For each particle i we now proceed in the following way (see Fig. 1): Given the list of nearest neighbours of particle i, which has to be determined anyway for the sph formalism, we look for the particle j in the list closest to the line of sight, defined by the smallest angle $`\mathrm{\Theta }`$ between the line connecting the particles i and j and the line of sight. We choose the angle between, not the distance from, the line of sight, since we are interested in controlling the error in the direction towards the source. This is not guaranteed by the latter criterion. We store this particle in a list and determine the evaluation point $`S_\mathrm{j}`$ as the projected particle position on the line of sight. To determine the next evaluation point $`S_\mathrm{k}`$ even closer to the source we now repeat this method using the neighbour list of particle j, and so forth until we reach the source.

### 3.2 Calculating the optical depth and ionization rate for the particles

#### 3.2.1 Method A: sph formalism method

Now the path from the source to particle i is known, and the integration of Eq. (1) can be discretized by using the evaluation points $`S_\mathrm{i}`$.
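A minimal sketch of the evaluation-point search of Sect. 3.1 is given below (our own illustration; the particle positions and neighbour lists are assumed to be supplied by the sph code, and we measure the angle from the current chain particle):

```python
import numpy as np

def evaluation_chain(i, pos, neighbours, source):
    """Walk from particle i towards the source, at each step picking from
    the current neighbour list the particle making the smallest angle with
    the line of sight, until no neighbour lies closer to the source.
    pos: (N, 3) array; neighbours[j]: iterable of neighbour indices."""
    e_s = source - pos[i]
    e_s = e_s / np.linalg.norm(e_s)           # line-of-sight unit vector
    chain, j = [], i
    while True:
        d_j = np.linalg.norm(source - pos[j])
        best, best_ang = None, np.pi
        for k in neighbours[j]:
            if k == j or np.linalg.norm(source - pos[k]) >= d_j:
                continue                       # only steps towards the source
            v = pos[k] - pos[j]
            cosang = np.dot(v, e_s) / np.linalg.norm(v)
            ang = np.arccos(np.clip(cosang, -1.0, 1.0))
            if ang < best_ang:
                best, best_ang = k, ang
        if best is None:                       # arrived at the source region
            return chain
        chain.append(best)
        j = best
```

The projections of the chain particles onto the line of sight then serve as the evaluation points for the discretized integration below.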
The value for $`n_\mathrm{H}`$ can be estimated by using the sph smoothing formalism:

$$n_\mathrm{H}\left(𝐫\right)=\sum _\mathrm{i}n_{\mathrm{H},\mathrm{i}}W\left(𝐫-𝐫_\mathrm{i}\right),$$ (7)

where the sum runs over the particle corresponding to the evaluation point and its nearest neighbours. $`W`$ is the weight factor for each neighbouring particle provided by the smoothing kernel. We calculate the optical depth along the line of sight by applying the trapezoidal rule, until we reach particle i:

$$\tau _{\mathrm{k}+1}=\tau _\mathrm{k}+\frac{1}{2}\overline{\sigma }\left(s_{\mathrm{k}+1}-s_\mathrm{k}\right)\left(n_{\mathrm{H},\mathrm{k}+1}+n_{\mathrm{H},\mathrm{k}}\right),$$

with $`s_\mathrm{k}`$ being the position of the evaluation point on the line of sight. Note that this treatment neglects the effects of scattering of the ionizing photons by recombination or dust. The distance between two successive evaluation points is smaller than or equal to the local smoothing length, which determines the largest distance of the particles included in the nearest neighbour list as well as the spatial resolution. This guarantees that the line of sight integration of Eq. (1) is discretized into a reasonable number of substeps, consistent with the resolution given by the underlying particle distribution. The flux of ionizing photons at the position of particle i in the direction of photon propagation $`\widehat{𝐞}_s`$ is then given by:

$$𝐉_\mathrm{i}=J_0\widehat{𝐞}_s\exp\left(-\tau \left(s_\mathrm{i}\right)\right).$$

With the ionizing flux known at the particle positions, the nabla operator in Eq. (3) can be calculated by the sph formalism. It is given for each particle i as the sum over its neighbours:

$$\mathcal{I}_\mathrm{i}=-\sum _\mathrm{j}\frac{m_\mathrm{j}}{\rho _\mathrm{j}}𝐉_\mathrm{j}\cdot \nabla _\mathrm{i}W_{\mathrm{j},\mathrm{i}}.$$ (8)

Now we are able to solve Eq. 6, which we write as:

$$\frac{\mathrm{d}x_\mathrm{i}}{\mathrm{d}t}=\frac{\mathcal{I}_\mathrm{i}}{n_\mathrm{i}}-n_\mathrm{i}x_\mathrm{i}^2\alpha _\mathrm{B}.$$ (9)

The time scale for the establishment of ionization equilibrium is given by $`1/(n\alpha _\mathrm{B})`$, which is regularly much shorter than the dynamical and gravitational timescales we are interested in. In order to avoid small timesteps arising from the usage of explicit methods, we use an implicit scheme. The first order discretization of Eq. 9 over a time interval $`\mathrm{\Delta }t`$ is given by:

$$x_\mathrm{i}^{\mathrm{n}+1}=x_\mathrm{i}^\mathrm{n}+\mathrm{\Delta }t\left(\frac{\mathcal{I}_\mathrm{i}^{\mathrm{n}+1}}{n_\mathrm{i}^{\mathrm{n}+1}}-n_\mathrm{i}^{\mathrm{n}+1}\left(x_\mathrm{i}^{\mathrm{n}+1}\right)^2\alpha _\mathrm{B}\right),$$ (10)

where the indices n and n$`+1`$ denote the values at the beginning and the end of the actual timestep $`\mathrm{\Delta }t`$, respectively. We already know all the values on the right hand side from advancing the particles by the sph formalism, except the value for $`\mathcal{I}_\mathrm{i}^{\mathrm{n}+1}`$. Therefore a fully consistent implicit treatment is not feasible. We use the following guess for this value:

$$\mathcal{I}_\mathrm{i}^{\mathrm{n}+1}=\mathcal{I}_\mathrm{i}^\mathrm{n}\frac{1-\exp\left(-n_\mathrm{i}^{\mathrm{n}+1}\overline{\sigma }a_\mathrm{i}^{\mathrm{n}+1}\left(1-x_\mathrm{i}^{\mathrm{n}+1}\right)\right)}{1-\exp\left(-n_\mathrm{i}^{\mathrm{n}+1}\overline{\sigma }a_\mathrm{i}^{\mathrm{n}+1}\left(1-x_\mathrm{i}^\mathrm{n}\right)\right)}.$$ (11)

In this equation, we assign an effective radius $`a_\mathrm{i}`$ to each particle i proportional to the mean particle separation, given by $`a_\mathrm{i}=(M_\mathrm{i}/\rho _\mathrm{i})^{1/3}`$.
This is the estimate of the size of a region with the particle mass $`M_\mathrm{i}`$ and density $`\rho _\mathrm{i}`$. The factor with the exponentials on the right hand side accounts for the effect of higher absorption, and hence ionization rate, with decreasing ionization fraction. We must use the effective radius $`a_\mathrm{i}`$ in Eq. 11 instead of the smoothing length $`h`$, since the method works analogously to implementations in grid codes. In contrast to the sph formalism, each particle now represents a volume of total mass $`M_\mathrm{i}`$ and density $`\rho _\mathrm{i}`$, in which ionizing radiation enters on one side and leaves on the opposite side. The size of this volume is given by $`a_\mathrm{i}`$ as defined above. It is proportional to the particle spacing. In contrast, $`h_i`$ differs from the mean particle separation as it is defined by the condition that there is a fixed number of neighbours $`N_{\mathrm{neigh}}`$ of mass $`M`$ in the sphere with radius $`2h_i`$ and is thus given as

$$h_i=\left(\frac{3N_{\mathrm{neigh}}M}{32\pi \rho }\right)^{1/3}.$$

It depends on $`N_{\mathrm{neigh}}`$ and can therefore not be used instead of $`a_\mathrm{i}`$ in Eq. 11.

One consequence of the discretization of the ionization rate equation is that the solution in ionized regions tends to oscillate around the equilibrium value. In order to avoid small timesteps arising from this, we set the ionization fraction $`x`$ of particles with $`x>0.95`$ to the equilibrium value $`x_\mathrm{E}`$, which is defined by $`\mathrm{d}n_\mathrm{e}/\mathrm{d}t=0`$ in Eq. 6:

$$\frac{\mathrm{d}x}{\mathrm{d}t}=\frac{1}{n}\frac{\mathrm{d}n_\mathrm{e}}{\mathrm{d}t}=\overline{\sigma }(1-x_\mathrm{E})J-nx_\mathrm{E}^2\alpha _\mathrm{B}=0.$$

With $`k=\overline{\sigma }J/(n\alpha _\mathrm{B})`$ it follows that

$$x_\mathrm{E}=\frac{1}{2}\left[\left(k^2+4k\right)^{1/2}-k\right].$$

This method works well in absolutely smooth, noise free particle distributions. However, if one wishes to initially distribute the particles randomly in space, one runs into problems. The sum in Eq. 8 is very sensitive to noisy particle distributions. Eventually the noise can be so high that the error of the sum introduced by noise reaches the order of the sum itself. The ionization rate then locally drops below zero for some particles, which can only be avoided by smoothing the ionization rate spatially over several smoothing lengths. The result is poor resolution. We circumvent this problem in method B.

#### 3.2.2 Method B: grid based method

In this case, a different method is used to discretize the calculation of the optical depth. We determine the positions of the evaluation points i along the line of sight as described in Sect. 3.1 and calculate the hydrogen density $`n_{\mathrm{H},\mathrm{i}}`$ at these positions using Eq. 7. The path is then divided into pieces of length $`\mathrm{\Delta }s_\mathrm{i}=(s_{\mathrm{i}+1}-s_{\mathrm{i}-1})/2`$, assuming a constant hydrogen density $`n_{\mathrm{H},\mathrm{i}}`$ along each interval. The optical depth for one piece can then be approximated by

$$\mathrm{\Delta }\tau _\mathrm{i}=\overline{\sigma }n_{\mathrm{H},\mathrm{i}}\mathrm{\Delta }s_\mathrm{i}.$$

These contributions to the optical depth are summed up until we reach the position located one effective radius $`a_\mathrm{k}`$ before the position of particle k.
A first order approximation for the ionization rate is now given by

$$\mathcal{I}_\mathrm{k}=\frac{J_0}{2a_\mathrm{k}n_\mathrm{k}}\exp\left(-\tau _{\mathrm{k}-\mathrm{a}}\right)\left(1-\exp\left(-\mathrm{\Delta }\tau _\mathrm{k}\right)\right),$$

where $`\tau _{\mathrm{k}-\mathrm{a}}=\sum _\mathrm{i}\mathrm{\Delta }\tau _\mathrm{i}`$ denotes the optical depth one effective radius before the particle’s position and $`\mathrm{\Delta }\tau _\mathrm{k}=2a_\mathrm{k}n_{\mathrm{H},\mathrm{k}}\overline{\sigma }`$ the optical depth across the particle. With the ionization rate derived above we solve the ionization rate equation as described for case A. One can easily show that Eqs. (10) and (11) now give the exact implicit first order discretization of Eq. (9). The solution now approaches the equilibrium value $`x_\mathrm{E}`$ in the ionized regions without the instabilities mentioned for method A. It is not necessary to set $`x`$ artificially to $`x_\mathrm{E}`$. Method A seems to be the more consistent method since it uses the sph formalism for the calculation of $`\mathcal{I}`$. This is the reason why it is also discussed in this paper. Nevertheless we prefer method B due to its robustness against noisy particle distributions and higher consistency concerning the integration scheme, and have applied it to a couple of test cases.

### 3.3 Computational effort

If the procedure explained above is used, the computational effort for the line of sight integration scales approximately as $`N^{4/3}`$, since the integration has to be done for each of the $`N`$ particles, and the average number of evaluation points on each line of sight scales as $`N^{1/3}`$. We can reduce the exponent from $`4/3`$ to 1 by introducing a ‘tolerance angle’ $`\mathrm{\Theta }_{\mathrm{tol}}`$. Suppose we determine the particles along the line of sight as explained in Sect. 3.1. As soon as $`\mathrm{\Theta }`$ for a particle j along the line of sight towards the source is smaller than $`\mathrm{\Theta }_{\mathrm{tol}}`$, we stop our search there. The optical depth of this particle, $`\tau _j`$, is then used as an estimate of the optical depth along the remaining part of the line of sight from the source to $`S_\mathrm{j}`$. Thus no integration is needed for this part of the path. One only has to make sure that $`\tau _j`$ is already known, i.e. that the line of sight integration for particle j has been performed earlier. In this case, the average number of evaluation points $`I`$ per line of sight only depends on $`\mathrm{\Theta }_{\mathrm{tol}}`$ for large $`N`$. As shown in Fig. 2, $`I`$ becomes constant for large $`N`$ and decreases with increasing $`\mathrm{\Theta }_{\mathrm{tol}}`$. As soon as $`I`$ becomes independent of $`N`$, the total computational effort for all lines of sight together scales as $`N`$.

We demonstrate the effects of using the tolerance angle on the accuracy of the ionization rate calculation in Fig. 3. Histograms are plotted for the errors in $`\mathcal{I}`$ and $`\tau `$ for calculations with $`\mathrm{\Theta }_{\mathrm{tol}}=0.5^{\circ }`$, $`1^{\circ }`$, $`2^{\circ }`$ and $`90^{\circ }`$ compared to $`\mathrm{\Theta }_{\mathrm{tol}}=0^{\circ }`$. As the particle distribution we chose the evolved state of a numerical simulation which studies the compression and collapse of a dense clump within the UV field of an OB association using 200 000 particles. The results of this calculation will be presented elsewhere [Kessel & Burkert 1999].
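The tolerance-angle shortcut can be grafted onto the walk sketched in Sect. 3.1: stop as soon as the chain reaches a particle whose angle to the line of sight is below $`\mathrm{\Theta }_{\mathrm{tol}}`$ and whose optical depth is already known, and reuse that value for the remainder of the path. A sketch in our own notation, with a plain dictionary as the cache:

```python
import numpy as np

def tau_with_tolerance(i, pos, neighbours, source, n_H, sigma_bar,
                       tau_cache, tol_deg=1.0):
    """Trapezoidal optical-depth integration along the neighbour walk,
    with early termination at a cached chain particle (illustrative)."""
    tol = np.radians(tol_deg)
    e_s = source - pos[i]
    e_s = e_s / np.linalg.norm(e_s)
    tau, j, s_prev, nH_prev = 0.0, i, 0.0, n_H[i]
    while True:
        d_j = np.linalg.norm(source - pos[j])
        best, best_ang = None, np.pi
        for k in neighbours[j]:
            if k == j or np.linalg.norm(source - pos[k]) >= d_j:
                continue
            v = pos[k] - pos[j]
            ang = np.arccos(np.clip(np.dot(v, e_s) / np.linalg.norm(v),
                                    -1.0, 1.0))
            if ang < best_ang:
                best, best_ang = k, ang
        if best is None:                       # reached the source region
            break
        s_k = np.dot(pos[best] - pos[i], e_s)  # projection onto the ray
        tau += 0.5 * sigma_bar * (s_k - s_prev) * (n_H[best] + nH_prev)
        if best_ang < tol and best in tau_cache:
            tau += tau_cache[best]             # reuse the known remainder
            break
        j, s_prev, nH_prev = best, s_k, n_H[best]
    tau_cache[i] = tau                         # make tau_i available to others
    return tau
```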
Note that $`\mathrm{\Theta }_{\mathrm{tol}}=90^{\circ }`$ represents the worst case, since the tolerance angle criterion is then fulfilled for every particle with minimal $`\mathrm{\Theta }`$ per search through the nearest neighbour list. The particles which are most affected by the tolerance angle criterion lie next to the borders of shadows cast by optically thick regions, since here the path for the integration along the line of sight may be bent through the optically thick region, thus decreasing the ionizing flux artificially. In the opposite case, the path may lead around the opaque region, increasing the ionizing flux at the position of a particle in the shadow. These extreme cases lead to the tail in the error histograms in Fig. 3. Applying the tolerance angle criterion thus numerically blurs shadows. The mean errors in $`\tau `$ are $`1.3`$ per cent for $`\mathrm{\Theta }_{\mathrm{tol}}=0.5^{\circ }`$, $`2.2`$ per cent for $`\mathrm{\Theta }_{\mathrm{tol}}=1.0^{\circ }`$, $`3.4`$ per cent for $`\mathrm{\Theta }_{\mathrm{tol}}=2.0^{\circ }`$ and $`11.2`$ per cent for $`\mathrm{\Theta }_{\mathrm{tol}}=90^{\circ }`$. The corresponding mean errors in $`\mathcal{I}`$ are $`2.8`$, $`4.1`$, $`5.7`$ and $`13.3`$ per cent, respectively.

For the remaining test cases presented in this paper the choice of $`\mathrm{\Theta }_{\mathrm{tol}}`$ has no effect, since they deal with one-dimensional problems, in which the optical depth is only a function of distance from the source. Applying the tolerance angle criterion only shifts the evaluation points away from the lines of sight in directions perpendicular to these, along which there is no change in the optical depth. Indeed, even the choice $`\mathrm{\Theta }_{\mathrm{tol}}=90^{\circ }`$ gives the same results in the one-dimensional test cases as $`\mathrm{\Theta }_{\mathrm{tol}}=0^{\circ }`$. Thus the errors introduced by the angle criterion must be checked with problems in which this symmetry is broken and shadows are present, such as the one mentioned above.

### 3.4 Smoothing the ionization front

For reasons of noise reduction we smooth the ionization front, which is not resolvable by the sph representation, over a distance of the order of one local smoothing length. Nature provides a simple way of doing this. The width of the ionization zone is of the order of one photon mean free path length,

$$d=(\overline{\sigma }n_\mathrm{H})^{-1},$$ (12)

where $`\overline{\sigma }`$ is the net absorption cross section for ionizing photons as defined in Eq. 2. Since we cannot resolve the ionization region anyway, we are free to adjust $`\sigma `$ in such a way that the width of the ionization region given by Eq. 12 is equal to a constant factor $`C\le 1`$ times the local smoothing length $`h`$, but never larger than the value $`\overline{\sigma }`$ given by Eq. 2:

$$\sigma =\mathrm{min}[\overline{\sigma },\left(n_\mathrm{H}Ch\right)^{-1}].$$

Test calculations have shown that a good value is $`C=0.1`$. It has proven to sufficiently reduce the numerical noise introduced into the ionization structure by noise in the particle distribution, and at the same time to keep the resolution of ionization fronts better than the resolution of the sph formalism, in order not to worsen the overall resolution. Note that, when “smoothing” the ionization front over $`0.1`$ times the smoothing length, the noise reducing effect is not caused by the spatial smoothing, since it is ten times smaller than the sph smoothing.
It rather results from the larger number of time steps needed to ionize a particle in the front from an ionization fraction of $`x=0`$ to $`x\approx 1`$. This gives the neighbouring particles the opportunity to react to the changed state in a smoother way.

### 3.5 Heating effect

We assume that heating and cooling effects lead to an equilibrium temperature of 10 000 K in the ionized gas penetrated by ionizing radiation. The cross sections for elastic electron–electron and electron–proton scattering are of the order of $`10^{-13}\,\mathrm{cm}^2`$. Together with a mean velocity of the electrons of the order of $`600\,\mathrm{km}\,\mathrm{s}^{-1}`$, the thermalization timescale for the energies of the ejected electrons is far less than a year for densities of 1 particle cm<sup>-3</sup>, which is many orders of magnitude smaller than the dynamical timescale. Thermalization thus occurs quasi-instantaneously. This process runs even more rapidly at higher densities. Thus we are allowed to treat the gas behind the ionization front as thermalized. We set the internal energy to:

$$e=xe_{10000}+(1-x)e_{\mathrm{cold}},$$

with $`e_{10000}`$ being the internal energy corresponding to a temperature of 10 000 K for ionized hydrogen, and $`e_{\mathrm{cold}}`$ the internal energy of the 10 K cold, neutral gas. Note that this method does not properly treat recombination zones, since in this case one needs the correct inclusion of the heating and cooling processes in order to achieve the correct gas temperatures, sound velocities and pressures. Also, the equilibrium temperature in Hii regions can vary by 20 per cent from this value. These deviations can also only be taken into account by a proper treatment of heating and cooling.

## 4 Tests of the numerical treatment

Although of one-dimensional nature, the following test problems were performed fully in three dimensions.

### 4.1 Test 1: Ionization of a slab with constant density

With this problem we test the implementation of the time-dependent ionization rate equation by ionizing a slab of Hi gas of constant density $`n`$ with ionizing radiation falling perpendicularly onto one of the boundary surfaces. With hydrodynamics switched off, we let the ionization front traverse the slab with a constant velocity $`v_\mathrm{f}`$. To achieve this, we have to vary the infalling photon flux with time. It is given by

$$J(t)=J_\mathrm{f}+J_\mathrm{t}=nv_\mathrm{f}+n^2\alpha _\mathrm{B}v_\mathrm{f}t,$$

where the first term on the right hand side is the flux which provides the photons being absorbed in the ionization front. The second, time-dependent term equals the loss of photons on their way through the slab until they reach the front. For the initial setup we place a number $`N`$ of particles randomly into a slab with length-to-height and length-to-width ratios of 10. Subsequently we let the particle distribution relax by evolving it isothermally within the slab, adding a damping term to the force law. This is necessary to diminish the numerical noise which was introduced by the random distribution. We now have an ensemble of particles which does not possess any privileged directions and which represents a gas of constant density and temperature. We use this distribution as our starting configuration. From now on we keep the particles fixed in space and switch off hydrodynamics. The test was performed for a total number of $`N=`$ 2 000, 16 000 and 128 000 particles.
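The driving flux used in this test is simple enough to state as code (our notation, with the $`\alpha _\mathrm{B}`$ value of Sect. 4.3):

```python
ALPHA_B = 2.7e-13   # cm^3 s^-1, the value adopted in Sect. 4.3

def driving_flux(t, n, v_front, alpha_b=ALPHA_B):
    """J(t) = n v_f + n^2 alpha_B v_f t: the first term feeds the
    ionization front itself, the second replaces the photons lost to
    recombinations in the column already ionized at time t."""
    return n * v_front + n ** 2 * alpha_b * v_front * t
```

The analogous expression for a slab with a density gradient, cubic in $`t`$, is given in the next subsection.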
Since the spatial resolution of sph calculations scales as $`N^{-1/3}`$ (with the number of neighbours $`N_{\mathrm{neigh}}`$ per particle fixed), this yields an increase in linear resolution of a factor of two from one simulation to the simulation with the next higher resolution. The results of these tests are shown in Fig. 4. The mean relative errors between the theoretical result and the calculations decrease linearly with increasing resolution, consistent with our first order discretization of both the line of sight integration and the time dependent ionization equation. The error also decreases with time as the representation of the ionization front gets thinner and thinner compared to the already ionized region. The spread in the errors for $`N=16000`$ and $`N=128000`$ results from the fact that in these cases the numerical solution oscillates around the theoretical solution, sometimes being larger than the latter, sometimes smaller.

### 4.2 Test 2: Ionization of a slab with density gradient

We proceed as in test 1, with the difference that we choose a slab with a constant density gradient in the direction of photon propagation. We choose the time dependence of $`J`$ such that the ionization front should travel through the gas with constant $`v_\mathrm{f}`$. $`J`$ is given by:

$$J(t)=n_0v_\mathrm{f}+\left(\alpha _\mathrm{B}n_0^2+\frac{\mathrm{d}n}{\mathrm{d}x}v_\mathrm{f}\right)v_\mathrm{f}t+\alpha _\mathrm{B}n_0\frac{\mathrm{d}n}{\mathrm{d}x}v_\mathrm{f}^2t^2+\frac{1}{3}\alpha _\mathrm{B}\left(\frac{\mathrm{d}n}{\mathrm{d}x}\right)^2v_\mathrm{f}^3t^3,$$

where $`n_0`$ denotes the density at the surface where the radiation penetrates the slab and $`\frac{\mathrm{d}n}{\mathrm{d}x}`$ is the density gradient. In Fig. 5 we plot the ionized mass for the theoretical solution and the numerical simulations against time. The numerical results converge towards the theoretical solution with increasing resolution. The deviations at $`t>0.9`$ are caused by the ionization front reaching the rear boundary of the slab. Note that the version of sphi used in this paper is not able to follow exactly ionization fronts which travel faster than one local smoothing length per time step. This must be taken into account during the timestep determination. In applications with fast ionization fronts (typically R-type fronts in the early phases of the evolution of Hii regions) this criterion can lead to very small timesteps and thus to a large amount of CPU time needed. A version which circumvents this problem is being developed.

### 4.3 Test 3: Coupling of ionization and hydrodynamics

For this test we adopt the problem mentioned by Lefloch & Lazareff [Lefloch & Lazareff 1994]. A box filled with atomic hydrogen of particle density $`n_0=10\,\mathrm{cm}^{-3}`$ and temperature $`T_{\mathrm{cold}}=100`$ K is exposed to ionizing radiation, with the photon flux increasing from zero linearly with time at a rate $`\mathrm{d}\mathrm{\Phi }/\mathrm{d}t=5.07\times 10^2\,\mathrm{cm}^{-2}\,\mathrm{s}^{-2}`$. There exists an analytical solution to this problem, which is self-similar in the sense that physical values at position $`x`$, measured in the direction of the photon flow, at time $`t`$ are only functions of $`x/t`$. This means: the structure is stretched with time.
The convergence of the code towards the correct solution with increasing resolution can be tested in one calculation, since for all appearing structures the ratio between structure sizes and smoothing lengths increases linearly with time. The resulting structure is the following: an isothermal shock is driven into the neutral medium, sweeping up a dense layer of material. This is followed by an ionization front which leaves the ionized material in quasi-static equilibrium (see Figs. 6, 7). Using the parameter $`\mathrm{\Lambda }=\alpha ^{-1}(\mathrm{d}\mathrm{\Phi }/\mathrm{d}t)`$, Lefloch & Lazareff [Lefloch & Lazareff 1994] find the following analytical solution:

$$\mathrm{\Lambda }=n_\mathrm{i}^2V_\mathrm{i}$$

$$n_\mathrm{i}=\left(\frac{n_0\mathrm{\Lambda }^2}{c_\mathrm{i}^2}\right)^{\frac{1}{5}}$$

$$V_\mathrm{i}=\left(\frac{\mathrm{\Lambda }c_\mathrm{i}^4}{n_0^2}\right)^{\frac{1}{5}}$$

$$n_\mathrm{c}=n_0\left(\frac{\mathrm{\Lambda }}{n_\mathrm{i}^2c_\mathrm{n}}\right)^2$$

$$V_\mathrm{s}=c_\mathrm{n}\left(\frac{n_\mathrm{c}}{n_0}\right)^{\frac{1}{2}},$$

where $`n_\mathrm{i}`$, $`n_0`$ and $`n_\mathrm{c}`$ denote the particle densities of the ionized gas, the undisturbed neutral gas and the gas in the compressed layer, respectively, and $`V_\mathrm{i}`$ and $`V_\mathrm{s}`$ the velocities of the ionization front and the shock front, respectively. We adopt $`\alpha _\mathrm{B}=2.7\times 10^{-13}\,\mathrm{cm}^3\,\mathrm{s}^{-1}`$ from Lefloch & Lazareff [Lefloch & Lazareff 1994] in order to directly compare the results of the sphi code to those of their grid-based method, which uses a piecewise linear scheme for the advection terms proposed by Van Leer [Van Leer 1979]. The resolution of 192 grid cells along the slab in their calculations, from which they derived their results, is comparable to that used in our high resolution case. We use the same method as described in Sect. 4.1 to produce the initial conditions. No gas is allowed to enter or leave through the surface. Table 1 lists the result of this comparison. The sphi calculation slightly underestimates $`V_\mathrm{s}`$ and $`V_\mathrm{i}`$, as is also observed for the grid code. The errors of order 5 per cent are comparable to those achieved by Lefloch & Lazareff [Lefloch & Lazareff 1994]. In the early phases, i.e. at low resolution, the poor treatment of the ionization front leads to irregularities in the ionized region and thus produces sound waves travelling back and forth between the boundary to the left and the ionization front (Figs. 6, 7), which decrease in power as time increases, i.e. at higher resolution. With increasing resolution, i.e. increasing ratio of layer thickness to smoothing length, the representation of the dense layer and the shock front improves (Figs. 8, 9).

## 5 Summary

The method presented in this paper allows the treatment of the dynamical effects of ionizing radiation in sph calculations. Thus the study of astrophysical problems arising from ionization, like the impact of ionizing radiation from newly born stars on the evolution of their parental molecular clouds or the more consistent treatment of heating by OB associations in galaxy dynamics calculations, is now feasible for the first time with sphi in 3 dimensions. We demonstrate that the code is able to treat time-dependent ionization, the related heating effects and hydrodynamics correctly.
Our first applications, detailed calculations of photoionization-induced collapse in molecular clouds and results obtained from them, will be presented in a subsequent paper. To allow the correct treatment of recombination zones, one has to include the effects of time dependent heating and cooling processes by ionization and recombination, emission of forbidden lines and thermal radiation from dust. Another important aspect which was neglected here is the effect of the diffuse Lyman continuum recombination field. It can lead to the penetration of regions shielded from the direct ionizing radiation by the ionization front, as is seen e.g. in calculations of photoevaporating protostellar disks [Yorke & Welz 1996, Richling & Yorke 1998]. An implementation of these processes in our sphi code is planned for the future.

## Acknowledgments

This work was supported by the Deutsche Forschungsgemeinschaft (DFG), grant Bu 842/4. We would also like to thank Matthew Bate and Ralf Klessen for useful discussions concerning the capabilities and implementation of the sph method.
# Localized structures in coupled Ginzburg–Landau equations

## 1 Introduction

When an extended system is close to a Hopf bifurcation leading to uniform oscillations, the amplitude of the oscillations can be generically described in terms of the complex Ginzburg-Landau (CGL) equation. When there are two fields becoming unstable at the same bifurcation, coupled complex Ginzburg-Landau equations (CCGL) should be used instead. This model set of equations appears in a number of contexts including convection in binary mixtures and transverse instabilities in unpolarized lasers.

Coherent structures such as fronts, shocks, pulses, and other localized objects play an important role in the dynamics of extended systems. In particular, for the complex Ginzburg-Landau equation, they provide the building blocks from which some kinds of spatiotemporally chaotic behavior are built up. A systematic study of localized structures in CCGL equations in one spatial dimension was initiated in earlier work. Here we present results on one-dimensional CCGL equations in parameter ranges such that they can be written as

$$\partial _tA_\pm =\mu A_\pm +(1+i\alpha )\partial _x^2A_\pm -(1+i\beta )\left(|A_\pm |^2+\gamma |A_{\mp }|^2\right)A_\pm .$$ (1)

Group velocity terms of the form $`\pm v_g\partial _xA_\pm `$ are explicitly excluded, and $`\gamma `$ is restricted to take real values (without additional loss of generality, $`\alpha `$ and $`\beta `$ are also real parameters). In addition we only consider $`1+\alpha \beta >0`$ (the Benjamin-Feir stable range). These restrictions are the appropriate ones for the description of transverse laser instabilities. In that case $`A_\pm `$ are related to the two orthogonal circularly polarized light components. We further restrict our study to the case $`0<\gamma <1`$, which is the range obtained when atomic properties in the laser medium favor linearly polarized emission. In terms of the wave amplitudes $`A_\pm `$, wave coexistence is preferred.

## 2 Numerical studies

Many experiments on traveling wave systems or numerical simulations of Ginzburg–Landau–type equations exhibit local structures that have a shape essentially time–independent and propagate with a constant velocity, at least during an interval of time where they appear to be coherent structures. In order to analyze these structures it is common to reduce the initial partial differential equation to a set of ordinary differential equations by restricting the class of solutions to uniformly traveling ones. Localized structures are homoclinic or heteroclinic orbits in this reduced dynamical system; that is, they approach simple solutions (typically plane waves) in opposite parts of the system, whereas they exhibit a distinct shape in between. Instead of looking for solutions of the reduced dynamical system, we prefer here to resort to direct numerical solution of Eq. (1) under different initial conditions. A pseudo–spectral code with periodic boundary conditions and second–order accuracy in time is used. Spatial resolution was typically 512 modes. The time step was typically $`0.05`$. The system size was always taken to be $`L=512`$. Several kinds of localized objects which maintain coherence for a time appear and travel around the system. Different initial conditions give birth to different kinds of structures. Some of them decay shortly, and the qualitative dynamics at long times becomes determined by the remaining ones, and essentially independent of the initial conditions. The upper part of Fig.
1 shows the spatiotemporal evolution of $`|A_+(x,t)|`$ and $`|A_-(x,t)|`$ at the parameter values $`\alpha =0.35`$, $`\beta =2.0`$ and $`\gamma =0.2`$. Time runs upwards and $`x`$ is represented in the horizontal direction. Lighter grey corresponds to the maximum values of $`|A_\pm (x,t)|`$ and darker to the minima. This particular evolution was obtained starting from $`A_+(x,0)`$ equal to the Nozaki-Bekki hole, a known analytical solution of the single Ginzburg-Landau equation, and from $`A_-(x,0)`$ equal to a Nozaki-Bekki pulse. These are not exact solutions of the set of equations (1), so that this initial condition decays and gives rise to complex spatiotemporal structures. After a transient that will be described below, the configuration of the system consists of portions with a modulus nearly constant (corresponding to plane wave states) interrupted by localized objects with particle-like behavior. Dark features in $`|A_+|`$ appear where $`|A_-|`$ has bright features, thus indicating that the localized object carries a kind of anticorrelation between the fields. The lower panels of Fig. 1 show the modulus of the two fields at $`t=399`$ and $`x\approx 300`$, where one such object is present. One of the components shows a maximum in the modulus, whereas the other displays a deep minimum. We can call this object a “hole–maximum pair”. It seems to be a dissipative analog of the ‘out-gap’ solitons appearing in Kerr media with a grating, and here it is the characteristic object building up the disordered intermittent dynamics seen at long times. It is clear that these objects connect the plane wave states (that is, the constant modulus regions) filling most of the system.

Before reaching the asymptotic state just described, the system evolves through configurations where additional kinds of localized objects are seen. The presence of the Nozaki-Bekki hole-pulse pair as initial condition in the central part of Fig. 1 gives birth to a pair of fronts which replace the initial lateral plane waves by new ones. Interestingly, a different kind of localized object is seen to form just where the initial hole-pulse pair was placed. A close-up of it at $`t=90`$ is displayed in Fig. 2. It is a kind of coupled maximum-maximum pair. The moduli of the two fields are superposed in the central panel showing the full object. The lateral small bumps are propagating waves that travel towards the central maxima. Thus the center of the coherent structure acts as a wave sink.

In Figure 3 the spatiotemporal evolution of $`|A_+(x,t)|`$ and $`|A_-(x,t)|`$ was obtained using as initial condition a sharp phase jump at the center of the system, with small random white noise added. The parameter values are $`\alpha =0.6`$, $`\beta =1.4`$ and $`\gamma =0.7`$. After a short time, the system reaches a state dominated by branching hole–hole pair structures. Lighter grey corresponds to the maximum values of $`|A_\pm (x,t)|`$ and darker to the minima. The two big triangles correspond to regions of constant modulus, that is, plane waves. The bottom panels show $`|A_+|`$ and $`|A_-|`$ in a portion of the system at these early times. Both are superposed in the central panel to show the complete matching of the two fields. At longer times, all the hole-hole pairs disappear from the system, thus indicating that they are not stable objects at these values of the parameters. The system decays to the same state as at the end of Fig. 1: the dominant coherent structures are the maximum-hole pairs.
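The runs of this section can be reproduced qualitatively with a compact scheme. The sketch below is our own first-order split-step illustration (the production code is second order in time); it takes $`\mu =1`$ and the parameters of Fig. 1:

```python
import numpy as np

# Split-step pseudo-spectral integrator for Eq. (1), periodic in x.
L, N, dt = 512.0, 512, 0.05
mu, alpha, beta, gamma = 1.0, 0.35, 2.0, 0.2      # parameters of Fig. 1

k = 2.0 * np.pi * np.fft.fftfreq(N, d=L / N)      # spectral wavenumbers
lin = np.exp(dt * (mu - (1.0 + 1j * alpha) * k ** 2))  # exact linear step

def step(Ap, Am):
    # nonlinear substep, with the coupling moduli frozen over dt
    cp = np.abs(Ap) ** 2 + gamma * np.abs(Am) ** 2
    cm = np.abs(Am) ** 2 + gamma * np.abs(Ap) ** 2
    Ap = Ap * np.exp(-dt * (1.0 + 1j * beta) * cp)
    Am = Am * np.exp(-dt * (1.0 + 1j * beta) * cm)
    # linear substep, done exactly in Fourier space
    Ap = np.fft.ifft(lin * np.fft.fft(Ap))
    Am = np.fft.ifft(lin * np.fft.fft(Am))
    return Ap, Am

# example: small complex noise seeding the localized structures
rng = np.random.default_rng(0)
Ap = 0.01 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
Am = 0.01 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
for _ in range(2000):
    Ap, Am = step(Ap, Am)
```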
## 3 Exact solutions

The different spatiotemporal evolutions shown in the previous figures (1)–(3) are themselves interesting enough for a detailed study. The localized objects appearing in the simulations are clearly responsible for most of the complex dynamics in the system. We can interpret some of the observed structures from a simple ansatz:

$$A_+(x,t)=e^{i\phi }A_-(x,t)$$ (2)

where $`\phi `$ is constant, and $`A_-(x,t)`$ is any solution of the single CGL equation:

$$\partial _tA_-=A_-+b\,\partial _x^2A_--c|A_-|^2A_-,$$ (3)

where $`b=1+i\alpha `$ and $`c=(1+\gamma )+i(1+\gamma )\beta `$. This simple ansatz gives us a rather rich set of exact solutions: for each known analytical solution of the single CGL equation (3), there is a corresponding solution of the CCGL equation set, in which $`A_-`$ and $`A_+`$ have essentially the same shape except for a constant global phase. In particular, hole, pulse, shock, and front solutions are localized solutions analytically known for the single equation, so that hole-hole, pulse-pulse, shock-shock and front-front pairs are immediately found as analytical solutions of the CCGL set. In particular, pulse-pulse and hole-hole structures are present in Figs. 1 to 3, and turn out to be well described by the ansatz (2). It is worthwhile to note that the studies of instability for these objects in the complex Ginzburg-Landau equation are immediately translated into instability results for the paired structures in CCGL equations.

## 4 Conclusion

In summary, we have shown numerically the existence of different kinds of localized objects responsible for the complex behavior of solutions of the CCGL equations. Some of these objects can be understood in terms of exact solutions arising from a simple ansatz. A more detailed analysis is still needed, however. In particular, the hole-maximum structure, which appears as the dominant coherent structure at long times, cannot be described by our ansatz. In addition, much more work is needed in order to establish the stability properties of the different objects and the nature of their interactions. In a recent work, new exact solutions of Eq. (1) were obtained by using the Painlevé expansion method. The authors describe these solutions as analogues of the Nozaki-Bekki solutions. Comparison of these solutions, different from the ansatz (2), with our numerical results is in progress.

Financial support from DGICYT Projects PB94-1167 and PB97-141-C02-01 is acknowledged. R.M. acknowledges financial support from CONICYT-Fondo Clemente Estable (Uruguay).
no-problem/0002/hep-ph0002084.html
ar5iv
text
# Time invariance violation in photon-atom and photon-molecule interactions. V.A. Kuz’menko Troitsk Institute of Innovation and Fusion Research, Troitsk, Moscow region, 142092, Russian Federation. ## Abstract A direct experimental proof of strong T-invariance violation in interactions of photons with atoms and molecules exists in molecular physics. PACS number: 42.50.-p ”Even for the physicist \[a\] description in plain language will be a criterion of the degree of understanding that has been reached”. Werner Heisenberg, ”Physics and Philosophy” (Harper Bros., New York, p.168, 1958). Searches for P- and T-invariance violation have been conducted in atomic physics for many years. As concerns T-invariance violation, these searches are based on indirect indications; namely, they assume the neutron, electron and atoms to possess an electric dipole moment. Time reversal invariance demands that the probability amplitudes of the direct and reverse processes be equal. The problem of a direct test of T-invariance is not discussed in elementary particle physics. Apparently, this is due to the obvious fact that reversing in time, for example, the $`K^0`$ meson decay is practically unrealizable. By contrast, the time reverse of the process of interaction of a photon with atoms and molecules can, in principle, be easily implemented experimentally. Time invariance in this case implies that the absorption cross-section of the photons should be equal to that of their stimulated emission. We can act on the atoms or molecules with laser radiation to excite them, and then apply laser radiation once more to de-excite them. After that, it is possible to measure the cross-section of the direct process (absorption) and that of the inverse one (stimulated emission) and to compare them. There is no difficulty in principle. The main problem of the experimental technique is that the absorption lines are usually very narrow, so that the Doppler effect hinders measuring their natural widths and cross-sections. In other cases, when the natural linewidth happens to be wider than the Doppler width, the lifetime of the excited states proves to be very small. This does not allow separating in time the processes of excitation and monitoring of the excited states. However, in molecular physics there is a unique object which is characterized by an unusual combination of properties. On the one hand, it has a very large homogeneous spectral width of the optical transition (several orders of magnitude greater than the Doppler width). On the other hand, it has a long lifetime of the excited states compared to that of the spontaneous emission ($`>1ms`$), which allows separating in time the processes of laser excitation and monitoring. Experiments with this object in the late 1980s yielded surprising results which were never given a theoretical explanation; work with this object was stopped, and for more than ten years nobody dared to return to it. Those experiments directly demonstrated T-invariance violation in interactions of photons with molecules. This unique object is the so-called wing of the spectral line, or the wide component of the line. It appears as a certain wide continuum in the absorption spectrum of medium-size molecules (usually having 4–10 atoms). Its nature, apparently, is connected with some features of the vibrational motion of atoms in the molecules.
It is important that we have several reliable experimental proofs of the existence of the wide component. For example, from the results of works where the excitation of $`SF_6`$ molecules by radiation of a pulsed $`CO_2`$-laser in the conditions of a molecular jet was studied, it is possible to obtain precisely the absorption cross-section of the wide component $`\sigma =6\times 10^{-20}cm^2`$ and to evaluate its Lorentzian width ($`\sim 150GHz`$). Direct observation of the wide component as a continuum was carried out in the work where absorption of radiation of a continuous-wave $`CO_2`$-laser by $`SF_6`$ molecules was studied in the conditions of a molecular beam. From the results of this work it is possible to evaluate the absorption cross-section of the continuum ($`\sim 10^{-19}cm^2`$). Finally, the Lorentzian shape of the continuum was confirmed by detection of the far wings of molecular absorption bands, which have a Lorentzian shape and an intensity corresponding to the intensity of the continuum near the center of the absorption band. Thus, the continuum characterizes mainly the homogeneous absorption of molecules. Probing the excited states by radiation of a second $`CO_2`$-laser demonstrates the presence in the absorption spectrum of a sharp dip with a width of $`450kHz`$. This is a typical indication of the heterogeneity of a spectrum. In the late 1980s this effect was not given any explanation. The only explanation is that the spectral width of the reverse optical transition should be $`3\times 10^5`$ times smaller than that of the forward transition. Moreover, in fact, this ratio can be even much greater. In the same conditions, amplification of the probe laser radiation was also observed. If the fluence of the first $`CO_2`$-laser radiation is less than $`20\mu J/cm^2`$, about 0.01% of the total $`SF_6`$ molecules are excited in the beam. Amplification of the probe laser radiation in this case is possible only if the cross-section of the reverse transition exceeds that of the direct one by a factor of $`>10^4`$. Thus, these experiments clearly show that the direct and the reverse processes differ very much in their cross-sections and spectral widths. However, the integrated cross-sections of the direct and reverse processes (the Einstein coefficients) can be identical. Observations of Rabi oscillations argue in favor of such an assumption. The experiments mentioned above can be considered as a direct proof of T-invariance violation. At the same time, numerous facts accumulated in atomic and molecular physics can be regarded as indirect indications of this phenomenon. The existing semiclassical theory of optical transitions allows a satisfactory description of the physical effects from the formal side. But on this basis it is practically impossible to get any concrete information on the physical nature of the processes which underlie the observable effects. Perhaps this feature manifests itself most clearly in the explanation of the population transfer effect under sweeping of the resonance conditions. If the laser is tuned precisely into resonance with the optical transition of a two-level system, then, according to the theory, the level populations should oscillate at the Rabi frequency. If the wavelength of the laser radiation changes and passes through resonance (sweeping of the resonance conditions), this can be regarded as a decrease in the effective energy of the laser pulse.
In this case, a reduction in the number of Rabi oscillations could be expected. However, only complete and irreversible transfer of population from one level to another in one sweeping cycle is observed in the experiments. Manipulations with vectors and series in the rotating-wave model allow theorists to declare that they can explain this effect. However, they cannot reveal the physical reason for the asymmetry and irreversibility of the population transfer process. The difference in the spectral widths of the forward and backward optical transitions is well suited as such a physical reason. In this case sweeping of the resonance conditions must lead in a natural and inevitable way to complete population transfer. A similar situation takes place in many other cases as well. The T-invariance violation described above is a good physical basis for explaining the nature of such effects as amplification without inversion, phase conjugation, photon echo, electromagnetically induced transparency, coherent population trapping, the Autler-Townes effect, and others. The existing semiclassical theory of optical transitions is based on the assumption of complete T-invariance of the process of interaction of photons with atoms and molecules. Accepting the fact of strong T-invariance violation demands a rather radical revision of the theory. But more important now is to revive and extend the experiments with the line wings.
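As a concrete illustration of the swept-resonance argument above, the following minimal sketch (ours, not from the original experiments) integrates the Schrödinger equation for a two-level system whose detuning is swept linearly through resonance. The Rabi frequency and sweep rate are arbitrary illustrative values; for a slow sweep the final populations show the complete, one-way transfer discussed in the text (the Landau-Zener adiabatic limit).

```python
import numpy as np

# Two-level system with H(t) = (1/2) [Delta(t) sz + Omega sx], hbar = 1,
# and a linear sweep Delta(t) = rate * t through the resonance Delta = 0.
Omega = 1.0          # Rabi frequency (coupling to the field)
rate = 0.05          # sweep rate of the detuning

def H(t):
    Delta = rate * t
    return 0.5 * np.array([[Delta, Omega], [Omega, -Delta]], dtype=complex)

def rk4_step(psi, t, dt):
    f = lambda p, s: -1j * (H(s) @ p)
    k1 = f(psi, t)
    k2 = f(psi + 0.5 * dt * k1, t + 0.5 * dt)
    k3 = f(psi + 0.5 * dt * k2, t + 0.5 * dt)
    k4 = f(psi + dt * k3, t + dt)
    return psi + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

psi = np.array([1.0, 0.0], dtype=complex)   # start far below resonance
t, dt = -400.0, 0.01
while t < 400.0:
    psi = rk4_step(psi, t, dt)
    t += dt

# Landau-Zener: P(transfer) = 1 - exp(-pi*Omega^2/(2*rate)) -> 1 for slow sweeps
print("final populations:", np.abs(psi) ** 2)
```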
no-problem/0002/hep-ph0002202.html
ar5iv
text
# CHARMED HADRONS PRODUCTION IN HIGH-ENERGY $`\mathrm{\Sigma }^{-}`$ BEAM A.K. Likhoded<sup>1</sup> and S.R. Slabospitsky State Research Center Institute for High Energy Physics, Protvino, Moscow Region 142284, RUSSIA Abstract We present the calculation of the inclusive $`x_F`$-distributions of charmed hadrons produced in a high-energy $`\mathrm{\Sigma }^{-}`$ beam. Our calculation is based on a modified mechanism of charmed quark fragmentation as well as on the mechanism of $`c`$-quark recombination with the valence quarks from the initial hadrons. <sup>1</sup>E–mail: LIKHODED$`\mathrm{@}`$mx.ihep.su Perturbative QCD provides a reasonable description of the experimental data on the inclusive cross sections of open charm and beauty production on fixed targets. However, it is well known that the experiments indicate a substantial difference in the production of charmed and anticharmed hadrons in the fragmentation region of the initial hadrons (the leading particle effect). This leading particle asymmetry $`A`$ is defined as follows: $`A\equiv {\displaystyle \frac{\sigma (\mathrm{leading})-\sigma (\mathrm{nonleading})}{\sigma (\mathrm{leading})+\sigma (\mathrm{nonleading})}}.`$ (1) It is worth noting that perturbative QCD calculations are unable to reproduce this effect. Indeed, in the quark parton model framework the production of hadrons containing a heavy quark proceeds in two subsequent stages: * a heavy quark pair $`Q\overline{Q}`$ is produced as a result of the hard collision of partons from the initial hadrons (e.g. the subprocesses $`gg\to c\overline{c}`$ and $`q\overline{q}\to c\overline{c}`$ in the Born approximation); * transition of the heavy quark $`c`$ into charmed hadrons (“hadronization”). The standard way of describing heavy quark hadronization is to use the fragmentation function $`D(z)`$ of the heavy $`c`$-quark into the charmed hadron ($`D`$-meson or baryon) (here $`z=|\stackrel{}{p}_D|/|\stackrel{}{p}_c|`$ is the fraction of the heavy quark momentum carried away by the charmed hadron $`D`$). It should be noted that the use of the fragmentation function assumes the absence of interaction of the produced heavy quark $`Q`$ with the remnants of the initial hadrons. Therefore, there should be no difference between the spectra of charmed and anticharmed hadrons. Moreover, no modification of the fragmentation mechanism can reproduce the production asymmetry (the leading particle effect). Note that the fragmentation mechanism can be applied to the production of the $`c\overline{c}`$ pair in a color–singlet state or to high-$`p_T`$ production of open charm. On the other hand, for the case of hadronic production of a color $`c\overline{c}`$ pair with small $`p_T`$ one should take into account the possibility of the charmed $`c`$ and $`\overline{c}`$ quarks interacting with the initial hadron remnants. Therefore, due to the different valence quarks in the initial hadrons one may expect different inclusive spectra of the final charmed hadrons. In the parton model framework, a heavy $`c`$–quark should interact with high probability with its nearest neighbor in rapidity space that is able to form a color-singlet state with it. In some cases, the heavy antiquark may find itself close (in rapidity space) to a valence light quark from the initial hadron. This would result in the formation of a fast heavy meson in the fragmentation region of the initial hadron.
Alternatively, the proximity of a heavy quark to a valence diquark results in fast charmed $`B(cq_1q_2)`$-baryon production. Therefore, the ”hard” part of the charmed hadron spectra is very sensitive to the form of the valence quark distributions in the initial hadrons. In this note we consider charmed hadron production in a high-energy beam of $`\mathrm{\Sigma }^{-}`$ hyperons. We may expect different behavior of the distributions of the valence $`d`$- and $`s`$-quarks. As a result we should observe a different $`x_F`$-dependence of the spectra of charmed hadrons with $`d`$- or $`s`$-quarks, namely, $`D^{-}(\overline{c}d)`$ and $`D_s^{-}(\overline{c}s)`$, $`\mathrm{\Xi }_c^0(cds)`$ and $`\mathrm{\Sigma }_c^0(cdd)`$, etc. Indeed, very roughly, the distribution of a valence quark in the baryon $`B(q_1q_2q_3)`$ can be presented as follows: $$V_{q_1}^B(x)\sim x^{-\alpha _1}(1-x)^{\gamma _B-\alpha _2-\alpha _3},$$ (2) where $`\alpha _i`$ is the intercept of the leading Regge trajectory for the $`q_i`$-quark, while $`\gamma _B\approx 4`$. Note that, due to the violation of flavor $`SU(N)`$ symmetry, we have different intercepts for $`d(u)`$- and $`s`$-quarks: $`\alpha _u=\alpha _d={\displaystyle \frac{1}{2}},\alpha _s\approx 0,\alpha _c\approx -2.2.`$ (3) As a result, the $`x`$-dependence of the valence $`d`$- and $`s`$-quarks in the $`\mathrm{\Sigma }^{-}(sdd)`$ hyperon has the following form: $$V_d^\mathrm{\Sigma }\sim \frac{1}{\sqrt{x}}(1-x)^{3.5},V_s^\mathrm{\Sigma }\sim (1-x)^3$$ (4) It is seen from (4) that the valence $`s`$-quark in the $`\mathrm{\Sigma }^{-}`$ hyperon has a slightly harder $`x`$-distribution than that of the $`d`$-quark. We use the model to describe the production asymmetry for charmed hadrons. In this model the interaction of the charmed quarks with valence quarks from the initial hadrons is described with the help of the recombination function. A detailed description of this mechanism is given elsewhere. The recombination of $`q_V`$ and $`\overline{c}`$ quarks into a $`D`$-meson is described by the function $`R_M(x_V,z;x)`$: $`R_M(x_q,z;x)=Z_M\xi _q^{(1-\alpha _q)}\xi _c^{(1-\alpha _c)}\delta (1-\xi _q-\xi _c),`$ (5) where $`\xi _q=x_q/x`$ and $`\xi _c=z/x`$, while $`x_q`$, $`z`$, and $`x`$ are the fractions of the initial-hadron c.m. momentum that are carried away by the valence $`q`$-quark, the charmed $`c`$-quark, and the meson $`M_{\overline{c}q}`$, respectively. The corresponding recombination of three quarks into a baryon can be described by means of a similar recombination function: $`R_B(x_1,x_2,z;x)=Z_B\xi _1^{(1-\alpha _1)}\xi _2^{(1-\alpha _2)}\xi _c^{(1-\alpha _c)}\delta (1-\xi _1-\xi _2-\xi _c).`$ (6) These functions take into account momentum conservation and the proximity of partons in rapidity space. Actually, the recombination function is the modulus squared of the heavy meson wave function in momentum space, considered in the infinite momentum frame in the valence quark approximation. As a result, the total differential cross section for the production of the $`H_c`$–hadron looks as follows: $`{\displaystyle \frac{d\sigma (H_c)}{dx}}={\displaystyle \frac{d\sigma ^R(H_c)}{dx}}+{\displaystyle \frac{d\sigma ^F(H_c)}{dx}},`$ (7) where the first term on the r.h.s. is the cross section for $`H_c`$–hadron production due to the recombination of the charmed $`c`$–quark with valence quarks from the initial hadron, while the second term is the cross section for $`H_c`$ production due to charmed quark fragmentation.
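The statement that the $`s`$-quark distribution in (4) is harder than the $`d`$-quark one can be quantified directly, since both distributions are Beta densities. The short sketch below (ours, purely illustrative) computes the mean momentum fraction carried by each valence quark:

```python
from scipy.special import beta

# Mean momentum fraction <x> for the valence distributions of Eq. (4),
# written as Beta kernels x^(a-1) (1-x)^(b-1):
#   V_d ~ x^(-1/2) (1-x)^3.5  ->  (a, b) = (0.5, 4.5)
#   V_s ~ (1-x)^3             ->  (a, b) = (1.0, 4.0)
for name, a, b in (("d", 0.5, 4.5), ("s", 1.0, 4.0)):
    mean_x = a / (a + b)          # first moment of a Beta(a, b) density
    norm = beta(a, b)             # normalization integral B(a, b)
    print(f"valence {name}-quark: <x> = {mean_x:.2f}  (norm = {norm:.3f})")

# -> <x> = 0.10 for the d-quark and 0.20 for the s-quark:
#    the valence s-quark carries on average twice the momentum fraction.
```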
Note that this model provides a more or less successful description of charmed $`D`$-meson production in $`\pi ^{-}N`$ interactions (see Figs. 1 and 2 for details). We use LO formulas for the cross sections of quark-antiquark and gluon-gluon annihilation into a charmed quark pair. We set $`m_c=1.25`$ GeV, $`\alpha _s=0.3`$ and find the following cross section value for the $`\mathrm{\Sigma }^{-}p`$ interaction at $`P_{LAB}=600`$ GeV: $$\sigma (\mathrm{\Sigma }^{-}p\to c\overline{c}X)\approx 8\,\mu b$$ (8) In our calculations we do not pretend to reproduce the absolute value of this cross section (see the references for a detailed consideration of this problem). We concentrate on the description of the $`x_F`$-distributions of charmed mesons and baryons. The corresponding distributions (integrated over $`p_{\perp }`$) are presented in Figs. 3–5. We see from these figures that the considered charmed quark interaction in the final state (recombination) does indeed lead to noticeable differences in the $`x_F`$-spectra. These differences can be seen explicitly in Fig. 6, where we present the corresponding asymmetry $`A`$ (see Eq. (1) for the definition). The most non-trivial prediction of the proposed model is presented in the two lower plots of Fig. 6, where we show the ratio of the inclusive spectra of $`D_s^{-}(\overline{c}s)`$ and $`D^{-}(\overline{c}d)`$ mesons. The difference between the $`x_F`$-spectra of these two mesons is evident from this figure; it is a result of the different $`x`$-distributions of the valence $`d`$- and $`s`$-quarks in the initial $`\mathrm{\Sigma }^{-}`$ beam (see (4)). Conclusion In the present note we wish to stress once more that the source of the observed asymmetry in charmed hadron production is the interaction of the produced charmed quarks with valence quarks from the initial hadrons. Note that the model under consideration also provides an additional method to measure the valence quark distribution functions of $`K`$-mesons and $`\mathrm{\Sigma }`$-baryons.
no-problem/0002/cond-mat0002154.html
ar5iv
text
# Viscosity critical behaviour at the gel point in a $`3d`$ lattice model ## Abstract Within a recently introduced model based on the bond-fluctuation dynamics we study the viscoelastic behaviour of a polymer solution at the gelation threshold. We present here the results of the numerical simulation of the model on a cubic lattice: the percolation transition, the diffusion properties and the time autocorrelation functions have been studied. From the critical behaviour of both the diffusion coefficients and the relaxation times a critical exponent $`k`$ for the viscosity coefficient has been extracted: the two results are comparable within the errors, giving $`k\simeq 1.3`$, in close agreement with the Rouse model prediction and with some experimental results. In the critical region below the transition threshold the time autocorrelation functions show a long time tail which is well fitted by a stretched exponential decay. PACS: 05.20.-y, 82.70.Gg, 83.10.Nn Polymeric materials are characterized by a rich and complex phenomenology, which goes from non-Newtonian dynamic behaviour in polymer liquids to viscoelastic properties of polymer gels and the glass transition of polymer melts, intensively investigated in the last decades. The interest in such systems has recently been further increased by several possibilities of technological applications in many different fields, but the non-trivial viscoelastic behaviour of polymeric systems still does not have a completely satisfying description. The gelation transition, which transforms the polymeric solution, i.e. the sol, into a polymeric gel, is characterized by a dramatic change in the viscoelastic properties: this is usually described in terms of the divergence of the viscosity coefficient and the appearance of an elastic modulus which characterizes the gel phase. In experiments the dependence of both the viscosity coefficient and the elastic modulus on the extent of the polymerization reaction is well fitted by power laws, but the experimental determination of these critical exponents is quite controversial: the results are in fact scattered, probably because of the practical difficulties in obtaining the gelation transition in a reproducible manner, and could only be interpreted on the basis of a better comprehension of the relevant mechanisms in the transition. Recent experimental measurements of the viscosity critical exponent $`k`$ give values ranging from $`\sim 0.7`$ in diisocyanate/triol to $`\sim 1.5`$ in epoxy resins, whereas for the elastic modulus critical exponent $`t`$ the values are even more scattered, ranging from $`\sim 1.9`$ in diisocyanate/triol to $`\sim 3.0`$ in polyesters (see references). With simple statistical mechanics models it is possible to analyse the essential aspects of the transition and its critical properties. Starting from the Flory model this has led to the description of the gelation process in terms of a percolation transition. The percolation model has turned out to be a satisfactory model for the sol-gel transition; it is able to describe the role of connectivity and gives the critical exponents for all the geometrical properties, which are in perfect agreement with the experimental results. On the other hand the viscoelastic dynamic behaviour is not simply obtained in terms of the connectivity transition.
The difficulties in studying the viscosity critical behaviour at the gel point come from the determination of the viscosity of a very complex medium, the sol at the transition threshold: a highly polydisperse polymeric solution at high concentration. The complex polymer dynamics is characterized by relaxation processes over many different time scales and competes with the increasing connectivity to produce the observed viscoelastic behaviour. The simplest approach consists in considering the sol as a polydisperse suspension of solid spheres, neglecting the cluster-cluster interactions and generalizing the Einstein formula for the viscosity of a monodisperse suspension of solid spheres, which corresponds to a highly diluted regime. Within the Flory classical theory of gelation the viscosity remains finite or diverges at most logarithmically. Using instead the Rouse model for the polymer dynamics, which neglects the entanglement effects and the hydrodynamic interactions, the viscosity in a solution of polymeric clusters, expressed in terms of the macroscopic relaxation time, grows like $`<R^2>`$ as the cluster radius $`R`$ grows in the gelation process. The contribution of the $`n_s`$ molecules of size $`s`$ and gyration radius $`R_s`$ to the average $`<>`$ is of the order of $`sn_sR_s^2`$, leading to the critical exponent $`k=2\nu -\beta `$, where $`\nu `$ is the critical exponent of the correlation length diverging at the gel point and $`\beta `$ is the critical exponent describing the growth of the gel phase. With the random percolation exponents the value $`k\simeq 1.35`$ is found, which agrees quite well with the experimental measurements for silica gels. Actually this Rouse exponent could be considered an upper limit, due to the complete screening of the hydrodynamic interactions and the entanglement effects. The Zimm approach, where the monomer correlations due to the hydrodynamic interactions are not completely screened, would give a smaller exponent. Another approach has been proposed by de Gennes, using an analogy between the viscosity at the gelation threshold and the diverging conductivity in the random superconducting network model. Following this analogy an exponent $`k\simeq 0.7`$ is obtained in $`3d`$, according to the determination of the conductivity critical exponent in the random superconducting model. Our approach consists in directly investigating the viscoelastic properties at the sol-gel transition, introducing within the random percolation model the bond fluctuation ($`BF`$) dynamics, which is able to take into account the polymer conformational changes. We study a solution of tetrafunctional monomers at concentration $`p`$ and with a probability $`p_b`$ of bond formation. In terms of these two parameters one has different cluster size distributions and eventually a percolation transition. Monomers interact via excluded volume interactions and can diffuse with local random movements. The monomer diffusion process produces a variation of the bond vectors and is constrained by the excluded volume interaction and the SAW condition for polymer clusters: these two requirements can be satisfied if the bond lengths vary within an allowed range. This dynamics turns out to capture the main dynamic features of polymer molecules.
We have performed numerical simulations of the model on cubic lattices of different sizes ($`L=24,32,40`$) with periodic boundary conditions. The eight sites which are the vertices of an elementary cell of the lattice are simultaneously occupied by a monomer, with the constraint that two nearest neighbour ($`nn`$) monomers are always separated by an empty elementary cell, i.e. two occupied cells cannot have common sites. The lattice of cells, with double lattice spacing, has been occupied with probability $`p`$, which coincides with the monomer concentration on the main lattice in the thermodynamic limit. Monomers are randomly distributed on the main lattice via a diffusion process; then between two $`nn`$ or next nearest neighbour ($`nnn`$) monomers bonds are instantaneously created with probability $`p_b`$ along lattice directions. Since most of the experimental data on the gelation transition refer to polymers with monomer functionality $`f=4`$, we have considered this case, allowing the formation of at most four bonds per monomer. First, a qualitative phase diagram has been determined by studying the onset of the gel phase varying $`p`$ and $`p_b`$: on this basis we have then fixed $`p_b=1`$ and let $`p`$ vary in the interesting range from the sol to the gel phase. The percolation transition has thus been studied via the percolation probability $`P`$ and the mean cluster size $`\chi `$ on lattices of different size. From their finite size scaling behaviours we have obtained the percolation threshold $`p_c\simeq 0.718\pm 0.005`$, the critical exponents $`\nu \simeq 0.89\pm 0.01`$ for the percolation correlation length $`\xi `$ and $`\gamma \simeq 1.8\pm 0.05`$ for the mean cluster size $`\chi `$ (fig.(1)). These results do agree with the random percolation critical exponents. <sup>*</sup><sup>*</sup>*The same agreement with the random percolation critical exponents has already been obtained in the $`d=2`$ version of the model. Here it is worth mentioning that this case, on a cubic lattice with monomer functionality $`f=4`$, is in fact a problem of restricted valence percolation. This is percolation on a lattice where the number of bonds emanating from the same site is restricted, i.e. no site may have more than a fixed number of bonded nearest neighbours. It reproduces the occurrence of valence saturation for monomers in the gelation process. If the number of allowed bonds per site is greater than $`2`$, this problem is expected to belong to the same universality class as random percolation. As the restriction on the number of bonds per monomer clearly introduces a correlation effect in the process of bond formation, we have optimized our algorithm to minimize this effect and indeed obtained good agreement with the random percolation exponents. The system evolves according to the bond fluctuation dynamics, i.e. the monomers diffuse with random local movements along lattice directions within the excluded volume constraint, producing bond length fluctuations among the allowed values. In fact the $`BF`$ dynamics can be easily expressed in terms of a lattice algorithm, and on a cubic lattice it can be shown that the bond lengths which guarantee the SAW condition are $`l=2,\sqrt{5},\sqrt{6},3,\sqrt{10}`$ in lattice spacing units. In fig.(2) a simple example of the time evolution of a cluster is shown. We recall that the percolation properties are not modified during the dynamic evolution of the system, as it is not possible to break or form bonds, which would change the cluster size distribution.
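As an illustration of the elementary $`BF`$ move just described, the sketch below (our own minimal version; the excluded-volume bookkeeping for the eight-site cells is omitted for brevity) checks whether a proposed one-lattice-spacing displacement of a monomer keeps all of its bonds within the allowed set of lengths:

```python
import numpy as np

# Allowed squared bond lengths for l = 2, sqrt(5), sqrt(6), 3, sqrt(10).
ALLOWED_L2 = {4, 5, 6, 9, 10}

def move_allowed(pos, bonds, i, step):
    """pos: (N,3) int array of monomer positions; bonds: list of (a,b) pairs;
    i: monomer attempting the move; step: unit lattice displacement."""
    new = pos[i] + step
    for a, b in bonds:
        if a == i or b == i:
            other = pos[b] if a == i else pos[a]
            if int(np.sum((new - other) ** 2)) not in ALLOWED_L2:
                return False       # move would break the SAW/bond-length condition
    return True

# usage: a 3-monomer chain, monomer 0 tries to move one step in +y
pos = np.array([[0, 0, 0], [2, 0, 0], [2, 2, 0]])
bonds = [(0, 1), (1, 2)]
print(move_allowed(pos, bonds, 0, np.array([0, 1, 0])))  # bond becomes sqrt(5): True
```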
In order to determine the viscosity critical behaviour we use two independent methods, based respectively on the diffusion behaviour of the clusters and on the relaxation times. Within the study of the diffusion properties of the system an interesting picture is obtained with a simple scaling argument on the diffusion coefficients: the sol at the sol-gel transition is a heterogeneous medium formed by the solvent and all the other clusters of different sizes, with the mean cluster size rapidly growing near the percolation threshold. A cluster with gyration radius $`R`$ can be seen as a probe diffusing in such a medium: as long as its radius is much greater than the value of the percolation correlation length in the sol, the Stokes-Einstein relation is expected to be valid, and the diffusion coefficient $`D(R)`$ of the probe will decrease proportionally to the inverse of the viscosity coefficient of the medium, $`D(R)\sim 1/(R^{d-2}\eta )`$. Then the generic probe of size $`R`$ diffuses in a medium with a viscosity coefficient depending on $`R`$, $`\eta (R)`$, and a generalized Stokes-Einstein relation $`D(R)\sim 1/(R^{d-2}\eta (R))`$ should be expected to hold. As the percolation threshold is approached the probe diffuses in a medium where a spanning cluster appears, with a self-similar structure of holes at any length scale. At the percolation threshold the viscosity coefficient of the sol (the bulk viscosity coefficient) diverges as $`\eta \sim (p_c-p)^{-k}`$, and for the viscosity coefficient depending on $`R`$ the scaling behaviour $`\eta (R)\sim R^{k/\nu }`$ should be expected. When $`R`$ is of the order of the correlation length then $`\eta (R)\sim \eta `$. Following this scaling argument, $`D(R)\sim 1/R^{d-2+k/\nu }`$ at $`p=p_c`$. Within this description the use of the Rouse model would consist in taking $`D(s)\sim 1/s`$ for a cluster of size $`s`$. Then for large enough cluster sizes $`s`$ in percolation $`s\sim R^{d_f}`$, where $`d_f`$ is the fractal dimensionality of the percolating cluster. Taking $`D(R)\sim 1/R^{d_f}`$ leads to $`k=(d_f+2-d)\nu `$, which again gives $`k=2\nu -\beta `$, the Rouse exponent given above. We then study the diffusion of monomers and clusters via the mean square displacement of the center of mass. For a cluster of size $`s`$ (an $`s`$-cluster) this quantity is calculated from the coordinates of its center of mass $`\stackrel{}{R}_s(t)`$ as $$\mathrm{\Delta }R_s^2(t)=\frac{1}{N_s}\underset{\alpha =1}{\overset{N_s}{\sum }}(\stackrel{}{R}_s^\alpha (t)-\stackrel{}{R}_s^\alpha (0))^2$$ (1) where the index $`\alpha `$ refers to the $`\alpha `$-th $`s`$-cluster and $`N_s`$ is the number of $`s`$-clusters, so that this quantity is averaged over all the $`s`$-clusters. All the data refer to a lattice size $`L=32`$ and the calculated quantities have been averaged over $`30`$ different site and bond configurations with the same $`(p,p_b)`$ values. On the basis of the theory of Brownian motion and a simple Rouse model approach, the center of mass of a polymeric molecule is expected to behave as a Brownian particle after a sufficiently long time. Indeed we find a linear dependence on time in the long time behaviour of $`\mathrm{\Delta }R_s^2(t)`$ and determine the diffusion coefficient of an $`s`$-cluster in the environment formed by the solvent and the other clusters. In fig.(3) it is shown how the asymptotic diffusive behaviour is reached after a time which increases with the cluster size $`s`$, due to the more complicated relaxation mechanism linked to the inner degrees of freedom of the molecule.
As $`p`$ increases towards $`p_c`$ the $`s`$-clusters move in a medium which is more viscous and whose structure is more and more complex. As a consequence, we observe an immediate increase in the time necessary to reach the asymptotic diffusive behaviour of the center of mass motion for all cluster sizes. At $`p=p_c`$ we have calculated the diffusion coefficients of clusters of size $`s`$ and radius of gyration $`R`$ in order to obtain the dependence $`D(R)`$. The diffusion coefficients decrease with the increasing size of the cluster, as expected, but after a gradual decrease their values dramatically go to zero. This behaviour, which is due to the blocking of diffusion for finite cluster sizes in a finite system, has allowed us to consider only cluster sizes up to $`s=30`$. In fig.(4) we have plotted $`D(R)`$ for different cluster sizes ($`s=5`$ to $`30`$): the data turn out to be well fitted by a power law behaviour. Using this scaling argument, which gives the prediction $`D(R)\sim 1/R^{1+k/\nu }`$, we obtain for the viscosity coefficient a critical exponent $`k\simeq 1.3\pm 0.1`$. We briefly mention that, on the other hand, the diffusion coefficients of very small clusters are not expected to be linked to the macroscopic viscosity, and in fact the diffusion coefficient of monomers does not go to zero at $`p_c`$, but has a definitely non-zero value for $`p>p_c`$ (fig.(5)). It is also interesting to notice that the data seem to agree with a dependence $`e^{-1/(1-p)}`$, suggesting some cooperative mechanism in the diffusion process. In order to study the viscosity in the system independently of the scaling hypothesis given before, we have studied the relaxation times via the density time autocorrelation functions. We have calculated the time autocorrelation function $`g(t)`$ of the number of pairs of $`nn`$ monomers $`\epsilon (t)`$, defined as $$g(t)=\frac{\overline{\epsilon (t^{})\epsilon (t^{}+t)}-\overline{\epsilon (t^{})}^2}{\overline{\epsilon (t^{})^2}-\overline{\epsilon (t^{})}^2}$$ (2) where the bar indicates the average over $`t^{}`$ (of the order of $`10^3`$ time intervals) and the brackets indicate the average over about $`30`$ different initial site and bond configurations. At different $`p`$ values in the critical region, after a fast transient $`g(t)`$ decays to zero but cannot be fitted by a simple exponential decay in time. This is a sign of the existence of a distribution of relaxation times which cannot be related to a single time, and this behaviour had already been observed in the $`d=2`$ study of the model. It is a typical feature of polymeric systems, where the relaxation process always involves the rearrangement of the system over many different length scales. This idea of a complex relaxation behaviour is further confirmed by the good fit of the long-time decay of $`g(t)`$ with a stretched exponential law (fig.(6)). This behaviour of the relaxation functions is considered typical of complex materials and usually interpreted in terms of a very broad distribution of relaxation times, or eventually an infinite number of them, and it is in fact experimentally observed in a sol in the gelation critical regime. The picture we obtain via the density time autocorrelation functions is coherent and very close to the experimental characterization of the sol at the gelation threshold. For $`p\lesssim p_c`$ we have then fitted $`g(t)`$ with a stretched exponential behaviour $`e^{-(t/\tau _0)^\beta }`$ (fig.(7)), where $`\beta \simeq 0.3`$.
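For concreteness, here is a minimal sketch (ours) of how $`g(t)`$ can be estimated from a recorded series $`\epsilon (t)`$ as in Eq. (2) and fitted with a stretched exponential; the synthetic input series and the fitting window are illustrative stand-ins for the simulation data.

```python
import numpy as np
from scipy.optimize import curve_fit

def autocorrelation(eps):
    """Normalized time autocorrelation of eps, as in Eq. (2)."""
    eps = np.asarray(eps, dtype=float)
    mean, var, n = eps.mean(), eps.var(), len(eps)
    return np.array([np.mean((eps[:n - t] - mean) * (eps[t:] - mean)) / var
                     for t in range(n // 2)])

def stretched(t, tau0, beta):
    return np.exp(-(t / tau0) ** beta)

# eps(t) would be the number of nn monomer pairs per Monte Carlo sweep;
# here a synthetic correlated series stands in for it.
rng = np.random.default_rng(0)
noise = rng.standard_normal(6000)
eps = np.convolve(noise, np.exp(-np.arange(300) / 50.0), mode="full")[:6000]

g = autocorrelation(eps)
t = np.arange(len(g))
mask = (t > 0) & (g > 0.01)               # fit only the clean part of the decay
(tau0, beta_fit), _ = curve_fit(stretched, t[mask], g[mask], p0=(50.0, 0.5))
print(f"tau0 = {tau0:.1f}, beta = {beta_fit:.2f}")
```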
This value is considerably lower than the ones experimentally obtained for a gelling solution or analytically predicted for randomly branched polymers: this discrepancy could be due to the fact that our data refer to a quite narrow region near the gelation threshold, where $`\beta `$ is expected to assume its lowest value. On the other hand this value of $`\beta `$ agrees with other experimental results and with the asymptotic value of the stretched exponential exponent $`\beta `$ found in both Ising spin glasses and polymer melts close to the freezing point. The characteristic time $`\tau _0`$ varies with $`p`$ and increases as $`p_c`$ is approached. Plotting $`\tau _0`$ as a function of $`(p_c-p)`$, the data can be fitted by a power law divergence with a critical exponent $`1.27\pm 0.05`$ (fig.(8)), very close to the viscosity critical exponent obtained from the diffusion properties. This characteristic time, extracted from the long time relaxation behaviour, diverges at the percolation threshold because of the passage between two different viscoelastic regimes. The most immediate way to characterize the distribution of the relaxation times in the system is the average characteristic time defined as $$\tau (p)=\frac{\int _0^tt^{}g(t^{})dt^{}}{\int _0^tg(t^{})dt^{}}$$ (3) which is a typical macroscopic relaxation time and can be directly linked to the viscosity coefficient. Numerically, in eq.(3) $`t`$ has been chosen by the condition $`g(t^{})\le 0.001`$ for $`t^{}\ge t`$. This characteristic time grows with $`p`$ and diverges at the percolation threshold according to the critical behaviour $$\tau \sim (p_c-p)^{-k}$$ (4) with an exponent $`k\simeq 1.3\pm 0.03`$ (fig.9), which gives the critical exponent for the viscosity at the sol-gel transition. From an immediate comparison between $`k`$ and the critical exponent for $`\tau _0`$, there seems to be a unique power law characterizing the divergence of the relaxation times in the system as the transition threshold is approached from below. For a more detailed description of the distribution of relaxation times at the sol-gel transition, together with a study of the frequency dependence of the viscoelastic properties, see the references. The critical exponent obtained for the viscosity critical behaviour is then $`k\simeq 1.3`$, and it agrees well with the value of $`k`$ previously given, although independently obtained. This value is quite close to the one experimentally measured in silica gels; interestingly, these systems are characterized by a polyfunctional condensation mechanism with tetrafunctional monomers, which is actually very similar to the case simulated here. This result is also close to the value obtained by recent accurate measurements in PDMS. It does not agree with the random superconducting network exponent $`k=0.7`$ of de Gennes’ analogy, whereas it is quite close to the Rouse exponent discussed above. We have already mentioned that a Rouse-like description of a polymer solution corresponds to a complete screening of the entanglement effects and the hydrodynamic interactions and is usually considered not realistic enough. Actually the entanglement effects may not be so important in the relaxation mechanism of the sol on the macroscopic relaxation time scale: due to the fractal structure of the gel phase the system is in fact quite fluid, there is probably no blocking entanglement yet, and such temporary entanglements relax on a smaller time scale, not really affecting the macroscopic relaxation time.
Furthermore the screening effect of the hydrodynamic interactions in a polymeric solution in the semidilute regime can be quite strong, drastically reducing the range of the interactions, so that the Rouse model turns out to be in fact very satisfactory. This could reasonably be the case for the sol at the gelation threshold too, and the deviation of the real critical exponent from the Rouse value would then turn out to be very small. The numerical simulations have been performed on the parallel Cray-T3E system of CINECA (Bologna, Italy), taking about 15000 hours of CPU time. The authors acknowledge partial support from the European TMR Network-Fractals c.n. FMRXCT980183 and from the MURST grant (PRIN 97). This work was also supported by the INFM Parallel Computing initiative and by the European Social Fund.
no-problem/0002/hep-ph0002022.html
ar5iv
text
# Limits on a Composite Higgs Boson (March, 2000) Precision electroweak data are generally believed to constrain the Higgs boson mass to lie below approximately 190 GeV at 95% confidence level. The standard Higgs model is, however, trivial and can only be an effective field theory valid below some high energy scale characteristic of the underlying non-trivial physics. Corrections to the custodial isospin violating parameter $`T`$ arising from interactions at this higher energy scale dramatically enlarge the allowed range of Higgs mass. We perform a fit to precision electroweak data and determine the region in the $`(m_H,\mathrm{\Delta }T)`$ plane that is consistent with experimental results. Overlaying the estimated size of corrections to $`T`$ arising from the underlying dynamics, we find that a Higgs mass up to 500 GeV is allowed. We review two composite Higgs models which can realize the possibility of a phenomenologically acceptable heavy Higgs boson. We comment on the potential of improved measurements of $`m_t`$ and $`M_W`$ to sharpen the constraints on composite Higgs models. Precision electroweak data are generally believed to constrain the Higgs boson mass to lie below approximately 190 GeV at 95% confidence level.<sup>1</sup><sup>1</sup>1See also Langacker and Erler in . The standard Higgs model is, however, trivial and can only be an effective field theory valid below some high energy scale $`\mathrm{\Lambda }`$ characteristic of the underlying non-trivial physics. Additional interactions coming from the underlying theory, and suppressed by the scale $`\mathrm{\Lambda }`$, give rise to model-dependent corrections to measured electroweak quantities. When potential corrections from physics at higher energy scales are included, the limit on the Higgs boson mass becomes weaker. In the context of the triviality of the standard model, and given the relatively weak (logarithmic) dependence of electroweak observables on the Higgs boson mass, the typical size of corrections to $`T`$ arising from custodial symmetry violating non-trivial underlying physics can dramatically enlarge<sup>2</sup><sup>2</sup>2In contrast, in theories lacking a custodial symmetry the contributions to $`S`$ are relatively small and do not have a significant effect on Higgs mass bounds. the allowed Higgs mass range. In this note we perform a fit to precision electroweak data and determine the region in the $`(m_H,\mathrm{\Delta }T)`$ plane that is consistent with experimental results. Overlaying the predicted size of corrections to $`T`$ arising from the underlying dynamics, we find that a Higgs mass up to 500 GeV is allowed. We review two composite Higgs models which can realize the possibility of a phenomenologically acceptable heavy Higgs boson. For a given Higgs boson mass, an upper bound on the scale $`\mathrm{\Lambda }`$ is given by the position of the Landau pole of the Higgs boson self-coupling $`\lambda `$. As the Higgs boson mass is proportional to $`\sqrt{\lambda (m_H)}`$, the larger the Higgs boson mass the smaller the upper bound on the scale $`\mathrm{\Lambda }`$. We may estimate<sup>3</sup><sup>3</sup>3While this estimate is based on perturbation theory, non-perturbative calculations yield essentially the same result.
this upper bound by integrating the one-loop beta function for the self-coupling $`\lambda `$, which yields $$\mathrm{\Lambda }\stackrel{<}{}m_H\mathrm{exp}\left(\frac{4\pi ^2v^2}{3m_H^2}\right),$$ (1.1) where $`m_H`$ is the Higgs boson mass and $`v\simeq 246`$ GeV is the vacuum expectation value of the Higgs boson. The leading corrections to electroweak observables from the underlying theory are encoded in dimension six operators which contribute to the Peskin-Takeuchi $`S`$ and $`T`$ parameters. Given the scale of the underlying non-trivial physics, dimensional analysis may be used to estimate the size of effects from these dynamics in the low-energy Higgs theory. If the underlying theory does not respect custodial symmetry, the contribution to $`T`$ is dominant and is estimated to be $$|\mathrm{\Delta }T|\sim \frac{b\kappa ^2v^2}{\alpha _{em}(M_Z^2)\mathrm{\Lambda }^2},$$ (1.2) or larger. Here $`\alpha _{em}`$ is the electromagnetic coupling renormalized at $`M_Z^2`$, $`b`$ is a model-dependent coefficient of order 1, and $`\kappa `$ is a measure of the size of dimensionless couplings in the effective Higgs theory and is expected to lie between 1 and $`4\pi `$. Combining eqn. 1.2 with the bound on $`\mathrm{\Lambda }`$ shown in eqn. 1.1, we find $$|\mathrm{\Delta }T|\stackrel{>}{}\frac{b\kappa ^2v^2}{\alpha _{em}(M_Z^2)m_H^2}\mathrm{exp}\left(-\frac{8\pi ^2v^2}{3m_H^2}\right).$$ (1.3) Since the Higgs model is trivial, the potential effects of the underlying non-trivial dynamics must be included when establishing constraints on the Higgs mass. As the contributions to $`T`$ are expected to dominate, we have performed a fit to electroweak measurements and have determined the region in the $`(m_H,\mathrm{\Delta }T)`$ plane that is consistent with these results. In addition to measurements at the $`Z`$-pole from LEP and SLD, we include measurements of $`M_W`$ from LEP and the Tevatron, and measurements of $`m_t`$ from the Tevatron. In performing these fits, we have used ZFITTER 6.21 to generate the standard model predictions for a given value of the $`Z`$ mass, Higgs mass, top-quark mass, and strong ($`\alpha _s`$) and electromagnetic ($`\alpha _{em}`$) couplings, and have introduced the effect of non-zero $`\mathrm{\Delta }T`$ linearly. We have included the determinations of $`\alpha _{em}`$ $$\alpha _{em}^{-1}(M_Z^2)=128.905\pm 0.036$$ (1.4) and the (non-electroweak) determinations of $`\alpha _s`$ $$\alpha _s(M_Z^2)=0.119\pm 0.002$$ (1.5) as observations, i.e. we have included deviations from the listed central values in our computation of $`\chi ^2`$. The correlation matrices listed in ref. are incorporated in our calculation of $`\chi ^2`$. The result of our fit is summarized in Figure 1. The best-fit value<sup>4</sup><sup>4</sup>4This best fit value is, of course, below the direct experimental lower bound of order 108 GeV. is shown and occurs at a Higgs boson mass of 90 GeV; it corresponds to a minimum value of $`\chi ^2=21.7`$ for 21 observables while varying 5 fit parameters ($`M_Z`$, $`m_H`$, $`m_t`$, $`\alpha _s`$, and $`\alpha _{em}`$). For two degrees of freedom, the 68% and 95% CL bounds correspond to $`\mathrm{\Delta }\chi ^2`$ of 2.30 and 6.17 respectively. The two degree<sup>5</sup><sup>5</sup>5Note that the one degree of freedom 95% CL upper bound on $`m_H`$, $`\mathrm{\Delta }\chi ^2=4`$ and $`\mathrm{\Delta }T=0`$, is approximately 190 GeV in agreement with . of freedom 95% CL upper bound on $`m_H`$ is 243 GeV for $`\mathrm{\Delta }T=0`$.
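For orientation, the two estimates above are easy to evaluate numerically. The short script below (ours; the sample Higgs masses are arbitrary) computes the triviality scale of eqn. 1.1 and the corresponding lower bound on $`|\mathrm{\Delta }T|`$ from eqn. 1.3 with $`b\kappa ^2=4\pi `$, the value used for the shaded region of Figure 1:

```python
import numpy as np

# Evaluate eqn. (1.1) and eqn. (1.3) with b*kappa^2 = 4*pi.  Masses in GeV.
v = 246.0
alpha_em = 1.0 / 128.905
b_kappa2 = 4.0 * np.pi

for m_H in (200.0, 300.0, 500.0):
    # triviality bound on the compositeness scale, eqn. (1.1)
    Lam = m_H * np.exp(4.0 * np.pi**2 * v**2 / (3.0 * m_H**2))
    # implied lower bound on |Delta T|, eqn. (1.3)
    dT = (b_kappa2 * v**2 / (alpha_em * m_H**2)
          * np.exp(-8.0 * np.pi**2 * v**2 / (3.0 * m_H**2)))
    print(f"m_H = {m_H:5.0f} GeV:  Lambda <~ {Lam:10.3e} GeV,  |Delta T| >~ {dT:.3f}")
# At m_H = 500 GeV this gives Lambda of roughly 12 TeV and |Delta T| near 0.7,
# consistent with the discussion of Figure 1 below.
```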
Extending this bound to non-zero $`\mathrm{\Delta }T`$, we see that the region in the $`(m_H,\mathrm{\Delta }T)`$ plane which fits the observed data as well as the “standard model” at 95% CL extends to large Higgs masses for a positive value of $`\mathrm{\Delta }T`$. It is not possible, however, for this entire region to be realized consistent with the constraints of triviality. For example, motivated by the models we consider below, the area excluded by eqn. 1.3 with $`b\kappa ^2=4\pi `$ is shown as the light region on the right in Figure 1. Overlaying the constraints, we see that Higgs masses above 500 GeV would likely imply the existence of new physics at such low scales ($`\mathrm{\Lambda }\stackrel{<}{}12`$ TeV from eqn. 1.1) as to give rise to a contribution to $`T`$ which is too large. We emphasize that these estimates are based on dimensional arguments, and we are not arguing that it is impossible to construct a composite Higgs model consistent with precision electroweak tests with $`m_H`$ greater than 500 GeV. Rather, barring accidental cancellations in a theory without a custodial symmetry, contributions to $`\mathrm{\Delta }T`$ consistent with eqn. 1.3 are generally to be expected. This expectation is illustrated in the two models which we now discuss. The top-quark seesaw theory of electroweak symmetry breaking provides a simple example of a model with a potentially heavy composite Higgs boson consistent with electroweak data. In this case, electroweak symmetry breaking is due to the condensation, driven by a strong topcolor gauge interaction, of the left-handed top-quark with a new right-handed singlet fermion $`\chi `$. Such an interaction gives rise to a composite Higgs field at low energies, and the mass of the topcolor gauge boson sets the scale of the Landau pole $`\mathrm{\Lambda }`$. The weak singlet $`\chi _L`$ and $`t_R`$ fields are introduced so that the $`2\times 2`$ mass matrix, $$\left(\begin{array}{cc}0& m_{t\chi }\\ m_{\chi t}& m_{\chi \chi }\end{array}\right)$$ (1.6) is of seesaw form and has a light eigenvalue corresponding to the observed top quark. The value of $`m_{t\chi }`$ is related to the weak scale, and its value is estimated to be 600 GeV. The coupling of the top-quark to $`\chi `$ violates custodial symmetry in the same way that the top-quark mass does in the standard model. The leading contribution to $`T`$ from the underlying top seesaw physics arises from contributions to $`W`$ and $`Z`$ vacuum polarization diagrams involving the $`\chi `$. This contribution is positive and is calculated to be $$\mathrm{\Delta }T=\frac{N_c}{16\pi ^2\alpha _{em}(M_Z^2)}\frac{m_{t\chi }^4}{m_{\chi \chi }^2v^2}\simeq \frac{0.7}{\alpha _{em}}\left(\frac{\mathrm{\Lambda }^2}{m_{\chi \chi }^2}\right)\left(\frac{v^2}{\mathrm{\Lambda }^2}\right),$$ (1.7) which is of the form of eqn. 1.2 with $`b\kappa ^2\sim (\mathrm{\Lambda }/m_{\chi \chi })^2`$. Note that $`\mathrm{\Lambda }/m_{\chi \chi }`$ cannot be small since topcolor gauge interactions must drive $`t\chi `$ chiral symmetry breaking. Taking $`\mathrm{\Lambda }/m_{\chi \chi }\simeq 4`$, we reproduce the positive branch of the boundary of the light region excluded by triviality shown in Figure 1. By varying $`\mathrm{\Lambda }`$ and $`m_{\chi \chi }`$, the entire allowed $`(m_H,\mathrm{\Delta }T)`$ region with positive $`\mathrm{\Delta }T`$ and to the left of the triviality constraint can be obtained. In particular, we note that it is possible to obtain a light Higgs boson in this context as well.
The fact that contributions to $`T`$ greatly expand the region of allowed Higgs mass in the top seesaw model is discussed in detail in ref. . Here we see that the running of the Higgs self-coupling encoded in the constraints of eqn. 1.3 prevents Higgs masses higher than about 500 GeV from being realized. “Composite Higgs Models” also provide examples of theories with a potentially heavy composite Higgs boson. In the simplest of these models, one introduces three new fermions which couple to a vectorial “ultracolor” $`SU(N)`$ gauge interaction. Two of these fermions ($`\psi `$) transform as a vectorial doublet under $`SU(2)_W`$, while the third ($`\sigma `$) is assumed to be a singlet. Dirac mass terms can be introduced for all of these fermions and, as so far described, chiral symmetry breaking driven by the ultracolor interactions leaves the vectorial $`SU(2)_W\times U(1)_Y`$ unbroken. Extra chiral interactions are then introduced to misalign the vacuum by a small amount, causing a nonzero $`\overline{\psi }\sigma `$ condensate and breaking the weak interactions. The octet of pions which results from ultracolor chiral symmetry breaking includes a set, the analogs of the kaons, which form a composite Higgs boson. Models can be constructed in which the Higgs boson can formally be as heavy as a TeV (i.e. at tree-level), while the other four pions have masses controlled by the ultracolor scale and can be much heavier. This simplest model does not have a custodial symmetry. A direct calculation of the $`W`$ and $`Z`$ masses yields the positive contribution $$\mathrm{\Delta }T=\frac{v^2}{4\alpha _{em}(M_Z^2)f^2}.$$ (1.8) Here $`f`$ is the pion decay constant for ultracolor chiral symmetry breaking, the analog of $`f_\pi `$ in QCD. The ultracolor chiral symmetry breaking scale, estimated to be $`𝒪(4\pi f)`$, sets the compositeness scale $`\mathrm{\Lambda }`$ of the Higgs boson. Comparing eqns. 1.2 and 1.8, we see that the contribution to $`T`$ is of the same form with $`b\kappa ^2\simeq 4\pi ^2`$, excluding the light and dark shaded regions to the right in Figure 1. From this we see that phenomenologically acceptable composite Higgs models can be constructed with Higgs masses up to approximately 450 GeV. Again, in this case, by varying the Dirac masses of the fermions and adjusting the size of the chiral interaction, it is possible to construct models that realize any $`(m_H,\mathrm{\Delta }T)`$ to the left of the triviality constraint for positive $`\mathrm{\Delta }T`$. Finally, we briefly consider the prospects for improving these indirect limits over the next few years. The measurements of $`M_W`$ and $`m_t`$ are likely to be greatly improved during Run II of the Fermilab Tevatron. With an integrated luminosity of 10 fb<sup>-1</sup>, it may be possible to reduce the uncertainty in the top mass to 2 GeV and in the $`W`$ mass to 30 MeV. To illustrate the potential of these measurements, in Figure 2 we plot the 68% and 95% CL bounds in the $`(m_H,\mathrm{\Delta }T)`$ plane which would be allowed if $`M_W`$ and $`m_t`$ assumed their current “best-fit” values while the uncertainties dropped as projected. Note that although the 95% CL region is somewhat smaller than in Fig. 1 (e.g. the two degree of freedom upper bound on the “standard model” Higgs boson mass – $`\mathrm{\Delta }T=0`$ – drops to $`𝒪`$(180 GeV)), there would still be composite Higgs models consistent with electroweak data with a Higgs boson mass up to 500 GeV for positive $`\mathrm{\Delta }T`$.
In a forthcoming publication, we will detail the calculation of corrections to precisely measured electroweak quantities in the two composite Higgs models we reviewed above and consider the complementary constraints arising from bounds on the $`Zb\overline{b}`$ coupling, flavor-changing neutral currents, and CP violation. Acknowledgments We thank Gustavo Burdman, Aaron Grant, Marko Popovic, and Elizabeth Simmons for useful discussions. NE is grateful to PPARC for the sponsorship of an Advanced Fellowship. This work was supported in part by the Department of Energy under grant DE-FG02-91ER40676.
no-problem/0002/astro-ph0002353.html
ar5iv
text
# Lithium abundances in main-sequence F stars and sub-giants
no-problem/0002/hep-th0002251.html
ar5iv
text
# The diagonalization of quantum field Hamiltonians ## 1 Introduction Most computational work in non-perturbative quantum field theory and many body phenomena relies on one of two general techniques, Monte Carlo or diagonalization. These methods are nearly opposite in their strengths and weaknesses. Monte Carlo requires relatively little storage, can be performed using parallel processors, and in some cases the computational effort scales reasonably with system size. But it has great difficulty for systems with sign or phase oscillations and provides only indirect information on wavefunctions and excited states. In contrast, diagonalization methods do not suffer from fermion sign problems, can handle complex-valued actions, and can extract details of the spectrum and eigenstate wavefunctions. However the main problem with diagonalization is that the required memory and CPU time scale exponentially with the size of the system. In view of the complementary nature of the two methods, we consider the combination of both diagonalization and Monte Carlo within a computational scheme. We propose a new approach which takes advantage of the strengths of the two computational methods in their respective domains. The first half of the method involves finding and diagonalizing the Hamiltonian restricted to an optimal subspace. This subspace is designed to include the most important basis vectors of the lowest energy eigenstates. Once the most important basis vectors are found and their interactions treated exactly, Monte Carlo is used to sample the contribution of the remaining basis vectors. By this two-step procedure much of the sign problem is negated by treating the interactions of the most important basis states exactly, while storage and CPU problems are resolved by stochastically sampling the collective effect of the remaining states. In our approach diagonalization is used as the starting point of the Monte Carlo calculation. Therefore the two methods should not only be efficient but work well together. On the diagonalization side there are several existing methods using Tamm-Dancoff truncation, similarity transformations, the density matrix renormalization group, or variational algorithms such as stochastic diagonalization. However we find that each of these methods is either not sufficiently general, not able to search an infinite or large dimensional Hilbert space, not efficient at finding important basis vectors, or not compatible with the subsequent Monte Carlo part of the calculation. The Monte Carlo part of our diagonalization/Monte Carlo scheme is discussed separately in a companion paper. In this paper we consider the diagonalization part of the scheme. We introduce a new diagonalization method called quasi-sparse eigenvector (QSE) diagonalization. It is a general algorithm which can operate using any basis, either orthogonal or non-orthogonal, and any sparse Hamiltonian, either real, complex, Hermitian, non-Hermitian, finite-dimensional, or infinite-dimensional. It is able to find the most important basis states of several low energy eigenvectors simultaneously, including those with identical quantum numbers, from a random start with no prior knowledge about the form of the eigenvectors. Our discussion is organized as follows. We first define the notion of quasi-sparsity in eigenvectors and introduce the quasi-sparse eigenvector method. We discuss when the low energy eigenvectors are likely to be quasi-sparse and make an analogy with Anderson localization.
We then consider three examples which test the performance of the algorithm. In the first example we find the lowest energy eigenstates for a random sparse real symmetric matrix. In the second example we find the lowest eigenstates sorted according to the real part of the eigenvalue for a random sparse complex non-Hermitian matrix. In the last example we consider the case of an infinite-dimensional Hamiltonian defined by $`1+1`$ dimensional $`\varphi ^4`$ theory in a periodic box. We conclude with a summary and some comments on the role of quasi-sparse eigenvector diagonalization within the context of the new diagonalization/Monte Carlo approach. ## 2 Quasi-sparse eigenvector method Let $`|e_i\rangle `$ denote a complete set of basis vectors. For a given energy eigenstate $$|v\rangle =\sum _ic_i|e_i\rangle ,$$ (1) we define the important basis states of $`|v\rangle `$ to be those $`|e_i\rangle `$ such that for fixed normalizations of $`|v\rangle `$ and the basis states, $`\left|c_i\right|`$ exceeds a prescribed threshold value. If $`|v\rangle `$ can be well-approximated by the contribution from only its important basis states we refer to the eigenvector $`|v\rangle `$ as quasi-sparse with respect to $`|e_i\rangle `$. Standard sparse matrix algorithms such as the Lanczos or Arnoldi methods allow one to find the extreme eigenvalues and eigenvectors of a sparse matrix efficiently, without having to store or manipulate large non-sparse matrices. However in quantum field theory or many body theory one considers very large or infinite dimensional spaces where even storing the components of a general vector is impossible. For these more difficult problems the strategy is to approximate the low energy eigenvectors of the large space by diagonalizing smaller subspaces. If one has sufficient intuition about the low energy eigenstates it may be possible to find a useful truncation of the full vector space to an appropriate smaller subspace. In most cases, however, not enough is known a priori about the low energy eigenvectors. The dilemma is that to find the low energy eigenstates one must truncate the vector space, but in order to truncate the space something must be known about the low energy states. Our solution to this puzzle is to find the low energy eigenstates and the appropriate subspace truncation at the same time by a recursive process. We call the method quasi-sparse eigenvector (QSE) diagonalization, and we describe the steps of the algorithm as follows. The starting point is any complete basis for which the Hamiltonian matrix $`H_{ij}`$ is sparse. The basis vectors may be non-orthogonal and/or the Hamiltonian matrix may be non-Hermitian. The following steps are now iterated: 1. Select a subset of basis vectors $`\{e_{i_1},\mathrm{\dots },e_{i_n}\}`$ and call the corresponding subspace $`S`$. 2. Diagonalize $`H`$ restricted to $`S`$ and find one eigenvector $`v`$. 3. Sort the basis components of $`v`$ according to their magnitude and remove the least important basis vectors. 4. Replace the discarded basis vectors by new basis vectors. These are selected at random according to some weighting function from a pool of candidate basis vectors which are connected to the old basis vectors through non-vanishing matrix elements of $`H`$. 5. Redefine $`S`$ as the subspace spanned by the updated set of basis vectors and repeat steps 2 through 5. If the subset of basis vectors is sufficiently large, the exact low energy eigenvectors will be stable fixed points of the QSE update process. We can show this as follows.
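Before turning to that argument, the update cycle can be made concrete with a short sketch. The following is a minimal Python illustration of steps 2-5 for a sparse Hermitian matrix, assuming SciPy; the function and parameter names (`qse_step`, `n_keep`, `n_new`) are ours and not taken from the authors' own program.

```python
import numpy as np
import scipy.sparse as sp

def qse_step(H, subset, n_keep, n_new, rng):
    # Step 2: diagonalize H restricted to the current subspace S.
    sub = H[subset][:, subset].toarray()
    vals, vecs = np.linalg.eigh(sub)
    v = vecs[:, 0]                              # lowest eigenvector of the submatrix
    # Step 3: keep the basis vectors carrying the largest components.
    order = np.argsort(-np.abs(v))
    kept = [subset[i] for i in order[:n_keep]]
    # Step 4: candidate replacements are states connected through H.
    pool = set()
    for i in kept:
        pool.update(H.getrow(i).indices)        # non-vanishing <j|H|i>
    pool -= set(kept)
    new = []
    if pool:
        k = min(n_new, len(pool))
        new = list(rng.choice(sorted(pool), size=k, replace=False))
    # Step 5: the updated subspace for the next iteration.
    return kept + new, vals[0]
```

For a non-Hermitian matrix the same loop applies with `numpy.linalg.eig` and a sort by the real part of the eigenvalues.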
Let $`|i\rangle `$ be the eigenvectors of the submatrix of $`H`$ restricted to the subspace $`S`$, where $`S`$ is the span of the subset of basis vectors after step 3 of the QSE algorithm. Let $`|A_j\rangle `$ be the remaining basis vectors in the full space not contained in $`S`$. We can represent $`H`$ as $$\left[\begin{array}{cccccc}\lambda _1& 0& \mathrm{\dots }& \langle 1|H|A_1\rangle & \langle 1|H|A_2\rangle & \mathrm{\dots }\\ 0& \lambda _2& \mathrm{\dots }& \langle 2|H|A_1\rangle & \langle 2|H|A_2\rangle & \mathrm{\dots }\\ \mathrm{\dots }& \mathrm{\dots }& \mathrm{\dots }& \mathrm{\dots }& \mathrm{\dots }& \mathrm{\dots }\\ \langle A_1|H|1\rangle & \langle A_1|H|2\rangle & \mathrm{\dots }& E\lambda _{A_1}& \langle A_1|H|A_2\rangle & \mathrm{\dots }\\ \langle A_2|H|1\rangle & \langle A_2|H|2\rangle & \mathrm{\dots }& \langle A_2|H|A_1\rangle & E\lambda _{A_2}& \mathrm{\dots }\\ \mathrm{\dots }& \mathrm{\dots }& \mathrm{\dots }& \mathrm{\dots }& \mathrm{\dots }& \mathrm{\dots }\end{array}\right].$$ (2) We have used Dirac’s bra-ket notation to represent the terms of the Hamiltonian matrix. In cases where the basis is non-orthogonal and/or the Hamiltonian is non-Hermitian, the meaning of this notation may not be clear. When writing $`\langle A_1|H|1\rangle `$, for example, we mean the result of the dual vector to $`|A_1\rangle `$ acting upon the vector $`H|1\rangle `$. In (2) we have written the diagonal terms for the basis vectors $`|A_j\rangle `$ with an explicit factor $`E`$. We let $`|1\rangle `$ be the approximate eigenvector of interest and have shifted the diagonal entries so that $`\lambda _1=0.`$ Our starting hypothesis is that $`|1\rangle `$ is close to some exact eigenvector of $`H`$ which we denote as $`|1\rangle _{\text{full}}`$. More precisely we assume that the components of $`|1\rangle _{\text{full}}`$ outside $`S`$ are small enough so that we can expand in inverse powers of the introduced parameter $`E.`$ We now expand the eigenvector as $$|1\rangle _{\text{full}}=\left[\begin{array}{c}1\\ c_2^{\prime }E^{-1}+\mathrm{\dots }\\ \mathrm{\dots }\\ c_{A_1}^{\prime }E^{-1}+\mathrm{\dots }\\ c_{A_2}^{\prime }E^{-1}+\mathrm{\dots }\\ \mathrm{\dots }\end{array}\right]$$ (3) and the corresponding eigenvalue as $$\lambda _{\text{full}}=\lambda _1^{\prime }E^{-1}+\mathrm{\dots }.$$ (4) In (3) we have chosen the normalization of $`|1\rangle _{\text{full}}`$ such that $`\langle 1|1\rangle _{\text{full}}=1`$. From the eigenvalue equation $$H|1\rangle _{\text{full}}=\lambda _{\text{full}}|1\rangle _{\text{full}}$$ (5) we find at lowest order $$c_{A_j}^{\prime }=-\frac{\langle A_j|H|1\rangle }{\lambda _{A_j}}.$$ (6) We see that at lowest order the component of $`|1\rangle _{\text{full}}`$ in the $`|A_j\rangle `$ direction is independent of the other vectors $`|A_{j^{\prime }}\rangle `$. If $`|1\rangle `$ is sufficiently close to $`|1\rangle _{\text{full}}`$ then the limitation that only a fixed number of new basis vectors is added in step 4 of the QSE algorithm is not relevant. At lowest order in $`E^{-1}`$ the comparison of basis components in step 3 (in the next iteration) is the same as if we had included all remaining vectors $`|A_j\rangle `$ at once. Therefore at each update only the truly largest components are kept and the algorithm converges to some optimal approximation of $`|1\rangle _{\text{full}}`$. This is consistent with the actual performance of the algorithm as we will see in some examples later. In those examples we also demonstrate that the QSE algorithm is able to find several low energy eigenvectors simultaneously. The only change is that when diagonalizing the subspace $`S`$ we find more than one eigenvector and apply steps 3 and 4 of the algorithm to each of the eigenvectors. ## 3 Quasi-sparsity and Anderson localization As the name indicates, the accuracy of the quasi-sparse eigenvector method depends on the quasi-sparsity of the low energy eigenstates in the chosen basis.
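Before developing this point, we note that the lowest-order result (6) is easy to verify numerically with a small matrix of the shape (2); the sketch below (all numbers made up) checks that the exact components along the $`|A_j\rangle `$ directions approach $`-\langle A_j|H|1\rangle /\lambda _{A_j}`$ as $`E`$ grows.

```python
import numpy as np

rng = np.random.default_rng(3)
E = 1e4
lam = np.array([0.0, 1.3, 2.1])              # eigenvalues inside S (lambda_1 = 0)
lamA = np.array([0.7, 1.9])                  # lambda_{A_j} for the outside states
C = rng.normal(size=(3, 2))                  # couplings <i|H|A_j>
H = np.block([[np.diag(lam), C], [C.T, E * np.diag(lamA)]])

vals, vecs = np.linalg.eigh(H)
v = vecs[:, np.argmin(np.abs(vals))]         # the eigenvector continuing |1>
v = v / v[0]                                 # normalization <1|v> = 1
print(v[3:] * E)                             # exact components, scaled by E
print(-C[0] / lamA)                          # lowest-order prediction of Eq. (6)
```

The two printed arrays agree up to corrections of order $`E^{-1}`$.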
If the eigenvectors are quasi-sparse then the QSE method provides an efficient way to find the important basis vectors. In the context of our diagonalization/Monte Carlo approach, this means that diagonalization does most of the work and only a small amount of correction is needed. This correction is found by Monte Carlo sampling the remaining basis vectors, a technique called stochastic error correction. If however the eigenvectors are not quasi-sparse then one must rely more heavily on the Monte Carlo portion of the calculation. The fastest and most reliable way we know to determine whether the low energy eigenstates of a Hamiltonian are quasi-sparse with respect to a chosen basis is to use the QSE algorithm and look at the results of the successive iterations. But it is also useful to consider the question more intuitively, and so we consider the following example. Let $`H`$ be a sparse Hermitian $`2000\times 2000`$ matrix defined by $$H_{jk}=\mathrm{log}(j)\delta _{jk}+x_{jk}M_{jk},$$ (7) where $`j`$ and $`k`$ run from $`1`$ to $`2000`$, $`x_{jk}`$ is a Gaussian random real variable centered at zero with standard deviation $`x_{\text{rms}}=0.25`$, and $`M_{jk}`$ is a sparse symmetric matrix consisting of random $`0`$’s and $`1`$’s such that the density of $`1`$’s is $`5\%`$. The reason for introducing the $`\mathrm{log}(j)`$ term in the diagonal is to produce a large variation in the density of states. With this choice the density of states increases exponentially with energy. Our test matrix is small enough that all eigenvectors can be found without difficulty. We will consider the distribution of basis components for the eigenvectors of $`H`$. In Figure 1 we show the square of the basis components for a given low energy eigenvector $`|v\rangle .`$ The basis components are sorted in order of descending importance. The ratio of $`\mathrm{\Delta }E`$, the average spacing between neighboring energy levels, to $`x_{\text{rms}}`$ is $`0.13`$. We see that the eigenvector is dominated by a few of its most important basis components. In Figure 2 we show the same plot for another eigenstate but one where the spacing between levels is three times smaller, $`\mathrm{\Delta }E/x_{\text{rms}}=0.041.`$ This eigenvector is not nearly as quasi-sparse. The effect is even stronger in Figure 3, where we show an eigenvector such that the spacing between levels is $`\mathrm{\Delta }E/x_{\text{rms}}=0.024`$. Our observations show a strong effect of the density of states on the quasi-sparsity of the eigenvectors. States with a smaller spacing between neighboring levels tend to have basis components that extend throughout the entire space, while states with a larger spacing tend to be quasi-sparse. The relationship between extended versus localized eigenstates and the density of states has been studied in the context of Anderson localization and metal-insulator transitions. The simplest example is the tight-binding model for a single electron on a one-dimensional lattice with $`Z`$ sites, $$H=\sum _jd_j|j\rangle \langle j|+\sum _{jj^{\prime }}t_{jj^{\prime }}|j\rangle \langle j^{\prime }|.$$ (8) $`|j\rangle `$ denotes the atomic orbital state at site $`j,`$ $`d_j`$ is the on-site potential, and $`t_{jj^{\prime }}`$ is the hopping term between nearest neighbor sites $`j`$ and $`j^{\prime }`$.
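For concreteness, the random test matrix of Eq. (7) and the sorted component distributions of Figures 1-3 can be generated along the following lines; this is a sketch assuming NumPy, and the seed and the choice of eigenvector index are arbitrary.

```python
import numpy as np

n, x_rms, density = 2000, 0.25, 0.05
rng = np.random.default_rng(0)

upper = np.triu(rng.random((n, n)) < density, k=1)
M = upper | upper.T                          # symmetric random pattern of 0's and 1's
x = rng.normal(0.0, x_rms, size=(n, n))
X = np.triu(x) + np.triu(x, 1).T             # symmetric Gaussian amplitudes
H = np.diag(np.log(np.arange(1, n + 1))) + np.where(M, X, 0.0)

vals, vecs = np.linalg.eigh(H)
comps = np.sort(vecs[:, 50] ** 2)[::-1]      # squared components, descending order
print(comps[:5], comps[:20].sum())           # quasi-sparse if a few terms dominate
```

Picking a low eigenvector index probes the region of large level spacing, where the dominance of the first few components should be most pronounced.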
If both terms are uniform ($`d_j=d,`$ $`t_{jj^{\prime }}=t`$) then the eigenvalues and eigenvectors of $`H`$ are $`Hv_n`$ $`=(d+2t\mathrm{cos}\frac{2\pi n}{Z})v_n,`$ (9) $`v_n`$ $`=\frac{1}{\sqrt{Z}}{\displaystyle \sum _j}e^{i{\scriptscriptstyle \frac{2\pi nj}{Z}}}|j\rangle ,`$ (10) where $`n=1,\mathrm{\dots },Z`$ labels the eigenvectors. In the absence of diagonal and off-diagonal disorder, the eigenstates of $`H`$ extend throughout the entire lattice. The eigenvalues are also approximately degenerate, all lying within an interval of size $`4t`$. However, if diagonal and/or off-diagonal disorder is introduced, the eigenvalue spectrum becomes less degenerate. If the disorder is sufficiently large, the eigenstates become localized to only a few neighboring lattice sites, giving rise to a transition of the material from metal to insulator. We can regard a sparse quantum Hamiltonian as a similar type of system, one with both diagonal and general off-diagonal disorder. If the disorder is sufficient such that the eigenvalues become non-degenerate, then the eigenvectors will be quasi-sparse. We reiterate that the most reliable way to determine if the low energy states are quasi-sparse is to use the QSE algorithm. Intuitively, though, we expect the eigenstates to be quasi-sparse with respect to a chosen basis if the spacing between energy levels is not too small compared with the size of the off-diagonal entries of the Hamiltonian matrix. ## 4 Finite matrix examples As a first test of the QSE method, we will find the lowest four energy states of the random symmetric matrix $`H`$ defined in (7). So that there is no misunderstanding, we should repeat that diagonalizing a $`2000\times 2000`$ matrix is not difficult. The purpose of this test is to analyze the performance of the method in a controlled environment. One interesting twist is that the algorithm uses only small pieces of the matrix and operates under the assumption that the space may be infinite dimensional. A sample MATLAB program similar to the one used here has been printed out as a tutorial example. The program starts from a random configuration, 70 basis states for each of the four eigenvectors. With each iteration we select $`10`$ replacement basis states for each of the eigenvectors. In Figure 4 we show the exact energies and the results of the QSE method as functions of iteration number. In Figure 5 we show the inner products of the normalized QSE eigenvectors with the normalized exact eigenvectors. We note that all of the eigenvectors were found after about 15 iterations and remained stable throughout successive iterations. Errors are at the $`5`$ to $`10\%`$ level, which is about the theoretical limit one can achieve using this number of basis states. The QSE method has little difficulty finding several low lying eigenvectors simultaneously because it uses the distribution of basis components for each of the eigenvectors to determine the update process. This provides a performance advantage over variational-based techniques such as stochastic diagonalization in finding eigenstates other than the ground state. As a second test we consider a sparse non-Hermitian matrix with complex eigenvalues. This type of matrix is not amenable to variational-based methods.
We will find the four eigenstates corresponding with eigenvalues with the lowest real part for the random complex non-Hermitian matrix $$H_{jk}^{\prime }=(1+ic_{jk})H_{jk}.$$ (11) $`H_{jk}`$ is the same matrix used previously and $`c_{jk}`$ is a uniform random variable distributed between $`-1`$ and $`1`$. As before the program is started from a random configuration, 70 basis states for each of the four eigenvectors. For each iteration $`10`$ replacement basis vectors are selected for each of the eigenvectors. In Figure 6 the exact eigenvalues and the results of the QSE run are shown in the complex plane as functions of iteration number. In Figure 7 we show the inner products of the QSE eigenvectors with the exact eigenvectors. All of the eigenvectors were found after about 20 iterations and remained stable throughout successive iterations. Errors were again at about the $`5`$ to $`10\%`$ level. ## 5 $`\varphi ^4`$ theory in $`1+1`$ dimensions We now apply the QSE method to an infinite dimensional quantum Hamiltonian. We consider $`\varphi ^4`$ theory in $`1+1`$ dimensions, a system that is familiar to us from previous studies using Monte Carlo and explicit diagonalization. The Hamiltonian density for $`\varphi ^4`$ theory in $`1+1`$ dimensions has the form $$\mathcal{H}=\frac{1}{2}\left(\frac{\partial \varphi }{\partial t}\right)^2+\frac{1}{2}\left(\frac{\partial \varphi }{\partial x}\right)^2+\frac{\mu ^2}{2}\varphi ^2+\frac{\lambda }{4!}\text{:}\varphi ^4\text{:},$$ where the normal ordering is with respect to the mass $`\mu `$. We consider the system in a periodic box of length $`2L`$. We then expand in momentum modes and reinterpret the problem as an equivalent Schrödinger equation. The resulting Hamiltonian is $`H`$ $`=-\frac{1}{2}{\displaystyle \sum _n}\frac{\partial }{\partial q_{-n}}\frac{\partial }{\partial q_n}+\frac{1}{2}{\displaystyle \sum _n}\left(\omega _n^2(\mu )-\frac{\lambda b(\mu )}{8L}\right)q_{-n}q_n`$ (12) $`+\frac{\lambda }{4!2L}{\displaystyle \sum _{n_1+n_2+n_3+n_4=0}}q_{n_1}q_{n_2}q_{n_3}q_{n_4}`$ where $$\omega _n(\mu )=\sqrt{\frac{n^2\pi ^2}{L^2}+\mu ^2}$$ (13) and $`b(\mu )`$ is the coefficient for the mass counterterm $$b(\mu )=\sum _n\frac{1}{2\omega _n(\mu )}.$$ (14) It is convenient to split the Hamiltonian into free and interacting parts with respect to an arbitrary mass $`\mu ^{\prime }`$: $$H_{free}=-\frac{1}{2}\sum _n\frac{\partial }{\partial q_{-n}}\frac{\partial }{\partial q_n}+\frac{1}{2}\sum _n\omega _n^2(\mu ^{\prime })q_{-n}q_n,$$ (15) $`H`$ $`=H_{free}+\frac{1}{2}{\displaystyle \sum _n}\left(\mu ^2-\mu ^{\prime 2}-\frac{\lambda b(\mu )}{8L}\right)q_{-n}q_n`$ (16) $`+\frac{\lambda }{4!2L}{\displaystyle \sum _{n_1+n_2+n_3+n_4=0}}q_{n_1}q_{n_2}q_{n_3}q_{n_4}.`$ $`\mu ^{\prime }`$ is used to define the basis states of our Fock space. Since $`H`$ is independent of $`\mu ^{\prime }`$, we perform calculations for different $`\mu ^{\prime }`$ to obtain a reasonable estimate of the error. It is also useful to find the range of values for $`\mu ^{\prime }`$ which maximizes the quasi-sparsity of the eigenvectors and therefore improves the accuracy of the calculation. For the calculations presented here, we set the length of the box to size $`L=5\pi \mu ^{-1}`$. We restrict our attention to momentum modes $`q_n`$ such that $`\left|n\right|\le N_{\mathrm{max}}`$, where $`N_{\mathrm{max}}=20`$.
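As a quick numerical aside, the frequencies (13) and the counterterm sum (14) for this truncated mode space are easy to tabulate; the following sketch uses our own variable names.

```python
import numpy as np

mu = 1.0
L = 5 * np.pi / mu                         # box half-length L = 5*pi/mu
n_max = 20
n = np.arange(-n_max, n_max + 1)

def omega(m):
    # Eq. (13): omega_n(m) = sqrt(n^2 pi^2 / L^2 + m^2)
    return np.sqrt(n**2 * np.pi**2 / L**2 + m**2)

b = np.sum(1.0 / (2.0 * omega(mu)))        # Eq. (14): mass counterterm coefficient
print(omega(mu).max(), b)                  # largest frequency is about 4*mu here
```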
This corresponds with a momentum cutoff scale of $`\mathrm{\Lambda }=4\mu .`$ To implement the QSE algorithm on this infinite dimensional Hilbert space, we first define ladder operators with respect to $`\mu ^{\prime }`$, $`a_n(\mu ^{\prime })`$ $`=\frac{1}{\sqrt{2\omega _n(\mu ^{\prime })}}\left[q_n\omega _n(\mu ^{\prime })+\frac{\partial }{\partial q_{-n}}\right]`$ (17) $`a_n^{\dagger }(\mu ^{\prime })`$ $`=\frac{1}{\sqrt{2\omega _n(\mu ^{\prime })}}\left[q_{-n}\omega _n(\mu ^{\prime })-\frac{\partial }{\partial q_n}\right].`$ (18) The Hamiltonian can now be rewritten as $`H`$ $`={\displaystyle \sum _n}\omega _n(\mu ^{\prime })a_n^{\dagger }a_n+\frac{1}{4}\left(\mu ^2-\mu ^{\prime 2}-\frac{\lambda b}{8L}\right){\displaystyle \sum _n}\frac{\left(a_n+a_{-n}^{\dagger }\right)\left(a_{-n}+a_n^{\dagger }\right)}{\omega _n(\mu ^{\prime })}`$ (19) $`+\frac{\lambda }{192L}{\displaystyle \sum _{n_1+n_2+n_3+n_4=0}}\left[\frac{\left(a_{n_1}+a_{-n_1}^{\dagger }\right)}{\sqrt{\omega _{n_1}(\mu ^{\prime })}}\frac{\left(a_{n_2}+a_{-n_2}^{\dagger }\right)}{\sqrt{\omega _{n_2}(\mu ^{\prime })}}\frac{\left(a_{n_3}+a_{-n_3}^{\dagger }\right)}{\sqrt{\omega _{n_3}(\mu ^{\prime })}}\frac{\left(a_{n_4}+a_{-n_4}^{\dagger }\right)}{\sqrt{\omega _{n_4}(\mu ^{\prime })}}\right].`$ In (19) we have omitted constants contributing only to the vacuum energy. We represent any momentum-space Fock state as a string of occupation numbers, $`|o_{-N_{\mathrm{max}}},\mathrm{\dots },o_{N_{\mathrm{max}}}\rangle `$, where $$a_n^{\dagger }a_n|o_{-N_{\mathrm{max}}},\mathrm{\dots },o_{N_{\mathrm{max}}}\rangle =o_n|o_{-N_{\mathrm{max}}},\mathrm{\dots },o_{N_{\mathrm{max}}}\rangle .$$ (20) From the usual ladder operator relations, it is straightforward to calculate the matrix element of $`H`$ between two arbitrary Fock states. Aside from calculating matrix elements, the only other fundamental operation needed for the QSE algorithm is the generation of new basis vectors. The new states should be connected to some old basis vector through non-vanishing matrix elements of $`H`$. Let us refer to the old basis vector as $`|e\rangle `$. For this example there are two types of terms in our interaction Hamiltonian, a quartic interaction $$\sum _{n_1,n_2,n_3}\left(a_{n_1}+a_{-n_1}^{\dagger }\right)\left(a_{n_2}+a_{-n_2}^{\dagger }\right)\left(a_{n_3}+a_{-n_3}^{\dagger }\right)\left(a_{-n_1-n_2-n_3}+a_{n_1+n_2+n_3}^{\dagger }\right),$$ (21) and a quadratic interaction $$\sum _n\left(a_n+a_{-n}^{\dagger }\right)\left(a_{-n}+a_n^{\dagger }\right).$$ (22) To produce a new vector from $`|e\rangle `$ we simply choose one of the possible operator monomials $`a_{n_1}a_{n_2}a_{n_3}a_{-n_1-n_2-n_3},a_{-n_1}^{\dagger }a_{n_2}a_{n_3}a_{-n_1-n_2-n_3},\mathrm{\dots },`$ (23) $`a_na_{-n},a_n^{\dagger }a_{-n}^{\dagger },\mathrm{\dots }`$ and act on $`|e\rangle `$. Our experience is that the interactions involving the small momentum modes are generally more important than those for the large momentum modes, a signal that the ultraviolet divergences have been properly renormalized. For this reason it is best to arrange the selection probabilities such that the smaller values of $`\left|n_1\right|`$, $`\left|n_2\right|`$, $`\left|n_3\right|`$ and $`\left|n\right|`$ are chosen more often. For each QSE iteration, $`50`$ new basis vectors were selected for each eigenstate and $`250`$ basis vectors were retained. The results for the lowest energy eigenvalues are shown in Figure 8. The error bars were estimated by repeating the calculation for different values of the auxiliary mass parameter $`\mu ^{\prime }`$. From prior Monte Carlo calculations we know that the theory has a phase transition at $`\frac{\lambda }{4!}\approx 2.5\mu ^2`$ corresponding with spontaneous breaking of the $`\varphi \to -\varphi `$ reflection symmetry. In the broken phase there are two degenerate ground states and we refer to these as the even and odd vacuum states.
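As an implementation note for the basis-generation step just described, a Fock state can be stored as its occupation-number string and a monomial applied to it directly; a minimal sketch (helper names ours):

```python
N_MAX = 20

def apply_monomial(state, ops):
    # state: tuple of occupations o_{-N_max},...,o_{N_max}
    # ops: list of (mode n, +1 for creation, -1 for annihilation)
    occ = list(state)
    for mode, kind in ops:
        idx = mode + N_MAX
        if kind == -1 and occ[idx] == 0:
            return None                     # annihilating an empty mode: no new state
        occ[idx] += kind
    return tuple(occ)

vacuum = tuple([0] * (2 * N_MAX + 1))
# e.g. pure creation part of (21) with n1 = n2 = 1, n3 = -1, n4 = -1
new = apply_monomial(vacuum, [(1, 1), (1, 1), (-1, 1), (-1, 1)])
total_p = sum(m * o for m, o in zip(range(-N_MAX, N_MAX + 1), new))
print(total_p == 0)                         # momentum conservation: True
```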
In Figure 8 we see signs of a second order phase transition near $`\frac{\lambda }{4!}\approx 2.5\mu ^2`$. Since we are working in a finite volume the spectrum is discrete, and we can track the energy eigenvalues as functions of the coupling. Crossing the phase boundary, we see that the vacuum in the symmetric phase becomes the even vacuum in the broken phase while the one-particle state in the symmetric phase becomes the odd vacuum. The energy difference between the states is also in agreement with a Monte Carlo calculation of the same quantities. The state marking the two-particle threshold in the symmetric phase becomes the one-particle state above the odd vacuum, while the state at the three-particle threshold becomes the one-particle state above the even vacuum. These one-particle states should be degenerate in the infinite volume limit. One rather unusual feature is the behavior of the first two-particle state above threshold in the symmetric phase. In the symmetric phase this state lies close to the two-particle threshold. But as we cross the phase boundary the state which was the two-particle threshold is changed into a one-particle state. Thus our two-particle state is pushed up even further to become a two-particle state above the even vacuum and we see a pronounced level crossing. We note that while the one-particle mass vanishes near the critical point, the energies of the two-particle and three-particle thresholds reach a minimum but do not come as close to zero energy. It is known that this model is repulsive in the two-particle scattering channel. In a large but finite volume the ground state and one-particle states do not feel significant finite volume effects. The two-particle state at threshold, however, requires that the two asymptotic particles be widely separated. In our periodic box of length $`2L`$ the maximal separation distance is $`L`$ and we expect an increase in energy with respect to twice the one-particle mass of size $`V(L)`$, where $`V`$ is the potential energy between particles. Likewise a three-particle state will increase in energy an amount $`3V(2L/3)`$. Our results indicate that finite volume effects for the excited states are significant for this value of $`L`$. ## 6 Summary We have proposed a new approach which combines both diagonalization and Monte Carlo within a computational scheme. The motivation for our approach is to take advantage of the strengths of the two computational methods in their respective domains. We remedy sign and phase oscillation problems by handling the interactions of the most important basis states exactly using diagonalization, and we deal with storage and CPU problems by stochastically sampling the contribution of the remaining states. We discussed the diagonalization part of the method in this paper. The goal of diagonalization within our scheme is to find the most important basis vectors of the low energy eigenstates and treat the interactions among them exactly. We have introduced a new diagonalization method called quasi-sparse eigenvector diagonalization which achieves this goal efficiently and can operate using any basis, either orthogonal or non-orthogonal, and any sparse Hamiltonian, either real, complex, Hermitian, non-Hermitian, finite-dimensional, or infinite-dimensional. Quasi-sparse eigenvector diagonalization is the only method we know which can address all of these problems. We considered three examples which tested the performance of the algorithm.
We found the lowest energy eigenstates for a random sparse real symmetric matrix, the lowest eigenstates (sorted according to the real part of the eigenvalue) for a random sparse complex non-Hermitian matrix, and the lowest energy eigenstates for an infinite-dimensional Hamiltonian defined by $`1+1`$ dimensional $`\varphi ^4`$ theory in a periodic box. We regard QSE diagonalization as only a starting point for the Monte Carlo part of the calculation. Once the most important basis vectors are found and their interactions treated exactly, a technique called stochastic error correction is used to sample the contribution of the remaining basis vectors. This method is introduced in the companion paper. ### Acknowledgments We thank P. van Baal, H. S. Seung, H. Sompolinsky, and M. Windoloski for useful discussions. Support provided by the National Science Foundation.
no-problem/0002/cond-mat0002399.html
# Interaction-assisted propagation of Coulomb-correlated electron-hole pairs in disordered semiconductors ## Abstract A two-band model of a disordered semiconductor is used to analyze dynamical interaction-induced weakening of localization in a system that is accessible to experimental verification. The results show a dependence on the sign of the two-particle interaction and on the optical excitation energy of the Coulomb-correlated electron-hole pair. The problem of two interacting particles (TIP) in a random potential is an excellent paradigm for the general question of the interplay of disorder and interactions in many-body systems. First addressed in a 1990 paper by Dorokhov, the subject has been especially well studied since Shepelyansky’s 1994 publication. Considering the TIP localization length $`l_2`$, most authors obtain an interaction-induced increase $`l_2>l_1`$ over the single-particle localization length $`l_1`$ independent of the sign of the interaction, with $`l_2/l_1\propto l_1^a`$ and $`a=1`$ or $`a=0.65`$. Here, $`l_1`$ and $`l_2`$ are measured in units of the lattice constant of a one-dimensional Anderson chain. Similar results have been obtained for TIP in a quasiperiodic chain. The independence of the predicted effect on the sign of the interaction is an especially intriguing feature. Early work approached this problem using a wide variety of theoretical techniques and focused on establishing the existence of the TIP-effect, while more recent work has dealt with quantitative details like scaling behaviour and the influence of interaction range, strength and sign. Existing works comprise purely theoretical case studies since the model of just two particles in a single band does not correspond to any real physical situation. Obviously, experimental study is needed to promote further understanding of the TIP problem, and would put the presently rather academic discussion on a firm physical basis. Exploiting the fact that the coherent spatio-temporal dynamics of Coulomb-correlated electron-hole pairs is strongly influenced by the two-particle interaction, we show in the present paper that the TIP localization properties should be accessible to modern ultrafast optical techniques. The corresponding spatial and temporal scales are on the order of sub-$`\mu `$m and 1 ps. Our numerical studies are based on integration of the Semiconductor Bloch Equations and include no a priori assumptions about energy hierarchies or interaction matrix elements. The calculations were performed for a disordered 1D semiconductor quantum wire, where localization effects are most important. Despite its simplicity, this model system already contains all essential ingredients to describe the dynamics following optical excitation even in disordered systems of higher dimension. The model parameters have been given values that resemble those of realistic disordered semiconductor quantum wires. Additional physical parameters (excitation energy, spectral pulse width, screening length, different masses of the two particles) allow the study of a wide variety of observable phenomena. In our numerical calculations we investigate the spreading of an electron-hole wave packet after local excitation by an optical pulse. Here the interaction is given by the long-range Coulomb potential which, besides producing bound states (excitons) near the edges of the excitation spectrum, also correlates the electrons and holes in the pair continuum.
Previous theoretical studies of the spatial-temporal dynamics of wave packets formed from excitons show that their motion is rather limited in the presence of scattering. This result is recovered by our present calculations. Here we focus our interest on the dynamics of optically generated wave packets in the pair continuum. We find that the excitation conditions in the presence of particle-particle interaction influence the carrier dynamics dramatically. In addition, and in contrast to some previous claims in the literature, we find that the sign of the interaction has a pronounced effect on the spatio-temporal dynamics. We consider a 1D array of sites $`i`$ with diagonal disorder in both the valence band ($`vb`$) and the conduction band ($`cb`$). The site energies $`\epsilon _{vi}`$ and $`\epsilon _{ci}`$ corresponding to the $`vb`$ and the $`cb`$, respectively, are randomly distributed over the interval $`[-W/2,W/2]`$ and are uncorrelated. The nearest neighbor $`cb`$ levels are coupled by the tunneling term $`J^c`$, the $`vb`$ levels by $`J^v`$. We use the Coulomb interaction in its monopole-monopole form with matrix elements $$V_{ij}=\frac{U}{4\pi ϵϵ_0}\frac{e^2}{r_{ij}+\alpha }$$ (1) which has been regularized in order to cope with the pathological singularity in one dimension. The constant $`\alpha `$ has been chosen to be 5 times the lattice constant. The total Hamiltonian is written as $`\widehat{H}=\widehat{H_0}+\widehat{H_I}+\widehat{H_C}`$, where $`\widehat{H_0}`$ describes the $`vb`$ and $`cb`$ band structure, $`\widehat{H_I}`$ represents the semiclassical interaction with the electromagnetic field $`E_i(t)`$ in dipole approximation, and $`\widehat{H_C}`$ defines the electron-electron interaction term. In the following, we assume a local initial excitation at the central site $`i=0`$, which is modeled by setting $`\mu _i=\delta _{i,0}`$ for the local dipole matrix element in $`\widehat{H_I}`$. The optical polarization $`p_{ij}(t)`$ is obtained from the equation of motion for the polarization operator $`\widehat{p}_{ij}=\widehat{d}_i\widehat{c}_j`$, which is coupled to the equation of motion of the electron and hole intraband quantities $`\widehat{n}_{ij}^e=\widehat{c}_i^+\widehat{c}_j`$ and $`\widehat{n}_{ij}^h=\widehat{d}_i^+\widehat{d}_j`$, respectively, where the operators $`\widehat{c}_i^+,\widehat{c}_i`$ ($`\widehat{d}_i^+,\widehat{d}_i`$) describe the electron (hole) creation and annihilation operators at site $`i`$. The equation of motion for the expectation values $`p_{ij}(t)`$ and $`n_{ij}^{e,h}(t)`$ is treated using the well-known Semiconductor Bloch Equations for $`p_{ij}(t)`$ and $`n_{ij}^{e,h}(t)`$ in the real-space representation. Detailed derivations of the Semiconductor Bloch Equations can be found in the literature. As we are interested in small excitation densities, we write only the equation for $`p_{ij}`$ in the lowest (linear-response) order in the exciting field $`\partial _tp_{ij}=`$ $`-i\left(\epsilon _i^e+\epsilon _j^h-V_{ij}\right)p_{ij}+i{\displaystyle \sum _{l=1}^N}\left(J^ep_{il}+J^hp_{lj}\right)`$ (2) $`+i\mu _jE_j(t)\delta _{ij}.`$ (3) Using the conservation laws $`n_{ij}^e=\sum _lp_{lj}p_{li}^{*}`$ and $`n_{ij}^h=\sum _lp_{jl}p_{il}^{*}`$ valid in this lowest order, we obtain the intraband quantities. Instead of studying a rather academic localization length which describes only the asymptotic behavior of wave functions, we calculate the experimentally more relevant participation number $`\mathrm{\Lambda }(t)=(\sum _in_{ii}^2)^{-1}`$.
Here $`n_{ii}`$ stands for either $`n_{ii}^e`$ or $`n_{ii}^h`$. With the packet localized at site $`0`$, $`n_{ii}=\delta _{i0}`$ and $`\mathrm{\Lambda }=1`$, while for an excitation uniformly extended over the sample of $`N`$ sites, $`n_{ii}=1/N`$ and $`\mathrm{\Lambda }=N`$. Our calculations were performed for chains containing $`N=240`$ sites. Boundary effects can easily be identified in the temporal evolution of $`\mathrm{\Lambda }`$ and do not play any role as long as $`\mathrm{\Lambda }<N/2`$. All the data presented are free of finite-size effects. The transform-limited optical pulse is defined by its mean energy $`\hbar \omega `$ and the temporal width $`\tau `$ of the gaussian envelope $`\mathrm{exp}\{-(t/\tau )^2\}`$. We define an excitation energy $`E_{exc}`$ referred to the bottom of the (ordered) absorption band, i.e. $`E_{exc}=\hbar \omega -E_{gap}`$. All results are given for $`\tau =100`$ fs, which corresponds to an energetic width (FWHM) of 22 meV. To make contact with the previous work where two particles in a single band were placed initially at a single site, we first consider the situation of a symmetric band structure with $`J^e=J^h=20`$ meV. The absorption spectra with and without Coulomb interaction are shown in Fig. 1 for the ordered case. The peak structure near the absorption is due to the excitonic resonances. Upon changing the sign of the Coulomb interaction, the bound state resonances are shifted from the bottom to the top of the absorption spectrum. As the dynamics of electrons and holes are the same for the assumed symmetric band structure, we restrict our discussion to the electrons. We first discuss the situation in the absence of Coulomb interaction. Fig. 2 shows the corresponding $`\mathrm{\Lambda }_e(t)`$ for two different disorder parameters $`W`$ after excitation by a pulse at $`E_{exc}=80`$ meV. The excitation is centered in the absorption spectrum as indicated in Fig. 1. $`\mathrm{\Lambda }_e(t)`$ evolves exponentially with rise time less than 1 ps. Here and below, we take the saturation value as a measure of localization. As expected, it decreases rapidly with increasing disorder. We find $`\mathrm{\Lambda }\propto W^{-1.3}`$ as $`W`$ is varied over the range 40 meV to 240 meV for $`J=20`$ meV. A discussion of related exponents can be found in the literature. Fig. 2 contrasts the interacting and noninteracting behavior for two values of disorder and reveals three remarkable features. i) The interaction clearly leads to a reduction of the localization of the particles. We have carefully checked that the saturation value of $`\mathrm{\Lambda }_e(t)`$ at long times is not due to a finite size effect; values $`<N/2`$ are fully converged with respect to the sample size. ii) While the participation number in the noninteracting situation evolves exponentially and saturates quickly ($`<1`$ ps), the interacting wave packets evolve diffusively and reach their saturation values at much longer times. iii) The sign of the Coulomb interaction ($`U=\pm 1`$) has virtually no influence on the propagation of the particles in this case. The same is true if we apply a very short excitation pulse which spectrally covers the whole band. The spectral position of the central pulse frequency within the band is then completely irrelevant. In this situation the excited particle pair-wave packet is initially situated exclusively at site $`i=0`$. These observations are not new. However, iii) has been questioned and ii) remained unexplained.
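The quantities behind these observations can be reproduced in outline by integrating the polarization equation of motion (2)-(3) directly. The sketch below uses a crude RK4 stepper, periodic boundaries, and made-up parameter values; it illustrates the procedure rather than the paper's actual runs.

```python
import numpy as np

N, Je, Jh, U, W = 60, 0.02, 0.02, -1.0, 0.08   # energies in eV, units with hbar = 1
dt, steps = 1.0, 400
rng = np.random.default_rng(2)
ee = rng.uniform(-W / 2, W / 2, N)             # electron site energies
eh = rng.uniform(-W / 2, W / 2, N)             # hole site energies
ii, jj = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
V = U * 0.1 / (np.abs(ii - jj) + 5.0)          # regularized monopole-monopole term

def rhs(p):
    # Eqs. (2)-(3) without the source term after the delta-like excitation
    hp = -1j * (ee[:, None] + eh[None, :] - V) * p
    hp += 1j * Je * (np.roll(p, 1, 0) + np.roll(p, -1, 0))
    hp += 1j * Jh * (np.roll(p, 1, 1) + np.roll(p, -1, 1))
    return hp

p = np.zeros((N, N), complex)
p[N // 2, N // 2] = 1.0                        # local excitation at the central site
for _ in range(steps):
    k1 = rhs(p); k2 = rhs(p + dt / 2 * k1)
    k3 = rhs(p + dt / 2 * k2); k4 = rhs(p + dt * k3)
    p += dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

ne = np.einsum("lj,li->ij", p, p.conj())       # n^e_ij = sum_l p_lj p*_li
nd = np.abs(np.diag(ne).real)
nd = nd / nd.sum()                             # normalize the site occupations
print(1.0 / np.sum(nd ** 2))                   # participation number Lambda_e
```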
In all cases where $`J^e`$ and $`J^h`$ are of comparable magnitude we find that the participation number is enhanced by the interaction. In a mean field picture, it is the temporal fluctuations of the field originating from the partner particle which destroy the coherence necessary to produce localization. This explanation in terms of a dynamic-correlation-induced weakening of the influence of disorder can be nicely corroborated by a number of case studies. We note that contrary to previous statements, the independence of the sign of the Coulomb interaction is not a general feature, but is a consequence of the imposed electron-hole symmetry. In particular, displacing the central frequency of excitation pulses from the center of the absorption band, the situation changes completely. Note that this choice of the excitation frequency corresponds to the realistic situation where electron-hole pairs are excited close to the absorption edge in semiconductors. Fig. 3 shows the participation number $`\mathrm{\Lambda }`$ for light electrons and heavy holes, i.e., $`J^e=2J^h=20`$ meV. The central excitation energy of the pulse is placed in the lower part of the pair continuum at $`E_{exc}=40`$ meV. Results averaged over 60 realizations are shown for $`W=80`$ meV and $`U=0,\pm 1`$. The results are invariant under reflection of the excitation frequency through band center and simultaneous switching of the sign of the interaction. This reflects the approximate symmetry (within fluctuations in the site energy distribution) of the Hamiltonian. It is at first sight counterintuitive that the enhancement of the participation number is larger for attractive ($`U=-1`$) than for repulsive ($`U=+1`$) interaction. This behavior can be attributed to the fact that for attractive interaction and positive masses (i.e. for excitation into the lower half of the excitation continuum) the electron-hole pair tends to stay closer together. The fluctuating field due to the accompanying particle is then more pronounced as compared to the case of repulsive interaction, where the mutually repulsive particle pair tends to be separated. Hence the dynamic-correlation-induced weakening of the influence of disorder is less effective for repulsive than for attractive interaction. Completely different behavior is found for a static field. We consider an infinitely heavy hole, $`J^h=0`$, which now produces a static field, and excitation at the (interaction-free) band center. For both attractive and repulsive interactions, the participation number is decreased with respect to the noninteracting case. This result is easily understood without invoking fluctuating fields since at band center electron states have maximal extent. In the presence of interaction, off-center states are admixed leading to greater confinement. The effect of the static interaction is thus opposite to that of a fluctuating field. The strong retardation of the saturation in the interacting case can also be understood in our picture. Whether with or without interaction, the electron and hole wave packets spread over a range given by the single-particle levels involved in the optical transition just after the short excitation pulse. The fluctuating Coulomb field due to the partner particle then leads to an increase of the spread of the wave packets. As a consequence, the average fluctuating field acting on a given particle is reduced, which in turn tends to slow further spreading, eventually leading to the observed saturation at long times.
The neglect of phonon interactions in our model is justified a posteriori. Fig. 2 makes it clear that the time scales between 100 fs and $`\sim `$ 3 ps are fully sufficient for experimental observations while near-band-edge acoustic phonon scattering occurs on longer time scales. Previous work on the TIP problem suggests a scaling of the two-particle localization length $`l_2/l_1\propto l_1^a(U/J)^2`$, with $`a=1`$ or $`a=0.65`$. Our results for the participation number do not obey such a scaling law as far as the dependence on $`U`$ is concerned. We obtain for electrons and holes, both for attractive and repulsive interaction, $`\mathrm{\Lambda }(U=\pm 1)/\mathrm{\Lambda }(U=0)\propto \mathrm{\Lambda }(U=0)^b`$ with $`b=0.65\pm 0.3`$. For electrons and attractive interaction the present model predicts $`\mathrm{\Lambda }_e\propto W^{-c}`$ with a larger exponent $`c\approx 2.2`$ compared to $`c=1.3`$ for the noninteracting case. In conclusion, we have studied the localization of a pair of interacting particles in a situation which, in principle, is accessible to experiments. Optical excitation in the pair continuum of a disordered one-dimensional semiconductor with long-range Coulomb interaction has been considered. Starting from a tight-binding description, the temporal evolution of the participation numbers of the electron and the hole wave packets has been calculated by a direct solution of the equation of motion of the correlated material excitation within linear response with regard to the exciting laser field. The participation number increases with interaction for both attractive and repulsive interaction. We find that in general the degree of delocalization depends strongly on the sign of the interaction, in contrast to previously published predictions. The sign of the interaction becomes irrelevant (even if the masses of electrons and holes are different) only for two special situations: excitation in the center of the pair continuum, or excitation of the whole band. We have checked that this result is independent of the assumed form of the interaction and that it remains true also for the short-range interactions studied in the literature. Compared to the single-band models treated in the past, the present semiconductor model admits a richer variety of phenomena, which can be qualitatively explained within a mean-field picture. We emphasize that the enhancement of the participation number is clearly not due to a finite size effect, and that it should be experimentally observable. Ultra-short time-of-flight experiments on arrays of semiconductor quantum wires in the coherent limit using pump-probe techniques are a promising option. The enhancement should also be observable in disordered semiconductor quantum wells. In this case we expect the enhancement to be even more pronounced, since, in contrast with one-dimensional systems, only states close to the band edge are essentially affected by the disorder in two dimensions, so that the interaction will lead to coupling with rather extended states. This work is supported by DFG, SFB 383 and 341, the Leibniz Prize, OTKA (T021228, T024136, F024135), SNSF (2000-52183.97), and the A. v. Humboldt Foundation. Discussions with A. Knorr and F. Gebhard are gratefully acknowledged.
no-problem/0002/hep-lat0002004.html
UCLA/00/TEP/06 INLO-PUB 02/00 # Computation of the Vortex Free Energy in SU(2) Gauge Theory Tamás G. Kovács Department of Physics, Instituut-Lorentz for Theoretical Physics, P.O.Box 9506, 2300 RA, Leiden, The Netherlands e-mail: kovacs@lorentz.leidenuniv.nl and E. T. Tomboulis Department of Physics, UCLA, Los Angeles, CA 90095-1547 e-mail: tombouli@physics.ucla.edu Abstract We present the first measurement of the vortex free-energy order parameter at weak coupling for SU(2) in simulations employing multihistogram methods. The result shows that the excitation probability for a sufficiently thick vortex in the vacuum tends to unity. This is rigorously known to provide a necessary and sufficient condition for maintaining confinement at weak coupling in $`SU(N)`$ gauge theories. The vortex free energy (also known as magnetic-flux free energy) order parameter in gauge theories is defined as the ratio of the partition function in the presence of a topologically trapped vortex excitation (introduced by a singular gauge transformation) to that without it. Its Fourier transform w.r.t. the center ($`Z(N)`$) of the gauge group ($`SU(N)`$) defines the so-called electric-flux free energy which is rigorously known to provide an upper bound on the Wilson loop. These flux order parameters can characterize all possible phases of a (pure) gauge theory, and furthermore do this in terms of the behavior of the excitation expectation for a vortex. They were first considered in the study of gauge theories in, though the use of the analogous quantities in statistical mechanics goes back much further. The idea that vortex configurations underlie confinement at weak coupling has a long history, and has been the subject of intense recent activity. (We refer to Ref. for a review of recent developments and references to early and recent work.) In view of the physical significance of the magnetic-flux free energy, it may appear surprising that it has not been measured in simulations over the last twenty years. Accurate determination of (differences of) free energies in gauge theories, however, is well-known to be difficult. In fact, it is at first not quite clear how one should go about computing such totally nonlocal (lattice-length) quantities. We present here a computation for the group $`SU(2)`$ based on multihistogram methods. Such a method was recently used in Ref. to compute the free energy of a pair of $`Z(N)`$ monopoles, a quantity related to the ’t Hooft loop operator. Our result demonstrates that the excitation expectation for a sufficiently extended ‘thick’ vortex at large $`\beta `$ is essentially unity. This is the feature responsible for maintaining the confining phase in $`SU(N)`$ gauge theories even at weak coupling. We work on a $`d`$-dimensional hypercubic lattice $`\mathrm{\Lambda }`$ of size $`L_1\times \mathrm{\dots }\times L_d`$ with periodic boundary conditions in all directions. We generally denote bonds by $`b`$, plaquettes by $`p`$, cubes by $`c`$, etc. The plaquette action is denoted by $`A_p(U_p)`$, where, as usual, $`U_p=\prod _{b\in p}U_b`$, the product of the bond variables around the plaquette; for the minimal (Wilson) action $`A_p(U_p)=\beta \mathrm{Re}\mathrm{tr}U_p`$. The trace ”tr” is defined to include a $`1/N`$ normalization. A coclosed set of plaquettes (2-cells) is a closed set of $`(d-2)`$-cells on the dual lattice. Thus, in $`d=3`$, it is a closed loop of dual bonds; in $`d=4`$, a closed two-dimensional surface of dual plaquettes.
For fixed $`\mu `$, $`\nu `$, let $`𝒱_{\mu \nu }`$ denote a coclosed set of plaquettes that winds through every 2-dim $`[\mu \nu ]`$-plane of $`\mathrm{\Lambda }`$, i.e. a topologically nontrivial plaquette set wrapped around the periodic lattice ($`d`$-torus) in the $`(d-2)`$ directions $`\lambda \ne \mu ,\nu `$ perpendicular to $`\mu ,\nu `$. This is depicted in figure 1(a), where the short lines represent the plaquettes in $`𝒱`$, with the horizontal axis representing the $`x^\mu `$, $`x^\nu `$ directions, and the vertical axis the remaining $`(d-2)`$ perpendicular directions. Define the partition function $$Z_\mathrm{\Lambda }(\tau _{\mu \nu })=\int \prod _bdU_b\mathrm{exp}\left(\sum _{p\notin 𝒱_{\mu \nu }}A_p(U_p)+\sum _{p\in 𝒱_{\mu \nu }}A_p(\tau _{\mu \nu }U_p)\right),$$ (1) where the plaquette action $`A_p(U_p)`$ is replaced by the ‘twisted’ action $`A_p(\tau _{\mu \nu }U_p)`$ for each plaquette of $`𝒱_{\mu \nu }`$. Here the ‘twist’ $`\tau _{\mu \nu }\in Z(N)`$ is an element of the center. There are thus $`(N-1)`$ different nontrivial choices for $`\tau _{\mu \nu }`$. The trivial element $`\tau _{\mu \nu }=1`$ is the ordinary partition function $`Z_\mathrm{\Lambda }(1)\equiv Z_\mathrm{\Lambda }`$. As indicated by the notation on the l.h.s. of (1), the exact position or shape of $`𝒱_{\mu \nu }`$ is irrelevant; the only dependence is on the presence of the $`Z(N)`$ flux winding through each $`[\mu \nu ]`$-plane. It is indeed easily seen that $`𝒱_{\mu \nu }`$ can be moved around and distorted by a shift of integration variables, but not removed; it is rendered topologically stable by winding completely around the lattice (figure 1(b)). By the same token introducing two twists, $`\tau _{\mu \nu }`$ on $`𝒱_{\mu \nu }`$ and $`\tau _{\mu \nu }^{\prime }`$ on $`𝒱_{\mu \nu }^{\prime }`$ in (1), is equivalent to introducing one twist $`\tau _{\mu \nu }^{\prime \prime }=\tau _{\mu \nu }\tau _{\mu \nu }^{\prime }`$ since $`𝒱_{\mu \nu }`$ and $`𝒱_{\mu \nu }^{\prime }`$ can be brought together by a shift of integration variables (figure 2). This expresses the mod $`N`$ conservation of the $`Z(N)`$ flux introduced by the twist. Thus, for $`N=2`$, any odd number of such (nontrivial) twists is equivalent to one, and any even number to none. The magnetic-flux free energy order parameter is now defined as $`\mathrm{exp}(-F_{\mathrm{mg}}(\tau _{\mu \nu }))`$ $`=`$ $`{\displaystyle \frac{Z_\mathrm{\Lambda }(\tau _{\mu \nu })}{Z_\mathrm{\Lambda }}}`$ (2) $`=`$ $`\left\langle \mathrm{exp}\left({\displaystyle \sum _{p\in 𝒱_{\mu \nu }}}\left(A_p(\tau _{\mu \nu }U_p)-A_p(U_p)\right)\right)\right\rangle .`$ Generalizations of (1)-(2) may be considered by introducing sets $`𝒱_{\kappa \lambda }`$ for several or all of the $`\frac{1}{2}d(d-1)`$ possible distinct choices of planes $`[\kappa \lambda ]`$. The twist amounts to a discontinuous (singular) $`SU(N)`$ gauge transformation on the configurations in (1) with multivaluedness in $`Z(N)`$ (so it is single-valued in $`SU(N)/Z(N)`$), i.e. the introduction of a $`\pi _1(SU(N)/Z(N))=Z(N)`$ vortex. The set $`𝒱_{\mu \nu }`$ represents the topological obstruction to having singlevaluedness everywhere. (1) is then the partition sum for the system with a topologically stable vortex completely winding around the lattice; and (2) is the normalized expectation for the excitation of such a vortex. Hence, it is also referred to as the vortex free energy. Choosing, say, $`[\mu \nu ]=[12]`$ in (1), we now drop the $`\mu \nu `$ subscript. One is interested in the behavior of (2) in the large volume limit (in the van Hove sense), i.e.
as the size of the lattice increases in any power law fashion, e.g. $`L_\mu =2^{la_\mu }`$ for some fixed choice of positive exponents $`a_\mu `$, integer $`l\to \mathrm{}`$. Let $`A=L_1L_2`$ be the area of each $`[12]`$-plane, and $`L=L_3\mathrm{\dots }L_d`$ the lattice volume in the perpendicular directions. One is interested, in particular, in $`L\gg A`$. The twist introduces a cost in action localized on the plaquettes in $`𝒱`$. This cost, proportional to $`L`$, may be lowered if there are configurations that contribute with finite measure in the integral (2), and allow the flux introduced by the twist to spread in the two directions perpendicular to $`𝒱`$, so that the action is closer to its minimum; in other words, if there is finite probability for exciting a ‘thick’ vortex. For sufficiently large lattices, there are then three possibilities ($`\tau \ne 1`$): (a) $`\mathrm{exp}(-F_{\mathrm{mg}}(\tau ))\sim \mathrm{exp}(-\alpha (\beta ,\tau )L)`$ (b) $`\mathrm{exp}(-F_{\mathrm{mg}}(\tau ))\sim \mathrm{exp}(-\beta c(\tau ){\displaystyle \frac{L}{A}})`$ (c) $`\mathrm{exp}(-F_{\mathrm{mg}}(\tau ))\sim \mathrm{exp}(-cLe^{-\rho (\beta ,\tau )A})`$ In case (a) the magnetic flux stays focused in a thin vortex; this describes a Higgs phase. In (b) the flux can spread in a Coulomb-like fashion lowering the free-energy cost; this describes a massless Coulomb phase, where the long distance behavior is accurately given by weak coupling perturbative expansion. In (c) the gain in thickening the vortex is exponential; this characterizes the confinement phase. It is important to note that, in contrast to (a)-(b), only (c) gives a value which survives and in fact tends (exponentially) to unity for all ways of taking the thermodynamic limit as described above; this is the signature of the confinement phase. Since our computation below is for $`N=2`$, we now write explicit formulae only for this case. The Fourier transform of (2) w.r.t. $`Z(N)`$ is known as the electric-flux free energy. For $`N=2`$ this is simply: $$\mathrm{exp}(-F_{\mathrm{el}})=\sum _{\tau =1,-1}\tau \mathrm{exp}(-F_{\mathrm{mg}}(\tau ))=1-\mathrm{exp}(-F_{\mathrm{mg}}(-1)).$$ (3) Consider now a rectangular loop $`C`$ in a $`[12]`$-plane. Then, for any reflection positive plaquette action, the Wilson loop obeys the bound: $$\langle \mathrm{tr}(U[C])\rangle \le \left(\mathrm{exp}(-F_{\mathrm{el}})\right)^{{\scriptscriptstyle \frac{A_C}{A}}},$$ (4) where $`A_C`$ is the minimal area bounded by $`C`$. (4) shows that confining behavior (c) for the vortex free energy implies area-law for the Wilson loop with string tension bounded from below by the excitation expectation for a vortex. So confining behavior for the vortex free energy is a sufficient condition for linear asymptotic quark confinement. Placing suitable constraints in the functional measure in (1) which forbid the spreading of flux across $`[12]`$-planes, thus eliminating the occurrence of thick vortices, results in nonconfining behavior of type (a) above. In this case (4) cannot tell us anything about the Wilson loop. To show loss of confining behavior for the Wilson loop itself in the presence of such constraints, one needs a lower bound on it which exhibits perimeter-law. This was recently proven for large $`\beta `$. Thus the occurrence of thick vortices is also a necessary condition for confinement at weak coupling. Our measurement of (2) for $`SU(2)`$ was done by an application of a multihistogram method.
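In passing, Eqs. (3)-(4) turn a measured vortex free energy directly into a lower bound on the string tension; schematically (all numbers made up):

```python
import numpy as np

A = 16 * 16                                   # area of each [12]-plane, lattice units
exp_F_mg = 0.98                               # measured exp(-F_mg(-1)), close to 1
exp_F_el = 1.0 - exp_F_mg                     # Eq. (3)
sigma_min = -np.log(exp_F_el) / A             # Eq. (4): <tr U[C]> <= exp(-sigma_min*A_C)
print(exp_F_el, sigma_min)
```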
From now on we restrict the form of the action to the Wilson action $$A_p(U_p)=\beta \mathrm{tr}U_p,$$ (5) which was used in the measurement. The basic quantity in our procedure is the density of states $`w(S)`$ as a function of the total action $`S`$ along the twisted plaquettes. This is defined as $$w(S)=\int dU_b\mathrm{exp}\left(\beta \sum _{p\notin 𝒱}\mathrm{tr}U_p\right)\delta (S+\sum _{p\in 𝒱}\mathrm{tr}U_p).$$ (6) If $`w(S)`$ is known, the partition function can be easily computed for any coupling $`\beta _𝒱`$ along $`𝒱`$ as $$Z(\beta _𝒱)=\int dSw(S)e^{-\beta _𝒱S}.$$ (7) In particular, we are interested in $`Z(\beta )`$, the untwisted, and $`Z(-\beta )`$, the twisted partition function. The problem is that the dominant contribution for $`Z(\beta _𝒱)`$ comes from different regions of $`S`$, depending on $`\beta _𝒱`$. Therefore one needs to know $`w(S)`$ to a good accuracy in a wide range of $`S`$. A simulation done at a certain value of $`\beta _𝒱`$, however, will give accurate information on $`w(S)`$ only in a narrow neighbourhood of $`\langle S\rangle _{\beta _𝒱}`$. The main idea of the Ferrenberg-Swendsen multihistogram method is to combine information on $`w(S)`$ coming from simulations at different $`\beta _𝒱`$’s to obtain $`w(S)`$ in a wide range of $`S`$ accurately. This can be done by noting that for a given $`\beta _𝒱`$ the probability distribution of $`S`$, $`P(S,\beta _𝒱)`$, goes as $$P(S,\beta _𝒱)\propto w(S)\frac{1}{Z(\beta _𝒱)}e^{-\beta _𝒱S},$$ (8) and that $`P(S,\beta _𝒱)`$ can be directly measured by making a histogram of the action along $`𝒱`$. In this way, any simulation at a certain $`\beta _𝒱`$ gives an estimate for $`w(S)`$, $$w(S)=P(S,\beta _𝒱)e^{\beta _𝒱S}Z(\beta _𝒱).$$ (9) These estimates coming from simulations with different $`\beta _𝒱`$’s (say $`\beta _1`$, $`\beta _2`$,…$`\beta _K`$) can then be averaged with suitable ($`S`$ dependent) weights to minimise the error in $`w(S)`$ over a given range of $`S`$. This results in the following set of coupled equations: $`w(S)={\displaystyle \frac{\sum _{n=1}^KP(S,\beta _n)}{\sum _{n=1}^K\frac{\mathrm{exp}\left(-\beta _nS\right)}{Z\left(\beta _n\right)}}}`$ (10) $`Z(\beta _n)={\displaystyle \int dSe^{-\beta _nS}w(S)},`$ (11) which can be solved by iteration starting from $`Z(\beta _n)=1`$. To optimize the procedure, one needs a sufficient overlap between the $`P(S,\beta _n)`$ distributions corresponding to successive $`\beta _n`$’s. Since the distributions quickly become narrower with increasing lattice size, the number of simulations, $`K`$, also needs to be increased accordingly. This makes our measurement very expensive on large lattices. For the largest lattices we typically used $`K=40`$–$`80`$ simulations with the $`\beta _n`$’s equally spaced in the $`[-\beta ,+\beta ]`$ range.
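A stripped-down version of the iteration (10)-(11) on binned histograms might look as follows; this sketch assumes equal statistics in every run and ignores the numerical-overflow care needed on large lattices (names ours):

```python
import numpy as np

def multihistogram(S, P, betas, n_iter=500):
    # S: action-bin centers; P: (K, n_bins) normalized histograms P(S, beta_n)
    Z = np.ones(len(betas))
    boltz = np.exp(-np.outer(betas, S))            # e^{-beta_n S} for every bin
    for _ in range(n_iter):
        w = P.sum(axis=0) / (boltz / Z[:, None]).sum(axis=0)   # Eq. (10)
        Z = boltz @ w                                          # Eq. (11)
    return w, Z

def vortex_free_energy_ratio(S, w, beta):
    # exp(-F_mg) = Z(-beta)/Z(beta), both evaluated from w(S) via Eq. (7)
    return np.dot(w, np.exp(beta * S)) / np.dot(w, np.exp(-beta * S))
```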
The result of the computation for (2) is shown in figure 3. We have performed the computation on lattices of equal linear size in all directions for three different values of $`\beta `$. The lattice spacings are $`a=0.165`$ fm, $`a=0.119`$ fm and $`a=0.085`$ fm for $`\beta =2.3`$, $`\beta =2.4`$, and $`\beta =2.5`$, resp. Notice that, with the lattice size expressed in physical units, the measurements for different $`\beta `$’s fall on the same curve, as they should. This indicates that the universal curve has been reached, and will not change at larger beta. Also, the onset of the sharp rise around $`0.7`$ fm is in the region of the finite temperature deconfining phase transition, providing another indirect consistency check. The approach to unity for sufficiently large lattice size in figure 3 is striking. In comparison, for Coulomb-like massless behavior, an upper bound obtained by action minimizing within the spin-wave approximation gives $`\mathrm{exp}(-\beta (\pi /2)^2)\approx 0.085`$ at $`\beta =2.3`$. The points forming the upper part of the plot are well within the confinement region. The string tension values extracted from the vortex free energy in the confining region are consistent with the values from heavy-quark potential calculations (see e.g.), though better precision in the measurement of the vortex free energy is still required for precise quantitative comparisons. In conclusion, the result of our computation clearly demonstrates that the weighted expectation for the excitation of a sufficiently thick vortex in the vacuum tends to one. In this sense the vacuum can indeed be viewed as having a ‘condensate’ of thick long vortices. This is sufficient for maintaining confinement at large $`\beta `$ in $`SU(N)`$ gauge theories. As mentioned, rigorous results also show it to be necessary: were the behavior for (2) exhibited in figure 3 not to occur, confinement at large beta would be lost. We are very grateful to P. de Forcrand for correspondence. This research was supported by FOM (T.G.K.) and by NSF, Grant No. NSF-PHY-9819686 (E.T.T.).
# The influence of critical behavior on the spin glass phase ## I Introduction Despite over two decades of work, the controversy concerning the nature of the ordered phase of short range Ising spin glasses continues. For a few years, Monte Carlo simulations appeared to be providing evidence for replica symmetry breaking (RSB) in these systems. However, recent developments have cast doubt on this interpretation of the Monte Carlo data. In a series of papers on the Ising spin glass within the Migdal-Kadanoff approximation (MKA), we showed that the equilibrium Monte Carlo data in three dimensions that had been interpreted in the past as giving evidence for RSB can actually be interpreted quite easily within the droplet picture, with apparent RSB effects being attributed to a crossover between critical behavior and the asymptotic droplet-like behavior for small system sizes. We also showed that system sizes well beyond the reach of current simulations would probably be required in order to unambiguously see droplet-like behavior. The finding that the critical-point effects can still be felt at temperatures lower than those accessible by Monte Carlo simulations is supported by the Monte Carlo simulations of Berg and Janke, who found critical scaling working reasonably well down to $`T=0.8T_c`$ for system sizes up to $`L=8`$ in three dimensions. The zero temperature study of Palassini and Young also suggests that the ground-state structure of the three-dimensional Edwards-Anderson model is well described by droplet theory, though the existence of low energy excitations not included in the conventional droplet theory remains an open question. Thus, while puzzles do remain, the weight of the evidence seems to be shifting towards a droplet-like description of the ordered phase in short range Ising spin glasses. However, it is expected that critical point effects are less dominant in four dimensions than in three dimensions. Our aim in this paper is to quantify the extent of critical point effects in the low temperature phase of the four-dimensional Edwards-Anderson spin glass. We do this by providing results for the four-dimensional Ising spin glass in the MKA and comparing these with existing Monte Carlo work. In particular, we study the Parisi overlap function and the link overlap function for system sizes up to $`L=16`$ and temperatures as low as $`T=0.16T_c`$. We find that for system sizes and temperatures comparable to those of the Monte Carlo simulations, the Parisi overlap distribution also shows in MKA the sample-to-sample fluctuations and the stationary behavior at small overlap values that are normally attributed to RSB. It is only for larger system sizes (or for lower temperatures) that the asymptotic droplet-like behavior becomes apparent. For the link overlap, we find double-peaked curves similar to those found in Monte-Carlo simulations. This double peak structure is expected on quite general grounds, independent of the nature of the low temperature phase. However, we show that the two peaks in the link overlap in MKA occur because of a difference between domain-wall excitations (which cross the entire system) and droplet excitations (which do not cross the entire system). We argue that for small system sizes, the effect of domain walls increases with increasing dimension, making it necessary to go very far below $`T_c`$ to see the asymptotic droplet behavior.
This paper is organized as follows: in section II, we define the quantities discussed in this paper, and the droplet-model predictions for their behavior. In section III, we describe the MKA, and our numerical methods of evaluating the overlap distribution. In section IV, we present our numerical results for the Parisi overlap distribution, and compare to Monte-Carlo data. The following section studies the link overlap distribution. Finally, section VI contains the concluding remarks, including some on the effects of critical behavior on the dynamics in the spin glass phase. Again we suspect that arguments which have been advanced against the droplet picture on the basis of dynamical studies have failed to take into account the effects arising from proximity to the critical point. ## II Definitions and Scaling Laws The Edwards-Anderson spin glass in the absence of an external magnetic field is defined by the Hamiltonian $$H=-\sum_{\langle i,j\rangle }J_{ij}S_iS_j,$$ where the Ising spins can take the values $`\pm 1`$, and the nearest-neighbor couplings $`J_{ij}`$ are independent from each other and Gaussian distributed with a standard deviation $`J`$. It has proven useful to consider two identical copies (replicas) of the system, and to measure overlaps between them. This gives information about the structure of the low-temperature phase, in particular about the number of pure states. The quantities considered in this paper are the Parisi overlap function $`P(q,L)`$ and the link overlap function $`P(q_l,L)`$. They are defined by $$P(q,L)=\left[\left\langle \delta \left(\sum_{\langle ij\rangle }\frac{S_i^{(1)}S_i^{(2)}+S_j^{(1)}S_j^{(2)}}{2N_L}-q\right)\right\rangle \right],$$ (1) and $$P(q_l,L)=\left[\left\langle \delta \left(\sum_{\langle ij\rangle }\frac{S_i^{(1)}S_i^{(2)}S_j^{(1)}S_j^{(2)}}{N_L}-q_l\right)\right\rangle \right].$$ (2) Here, the superscripts $`(1)`$ and $`(2)`$ denote the two replicas of the system, $`N_L`$ is the number of bonds, and $`\langle \cdots \rangle `$ and $`\left[\cdots \right]`$ denote the thermodynamic and disorder average respectively. We use $`P(q,L)`$ and $`P(q_l,L)`$ to denote the overlap functions for a finite system of size $`L`$, reserving the more standard notation $`P(q)`$ and $`P(q_l)`$ for the limits $`\lim_{L\to \infty }P(q,L)`$ and $`\lim_{L\to \infty }P(q_l,L)`$. In the mean-field RSB picture, $`P(q)`$ is nonzero in the spin glass phase in the entire interval $`[-q_{EA},q_{EA}]`$, while it is composed only of two delta functions at $`\pm q_{EA}`$ in the droplet picture. Similarly, $`P(q_l)`$ is nonzero over a finite interval $`[q_l^{min},q_l^{max}]`$ in mean-field theory, while it is a delta-function within the droplet picture. Much of the evidence for RSB for three- and four-dimensional systems comes from observing a stationary $`P(q=0,L)`$ for system sizes that are generally smaller than 20 in 3D and smaller than 10 in 4D, and at temperatures of the order of $`0.7T_c`$. However, even within the droplet picture one expects to see a stationary $`P(q=0,L)`$ for a certain range of system sizes and temperatures. The reason is that at $`T_c`$ the overlap distribution $`P(q,L)`$ obeys the scaling law $$P(q,L)=L^{\beta /\nu }\stackrel{~}{P}(qL^{\beta /\nu }),$$ (3) $`\beta `$ being the order parameter critical exponent, and $`\nu `$ the correlation length exponent. Above the lower critical dimension (which is smaller than 3), $`\beta /\nu `$ is positive, leading to an increase of $`P(q=0,L)`$ as a function of $`L`$ (at $`T=T_c`$).
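As a concrete illustration of the definitions (1) and (2), the following sketch measures $`q`$ and $`q_l`$ for a single disorder sample from two replica spin configurations; the bond list and the toy usage at the end are placeholders, not the geometry used in this paper.

```python
import numpy as np

def overlaps(s1, s2, bonds):
    """Bond-averaged Parisi overlap q and link overlap q_l, Eqs. (1)-(2).

    s1, s2 : spin configurations (+-1 arrays) of the two replicas
    bonds  : array of shape (N_L, 2) listing the coupled site pairs
    """
    i, j = bonds[:, 0], bonds[:, 1]
    site = s1 * s2                                    # S_i^(1) S_i^(2)
    q = (site[i] + site[j]).sum() / (2.0 * len(bonds))
    q_l = (site[i] * site[j]).sum() / float(len(bonds))
    return q, q_l

# toy usage: a random 'sample' on a ring of 8 sites
rng = np.random.default_rng(1)
bonds = np.array([(k, (k + 1) % 8) for k in range(8)])
print(overlaps(rng.choice([-1, 1], 8), rng.choice([-1, 1], 8), bonds))
```

Histogramming these overlaps over thermal samples and then over disorder realizations yields $`P(q,L)`$ and $`P(q_l,L)`$.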
On the other hand, for $`T<T_c`$, the droplet model predicts a decay $$P(q=0,L)\sim 1/L^\theta $$ on length scales larger than the (temperature–dependent) correlation length $`\xi `$, $`\theta `$ being the scaling exponent of the coupling strength $`J`$. A few words are in order here on what we mean by the correlation length. In the spin glass phase, all correlation functions fall off as a power law at large distances. However, within the droplet model, this is true only asymptotically, and the general form of the correlation function for two spins a distance $`r`$ apart, at a temperature $`T<T_c`$, is $`(k_BT/J)r^{-\theta }f(r/\xi )`$, where $`k_B`$ is the Boltzmann constant and $`f`$ is a scaling function. Thus, for $`r\lesssim \xi `$ there are corrections to the algebraic long-distance behavior, and the above expression defines the temperature-dependent correlation length. Note that for $`T\to T_c`$ this correlation length is expected to diverge with the exponent $`\nu `$. Thus, for temperatures not too far below $`T_c`$, one can expect an almost stationary $`P(q=0,L)`$ for a certain range of system sizes. Since in three dimensions both $`\beta /\nu \approx 0.3`$ and $`\theta \approx 0.17`$ are rather small, this apparent stationarity may persist over a considerable range of system sizes $`L`$. However, in four dimensions $`\beta /\nu \approx 0.85`$ and $`\theta \approx 0.65`$, and one would expect the crossover region to be smaller. In the present paper we shall investigate these crossover effects in four dimensions by studying $`P(q,L)`$ for the Edwards-Anderson spin glass within the MKA. It turns out that they are surprisingly persistent even at low temperatures, due to the presence of domain walls. Monte-Carlo simulations of the link overlap distribution show a nontrivial shape with shoulders or even a double peak, which seems to be incompatible with the droplet picture, where the distribution should tend towards a delta-function. For sufficiently low temperatures and large length scales, the droplet picture predicts that the width of the link overlap distribution scales as $$\mathrm{\Delta }q_l\propto \sqrt{kT}L^{d_s-d-\theta /2},$$ where $`d_s`$ is the fractal dimension of a domain wall. Below, we will show that the nontrivial shape and the double peak reported from Monte-Carlo simulations are also found in MKA in four dimensions, and we will present strong evidence that this is due to the different nature of droplet and domain wall excitations. As the weight of domain walls becomes negligible in the thermodynamic limit, the droplet picture is regained on large scales. ## III Migdal-Kadanoff approximation The Migdal-Kadanoff approximation (MKA) is a real-space renormalization group that gives approximate recursion relations for the various coupling constants. Evaluating a thermodynamic quantity in MKA in $`d`$ dimensions is equivalent to evaluating it on a hierarchical lattice that is constructed iteratively by replacing each bond by $`2^d`$ bonds, as indicated in Fig. 1. The total number of bonds after $`I`$ iterations is $`2^{dI}`$. $`I=1`$, the smallest non-trivial system that can be studied, corresponds to a system of linear dimension $`L=2`$, $`I=2`$ corresponds to $`L=4`$, $`I=3`$ corresponds to $`L=8`$ and so on. Note that the number of bonds on the hierarchical lattice after $`I`$ iterations is the same as the number of sites of a $`d`$-dimensional lattice of size $`L=2^I`$.
Thermodynamic quantities are then evaluated iteratively by tracing over the spins on the highest level of the hierarchy, until the lowest level is reached and the trace over the remaining two spins is calculated. This procedure generates new effective couplings, which have to be included in the recursion relations. In , it was proved that in the limit of infinitely many dimensions (and in an expansion away from infinite dimensions) the MKA reproduces the results of the droplet picture. As was discussed in , the calculation of $`P(q,L)`$ is made easier by first calculating its Fourier transform $`F(y,L)`$, which is given by $$F(y,L)=\left[\left\langle \mathrm{exp}\left[iy\sum_{\langle ij\rangle }\frac{S_i^{(1)}S_i^{(2)}+S_j^{(1)}S_j^{(2)}}{2N_L}\right]\right\rangle \right].$$ (4) The recursion relations for $`F(y,L)`$ involve two- and four-spin terms, and can easily be evaluated numerically because all terms are now in an exponential. Having calculated $`F(y)`$ one can then invert the Fourier transform to get $`P(q,L)`$. Similarly, $`P(q_l,L)`$ is calculated by first evaluating $$F(y_l,L)=\left[\left\langle \mathrm{exp}\left[iy_l\sum_{\langle ij\rangle }\frac{S_i^{(1)}S_i^{(2)}S_j^{(1)}S_j^{(2)}}{N_L}\right]\right\rangle \right].$$ (5) Before presenting our numerical results for the Parisi overlap and the link overlap, let us discuss the flow of the coupling constant $`J`$ in the low-temperature phase, as obtained in MKA. In order to obtain this flow, we iterated the MKA recursion relation on a set of $`10^6`$ bonds. At each iteration, each of the new set of $`10^6`$ bonds was generated by randomly choosing 16 bonds from the old set and taking the trace over the inner spins (with a bond arrangement as in Fig. 1). Figure 2 shows $`J/T`$ as a function of $`L`$ for different initial values of the coupling strength. The critical point is at $`T_c\approx 2.1J`$. The first curve begins at $`J/T=0.5`$, which is close to the critical point, and it reaches the low-temperature behavior only at lengths around 1000. For an initial $`J/T=0.7`$, the asymptotic slope is already reached at $`L`$ around 40, and for $`J/T=3.0`$, which corresponds to $`T\approx 0.16T_c`$, the entire curve shows the asymptotic slope. The asymptotic slope is identical to the above-mentioned exponent $`\theta `$ and has the value $`\theta \approx 0.75`$. In contrast to $`d=3`$ , we did not succeed in fitting the crossover regime by doing an expansion around the zero-temperature fixed point. The reason is that dimension 4 is too far above the lower critical dimension, so that the critical temperature is not small. Note that for each temperature the length scale beyond which the flows of the coupling constants show the asymptotic behavior yields one estimate for the correlation length mentioned above. We have considered the flow to be in the asymptotic regime when its slope was within 90% of its asymptotic value. However, this estimate is specific to the flows of the coupling constant, and other quantities may show their asymptotic behavior later. In fact, as we shall see below, the convergence of the overlap distributions is much slower than that of the couplings, and we will have to give reasons for this. ## IV The Parisi overlap We now discuss our results for the Parisi overlap. First, let us briefly describe the critical behavior. Fig. 3 shows a scaling plot for $`P(q,L)`$ for $`L=4,8,16`$ at $`T=T_c\approx 2.1J`$. We find a good data collapse if we use the value $`\beta /\nu =0.64`$, thus confirming the finite-size scaling ansatz Eq. 3. We next move on to the low-temperature phase. In Fig.
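The pooled iteration of the recursion relation described above can be sketched as follows; we assume the standard $`d=4`$ bond-moving scheme (eight parallel branches, each a series pair of bonds), and the pool size and temperature are illustrative rather than the values used for Fig. 2.

```python
import numpy as np

rng = np.random.default_rng(0)

def lncosh(x):
    # numerically stable log(cosh(x))
    return np.logaddexp(x, -x) - np.log(2.0)

def mka_flow(T, pool=10**5, iters=7, d=4):
    """Pooled MKA recursion for Gaussian couplings (initial std J = 1).

    Each new bond is built from 2^d old ones: 2^(d-1) parallel branches,
    each a series pair decimated at temperature T via the exact rule
    J' = (1/2 beta) [ln cosh(beta(J1+J2)) - ln cosh(beta(J1-J2))].
    Returns std(J)/T after each iteration (system size L = 2^I).
    """
    beta = 1.0 / T
    J = rng.normal(0.0, 1.0, pool)
    flow = []
    for _ in range(iters):
        p = rng.choice(J, size=(pool, 2**(d - 1), 2))
        series = (lncosh(beta*(p[..., 0] + p[..., 1]))
                  - lncosh(beta*(p[..., 0] - p[..., 1]))) / (2.0*beta)
        J = series.sum(axis=1)           # bond moving: parallel bonds add
        flow.append(J.std() * beta)
    return flow

print(mka_flow(T=1.0))  # J/T grows roughly as L^0.75 deep in the ordered phase
```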
4 we show $`P(q,L)`$ at $`T=0.5T_c`$ and $`L=8`$ for three different samples. As one can see, there are substantial differences between the samples. This sensitivity to samples for system sizes around 10 is interpreted in as evidence for RSB. In our case, where we know that the droplet model is exact, it has to be considered a finite size effect. Note that we have not chosen the three samples in any particular manner. By comparing to the curves obtained for $`L=16`$ (not shown), we can even see the trend towards an increasing number of peaks, just as in . Thus, one feature commonly associated with RSB is certainly present within the MKA for temperatures and system sizes comparable to those studied in simulations. Let us now focus on the behavior of $`P(q=0,L)`$ for different system sizes and temperatures. But before exhibiting our own data, we discuss the Monte Carlo data of Reger, Bhatt and Young, who were the first to study $`P(q=0,L)`$ for the Edwards-Anderson spin glass. They studied system sizes $`L=2,3,4,5,6`$ at temperatures down to $`T=0.68T_c`$. At $`T=T_c`$ they found the expected critical scaling, $`P(q=0)\propto L^{\beta /\nu }`$ with $`\beta /\nu \approx 0.75`$. Then, as the temperature was lowered, the curves for $`P(q=0)`$ as a function of $`L`$ showed a downward curvature for the largest system sizes, which they interpreted as the beginning of the crossover between critical behavior and the low temperature behavior. At $`T=0.8T_c`$, $`P(q=0)`$ seemed to be roughly constant or decreasing slowly. However, the striking part of their data was that at $`T=0.68T_c`$ they found that $`P(q=0,L)`$ initially decreased as a function of system size for $`L=2,3,4`$ and then saturated for $`L=4,5,6`$. They interpreted this as suggestive of RSB. They admitted, however, that other explanations are possible. The most recent Monte-Carlo simulation data for the 4d Ising spin glass are those in . These authors focus on $`T\approx 0.6T_c`$, and they find an essentially stationary $`P(q=0,L)`$ for system sizes up to the largest simulated size $`L=10`$. They argue that stationarity over such a large range of $`L`$ values is most naturally interpreted as evidence for RSB. However, as can be seen from Fig. 2, the correlation length is of the order of 16 for these temperatures and therefore comparable to the system size. In Fig. 5, we show the MKA data for $`P(q=0,L)`$. We have calculated $`P(q=0,L)`$ for system sizes $`L=4,8,16`$ at temperatures $`T=T_c`$, $`0.68T_c`$, $`0.33T_c`$, and $`0.16T_c`$. At $`T=T_c`$, $`P(q=0,L)`$ grows as $`L^{\beta /\nu }`$ with $`\beta /\nu \approx 0.64`$, in agreement with Fig. 3. At $`T=0.68T_c`$ (the lowest temperature studied in , and not far from the lowest temperature studied by ), we do not see a clear decrease even for $`L=16`$. The curve for $`P(q=0)`$ looks more or less flat, though one could say that there is a slight increase between $`L=4`$ and $`L=8`$ and a slight decrease between $`L=8`$ and $`L=16`$. This flat behavior is similar to what was found in and . The deviation of the $`L=2`$ and $`L=3`$ data from the flat curve in can probably be ascribed to artifacts at very small system sizes, which are also found elsewhere . For lower temperatures, where the correlation length is smaller than the system size, there is a clear decrease of $`P(q=0)`$, although the decrease is not asymptotic even at a temperature as low as $`T_c/6`$. We conclude that the observed stationarity of $`P(q=0,L)`$ in Monte-Carlo data is due to the effects of a finite system size and finite temperature.
Similarly, Monte-Carlo simulations at $`T\approx 0.5T_c`$ and at system sizes around 10 should be able to show the negative slope in $`P(q=0,L)`$. In the not too distant future, it should become possible to perform these simulations. The fact that $`P(q=0,L)`$ does not show asymptotic behavior even at $`T=T_c/6`$ for the system sizes that we have studied is surprising, and is different from our findings in $`d=3`$ . That $`P(q=0,L)`$ converges more slowly towards the asymptotic behavior than the flow of the coupling constant (see Fig. 2) can be understood in the following way: A Parisi overlap value close to zero can be generated by a domain wall excitation. For large system sizes and low temperatures, such an excitation occurs with significant weight only in those samples where a domain wall excitation costs little energy. These are exactly the samples with a small renormalized coupling constant at system size $`L`$. As the width of the probability distribution function of the couplings increases as $`L^\theta `$, the probability for obtaining a small renormalized coupling decreases as $`L^{-\theta }`$. This is the argument that predicts that $`P(q=0,L)\propto L^{-\theta }`$. However, for smaller system sizes and higher temperatures, there are corrections to this argument. Thus, even samples with a renormalized coupling that is not small can contribute to $`P(q=0,L)`$ by means of large or multiple droplet excitations, or of thermally activated domain walls. For this reason, $`P(q=0,L)`$ can be expected to converge towards asymptopia more slowly than the coupling constant itself. Furthermore, as we shall see in the next section, the superposition of domain wall excitations and droplet excitations leads to deviations from simple scaling, which may further slow down the convergence towards asymptotic scaling behavior. ## V The Link Overlap The link overlap gives additional information about the spin glass phase that is not readily seen in the Parisi overlap. The main qualitative differences between the Parisi overlap and the link overlap are (i) that flipping all spins in one of the two replicas changes the sign of $`q`$ but leaves $`q_l`$ invariant, and (ii) that flipping a droplet of finite size in one of the two replicas changes $`q`$ by an amount proportional to the volume of the droplet, and $`q_l`$ by an amount proportional to the surface of the droplet. Thus, the link overlap contains information about the surface area of excitations. First, let us study $`P(q_l,L)`$ as a function of temperature, for a given system size $`L=4`$. Fig. 6 shows our curves for $`T=0.8T_c`$, $`0.67T_c`$, $`0.56T_c`$, $`0.48T_c`$, and $`0.33T_c`$. They appear to result from the superposition of two different peaks, with their distance increasing with decreasing temperature, and the weight shifting from the left peak to the right peak. Fig. 7 shows $`P(q_l,L)`$ for fixed $`T=0.33T_c`$ and for different $`L`$. One can see that with increasing system size the peaks move closer together, and the weight of the left-hand peak decreases. These results are similar to what we found in MKA in three dimensions ; in four dimensions, however, the peaks are more pronounced. Monte-Carlo simulations of the four-dimensional Ising spin glass also show two peaks for certain system sizes and temperatures . This feature is attributed by the authors to RSB. However, as it is also present in MKA, there must be a different explanation.
The width of the curves shrinks with increasing system size in , just as it does in MKA and as is expected from the droplet picture. If the RSB scenario were correct, the width would go to a finite value in the limit $`L\to \infty `$. In the following we present evidence that the left peak corresponds to configurations where one of the two replicas has a domain wall excitation, and the right peak to configurations where one of the two replicas has a droplet excitation. In MKA, domain wall excitations involve flipping of one side of the system, including one of the two boundary spins of the hierarchical lattice, while droplet excitations involve flipping of a group of spins in the interior. If the sign of the renormalized coupling is positive (negative), the two boundary spins are parallel (antiparallel) in the ground state. By plotting separately the contributions from configurations with and without flipped boundary spins, we can separate domain wall excitations from droplet excitations. Fig. 8 shows the three contributions from configurations where none, one, or both replicas have a domain wall. Clearly, the left peak is due to domain wall excitations, and the right peak to droplet excitations. Similar curves are obtained for other values of the parameters. We thus have shown that the qualitative differences between droplet and domain wall excitations are sufficient to explain the structure of the link overlap distribution, and no other low-lying excitations like those invoked by RSB are needed. The weight with which domain-wall excitations occur is in agreement with predictions from the droplet model. The probability of having a domain wall in a system of size $`L`$ is according to the droplet picture of the order of $$(T/J)L^{-\theta },$$ which is $`\approx 0.25`$ at $`T=0.33T_c`$ and $`L=4`$, and $`\approx 0.15`$ at $`T=0.33T_c`$ and $`L=8`$. From our simulations, we find that the relative weights of domain walls for these two situations are $`0.12`$ and $`0.076`$, which fits the droplet picture very well if we include a factor 1/2 in the above expression. Domain walls become negligible only when the product $`(T/J)L^{-\theta }`$ becomes small. In higher dimensions, the critical value of $`T/J`$ becomes larger, and for a given relative distance from the critical point, the weight of domain walls therefore also becomes larger. This explains why the effect of domain walls is more visible in 4 dimensions than in 3 dimensions. However, with increasing system size, domain walls should become negligible more rapidly in higher dimensions, due to the larger value of the exponent $`\theta `$. ## VI Conclusions Our results for the Parisi overlap distribution in four dimensions show that there are rather large finite size effects in four dimensions which give rise to phenomena normally attributed to RSB. The system sizes needed to see the beginning of droplet-like behavior within the MKA are larger, and the temperatures are lower, than those studied by Monte Carlo simulations. However, at temperatures not too far below those studied in Monte Carlo simulations ($`T=0.5T_c`$), the weight of the Parisi overlap distribution function $`P(q=0,L)`$ within the MKA appears to decrease, albeit with an effective exponent different from the asymptotic value. Thus, simulations at these temperatures for the Ising spin glass on a cubic lattice might resolve the controversy regarding the nature of the ordered state in short range spin glasses.
However, the MKA is a low-dimensional approximation, and it is possible that the system sizes needed to see asymptotic behavior for a hypercubic lattice in four dimensions are different from what is indicated by the MKA. So, any comparison of the MKA with the Monte Carlo data should be taken with a pinch of salt. Recently, a modified droplet picture was suggested by Houdayer and Martin , and by Bouchaud . Within this picture, excitations on length scales much smaller than the system size are droplet-like; however, there exist large-scale excitations that extend over the entire system and that have a small energy that does not diverge with increasing system size. As we have demonstrated within MKA, the double-peaked curves for the link overlap distribution can be fully explained in terms of two types of excitations that contribute to the low-temperature behavior, namely domain-wall excitations and droplet excitations. We therefore believe that there is no need to invoke system-wide low-energy excitations that are more relevant than domain walls. Finally, the whole field of dynamical studies of spin glasses is thought by many to provide a strong reason for believing the RSB picture. A very recent study of spin glass dynamics on the hierarchical lattice , on which the MKA is exact, indicates that no ageing occurs at low temperatures in the response function, whereas in Monte-Carlo simulations on the Edwards-Anderson model and in spin glass experiments ageing is seen in the response function. We suggest that the ageing behavior found in Monte-Carlo simulations and experiment is in fact often dominated by critical point effects, and not by droplet effects. Indeed we would expect that if the simulations of Ref. were performed at temperatures closer to the critical temperature, then ageing effects would be seen in the response function, since near the critical point of even a ferromagnet such ageing effects occur . The reason why experiments and simulations on the Edwards-Anderson model see ageing in the response function is that they are probing time scales that may be less than the critical time scale, which is given by $$\tau =\tau _0(\xi /a)^z,$$ with $`a`$ the lattice constant and $`\tau _0`$ the characteristic spin-flip time. The dynamical critical exponent $`z\approx 6`$ in 3 dimensions . Only for droplet reversals which take place on time scales larger than $`\tau `$ (i.e. for reversals of droplets whose linear dimensions exceed $`\xi `$) will droplet results for the dynamics be appropriate. However, because of the large values of $`\xi `$ down to temperatures of at least $`0.5T_c`$ and the large value of $`z`$, $`\tau `$ may be very large in the Monte-Carlo simulations and experiments. Thus if $`\xi /a`$ is 100, then $`\tau /\tau _0`$ is $`10^{12}`$, which would make droplet-like dynamics beyond the reach of a Monte-Carlo simulation. In practice, most data will be in a crossover regime leading to an apparently temperature-dependent exponent $`z(T)`$ (see for example Ref ). ###### Acknowledgements. We thank A. P. Young for discussions and for encouraging us to write this paper. Part of this work was performed when HB and BD were at the Department of Physics, University of Manchester, supported by EPSRC Grants GR/K79307 and GR/L38578. BD also acknowledges support from the Minerva foundation.
# Density functional theory of phase coexistence in weakly polydisperse fluids ## Abstract The recently proposed universal relations between the moments of the polydispersity distributions of a phase-separated weakly polydisperse system are analyzed in detail using the numerical results obtained by solving a simple density functional theory of a polydisperse fluid. It is shown that universal properties are the exception rather than the rule. Département de Physique des Matériaux (UMR 5586 du CNRS), Université Claude Bernard-Lyon1, 69622 Villeurbanne Cedex, France <sup>††</sup> Physique des Polymères, Université Libre de Bruxelles, Campus Plaine, CP 223, B-1050 Brussels, Belgium PACS numbers: 05.70.-a, 64.75.+g, 82.60.Lf Many natural or man-made systems are mixtures of similar instead of identical objects. For example, in a colloidal dispersion the size and surface charge of the colloidal particles are usually distributed in an almost continuous fashion around some mean value. When this distribution is very narrow the system can often be assimilated to a one-component system of identical objects. Such a system is usually called monodisperse whereas otherwise it is termed polydisperse. Since polydispersity is a direct consequence of the physico-chemical production process it is an intrinsic property of many industrial systems. Therefore, many authors have included polydispersity into the description of a given phase of such systems. More recently, a renewed interest can be witnessed in the study of phase transitions occurring in weakly polydisperse systems. The phase behavior of polydisperse systems is of course much richer than that of its monodisperse counterpart. It is also more difficult to study theoretically, essentially because one has to cope with an infinity of thermodynamic coexistence conditions. Therefore, several authors have proposed approximation schemes which try to bypass this difficulty. In the present study we take the opposite point of view by solving numerically the infinitely many thermodynamic coexistence conditions for a simple model polydisperse system. On this basis we have studied the radius of convergence of the weak polydispersity expansion used in ref.4 and found that their “universal law of fractionation” and some of their conclusions have to be modified in several cases. The statistical mechanical description of a polydisperse equilibrium system is equivalent to a density functional theory for a system whose number density, $`\rho (𝐫,\sigma )`$, depends, besides the position variable $`𝐫`$ (assuming spherical particles), also on at least one polydispersity variable $`\sigma `$ (which we consider to be dimensionless). Such a theory is completely determined once the intrinsic Helmholtz free-energy per unit volume, $`f[\rho ]`$, has been specified as a functional of $`\rho (𝐫,\sigma )`$ (for notational convenience the dependence on the temperature $`T`$ will not be indicated explicitly). For the spatially uniform fluid phases considered here (and also implicitly in ref.4) we have $`\rho (𝐫,\sigma )\equiv \rho (\sigma )`$, and the pressure can be written as $`p[\rho ]=\int d\sigma \rho (\sigma )\mu (\sigma ;[\rho ])-f[\rho ]`$, where $`\mu (\sigma ;[\rho ])=\delta f[\rho ]/\delta \rho (\sigma )`$ is the chemical potential of “species” $`\sigma `$.
When a parent phase of density $`\rho _0(\sigma )`$ phase separates into $`n`$ daughter phases of density $`\rho _i(\sigma )`$ ($`i=1,\dots ,n`$), the phase coexistence conditions imply that $`p[\rho _1]=p[\rho _2]=\dots =p[\rho _n]`$, and $`\mu (\sigma ;[\rho _1])=\mu (\sigma ;[\rho _2])=\dots =\mu (\sigma ;[\rho _n])`$. For simplicity we consider here only the case of two daughter phases ($`n=2`$) and rewrite moreover $`\rho _i(\sigma )=\rho _ih_i(\sigma )`$ ($`i=0,1,2`$) in terms of the average density $`\rho _i`$ and a polydispersity distribution $`h_i(\sigma )`$ such that $`\int d\sigma h_i(\sigma )=1`$. Since the ideal gas contribution to $`f[\rho ]`$ is exactly known one has $`\mu (\sigma ;[\rho ])=k_BT\mathrm{ln}\{\mathrm{\Lambda }^3(\sigma )\rho (\sigma )\}+\mu _{ex}(\sigma ;[\rho ])`$, where $`k_B`$ is Boltzmann’s constant, $`\mathrm{\Lambda }(\sigma )`$ is the thermal de Broglie wavelength of species $`\sigma `$ and $`\mu _{ex}`$ the excess (ex) contribution to $`\mu `$. This allows us to rewrite the equality of the chemical potentials of the two daughter phases as $`h_1(\sigma )=h_2(\sigma )A(\sigma )`$, where $`A(\sigma )`$ is a shorthand notation for: $$A(\sigma )=\frac{\rho _2}{\rho _1}\mathrm{exp}\beta \left\{\mu _{ex}(\sigma ;[\rho _2])-\mu _{ex}(\sigma ;[\rho _1])\right\}$$ (1) with $`\beta =1/k_BT`$. The polydispersity distributions are further constrained by the relation $`x_1h_1(\sigma )+x_2h_2(\sigma )=h_0(\sigma )`$, which expresses particle number conservation. The number concentration of phase 1, $`x_1=1-x_2`$, is given by the lever rule: $`x_1=\frac{\rho _1}{\rho _1-\rho _2}\frac{\rho _0-\rho _2}{\rho _0}`$. Combining these two relations one finds: $$h_2(\sigma )-h_1(\sigma )=h_0(\sigma )H(\sigma )$$ (2) where $`H(\sigma )\equiv (1-A(\sigma ))/(x_2+x_1A(\sigma ))`$. Eq.(2) is the starting point to relate the difference between the moments of the daughter phases, $`\mathrm{\Delta }_k=\int d\sigma \sigma ^k(h_2(\sigma )-h_1(\sigma ))`$, to the moments, $`\xi _k=\int d\sigma \sigma ^kh_0(\sigma )`$ ($`k=0,1,2,\dots `$), of the parent phase distribution $`h_0(\sigma )`$. Indeed, when $`\sigma `$ is chosen such that $`h_0(\sigma )`$ tends to the Dirac delta function $`\delta (\sigma )`$ in the monodisperse limit, $`\mathrm{\Delta }_k`$ can be obtained from (2) by expanding $`H(\sigma )`$ around $`\sigma =0`$, $`H(\sigma )=\sum_{l=0}^{\infty }a_l\sigma ^l`$, yielding for a weakly polydisperse system, $`\mathrm{\Delta }_k=\sum_{l=0}^{\infty }a_l\xi _{l+k}`$. The normalization of the $`h_i(\sigma )`$ ($`i=0,1,2`$) implies $`\mathrm{\Delta }_0=0`$, $`\xi _0=1`$ or $`a_0=-\sum_{l=1}^{\infty }a_l\xi _l`$, and eliminating $`a_0`$ from $`\mathrm{\Delta }_k`$ yields the general moment relation: $$\mathrm{\Delta }_k=a_1\xi _{k+1}+\sum_{l=2}^{\infty }a_l(\xi _{k+l}-\xi _l\xi _k),$$ (3) where we took moreover into account that $`\sigma `$ can always be chosen such that $`\xi _1=0`$. When only the first term in the r.h.s. of (3) is retained we recover the universal law $`\mathrm{\Delta }_k/\mathrm{\Delta }_l=\xi _{k+1}/\xi _{l+1}`$, put forward in ref.4. The question left unanswered by the study of ref.4 concerns the radius of convergence of the weak polydispersity expansion (3). In order to study this problem in more detail we now consider a simple model system for which we can determine the $`h_i(\sigma )`$ ($`i=1,2`$) numerically and compare the results with (3).
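The moment relation (3) is easy to verify numerically. The sketch below uses an arbitrary smooth, non-polynomial choice of $`H(\sigma )`$, shifted so that $`\mathrm{\Delta }_0=0`$, and compares the exact moment differences with the expansion truncated at $`l=2`$; the function $`G`$ and its coefficients are illustrative, not the van der Waals result derived below.

```python
import numpy as np
from math import lgamma

a1, a2 = -1.75, 2.68   # low-order Taylor coefficients of the toy H(sigma)

def schulz(s, alpha):
    # Schulz distribution with zero mean, defined on -1 <= sigma < infinity
    return np.exp(alpha*np.log(alpha) + (alpha - 1.0)*np.log1p(s)
                  - alpha*(1.0 + s) - lgamma(alpha))

for alpha in (25.0, 100.0, 400.0):
    s = np.linspace(-0.999, 1.5, 400001)
    h0 = schulz(s, alpha)
    xi = [np.trapz(s**k * h0, s) for k in range(6)]
    G = np.tanh(a1*s) + a2*s**2        # Taylor: a1*s + a2*s^2 + O(s^3)
    H = G - np.trapz(h0*G, s)          # constant shift enforces Delta_0 = 0
    for k in (1, 2, 3):
        exact = np.trapz(s**k * h0 * H, s)
        approx = a1*xi[k+1] + a2*(xi[k+2] - xi[2]*xi[k])
        print(alpha, k, exact, approx) # agreement improves as alpha grows
```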
The free energy density functional chosen here corresponds to a simple van der Waals (vdW) model for the liquid-vapor transition in polydisperse systems of spherical particles of variable size: $$f[\rho ]=k_BT\int d\sigma \rho (\sigma )\left\{\mathrm{ln}\left(\frac{\mathrm{\Lambda }^3(\sigma )\rho (\sigma )}{E[\rho ]}\right)-1\right\}+\frac{1}{2}\int d\sigma \int d\sigma ^{\prime }V(\sigma ,\sigma ^{\prime })\rho (\sigma )\rho (\sigma ^{\prime })$$ (4) where $`E[\rho ]=1-\int d\sigma v(\sigma )\rho (\sigma )`$ describes the average excluded volume correction for particles of radius $`R_\sigma `$ and volume $`v(\sigma )=\frac{4\pi }{3}R_\sigma ^3`$, while $`V(\sigma ,\sigma ^{\prime })=\int d𝐫V(r;\sigma ,\sigma ^{\prime })`$ is the integrated attraction between two particles of species $`\sigma `$ and $`\sigma ^{\prime }`$, for which we took the usual vdW form, $`V(r;\sigma ,\sigma ^{\prime })=-ϵ_0(R_\sigma +R_\sigma ^{\prime })^6/r^6`$ for $`r\ge R_\sigma +R_\sigma ^{\prime }`$ and zero otherwise, $`ϵ_0`$ being the amplitude of the attraction at the contact of the two particles. The size-polydispersity can be described in terms of the dimensionless variable $`\sigma =R_\sigma /R-1`$, with $`R`$ the mean value of $`R_\sigma `$ in the parent phase, hence $`\xi _1=\int d\sigma \sigma h_0(\sigma )=0`$. The thermodynamics is given in terms of $`h_0(\sigma )`$, the dimensionless temperature $`t=k_BT/ϵ_0`$ and the dimensionless density $`\eta =v_0\rho `$, with $`v_0=\frac{4\pi }{3}R_0^3`$ and $`R_0`$ the value of $`R_\sigma `$ in the monodisperse limit. The coexistence conditions are integral equations which can be solved numerically using, for instance, an iterative algorithm for any $`t`$, $`\eta _0=v_0\rho _0`$ and $`h_0(\sigma )`$. For $`h_0(\sigma )`$ we took a Schulz distribution with zero mean. The normalized distribution is given, for $`-1\le \sigma <\infty `$, by $`h_0(\sigma )=\alpha ^\alpha (1+\sigma )^{\alpha -1}e^{-\alpha (1+\sigma )}/\mathrm{\Gamma }(\alpha )`$, with $`\mathrm{\Gamma }(\alpha )`$ the gamma function and $`1/\alpha `$ a width parameter which measures the distance to the monodisperse limit, $`h_0(\sigma )\to \delta (\sigma )`$ when $`\alpha \to \infty `$. We then have: $`\xi _0=1`$, $`\xi _1=0`$, $`\xi _2=1/\alpha `$, $`\xi _3=2/\alpha ^2`$, $`\xi _4=\frac{3}{\alpha ^2}+\frac{6}{\alpha ^3}`$, $`\xi _5=\frac{20}{\alpha ^3}+\frac{24}{\alpha ^4}`$, etc. For a weakly polydisperse system we retain only the dominant terms of (3) in a $`1/\alpha `$ expansion. From (3) we then obtain: $`\mathrm{\Delta }_1=a_1(\infty )\xi _2+O(1/\alpha ^2)`$, $`\mathrm{\Delta }_2=a_1(\infty )\xi _3+a_2(\infty )(\xi _4-\xi _2^2)+O(1/\alpha ^3)=\{a_1(\infty )+a_2(\infty )\}\xi _3+O(1/\alpha ^3)`$, $`\mathrm{\Delta }_3=a_1(\infty )\xi _4+O(1/\alpha ^3)`$, etc, where $`a_l(\infty )`$ are the values of $`a_l`$ for $`\alpha \to \infty `$. Using the vdW expression (4) to evaluate (1) one finds, for example for $`t=1.0`$ and $`\eta _0=0.5`$, $`a_1(\infty )=-1.75`$ and $`a_2(\infty )=2.68`$. Using the corresponding numerical solutions found for $`h_1(\sigma )`$ and $`h_2(\sigma )`$ (see Fig. 1) it can be seen from Fig. 2 that $`\mathrm{\Delta }_1/\xi _2\approx -1.75`$, $`\mathrm{\Delta }_2/\xi _3\approx 0.93`$ and $`\mathrm{\Delta }_3/\xi _4\approx -1.75`$ are obeyed to within ten percent for $`\alpha `$ larger than, respectively, 40, 80 and 150. We can thus conclude that the weak polydispersity expansion (3) is valid (to dominant order) for Schulz distributions $`h_0(\sigma )`$ with a dispersion $`\left((\xi _2-\xi _1^2)^{1/2}\right)`$ smaller than, say, 0.1 ($`\alpha \gtrsim 100`$).
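A one-line check of the $`1/\alpha `$ ordering used in this step (e.g. that $`\xi _4-\xi _2^2`$ differs from $`\xi _3`$ only at relative order $`3/\alpha `$):

```python
# Quoted Schulz central moments and the 1/alpha ordering used above.
alpha = 50.0
xi2 = 1/alpha
xi3 = 2/alpha**2
xi4 = 3/alpha**2 + 6/alpha**3
print(xi4 - xi2**2, xi3)             # 2/a^2 + 6/a^3  vs  2/a^2
print((xi4 - xi2**2 - xi3)/xi3)      # relative correction = 3/alpha
```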
These values do of course depend on the thermodynamic state, but the case considered here ($`t=1`$, $`\eta _0=0.5`$) is representative of other $`t,\eta _0`$ values. Note also that we have verified numerically that the radius of convergence of (3) with respect to $`1/\alpha `$ is fairly sensitive to the total amount of polydispersity present. Allowing, for instance, the amplitude $`ϵ_0`$ of the pair potential $`V(r;\sigma ,\sigma ^{\prime })`$ to depend on $`\sigma `$ and $`\sigma ^{\prime }`$ does reduce the radius of convergence of (3) considerably. From the above it follows that $`\frac{\mathrm{\Delta }_3}{\mathrm{\Delta }_1}`$ follows the universal law, $`\frac{\mathrm{\Delta }_3}{\mathrm{\Delta }_1}=\frac{\xi _4}{\xi _2}`$, put forward in ref.4, whereas $`\frac{\mathrm{\Delta }_2}{\mathrm{\Delta }_1}`$ follows the non-universal law, $`\frac{\mathrm{\Delta }_2}{\mathrm{\Delta }_1}=\{1+\frac{a_2(\infty )}{a_1(\infty )}\}\frac{\xi _3}{\xi _2}`$. We have verified that similar results can be obtained for different $`h_0(\sigma )`$ distributions. Taking, for instance, a Gaussian for $`h_0(\sigma )`$, similar results are found, although $`\xi _3=0`$ in this case. This invalidates the conclusion of ref.4 that a particular importance should be attached to the skewness of $`h_0(\sigma )`$. In conclusion, the general moment relation (3) can yield useful information about the phase behavior of weakly polydisperse systems, but this information is in general not universal. Figure Captions FIG. 1. The polydispersity distributions $`h_n(\sigma )`$ of the parent phase ($`n=0`$: full curve) (a Schulz distribution with the width parameter $`\alpha =50`$), the low-density ($`n=1`$: dotted curve) and the high-density ($`n=2`$: circles) daughter phases, as obtained by numerically solving the coexistence conditions of the van der Waals model of eq.(4) for $`t=1`$, $`\eta _0=0.5`$. The corresponding dimensionless densities of the coexisting daughter phases are $`\eta _1=0.106`$, $`\eta _2=0.521`$, whereas for the monodisperse system one has $`\eta _1=0.103`$, $`\eta _2=0.608`$. Also shown are $`h_1(\sigma )-h_0(\sigma )`$ (dashed curve) and $`[h_2(\sigma )-h_0(\sigma )]\times 50`$ (triangles). FIG. 2. The ratio $`\mathrm{\Delta }_k/\xi _{k+1}`$ ($`k=1,2,3`$) versus $`1/\alpha `$ as obtained from the numerical solution of the van der Waals model of eq.(4) for $`t=1`$, $`\eta _0=0.5`$ and a Schulz distribution for $`h_0(\sigma )`$. The symbols are as follows: circles ($`k=1`$), squares ($`k=2`$) and triangles ($`k=3`$). The dotted lines indicate their asymptotic ($`\alpha \to \infty `$) values. The arrows indicate for each case the radius of convergence of the weak polydispersity expansion of eq.(3).
# The Formation of the Hubble Sequence of Disk Galaxies: The Effects of Early Viscous Evolution ## 1 Introduction The current picture of disk galaxy formation and evolution has as its basis the dissipative infall of baryons within a dominant dark halo potential well (White & Rees 1978). The collapse and spin-up of the baryons, with angular momentum conservation, can provide an explanation for many of the observed properties of disks, with the standard initial conditions of baryonic mass fraction $`F\approx 0.1`$ and dark halo angular momentum parameter $`\lambda \approx 0.07`$ (Fall & Efstathiou 1980; Gunn 1982; Jones & Wyse 1983; Dalcanton, Spergel & Summers 1997; Hernandez & Gilmore 1998; Mo, Mao & White 1998; van den Bosch 1998). Galaxies such as the Milky Way, which have an old stellar population in the disk, must, within the context of a hierarchical-clustering scenario, evolve through only quiescent merging/accretion, so as to avoid excessive heating and disruption of the disk (Ostriker 1990). Further, the merging processes with significant substructure cause angular momentum transport to the outer regions, which must somehow be suppressed to allow the formation of extended disks as observed (e.g. Zurek, Quinn & Salmon 1988; Silk & Wyse 1993; Navarro & Steinmetz 1997). Thus here we adopt the simplified picture that disk galaxies form from smooth gaseous collapse to centrifugal equilibrium, within a steady dark halo potential. We discuss where appropriate below how this may be modified to take account of subsequent infall, or earlier star formation. Our model incorporates the adiabatic response of the dark halo to the disk infall, and we provide new, more general, analytic solutions for the density profile, given a wide range of initial density profiles and angular momentum distributions. We provide new insight into the ‘disk-halo’ conspiracy within the context of this model, demonstrating how an imperfect conspiracy is improved by the disk-halo interaction. We explicitly include subsequent viscous evolution of the gas disk to provide the exponential profile of the stellar disk, and develop analytic expressions that illustrate the process. The resulting radial inflow builds up the central regions of the disk and we investigate the properties of ‘bulges’ that may form as a consequence of instabilities of the central disk. We derive new constraints on the characteristic redshift of disk star formation. We obtain a simple relation connecting the initial conditions, such as spin parameter and baryonic mass fraction, to the efficiency of viscous evolution and star formation. ## 2 The Disk Galaxy Formation Model In this section we shall derive the mass profiles of disk and halo after the collapse of the baryons. We shall follow earlier treatments of disk galaxy formation (e.g. Mo, Mao & White 1998) by assuming that the virialized dark halo, mixed with baryonic gas, is ‘formed’ – or at least assembled – at redshift $`z_f`$. This virialized halo has a limiting radius $`r_{200}`$ within which the mean density is $`200\rho _{crit}(z_f)`$, and contains a baryonic mass fraction $`F`$. Then $$r_{200}=\frac{V_{200}}{10H(z_f)};M_{tot}=\frac{V_{200}^2r_{200}}{G}=\frac{V_{200}^3}{10GH(z_f)},$$ (1) where $`H(z_f)`$ is the value of the Hubble parameter at redshift $`z_f`$, $`M_{tot}(z_f)`$ is the total mass within the virialized radius $`r_{200}`$, and $`V_{200}`$ is the circular velocity at $`r_{200}`$. The baryonic gas cools and settles into a disk, causing the dark halo to contract adiabatically (Blumenthal et al. 1986).
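For orientation, Eq. (1) is trivial to evaluate; the sketch below assumes an Einstein-de-Sitter cosmology, $`H(z)=H_0(1+z)^{3/2}`$, with illustrative values of $`h`$, $`V_{200}`$ and $`z_f`$ that are not taken from this paper.

```python
# Virial radius and total mass from Eq. (1), Einstein-de-Sitter assumed.
G = 4.301e-6                 # kpc (km/s)^2 / Msun

def H(z, h=0.7):             # Hubble parameter in km/s/kpc
    return 100.0*h*(1.0 + z)**1.5 / 1000.0

V200, zf = 200.0, 2.0        # km/s, assembly redshift (example values)
r200 = V200 / (10.0*H(zf))   # kpc
Mtot = V200**2 * r200 / G    # Msun
print(r200, Mtot)            # ~55 kpc and ~5e11 Msun for these values
```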
The specific angular momentum distribution of the gas is assumed to be conserved during these stages. We shall include below the subsequent re-arrangement of the disk due to angular momentum transport. This we investigate by variation of the disk angular momentum distribution function, choosing an appropriate analytic functional form. Let $`m_d(r)`$ and $`m_h(r)`$ respectively denote the fraction of the total baryonic mass, and total dark mass, that is contained within radius $`r`$, and denote the baryonic mass angular momentum distribution function by $$m_d\left(<j\right)=f\left(j/j_{max}\right),$$ (2) where $`j_{max}`$ is the maximum specific angular momentum of the disk. We will be requiring that the functional form, $`f(j/j_{max})`$, vary as the disk evolves, and it is convenient to introduce the notation $`\ell \equiv j/j_{max}`$ and define $$c_f\equiv 1-\int _0^1f\left(\ell \right)d\ell ,$$ (3) which represents the area above the angular momentum distribution function curve $`f(\ell )`$ for $`0\le \ell \le 1`$. We will mimic the effects of viscous evolution by decreasing the value of $`c_f`$ in our evolving disk models in section 4 below. In terms of this parameter the total disk angular momentum is: $`J_d`$ $`=`$ $`FJ_{tot}=M_d{\displaystyle \int _0^{j_{max}}}j{\displaystyle \frac{dm_d}{dj}}dj`$ (4) $`=`$ $`M_dj_{max}\left(1-{\displaystyle \int _0^1}f\left(\ell \right)d\ell \right)=FM_{tot}j_{max}c_f,`$ with $`M_d=FM_{tot}`$. Thus $`j_{max}c_f`$ is the average specific angular momentum of the disk material. The specific angular momentum of the disk material is assumed to follow that of the dark halo, but in general will not be a simple analytic function (e.g. Quinn & Binney 1992). For illustration, we adopt an analytic monotonic increasing function $`f(b,\ell )`$ containing a free parameter $`b`$, with $`0\le b\le 1`$. We require the initial angular momentum distribution to be scale free, representing the angular momentum distribution of the virialized halo, and will adopt $`f(b=0,\ell )=\ell ^n`$. We shall vary the value of the parameter $`b`$ to mimic the effects of viscous evolution on the angular momentum distribution. The total energy is: $$E_{tot}=-\frac{ϵ_0GM_{tot}^2}{2r_{200}}=-\frac{ϵ_0M_{tot}V_{200}^2}{2},$$ (5) where $`ϵ_0`$ is a constant of order unity, depending on the dark halo density profile, and since it is constant for any specific halo model, we can take $`ϵ_0=1`$ without loss of generality. The spin parameter $`\lambda `$ is by definition $$\lambda \equiv J_{tot}|E_{tot}|^{1/2}G^{-1}M_{tot}^{-5/2}.$$ (6) Thus the mean specific angular momentum of the disk material may be expressed as $$c_fj_{max}=\sqrt{2}\lambda V_{200}r_{200}.$$ (7) Assuming spherical symmetry, the rotationally-supported disk has a mass profile given by $$m_d=f\left(j/j_{max}\right)=f\left(\frac{\sqrt{GM_{tot}\left(m_d\right)r\left(m_d\right)}}{j_{max}}\right)=f\left(\ell \right),$$ (8) while the initial virialized halo mass profile is $$g(R_{ini})=m_h(R_{ini})=M_{ini}(R_{ini})/M_{tot},$$ (9) with $`R\equiv r/r_{200}`$. ### 2.1 Constraints on the Final Dark Halo Profile and Mass Angular Momentum Distribution Function The above equations describe the disk and halo just upon the settling of the gas disk to the mid-plane, prior to the subsequent adiabatic compression of the halo.
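The definition (3) is easily evaluated numerically; for instance, for the scale-free form $`f=\ell ^n`$ with $`n=1`$ one finds $`c_f=1/2`$, and for the solid-body sphere distribution discussed below one recovers $`c_f=0.4`$:

```python
import numpy as np

# c_f = 1 - int_0^1 f(l) dl for two example angular momentum distributions.
l = np.linspace(0.0, 1.0, 100001)

def c_f(f_values):
    return 1.0 - np.trapz(f_values, l)

print(c_f(l**1.0))                 # scale-free, n=1: c_f = 1/2
print(c_f(1.0 - (1.0 - l)**1.5))   # solid-body sphere (Mestel): c_f = 2/5
```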
A self-consistent calculation of the modified disk and halo density profiles may be made by consideration of the adiabatic invariance of the angular action, $`I_\theta \propto v_\theta r=\sqrt{GM_{tot}(r)r}`$ (for circular orbits), together with the assumption of no shell crossing (cf. Blumenthal et al. 1986). Suppose a dark matter particle initially at $`r_{ini}`$ finally settles at $`r(m_h)`$, the radius within which the dark halo mass fraction is $`m_h`$. Then under adiabatic invariance the disk mass profile, $`m_d`$, and the halo mass profile, $`m_h`$, are related by: $`GM_{tot}\left(m_d\right)r\left(m_d\right)`$ $`=`$ $`GM_{tot}\left(m_h\right)r\left(m_h\right)`$ (10) $`=`$ $`GM_{ini}\left(m_h\right)r_{ini}\left(m_h\right)`$ (11) $`=`$ $`GM_{tot}m_hr_{200}g^{-1}\left(m_h\right),`$ (12) where $`g^{-1}(m_h)`$ is the inverse function of $`g(R)`$, the initial virialized halo mass profile. Further manipulation of these relations is simplified by introduction of the parameter $`\xi `$, given by $$\xi \equiv \frac{\sqrt{GM_{tot}r_{200}}}{j_{max}}=\frac{c_f}{\sqrt{2}\lambda }.$$ (13) Generally $`\xi `$ is a quantity that is closely related to the overall disk collapse factor. From equations (8) and (12), we have $`m_d`$ $`=`$ $`f(\ell ),`$ (14) $`\ell `$ $`=`$ $`\xi \left(m_hg^{-1}(m_h)\right)^{1/2},`$ (15) where $`\ell `$ is the normalized specific angular momentum. Again, $`0\le \ell \le 1`$, and $`\ell =1`$ corresponds to the maximum specific angular momentum of the disk, which occurs at the edge of the disk, equivalently at the disk cutoff radius. The fraction of the dark matter contained within the disk thus has a maximum value, $`m_{hc}`$, given by $`\ell =1`$ in the above equation, and for radii with $`m_h\ge m_{hc}`$, $`m_d=1`$. To illustrate the physical meaning of these parameters, consider the rigid singular isothermal halo, for which $`g(m_{hc})=m_{hc}=R_c`$. Then from equation (15), $`R_c=R(\ell =1)=1/\xi `$, which corresponds to the disk cutoff radius. Thus in this case, remembering that $`R`$ is the normalized radius, $`\xi =1/R_c`$ is the disk collapse factor. Up to now we know the mass profile of disk and halo after collapse, in terms of the normalized specific angular momentum $`\ell `$, as given in equations (14) and (15), for given forms of the angular momentum distribution function, $`f`$, and initial virialized dark halo mass profile, $`g`$. Next we shall obtain the relation between $`\ell `$ and radius $`R`$, to complete the derivation of the mass profiles of disk and halo after collapse. Returning to a general halo density profile, the total mass contained within the radius corresponding to $`m_h`$ is $$M(m_h)=M_{tot}\left((1-F)m_h+Fm_d\right).$$ (16) From equations (9) - (12) and (16), we have $`R`$ $`=`$ $`{\displaystyle \frac{GM_{ini}(m_h)r_{ini}(m_h)}{GM(m_h)r_{200}}}`$ (17) $`=`$ $`{\displaystyle \frac{m_hg^{-1}(m_h)}{(1-F)m_h+Fm_d}}.`$ (18) Introducing the radius variable $`x`$ and coefficient $`c_0`$ as $`x`$ $`\equiv `$ $`\xi (1-F)R,`$ (19) $`c_0`$ $`\equiv `$ $`\xi F/(1-F),`$ (20) we may finally derive the functional dependences on $`\ell `$ of the disk mass $`m_d`$, of the halo mass $`m_h`$, and of the radius $`x`$: $`m_d`$ $`=`$ $`f(\ell ),`$ (21) $`m_hg^{-1}(m_h)`$ $`=`$ $`{\displaystyle \frac{\ell ^2}{\xi ^2}},`$ (22) $`x`$ $`=`$ $`{\displaystyle \frac{\ell ^2}{\xi m_h(\ell )+c_0f(\ell )}}.`$ (23) Thus $`\ell `$ can be thought of as a normalized radius.
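A minimal sketch of solving Eqs. (21) - (23), here for the singular isothermal initial halo, $`g(R)=R`$ (so that $`g^{-1}(m)=m`$ and $`m_h=\ell /\xi `$), and the scale-free distribution $`f(\ell )=\ell ^n`$; the parameter values are the fiducial ones used later in the text.

```python
import numpy as np

lam, F, n = 0.06, 0.1, 1.0
c_f = n/(n + 1.0)                     # c_f = 1 - int_0^1 l^n dl
xi  = c_f/(np.sqrt(2.0)*lam)
c0  = xi*F/(1.0 - F)

l   = np.linspace(1e-4, 1.0, 2000)    # normalized specific angular momentum
m_d = l**n                            # Eq. (21)
m_h = l/xi                            # Eq. (22) for the isothermal halo
x   = l**2/(xi*m_h + c0*m_d)          # Eq. (23)
R   = x/(xi*(1.0 - F))                # r/r_200 inside the disk
V_c = l/(xi*R)                        # circular velocity in units of V_200

# disk cutoff radius, computed from the profile and from Eq. (32) below
print(R[-1], 1.0/(xi*(1.0 - F)*(1.0 + c0)))
```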
As we shall see later, $`c_0`$ is a measure of the compactness of the final collapsed disk due to the competition between the spin parameter $`\lambda `$ and the baryonic mass fraction $`F`$. These equations (21) - (23) can be used to derive disk and halo properties for a free choice of virialized halo profile $`g(R)`$ and angular momentum distribution function $`f(\ell )`$. Within the disk cutoff radius, with $`0\le \ell \le 1`$, the disk surface density, circular velocity and the disk-to-dark mass ratio as functions of radius $`\ell `$ or $`R`$ have the generic forms: $`\mathrm{\Sigma }_d`$ $`=`$ $`{\displaystyle \frac{10H(z)FV_{200}}{2\pi G}}{\displaystyle \frac{1}{R}}{\displaystyle \frac{df}{d\ell }}{\displaystyle \frac{d\ell }{dR}},`$ (24) $`V_c`$ $`=`$ $`{\displaystyle \frac{V_{200}\ell }{\xi R}},`$ (25) $`{\displaystyle \frac{M_d(\ell )}{M_h(\ell )}}`$ $`=`$ $`{\displaystyle \frac{c_0f(\ell )}{\xi m_h(\ell )}},`$ (26) where $`M_d(\ell )=M_d(\ell =1)m_d(\ell )=M_dm_d(\ell )`$ and $`M_h(\ell )`$ is defined similarly. The circular velocity at radii beyond the disk cutoff, but within the halo, is given by: $`V_c`$ $`=`$ $`{\displaystyle \frac{V_{200}\sqrt{m_hg^{-1}(m_h)}}{R}},`$ (27) $`R`$ $`=`$ $`{\displaystyle \frac{m_hg^{-1}(m_h)}{(1-F)m_h+F}}.`$ (28) Armed with these relations, one may now look at various initial virialized halo density profiles and angular momentum distributions, and determine the allowed parameter space from observed properties of disk galaxies. It is convenient to adopt power-law approximations for the initial virialized halo mass profile and angular momentum distributions, such that $`m_h(\ell )\propto \ell ^m`$ and $`f(\ell )\propto \ell ^n`$ for small $`\ell `$. Figure 1 shows the location of various fiducial models in the plane of these power law indices $`m`$ and $`n`$; the value $`m=1`$ corresponds to the singular isothermal sphere, the value $`m=4/3`$ corresponds to the Hernquist (1990) and to the Navarro, Frenk & White (1997) profiles, while the value $`m=3/2`$ corresponds to the non-singular isothermal sphere with a constant-density core. These profiles span the range of dark-halo profiles suggested by theory, and plausibly consistent with observations. The shaded region is the allowed parameter space for these models, constrained by the surface-density profile, rotation curve and disk-to-halo central mass ratio. The line ABC is the maximum angular-momentum index consistent with a disk surface density profile in the central regions that declines with increasing radius, while the line DEF is the minimum angular-momentum index consistent with a finite value of the central circular velocity. The line BF denotes the maximum values of $`n`$ consistent with a non-zero central disk-to-halo mass ratio. This power-law approximation has an initial virialized halo density profile at small radius of $`\rho _{h,ini}(R)\propto R^{\frac{m}{2-m}-3}`$ (seen by solution of (22) for the form of $`g`$). Solving equations (21) - (23) above gives the corresponding profile after adiabatic infall. In the central region, where the disk dominates the gravitational potential (i.e. $`c_0f(\ell )\gg \xi m_h(\ell )`$), the halo density profile is $`\rho _h(R)\propto R^{\frac{m}{2-n}-3}`$. Note that in the region where the dark halo dominates the gravitational potential (i.e. $`c_0f(\ell )\ll \xi m_h(\ell )`$), the halo density profile is essentially unaffected by the disk, as expected.
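The central halo slope $`m/(2-n)-3`$ can be tabulated directly for the fiducial combinations:

```python
# Central halo density slope rho_h ~ R^(m/(2-n) - 3) for fiducial
# profile / angular-momentum index combinations discussed in the text.
for m, n, label in [(1.0, 1.0, "singular isothermal, n=m"),
                    (4.0/3.0, 4.0/3.0, "NFW/Hernquist, n=m"),
                    (1.5, 4.0/3.0, "cored halo, n=4/3")]:
    print(label, m/(2.0 - n) - 3.0)
# -> -2.0, -1.0, -0.75: spanning the range quoted below
```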
For the case $`n=m`$, the central halo density profile is unaffected since the disk mass density profile and the initial virialized halo density profile have the same dependence on $`\ell `$. The viable models within the shaded region have $`n\le m`$, so that the final halo density profile in the central, disk-dominated region should be steeper than its initial virialized profile in this region, not surprisingly. Thus the final halo profile for these models ranges from $`\rho _h\propto R^{-0.75}`$ to $`\rho _h\propto R^{-2}`$. It is interesting to note that $`\rho _h\propto R^{-0.75}`$, the outcome of an initial virialized halo with a constant density core responding to the settling of a disk with angular momentum index $`n=4/3`$, corresponds to the de-projected de Vaucouleurs central density profile. Thus provided the virialized halo does not have a declining density profile with decreasing radius, which is unphysical, the final dark halo cannot have a constant density core, at least in the very central region where the disk dominates, but should be cuspy. ### 2.2 The Singular Isothermal Sphere The singular isothermal sphere provides a virialized halo density profile that is the most tractable analytically, and we can obtain some important scaling relations without having to specify the angular momentum distribution $`f(\ell )`$; aspects of the analysis of this profile should hold in general, and provide insight. The final disk and halo mass profiles are given by solution of: $`x`$ $`=`$ $`{\displaystyle \frac{\ell }{1+c_0\frac{f(\ell )}{\ell }}},`$ (29) $`m_h`$ $`=`$ $`\ell /\xi ,`$ (30) $`m_d`$ $`=`$ $`f(\ell ),`$ (31) where $`0\le \ell \le 1`$ and $`c_0`$ and $`\xi `$, the parameters describing the compactness of the collapsed disk and its collapse factor, are defined above in equations (20) and (13). The collapse factor, defined as the ratio of the pre-collapse radius $`r_{200}`$ to the cutoff disk radius $`r_c`$ at $`\ell =1`$ (to be distinguished from the final disk scale length) is $$\frac{r_{200}}{r_c}=\xi (1-F)(1+c_0)=\frac{c_f}{\sqrt{2}\lambda }\left(1-F+\frac{c_f}{\sqrt{2}}\frac{F}{\lambda }\right),$$ (32) where $`c_f`$, defined in equation (3), is a measure of the shape of the angular momentum distribution, and small values of $`c_f`$ mean steeply-rising angular momentum distributions. Typical angular momentum distributions for disks are shown in Figure 2 (a,b) and discussed below. Here one should just note that a value $`c_f\approx 0.5`$ is reasonable. With this, and $`\lambda =0.06`$ and $`F=0.1`$, we obtain $`\xi =5.9`$ and $`c_0=0.66`$. The collapse factor as defined here is then about a factor of 9. For a fixed angular momentum distribution $`c_f`$, the collapse factor depends not only on $`1/\lambda `$ but also on the compactness $`c_0\propto F/\lambda `$. This is consistent with the results of previous studies of the collapse factor in two extreme cases (Jones & Wyse 1983; Peebles 1993): if the final disk is so self-gravitating that the value of $`F/\lambda `$ corresponds to $`c_0\gg 1`$, then the collapse factor is $`\propto F/\lambda ^2`$; on the other hand if the final disk is sufficiently far from self-gravitating, with small $`F/\lambda `$ and $`c_0\ll 1`$, the collapse factor is $`\propto 1/\lambda `$. The above collapse factor relationship is valid over the entire range of values for the parameters $`\lambda `$ and $`F`$ (and is the most general relation derived to date).
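The quoted numbers follow directly from Eqs. (13), (20) and (32):

```python
import numpy as np

# Collapse factor of Eq. (32) for the fiducial numbers quoted above
# (c_f = 0.5, lambda = 0.06, F = 0.1).
c_f, lam, F = 0.5, 0.06, 0.1
xi = c_f/(np.sqrt(2.0)*lam)
c0 = xi*F/(1.0 - F)
print(xi, c0, xi*(1.0 - F)*(1.0 + c0))   # -> ~5.9, ~0.66, ~8.8
```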
It should be noted that varying the normalized angular momentum distribution by varying $`c_f`$ changes the derived collapse factor; this is investigated further below. Comparison between the sizes of observed disks and those predicted from such collapse calculations provides a constraint on the redshift at which the collapse happened (e.g. Mo, Mao & White 1998). Our disk cutoff radius for non-trivial $`f(\ell )`$ may be expressed in terms of the initial conditions as:

$$r_c=\frac{\sqrt{2}\lambda V_{200}}{10H(z_f)c_f(1-F+\frac{c_f}{\sqrt{2}}\frac{F}{\lambda })},$$ (33)

where $`z_f`$ is the 'formation' or assembly redshift, at which the halo is identified to have a given mass and circular velocity, with no mass infall after this epoch. One can see from this relation that both $`\lambda `$ and $`F/\lambda `$ are equally important; previous determinations considered the baryonic mass fraction fixed (Dalcanton, Spergel & Summers 1997; Mo, Mao & White 1998). The explicit inclusion of the parameter $`c_f`$ allows us to take account of disk evolution, as gas is transformed into stars. Adopting a disk cutoff radius at three disk scale-lengths (e.g. van der Kruit 1987), and choosing specific values of the present-day stellar disk scale-length $`r_d=3.5`$ kpc, $`\lambda =0.06`$, $`V_{200}=200`$ kms<sup>-1</sup> and an Einstein-de-Sitter Universe with Hubble constant $`0.5<h<1`$, the above relation gives the formation redshift $`1.6<z_f<3.1`$ for $`F=0.05`$ and $`c_f=1/3`$; $`1.4<z_f<2.8`$ for $`F=0.1`$ and $`c_f=1/3`$; $`0.9<z_f<2.0`$ for $`F=0.05`$ and $`c_f=1/2`$; and $`0.7<z_f<1.7`$ for $`F=0.1`$ and $`c_f=1/2`$. Consistent with previous calculations, for fixed formation redshift smaller values of $`\lambda `$ lead to smaller disk sizes; we have here explicitly demonstrated that larger $`F/\lambda `$ can also lead to this result. Lower values of $`c_f`$ also lead to a higher redshift of formation. Thus with no viscous evolution, i.e. no angular momentum redistribution during the evolution of the galactic disk so that $`c_f`$ has a time-independent value, the formation redshift $`z_f`$ determined by assuming fixed initial $`\lambda `$ and $`F`$ and fixed disk size $`r_c`$ is smaller than would be determined if $`c_f`$ could be decreased (to mimic, say, viscous evolution). Lower values of $`c_f`$ for given total angular momentum content imply a larger disk scale-length; the effect of viscous evolution is to re-arrange the disk material so as to increase the disk scale-length. Typical values of viscosity parameters lead to a factor of 1.5 increase in disk scale-length in a Hubble time. As we demonstrate below, a smaller value of $`c_f`$ corresponds to a larger bulge-to-disk ratio. This result is consistent with the results of Mo, Mao & White (1998): the halo and disk formation redshift can be pushed to higher values when a bulge is included. These trends are general, and not tied to the specific model of the halo density profile. One should bear in mind that the old stars in the local thin disk of the Milky Way have ages of at least 10 Gyr, and may be as old as the oldest stars in the Galaxy (Edvardsson et al. 1993); the age distribution of stars at other locations in the Galactic disk is very poorly determined, but it is clear that a non-trivial component of the thin disk was in place at early times (at redshift $`z>2`$ for the cosmologies considered above).
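The formation redshifts quoted above can be checked numerically from equation (33), assuming an Einstein-de-Sitter universe with $`H(z)=H_0(1+z)^{3/2}`$ and the same inferred closed forms for the collapse parameters as before:

```python
import math

def formation_redshift(r_d_kpc, lam, F, c_f, V200_kms, h):
    """Solve eq. (33) for z_f in an Einstein-de-Sitter universe,
    with the disk cutoff radius taken as three scale lengths."""
    r_c = 3.0 * r_d_kpc
    H_zf = math.sqrt(2.0) * lam * V200_kms / (
        10.0 * r_c * c_f * (1.0 - F + c_f / math.sqrt(2.0) * F / lam))  # km/s/kpc
    H0 = 100.0 * h / 1000.0                                             # km/s/kpc
    return (H_zf / H0) ** (2.0 / 3.0) - 1.0

for F, c_f in [(0.05, 1 / 3), (0.1, 1 / 3), (0.05, 0.5), (0.1, 0.5)]:
    z_min = formation_redshift(3.5, 0.06, F, c_f, 200.0, h=1.0)
    z_max = formation_redshift(3.5, 0.06, F, c_f, 200.0, h=0.5)
    print(f"F={F}, c_f={c_f:.2f}: {z_min:.1f} < z_f < {z_max:.1f}")
# -> reproduces the four quoted ranges, e.g. 1.6 < z_f < 3.1 for F=0.05, c_f=1/3
```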
A common assumption in previous work is that the mass angular momentum distribution of the disk is that of a solid-body, rotating uniform-density sphere, $`f(\ell )=1-(1-\ell )^{3/2}`$ (Mestel 1963). For this distribution $`c_f=0.4`$, and one derives a low redshift of formation for a galaxy like the Milky Way (Mo, Mao & White 1998), which has difficulties with the observations.

#### 2.2.1 The Singular Isothermal Halo with Simple $`f(\ell )`$

Analytic solutions to equations (29)-(31) can be obtained by assuming a simple monotonically increasing function $`f(b,\ell )`$ containing one parameter $`b`$ with $`0\le b\le 1`$. In order to avoid the situation where one obtains a trivial collapse factor due to a very small amount of disk material at very large radius, we restrict the shape of the mass angular momentum distribution function $`f(\ell )`$ to avoid too shallow an asymptotic slope as $`f`$ approaches 1 with increasing $`\ell `$ (see Figure 2). From Figure 1, the singular isothermal halo at point F requires $`f(\ell )\propto \ell `$ for $`\ell \ll 1`$. A simple form consistent with this is $`f(\ell )=(1+b)\ell -b\ell ^2`$ with $`0\le \ell \le 1`$ and $`0\le b\le 1`$. The angular momentum parameter is $`c_f=(3-b)/6`$. Figure 2a shows this angular momentum distribution function for different values of the parameter $`b`$, compared with the mass angular momentum distribution of a solid-body, rotating uniform-density sphere, $`f(\ell )=1-(1-\ell )^{3/2}`$ (Mestel 1963). The normalized total angular momentum $`c_f`$ corresponds to the area above each curve. Within the disk, where $`0\le \ell \le 1`$, the galactic disk surface density, circular velocity and the disk-to-dark mass ratio as a function of radius are:

$$\mathrm{\Sigma }_d(\ell )=\frac{10V_{200}H(z)F(1-F)^2\xi ^2}{\pi G}\times \frac{(1+b-2b\ell )(1+c_0+c_0b-c_0b\ell )^3}{2\ell (1+c_0+c_0b)},$$ (34)

$$V_c(\ell )=V_{200}(1-F)[1+c_0(1+b-b\ell )],$$ (35)

$$\frac{M_d(\ell )}{M_h(\ell )}=\frac{(1+b-b\ell )(3-b)F}{6\sqrt{2}\lambda (1-F)},$$ (36)

where

$$\ell =\frac{[1+c_0(1+b)]\xi (1-F)R}{1+bc_0\xi (1-F)R}.$$ (37)

From equations (27)-(28), the circular velocity at radii beyond the disk cutoff, but within the halo, is:

$$V_c(R)=\frac{V_{200}(1-F)}{2}\left[1+\sqrt{1+\frac{4F}{(1-F)^2R}}\right].$$ (38)

These results are plotted in Figure 3 (a,b,c) for $`\lambda =0.06`$, $`F=0.1`$ and $`0\le \ell \le 1`$. The different curves correspond to $`b=0,1/4,1/2,3/4,1`$. Larger values of $`b`$ yield larger disk cut-off radii; note that the circular velocities at points beyond the cut-off radius of a given model may be obtained by forming the envelope of the values at the cut-off radius for larger values of $`b`$. Varying the value of the parameter $`b`$ changes the angular momentum distribution function in a way similar to the effects of viscous evolution; the ratio of disk mass to dark halo mass at small radius increases with increasing $`b`$, which can be interpreted as due to radial inflow of disk material.

#### 2.2.2 Halo Density Profile and Angular Momentum Combinations

For initial virialized halo profiles other than the singular isothermal sphere, the collapse factor has the more general form:

$$\frac{r_{200}}{r_c}=\xi (1-F)(\xi m_{hc}+c_0),$$ (39)

where $`m_{hc}`$ is the solution of $`1=\xi ^2m_{hc}g^{-1}(m_{hc})`$ and is the mass fraction of the dark halo that is contained within the cut-off radius of the disk.
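The analytic solution (34)-(38) is simple enough to evaluate directly. The sketch below computes the normalized circular velocity, disk-to-halo mass ratio and radius from equations (35)-(37); the closed forms for $`\xi `$ and $`c_0`$ are again the inferred ones, so treat them as assumptions:

```python
import math

def sis_disk(ell, lam=0.06, F=0.1, b=0.5):
    """Normalized disk quantities inside the cutoff radius for the singular
    isothermal halo with f(l) = (1+b) l - b l^2 (Section 2.2.1).
    Returns V_c/V200 (eq. 35), M_d/M_h (eq. 36), and R/r200 (eq. 37 inverted)."""
    c_f = (3.0 - b) / 6.0
    xi = c_f / (math.sqrt(2.0) * lam)
    c0 = xi * F / (1.0 - F)
    vc = (1.0 - F) * (1.0 + c0 * (1.0 + b - b * ell))
    ratio = (1.0 + b - b * ell) * (3.0 - b) * F / (6.0 * math.sqrt(2.0) * lam * (1.0 - F))
    R = ell / (xi * (1.0 - F) * (1.0 + c0 * (1.0 + b - b * ell)))
    return vc, ratio, R

for ell in (0.25, 0.5, 1.0):
    vc, ratio, R = sis_disk(ell)
    print(f"l={ell:4.2f}: R/r200={R:.4f}, Vc/V200={vc:.3f}, Md/Mh={ratio:.3f}")
# at l=1 the disk-to-halo ratio equals c0, as it must for this f(l)
```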
The trend of the dependence of the collapse factor on the values of $`\lambda `$ and $`F`$ remains the same as found above for the singular isothermal halo. The range of viable models represented by the shaded region in Figure 1 can be investigated through the appropriate virialized halo profile $`g(R)`$ and angular momentum distribution function $`f(b,\ell )`$ corresponding to the points E, B, D and C. The distributions of surface density, circular velocity and disk-to-halo mass ratio for these models, varying the parameter $`b`$, are shown in Figures 4-7, for fixed $`\lambda =0.06`$, $`F=0.1`$ (and halo core size $`c=4`$ if the halo has a core radius).

Model E: The results of a model corresponding to point E in Figure 1 are shown in Figure 4; this has a Hernquist halo profile $`g(R)=\frac{(1+c)^2R^2}{(1+cR)^2}`$ and $`f(b,\ell )=(1+b)\ell -b\ell ^2`$. The different curves correspond to $`b=0,1/4,1/2,3/4,1`$.

Model B: The results of a model corresponding to point B in Figure 1 are shown in Figure 5; this has a Hernquist halo profile $`g(R)=\frac{(1+c)^2R^2}{(1+cR)^2}`$ and $`f(b,\ell )=(1+10b)\ell ^{4/3}-10b\ell ^{22/15}`$. This $`f(b,\ell )`$ is shown in Figure 2b. For $`\ell \ll 1`$, $`f(b=0,\ell )\propto \ell ^{4/3}`$; the $`22/15`$ index in the second term is determined by the requirement that $`f(b,\ell )`$ be a monotonically increasing function of $`\ell `$ for all values of $`0\le b\le 1`$. The different curves correspond to $`b=0,0.1,0.3,0.6,1`$.

Model D: The results of a model corresponding to point D in Figure 1 are shown in Figure 6; this has a non-singular isothermal halo with a constant-density core, $`g(R)=\frac{cR-\mathrm{arctan}(cR)}{c-\mathrm{arctan}(c)}`$, and $`f(b,\ell )=(1+b)\ell -b\ell ^2`$. Again the different curves correspond to $`b=0,1/4,1/2,3/4,1`$.

Model C: The results of a model corresponding to point C in Figure 1 are shown in Figure 7; this has a non-singular isothermal halo with a constant-density core, $`g(R)=\frac{cR-\mathrm{arctan}(cR)}{c-\mathrm{arctan}(c)}`$, and $`f(b,\ell )=(1+10b)\ell ^{4/3}-10b\ell ^{22/15}`$. Again the different curves correspond to $`b=0,0.1,0.3,0.6,1`$.

As can be seen from the figures, the different halo profiles and angular momentum distributions produce disks with a variety of surface density profiles and rotation curves. As in Figure 3b, the circular velocities at points beyond the cut-off radius of a given model may be obtained by forming the envelope of the values at the cut-off radius for larger values of $`b`$. Thus if the disk is very compact, from equations (27)-(28) or equation (38), we find that the circular velocity beyond the edge of the disk tends to decrease with radius. The location of the edge of the disk depends on both $`\lambda `$ and $`F`$ (in addition to $`b`$). Thus we have shown that rotation curves should show an imperfect disk-halo 'conspiracy' if the disk is too compact or too massive. This is consistent with observations (Casertano & van Gorkom 1991). Further, these figures demonstrate that with increasing $`b`$ the inner rotation curves become flat, and the transition between the disk-dominated and halo-dominated sections of the rotation curve becomes smoother and smoother. As discussed above, and illustrated in Figure 2, a higher value of $`b`$ corresponds to a flatter specific angular momentum distribution function, and increasing $`b`$ mimics the effects of viscous evolution in transporting angular momentum.
This indicates that viscous evolution can help create an apparent disk-halo 'conspiracy'. As can be seen from equations (21)-(26), for given $`c_0`$ (the compactness parameter defined in equation (20)), or $`F/\lambda `$ ratio, within a given model of $`f(\ell ,b)`$ and virialized dark halo profile $`g(R)`$, the normalized properties of the disks formed with $`b=0`$ are very similar. In particular the disk surface density profile, rotation curve and disk-to-halo mass ratio profile scale similarly with $`\ell `$. For example, in the case of the singular isothermal sphere (model F), equations (34)-(37) show that for $`b=0`$ the normalized disks are identical for a given $`F/\lambda `$ ratio. Thus $`F/\lambda `$ must be an important factor in distinguishing one disk from another. The overall normalization of the surface density is $`\propto F/\lambda ^2`$, so that again $`F/\lambda `$ and $`\lambda `$ enter separately and are both important. Note that here we are not insisting that the surface density profile of the gas disk so formed be exponential, unlike previous work (Mo, Mao & White 1998). We shall, however, appeal to viscous evolution tied to star formation to produce a stellar exponential disk. We now turn to this.

## 3 The Viscous Evolution and Star Formation

In this paper we aim to link viscous evolution within disks to the Hubble sequence of disk galaxies. One of the motivations for invoking viscous disks is that if the timescale of angular momentum transport via viscosity is similar to that of star formation, a stellar exponential disk is naturally produced independent of the initial gaseous disk surface density profile (Silk & Norman 1981; Lin & Pringle 1987; Saio & Yoshii 1990; Firmani, Hernandez & Gallagher 1996). Angular momentum transport and the associated radial gas flows (both inwards and outwards) can also, as shown above, provide a tight 'conspiracy' between disk and halo rotation curves, and, as demonstrated below, provide a higher phase-space density in bulges as compared to disks. The star formation rate per unit area in a disk can be represented by a modified Schmidt law involving the dynamical time and the gas density (Wyse 1986; Wyse & Silk 1989). We shall use the form of the global star formation rate per unit area, $`\mathrm{\Sigma }_\psi `$, of Kennicutt (1998), based on his observations of the inner regions of nearby large disk galaxies:

$$\mathrm{\Sigma }_\psi =\alpha \mathrm{\Sigma }_{gas}\mathrm{\Omega }_{gas},$$ (40)

where $`\mathrm{\Sigma }_{gas}`$ is the total gas surface density, $`\mathrm{\Omega }_{gas}`$ is the angular frequency (inverse dynamical time) at the edge of the gas disk, and the normalization constant, related to the efficiency of star formation, has the value $`\alpha =0.017`$ (Kennicutt 1998). Note that observations of the star formation rates in the outer regions of disk galaxies suggest that it is actually volume density that should enter the Schmidt law, rather than surface density, and since many (if not all) gas disks flare in their outer regions, equation (40) will overestimate the star formation rate there (Ferguson et al. 1998). This is beyond the scope of the present model, but should be borne in mind and will be incorporated in our future work. For our models here, the edge of the initial gas disk is where $`\ell =1`$, $`R=R_c`$ and thus $`\mathrm{\Omega }_{gas}=\mathrm{\Omega }_c`$.
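For orientation, the implied global star formation timescale $`t_{*}=1/(\alpha \mathrm{\Omega }_c)`$ (equation (41) of the next paragraph) can be evaluated for the fiducial singular-isothermal case of Section 2.2; the assembly redshift $`z_f=2`$ and $`h=0.65`$ adopted in this sketch are our illustrative assumptions, not values from the paper:

```python
import math

alpha, lam, F, c_f = 0.017, 0.06, 0.1, 0.5
V200, h, z_f = 200.0, 0.65, 2.0

xi = c_f / (math.sqrt(2.0) * lam)
c0 = xi * F / (1.0 - F)
H_zf = 100.0 * h * (1.0 + z_f) ** 1.5 / 1000.0     # km/s/kpc, Einstein-de Sitter
r200 = V200 / (10.0 * H_zf)                        # kpc
r_c = r200 / (xi * (1.0 - F) * (1.0 + c0))         # eq. (32)
V_edge = V200 * (1.0 - F) * (1.0 + c0)             # eq. (35) at l = 1
t_star = r_c / (alpha * V_edge) * 0.978            # Gyr; 1 kpc/(km/s) = 0.978 Gyr
print(f"r_c = {r_c:.1f} kpc, V_c(r_c) = {V_edge:.0f} km/s, t_* = {t_star:.1f} Gyr")
# -> r_c ~ 6.7 kpc, V_c ~ 298 km/s, t_* ~ 1.3 Gyr for these assumed parameters
```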
For halo formation redshift $`z_f`$, we can obtain the relationship between the global star formation timescale and the galaxy initial conditions in general form using equations (1), (25) and (39):

$`t_{*}^{-1}`$ $`=`$ $`\alpha \mathrm{\Omega }_c=\alpha V_c(r_c)/r_c`$ (41) $`=`$ $`10\alpha H(z_f)\xi (1-F)^2(\xi m_{hc}+c_0)^2,`$

where again $`m_{hc}`$ is the solution of $`1=\xi ^2m_{hc}g^{-1}(m_{hc})`$, and is the fraction of the dark halo mass that is contained within the cut-off radius of the disk. The parameters $`\xi `$ and $`c_0`$ are defined in equations (13) and (20); $`F`$ is the baryonic mass fraction in the initial density perturbation. In the case of the singular isothermal halo, this relation has the simple form:

$$t_{*}^{-1}=10\alpha H(z_f)\xi (1-F)^2(1+c_0)^2.$$ (42)

The gas consumption timescale will be longer than the characteristic star formation timescale, due to the gas returned by stars during their evolution and death. For a standard stellar Initial Mass Function, $`t_g\simeq 2.5t_{*}`$ (e.g. Kennicutt et al. 1994). This modified Schmidt law is based on observations of the inner regions of nearby large disk galaxies, and on simple theoretical principles. Assuming it holds at all epochs allows one to estimate the properties of present-day disks from the initial conditions of the earlier sections of this paper. Let us assume that the dark halo is fully virialized at redshift $`z_f`$. In keeping with the spirit of hierarchical clustering, let us allow for some star formation that could have taken place in the disk from an earlier redshift $`z_i`$, and let the total mass of the system increase until $`z_f`$ (although to maintain the thin disk, this accretion and merging must involve only low-mass, low-density systems). So for any time $`t`$ or redshift $`z`$ between $`z_i`$ and $`z_f`$,

$`\frac{dM_g}{dt}`$ $`=`$ $`F\frac{dM_{tot}}{dt}-\frac{dM_{*}}{dt},`$ (43)

$`\frac{dM_{*}}{dt}`$ $`=`$ $`\frac{M_g}{t_g(z)},`$ (44)

where $`M_{tot}`$ is defined in equation (1) and $`M_{*}`$ is the mass locked up in stars. In an Einstein-de-Sitter Universe, $`H(z)=H_0(1+z)^{3/2}`$ and $`H(z)t=2/3`$. We have

$$\frac{dM_g}{dt}=A-\frac{BM_g}{t},$$ (45)

where $`A=\frac{3FV_{200}^3}{20G}`$ and $`B=\frac{8}{3}\alpha \xi (1-F)^2(1+c_0)^2`$, with $`\xi `$ and $`c_0`$ expressed in terms of $`c_f`$, $`\lambda `$ and $`F`$ through equations (13) and (20) in Section 2.1. Hence, from equation (1), $`M_{tot}\propto \frac{A}{FH(z)}`$. Identifying the dark halo as having fixed $`V_{200}`$, independent of redshift, leads to $`A`$ also being a constant, and thus the total mass grows as $`M_{tot}\propto t`$. This differs from the standard solution of infall onto a point, $`M_{tot}\propto t^{2/3}`$ (Gunn & Gott 1972). The solution of the above equation is then

$$M_g=\frac{A}{1+B}t\left[1+B\left(\frac{t}{t_i}\right)^{-(1+B)}\right],$$ (46)

where $`t_i`$ corresponds to the redshift $`z_i`$ of the onset of star formation. Thus at the halo formation redshift $`z_f`$, the disk gas fraction is

$$f_g(z_f)=\frac{1+B\left(\frac{1+z_f}{1+z_i}\right)^{3(1+B)/2}}{1+B}.$$ (47)

Since $`B`$ depends on $`c_f`$, $`\lambda `$ and $`F`$, i.e. $`B\propto \frac{c_f}{\lambda }(1-F+\frac{c_f}{\sqrt{2}}\frac{F}{\lambda })^2`$, the value of the constant $`B`$ may be evaluated for $`\lambda =0.06`$ and various reasonable values of $`F`$ and $`c_f`$ as: for $`F=0.1`$, $`c_f=1/3`$, $`B=0.30`$; for $`F=0.05`$, $`c_f=1/3`$, $`B=0.23`$; for $`F=0.1`$, $`c_f=0.5`$, $`B=0.59`$; and for $`F=0.05`$, $`c_f=0.5`$, $`B=0.41`$.
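The four values of $`B`$ quoted above follow directly from its definition; the sketch below reproduces them, again using the inferred closed forms for $`\xi `$ and $`c_0`$:

```python
import math

def star_formation_B(lam, F, c_f, alpha=0.017):
    """B = (8/3) alpha xi (1-F)^2 (1+c0)^2, the dimensionless constant of eq. (45)."""
    xi = c_f / (math.sqrt(2.0) * lam)
    c0 = xi * F / (1.0 - F)
    return (8.0 / 3.0) * alpha * xi * (1.0 - F) ** 2 * (1.0 + c0) ** 2

for F, c_f in [(0.1, 1 / 3), (0.05, 1 / 3), (0.1, 0.5), (0.05, 0.5)]:
    print(f"F={F:.2f}, c_f={c_f:.2f}: B = {star_formation_B(0.06, F, c_f):.2f}")
# -> B = 0.30, 0.23, 0.59, 0.41, as quoted in the text
```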
For fixed $`\lambda `$ and $`F`$, small values of $`B`$ correspond to small values of $`c_f`$, and hence to flatter specific angular momentum distributions. Thus for $`1+z_f\lesssim (1+z_i)/2`$, $`f_g(z_f)\simeq 1/(1+B)`$, typically $`\gtrsim 2/3`$. We assume that after $`z_f`$ there is no further infall, and that the gas in the disk is consumed on the characteristic timescale $`t_g(z_f)`$. Thus the gas fraction of a typical disk galaxy at the present time is

$$f_g=f_g(z_f)\mathrm{exp}\left[-\delta t(z_f)/t_g(z_f)\right],$$ (48)

where $`\delta t(z_f)`$ is the time interval between the halo formation redshift $`z_f`$ and the present time, $`z=0`$. Assuming an Einstein-de-Sitter universe, we have $`\delta t(z_f)=t_0(1-H_0/H(z_f))`$ and $`t_0H_0=2/3`$. Thus a typical value for the present gas fraction of disk galaxies is given by:

$$\mathrm{ln}\left(\frac{f_g(z_f)}{f_g}\right)=B\left[(1+z_f)^{3/2}-1\right].$$ (49)

With the approximation $`f_g(z_f)=1/(1+B)`$, we have

$$(1+z_f)^{3/2}=1+\frac{-\mathrm{ln}f_g-\mathrm{ln}(1+B)}{B}.$$ (50)

For the Milky Way Galaxy, if we adopt $`5\times 10^9M_{\odot }`$ for the atomic HI gas and $`1.3\times 10^9M_{\odot }`$ for the molecular $`H_2`$ gas (Blitz 1996; Dame 1993), then with an estimate of $`4`$-$`6\times 10^{10}M_{\odot }`$ for the total baryonic mass of the Milky Way, depending on the stellar exponential scale length (Dehnen & Binney 1998), we obtain a gas fraction (the total gas mass including 24% helium by mass) of $`f_g\simeq 15\%`$ or even higher, depending on the mass model of the Galaxy. As mentioned earlier, for fixed $`\lambda `$ and $`F`$, small values of $`B`$ correspond to small values of $`c_f`$, and hence to flatter specific angular momentum distributions. For small values of the parameter $`B`$, say $`B\simeq 0.3`$, we obtain $`z_f\simeq 2.4`$, while for large values, $`B\simeq 0.6`$, we obtain $`z_f\simeq 1.2`$. The larger value of $`z_f`$ is preferred, given what we know of the age distribution of stars in the local thin disk (e.g. Edvardsson et al. 1993). The effect of viscous evolution is equivalent to choosing small $`c_f`$, i.e. small values of $`B`$. So the inclusion of viscous evolution can give a relatively higher halo formation redshift, which is consistent with the constraint on the redshift of formation that we obtained from considerations of the size of the disk. It should be noted that for fixed halo 'formation' redshift $`z_f`$, the star formation timescale obeys $`t_{*}^{-1}\propto \frac{1}{\lambda }(1-F+\frac{1}{2\sqrt{2}}\frac{F}{\lambda })^2`$. Again, both $`\lambda `$ and $`F/\lambda `$ are important. As we discussed in the previous section, the structure of the normalized disk depends strongly on $`F/\lambda `$, while the overall normalization depends strongly on $`\lambda `$ for fixed $`F/\lambda `$. As we show later, the bulge-to-disk ratio also depends on $`\lambda `$ and $`F/\lambda `$. Thus many aspects of the Hubble sequence of disk galaxies (star formation timescale, disk gas fraction, bulge-to-disk ratio) depend on both $`\lambda `$ and $`F/\lambda `$. The star formation timescale derived above is independent of $`V_{200}`$, which at first sight is surprising, given that the Hubble sequence of disk galaxies has been interpreted as a sequence of star formation timescales relative to collapse times (Sandage 1986), and observations show that the Hubble type of a disk galaxy is broadly correlated with the disk luminosity (Lake & Carlberg 1988; de Jong 1995). However, $`V_{200}`$ is not an easily-observed quantity.
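Equation (50) above is trivial to invert numerically; the sketch below reproduces the quoted assembly redshifts to within rounding of the adopted gas fraction:

```python
import math

def assembly_redshift(f_g, B):
    """Invert eq. (50): (1+z_f)^(3/2) = 1 + (-ln f_g - ln(1+B)) / B,
    using the approximation f_g(z_f) = 1/(1+B)."""
    return (1.0 + (-math.log(f_g) - math.log(1.0 + B)) / B) ** (2.0 / 3.0) - 1.0

for B in (0.3, 0.6):
    print(f"B = {B}: z_f = {assembly_redshift(0.15, B):.2f}")
# -> z_f = 2.47 and 1.25, close to the z_f ~ 2.4 and 1.2 quoted in the text
```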
The present-day luminosity of a disk in our model can be written as

$$L=\frac{FM_{tot}(1-f_g)}{\gamma _{*}}=\frac{F[1-f_g(B,z_f)]V_{200}^3}{10GH(z_f)\gamma _{*}},$$ (51)

with $`\gamma _{*}`$ the current value of the mass-to-light ratio. Estimation of the predicted Tully-Fisher relation depends on what model parameter we use for the width of the HI line. Obviously if we identify this with $`V_{200}`$ we have a reasonable relationship, provided that the coefficient $`\gamma _{TF}`$ is constant, i.e. $`F[1-f_g(B,z_f)]/H(z_f)\propto \gamma _{TF}`$. This then requires that there be a correlation between $`F`$ and $`z_f`$, in the sense that for large $`F`$, $`z_f`$ is large. Then we can see from equation (42) that this leads to a small star formation timescale, implying more efficient viscous evolution and hence a larger B/T ratio. Large $`z_f`$ may be correlated with small $`V_{200}`$ in the context of hierarchical-clustering cosmologies, and in that case a short star formation time and large B/T ratio would be correlated with low luminosity, which is not consistent with observation. However, an interpretation that is compatible with observations is that high $`n`$-$`\sigma `$ fluctuations for fixed $`V_{200}`$ can form high-luminosity disks with large B/T ratio. The Tully-Fisher relationship is not a simple relation between luminosity and $`V_{200}`$, but depends on where the circular velocity $`V_c(R)`$ is measured (Courteau 1997). The relationship between $`V_{200}`$ and $`V_c(R)`$ obviously depends on the details of the halo density profile and the angular momentum distribution. From the rotation curves in Figures 3 to 7, we can see that it is appropriate to choose for our estimate of $`V_c`$ the circular velocity at the cutoff radius of the disk, adopted as three scale lengths. From equation (35), we have $`V_c=V_{200}(1-F)(1+c_0)`$, where $`c_0=c_fF/[\sqrt{2}\lambda (1-F)]`$, as defined in equation (20), is the compactness of the disk. Now the predicted Tully-Fisher relation is

$$L=\frac{F[1-f_g(B,z_f)]V_c^3}{10GH(z_f)\gamma _{*}(1+c_0)^3(1-F)^3}.$$ (52)

Requiring that the coefficient $`\gamma _{TF}`$ in the Tully-Fisher relation be constant, i.e. $`\gamma _{TF}\propto F[1-f_g(B,z_f)]/[H(z_f)(1+c_0)^3(1-F)^3]`$, then gives, from equation (42), $`t_{*}^{-1}\propto c_0/(1+c_0)`$. So for small $`c_0`$ the star formation timescale is large, which causes less efficient viscous evolution. So less efficient viscous evolution and small $`F/\lambda `$ will lead to a small B/T ratio. Also, small $`c_0`$ is correlated with large $`z_f`$ through the constancy of the coefficient in the Tully-Fisher relation. Similarly, large $`z_f`$ may be correlated with small $`V_{200}`$ in the context of hierarchical clustering cosmology. So small $`c_0`$ and small $`V_{200}`$ will lead to small $`V_c`$ and lower disk luminosity. Thus this version of the predicted Tully-Fisher relation appears fully compatible with the observations. However, one should bear in mind that we have adopted a fixed constant of proportionality $`\alpha `$ in the star formation law, and this may well vary with global potential well depth (White & Frenk 1991) or local potential well depth (Silk & Wyse 1993). A further test of the model is the relation between disk scale and circular velocity, and its variation with redshift; observations indicate that $`R_d/V_c`$ is smaller at high redshift, $`z\simeq 1`$ (Vogt et al. 1996; Simard et al. 1999).
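Since the Tully-Fisher coefficient $`\gamma _{TF}=L_B/V_c^3`$ enters the scalings that follow, it is useful to fix its Galactic value numerically; the sketch below also evaluates the constant term in the $`\mathrm{log}(R_d/V_c)`$ relation derived in the next paragraph, using the Galactic numbers adopted there:

```python
import math

G = 4.30e-6                       # kpc (km/s)^2 / M_sun
gamma_star = 2.5                  # B-band M_sun / L_sun, as adopted below
gamma_TF = 3e10 / 220.0 ** 3      # L_B / V_c^3, in L_sun (km/s)^-3
const = G / 3.0 * gamma_TF * gamma_star       # kpc / (km/s), eq. (53) prefactor
print(f"log10 offset = {math.log10(const):.2f}")   # -> about -2.0
```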
In our model there is little change in total mass between a redshift of unity and the present, so this evolution of disk size cannot be due to halo mass growth, as had been proposed by Mao, Mo & White (1998). Instead, in our model it is due to the different and changing scale lengths of the gas and the stars. In our model, due to early star formation, the gas fraction at $`z_f`$ is about $`f_g(z_f)\simeq 1/(1+B)`$, typically $`\simeq 2/3`$. It is natural that the gas component of the disk will have a larger scale length than the stellar component. The scale length of the gas component can increase with time due to viscous evolution, while the scale length of the stellar component can also increase with time due to the non-linear local star formation law (Saio & Yoshii 1990). So the stellar scale length at high redshift, when the gas fraction is large, should be much smaller than the stellar scale length at the present time, when the gas fraction is lower. The study of the detailed evolution of the gas and stellar components is beyond the scope of this paper, but the stellar size evolution predicted by our model is qualitatively consistent with the observations. The measured distribution of $`R_d/V_c`$ for the local disk galaxy sample is approximately peaked at $`\mathrm{log}(R_d/V_c)\simeq -1.5`$ ($`R_d`$ in kpc, $`V_c`$ in kms<sup>-1</sup>), with a spread from -2 to -1 (Mao, Mo & White 1998; Courteau 1996). Our estimate of the disk size or scale length in Section 2 is valid for galaxies at the present time, when the gas fraction is small. Then, assuming the disk cutoff radius is three scale lengths, from equation (41), $`R_d/V_c=R_c/3V_c=\alpha t_{*}/3`$, and the present $`R_d/V_c`$ is an indication of the galactic global star formation timescale; further, from the constancy of $`\gamma _{TF}`$, the coefficient of the Tully-Fisher relation in equation (52), we have

$$R_d/V_c=\alpha t_{*}/3=\frac{G}{3}\gamma _{TF}\gamma _{*}\frac{1+c_0}{c_0(1-f_g)}.$$ (53)

Thus this ratio is also an indication of the disk compactness. Adopting the B-band mass-to-light ratio of our local disk, $`\gamma _{*}\simeq 2.5M_{\odot }/L_{\odot }`$, and using the luminosity of our Galaxy, $`L_B\simeq 3\times 10^{10}L_{\odot }`$, and $`V_c\simeq 220`$ kms<sup>-1</sup> to estimate $`\gamma _{TF}=L_B/V_c^3`$, we obtain $`\mathrm{log}(R_d/V_c)\simeq -2.0+\mathrm{log}(\frac{1+c_0}{c_0(1-f_g)})`$. Obviously $`\mathrm{log}(R_d/V_c)\simeq -2`$ roughly corresponds to the predicted lower limit of the local sample, which is consistent with observations. The spread of $`F`$ from 0.05 to 0.2 and the spread of $`\lambda `$ from 0.03 to 0.12 cause the value of the compactness parameter $`c_0`$ to spread from approximately 0.1 to 2. Adopting typical values $`F=0.1`$, $`\lambda =0.06`$, $`c_f=1/3`$, the peak is located at $`\mathrm{log}(R_d/V_c)=-1.5`$, with $`c_0=0.44`$, which is again consistent with observations. The spread of $`R_d/V_c`$ is simply caused by the spread in the value of the compactness parameter $`c_0`$.

## 4 The Formation of Bulges and the Hubble Sequence

Galaxy classification schemes based on morphology are the basic first step in understanding how galaxies form and evolve (van den Bergh 1998). The bulge-to-disk luminosity ratio is one of the three basic classification criteria for the Hubble sequence (Sandage 1961).
However, the relation between bulge-to-disk ratio and Hubble type has a fair amount of scatter, some of which must be related to the difficulty of decomposing the light profile into bulge and disk; the bulge-to-disk ratio is also dependent on the band-pass used to define the luminosity (de Jong 1996). The current observational data show that bulges are diverse and heterogeneous (Wyse, Gilmore & Franx 1997). Some share properties of disks, and some are more similar to ellipticals. Models of bulge formation can be classified into several categories: the bulge is formed from an early collapse of low angular momentum gas, with short cooling time and efficient star formation (Eggen et al. 1962; Larson 1976, who invoked viscosity to transport angular momentum away from the proto-bulge; van den Bosch 1998); the bulge is formed from the merging of disk galaxies (Toomre & Toomre 1972; Kauffmann, White & Guiderdoni 1993); the bulge is formed from the disk by a secular evolution process following a bar instability (Combes et al. 1990; Norman, Sellwood & Hasan 1996; Sellwood & Moore 1999; Avila-Reese & Firmani 1999); or the bulge is formed from the early dynamical evolution of massive clumps formed in the disk (Noguchi 1999). It is speculated that large bulges, which tend to have de Vaucouleurs-law surface brightness profiles, share formation mechanisms with ellipticals, while smaller bulges, which tend to be better fit by an exponential profile, are formed from their disks through bar dissolution. It should be noted that the significantly higher phase-space density of bulges as compared to disks suggests that gaseous inflow should play a part in the instability (Wyse 1998), just as invoked in earlier sections of this paper for other reasons. Here we only consider this latter case, the formation of small bulges from the disk. Early studies of disk instabilities showed that Toomre's local stability criterion $`Q>1`$ is also sufficient for global stability against axisymmetric modes (Hohl 1971; Binney & Tremaine 1987). It is known that the bar instability requires a similar condition (Hockney & Hohl 1969; Ostriker & Peebles 1973). Efstathiou, Lake & Negroponte (1982) used N-body techniques to study the global bar instability of a pure exponential disk embedded in a dark halo and proposed a simple instability criterion for the stellar disk, based on the disk-to-halo mass ratio. However, it has been argued recently that there is no such simple criterion for the bar instability (Christodoulou, Shlosman & Tohline 1995; Sellwood & Moore 1999). Further, recent N-body simulations (Sellwood & Moore 1999; Norman, Sellwood & Hasan 1996) show that every massive disk forms a bar during the early stages of its evolution, but that the bar is later destroyed by the formation of a dense central object, once the mass of that central concentration reaches several percent of the total disk mass. This can be understood in terms of linear mode analysis and the nonlinear processes of the swing amplifier and feedback loops (Goldreich & Lynden-Bell 1965; Julian & Toomre 1966). Toomre (1981) argued that the bar instability can be inhibited in two ways: one is that a large dark halo mass fraction can reduce the gain of the swing amplifier, while the other is that feedback through the centre can be shut off by an inner Lindblad resonance (ILR). The dense central object can destroy the bar via the second of these mechanisms. However, most of these studies assume that the dark halo has a constant-density core.
On the contrary, we have shown that the dark halo profile cannot have a constant-density core after adiabatic infall if one starts from physical initial conditions. It would be interesting to study bar formation for different disk-halo profiles in addition to the well-studied harmonic core. We will for simplicity adopt the simple criterion of Efstathiou, Lake & Negroponte (1982), interpreted to determine the size of a bar-unstable region, with the radial extent of the bar, $`r_b`$, defined by

$$M_d(r_b)/M_h(r_b)\ge \beta ,$$ (54)

with the value of the parameter $`\beta `$ chosen to fit observations. As we have argued in Section 2, $`F/\lambda `$ is the important quantity determining the overall normalization of the disk surface density. From N-body simulations (Warren et al. 1992; Barnes & Efstathiou 1987; Cole & Lacey 1996; Steinmetz & Bartelmann 1995), the distribution of $`\lambda `$ can be fit by a log-normal distribution:

$$P(\lambda )d\lambda =\frac{1}{\sqrt{2\pi }\sigma _\lambda }\mathrm{exp}\left[-\frac{\mathrm{ln}^2(\lambda /\lambda _0)}{2\sigma _\lambda ^2}\right]\frac{d\lambda }{\lambda },$$ (55)

where $`\lambda _0=0.06`$ and $`\sigma _\lambda =0.5`$ (this result is fairly independent of the slope of the power spectrum of density fluctuations). What of the possible range of the baryonic fraction $`F`$? Some previous studies suggested $`F\sim \lambda `$ as an explanation of the disk-halo 'conspiracy' (Fall & Efstathiou 1980; Jones & Wyse 1983; Ryden & Gunn 1987; Hernandez & Gilmore 1998). Some interpretations of the observed Tully-Fisher relationship suggest that indeed $`F`$ is not invariant (McGaugh & de Blok 1998). Here we shall assume that the distribution of $`F`$, like that of $`\lambda `$, is log-normal, centred at $`F_0=0.1`$ with $`\sigma _F=0.05`$, so that $`F`$ lies mainly within the range $`0.05`$-$`0.2`$. We generated a Monte Carlo sample of disks, with fixed halo $`V_{200}`$ and halo formation redshift $`z_f`$, but with values of $`F`$ and $`\lambda `$ following the above distributions. Then for given virialized dark halo profiles $`g(R)`$ and angular momentum distributions $`f(b,\ell )`$ in Figure 1, we can calculate the star formation timescale from equation (41). The parameter $`b`$ represents the efficiency of viscous evolution; on the assumption that the viscous timescale is equal to the star formation timescale, we use a simple linear correlation between the value of $`b`$ and the star formation timescale, from which the value of the parameter $`b`$ can be obtained. From equations (21), (22), (23), (26), (50) and (54), one can then calculate the bulge-to-disk ratio and the final disk gas fraction. Thus we can plot the B/T ratio versus disk gas fraction for the whole Monte Carlo sample, confining the parameter space of $`\lambda `$ and $`F`$ to that giving small bulges with $`B/T<0.5`$. Larger values of B/T would correspond to a disk so unstable that the exercise is invalid. Disks that are too stable, having low $`F/\lambda `$ or large $`\lambda `$, will evolve little and probably end up as low surface brightness systems. Figure 8a shows model F, representing the singular isothermal halo. Assuming the typical values $`\lambda =0.06`$ and $`F=0.1`$, the choice $`\beta \simeq 0.8`$ is required to allow the existence of bulges. This is just the value of $`\beta `$ used in the global bar instability criterion (Efstathiou, Lake & Negroponte 1982). For this model the disk-to-halo mass ratio varies little with radius (as shown in Figure 3a), so the B/T ratio is strongly dependent on $`\lambda /F`$.
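A minimal version of the Monte Carlo draw described above is sketched below. Equation (55) specifies the spin distribution; for $`F`$ the text quotes only $`F_0=0.1`$ and $`\sigma _F=0.05`$, and the logarithmic width ln 2 used here is our assumption, chosen so that $`F`$ falls mainly in the quoted 0.05-0.2 range:

```python
import math, random

random.seed(42)

def sample_spin(lam0=0.06, sigma=0.5):
    # eq. (55): ln(lambda) Gaussian with mean ln(lam0) and width sigma
    return lam0 * math.exp(random.gauss(0.0, sigma))

def sample_baryon_fraction(F0=0.1, width=math.log(2.0)):
    while True:
        F = F0 * math.exp(random.gauss(0.0, width))
        if F < 1.0:                   # reject the unphysical tail F >= 1
            return F

sample = [(sample_spin(), sample_baryon_fraction()) for _ in range(10000)]
# compactness c0 = c_f F / (sqrt(2) lambda (1-F)) per disk, here with c_f = 1/3
c0 = sorted(F / 3.0 / (math.sqrt(2.0) * l * (1.0 - F)) for l, F in sample)
print(f"median c_0 = {c0[len(c0) // 2]:.2f}")
```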
Only a small range of values of $`\lambda /F`$ is allowed, so as not to over-produce either bulge-less disks or completely unstable disks. The relation between B/T ratio and gas fraction for this model is given in Figure 8b; there is a general trend in the observed sense, with large scatter. The different curves in this plot correspond to different values of $`\lambda /F`$, which sets the overall trend. Figure 9 (a,b) shows the equivalent plots for model E, representing a Hernquist-profile halo model, which is probably a more realistic case. Here the disk-to-halo mass ratio varies strongly with radius on approaching the centre, and the allowed parameter space of values of $`\lambda `$ and of $`F`$ allowing the formation of bulges can be large. Overly-unstable disks are denoted by asterisks in Figure 9a, and low surface-brightness disks are denoted by crosses. The relation between B/T ratio and gas fraction is similar to that for the isothermal halo.

### 4.1 Constraints from the Milky Way

The Milky Way bulge is reasonably well fit by an exponential profile, with a scale length approximately one-tenth that of the disk (Kent et al. 1991). The morphology of the bulge is consistent with some triaxiality (Blitz & Spergel 1991; Binney et al. 1997). Perhaps the Milky Way is a system in which the bulge has formed from the disk, through a bar instability? Observations show no evidence for a significant young or even intermediate-age stellar population in the field population of the Galactic Bulge (Feltzing & Gilmore 1999), despite there being ongoing star formation in the inner disk. This implies that if the bulge were formed from the disk through bar dissolution, only one such episode is allowed, and it should have happened at high redshift. In the context of the present model, the lower star formation rates and longer viscosity timescales of later times act to stabilize the system. However, it remains to be seen whether the observed relative frequencies of bars, bulges and central mass concentrations are consistent with the models of bar dissolution.

## 5 Summary

In the context of hierarchical clustering cosmology, the dark halo of a disk galaxy can be formed by the quiescent merging of small sub-halos into the primary dark halo, or by smooth accretion of matter onto the dark halo. We derive the generic solution to the adiabatic infall model of disk galaxy formation pioneered by many authors (Mestel 1963; Fall & Efstathiou 1980; Gunn 1982; Faber 1982; Jones & Wyse 1983; Ryden & Gunn 1987; Dalcanton, Spergel & Summers 1997; Mo, Mao & White 1998; Hernandez & Gilmore 1998). Through exploring the allowed parameter space of dark halo profiles and angular momentum distribution functions, we show that the central halo density profile should be cuspy, with the power-law index ranging from $`-0.75`$ to $`-2`$ in the central regions where the disk mass dominates. Using a modified Schmidt law for the global star formation rate, we derive a simple scaling relationship between the disk gas fraction and the assembly redshift. We explicitly allow a distribution in the values of the baryonic mass fraction, $`F`$, in addition to the distribution in values of the spin parameter $`\lambda `$. These two are found to play different roles in determining the structural properties of the final disk, the star formation properties and the bulge-to-disk ratio. We mimic viscous evolution of disks by varying the specific angular momentum distribution of the disk, to redistribute angular momentum as a function of time.
We derive a consistent picture of the formation of galaxies like the Milky Way, with old stars in the disk. Under the assumption that the viscous evolution timescale is equal to the star formation timescale, we can further connect $`\lambda `$ and $`F`$ with the efficiency of the angular momentum redistribution caused by viscosity. Assuming that small bulges are formed from their disks through bar dissolution, we can use the global bar instability condition to obtain the bulge-to-total ratio, and explore its dependence on $`F`$, $`\lambda `$ and the viscous evolution efficiency. The inclusion of viscous evolution has the merit of addressing several important issues: the conspiracy between disk and halo, the formation of the exponential profile of the stellar disk, and the high phase-space density of bulges. We have presented an analytic treatment, to illustrate these points and to identify areas in particular need of more work.

## Acknowledgments

We acknowledge support from NASA, ATP Grant NAG5-3928. BZ thanks Colin Norman and Jay Gallagher for helpful comments. RFGW thanks all at the Center for Particle Astrophysics, UC Berkeley, for their hospitality during the early stages of this work.
# Anderson’s “Theorem” and Bogoliubov-de Gennes Equations for Surfaces and Impurities

## Abstract

In order to incorporate spatial inhomogeneity due to nonmagnetic impurities, Anderson proposed a BCS-type theory in which single-particle states of the inhomogeneous system are used. We examine Anderson’s proposal, in comparison with the Bogoliubov-de Gennes equations, for the attractive Hubbard model on a system with surfaces and impurities.

The procedure for examining surface and impurity effects on a microscopic level is by now well established. One uses a mean-field-like decoupling, with potentials which are determined from self-consistency requirements. These potentials are then used in the effective Hamiltonian, which is numerically diagonalized. This process is continued until self-consistency is achieved. This is the essence of the Bogoliubov-de Gennes (BdG) formalism. An earlier proposal was suggested by Anderson, in which the single-particle problem is first diagonalized. Eigenvalues and eigenstates are obtained, with which one can formulate the BCS problem, but in a vector space associated with these eigenstates. In certain situations, the single-particle problem can be solved analytically (open boundaries, for example), or it can be solved numerically with significantly less effort than required by the full BdG process. In these instances it would be advantageous to utilize the Anderson prescription. In this paper we report on some test cases to evaluate the Anderson prescription. The BdG equations are well documented. In this work we utilize the attractive Hubbard model, with open boundaries, and with the possibility of single-site impurity potentials. The resulting equations are:

$`E_nu_n(\ell )={\displaystyle \sum _{\ell ^{}}}A_{\ell \ell ^{}}u_n(\ell ^{})+\mathrm{\Delta }_{\ell }v_n(\ell )`$ (1)

$`E_nv_n(\ell )=-{\displaystyle \sum _{\ell ^{}}}A_{\ell \ell ^{}}v_n(\ell ^{})+\mathrm{\Delta }_{\ell }^{*}u_n(\ell )`$ (2)

where

$$A_{\ell \ell ^{}}=-t\sum _\delta \left(\delta _{\ell ^{},\ell -\delta }+\delta _{\ell ^{},\ell +\delta }\right)+\delta _{\ell \ell ^{}}\left(V_{\ell }-\mu +ϵ_{\ell }\right).$$ (3)

The self-consistent potentials, $`V_{\ell }`$ and $`\mathrm{\Delta }_{\ell }`$, are given by

$`\mathrm{\Delta }_{\ell }=|U|{\displaystyle \sum _n}u_n(\ell )v_n^{*}(\ell )(1-2f_n)`$ (4)

$`V_{\ell }=-|U|{\displaystyle \sum _n}\left[|u_n(\ell )|^2f_n+|v_n(\ell )|^2(1-f_n)\right],`$ (5)

where $`|U|`$ is the strength of the attractive interaction, the index $`n`$ labels the eigenvalues (there are $`2N`$ of them), the index $`\ell `$ labels the sites (1 through N), and the composite eigenvector is given by $`\left(\genfrac{}{}{0pt}{}{u_n}{v_n}\right)`$, of total length $`2N`$. The sums in Eqs. (4,5) are over positive eigenvalues only. The other physical parameters are the single-particle hopping, $`t`$, the single-site impurity potentials, $`ϵ_{\ell }`$, and the chemical potential, $`\mu `$. The $`f_n`$ is the Fermi function, with argument $`\beta E_n`$, where $`\beta \equiv \frac{1}{k_BT}`$, with $`T`$ the temperature. The single-site electron density, $`n_{\ell }`$, is given, through Eq. (5), by $`V_{\ell }=-|U|\frac{n_{\ell }}{2}`$. These equations are iterated to convergence, with results to be presented below.
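To make the BdG cycle concrete, here is a minimal self-consistency sketch for a 1D chain; all numerical parameters are illustrative choices rather than values from the paper, and a real $`\mathrm{\Delta }_{\ell }`$ is assumed:

```python
import numpy as np

N, t, U, mu, kT = 32, 1.0, 2.0, 0.0, 0.01
eps = np.zeros(N)                    # single-site impurity potentials eps_l
Delta = 0.5 * np.ones(N)             # initial guess for the pairing potential
V = np.zeros(N)                      # initial guess for the Hartree potential

hop = -t * (np.eye(N, k=1) + np.eye(N, k=-1))   # open boundaries: no wrap-around
for _ in range(300):
    A = hop + np.diag(V - mu + eps)
    H = np.block([[A, np.diag(Delta)], [np.diag(Delta), -A]])  # BdG matrix, real Delta
    E, W = np.linalg.eigh(H)         # 2N eigenpairs; u_n = W[:N, n], v_n = W[N:, n]
    u, v = W[:N, :], W[N:, :]
    f = 0.5 * (1.0 - np.tanh(E / (2.0 * kT)))   # Fermi function, overflow-safe form
    pos = E > 0                       # sums in Eqs. (4)-(5) run over E_n > 0 only
    Delta_new = U * np.sum(u[:, pos] * v[:, pos] * (1.0 - 2.0 * f[pos]), axis=1)
    V = -U * np.sum(u[:, pos]**2 * f[pos] + v[:, pos]**2 * (1.0 - f[pos]), axis=1)
    if np.max(np.abs(Delta_new - Delta)) < 1e-6:
        Delta = Delta_new
        break
    Delta = Delta_new

print(Delta)   # gap profile: suppressed/oscillating near the open ends
```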
The alternative Anderson formalism first solves for the eigenvalues and eigenstates of the ‘non-interacting’ problem, i.e.,

$$E_n^0w_n(\ell )=\sum _{\ell ^{}}A_{\ell \ell ^{}}^0w_n(\ell ^{}),$$ (6)

where

$$A_{\ell \ell ^{}}^0=-t\sum _\delta \left(\delta _{\ell ^{},\ell -\delta }+\delta _{\ell ^{},\ell +\delta }\right)-\delta _{\ell \ell ^{}}\left(\mu -ϵ_{\ell }\right).$$ (7)

The $`N\times N`$ matrix equation (6) is solved for its eigenvalues $`E_n^0`$ and eigenvectors $`w_n`$. This amounts to determining the unitary matrix $`U_{\ell n}`$ that gives a basis for the electron operators

$$c_{\ell \sigma }^{\dagger }=\sum _nU_{\ell n}^{*}\stackrel{~}{c}_{n\sigma }^{\dagger },$$ (8)

which diagonalizes the single-particle Hamiltonian. From this matrix we determine the transformed electron-electron interaction:

$$V_{nm,n^{}m^{}}=-|U|\sum _{\ell }U_{\ell n}^{*}U_{\ell m}^{*}U_{\ell n^{}}U_{\ell m^{}},$$ (9)

which now mediates the (generally off-diagonal) electron-electron interaction. The gap and number equations are derived in the usual way; they are in general complicated: the gap is a function of the quantum label $`n`$ and the chemical potential is shifted by an $`n`$-dependent quantity. Once these are obtained, we can transform back to real space, and examine the gap function or the electron density, for example, as a function of position. Figure 1 illustrates the gap parameter obtained by the BdG formalism as a function of position, for all densities, in the case of open boundary conditions (OBC), in one dimension. Results in higher dimensions will be very similar. Variations in the gap are strongest near the boundaries, as expected, and the Anderson prescription is reasonably accurate in reproducing the oscillations (not shown). In Fig. 2 we show the gap as a function of position for the case of a single impurity (at site 16) with a repulsive potential, with periodic boundary conditions (PBC). As expected, the gap is suppressed at this site, and once again the Anderson prescription semi-quantitatively reproduces the BdG result.
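A corresponding sketch of the Anderson prescription, assuming real single-particle wavefunctions so that the reduced pair interaction couples time-reversed partners; the diagonal-gap (BCS-like) truncation used here is one common reading of the prescription, not necessarily the full off-diagonal problem of Eq. (9):

```python
import numpy as np

N, t, U, mu, kT = 32, 1.0, 2.0, 0.0, 0.01
eps = np.zeros(N)

# Diagonalize the normal-state problem (Eqs. 6-7) once.
A0 = -t * (np.eye(N, k=1) + np.eye(N, k=-1)) - np.diag(mu - eps)
xi, w = np.linalg.eigh(A0)            # energies xi_n and states w_n(l) (columns)
K = (w**2).T @ (w**2)                 # K[n, m] = sum_l w_n(l)^2 w_m(l)^2

gap = 0.5 * np.ones(N)                # gap in the eigenbasis, Delta_n
for _ in range(500):
    E = np.sqrt(xi**2 + gap**2)
    chi = np.tanh(E / (2.0 * kT)) / (2.0 * E)   # (1 - 2 f_n) / (2 E_n)
    gap_new = U * K @ (gap * chi)               # BCS gap equation in the eigenbasis
    if np.max(np.abs(gap_new - gap)) < 1e-8:
        gap = gap_new
        break
    gap = gap_new

# Site-resolved gap for comparison with the BdG profile:
Delta_site = U * (w**2) @ (gap * chi)
print(Delta_site)
```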
# A NEW MEASUREMENT OF THE MUON MAGNETIC ANOMALY

## 1 Physics Motivation

The magnetic anomaly of fermions $`a=\frac{1}{2}(g-2)`$ describes the deviation of their magnetic g-factor from the value 2 predicted by the Dirac theory. This quantity has been measured for single electrons and positrons in Penning traps by Dehmelt and his coworkers to 10 ppb. Accurate calculations of $`a`$ for these two particles are possible at this level; they involve exclusively the “pure” Quantum Electrodynamics (QED) of electron, positron and photon fields. The presently most accurate value of the fine structure constant $`\alpha `$ can be obtained from a comparison between experiment and theory, where it appears as an expansion coefficient. The high accuracy to which QED calculations can be performed is demonstrated by the compatibility of this value of $`\alpha `$ with the ones obtained in measurements based on the quantum Hall effect, or with the number extracted from the precisely known Rydberg constant using an accurate determination of the neutron de Broglie wavelength and the relevant mass ratios. Moreover, the agreement of $`\alpha `$ values determined from the electron magnetic anomaly and from the hyperfine splitting in the muonium atom may be interpreted as the most precise reassurance of the internal consistency of QED, because the first case involves the theory of free particles whereas the second requires distinctively different bound-state approaches. The anomalous magnetic moment of the muon $`a_\mu `$ is more sensitive, by a factor of $`(m_\mu /m_e)^2\simeq 4\times 10^4`$, to heavier particles, which appear virtually in loop graphs, and to other than electromagnetic interactions. Such effects can be studied in a precise determination of $`a_\mu `$, because very high confidence in the validity of calculations of the dominating QED contribution arises from the excellent description of the electron magnetic anomaly and of electromagnetic transitions in fundamental systems such as the hydrogen and muonium atoms. In a series of three experiments at CERN $`a_\mu `$ could be measured to 7.2 ppm. This has verified the muon’s nature as a heavy leptonic particle and the proper description of its electromagnetic interactions by QED to very high accuracy. In the last of these measurements contributions arising from strong interactions, which amount to 57.8(7) ppm, could be verified. At BNL a new dedicated experiment has been designed to determine the muon magnetic anomaly $`a_\mu `$ with 0.35 ppm relative accuracy, meaning a 20-fold improvement over the previous approaches. At this level there is particular sensitivity to contributions arising from the weak interaction through loop diagrams with W and Z bosons (1.3 ppm). The experiment promises here a clean test of renormalization in the weak interaction. The muon magnetic anomaly may also contain contributions from new physics. A variety of speculative theories can be tested which have been invented to extend the present standard model in order to explain some of the features which are described but not yet fundamentally understood. The spectrum of such theoretical models includes physics concepts like muon substructure, new gauge bosons, supersymmetry, an anomalous magnetic moment of the W boson, leptoquarks and violation of Lorentz and CPT invariance. Here a precise measurement of $`a_\mu `$ can be complementary to searches in high energy experiments, and the sensitivity may even be higher.
## 2 The Brookhaven Muon g-2 Experiment

In the new experiment at the alternating gradient synchrotron (AGS) of BNL, polarized muons are stored in a magnetic storage ring with a highly homogeneous field $`B`$ and with weak electrostatic focussing. The difference $`\omega _a`$ between the spin precession and cyclotron frequencies,

$$\omega _a=\omega _s-\omega _c=a_\mu \frac{e}{m_\mu c}B,$$ (1)

is measured, with $`m_\mu `$ the muon mass and $`c`$ the speed of light. Positrons (electrons) from the weak decays $`\mu ^\pm \to e^\pm +2\nu `$ are observed. For relativistic muons the influence of a static electric field vanishes if $`a_\mu =1/(\gamma _\mu ^2-1)`$, which corresponds to $`\gamma _\mu =29.3`$ and a muon momentum of $`p=3.09`$ GeV/c, where $`\gamma _\mu =1/\sqrt{1-(v_\mu /c)^2}`$ and $`v_\mu `$ is the muon velocity. For sufficient accuracy of the electric field correction, the average muon momentum $`p`$ needs to be within a few parts in 10<sup>4</sup> of the magic momentum. For a homogeneous field the magnet must have an iron flux return and shielding. Because of the particular momentum requirement, and in order to avoid strong magnetic saturation effects in the iron, a device of 7 m radius was built. It has a C-shaped iron yoke cross section with the open side facing towards the center of the ring. It provides a 1.4513 T field in an 18 cm gap. The magnet is energized by 4 superconducting coils carrying 5177 A current. The magnetic field is determined by a newly developed narrow-band magnetometer system based on pulsed nuclear magnetic resonance (NMR) of protons in water and vaseline. It has the capability for absolute measurements to 50 ppb. The field and its homogeneity are continuously monitored by 380 NMR probes. They are distributed around the ring and are embedded near the magnet poles in the walls of the Al vacuum tank. For mapping the field inside the storage volume, a trolley carrying 17 NMR probes is run at regular intervals, typically twice a week. This device contains a fully computerized magnetometer built entirely from nonferromagnetic components. The field accuracy is derived from and related to a precision measurement of the proton gyromagnetic ratio in a spherical water sample. On average the field around the ring is homogeneous to 1 ppm (Fig. 1). This has been achieved using mechanical shimming methods, which include movable iron wedges in an air gap between the low-carbon-steel pole pieces and the magnet yoke, as well as iron strips of adjusted width fixed near the junctions between poles. A set of 60 electrical coils, which run on the surface of the pole pieces around the ring and which can be driven at individually different currents, allows the compensation of other than dipole components of the field. The absolute value of the field integral in the storage region is presently known to better than 0.5 ppm. There is potential for a significant improvement in this figure. Field drifts are compensated using a set of 36 selected fixed NMR probes. Their average is kept within 0.1 ppm of the nominal value by regulating the main magnet power supply. To avoid large short-term thermal effects, the magnet yoke has been dressed with passive thermal insulation material. The weak muon focussing is provided by electrostatic quadrupole electrodes with 10 cm separation between opposite plates. They cover 43% of the ring circumference.
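Returning to equation (1), the magic-momentum condition and the anomaly frequency can be checked numerically; the value of $`a_\mu `$ used below is the combined experimental result quoted later in the text:

```python
import math

a_mu = 1.165921e-3
m_mu_GeV = 0.1056584                  # muon mass in GeV/c^2
gamma = math.sqrt(1.0 / a_mu + 1.0)   # from a_mu = 1/(gamma^2 - 1)
p_magic = math.sqrt(gamma**2 - 1.0) * m_mu_GeV
print(f"gamma = {gamma:.1f}, p = {p_magic:.2f} GeV/c")   # -> 29.3, 3.09 GeV/c

e = 1.602176e-19                      # C
m_mu = 1.883532e-28                   # kg
B = 1.4513                            # T, the field quoted above
f_a = a_mu * e * B / (m_mu * 2.0 * math.pi)   # omega_a / 2 pi, eq. (1)
print(f"f_a = {f_a / 1e3:.1f} kHz")           # roughly 229 kHz
```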
The electric field is applied by pulsing $`\pm 24.5`$ kV voltage for 1.4 ms duration, to minimize electron trapping and avoid electrical breakdown. The storage volume diameter is defined by circular apertures to 9 cm. Due to parity violation in the weak muon decay process, the positrons (electrons) are emitted preferentially along (opposite to) the muon spin direction. This causes a time-dependent variation of the spatial distribution of decay particles in the muon eigensystem, which translates into a time-dependent variation of the energy distribution in this experiment. Inside the ring the positrons (electrons) are observed in 24 shower detectors consisting of scintillating fibers embedded in lead. They have 13 radiation lengths thickness and an average resolution of $`\sigma `$/E = 6.8% at the nominal energy cut of E = 1.8 GeV which is applied in the analysis leading to the positron distribution shown in Fig. 2. All positron events are digitized individually in a custom waveform digitizer at 400 MHz rate and stored for analysis. The time standard of the detectors and the field measurement system is a single LORAN C receiver with better than $`10^{-11}`$ long-term stability. The technical improvements over previous experiments at CERN include an azimuthally symmetric iron construction for the magnet with superconducting coils, a larger gap and higher homogeneity of the field, segmented positron (electron) detectors covering a larger solid angle, and improved electronics. A major advantage is the two orders of magnitude higher primary proton intensity available at the AGS Booster at BNL. Further conceptually novel features are the NMR trolley, a superconducting static inflector magnet and direct muon injection onto storage orbits by means of a magnetic kicker. Previously pions had been introduced into the ring, some of which decayed into stored muons. The electrostatic quadrupoles at BNL have twice the field gradient of the CERN experiment, and in addition the vacuum requirements are more relaxed due to a new design which minimizes electron trapping. The vacuum chamber is scalloped to avoid preshowering.

## 3 Present Status of Results

By now data taking has been carried out for $`\mu ^+`$ in two extended periods. In the startup phase of the experiment in 1997 pion injection was used. The efficiency of this process was below the theoretical expectation, which is $`25\times 10^{-6}`$, resulting in $`10^3`$ stored muons per injection pulse (with $`5\times 10^{12}`$ protons from the AGS on target). This method is accompanied by a significant flash in the detectors caused by hadronic interactions of unused pions. The impact of this effect was minimized by gated photomultiplier operation. The data were useful only after 22-75 $`\mu `$s, depending on the detector position. The first result obtained in this way was $`a_{\mu ^+}=1165925(15)\times 10^{-9}`$ (13 ppm). Muon injection, which has been employed regularly since 1998 with an efficiency of order 5%, gives about an order of magnitude more muons per injection pulse and largely reduces the flash background. In addition, major improvements in the magnetic field homogeneity and stability were made and the detector efficiency was increased. A part of the new data ($`\sim `$ 4%) has already been completely analyzed and provides the preliminary value of $`a_{\mu ^+}=1165919(6)\times 10^{-9}`$ (5 ppm), where the uncertainty is dominated by statistics. Among the systematic errors the dominating contributions arise from positron pileup in the detectors, flashlets, i.e.
the additional delivery of small bunches of protons after the AGS main pulse, and the field calibration. Combining all the measured values from CERN and BNL (Fig. 3) yields $`a_\mu (expt)=1165921(5)\times 10^{-9}`$ (4 ppm). This agrees with the latest theoretical value $`a_\mu (theor)=116591628(77)\times 10^{-11}`$ (0.66 ppm). The dominating error here arises from the knowledge of the hadronic part, which has been calculated using electron-positron annihilation into hadrons and hadronic $`\tau `$-decays.

## 4 Perspectives

The data recorded up to now cover more than $`2\times 10^9`$ decay positrons. This leads to an expected statistical uncertainty at the 1 ppm level. Further data taking is in progress, now with typically $`40\times 10^{12}`$ protons per AGS cycle, which provides 10 pulses. The systematic errors are expected to sum up to a few 0.1 ppm. In order for the new muon g-2 experiment to reach its 0.35 ppm design accuracy, besides $`\omega _a`$ and the field, the muon mass, or equivalently its magnetic moment, needs to be known to 0.1 ppm or better (see Eq. (1)). This has been achieved very recently by microwave spectroscopy of the Zeeman effect in the muonium atom ($`\mu ^+e^{-}`$) ground-state hyperfine structure, resulting in a measurement of the ratio of the muon magnetic moment to the proton magnetic moment, $`\mu _\mu /\mu _p`$, to 120 ppb. (A comparison of the simultaneously obtained muonium ground-state hyperfine interval with QED theory may be interpreted in terms of an even more precise value of this quantity, at 30 ppb.) In minimal supersymmetric models, as a particular example of relevant speculative models, a contribution to $`a_\mu `$ of

$$\mathrm{\Delta }a_\mu (SUSY)/a_\mu \simeq 1.25\mathrm{ppm}\left(\frac{100GeV/c^2}{\stackrel{~}{m}}\right)^2\mathrm{tan}\beta ,$$ (2)

is expected, where $`\stackrel{~}{m}`$ is the mass of the lightest supersymmetric particle and $`\mathrm{tan}\beta `$ is the ratio of the vacuum expectation values of the two Higgs fields involved. At the projected accuracy for g-2, there is sensitivity to large values of the latter parameter. The experiment is planned for both $`\mu ^+`$ and $`\mu ^{}`$, as a test of CPT invariance. There is current interest in view of the suggestion to compare tests of CPT invariance in different systems on a common basis, i.e. by using the energies of the states involved. For fermion magnetic anomalies, particles with spin down in an external field need to be compared to their antiparticles with spin up. The nature of g-2 experiments is such that they provide a figure of merit $`r=|a^{-}-a^{+}|\frac{\hbar \omega _c}{mc^2}`$ for a CPT test, where $`a^{-}`$ and $`a^{+}`$ are the respective magnetic anomalies, and $`m`$ is the particle mass. For the past electron and positron measurements one has $`r_e\simeq 1.2\times 10^{-21}`$, which is a much tighter bound than from the neutral kaon system, where the mass difference between $`K^0`$ and $`\overline{K^0}`$ yields $`r_K\simeq 1\times 10^{-18}`$. An even more stringent CPT test therefore arises already from the past muon magnetic anomaly measurements, where $`r_\mu \simeq 3.5\times 10^{-24}`$. Hence, this may be viewed as the presently best known CPT test based on system energies. The BNL g-2 experiment allows us to look forward to a 20 times more precise test of this fundamental symmetry. According to the standard theory, an elementary particle is not allowed to have a finite permanent electric dipole moment (edm), as this would violate CP and T symmetries, if CPT is assumed to be conserved.
According to the standard theory an elementary particle is not allowed to have a finite permanent electric dipole moment (edm), as this would violate CP and T symmetries, if CPT is assumed to be conserved. An edm of the muon would manifest itself in the g-2 experiment in a time dependent up-down asymmetry of decay positrons, which can be searched for along with the muon g-2 measurements. The BNL experiment is expected to provide one order of magnitude improvement over the present limit of $`1.05\times 10^{-18}`$ e cm. This is possible through proper segmentation of the detector packages. A further highly promising approach has been suggested as a dedicated follow-on experiment. It is expected that, based on the g-2 setup, an experiment can be tailored to allow a 5-6 orders of magnitude increase in sensitivity. It should be noted that a non-standard-model value of $`a_\mu `$ would call for a muon edm search, as both quantities are intimately linked in many theories, where their sizes are connected through a CP violating phase . Another possibility is using the magnet as a spectrometer in which pion decays are observed, restricting the muon neutrino mass limit by a further factor of 20. ## 5 Acknowledgements This work was supported in part by the U.S. Department of Energy, the U.S. National Science Foundation, the German Bundesministerium für Bildung und Forschung, the Russian Ministry of Science and the US-Japan Agreement in High Energy Physics.
# Quantum transport in ballistic conductors: evolution from conductance quantization to resonant tunneling ## Abstract We study the transport properties of an atomic-scale contact in the ballistic regime. The results for the conductance and related transmission eigenvalues show how the properties of the ideal semi-infinite leads (i.e. the measuring device) as well as the coupling between the leads and the conductor influence the transport in a two-probe geometry. We observe the evolution from conductance quantization to resonant tunneling conductance peaks upon changing the hopping parameter in the disorder-free tight-binding Hamiltonian which describes the leads and the coupling to the sample. Mesoscopic physics has changed our understanding of transport in condensed matter systems. The discovery of new effects, such as weak localization or universal conductance fluctuations, has been accompanied by a rethinking of the established ideas in a new light. One of the most spectacular discoveries of mesoscopics is conductance quantization (CQ) in a short and narrow constriction connecting two high-mobility (ballistic) two-dimensional electron gases. The conductance of these quantum point contacts as a function of the constriction width $`W\sim \lambda _F`$ has steps of magnitude $`2e^2/h`$. New experimental techniques have allowed observation of similar phenomena in metallic point contacts of atomic size. The Landauer formula for the two-probe conductance $$G=\frac{2e^2}{h}\text{Tr}(\mathrm{𝐭𝐭}^{\dagger })=G_Q\sum _{n=1}^{M}T_n,$$ (1) has provided an explanation of the stepwise conductance in terms of the number $`N\le M`$ of transverse propagating states (“channels”) at the Fermi energy $`E_F`$ which are populated in the constriction. Here $`𝐭`$ is the transmission matrix, $`T_n`$ are the transmission eigenvalues and $`G_Q=2e^2/h`$ is the conductance quantum. In the ballistic case $`(\mathrm{𝐭𝐭}^{\dagger })_{ij}=\delta _{ij}`$, or equivalently $`T_n=1`$. Further studies have explored CQ under a range of conditions, including geometry, scattering on impurities, temperature effects, and magnetic field. In this paper we study the influence of the attached leads on ballistic transport ($`\ell >L`$, $`\ell `$ being the elastic mean free path and $`L`$ the system size) in a nanocrystal. We assume that in the two-probe theory an electron leaving the sample does not reenter the sample in a phase-coherent way. This means that at zero temperature the phase coherence length $`L_\varphi `$ is equal to the length of the sample $`L`$. In the jargon of quantum measurement theory, the leads act as a “macroscopic measurement apparatus”. Our concern with the influence of the leads on conductance is therefore also a concern of quantum measurement theory. Recently, the effects of a lead-sample contact on quantum transport in molecular devices have received increased attention in the developing field of “nanoelectronics”. Also, the simplest lattice model and the related real-space Green function technique are chosen here in order to address some practical issues which appear in the frequent use of these methods to study transport in disordered samples. We emphasize that the relevant formulas for transport coefficients contain three different energy scales (corresponding to the lead, the sample, and the lead-sample contact), as discussed below. In order to isolate only these effects we pick the strip geometry in the two-probe measuring setup shown in Fig. 1.
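Before turning to the model, we note that Eq. (1) is straightforward to evaluate once $`𝐭`$ is known. A minimal sketch (the diagonal transmission matrix below is made up, purely for illustration):

```python
import numpy as np

G_Q = 1.0  # conductance quantum 2e^2/h, used here as the unit of conductance

# Hypothetical transmission matrix; diagonal for simplicity (no channel mixing).
t = np.diag([1.0, 0.8, 0.3])

T_n = np.linalg.eigvalsh(t @ t.conj().T)  # transmission eigenvalues of t.t^dagger
G = G_Q * T_n.sum()                       # Eq. (1): G = G_Q Tr(t.t^dagger)
print("T_n =", T_n, " G =", G, "G_Q")     # T_n = 0.09, 0.64, 1.0; G = 1.73 G_Q
```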
The nanocrystal (“sample”) is placed between two ideal (disorder-free) semi-infinite “leads” which are connected to macroscopic reservoirs. The electrochemical potential difference $`eV=\mu _L-\mu _R`$ is measured between the reservoirs. The leads have the same cross section as the sample. This eliminates scattering induced by a wide-to-narrow geometry of the sample-lead interface. The whole system is described by a clean tight-binding Hamiltonian (TBH) with nearest-neighbor hopping parameters $`t_{\mathrm{𝐦𝐧}}`$ $$\widehat{H}=\sum _{𝐦,𝐧}t_{\mathrm{𝐦𝐧}}|𝐦\rangle \langle 𝐧|,$$ (2) where $`|𝐦\rangle `$ is the orbital $`\psi (𝐫-𝐦)`$ on the site $`𝐦`$. The “sample” is the central section with $`N_x\times N_y\times N_z`$ sites. The “sample” is perfectly ordered with $`t_{\mathrm{𝐦𝐧}}=t`$. The leads are the same except that $`t_{\mathrm{𝐦𝐧}}=t_\text{L}`$. Finally, the hopping parameter (coupling) between the sample and the lead is $`t_{\mathrm{𝐦𝐧}}=t_\text{C}`$. We use hard-wall boundary conditions in the $`\widehat{y}`$ and $`\widehat{z}`$ directions. The different hopping parameters introduced here have to be used to get the conductance at Fermi energies throughout the whole band extended by the disorder, i.e. $`t_\text{L}>t`$. Thus, one has to be aware of the effects on the conductance calculated in our analysis when engaging in such studies. Our toy model shows exact conductance steps in multiples of $`G_Q`$ when $`t_\text{C}=t_\text{L}=t`$. This is a consequence of the infinitely smooth (“ideally adiabatic” ) sample-lead geometry. Then we study the evolution of quantized conductance into resonant tunneling conductance while changing the parameter $`t_\text{L}`$ of the leads as well as the coupling $`t_\text{C}`$ between the leads and the conductor. An example of this evolution is given in Fig. 2. The equivalent evolution of the transmission eigenvalues $`T_n`$ of the channels is shown in Fig. 3. A similar evolution has been studied recently in one-atom point contacts. The non-zero resistance is a purely geometrical effect caused by reflection when the large number of channels in the macroscopic reservoirs matches the small number of channels in the lead. The sequence of steps ($`1,3,6,5,7,5,6,3,1`$ multiples of $`G_Q`$ as the Fermi energy $`E_F`$ is varied) is explained as follows. The eigenstates in the leads, which comprise the scattering basis, have the form $`\psi _𝐤\propto \mathrm{sin}(k_ym_y)\mathrm{sin}(k_zm_z)e^{ik_xm_x}`$ at atom $`𝐦`$, with energy $`E=2t_\text{L}[\mathrm{cos}(k_xa)+\mathrm{cos}(k_ya)+\mathrm{cos}(k_za)]`$, where $`a`$ is the lattice constant. The discrete values $`k_y(i)=i\pi /(N_y+1)a`$ and $`k_z(j)=j\pi /(N_z+1)a`$ define subbands or “channels” labeled by $`(k_y,k_z)\equiv (i,j)`$, where $`i`$ runs from $`1`$ to $`N_y`$ and $`j`$ runs from $`1`$ to $`N_z`$. The channel $`(k_y,k_z)`$ is open if $`E_F`$ lies between the bottom of the subband, $`2t_\text{L}[-1+\mathrm{cos}(k_ya)+\mathrm{cos}(k_za)]`$, and the top of the subband, $`2t_\text{L}[1+\mathrm{cos}(k_ya)+\mathrm{cos}(k_za)]`$. Because of the degeneracy of different transverse modes in 3D, several channels $`(k_y,k_z)`$ open or close at the same energy. Each channel contributes one conductance quantum $`G_Q`$. This is shown in Fig. 2 for a sample with a $`3\times 3`$ cross section, where the number of transverse propagating modes is $`M=9`$. In the adiabatic geometry, channels do not mix, i.e. the transmission matrix is diagonal in the basis of channels defined by the leads.
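The step sequence quoted above can be checked directly by counting, for each Fermi energy, the channels whose subband bounds bracket $`E_F`$. A short sketch (for the $`3\times 3`$ cross section, with $`t_\text{L}=1`$ and $`a=1`$; the sampled energies are arbitrary points on the plateaus):

```python
import numpy as np

N_y = N_z = 3

def open_channels(E_F, t_L=1.0):
    """Count channels (i, j) whose subband contains E_F (lattice constant a = 1)."""
    n_open = 0
    for i in range(1, N_y + 1):
        for j in range(1, N_z + 1):
            c = np.cos(i * np.pi / (N_y + 1)) + np.cos(j * np.pi / (N_z + 1))
            if 2 * t_L * (c - 1) < E_F < 2 * t_L * (c + 1):  # subband bottom/top
                n_open += 1
    return n_open

for E_F in (-4.0, -3.0, -1.5, -0.7, 0.0, 0.7, 1.5, 3.0, 4.0):
    print(E_F, open_channels(E_F))   # reproduces the sequence 1,3,6,5,7,5,6,3,1
```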
We compute the conductance using the expression obtained in the framework of the Keldysh technique by treating the coupling between the central region and the lead as a perturbation. This provides the following Landauer-type formula for the conductance of the non-interacting system $`G`$ $`=`$ $`{\displaystyle \frac{2e^2}{h}}\text{Tr}\left(\widehat{\mathrm{\Gamma }}_L\widehat{G}_{1N_x}^r\widehat{\mathrm{\Gamma }}_R\widehat{G}_{N_x1}^a\right)={\displaystyle \frac{2e^2}{h}}\text{Tr}(\mathrm{𝐭𝐭}^{\dagger }),`$ (3) $`𝐭`$ $`=`$ $`\sqrt{\widehat{\mathrm{\Gamma }}_L}\widehat{G}_{1N_x}^r\sqrt{\widehat{\mathrm{\Gamma }}_R}.`$ (4) Here $`\widehat{G}_{1N_x}^r`$, $`\widehat{G}_{N_x1}^a`$ are matrices whose elements are the Green functions connecting layers $`1`$ and $`N_x`$ of the sample. Thus only an $`N_y\times N_z`$ block of the whole matrix $`\widehat{G}(𝐧,𝐦)`$ is needed to compute the conductance. The positive operator $`\widehat{\mathrm{\Gamma }}_L=i(\widehat{\mathrm{\Sigma }}_L^r-\widehat{\mathrm{\Sigma }}_L^a)=-2\text{Im}\widehat{\mathrm{\Sigma }}_L^r>0`$ is the counterpart of the spectral function $`\widehat{A}=i(\widehat{G}^r-\widehat{G}^a)`$ for the self-energy $`\widehat{\mathrm{\Sigma }}_L`$ introduced by the left lead. It “measures” the coupling of the open sample to the left lead ($`\widehat{\mathrm{\Gamma }}_R`$ is the equivalent for the right lead). The Green operator is defined as the inverse of $`(E-\widehat{H})`$ including the relevant boundary conditions. Instead of inverting the infinite matrix we invert only $`(E-\widehat{H}_S)`$ defined on the Hilbert space spanned by the orbitals $`|𝐦\rangle `$ inside the sample $$\widehat{G}^r=(E-\widehat{H}_S-\widehat{\mathrm{\Sigma }}^r)^{-1},$$ (5) where $`\widehat{H}_S`$ is the TBH for the sample only. This is achieved by using the retarded self-energy $`\widehat{\mathrm{\Sigma }}^r=\widehat{\mathrm{\Sigma }}_L^r+\widehat{\mathrm{\Sigma }}_R^r`$ introduced by the left $`(L)`$ and the right $`(R)`$ lead. In the site representation the Green operator $`\widehat{G}^{r,a}`$ is a Green function matrix $`\widehat{G}^{r,a}(𝐧,𝐦)=\langle 𝐧|\widehat{G}^{r,a}|𝐦\rangle `$. Equation (5) does not need the small imaginary part $`i0^+`$ necessary to specify the boundary conditions for the retarded or advanced Green operator $`\widehat{G}^{r,a}`$ because the lead self-energy ($`\widehat{\mathrm{\Sigma }}^a=[\widehat{\mathrm{\Sigma }}^r]^{\dagger }`$) adds a well-defined imaginary part to $`E-\widehat{H}_S`$. This imaginary part is related to the average time an electron spends inside the sample before escaping into the leads. The self-energy terms have non-zero matrix elements only on the edge layers of the sample adjacent to the leads. They are given in terms of the Green function on the lead edge layer and the coupling parameter $`t_\text{C}`$ $`\widehat{\mathrm{\Sigma }}_{L,R}^r(𝐧,𝐦)`$ $`=`$ $`{\displaystyle \frac{2}{N_y+1}}{\displaystyle \frac{2}{N_z+1}}{\displaystyle \sum _{k_y,k_z}}\mathrm{sin}(k_yn_y)\mathrm{sin}(k_zn_z)`$ (7) $`\times \widehat{\mathrm{\Sigma }}^r(k_y,k_z)\mathrm{sin}(k_ym_y)\mathrm{sin}(k_zm_z),`$ where $`(𝐧,𝐦)`$ is a pair of sites on the surfaces inside the sample which are adjacent to the leads ($`L`$ or $`R`$). The self-energy $`\widehat{\mathrm{\Sigma }}^r(k_y,k_z)`$ in the channel $`(k_y,k_z)`$ is given by $$\widehat{\mathrm{\Sigma }}^r(k_y,k_z)=\frac{t_\text{C}^2}{2t_\text{L}^2}\left(E_\mathrm{\Sigma }-i\sqrt{4t_\text{L}^2-E_\mathrm{\Sigma }^2}\right),$$ (8) for $`|E_\mathrm{\Sigma }|<2t_\text{L}`$.
We use the shorthand notation $`E_\mathrm{\Sigma }=E-\epsilon (k_y,k_z)`$, where $`\epsilon (k_y,k_z)=2t_\text{L}[\mathrm{cos}(k_ya)+\mathrm{cos}(k_za)]`$ is the energy of the quantized transverse levels in the lead. In the opposite case $`|E_\mathrm{\Sigma }|>2t_\text{L}`$ we have $$\widehat{\mathrm{\Sigma }}^r(k_y,k_z)=\frac{t_\text{C}^2}{2t_\text{L}^2}\left(E_\mathrm{\Sigma }-\text{sgn}(E_\mathrm{\Sigma })\sqrt{E_\mathrm{\Sigma }^2-4t_\text{L}^2}\right).$$ (9) In order to study the conductance as a function of the two parameters $`t_\text{L}`$ and $`t_\text{C}`$ we change either one of them while holding the other fixed (at the unit of energy specified by $`t`$), or both at the same time. The first case is shown in Fig. 2 and Fig. 4 (upper panel), the second one in Fig. 4 (lower panel). The conductance is depressed in all cases since these configurations of hopping parameters $`t_{\mathrm{𝐦𝐧}}`$ effectively act as barriers. There is reflection at the sample-lead interface due to the mismatch of the subbands in the lead and in the sample when $`t_\text{L}`$ differs from $`t`$. This demonstrates that adiabaticity is not a necessary condition for CQ. In the general case, each set of channels sharing the same energy subband is characterized by its own transmission function $`T_n(E_F)`$. When the coupling $`t_\text{C}=0.1`$ is small, a double-barrier structure is obtained which exhibits resonant tunneling conductance. The electron tunnels from one lead to the other via discrete eigenstates. The transmission function is composed of peaks centered at $`E_r=2t[\mathrm{cos}(k_xa)+\mathrm{cos}(k_ya)+\mathrm{cos}(k_za)]`$, where $`k_x=k\pi /(N_x+1)a`$ is now quantized inside the sample, i.e. $`k`$ runs from $`1`$ to $`N_x`$. The magnitude and width of the peaks are defined by the rate at which an electron placed between the barriers leaks out into the leads. These rates are related to the level widths generated through the coupling to the leads. In our model they are energy (i.e. mode) dependent. For example, at $`E_F=0`$ seven transmission eigenvalues are non-zero (in accordance with the open channels in Fig. 3), and exactly at $`E_F=0`$ three of them have $`T=1`$ and four have $`T=0.5`$. Upon decreasing $`t_\text{C}`$ further, all conductance peaks except the one at $`E_F=0`$ become negligible. Singular behavior of $`G(E_F)`$ at the subband edges of the leads has been observed before. It is worth mentioning that the same results are obtained using a non-standard version of the Kubo-Greenwood formula for the volume-averaged conductance $`G`$ $`=`$ $`{\displaystyle \frac{4e^2}{h}}{\displaystyle \frac{1}{L_x^2}}\text{Tr}\left(\hbar \widehat{v}_x\text{Im}\widehat{G}\hbar \widehat{v}_x\text{Im}\widehat{G}\right),`$ (11) $`\text{Im}\widehat{G}`$ $`=`$ $`{\displaystyle \frac{1}{2i}}(\widehat{G}^r-\widehat{G}^a),`$ (12) where $`v_x`$ is the $`x`$ component of the velocity operator. This formula was originally derived for an infinite system without any notion of leads and reservoirs. The crucial non-standard aspect is the use of the Green function (5) in formula (11). This takes into account, through the lead self-energy (7), the boundary conditions at the reservoirs. The reservoirs are necessary in both the Landauer and the Kubo formulations of linear transport for open finite systems. They provide thermalization and thereby a steady state of the transport in the central region. Semi-infinite leads are a convenient method to model the macroscopic reservoirs.
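The whole procedure of Eqs. (3)-(9) fits into a short numerical sketch. The listing below is written for clarity rather than efficiency, and the $`3\times 3\times 4`$ sample size is an arbitrary illustrative choice; for $`t_\text{C}=t_\text{L}=t`$ it returns the integer conductance plateaus discussed above, while lowering $`t_\text{C}`$ reproduces the crossover to resonant tunneling:

```python
import numpy as np

Ny, Nz, Nx = 3, 3, 4           # transverse cross section and number of layers
t, tL, tC = 1.0, 1.0, 1.0      # sample, lead and coupling hopping parameters

def site(ix, iy, iz):          # map (layer, row, column) to a matrix index
    return (ix * Ny + iy) * Nz + iz

# Sample tight-binding Hamiltonian H_S, Eq. (2), with hard-wall boundaries.
N = Nx * Ny * Nz
H = np.zeros((N, N))
for ix in range(Nx):
    for iy in range(Ny):
        for iz in range(Nz):
            s = site(ix, iy, iz)
            if ix + 1 < Nx: H[s, site(ix + 1, iy, iz)] = H[site(ix + 1, iy, iz), s] = t
            if iy + 1 < Ny: H[s, site(ix, iy + 1, iz)] = H[site(ix, iy + 1, iz), s] = t
            if iz + 1 < Nz: H[s, site(ix, iy, iz + 1)] = H[site(ix, iy, iz + 1), s] = t

def sigma_channel(E, ky, kz):  # Eqs. (8) and (9) for one channel (a = 1)
    Es = E - 2 * tL * (np.cos(ky) + np.cos(kz))
    if abs(Es) < 2 * tL:
        return tC**2 / (2 * tL**2) * (Es - 1j * np.sqrt(4 * tL**2 - Es**2))
    return tC**2 / (2 * tL**2) * (Es - np.sign(Es) * np.sqrt(Es**2 - 4 * tL**2))

def sigma_lead(E, ix):         # Eq. (7): self-energy matrix on edge layer ix
    S = np.zeros((N, N), dtype=complex)
    for i in range(1, Ny + 1):
        for j in range(1, Nz + 1):
            ky, kz = i * np.pi / (Ny + 1), j * np.pi / (Nz + 1)
            sig = sigma_channel(E, ky, kz)
            for iy in range(Ny):
                for iz in range(Nz):
                    for jy in range(Ny):
                        for jz in range(Nz):
                            S[site(ix, iy, iz), site(ix, jy, jz)] += (
                                4 / ((Ny + 1) * (Nz + 1)) * sig
                                * np.sin(ky * (iy + 1)) * np.sin(kz * (iz + 1))
                                * np.sin(ky * (jy + 1)) * np.sin(kz * (jz + 1)))
    return S

def conductance(E):            # Eqs. (3) and (5), in units of G_Q = 2e^2/h
    SL, SR = sigma_lead(E, 0), sigma_lead(E, Nx - 1)
    Gr = np.linalg.inv(E * np.eye(N) - H - SL - SR)
    GamL, GamR = 1j * (SL - SL.conj().T), 1j * (SR - SR.conj().T)
    return np.trace(GamL @ Gr @ GamR @ Gr.conj().T).real

for E in (-3.0, -1.5, 0.0, 1.5, 3.0):
    print(E, round(conductance(E), 3))   # 3, 6, 7, 6, 3 for ideal coupling
```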
When employing the Kubo formula (11) one can use current conservation and compute the trace only on two adjacent layers inside the sample. To get correct results in this scheme, $`L_x`$ in Eq. (11) should be replaced by the lattice constant $`a`$. In the quantum transport theory of disordered systems the influence of the leads on the conductance of the sample is understood as follows. An isolated sample has a discrete energy spectrum. Attaching the leads necessary for transport measurements broadens the energy levels. If the level width $`\mathrm{\Gamma }`$ due to the coupling to the leads is larger than the Thouless energy $`E_{\text{Th}}=\hbar /\tau _\text{D}\simeq \hbar D/L^2`$ ($`D=v_F\ell /3`$ being the diffusion constant), the level discreteness is unimportant for transport. For our case of ballistic conduction, $`E_{\text{Th}}`$ is replaced by the inverse time of flight $`\hbar v_F/L`$. In a disordered sample where $`\mathrm{\Gamma }\gg E_{\text{Th}}`$, varying the strength of the coupling to the leads will not change the transport coefficients. In other words, the intrinsic resistance of the sample is much larger than the resistance of the lead-sample contact. In the opposite case, the discreteness of levels becomes important and the strength of the coupling defines the conductance. This is the realm of quantum dots, where weak enough coupling can make the charging energy $`e^2/2C`$ of a single electron important as well. Changing the properties of the dot-lead contact affects the conductance, i.e. the result of a measurement depends on the measuring process. The decay width $`\mathrm{\Gamma }=\hbar /\tau _{\text{dwell}}`$ for electron emission into one of the leads is determined by the transmission probabilities of the channels through the contact and the mean level spacing. This means that the mean dwell time $`\tau _{\text{dwell}}`$ inside our sample depends on both $`t_\text{C}`$ and $`t_\text{L}`$. Changing the hopping parameters will make $`\tau _{\text{dwell}}`$ greater than the time of flight $`\tau _f=L/v_F`$. Thus we find that the ballistic conductance depends sensitively on the parameters of the dephasing environment (i.e. the leads). In conclusion, we have studied the transport properties of a ballistic nanocrystal placed between two semi-infinite leads in the simplest strip geometry. We observe extreme sensitivity of the conductance to changes in the hopping parameter in the leads as well as in the coupling between the leads and the sample. As can be easily anticipated, the conductance evolves from perfect quantization (the result of an ideal adiabatic geometry) to resonant tunneling. Nevertheless, it is quite amusing that vastly different $`G(E_F)`$ are obtained between these two limits (e.g. Fig. 4). The results are of relevance for analogous theoretical studies in disordered conductors as well as for experiments using clean metal junctions with different effective electron masses throughout the circuit. This work was supported in part by NSF grant no. DMR 9725037. We thank I. L. Aleiner for interesting discussions.
# Critical behavior of thermopower and conductivity at the metal-insulator transition in high-mobility Si-MOSFET’s ## Abstract This letter reports thermopower and conductivity measurements through the metal-insulator transition for 2-dimensional electron gases in high-mobility Si-MOSFET’s. At low temperatures both thermopower and conductivity show critical behavior as a function of electron density which is very similar to that expected for an Anderson transition. In particular, when approaching the critical density from the metallic side the diffusion thermopower appears to diverge and the conductivity vanishes. On the insulating side the thermopower shows an upturn with decreasing temperature. The scaling theory of non-interacting, disordered electron gases predicts that no metal-insulator transition (MIT) occurs in 2 dimensions as temperature $`T\to 0`$. Nevertheless, what appears to be a MIT has been observed (at finite, though low, $`T`$), first in $`n`$-Si-MOSFET’s and more recently in many other 2-dimensional (2D) hole and electron gases . In the particular case of Si-MOSFET’s, the transition is most clearly visible in high-mobility samples, roughly $`\mu \gtrsim 1`$ m<sup>2</sup>/V s. As the density $`n`$ is varied, there is a particular value $`n_0`$ above or below which the resistivity $`\rho `$ shows metallic or insulating temperature dependence, respectively. For the present purposes we will use as a working definition that negative $`d\rho /dT`$ indicates an ‘insulator’, and positive $`d\rho /dT`$ at the lowest temperatures we can reach corresponds to a ‘metal’ (possible deviations from this definition and the consequences will be mentioned later). At $`n`$ not too close to $`n_0`$, metallic behaviour is visible over a wide range of $`T`$, roughly $`T<0.5E_F/k_B`$, where $`E_F`$ is the Fermi energy. The decrease of $`\rho `$ in the metallic state for high-mobility samples is typically two orders of magnitude larger than can be accounted for by electron-phonon scattering. Most previous work on these systems has focused on $`\rho `$, though measurements of the compressibility have also appeared recently. The present paper presents experimental data on the low-temperature thermopower $`S`$ and conductivity $`\sigma =1/\rho `$, both of which are found to exhibit critical behavior around $`n_0`$. Earlier, a scaling behaviour was described for the temperature dependence of $`\rho (T)`$ over a temperature range $`(0.05-0.3)E_F/k_B`$. In contrast, we report a different type of critical behavior for $`\sigma `$. When we extrapolate our data on $`\sigma `$, typically taken over the range $`0.3-4.2`$ K, to the $`T\to 0`$ limit, we find a power-law critical behaviour as a function of $`(n/n_0-1)`$ on the metallic side. In addition, at our lowest temperature of around 0.3 K, where diffusion thermopower dominates, $`S/T`$ appears to diverge when approaching $`n_0`$. At $`n_0`$ there is an abrupt change in the behavior of $`S/T`$, with lower densities showing an upturn in $`S`$ as $`T`$ is decreased. Similar characteristics have long been predicted for an Anderson MIT in 3D, but such a transition should not occur in 2D. The main sample used for the present $`\rho `$ and $`S`$ measurements (Sample 1) is the same as that described in a previous paper , and the techniques used to measure $`S`$ can also be found there. This sample has $`n_0=1.01\times 10^{15}`$ m<sup>-2</sup> (defined as above) and a peak mobility $`\mu =1.75`$ m<sup>2</sup>/Vs at $`T=1.1`$ K.
$`S`$ and $`\rho `$ have been measured as a function of $`T`$, down to about 0.3 K, at many different values of $`n`$. We have also analyzed independent $`\rho (T,n)`$ data for two other samples over the same range of $`T`$: Sample 2 from the same wafer with $`n_0=0.99\times 10^{15}`$ m<sup>-2</sup>, and Sample 3 with peak $`\mu =3.6`$ m<sup>2</sup>/Vs and $`n_0=0.956\times 10^{15}`$ m<sup>-2</sup>. The major experimental problem was that of measuring thermoelectric voltages with the sample in the insulating state. For this purpose an amplifier with input bias current $`<1`$ pA and input impedance $`>10^{12}\mathrm{\Omega }`$ was used. With some averaging it had a resolution of 0.1 $`\mu `$V for source impedances of less than a few hundred k$`\mathrm{\Omega }`$, rising to about 1 $`\mu `$V at 10-20 M$`\mathrm{\Omega }`$, roughly the highest sample impedance in these measurements. With the sample in the metallic state, a Keithley 182 digital voltmeter usually gave the best compromise of input bias current, input impedance and noise. All connections to the sample had isolation resistance $`>50`$ G$`\mathrm{\Omega }`$ and all leads were well shielded and filtered against rf interference. In the metallic region $`n`$ is a linear function of gate voltage and it is believed to follow approximately the same dependence in the insulating region , at least close to $`n_0`$. The results on the temperature dependence of $`\rho `$ on both the insulating and metallic sides are not shown but are very similar to those seen in previous work . In the metallic regime we have fitted our data on $`\rho `$, typically over the range 0.3 K to 4.2 K, to the equation $$\rho =\rho _0+\rho _1\mathrm{exp}\left(-(T_0/T)^p\right)$$ (1) where $`\rho _0`$, $`\rho _1`$, $`T_0`$ and $`p`$ are fitting constants, in order to evaluate $`\rho _0`$. Figure 1 shows the results on $`\sigma _0=1/\rho _0`$ as a function of $`n`$. All samples follow the critical behaviour $$\sigma _0=\sigma _m+\sigma _s\left(\frac{n}{n_0}-1\right)^\nu .$$ (2) The solid lines are the best fits with the following parameters (with $`\sigma `$ in units of $`e^2/h`$). For Sample 2, $`\sigma _m=0.2\pm 0.3`$, $`\sigma _s=13.6\pm 0.7`$ and $`\nu =0.83\pm 0.03`$. Sample 1 shows identical behaviour within experimental error. The higher-mobility Sample 3 also follows the same equation, with $`\sigma _m=0.36\pm 0.15`$, $`\sigma _s=34\pm 5`$ and $`\nu =1.39\pm 0.05`$. These results suggest that $`\nu `$ increases with peak mobility, but clearly more data on a variety of samples are required. The values of $`\sigma _m`$ for Samples 1 and 2 are consistent with zero within experimental uncertainty. For Sample 3, $`\sigma _m`$ may be finite. However, if $`n_0`$ is allowed to decrease from $`0.956`$ to about $`0.925\times 10^{15}`$ m<sup>-2</sup>, a fit which is indistinguishable over the range of the data can also be obtained with $`\sigma _m=0\pm 0.15`$, $`\sigma _s=32\pm 5`$ and $`\nu =1.48\pm 0.05`$. A small discrepancy in $`n_0`$ could arise from the identification of the critical density for the MIT with the density $`n_0`$ where $`d\rho /dT`$ changes sign, a procedure which has no firm physical foundation . The critical behavior described by Eq. (2) with $`\sigma _m=0`$ is formally the same as that expected for a (continuous) Anderson transition with a mobility edge at $`n_0`$, whereas a finite $`\sigma _m`$ would correspond to a (discontinuous) Mott-Anderson transition; neither transition should arise in a non-interacting 2D gas .
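The power-law fitting procedure behind Eq. (2) can be illustrated with a few lines of code. The sketch below uses synthetic data generated with parameters similar to the Sample 2 fit; the noise level and density grid are arbitrary choices made only for the illustration:

```python
import numpy as np
from scipy.optimize import curve_fit

def sigma_0(n, sigma_m, sigma_s, nu, n0=1.01):
    """Eq. (2); n and n0 in units of 1e15 m^-2, sigma in units of e^2/h."""
    return sigma_m + sigma_s * (n / n0 - 1.0) ** nu

rng = np.random.default_rng(0)
n = np.linspace(1.1, 2.0, 20)                       # metallic side only, n > n0
data = sigma_0(n, 0.0, 13.6, 0.83) + 0.05 * rng.standard_normal(n.size)

popt, pcov = curve_fit(sigma_0, n, data, p0=(0.0, 10.0, 1.0))
print("sigma_m, sigma_s, nu =", popt)
print("uncertainties       =", np.sqrt(np.diag(pcov)))
```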
The inclusion of interactions along with disorder is a much more complex and ongoing theoretical problem (e.g. see Refs. and references therein) and it is not yet clear if such transitions become possible under these conditions. Similar critical behaviour, usually with $`\sigma _m`$ consistent with zero, has been seen in many 3D systems, typically with values of $`\nu `$ in the range $`0.5-1.3`$. There are only two previously reported cases related to 2D. Hanein et al. have made an analysis similar to the one above for a 2D hole gas in GaAs and found a linear relation between $`\sigma _0`$ and $`n`$, but with a finite $`\sigma _m`$. Feng et al. have also found scaling behaviour in a Si-MOSFET, but it appears to be unrelated to that seen here. We now turn to the thermopower data. A selection of data on $`S`$ is shown in Fig. 2. At $`n=8.5\times 10^{15}`$ m<sup>-2</sup> the diffusion thermopower, $`S^d`$, is almost zero and one sees only phonon drag, $`S^g`$, which varies approximately as $`T^6`$ at the lowest temperatures. As $`n`$ decreases, $`S`$ begins to show two distinct regions with different $`T`$ dependences. At $`T>1`$ K there is a relatively rapid increase of $`S`$, roughly as $`T^3`$, which is that expected for $`S^g`$ at intermediate temperatures. At $`T<1`$ K, $`S`$ has a much weaker, approximately linear $`T`$ dependence, indicative of $`S^d`$ becoming dominant; for $`n<n_0`$ this low-$`T`$ behaviour, which is characteristic of ordinary metals, is replaced by an upturn in $`S`$. Concentrating on the metallic region, the data at lowest $`T`$ are taken to give the best estimate of the diffusion thermopower $`S^d=\alpha T`$. Fig. 3 shows that $`\alpha `$ as a function of $`n`$ appears to diverge as $`n\to n_0`$. One would expect a divergence when $`E_F`$ approaches a gap in the DOS, but the present results are inconsistent with this explanation because Hall data show that in the vicinity of $`n_0`$ the mobile carrier density equals $`n`$ within $`10\%`$. However, Eq. (2) also implies a divergence of $`S^d`$. Thus, with the assumption of a constant density of states (DOS), Eq. (2) is consistent with $$\sigma (E_F)=\sigma _m+\sigma _s\left(\frac{E_F}{E_c}-1\right)^\nu .$$ (3) Again, with $`\sigma _m=0`$ this is formally equivalent to an Anderson transition with $`E_c`$ being the mobility edge. The use of the Mott relation $`S^d=(\pi ^2k_B^2T/3e)(\partial \mathrm{ln}\sigma /\partial E)_{E_F}`$ with Eq. (3), taking $`\sigma _m=0`$, then gives $$S^d=\frac{\nu \pi ^2k_B^2T}{3e(E_F-E_c)}.$$ (4) This result is valid only if $`(E_F-E_c)/k_BT\gg 1`$; in the opposite limit $`S^d`$ tends to a constant ($`228\mu `$V/K in 3D). Numerical calculations show that the approximation of Eq. (4) gives a magnitude roughly a factor of 2 too large when $`(E_F-E_c)/k_BT\approx 2`$, which, for our samples, corresponds to $`\mathrm{\Delta }=(n-n_0)/n_0\approx 0.11`$ at $`T=0.4`$ K (using the ideal DOS, $`g_0`$, with an effective mass of $`0.19m_0`$). To simulate this saturation we add $`\mathrm{\Delta }`$ in the denominator (but allow it to be a variable when determining the best fit to the data) and, rewriting Eq. (4) in terms of $`n`$, we have $$\alpha =S^d/T=C/\sqrt{\mathrm{\Delta }^2+(\frac{n}{n_0}-1)^2}$$ (5) where $`C=\nu \pi ^2k_B^2/(3eE_c)`$ is a constant expected to be about 32 $`\mu `$V/K<sup>2</sup> for Sample 1, again using $`g_0`$. If $`\sigma _m`$ is finite in Eq. (3), then the Mott relation shows that it will contribute to the denominator of Eq. (4), also softening the divergence at $`n=n_0`$.
However, the experimental $`\sigma _m`$ is so small that this is negligible compared to the finite-$`T`$ effect considered here. The best fit of the data to Eq. (5) gives $`\mathrm{\Delta }=0.15\pm 0.01`$ and $`C=(9.5\pm 1.5)\mu `$V/K<sup>2</sup> and is shown as the solid line in Fig. 3. (As with $`\sigma `$, the fit can be improved if $`n_0`$ is slightly decreased.) $`\mathrm{\Delta }`$ is consistent with that expected from the argument above, but $`C`$ is too small by a factor of about 3. However, we emphasize that we are comparing our results for a 2D system with a theoretical model of an Anderson MIT valid for non-interacting electrons in 3D. Some progress has been made on calculating $`S^d`$ with the inclusion of weak interactions and disorder . Corrections are found which are logarithmic in $`T`$ and difficult to detect in thermopower; we are unable to explain the observed strong density dependence in terms of these calculations. We should mention that we can also represent the data over the same range using the simple expression $`\alpha =56/n^{2.5}\mu `$V/K<sup>2</sup>, with $`n`$ in units of $`10^{15}`$ m<sup>-2</sup>, but this has no obvious physical explanation; in particular, it does not have the form that we would expect for $`S^d`$ approaching a band edge at $`n=0`$, i.e. $`S^d\propto 1/n`$. The data in the insulating regime also show a critical behaviour qualitatively consistent with a mobility edge. Thus the observed upturn of $`S^d`$ is expected for activated conduction across a mobility gap with $`(E_c-E_F)>k_BT`$. Under these conditions the 3D Anderson model predicts (see also the numerical calculations in Ref. ) $$S^d=\frac{k_B}{e}\left(A+\frac{E_c-E_F}{k_BT}\right)$$ (6) where $`A`$ is a constant of order unity. For $`n\lesssim 0.79\times 10^{15}`$ m<sup>-2</sup> the observed minima in $`S^d`$ occur at temperatures $`T_m`$ consistent with $`(E_c-E_F)/k_BT_m\approx 2`$. Eq. (6) would then imply that the values of $`S^d`$ at these points should all have about the same magnitude. However, the observed $`S`$ will have other contributions. In particular there will be $`S^g`$ (see below) and also a contribution to $`S^d`$ from variable-range hopping (VRH) through localized states. (When two or more conduction mechanisms are present, the appropriate $`S^d`$ are weighted by their contributions to $`\sigma `$.) The $`T`$ dependence of our $`\rho `$ data and other previously published data in the insulating region are consistent with Efros-Shklovskii VRH across a soft Coulomb gap. For this mechanism one expects $`S^d`$ to be a constant given by $`S^d=(k_B/e)(k_BT_0/C)(\partial \mathrm{ln}g(E)/\partial E)_{E_F}`$, where $`T_0`$ can be obtained from the temperature dependence of $`\rho `$, e.g. Ref. , $`g(E)`$ is the background DOS, and $`C`$ a constant $`\approx 6`$ (not to be confused with the $`C`$ of Eq. (5)). If we take $`(\partial \mathrm{ln}g(E)/\partial E)_{E_F}\approx 1/E_F`$ (implying that $`E_F`$ may be in the tail of the DOS) and again use $`g_0`$ to estimate $`E_F`$, we find that the calculated $`S^d`$ are typically a factor of two smaller than the values of $`S`$ observed at $`T_m`$. The argument is not significantly changed if Mott VRH is assumed. In this case $`S^d\propto T^{1/3}`$, but the magnitudes calculated for $`S^d`$ are similar. As far as we are aware, the only previous work which attempted to follow $`S^d`$ into the region of 2D electron localization was that of Burns and Chaikin on thin films of Pd and PdAu. They found an upturn of $`S^d`$ in the strong localization region but no divergence at higher conductivities. The authors attributed their results to the opening of a Mott-Hubbard gap.
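Returning to Eq. (5): with the best-fit values quoted above it is easily tabulated. The sketch below shows how $`\alpha `$ grows on approaching $`n_0`$ and, because of the finite $`\mathrm{\Delta }`$, saturates at $`C/\mathrm{\Delta }\approx 63`$ $`\mu `$V/K<sup>2</sup> rather than truly diverging:

```python
import numpy as np

n0, Delta, C = 1.01, 0.15, 9.5   # n0 in 1e15 m^-2; C in uV/K^2 (best-fit values)

def alpha(n):
    """Eq. (5): alpha = S^d/T in uV/K^2 for density n (in 1e15 m^-2)."""
    return C / np.sqrt(Delta**2 + (n / n0 - 1.0) ** 2)

for n in (2.0, 1.5, 1.2, 1.1, 1.05, 1.01):
    print(f"n = {n:4.2f}e15 m^-2 : alpha = {alpha(n):5.1f} uV/K^2")
```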
In 3D, Lauinger and Baumann observed critical behaviour of $`\sigma `$ and a divergence of $`S^d`$ for metallic AuSb films, but the magnitude of the latter was 2 orders of magnitude smaller than seen here. Other 3D experiments, on Si:P and NbSi , saw no divergence on the metallic side. For completeness, we make a few comments about $`S`$ at higher $`T`$, where $`S^g`$ is dominant. Little is known about the behavior of $`S^g`$ near a MIT, but it should be present on the metallic side, though its precise form is not known. On the other hand, $`S^g`$ requires conservation of crystal momentum in electron-phonon scattering, so that $`S^g=0`$ for conduction via VRH. Thus $`S^g`$ should only exist on the insulating side when excitation to delocalized states occurs. Our data show that at any fixed $`T\gtrsim 2`$ K, $`S`$ rises as $`n`$ decreases but crosses $`n_0`$ smoothly, i.e., we no longer see divergent behavior of $`S`$ at $`n_0`$. These facts show that activated conduction must be present at all densities $`n<n_0`$ that we have investigated, even though the $`\rho `$ data (both our own and those of others ) appear to follow the Efros-Shklovskii VRH model. In summary, the behavior of $`\sigma `$ and $`S^d`$ in the ‘metallic’ and ‘insulating’ phases of Si-MOSFETs is surprisingly consistent with a 3D Anderson MIT, though such a transition is not expected to occur in a 2D electron gas. Nevertheless, it is important to remember that reliably identifying the observed critical behaviour with a MIT requires data in the zero-$`T`$ limit. Although our analysis is based on an extrapolation to zero $`T`$, the actual data extend only to 0.3 K. Thus we should be careful not to conclude that a mobility edge or a MIT has necessarily been observed. Even so, the present results provide new information on these systems that further constrains any theoretical model proposed to explain the MIT, whether such a transition be an apparent or real property as $`T\to 0`$. We acknowledge the support of NSERC Canada, and from INTAS, RFBR, NATO, Programs ‘Physics of nanostructures’, ‘Statistical physics’ and ‘Integration’.
# Magnetic pair-breaking in superconducting Ba<sub>1-x</sub>K<sub>x</sub>BiO<sub>3</sub> investigated by magnetotunneling ## I Introduction The temperature dependence of the upper critical field $`B_{c2}`$ in several new classes of superconductors attracts much attention, as it reveals very unusual behavior. For conventional type-II superconductors $`B_{c2}`$ shows a linear increase below $`T_c`$ and a saturation at the lowest temperatures, in agreement with the theoretical predictions of Maki and of Werthamer, Helfand and Hohenberg . In the case of certain new superconductors $`B_{c2}(T)`$ has a positive curvature practically in the whole temperature range. As examples of this anomalous behavior we mention the borocarbides , the organic superconductors , the high-$`T_c`$ bismuthates Ba<sub>1-x</sub>K<sub>x</sub>BiO<sub>3</sub> , and, in the most pronounced way, the cuprates , where in some cases $`B_{c2}(T)`$ even diverges at very low temperatures. Several scenarios have been proposed to explain this anomalous behavior. Among others, one can find a bipolaron model , an unconventional normal state , strong electron-phonon coupling , and the presence of inhomogeneities and magnetic impurities . In the above mentioned cases, magnetotransport or magnetization measurements were employed for the determination of $`B_{c2}(T)`$. In strongly type-II superconductors a magnetization measurement near $`B_{c2}`$ can be very difficult because of its extremely small value. Moreover, because depinned vortices in the liquid or solid state cause a finite dissipative resistance before the full transition to the normal state is reached, the complexity of the $`B`$-$`T`$ phase diagram in high-$`T_c`$’s undermines any direct determination of the upper critical field from magnetotransport data. Recently, the $`B_{c2}(T)`$ dependencies have been determined for high-T<sub>c</sub> superconductors by non-dissipative experimental methods. Carrington et al. have shown that magneto-specific-heat measurements in the overdoped Tl-2201 cuprates yield, in the high-temperature region, a different curvature of $`B_{c2}(T)`$ compared to that determined from ac-susceptibility or from magnetotransport. Blumberg et al. determined the upper critical field from the electronic Raman scattering in the same cuprates and also obtained a conventional temperature dependence with a negative curvature and saturation at low temperatures. In our previous experimental work elastic tunneling was used as a direct tool to infer the upper critical field $`B_{c2}(T)`$ in the high-$`T_c`$ oxide Ba<sub>1-x</sub>K<sub>x</sub>BiO<sub>3</sub> . Such a method is based on a measurement of the very fundamental superconducting density of states (DOS), where the superconducting part of the $`B`$-$`T`$ phase diagram is defined by a non-zero value of the superconducting order parameter ($`\mathrm{\Delta }\ne 0`$). In the present paper the tunneling characteristics measured on Ba<sub>1-x</sub>K<sub>x</sub>BiO<sub>3</sub> in the mixed state at very high magnetic fields are discussed in the framework of the de Gennes and Maki theory of gapless superconductivity . We show that the theory is applicable at all temperatures below $`T_c`$ and in a wide range of magnetic fields below the upper critical one. In the experimentally measured tunneling conductances (proportional to the spatially averaged superconducting DOS) at high fields only the amplitude of the deviations from the normal-state tunneling conductance depends on the applied magnetic field.
The energy dependence of these deviations is completely controlled by the temperature. The renormalized tunneling-conductance traces reveal a simple scaling behavior. This allows us to determine the temperature dependence of the pair-breaking parameter $`\alpha (T)`$ and the upper critical field $`B_{c2}(T)`$. The fitted $`B_{c2}(T)`$ follows the Werthamer-Helfand-Hohenberg prediction for classical type-II superconductors and is also in agreement with our previous direct determination . The analysis of the amplitude of the tunneling conductance at different magnetic fields enables us to determine the magnetic field dependence of the spatially averaged value $`\overline{\mathrm{\Delta }}(B)`$ of the superconducting order parameter at different temperatures and the temperature dependence of the second Ginzburg-Landau parameter $`\kappa _2(T)`$. ## II Theory The behavior of type-II superconductors in the presence of external magnetic fields has been described on the basis of the phenomenological Ginzburg-Landau (GL) theory . Abrikosov showed that a type-II superconductor exhibits a mixed (vortex) state, in which the magnetic flux penetrates the sample in the form of quantized flux lines. These results of the GL theory are valid in a restricted temperature region near the transition temperature $`T_c`$, where the lower and upper critical fields ($`B_{c1}(T)`$ and $`B_{c2}(T)`$) are both much smaller than their values $`B_{c1}(0)`$ and $`B_{c2}(0)`$ at zero temperature. An important generalization of the GL theory has been made by Maki and de Gennes in the case of dirty superconductors (where the mean free path $`l`$ is much smaller than the BCS superconducting coherence length $`\xi _0`$), extending this theory to arbitrary temperatures when the magnetic field is close to $`B_{c2}(T)`$. Maki has shown that Abrikosov’s original theory is applicable at all temperatures in the dirty limit if, in the expressions for the temperature dependencies of the critical magnetic field and the magnetization, two temperature dependent GL parameters $`\kappa _i`$ are introduced $`\kappa _1`$ $`=`$ $`B_{c2}(T)/\sqrt{2}B_c(T),`$ (1) $`-M`$ $`=`$ $`{\displaystyle \frac{B_{c2}-B}{\mu _0\beta (2\kappa _2^2-1)}},`$ (2) where $`M`$ is the magnetization and $`\beta `$ a geometric constant of the vortex lattice. The first GL parameter $`\kappa _1(T)`$, the ratio of the upper critical field $`B_{c2}(T)`$ and the thermodynamic critical field $`B_c(T)`$, is a slowly temperature dependent function with a value about $`20\%`$ larger at $`0`$ K than at $`T_c`$. The parameter $`\kappa _2`$ is connected with the slope of the magnetization curve near $`B_{c2}`$. Caroli et al. showed that $`\kappa _2(T)`$ will be equal to $`\kappa _1(T)`$ within $`2\%`$ for a dirty bulk type-II superconductor. A magnetic field applied to a dirty superconductor breaks the (time-reversal) symmetry of the Cooper pairs and leads to a finite lifetime $`\tau _k`$ of the condensed paired electrons, with the pair-breaking parameter $`\alpha _k=\hbar /2\tau _k`$. This decay process competes with the natural growth of pairs associated with a characteristic lifetime $`\tau (T)`$ and related parameter $`\alpha =\hbar /2\tau `$, which is connected with the upper critical field $`B_{c2}`$ according to $$B_{c2}=\frac{\alpha \mathrm{\Phi }_0}{\pi D\hbar },$$ (3) where $`\mathrm{\Phi }_0`$ is the flux quantum and $`D`$ the diffusion constant.
Neglecting the effects of spin paramagnetism and spin-orbit scattering, the temperature dependence of the pair-breaking parameter $`\alpha (T)`$ is given by $$\mathrm{ln}(T_c/T)=\mathrm{\Psi }\left(\frac{1}{2}+\frac{\alpha }{2\pi k_BT}\right)-\mathrm{\Psi }\left(\frac{1}{2}\right),$$ (4) where $`\mathrm{\Psi }(z)`$ is the digamma function. Magnetic pair-breaking in superconductors depresses the superconducting order, with the occurrence of quasi-particle states inside the otherwise forbidden energy gap. For sufficiently strong pair-breaking, near $`B_{c2}`$, a gapless superconducting state exists: superconductivity with a finite value of the superconducting order parameter (pair potential $`\mathrm{\Delta }`$) exists without a minimum excitation energy in the quasi-particle excitation spectrum. This gapless superconducting behavior can be clearly seen in the calculations of Skalski et al. for the density of states of the superconducting excitation spectrum in the presence of pair breaking. De Gennes has derived a very simple expression for the density of states $`N(E,𝐫)`$ in dirty superconductors in the gapless region, for small $`\mathrm{\Delta }`$ in magnetic fields near $`B_{c2}`$, given by $$N(E,𝐫)=N_N(0)\left[1-\frac{\mathrm{\Delta }^2(𝐫,H)}{2}\frac{\alpha ^2-E^2}{(E^2+\alpha ^2)^2}\right],$$ (5) where $`N_N(0)`$ is the DOS at the Fermi surface of the superconductor in the normal state. In the range of validity of Eq. (5), $`\alpha `$ does not depend on magnetic field but is a function of temperature only. Via Eq. (3), the pair-breaking parameter $`\alpha (T)`$ is related to $`B_{c2}(T)`$. The only magnetic field dependent parameter in Eq. (5) is $`\mathrm{\Delta }(𝐫,H)`$. This leads to the important conclusion that the energy dependence of $`N(E,𝐫)`$ is fully controlled by temperature and not by the magnetic field or $`\mathrm{\Delta }`$. Via $`\mathrm{\Delta }`$, the magnetic field acts just as a scaling parameter for the amplitude of the deviations of $`N(E,𝐫)`$ from $`N_N(0)`$. A direct experimental verification of gapless superconductivity in the presence of a pair-breaking perturbation can be obtained by tunneling spectroscopy . The differential tunneling conductance measured on a metal-insulator-superconductor (N-I-S) tunnel junction is directly proportional to the spatially averaged value of the superconducting DOS via $$G(V)=\frac{(dI/dV)}{(dI/dV)_N}=\int _{-\mathrm{\infty }}^{\mathrm{\infty }}\frac{N(E)}{N_N(0)}\left[-\frac{\partial f(E+eV)}{\partial (eV)}\right]dE,$$ (6) where $`G(V)`$ represents the tunneling conductance normalized to its normal-state value and the bracket in the integral contains the bias-voltage derivative of the Fermi distribution function. In the case of a gapless DOS the normalized conductance may be transformed into $$G(V)=1+\frac{\mathrm{\Delta }^2}{8\pi ^2k_B^2T^2}Re\left\{\mathrm{\Psi }_3\left(\frac{1}{2}+a+ib\right)\right\},$$ (7) where $`a=\alpha /2\pi k_BT`$, $`b=eV/2\pi k_BT`$ and $`\mathrm{\Psi }_3`$ is the second derivative of the digamma function. In a geometry with the applied magnetic field perpendicular to the planar surface of the junction barrier, the sample is in the Shubnikov mixed phase with the Abrikosov vortex structure. In a tunneling experiment the spatial average $`\overline{N}(E)`$ of the density of states over the sample surface is measured. Therefore, one has to consider in Eq. (5) the spatially averaged order parameter $`\overline{\mathrm{\Delta }}`$ .
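We note in passing that Eq. (4) defines $`\alpha (T)`$ only implicitly. A short numerical sketch for inverting it, assuming the weak-coupling form of Eq. (4) holds and taking $`T_c=23`$ K as for our sample (this yields the theoretical pair-breaking curve, not our measured values):

```python
import numpy as np
from scipy.special import digamma
from scipy.optimize import brentq

k_B = 0.08617  # Boltzmann constant in meV/K
T_c = 23.0     # zero-field critical temperature in K

def eq4(alpha_meV, T):
    """Residual of Eq. (4) for a trial pair-breaking parameter alpha (meV)."""
    a = alpha_meV / (2.0 * np.pi * k_B * T)
    return np.log(T_c / T) - (digamma(0.5 + a) - digamma(0.5))

for T in (1.5, 4.2, 8.0, 12.0, 16.0, 20.0):
    alpha = brentq(eq4, 1e-6, 50.0, args=(T,))   # root of Eq. (4)
    print(f"T = {T:4.1f} K : alpha = {alpha:4.2f} meV")
```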
The magnetic field dependence of $`\overline{\mathrm{\Delta }}`$ in the gapless region of a type-II superconductor near $`B_{c2}`$ has been calculated by Maki , yielding $$\overline{\mathrm{\Delta }}^2=\frac{4\pi eck_BT(B_{c2}-B)}{1.16\mu _0\sigma (2\kappa _2^2-1)\mathrm{\Psi }_2(\frac{1}{2}+a)},$$ (8) where $`\mu _0`$ is the vacuum permeability, $`\sigma `$ the normal-state conductivity and $`\mathrm{\Psi }_2`$ the first derivative of the digamma function. The linear field dependence of the averaged value of the energy gap squared results in a linear scaling of the averaged gapless DOS, or $`G(V)`$, with magnetic field. By a simple renormalization of the tunneling conductances to their zero-bias value according to the expression $`(G(V)-1)/(G(0)-1)`$, the field-dependent parameter $`\overline{\mathrm{\Delta }}`$ drops out of the problem. The tunneling conductances renormalized in such a way at a fixed temperature should collapse onto the same curve for different magnetic fields. This behavior can be very useful for the experimental determination of the pair-breaking parameter $`\alpha `$, or $`B_{c2}`$, and of the order parameter $`\overline{\mathrm{\Delta }}`$, or $`\kappa _2`$ . The region of validity of expressions (5), (7) and (8) in the $`B`$-$`T`$ diagram is not very well known. They are certainly valid in the high-field region $`B\lesssim B_{c2}(T)`$. ## III Experiment The high-quality single-crystalline Ba<sub>1-x</sub>K<sub>x</sub>BiO<sub>3</sub> samples were grown by electrochemical crystallisation . The quality of the single crystals was verified by measurement of the resistance in a four-probe contact configuration. In zero magnetic field, with decreasing temperature the samples showed a metallic temperature dependence which saturates to a residual resistivity of about 100 $`\mu \mathrm{\Omega }`$cm above the superconducting transition, with a width $`\mathrm{\Delta }T_c\approx 1.2`$ K. The ac-susceptibility measurement confirms this transition width. The critical temperature $`T_c=23`$ K has been determined from the midpoint of the zero-field transition. The tunnel junctions were prepared by painting a silver spot on the freshly cleaned surface of the crystal. The interface between the silver and Ba<sub>1-x</sub>K<sub>x</sub>BiO<sub>3</sub> counter-electrodes served as a natural barrier forming a planar normal metal-insulator-superconductor (N-I-S) tunnel junction. The tunneling measurements were performed in magnetic fields up to 30 T perpendicular to the planar tunnel junction, enabling the formation of the vortex state in the junction area . ## IV Results and Discussion In Fig. 1 the experimental tunneling data are shown at different magnetic fields, increasing from zero to 30 Tesla in steps of 2 Tesla (if not otherwise specified), for three temperatures, T = 1.5, 4.2 and 16 K. For a specific temperature the upper critical field can be directly determined as the field where any trace of the superconducting density of states disappears from the conductance data. In our previous analysis of the tunneling data we deduced the upper critical field from a linear extrapolation of the magnetic field dependence of the zero-bias conductance. The obtained temperature dependence of $`B_{c2}`$ agreed with the standard WHH theory. To discuss the validity of the dirty limit ($`l<\xi _0`$) for our system, we have calculated the BCS coherence length $`\xi _0\approx 49`$ Å from $`B_{c2}(0)=28`$ T and take the mean free path $`l\approx 33`$ Å from .
The Ginzburg-Landau coherence length is related to the BCS one via $`\xi _{GL}=0.855\sqrt{\xi _0l}`$. The estimated ratio $`l/\xi _0`$ is about $`0.7`$. One of us (P.W.) has calculated the density of states as a function of $`l/\xi _0`$ and found very small differences for the ratio $`l/\xi _0`$ changing from 0 to 1. Similarly, Eilenberger found only small differences for this $`l/\xi _0`$ range when calculating the temperature dependence of the Ginzburg-Landau parameters. Therefore, we believe that the dirty-limit model of de Gennes and Maki is applicable in our case. The obtained agreement with the experimental data, as discussed further on, seems to justify this approach. The data sets shown in Fig. 1 were rewritten in the above mentioned form $`(G(V)-1)/(G(0)-1)`$ and are shown on the right side of Fig. 1. This simple renormalization of the conductance reveals that above a certain magnetic field all renormalized conductance curves belonging to the same temperature collapse onto the same curve. This is in full agreement with Eq. (5) for the description of the gapless regime. We have made the same rescaling procedure for the data sets taken at $`T`$ = 1.5, 3, 4.2, 8, 12, 16 and 20 K. At all temperatures agreement with the de Gennes expression can be found for magnetic fields $`B>0.5B_{c2}(T)`$. Rewriting Eq. (5) in the form $`(N(E)-1)/(N(0)-1)`$, or $`(G(V)-1)/(G(0)-1)`$, makes the pair-breaking parameter $`\alpha `$ the only unknown quantity. We have made a fit of this renormalized gapless DOS to the experimental data, accounting for the thermal smearing as defined in Eq. (6). By this fitting procedure the same $`\alpha `$ can be obtained from any tunneling trace at $`B>0.5B_{c2}(T)`$ at a particular temperature. The experimental values of the upper critical field obtained directly as mentioned above are displayed in Fig. 2 as a function of temperature, together with the temperature dependence of the pair-breaking parameter $`\alpha `$ obtained from the fits of the voltage dependence of the conductance. The error bars shown account for the scatter of the $`\alpha `$ values obtained by fitting the curves at different fields. Because the amplitude of the normalized superconducting density of states decreases with increasing temperature, the error bars increase with increasing temperature. Above 16 K the error bars are of the same order as $`\alpha `$ itself. The direct relation between $`\alpha `$ and $`B_{c2}`$ is defined by Eq. (3), which depends on the diffusion constant $`D`$ of the sample. The best agreement between the experimentally determined critical field and pair-breaking parameter is obtained for $`D=1`$ cm<sup>2</sup>s<sup>-1</sup>. This value is in perfect agreement with the data of Affronte et al. obtained on a very similar sample made in the same laboratory, and it also agrees reasonably with the value of Roesler et al. obtained for a thin film, where they found $`D=0.64`$ cm<sup>2</sup>s<sup>-1</sup>. In Fig. 3 we show that the zero-field tunneling-conductance trace at 1.5 K can be described by the BCS density of states using the Dynes formula $`N(E)\propto \mathrm{Re}\{E/(E^2-\mathrm{\Delta }^2)^{1/2}\}`$ with the complex energy $`E=E^{\prime }+i\mathrm{\Gamma }`$, which takes account of a certain sample inhomogeneity via the broadening parameter $`\mathrm{\Gamma }`$.
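The Dynes form is simple to evaluate numerically. The sketch below uses the best-fit parameters quoted in the following paragraph ($`\mathrm{\Delta }=3.9`$ meV, $`\mathrm{\Gamma }=0.4`$ meV); thermal smearing per Eq. (6) is omitted for brevity:

```python
import numpy as np

Delta, Gamma = 3.9, 0.4   # meV; best-fit values for the zero-field trace

def dynes_dos(E_meV):
    """Dynes density of states N(E)/N_N(0) with complex energy E' + i*Gamma."""
    E = E_meV + 1j * Gamma
    return np.abs((E / np.sqrt(E**2 - Delta**2)).real)

for E in (0.0, 2.0, 3.9, 5.0, 8.0):
    print(f"E = {E:3.1f} meV : N/N_N = {dynes_dos(E):4.2f}")
# finite subgap DOS of order Gamma/Delta at E = 0; peak near E = Delta
```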
The Dynes formula gives a good description of the tunnel spectra at 1.5 K and zero magnetic field with $`\mathrm{\Delta }=3.9\pm 0.1`$ meV and $`\mathrm{\Gamma }=0.4\pm 0.1`$ meV, yielding $`2\mathrm{\Delta }/k_BT_c=3.9\pm 0.1`$. This indicates that Ba<sub>1-x</sub>K<sub>x</sub>BiO<sub>3</sub> is a BCS-like superconductor with a medium coupling strength . In the case of weak-coupling superconductors, the pair-breaking parameter at $`T=0`$ K is connected with the energy gap $`\mathrm{\Delta }(0)`$ by the expression $`\alpha =\mathrm{\Delta }(0)/2`$. In our BKBO sample at $`T\to 0`$ K the superconductivity is destroyed at $`\alpha \approx 2.85`$ meV, which is about $`45\%`$ higher than the expected value $`\mathrm{\Delta }(0)/2\approx 2`$ meV. A similar discrepancy has been found in tunneling experiments on conventional type-II superconductors in the dirty limit with a finite value of $`l/\xi _0`$ and with strong-coupling effects . In our case, with $`l/\xi _0`$ about $`0.7`$, the strong coupling also plays an important role, as shown in ref. . In Fig. 3, the normalized tunneling conductances $`(dI/dV)/(dI/dV)_N`$ at $`T=1.5`$ K are also displayed for magnetic fields $`B>10`$ T, where the scaling in the voltage dependence of the conductance curves holds. These curves are fitted by the de Gennes formula using the previously determined pair-breaking parameter $`\alpha =2.85`$ meV. Then the only fitting parameter is the superconducting order parameter $`\overline{\mathrm{\Delta }}`$. Thus we obtained the magnetic-field dependence of the superconducting order parameter $`\overline{\mathrm{\Delta }}`$. The same procedure was repeated for the magnetic-field dependent data sets at different temperatures. The results for $`\overline{\mathrm{\Delta }}(B)`$ are shown in Fig. 4 for $`T=1.5`$, 8, and 16 K. In the inset we display the magnetic field dependence of $`\overline{\mathrm{\Delta }}^2`$. $`\overline{\mathrm{\Delta }}^2(B)`$ changes linearly with the applied field not only in the high-field region near $`B_{c2}`$, as predicted by Maki (Eq. (8)), but at least from $`B>0.5B_{c2}(T)`$ upward. This finding supports the evaluation of the upper critical field from tunneling spectroscopy as made in our previous work , where a linear extrapolation of the zero-bias tunneling conductance $`G(0)`$ was used to obtain $`B_{c2}(T)`$ . The normalized zero-bias tunneling conductance $`G(V=0,B)_T`$ at a certain temperature can be approximated by $$G(V=0,B)_T=1-P(B)_T(B_{c2}-B),$$ (9) where $`P(B)_T`$ is the slope of the normalized zero-bias conductance $`G(V=0,B)_T`$ versus magnetic field at a fixed temperature, with dimension \[T<sup>-1</sup>\]. With this expression for the zero-bias tunneling conductance substituted into Eq. (7), together with $`\overline{\mathrm{\Delta }}`$ from Eq. (8), we can derive the second GL parameter $`\kappa _2`$ as a function of temperature $$\kappa _2^2(T)-0.5=-\frac{e}{4.64\pi \mu _0k_BT\sigma P(B)_T}\frac{\mathrm{\Psi }_3(\frac{1}{2}+a)}{\mathrm{\Psi }_2(\frac{1}{2}+a)}.$$ (10) Here, besides the slope $`P(B)_T`$ of the zero-bias tunneling conductance and the pair-breaking parameter $`\alpha `$, the electrical conductivity $`\sigma `$ is an experimental parameter. For $`\sigma `$ we can take $`\sigma =0.125\times 10^7\mathrm{\Omega }^{-1}`$m<sup>-1</sup> from ref. . The resulting temperature dependence of $`\kappa _2`$ is shown in Fig. 5 by open symbols, together with the theoretical prediction of Caroli, Cyrot and de Gennes for a dirty type-II s-wave superconductor in the mixed state.
Despite the fact that the error bars are very large, a discrepancy is obvious. As shown by Guyon et al. , the slope $`P(B)_T`$ at different temperatures is expected to be constant. In our case it changes at higher temperatures (see the inset in Fig. 5). If the slope is kept artificially constant at its low-temperature value (dotted line in the inset), we derive a much better agreement with the theoretical prediction of Caroli et al. (full symbols in Fig. 5). One possible explanation of this discrepancy between the theoretical predictions of Caroli et al. and our $`\kappa _2(T)`$ calculation from the real, temperature dependent $`P(B)_T`$ values could be local sample inhomogeneity, because at higher temperatures another phase can possibly play a role in the tunneling characteristics. In this case the correct $`\kappa _2(T)`$ dependence of the major bulk superconducting phase is determined from the constant $`P(B)_T`$, following the predictions of Caroli et al. Then $`\kappa _2(T)`$ is equal to $`\kappa _1(T)`$ and it can be used for the determination of basic superconducting quantities connected with the GL parameters, like the thermodynamic critical magnetic field or the free energy. In any case, the $`\kappa _2(T)`$ parameter calculated from our magnetotunneling data reveals a saturation at low temperatures. Its value at higher temperatures is in qualitative agreement with the earlier published values of the GL parameter $`\kappa `$ in the BKBO system . The saturating character of $`\kappa _2(T)`$ in the low-temperature region proves a dirty limit and a non-$`d`$-wave character of the superconductivity in the BKBO system. Clean superconductors, similarly to $`d`$-wave superconductors, should reveal a characteristic $`\kappa _2(T)\propto \mathrm{ln}(1/T)`$ dependence . ## V Conclusions The pair-breaking theory has been applied to the analysis of the magnetotunneling results in the BKBO superconductor. It has been shown that the generalization of the Ginzburg-Landau theory provided by Maki and de Gennes for dirty type-II superconductors in a field region close to $`B_{c2}`$ is valid in a much wider interval of magnetic fields. The theory for gapless superconductors describes the experimental tunneling data for magnetic fields $`B>0.5B_{c2}(T)`$ very well. In this region of magnetic fields the tunneling curves can be fitted by the de Gennes formula for the thermally smeared averaged density of states. As a consequence, the tunneling curves reveal a universal scaling in the magnetic field dependence at a fixed temperature. This allows us to calculate the temperature dependence of the pair-breaking parameter $`\alpha `$ (or $`B_{c2}`$) and the Maki parameter $`\kappa _2`$ from the tunneling curves. The averaged superconducting order parameter squared, $`\overline{\mathrm{\Delta }}^2`$, is linear in the magnetic field in the same range of magnetic fields. This linearity can be applied not only in the interpretation of tunneling measurements but also in the case of the many superconducting quantities governed by $`\overline{\mathrm{\Delta }}^2(H)`$ in the high-field limit, like magnetization, magnetic susceptibility, thermal conductivity, etc. The obtained temperature dependence of the upper critical field is in satisfactory agreement with the theoretical predictions of Maki and of Werthamer, Helfand and Hohenberg for classical type-II superconductors.
The wide magnetic-field range over which the pair-breaking theory has proved to be valid (far below the real $`B_{c2}`$) makes this tunneling approach very promising for the high-$`T_c`$ cuprates, where the upper critical field is experimentally not accessible, preventing reliable estimates of the temperature dependence of $`B_{c2}`$. ###### Acknowledgements. We gratefully acknowledge stimulating discussions with K. Maki. This work has been supported by the EC grant No. CIPA-CT93-0183 and the Slovak VEGA contract No. 2/5144/98. FIGURE CAPTIONS Fig. 1: Normalized ($`G`$, left side) and renormalized ($`(G1)/(G(0)1)`$, right side) tunneling conductances of the Ba<sub>1-x</sub>K<sub>x</sub>BiO<sub>3</sub>-Ag tunnel junction in magnetic fields from zero up to 30 T in steps of 2 T (unless indicated otherwise) at the indicated temperatures. The renormalized curves have been fitted to the de Gennes formula (Eq. (5)) of the DOS (open circles), yielding the indicated pair-breaking parameter $`\alpha `$. Fig. 2: Temperature dependence of the upper critical magnetic field $`B_{c2}`$ of Ba<sub>1-x</sub>K<sub>x</sub>BiO<sub>3</sub> determined directly from an extrapolation of the zero-bias tunneling conductance (closed circles, left scale) and of the pair-breaking parameter $`\alpha `$ determined from a fit of the full voltage dependence of the conductance to the de Gennes formula of the DOS (open squares, right scale). The full line shows the prediction of the standard WHH theory. Fig. 3: Experimentally measured normalized tunneling conductances at $`T=1.5`$ K (closed squares), shifted along the Y-axis as indicated by the high-voltage values at the right side. The lines show the BCS dependence at $`B=0`$ T (dotted line) and the fit to the de Gennes pair-breaking (PB) formula of the DOS at $`B>10`$ T (full lines) . Fig. 4: Magnetic-field dependences of the averaged value of the energy gap $`\overline{\mathrm{\Delta }}`$ at different temperatures. The inset shows the squared values $`\overline{\mathrm{\Delta }}^2`$ as a function of magnetic field. Fig. 5: Temperature dependence of the second Ginzburg-Landau parameter $`\kappa _2`$ calculated from magnetotunneling results (left scale) for the temperature-dependent $`P(B)_T`$ (open circles) and for a constant $`P(B)_T=0.0395`$ T<sup>-1</sup> (closed squares). The full line shows the theoretical prediction of Caroli et al. . The inset displays the temperature dependence of the zero-bias slope $`P(B)_T`$.
no-problem/0002/math0002178.html
ar5iv
text
# Untitled Document TOWARDS A COMBINATORIAL INTERSECTION COHOMOLOGY FOR FANS Karl-Heinz FIESELER Matematiska Institutionen, Box 480, Uppsala Universitet, SE-75106 Uppsala, Sweden E-mail: khf@math.uu.se Abstract. The real intersection cohomology $`IH^{}(X_\mathrm{\Delta })`$ of a toric variety $`X_\mathrm{\Delta }`$ is described in a purely combinatorial way, using methods of elementary commutative algebra only. We define, for arbitrary fans, the notion of a “minimal extension sheaf” $`ℰ^{}`$ on the fan $`\mathrm{\Delta }`$ as an axiomatic characterization of the equivariant intersection cohomology sheaf. This provides a purely algebraic interpretation of the $`f`$\- and $`g`$-vector of an arbitrary polytope or fan under a natural vanishing condition. — The results presented in this note originate from joint work with G. Barthel, J.-P. Brasselet and L. Kaup (see ). 1. Minimal Extension Sheaves We introduce the notion of a minimal extension sheaf on a fan and study some elementary properties of such sheaves. Let $`\mathrm{\Delta }`$ be a fan in a real vector space $`V`$ of dimension $`n`$. We endow $`\mathrm{\Delta }`$ with the fan topology, having the subfans $`\mathrm{\Lambda }`$ of $`\mathrm{\Delta }`$ as the non-empty open subsets. In particular, affine subfans, i.e. fans $`⟨\sigma ⟩`$ consisting of a cone $`\sigma `$ and its faces, are open; for simplicity we usually write $`\sigma `$ instead of $`⟨\sigma ⟩`$. A sheaf $`ℱ`$ of real vector spaces on $`\mathrm{\Delta }`$ is determined by the collection of vector spaces $`F_\sigma :=ℱ(\sigma )`$ for each $`\sigma ∈\mathrm{\Delta }`$ together with the restriction homomorphisms $`F_\sigma →F_\tau `$ for $`\tau ≤\sigma `$. An important example is the sheaf $`𝒜^{}`$ with $`𝒜^{}(\sigma ):=A_\sigma ^{}:=S^{}(V_\sigma ^{})`$, where $`V_\sigma :=`$span$`(\sigma )`$, and the natural restriction homomorphisms. Its sections over a subfan $`\mathrm{\Lambda }`$ are the piecewise polynomial functions on the support $`|\mathrm{\Lambda }|`$ of $`\mathrm{\Lambda }`$. Warning: We use a topologically motivated grading for $`S^{}(V_\sigma ^{})`$: Linear polynomials are of degree $`2`$, etc. We set $`A^{}:=S^{}(V^{})`$. For a graded $`A^{}`$-module $`F^{}`$, let $`\overline{F}^{}:=R^{}⊗_{A^{}}F^{}=F^{}/𝐦F^{}`$ denote the residue class vector space modulo $`𝐦:=A^{>0}`$, where $`R^{}:=A^{}/𝐦`$. Definition: A sheaf $`ℰ^{}`$ of graded $`𝒜^{}`$-modules on the fan $`\mathrm{\Delta }`$ is called a minimal extension sheaf (of $`𝐑^{}`$) if it satisfies the following conditions: (N) Normalization: One has $`E_o^{}≅A_o^{}=𝐑^{}`$ for the zero cone $`o`$. (PF) Pointwise Freeness: For each cone $`\sigma ∈\mathrm{\Delta }`$, the module $`E_\sigma ^{}`$ is free over $`A_\sigma ^{}`$. (LME) Local Minimal Extension $`mod𝐦`$: For each cone $`\sigma ∈\mathrm{\Delta }∖\{o\}`$, the restriction mapping $`\phi _\sigma :E_\sigma ^{}→E_{∂\sigma }^{}`$ induces an isomorphism $`\overline{\phi }_\sigma :\overline{E}_\sigma ^{}\stackrel{≅}{→}\overline{E}_{∂\sigma }^{}`$ of graded real vector spaces. Condition (LME) implies that $`ℰ^{}`$ is minimal in the set of all flabby sheaves of graded $`𝒜^{}`$-modules satisfying conditions (N) and (PF), whence the name “minimal extension sheaf”. Moreover, $`ℰ^{}`$ vanishes in odd degrees. For a cone $`\sigma ∈\mathrm{\Delta }`$ and for a subfan $`\mathrm{\Lambda }⊂\mathrm{\Delta }`$, the $`A^{}`$-modules $`E_\sigma ^{}`$ and $`E_\mathrm{\Lambda }^{}`$ are finitely generated.
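To see the axioms at work in the simplest nontrivial situation (an orientation example of ours, not part of the original arguments), let $`\sigma =`$cone$`(e_1,e_2)⊂𝐑^2`$ and take $`ℰ^{}=𝒜^{}`$. Then $`E_\sigma ^{}=𝐑[x_1,x_2]`$, while a section over the boundary fan $`∂\sigma `$ (the two rays) is a pair of one-variable polynomials agreeing at the origin, $$E_{∂\sigma }^{}=\{(f_1,f_2)∈𝐑[x_1]\times 𝐑[x_2];f_1(0)=f_2(0)\},$$ the restriction being $`f↦(f(x_1,0),f(0,x_2))`$. Since $`x_1`$ restricts to zero on the ray spanned by $`e_2`$ and vice versa, the submodule $`𝐦E_{∂\sigma }^{}`$ consists of the pairs with vanishing constant terms, so modulo $`𝐦`$ both sides reduce to the constants $`𝐑`$ and $`\overline{\phi }_\sigma `$ is an isomorphism; (N) and (PF) are obvious. This is consistent with the characterization of simplicial fans given below.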
If $`\mathrm{\Delta }`$ is a rational fan for some lattice $`N⊂V`$ of maximal rank, then there is an associated toric variety $`X_\mathrm{\Delta }`$ with the action of an algebraic torus $`𝐓≅(𝐂^{})^n`$. Let $`IH_𝐓^{}(X_\mathrm{\Delta })`$ denote the equivariant intersection cohomology of $`X_\mathrm{\Delta }`$ with real coefficients. In , the following theorem was proved; it has been the starting point for the investigation of minimal extension sheaves. Theorem. Let $`\mathrm{\Delta }`$ be a rational fan. i) The assignment $`ℰ_𝐓^{}:\mathrm{\Lambda }↦IH_𝐓^{}(X_\mathrm{\Lambda })`$ defines a sheaf on the fan space $`\mathrm{\Delta }`$; it is a minimal extension sheaf. ii) If $`ℰ^{}`$ is a minimal extension sheaf on $`\mathrm{\Delta }`$ and $`\sigma `$ a $`k`$-dimensional cone, then for the local intersection cohomology $`IH_x^{}`$ of $`X_\mathrm{\Delta }`$ in a point $`x`$ belonging to the orbit corresponding to $`\sigma `$, we have: $`IH_x^{}≅\overline{E}_\sigma ^{}.`$ iii) For a complete fan $`\mathrm{\Delta }`$ or an affine fan $`\mathrm{\Delta }=\sigma `$ with a cone $`\sigma `$ of dimension $`n`$, one has $`IH^{}(X_\mathrm{\Delta })≅\overline{E}_\mathrm{\Delta }^{}.`$ The vanishing axiom for local intersection cohomology together with statement ii) in the above theorem yields that, in the case of a rational fan, the following vanishing condition is satisfied: Vanishing Condition V($`\sigma `$): For a cone $`\sigma `$ and a minimal extension sheaf $`ℰ^{}`$ on the fan $`\sigma `$, we have $`\overline{E}_\sigma ^q=0\mathrm{for}q≥dim\sigma .`$ On every fan $`\mathrm{\Delta }`$ there exists a minimal extension sheaf $`ℰ^{}`$. Furthermore, for any two such sheaves $`ℰ^{}`$, $`ℱ^{}`$ on $`\mathrm{\Delta }`$, each isomorphism $`E_o^{}≅𝐑^{}≅F_o^{}`$ extends (non-canonically) to an isomorphism $`ℰ^{}\stackrel{≅}{→}ℱ^{}`$ of graded $`{}_{\mathrm{\Delta }}{}^{}𝒜_{}^{}`$-modules, which is unique in the case of a simplicial fan. Simplicial fans are easily characterized in terms of minimal extension sheaves: The sheaf $`{}_{\mathrm{\Delta }}{}^{}𝒜_{}^{}`$ is a minimal extension sheaf if and only if the fan $`\mathrm{\Delta }`$ is simplicial. 2. Combinatorial equivariant perverse sheaves We propose a definition for (combinatorially) “perverse” sheaves, here called semisimple sheaves. Definition: A (combinatorially) semi-simple sheaf $`ℱ^{}`$ on a fan space $`\mathrm{\Delta }`$ is a flabby sheaf of graded $`𝒜^{}`$-modules such that, for each cone $`\sigma ∈\mathrm{\Delta }`$, the $`A_\sigma ^{}`$-module $`F_\sigma ^{}`$ is finitely generated and free. For each cone $`\tau ∈\mathrm{\Delta }`$, we construct inductively a “simple” sheaf $`{}_{\tau }{}^{}ℰ_{}^{}`$ on $`\mathrm{\Delta }`$ as follows: For $`\sigma ∈\mathrm{\Delta }^{≤dim\tau }:=\{\sigma ∈\mathrm{\Delta };dim\sigma ≤dim\tau \}`$ we set $${}_{\tau }{}^{}E_{\sigma }^{}:={}_{\tau }{}^{}ℰ_{}^{}(\sigma ):=\{\begin{array}{cc}A_\tau ^{}\hfill & \text{if }\sigma =\tau \text{,}\hfill \\ 0\hfill & \text{otherwise.}\hfill \end{array}$$ Now, if $`{}_{\tau }{}^{}ℰ_{}^{}`$ has been defined on $`\mathrm{\Delta }^{≤m}`$ for some $`m≥dim\tau `$, then for each $`\sigma ∈\mathrm{\Delta }^{m+1}`$, we set $`{}_{\tau }{}^{}E_{\sigma }^{}:=A_\sigma ^{}⊗_𝐑{}_{\tau }{}^{}\overline{E}_{∂\sigma }^{}`$ with the restriction map $`{}_{\tau }{}^{}E_{\sigma }^{}→{}_{\tau }{}^{}E_{∂\sigma }^{}`$ being induced by some homogeneous $`𝐑`$-linear section $`s:{}_{\tau }{}^{}\overline{E}_{∂\sigma }^{}→{}_{\tau }{}^{}E_{∂\sigma }^{}`$ of the residue class map $`{}_{\tau }{}^{}E_{∂\sigma }^{}→{}_{\tau }{}^{}\overline{E}_{∂\sigma }^{}`$.
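A remark worth making explicit (our formulation, as one checks directly from the axioms): for $`\tau =o`$, the inductive construction starts from $`{}_{o}{}^{}E_{o}^{}=A_o^{}=𝐑^{}`$ and produces at each step a free module $`A_\sigma ^{}⊗_𝐑{}_{o}{}^{}\overline{E}_{∂\sigma }^{}`$ whose reduction mod $`𝐦`$ maps isomorphically onto $`{}_{o}{}^{}\overline{E}_{∂\sigma }^{}`$; hence $`{}_{o}{}^{}ℰ_{}^{}`$ satisfies (N), (PF) and (LME), i.e. the simple sheaf attached to the zero cone is itself a minimal extension sheaf, $`{}_{o}{}^{}ℰ_{}^{}≅ℰ^{}`$.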
Decomposition Theorem: Every semi-simple sheaf $`ℱ^{}`$ on $`\mathrm{\Delta }`$ is isomorphic to a finite direct sum $`ℱ^{}≅⊕_i{}_{\tau _i}{}^{}ℰ_{}^{}[\mathrm{}_i]^{n_i}`$ of shifted simple sheaves with uniquely determined cones $`\tau _i∈\mathrm{\Delta }`$, natural numbers $`n_i≥1`$ and integers $`\mathrm{}_i∈𝐙`$. From the theorem in section 3 we then obtain the following consequence: Corollary 1: Let $`\pi :\widehat{\mathrm{\Delta }}→\mathrm{\Delta }`$ be a refinement map of fans with minimal extension sheaves $`\widehat{ℰ}^{}`$ resp. $`ℰ^{}`$. Then the direct image sheaf $`\pi _{}(\widehat{ℰ}^{})`$ is a semisimple sheaf; in particular there is a decomposition $`\pi _{}(\widehat{ℰ}^{})≅ℰ^{}⊕⊕_i{}_{\tau _i}{}^{}ℰ_{}^{}[\mathrm{}_i]^{n_i}`$ with cones $`\tau _i∈\mathrm{\Delta }^{≥2}`$ and positive integers $`\mathrm{}_i,n_i`$. Corollary 2: For a simplicial refinement $`\widehat{\mathrm{\Delta }}`$ of $`\mathrm{\Delta }`$, let $`\widehat{𝒜}^{}`$ be the sheaf of $`\widehat{\mathrm{\Delta }}`$-piecewise polynomial functions on $`\mathrm{\Delta }`$. Then a minimal extension sheaf $`{}_{\mathrm{\Delta }}{}^{}ℰ_{}^{}`$ on $`\mathrm{\Delta }`$ can be realized as a subsheaf of $`{}_{\mathrm{\Delta }}{}^{}\widehat{𝒜}_{}^{}`$. 3. Cellular Čech Cohomology of Minimal Extension Sheaves In this section, we investigate under which assumptions the module of global sections $`E_\mathrm{\Delta }^{}:=ℰ^{}(\mathrm{\Delta })`$ of a minimal extension sheaf $`ℰ^{}`$ is a free $`A^{}`$-module. Definition: A fan $`\mathrm{\Delta }`$ is called quasi-convex if for a minimal extension sheaf $`ℰ^{}`$ on $`\mathrm{\Delta }`$, the $`A^{}`$-module $`ℰ^{}(\mathrm{\Delta })`$ is free. According to Proposition 6.1 in , a rational fan $`\mathrm{\Delta }`$ is quasi-convex if and only if the intersection cohomology of the associated toric variety $`X_\mathrm{\Delta }`$ vanishes in odd degrees. — The main tool to be used in the sequel is the “complex of cellular cochains with coefficients in $`ℰ^{}`$”: To a sheaf $`ℱ`$ of real vector spaces on the fan $`\mathrm{\Delta }`$, we associate its “cellular cochain complex” $`C^{}(\mathrm{\Delta },ℱ)`$. The cochain module in degree $`k`$ is $`⊕_{dim\sigma =n−k}ℱ(\sigma )`$, and the coboundary operator $`\delta ^k:C^k(\mathrm{\Delta },ℱ)→C^{k+1}(\mathrm{\Delta },ℱ)`$ is defined with respect to fixed orientations as in the usual Čech cohomology. For $`ℱ=ℰ^{}`$, the above complex is – up to a rearrangement of the indices – a “minimal complex” in the sense of Bernstein and Lunts . We also have to consider relative cellular cochain complexes with respect to the boundary subfan $`∂\mathrm{\Delta }`$ of a purely $`n`$-dimensional fan $`\mathrm{\Delta }`$, supported by the topological boundary of $`|\mathrm{\Delta }|`$. Definition: If the fan $`\mathrm{\Delta }`$ is purely $`n`$-dimensional, then for a sheaf $`ℱ`$ of real vector spaces on $`\mathrm{\Delta }`$ we set $$C^{}(\mathrm{\Delta },∂\mathrm{\Delta };ℱ):=C^{}(\mathrm{\Delta };ℱ)/C^{}(∂\mathrm{\Delta };ℱ),$$ where $`C^{}(∂\mathrm{\Delta };ℱ)⊂C^{}(\mathrm{\Delta };ℱ)`$ is the subcomplex of cochains supported in $`∂\mathrm{\Delta }`$.
We also need the augmented complex $$\stackrel{~}{C}^{}(\mathrm{\Delta },∂\mathrm{\Delta };ℱ):0→ℱ(\mathrm{\Delta })→C^0(\mathrm{\Delta },∂\mathrm{\Delta };ℱ)→\mathrm{}→C^n(\mathrm{\Delta },∂\mathrm{\Delta };ℱ)→0$$ and its cohomology groups $`\stackrel{~}{H}^q(\mathrm{\Delta },∂\mathrm{\Delta };ℱ):=H^q(\stackrel{~}{C}^{}(\mathrm{\Delta },∂\mathrm{\Delta };ℱ)).`$ Theorem: For a purely $`n`$-dimensional fan $`\mathrm{\Delta }`$ and a minimal extension sheaf $`ℰ^{}`$ on $`\mathrm{\Delta }`$, the following statements are equivalent: i) We have $`\stackrel{~}{H}^{}(\mathrm{\Delta },∂\mathrm{\Delta };ℰ^{})=0`$. ii) The $`A^{}`$-module $`E_\mathrm{\Delta }^{}:=ℰ^{}(\mathrm{\Delta })`$ of global sections is free. iii) The support $`|∂\mathrm{\Delta }|`$ of the boundary subfan is a real homology manifold. For a rational fan $`\mathrm{\Delta }`$, the above conditions are equivalent to iv) For the toric variety $`X_\mathrm{\Delta }`$ associated to $`\mathrm{\Delta }`$, we have $`IH^{odd}(X_\mathrm{\Delta })=0`$. Since complete fans are quasi-convex, the previous results provide a proof of a conjecture of Bernstein and Lunts (see , 15.9). Corollary: For a complete fan $`\mathrm{\Delta }`$, the minimal complex of Bernstein and Lunts is exact. Furthermore, for a quasi-convex fan $`\mathrm{\Delta }`$ and a minimal extension sheaf $`ℰ^{}`$ on $`\mathrm{\Delta }`$, even the $`A^{}`$-submodule $`E_{(\mathrm{\Delta },∂\mathrm{\Delta })}^{}`$ of $`E_\mathrm{\Delta }^{}`$ consisting of the global sections vanishing on the boundary subfan $`∂\mathrm{\Delta }`$ is a free $`A^{}`$-module. 4. Poincaré Polynomials and Poincaré Duality For a quasi-convex fan $`\mathrm{\Delta }`$ and a minimal extension sheaf $`ℰ^{}`$ on $`\mathrm{\Delta }`$, we want to discuss the Poincaré polynomials related to $`\mathrm{\Delta }`$ and the pair $`(\mathrm{\Delta },∂\mathrm{\Delta })`$. Definition: The Poincaré polynomial of $`\mathrm{\Delta }`$ is the polynomial $`P_\mathrm{\Delta }(t):=\sum _{q≥0}dim\overline{E}_\mathrm{\Delta }^{2q}t^{2q}`$. The relative Poincaré polynomial $`P_{(\mathrm{\Delta },∂\mathrm{\Delta })}`$ is defined in an analogous manner. The relation between a global Poincaré polynomial $`P_\mathrm{\Delta }`$ and its local Poincaré polynomials $`P_\sigma `$ for $`\sigma ∈\mathrm{\Delta }`$ is rather explicit: Local-to-Global Formula: If $`\mathrm{\Delta }`$ is a quasi-convex fan of dimension $`n`$, we have $$P_\mathrm{\Delta }(t)=\underset{\sigma ∈\mathrm{\Delta }∖∂\mathrm{\Delta }}{\sum }(t^2−1)^{n−dim\sigma }P_\sigma (t)\text{and}P_{(\mathrm{\Delta },∂\mathrm{\Delta })}(t)=\underset{\sigma ∈\mathrm{\Delta }}{\sum }(t^2−1)^{n−dim\sigma }P_\sigma (t).$$ The proof of the above formulæ depends on the fact that $`C^{}(\mathrm{\Delta },∂\mathrm{\Delta };ℰ^{})`$ resp. $`C^{}(\mathrm{\Delta };ℰ^{})`$ are resolutions of $`E_\mathrm{\Delta }^{}`$ resp. $`E_{(\mathrm{\Delta },∂\mathrm{\Delta })}^{}`$. Hence, the Poincaré series of $`E_\mathrm{\Delta }^{}`$ resp. $`E_{(\mathrm{\Delta },∂\mathrm{\Delta })}^{}`$ equals the alternating sum of the Poincaré series of the cochain modules $`C^i(\mathrm{})`$. Finally, we use the fact that $`E_\mathrm{\Delta }^{}≅A^{}⊗_𝐑\overline{E}_\mathrm{\Delta }^{}`$, since $`E_\mathrm{\Delta }^{}`$ is free, and similarly for $`E_{(\mathrm{\Delta },∂\mathrm{\Delta })}^{}`$, while $`E_\sigma ^{}≅A_\sigma ^{}⊗_𝐑\overline{E}_\sigma ^{}`$.
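As a worked example (ours): let $`\mathrm{\Delta }`$ be a complete simplicial fan in $`𝐑^2`$ with $`k`$ rays, e.g. the normal fan of a $`k`$-gon. Since $`\mathrm{\Delta }`$ is simplicial, $`{}_{\mathrm{\Delta }}{}^{}𝒜_{}^{}`$ is a minimal extension sheaf and $`\overline{E}_\sigma ^{}≅𝐑`$ for every cone, so $`P_\sigma ≡1`$. With $`∂\mathrm{\Delta }=\mathrm{}`$ the Local-to-Global Formula gives $$P_\mathrm{\Delta }(t)=(t^2−1)^2+k(t^2−1)+k=t^4+(k−2)t^2+1,$$ the three summands coming from the zero cone, the $`k`$ rays and the $`k`$ two-dimensional cones, respectively. The coefficients $`(1,k−2,1)`$ form the $`h`$-vector of the $`k`$-gon, and the polynomial is palindromic, $`P_\mathrm{\Delta }(t)=t^4P_\mathrm{\Delta }(t^{−1})`$, in accordance with the duality identity at the end of this note.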
Using an induction argument one proves: Corollary: Let $`\mathrm{\Delta }`$ be a quasi-convex fan. i) The relative Poincaré polynomial $`P_{(\mathrm{\Delta },∂\mathrm{\Delta })}`$ is monic of degree $`2n`$. ii) The absolute Poincaré polynomial $`P_\mathrm{\Delta }`$ is of degree $`2n`$ iff $`\mathrm{\Delta }`$ is complete; otherwise, it is of strictly smaller degree. iii) For a non-zero cone $`\sigma `$, the local Poincaré polynomial $`P_\sigma `$ is of degree at most $`2dim\sigma −2`$. Of course, statement ii) is a rather weak vanishing estimate; in fact we expect the much stronger vanishing condition V($`\sigma `$) to hold. In order to have a recursive algorithm for the computation of global Poincaré polynomials, we relate the local Poincaré polynomial $`P_\sigma `$ to the global one of some fan $`\mathrm{\Lambda }_\sigma `$ in a vector space of lower dimension: The fan $`\mathrm{\Lambda }_\sigma `$ “lives” in the quotient vector space $`V_\sigma /L`$, where $`L`$ is a line in $`V`$ passing through the relative interior of $`\sigma `$. For the projection $`\pi :V_\sigma →V_\sigma /L`$, we pose $`\mathrm{\Lambda }_\sigma :=\{\pi (\tau );\tau ∈∂\sigma \}`$. The homeomorphism $`\pi |_{∂\sigma }:|∂\sigma |→V_\sigma /L`$ induces an isomorphism of the fans $`∂\sigma `$ and $`\mathrm{\Lambda }_\sigma `$. Using the truncation operator $`\tau _{<j}(\sum a_qt^q)=\sum _{q<j}a_qt^q`$ we can now formulate the next step: Local Recursion Formula: Let $`\sigma `$ be a non-zero cone. i) If $`\sigma `$ is simplicial, then we have $`P_\sigma ≡1`$. ii) If the condition $`V(\sigma )`$ of section 1 is satisfied, then we have $`P_\sigma (t)=\tau _{<dim\sigma }\left((1−t^2)P_{\mathrm{\Lambda }_\sigma }(t)\right)`$. For the proof, we consider a minimal extension sheaf $`𝒢^{}`$ on the fan $`\mathrm{\Lambda }:=\mathrm{\Lambda }_\sigma `$. Let $`B^{}`$ be the polynomial algebra on $`V_\sigma /L`$, considered as a subalgebra of $`A_\sigma ^{}≅B^{}[T]`$. Then we have a $`B^{}`$-module isomorphism $`G_\mathrm{\Lambda }^{}≅E_\sigma ^{}`$, such that $`\overline{E}_\sigma ^{}≅\overline{G}_\mathrm{\Lambda }^{}/T\overline{G}_\mathrm{\Lambda }^{}`$: Here $`\overline{G}_\mathrm{\Lambda }^{}`$ is the residue class module of the $`B^{}`$-module $`G_\mathrm{\Lambda }^{}`$ and $`T`$ acts on it via the isomorphism $`G_\mathrm{\Lambda }^{}≅E_\sigma ^{}`$, the latter module living over $`A_\sigma ^{}`$. The action of $`T`$ on $`G_\mathrm{\Lambda }^{}`$ coincides with multiplication by the piecewise linear strictly convex function $`\psi :=T∘(\pi |_{∂\sigma })^{−1}∈𝒜^2(\mathrm{\Lambda })`$. Now we use the following combinatorial version of the Hard Lefschetz Theorem: Combinatorial Hard Lefschetz Theorem: In the same notation as in the proof of the Local Recursion Formula, we set $`m:=dim(V_\sigma /L)=dim\sigma −1`$. If the condition $`V(\sigma )`$ is satisfied, then $`\mu :G_\mathrm{\Lambda }^{}→G_\mathrm{\Lambda }^{}[2],f↦\psi f`$ induces a map $`\overline{\mu }^{2q}:\overline{G}_\mathrm{\Lambda }^{2q}→\overline{G}_\mathrm{\Lambda }^{2q+2},`$ which is injective for $`2q≤m−1`$ and surjective for $`2q≥m−1`$. The surjectivity is nothing but a reformulation of the vanishing condition V($`\sigma `$), while the injectivity is obtained via Poincaré duality for the real vector space $`\overline{G}_\mathrm{\Lambda }^{}`$, the map $`\overline{\mu }`$ being self-adjoint with respect to the Poincaré duality pairing: Such a Poincaré duality on a quasi-convex fan $`\mathrm{\Delta }`$ is obtained as follows: By a stepwise procedure, one constructs an internal (non-canonical) intersection product $`ℰ^{}\times ℰ^{}→ℰ^{}`$.
Then one composes the induced product on the level of global sections with an evaluation map $`E_{(\mathrm{\Delta },∂\mathrm{\Delta })}^{}→A^{}[2n]`$, which is homogeneous of degree 0 and unique up to a non-zero real factor. That construction uses the above corollary and the freeness of $`E_{(\mathrm{\Delta },∂\mathrm{\Delta })}^{}`$. The pairing thus obtained induces a pairing on the level of residue class vector spaces. Poincaré Duality Theorem: For every quasi-convex fan $`\mathrm{\Delta }`$, the pairings $$\begin{array}{cc}\hfill E_\mathrm{\Delta }^{}\times E_{(\mathrm{\Delta },∂\mathrm{\Delta })}^{}& →E_{(\mathrm{\Delta },∂\mathrm{\Delta })}^{}→A^{}[2n]\hfill \\ \hfill \overline{E}_\mathrm{\Delta }^{}\times \overline{E}_{(\mathrm{\Delta },∂\mathrm{\Delta })}^{}& →\overline{E}_{(\mathrm{\Delta },∂\mathrm{\Delta })}^{}→𝐑^{}[2n]\hfill \end{array}$$ are dual pairings of free $`A^{}`$-modules resp. of $`𝐑`$-vector spaces. We end this section with a numerical version of Poincaré duality: Corollary: For a quasi-convex fan $`\mathrm{\Delta }`$, the global Poincaré polynomials $`P_\mathrm{\Delta }`$ and $`P_{(\mathrm{\Delta },∂\mathrm{\Delta })}`$ are related by the identity $$P_{(\mathrm{\Delta },∂\mathrm{\Delta })}(t)=t^{2n}P_\mathrm{\Delta }(t^{−1}).$$ References Barthel G., Brasselet J.-P., Fieseler K.-H., Kaup L., Equivariant Intersection Cohomology of Toric Varieties, Algebraic Geometry: Hirzebruch 70, Contemp. Math. AMS 241 (1999), 45–68. —, Equivariant Intersection Cohomology of Toric Varieties, Dep. Math. Uppsala, U.U.D.M. Report 1998:34. —, Combinatorial Intersection Cohomology of Fans, to appear. Bernstein J., Lunts V., Equivariant Sheaves and Functors, Lecture Notes in Math., vol. 1578, Springer-Verlag, Berlin etc., 1993. Brion M., The Structure of the Polytope Algebra, Tôhoku Math. J. 49 (1997), 1–32. Brylinski J.-L., Equivariant Intersection Cohomology, in: Kazhdan-Lusztig Theory and Related Topics, Contemp. Math. vol. 139, Amer. Math. Soc., Providence, R.I. (1992), 5–32. Fieseler K.-H., Rational Intersection Cohomology of Projective Toric Varieties, Journ. reine angew. Math. (Crelle) 413 (1991), 88–98. Goresky M., Kottwitz R., MacPherson R., Equivariant Cohomology, Koszul Duality, and the Localization Theorem, Invent. Math. 131 (1998), 25–83. Oda T., The Intersection Cohomology and Toric Varieties, in: T. Hibi (ed.), Modern Aspects of Combinatorial Structure on Convex Polytopes, RIMS Kokyuroku 857 (Jan. 1994), 99–112.
no-problem/0002/hep-ph0002273.html
ar5iv
text
# SOURCES FOR ELECTROWEAK BARYOGENESIS ## 1 Introduction Although CP violation and the phase transition are known to be too weak for baryogenesis within the Standard Model, these problems can be overcome in the Minimal Supersymmetric Standard Model (MSSM). In a small region of MSSM parameter space, corresponding to the so-called “light stop” scenario, the transition may be strong enough to avoid the wash-out of baryon number by sphaleron interactions in the broken phase. The sphaleron wash-out computations, while mired in problems associated with the infrared sector of gauge theories, are simple in the sense that one is dealing with equilibrium physics. The situation is markedly different for the theory of baryon production. In this case CP-violating currents are generated inside the bubble walls, diffuse into the plasma in the unbroken phase, and bias sphalerons to produce the baryon asymmetry. By the very axioms of baryogenesis this is an inherently out-of-equilibrium system. To date, no theory exists that could tackle the problem in its full extent, while many scenarios have been put forward in an attempt to extract the leading effect in one or the other limit. (However, for an ongoing project with the aim to self-consistently derive the transport equations for baryogenesis see ref. .) Common to all methods is reducing the problem to a set of diffusion equations for the particle species that bias sphalerons. These coupled equations, it is universally agreed, have the general form $$D_i\mu _i^{\prime \prime }+v_w\mu _i^{}−\mathrm{\Gamma }_i(\mu _i+\mu _j+\mathrm{})=S_i,$$ (1) where $`i`$ labels the particle species, $`\mu _i`$ is its chemical potential, primes denote spatial derivatives in the direction ($`z`$) perpendicular to the wall, $`v_w`$ is the wall velocity, $`\mathrm{\Gamma }_i`$ is the rate of an interaction that converts species $`i`$ into other kinds of particles, and $`S_i`$ is the source term associated with the current generated at the bubble wall. The essential point, and the one where little agreement exists between different approaches, is how to properly derive the source terms $`S_i`$ appearing in (1). In the MSSM, the potentially most dominant source arises from the chargino sector. The CP-violating effects are due to the complex parameters $`m_2`$ and $`\mu `$ in the chargino mass term, $$\overline{\psi }_RM_\chi \psi _L=(\overline{\stackrel{~}{w}^+},\overline{\stackrel{~}{h}_2^+})_R\left(\begin{array}{cc}m_2& gH_2\\ gH_1& \mu \end{array}\right)\left(\begin{array}{c}\stackrel{~}{w}^+\\ \stackrel{~}{h}_1^+\end{array}\right)_L.$$ (2) Spatially varying Higgs fields cause the phases of the effective mass eigenstates to vary nontrivially over the bubble wall. In all methods that address the thick wall limit, one computes the current effected by these spatially varying phases to leading order in an expansion in derivatives of the Higgs fields. There was an important discrepancy in the literature concerning the derivative expansion of the chargino source. References and obtained a source for the $`H_1−H_2`$ combination of higgsino currents of the form $$S_{H_1−H_2}∝\mathrm{Im}(m_2\mu )(H_1H_2^{}−H_2H_1^{}),$$ (3) whereas ref. , albeit unknowingly, found the other, orthogonal linear combination, $`H_1+H_2`$, for which the result is $$S_{H_1+H_2}∝\mathrm{Im}(m_2\mu )(H_1H_2^{}+H_2H_1^{}).$$ (4) We have recently understood that this disagreement about the sign is spurious and that all three methods actually agree with eq.
(4); it simply was not computed by the other authors of the references. The reason that the combination $`H_1+H_2`$ was not considered by the other authors is that it tends to be suppressed by Yukawa interactions and helicity-flipping interactions from the $`\mu `$ term in the chargino mass matrix. Indeed, if all the interactions arising from the Lagrangian $`V=y\mu \stackrel{~}{h}_1\stackrel{~}{h}_2`$ $`+`$ $`h_2\overline{u}_Rq_L+y\overline{u}_R\stackrel{~}{h}_{2L}\stackrel{~}{q}_L+y\stackrel{~}{u}_R^{}\stackrel{~}{h}_{2L}q_L`$ (5) $`−`$ $`y\mu h_1\stackrel{~}{q}_L^{}\stackrel{~}{u}_R+yA_t\stackrel{~}{q}_Lh_2\stackrel{~}{u}_R^{}+\text{h.c.},`$ are considered to be in thermal equilibrium, they give rise to the constraints $`\xi _{H_1}−\xi _{Q_3}+\xi _T=0`$ and $`\xi _{H_2}+\xi _{Q_3}−\xi _T=0`$, which would damp out the effect of the source $`S_{H_1+H_2}`$. The rates $`\mathrm{\Gamma }_A`$ of the processes coming from (5) are finite, however, so the equilibrium relations are satisfied only up to corrections of order $`(D_i\mathrm{\Gamma }_A)^{−1/2}`$, where $`D_i`$ is the diffusion coefficient for Higgs particles or quarks. Using the Higgs diffusion constant $`D_h≃20/T`$ and the Yukawa rate $`\mathrm{\Gamma }≃3y^2T/16\pi `$, one finds only a mild suppression factor $`(D_h\mathrm{\Gamma })^{−1/2}∼1`$. The source $`S_{H_1−H_2}`$ on the other hand suffers from a serious suppression: the baryon number generated is (obviously) proportional to a spatial variation of $`H_2/H_1`$, but relative deviations from constancy of this ratio have been found to be very small, in the range $`10^{−2}`$–$`10^{−3}`$. Therefore the source $`S_{H_1−H_2}`$ should be expected to be subdominant to $`S_{H_1+H_2}`$ even in the models of refs. . In the CFM the situation is even worse, because there the source for $`S_{H_1−H_2}`$ actually vanishes, as we shall see below. ## 2 Semiclassical Boltzmann equation The classical force baryogenesis rests on a particularly appealing intuitive picture. One assumes that the plasma in the condensate region can be described by a collection of semiclassical WKB states, following world lines set by their WKB dispersion relations and the corresponding canonical equations of motion. One can then immediately write down a semiclassical Boltzmann equation for the transport $$(∂_t+𝐯_g⋅∂_𝐱+𝐅⋅∂_𝐩)f_i=C[f_i,f_j,\mathrm{}],$$ (6) where the group velocity and the classical force are given by $$𝐯_g≡∂_{𝐩_c}\omega ,𝐅=\dot{𝐩}=\omega \dot{𝐯}_g,$$ (7) where $`𝐩_c`$ is the canonical and $`𝐩≡\omega 𝐯_g`$ is the physical, kinetic momentum along the WKB world line. Because of CP-violating effects particles and antiparticles experience a different force in the wall region, $`F_{\mathrm{ap}}≠F_\mathrm{p}`$, which leads to a separation of chiral currents. What remains is to compute the dispersion relation to obtain the group velocity and the force, after which the diffusion equations follow from (6) in a standard way by a truncated moment expansion. ### 2.1 Dispersion relation I will first consider the example of a single Dirac fermion with a spatially varying complex mass: $$(i\gamma ^\mu ∂_\mu −mP_R−m^{}P_L)\psi =0;m=|m(z)|e^{i\theta (z)},$$ (8) where $`P_{L,R}=(1∓\gamma _5)/2`$. Assuming planar walls, I will also boost to the frame in which the momentum parallel to the wall is zero, $`p_x=p_y=0`$ (I am ignoring the effects of the thermal background here).
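As an aside on the transport side of the problem: the structure of eq. (1) is simple enough that a one-species version can be integrated directly. The sketch below is ours; the parameter values and the source shape are placeholders, not MSSM inputs.

```python
import numpy as np

# Illustrative one-species version of eq. (1), in units where T = 1.
D, vw, Gamma = 20.0, 0.1, 0.06        # ~ D_h, wall velocity, 3y^2 T/(16 pi)
z = np.linspace(-120.0, 120.0, 1201)
h = z[1] - z[0]

# Localized, dipole-like source at the wall (z = 0); shape is illustrative.
S = np.tanh(z / 5.0) * np.exp(-z**2 / 25.0)

# Finite differences for D*mu'' + vw*mu' - Gamma*mu = S with mu(+-inf) = 0:
n = z.size
A = np.zeros((n, n))
idx = np.arange(1, n - 1)
A[idx, idx - 1] = D / h**2 - vw / (2.0 * h)
A[idx, idx]     = -2.0 * D / h**2 - Gamma
A[idx, idx + 1] = D / h**2 + vw / (2.0 * h)
A[0, 0] = A[-1, -1] = 1.0             # Dirichlet boundaries, mu -> 0
b = S.copy(); b[0] = b[-1] = 0.0
mu = np.linalg.solve(A, b)

# The perturbation extends roughly sqrt(D/Gamma) ~ 18 (in units of 1/T)
# into the unbroken phase: the diffusion tail that biases the sphalerons.
print(np.sqrt(D / Gamma))
```

The solution develops the familiar diffusion tail in front of the wall, which is where the sphalerons act; this is the quantity that the source terms $`S_i`$ ultimately feed.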
In this simple case it is fairly easy to solve the whole wave function to the first nontrivial order in the gradients, $$\psi _s=\frac{|m|}{\sqrt{2p_s^+(\omega +sp_0)}}\left(\begin{array}{c}1\\ \frac{\omega +sp_s^+}{|m|}\end{array}\right)\chi _se^{i∫\stackrel{~}{p}_s+i\frac{\theta }{2}\gamma _5+i\varphi _G},$$ (9) where $`p_0≡\sqrt{\omega ^2−m^2}`$, $`\stackrel{~}{p}_s≡p_0+s\omega \theta ^{}/(2p_0)`$, $`p_s^+≡\stackrel{~}{p}_s+\omega \theta ^{}/2`$, with $`\theta ^{}≡∂_z\theta `$, and $`\sigma _3\chi _s=s\chi _s`$. The phase of the wave function in (9) can be written as an integral over the local (canonical) momentum: $$p_c=p_0+\frac{s\theta ^{}}{2p_0}(\omega \pm sp_0)+\alpha _G^{}.$$ (10) This is, of course, just the usual WKB dispersion relation, which has been derived in many places. The presence of an arbitrary function $`\alpha _G^{}`$ (it may be introduced at any point by a local phase transformation $`\psi →e^{i\alpha _G(x)}\psi `$, which leaves the Lagrangian invariant) shows explicitly, as one should expect, that $`p_c`$ is a gauge-dependent quantity. The physical quantities are gauge independent, however. For example, in the computation of the group velocity, the gauge-dependent parts (including the chiral rotation proportional to $`\pm \theta ^{}`$) vanish because they are $`\omega `$-independent: $$v_g=∂_{p_c}\omega =(∂_\omega p_c)^{−1}=\frac{p_0}{\omega }\left(1+\frac{sm^2\theta ^{}}{2p_0^2\omega }\right)$$ (11) A similar equation holds for antiparticles, but with $`\theta →−\theta `$. The gauge independence of the current $`j^\mu =\overline{\psi }\gamma ^\mu \psi `$ is obvious from (9). Moreover, it is easy to show by direct substitution that $`j^\mu =(1/v_g;\widehat{𝐩}).`$ (12) Thus, in the absence of collisions, the WKB particles merely follow their trajectories (corresponding to the stationary phase of the wave), and if they slow down at some point, the outcome is an increase of the local density. The crux of the CFM is that where particles slow down, antiparticles speed up in relation, leading to a local particle-antiparticle bias. ### 2.2 Physical force We still need to see how the classical force arises from the dispersion relation. Physically, one expects that the force simply corresponds to acceleration, as was assumed above in Eq. (7). It is instructive to see that this force is consistent with the canonical equations of motion. First note that the physical momentum $`p≡\omega v_g`$ may be written in terms of the canonical momentum as $$p≃p_c^\pm −\alpha ^\pm −\frac{s\theta ^{}p}{2\omega },$$ (13) where $`\alpha ^\pm =\alpha _G^{}\pm \theta ^{}/2`$. The force acting on this momentum is then $$F=\dot{p}=\dot{p}_c−\dot{z}∂_z(\alpha ^\pm +\frac{s\theta ^{}p_k}{2\omega }).$$ (14) Using the canonical equations $`\dot{z}=v_g`$ and $`\dot{p}_c=−(∂_z\omega )_{p_c}`$, along with energy conservation, one finds that $`F=−{\displaystyle \frac{mm^{}}{\omega }}+{\displaystyle \frac{s(m^2\theta ^{})^{}}{2\omega ^2}}=\omega v_g∂_zv_g=\omega \dot{v}_g,`$ (15) in accordance with (7). Note that while the canonical force $`F_c≡−(∂_z\omega )_{p_c}`$ is obviously gauge dependent, the gauge parts cancel in the expression for the physical force $`F`$. Again, for antiparticles $`\theta →−\theta `$, so that the second term in (15) is the CP-violating force, which leads to baryon production. ## 3 Baryogenesis from chargino transport The WKB analysis of the chargino sector proceeds very similarly to the above simple example.
Naturally there are some complications due to the additional $`2\times 2`$ flavour-mixing structure. After a little algebra one finds the dispersion relation $$p_{H_i\pm }=p_{0\pm }−\frac{s(\omega +sp_{0\pm })}{2p_{0\pm }}\frac{\mathrm{Im}(m_2\mu )}{m_\pm ^2\mathrm{\Lambda }}(u_1u_2^{}+u_2u_1^{})−s_{H_i}\frac{2\mathrm{Im}(m_2\mu )}{\mathrm{\Lambda }+\mathrm{\Delta }}(u_1u_2^{}−u_2u_1^{})+i\alpha _{i\pm }^{},$$ (16) where $`u_i≡gH_i`$, $`\mathrm{\Lambda }=m_+^2−m_{}^2`$ and $`\mathrm{\Delta }=|m_2|^2−|\mu |^2+u_2^2−u_1^2`$. If $`m_2>\mu `$ ($`m_2<\mu `$), then the larger (smaller) mass eigenstate $`m_+`$ ($`m_{}`$) corresponds to higgsinos. Although promisingly $`s_{H_1}=−s_{H_2}=1`$, the $`(u_1u_2^{}−u_2u_1^{})`$-term does not source the combination $`H_1−H_2`$, because it vanishes when differentiated with respect to $`\omega `$. (It could also be absorbed into the arbitrary phase functions $`\alpha _{i\pm }`$ arising from the freedom to perform field redefinitions.) Apart from this “gauge” phase, both higgsinos have identical dispersion relations and hence identical sources in their diffusion equations, from which it follows that $`S_{H_1−H_2}=0`$ in the CFM. The nonvanishing source has a very simple form $$S_{H_1+H_2}=\frac{s}{2}\frac{v_wD_h}{⟨p^2/\omega ^2⟩_\pm }⟨p_z/\omega ^3⟩_\pm \left(m_\pm ^2\theta _\mathrm{e}^{}\right)^{\prime \prime },$$ (17) where $`⟨\mathrm{}⟩`$ refers to a thermal average and $`m_\pm ^2\theta _\mathrm{e}^{}≡\mathrm{Im}(m_2\mu )(u_1u_2^{}+u_2u_1^{})/\mathrm{\Lambda }`$. The appropriate diffusion equations have been set up and solved in reference . The final baryon number can be written as a one-dimensional integral over the source $$\eta _B∝\frac{\mathrm{\Gamma }_{\mathrm{sph}}}{v_w}C_{sq}∫_{−\mathrm{}}^{\mathrm{}}𝑑z\,S_{H_1+H_2}(z)𝒢(z),$$ (18) where $`\mathrm{\Gamma }_{\mathrm{sph}}`$ is the Chern-Simons number diffusion rate in the symmetric phase, $`v_w`$ is the wall velocity and $`𝒢(z)`$ is a Green's function which I do not write explicitly here. The parameter $`C_{sq}`$ encodes the essential squark-spectrum dependence of our results: if only $`\stackrel{~}{t}_R`$ is light then $`C_{sq}=5/23`$. If, in addition, $`\stackrel{~}{t}_L`$ and $`\stackrel{~}{b}_L`$ are light then $`C_{sq}=1/41`$, and finally, if $`\stackrel{~}{t}_L`$, $`\stackrel{~}{b}_L`$ and $`\stackrel{~}{b}_R`$, and any number of other squarks are light, then $`C_{sq}=0`$. This trend lends striking and entirely independent support to the wash-out-motivated light stop scenario. Fig. 1 shows the contours of $`\delta _\mu =arg(\mu )`$ corresponding to an eventual baryon-to-photon ratio of $`\eta _B=3\times 10^{−10}`$ for $`v_w=0.1`$ and $`v_w=0.01`$. Baryogenesis is seen to remain viable in the MSSM at least for $`\delta _\mu `$ as small as a few $`\times 10^{−3}`$. ## 4 Conclusions I have reviewed baryogenesis via the classical force mechanism (CFM) from chargino transport in the Minimal Supersymmetric Standard Model. It was shown that the physical quantities entering the CFM computation are unambiguous and independent of phase transformations on the fields. It was pointed out that the dominant source for baryogenesis in the thick wall limit is the one corresponding to the linear combination of higgsinos $`H_1+H_2`$, despite the suppression by top-Yukawa-strength interactions, because the corresponding suppression is much milder than the suppression on $`H_1−H_2`$ arising from the need for non-constancy of $`H_2/H_1`$ over the bubble wall.
I suggest that this linear combination should lead to the dominant effect also in the thin wall limit. It was also observed that the CFM is most efficient when as few squarks as possible are light, which lends support to the so-called “light stop scenario”, necessary for avoiding baryon wash-out in the broken phase. It was finally shown that the CFM may be able to produce the observed baryon asymmetry with the explicit CP-violating phase $`\delta _\mu `$ well below present observational limits. ## References
no-problem/0002/hep-ph0002159.html
ar5iv
text
# Model of Large Mixing Angle MSW Solution ## Abstract We have obtained the neutrino mass matrix with the large mixing angle (LMA) MSW solution, $`\mathrm{sin}^22\theta _{}=0.65`$–$`0.97`$ and $`\mathrm{\Delta }m_{}^2=10^{−5}`$–$`10^{−4}\mathrm{eV}^2`$, in the $`S_{3L}\times S_{3R}`$ flavor symmetry. Here and below the subscript $``$ refers to the solar neutrino. The structure of our neutrino mass matrix is found to be stable against radiative corrections. The solar neutrino data as well as the atmospheric data have a big impact on the study of the lepton mass matrices. There is a typical texture of the lepton mass matrix with nearly maximal mixing of flavors, which is derived from the symmetry of the lepton flavor democracy , or from the $`S_{3L}\times S_{3R}`$ symmetry of the left-handed Majorana neutrino mass matrix . This texture has given a prediction for the neutrino mixing $`\mathrm{sin}^22\theta _{\mathrm{atm}}=8/9`$. The mixing for the solar neutrino depends on the symmetry-breaking pattern of the flavor, such as $`\mathrm{sin}^22\theta _{}=1`$ or $`≪1`$. However, the LMA-MSW solution, $`\mathrm{sin}^22\theta _{}=0.65`$–$`0.97`$ and $`\mathrm{\Delta }m_{}^2=10^{−5}`$–$`10^{−4}\mathrm{eV}^2`$ , has not been obtained in the previous works . We study how to obtain the LMA-MSW solution with the $`S_{3L}\times S_{3R}`$ symmetric mass matrices, and discuss the stability of the neutrino mass matrix against radiative corrections. The texture of the charged lepton mass matrix was presented based on the $`S_{3L}\times S_{3R}`$ symmetry as follows : $$M_{\mathrm{}}=\frac{c_{\mathrm{}}}{3}\left(\begin{array}{ccc}1& 1& 1\\ 1& 1& 1\\ 1& 1& 1\end{array}\right)+M_{\mathrm{}}^{(c)},$$ (1) where the second matrix is the flavor-symmetry-breaking one. The unitary matrix $`V_{\mathrm{}}`$, which diagonalizes the mass matrix $`M_{\mathrm{}}`$, is given as $`V_{\mathrm{}}=FL`$, where $$F=\left(\begin{array}{ccc}1/\sqrt{2}& 1/\sqrt{6}& 1/\sqrt{3}\\ −1/\sqrt{2}& 1/\sqrt{6}& 1/\sqrt{3}\\ 0& −2/\sqrt{6}& 1/\sqrt{3}\end{array}\right)$$ (2) diagonalizes the democratic matrix and $`L`$ depends on the mass correction term $`M_{\mathrm{}}^{(c)}`$. The neutrino mass matrix is different from the democratic one if neutrinos are Majorana particles. The $`S_{3L}`$ symmetric mass term is given as follows: $$c_\nu \left(\begin{array}{ccc}1& 0& 0\\ 0& 1& 0\\ 0& 0& 1\end{array}\right)+c_\nu r\left(\begin{array}{ccc}1& 1& 1\\ 1& 1& 1\\ 1& 1& 1\end{array}\right),$$ (3) where $`c_\nu `$ and $`r`$ are arbitrary parameters. The eigenvalues of this matrix are easily obtained by using the orthogonal matrix $`F`$ in eq.(2) as $`c_\nu (1,1,1+3r)`$. The simplest breaking terms of the $`S_{3L}`$ symmetry are added in the (3,3) and (2,2) entries. Therefore, the neutrino mass matrix is written as $$M_\nu =c_\nu \left(\begin{array}{ccc}1+r& r& r\\ r& 1+r+ϵ& r\\ r& r& 1+r+\delta \end{array}\right),$$ (4) in terms of small breaking parameters $`ϵ`$ and $`\delta `$. In order to explain both solar and atmospheric neutrinos with this mass matrix, $`r≪1`$ should be satisfied. However, there is no reason why $`r`$ is very small in this framework. In order to answer this question, we need a higher symmetry of flavors such as the $`O_{3L}\times O_{3R}`$ model . Let us consider the case of $`\delta ≫ϵ≳r`$, in which the $`S_{3L}`$ symmetry is completely broken. Then the neutrino mass eigenvalues are given as $$m_1≃1+\frac{1}{2}ϵ+r−\frac{1}{2}\sqrt{ϵ^2+4r^2},m_2≃1+\frac{1}{2}ϵ+r+\frac{1}{2}\sqrt{ϵ^2+4r^2},m_3≃1+r+\delta ,$$ (5) in units of $`c_\nu `$.
The orthogonal matrix $`U_\nu `$ is given as $$U_\nu ≃\left(\begin{array}{ccc}t& \sqrt{1−t^2}& \frac{r}{\delta }\\ −\sqrt{1−t^2}& t& \frac{r}{\delta −ϵ}\\ \frac{r}{\delta }(\sqrt{1−t^2}−t)& −\frac{r}{\delta −ϵ}(t+\sqrt{1−t^2})& 1\end{array}\right),t^2=\frac{1}{2}+\frac{1}{2}\frac{ϵ}{\sqrt{ϵ^2+4r^2}}.$$ (6) Since the correction term $`L`$ is close to the unit matrix, the MNS matrix $`U_{\alpha i}`$ is approximately given as $`F^TU_\nu `$ as follows: $$F^TU_\nu ≃\left(\begin{array}{ccc}\frac{1}{\sqrt{2}}(t+\sqrt{1−t^2})& \frac{1}{\sqrt{2}}(\sqrt{1−t^2}−t)& −\frac{1}{\sqrt{2}}\frac{ϵr}{\delta (\delta −ϵ)}\\ \frac{1}{\sqrt{6}}(t−\sqrt{1−t^2})(1+\frac{2r}{\delta })& \frac{1}{\sqrt{6}}(t+\sqrt{1−t^2})(1+\frac{2r}{\delta −ϵ})& −\frac{2}{\sqrt{6}}(1−\frac{r}{\delta })\\ \frac{1}{\sqrt{3}}(t−\sqrt{1−t^2})(1−\frac{r}{\delta })& \frac{1}{\sqrt{3}}(t+\sqrt{1−t^2})(1−\frac{r}{\delta −ϵ})& \frac{1}{\sqrt{3}}(1+\frac{2r}{\delta })\end{array}\right).$$ (7) The mixing angle between the first and second flavors depends on $`t`$, which is determined by $`r/ϵ`$. It becomes the maximal angle in the case of $`t=1`$ ($`r/ϵ=0`$) and the minimal one in the case of $`t=1/\sqrt{2}`$ ($`ϵ/r=0`$). Since we get $`\mathrm{sin}^22\theta _{}=ϵ^2/(ϵ^2+4r^2)`$, a suitable value of $`r/ϵ`$ leads easily to $`\mathrm{sin}^22\theta _{}=0.65`$–$`0.97`$, which corresponds to the LMA-MSW solution. The numerical results have been shown in ref. . We should carefully discuss the stability of our results against radiative corrections, since the model predicts nearly degenerate neutrinos. If the texture of the mass matrix is given at the $`S_{3L}\times S_{3R}`$ symmetry energy scale, radiative corrections are not negligible at the electroweak (EW) scale. Let us consider the basis in which the mass matrix of the charged leptons is diagonal. The neutrino mass matrix in eq.(4) is transformed into $`V_{\mathrm{}}^{}M_\nu V_{\mathrm{}}`$. The radiatively corrected mass matrix in the MSSM at the EW scale is given as $`R_G\overline{M}_\nu R_G`$, where $`R_G`$ is given by the RGEs as $$R_G≃\left(\begin{array}{ccc}1+\eta _e& 0& 0\\ 0& 1+\eta _\mu & 0\\ 0& 0& 1\end{array}\right),\eta _i=1−\sqrt{\frac{I_i}{I_\tau }}(i=e,\mu ),I_i≡\mathrm{exp}\left(\frac{1}{8\pi ^2}\underset{\mathrm{ln}(M_Z)}{\overset{\mathrm{ln}(M_R)}{∫}}y_i^2𝑑t\right).$$ (8) We transform this neutrino mass matrix $`R_G\overline{M}_\nu R_G`$ back into the basis where the charged lepton mass matrix is the democratic one at the EW scale as follows: $$FR_G\overline{M}_\nu R_GF^T≃c_\nu \left(\begin{array}{ccc}1+\overline{r}& \overline{r}& \overline{r}\\ \overline{r}& 1+ϵ+\overline{r}& \overline{r}\\ \overline{r}& \overline{r}& 1+\delta +\overline{r}\end{array}\right)+2\eta _Rc_\nu \left(\begin{array}{ccc}1& 0& 0\\ 0& 1& 0\\ 0& 0& 1\end{array}\right),\overline{r}=r−\frac{2}{3}\eta _R.$$ (9) Here we take $`\eta _R≃\eta _e≃\eta _\mu `$, which is a good approximation . Its numerical value depends on $`\mathrm{tan}\beta `$ as: $`10^{−2}`$, $`10^{−3}`$ and $`10^{−4}`$ for $`\mathrm{tan}\beta =60,10,`$ and $`1`$, respectively. As seen in eq.(4) and eq.(9), the radiative corrections are absorbed into the original parameters $`r`$, $`ϵ`$ and $`\delta `$ at leading order. Thus the structure of the mass matrix is stable against radiative corrections, although our model leads to nearly degenerate neutrinos. Of course, this does not mean that the radiative corrections are small.
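The statements above are easy to check numerically. The following sketch is ours; the parameter values are chosen purely for illustration ($`ϵ=4r`$ puts $`\mathrm{sin}^22\theta _{}`$ at 0.8, inside the LMA range), and we set $`c_\nu =1`$ and $`L≃1`$ as in the text.

```python
import numpy as np

r, eps, delta = 1.0e-5, 4.0e-5, 1.0e-2   # illustrative; delta >> eps ~ r

# Neutrino mass matrix of eq. (4), in units of c_nu = 1
M = np.array([[1 + r, r,             r            ],
              [r,     1 + r + eps,   r            ],
              [r,     r,             1 + r + delta]])

# F of eq. (2) diagonalizes the democratic matrix
s2, s3, s6 = np.sqrt(2), np.sqrt(3), np.sqrt(6)
F = np.array([[ 1/s2,  1/s6, 1/s3],
              [-1/s2,  1/s6, 1/s3],
              [ 0.0,  -2/s6, 1/s3]])

w, U = np.linalg.eigh(M)    # eigenvalues ascending: m1 < m2 < m3
MNS = F.T @ U               # MNS matrix (correction term L taken as unity)

print("sin^2 2theta_sun (formula) :", eps**2 / (eps**2 + 4*r**2))
print("sin^2 2theta_sun (numeric) :", 4 * MNS[0, 0]**2 * MNS[0, 1]**2)
print("|U_e3|                     :", abs(MNS[0, 2]))
print("sin^2 2theta_atm (numeric) :", 4 * MNS[1, 2]**2 * (1 - MNS[1, 2]**2))
```

Both solar estimates come out at 0.8, $`|U_{e3}|`$ is of order $`ϵr/\delta ^2`$ and hence negligible, and the atmospheric mixing reproduces $`\mathrm{sin}^22\theta _{\mathrm{atm}}≃8/9`$.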
We have obtained the neutrino mass matrix with the large mixing angle MSW solution, $`\mathrm{sin}^22\theta _{}=0.65`$–$`0.97`$ and $`\mathrm{\Delta }m_{}^2=10^{−5}`$–$`10^{−4}\mathrm{eV}^2`$, in the $`S_{3L}\times S_{3R}`$ flavor symmetry. The structure of our neutrino mass matrix is found to be stable against radiative corrections. We await results from the KamLAND experiment as well as new solar neutrino data.
no-problem/0002/astro-ph0002115.html
ar5iv
text
# Radio Emission Properties of Millisecond Pulsars ## 1. Introduction — Duo quum faciunt idem, non est idem.¹ ¹If two do the same, it is not the same. Through intensive research over almost two decades, it has been well established, both in theory and observation, that millisecond pulsars (MSPs) are the end product of mass accretion in binary systems. As MSPs emerge in the radio universe having been given a second chance in life, they are surrounded by magnetospheres which are several orders of magnitude more compact than those of slower rotating pulsars. Inferred magnetic fields close to the surface of MSPs are 3 to 4 orders of magnitude weaker than in normal pulsars, while charges in these regions experience an accelerating potential similar to that of normal pulsars. The impact of the different environment on the emission process in MSP magnetospheres has been a question addressed already shortly after the discovery of the first few such sources. With the plethora of MSPs detected over the years, a significant sample became available to us, enabling a better understanding not only of MSPs (as radio sources and tools) but of slower rotating (normal) pulsars as well. In the following, we will concentrate on recent progress, referring to Kramer et al. (1998, Paper I) on spectra, pulse shapes and beaming fraction; Xilouris et al. (1998, Paper II) on polarimetry of 24 MSPs; Sallmen (1998) and Stairs et al. (1999) on multi-frequency polarimetry; Toscano et al. (1998) on spectra of Southern MSPs; Kramer et al. (1999b, Paper III) on multi-frequency evolution; and Kramer et al. (1999a, Paper IV) on profile instabilities of MSPs; but see also the following contributions by Kuzmin & Losovsky and Soglasnov. ## 2. Single Pulses vs. Average Profile Studies Single pulse observations still remain the only tool available to address some of the fundamental questions listed below. They are, however, still technically challenging, and the number of observations described in the literature is scarce. In total, data for only three sources, describing 180 min of observations, have been presented, i.e. for PSRs B1937+21, B1534+12 and J0437–4715 (e.g. Sallmen 1998, Cognard et al. 1996, Jenet et al. 1998 and references therein). The results can be summarized in the statement that, based on the single pulses studied, one cannot distinguish between a millisecond and a slowly rotating pulsar. More observations are required to further investigate pulse fluctuations (e.g. stabilization processes), the short-term structure (e.g. how it relates to microstructure) and in particular the polarization characteristics in detail. For the time being, we investigate the wealth of information already provided by average profile studies. ## 3. Flux Density Spectra and Radio Luminosity Prior to the investigations leading to Paper I it was commonly believed that the spectra of millisecond pulsars were steeper than those of normal pulsars. We demonstrated in Paper I that the distribution of spectral indices for MSPs is in fact not significantly different, finding an average index of $`−1.76\pm 0.14`$ (Paper III). The initial impression was due to a selection effect, since the first MSPs were discovered in previously unidentified steep-spectrum sources, as was later pointed out by Toscano et al. (1998). Consequently, the number of MSPs to be discovered in high-frequency surveys was underestimated.
The predictions for searches at frequencies as high as 5 GHz appear even more favourable in light of the latest results presented in Paper III. These suggest that most spectra can be represented by a simple power law, i.e. clear indications for a steepening at a few GHz as known from normal pulsars are not seen. Extending the data to lower frequencies (see Paper III; Kuzmin & Losovsky, next contribution), evidence for spectral turn-overs were not found. <sup>2</sup><sup>2</sup>footnotetext: Upper row: MSPs (PSRs J0218+4218, J0621+1001, B1534+12, J1640+2224, J1730$``$2304), lower row: normal pulsars (PSRs B1831$``$04, B2045$``$16, B2110+27, B2016+28, B1826$``$17) Bailes et al. (1997) pointed out that isolated MSPs are less luminous than those in binary systems, pointing towards a possible relation between radio luminosity and birth scenarios. We have compared a distance limited sample of normal pulsars and MSPs and came to a similar result with the MSPs as a whole appearing as weaker sources than normal pulsars. ## 4. Pulse Profiles – Complexity, Interpulses and Beaming Fraction It was also believed that MSP profiles are more complex than those of normal pulsars. Using a large uniform sample of profiles for fast and slowly rotating pulsars, we showed in Paper I that the apparent larger complexity is due to the (typically) larger duty cycle of MSPs. As a result we see “blown-up” profiles which make it easier to see detailed structure. In fact, blown-up normal pulsar profiles show very similar structure. A quantitative proof is given in Paper I, while Fig. 1 provides an illustration of this effect. Despite this apparent similarity, there is a profound difference betweent MSP profiles and those of normal pulsars! Additional pulse features like interpulses, pre- or post-cursor are much more common for MSPs. While only $`2`$% of all normal pulsars are known to show such features, we detect them for more than 30% of all (field) MSPs. They also appear at apparently random positions across the pulse period in contrast to normal pulsars (Fig. 2a). Their frequent occurrence and location makes one wonder — given the similarity of the main pulse shapes otherwise — whether these components are of the same origin as the main pulse profile or whether other sources of emission (e.g. outer gaps) are responsible (see Paper II). Other possibilities involve an interpretation first put forward for some young pulsars by Manchester (1996), who interpreted some interpulses as the results of cuts through a very wide cone. This is an interesting possibility also for MSPs, since their beam width appears to be much smaller than predicted from the scaling law derived for normal pulsars. The beam width of normal pulsars, $`\rho `$, i.e. the pulse width corrected for geometrical effects (see Gil et al. 1984), follows a distinct $`\rho P^{0.5}`$-law (e.g. Rankin 1993, Kramer et al. 1994, Gould 1994). Using polarization information to determine the viewing geometry and also applying statistical arguments, we calculated $`\rho `$ (at a 10% intensity level) for MSPs in Paper I. We showed that they are not only much smaller than the extrapolation of the known law to small periods, but that – under the assumption of dipolar magnetic fields – the emission of some MSPs seems to come even from within the neutron star — a really disturbing result! 
While we discuss the possibility of non-dipolar fields and the polarization information used below, one explanation would be that (perhaps below a critical period) the emission beam does not fill the whole open field-line region (“unfilled beam”). The situation improves somewhat when we consider the additional pulse features as regular parts of the pulse profile (Fig. 2b). In fact, those MSPs with interpulses may indicate an additional inner scaling parallel to that known for normal pulsars, which could be a result of unfilled beams. We close this section by pointing out that the much smaller beam width has consequences for population studies, which usually utilize the $`\rho `$–$`P`$ scaling as found for normal pulsars. The failure of this law leads to an overestimated beaming fraction and an underestimation of the birth rate of recycled pulsars (see Paper I). ## 5. Polarization Properties The radio emission of MSPs shows all polarization features known from normal pulsars, i.e. circular polarization, which is usually associated with core components, linear polarization, which is usually associated with cone components, and also orthogonal polarization modes (see Paper II, Sallmen 1998, Stairs et al. 1999). Despite the qualitative similarities, the position angle (PA) swing is often strikingly different. While normal pulsars typically show an S-like swing, which is interpreted within the rotating vector model (RVM; Radhakrishnan & Cooke 1969), the PAs of many MSPs often appear flat (see e.g. Fig. 3a). This could be interpreted in terms of non-dipolar fields, but Sallmen (1998) noted that larger beam radii lead to a larger probability for outer cuts of the emission cones, i.e. flatter PA swings according to the RVM. Although one should bear in mind the limitations of the $`\rho `$-scaling law and another caveat discussed later, this argument justifies a geometrical interpretation of the data, which is supported by the results of Hibschman (these proceedings). Magnetic inclination angles derived from RVM fits are important for binary evolution models and determinations of the companion mass (Fig. 3b). ## 6. Frequency Evolution The radio properties of normal pulsars show a distinct frequency evolution, i.e. with increasing frequency the profile narrows, outer components tend to dominate over inner ones, and the emission depolarizes. The emission of MSPs, which at intermediate frequencies tends to be more polarized than that of normal pulsars (Paper II), also depolarizes at high frequencies (Fig. 4b; Paper III). Simultaneously, the profile width hardly changes or remains constant (see Fig. 4a, Paper III; Kuzmin & Losovsky, these proceedings). This challenges attempts to link both effects to the same physical origin (i.e. birefringence). In fact, many profiles also exhibit the same shape at all frequencies, while others evolve in an unusual way, i.e. the spectral index of inner components is not necessarily steeper, so that a systematic behaviour as seen for normal pulsars is hardly observed. This can be understood in terms of a compact emission region, an assumption further supported by a simultaneous arrival of the profiles at all frequencies. We emphasize that we have not detected any evidence for the existence of non-dipolar fields in the emission region (Paper III). ## 7. Profile and Polarization Instabilities The amazing stability with time of MSP profiles has enabled high-precision timing over the years.
However, in Paper IV we discussed the surprising discovery that a few MSPs do show profile changes of unknown origin. The time scales of these profile instabilities are inconsistent with known mode-changing. In particular, PSR J1022+1001 exhibits a narrow-band profile variation never seen before (Paper IV), which could, however, be the result of magnetospheric scintillation effects described by Lyutikov (these proceedings). With the pulse shape the polarization usually changes as well, and hence this effect is possibly related to phenomena which we discovered in Paper II. Some pulsars like PSR J2145–0750 (Paper II) or PSR J1713+0747 (Sallmen 1998) occasionally show a profile which is much more polarized than usual. In the case of PSR J2145–0750, the PA also changes from a distinct (though not S-like) swing to a very flat curve. This is a strong indication that some of the flat PA swings discussed above may not be of simple geometrical origin alone. ## 8. Summary – MSPs in 2000 and Beyond While we have had to be necessarily brief in reviewing MSP properties, we direct the interested reader to the extensive studies of MSPs presented in the quoted literature. We summarize here our point of view: MSPs emit their radio emission by the same mechanism as normal pulsars. Some distinct differences may originate from the way they were formed, but most observed features can be explained by very compact magnetospheres. Our data can be explained without any need to invoke deviations from dipolar field lines, although a large number of open questions remain. We need more polarization information at higher frequencies and, in particular, single pulse studies. These will allow us to study the formation of the profile and its stability, to see whether the additional pulse features are distinct from the main pulse, and how the polarization modes behave under the magnifying glass of the blown-up MSP profiles. There are exciting years to come! ### Acknowledgments. We are very grateful to all the people involved in the studies of MSPs at Bonn, i.e. Don Backer, Fernando Camilo, Oleg Doroshenko, Alexis von Hoensbroech, Axel Jessner, Christoph Lange, Dunc Lorimer, Shauna Sallmen, Norbert Wex, Richard Wielebinski and Alex Wolszczan. ## References Bailes, M., Johnston, S., Bell, J. F., et al. 1997, ApJ, 481, 386 Cognard, I., Shrauner, J., Taylor, J. H., & Thorsett, S. E. 1996, ApJ, 457, 81 Gil, J., Gronkowski, P., & Rudnicki, W. 1984, A&A, 132, 312 Gould, D.M. 1994, PhD thesis, University of Manchester Jenet, F., Anderson, S., Kaspi, V., et al. 1998, ApJ, 498, 365 Kramer M., Xilouris K. M., Lorimer D. R., et al. 1998, ApJ, 501, 270 (Paper I) Kramer M., Xilouris K. M., Camilo F., et al. 1999a, ApJ, 520, 324 (Paper IV) Kramer M., Lange, Ch., Lorimer, D.R., et al. 1999b, ApJ, 526, 975 (Paper III) Manchester, R.N. 1996, in Proc of IAU Colloq. 177, ASP Conf. Series, p. 193 Radhakrishnan, V., & Cooke, D.J. 1969, ARA&A, 32, 591 Rankin, J.M. 1993, ApJ, 405, 285 Sallmen, S. 1998, PhD thesis, University of California at Berkeley Stairs, I. H., Thorsett, S. E., & Camilo, F. 1999, ApJS, 123, 627 Toscano, M., Bailes, M., Manchester, R.N., & Sandhu, J. 1998, ApJ, 506, 863 Xilouris, K. M., Kramer, M., Jessner, A., et al. 1998, ApJ, 501, 286 (Paper II)
# Recovering the Topology of the Initial Density Fluctuations Using the IRAS Point Source Catalogue Redshift Survey ## INTRODUCTION The IRAS Point Source Catalogue Redshift Survey lends itself to topological studies because of its high sampling densities and large volume. It contains approximately $`15000`$ galaxies to the full depth of the PSC ($`0.6`$ Jy). Its sky coverage is $`84.1`$ per cent, where only the zone of avoidance is excluded, here defined as an infrared background exceeding $`25\,\mathrm{MJy}\,\mathrm{sr}^{-1}`$ at $`100\,\mu \mathrm{m}`$, and a few unobserved or contaminated patches at higher latitude. The excluded regions are coded in an angular mask, as shown in Fig. 1. Canavezes et al. analysed the topology of PSCz and showed that it is consistent with the topology of CDM models: the genus curves retain the w-shape characteristic of random-phase density fields even at small smoothing lengths, where non-linear evolution has already generated significant skewness of the 1-point PDF. These non-linearities are, however, detected by a depressed amplitude of the genus curves - amplitude drops - consistent with those detected for the CDM models. Above $`10h^{-1}\mathrm{Mpc}`$ strong phase correlations were not detected in PSCz, and those that were detected at smaller scales are expected in the framework of mildly non-linear gravitational evolution. This supports the hypothesis that structure grew from random-phase initial conditions. In this paper we attempt to subtract these phase correlations by reversing gravity. If it is true that structure does indeed originate in random-phase fluctuations, then reversing gravitational evolution should provide us with an initial density fluctuation field free from phase correlations on any scale. There are several methods to achieve this goal. Nusser & Dekel showed how to express the Zel’dovich approximation in a set of Eulerian coordinates and reverse it to obtain a reconstructed density field. They tested the method with N-body simulations and found satisfactory results. Recently, some improvements have been made on the original method (e.g. Gramann). Nusser, Dekel & Yahil applied the Nusser & Dekel approximation to the $`1.2`$ Jy $`IRAS`$ redshift survey to recover the 1-point probability distribution function of the initial density field. Their results were consistent with Gaussian initial conditions. The PSCz presents us with an unprecedented number of resolution elements and should therefore constitute the best available data set on which to apply a reconstruction technique. The method we choose to follow here is the quasi-linear method proposed by Nusser & Dekel. In section 1 we describe this method; in section 2 we describe the construction of the density maps used as the input of the time-machine and describe our error estimates; section 3 is devoted to testing the method using N-body simulations of a CDM model, and in section 4 we apply the method to the PSCz and present our results. ## 1 THE RECONSTRUCTION METHOD The reconstruction method we follow is based on two premises: the velocity field is, when smoothed on a scale of a few Mpc, irrotational, and the Zel’dovich approximation is accurate over the mildly non-linear regime. Our first step is to try to express the Zel’dovich approximation in a set of Eulerian coordinates.
Given an initial comoving position $`𝐪`$ for a given particle, a final position $`𝐱(𝐪,t)`$ will have the form $$𝐱(𝐪,t)=𝐪+P(𝐪,t).$$ (1) The Zel’dovich approximation states that the displacement term $`P(𝐪,t)`$ can be written as a product of two functions, each a function only of one of the variables $`𝐪`$ or $`t`$, i.e., we can separate the variables $`𝐪`$ and $`t`$: $$𝐱(𝐪,t)=𝐪+D(t)\mathrm{\Psi }(𝐪).$$ (2) This is simply a linear approximation with respect to the particle displacements rather than density. Particles in the Zel’dovich approximation follow straight lines: $$𝐯(𝐪,t)=a(t)\frac{d𝐱}{dt}=a(t)\dot{D}(t)\mathrm{\Psi }(𝐪).$$ (3) The density fluctuation is given by $$\delta (𝐪,t)=\frac{\rho (𝐪,t)}{\overline{\rho }}-1=\frac{1}{J(𝐪,t)}-1,$$ (4) where $`J(𝐪,t)`$ is the Jacobian of the coordinate transformation $`𝐪\rightarrow 𝐱`$. As long as second order terms in $`\delta `$ and $`𝐯`$ are negligible, $`\delta (t)\propto D(t)`$, where $`D(t)`$ is the growing mode solution in linear perturbation theory (see e.g. Peebles). Unlike linear theory, where the density at a given position evolves according to the linear growth rate, under the Zel’dovich approximation infinite density can develop in a finite time as a result of the convergence of particle trajectories into a pancake. In order to integrate a density field back in time, Nusser & Dekel found the differential equation in Eulerian space which contains the Zel’dovich approximation. This is derived from the standard equation of motion of dust particles in an expanding Universe (e.g. Peebles): $$\frac{d𝐯}{dt}+H𝐯=-\frac{1}{a}\nabla \mathrm{\Phi }_g.$$ (5) Here $`𝐯`$ stands for the peculiar velocity of the particle, i.e., $`𝐯=a(t)\frac{d𝐱}{dt}`$, where $`𝐱`$ are the comoving positions; $`\nabla =(\partial /\partial x,\partial /\partial y,\partial /\partial z)`$ and $`\mathrm{\Phi }_g`$ is the gravitational potential, which is related to the local density fluctuation via the Poisson equation $$\nabla ^2\mathrm{\Phi }_g=\frac{3}{2}H^2\mathrm{\Omega }a^2\delta .$$ (6) Using the normalized variables $`\theta `$ and $`\varphi _g`$ defined thus: $$\theta (𝐱,t)\equiv \frac{𝐯(𝐱,t)}{a\dot{D}}=\psi (𝐪),$$ (7) $$\varphi _g(𝐱,t)\equiv \frac{\mathrm{\Phi }_g(𝐱,t)}{a^2\dot{D}},$$ (8) equations 5 and 6 reduce to $$\dot{\theta }+\frac{3H\mathrm{\Omega }}{2f(\mathrm{\Omega })}\theta =-\nabla \varphi _g,$$ (9) where $`f(\mathrm{\Omega })\equiv \dot{D}/HD\simeq \mathrm{\Omega }^{0.6}`$. Notice that the Zel’dovich approximation is contained in 7, which is to say $$\dot{\theta }=0$$ (10) along the trajectory of a given particle, i.e., along a line of constant $`𝐪`$. Hence, 9 is reduced, under the Zel’dovich approximation, to $$\frac{3H\mathrm{\Omega }}{2f(\mathrm{\Omega })}\theta =-\nabla \varphi _g.$$ (11) $`\theta `$ is therefore the gradient of a potential: $$\theta =\nabla \varphi _v,$$ (12) with $$\varphi _v=-\frac{2f(\mathrm{\Omega })}{3H\mathrm{\Omega }}\varphi _g+F(t).$$ (13) We can obviously set $`F(t)\equiv 0`$ since $`\theta `$ is the only physical (measurable) quantity.
Equation 10 can now be expanded in Eulerian coordinates: $$\frac{d\theta }{dt}=0\Rightarrow \left[\frac{\partial }{\partial t}+\frac{d𝐱}{dt}\cdot \nabla \right]\theta (𝐱,t)=0\Rightarrow \frac{\partial \theta }{\partial t}+\dot{D}(t)\left(\psi (𝐪)\cdot \nabla \right)\theta =0\Rightarrow \frac{\partial \theta }{\partial t}+\dot{D}\left(\theta \cdot \nabla \right)\theta =0.$$ (14) Making use of the identity $`\left(\theta \cdot \nabla \right)\theta =\frac{1}{2}\nabla (\theta )^2-\theta \times \left(\nabla \times \theta \right)`$, taking into account the irrotationality of $`\theta `$, and substituting 12 we arrive at $$\nabla \frac{\partial \varphi _v}{\partial t}+\dot{D}\frac{1}{2}\nabla \left(\nabla \varphi _v\right)^2=0\Rightarrow \nabla \left[\frac{\partial \varphi _v}{\partial t}+\frac{\dot{D}}{2}\left(\nabla \varphi _v\right)^2\right]=0\Rightarrow \frac{\partial \varphi _v}{\partial t}+\frac{\dot{D}}{2}\left(\nabla \varphi _v\right)^2=F(t).$$ (15) Again, because only gradients of $`\varphi _v`$ have a physical meaning, we can take $`F(t)`$ to be zero. We finally arrive at the equation: $$\frac{\partial \varphi _v}{\partial D}+\frac{1}{2}\left(\nabla \varphi _v\right)^2=0.$$ (16) This is the differential equation which expresses the Zel’dovich approximation in Eulerian coordinates. Knowing $`\varphi _v`$ at any time enables us to compute $`\varphi _v`$ at any other time simply by integrating 16 forwards or backwards. The first step is to calculate $`\varphi _v`$ at the present time. That can be easily achieved using Poisson’s equation, which in Fourier space reads: $$-k^2\stackrel{~}{\mathrm{\Phi }}_g=\frac{3}{2}H^2\mathrm{\Omega }a^2\stackrel{~}{\delta },$$ (17) where the tildes denote the Fourier transforms. Using 8 we arrive at: $$Dk^2\stackrel{~}{\varphi }_v=\stackrel{~}{\delta }.$$ (18) So, in order to obtain the velocity potential of a given smooth density field with periodic boundary conditions, we first transform it to Fourier space using some FFT code, we then divide the obtained field by $`k^2`$ (normalizing $`D`$ to unity at present), and then transform back to real space to obtain $`\varphi _v`$. $`\varphi _v`$ is then integrated back in time in the most trivial way: at every step, we calculate $`\frac{1}{2}\left(\nabla \varphi _v\right)^2`$ from the potential field; this is then used as a first order Taylor correction to predict the value of $`\varphi _v`$ at the next step. Once a suitable number of integration steps has been performed, we use equation 18 again to obtain the value of $`\delta `$ at the corresponding value of $`D`$. For our application (topology), we expect the exact number of integrations to be irrelevant. After a certain number of iterations we expect to reach the linear regime on all scales, which means that the genus curves of the density fields will no longer change. This particular characteristic of the genus can in fact be useful to test the convergence of the method. We obtained several reconstructed density fields for different values of the number of integration steps, and found the topologies to converge. This result is shown in Fig. 2. ## 2 CONSTRUCTION OF THE DENSITY MAPS The construction of the density maps we follow is, to a large extent, equivalent to the method followed by Canavezes et al. We employ the same fit for the PSCz selection function $`s(z)`$: $$s(z)=\frac{\psi }{z^\alpha \left(1+\left(\frac{z}{z^{*}}\right)^\gamma \right)^{\beta /\gamma }},$$ (19) with the parameters shown in Table 1, and estimate the density $`\rho (r)`$ by $$\rho (r)\propto \frac{m(r)}{s(r)},$$ (20) where $`m(r)`$ is the discrete point distribution. However, the angular mask needs to be treated with care.
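Before turning to the mask, the scheme just described can be condensed into code. The following is a minimal sketch, not the code used for this work: it assumes the density contrast is sampled on a periodic cubic grid, adopts an Einstein–de Sitter growing mode $`D=1/(1+z)`$ normalized to unity today, and uses central differences for the gradient; all function names are ours.

```python
import numpy as np

def velocity_potential(delta, boxsize, D=1.0):
    """Solve D k^2 phi_v~ = delta~ (equation 18) on a periodic grid."""
    n = delta.shape[0]
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=boxsize / n)
    kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
    k2 = kx**2 + ky**2 + kz**2
    k2[0, 0, 0] = 1.0                    # avoid 0/0; the mean mode is set below
    phi_k = np.fft.fftn(delta) / (D * k2)
    phi_k[0, 0, 0] = 0.0                 # only gradients of phi_v are physical
    return np.real(np.fft.ifftn(phi_k))

def grad_squared(phi, boxsize):
    """(grad phi_v)^2 by central differences with periodic boundaries."""
    h = boxsize / phi.shape[0]
    g2 = np.zeros_like(phi)
    for ax in range(3):
        g2 += ((np.roll(phi, -1, axis=ax) - np.roll(phi, 1, axis=ax)) / (2.0 * h)) ** 2
    return g2

def time_machine(delta0, boxsize, z_final=10.0, nsteps=100):
    """Integrate equation 16, d(phi_v)/dD = -(grad phi_v)^2 / 2, back from D = 1."""
    phi = velocity_potential(delta0, boxsize, D=1.0)
    D = 1.0
    dD = (1.0 - 1.0 / (1.0 + z_final)) / nsteps
    for _ in range(nsteps):
        phi += 0.5 * dD * grad_squared(phi, boxsize)  # first-order Taylor step, backwards
        D -= dD
    # Equation 18 once more: delta~ = D k^2 phi_v~ at the final epoch.
    n = delta0.shape[0]
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=boxsize / n)
    kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
    return np.real(np.fft.ifftn(D * (kx**2 + ky**2 + kz**2) * np.fft.fftn(phi)))
```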
Because the structure hidden behind the mask will have a significant effect on the evolution of structure over the whole observable area, we need to resort to some sort of filling prior to using the time-machine. We consider two different types of filling: one, a random filling, in which fake objects are placed randomly over the masked region with an average number density equal to the average number density of the whole survey (weighted by the selection function); and another, a cloning, in which fake objects are placed randomly in each bin, but with an average number density equal to the average number density of the neighbouring observable bins, again weighted by the selection function. In both cases, we create a box containing a sphere of radius $`R_{max}`$ as defined in Table 2. $`R_{max}`$ is the maximum distance up to which the average distance between two neighbouring galaxies in PSCz is smaller than the adopted smoothing length. The size of the box will then be $`\frac{2}{\sqrt{3}}R_{max}`$. Although boundary conditions are not periodic, we smooth the density field in Fourier space assuming periodic boundary conditions. We consider this to be preferable to zero padding, as the latter would create an artificial boundary that would eventually affect the gravitational time-machine. We then apply the Zel’dovich time-machine on the box thus obtained for both filling techniques. In the subsequent topological analysis, we limit ourselves to the sphere inside the box with radius $`R_{max}`$. ### Smoothing Procedure Ideally, one would want to apply the operator time-reverse directly upon the unsmoothed density field obtained from PSCz, and only then smooth it on a range of scales to be able to calculate its genus curve. However, in order to obtain meaningful results, we need a smooth field a priori. This poses a very important problem: these operators (time-reverse and smoothing) do not in general commute. In other words, the topology of the final density field can be significantly different depending on whether the smoothing operation is performed before or after applying the time-machine to our original field. To illustrate this point, let us consider a Gaussian random density field smoothed on some scale $`\lambda `$. After applying the time-machine this density field will still be Gaussian. However, if we were to naively calculate its genus curve choosing for isodensity contours the same isodensity contours defined prior to applying the time-machine, we would wrongly conclude that the field is not Gaussian. In order to circumvent this problem we would be forced to smooth the final field once again on a larger scale, thus reducing considerably the statistical significance of our results. There is, however, an alternative way to solve this dilemma: one can try to find a smoothing operator that commutes with our reconstruction operator. How can we look for such a smoothing operator? One way of ensuring this is to find an operator that does not change the density field if applied once again at a later stage. In other words, if we ensure that our final density field remains the same regardless of how many times we perform the smoothing operation throughout the time-reversing process, then it is because these two operators do indeed commute. A class of such operators are the adaptive smoothing operators. By adaptive smoothing, we understand a local smoothing of variable smoothing length according to the local structure. There are several types of adaptive smoothing.
The ideal adaptive smoothing operator ensures that the total mass enclosed by isodensity contours remains constant throughout the time-reversing process. A simplified version assumes spherical symmetry. At each point we employ a smoothing length $`\lambda `$ such that $`\lambda ^3\propto 1/\rho =1/\left(\rho _0+\delta \right)`$. The proportionality constant defines the characteristic smoothing length, which is the smoothing length employed when the overdensity $`\delta `$ vanishes. For highly dense regions, a smaller value of $`\lambda `$ will be chosen, whereas voids will be smoothed with a larger value of $`\lambda `$. When we apply the time-machine operator on a map smoothed in this way, the isodensity contours around a cluster will move, but the total mass enclosed by them will remain approximately constant, as long as the spherical symmetry hypothesis is a good approximation. In our application to the PSCz we use a spherically symmetric adaptive smoothing algorithm. We start by creating a set of maps obtained from the original map (this might be either the PSCz data or simulation data) by smoothing it on a range of scales around a given characteristic scale $`l_0`$, using Gaussian filters of the form $$G_\lambda (x)=\frac{1}{\pi ^{3/2}\lambda ^3}e^{-x^2/\lambda ^2}$$ (21) where $`\lambda `$ varies around $`l_0`$. We then look for the appropriate value of $`\lambda `$ at each position $`x`$ by enforcing the equation $$(1+\delta _\lambda )\lambda ^3=l_0^3,$$ (22) where $`\delta _\lambda `$ is the density contrast when the original map is smoothed on a scale $`\lambda `$ (a sketch of this selection is given below). This ensures that the mass enclosed in a sphere of radius $`\lambda `$ is, to first order in a Taylor expansion, constant throughout the whole map. ### Error Estimates There are three different sources of error that enter our results: shot noise, cosmic variance and the errors associated with the Zel’dovich approximation itself. The most accurate and realistic way of estimating these errors, which ultimately enter the genus curves of the reconstructed PSCz density fields, is to analyse the topology of fake PSCz catalogues drawn from N-body simulations of some standard CDM model. Since we intend to test the validity of the Zel’dovich time-machine with N-body simulations, it seems reasonable to extend this philosophy to the calculation of the statistical errors themselves. This is achieved by seeking a galaxy number density field with a Poisson distribution, whose expectation value is identical to the density field of the N-body simulation multiplied by the PSCz selection function. This is equivalent to observing the density field of the N-body simulation in the same way as PSCz observes the density field of the real Universe. This galaxy number density field is then divided by the PSCz selection function again to obtain a “PSCz-noisy” distance-independent estimate of the real galaxy number density. Each of the mock PSCz catalogues is adaptively smoothed using the algorithm described above and then used as input in the Zel’dovich time-machine. The genus curves of the reconstructed fields are calculated and the variance obtained over 10 mock PSCz catalogues is used as the statistical error estimate. ## 3 TESTING THE METHOD WITH N-BODY SIMULATIONS OF CDM MODELS. As we mentioned previously, we intend to test the regime of validity of the Zel’dovich time-machine by means of an N-body simulation of a CDM model.
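As an illustration of how the scale selection of equation 22 can be implemented, the sketch below (ours, not the production code) smooths the map on a grid of trial scales and keeps, point by point, the scale whose enclosed-mass residual is smallest. It assumes periodic boundaries and uses scipy's gaussian_filter; note that for the filter of equation 21 the Gaussian width is $`\sigma =\lambda /\sqrt{2}`$.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def adaptive_smooth(delta, boxsize, l0, nscales=16):
    """Spherically symmetric adaptive smoothing: at each point pick the scale
    lambda that best satisfies (1 + delta_lambda) lambda^3 = l0^3 (equation 22)."""
    cell = boxsize / delta.shape[0]
    lambdas = l0 * np.linspace(0.5, 2.0, nscales)      # trial scales around l0
    # Maps smoothed on every trial scale; sigma = lambda/sqrt(2) for equation 21.
    stack = np.array([gaussian_filter(delta, sigma=lam / (cell * np.sqrt(2.0)),
                                      mode="wrap") for lam in lambdas])
    # Residual of equation 22 at every point, for every trial scale.
    resid = (1.0 + stack) * lambdas[:, None, None, None] ** 3 - l0**3
    best = np.argmin(np.abs(resid), axis=0)            # scale index chosen pointwise
    return np.take_along_axis(stack, best[None, ...], axis=0)[0]
```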
We apply the time-machine to the present density field drawn from the simulation and compare the density field thus obtained with the density field drawn from the original test-mass positions used as input in the simulation. More specifically, we need only compare the topologies of the reconstructed and original fields through their genus curves. For this purpose, we use two N-body simulations corresponding to two cold dark matter models, kindly provided by the Virgo consortium. The simulations have been performed with an AP³M-SPH code named HYDRA. Here we consider the SCDM model and the $`\tau `$CDM model, whose parameters are shown in Table 3. Because these simulations contain CDM in periodic boxes of size $`239.5h^{-1}\mathrm{Mpc}`$ and use such a large number of particles, they constitute an ideal ground for this test. We start by binning the particles in cells of side length $`2h^{-1}\mathrm{Mpc}`$ and smooth further on some characteristic smoothing length $`\lambda `$, using the adaptive smoothing algorithm described in section 2, in a procedure analogous to the way we treat the PSCz galaxies. This “double” smoothing is required in the real Universe in order to minimize shot noise effects, but it is also necessary in order to eliminate severe non-linearities, since the Zel’dovich time-machine is expected not to work in such regimes. It is also needed to smooth out regions of orbit-crossing. The next step is to calculate the genus curves of both the original density field that is used as input in the simulation, smoothed in the same manner as described above, and the reconstructed density field obtained after applying the Zel’dovich time-machine to the present density field smoothed in the same way. In order to compare both genus curves we compute their amplitudes and their amplitude drops. It is very important to note that if we are to obtain gaussianized versions of the density fields at the present time, they will not have, in principle, the same genus amplitudes as the density fields themselves, even when these are in the Gaussian regime. This is so because of the very nature of adaptive smoothing. When a given density field is smoothed adaptively, its genus curve will have a different shape and amplitude from those of the genus curve of the same field smoothed with a constant Gaussian window. We need to be extremely careful not to draw the wrong conclusions about the Gaussian or non-Gaussian nature of our density fields. After applying the time-machine, however, the amplitude of fluctuations will be reduced significantly. In fact they will be reduced to such an extent as to make the field look almost homogeneous. This means that the isodensity contours on adaptively smoothed maps will be very close to the isodensity contours on maps smoothed using a constant Gaussian window. Hence, it is possible to calculate amplitude drops and determine the Gaussian (or non-Gaussian) nature of our density maps after applying the Zel’dovich time-machine, i.e., to determine the Gaussian (or non-Gaussian) nature of the reconstructed fields. In order to test the convergence of the reconstruction method, we obtained several reconstructed density fields for different values of the total number of integration steps, as mentioned in section 1. Fig.
2 shows a particular slice of the reconstructed density fields at different integration steps when the method is applied to the maps at $`z=0`$ obtained from the $`\tau `$CDM simulation by smoothing on characteristic lengths of $`8h^{-1}\mathrm{Mpc}`$ and $`10h^{-1}\mathrm{Mpc}`$. As is evident from this figure, the topology of the density fields is indistinguishable for values of $`z`$ greater than $`4`$ on smoothing scales of both $`8h^{-1}\mathrm{Mpc}`$ and $`10h^{-1}\mathrm{Mpc}`$. Only the amplitude of the density fluctuations changes, indicating that we are now in the linear regime. In Fig. 3 this is made even clearer. Here we show a point-by-point comparison of the reconstructed $`\tau `$CDM density fields at redshifts $`z=4`$ and $`z=9`$, for the characteristic smoothing lengths of $`8h^{-1}\mathrm{Mpc}`$ and $`10h^{-1}\mathrm{Mpc}`$. We only show one in eight of all points, chosen randomly. It is obvious from this plot that the shape of fluctuations did not change from $`z=4`$ to $`z=9`$, for either of the smoothing lengths adopted. It is also obvious that the amplitude of fluctuations was reduced by a factor of $`2`$, as expected in the linear regime (notice that in the linear regime the growing mode of fluctuations varies as $`1/(1+z)`$). Hence, our time-machine shows the correct asymptotic behaviour. ### Results Figure 4 shows the isodensity contours obtained for the SCDM simulation and the $`\tau `$CDM simulation, when the fields are adaptively smoothed on a characteristic scale of $`5h^{-1}\mathrm{Mpc}`$ in a comoving box of size $`240h^{-1}\mathrm{Mpc}`$. The first row shows the fields at the present time. The second row shows the fields after applying the Zel’dovich time-machine back to a redshift of 10 and the third row shows the original fields used as input in the simulations. In each panel the thick line represents the contour where the density contrast is zero. Dashed lines represent contours of negative density contrast whereas solid lines represent positive density contrasts. The contour spacing is $`0.2/(1+z)`$. Because this is normalized to the linear reconstruction case, where $`\delta \propto 1/(1+z)`$, the linear theory reconstruction density maps look exactly like the present day maps (first row), albeit with a much lower fluctuation amplitude. From the contours on both the reconstructed maps and the original maps we notice that their fluctuation amplitudes are larger than the fluctuation amplitude of a linearly reconstructed map. This result is expected because fluctuations grow faster at the later stages of evolution, in the mildly non-linear regime. It is obvious from Fig. 4 that the Zel’dovich time-machine is able to change the rank order of isodensity contours. This means that the genus curves will also be changed. In fact we can predict, just by looking at Fig. 4, that the amplitude of the genus curves will increase after applying the Zel’dovich time-machine, as expected. Figures 5 and 6 show the genus curves obtained for the SCDM model and the $`\tau `$CDM model respectively, at selected characteristic smoothing lengths, namely at $`5h^{-1}\mathrm{Mpc}`$, $`8h^{-1}\mathrm{Mpc}`$ and $`14h^{-1}\mathrm{Mpc}`$. We restrict our analysis to this range of smoothing lengths, as we know a priori that the Zel’dovich time-machine is only relevant in the mildly non-linear regime.
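For reference, the random-phase curves fitted in these figures have a closed analytic form. For a Gaussian random field smoothed with the filter of equation 21 on a scale $`\lambda `$, the genus per unit volume is (e.g. Hamilton, Gott & Weinberg 1986) $$g(\nu )=\frac{1}{(2\pi )^2}\left(\frac{\langle k^2\rangle }{3}\right)^{3/2}\left(1-\nu ^2\right)e^{-\nu ^2/2},\qquad \langle k^2\rangle =\frac{\int k^2P(k)e^{-k^2\lambda ^2/2}d^3k}{\int P(k)e^{-k^2\lambda ^2/2}d^3k},$$ so that the w-shape in $`\nu `$ is universal and only the amplitude carries information about the power spectrum. The amplitude drop is then the ratio of the measured amplitude to this random-phase value, with unity signalling the absence of phase correlations.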
In both figures, the first column shows the genus curves of the reconstructed field and its randomized counterpart, whereas the second column shows the genus curves of the original fields, used as input for the simulations. We deliberately do not show the genus curves of the present day adaptively smoothed density fields, since, as we argued in the previous section, the isodensity contours are, at this stage, different from the isodensity contours of the fields smoothed using a constant kernel. The consistency between the genus curves of the reconstructed field and of its randomized counterpart in Fig. 5 and Fig. 6 is striking. This is particularly true for the genus amplitudes, which indicates that there is very little phase correlation in the reconstructed fields on all scales, as expected from Gaussian initial conditions. With regard to the genus amplitudes per se, we find consistency with the genus amplitudes of the original density fields, although we notice a tendency for these to be slightly higher in the reconstructed fields. This means that the second moment of the power spectrum is recovered slightly in excess. This becomes more important as we approach the smaller scales, which is to say as the system becomes more non-linear. It is also interesting to notice a depression at high values of $`\nu `$ in the genus curves of the reconstructed fields. We find this depression to be present in all cases, both for the Virgo simulations and the PSCz data. We expect this to be a particular feature of the reconstruction technique, i.e., ultimately dependent on the Zel’dovich approximation itself. At high values of $`\nu `$ the regime is considerably non-linear and so we expect the Zel’dovich approximation to perform rather poorly. This problem is enhanced by the fact that the density fields have been smoothed adaptively. Nevertheless, we do not expect this feature to affect our results considerably, as the calculation of the amplitude of the genus curves is restricted to the range $`-1<\nu <1`$. Note that there is no sampling noise in these density fields. The jitter in the genus curves is ultimately due to cosmic variance. In Fig. 7 we plot the amplitude drops obtained for the reconstructed SCDM model and for the reconstructed $`\tau `$CDM model as a function of smoothing length. Also shown are the amplitude drops obtained for the models at $`z=0`$. On all scales, the values obtained for the reconstructed fields are closer to unity than those of the present-day fields. This indicates that the Zel’dovich time-machine is a good tool for recovering Gaussian initial density fields. However, at small scales, we detect a slight departure from the expected value of unity, even for the reconstructed fields. This is not surprising since at these scales non-linearities become strong. The strength of the non-linearities depends on the particular model. As we see from Fig. 7 they are more important in the $`\tau `$CDM model than in the SCDM model, in agreement with what was found by Canavezes et al. Fig. 8 shows the genus amplitudes of the reconstructed fields for both the SCDM model and the $`\tau `$CDM model, together with the genus amplitudes of the original fields. In the case of the $`\tau `$CDM model the agreement between the original genus amplitude and the genus amplitude of the reconstructed field is striking, in particular when we consider scales above $`7h^{-1}\mathrm{Mpc}`$. For the SCDM model the agreement is not as good.
The reconstructed genus amplitudes appear to be slightly higher than the true original ones. These results indicate that the Zel’dovich time-machine is effective in recovering the right genus amplitude drops on scales larger than $`8h^{-1}\mathrm{Mpc}`$, although recovered amplitudes cannot be considered reliable in the strict sense of the word. ## 4 RECOVERING THE INITIAL DENSITY FIELD FROM PSCZ As mentioned previously, one of the difficulties in using the PSCz in an attempt to recover the initial density fluctuations in the Universe is the fact that the masked region can make up as much as $`20\%`$ of the whole observable area. Since this region will have a definite gravitational influence on the observable area when we attempt to reconstruct the original density field, some sort of filling is essential. As mentioned in section 2, we employ two different techniques: the random filling and the cloning. Fig. 10 shows how the two different fillings appear to the eye. The upper panel shows the map where the mask has been filled randomly and the lower panel shows the map where the mask has been cloned. Although it is impossible for the eye to recognize any significant difference, we will be able to detect some noticeable differences in the topologies of the reconstructed density fields. ### Results & Discussion Fig. 9 shows the genus curves obtained for the reconstructed PSCz fields. In the first column we plot the genus curves of the fields for which the mask has been filled using the cloning technique, and in the second column we plot the genus curves of the fields where the mask has been filled randomly. The thick solid lines refer to the reconstructed PSCz fields proper, whereas the thin solid lines refer to the randomized versions of those fields, i.e., the fields obtained from the reconstructed density fields by randomizing phases in Fourier space, subject to the reality constraint $`\delta _{-𝐤}=\delta _𝐤^{*}`$, and keeping the same power spectrum. The dotted and dashed lines are the best fitting random-phase curves to both the reconstructed PSCz density fields and their randomized versions. It is striking to notice the proximity between the genus curves, even at small smoothing lengths. The statistical errors drawn from the N-body simulation are not shown in Fig. 9 because of the high degree of correlation between the points in the genus curves. Independently of whether the mask has been randomly filled or cloned, the recovered genus curves seem consistent with random-phase Gaussian fluctuations, because the amplitude drops appear small. However, the amplitudes themselves seem to depend on the mask filling technique. In Fig. 12 we plot the amplitude drops obtained for the reconstructed PSCz density fields, as a function of smoothing length, as well as the error bars drawn from the $`\tau `$CDM model following the method outlined in section 2. The amplitude drops of the reconstructed N-body simulations are also shown for comparison. In contrast with Fig. 11, where phase correlations were found for PSCz at small smoothing lengths, the reconstructed PSCz fields do not show any significant phase correlations on scales ranging from $`5h^{-1}\mathrm{Mpc}`$ to $`14h^{-1}\mathrm{Mpc}`$. This reinforces the hypothesis that density fluctuations originate from random-phase Gaussian initial conditions. ## ACKNOWLEDGEMENTS We are grateful to the Virgo consortium (J. Colberg, H. Couchman, G. Efstathiou, C. S. Frenk, A. Jenkins, A. Nelson, J. Peacock, F. Pearce, P. Thomas and S. D. M.
White) for providing simulation data ahead of publication. AC acknowledges the support of FCT (Portugal).
# Can the inflaton and the quintessence scalar be the same field? ## 1 Introduction The very last years have witnessed growing interest in cosmological models with $`\mathrm{\Omega }_m\simeq 1/3`$ and $`\mathrm{\Omega }_\mathrm{\Lambda }\simeq 2/3`$, following the most recent observational data (see for example the discussion in the paper by Bahcall et al. and references therein). A very promising candidate for a dynamical cosmological constant is a rolling scalar field, named “quintessence”. The main motivation for constructing such dynamical schemes resides in the hope of weakening the fine tuning issue implied by the smallness of $`\mathrm{\Lambda }`$. In this respect, a very suitable class of models is provided by inverse power scalar potentials, which admit attractor solutions characterized by a negative equation of state. Consider the cosmological evolution of a scalar field $`Q`$, with potential $`V(Q)=M^{4+\alpha }Q^{-\alpha }`$, $`\alpha >0`$, in a regime in which the scalar energy density is subdominant with respect to the background. Then it can be shown that the solution $`Q\propto t^{1-n/m}`$, with $`n=3(w_Q+1)`$ and $`m=3(w_B+1)`$, is an attractor in phase space. We have defined $`w_Q`$ to be the equation of state of the scalar field $`Q`$, and $`w_B`$ that of the background ($`=1/3`$ for radiation and $`=0`$ for matter). The equation of state of the scalar field on the attractor is found to be $`w_Q=(\alpha w_B-2)/(\alpha +2)`$, which is always negative during matter domination. As a consequence, the ratio of the scalar to background energy density is not constant but scales as $`\rho _Q/\rho _B\propto a^{m-n}`$, thus growing during the cosmological evolution, since $`n<m`$. The behaviour of these solutions is determined by the cosmological background and for this reason they have been named “trackers” in the literature. A good feature of these models is that for a very wide range of initial conditions the scalar field will reach the tracking attractor before the present epoch. Depending on the initial time, one can have several tens of orders of magnitude of allowed initial values for the scalar energy density. This fact, together with the negative equation of state, makes the trackers feasible candidates for explaining the cosmological observation of a presently accelerating universe. The point at which the scalar and matter energy densities are of the same order depends on the mass scale in the potential. This is fixed by requiring that $`\mathrm{\Omega }_Q=𝒪(1)`$ today. An interesting question, then, is whether the ‘quintessence’ scalar and the inflaton field, which dominate the expansion of the universe at very different times, could indeed be the same field. If this is the case, it should also be possible to uniquely fix the initial conditions for the ‘quintessential’ rolling from the end of inflation. Models in which one single scalar field drives both inflation and the late time cosmological accelerated expansion are named “quintessential inflation” models. ## 2 A particle physics model: Supersymmetric QCD As first noted by Binétruy, supersymmetric QCD theories with $`N_c`$ colors and $`N_f<N_c`$ flavors may give an explicit realization of a quintessence model with an inverse power law scalar potential. The matter content of the theory is given by the chiral superfields $`Q_i`$ and $`\overline{Q}_i`$ ($`i=1,\dots ,N_f`$) transforming according to the $`N_c`$ and $`\overline{N}_c`$ representations of $`SU(N_c)`$, respectively.
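Before specifying the dynamics of this model, it is worth putting numbers to the tracker relations quoted in the Introduction; the following minimal sketch (the helper function is ours) evaluates them for a few representative cases.

```python
def tracker(alpha, w_B):
    """Attractor relations for V(Q) = M**(4+alpha) * Q**(-alpha)."""
    w_Q = (alpha * w_B - 2.0) / (alpha + 2.0)   # equation of state on the tracker
    n, m = 3.0 * (w_Q + 1.0), 3.0 * (w_B + 1.0)
    return w_Q, 1.0 - n / m, m - n              # w_Q, Q ~ t**(1-n/m), rho_Q/rho_B ~ a**(m-n)

for alpha in (2, 6, 10):
    for w_B, era in ((1.0 / 3.0, "radiation"), (0.0, "matter")):
        w_Q, p, g = tracker(alpha, w_B)
        print(f"alpha={alpha:2d}, {era:9s}: w_Q={w_Q:+.3f}, Q ~ t^{p:.3f}, rho_Q/rho_B ~ a^{g:.2f}")
```

For instance, $`\alpha =6`$ in a matter background gives $`w_Q=-0.25`$ and $`\rho _Q/\rho _B\propto a^{0.75}`$: a negative equation of state and a slowly growing energy share, as stated above.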
In the following, the same symbols will be used for the superfields $`Q_i`$, $`\overline{Q}_i`$, and their scalar components. Supersymmetry and anomaly-free global symmetries constrain the superpotential to the unique exact form $$W=(N_c-N_f)\left[\mathrm{\Lambda }^{(3N_c-N_f)}/\mathrm{det}T\right]^{\frac{1}{N_c-N_f}}$$ (1) where the gauge-invariant matrix superfield $`T_{ij}=Q_i\overline{Q}_j`$ appears. $`\mathrm{\Lambda }`$ is the only mass scale of the theory. We consider the general case in which different initial conditions are assigned to the different scalar VEVs $`Q_i=\overline{Q}_i^{*}\equiv q_i`$, and the system is described by $`N_f`$ coupled differential equations. In analogy with the one-scalar case, we look for power-law solutions of the form $$q_{tr,i}=C_it^{p_i},\qquad i=1,\dots ,N_f.$$ (2) It is straightforward to verify that for fixed $`N_f`$ (and when $`\rho _Q\ll \rho _B`$), a solution exists with $`p_i\equiv p=p(N_c)`$ and $`C_i\equiv C=C(N_c,\mathrm{\Lambda })`$ and that it is the same for all the $`N_f`$ flavors. The equation of state of the tracker is given by $$w_Q=\frac{1+r}{2}w_B-\frac{1-r}{2},$$ (3) where we have defined $`r\equiv N_f/N_c`$. Then, even if the $`q_i`$’s start with different initial conditions, there is a region in field configuration space such that the system evolves towards the equal-fields solutions (2), and the late-time behavior is indistinguishable from the case considered by Binétruy, where equal initial conditions for the $`N_f`$ flavors were chosen. In spite of this, the multi-field dynamics introduces some new interesting features. For example, we have found that (in the two-field case) for any given initial energy density such that, for $`q_1^{in}/q_2^{in}=1`$, the tracker is joined before today, there always exists a limiting value for the fields’ difference above which the attractor is not reached in time. A more detailed discussion and numerical results about the two-field dynamics can be found in Masiero et al. ## 3 Quintessential inflation As already discussed, the range of initial conditions which allows $`\rho _Q`$ to join the tracker before the present epoch is very wide. Nevertheless, it should be noted that in principle we do not have any mechanism to prevent $`\rho _Q^{in}`$ from being outside the desired interval. In this respect, an early universe mechanism which could naturally set $`\rho _Q^{in}`$ in the allowed range for late time tracking would be highly welcome. Moreover, if we require the quintessence scalar to be identified with the inflaton, we would at the same time obtain a tool for handling the initial conditions and a simple unified picture of the early and late time universe dynamics, which we call “quintessential inflation”. The basic idea is to consider inflaton potentials which, as is typical in quintessence, go to zero at infinity like inverse powers. In this way it is possible to obtain a late time quintessential behaviour from the same scalar that in the early universe drives inflation. The key point resides in finding a potential which satisfies the condition that inflation and late time tracking both occur, and that they occur at the right times (see Peloso et al.). Two models have been shown to fulfill these two requirements. One example is a first-order inflation model with a potential going to zero at infinity like $`\varphi ^{-\alpha }`$. A bump at $`\varphi \sim M_p`$ allows for an early stage of inflation while the scalar field gets “hung up” in the metastable vacuum of the theory.
Nucleation of bubbles of true vacuum through the potential barrier sets the end of the accelerated expansion and starts the reheating phase. After the reheating process is completed, the quintessential rolling of the scalar $`\varphi `$ starts, and its initial conditions (uniquely fixed by the end of inflation) can be shown to lie naturally within the range which leads to present day tracking. As an alternative, we considered the model of hybrid inflation proposed by Kinney et al. This is shown to naturally include a late-time quintessential behavior. As is typical of hybrid schemes, the potential at early times (that is, until the inflaton field is smaller than a critical value $`\varphi _c`$) is dominated by a constant term and inflation takes place. Eventually the inflaton rolls above $`\varphi _c`$, rendering the second scalar of the model, $`\chi `$, unstable. This auxiliary field starts oscillating about its minimum and in this stage the universe is reheated. After $`\chi `$ has settled down, the inflaton continues its slow roll down the residual potential, which goes to zero at infinity like $`\varphi ^{-2}`$, thus allowing for a quintessential tracking solution. Also in this case the initial conditions for the quintessential part of the model are not set by hand, but depend uniquely on the value of the inflaton field at the end of reheating. ## Acknowledgments I would like to thank Antonio Masiero, Massimo Pietroni and Marco Peloso, with whom I obtained the results presented in this talk.
# Quasi-Localization of Gravity by Resonant Modes ## Abstract: We examine the behaviour of gravity in brane theories with extra dimensions in a non-factorizable background geometry. We find that for metrics which are asymptotically flat far from the brane there is a resonant graviton mode at zero energy. The presence of this resonance ensures quasi-localization of gravity, whereby at intermediate scales the gravitational laws on the brane are approximately four dimensional. However, for scales larger than the lifetime of the graviton resonance the five dimensional laws of gravity will be reproduced due to the decay of the four-dimensional graviton. We also give a simple classification of the possible types of effective gravity theories on the brane that can appear for general non-factorizable background geometries. preprint: hep-th/0002161 The past two years have produced several surprising results in the field of gravity in extra dimensions. Firstly, Arkani-Hamed, Dimopoulos and Dvali showed that the size of compact extra dimensions could be as large as a millimeter, with the fundamental Planck scale as low as a TeV, without running into contradiction with the current short-distance gravitational measurements, if the standard model fields are localized on a 4D “brane”. Subsequently Randall and Sundrum (RS) found that using a non-factorizable “warped” geometry for the extra dimension one could in fact have an infinitely large extra dimension, and still reproduce Newton’s Law at large distances on the brane. The key observation of RS is that in their scenario there is a localized zero-energy graviton bound-state in 5D which should be interpreted as the ordinary 4D graviton. In this scenario the geometry of the extra dimension plays a crucial rôle and localization of gravity on the brane is impossible if the geometry far from the brane is asymptotically flat. In view of this fact, it is even more surprising that, as Gregory, Rubakov and Sibiryakov (GRS) recently showed, even if the geometry is asymptotically flat far from the brane, it is still possible to find a phenomenologically viable model if one does not insist on the Newton potential being valid at arbitrarily large scales, but instead only requires it to hold over a range of intermediate scales (a related proposal can be found in ). The aim of this paper is to present a physical explanation of the results of GRS and to provide a universal description of warped gravitational theories. We will show that the reason behind the GRS result is that even though there is no zero energy bound-state graviton in their model, there is a resonant mode—a “quasi bound-state”—at zero energy, which plays the rôle of the 4D graviton. The existence of this resonance at zero energy implies the “quasi-localization” of gravity—that is, it tends to produce a region of intermediate scales on which the gravitational laws appear to be four dimensional. We find that the long-distance scale at which gravity appears to be five dimensional again is inversely related to the width of the graviton resonance. The physics behind the appearance of this new scale is that at very large time scales the graviton decays into plane waves away from the brane, and thus reproduces 5D gravity at large distances. As the width of the resonance approaches zero, the lifetime becomes large and in the limit one regains the RS model. 
On the other hand, if the resonance becomes very wide, it is basically washed out from the spectrum, its effects become unimportant, and there is no longer a region where gravity is effectively 4D. We show that the existence of such a resonance is expected in these kinds of theories when the geometry is asymptotically flat space far from the brane. Thus we find a simple way of classifying warped gravitational theories: if the ground state wave function is normalizable, one has localization of gravity à la Randall and Sundrum. If the ground state wavefunction is not normalizable, but the geometry does not asymptote to flat space, then there is simply no effective 4D gravity; however, if the ground state wavefunction is non-normalizable and the geometry asymptotes to flat space, we have quasi-localization of gravity via the resonance à la GRS. The most general 5D metric with 4D Poincaré symmetry can be written $$ds^2\equiv g_{\mu \nu }dx^\mu dx^\nu =e^{-A(z)}\left(\eta _{ab}dx^adx^b-dz^2\right).$$ (1) We will assume that the “warp factor” $`A(z)`$ is symmetric and, for simplicity, a non-decreasing function of $`z`$ for $`z>0`$. Furthermore we will also assume that the matter is localized on a brane at $`z=0`$. We now consider fluctuations around the 4D Minkowski metric of the form $`h_{ab}(x,z)=e^{3A(z)/4}\psi (z)\stackrel{ˇ}{h}_{ab}(x)`$, with $`\eta ^{cd}\partial _c\partial _d\stackrel{ˇ}{h}_{ab}(x)=m^2\stackrel{ˇ}{h}_{ab}(x)`$, where $`m`$ is the four-dimensional Kaluza-Klein mass of the fluctuation. The behaviour of the fluctuation in the transverse space is governed by $`\psi (z)`$, which satisfies a Schrödinger-like equation: $$-\frac{d^2\psi (z)}{dz^2}+V(z)\psi (z)=m^2\psi (z),\qquad V(z)=\frac{9}{16}A^{\prime }(z)^2-\frac{3}{4}A^{\prime \prime }(z).$$ (2) Notice that (2) always admits a (not necessarily normalizable) zero-energy wavefunction $$\widehat{\psi }_0(z)=\mathrm{exp}\left[-\frac{3}{4}A(z)\right],$$ (3) which potentially describes the 4D graviton. The auxiliary quantum system described by (2) encodes all the properties that we need in order to establish whether under favourable conditions there exists an effective 4D Newton potential on the brane. The relevant quantity to consider is the induced gravitational potential between two unit masses on the brane. A discrete (normalized) eigenfunction $`\psi _m(z)`$ of the quantum system contributes $$\frac{\psi _m(0)^2}{M_{*}^3}\frac{e^{-mr}}{r},$$ (4) to the potential, where $`M_{*}`$ is the fundamental Planck scale in 5D. On the other hand, continuum modes (normalized as plane waves asymptotically) contribute $$\int _{m_0}^{\infty }dm\,\frac{\psi _m(0)^2}{M_{*}^3}\frac{e^{-mr}}{r}.$$ (5) In the cases which we consider the potential $`V(z)`$ will always approach 0 as $`|z|\rightarrow \infty `$, and so $`\widehat{\psi }_0(z)`$ is the only possible bound-state of the system and the continuum begins at $`m_0=0`$. The potential has the characteristic volcano shape with a central well surrounded by barriers that decay to zero. We can distinguish 3 classes depending on the asymptotic behaviour of $`\widehat{\psi }_0(z)`$: (a) $`\widehat{\psi }_0(z)`$ is normalizable and consequently falls off faster than $`|z|^{-1/2}`$; (b) $`\widehat{\psi }_0(z)`$ is non-normalizable and falls off as a power $`|z|^{-\alpha }`$ ($`\alpha \le \frac{1}{2}`$); and (c) $`\widehat{\psi }_0(z)`$ is non-normalizable and asymptotes to a constant, so the 5D spacetime is asymptotically flat.
(Note that this case requires a region where $`A^{\prime \prime }(z)<0`$, and therefore cannot arise as a scalar field domain wall in an otherwise flat background, for example.) For case (a), $`\widehat{\psi }_0(z)`$ gives rise to the usual 4D Newton potential, as is apparent from (4). The effects of the continuum modes were shown to give generically small corrections. The intuitive reason for this is that in order for $`\widehat{\psi }_0(z)`$ to be normalizable, the tunneling probability for continuum modes of small $`m`$ through the barriers of $`V(z)`$ must vanish as $`m\rightarrow 0`$; this implies that $`\psi _m(0)^2`$ must be less singular than $`m^{-1}`$. Hence the integral over the continuum modes in (5) always yields higher order corrections in $`r^{-1}`$ relative to the leading $`r^{-1}`$ piece. For case (b) there are only continuum modes. In these cases $`\psi _m(0)^2\propto m^{-2\alpha }`$ for small $`m`$, and so the potential (5) behaves as $`r^{2\alpha -2}`$. Consequently there is no effective 4D gravity. Note that in this case gravity does not necessarily appear 5D at large distances, either. Case (c) is the main focus of interest. As in case (b) there are only continuum modes; however, we will argue that under favourable conditions an effective 4D Newton’s Law is recovered in some intermediate regime $`r_1\ll r\ll r_2`$, where the parameters $`r_{1,2}`$ depend on details of the warp factor $`A(z)`$. This region of “quasi-localization” of gravity is intimately connected with the quasi-bound-state $`\widehat{\psi }_0(z)`$ which causes a resonance in $`\psi _m(0)^2`$ at $`m=0`$. The new long distance scale $`r_2`$ depends upon the width of the resonance: the narrower the resonance, the larger the scale $`r_2`$. In the limit in which the width goes to zero $`\widehat{\psi }_0(z)`$ becomes normalizable, $`r_2\rightarrow \infty `$ and 4D gravity is obtained for all distance scales $`r\gg r_1`$, as in the original RS model. In order to prime our intuition it is useful to consider the “volcano box” potential (Figure 1), for which the continuum modes can be calculated exactly. The solution for a symmetric continuum wavefunction has the form: $$\psi _m(z)=\frac{1}{\sqrt{c(m)^2+d(m)^2}}\{\begin{array}{cc}\mathrm{cos}k_1z\hfill & |z|\le z_1\hfill \\ a(m)e^{k_2(|z|-z_1)}+b(m)e^{-k_2(|z|-z_1)}\hfill & z_1\le |z|\le z_2\hfill \\ c(m)\mathrm{cos}k_3(|z|-z_2)+d(m)\mathrm{sin}k_3(|z|-z_2)\hfill & |z|\ge z_2,\hfill \end{array}$$ (6) where $$k_1=\sqrt{V_1+m^2},\qquad k_2=\sqrt{V_2-m^2},\qquad k_3=m,$$ (7) and $$\begin{array}{cc}\hfill c(m)& =\mathrm{cos}k_1z_1\mathrm{cosh}k_2(z_2-z_1)-\frac{k_1}{k_2}\mathrm{sin}k_1z_1\mathrm{sinh}k_2(z_2-z_1),\hfill \\ \hfill d(m)& =\frac{k_2}{k_3}\mathrm{cos}k_1z_1\mathrm{sinh}k_2(z_2-z_1)-\frac{k_1}{k_3}\mathrm{sin}k_1z_1\mathrm{cosh}k_2(z_2-z_1).\hfill \end{array}$$ (8) In order to have a quasi-bound-state at $`m=0`$ we require a solution of the form (6) with $`\psi _0(z)=c(0)`$ for $`|z|\ge z_2`$. This happens when the parameters of the potential satisfy $$\sqrt{V_2}\mathrm{tanh}\sqrt{V_2}(z_2-z_1)=\sqrt{V_1}\mathrm{tan}\sqrt{V_1}z_1.$$ (9) The quantity we are after is $$\psi _m(0)^2=\frac{1}{c(m)^2+d(m)^2}.$$ (10) For small $`m`$, we can expand $`c(m)`$ and $`d(m)`$ in powers of $`m`$.
The important point is that $`c(m)`$ has an expansion in even powers of $`m`$ while $`d(m)`$ has an expansion in odd powers of $`m`$: $$c(m)=c_0+c_1m^2+\cdots ,\qquad d(m)=d_0m+d_1m^3+\cdots .$$ (11) Hence for small $`m`$, $`\psi _m(0)^2`$ has the Breit-Wigner form indicative of a resonance $$\psi _m(0)^2=\frac{𝒜}{m^2+\mathrm{\Delta }m^2}+𝒪(m^4),$$ (12) where the width of the resonance is $`\mathrm{\Delta }m=|c_0|/\sqrt{d_0^2+2c_0c_1}`$. The width depends in a complicated way on the parameters of the potential $`V(z)`$. A narrow resonance can be achieved by having $`c_0`$ small, which requires $`\sqrt{V_2}(z_2-z_1)\gg 1`$. In this limit the width is approximately $$\mathrm{\Delta }m\simeq \frac{8}{(1+V_2/V_1)(z_1+1/\sqrt{V_2})}e^{-2\sqrt{V_2}(z_2-z_1)}.$$ (13) Intuitively, the behaviour of (13) can be deduced by the following reasoning: in order to get a narrow resonance we require that the tunneling probability for modes of small $`m`$ through the barriers of the potential be very small, which is achieved by having $`\sqrt{V_2}(z_2-z_1)\gg 1`$. In fact the exponential factor in (13) can be deduced from a simple WKB analysis. In this approximation the tunneling probability for the eigenfunction $`\psi _m(z)`$ is $$T(m)\simeq \mathrm{exp}\left[-2\int _{z_1}^{z_2}dz\sqrt{V(z)-m^2}\right]=\mathrm{exp}\left[-2\sqrt{V_2-m^2}(z_2-z_1)\right].$$ (14) The width of the quasi-bound-state is $`\mathrm{\Delta }m\propto T(0)`$, giving the exponential dependence in (13). We expect the existence of a resonance to be generic, since it is caused by the non-normalizable mode $`\widehat{\psi }_0(z)`$. Let us suppose that the resonance is sufficiently narrow that we can approximate $`\psi _m(0)^2`$ by $$\psi _m(0)^2=\frac{𝒜}{m^2+\mathrm{\Delta }m^2}+f(m),$$ (15) where $`f(m)`$ is some underlying function which rises from 0 to 1 as the energy goes from 0 to just over the height of the barrier. If we assume that $`f(m)\propto m^\beta `$ ($`\beta >0`$) for small $`m`$, the gravitational potential (5) is $$U(r)=\frac{r_2𝒜}{M_{*}^3r}\int _0^{\infty }dx\,\frac{e^{-xr/r_2}}{x^2+1}+𝒪(1/r^{\beta +2}),$$ (16) where $`r_2=1/\mathrm{\Delta }m`$. The contribution from the resonance gives the 4D Newton’s Law for $`r\ll r_2`$ with Newton’s constant $$G_N=\frac{\pi r_2𝒜}{2M_{*}^3},$$ (17) whereas for $`r\gg r_2`$ the contribution goes as $`1/r^2`$ and so the 5D Newton potential is recovered at very large distances. The rise of the underlying part of the continuum $`f(m)`$ gives the short distance corrections to the 4D Newton’s Law and sets the lower scale $`r_1`$. The question now is under what conditions the resonance is narrow, so that $`r_2`$ can be large. To investigate this we can, following our analysis of the volcano box potential, use the WKB approximation as a guide. The point is that the width of the resonance is proportional to the tunneling probability $`T(m)`$ for the continuum modes of zero energy, through the barrier of the potential, evaluated at $`m=0`$. The WKB approximation gives $$T(m)\simeq \mathrm{exp}\left[-2\int _{z_1}^{\infty }dz\sqrt{V(z)-m^2}\right],$$ (18) where the integral is over the barrier region of the potential. Notice that the integral in (18) is convergent only when $`V(z)`$ falls off faster than $`z^{-2}`$; precisely the situation for case (c) above. In this case the limit $`T(0)`$ is finite and, since we expect $`\mathrm{\Delta }m\propto T(0)`$, the requirement for a narrow resonance is $$\int _{z_1}^{\infty }dz\sqrt{V(z)}\gg 1.$$ (19) In other words, the barriers of the potential must be sufficiently powerful; a numerical illustration for the volcano box is sketched below.
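The sketch evaluates the exact expressions (6)-(10); the parameter values are illustrative choices of our own, not taken from the text. It first tunes $`z_1`$ so that the quasi-bound-state condition (9) holds, then scans $`\psi _m(0)^2`$ to exhibit the Breit-Wigner peak and read off its width, which can be compared with the estimate (13).

```python
import numpy as np
from scipy.optimize import brentq

def psi0_sq(m, V1, z1, V2, z2):
    """|psi_m(0)|^2 = 1/(c^2 + d^2) for the volcano box, equations (6)-(10)."""
    k1, k2, k3 = np.sqrt(V1 + m**2), np.sqrt(V2 - m**2), m   # valid for m^2 < V2
    dz = z2 - z1
    c = np.cos(k1 * z1) * np.cosh(k2 * dz) - (k1 / k2) * np.sin(k1 * z1) * np.sinh(k2 * dz)
    d = (k2 / k3) * np.cos(k1 * z1) * np.sinh(k2 * dz) \
        - (k1 / k3) * np.sin(k1 * z1) * np.cosh(k2 * dz)
    return 1.0 / (c**2 + d**2)

V1, V2, z2 = 25.0, 4.0, 1.5
# Enforce the quasi-bound-state condition (9) by solving for z1.
cond = lambda z: np.sqrt(V1) * np.tan(np.sqrt(V1) * z) \
                 - np.sqrt(V2) * np.tanh(np.sqrt(V2) * (z2 - z))
z1 = brentq(cond, 1e-3, 0.99 * np.pi / (2.0 * np.sqrt(V1)))

m = np.linspace(1e-4, 1.0, 2000)
p = psi0_sq(m, V1, z1, V2, z2)
half_width = m[np.argmax(p < 0.5 * p[0])]   # first m where the peak has fallen by half
print(f"z1 = {z1:.4f}, peak height = {p[0]:.3g}, Delta m ~ {half_width:.3g}")
```

For these values $`\sqrt{V_2}(z_2-z_1)\simeq 2.8`$, large enough to produce a clear peak while keeping the width resolvable on the scan.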
For case (a) above, on the other hand, the integral diverges and $`T(0)=0`$, as expected since $`\widehat{\psi }_0(z)`$ is normalizable. Now we consider some concrete examples. To recap, what we need in order to get quasi-localized gravity on the brane is some geometry which is asymptotically flat in the transverse dimension. This was realized in the simple model of GRS by patching together 5D AdS space onto 5D Minkowski space at some point $`z=z_0`$: $$A(z)=\{\begin{array}{cc}2\mathrm{log}(k|z|+1)\hfill & |z|\le z_0\hfill \\ 2\mathrm{log}(kz_0+1)\hfill & |z|\ge z_0.\hfill \end{array}$$ (20) This case can be exactly solved and one can show that quasi-localization occurs for $`k^{-1}\ll r\ll k^2z_0^3`$. This allows us to check our simple WKB analysis; in this case for $`z_0\gg k^{-1}`$ the WKB integral is $`\int dz\sqrt{V(z)}\simeq \sqrt{15/4}\,\mathrm{ln}(kz_0)`$, giving $`r_2\sim z_0^{\sqrt{15}}`$ (to compare with the exact behaviour $`r_2\sim z_0^3`$). In the GRS scenario the patching of AdS to flat space requires the existence of new branes at $`z_0`$. However, this is not a necessary feature and one can invent scenarios which smoothly interpolate between the AdS geometry and flat space; for instance by taking $$A(z)=-2\mathrm{log}\left(\frac{1}{k|z|+1}+a\right),$$ (21) which can also be smoothed at $`z=0`$ by taking, for example, $$A(z)=-\mathrm{log}\left(\frac{1}{k^2z^2+1}+a^2\right).$$ (22) In this case, the crossover occurs smoothly at $`z_0\simeq 1/(ka)`$. For this example, it is a simple matter to numerically solve the differential equation (2) and find $`\psi _m(0)^2`$ as a function of $`m`$. Figure 2 illustrates this function for some values of the parameters giving rise to a fairly broad resonance, so that the rise of $`\psi _m(0)^2`$ to 1 can also be seen. For small $`a`$ the resonance becomes very narrow and one can verify numerically that it approximates the form in (15). Figure 3 shows a close-up of the resonance to illustrate the Breit-Wigner form. We can find the height of the resonance from our knowledge of $`\widehat{\psi }_0(z)`$: $$\frac{𝒜}{\mathrm{\Delta }m^2}=\frac{2}{\pi }\widehat{\psi }_0(0)^2=\frac{2}{\pi }e^{3(A(\infty )-A(0))/2}\simeq \frac{2}{\pi }a^{-3}.$$ (23) In addition, when the resonance is very narrow, we can find the width to leading order in $`a`$ by using the fact that, as $`a\rightarrow 0`$, $`\widehat{\psi }_0(z)`$ becomes normalizable and the effect of the resonance should approximate a delta function. This gives $$\frac{𝒜}{\mathrm{\Delta }m}=\frac{4k}{\pi }.$$ (24) Hence, $`\mathrm{\Delta }m\simeq 2ka^3\simeq 2k^{-2}z_0^{-3}=r_2^{-1}`$. We have introduced the idea of “quasi localization” of gravity on the brane as the general notion lying behind the scenario of GRS, whereby over a large range of intermediate distance scales the gravitational potential is, to a good accuracy, Newtonian. We argued that the relevant geometry for quasi-localization is one in which the transverse geometry becomes asymptotically flat. The new large scale above which the gravitational potential becomes five-dimensional depends on the scale at which the crossover to flat space occurs. Physically we can describe the onset of 5D gravity on the brane at this new scale by saying that the effective 4D graviton is unstable and decays into the KK continuum. Obviously, in order to be phenomenologically viable, the lifetime must be very long. Although we have only considered the case with one extra dimension, we expect the same phenomenon to occur with any number of extra dimensions.
In the same vein, we note that quasi-localization may be of relevance in string theory, where the transverse geometry to a $`p`$-brane soliton is indeed asymptotically flat. We would like to thank Yuri Shirman for useful discussions and fruitful collaboration. C.C. is an Oppenheimer Fellow at the Los Alamos National Laboratory. C.C., J.E. and T.J.H. are supported by the US Department of Energy under contract W-7405-ENG-36. Note added: After this paper was completed, a revised version of Ref. (as well as Ref. ) appeared, which also independently noted the presence of the resonance in these theories.