# Correlations between Low-Frequency QPOs and Spectral Parameters in XTE J1550–564 and GRO J1655–40
## 1 Introduction
GRO J1655–40 is a black hole X-ray nova (BHXN) with a dynamically-determined primary mass of $`7M_{\odot }`$ (Orosz & Bailyn 1997; Shahbaz et al. 1999), which exceeds the maximum mass of a neutron star (Rhoades & Ruffini 1974; Kalogera & Baym 1996). The source displays superluminal radio jets and is at a distance of $`3.2\pm 0.2`$ kpc (Hjellming & Rupen 1995). GRO J1655–40 was discovered with BATSE on 1994 July 27, and in early 1996 it entered a very low or quiescent state. However, a second outburst commenced on 1996 April 25 and continued for 16 months. During this second outburst, there was an intensive campaign of observations with the Rossi X-ray Timing Explorer (RXTE); this paper is based on extensive spectral and timing results derived from the 52 observations made during 1996–97 (Sobczak et al. 1999a; Remillard et al. 1999a; see also Mendez et al. 1998).
XTE J1550–564 is an X-ray nova (aka soft X-ray transient) and a black hole candidate, which was discovered on 1998 September 6 (Smith et al. 1998). Two weeks later it reached a peak intensity of 6.8 Crab at 2–10 keV to become the brightest X-ray nova yet observed with RXTE. The source has been observed in the very high, high/soft, and intermediate canonical outburst states of BHXN (Sobczak et al. 1999bc; Cui et al. 1999). There is both a radio counterpart (Campbell-Wilson et al. 1998) and an optical counterpart, which has been studied extensively in outburst (Jain et al. 1999; Sanchez-Fernandez et al. 1999). The mass of the compact primary is unknown. The data presented herein are based on 169 pointed observations with RXTE, which represent all of the pointed observations of XTE J1550–564 during Gain Epoch 3 (i.e. before 16:30 UT on 1999 March 22).
Both GRO J1655–40 and XTE J1550–564 are of intense current interest because they display a variety of quasi-periodic X-ray oscillations (QPOs) which can be used to probe the accretion process around black holes. It is well known that the temporal characteristics of black hole candidates are strongly correlated with the spectral state (see van der Klis 1995, and references therein). QPOs are fundamental properties of the accretion flow, and in a few sources the rms amplitude can exceed 15% of the mean X-ray flux (see §4). Consequently, simultaneous studies of the temporal and spectral properties of BHXN are a promising avenue for determining the origin of QPOs in these sources.
GRO J1655–40 exhibits four types of QPOs between 0.1 Hz and 300 Hz (Remillard et al. 1999a), three of which have relatively stable central frequencies. A fourth QPO with a central frequency that varies over 14–22 Hz is the only one of interest here. Similarly, the power spectra of XTE J1550–564 exhibit two types of QPOs: a high-frequency QPO ($`\sim 200`$ Hz) and a variable 0.08–18 Hz QPO (Remillard et al. 1999b; Sobczak et al. 1999b; Cui et al. 1999). Again, we are interested only in this latter QPO. Our focus in this paper is on the correlations between the frequency and amplitude of the 0.08–18 Hz and 14–22 Hz QPOs and the spectral parameters for these two sources. We explore the implications of these correlations for the origin of these QPOs and compare our results with several QPO models.
## 2 Observations and Analysis
The timing and spectral data were obtained using the Proportional Counter Array (PCA; Jahoda et al. 1996) onboard the Rossi X-ray Timing Explorer (RXTE). The PCA consists of five xenon-filled detector units (PCUs) with a total effective area of $`\sim `$6200 cm<sup>2</sup> at 5 keV. The PCA is sensitive in the range 2–60 keV, the energy resolution is $`\sim `$17% at 5 keV, and the time resolution capability is 1 $`\mu `$sec.
The PCA spectral data in the energy range 2.5–20 keV were fit to the widely used model consisting of a multi-temperature blackbody accretion disk plus power-law (Tanaka & Lewin 1995, and references therein). The spectral parameters include the temperature and radius of the inner accretion disk and the power-law photon index; the power-law flux and the disk flux were derived from the fitted parameters. See Sobczak et al. (1999abc) for additional information regarding spectral fitting methods. The fitted temperature and radius of the inner accretion disk presented here are the observed color temperature and radius of the inner disk. Since the disk emission is likely to be affected by spectral hardening due to electron scattering (Shakura & Sunyaev 1973), the physical interpretation of these parameters remains uncertain.
An X-ray timing analysis was conducted by computing the 2–30 keV power spectrum for each PCA observation. The contribution from counting statistical noise was subtracted and the power spectra were corrected for dead-time effects as described by Zhang et al. (1995) and Morgan, Remillard, & Greiner (1997). A chi-squared minimization technique was used to derive the central frequency and width of each QPO. The QPO features were fit with Lorentzian functions, while the power continuum in the vicinity of the QPO feature was modeled with a power-law function. Triple QPO features exhibiting a fundamental as well as sub- and first-harmonics were common (see Fig. 1a). We identify the central QPO feature with the largest rms amplitude as the fundamental. The integrated fractional rms amplitude of the QPO is the square root of the integrated power in the QPO feature, expressed as a fraction of the mean count rate. The amplitudes of weaker QPOs at harmonic frequencies have been excluded from the QPO amplitude discussed here. See Remillard et al. (1999ab) for further information regarding the timing analysis of the data for GRO J1655–40 and XTE J1550–564. QPOs observed during the rising phase of XTE J1550–564 (observations 1–14 in Table 1) were first reported by Cui et al. (1999) and details of the timing analysis for those observations can be found therein.
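As a concrete illustration of this fitting procedure (a minimal sketch of ours, not the pipeline actually used for these data; the frequencies, widths and normalizations below are invented), a Lorentzian QPO feature on a local power-law continuum can be fit by chi-squared minimization:

```python
# Sketch of a QPO fit: power-law continuum + Lorentzian, as described above.
# All numbers are illustrative placeholders, not measured values.
import numpy as np
from scipy.optimize import curve_fit

def qpo_model(f, a_pl, slope, norm, f0, fwhm):
    continuum = a_pl * f ** (-slope)                   # power-law continuum
    # Lorentzian normalized so its integral over frequency equals `norm`
    lorentz = norm * (fwhm / (2 * np.pi)) / ((f - f0) ** 2 + (fwhm / 2) ** 2)
    return continuum + lorentz

f = np.linspace(0.5, 32.0, 400)                        # Hz
truth = qpo_model(f, 0.02, 1.0, 0.05, 6.0, 0.8)        # fake 6 Hz QPO
power = truth * np.random.chisquare(2, f.size) / 2     # chi^2_2 noise per bin

popt, _ = curve_fit(qpo_model, f, power, p0=[0.01, 1.0, 0.01, 5.0, 1.0])
# integrated fractional rms amplitude = sqrt(integrated power in the QPO),
# assuming the spectrum is in (rms/mean)^2 per Hz units
rms = np.sqrt(popt[2])
print(f"nu_0 = {popt[3]:.2f} Hz, FWHM = {popt[4]:.2f} Hz, rms ~ {rms:.3f}")
```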
For GRO J1655–40, we consider RXTE programs 10255 and 20402, which constitute almost all of the RXTE exposures of this source. Spectral parameters for GRO J1655–40 are given in Table 1 of Sobczak et al. (1999a) and the QPO parameters are given in Table 3 of Remillard et al. (1999a). The observations of XTE J1550–564 include exposures under RXTE programs 30188-06, 30191, 30435, and 40401, with spectral parameters published in Table 1 of Sobczak et al. (1999bc) and QPO parameters listed here in Table 1.
For four observations listed in Table 1, more than one QPO frequency is given (observations 153–154 & 166–167). In these cases, the QPOs are of approximately equal amplitude and are blended together. Consequently, we are unable to identify the fundamental component, and we have excluded these data from the figures presented here. The QPO frequencies of observations 15–75 listed in Table 1 differ slightly from the corresponding entries (1–60) in Table 1 of Sobczak et al. (1999b), because we used the improved fitting technique described above.
## 3 Results
We now investigate the correlations between the frequency/amplitude of the QPOs and the X-ray spectral parameters of GRO J1655–40 and XTE J1550–564. As noted above, our interest is focused on the strong QPOs in the range 0.08–22 Hz that routinely exhibit changes in frequency from day to day. The correlations between QPO frequency and the spectral parameters are shown in Figures 2 & 3 and the correlations between QPO amplitude and the spectral parameters are shown in Figures 4 & 5. The open circles in Figures 2–7 represent those observations in which high-frequency (161–300 Hz) QPOs are also present.
For both sources, the variable QPO frequency is correlated with all of the spectral parameters plotted in Figures 2a–2f & 3a–3f, with a few exceptions (e.g. the vertical branch with $`\nu >17`$ Hz in Fig. 2d). However, the respective correlations for the two sources are generally in the opposite sense, although there is a slight similarity between the QPO frequency vs. $`T_{in}`$ relation for the two sources. The only striking exception to the contrary correlations between QPO frequency and the spectral parameters for the two sources is the relation between QPO frequency and disk flux shown in Figures 2c & 2f. In this case, the QPO frequency is correlated with the temperature and radius of the inner disk in such a way that both sources exhibit a general increase in the QPO frequency as the disk flux increases. This relation for XTE J1550–564 appears roughly linear in the range 1–7 Hz if one excludes the observations when high-frequency QPOs are present (open circles in Fig. 2c).
The QPO amplitude is generally not well correlated with the spectral parameters for either source (Fig. 4a–4f & 5a–5f), and these correlations are more complicated than the ones considered above. For XTE J1550–564, the QPO amplitude is not well correlated with the temperature or radius of the inner disk (Fig. 4a & 4b), but the QPO amplitude does generally decrease as the disk flux increases (Fig. 4c). Also for XTE J1550–564, the QPO amplitude is correlated with the photon index for $`\mathrm{\Gamma }=1.5`$–2.1, but not well correlated for $`\mathrm{\Gamma }>2.1`$ (Fig. 5a). In the case of GRO J1655–40, the correlations exhibit two branches, with high-frequency QPOs present on one branch and absent on the other (Fig. 4d–4f & Fig. 5d–5f). For GRO J1655–40, note that the branch without high-frequency QPOs consistently has the steeper slope. The correlations involving the photon index, power-law flux, and total flux for XTE J1550–564 also exhibit two or possibly three branches (Fig. 5a–5c). However, the branches are less distinct and the high-frequency QPOs are not restricted to a single branch.
The correlations between QPO amplitude and QPO frequency are shown in Figure 6a–6b for both sources. The QPO amplitude generally decreases as the frequency increases for both sources. The QPO amplitude for XTE J1550–564 is very high and typically about an order of magnitude greater than the amplitude for GRO J1655–40. On the other hand, the QPO frequency is about twice as high on average for GRO J1655–40. The black hole candidate GRS 1915$`+`$105 displays QPOs in approximately the same frequency range and with approximately the same amplitude as XTE J1550–564. The correlations between QPO frequency and amplitude and the spectral parameters for GRS 1915$`+`$105 are similar to those of XTE J1550–564 (Muno, Morgan, & Remillard 1999). The larger amplitudes and lower frequencies of the QPOs in XTE J1550–564 and GRS 1915$`+`$105 may indicate that the QPO mechanism in these two sources is operating in a different regime than for GRO J1655–40 (due to differences in the mass accretion rate or central mass) or that a different QPO mechanism is at work.
## 4 Discussion
Both XTE J1550–564 and GRO J1655–40 exhibit a similar, strong positive correlation between the QPO frequency and the disk flux in X-rays. A similar quasi-linear relation between these two quantities was found for the variable 1–15 Hz QPOs in GRS 1915$`+`$105 (Markwardt, Swank, & Taam 1999) and the 20–30 Hz QPOs in XTE J1748–288 (Revnivtsev, Trudolyubov, & Borozdin 1999). The correlation between QPO frequency and disk flux suggests that the QPO frequency is intimately related to the accretion disk. Since the contribution of the disk to the total flux in XTE J1550–564 and GRO J1655–40 varies from 5–95% (Sobczak et al. 1999abc), the disk flux is not necessarily a good indicator of the total mass accretion rate. However, the disk flux alone is a reasonable measure of the mass accretion rate through the inner disk. Therefore, we can conclude that the QPO frequency increases as the mass accretion rate through the inner disk increases.
The QPOs are also related to the power-law component: the variable frequency QPOs appear in XTE J1550–564 and GRO J1655–40 only when the power-law contributes more than 20% of the 2–20 keV flux (Sobczak et al. 1999abc). Figure 7 shows the power-law flux vs. the disk flux. There is a clear distinction between the observations with and without QPOs: the observations without QPOs cluster in a nearly horizontal line at low power-law flux (Fig. 7b,d), whereas the observations with QPOs show significant vertical displacements toward increased power-law flux (Fig. 7a,c). The same distinction between observations with and without QPOs is present in GRS 1915$`+`$105 (Muno et al. 1999). We conclude that not only is the QPO frequency tied to the accretion disk (see above), but in addition the QPO phenomenon is closely related to the power-law component: the power-law flux must reach a threshold of $`\sim 20`$% of the 2–20 keV flux to trigger the QPO mechanism.
The variable frequency QPOs in GRO J1655–40 and XTE J1550–564 generally yield rms amplitudes in the range of 0.3–1.5% and 1–15% of the mean count rate, respectively. The X-ray luminosity of XTE J1550–564, which is $`2\times 10^{38}`$($`D`$/6kpc)<sup>2</sup> erg s<sup>-1</sup> at typical brightness levels ($`\sim `$ 2 Crab) when QPOs are seen, is modulated with a crest-to-trough ratio as high as 1.5 (see Fig. 1b). Previously, only the microquasar GRS 1915$`+`$105 has shown QPOs with such a large amplitude at such a high luminosity (Morgan et al. 1997). The large modulation of the X-ray luminosity indicates that these QPOs originate within several gravitational radii ($`r_g=GM/c^2`$) of the central object, where most of the gravitational energy is liberated in the accretion flow. The QPO frequency cannot be the Keplerian frequency at these radii without some novel mechanism for transporting energy out to large radii, because a 10 Hz QPO would correspond to a radius of 700 km (or $`\sim 50r_g`$) for a $`10M_{\odot }`$ black hole.
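This closing estimate is easy to verify (our arithmetic, not part of the original text):

```python
# Radius where the Keplerian frequency equals 10 Hz for a 10 M_sun black
# hole, compared with the gravitational radius r_g = GM/c^2.
import math

G, c, M_sun = 6.674e-11, 2.998e8, 1.989e30      # SI units
M = 10.0 * M_sun
omega = 2.0 * math.pi * 10.0                    # 10 Hz QPO
r_kep = (G * M / omega ** 2) ** (1.0 / 3.0)     # Kepler: omega^2 r^3 = GM
r_g = G * M / c ** 2
print(f"r_Kep = {r_kep/1e3:.0f} km ~ {r_kep/r_g:.0f} r_g")  # ~700 km, ~50 r_g
```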
As discussed by Markwardt et al. (1999), neither the beat frequency model (Alpar & Shaham 1985) nor the sonic point model (Miller, Lamb, & Psaltis 1998) is applicable to the $`\sim `$ 10 Hz variable frequency QPOs observed in BHCs. Thermal-viscous instabilities (Chen & Taam 1994; Abramowicz, Chen & Taam 1995) can produce strong oscillations; however, their 0.02–0.06 Hz frequencies are well below most of the 0.08–22 Hz QPOs discussed here. Moreover, this model predicts a negative rather than a positive correlation between the QPO frequency and mass accretion rate. One model that does predict large-amplitude QPOs near 10 Hz, which are positively correlated with the mass accretion rate, invokes an oscillating shock near the inner disk (Molteni, Sponholz, & Chakrabarti 1996). However, it is not clear whether such a shock is present for accretion flows with high specific angular momentum (Narayan, Kato, & Honma 1997).
Another possibility is that the variable frequency QPOs are the result of radiation-driven oscillations in a quasi-spherical accretion flow or corona (Cui 1999). Fortner, Lamb, & Miller (1989) modeled radiation-driven radial oscillations for neutron stars with luminosities near the Eddington luminosity. The oscillation frequency is dominated by the inflow time from the outer part of the radial flow, which corresponds to $`\sim 10`$ Hz if the radial flow begins a few hundred km from the central object. The observed increase in the QPO frequency as the mass accretion rate through the disk increases could be caused by a shrinking of the radial inflow region. However, in the simulations by Fortner et al. (1989), the luminosity is modulated by only $`\sim 1`$%, compared to the almost 15% modulation observed for XTE J1550–564.
## 5 Conclusions
For XTE J1550–564 and GRO J1655–40, the QPO frequency and amplitude are correlated at some level with all of the spectral parameters, as illustrated in Figures 2–5. However, in the case of the QPO frequency, the correlations are generally opposite for the two sources. The only exception is the QPO frequency vs. disk flux plotted in Figures 2c & 2f. Both sources exhibit a general increase in the QPO frequency as the disk flux increases. The correlation between QPO frequency and disk flux suggests that the QPO frequency is intimately related to the accretion disk. The same relation was found for variable 1–15 Hz QPOs in GRS 1915$`+`$105 (Markwardt et al. 1999) and 20–30 Hz QPOs in XTE J1748–288 (Revnivtsev et al. 1999). Since the disk flux is a reasonable measure of the mass accretion rate through the disk, we can conclude that the QPO frequency increases as the mass accretion rate through the disk increases. In addition, the QPOs are present only when the power-law contributes more than 20% of the 2–20 keV flux, which indicates that both the disk and the power-law components are linked to the QPO phenomenon. The QPO amplitude for XTE J1550–564 is very high and typically about an order of magnitude greater than the amplitude for GRO J1655–40, whereas the QPO frequency is about twice as high for GRO J1655–40. This behavior may indicate that the QPO mechanism is operating in a different regime for each source, which may relate to why the correlations are generally opposite for the two sources.
The correlations between the QPO frequency and amplitude and the X-ray spectral parameters demonstrate that the QPO phenomenon is linked to the overall emission properties of the source. No current model for the origin of these QPOs can adequately explain more than a few of the many correlations presented here. We hope that this study will inspire further significant theoretical work on the subject.
This work was supported, in part, by NASA grant NAG5-3680 and the NASA contract to MIT for instruments of RXTE. Partial support for J.M. and G.S. was provided by the Smithsonian Institution Scholarly Studies Program. W.C. would like to thank Shuang Nan Zhang and Wan Chen for extensive discussions on spectral modeling and interpretation of the results.
# A New Gauge Fixing Method for Abelian Projection
## 1 Introduction
Since ’t Hooft and Mandelstam proposed that the QCD vacuum state behaves like a magnetic superconductor, a dual Meissner effect has been considered to play an essential role in the mechanism of color confinement. A gauge is chosen to reduce the gauge symmetry of a non-Abelian group to its maximal Abelian (MA) subgroup, and Abelian fields and magnetic monopoles can be identified there. When one reduces $`SU(N)`$ to $`U(1)^{N-1}`$ by the partial gauge fixing, monopoles appear in the $`U(1)^{N-1}`$ sector as topological objects. Confinement in QCD is conjectured to be due to condensation of the monopoles. By using the MA gauge which maximizes the functional
$`R`$ $`=`$ $`{\displaystyle \underset{x,\mu }{\sum }}\text{Tr}U_\mu (x)\sigma _3U_\mu (x)^{\dagger }\sigma _3,`$ (1)
Suzuki and Yotsuyanagi first found that the value of the Abelian string tension is close to that of the non-Abelian theory, where $`U_\mu (x)`$ are $`SU(2)`$ link variables on the lattice. Since then much numerical evidence has been collected to show the importance of monopoles in the QCD vacuum; we refer to Ref. for a review of these results.
There are infinitely many ways of extracting $`U(1)^{N-1}`$ from $`SU(N)`$. This corresponds to the choice of gauge in the Abelian projection. Abelian and monopole dominances can be clearly seen in the MA gauge but not in the others; they seem to depend on the choice of gauge in the Abelian projection. However, the dual Meissner effect in the MA gauge alone is not enough for a proof of color confinement, since Abelian charge confinement and color confinement are different.
Recently Ogilvie has developed a character expansion for the Abelian projection and found that gauge fixing is unnecessary, i.e., Abelian projection yields string tensions of the underlying non-Abelian theory even without gauge fixing. Essentially the same mechanism was observed by Ambjørn and Greensite for $`Z_2`$ center projection of $`SU(2)`$ link variables. See also Ref.. Furthermore, by introducing a gauge fixing function $`S_{gf}=\lambda \mathrm{Tr}U_\mu (x)\sigma _3U_\mu (x)^{\dagger }\sigma _3`$, Ogilvie has also shown that the Abelian dominance for the string tension occurs for small $`\lambda `$. Hence he conjectures that Abelian dominance is gauge independent and that gauge fixing results in producing fat links for the Wilson loop and is computationally advantageous for the measurements. Further, Suzuki et al. have shown that if the gauge independence of Abelian dominance is realized, the gauge independence of monopole dominance is also proved. Hence proving the gauge independence of the Abelian and monopole dominances is very important, especially in the intermediate region between no gauge fixing and exact MA gauge fixing.
In this letter, we analyze the gauge dependence of the Abelian projection. We employ stochastic quantization with gauge fixing as the gauge fixing scheme, as proposed by Zwanziger:
$$\frac{\partial }{\partial \tau }A_\mu ^a(x,\tau )=-\frac{\delta S}{\delta A_\mu ^a(x,\tau )}+\frac{1}{\alpha }D_\mu ^{ab}\mathrm{\Delta }^b+\eta _\mu ^a(x,\tau ),$$
(2)
where $`x`$ is Euclidean space-time and $`\tau `$ is fictitious time. $`\eta `$ stands for Gaussian white noise
$`\langle \eta _\mu ^a(x,\tau )\rangle `$ $`=`$ $`0,`$
$`\langle \eta _\mu ^a(x,\tau )\eta _\nu ^b(x^{\prime },\tau ^{\prime })\rangle `$ $`=`$ $`2\delta ^{ab}\delta _{\mu \nu }\delta ^4(x-x^{\prime })\delta (\tau -\tau ^{\prime }).`$
Here $`\mathrm{\Delta }`$ is defined as
$`\mathrm{\Delta }(x)=\mathrm{\Delta }^1(x)\sigma _1+\mathrm{\Delta }^2(x)\sigma _2,`$
$`\mathrm{\Delta }^1(x)`$ $`=`$ $`4g{\displaystyle \underset{\mu }{\sum }}[\partial _\mu A_\mu ^1(x)-gA_\mu ^2(x)A_\mu ^3(x)],`$ (3)
$`\mathrm{\Delta }^2(x)`$ $`=`$ $`4g{\displaystyle \underset{\mu }{\sum }}[\partial _\mu A_\mu ^2(x)+gA_\mu ^1(x)A_\mu ^3(x)].`$ (4)
Note that $`\alpha =0`$ corresponds to the MA gauge fixing and $`\alpha =\mathrm{\infty }`$ is the stochastic quantization without gauge fixing.
## 2 Formulation
$`SU(2)`$ elements can be decomposed into diagonal and off-diagonal parts after Abelian projection,
$`U_\mu (x)`$ $`=`$ $`c_\mu (x)u_\mu (x),`$ (5)
where $`c_\mu (x)`$ is the off-diagonal part and $`u_\mu (x)`$ is the diagonal one,
$$u_\mu (x)=\left(\begin{array}{cc}\mathrm{exp}(i\theta _\mu (x))& 0\\ 0& \mathrm{exp}(-i\theta _\mu (x))\end{array}\right).$$
The diagonal part can be regarded as the link variable of the remaining U(1). One can construct monopole currents from the field strength of the U(1) links:
$`\theta _{\mu \nu }(x)`$ $`=`$ $`\theta _\mu (x)+\theta _\nu (x+\widehat{\mu })-\theta _\mu (x+\widehat{\nu })-\theta _\nu (x)`$
$`=`$ $`\overline{\theta }_{\mu \nu }+2\pi n_{\mu \nu }(-\pi \le \overline{\theta }_{\mu \nu }<\pi ),`$
$`k_\mu (x)`$ $`=`$ $`{\displaystyle \frac{1}{4\pi }}ϵ_{\mu \nu \rho \sigma }\partial _\nu \overline{\theta }_{\rho \sigma }(x).`$ (6)
Wilson loops from Abelian, monopole and photon contributions can be calculated as
$`W^{\mathrm{Abelian}}`$ $`=`$ $`\mathrm{exp}({\displaystyle \frac{i}{2}}{\displaystyle \underset{x,\mu ,\nu }{\sum }}M_{\mu \nu }(x)\theta _{\mu \nu }(x)),`$ (7)
$`W^{\mathrm{monopole}}`$ $`=`$ $`\mathrm{exp}(2\pi i{\displaystyle \underset{x,x^{\prime },\alpha ,\beta ,\rho ,\sigma }{\sum }}k_\beta (x)D(x-x^{\prime }){\displaystyle \frac{1}{2}}ϵ_{\alpha \beta \rho \sigma }\partial _\alpha M_{\rho \sigma }(x^{\prime })),`$ (8)
$`W^{\mathrm{photon}}`$ $`=`$ $`\mathrm{exp}(i{\displaystyle \underset{x,x^{\prime },\mu ,\nu }{\sum }}\partial _\mu ^{\prime }\theta _{\mu \nu }(x)D(x-x^{\prime })J_\nu (x^{\prime })),`$ (9)
$`J_\nu (x)=\partial _\mu ^{\prime }M_{\mu \nu }(x),`$
where $`\partial `$ is a lattice forward derivative, $`\partial ^{\prime }`$ is a backward derivative and $`D(x-x^{\prime })`$ is the lattice Coulomb propagator. $`J_\nu `$ is the external source of electric charge and $`M_{\mu \nu }`$ takes values $`\pm 1`$ on the surface inside the Wilson loop. In order to achieve better signals, we perform smearing for the spatial link variables of the non-Abelian, Abelian, monopole and photon configurations. We set $`\gamma =2.0`$ for non-Abelian configurations and $`\gamma =1.0`$ for the others, where $`\gamma `$ is the parameter which determines the mixing between a link variable itself and the staples surrounding the link
$$U_\mu (x)^{\mathrm{smeared}}=\frac{1}{N}\{\gamma U_\mu (x)+(\mathrm{staples})\}.$$
Here $`N`$ is a normalization factor.
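For concreteness, a sketch of this smearing step (ours, not the authors' code) for $`SU(2)`$ links stored as stacked $`2\times 2`$ complex matrices; the normalization $`N`$ is implemented here by projecting the sum back onto $`SU(2)`$, which is possible because a real linear combination of $`SU(2)`$ matrices is proportional to an $`SU(2)`$ matrix:

```python
# U is a list of four complex arrays of shape (L, L, L, T, 2, 2) on a
# periodic lattice; only spatial links and spatial staples are smeared,
# as stated in the text.
import numpy as np

def dagger(M):
    return np.conj(np.swapaxes(M, -1, -2))

def staple_sum(U, mu, dirs=(0, 1, 2)):
    S = np.zeros_like(U[mu])
    for nu in dirs:
        if nu == mu:
            continue
        # upper staple: U_nu(x) U_mu(x+nu) U_nu(x+mu)^dagger
        S += U[nu] @ np.roll(U[mu], -1, axis=nu) @ dagger(np.roll(U[nu], -1, axis=mu))
        # lower staple: U_nu(x-nu)^dagger U_mu(x-nu) U_nu(x-nu+mu)
        U_nu_b = np.roll(U[nu], 1, axis=nu)
        S += dagger(U_nu_b) @ np.roll(U[mu], 1, axis=nu) @ np.roll(U_nu_b, -1, axis=mu)
    return S

def smear(U, mu, gamma):
    V = gamma * U[mu] + staple_sum(U, mu)
    det = np.linalg.det(V)[..., None, None]   # real and positive for SU(2) sums
    return V / np.sqrt(det)                   # implements the factor 1/N
```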
Stochastic quantization is based on the Langevin equation, which describes stochastic processes in terms of fictitious time. A compact lattice version of this equation with gauge fixing was proposed in Ref.:
$`U_\mu (x,\tau +\delta \tau )`$ $`=`$ $`\omega (x,\tau )^{\dagger }\mathrm{exp}(if_\mu ^a\sigma _a)U_\mu (x,\tau )\omega (x+\widehat{\mu },\tau ),`$ (10)
$`f_\mu ^a`$ $`=`$ $`-{\displaystyle \frac{\partial S}{\partial A_\mu ^a}}\delta \tau +\eta _\mu ^a(x,\tau )\sqrt{\delta \tau },`$
$`\omega (x,\tau )`$ $`=`$ $`\mathrm{exp}(i{\displaystyle \frac{\beta }{2N_c\alpha }}\mathrm{\Delta }_{\mathrm{lat}}^a(x,\tau )\sigma _a\delta \tau ).`$
In MA gauge,
$`\mathrm{\Delta }_{\mathrm{lat}}(x,\tau )`$ $`=`$ $`i[\sigma _3,X(x,\tau )]`$ (11)
$`=`$ $`2(X_2(x,\tau )\sigma _1-X_1(x,\tau )\sigma _2),`$
where
$`X(x,\tau )`$ $`=`$ $`{\displaystyle \underset{\mu }{\sum }}(U_\mu (x,\tau )\sigma _3U_\mu (x,\tau )^{\dagger }-U_\mu (x-\widehat{\mu },\tau )^{\dagger }\sigma _3U_\mu (x-\widehat{\mu },\tau ))`$ (12)
$`=`$ $`{\displaystyle \underset{i}{\sum }}X_i(x,\tau )\sigma _i.`$ (13)
As an improved action to reduce finite lattice spacing effects, we adopt the Iwasaki action, with $`C_0+8C_1=1`$ and $`C_1=-0.331`$. The Runge–Kutta algorithm is employed for solving the discrete Langevin equation. As will be shown, the systematic error which comes from finite $`\delta \tau `$ is much reduced.
## 3 Numerical Results
Numerical simulations were performed on $`8^3\times 12`$ and $`16^3\times 24`$ lattices with $`\beta =0.995`$, $`\alpha =0.1,0.25,0.5,1.0`$, and $`\delta \tau =0.001,0.005,0.01`$. Measurements were done every 100–1000 Langevin time steps after 5000–50000 thermalization Langevin time steps. The numbers of Langevin time steps for the thermalization were determined by monitoring the functional $`R`$ and Wilson loops.
In Fig.1, we plot $`\langle (\mathrm{\Delta }^1)^2+(\mathrm{\Delta }^2)^2\rangle `$ as a function of the gauge parameter $`\alpha `$. $`\alpha =0`$ corresponds to $`\mathrm{\Delta }=0`$, i.e., the MA gauge. When $`\alpha `$ increases, the deviation from the gauge-fixed plane becomes larger.
We calculated the heavy quark potentials from non-Abelian, Abelian, monopole and photon contributions by
$`V(R)=-\underset{T\to \mathrm{\infty }}{lim}\mathrm{log}{\displaystyle \frac{W(R,T)}{W(R,T-1)}}.`$ (14)
We fit them with the following function,
$`V(R)=V_0+\sigma R-{\displaystyle \frac{e}{R}}.`$ (15)
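As an illustration (with invented numbers, not the measured data), the string tension can be extracted from such a fit over the same $`2.0\le R\le 7.0`$ range used in the next section:

```python
# Sketch: V(R) from Wilson-loop ratios at large T, i.e.
# V[R] = log(W[R, T-1] / W[R, T]), then a constant + linear + Coulomb fit.
import numpy as np
from scipy.optimize import curve_fit

def V_model(R, V0, sigma, e):
    return V0 + sigma * R - e / R

R = np.arange(2.0, 8.0)                                # R = 2 ... 7
V = V_model(R, 0.55, 0.18, 0.25) + 0.005 * np.random.randn(R.size)

popt, pcov = curve_fit(V_model, R, V, p0=[0.5, 0.1, 0.3])
print(f"sigma = {popt[1]:.3f} +/- {np.sqrt(pcov[1, 1]):.3f}")
```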
In order to check that our Langevin update algorithm with the stochastic gauge fixing term works correctly, we plot in Fig.2 the heavy quark potential $`V(R)`$, which is consistent with the result by the heatbath update. Runge–Kutta method works well, i.e., we see no difference among data with $`\delta \tau =0.001,0.005`$ and $`0.01.`$
In Fig.3 we show the Abelian heavy quark potentials for different $`\alpha `$’s together with the non-Abelian potential. The heavy quark potentials from monopole and photon contributions are plotted in Fig.4. They are well fitted by linear and Coulomb terms, respectively. We see that the linear parts of the potentials are essentially the same from $`\alpha =0.1`$ to $`1.0`$, and all of them show the linearly confining potential behavior. Therefore even when we deviate from the MA gauge fixing condition, we can identify the monopole contribution of the heavy quark potential showing the confinement behavior. As $`\alpha `$ increases, the statistical error becomes larger. This result suggests that gauge fixing is favorable for decreasing numerical errors, as pointed out by Ogilvie.
In Fig.5 we plot the values of the string tensions from the Abelian, monopole and photon contributions as a function of the gauge parameter $`\alpha `$. They are obtained by fitting the data in the range $`2.0\le R\le 7.0`$. We have taken into account only statistical errors. The upper two lines stand for the range of the non-Abelian string tension. The Abelian and the monopole dominances are observed for all values of $`\alpha `$. The string tensions from the Abelian parts are about 80% of the non–Abelian one. We expect that the difference in percentage between our result and that of Bali et al. becomes smaller when we go to larger lattice size. On the other hand, the string tension from the photon part is consistent with zero.
## 4 Concluding Remarks
We have developed a stochastic gauge fixing method which interpolates between the MA gauge and no gauge fixing. In Refs. and , effects of gauge fixing are studied for lattice algorithms where gauge fixing is done after field configurations are generated. In the stochastic gauge fixing procedure studied here, the attractive force toward the gauge-fixed plane along a gauge orbit is applied together with the Langevin update force. The method is tested together with the Iwasaki improved action and the Runge-Kutta algorithm. We have found that it works well.
We have studied the gauge dependence of the Abelian projected heavy quark potential. It is observed that the confinement force is essentially independent of the gauge parameter. In the calculation of the Abelian heavy quark potential, we have seen that as the gauge parameter $`\alpha `$ increases, the statistical error becomes larger. This result suggests that gauge fixing is favorable for increasing the statistics, as pointed out by Ogilvie. It is expected that as $`\alpha `$ increases, the Abelian string tension would approach the non–Abelian one . Therefore it is important to see the behavior of the string tension as $`\alpha `$ becomes much larger than one. But data are more noisy for large $`\alpha `$, and we are planning to employ a noise reduction technique such as the integral method for obtaining statistically significant data.
It is desirable to study the gauge dependence (or independence) of other quantities, such as the monopole condensation, which may reveal the role of gauge fixing in the dual superconductor scenario. Hioki et al. have reported the correlation between monopole density and Gribov copies in MA gauge fixing. The Gribov copy effect for the Abelian string tension has been studied by Bali et al.. For the Landau gauge, Gribov copies may be avoided by the Langevin update algorithm together with stochastic gauge fixing. It is, therefore, interesting to investigate effects of Gribov copies in the Abelian projection with the present method.
T.S. and A.N. acknowledge financial support from JSPS Grant-in-Aid for Scientific Research ((B) No.10440073 and No.11695029 for T.S.) and ((C) No.10640272 for A.N.).
# Systematic Study of Short Range Antiferromagnetic Order and The Spin-Glass State in Lightly Doped La2-xSrxCuO4
## I Introduction
The single-layered high-temperature superconducting (hereafter abbreviated as HTSC) 2-1-4 type cuprates have received intensive attention in explorations not only of the microscopic HTSC mechanism but also of the basic properties of two dimensional magnetism in the square-lattice Heisenberg antiferromagnet. The three-dimensional (3D) antiferromagnetic (AF) long range order in undoped La<sub>2</sub>CuO<sub>4</sub> quickly vanishes with hole doping, either through the substitution of La<sup>3+</sup> sites with Sr<sup>2+</sup> or by the insertion of excess oxygen. Upon further doping, the La<sub>2-x</sub>Sr<sub>x</sub>CuO<sub>4</sub> (LSCO) system shows superconductivity in the range $`0.06\lesssim x\lesssim 0.25`$ as well as incommensurate spin fluctuations. Static spin correlations with the same incommensurability as the dynamic ones are also observed for samples whose doping rates are near $`x=0.12`$ and $`x=0.06`$. In the intermediate regime, $`0.02\le x\le 0.05`$, located between the 3D AF state and the superconducting state, there coexist a canonical spin-glass state observed by magnetic susceptibility measurements and quasi-static magnetic order investigated by neutron scattering experiments. From <sup>139</sup>La NQR measurements, a spin-glass state with magnetically ordered finite-size regions, i.e. a cluster spin-glass state, has been argued for. It is also reported that the spin-glass freezing temperature determined by the <sup>139</sup>La NQR measurements in this region is proportional to $`1/x`$. Very recently, Wakimoto et al. found that the quasi-static magnetic order for $`x=0.05`$ has an incommensurate structure whose modulation vector is along the orthorhombic $`b`$-axis, $`45^{\circ }`$ away from the incommensurate modulation observed in superconducting samples. Furthermore, an extensive study by Matsuda et al. has revealed that the same type of incommensurate spin structure exists throughout the hole concentration range $`0.024\le x\le 0.05`$.
From numerical simulations, Gooding et al. explained the coexistence of the spin-glass state and quasi-static magnetic order as a cluster spin-glass state, taking into account hole localization around the Sr ions. However, important unresolved problems remain for the physics in this intermediate doping region. One issue is how the incommensurate magnetic state is realized in the spin-glass state. For example, the model proposed by Gooding et al. predicts commensurate spin correlations in this region. Therefore, a new model for the spin state seems needed. Another important question is how the spin-glass state changes from the insulating region $`(x\le 0.05)`$ to the superconducting region $`(x\ge 0.06)`$. In spite of the dramatic change of the quasi-static correlations at the insulator-superconductor boundary, from a diagonal incommensurate state, which is modulated along the diagonal line of the CuO<sub>2</sub> square lattice, to a collinear incommensurate one, which is modulated along the collinear line of the CuO<sub>2</sub> square lattice, systematic $`\mu `$SR studies have revealed that the spin freezing temperature, corresponding to the spin-glass transition, changes continuously across the boundary. Thus, it is important to clarify whether the spin-glass state is essential for the superconductivity in this system.
In the present study, focusing on these unresolved issues, we have performed systematic magnetic susceptibility measurements on single crystals with $`x=0.03,0.04`$ and $`0.05`$ and have determined the magnetic phase diagram of the lightly doped region from both the bulk susceptibility and recent neutron scattering experiments. The contents of this paper are as follows: the theoretical background of the spin-glass features, including a scaling hypothesis of the spin-glass order parameter, is presented in Sec. II. Section III describes sample preparation and experimental details. The results of the magnetic susceptibility measurements and a revised magnetic phase diagram are presented in Sec. IV. In Sec. V we discuss interpretations for some of the remarkable features we find, combining the susceptibility with previous neutron scattering and $`\mu `$SR measurements.
## II Features of the canonical spin-glass order parameter
Determination of the spin-glass state is not an easy task, since the effects of randomness do not appear to be unique. Furthermore the effect of frustration by doped holes in the LSCO system should be very strong due to the strong super-exchange interaction in the antiferromagnetic lattice in each CuO<sub>2</sub> plane. If the formation of the spin-glass state is commonly visible in a certain hole concentration range, we can expect that a strong local singlet formation between spins of doped holes at oxygen sites and nearest neighbor Cu<sup>2+</sup> spins must be an essential ingredient of the HTSC mechanism in this material.
In this paper we first demonstrate that the magnetic system we treat is a quenched spin-glass system, with holes introduced by the random substitution of Sr cations on La sites. Note that the other doping case, insertion of excess oxygen, is believed to correspond to an annealed system, in which the doped oxygen atoms are staged and also ordered within each staged layer during the slow-cooling stage.
In the quenched spin-glass system, magnetic susceptibility is postulated to result from the sum of a temperature independent residual susceptibility and a Curie type magnetic susceptibility:
$$\chi =\chi _0+\frac{C}{T}(1-q)$$
(1)
where $`\chi _0`$ is the temperature-independent susceptibility, $`C`$ is the Curie constant, and $`q`$ is the spin-glass order parameter. Thus, a non-zero spin-glass order parameter $`q`$ gives rise to deviations of the magnetic susceptibility from simple Curie law behavior below the spin-glass transition temperature, $`T_g`$. Usually the thermal evolution of $`\chi `$ shows a distinct cusp at $`T_g`$, which is characterized as a typical spin-glass feature.
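In practice, Eq. (1) is applied by first fitting the Curie law in the paramagnetic window and then reading $`q`$ off the low-temperature deviation; a minimal sketch (ours, using the 10–70 K fitting window quoted in Sec. IV):

```python
import numpy as np

def order_parameter(T, chi, window=(10.0, 70.0)):
    m = (T >= window[0]) & (T <= window[1])
    # linear least squares for chi = chi0 + C * (1/T) above T_g
    A = np.column_stack([np.ones(m.sum()), 1.0 / T[m]])
    chi0, C = np.linalg.lstsq(A, chi[m], rcond=None)[0]
    q = 1.0 - (chi - chi0) * T / C          # Eq. (1) solved for q
    return q, chi0, C
```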
More rigorously, the spin-glass order parameter should obey the scaling relation described as
$$q=|t|^\beta f_\pm (H^2|t|^{-\beta -\gamma })$$
(2)
$$t=\frac{T-T_g}{T_g}.$$
(3)
where $`H`$ is applied magnetic field, $`\beta `$ and $`\gamma `$ are critical exponents, and $`T_g`$ is spin-glass transition temperature. The scaling functions $`f_+`$ and $`f_{}`$ are defined for $`T>T_g`$ and $`T<T_g`$, respectively. The critical exponents, $`\beta `$ and $`\gamma `$, should be compared with the values deduced by the renormalization group theory of $`0.5<\beta <1`$ and $`\gamma =3\pm 1`$. In fact, earlier measurements of Chou et al. showed that the spin-glass state of the La<sub>1.96</sub>Sr<sub>0.04</sub>CuO<sub>4</sub> sample obeys the scaling relation with $`\beta 0.9`$ and $`\gamma 4.3`$. These results stand as an important reference for the present experiments. (Based on their results, we reevaluated the hole concentration of the sample used by Chou et al. and find a value somewhat lower than the quoted $`x=0.04`$; see Section IV.)
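Schematically, the scaling test works as follows (a sketch of ours; the default parameters are the $`x=0.03`$ values quoted in Sec. IV):

```python
import numpy as np

def scaling_variables(T, H, q, T_g=6.3, beta=0.97, gamma=3.2):
    t = (T - T_g) / T_g
    x = H ** 2 * np.abs(t) ** (-(beta + gamma))   # argument of f_+/-
    y = q / np.abs(t) ** beta                     # scaled order parameter
    return x, y, np.sign(t)

# Plotting y against x on log-log axes, separately for sign(t) = +1 and -1,
# and tuning (T_g, beta, gamma) for the best collapse of data taken at
# several fields yields the exponents quoted in the text.
```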
Another important piece of evidence for the spin-glass state is the remanent magnetization below $`T_g`$ when an external field is turned off after crossing $`T_g`$ (field-cooling effect). It shows up as a distinct difference between the magnetic susceptibilities measured in the zero-field-cooling and field-cooling processes below $`T_g`$. After the external field is switched off, the remanent magnetization typically relaxes following a stretched exponential function of time $`\tau `$.
$$M(\tau )=M(0)\mathrm{exp}[-\alpha \tau ^{(1-n)}]$$
(4)
Theoretical predictions give $`1-n=1/3`$, which is often found in typical spin-glass compounds. In fact, we confirm that the stretching exponent for the remanent magnetization of the present crystals has the same value as in the experiments reported in Ref.8.
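A sketch of this fit (ours, with synthetic numbers) with the stretching exponent held at the predicted $`1-n=1/3`$:

```python
import numpy as np
from scipy.optimize import curve_fit

def stretched(tau, M0, alpha):
    return M0 * np.exp(-alpha * tau ** (1.0 / 3.0))

tau = np.linspace(10.0, 3600.0, 200)               # seconds after H -> 0
M = stretched(tau, 1.0e-3, 0.2) * (1.0 + 0.01 * np.random.randn(tau.size))
(M0, alpha), _ = curve_fit(stretched, tau, M, p0=[1e-3, 0.1])
print(f"M(0) = {M0:.2e}, alpha = {alpha:.3f}")
```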
## III Sample preparation and experimental details
Single crystals of La<sub>2-x</sub>Sr<sub>x</sub>CuO<sub>4</sub> with $`x=0.03,0.04`$ and $`0.05`$ were grown by the TSFZ (Travelling-Solvent Floating-Zone) method using a standard floating-zone furnace with some improvements. Dried powders of La<sub>2</sub>O<sub>3</sub>, SrCO<sub>3</sub> and CuO of 99.99 % purity were used as starting materials for the feed rods and solvent. The starting materials were mixed and baked in air at 850C, 950C and 1000C for 24 hours each, with a thorough grinding between each baking. After this process, we confirmed by X-ray powder diffraction that the obtained powder samples consisted only of a single 2-1-4 phase. Solvents with the composition La<sub>2-x</sub>Sr<sub>x</sub>CuO<sub>4</sub> : CuO = 35 : 65 in molar ratio were utilized in all growths. The growth conditions were basically the same as those used for the $`x=0.15`$ crystal reported in Ref.19, except that excess CuO of $`\sim 1`$ mol% was added to the feed rods to increase their packing density as well as to compensate for the loss of CuO vaporizing in the melting process. The final crystals were typically 6 mm in diameter and 30 mm in length.
In order to characterize the Sr concentration of each crystal, the $`c`$-axis lattice constant was measured by X-ray powder diffraction. We found that lightly doped LSCO crystals tend to become oxygenated by the oxygen atmosphere of the melt-growth process; therefore the as-grown crystals were annealed in flowing Ar at 900C for 12 hours so as to reach the stoichiometric oxygen content before the X-ray diffraction measurements. In fact, an iodometric titration analysis on the $`x=0.03`$ crystal revealed that the oxygen content of the Ar-annealed crystal is closer to the stoichiometric oxygen content than that of the as-grown crystal. Figure 1 shows the dependence of the $`c`$-axis lattice constant on hole concentration. The data for $`x\ge 0.06`$ (open circles) are from Ref.3, and the results for the present crystals are plotted as closed circles. In addition, the data for $`x=0`$ were determined from a single crystal of La<sub>2</sub>CuO<sub>4</sub> that was grown by the method reported in Ref.19 and annealed in Ar atmosphere at 900C for 12 hours. As clearly shown in Fig.1, the $`c`$-axis lattice constants of the lower doped samples extrapolate smoothly from the published data. This smooth extrapolation indicates that the hole concentrations of our crystals correspond closely to the expected values of $`x=0.03,0.04`$ and $`0.05`$. We note that our results are slightly lower than the powder sample data reported by Takayama-Muromachi et al. in the region of $`x\ge 0.06`$. We speculate that this small difference is caused by an effect of the Ar annealing performed on the present crystals.
The magnetic susceptibility measurements were performed using a standard Quantum Design SQUID magnetometer. Measurements were made on each crystal under various applied fields in the range $`0.02\mathrm{T}\le H\le 5`$ T, either parallel or perpendicular to the CuO<sub>2</sub> planes. We did not specify the field direction within the plane. Data were measured either by cooling with the field applied (FC) or by applying the field after cooling in zero field (ZFC). Note that the CuO<sub>2</sub> plane corresponds to the $`ab`$ plane of the orthorhombic $`Bmab`$ crystallographic notation, which is utilized throughout the paper.
## IV Magnetic susceptibility
For all of the samples we studied, the magnetic susceptibility under a magnetic field parallel to the CuO<sub>2</sub> plane, which we call the in-plane magnetic susceptibility, exhibits qualitatively the same temperature dependence. As a typical example, Fig. 2(a) shows the temperature dependence of the in-plane susceptibility of the $`x=0.03`$ sample. A clear difference exists between the FC and ZFC data, corresponding to hysteresis characteristic of spin-glasses. The inset of Fig. 2(a) shows the time dependence of the remanent magnetization after turning off an applied field of 1 T. This time dependence is well fit by Eq. 4 with $`1-n=1/3`$ (solid line). These facts provide direct evidence that a canonical spin-glass state exists in the CuO<sub>2</sub> planes in this system.
Below the spin-glass transition temperature $`T_g`$ the magnetic susceptibility starts to deviate from the Curie law, as indicated by a dashed line in Fig. 2(a). The dashed line of the Curie law was calculated by a least-squares fit to the data between 10K and 70K. The spin-glass order parameter $`q`$ can be evaluated from the degree of the deviation using Eq. 1. Figure 2(b) shows the thermal variation of $`q`$ calculated from the FC data in various applied fields. $`T_g`$ was determined by a least-squares fit to the temperature dependence of $`q`$ at $`H=0.02`$T using the equation $`q\propto (T_g-T)^\beta `$. The solid line in Fig. 2(b) shows the result of the fit with $`T_g=6.3(\pm 0.5)`$ K and $`\beta =0.97(\pm 0.05)`$. Similarly, $`T_g`$ for the $`x=0.04`$ and $`0.05`$ samples are determined to be $`5.5(\pm 0.5)`$ K and $`5.0(\pm 0.5)`$ K with the same value of $`\beta `$, respectively.
To characterize further the spin-glass properties in this system, we verify the scaling hypothesis described by Eq. 2. Figure 3 shows the scaling of the in-plane magnetic susceptibility. The scaling relation is well satisfied for all samples with the same value of the critical exponents. This fact indicates that the spin-glass behavior in the in-plane susceptibility is common to the $`x=0.03,0.04`$ and $`0.05`$ samples, and that these samples exhibit a canonical quenched spin-glass state. Furthermore, the critical exponents $`\beta `$ and $`\gamma `$ obtained from the universal plots are $`0.97(\pm 0.05)`$ and $`3.2(\pm 0.5)`$, which are consistent with those of typical canonical spin-glass materials.
In contrast, the magnetic susceptibility under a magnetic field along the out-of-plane direction, which we call the out-of-plane magnetic susceptibility, shows behavior different from typical spin-glasses. The temperature dependence of the out-of-plane susceptibility of the $`x=0.03`$ sample is shown in Fig. 4(a) as an example. A difference between FC and ZFC data is observable at low $`T`$, and a small shoulder appears at the $`T_g`$ determined by the in-plane measurements. These features of the out-of-plane susceptibility are qualitatively similar to those of the in-plane spin-glass behavior. However, a remarkable difference is that the out-of-plane susceptibility deviates from a simple Curie law at a temperature well above $`T_g`$, as clearly shown in Fig. 4(a). Such a deviation is observed in all the samples and was also reported by Chou et al. This deviation indicates that, under the same analysis as for the in-plane susceptibility, $`q`$ increases from zero far above $`T_g`$. This feature is more clearly visible in a calculation of $`q`$ from the FC data, as shown in Fig. 4(b). The onset temperatures for these deviations, $`T_{dv}`$, are found to be $`19(\pm 2)`$ K, $`17(\pm 2)`$ K and $`15(\pm 2)`$ K for $`x=0.03,0.04`$ and $`0.05`$, respectively. In addition, the spin-glass order parameter of the out-of-plane magnetic susceptibility does not obey the scaling relation of Eq. 2. This result suggests that another magnetic mechanism, rather than spin-glass behavior, drives the deviation of the out-of-plane susceptibility from the simple Curie type paramagnetic behavior.
We summarize the magnetic susceptibility data in Fig. 5 with a magnetic phase diagram that includes results from Chou et al. on an $`x=0.04`$ crystal and $`\mu `$SR results reported by Niedermayer et al. The closed and open circles indicate $`T_g`$ and $`T_{dv}`$ of the present crystals, respectively, while the closed and open triangles are those of the “$`x=0.04`$” crystal of Chou et al. A previous report on a sample of La<sub>2-x</sub>Bi<sub>x</sub>CuO<sub>4+δ</sub>, whose hole concentration is just above the boundary of the 3D AF state, showed a spin-glass transition with $`T_g\sim 17`$ K; therefore, we draw an $`x`$-dependence line of $`T_g`$ as a guide to the eye which reaches 17 K at $`x=0.02`$. From this line, the actual hole concentration of the “$`x=0.04`$” sample of Chou et al. can be estimated to be 0.027. Thus we treat the data of Chou et al. as $`x=0.027`$ in the present paper.
According to the recent neutron scattering experiments reported by Wakimoto et al. using the same crystals as those in the present experiment, elastic incommensurate magnetic peaks exist around the $`(\pi ,\pi )`$ position in reciprocal space. The solid line of $`T_{el}`$($`\mathrm{\Delta }\omega =0.25`$ meV) in Fig. 5 shows the $`x`$-dependence of the onset temperature where the elastic magnetic peaks become observable with the energy resolution of $`\mathrm{\Delta }\omega =0.25`$ meV. $`T_{dv}`$ exhibits qualitatively the same $`x`$-dependence as that of $`T_{el}`$($`\mathrm{\Delta }\omega =0.25`$ meV). The physical meaning of this feature is discussed in the next section.
The temperatures below which a magnetic signal is observed in the $`\mu `$SR measurements of Niedermayer et al. are also plotted as open diamonds in Fig. 5. (Hereafter, we label these temperatures as $`T_\mu `$.) Niedermayer et al. equated $`T_\mu `$ with the spin-glass transition temperature. However, in our phase diagram, there is a small discrepancy between $`T_\mu `$ and $`T_g`$. Our $`T_g`$ values for $`x=0.03`$ and $`0.04`$ are very close to the spin glass freezing temperature determined by the <sup>139</sup>La NQR measurements in Ref.13. We believe that the differences among $`T_{el}`$, $`T_\mu `$ and $`T_g`$ arise from the differences in the observation time scales for the different experimental methods. In fact, Keimer et al. reported that $`T_{el}`$ depends on the instrumental energy resolution, which varies inversely with observation time scale. Furthermore, the nature of the magnetic susceptibility measured by SQUID is perfectly static. These facts indicate that the magnetic correlations in this system are essentially quasi-static, and, specifically, that with decreasing temperature the spin system freezes into a cluster spin-glass state consisting of domains in which spins are antiferromagnetically correlated.
## V Discussion
The present results of the magnetic susceptibility measurements have revealed the existence of a common canonical spin-glass state for La<sub>2-x</sub>Sr<sub>x</sub>CuO<sub>4</sub> in the insulating region of $`0.03x0.05`$. In this section, we discuss the nature of the spin correlations specifically to reconcile this finding with the recent neutron scattering results on samples in the same hole concentration range.
First, we discuss the $`x`$-dependence of the characteristic temperatures $`T_g`$, $`T_{el}`$ and $`T_{dv}`$. In Fig. 5, all of these temperatures exhibit qualitatively similar $`x`$-dependences. Since the difference between $`T_g`$ and $`T_{el}`$ reflects the different time scales of the experimental probes in observing the freezing spin system, the similarity between the $`x`$-dependences of $`T_g`$ and $`T_{el}`$ indicates that the freezing process into the cluster glass state is similar for the samples in this hole concentration range. Although the $`T_{dv}`$ values sit on the $`T_{el}`$($`\mathrm{\Delta }\omega =0.25`$ meV) line, the significant feature is not their equivalent values but their qualitatively similar $`x`$-dependence. As mentioned in Section IV, the position of $`T_{el}`$ is arbitrary to the extent that it depends on the instrumental energy resolution width $`\mathrm{\Delta }\omega `$. The similar $`x`$-dependences of $`T_{dv}`$ and $`T_{el}`$ suggest a correlation between the formation of the spin clusters and the deviation of the out-of-plane susceptibility from simple Curie type behavior. There may also be a relation between this deviation from the Curie law and the development of short magnetic correlations along the out-of-plane direction as observed by neutron scattering measurements for $`x=0.03`$ and $`0.05`$ at low temperature.
Next, we discuss the spin-glass properties of the in-plane magnetic susceptibility, particularly the $`x`$-dependence of the Curie constant shown in Fig. 6. The Curie constants, calculated from a least-squares fit to the in-plane susceptibility data between 10K and 70K, are essentially independent of $`x`$ for this hole concentration range. The right vertical axis of Fig. 6 indicates the scale of the ratio $`R_{eff}=N_{eff}/N_{all}`$, where $`N_{eff}`$ is the number of effective Cu<sup>2+</sup> spins giving the simple Curie type paramagnetic susceptibility and $`N_{all}`$ is the total number of Cu<sup>2+</sup> spins per unit volume. $`N_{eff}`$ is related to the Curie constant by the following formula:
$$C=N_{eff}\frac{(g\mu _\mathrm{B})^2S(S+1)}{3k_\mathrm{B}}$$
(5)
where $`g=2`$, $`S=1/2`$ and $`k_B`$ is Boltzmann’s constant.
As pointed out by Gooding et al., if the cluster spin-glass state is realized in this system, each cluster consisting of an odd number of spins behaves like a single spin, yielding Curie type paramagnetic behavior. Hence, the number of effective Cu<sup>2+</sup> spins should be half of the total number of the clusters. Therefore, $`R_{eff}`$ can be described using an average cluster size $`L`$ (expressed in the unit of the nearest neighbor Cu-Cu distance) as
$$R_{eff}=\frac{1}{2L^2}.$$
(6)
From a numerical calculation, Gooding et al. reported that $`L`$ is given by $`L=Ax^{-\eta }`$ ($`A\simeq 0.49`$ and $`\eta \simeq 0.98`$) on the basis of the cluster spin-glass model with a random distribution of doped holes. In this model, the doped holes form the boundaries between the clusters. Their result is indicated by a dashed line in Fig. 6. Although the $`R_{eff}`$ value for $`x=0.03`$ is close to the prediction of Gooding et al., the $`R_{eff}`$ values are independent of $`x`$, with those for $`x=0.04`$ and $`0.05`$ significantly smaller than the dashed line. The constant value of $`R_{eff}`$, combined with Eq. 6, means that the cluster size is independent of the hole concentration, instead of decreasing with increasing doping level as predicted for a random distribution of holes located on the cluster boundary.
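For reference, evaluating this prediction numerically (our arithmetic, using the quoted $`A`$ and $`\eta `$) gives the dashed-line values of Fig. 6:

```python
# R_eff = 1/(2 L^2) with L = A * x**(-eta), the Gooding et al. prediction
A, eta = 0.49, 0.98
for x in (0.03, 0.04, 0.05):
    L = A * x ** (-eta)                 # cluster size in Cu-Cu spacings
    print(f"x = {x:.2f}: L = {L:4.1f}, R_eff = {1.0 / (2.0 * L ** 2):.4f}")

# R_eff rises with x along this prediction, whereas the measured values are
# roughly x-independent, which is the discrepancy discussed in the text.
```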
A possible interpretation for these features is that the doped holes are distributed also inside the clusters as well as on the boundaries. Recent neutron scattering results demonstrate that quasi-static incommensurate spin correlations exist in the spin-glass region. This may imply that the holes inside the clusters form charge stripes and each cluster includes anti-phase antiferromagnetic domains divided by these charge stripes. In this model, the $`x`$-independence of the cluster size suggests that only the distance between nearest neighbor charge stripes (i.e. 1/incommensurability) within the clusters varies with the hole concentration. Consistent with this picture, the incommensurability of the quasi-elastic peaks observed by neutron scattering experiments increases linearly with the hole concentration $`x`$ in the spin-glass region where we observe no change in the Curie constant. The mechanism of cluster boundary formation is still an open question. Possible constituents of the cluster boundary are, for example, a small amount of holes distributed outside the charge stripes or displacements of the stripes. A correct model for the cluster glass state including charge stripes needs further clarification.
Briefly, we should note another possibility for the discrepancy between the theoretical line and the present results for $`R_{eff}`$. In the model of Gooding et al., the spin-glass cluster size is supposed to be equivalent to the instantaneous magnetic correlation length. However, the $`R_{eff}`$ values correspond to a time-averaged cluster size rather than instantaneous one since the $`R_{eff}`$ values are obtained by magnetic susceptibility measurements. Additional careful investigations are required to distinguish the static from dynamic magnetic properties.
To conclude, the present results combined with recent neutron scattering and $`\mu `$SR measurements indicate that the spin system for La<sub>2-x</sub>Sr<sub>x</sub>CuO<sub>4</sub> in the range $`0.03x0.05`$ freezes into a cluster spin-glass state at low temperatures while maintaining antiferromagnetically correlated spins in each cluster. Inside the clusters, it is likely that the doped holes are inhomogeneously distributed. A stripe structure is suggested by the elastic incommensurate peaks observed in the neutron scattering.
One of the remaining important issues is the role of the cluster spin-glass state with respect to the superconductivity in the La<sub>2-x</sub>Sr<sub>x</sub>CuO<sub>4</sub> system. Since the $`x`$-dependences of all the characteristic temperatures $`T_g`$, $`T_{el}`$ and $`T_{dv}`$ near the superconducting boundary are very small in Fig. 5, the spin-glass region may extend into the superconducting region, as suggested by Niedermayer et al. . In fact, neutron scattering reveals quasi-static incommensurate peaks in under-doped superconducting samples, which may be related to the spin-glass phase within the superconducting state. However, the spatial spin modulation of the spin-glass state in the superconducting phase should differ from that of the spin-glass phase in the non-superconducting state, as demonstrated by recent neutron scattering. In order to clarify the role of the cluster spin-glass state in the HTSC mechanism, further investigations are necessary of the spin-glass features in the superconducting region compared with those in the insulating region.
## VI Acknowledgments
We gratefully thank R.J. Birgeneau, F. C. Chou, K. Hirota, Y.-J. Kim, Y.S. Lee, R. Leheny, S. Maekawa, G. Shirane and S.M. Shapiro for invaluable discussions. We also thank M. Onodera for his technical assistance. The present work has been supported by a Grant-in-Aid for Scientific Research from Japanese Ministry of Education, Science, Sports and Culture, by a Grant for the Promotion of Science from the Science and Technology Agency, and by the Core Research for Evolutional Science and Technology (CREST).
* Present address: Massachusetts Institute of Technology, 77 Massachusetts Ave., Cambridge, MA 02139, USA.
# Bogomol’nyi Limit For Magnetic Vortices In Rotating Superconductor
## I Introduction
The present work is the sequel to a previous investigation of the energy of vortices in a rotating superconductor, where it was shown in particular that the magnetic and kinetic contributions to the energy density that are proportional to the background angular velocity remarkably cancel. The motivation that brought us to study vortices in rotating superconductors is the study of the interior of neutron stars, more specifically their inner core, which is believed to contain a proton superconductor. Although we worked out the macroscopic description of an array of magnetic vortices and superfluid vortices in a general relativistic framework necessary for refined analyses of neutron stars, thus generalizing the earlier work of Lindblom and Mendell in a Newtonian approach, we have here restricted our analysis to a Newtonian framework for simplicity.
In our previous work, we left undetermined the explicit structure of the vortex core. One of the purposes of the present work is to provide a specification for the profile of the condensate particle density, based on a Ginzburg-Landau type approach. Our treatment will however not be restricted just to the widely used standard Ginzburg-Landau description, but will also be valid for a generalized version (more readily justifiable by heuristic considerations), leaving arbitrary the coupling constant $`g_c`$ that enters the gradient energy density.
The second purpose of this work is to reexamine the question of the Bogomol’nyi limit in the context of a superconductor in a rotating background. The conclusion will be that the usual boundary between type I and type II superconductors remains unaffected by the rotation of the background.
Before entering the details of this work, let us recall the essential features of the model and define the relevant quantities. The superconducting matter will consist of a charged superfluid component, represented by a locally variable number density $`n_s`$ of bosonic particles characterized by an effective mass $`m`$ and a charge $`q`$, and of an ordinary component of opposite charge which locally compensates the charge of the first component. The essential property that distinguishes the superfluid constituent from ordinary matter is that its momentum is directly related to the phase variable $`\phi `$ (a scalar with period $`2\pi `$) of the boson condensate, according to the expression
$$m\vec{v}+q\vec{A}=\hbar \vec{\nabla }\phi .$$
(1)
In this formula, $`\hbar `$ is the Dirac-Planck constant, $`\vec{A}`$ is the magnetic vector potential and $`\vec{v}`$ the velocity of the Bose condensate. The whole system will be described, in addition to the relation (1), by the Maxwell equations,
$$\vec{\nabla }\times \vec{B}=4\pi \vec{\jmath },$$
(2)
where $`\vec{B}`$ is the magnetic field, related to the magnetic vector potential by the usual relation $`\vec{B}=\vec{\nabla }\times \vec{A}`$, and $`\vec{\jmath }`$ is the electric current, which consists of the sum of the currents due to the condensate component and to the ordinary component. The ordinary component current will be supposed to be that of a rigidly rotating fluid. To be able to solve the coupled system of equations, one needs a prescription concerning the spatial evolution of the condensate particle number density. It will be given by an energy minimization principle, using the energy functional corresponding to a generalized Ginzburg-Landau approach.
The plan of the paper will be the following. In section II, we shall use the cylindrical symmetry and introduce new variables to simplify the system of coupled equations. Section III will be devoted to the energy minimization principle, which will give a prescription for the determination of $`n_s`$. And finally, section IV will deal with the Bogomol’nyi limit condition.
## II System of coupled equations
The scenarios we shall consider will be of the usual kind, in which each individual vortex is treated as a stationary, cylindrically symmetric configuration consisting of a rigidly rotating background medium with uniform angular velocity $`\Omega _{\infty }`$, say, together with a charged superfluid constituent in a state of differential rotation with a velocity $`v`$, which tends at large distance towards the rigid rotation value given by $`\Omega _{\infty }r`$, where $`r`$ is the cylindrical radial distance from the axis. It will be supposed that the superfluid particle number density $`n_s`$ is a monotonically increasing function of the cylindrical radius variable $`r`$, tending asymptotically to a constant value, $`n_{\infty }`$, say, at large distances from the axis. It will be supposed that the local charge density is canceled by the background so that there is no electric field, but that there is a magnetic induction field with magnitude $`B`$ and direction parallel to the axis, whose source is the azimuthally oriented electromagnetic current, whose magnitude $`j`$ will be given by
$$j=qn_s(v-\Omega _{\infty }r).$$
(3)
The relevant Maxwellian source equation for the magnetic field (2) will have the familiar form
$$\frac{dB}{dr}=-4\pi j.$$
(4)
The other relevant Maxwellian equation is the one governing the azimuthal component $`A`$ (which in an appropriate gauge will be the only component) of the electromagnetic potential covector, which will be related to the magnetic induction by
$$\frac{d(rA)}{dr}=rB.$$
(5)
The essential property distinguishing the superconducting case from its “normal” analogue is the London flux quantization condition (1), which in the present context (where all physically relevant quantities depend only on the cylindrical radius $`r`$) will be expressible in the well known form
$$mv+qA=\frac{N\hbar }{r},$$
(6)
where $`N`$ is the phase winding number, which must be an integer.
Before proceeding, it will be useful to take advantage of the possibility of transforming the preceding system of equations to a form that is not just linear but also homogeneous, by replacing the variables $`v`$, $`B`$, $`A`$ by corresponding variables $`\mathcal{V}`$, $`\mathcal{B}`$, $`\mathcal{A}`$ that are defined by
$$\mathcal{V}=v-\Omega _{\infty }r,$$
(7)
$$\mathcal{B}=B-B_{\infty },$$
(8)
$$\mathcal{A}=A-\frac{1}{2}rB_{\infty },$$
(9)
where $`B_{\infty }`$ is the uniform background magnetic field value that would be generated by a rigidly rotating superconductor, which is given by the London formula
$$B_{\infty }=-\frac{2m}{q}\Omega _{\infty },$$
(10)
obtained by combining (6) and (5) in the special case of rigid corotation, i.e., with $`v=\Omega _{\infty }r`$.
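For orientation, the magnitude of the London field given by (10) can be evaluated numerically; the field is antiparallel to the rotation for positive $`q`$. The parameter values below (proton Cooper pairs with $`m\simeq 2m_p`$ and $`q=2e`$, entrainment corrections ignored, and pulsar-like rotation rates) are illustrative assumptions of ours, not values taken from the text:

```python
# Order-of-magnitude London field |B_inf| = (2m/q) * Omega_inf, from Eq. (10).
# Proton-pair parameters and rotation rates are assumed, illustrative values.
M_P = 1.6726e-27   # proton mass [kg]
E_CH = 1.6022e-19  # elementary charge [C]

m, q = 2.0 * M_P, 2.0 * E_CH   # proton Cooper pair (entrainment ignored)

for omega in (70.0, 700.0):    # [rad/s]; roughly Vela-like and ms-pulsar-like
    b_inf = 2.0 * m / q * omega
    print(f"Omega = {omega:6.1f} rad/s -> |B_inf| = {b_inf:.2e} T"
          f" = {b_inf * 1e4:.2e} G")
```

The resulting fields are tiny (of order $`10^2`$ G even for rapid rotation), which is why the vortex contribution dominates the magnetic energetics discussed below.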
In terms of these new variables the equation (5) will be transformed into the form
$$\frac{d(r\mathcal{A})}{dr}=r\mathcal{B},$$
(11)
while the other differential equation (4) will be transformed into the form
$$\frac{d\mathcal{B}}{dr}=-4\pi j,$$
(12)
in which, rewriting equation (3), we shall have
$$j=qn_s\mathcal{V}.$$
(13)
Finally, the flux quantization condition (6) will be converted into the form
$$m\mathcal{V}+q\mathcal{A}=\frac{N\hbar }{r},$$
(14)
which can be used to transform (11) into
$$-\frac{m}{qr}\frac{d(\mathcal{V}r)}{dr}=\mathcal{B}.$$
(15)
The advantage of this reformulation is that unlike $`v`$, $`B`$, and $`A`$, the new variables $`\mathcal{V}`$, $`\mathcal{B}`$ and $`\mathcal{A}`$ are subject just to homogeneous boundary conditions, which are simply that they all tend to zero as $`r\rightarrow \infty `$.
## III Energy minimization principle
The equations of the previous section are not sufficient by themselves to fully determine the system. In order to specify the radial distribution of the condensate particle number density $`n_s`$ we will use an energy minimization principle based on a model in which the condensate energy density is postulated to be given as the sum of a gradient contribution and a potential energy contribution by an expression of the form
$$\mathcal{E}_{\text{con}}=\mathcal{E}_{\text{grad}}+V,$$
(16)
where the contribution $`\mathcal{E}_{\text{grad}}`$ is proportional to the square of the gradient of $`n_s`$ with a coefficient that, like the potential energy contribution $`V`$, is given as an algebraic function of $`n_s`$ by some appropriate ansatz. The use of such a model as a fairly plausible approximation is justifiable by heuristic considerations that motivate the use of an ansatz of what we shall refer to as the Ginzburg type, according to which the energy contribution is postulated to have the form
$$\mathcal{E}_{\text{grad}}=\frac{g_c^2\hbar ^2}{8mn_s}\left(\frac{dn_s}{dr}\right)^2,$$
(17)
where $`g_c`$ is a dimensionless coupling constant, while the potential energy density $`V`$ is given in terms of some constant proportionality factor $`\mathcal{E}_c`$, say, by the formula
$$V=\mathcal{E}_c\left(1-\frac{n_s}{n_{\infty }}\right)^2,$$
(18)
which provides a particularly convenient ansatz for interpolation in the theoretically intractable intermediate region between the comparatively well understood end points of the allowed range $`0\le n_s\le n_{\infty }`$. The constant $`\mathcal{E}_c`$ is interpretable as representing the maximum condensation energy density. Its value is commonly expressed in terms of the corresponding critical value $`H_c`$, say, representing the strength of the maximum magnetic field that can be expelled from the superconductor by the Meissner effect, to which it is evidently related by the formula
$$\mathcal{E}_c=\frac{H_c^2}{8\pi }.$$
(19)
The total energy density associated with a vortex will be of the general form
$$\mathcal{E}=\mathcal{E}_{\text{mag}}+\mathcal{E}_{\text{kin}}+\mathcal{E}_{\text{con}},$$
(20)
where $`\mathcal{E}_{\text{mag}}`$, $`\mathcal{E}_{\text{kin}}`$ and $`\mathcal{E}_{\text{con}}`$ are respectively the magnetic, kinetic and condensate energy contributions. More precisely, $`\mathcal{E}_{\text{mag}}`$ is the extra magnetic energy density arising from a non-zero value of the phase winding number $`N`$, i.e., the local deviation from the magnetic energy density due just to the uniform field $`B_{\infty }`$ (associated with the state of rigid corotation characterized by the given background angular velocity $`\Omega _{\infty }`$), namely,
$$\mathcal{E}_{\text{mag}}=\frac{B^2}{8\pi }-\frac{B_{\infty }^2}{8\pi },$$
(21)
while $`\mathcal{E}_{\text{kin}}`$ is the corresponding deviation of the kinetic energy from that of the state of rigid corotation characterized by the given background angular velocity $`\Omega _{\infty }`$, namely,
$$\mathcal{E}_{\text{kin}}=\frac{m}{2}n_s\left(v^2-\Omega _{\infty }^2r^2\right).$$
(22)
It is convenient for many purposes to express such a model in terms of a dimensionless amplitude $`\psi `$ that varies in the range $`0\le \psi \le 1`$ according to the conventional specification
$$n_s=\psi ^2n_{\infty }.$$
(23)
Within the general category of Ginzburg type models as thus described, the special case of the standard kind of Ginzburg-Landau model is characterized more specifically by the postulate that the gradient coupling constant $`g_c`$ should be exactly equal to unity. This ansatz has the attractive feature of allowing the theory to be neatly reformulated in terms of a complex variable $`\mathrm{\Psi }\equiv \psi \mathrm{e}^{i\phi }`$, where $`\phi `$ is the phase that appears in (1), in a manner that is evocative of the Schrödinger model for a single particle. Indeed, it is easy to verify that in the case of $`g_c=1`$, the gradient term (17) and the kinetic term in (22) can be rewritten, using (1) and (23), to give the usual Ginzburg–Landau type gradient term, i.e.,
$$\frac{\hbar ^2}{8mn_s}\left(\vec{\nabla }n_s\right)^2+\frac{m}{2}n_s\vec{v}^{\,2}=\frac{\hbar ^2n_{\infty }}{2m}\left|\vec{\mathcal{D}}\mathrm{\Psi }\right|^2,$$
(24)
where the covariant derivative is defined as
$$\vec{\mathcal{D}}\equiv \vec{\nabla }-\frac{iq}{\hbar }\vec{A}.$$
(25)
However, although there are physical reasons for expecting that $`g_c`$ should be comparable with unity, the seductive supposition that it should exactly satisfy the Landau condition $`g_c=1`$ is more dubious. This more specialized ansatz will not be needed for the work that follows, which applies to the generalized Ginzburg category with no restriction on the parameter $`g_c`$.
Using the expression
$$\mathcal{E}_{\text{mag}}+\mathcal{E}_{\text{kin}}=\frac{\mathcal{B}^2}{8\pi }+\frac{m}{2q}\,j\mathcal{V}+\frac{B_{\infty }}{8\pi r}\frac{d}{dr}\left(r^2\mathcal{B}\right)$$
(26)
for the first two terms in the combination (20), it can be seen that the total energy density arising from the presence of the vortex will be given by
$$\mathcal{E}=\frac{\mathcal{B}^2}{8\pi }+\frac{H_c^2}{8\pi }\left(1-\psi ^2\right)^2+\frac{n_{\infty }}{2m}\left[\left(g_c\hbar \frac{d\psi }{dr}\right)^2+\left(m\mathcal{V}\psi \right)^2\right]+\frac{B_{\infty }}{8\pi r}\frac{d(r^2\mathcal{B})}{dr}.$$
(27)
The equation governing the distribution of the condensate particle number density $`n_s`$ is obtained by requiring that the integral of the energy density (27) be stationary with respect to variation of $`n_s`$ or equivalently of $`\psi `$, which gives the field equation for the latter in the form
$$\frac{g_c^2\hbar ^2}{mr}\frac{d}{dr}\left(r\frac{d\psi }{dr}\right)=m\mathcal{V}^2\psi -\frac{4}{n_{\infty }}\mathcal{E}_c\,\psi (1-\psi ^2).$$
(28)
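Equations (12), (15) and (28), together with regularity at the axis and the homogeneous conditions at infinity, form a closed boundary-value problem for the vortex profile. The sketch below solves it numerically after casting it in dimensionless form with $`x=r/\lambda `$, $`v=(m\lambda /\hbar )\mathcal{V}`$, $`b=(q\lambda ^2/\hbar )\mathcal{B}`$ and $`w=vx`$; this reduction, and all numerical choices, are our own assumptions rather than anything specified in the text:

```python
# Dimensionless vortex profile from Eqs. (12), (15) and (28):
#   x = r/lambda,  v = (m*lambda/hbar)*V,  b = (q*lambda^2/hbar)*B,  w = v*x,
# giving  psi'' + psi'/x = (v^2/g_c^2)*psi - (kappa^2/2)*psi*(1 - psi^2),
#         dw/dx = -b*x,   db/dx = -psi^2 * v.
# This reduction is our own; treat the sketch as illustrative only.
import numpy as np
from scipy.integrate import solve_bvp

N = 1        # phase winding number
KAPPA = 1.0  # dimensionless parameter of Eq. (37)
G_C = 1.0    # gradient coupling constant

def rhs(x, y):
    psi, dpsi, w, b = y
    v = w / x
    return np.vstack([
        dpsi,
        -dpsi / x + (v**2 / G_C**2) * psi - 0.5 * KAPPA**2 * psi * (1 - psi**2),
        -b * x,            # Eq. (15) in dimensionless form
        -psi**2 * v,       # Eq. (12) in dimensionless form
    ])

def bc(ya, yb):
    x0 = X[0]
    return np.array([
        ya[1] - N * ya[0] / x0,  # regularity: psi ~ x^N near the axis
        ya[2] - N,               # v ~ N/x near the axis, from Eq. (14)
        yb[0] - 1.0,             # psi -> 1 far from the vortex
        yb[3],                   # b -> 0 far from the vortex
    ])

X = np.linspace(0.05, 25.0, 400)
guess = np.vstack([np.tanh(X), 1.0 / np.cosh(X)**2,
                   N * np.exp(-X), 0.5 * np.exp(-X)])
sol = solve_bvp(rhs, bc, X, guess, tol=1e-8, max_nodes=20000)
print("converged:", sol.status == 0)
```

As a sanity check on the reduction, linearizing about $`\psi =1`$ gives the expected decay on the coherence scale $`\xi =\lambda /\kappa `$.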
When this equation is satisfied, it can be seen that the energy density (27) will reduce to a value given by
$$\mathcal{E}=\frac{\mathcal{B}^2}{8\pi }+\frac{H_c^2}{8\pi }\left(1-\psi ^4\right)+\frac{1}{r}\frac{d}{dr}\left(B_{\infty }\frac{r^2\mathcal{B}}{8\pi }+r\,\frac{g_c^2\hbar ^2n_{\infty }}{4m}\frac{d\psi ^2}{dr}\right),$$
(29)
in which the last term is a divergence that integrates to zero, so that for the total energy per unit length one is left simply with
$$U=\frac{1}{8\pi }\int \left(\mathcal{B}^2+H_c^2\left(1-\psi ^4\right)\right)dS.$$
(30)
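With a numerical profile in hand, the integral (30) becomes a one-dimensional quadrature. In the dimensionless units of the previous sketch one can check from (37)–(39) that $`H_c=g_c\kappa /2`$ in units of $`\hbar /q\lambda ^2`$, and that flux quantization gives $`\int bx\,dx=N`$; the bookkeeping below, which reuses `sol`, `G_C`, `KAPPA` and `N` from the previous block, is our own:

```python
# Energy per unit length, Eq. (30), in units of hbar^2/(4 q^2 lambda^2):
#   U_hat = int (b^2 + (g_c*kappa/2)^2 * (1 - psi^4)) x dx.
# For kappa = 1 the printed ratio should be ~1, i.e. U/|Phi| = H_c/(4*pi),
# anticipating Eq. (45).
x, psi, b = sol.x, sol.y[0], sol.y[3]
U_hat = np.trapz((b**2 + (G_C * KAPPA / 2.0)**2 * (1.0 - psi**4)) * x, x)
flux_hat = np.trapz(b * x, x)   # dimensionless flux; should be close to N
print(f"flux/N = {flux_hat / N:.4f}, "
      f"(U/|Phi|) / (H_c/4pi) = {U_hat / (G_C * KAPPA * N):.4f}")
```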
## IV Bogomol’nyi inequality
We shall now try to rewrite the energy density associated with the vortex in a different form. Let us begin by writing the relation
$$\left(g_c\hbar \frac{d\psi }{dr}\right)^2+\left(m\mathcal{V}\psi \right)^2=\left(g_c\hbar \frac{d\psi }{dr}\mp m\mathcal{V}\psi \right)^2\pm g_c\hbar q\psi ^2\mathcal{B}\pm \frac{g_c\hbar m}{qn_{\infty }}\frac{d(rj)}{r\,dr},$$
(31)
which can be obtained by rewriting $`\psi (d\psi /dr)`$ in terms of $`dj/dr`$ and $`d\mathcal{V}/dr`$, the latter term being transformed by use of the Maxwell equation (15). In the first term on the right hand side of (31), one takes the minus sign if $`\mathcal{V}`$ is positive, the plus sign otherwise. We are thus able to obtain a Bogomol’nyi type reformulation of (27) that is given by
$$\begin{aligned}\mathcal{E}&=\frac{g_c\hbar qn_{\infty }}{2m}|\mathcal{B}|+\frac{1}{2\pi }\left(\frac{|\mathcal{B}|}{2}+\frac{\pi g_c\hbar qn_{\infty }}{m}(\psi ^2-1)\right)^2+\frac{n_{\infty }}{2m}\left(g_c\hbar \frac{d\psi }{dr}-m|\mathcal{V}|\psi \right)^2\\ &\quad +\left(\frac{H_c^2}{8\pi }-\frac{\pi g_c^2\hbar ^2q^2n_{\infty }^2}{2m^2}\right)\left(1-\psi ^2\right)^2+\frac{1}{r}\frac{d}{dr}\left(\frac{B_{\infty }r^2\mathcal{B}}{8\pi }+\frac{g_c\hbar r|j|}{2q}\right),\end{aligned}$$
(32)
which generalizes the original version by inclusion of the term with the coefficient $`B_{\infty }`$, which allows for the effect of the background rotation velocity $`\Omega _{\infty }`$. Since this extra term is just the divergence of a quantity that vanishes both on the axis and in the large distance limit, it gives no contribution to the corresponding integral expression, which therefore has the same form as the usual Bogomol’nyi relation for the non-rotating case. More generally, performing the Bogomol’nyi trick (31) just for a fraction $`f`$ of the combined kinetic and gradient contribution, one can see that for a vortex with (relative) magnetic flux
$$\mathrm{\Phi }=\int \mathcal{B}\,dS,$$
(34)
the total vortex energy per unit length will be expressible in the form
$$\begin{aligned}U&=\frac{fH_c}{4\pi \kappa }|\mathrm{\Phi }|+\frac{H_c^2}{8\pi }\left(1-\frac{f^2}{\kappa ^2}\right)\int \left(1-\psi ^2\right)^2dS+\frac{H_c^2}{8\pi }\int \left(\frac{|\mathcal{B}|}{H_c}-\frac{f(1-\psi ^2)}{\kappa }\right)^2dS\\ &\quad +\frac{fn_{\infty }}{2m}\int \left(g_c\hbar \frac{d\psi }{dr}-m|\mathcal{V}|\psi \right)^2dS+\frac{(1-f)n_{\infty }}{2m}\int \left(\left(g_c\hbar \frac{d\psi }{dr}\right)^2+\left(m\mathcal{V}\psi \right)^2\right)dS,\end{aligned}$$
(36)
in which $`\kappa `$ is a dimensionless constant given by the definition
$$\kappa =\frac{mH_c}{2\pi g_c\hbar qn_{\infty }}.$$
(37)
It can be convenient to introduce the so-called London penetration length $`\lambda `$, defined by the expression
$$\lambda ^2=\frac{m}{4\pi q^2n_{\infty }},$$
(38)
and the usual flux quantum
$$\tilde{\mathrm{\Phi }}=\frac{2\pi \hbar }{q}.$$
(39)
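Note that the definitions (37)–(39) combine to give $`\kappa =4\pi \lambda ^2H_c/g_c\tilde{\mathrm{\Phi }}`$, which is the combination that appears in the type I/II criteria below; a quick symbolic check (the variable names are arbitrary):

```python
# Check that kappa (Eq. 37) equals 4*pi*lambda^2*H_c/(g_c*Phi_tilde),
# using the definitions (38) and (39).
import sympy as sp

m, q, n, hbar, Hc, gc = sp.symbols('m q n_inf hbar H_c g_c', positive=True)
kappa = m * Hc / (2 * sp.pi * gc * hbar * q * n)   # Eq. (37)
lam2 = m / (4 * sp.pi * q**2 * n)                  # Eq. (38), lambda^2
phi_t = 2 * sp.pi * hbar / q                       # Eq. (39)
print(sp.simplify(kappa - 4 * sp.pi * lam2 * Hc / (gc * phi_t)))  # -> 0
```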
It is to be observed that all the terms in this expression will be non-negative provided the quantity $`f`$ is chosen not only so as to lie in the range $`0\le f\le 1`$ but also so as to satisfy $`f\le \kappa `$, a requirement that will be more restrictive if $`\kappa \le 1`$. In the latter case we can maximize the first term on the right of (36) by choosing $`f=\kappa `$, thereby incidentally eliminating the second term, so that we obtain the lower limit
$$\kappa \le 1\qquad \Longrightarrow \qquad \frac{U}{|\mathrm{\Phi }|}\ge \frac{H_c}{4\pi }.$$
(40)
If $`\kappa \ge 1`$ we shall be able to choose $`f=1`$, thereby eliminating the final term in (36), so that we obtain the inequality
$$\kappa \ge 1\qquad \Longrightarrow \qquad \frac{U}{|\mathrm{\Phi }|}\ge \frac{H_c}{4\pi \kappa }.$$
(41)
More particularly it can be seen that choosing $`f=1`$ will eliminate both the second term and the last term on the right of (36) in the special Bogomol’nyi limit case characterized by the condition
$$\kappa =1.$$
(42)
(Readers should be warned that much of the relevant literature follows a rather awkward tradition in which the symbol $`\kappa `$ is used for what in the present notation scheme would be denoted by $`\sqrt{2}\kappa `$, which instead of (42) makes the critical condition come out to be $`\kappa =1/\sqrt{2}`$.) In this Bogomol’nyi limit, the energy per unit flux per unit length (36) will be minimized by imposing the conditions
$$g_c\hbar \frac{d\psi }{dr}=m|\mathcal{V}|\psi ,$$
(43)
(with the sign adjusted so as to make the right hand side positive), and
$$|\mathcal{B}|=\frac{g_c\tilde{\mathrm{\Phi }}}{4\pi \lambda ^2}\left(1-\psi ^2\right),$$
(44)
which (as when the background is non-rotating) will automatically guarantee the solution of the field equations in this case, annihilating the last two terms in (36) so that one is left simply with
$$\frac{U}{|\mathrm{\Phi }|}=\frac{H_c}{4\pi }.$$
(45)
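In the dimensionless variables introduced earlier, the conditions (43) and (44) close into the first-order pair $`g_c\,d\psi /dx=v\psi `$ and $`d(vx)/dx=-(g_c/2)(1-\psi ^2)x`$, which can be integrated directly by shooting on the near-axis amplitude of $`\psi `$. The reduction and all numerical details below are our own, and the sketch assumes $`\kappa =1`$:

```python
# Standalone shooting integration of the first-order Bogomolnyi system
# (43)-(44) in dimensionless form (kappa = 1 assumed):
#   g_c * dpsi/dx = v * psi,   d(v*x)/dx = -(g_c/2) * (1 - psi^2) * x.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

N, G_C, X0, XMAX = 1, 1.0, 1e-3, 15.0

def shoot(c):
    def rhs(x, y):
        psi, w = y                      # w = v*x
        return [w * psi / (G_C * x), -0.5 * G_C * (1.0 - psi**2) * x]
    blowup = lambda x, y: y[0] - 2.0    # stop early if psi overshoots
    blowup.terminal = True
    return solve_ivp(rhs, (X0, XMAX), [c * X0**N, float(N)],
                     events=blowup, rtol=1e-9, atol=1e-12)

# Adjust c (psi ~ c*x^N near the axis) so that psi -> 1 at large x.
c_star = brentq(lambda c: shoot(c).y[0, -1] - 1.0, 1e-3, 10.0)
sol = shoot(c_star)
print(f"c* = {c_star:.5f},  psi(x_end) = {sol.y[0, -1]:.6f}")
```

Feeding the resulting profile into the quadrature sketched after (30) reproduces $`U/|\mathrm{\Phi }|=H_c/4\pi `$ to numerical accuracy, consistent with (45).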
The qualitative distinction between what are known as Pippard type or type I superconductors on one hand and as London type or type II superconductors on the other hand is based on the criterion of whether or not, for a given total flux, the energy will be minimized by gathering the flux in a small number of vortices, each with large winding number $`N`$ or by separating the flux in a large number of vortices, each just with unit winding number. Within the framework of our analysis, a model may be characterized as type I if the derivative with respect to $`|\mathrm{\Phi }|`$ of $`U/|\mathrm{\Phi }|`$ is always negative, i.e., if $`dU/d|\mathrm{\Phi }|<U/|\mathrm{\Phi }|`$, in which case the vortices will effectively be mutually attractive, and as being of type II if the derivative with respect to $`|\mathrm{\Phi }|`$ of $`U/|\mathrm{\Phi }|`$ is always positive, i.e., if $`dU/d|\mathrm{\Phi }|>U/|\mathrm{\Phi }|`$, in which case the vortices will effectively be mutually repulsive. One can of course envisage the possibility of models that are intermediate in the sense of having a sign for the derivative of $`U/|\mathrm{\Phi }|`$ that depends on $`N`$, so that the minimum is obtained for some large but finite value of the winding number.
What can be seen directly from (45) is that within the category of models characterized by the Ginzburg–Landau ansatz, the special Bogomol’nyi limit case lies precisely on the boundary between type I and type II, since it evidently satisfies the exact equality
$$4\pi \lambda ^2H_c=g_c\tilde{\mathrm{\Phi }}\qquad \Longrightarrow \qquad \frac{dU}{d|\mathrm{\Phi }|}=\frac{U}{|\mathrm{\Phi }|},$$
(46)
for all values of $`|\mathrm{\Phi }|`$. The implication of our work is that the well known conclusion that there is neither repulsion nor attraction between Ginzburg model vortices in the Bogomol’nyi limit case will remain valid even in the presence of a rotating background. (It has also been shown to be generalizable to cases where self-gravitation is allowed for in a general relativistic framework.)
Since the change of variables performed in section II has transformed the system to a representation that is formally identical to that of the case with a non-rotating background, it follows that the conclusions of the pioneering analysis of Kramer will remain valid, i.e., that in the sense of the preceding paragraph the system will be of the Pippard kind (type I) if $`\kappa <1`$, i.e.,
$$4\pi \lambda ^2H_c<g_c\tilde{\mathrm{\Phi }}\qquad \Longrightarrow \qquad \frac{dU}{d|\mathrm{\Phi }|}<\frac{U}{|\mathrm{\Phi }|},$$
(47)
and that it will be of the London kind (type II), if $`\kappa >1`$, i.e., if
$$4\pi \lambda ^2H_c>g_c\tilde{\mathrm{\Phi }}\qquad \Longrightarrow \qquad \frac{dU}{d|\mathrm{\Phi }|}>\frac{U}{|\mathrm{\Phi }|},$$
(48)
at least so long as the winding number $`N`$ and the parameters $`\kappa `$ and $`g_c`$ are not too far from the neighborhood of unity, in which the numerical investigations have been carried out. It does not seem that the possibility of an intermediate scenario, with $`U/|\mathrm{\Phi }|`$ minimized by a winding number $`N`$ that is finite but larger than one, can occur within the framework of Ginzburg type models except perhaps for parameter values that are too extreme to be of likely physical relevance.
# 2.4–197 $`\mu `$m spectroscopy of OH/IR stars: The IR characteristics of circumstellar dust in O-rich environments

Based on observations with ISO, an ESA project with instruments funded by ESA Member States (especially the PI countries: France, Germany, the Netherlands and the United Kingdom) with the participation of ISAS and NASA.
## 1 Introduction
The post-main sequence evolution of stars of low or intermediate mass takes these stars on to the Asymptotic Giant Branch (AGB), where they lose mass at rates of 10<sup>-7</sup>–10<sup>-4</sup> M<sub>⊙</sub> yr<sup>-1</sup>. In the circumstellar outflows molecules are formed and dust grains condense. The relative abundances of carbon and oxygen in the star determine the chemical composition of the gas and dust in the outflows. Oxygen-rich stars produce silicate dust and molecules such as H<sub>2</sub>O and OH. If the mass-loss rate is sufficiently high, the dust completely obscures the star at visible wavelengths, and the object is known as an OH/IR star because of its strong emission in the infrared (IR), produced by the dust grains, and in radio OH lines, due to maser action by OH molecules. See Habing (habing (1996)) for a detailed review of AGB and OH/IR stars. The optically-thick dust envelopes of OH/IR stars may be the result of a recent increase in mass-loss rate: the so-called ‘superwind’ phase (e.g. Justtanont et al. 1996b; Delfosse et al. delfosse (1997)). Omont et al. (omont (1990)) detected the 43- and 60-$`\mu `$m emission bands of water ice in the KAO spectra of a number of OH/IR stars, attributing the bands to the condensation of water molecules onto silicate grain cores in the dense outflows.
An important result from the Infrared Space Observatory (ISO) mission is the detection of emission from crystalline silicates in the far-IR spectra of many sources, including the circumstellar environments of young and evolved stars and solar-system comets (e.g. Waters et al. waters96 (1996), waters99 (1999)). Crystalline silicate bands have been detected in OH/IR star spectra (Cami et al. cami (1998)) but are not seen in O-rich AGB stars with low mass-loss rates, suggesting that the abundance of the crystalline materials is related to the density of the circumstellar matter at the dust condensation radius (Waters et al. waters96 (1996)). ISO has also detected thermal emission and absorption by water, in both the gaseous and solid (ice) phases (e.g. Barlow barlow (1998)), from O-rich circumstellar environments.
In this paper we present ISO spectra of seven well-known OH/IR stars covering a range of mass-loss rates. The spectrum of the archetypal Mira variable, $`o`$ Cet, is presented for comparison. For most of our targets, the spectra cover the complete 2.4–197 $`\mu `$m spectral range of ISO. Sect. 2 of the paper describes the observations and data reduction. In Sect. 3, the spectra are presented and analyzed, with emphasis on the determination of the continuum and the features due to ices and silicates. Concluding remarks are made in Sect. 4.
## 2 Observations
Seven of our eight sources were observed with both the SWS and LWS instruments, while for OH32.8$`-`$0.3 only the SWS data were useful. Table 1 lists the sources observed, and the JD dates of the observations (henceforth we abbreviate the OH/IR star designations to OH104.9, OH127.8 etc). Some of our sources (e.g. OH104.9, OH26.5) were observed nearly contemporaneously with the two instruments, while for others, the two spectra were taken more than 100 days apart. In the past, modelling work (e.g. Lorenz-Martins & de Araújo lorenz (1997)) has been hindered to some extent by the non-simultaneity of NIR photometry and 10–20 $`\mu `$m spectra (usually IRAS LRS data). The fact that the SWS spectrum covers both these wavelength regions at a single epoch will be useful for future modelling of these sources (Kemper et al., in preparation).
### 2.1 SWS
The 2.38–45.2 $`\mu `$m part of the spectrum was obtained using the ISO Short Wavelength Spectrometer (SWS). A detailed description of the instrument can be found in de Graauw et al. (1996a ). Our objects were observed in AOT 1 mode, speed 2, except for Mira (speed 3) and OH104.9 (speed 1). The spectrum scanned with SWS contains 12 subspectra, which each consist of two scans, one in the direction of decreasing wavelength (‘up’ scan) and one in the direction of increasing wavelength (‘down’ scan). There are small regions of overlap in wavelength between the subspectra. Each subspectrum is recorded by 12 independent detectors.
The data reduction was performed using the ESA SWS Interactive Analysis package (IA<sup>3</sup>), together with the calibration files available in January 1999, equivalent to pipe-line version 6.0. We started from the Standard Processed Data (SPD) to determine the final spectrum, according to the steps described in this section.
The observations suffer from severe memory effects in the 4.08–12.0 $`\mu `$m and 29.0–45.2 $`\mu `$m wavelength regions. It is possible to correct for the memory effects for the individual detectors, using a combined dark-current and memory-effect subtraction method. This method was applied assuming that the flux levels in these wavelength regions are very high, and treating the memory effect as giving an additive contribution to the observed signal. We also assumed that the spurious signal from the memory effect reaches a certain saturation value very quickly after the start of the up scan, and then remains constant throughout the rest of the up scan and the entire down scan. This memory saturation value is measured immediately after the down scan is ended, and is subtracted from the up and down scan measurements. The spectral shape of the memory-corrected down scan is now correct; the error in the spectral shape of the up scan is corrected by fitting a polynomial to the up scan and adjusting this fit to the down scan, without changing the detailed structure of the spectrum. The order of the applied polynomial fit differs per subband, but is chosen to be in agreement with the spectral shape in that band. For band 1 and 4 we mostly used polynomials of order 1 or 2, for band 2a, 2b and 3 we predominantly used order 2 or 3, and for band 2c higher order polynomials (up to order 10) were required to adjust the up scan to the down scan.
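The up-scan correction described above can be sketched as follows: after subtracting the saturated memory signal, the broad (polynomial) shape of the up scan is replaced by that of the corrected down scan, leaving the small-scale structure untouched. The least-squares fit and the default order below are our assumptions:

```python
# Sketch of the up-scan memory correction: subtract the saturated memory
# level, then swap the broad polynomial shape of the up scan for that of
# the memory-corrected down scan.  Polynomial order is assumed.
import numpy as np

def correct_up_scan(wave, up, down, memory_level, order=2):
    down_corr = down - memory_level   # down scan: spectral shape already correct
    p_up = np.polynomial.Polynomial.fit(wave, up, order)
    p_down = np.polynomial.Polynomial.fit(wave, down_corr, order)
    # keep the detailed structure of the up scan, adjust its broad shape
    return up - p_up(wave) + p_down(wave), down_corr
```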
The spectra of some of our objects showed fringes in the 12.0–29.0 $`\mu `$m wavelength region. This was corrected using the defringe procedures of IA<sup>3</sup>.
Glitches caused by particle hits on the detector were removed by hand. Glitches can be easily recognized: they start with a sudden increase in flux level, followed by a tail which decreases exponentially with time. Any given glitch affects only one of the two scans.
The data were further analyzed by shifting all spectra of the separate detectors to a mean value, followed by sigma clipping and rebinning to a resolution of $`\lambda /\mathrm{\Delta }\lambda =600`$, which is reasonable for AOT 1 speed 2 observations.
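A minimal sketch of this combination step follows; the 3-sigma clipping threshold and the constant-resolution binning scheme are our assumptions, since the text does not spell them out:

```python
# Sigma-clipped rebinning of a wavelength/flux point cloud onto a grid of
# constant resolving power R = lambda/dlambda (clipping threshold assumed).
import numpy as np

def rebin_spectrum(wave, flux, wmin=2.38, wmax=45.2, R=600.0, clip=3.0):
    edges = [wmin]
    while edges[-1] < wmax:              # bin width grows as lambda/R
        edges.append(edges[-1] * (1.0 + 1.0 / R))
    edges = np.asarray(edges)
    centres, means = 0.5 * (edges[:-1] + edges[1:]), []
    for lo, hi in zip(edges[:-1], edges[1:]):
        f = flux[(wave >= lo) & (wave < hi)]
        while f.size > 2:                # iterative sigma clipping
            med, std = np.median(f), np.std(f)
            if std == 0.0:
                break
            keep = np.abs(f - med) < clip * std
            if keep.all():
                break
            f = f[keep]
        means.append(np.mean(f) if f.size else np.nan)
    return centres, np.asarray(means)
```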
### 2.2 LWS
We obtained 43–197 $`\mu `$m grating spectra using the LWS instrument. Details of the instrument and its performance can be found in Clegg et al. (clegg (1996)) and Swinyard et al. (swinyard (1996)) respectively. The resolution element was 0.3 $`\mu `$m for the short-wavelength detectors ($`\lambda \lesssim 93`$ $`\mu `$m) and 0.6 $`\mu `$m for the long-wavelength detectors ($`\lambda \gtrsim 80`$ $`\mu `$m). Four samples were taken per resolution element. Between 6 and 26 fast grating scans were made of each target, depending on source brightness and scheduling constraints. Each scan consisted of a single 0.5-sec integration per sample.
The data were reduced using the LWS off-line processing software (version 7.0); the scans were then averaged after sigma-clipping to remove the discrepant points caused by cosmic-ray hits.
For all our LWS sources, apart from Mira and WX Psc, the Galactic background FIR emission was strong, and off-source spectra were taken to enable the background to be subtracted from the on-source spectrum. Galactic background flux levels were only significant for $`\lambda \gtrsim 100`$ $`\mu `$m. The Galactic background emission is extended compared to the LWS beam, and therefore gives rise to strong fringing in both the on- and off-source spectra. The background-subtracted spectra do not show fringing, indicating that the OH/IR stars are point-like to the LWS, as expected.
After averaging and background subtraction (if necessary), each observation consisted of ten subspectra (one per detector), which were rescaled by small factors to give consistent fluxes in regions of overlap, and merged to give a final spectrum.
### 2.3 Joining the SWS and LWS spectra
One of the main goals of this article is to study the overall ISO spectra of the selected objects. Therefore, it is necessary to join the SWS and LWS spectra in such a way that the flux levels and slopes of the spectra agree for both LWS and SWS. Differences in the flux levels of the LWS and SWS spectra are mostly due to flux calibration uncertainties. Although the spectral shape is very reliable, the absolute flux calibration uncertainty is 30% for the SWS at 45 $`\mu `$m (Schaeidt et al. schaeidt (1996)), and 10-15% for the LWS at the same wavelength (Swinyard et al. swin98 (1998)). Therefore, differences between the flux levels of LWS and SWS which are smaller than 33% are acceptable within the limits of the combined error bars.
The SWS and LWS spectra were scaled according to their fluxes in the overlap region. Generally this resulted in a shift of less than 20% (see Table 1). In the case of Mira and WX Psc, a much larger shift was required, presumably due to the large time interval between the SWS and LWS observations of these variable stars.
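The joining step can be sketched as follows; scaling the LWS spectrum onto the SWS level, and using the median flux ratio over the 43–45.2 $`\mu `$m overlap, are our assumptions, since the text does not specify which spectrum was rescaled or which statistic was used:

```python
# Scale the LWS spectrum onto the SWS level using the overlap region
# (43-45.2 micron); the choice of a median ratio is an assumption.
import numpy as np

def merge_sws_lws(w_sws, f_sws, w_lws, f_lws, lo=43.0, hi=45.2):
    f_lws_on_sws = np.interp(w_sws, w_lws, f_lws)    # resample LWS onto SWS grid
    m = (w_sws >= lo) & (w_sws <= hi)
    scale = np.median(f_sws[m] / f_lws_on_sws[m])    # overlap flux ratio
    keep_sws = w_sws < hi
    wave = np.concatenate([w_sws[keep_sws], w_lws[w_lws >= hi]])
    flux = np.concatenate([f_sws[keep_sws], scale * f_lws[w_lws >= hi]])
    return wave, flux, scale
```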
## 3 Results
The combined spectra are presented in Fig. 1, in $`\lambda F_\lambda `$ units.
The spectra are ordered by increasing optical depth in the observed 10-$`\mu `$m silicate absorption, $`\tau _\mathrm{s}`$, and hence are in approximate order of increasing mass-loss rate, assuming roughly similar luminosities. By modelling the infrared excess emission of O-rich AGB stars, Schutte & Tielens (schutte (1989)) and Justtanont & Tielens (justtiel (1992)) have determined the dust mass loss rates for several individual O-rich AGB stars. Their results are summarized in Table 2, where our sources are listed in the same order as in Fig. 1. According to Table 2, our sample is indeed ordered by increasing mass-loss rate, with the exception of WX Psc. The values of $`\tau _\mathrm{s}`$ measured from the spectra are the apparent optical depths relative to our continuum fit to the overall SED, or to an assumed silicate emission profile for CRL 2199 and WX Psc. They therefore represent only a part of the total 10-$`\mu `$m silicate optical depth towards the sources. This is evident from Table 2, where the measured $`\tau _\mathrm{s}`$ are quoted, along with the 10-$`\mu `$m optical depths derived for some of our sources by Justtanont & Tielens (justtiel (1992)), using radiative transfer modelling. A description of the determination of our continuum fit can be found in Sect. 3.1.
All of our sources are dominated by continuum emission from cool dust. The most striking feature in our spectra is the 10-$`\mu `$m silicate band, which appears in emission for Mira, partially self-absorbed for WX Psc and CRL 2199, and in strong absorption for the other sources. The overall shape of the observed spectral energy distribution (SED) varies with $`\tau _\mathrm{s}`$, the sources with high $`\tau _\mathrm{s}`$ showing ‘redder’ SEDs. The SEDs can be roughly approximated using blackbodies with temperatures ranging from 300 K for the most optically-thick sources, to 600 K for the partially self-absorbed sources (see Table 2).
The 20-$`\mu `$m silicate feature also passes from emission to absorption going down the sequence in Fig. 1, but none of our sources show it as a self-absorbed emission feature. Variations in shape as well as optical depth are evident in the 10- and 20-$`\mu `$m absorption features. Weaker features beyond 20 $`\mu `$m can be discerned: these are features of water ice and crystalline silicates, and will be discussed below. Evident at short wavelengths are molecular absorption bands (see Justtanont et al. 1996a for identifications for the supergiant source NML Cyg, which is not included here) and the 3.1-$`\mu `$m H<sub>2</sub>O ice absorption feature.
### 3.1 Determination of the continuum
In order to determine the shape and relative strength of the emission and absorption features we will define a pseudo-continuum, which is assumed to represent featureless thermal emission from the dust. The continuum-divided spectrum will provide us with information on the optical depth of the different species that are located outside the continuum-producing region and the wavelength at which the material becomes optically thin. However, one has to be very careful with the physical meaning of the pseudo-continuum, since the observed continuum results from several wavelength-dependent dust emissivities, with a large range of dust temperatures, modified by strong optical depth effects, rather than being simply a superposition of blackbody energy distributions corresponding to the physical temperature gradient.
For the determination of the continuum we plotted the energy distribution log $`F_\lambda `$ (W m<sup>-2</sup> $`\mu `$m<sup>-1</sup>) against $`\lambda `$ ($`\mu `$m). Plotting the data this way one can easily recognize the general shape of the continuum; the parts of the spectrum where the continuum is well defined, i.e. the long and short wavelength edges, are emphasized. This makes it easier to constrain the continuum in the wavelength regions where the strong spectral features are present. At $`\lambda >50`$ $`\mu `$m, the dust is optically thin, and only weak emission features are present. At $`\lambda <7`$ $`\mu `$m, the radiation from the stellar photosphere is partially (for the Miras) or completely (OH/IR stars) absorbed by molecular lines and the high dust opacity towards the central star. When a spline fit is performed in log $`F_\lambda `$ space, we can thus constrain the continuum by using the known long and short wavelength continuum contributions as reference points. Applying spline fitting with the same reference points in other parameter spaces ($`\lambda `$–log $`F_\lambda `$ space, $`F_\nu `$ space, log $`F_\nu `$ space etc.) shows that all the fits vary within 10% (in flux) with respect to the adopted pseudo-continuum in the 10–20 $`\mu `$m region.
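A sketch of the procedure: fit a spline through continuum anchor points in ($`\lambda `$, log $`F_\lambda `$) space and read off the apparent optical depth as $`\tau _\mathrm{s}=-\mathrm{ln}(F_{\mathrm{obs}}/F_{\mathrm{cont}})`$ at 9.7 $`\mu `$m. The anchor wavelengths below are illustrative choices of ours, not the reference points actually adopted:

```python
# Pseudo-continuum via a cubic spline through anchor points in
# (lambda, log F_lambda) space, then the apparent silicate optical depth
# tau_s = -ln(F_obs / F_cont) at 9.7 micron.  Anchors are illustrative.
import numpy as np
from scipy.interpolate import CubicSpline

def silicate_tau(wave, flux, anchors=(5.5, 6.5, 7.5, 30.0, 50.0, 100.0)):
    f_anchor = [np.median(flux[np.abs(wave - a) < 0.05 * a]) for a in anchors]
    spline = CubicSpline(np.asarray(anchors), np.log10(f_anchor))
    f_cont = 10.0 ** spline(wave)
    i97 = np.argmin(np.abs(wave - 9.7))
    return -np.log(flux[i97] / f_cont[i97]), f_cont
```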
### 3.2 Description of the spectrum
Figs. 2–4 show the observed spectra after division by the adopted continuum. A wealth of interesting features can be recognized in the spectra. At the shortest wavelengths, from $`\lambda =2.38`$ $`\mu `$m to $`\sim 7`$ $`\mu `$m (see Fig. 2), the spectrum is dominated by molecular absorption bands and the effects of dust absorption. In Mira and the intermediate type stars (WX Psc and CRL 2199), strong absorption features due to a blend of H<sub>2</sub>O and CO are present from 2.38 to $`\sim 3.3`$ $`\mu `$m (2.38 to $`\sim 3.8`$ $`\mu `$m in the case of WX Psc), see Fig. 2. Absorption due to SiO is found in some objects around 4.0 $`\mu `$m, most prominently in WX Psc and OH104.9. In most of our spectra we find clear absorption features of gaseous CO<sub>2</sub> at 4.3 $`\mu `$m. CO line absorption around 4.5 $`\mu `$m is found in all our spectra. Throughout the whole 2.38–7 $`\mu `$m wavelength region, H<sub>2</sub>O absorption lines are present; very strong lines of gaseous H<sub>2</sub>O are found at 6.60–6.63 $`\mu `$m. Yamamura et al. (yamamura (1999)) have performed a detailed study of the water features in the spectrum of Mira; Mira is also the only source in our sample that shows OH absorption lines in the 2.5–3.5 $`\mu `$m region. In the high mass-loss rate objects, water ice features are present as well, at 3.1 and (possibly) 6.0 $`\mu `$m. The ice features will be discussed in the next section.
The 7–25 $`\mu `$m region (Fig. 3) is dominated by the strong features of amorphous silicates. The 9.7-$`\mu `$m feature (ranging from 8 to 12–13 $`\mu `$m) and the 18 $`\mu `$m feature (ranging from 15 to 20–25 $`\mu `$m) are very strong bands that occur in emission or (self-)absorption. Crystalline silicate features also occur in the same wavelength ranges but are usually much weaker, so the actual shape of the observed silicate bands is due to a blend of crystalline and amorphous silicates. Absorption by molecular SiO can be seen at 8 $`\mu `$m, on the wing of the silicate feature. Around 15 $`\mu `$m the spectrum of Mira exhibits some sharp emission lines due to CO<sub>2</sub>.
Longwards of 25 $`\mu `$m (Fig. 4), most spectral features occur in emission. The only absorption feature is the OH pumping line at 34.6 $`\mu `$m, which is detected in some of the sample stars. Groups of crystalline silicate emission features occur near 28, 33, and 43 $`\mu `$m, the latter probably being a blend with the 43 $`\mu `$m crystalline water ice feature, while a broad feature, which we ascribe to water ice, peaks around 62 $`\mu `$m. Superposed on this is a sharp feature at 69 $`\mu `$m, due to forsterite.
Several sources show emission features near 47.5 and 52 $`\mu `$m. The longitudinal optical band of water ice lies close to 52 $`\mu `$m (Bertie & Whalley bertie (1967)), but in laboratory data it is only ever seen as a shoulder on the 43-$`\mu `$m band (e.g. Smith et al. smithal (1994)), not a clearly-separated feature. The observed feature is therefore unlikely to be the longitudinal ice band. Malfait et al. (malf99 (1999)) suggested that montmorillonite, which gave a good fit to the broad 100-$`\mu `$m emission feature in the spectrum of HD 142527, is a possible carrier of the 47 and 50-$`\mu `$m features in this source. The planetary nebula NGC 6302 also shows the 47, 53 and 100 $`\mu `$m bands (Lim et al. lim (1999)). If the three features do have a common carrier, the apparent absence of the 100 $`\mu `$m band in the spectra presented here may be due to a difference in dust temperature.
As well as broad emission bands, unresolved emission lines can be detected in the long-wavelength regions of the least noisy of our spectra (e.g. WX Psc and AFGL 5379). Most of these lines are pure rotational lines of water vapour. The 157.7-$`\mu `$m \[C ii\] line is also visible in most of our sources; this is the residual Galactic background \[C ii\] emission after subtraction of the off-source spectrum. In the case of AFGL 5379, the \[C ii\] line is seemingly in absorption: again, this is due to imperfect cancellation of the background emission.
### 3.3 Water Ice
Water ice is an important component of the solid-phase material in cool astronomical sources. Its spectrum shows bands at 3.1, 6.0, 11-12, 43 and 62 $`\mu `$m. The 3.1-$`\mu `$m stretching band is seen (always in absorption) in the spectra of many highly-embedded young stars (e.g. Whittet et al. whittet (1988)) and in some OH/IR stars (e.g. Meyer et al. meyer (1998)). Its formation in the circumstellar envelopes of OH/IR stars has been discussed in particular by Jura & Morris (jura (1985)). Before the ISO mission, the far-IR ice bands had been observed in emission in a small number of sources including the OH/IR stars OH26.5, OH127.8 and OH231.8$`+`$4.2 (Omont et al. omont (1990) and references therein). ISO spectra have shown the 43- and 62-$`\mu `$m ice bands in emission in various objects (Barlow barlow (1998)), such as the planetary nebulae CPD$`56^\mathrm{o}8032`$ (Cohen et al. cohen (1999)) and NGC 6302 (Lim et al. lim (1999)), the post Red Supergiant source AFGL 4106 (Molster et al. 1999a ) and Herbig Ae/Be stars (Waters & Waelkens wawa (1998); Malfait et al. malfait (1998), malf99 (1999)), while the 43-$`\mu `$m band has been detected in absorption toward the highly-embedded sources AFGL 7009 and IRAS 19110+1045 (Dartois et al. dartois (1998), Cox & Roelfsema cox (1999)). Both of the far-IR bands can be blended with crystalline silicate emission, but examination of the shapes and positions of the observed bands can distinguish between silicate and ice emission.
The new ISO spectra have significantly better resolution and sensitivity than the earlier KAO data. Fig. 5 shows attempts to fit the 30–90 $`\mu `$m region of our (continuum-subtracted) spectra, using a spectral synthesis routine kindly provided by Dr T. Lim (personal comm.). This routine takes absorption (or emission) efficiencies for materials of interest, along with user-defined temperatures and relative amounts, and produces the resulting spectrum for optically-thin emission from the individual materials, as well the total emission from all the materials (shown as the thin solid line in Fig. 5). The materials used to fit the OH/IR star spectra were forsterite, enstatite and crystalline water ice; temperatures of order 50–100 K were used for the fitting. The detected ice features are listed in Table 3.
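The essence of such a routine is optically thin emission: each dust or ice component contributes $`w_iQ_i(\lambda )B_\lambda (T_i)`$, and the components are summed. The routine itself is not publicly documented, so the sketch below is only a schematic stand-in; the efficiency arrays $`Q_i`$ must be supplied by the user:

```python
# Optically thin multi-component emission: F_lambda ~ sum_i w_i Q_i B_lambda(T_i).
# A schematic stand-in for the spectral-synthesis routine described in the
# text; absolute scaling is absorbed into the weights.
import numpy as np

H, C, K_B = 6.62607e-34, 2.99792e8, 1.38065e-23

def planck_lambda(wave_um, T):
    lam = wave_um * 1e-6                                  # micron -> m
    return (2 * H * C**2 / lam**5) / np.expm1(H * C / (lam * K_B * T))

def synth_spectrum(wave_um, components):
    """components: list of (Q_array, T_kelvin, weight) on the wave_um grid."""
    total = np.zeros_like(wave_um)
    for q, temp, w in components:
        total += w * q * planck_lambda(wave_um, temp)
    return total

# Usage sketch (q_forsterite, q_ice are hypothetical efficiency arrays):
# wave = np.linspace(30.0, 90.0, 500)
# model = synth_spectrum(wave, [(q_forsterite, 60.0, 1.0), (q_ice, 50.0, 0.5)])
```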
Pyroxenes (such as enstatite; see dash-dotted line in Fig. 5) also show strong 43-$`\mu `$m features; therefore, detection of a 43-$`\mu `$m band in an observed spectrum is not sufficient evidence to demonstrate the presence of H<sub>2</sub>O ice. However, for temperatures $`\gtrsim `$40 K, the 43-$`\mu `$m peak is at least as prominent as the 62-$`\mu `$m peak, so objects which do not have a 43-$`\mu `$m feature are unlikely to contain much water ice (unless it is very cold). Conversely, if an object shows the 62-$`\mu `$m feature, H<sub>2</sub>O ice is likely to be responsible for at least part of that object’s 43-$`\mu `$m feature.
We claim detections of crystalline water ice emission based on the presence of the bands at 43 and 62 $`\mu `$m in OH127.8, OH26.5 and AFGL 5379, confirming the tentative detections for the first two sources by Omont et al. (omont (1990)). OH32.8 also shows a 43-$`\mu `$m feature (Fig. 4), but without observations of the 62-$`\mu `$m feature, we cannot determine if ice emission is present.
CRL 2199 and WX Psc both seem to show broad 50–70 $`\mu `$m features, but the shape of these features does not resemble laboratory crystalline ice features, unlike the observed features of OH127.8, OH26.5 and AFGL 5379. In particular, the emissivity of ice has a minimum near 55 $`\mu `$m before reaching its peak at 62 $`\mu `$m (see dashed line in Fig. 5). This structure is present in the spectra of the latter three sources, while the CRL 2199 and WX Psc features are more flat-topped, with strong emission at 55 $`\mu `$m and no real evidence of a peak at 62 $`\mu `$m. The attempt to fit the WX Psc spectrum with ice emission (Fig. 5) illustrates this point. The OH104.9 features are rather weak and ill-defined; there may be a weak 62-$`\mu `$m band, but the 43-$`\mu `$m band is replaced by a broader feature peaking near 47 $`\mu `$m.
If the 50–70 $`\mu `$m features in CRL 2199 and WX Psc are not water ice, what are they? They may simply be instrumental artefacts: Fig. 1 illustrates how the spectra are dominated by the steep downward slope of the SED, and that spectral features around 60 $`\mu `$m are small perturbations on this overall trend. (Clearly, this statement also holds for the features which we believe to be real.) Experiments with adopting different LWS dark currents and with methods for dealing with detector memory effects made little difference to the spectra. The clino-pyroxene optical constants of Koike et al. (koike (1993)) show a broad feature around 60 $`\mu `$m (see also Cohen et al. cohen (1999)), but this peaks at longer wavelengths than the crystalline water ice band, and so is not a likely carrier for the CRL 2199 and WX Psc features. Similarly, the 62-$`\mu `$m features in OH127.8, OH26.5 and AFGL 5379 do not need an additional long-wavelength component, so the Koike pyroxene is not necessary to fit these spectra. The Jäger et al. (jaeger (1998)) laboratory data do not reproduce the broad 60-$`\mu `$m band seen in the Koike data, so the feature may not be real.
In general, we find that the observed features in Fig. 5 are narrower than our fits, suggesting that we have overestimated the continuum level in this wavelength region, or that the optical constants we used do not adequately represent the circumstellar materials.
The sources in which we detect the ice emission features are the four stars with the deepest 10-$`\mu `$m silicate absorption, and hence presumably the highest mass-loss rates. These same stars also show the 3.1-$`\mu `$m absorption band (see Fig. 2). OH32.8 was too faint at 3 $`\mu `$m to be detected by the SWS, but the 3.1-$`\mu `$m ice absorption feature has been detected in ground-based spectra (Roche & Aitken roche (1984)). The sources with self-absorbed silicate emission features do not appear to show ice features. OH104.9, which shows a relatively shallow silicate absorption, may show a weak 3.1-$`\mu `$m absorption, but it is hard to discern because the spectrum is noisy at short wavelengths.
The 6.0-$`\mu `$m band of water ice is significantly weaker than the 3.1$`\mu `$m band (see e.g. Moore moore (1999)). The spectra of our most heavily-obscured sources, OH32.8, AFGL 5379 and OH26.5, show a weak depression around 6 $`\mu `$m, but this wavelength region is very rich in gaseous H<sub>2</sub>O lines, making it difficult to ascribe the observed feature to ice absorption.
Ice formation requires cool temperatures and sufficient shielding from stellar and interstellar radiation (e.g. Whittet et al. whittet (1988)). The high densities in the (general) outflow that accompany large mass-loss rates may provide the required shielding. Alternatively, enhanced densities could be provided by the formation of a circumstellar disk in the superwind phase, or by inhomogeneous mass loss, such as is apparent in studies of H<sub>2</sub>O and OH maser clumps (e.g. Richards et al. richards (1999)). As discussed by Omont et al. (omont (1990)), the presence of the 63-$`\mu `$m band requires that the water ice is at least partially crystalline, implying that the ice remained relatively warm ($``$100 K) for long enough to allow crystalline reorganization to take place.
Optical depths and column densities for the detected 3.1-$`\mu `$m features are given in Table 3. Meyer et al. (meyer (1998)) have proposed that the ice column density correlates better with the ratio of mass-loss rate to luminosity ($`\dot{M}/L`$) than with $`\dot{M}`$ alone. Adopting reasonable estimates (based on values in the literature) for these parameters, our results support the relation between ice column density and $`\mathrm{log}\dot{M}/L`$ proposed by Meyer et al. (see their Fig. 3). However, given the uncertainties in both parameters, and that the luminosity changes significantly with the variability phase, the relationship should be treated with some caution.
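Such column densities follow from the standard relation $`N=\int \tau \,d\tilde{\nu }/A`$, where $`\tilde{\nu }`$ is the wavenumber and $`A`$ the band strength. The band strength used below (about $`2\times 10^{-16}`$ cm molecule<sup>-1</sup> for the 3.1-$`\mu `$m O–H stretch) is a commonly quoted laboratory value that we assume here; it is not taken from the text:

```python
# Ice column density from the 3.1-micron optical-depth profile:
# N = (integral of tau over wavenumber) / A.  The band strength A is an
# assumed literature-style value, not one quoted in the text.
import numpy as np

A_31 = 2.0e-16                       # band strength [cm per molecule] (assumed)

def ice_column(wave_um, tau):
    nu = 1e4 / np.asarray(wave_um)   # wavenumber [cm^-1]
    order = np.argsort(nu)
    return np.trapz(np.asarray(tau)[order], nu[order]) / A_31   # [cm^-2]

# Example: a Gaussian tau profile with peak 1.0 and FWHM ~ 0.5 micron
w = np.linspace(2.6, 3.6, 200)
tau = 1.0 * np.exp(-0.5 * ((w - 3.1) / (0.5 / 2.355))**2)
print(f"N(H2O ice) ~ {ice_column(w, tau):.2e} cm^-2")
```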
OH32.8 shows the 3.1-$`\mu `$m ice band, and an 11-$`\mu `$m absorption feature in the wing of the silicate absorption feature (Roche & Aitken roche (1984)), which was attributed to the libration mode of water ice. Justtanont & Tielens (justtiel (1992)) were able to model ground-based and IRAS observations of this source using silicate grains with water-ice mantles, which give a much broader 10-$`\mu `$m absorption feature than do bare silicate grains.
The broad 11-$`\mu `$m feature is clearly seen in the five sources with strong 10-$`\mu `$m absorption (Fig. 1). It appears strongly in OH26.5, OH104.9, OH127.8 and OH32.8, and as an inflection near 11.5 $`\mu `$m in AFGL 5379. The contribution of this feature to the overall 10-$`\mu `$m absorption profile therefore does not appear to correlate fully with the presence of the other water ice bands: AFGL 5379 shows strong far-IR ice emission and 3.1-$`\mu `$m absorption, but only weak 11-$`\mu `$m absorption, while OH104.9 shows strong 11-$`\mu `$m absorption but has weak or absent far-IR and 3.1-$`\mu `$m features.
Smith & Herman (smith (1990)) found an 11-$`\mu `$m absorption feature in the spectrum of another OH/IR star, OH138.0+7.3, which does not show any ice absorption at 3.1 $`\mu `$m. Since the 3.1-$`\mu `$m stretching mode is intrinsically stronger than the libration mode, Smith & Herman concluded that the 11-$`\mu `$m feature observed towards OH138.0 is not produced by water ice, and suggested that it is due to partially-crystalline silicates. Another possibility is that spectra like that of OH138.0 are the absorption counterpart of the Little-Marenin & Little (lml (1990)) ‘Sil++’ or ‘Broad’ emission features, which show an emission component at $`\sim `$11 $`\mu `$m on the wing of the silicate feature. These features have been ascribed to crystalline silicates or amorphous alumina grains (see e.g. Sloan & Price sloan (1998)). Clearly, full radiative-transfer modelling would be useful to determine whether ice mantles can indeed explain the range of 11-$`\mu `$m features seen, or whether other grain components are necessary.
The presence of strong water ice features in our spectra indicates that a substantial amount of the H<sub>2</sub>O in the circumstellar envelopes may be depleted into the solid phase. This would decrease the amount of gas-phase H<sub>2</sub>O (and photodissociated OH) in the outer regions of the circumstellar envelope which can be detected by maser and thermal emission. Water maser lines are observed to be relatively weaker in OH/IR stars than in objects with lower mass-loss rates (e.g. Likkel likkel (1989)). Collisional quenching due to the high densities in the inner parts of the outflow is thought to suppress the maser action; our results indicate that depletion into the solid (ice) phase may also play a role.
CO ice shows features near 4.7 $`\mu `$m (e.g. Chiar et al. chiar (1995)): these are not seen in our spectra, but a broad absorption band around 4.3 $`\mu `$m, due to gas-phase CO<sub>2</sub>, is seen. This band is significantly broader than the 4.27-$`\mu `$m CO<sub>2</sub> ice absorption feature seen in molecular clouds (e.g. de Graauw et al. 1996b ). We see no evidence of CO<sub>2</sub> ice at 4.27 $`\mu `$m. CO<sub>2</sub> ice shows another strong feature at 15 $`\mu `$m; our spectra show some structure near this wavelength (Fig. 6), but this may be an artefact of the instrument or data-reduction process.
### 3.4 Silicates
For these objects, most of the energy radiated by the central star is absorbed by the circumstellar dust shell and re-radiated as thermal emission, predominantly by amorphous silicates. Amorphous silicates have strong features at 10 and 18 $`\mu `$m. The current objects can be ordered by increasing optical depth of those features, which agrees with ordering by increasing wavelength of the peak of the SED. When the 10-$`\mu `$m feature becomes optically thick, the energy from the central star must be re-radiated at even longer wavelengths, and the peak position therefore shifts towards 30 $`\mu `$m (see Fig. 1). It is believed that the low mass-loss rate Miras evolve into high mass-loss rate OH/IR stars, while the IRAS colours indicate that the peak of the dust emission shifts towards longer wavelengths (van der Veen & Habing vanderveen (1988)). With increasing mass-loss rate the characteristic density in the wind will increase, increasing the optical depth towards the central star. This evolution can be found in the oxygen-rich AGB stars in our sample.
In Fig. 6 the objects are plotted in the same order as in Fig. 1; however, the intensities are now in $`F_\nu `$ units, and normalized with respect to the measured maximum intensity. The 10-$`\mu `$m feature of Mira is completely in emission; for WX Psc and CRL 2199, the 10-$`\mu `$m feature is partially self-absorbed. For the OH/IR stars OH104.9, OH127.8, OH26.5, AFGL 5379 and OH32.8, the 10-$`\mu `$m feature is completely in absorption, with optical depths ranging from 1.1 for OH104.9 to 3.6 for OH32.8. A detailed analysis of the amorphous silicate features will be presented in a future paper (Kemper et al., in preparation). As the optical depth increases, some structure becomes apparent in the 20–45 $`\mu `$m region.
In the lower mass-loss rate objects, strong emission features due to amorphous silicates are present, but there are no obvious narrow features at $`\lambda >20`$ $`\mu `$m. For the redder objects, we find that the amorphous silicate features are (self-)absorbed, and that some structure is apparent at wavelengths $`\lambda >20`$ $`\mu `$m. These narrow features can be identified as crystalline silicates, both olivines and pyroxenes (Waters et al. waters96 (1996)). The identifications are based on laboratory spectra of crystalline silicates (Jäger et al. jaeger (1998); Koike et al. koike (1993); Koike & Shibai koike98 (1998)) and on similar bands seen in other objects for which detailed studies have been performed, i.e. AFGL 4106 (Molster et al. 1999a ) and HD 45677 (Voors voors (1999)). The dashed lines in Fig. 6 represent the positions of some important crystalline silicate complexes, which are listed in Table 4. These crystalline silicate features are found in emission at the longest wavelengths but sometimes in absorption at somewhat shorter wavelengths, for example the 23.6 $`\mu `$m olivine feature in OH32.8 and in OH26.5 (see Fig. 6). The OH/IR stars presented here are the only objects known to exhibit crystalline silicates in absorption outside the 8–13 $`\mu `$m wavelength region. For OH32.8 this was already reported by Waters & Molster (wamo (1999)) in comparison with AFGL 4106. The presence of the most important crystalline silicate features is indicated in Table 4 for all the sources in our sample. Note that these features are detected by close examination of the spectrum; not all features are visible in the overview figures presented in this paper. Detailed modelling of the wealth of crystalline silicate features shown by the individual objects is deferred to a future paper.
The crystalline silicate features tend to appear in those sources having greater optical depth at 10 $`\mu `$m. However, the sharpness of the crystalline silicate features shows a large variation. The sharpness is expected to be determined by properties of the crystalline silicates, such as the presence of impurities, the shape of the dust grains, and irregularities in the lattice structure. The spectrum of AFGL 5379 does not show the sharp crystalline peaks found in the spectra of the other OH/IR stars and in the spectra of AFGL 4106 and HD 45677, but the shapes of its features do resemble the laboratory spectra of crystalline silicates (Jäger et al. jaeger (1998)). This suggests that the dust grains around these objects exhibit differences in the properties of the lattice structure, such as impurities, holes, and edge effects due to grain size. The appearance of the crystalline silicate emission features in the redder sources in our sample confirms the relationship between dust crystallinity and envelope colour temperature, and hence mass-loss rate, identified by Waters et al. (waters96 (1996)). Specifically, there seems to be a threshold value for the mass loss rate above which the crystalline silicate features appear in the spectrum. However, above this threshold value, the strength and width of the features seem to be uncorrelated to the mass loss rate.
Fig. 7 presents a more detailed overview of the location of the crystalline silicate features. In the upper panel, the upper spectrum is that of OH104.9. The solid line represents the continuum fit obtained using the method described above. At longer wavelengths the spectrum shows emission features superposed on the pseudo-continuum. The 18-$`\mu `$m *absorption* feature extends to $`\lambda \sim 26`$ $`\mu `$m. However, as Fig. 7 clearly shows for both plots, there are crystalline *emission* features present within this wavelength region, in particular, the 17.5–20 $`\mu `$m complex. For reference, the continuum-subtracted spectrum of AFGL 4106 (Molster et al. 1999a ) is also plotted. The 20.6 $`\mu `$m emission feature is probably related to crystalline silicates, although it is not yet identified (Molster et al. 1999a ; Voors voors (1999)). These OH/IR star spectra are the first to show crystalline silicates in emission simultaneously with amorphous silicates in absorption in the same wavelength region.
The presence of crystalline emission features in the spectrum of OH104.9, at wavelengths where the amorphous dust component is still in absorption, implies that the crystalline silicate dust must have a different spatial distribution than the amorphous silicate dust. We consider two possible geometries: spherical and axi-symmetrical.
For the case of a spherically symmetric distribution, the crystalline dust can have a different radial distribution and be located further out in the envelope than the amorphous dust. The SWS and LWS beam sizes are much larger than the angular size of the dust shells of these OH/IR stars, so the amorphous silicate absorption can originate from the entire dust shell, while the crystalline silicate emission can arise from the cool outer layers of the dust shell, where the material is optically thin. If the crystalline silicates are located further out, we can conclude that the crystalline and amorphous dust has not formed at the same time, but that the amorphous dust annealed as it moved away from the star. This could imply that lower mass loss rate Mira variables could in principle be able to form crystalline material as well, but that we cannot detect it because the column densities are not high enough. However, given that higher temperatures are required for annealing amorphous silicates into crystalline silicates than are required for the formation of amorphous silicates themselves, it does seem unlikely that crystalline silicate grains could be formed further out in an outflow than amorphous silicate grains. One possibility is that some small fraction of the particles which formed in an outflow (perhaps the smallest particles) immediately annealed into crystalline silicates in the inner, hottest regions. The large total column density of amorphous silicate grains would lead to net absorption in the 10- and 18-um bands and thus obscure these hot crystalline silicates. When these particles are cooled while flowing outwards, the strong emission features of crystalline silicates at wavelengths longer than $``$ 15 $`\mu `$m can be seen superposed on the amorphous dust features.
An alternative scenario for the observed behaviour is that OH/IR stars possess a dust disk, in addition to a more spherically symmetric outflow. Two alternatives suggest themselves:
* Crystalline silicates that have formed and moved out from the inner regions of the outflow produce emission bands that are seen superposed against the continuum emission and amorphous silicate features that arise from the disk. This would require that those OH/IR stars that have the largest 10-$`\mu `$m amorphous silicate optical depths are the ones whose disks are seen most nearly edge-on.
* The crystalline silicates are located in the disk, while the amorphous silicates are located mainly in the outflow and thus have a more spherically symmetric distribution, with the amorphous silicate absorption features arising from optically thick lines of sight towards the central star. Depending on the inclination angle of the system, the crystalline silicate features can be seen in emission. When the disk is viewed face-on, the crystalline silicate features would be optically thin. Radiation from most of the disk surface reaches the observer via lines of sight which pass through only the outer regions of the spherical dust shell, where the amorphous material is not optically thick.
Two mechanisms can be invoked to explain the high abundance of crystalline silicates in a disk. Below the so-called glass temperature, only amorphous silicates condense, while crystalline silicates can form at temperatures greater than the glass temperature. At the high densities occurring in a disk, condensation of silicates will be able to proceed at higher temperatures than usual (Gail & Sedlmayr gail (1999)) and the temperature range in which it is possible to condense crystalline silicates is thus broadened. In our current sample there is empirical evidence for a correlation between crystallinity and density. Second, if there is amorphous material present in the disk, this can be transformed into crystalline material by annealing. In order to allow the annealing process of the silicates to proceed, the dust-forming region should not cool too rapidly. An orbiting (or slowly outflowing) disk provides the required stability and keeps the amorphous silicates relatively close to the central star for a sufficiently long time for the annealing into crystalline material to occur. Both the above mechanisms provide circumstances that allow the formation of crystalline silicates, which would not be the case if the stellar wind removes the newly-formed silicates at the outflow velocity, such that they rapidly cool.
In order to study the spatial distribution of both dust components in more detail, spectral mapping of the OH/IR stars is necessary. Then we may be able to put constraints on the spatial distribution, and determine the annealing time, of the crystalline dust, using travel time and stability arguments. Using laboratory data on annealing time scales, we may be able to determine physical parameters such as temperature and density in the circumstellar dust shell, thus helping to clarify the AGB mass loss phase of stellar evolution. Recent high-resolution imaging of the dust around evolved stars, such as Mira (Lopez et al. lopez (1997)) and VY CMa (Monnier et al. monnier99 (1999)), has shown that substantial inhomogeneities exist in the dusty outflows; our spectra suggest that the dust around OH/IR stars may also show complex morphologies.
For any geometry of the circumstellar shell, the amorphous–crystalline volume ratio could also be affected by grain-grain collisions. Such collisions have long been recognised as important processes for the evolution of grains in circumstellar envelopes (see e.g. Biermann & Harwit bier (1980)) and in the interstellar medium (Jones et al. jones (1996)). The shock wave driven into the two grains by the collision can lead to vaporization, the formation of high pressure phases, melting, annealing, and shattering depending on the pressures involved (cf., Tielens et al. tmsh (1994)). Experiments show that at a relative collision velocity of $``$1 km s<sup>-1</sup>, mechanical effects (shattering, crater formation) become important. Thermal effects such as crystallization, involving the intergranular nucleation of new, strain–free grains in a previously highly deformed matrix start at somewhat higher velocities ($``$ 5 km s<sup>-1</sup>; 700 kbar) and are never very pervasive. Above $``$ 7 km s<sup>-1</sup> (1 Mbar), melting followed by rapid quenching leads to glass formation (Bauer bauer (1979); Schaal et al. schaal (1979)). Thus, these experiments imply that crystallization only occurs over a very narrow collision velocity range and is not very efficient. Of course, if the projectile/target size ratio is large, even high velocity impacts will lead to a small volume fraction of the target grain passing through the regime where recrystallization can occur when the shock wave expands.
Given the grain velocity profile in AGB outflows (Habing et al. habingea (1994)), potentially crystallizing grain-grain collisions will be largely confined to the acceleration zone at the base of the flow. An important objection against annealing through grain-grain collisions is that the cooling time scales of dust grains are very short compared to the annealing time scales. Therefore the silicate dust grains solidify in the amorphous state (Molster et al. 1999b ). The annealing process can be described as a three dimensional random walk diffusion process on a cubic lattice (Gail hpgail (1998)), which provides an estimate of the annealing time scale as a function of dust temperature. At a $`T_\mathrm{d}`$ = 2000 K the annealing time scale is $``$ 1 s and strongly increases for lower temperatures. The cooling time scale can be derived under the assumption that the power emitted by the dust grain is given by the Planck function, modified by the Planck mean of the absorption efficiency $`Q_{\mathrm{abs}}`$. By comparing the emitted power to the internal heat of the grain one finds for the cooling time scales $`\tau _{\mathrm{cool}}`$ $``$ 10<sup>-3</sup> s for $`T_\mathrm{d}`$ = 2000 K and $`\tau _{\mathrm{cool}}`$ $``$ 0.02 s for $`T_\mathrm{d}`$ = 800 K. In agreement with the experiments, only a small fraction of crystallized material is expected due to the grain-grain collisional shock loading. Final assessment of the viability of this mechanism for the formation of crystalline silicates in AGB outflows has to await detailed modeling of such grain-grain collisions in circumstellar outflows (Kemper et al. in preparation). Finally, we note that the spectrum of $`\beta `$ Pic, which is due to a dust size distribution that is surely collisionally dominated, shows little evidence for crystalline silicates. Although the 10 $`\mu `$m spectrum of $`\beta `$ Pic (Knacke et al. knacke (1993), Aitken et al. aitken (1993)) may show some spectral structure resembling that of solar system comets – characteristic for silicate minerals – the longer wavelength bands so prominent in cometary spectra are completely absent in this source (Pantin et al. malfbpic (1999)). While this may at first sight argue against this mechanism, the collisional velocities in this system may be much lower and predominantly lead to shattering.
## 4 Conclusions
We have presented the complete SWS/LWS spectra for seven oxygen-rich evolved stars, together with the SWS spectrum of an eighth source. For the OH/IR stars, which have optically thick dust shells, essentially all of the stellar luminosity is radiated in the wavelength region covered by ISO. Emission features of crystalline silicates are seen longwards of 15 $`\mu `$m in the dustier objects. Some of these emission features lie within the 20-$`\mu `$m absorption feature of amorphous silicate, suggesting that the crystalline and amorphous components have different spatial distributions. The dust shells of these sources are sufficiently cool for abundant water ice to form, as indicated by the near-IR absorption and far-IR emission features of crystalline ice.
In a future paper (Kemper et al., in preparation) we will present some of the further analysis and modelling required to determine the physical conditions and processes which give rise to these very rich spectra.
###### Acknowledgements.
We thank Tanya Lim for making available the spectral synthesis routine, and the referee, H. Habing, for constructive comments. FK and LBFMW acknowledge financial support from NWO Pionier grant number 616-78-333, and from an NWO Spinoza grant number 08-0 to E.P.J. van den Heuvel. |
no-problem/9910/math-ph9910017.html | ar5iv | text | # Uniform spectral properties of one-dimensional quasicrystals, III. [1]𝜶-continuity
## 1. Introduction
In this article we continue the study of the spectral properties of discrete one-dimensional Schrödinger operators with Sturmian potentials which was started in . That is, we shall consider the operators
(1)
$$[H_{\lambda ,\theta ,\beta }u](n)=u(n+1)+u(n1)+\lambda v_{\theta ,\beta }(n)u(n),$$
acting in $`\mathrm{}^2()`$. Here
$$v_{\theta ,\beta }(n)=\chi _{[1\theta ,1)}\left(n\theta +\beta \text{ mod }1\right),$$
with coupling constant $`\lambda \{0\}`$, irrational rotation number $`\theta (0,1)`$, and phase $`\beta [0,1)`$. Most of our study will be concerned with the investigation of the solutions to the corresponding difference equation
(2)
$$(H_{\lambda ,\theta ,\beta }E)u=0,$$
where the sequence $`(u(n))_n`$ will always be normalized in the sense that
(3)
$$|u(0)|^2+|u(1)|^2=1.$$
The family of operators $`(H_{\lambda ,\theta ,\beta })`$ is commonly agreed to model a one-dimensional quasicrystal. It provides a natural generalization of the Fibonacci family of operators which corresponds to rotation number $`\theta =\theta _F=\frac{\sqrt{5}1}{2}`$, the golden mean. This model was introduced independently by two groups in the early 80’s and has been studied extensively since. The review articles recount the history of generalizations of the basic Fibonacci model and the results obtained for each of them.
It was conjectured early that the operators $`H_{\lambda ,\theta ,\beta }`$ have purely singular continuous zero-measure spectrum. In the Fibonacci case this belief was made explicit, for example, in . A major step towards establishing these properties was achieved when it was realized that the general Kotani theory has very strong consequences for potentials taking finitely many values . This immediately implied (by combining it with a recent stability result of Last and Simon ) that the absolutely continuous spectrum of $`H_{\lambda ,\theta ,\beta }`$ is empty. Moreover, the key technical result of was used by Sütő and Bellissard et al. to establish the zero-measure property in full generality. That is, for all admissible parameter values, the spectrum of $`H_{\lambda ,\theta ,\beta }`$ has zero Lebesgue measure, and hence is a Cantor set. All previously known results on the absence of point spectrum, however, were partial and generic in either a topological or measure-theoretical sense. First, Delyon and Petritis proved empty point spectrum for every $`\lambda `$, almost every $`\theta `$, and almost every $`\beta `$ . Sütő then proved absence of eigenvalues in the Fibonacci case, which was not contained in the full measure set of rotation numbers from the Delyon-Petritis result, for $`\beta =0`$ and all $`\lambda `$ . By general principles, this yields empty point spectrum for a dense $`G_\delta `$-set of $`\beta `$’s . This result was implicitly extended to arbitrary $`\theta `$ by Bellissard et al. (see for explicit proofs). In Kaminaga proved absence of eigenvalues for every $`\lambda `$, every $`\theta `$, and almost every $`\beta `$. Finally, the paper treated every $`\lambda `$, almost every $`\theta `$, and every $`\beta `$. Our first goal here is to complete the identification of the spectral type for all parameter values.
###### Theorem 1.
For every $`\lambda ,\theta ,\beta `$, the operator $`H_{\lambda ,\theta ,\beta }`$ has empty point spectrum.
###### Corollary 1.1.
For every $`\lambda ,\theta ,\beta `$, the operator $`H_{\lambda ,\theta ,\beta }`$ has purely singular continuous zero-measure spectrum.
Recently, the understanding of quantum dynamics for operators with purely singular continuous spectrum has been considerably improved. Lower bounds on the time evolution of the associated quantum systems can be obtained by means of certain Hausdorff dimensional properties of the spectral measures. The first results in this direction were obtained by Guarneri and Combes . They essentially required uniform $`\alpha `$-Hölder continuity. Their results were extended by Last in to spectral measures with non-trivial $`\alpha `$-continuous component, that is, measures that are not supported on a set of zero $`\alpha `$-Hausdorff measure. We refer the reader to for subsequent developments. Apart from the implications for dynamics, the notion of $`\alpha `$-continuity has the advantage of being accessible by an investigation of solutions to (2) as was realized by Jitomirskaya and Last . In these papers they establish an extension of the classical Gilbert-Pearson theory of subordinacy and present several applications. Using their general approach, established purely $`\alpha `$-continuous spectral measures for the operators $`H_{\lambda ,\theta ,\beta }`$ for every $`\lambda `$, an uncountable zero-measure set of $`\theta `$’s and $`\beta =0`$ with a positive $`\alpha `$ which depends on both $`\lambda `$ and $`\theta `$. From a physical point of view, however, the dynamical implications that can be derived for one $`\beta `$ only (they, of course, trivially extend to the elements in the orbit of $`\beta `$ under the irrational rotation by $`\theta `$) are not entirely satisfying since a quasicrystal is essentially modelled by the local isomorphism class of a given potential . It should therefore be expected that the dynamics behave uniformly with respect to $`\beta `$. Thus, our second goal here is to prove that $`\alpha `$-continuity indeed holds uniformly in $`\beta `$. Before stating the result, let us recall some basic notions from continued fraction expansion theory; we mention as general references.
Given $`\theta (0,1)`$ irrational, we have an expansion
$$\theta =\frac{1}{a_1+{\displaystyle \frac{1}{a_2+{\displaystyle \frac{1}{a_3+\mathrm{}}}}}}$$
with uniquely determined $`a_n`$. The associated rational approximants $`\frac{p_n}{q_n}`$ are defined by
$`p_0`$ $`=0,`$ $`p_1`$ $`=1,`$ $`p_n`$ $`=a_np_{n1}+p_{n2},`$
$`q_0`$ $`=1,`$ $`q_1`$ $`=a_1,`$ $`q_n`$ $`=a_nq_{n1}+q_{n2}.`$
The number $`\theta `$ is said to have bounded density if
$$\underset{n\mathrm{}}{lim\; sup}\frac{1}{n}\underset{i=1}{\overset{n}{}}a_i<\mathrm{}.$$
The set of bounded density numbers is uncountable but has Lebesgue measure zero.
###### Theorem 2.
Let $`\theta `$ be a bounded density number. Then for every $`\lambda `$ there exists $`\alpha =\alpha (\lambda ,\theta )>0`$ such that for every $`\beta `$, and every $`\varphi \mathrm{}^2()`$ of compact support, the spectral measure for the pair $`(H_{\lambda ,\theta ,\beta },\varphi )`$ is uniformly $`\alpha `$-Hölder continuous. In particular, $`H_{\lambda ,\theta ,\beta }`$ has purely $`\alpha `$-continuous spectrum.
Remarks.
1. A finite positive measure $`d\mathrm{\Lambda }`$ is said to be uniformly $`\alpha `$-Hölder continuous (or U$`\alpha `$H) if the distribution function
$$\mathrm{\Lambda }(E)=_{\mathrm{}}^E𝑑\mathrm{\Lambda }$$
is uniformly $`\alpha `$-Hölder continuous. A measure is said to be $`\alpha `$-continuous if it is absolutely continuous with respect to a U$`\alpha `$H measure. This is equivalent to the statement $`\mu (S)=0`$ for all sets $`S`$ of zero $`\alpha `$-Hausdorff measure .
2. See for explicit dynamical bounds that may be deduced from this result.
3. As was noted in , $`\alpha `$-continuity implies a lower bound on the Hausdorff dimension of the spectrum as a set. This lower bound has been complemented by Raymond in . This work contains a non-trivial upper bound on the dimension of the spectrum in the Fibonacci case at large coupling. We refer the reader to which proves upper bounds on the dynamics drawing on the ideas of and .
The Gilbert-Pearson theory, as well as the Jitomirskaya-Last extension thereof, relates spectral measure properties to the asymptotic behavior of the following quantity,
(4)
$$u_L^2=\underset{n=0}{\overset{L}{}}\left|u(n)\right|^2+(LL)\left|u(L+1)\right|^2,$$
where $`u`$ is a solution to (2). A proof of purely $`\alpha `$-continuous spectrum can be obtained by the following three-step procedure.
* Prove power-law upper bounds on $`u_L`$ for (spectrally almost) every energy $`E`$ in the spectrum of the given operator, uniformly for all solutions.
* Prove similar power-law lower bounds on $`u_L`$.
* Relate the spectral properties of the whole-line operator to the study of the solutions on the half-line.
The resulting $`\alpha `$ depends in an explicit way on the exponents in the power law bounds on $`u_L`$.
Let us remark that some works, e.g. , associate a slightly different family of operators to the parameters $`\lambda ,\theta `$ which is larger than the family parametrized by $`\theta [0,1)`$. The above theorems also hold for this family (cf. the corresponding discussions in ).
The organization of this article is as follows. In Section 2 we present some crucial properties of Sturmian potentials. We recall in particular the unique decomposition property and the uniform bounds on the traces of certain transfer matrices. Section 3 provides a study of the scaling properties of solutions to (2) with respect to the decomposition of the potentials on various levels and shows how Theorem 1 follows from these scaling properties. Uniform upper and lower power-law bounds on $`u_L`$ for certain rotation numbers are established in Section 4. Finally, Section 5 discusses the transition from half-line eigenfunction estimates to spectral properties of the whole-line operator which, together with the power-law bounds on the solutions, proves Theorem 2. Since this interplay might be of independent interest, we present it in a general form, independent of the potential.
## 2. Basic properties of Sturmian potentials
In this section we recall some basic properties of Sturmian potentials. For further information we refer the reader to . We focus, in particular, on the decomposition of Sturmian potentials into canonical words, which obey recursive relations, and known results on the traces of the transfer matrices associated to these words.
Fix some rotation number, $`\theta `$, and let $`a_n`$ denote the coefficients in its continued fraction expansion. Define the words $`s_n`$ over the alphabet $`𝒜=\{0,1\}`$ by
(5)
$$s_1=1,s_0=0,s_1=s_0^{a_11}s_1,s_n=s_{n1}^{a_n}s_{n2},n2.$$
In particular, the word $`s_n`$ has length $`q_n`$ for each $`n0`$. By definition, $`s_{n1}`$ is a prefix of $`s_n`$ for each $`n2`$. For later use, we recall the following elementary formula .
###### Proposition 2.1.
For each $`n2`$, $`s_ns_{n+1}=s_{n+1}s_{n1}^{a_n1}s_{n2}s_{n1}`$.
Thus, the word $`s_ns_{n+1}`$ has $`s_{n+1}`$ as a prefix. Note that the dependence of $`a_n,p_n,q_n,s_n`$ on $`\theta `$ is left implicit. Fix coupling constant $`\lambda `$ and energy $`E`$; then, for each $`w=w_1\mathrm{}w_n𝒜^n`$, we define the transfer matrix $`M(\lambda ,E,w)`$ by
(6)
$$M(\lambda ,E,w)=\left[\begin{array}{cc}E\lambda w_n& 1\\ 1& 0\end{array}\right]\times \mathrm{}\times \left[\begin{array}{cc}E\lambda w_1& 1\\ 1& 0\end{array}\right].$$
If $`u`$ is a solution to (2), we have
$$U(n+1)=M(\lambda ,E,v_{\theta ,\beta }(1)\mathrm{}v_{\theta ,\beta }(n))U(1),$$
where
$$U(n)=\left[\begin{array}{c}u(n)\\ u(n1)\end{array}\right].$$
When studying the power-law behavior of $`u_L`$, one can investigate as well the behavior of
(7)
$$U_L=\left(\underset{n=1}{\overset{L}{}}U(n)^2+(LL)U(L+1)^2\right)^{\frac{1}{2}},$$
where
$$U(n)^2=|u(n)|^2+|u(n1)|^2,$$
since
(8)
$$\frac{1}{2}U_L^2u_L^2U_L^2.$$
Now, the spectrum of $`H_{\lambda ,\theta ,\beta }`$ is independent of $`\beta `$ and can thus be denoted by $`\mathrm{\Sigma }_{\lambda ,\theta }`$. Let us define
$`x_n`$ $`=\mathrm{tr}\left(M(\lambda ,E,s_{n1})\right),`$
$`y_n`$ $`=\mathrm{tr}\left(M(\lambda ,E,s_n)\right),`$
$`z_n`$ $`=\mathrm{tr}\left(M(\lambda ,E,s_ns_{n1})\right),`$
with dependence on $`\lambda `$ and $`E`$ suppressed.
###### Proposition 2.2.
For every $`\lambda `$ there exists $`C_\lambda (1,\mathrm{})`$ such that for every irrational $`\theta `$, every $`E\mathrm{\Sigma }_{\lambda ,\theta }`$, and every $`n`$, we have
$$\mathrm{max}\{|x_n|,|y_n|,|z_n|\}C_\lambda .$$
Proof. This result follows implicitly from . It can be derived from the analysis in by combining their bound on $`|x_n|`$ and $`|y_n|`$ with the fact that the traces obey the Fricke-Vogt invariant
$$x_n^2+y_n^2+z_n^2x_ny_nz_n=\lambda ^2+4,$$
which was also shown in .∎
The words $`s_n`$ are now related to the sequences $`v_{\theta ,\beta }`$ in the following way. For each pair $`(\theta ,n)`$, every sequence $`v_{\theta ,\beta }`$ may be partitioned into words such that each word is either $`s_n`$ and $`s_{n1}`$. This uniform combinatorial property, together with the uniform trace bounds given in Proposition 2.2, lies at the heart of the results contained in this paper and its precursors . Let us make this property explicit.
###### Definition 2.3.
Let $`n_0`$ be given. An $`(n,\theta )`$-partition of a function $`f:\{0,1\}`$ is a sequence of pairs $`(I_j,z_j)`$, $`j`$ such that:
* the sets $`I_j`$ partition $``$;
* $`1I_0`$;
* each block $`z_j`$ belongs to $`\{s_n,s_{n1}\}`$; and
* the restriction of $`f`$ to $`I_j`$ is $`z_j`$. That is, $`f_{d_j}f_{d_j+1}\mathrm{}f_{d_{j+1}1}=z_j`$.
We will suppress the dependence on $`\theta `$ if it is understood to which $`\theta `$ we refer. In particular, we will write $`n`$-partition instead of $`(n,\theta )`$-partition. The unique decomposition property is now given in the following lemma which was proved in .
###### Lemma 2.4.
For every $`n_0`$ and every $`\beta [0,1)`$, there exists a unique $`n`$-partition $`(I_j,z_j)`$ of $`v_{\theta ,\beta }`$. Moreover, if $`z_j=s_{n1}`$, then $`z_{j1}=z_{j+1}=s_n`$. If $`z_j=s_n`$, then there is an interval $`I=\{d,d+1,\mathrm{},d+l1\}`$ containing $`j`$ and of length $`l\{a_{n+1},a_{n+1}+1\}`$ such that $`z_i=s_n`$ for all $`iI`$ and $`z_{d1}=z_{d+l}=s_{n1}`$.
We finish this section with a short discussion of symmetry properties of the words $`v_{\theta ,\beta }`$. This will show that the considerations below, based on a study of the operators $`H_{\lambda ,\theta ,\beta }`$ on the right half-line, could equally well be based on a study of the operators on the left half-line. This particularly implies that for all parameter values, given an energy in the spectrum, both at $`+\mathrm{}`$ and $`\mathrm{}`$ every solution of (2) does not tend to zero.
For a finite word $`w=w_1\mathrm{}w_n`$ over $`\{0,1\}`$, define the reverse word $`w^R`$ by $`w^R=w_n\mathrm{}w_1`$ and for a word $`w\{0,1\}^{}`$, define the reverse word $`w^R`$ by $`w^R=v`$ with $`v_n=w_n`$ for $`n`$. It is not hard to show that every $`v_{\theta ,\beta }`$ allows a unique $`n`$-$`R`$-partition . Here, an $`n`$-$`R`$-partition is defined by replacing $`s_{n1}`$ and $`s_n`$ by $`s_{n1}^R`$ and $`s_n^R`$, respectively, in the definition of $`n`$-partition. Mimicking the proof of Lemma 5.1 in with the norm replaced by the trace, gives immediately $`x_n^R=x_n`$, $`y_n^R=y_n`$ and $`z_n^R=z_n`$. Here, $`x_n^R,y_n^R`$ and $`z_n^R`$ are defined by replacing $`s_{n1},s_n`$ and $`s_ns_{n1}`$ with their reverse words in the definition of $`x_n,y_n`$ and $`z_n`$, respectively. Thus, the analog of Proposition 2.2 holds for $`x_n^R,y_n^R,z_n^R`$ (in fact, this can also be established by remarking that the underlying trace map system is essentially unchanged by passing from $`s_n`$ to $`s_n^R`$). The $`n`$-$`R`$-partitions and the bound on the traces allow to study the operators on the left half-line in exactly the same way as the operators on the right half-line are studied in the following two sections. Alternatively, it is possible to show that the map $`R`$ leaves the set $`\overline{\{v_{\theta ,\beta }:\beta [0,1)\}}\{0,1\}^{}`$ invariant, where the bar denotes closure with respect to product topology . This could also be used to show that the two half-lines are equally well accessible.
## 3. Scaling behavior of solutions
In this section, we use the trace bounds and the partition lemma to study the growth of $`U_L`$ for energies in the spectrum and normalized solutions to (2). For our purposes it will be sufficient to consider this quantity only for $`L=q_{8n}`$, $`n`$. In Lemma 3.1 below it is shown that this growth has a lower bound which is exponential in $`n`$. In particular, this will imply absence of eigenvalues as claimed in Theorem 1 and it will also be used in our proof of power-law (in $`L`$) lower bounds for certain rotation numbers which will be given in the next section.
###### Lemma 3.1.
Let $`\lambda ,\theta ,\beta `$ be arbitrary, $`E\mathrm{\Sigma }_{\lambda ,\theta }`$, and let $`u`$ be a normalized solution to (2). Then, for every $`n8`$, the inequality
$$U_{q_n}D_\lambda U_{q_{n8}},$$
holds, where
$$D_\lambda ^2=1+\left[\frac{1}{2C_\lambda }\right]^2.$$
Proof of Theorem 1. It follows immediately from Lemma 3.1 that for all parameter values $`\lambda ,\theta ,\beta `$, the operator $`H_{\lambda ,\theta ,\beta }`$ has no eigenvalues.∎
Before giving the proof of Lemma 3.1, let us make explicit the core argument we shall use. Similar to , we employ a mass-reproduction technique which is based upon the two-block version of the Gordon argument from .
###### Lemma 3.2.
Fix $`\lambda ,\theta ,\beta `$. Suppose that $`v_{\theta ,\beta }(j)\mathrm{}v_{\theta ,\beta }(j+2k1)`$ is conjugate to $`(s_{n1})^2`$, $`(s_n)^2`$, or $`(s_{n1}s_n)^2`$ for some $`n`$, $`lk`$, and every $`j\{1,\mathrm{},l\}`$. Let $`E\mathrm{\Sigma }_{\lambda ,\theta }`$. Then every normalized solution $`u`$ to (2) satisfies
$$U_{l+2k}D_\lambda U_l.$$
Remark. A word $`w=w_1\mathrm{}w_n`$ is conjugate to a word $`v=v_1\mathrm{}v_n`$ if for some $`i\{1,\mathrm{},n\}`$, we have $`w_1\mathrm{}w_n=v_i\mathrm{}v_nv_1\mathrm{}v_{i1}`$, that is, if $`w`$ is obtained from $`v`$ by a cyclic permutation of its symbols.
Proof. Consider some $`j\{1,\mathrm{},l\}`$. By definition, we have
$$U(j+k)=M(\lambda ,E,v_{\theta ,\beta }(j)\mathrm{}v_{\theta ,\beta }(j+k1))U(j),$$
and by assumption,
$`U(j+2k)`$ $`=M(\lambda ,E,v_{\theta ,\beta }(j)\mathrm{}v_{\theta ,\beta }(j+2k1))U(j)`$
$`=\left[M(\lambda ,E,v_{\theta ,\beta }(j)\mathrm{}v_{\theta ,\beta }(j+k1))\right]^2U(j).`$
Hence, applying the Cayley-Hamilton Theorem,
(9)
$$U(j+2k)\mathrm{tr}\left[M(\lambda ,E,v_{\theta ,\beta }(j)\mathrm{}v_{\theta ,\beta }(j+k1))\right]U(j+k)+U(j)=0.$$
Moreover,
(10)
$$\left|\mathrm{tr}\left[M(\lambda ,E,v_{\theta ,\beta }(j)\mathrm{}v_{\theta ,\beta }(j+k1))\right]\right|C_\lambda .$$
Combining (9) and (10), we obtain
(11)
$$\mathrm{max}\{U(j+k),U(j+2k)\}\frac{1}{2C_\lambda }U(j)$$
for all $`1jl`$. We can therefore proceed as follows,
$`U_{l+2k}^2`$ $`={\displaystyle \underset{m=1}{\overset{l+2k}{}}}U(m)^2`$
$`={\displaystyle \underset{m=1}{\overset{l}{}}}U(m)^2+{\displaystyle \underset{m=l+1}{\overset{l+2k}{}}}U(m)^2`$
$`{\displaystyle \underset{m=1}{\overset{l}{}}}U(m)^2+\left[\frac{1}{2C_\lambda }\right]^2{\displaystyle \underset{m=1}{\overset{l}{}}}U(m)^2`$
$`=\left(1+\left[\frac{1}{2C_\lambda }\right]^2\right)U_l^2.`$
This proves the assertion.∎
Proof of Lemma 3.1. We make use of the information provided by Lemma 2.4 and exhibit squares in the potentials which are suitable in the sense that they satisfy the assumption of Lemma 3.2. In fact, we shall show
(12)
$$U_{2(q_{n+1}+q_n)+q_{n1}}D_\lambda U_{q_{n4}}$$
for all $`\lambda ,\theta ,\beta `$, all $`E\mathrm{\Sigma }_{\lambda ,\theta }`$, all solutions $`u`$, and all $`n4`$. Since $`q_{n+4}2(q_{n+1}+q_n)+q_{n1}`$, this proves the assertion.
Fix $`\lambda ,\theta ,\beta `$ and some $`n4`$ and consider the $`n`$-partition of $`v_{\theta ,\beta }`$. Since we want to exhibit squares close to the origin, we consider the following cases.
* Applying (5) and Proposition 2.1, we see that this block is followed by $`s_{n1}^2s_{n4}`$. We can, therefore, apply Lemma 3.2 with $`l=q_{n4}`$ and $`k=q_{n1}`$. This yields (12) and we are done in this case.
* Proposition (2.1) yields that these two blocks are followed by $`s_ns_{n3}`$. Lemma 3.2 now applies with $`l=q_{n3}`$ and $`k=q_n`$.
* Let $`z_j^{}`$ label the blocks in the $`(n+1)`$-partition of $`v_{\theta ,\beta }`$. Therefore we have $`z_0^{}=s_{n+1}`$. Let us consider the following subcases.
+ Similar to Case 2, this implies that $`z_0^{}z_1^{}`$ is followed by $`s_{n+1}s_{n2}`$ and hence Lemma 3.2 applies with $`l=q_{n2}`$ and $`k=q_{n+1}`$.
+ It follows that $`z_2^{}=s_{n+1}`$. Again we consider two subcases.
- Of course, this case can only occur if $`a_{n+2}=1`$. We infer that $`z_4^{}=s_{n+1}`$. But this implies that we have squares conjugate to $`s_ns_{n+1}`$ and Lemma 3.2 is applicable with $`l=q_{n1}`$ and $`k=q_n+q_{n+1}`$. Hence, (12) also holds in this case.
- Let us consider the consequences of this particular case for the blocks in the $`n`$-partition. We have
(13)
$$z_0z_1\mathrm{}z_{2a_{n+1}+4}=s_ns_{n1}s_ns_n^{a_{n+1}}s_{n1}s_n^{a_{n+1}}s_{n1}.$$
Since $`s_n`$ is a prefix of $`s_{n+1}`$, this must be followed by $`s_n`$. We therefore have the sequence of blocks
$$s_ns_{n1}s_ns_n^{a_{n+1}}s_{n1}s_n^{a_{n+1}}s_{n1}s_n$$
where the site $`1`$ is contained in the leftmost block. Using Proposition 2.1 this can be rewritten as
$$s_ns_{n1}s_ns_n^{a_{n+1}}s_{n1}s_n^{a_{n+1}}s_ns_{n2}^{a_{n1}1}s_{n3}s_{n2},$$
which can as well be interpreted as
$$s_ns_{n1}s_ns_n^{a_{n+1}}s_{n1}s_ns_n^{a_{n+1}}s_{n2}^{a_{n1}1}s_{n3}s_{n2}.$$
Thus, Lemma 3.2 is applicable with $`l=q_{n3}`$ and $`k=q_n+q_{n+1}`$ which closes Case 3.2.2.
Between Cases 1, 2, and 3 we have covered all possible choices of $`z_0,z_1`$.∎
## 4. Power-law upper and lower bounds on solutions
In this section we provide power-law bounds for $`u_L`$ in the case where the rotation number $`\theta `$ has suitable number theoretic properties. Recall that $`a_n`$ denote the coefficients in the continued fraction expansion of $`\theta `$ and $`q_n`$ denote the denominators of the canonical continued fraction approximants to $`\theta `$.
###### Proposition 4.1.
Let $`\theta `$ be such that for some $`B<\mathrm{}`$, $`q_nB^n`$ for every $`n`$. Then for every $`\lambda `$, there exist $`0<\gamma _1,C_1<\mathrm{}`$ such that for every $`E\mathrm{\Sigma }_{\lambda ,\theta }`$ and every $`\beta `$, every solution $`u`$ of (2),(3) obeys
(14)
$$u_LC_1L^{\gamma _1}$$
for $`L`$ sufficiently large.
Remark. The set of $`\theta `$’s obeying the assumption of Proposition 4.1 has full Lebesgue measure .
Proof. The bound (14) can be derived from the exponential lower bound on $`U_{q_{8n}},n`$ given the exponential upper bound on $`q_n,n`$. Lemma 3.1 established the power-law bound for $`L=q_{8n}`$. It can then be interpolated to other values of $`L`$ (see for details).∎
###### Proposition 4.2.
Let $`\theta `$ be a bounded density number. Then for every $`\lambda `$, there exist $`0<\gamma _2,C_2<\mathrm{}`$ such that for every $`E\mathrm{\Sigma }_{\lambda ,\theta }`$ and every $`\beta `$, every normalized solution $`u`$ of (2),(3) obeys
(15)
$$u_LC_2L^{\gamma _2}$$
for all $`L`$.
Proof. The proof is based upon local partitions and results by Iochum et al. . Up to interpolation to non-integer $`L`$’s, it was given in .∎
Remark. It is easy to see that bounded density numbers obey the assumption of Proposition 4.1. Thus, if $`\theta `$ is a bounded density number, we have
$$C_1L^{\gamma _1}u_LC_2L^{\gamma _2}$$
with $`\lambda `$-dependent constants $`\gamma _i,C_i`$, uniformly for all energies from the spectrum, all phases $`\beta `$, and all normalized solutions of (2).
## 5. Subordinacy Theory
In this section we demonstrate how the solution estimates of the previous section may be used to prove $`\alpha `$-continuity of spectral measures for some $`\alpha >0`$.
As it will cost us nothing in clarity, we shall treat the operator
$$[Hu](n)=u(n+1)+u(n1)+V(n)u(n)$$
with arbitrary potential $`V:`$. To each such whole-line operator we associate two half-line operators, $`H_+=P_+^{}HP_+`$ and $`H_{}=P_{}^{}HP_{}`$, where $`P_\pm `$ denote the inclusions $`P_+:\mathrm{}^2(\{1,2,\mathrm{}\})\mathrm{}^2()`$ and $`P_{}:\mathrm{}^2(\{0,1,2,\mathrm{}\})\mathrm{}^2()`$.
The spectral properties of $`H,H_\pm `$ are typically studied via the Weyl $`m`$-functions. For each $`z`$ we define $`\psi ^\pm (n;z)`$ to be the unique solutions to
$$H\psi ^\pm =z\psi ^\pm ,\psi ^\pm (0;z)=1\text{and}\underset{n=0}{\overset{\mathrm{}}{}}|\psi ^\pm (\pm n;z)|^2<\mathrm{}.$$
With this notation we can define the Weyl functions by
$`m^+(z)`$ $`=\delta _1|(H_+z)^1\delta _1=\psi ^+(1;z)/\psi ^+(0;z)`$
$`m^{}(z)`$ $`=\delta _0|(H_{}z)^1\delta _0=\psi ^{}(0;z)/\psi ^{}(1;z)`$
for each $`z`$. Here and elsewhere, $`\delta _n`$ denotes the vector in $`\mathrm{}^2`$ supported at $`n`$ with $`\delta _n(n)=1`$. For the whole-line problem the $`m`$-function role is played by the $`2\times 2`$ matrix $`M(z)`$:
$$\left[\begin{array}{c}a\\ b\end{array}\right]^{}M(z)\left[\begin{array}{c}a\\ b\end{array}\right]=(a\delta _0+b\delta _1)|(Hz)^1(a\delta _0+b\delta _1).$$
Or, more explicitly,
$`M`$ $`={\displaystyle \frac{1}{\psi ^+(1)\psi ^{}(0)\psi ^+(0)\psi ^{}(1)}}\left[\begin{array}{cc}\psi ^+(0)\psi ^{}(0)& \psi ^+(1)\psi ^{}(0)\\ \psi ^+(1)\psi ^{}(0)& \psi ^+(1)\psi ^{}(1)\end{array}\right]`$
$`={\displaystyle \frac{1}{1m^+m^{}}}\left[\begin{array}{cc}m^{}& m^+m^{}\\ m^+m^{}& m^+\end{array}\right]`$
with $`z`$ dependence suppressed. We define $`m(z)=\mathrm{tr}\left(M(z)\right)`$, that is, the trace of $`M`$. From these definitions one obtains:
$`m^\pm (z)`$ $`={\displaystyle \frac{1}{tz}𝑑\rho ^\pm (t)},`$
(16) $`m(z)`$ $`={\displaystyle \frac{1}{tz}𝑑\mathrm{\Lambda }(t)},`$
where $`d\rho ^+,d\rho ^{}`$ are the spectral measures for the pairs $`(H_+,\delta _1),(H_{},\delta _0)`$, respectively, and $`d\mathrm{\Lambda }`$ is the sum of the spectral measures for the pairs $`(H,\delta _0)`$ and $`(H,\delta _1)`$. An immediate consequence of these representations is that each of the $`m`$-functions maps $`^+=\{x+iy:y>0\}`$ to itself.
The pair of vectors $`\{\delta _0,\delta _1\}`$ is cyclic for $`H`$; indeed, if $`\varphi `$ is supported in $`\{N,\mathrm{},N,N+1\}`$, then there exist polynomials $`P_0,P_1`$ of degree not exceeding $`N`$ such that $`\varphi =P_0(H)\delta _0+P_1(H)\delta _1`$. This may be proved readily, by induction, once it is observed that $`\varphi (N),\varphi (N+1)`$ uniquely determine the leading coefficients of $`P_0,P_1`$, respectively.
Our immediate goal is to prove that $`d\mathrm{\Lambda }`$ is uniformly $`\alpha `$-Hölder continuous. This will follow quickly from
###### Theorem 3.
Fix $`E`$. Suppose every solution of $`(HE)u=0`$ with $`|u(0)|^2+|u(1)|^2=1`$ obeys the estimate
(17)
$$C_1L^{\gamma _1}u_LC_2L^{\gamma _2}$$
for $`L>0`$ sufficiently large. Then
(18)
$$\underset{\phi }{sup}\left|\frac{\mathrm{sin}(\phi )+\mathrm{cos}(\phi )m^+(E+iϵ)}{\mathrm{cos}(\phi )\mathrm{sin}(\phi )m^+(E+iϵ)}\right|C_3ϵ^{\alpha 1},$$
where $`\alpha =2\gamma _1/(\gamma _1+\gamma _2)`$.
Proof. This result lies within the Gilbert-Pearson theory of subordinacy . A concise proof is available in . In this context, the $`\phi `$ above corresponds to the choice of boundary conditions.∎
###### Corollary 5.1.
Given a Borel set $`\mathrm{\Sigma }`$, suppose that the estimate (17) holds for every $`E\mathrm{\Sigma }`$ with $`C_1,C_2`$ independent of $`E`$. Then, given any function $`m^{}:^+^+`$, and any $`E\mathrm{\Sigma }`$,
(19)
$$|m(E+iϵ)|=\left|\frac{m^+(E+iϵ)+m^{}(E+iϵ)}{1m^+(E+iϵ)m^{}(E+iϵ)}\right|C_3ϵ^{\alpha 1}$$
for all $`ϵ>0`$. Consequently, $`\mathrm{\Lambda }(E)`$ is uniformly $`\alpha `$-Hölder continuous at all points $`E\mathrm{\Sigma }`$. In particular, $`d\mathrm{\Lambda }`$ is $`\alpha `$-continuous on $`\mathrm{\Sigma }`$.
Proof. Fix $`E\mathrm{\Sigma }`$ and $`ϵ>0`$. Then, by introducing new variables $`z=e^{2i\phi }`$ and $`\mu =(m^+i)/(m^++i)`$, we may rewrite (18) as
$$\underset{|z|=1}{sup}\left|\frac{1+\mu z}{1\mu z}\right|C_3ϵ^{\alpha 1}.$$
Note that $`\mathrm{Im}(m^+)>0`$ implies $`|\mu |<1`$ and so $`(1+\mu z)/(1\mu z)`$ defines an analytic function on $`\{z:|z|1\}`$. The point $`z=(im^{})/(i+m^{})`$ lies inside the unit disk since $`\mathrm{Im}(m^{})>0`$. The estimate (19) now follows from the maximum modulus principle and a few simple manipulations. This estimate and the representation (16) provide
$$\mathrm{\Lambda }\left([Eϵ,E+ϵ]\right)2ϵ\mathrm{Im}\left(m(E+iϵ)\right)2C_3ϵ^\alpha \text{for all }E\mathrm{\Sigma }\text{}ϵ>0\text{,}$$
from which $`\mathrm{\Lambda }(E)`$ is uniformly $`\alpha `$-Hölder continuous on $`\mathrm{\Sigma }`$.∎
Remark. If we permit $`C_1,C_2`$ to depend on $`E`$, the only consequence is that now $`C_3`$ depends on $`E`$ and so $`\mathrm{\Lambda }`$ need not be uniformly Hölder continuous. However, $`\alpha `$-continuity is still guaranteed.
Proof of Theorem 2. Propositions 4.1 and 4.2 provide the estimate (18) for each $`E`$ in the spectrum $`\mathrm{\Sigma }_{\lambda ,\theta }`$ of $`H_{\lambda ,\theta ,\beta }`$. Of course $`d\mathrm{\Lambda }`$ is supported by $`\mathrm{\Sigma }_{\lambda ,\theta }`$ and so must be uniformly $`\alpha `$-Hölder continuous.
Given $`\varphi \mathrm{}^2()`$ with compact support, the remarks preceding Theorem 3 show that the spectral measure for $`\varphi `$ is bounded by $`f(E)d\mathrm{\Lambda }(E)`$ with $`f(E)`$ uniformly bounded on the compact set $`\mathrm{\Sigma }`$, which completes the proof.∎
Acknowledgments. D. D. was supported by the German Academic Exchange Service through Hochschulsonderprogramm III (Postdoktoranden) and D. L. received financial support from Studienstiftung des Deutschen Volkes (Doktorandenstipendium), both of which are gratefully acknowledged. |
no-problem/9910/cond-mat9910347.html | ar5iv | text | # Spectral Shape of Relaxations in Silica Glass
## Abstract
Precise low-frequency light scattering experiments on silica glass are presented, covering a broad temperature and frequency range ($`9\text{GHz}<\nu <2`$THz). For the first time the spectral shape of relaxations is observed over more than one decade in frequency. The spectra show a power-law low-frequency wing of the relaxational part of the spectrum with an exponent $`\alpha `$ proportional to temperature in the range $`30\text{K}<T<200`$K. A comparison of our results with those from acoustic attenuation experiments performed at different frequencies shows that this power-law behaviour rather well describes relaxations in silica over 9 orders of magnitude in frequency. These findings can be explained by a model of thermally activated transitions in double well potentials.
Relaxations are a characteristic feature of glasses. They show up as a broad quasi-elastic contribution in neutron and light scattering spectra , and also as a damping of sound waves or as a dielectric loss . Acoustic attenuation, or more precisely the internal friction $`Q^1`$, of silica glass (a-SiO<sub>2</sub>) obtained at different frequencies demonstrates the presence of a broad distribution of relaxation rates . For temperatures above some 10 K it is assumed that relaxations are due to thermally activated transitions in asymmetric double well potentials (ADWP’s) , where the same ADWP’s relax via tunnelling at low temperatures ($`T1`$ K) . Within this model it is possible to extract the distribution of barrier heights $`g(V)`$ from the experimental data .
Although relaxations in silica have been extensively studied so far, a spectroscopic work that covers a broad frequency range is still missing. Acoustic and dielectric studies usually measure at a single frequency only, and spectroscopic techniques like neutron and Raman scattering have typically a low frequency limit around 100 GHz and therefore cover only a small part of the relaxational contribution in a range where it is difficult to extract the spectral shape of relaxations. Recently, it has become possible to obtain low frequency light scattering spectra of glasses at temperatures well below $`T_\mathrm{g}`$, extending the spectral resolution to frequencies below 10 GHz for some polymeric and anorganic glasses .
We refine this approach in order to study the much lower signal levels available for silica. Our data for the first time show the spectral shape of relaxations in silica glass over a broad temperature and frequency range. This enables us to perform a cross test on models about relaxations in glasses, regarding the frequency and temperature dependence of relaxations. Furthermore we will compare our data with internal friction data to examine whether both techniques probe the same kind of relaxations.
Depolarized inelastic light scattering spectra of a sample of Suprasil 300 (synthetic silica, Heraeus, $`<`$1 ppm of OH<sup>-</sup>-groups) were obtained using an Ar<sup>+</sup> laser (514.5 nm, 400 mW) and a six-pass Sandercock tandem Fabry-Perot interferometer . The sample was mounted in a dynamic Helium cryostat. The Suprasil windows of the cryostat are antireflection coated, and the cold windows are mounted tensionless in order to avoid tension induced birefringence. Since the signal levels available from the scattering of silica are much lower than for other glasses studied so far , the optical scheme was improved and optimized for alignment on low intensities . We use a 180 back-scattering geometry with a selection of depolarized scattering. Here the laser beam enters a Glan-Taylor polarizer (extinction 10<sup>-6</sup>) as the extraordinary beam. The polarized component of the light is reflected and focused on the sample. The scattered light is collected by the same lens and passes through the same Glan-Taylor prism, transmitting the depolarized component. The scattered light is collected within an angle of about 4. Special care was taken to avoid contributions to the signal from the windows of the cryostat and the polarizer. This background has been carefully measured for all temperatures and free spectral ranges by recording spectra with and without sample. At all settings we find a background of about 3% of the signal, relating it to a contribution from the cold windows, which are close to the focus of our lens, basically at the same temperature as the sample, and of the same material as the sample. Nevertheless, we carefully eliminated its contribution by subtracting the signal from accumulations without sample from all of our spectra. In a double cross check, for some temperatures we also recorded spectra of a sample of Heralux (fused silica, Heraeus) in a near-to-back-scattering-geometry, avoiding any such contributions. The thus recorded spectra agree with the results reported here within our accuracy.
We recorded spectra with the free spectral ranges (FSR’s) of 1000 GHz and 150 GHz over two spectral ranges on either side of the elastic line. The experimentally determined finesse of the spectrometer is better than 120 and typically about 140. In order to suppress higher transmission orders of the tandem (multiples of 20 FSR’s could give contributions to the signal for a Sandercock tandem FPI ), we use a prism in combination with either of two interference filters of a width of 10 nm and 1150 GHz (FWHM). The contrast of the spectrometer was determined to be better than $`10^9`$ at all frequencies.
To further validate the absence of possible contributions from the instrumental tail of the elastic line or from higher transmission orders of the tandem, we measured spectra for both FSR’s at a low temperature, $`T=6`$ K. At this temperature the anti-Stokes part of the spectrum should be almost zero because either the Bose factor or the signal itself are very low for this temperature. Indeed, the anti-Stokes part of the spectrum at this temperature shows no deviations from the dark count level of 2.47 counts/s of our detector, demonstrating the absence of contributions of higher orders or from the elastic line. Any contributions from higher orders are especially problematic for the smaller FSR, because the signal is much higher at frequencies of multiples of 20 of this FSR than in the range of interest: the signal increases with increasing frequency, especially at low temperatures, where the relaxational contribution is small (cf. Fig. 1). For our 6 K-spectrum in this region the sum of the signal and possible higher order contributions is less than 0.3 counts/s, ie, it is absent within the precision of our experiment. Since the vibrational part, that could lead to any parasitic contributions, rises much slower with temperature than the signal from relaxations, it is clear that signals from higher transmission orders do not disturb our spectra.
The upper part of Figure 1 displays the intensity of the depolarized back-scattering spectra of silica. At high frequencies the Boson peak is observed. The signal passes through a minimum and shows features of a central peak. At frequencies of 35 GHz and 20 GHz two narrow peaks are observed that correspond to the longitudinal and transversal Brillouin lines and are due to a leakage from imperfect polarisation and to the finite aperture, respectively. Apart from the Brillouin lines, the central peak shows a power-law frequency dependence, and its shape considerably changes with temperature.
In the susceptibility representation, $`\chi ^{\prime \prime }(\nu )=I/(n(\nu )+1)`$ of the Stokes side, the trivial temperature dependence due to the Bose factor $`n(\nu )`$ is eliminated. The susceptibility data in Figure 1 have been smoothed by adjacent averaging over five points. The presence of two contributions to the low-frequency light scattering in glasses can be distinguished by the different temperature dependence: the vibrational contribution dominates at high frequencies and scales with the Bose factor (ie, the susceptibility is temperature independent), wheras the relaxational spectrum that dominates at lower frequencies strongly changes with temperature. As can be seen by the crossing of the spectra at different temperatures, this is not simply due to an increase of the susceptibility with temperature as would be expected in the case of higher-order scattering processes. For all temperatures, the low-frequency wing of our spectra shows a power-law behaviour. The temperature dependence of the exponent, $`\alpha (T)`$, is shown in the inset of Figure 2: $`\alpha `$ is proportional to temperature up to 200 K, with $`\alpha =T/319`$ K.
Let us see if this frequency and temperature dependence of relaxations can be described within our present understanding of glasses. In 1955 Anderson and Bömmel attributed relaxations in silica to thermally activated processes with a broad distribution of relaxation times . Theodorakopoulos and Jäckle calculated the light scattering due to structurally relaxing two-state defects and show its relation to the acoustic attenuation . This approach was refined by Gilroy and Phillips , who consider thermally activated transitions in ADWP’s. The potential wells are assumed to be the same that at lower temperatures ($`T1`$ K) relax via tunnelling and are responsible for the low temperature anomalies of glasses, ie, the parameters for the potential wells are taken from the tunnelling model . Following the assumption that the distribution of asymmetry parameters $`\mathrm{\Delta }`$ is flat, the light scattering susceptibility $`\chi ^{\prime \prime }(\nu )`$ and the internal friction $`Q^1`$ only depend on the distribution of barrier heights $`g(V)`$ :
$$\chi ^{\prime \prime }(\nu )Q^1\underset{0}{\overset{\mathrm{}}{}}\frac{2\pi \nu \tau }{1+(2\pi \nu \tau )^2}g(V)𝑑V.$$
(1)
Here the relaxation time $`\tau =\tau _0\mathrm{exp}(V/k_\mathrm{B}T)`$,where $`\tau _0`$ is the fastest relaxation time that occurs. Assuming an exponential distribution of barrier heights, $`g(V)=V_0^1\mathrm{exp}(V/V_0)`$, the model predicts for the low frequency wing of the relaxation spectrum, $`\nu (2\pi \tau _0)^1`$, a power-law susceptibility spectrum with an exponent proportional to temperature : $`\alpha =k_\mathrm{B}T/V_0`$. The inset of Figure 2 shows that the exponents of our silica data agree with the model for temperatures below 200 K and we obtain $`V_0/k_\mathrm{B}=319`$ K. At 300 K the observed exponent is less than expected from the model. Since the exponent of the susceptibility cannot become higher than 1 for
relaxation processes and is already close to 1 at room temperature, the deviation at 300 K comes as no surprise. Indeed at higher temperatures up to $`T_\mathrm{g}`$ it is found that the exponent remains constant at 1 . The simple assumption of thermally activated relaxations over barriers with an exponential distribution of barrier heights can therefore well account for the observed temperature dependence of the spectral shape of the low-frequency light scattering data of silica at temperatures up to 200 K.
Within the model of Theodorakopoulos and Jäckle the distribution of activation energies is reflected in the light scattering susceptibility just in the same manner as in the internal friction (ie, $`\chi ^{\prime \prime }(\nu )Q^1`$, cf. Eq. 1) . Taking the internal friction at a frequency that is also accessible to light scattering spectroscopy, the proportionality constant can be eliminated in order to directly compare the results of both techniques. In Figure 2 we plot the susceptibility data of Figure 1 scaled to the internal friction measured at 35 GHz ; a single factor can be used for all spectra, showing that the temperature dependence of $`\chi ^{\prime \prime }(\nu )`$ around 35 GHz is the same as that of the internal friction. Acoustic data from different experiments have a scatter of about a factor of two, and within a factor of two follow the extrapolations of the light scattering data. It appears that relaxations indeed show up in the same manner for both techniques, and that with good approximation the spectral shape of relaxations in silica shows a power-law behaviour with an exponent $`\alpha =T/319`$K over the whole available frequency range $`500\text{Hz}<\nu <500`$ GHz.
This can be further demonstrated by comparing the distributions $`g(V)`$ of the barrier heights obtained from the different experiments. Equation 1 can be further simplified, assuming a broad distribution of barrier heights $`g(V)`$. In this case the
susceptibility spectrum $`\chi ^{\prime \prime }(\nu )`$ and the internal friction $`Q^1(T)`$ for $`2\pi \nu \tau _01`$ directly reflect the distribution of correlation times, and thus the distribution $`g(V)`$ :
$$\chi ^{\prime \prime }Q^1Tg(V),\text{where}V=k_\mathrm{B}T\mathrm{ln}(1/2\pi \nu \tau _0).$$
(2)
Assuming that $`g(V)`$ is temperature independent and that thermally activated transitions determine the light scattering spectra and the internal friction at temperatures above some 10 K, this distribution of barrier heights can be directly extracted from the data by rescaling the axes with $`T`$. Explicitly, we multiply the $`\mathrm{log}(\nu )`$-axis by $`T`$ and divide the $`Q^1`$ and $`\chi ^{\prime \prime }`$ axes by $`T`$.
Then a master curve for $`g(V)`$ should result for the rescaled $`\chi ^{\prime \prime }(\nu )`$ spectra and of the acoustic data $`Q^1(T)`$ for the different frequencies. Since the amplitude of the light scattering data has already been fixed with respect to the acoustic data, the only parameter for the rescaling is $`\nu _0=(2\pi \tau _0)^1`$. Unlike the acoustic experiments our spectra cover this cut-off frequency at the transition from the power-law relaxational contribution to the vibrations in the range of the boson peak (cf. Fig. 1). From the light scattering spectra we obtain $`\nu _0=800`$ GHz (ie, $`\tau _0=0.2`$ ps), which is in good agreement with previous results taken from acoustic data , and use this value for the rescaling of all the data. For the rescaling of the light scattering data we use the frequency range where the power-law holds and where the contribution from vibrations to the spectra is negligible. In principle, the distribution $`g(V)`$ can be obtained from a single experiment at one frequency (cf. eg, ). The light scattering experiment, however, probes the temperature and frequency dependence of the susceptibility simultaneously, allowing a cross test of the model .
Figure 3 shows the result of this rescaling procedure. The light scattering data form the different temperatures show an excellent agreement and the exponential distribution as discussed above. Within the scatter of the acoustic data of about a factor of two, we obtain a master curve for all the available data ($`10\text{K}<T<200`$ K). The rescaling of the abscissa involves both the temperature and the frequency of the data. Therefore at a fixed barrier height of say $`V/k_\mathrm{B}=200`$ K, we compare data that have been obtained at a temperature of 200 K and a frequency of about 200 GHz with those obtained at a temperature of 10 K and a frequency of 500 Hz. Considering this range, it is indeed surprising that the scaling works so well. The rescaled acoustic data cover the range both above and below the temperature of the loss peak found in silica at temperatures between some 25 and 130 K . (In Fig. 2 the precence of this peak is not seen at low frequencies, because the lowest temperature plotted is 32 K.) Taking acoustic data alone, it was found that a modified Gaussian distribution describes the data better than the exponential distribution obtained from our light scattering data . This is demonstrated by the fact that some of the acoustic data show a slight curvature on the semi-logarithmic plot in Figure 3, which is absent for the rescaled light scattering data . However, the simple model with only two parameters, $`\tau _0`$ and $`V_0`$, rather well describes relaxations in silica over a broad temperature and frequency range.
In conclusion, we extend existing light scattering data on silica by about one order of magnitude to lower frequencies, covering a broad range in temperature down to some 20 K. Our data for the first time reveal the spectral shape of relaxations over more than one decade in frequency, where relaxations clearly dominate over the vibrational contribution to the spectrum. The relaxations show a power-law spectral shape at low frequencies with an exponent $`\alpha `$ proportional to temperature. For the temperature range below 200 K our data are in good quantitative agreement with a model attributing relaxations in glasses to thermally activated transitions with an exponential distribution of barrier heights $`g(V)`$. We further compare the light scattering susceptibility $`\chi ^{\prime \prime }(\nu )`$ with the internal friction $`Q^1`$ and find that within a factor of two the power-law behaviour extends down to frequencies of some 500 Hz. Therefore relaxations in silica within the temperature range $`10\text{K}T200\text{K}`$ can be described by two single parameters, $`\nu _0=800`$ GHz and $`V_0/k_\mathrm{B}=319`$ K, for all frequencies extending up to the onset of the boson peak. Although it is known that this model in its simplest form presented here does not work for all glasses , we believe that our demonstration that the model works remarkably well for silica, a paradigmatic glass, should be considered in any refinement of models describing relaxations in glasses.
We thank V. N. Novikov and N. V. Surovtsev for many illuminating discussions and the DFG (SFB 279) for financial support. J. W. appreciates financial support from H. Wiedersich.
# Calculation of the interaction of a neutron spin with an atomic electric field
## I Introduction
Most recent experiments to search for the neutron electric dipole moment (EDM) involve the neutron interacting with electric fields created by laboratory apparatus. However, there is also the possibility of using electric fields produced by atoms in crystals. There are some reasons for believing this might be advantageous – atomic fields are large and the neutrons interact with many atoms in a coherent fashion. However, as we will show, the measurable effects are quite small compared to known background effects due to the motional electric field.
The effect in a crystal experiment can be enhanced if the scattering amplitude due to the EDM interaction can be made to interfere with the much larger scattering amplitude of a nucleus. It will be seen below that the scattering amplitude due to the EDM interaction is imaginary, so it can only interfere with an imaginary nuclear amplitude. The idea of searching for a neutron EDM by measuring the interference between the scattering from an atomic electric field (due to the EDM interaction) with nuclear scattering was proposed by Shull; an experiment was performed in 1964 and was based on scattering from a CdS crystal, because Cd, a strong absorber, has a large imaginary scattering amplitude. In this experiment, the penetration depth, hence reflectivity, depends on the orientation of the neutron spin relative to the momentum transfer.
Recently, a proposal to search for a neutron EDM by scattering in a perfect Si crystal was put forward. In this case, the imaginary part of the nuclear scattering length is very small, and the proposed observable is a rotation of the neutron spin direction caused by the superposition of the spin-dependent imaginary amplitude with the real nuclear amplitude.
Because the calculations regarding this effect do not, to our knowledge, appear in the literature, and the calculations regarding the Shull experiment are lacking in detail, we felt it worthwhile to estimate the size of a neutron spin rotation due to an EDM interaction and, in addition, include the analysis of the $`\vec{v}\times \vec{E}`$ interaction originally studied by Schwinger and demonstrated by Shull and Nathans. We use the Thomas-Fermi model of the atom to give the approximate atomic electric field.
## II Thomas-Fermi model of the atom
The Thomas-Fermi model of the atom is fully developed in §70 of . Briefly, the atom is treated semi-classically with the electron density as a function of position determined by phase space considerations. This leads to a universal function (i.e., it does not depend on atomic number $`Z`$) for the self-consistent electric field within the atom,
$$\sqrt{x}d^2\chi /dx^2=\chi ^{3/2}$$
(1)
where $`\chi `$ describes the shielding (assumed spherically symmetric) of the nuclear point charge, with the boundary conditions that $`\chi (0)=1`$ and $`\chi (\infty )=0`$ (the latter condition determines $`\chi ^{\prime }(0)`$), and the radius $`r`$ is related to $`x`$ as
$$r=xbZ^{-1/3};\qquad b=\frac{1}{2}\left(\frac{3\pi }{4}\right)^{2/3}=0.885$$
(2)
(where we are using atomic units so $`m_ee^2/\hbar ^2=1`$). The electric potential within an atom is given by
$$\varphi (r)=\frac{Ze}{r}\chi \left(\frac{rZ^{1/3}}{b}\right)=\frac{Z^{4/3}}{b}\frac{\chi (x)}{x}.$$
(3)
We point out that the Thomas-Fermi model does not apply for either very large or very small $`x`$; however, the major contribution to the scattering integral is from $`x\sim 1`$, and we would expect this model to give reasonably accurate results.
## III Interaction of an EDM with the atomic electric field
We are interested in calculating the spin-dependent neutron scattering length by the Born approximation for either the $`v\times E`$ field or a neutron permanent electric dipole moment (EDM). Consider first the EDM interaction,
$$V(r)=-de\,\vec{\sigma }\cdot \vec{E}(r)$$
(4)
where $`d`$ is the dipole moment length, $`e`$ is the magnitude of the electron charge, $`\sigma `$ is a Pauli matrix, and $`\stackrel{}{E}(r)`$ is an electric field. The scattering amplitude (length) can be determined by use of the Born approximation,
$$a=\frac{m_n}{2\pi \hbar ^2}\int V(r)\,e^{i\vec{q}\cdot \vec{r}}\,\mathrm{d}^3r$$
(5)
Taking the momentum transfer $`\stackrel{}{q}`$ along $`\widehat{z}`$ as the quantization axis, and using the fact that the electric field is spherically symmetric, we find
$$a=\sigma _z\frac{m_n}{\hbar ^2}\int _0^{\infty }\!\!\int _0^{\pi }de\,E(r)\,e^{iqr\mathrm{cos}\theta }\,\mathrm{cos}\theta \,\mathrm{sin}\theta \,r^2\,\mathrm{d}r\,\mathrm{d}\theta $$
(6)
and the other components are zero because of symmetry. Taking $`\vec{E}(r)=-\varphi ^{\prime }(r)\widehat{r}`$, and using the Thomas-Fermi screening function, the scattering length can be written as, with $`\beta =bZ^{-1/3}`$,
$$a=\sigma _z\frac{m_n}{\hbar ^2}\beta Ze^2d\int _0^{\infty }\!\!\int _0^{\pi }\left[\chi (x)-x\chi ^{\prime }(x)\right]e^{ix\beta q\mathrm{cos}\theta }\,\mathrm{cos}\theta \,\mathrm{sin}\theta \,\mathrm{d}x\,\mathrm{d}\theta $$
(7)
which can be rewritten as
$$a=\sigma _zi\frac{m_n}{m_e}bdZ^{2/3}f(\beta q)$$
(8)
where $`f(\beta q)`$ is the imaginary part (the real part is zero) of the dimensionless integral in the previous equation. The Thomas-Fermi equation was numerically solved using a Runge-Kutta technique, and the integral numerically evaluated. The results, as a function of $`\beta q`$, are shown in Fig. 1.
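As an illustration of this procedure, a minimal numerical sketch of ours (not the authors' original code) is given below. It assumes the standard Thomas-Fermi initial slope $`\chi ^{\prime }(0)\approx -1.588`$, and the angular integral has been done analytically, $`\int _0^{\pi }e^{iz\mathrm{cos}\theta }\mathrm{cos}\theta \,\mathrm{sin}\theta \,\mathrm{d}\theta =2i\,j_1(z)`$, so that $`f(\beta q)=2\int _0^{\infty }[\chi (x)-x\chi ^{\prime }(x)]\,j_1(\beta qx)\,\mathrm{d}x`$:

```python
# Minimal sketch (not the authors' code): solve the Thomas-Fermi equation
# sqrt(x) chi'' = chi^(3/2) and evaluate f(beta*q) via the Bessel form above.
import numpy as np
from scipy.integrate import solve_ivp, trapezoid
from scipy.special import spherical_jn

def tf_rhs(x, y):
    chi, dchi = y
    # guard against tiny negative chi from truncation error at large x
    return [dchi, np.sign(chi) * np.abs(chi)**1.5 / np.sqrt(x)]

x0, xmax = 1e-8, 60.0
sol = solve_ivp(tf_rhs, (x0, xmax), [1.0, -1.5880710],   # standard TF slope
                dense_output=True, rtol=1e-10, atol=1e-12)

def f_tf(bq, n=4000):
    x = np.linspace(x0, xmax, n)
    chi, dchi = sol.sol(x)
    radial = chi - x * dchi               # proportional to E(r) r^2
    return 2.0 * trapezoid(radial * spherical_jn(1, bq * x), x)

print(f_tf(0.58))   # value relevant for Si Bragg scattering (~1, cf. text)
```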
For the case of Si, $`Z=14`$, $`\beta =0.367`$; a typical $`q`$ is approximately $`2\pi /(2\,\AA )\times 0.5\,\AA /\mathrm{a.u.}`$, giving $`\beta q=0.58`$, and the dimensionless integral is about 1. Thus, the difference in the scattering length for the two spin states (along $`\pm \vec{q}`$) is
$$\mathrm{\Delta }a=2ibdZ^{2/3}m_n/m_e=2\times 10^4d$$
(9)
which leads to a spin rotation, on interference with the Si nuclear scattering amplitude ($`a_0=4\times 10^{-13}`$ cm)
$$\mathrm{\Delta }\varphi =\frac{\mathrm{\Delta }a}{a_0}=4.7\times 10^{16}d/\mathrm{cm}$$
(10)
implying that for $`d=5\times 10^{-27}`$ cm, a Bragg reflection from an Si crystal would give a rotation of $`2\times 10^{-10}`$ rad.
## IV $`\stackrel{}{v}\times \stackrel{}{E}`$ interaction
Next, consider the $`\vec{v}\times \vec{E}`$ motional magnetic field interaction which couples to the neutron magnetic moment, first considered by Schwinger in 1948. The possibility of measuring effects from the motional field has been discussed in regard to non-centrosymmetric crystals ($`\alpha `$-quartz), in which case a non-zero average electric field between scattering planes can exist. However, as has been pointed out, there is a $`\vec{v}\times \vec{E}`$ observable even for symmetric crystals.
The Hamiltonian for the $`\vec{v}\times \vec{E}`$ interaction is
$$V(r)=-\vec{\mu }\cdot \left[\left(\frac{\vec{p}}{m_nc}\right)\times \vec{E}(\vec{r})\right]/2.$$
(11)
For the Born approximation, we use the matrix element
$$\langle \vec{k}_2|V(\vec{r})|\vec{k}_1\rangle =-\vec{\mu }\cdot \frac{1}{2}\langle \vec{k}_2|\frac{\vec{p}}{m_nc}\times \vec{E}(\vec{r})-\vec{E}(\vec{r})\times \frac{\vec{p}}{m_nc}|\vec{k}_1\rangle $$
(12)
where $`\vec{k}_1,\vec{k}_2`$ label the incoming and outgoing neutron wavefunctions, and
$$\vec{v}=\vec{p}/m_n=-\frac{i\hbar }{m_n}\vec{\nabla }.$$
(13)
Using
$$\vec{p}\,|\vec{k}_1\rangle =\hbar \vec{k}_1|\vec{k}_1\rangle ;\qquad \langle \vec{k}_2|\,\vec{p}=\hbar \vec{k}_2\langle \vec{k}_2|$$
(14)
gives
$$\langle \vec{k}_2|V(\vec{r})|\vec{k}_1\rangle =-\vec{\mu }\cdot \frac{\hbar }{m_nc}\frac{\vec{k}_1+\vec{k}_2}{2}\times \vec{E}(\vec{r})/2.$$
(15)
Again, assume that $`\vec{q}`$ lies along $`\widehat{z}`$; we note that $`\vec{q}=\vec{k}_2-\vec{k}_1`$ is perpendicular to $`\vec{k}_1+\vec{k}_2`$ because
$$(\vec{k}_2-\vec{k}_1)\cdot (\vec{k}_1+\vec{k}_2)=k_2^2-k_1^2=0$$
(16)
for elastic scattering. By symmetry, the effective electric field lies along $`\widehat{z}`$, and if we take $`\vec{k}_1+\vec{k}_2`$ along $`\widehat{y}`$, perpendicular to $`\vec{q}`$ (with magnitude $`2k\mathrm{cos}\theta _s`$, where $`2\theta _s`$ is the scattering angle), the Born approximation is
$$a=\sigma _x\frac{m_n}{2\pi \hbar ^2}\frac{\mu }{2}k\mathrm{cos}\theta _s\int E(\vec{r})\,\mathrm{cos}\theta \,e^{iqr\mathrm{cos}\theta }\,\mathrm{sin}\theta \,r^2\,\mathrm{d}\theta \,\mathrm{d}\varphi \,\mathrm{d}r$$
(17)
where $`\gamma `$ is the neutron gyromagnetic ratio ($`\approx -3`$ Hz/mG). Thus, the integral is identical to the EDM case, with a different multiplicative constant, and
$$\mathrm{\Delta }a=2i\pi k\mathrm{cos}\theta _s\frac{\beta Ze}{c}\mu f(\beta q)$$
(18)
or a spin rotation about $`\widehat{x}`$ of
$$\mathrm{\Delta }\varphi \approx 4\times 10^{-15}\,\mathrm{cm}/4\times 10^{-13}\,\mathrm{cm}=10^{-2}\,\mathrm{rad}$$
(19)
per Bragg reflection from an Si crystal.
## V Semi-classical model
If we assume there is no electron cloud around the nucleus, then in Eq. (7) $`\chi =1`$, and $`a_{edm}\propto 1/q`$, which is equivalent to the high-momentum limit. We can estimate the EDM effect by taking a classical trajectory with impact parameter $`\mathrm{b}`$ relative to the nucleus. The time-integrated electric-field-induced phase shift, assuming a spin in the $`\widehat{z}`$ direction (propagation in the $`\widehat{x}`$ direction), is given by
$`\mathrm{\Delta }\varphi =`$ $`{\displaystyle \int _{-\infty }^{\infty }}{\displaystyle \frac{edE_z}{\hbar }}{\displaystyle \frac{\mathrm{d}x}{v}}={\displaystyle \frac{Ze^2d}{\hbar v}}{\displaystyle \int _{-\infty }^{\infty }}{\displaystyle \frac{\mathrm{b}}{(\mathrm{b}^2+x^2)^{3/2}}}dx`$ (20)
$`=`$ $`{\displaystyle \frac{2Ze^2d}{\hbar \mathrm{b}v}}={\displaystyle \frac{2Ze^2m_nd}{\hbar ^2\mathrm{b}k}}=2Zd{\displaystyle \frac{m_n}{m_e}}{\displaystyle \frac{1}{k\mathrm{b}}}`$ (21)
which is essentially the same as before, in the high momentum limit, if we take $`\mathrm{b}=a_{nuc}`$.
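For completeness, the elementary integral used in Eq. (20) is

$$\int _{-\infty }^{\infty }\frac{\mathrm{b}\,\mathrm{d}x}{(\mathrm{b}^2+x^2)^{3/2}}=\left[\frac{x}{\mathrm{b}\sqrt{\mathrm{b}^2+x^2}}\right]_{-\infty }^{\infty }=\frac{2}{\mathrm{b}},$$

which is what converts the line integral of the Coulomb field into the $`2Ze^2d/(\hbar \mathrm{b}v)`$ form above.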
## VI Discussion
Comparing Eqs. (10) and (19), we see that the motional field spin rotation is on the order of $`10^8`$ times larger than that due to an EDM with a magnitude that would be of interest in an improved experiment. Unfortunately, the effects cannot be switched on and off as in the case of the more conventional experiments based on spin precession in an applied electric field. Although the two scattering effects are proportional to $`\sigma _x`$ and $`\sigma _z`$ respectively, discrimination between the effects relies on an absolute determination of the polarization and scattering axes. One can also be concerned with the normal nuclear parity violation, which can combine with a misalignment to produce effects that mimic T-violation. This and other issues relevant for a realistic EDM scattering experiment are similar to those relating to a study of time reversal violating effects in slow neutron transmission through polarized matter, for which the issues have been addressed in some detail; in particular, the constraints on near-perfect field and polarization alignment, and the inability to discriminate effects due to misalignments, have been emphasized. The scattering angle constraints in a Bragg scattering EDM experiment are analogous to the constraints on the sample polarization axis in a neutron transmission experiment as discussed in . Given the constraints (e.g., scattering angle and polarization alignment to $`10^{-8}`$ radian absolute accuracy, which requires $`10^{16}`$ neutron counts to measure experimentally), achieving any significant increase in the limit for the neutron EDM would seem a daunting task.
# The extreme high frequency peaked BL Lac 1517+656

Based on observations from the German-Spanish Astronomical Center, Calar Alto, operated by the Max-Planck-Institut für Astronomie, Heidelberg, jointly with the Spanish National Commission for Astronomy
## 1 Introduction
The physical nature of BL Lacertae objects is not well understood yet. The most common view of BL Lac objects is that we are looking into a highly relativistic jet (Blandford & Rees blandford (1978)). This model can explain several observational parameters, but there are still unsolved problems, like the nature of the mechanisms that generate and collimate the jet or the physical nature and evolution along the jet. An important question is also whether there is a fundamental difference between BL Lac objects that are found because of their emission in the radio or in the X-ray range, respectively. In order to study the nature of this class of BL Lac, extreme objects help to give constraints on the physics which is involved. One of the greatest problems while studying BL Lac objects is the difficulty in determining their redshifts because of the absence of strong emission and absorption lines. Usually large telescopes and long exposure times are needed to detect the absorption lines in the surrounding host galaxy.
## 2 History of 1517+656
Even though 1517+656 is an X-ray selected BL Lac, this object was detected in the radio band before being known as an X-ray source. It was first noted in the NRAO Green Bank $`4.85\,\mathrm{GHz}`$ catalog with a radio flux density of $`39\pm 6\,\mathrm{mJy}`$ (Becker et al. becker (1991)) and was also included in the 87 Green Bank Catalog of Radio Sources with a similar flux density of $`35\,\mathrm{mJy}`$ (Gregory & Condon 87GB (1991)), but in both cases without identification of the source. The NRAO Very Large Array at $`1.4\,\mathrm{GHz}`$ confirmed 1517+656 as having an unresolved core with no evidence of extended emission, although a very low surface brightness halo could not be ruled out (Kollgaard et al. kollgaard (1996)). The source was first included as an X-ray source in the HEAO-1 A-3 Catalog and was also detected in the Einstein Slew Survey (Elvis et al. elvis (1992)) in the soft X-ray band ($`0.2\text{–}3.5\,\mathrm{keV}`$) with the Imaging Proportional Counter (IPC, Gorenstein et al. IPC (1981)). The IPC count rate was $`0.91\,\mathrm{cts}\,\mathrm{sec}^{-1}`$, but the total Slew Survey exposure time was only $`13.7\,\mathrm{sec}`$. Even though 1517+656 by then was a confirmed BL Lac object (Elvis et al. elvis (1992)) with an apparent magnitude of $`B=15.5\,\mathrm{mag}`$, no redshift data were available. Known as a bright BL Lac, 1517+656 has been studied several times at different wavelengths in recent years. Brinkmann & Siebert (brinkmann (1994)) presented ROSAT PSPC ($`0.07\text{–}2.4\,\mathrm{keV}`$) data and determined the flux to be $`f_\mathrm{X}=2.89\times 10^{-11}\,\mathrm{erg}\,\mathrm{cm}^{-2}\,\mathrm{sec}^{-1}`$ and the spectral index to be $`\mathrm{\Gamma }=2.01\pm 0.08`$ <sup>1</sup><sup>1</sup>1The energy index $`\alpha _E`$ is related to the photon index by $`\mathrm{\Gamma }=\alpha _E+1`$. Observations of 1517+656 with BeppoSAX in the $`2\text{–}10\,\mathrm{keV}`$ band in March 1997 gave an X-ray flux of $`f_\mathrm{X}=1.03\times 10^{-11}\,\mathrm{erg}\,\mathrm{cm}^{-2}\,\mathrm{sec}^{-1}`$ and a steeper spectral slope of $`\mathrm{\Gamma }=2.44\pm 0.09`$ (Wolter et al. wolter (1998)). The Energetic Gamma Ray Experiment Telescope (EGRET, Kanbach et al. EGRET (1988)) on the Compton Gamma Ray Observatory did not detect 1517+656 but gave an upper flux limit of
$`8\times 10^{-8}\,\mathrm{photons}\,\mathrm{cm}^{-2}\,\mathrm{sec}^{-1}`$ for $`E>100\,\mathrm{MeV}`$ (Fichtel et al. fichtel (1994)). In the hard X-rays 1517+656 was first detected with OSSE at $`(3.6\pm 1.2)\times 10^{-3}\,\mathrm{photons}\,\mathrm{cm}^{-2}\,\mathrm{sec}^{-1}`$ in the $`0.05\text{–}10\,\mathrm{MeV}`$ band (McNaron-Brown et al. mcnaron (1995)). The BL Lac was then detected in the EUVE All-Sky Survey with a Gaussian significance of $`2.6\sigma `$ during a $`1362\,\mathrm{sec}`$ exposure, giving lower and upper count rate limits of $`0.0062\,\mathrm{cps}`$ and $`0.0189\,\mathrm{cps}`$, respectively (Marshall et al. EUVE (1995)). For a plot of the spectral energy distribution see Wolter et al. wolter (1998).
## 3 Optical Data
The BL Lac 1517+656 was also included in the Hamburg BL Lac sample selected from the ROSAT All-Sky Survey. This complete sample consists of 35 objects forming a flux limited sample down to $`f_\mathrm{X}(0.5\text{–}2.0\,\mathrm{keV})=8\times 10^{-13}\,\mathrm{erg}\,\mathrm{cm}^{-2}\,\mathrm{sec}^{-1}`$ (Bade et al. bade98 (1998), Beckmann beckmann (1999)). Studying evolutionary effects, we had to determine the redshifts of the objects in our sample. In February 1998 we took a half hour exposure of 1517+656 with the 3.5m telescope on Calar Alto, Spain, equipped with MOSCA. Using a grism sensitive in the $`4200\text{–}6600\,\mathrm{\AA }`$ range with a resolution of $`3\,\mathrm{\AA }`$, it was possible to detect several absorption lines. The spectrum was sky subtracted and flux calibrated using the standard star HZ44. Identifying the lines with iron and magnesium absorption, we determined the redshift of 1517+656 to be $`z=0.7024\pm 0.0006`$ (see Fig. 1). The part of the spectrum with the FeII and MgII doublets is shown in Fig. 3. The BL Lac had also been a target for follow-up observation for the Hamburg Quasar Survey (HQS; Hagen et al. Hagen (1995)) in 1993, because it had no published identification then and was independently found by the quasar selection of the HQS. The $`2700\,\mathrm{sec}`$ exposure, taken with the 2.2m telescope on Calar Alto and the Boller & Chivens spectrograph, showed a power-law like continuum; the significance of the absorption lines in the spectrum was not clear due to the moderate resolution of $`10\,\mathrm{\AA }`$ (Fig. 2). Nevertheless the MgII doublet at 4761 and $`4774\,\mathrm{\AA }`$ is also detected in the 1993 spectrum, though only marginally resolved (see Table 2). The equivalent width of the doublet is comparable in both spectra ($`W_\mathrm{\AA }=0.8/0.9`$ for the 1993/1998 spectrum, respectively). Also the FeII absorption doublet at $`4403/4427\,\mathrm{\AA }`$ ($`\lambda _{\mathrm{rest}}=2586.6/2600.2\,\mathrm{\AA }`$) and MgI at $`4859\,\mathrm{\AA }`$ ($`\lambda _{\mathrm{rest}}=2853.0\,\mathrm{\AA }`$) are detectable. For a list of the detected lines, see Table 1. Comparison with equivalent widths of absorption lines in known elliptical galaxies is difficult because of the underlying non-thermal continuum of the BL Lac jet. But the relative line strengths in the FeII and MgII doublets are comparable to those measured in other absorption systems detected in BL Lac objects (e.g. 0215+015, Blades et al. blades (1985)).
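The consistency of these identifications is easy to verify; a small script of the following kind (an illustration of ours, not part of the original reduction; the rest wavelengths are the standard laboratory values) recovers the quoted redshift from each line:

```python
# Consistency check of the absorption line identifications:
# every identified line should give (nearly) the same redshift.
rest = {"MgII 2796": 2796.35, "MgII 2803": 2803.53,
        "FeII 2587": 2586.65, "FeII 2600": 2600.17,
        "MgI 2853":  2852.96}
observed = {"MgII 2796": 4761.0, "MgII 2803": 4774.0,
            "FeII 2587": 4403.0, "FeII 2600": 4427.0,
            "MgI 2853":  4859.0}
for line, lam0 in rest.items():
    print(f"{line}: z = {observed[line] / lam0 - 1.0:.4f}")
# -> every line gives z close to 0.7024
```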
Because no emission lines are present and the redshift is measured using absorption lines, the redshift could belong to an absorbing system in the line of sight, as e.g. detected in the absorption line systems in the spectrum of 0215+015 (Bergeron & D’Odorico bergeron (1986)). A higher redshift would make 1517+656 even more luminous; we will consider this case in the further discussion, though we assume that the absorption is caused by the host galaxy of the BL Lac. Assuming a single power law spectrum with $`f_\nu \propto \nu ^{-\alpha _o}`$, the spectral slope in the $`4700\text{–}6600\,\mathrm{\AA }`$ band can be described by $`\alpha _o=0.86\pm 0.07`$. The high redshift of this object is highly plausible also because it was not possible to resolve its host galaxy on HST snapshot exposures (Scarpa et al. scarpa (1999)).
The apparent magnitude varies slightly through the different epochs, having reached the faintest value of $`R=15.9`$ mag and $`B=16.6`$ mag in February 1999 (direct imaging with the Calar Alto 3.5m and MOSCA). These values were derived by comparison with photometric standard stars in the field of view (Villata et al. villata (1998)). $`H_0=50\,\mathrm{km}\,\mathrm{sec}^{-1}\,\mathrm{Mpc}^{-1}`$ and $`q_0=0.5`$ lead to an absolute optical magnitude of at least $`M_R=-27.2\,\mathrm{mag}`$ and $`M_B=-26.4`$ (including K-correction).
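For orientation, a minimal sketch of the underlying arithmetic (our own illustration; the K-correction applied by the authors is not reproduced here):

```python
# Illustrative check of the absolute magnitude (K-correction omitted):
# Einstein-de Sitter luminosity distance d_L = (2c/H0)(1+z)(1 - 1/sqrt(1+z)).
import math

c, H0, z, m_R = 2.998e5, 50.0, 0.7024, 15.9     # km/s, km/s/Mpc, mag
d_L = (2.0 * c / H0) * (1.0 + z) * (1.0 - 1.0 / math.sqrt(1.0 + z))  # Mpc
mu = 5.0 * math.log10(d_L) + 25.0               # distance modulus
print(f"d_L = {d_L:.0f} Mpc, mu = {mu:.2f}, M_R = {m_R - mu:.1f}")
# -> M_R about -27.5 before K-correction, bracketing the quoted -27.2
```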
## 4 Mass of 1517+656
Scarpa et al. (scarpa (1999)) report the discovery of three arclike structures around 1517+656 in their HST snapshot survey of BL Lac objects. The radius of this possible fragmented Einstein ring is 2.4 arcsec. If this feature indeed represents an Einstein ring, the mass of the host galaxy of 1517+656 can easily be estimated. As the redshift of these background objects is not known, we can only derive a lower limit for the mass of the lens.
For a spherically symmetric mass distribution (with $`\theta `$ being the radius of the Einstein ring, $`D_\mathrm{d}`$ the angular size distance from the observer to the lens, $`D_\mathrm{s}`$ from observer to the source, and $`D_{\mathrm{ds}}`$ the distance from the lens to the source) we get (cf. Schneider et al. schneider (1993)):
$$M=\theta ^2\frac{D_\mathrm{d}D_\mathrm{s}}{D_{\mathrm{ds}}}\frac{c^2}{4G}$$
(1)
Thus the lower limit for the mass inside the Einstein ring is $`M=1.5\times 10^{12}M_{\odot }`$ for Einstein-de Sitter cosmology and $`H_0=50\,\mathrm{km}\,\mathrm{sec}^{-1}\,\mathrm{Mpc}^{-1}`$. For other realistic world models (also including a positive cosmological constant), this limit is even higher.
Assuming an isothermal sphere for the lens, the velocity dispersion in the restframe can be calculated by
$$\sigma _v^2=\frac{\theta }{4\pi }\frac{D_\mathrm{s}}{D_{\mathrm{ds}}}c^2$$
(2)
Independent of $`H_0`$ we get a value of at least $`330\,\mathrm{km}\,\mathrm{sec}^{-1}`$ for Einstein-de Sitter cosmology, and slightly less ($`320\,\mathrm{km}\,\mathrm{sec}^{-1}`$) for a flat low-density universe ($`\mathrm{\Omega }_\mathrm{M}=0.3`$, $`\mathrm{\Omega }_\mathrm{\Lambda }=0.7`$). Other models again lead to even higher values. The true values of the mass and velocity dispersion might be much higher if the redshift of the source is significantly below $`z\approx 2`$. Figures 4 and 5 show the mass and velocity dispersion as functions of the source redshift.
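These limits follow directly from Eqs. (1) and (2); a minimal numerical sketch (our own illustration, using the closed-form Einstein-de Sitter angular size distance and taking the source redshift to infinity for the lower limit):

```python
# Lower limits from the 2.4 arcsec ring (illustrative sketch):
# EdS cosmology, H0 = 50 km/s/Mpc, lens at z = 0.7024, source at z -> inf.
import math

c, G, H0 = 2.998e5, 4.301e-9, 50.0   # km/s; Mpc (km/s)^2 / M_sun; km/s/Mpc
z_d = 0.7024
theta = 2.4 * math.pi / (180.0 * 3600.0)        # Einstein radius in radians

# EdS angular size distance of the lens (Mpc)
D_d = (2.0 * c / H0) / (1.0 + z_d) * (1.0 - 1.0 / math.sqrt(1.0 + z_d))
ratio = math.sqrt(1.0 + z_d)          # limit of D_s / D_ds for z_s -> inf

M = theta**2 * D_d * ratio * c**2 / (4.0 * G)             # Eq. (1)
sigma = math.sqrt(theta / (4.0 * math.pi) * ratio) * c    # Eq. (2)
print(f"M > {M:.1e} M_sun,  sigma_v > {sigma:.0f} km/s")
# -> about 1.5e12 M_sun and 330 km/s, as quoted in the text
```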
If the observed absorption is caused by a foreground object and the redshift of 1517+656 is higher than 0.7, the mass and velocity dispersion of the host galaxy have to be even higher.
More detailed modelling of this system will be possible when the redshift of the background object is measured. If the arcs are caused by galaxies at different redshifts, the mass distribution in the outer parts of the host galaxy of 1517+656 can be determined, which will provide very important data for the understanding of galaxy halos. High resolution and high S/N direct images may allow the use of more realistic models than symmetrical mass distributions by providing further constraints.
## 5 Discussion
The BL Lac 1517+656 with $`M_R=-27.2\,\mathrm{mag}`$ and $`M_B=-26.4`$ is the most luminous BL Lac object in the optical band. Padovani & Giommi (padovani (1995)) presented in their catalogue of 233 known BL Lacertae objects an even brighter candidate than 1ES1517+656: PKS 0215+015 (redshift $`z=1.715`$, $`V=15.4\,\mathrm{mag}`$, Véron-Cetty & Veron veron93 (1993)). This radio source had been identified by Bolton & Wall (bolton (1969)) as an $`18.5\,\mathrm{mag}`$ QSO. The object was mainly in a bright phase starting from 1978, and became faint again from mid-1983 onwards (Blades et al. blades (1985)). Its brightness is now $`V=18.8\,\mathrm{mag}`$ ($`M_V=-26.2\,\mathrm{mag}`$; Kirhakos et al. kirhakos (1994), Véron-Cetty & Veron veron98 (1998)).
Also the X-ray properties of 1517+656 are extreme: with an X-ray flux of $`f_\mathrm{X}=2.89\times 10^{-11}\,\mathrm{erg}\,\mathrm{cm}^{-2}\,\mathrm{sec}^{-1}`$ in the ROSAT PSPC band we have a luminosity of $`L_X=7.9\times 10^{46}\,\mathrm{erg}\,\mathrm{sec}^{-1}`$, which corresponds to a monochromatic luminosity at $`2\,\mathrm{keV}`$ of $`L_X=4.6\times 10^{21}\,\mathrm{W}\,\mathrm{Hz}^{-1}`$. The radio flux of $`37.7\,\mathrm{mJy}`$ at $`1.4\,\mathrm{GHz}`$ leads to $`L_R=1.02\times 10^{26}\,\mathrm{W}\,\mathrm{Hz}^{-1}`$. Thus 1517+656 is up to now one of the most luminous known BL Lacs in the X-ray, radio and optical bands, also compared to the newest results from HST observations (Falomo et al. falomo (1999)). They give a detailed analysis for more than 50 BL Lac objects with redshift $`z<0.5`$, none of which has an absolute magnitude $`M_R<-26`$. Compared to the 22 BL Lacs in the complete EMSS sample (Morris et al. morris (1991)), 1517+656 is even more luminous in the radio, optical and X-ray bands than all of those high frequency peaked BL Lac objects (HBL). Finding an HBL, like 1517+656 with $`\nu _{\mathrm{peak}}=4.0\times 10^{16}\,\mathrm{Hz}`$ (Wolter et al. wolter (1998)), of such brightness is even more surprising, because the HBL are usually thought to be less luminous than the low frequency peaked ones (e.g. Fossati et al. fossati (1998), Perlman & Stocke perlman (1993), Januzzi et al. januzzi (1994)). In comparison to the SEDs for different types of blazars, as shown in Fossati et al. (fossati (1998)), 1517+656 shows a remarkable behaviour. The radio properties are similar to an HBL ($`\mathrm{log}(\nu L_{4.85\,\mathrm{GHz}})=42.7`$), while in the V band ($`\mathrm{log}(\nu L_{5500\,\AA })=46.1`$) and in the X-rays ($`\mathrm{log}(\nu L_{1\,\mathrm{keV}})=46.4`$) it lies between bright LBL and faint FSRQ objects.
On the other hand it is not surprising to find one of the most luminous BL Lac objects in a very massive galaxy with $`M>2\times 10^{12}M_{\odot }`$. This mass is a lower limit, as long as the redshift of 1517+656 could be larger than $`z=0.702`$, and it depends on the cosmological model and on the redshift of the lensed object (see Fig. 4).
###### Acknowledgements.
We would like to thank H.-J. Hagen for developing the optical reduction software and for taking the 1993 spectrum of HS 1517+656. Thanks to Anna Wolter and the other colleagues from the Osservatorio Astronomico di Brera for fruitful discussion. This work has received partial financial support from the Deutsche Akademische Austauschdienst.
# Investigations of the Local Supercluster Velocity Field
## 1 Introduction
When studying the dynamical properties of a structure like the Local (Virgo) supercluster (LSC) one obviously cannot use the standard solutions to the Friedmann-Robertson-Walker (FRW) world model, which are valid in a homogeneous environment only. General Relativity does, however, provide means for this kind of problem, namely the so-called Tolman-Bondi (TB) model. While no longer requiring strict homogeneity, spherical symmetry is a necessary requisite. In the present paper we address the question how well the velocity field in the LSC follows the TB prediction, i.e. how well it meets the requirement of spherical symmetry.
Previous studies (Tully & Shaya Tully84 (1984); Teerikorpi et al. Teerikorpi92 (1992) \- hereafter Paper I) provided evidence for the expected velocity field if the mass of the Virgo cluster itself is roughly equal to its virial mass. However, the Tully-Fisher distances used caused significant scatter. Now we possess a high-quality sample of galaxies with accurate distances from Cepheids.
Tolman (Tolman34 (1934)) found the general solution to Einstein’s field equations for a spherically symmetric pressure-free dust universe in terms of the comoving coordinates. The metric can be expressed as:
$$ds(r,\tau )^2=d\tau ^2-\frac{R^{\prime }(r,\tau )^2}{1+f(r)}dr^2-R(r,\tau )^2d\mathrm{\Omega }^2,$$
(1)
where $`d\mathrm{\Omega }^2=d\theta ^2+\mathrm{sin}^2\theta d\varphi ^2`$ and $`f(r)`$ is some unknown function of the comoving radius $`r`$. $`R`$ corresponds to the usual concept of distance. Integration of the equation of motion yields:
$$\dot{R}^2=\frac{F(r)}{R}+f(r).$$
(2)
$`\dot{R}`$ refers to $`\partial R/\partial \tau `$ and $`R^{\prime }`$ to $`\partial R/\partial r`$. $`F(r)`$ is another arbitrary function of $`r`$.
Bondi (Bondi47 (1947)) interpreted Eq. 2 as the total energy equation: $`f(r)`$ is proportional to the total energy of the hypersurface of radius $`r`$ and $`F(r)`$ is proportional to the mass inside $`r`$. In this interpretation it is clear that one must require $`F(r)>0`$. Eq. 2 integrates into three distinctive solutions. Two of them are expressed in terms of a parameter $`\eta `$, the development angle of the mass shell in consideration:
$`R=\frac{F}{2f}(\mathrm{cosh}\eta -1)`$
$`\tau -\tau _0(r)=\frac{F}{2f^{3/2}}(\mathrm{sinh}\eta -\eta ),`$ (3)
for positive energy function $`f(r)>0`$,
$`R=\frac{F}{-2f}(1-\mathrm{cos}\eta )`$
$`\tau -\tau _0(r)=\frac{F}{2(-f)^{3/2}}(\eta -\mathrm{sin}\eta ),`$ (4)
for negative energy function $`f(r)<0`$. The third solution is for $`f(r)=0`$:
$$R=\left(\frac{9F}{4}\right)^{1/3}\left[\tau -\tau _0(r)\right]^{2/3}.$$
(5)
It is important to note that the solution has three undefined functions: $`f(r)`$, $`F(r)`$ and $`\tau _0(r)`$. One of these can be fixed by defining some arbitrary transformation of the comoving coordinate $`r`$. In our application we set $`\tau _0(r)\equiv 0`$ and interpret $`\tau `$ as the time elapsed since the Big Bang, i.e. we equate $`\tau `$ with the age of the Universe $`T_0`$.
To find a TB-prediction for the observed velocity, we use the formulation of Paper I, based on the solutions given by Olson & Silk (Olson79 (1979)). In this approach the development angle $`\eta `$ needed for the velocity is solved with the help of a function:
$$A(R,T_0)=\frac{\sqrt{GM(R)}T_0}{R^{3/2}},$$
(6)
where $`R`$ is the distance of a spherical mass shell from the origin of the TB metric, $`M(R)`$ is the mass contained within the shell and $`G`$ is the gravitational constant. $`\eta `$ is solved either from Eq. 7 or Eq. 8 of Paper I depending on the value of $`A(R,T_0)`$. Ekholm (Ekholm96 (1996); hereafter E96) calculated in his Appendix B the exact value of $`A`$ dividing the family of solutions into open, hyperbolic solutions ($`f(r)>0`$) and closed, elliptic solutions ($`f(r)<0`$): $`A=\sqrt{2}/3\approx 0.47`$.
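Combining Eqs. 3–6 with the Newtonian identification $`F(r)=2GM(r)`$ (consistent with Bondi’s interpretation quoted above) gives the convenient closed forms $`A=(\eta -\mathrm{sin}\eta )/(1-\mathrm{cos}\eta )^{3/2}`$ for the elliptic branch and $`A=(\mathrm{sinh}\eta -\eta )/(\mathrm{cosh}\eta -1)^{3/2}`$ for the hyperbolic one, both reducing to $`\sqrt{2}/3`$ as $`\eta \rightarrow 0`$. A minimal root-finding sketch (our own illustration, not the Paper I code):

```python
# Solve the development angle eta for a given A (illustrative sketch).
import math
from scipy.optimize import brentq

A_FLAT = math.sqrt(2.0) / 3.0   # boundary between branches, ~0.47

def eta_from_A(A):
    if A > A_FLAT:   # closed, elliptic solution (f < 0), 0 < eta < 2*pi
        g = lambda e: (e - math.sin(e)) / (1.0 - math.cos(e))**1.5 - A
        return brentq(g, 1e-6, 2.0 * math.pi - 1e-6)
    else:            # open, hyperbolic solution (f > 0)
        g = lambda e: (math.sinh(e) - e) / (math.cosh(e) - 1.0)**1.5 - A
        return brentq(g, 1e-6, 50.0)

print(eta_from_A(0.60))   # a mildly overdense shell, elliptic branch
```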
## 2 Details of the model used
In Paper I, Ekholm & Teerikorpi (Ekholm94 (1994); hereafter ET94) and E96, the starting point was a “known” mass $`M(R)`$ for each $`R`$, inferred from a density distribution given as a sum of an excess density and the uniform cosmological background. We assume:
$$\rho (R)=\frac{3H_0^2q_0}{4\pi G}(1+kR^{-\alpha }).$$
(7)
$`H_0`$ is the Hubble constant, $`q_0`$ is the deceleration parameter. $`k`$ and $`\alpha `$ define the details of the density model and will be explained shortly.
E96 further developed this formalism by defining a “local Hubble constant” $`H_0^{\prime }=V_{\mathrm{cosm}}(d=1)`$, where $`d`$ is calculated from Eq. 14. By setting $`d=1`$ we can fix the mass excess $`k^{\prime }`$ independently of the density gradient $`\alpha `$. In this manner the quantity $`A`$ can be expressed in terms of $`d`$ and $`q_0`$ (Eq. 7 in E96):
$$A(d,q_0)=C(q_0)\sqrt{q_0(1+k^{\prime }d^{-\alpha })}.$$
(8)
The factor $`C(q_0)`$ depends on the cosmological FRW world model chosen (cf. Eqs. 8-10 in E96). The mass contained by a shell of radius $`d`$ is (Eq. 4 in E96):
$$M(d)=\frac{H_0^2}{G}q_0R_{\mathrm{Virgo}}^3d^3(1+k^{\prime }d^{-\alpha }).$$
(9)
$`k^{\prime }`$ ($`k`$ normalized to the Virgo distance) provides the mass excess within a sphere of radius $`d=1`$ and can be fixed from our knowledge of the observed velocity of the centre of the mass structure and of the infall velocity of the Local Group (LG) as follows. Let $`V_{\mathrm{Virgo}}^{\mathrm{obs}}`$ be the observed velocity of Virgo and $`V_{\mathrm{LG}}^{\mathrm{in}}`$ the infall velocity of the LG. The predicted velocity of a galaxy with respect to the centre of the structure in consideration was given by E96 (his Eq. 11):
$$v(d)=\frac{\gamma dV_{\mathrm{Virgo}}^{\mathrm{obs}}\varphi (\eta _0)}{C(q_0)}$$
(10)
with
$$\gamma =(V_{\mathrm{Virgo}}^{\mathrm{obs}}+V_{\mathrm{LG}}^{\mathrm{in}})/V_{\mathrm{Virgo}}^{\mathrm{obs}}.$$
(11)
$`\varphi (\eta _0)`$ is the angular part of Eqs. 15 and 16 given by ET94, where $`\eta _0`$ is the development angle corresponding to the given value of $`A`$. Hence for each $`V_{\mathrm{LG}}^{\mathrm{in}}`$ one may fix $`k^{\prime }`$ by requiring:
$$v(1)=V_{\mathrm{Virgo}}^{\mathrm{obs}}.$$
(12)
One clear advantage of this formulation – being perhaps otherwise idealistic – is that the value of $`k^{\prime }`$ depends only on the two velocities given above for a fixed FRW world model ($`H_0`$, $`q_0`$). Thus $`\alpha `$ will alter only the distribution of dynamical matter (the larger the $`\alpha `$, the more concentrated the structure).
Finally, we need the predicted counterpart of the observed velocity of a galaxy. In the present application it is solved from:
$$V_{\mathrm{pred}}(d_{\mathrm{gal}})=V_{\mathrm{Virgo}}^{\mathrm{obs}}\mathrm{cos}\mathrm{\Theta }\pm v(d)\sqrt{1-\mathrm{sin}^2\mathrm{\Theta }/d^2},$$
(13)
where $`d_{\mathrm{gal}}=R_{\mathrm{gal}}/R_{\mathrm{Virgo}}`$ is the relative distance of a galaxy from the LG, $`\mathrm{\Theta }`$ is the corresponding angular distance and $`d`$, the distance to the galaxy measured from the centre of the structure, is evaluated from:
$$d=\sqrt{1+d_{\mathrm{gal}}^2-2d_{\mathrm{gal}}\mathrm{cos}\mathrm{\Theta }}.$$
(14)
The ($`-`$) sign is valid for points closer than the tangential point, $`d_{\mathrm{gal}}<\mathrm{cos}\mathrm{\Theta }`$, and the ($`+`$) sign for $`d_{\mathrm{gal}}\geq \mathrm{cos}\mathrm{\Theta }`$.
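The projection geometry of Eqs. 13 and 14 is easy to encode; a minimal sketch (our own illustration; the function `v_of_d` stands for the infall law $`v(d)`$ of Eq. 10, which is supplied by the caller and left abstract here):

```python
# Predicted systemic velocity from the TB geometry of Eqs. 13-14.
import math

V_VIRGO_OBS = 980.0    # km/s, adopted observed velocity of Virgo

def v_pred(d_gal, theta, v_of_d):
    d = math.sqrt(1.0 + d_gal**2 - 2.0 * d_gal * math.cos(theta))   # Eq. 14
    proj = math.sqrt(max(0.0, 1.0 - math.sin(theta)**2 / d**2))
    sign = -1.0 if d_gal < math.cos(theta) else 1.0                 # Eq. 13
    return V_VIRGO_OBS * math.cos(theta) + sign * v_of_d(d) * proj

# consistency check: a pure Hubble flow around Virgo, v(d) = 1200*d km/s,
# returns exactly 1200*d_gal - 220*cos(theta), i.e. unperturbed expansion
# as seen from an LG infalling at 220 km/s:
print(v_pred(0.5, math.radians(7.0), lambda d: 1200.0 * d))  # ~381.6 km/s
```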
## 3 The sample
Eq. 14 reveals one significant difficulty in an analysis of this kind. In order to find the relative distances $`d_{\mathrm{gal}}`$ one needs good estimates for the distances to the galaxies $`R_{\mathrm{gal}}`$. Such distances are difficult to obtain. Distances inferred from photometric data via, say, the Tully-Fisher relation are hampered by large scatter and, as a result, by the Malmquist bias, which is difficult to correct for. Distances inferred from velocities using the Hubble law are obviously quite unsuitable.
The HIPPARCOS satellite has provided a sample of galactic Cepheids from which Lanoix et al. (1999a ) obtained a new calibration of the $`PL`$-relation in both V and I band. Lanoix (Lanoix99 (1999)) extracted a sample of 32 galaxies from the Extragalactic Cepheid Database (Lanoix et al. 1999b ) and, by taking into account the incompleteness bias in the extragalactic $`PL`$-relation (Lanoix et al. 1999c ), he inferred the distance moduli for these galaxies, with 23 based on HST measurements and the rest on ground-based measurements. It is also important to note that the sample is very homogeneous and the method for calculating the distance modulus is quite accurate. The photometric distances found in this way are far more accurate and of better quality than those derived using the Tully-Fisher relation, a fact that compensates for the smallness of our sample.
We intend to use these galaxies together with the TB-model described in the previous two sections to study the gross features of the dynamical structure of the LSC. Because we need only the observed velocities, $`V_{\mathrm{obs}}`$ and the Cepheid distance moduli $`\mu `$, we can avoid the usual difficulties and caveats of other photometric distance determinations. The velocities were extracted from the Lyon-Meudon Extragalactic Database LEDA. By observed velocity we refer to the mean heliocentric velocity corrected to the LG centroid according to Yahil et al. (Yahil77 (1977)).
We need, however, some additional information: $`V_{\mathrm{Virgo}}^{\mathrm{obs}}`$, $`V_{\mathrm{LG}}^{\mathrm{in}}`$ and $`R_{\mathrm{Virgo}}`$. For the observed velocity of the centre of the LSC the value preferred by Paper I is $`V_{\mathrm{Virgo}}^{\mathrm{obs}}=980.0\,\mathrm{km}\,\mathrm{s}^{-1}`$. For the infall velocity of the LG into the centre we choose $`V_{\mathrm{LG}}^{\mathrm{in}}=220\,\mathrm{km}\,\mathrm{s}^{-1}`$ (Tammann & Sandage Tammann85 (1985)). It is worth noting that though this value has fluctuated, this relatively old value is still quite compatible with recent re-evaluations (cf. Federspiel et al. Federspiel98 (1998)). Furthermore, the fluctuations have not been very significant. We also need an estimate for the distance of the centre of the LSC, $`R_{\mathrm{Virgo}}`$. One possibility is to establish the Hubble constant $`H_0`$ from some independent method. Lanoix (Lanoix99 (1999)) found using SNe Ia’s:<sup>1</sup><sup>1</sup>1 $`H_0`$ derived is not completely independent because both the SNe and the extragalactic $`PL`$-relation are calibrated with the same Cepheids.
$$H_0=57\pm 3\,\mathrm{km}\,\mathrm{s}^{-1}\,\mathrm{Mpc}^{-1}.$$
(15)
This global value of $`H_0`$ is in good agreement with our more local results using both the direct and inverse Tully-Fisher relations (Theureau et al. Theureau97 (1997); Ekholm et al. Ekholm99 (1999)). $`H_0`$ and the given velocities yield $`R_{\mathrm{Virgo}}=21.0\,\mathrm{Mpc}`$, which is in good agreement with $`R_{\mathrm{Virgo}}=20.7\,\mathrm{Mpc}`$ found by Federspiel et al. (Federspiel98 (1998)). Note also that Federspiel et al. (Federspiel98 (1998)) found the same value of $`H_0`$ from relative cluster distances to Virgo. Finally, one should recognize that our distance estimate is valid only if Virgo is at rest with respect to the FRW frame. If not, the cosmological velocity of Virgo is something more complicated than simply the sum of $`V_{\mathrm{LG}}^{\mathrm{in}}`$ and $`V_{\mathrm{Virgo}}^{\mathrm{obs}}`$.
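Explicitly, since Virgo is assumed to be at rest in the FRW frame, its cosmological velocity is the sum of the two adopted velocities, so that

$$R_{\mathrm{Virgo}}=\frac{V_{\mathrm{Virgo}}^{\mathrm{obs}}+V_{\mathrm{LG}}^{\mathrm{in}}}{H_0}=\frac{(980+220)\,\mathrm{km}\,\mathrm{s}^{-1}}{57\,\mathrm{km}\,\mathrm{s}^{-1}\,\mathrm{Mpc}^{-1}}\approx 21.0\,\mathrm{Mpc}.$$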
## 4 Results in the direction of Virgo ($`\mathrm{\Theta }<30\mathrm{°}`$)
We restrict ourselves to the Einstein-de Sitter universe by assigning $`q_0=0.5`$. As regards Eq. 8, we now possess the distance $`d`$ for each galaxy as well as the value $`k^{\prime }=0.606`$ from Eq. 12. We also have $`\gamma `$ from Eq. 11. We need to estimate the best value for $`\alpha `$. We devised a simple statistical test by finding which value or values of $`\alpha `$ minimize the average $`|V_{\mathrm{obs}}-V_{\mathrm{pred}}|`$ of sample galaxies in the Virgo direction. We found minimum values around $`\alpha =2.7\text{–}3.0`$. We adopt $`\alpha =2.85`$. The resulting systemic velocity<sup>2</sup><sup>2</sup>2Systemic velocity is a combination of the cosmological velocity and the velocity induced by Virgo. This is in our case the observed velocity defined in Sect. 4. It could contain also other components, but we assume that the Virgo infall dominates. vs. distance diagram is shown in Fig. 1. The observed velocities are labelled with circles and the predicted values with crosses. The straight line is what one expects from the Hubble law with our choice of $`H_0`$, and the curve is the theoretical TB solution for $`\mathrm{\Theta }=7\mathrm{°}`$ (most of the galaxies have small angular distances)<sup>3</sup><sup>3</sup>3 This curve is simply to guide the eye. The actual comparison is made between each observed and predicted point.. The relative distances and predicted velocities are also tabulated in Table 1 in columns 4 and 5 for this Model 1. The galaxies follow the overall TB pattern, with velocities steeply increasing as the Virgo centre is approached. Exceptions are NGC 4639, lying at 1.14, and NGC 4548 at 0.73.
NGC 4639 has a positive velocity when – according to the model – it should be falling into Virgo from the backside. Can we explain this strange behaviour? We note it is at a small angular distance from Virgo ($`\mathrm{\Theta }=3.0\mathrm{°}`$). Perhaps NGC 4639 has fallen through the centre and still has some of its frontside infall velocity left. There are, however, no reports of significant hydrogen deficiency, which one would expect if the galaxy actually had travelled through the centre. Maybe this galaxy is a genuine member of the Virgo cluster, thus having a rather large dispersion in velocity. In fact, Federspiel et al. (Federspiel98 (1998)) include it in their “fiducial sample” of Virgo cluster galaxies belonging to the subgroup “big A” around M87. As a matter of fact our chosen centre ($`l=284\mathrm{°}`$, $`b=74.5\mathrm{°}`$) lies close to the massive pair M86/M87.
NGC 4548 is also close to Virgo and has a small angular distance ($`\mathrm{\Theta }=2.4\mathrm{°}`$). Now, however, the situation is reversed. This galaxy should be falling into Virgo with a high positive velocity but has a small velocity. Paper I classifies this galaxy as belonging to a region interpreted as a component expanding from Virgo (cf. Fig. 8 (Region B) and Table 1c in Paper I).
Finally, because both galaxies are close to Virgo at small angular distances, even a slight error in distance determination could result in a considerable error in velocities. Hence both galaxies might follow the TB curve, with errors in distance having distorted the figure.
We also tested the exact values given by Federspiel et al. (Federspiel98 (1998)) with $`\alpha =2.85`$. Now $`V_{\mathrm{Virgo}}^{\mathrm{obs}}=920\,\mathrm{km}\,\mathrm{s}^{-1}`$ and $`R_{\mathrm{Virgo}}=20.7\,\mathrm{Mpc}`$ yield, with the same infall velocity, $`H_0=55\,\mathrm{km}\,\mathrm{s}^{-1}\,\mathrm{Mpc}^{-1}`$. We found $`k^{\prime }=0.641`$. The behaviour of the systemic velocity as a function of distance is quite similar to Fig. 1. This solution is named Model 2 and given in columns 6 and 7 of Table 1. Due to the paucity of the sample it is not possible to decide between these models.
Though we have only a few points, it is quite remarkable how well the dynamical behaviour of galaxies in the direction of Virgo is demonstrated by a simple model. That the galaxies follow so well a spherically symmetric model despite the observed clumpiness of the Virgo region is promising. Fig. 1 clearly lends credence to a presumption that the gravitating matter is distributed more symmetrically than the luminous matter. We have given an affirmative answer to the question asked in the introduction.
## 5 Results outside Virgo ($`\mathrm{\Theta }>30\mathrm{°}`$)
### 5.1 On the chosen Hubble constant
In Paper I the value of the Hubble constant was taken to be $`H_0=70\,\mathrm{km}\,\mathrm{s}^{-1}\,\mathrm{Mpc}^{-1}`$, consistent with the Tully-Fisher calibration adopted at the time and with the used Virgo distances (16.5 and 18.4 Mpc, giving different infall velocities of the LG). In the present paper we have fixed $`H_0`$ from more global considerations as well as $`V_{\mathrm{LG}}^{\mathrm{in}}`$ corresponding to $`R_{\mathrm{Virgo}}=21\,\mathrm{Mpc}`$. As discussed by ET94, once the velocity of the LG is fixed the predicted velocities no longer depend on the value of $`H_0`$. It is still interesting to see whether the sample galaxies locally agree with $`H_0=57\,\mathrm{km}\,\mathrm{s}^{-1}\,\mathrm{Mpc}^{-1}`$. This is done in Fig. 2, where we have plotted the observed systemic velocities (open circles) outside the Virgo region ($`\mathrm{\Theta }>30\mathrm{°}`$) as a function of the absolute distance in Mpc. The data are given in Table 2. The galaxies follow the quiescent flow quite well. In particular, the closest-by galaxies ($`R_{\mathrm{gal}}<4\,\mathrm{Mpc}`$) follow the linear prediction with surprising accuracy. Note also that the galaxies listed in Table 1 (galaxies partaking in the Virgo infall) would predict $`H_0=76\pm 9\,\mathrm{km}\,\mathrm{s}^{-1}\,\mathrm{Mpc}^{-1}`$. This too high a value clearly underlines the need for a correct and adequate kinematical model in the Virgo region.
However, two of the galaxies are clearly discrepant (NGC 1365 and NGC 4603) and two (NGC 925 and NGC 7331) disagree. First of all, NGC 4603 is a distant galaxy and the distance determination may be biased by the Cepheid incompleteness effect (Lanoix et al. 1999c ). It also does not have a good $`PL`$-relation in V, which means that one should assign a very low weight to it. On the other hand, its velocity differs from the quiescent Hubble flow only by $`375\,\mathrm{km}\,\mathrm{s}^{-1}`$, which actually is not very much considering the distance. It belongs to the Virgo Southern Extension and its velocity could be influenced by the Hydra-Centaurus complex. NGC 1365 is a member of Fornax, NGC 925 belongs to the NGC 1023 group and NGC 7331 belongs to a small group near the Local Void. Under these conditions it is not surprising that these galaxies show deviations from the Hubble law, except that they all show a tendency to have larger velocities than predicted by the Hubble law.
As can be seen from Fig. 2, these galaxies suggest a much shorter distance to Virgo (for them $`H_0=75\,\mathrm{km}\,\mathrm{s}^{-1}\,\mathrm{Mpc}^{-1}`$, corresponding to $`R_{\mathrm{Virgo}}=16\,\mathrm{Mpc}`$, would be a more suitable choice for the Hubble constant). This means that without further analysis the data outside the $`30\mathrm{°}`$ cone do not allow us to exclude a shorter distance to Virgo. We examine the effect of a short distance to Virgo on the galaxies within the $`30\mathrm{°}`$ cone in an appendix to this paper.
Finally, we have so far used systemic velocities, i.e. velocities not corrected for the virgocentric motions. We tested how a simple correction would affect Fig. 2. The corrected velocities, shown as crosses in Fig. 2, were calculated as
$$V_{\mathrm{corr}}=V_{\mathrm{obs}}+[v(d)_\mathrm{H}-v(d)]\sqrt{1-\mathrm{sin}^2\mathrm{\Theta }/d^2}+V_{\mathrm{LG}}^{\mathrm{in}}\mathrm{cos}\mathrm{\Theta },$$
(16)
where $`v(d)`$ is solved from Eq. 10 using parameters for our Model 1 and the Hubble velocity at the distance $`d`$ measured from the centre of LSC is $`v(d)_\mathrm{H}=V_{\mathrm{cosm}}(1)\times d`$. This correction does not resolve the problem.
### 5.2 The nearest-by galaxies ($`R_{\mathrm{gal}}<4\mathrm{Mpc}`$)
We noticed in the previous subsection that the nearest-by galaxies appear to follow the Hubble law quite well. In this subsection we take a closer look at these galaxies. To begin with, we note that the mean heliocentric velocities $`V_{\odot }`$ (shown as filled circles in Fig. 3) have a rather large scatter but on the mean they agree with our primary choice of $`H_0`$. It is interesting that these very local galaxies agree with our global value of $`H_0`$. Furthermore, when the $`V_{\odot }`$’s are corrected to the centroid of the LG one observes a striking effect. In Fig. 3 one can see how the galaxies after the correction obey the Hubble law all the way down to $`R_{\mathrm{gal}}=0`$. The corrections according to Yahil et al. (Yahil77 (1977)) are shown as crosses and those according to Richter et al. (Richter87 (1987)) as circles. It seems that it is a matter of taste which correction one prefers.
## 6 The virial mass of Virgo
### 6.1 Prediction from Model 1
Tully and Shaya (Tully84 (1984)) estimated the virial mass of the Virgo cluster. Following the notation of Paper I the mass within $`d=0.105`$, which corresponds to $`\mathrm{\Theta }=6\mathrm{°}`$ at the distance of Virgo, is:
$`M_{\mathrm{virial}}`$ $`=`$ $`7.5\times 10^{14}M_{\odot }\,R_{\mathrm{Virgo}}/16.8\,\mathrm{Mpc}`$ (17)
$`=`$ $`9.38\times 10^{14}M_{\odot },`$
where the latter equality is based on the adopted distance $`R_{\mathrm{Virgo}}=21\,\mathrm{Mpc}`$. With the cosmological parameters ($`H_0=57\,\mathrm{km}\,\mathrm{s}^{-1}\,\mathrm{Mpc}^{-1}`$ and $`q_0=0.5`$) and with our Model 1 we find, using (Eq. 14 of Paper I)
$$M(d)=14.76\times q_0h_0^2\left[\frac{R_{\mathrm{Virgo}}}{16.8\,\mathrm{Mpc}}\right]^2\times d^3\left(1+k^{\prime }d^{-\alpha }\right)M_{\mathrm{virial}},$$
(18)
where $`h_0=H_0/100\,\mathrm{km}\,\mathrm{s}^{-1}\,\mathrm{Mpc}^{-1}`$, the mass within $`d=0.105`$ in terms of $`M_{\mathrm{virial}}`$:
$$M_{\mathrm{pred}}=1.62\times M_{\mathrm{virial}}.$$
(19)
The mass deduced agrees with the estimates 1.5-2.0 found by Paper I, where it was suspected that the virial mass estimation may come from a more concentrated area, which could explain the higher value obtained from the TB-solution.
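As a quick sanity check of the numbers (our own arithmetic, reading Eq. 18 as giving the mass in units of $`M_{\mathrm{virial}}`$):

```python
# Eq. 19 check: mass within d = 0.105 for the Model 1 parameters.
q0, h0 = 0.5, 0.57
R_ratio, d = 21.0 / 16.8, 0.105
kp, alpha = 0.606, 2.85
m = 14.76 * q0 * h0**2 * R_ratio**2 * d**3 * (1.0 + kp * d**(-alpha))
print(f"M(0.105) = {m:.2f} M_virial")    # -> 1.62, as in Eq. 19
```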
### 6.2 An alternative solution (Model 3)
We know all the parameters needed in Eq. 18 except $`\alpha `$. This led us to make a test by looking for the $`\alpha `$ which would bring about exactly one Virgo virial mass within the radius $`d=0.105`$ from the Virgo centre. We found $`\alpha =2.634`$. The systemic velocity vs. distance diagram for this Model 3 is given in Fig. 4.
Again, NGC 4548 and NGC 4639 show anomalous behaviour. The comments made earlier remain valid. There are three other galaxies showing a relatively large disagreement with the given TB solution. NGC 4535 is close to Virgo at a small angular distance. Hence the arguments used for NGC 4639 are acceptable also here. Note that in Paper I this galaxy was classified as belonging to Region A, where galaxies are falling directly into Virgo. Furthermore, Federspiel et al. (Federspiel98 (1998)) classify this galaxy as a member of subgroup B (these galaxies lie within $`2.4\mathrm{°}`$ of M49). This clearly suggests a distortion in the velocity. NGC 4414 may belong to Region B of Paper I. This could explain its low velocity. It is also rather close to the tangential point, so that projection effects can be important. For NGC 4536 it is difficult to find any reasonable explanation. With these notes taken into account, we conclude that Model 3 cannot be excluded.
## 7 Conclusions
Studies of large-scale density enhancements in the Local Universe using spiral field galaxies – objects suitable for such a project – are discouraged by two factors. The observed distribution of spirals is rather irregular, and the photometric distances based on the Tully-Fisher relation are uncertain due to large scatter. In the present work we avoided the latter problem to a degree by using photometric distances from the extragalactic $`PL`$-relation. Such distances are far more accurate than Tully-Fisher distances.
It was quite satisfying to find out that the spherically symmetric TB model predicts the observed velocity field well. As a matter of fact, in a recent study Hanski et al. (Hanski99 (1999)) applied the TB model to the mass determination of the Perseus-Pisces supercluster. Though the irregular behaviour of the luminous matter distribution is even more pronounced in Perseus-Pisces than in the LSC, the mass estimates were reasonable. These findings tend to indicate that the gravitating mass is more symmetrically distributed than the luminous matter indicates.
We found a solution (Model 1) predicting a Virgo cluster mass within $`d_{21}=0.105`$ of $`M_{\mathrm{Virgo}}=1.62\times M_{\mathrm{virial}}`$, with $`M_{\mathrm{virial}}`$ given by Tully & Shaya (Tully84 (1984)). This result is in agreement with Paper I, where the difference was suspected to mean that the virial mass is estimated from a more concentrated volume. Another plausible explanation is that Virgo is flattened. When presuming spherical symmetry under such conditions, more mass is required to induce the same effect on the velocities.
We were also able to find a solution predicting exactly one virial mass. Though this model does not agree with the observations as well as Model 1, the fit is – considering the large uncertainties involved – acceptable.
We find these results significant in three ways. The intrinsic behaviour of the sample outside Virgo does not disagree with the global value of the Hubble constant, $`H_0=57\,\mathrm{km}\,\mathrm{s}^{-1}\,\mathrm{Mpc}^{-1}`$. The TB solutions agree with $`R_{\mathrm{Virgo}}=21\,\mathrm{Mpc}`$, a value in excellent concordance with $`R_{\mathrm{Virgo}}=20.7\,\mathrm{Mpc}`$ given by Federspiel et al. (Federspiel98 (1998)). It is interesting also to be able to predict exactly one Virgo virial mass in the Einstein-de Sitter universe, because in Paper I the mass predictions were larger than one and because $`\mathrm{\Omega }_0=1`$ is strongly supported by theoretical considerations (both Inflation and Grand Unification require this value). Discussion of the cosmological constant $`\mathrm{\Lambda }`$ is postponed to a later phase of our research programme.
Having said all this, one should be cautious. The results presented in this paper are based on frontside galaxies only. As can be seen from Fig. 4 in Paper I, predicted systemic velocities are more sensitive to the model parameters in the backside than in the front. In the next paper of this series we intend to find a few background galaxies with as reliable distance moduli as possible from other photometric distance estimators, in particular the Tully-Fisher relation, with proper care taken of the selection effects.
###### Acknowledgements.
This work has been partly supported by the Academy of Finland (project 45087: “Galaxy Streams and Structures in the nearby Universe” and project “Cosmology in the Local Universe”). We have made use of the Lyon-Meudon Extragalactic Database LEDA and the Extragalactic Cepheid Database. Finally we are grateful to the referee for constructive criticism and suggestions which have helped us to improve this paper.
## Appendix A Is $`R_{\mathrm{Virgo}}\approx 16\,\mathrm{Mpc}`$?
In the present paper we fixed the distance to the centre of the LSC by adopting an $`H_0`$ from external considerations and by fixing the cosmological velocity of the centre of the LSC, leading to $`R_{\mathrm{Virgo}}=21\,\mathrm{Mpc}`$. In Sect. 5.1 we noted that the galaxies outside the $`30\mathrm{°}`$ cone measured from the centre of the LSC do not allow us to exclude higher values of $`H_0`$. Due to the fixed velocity these higher values, if really cosmological, necessarily lead to a shorter distance to the centre of the LSC.
In this appendix we examine what happens to the TB pattern within the $`30\mathrm{°}`$ cone where the TB model is relevant if we use $`R_{\mathrm{Virgo}}=16\mathrm{Mpc}`$. The result is shown in Fig. 5. The infall pattern is still clearly present. The agreement between the observed points (circles) and the predicted points (crosses) is not as good as in Figs. 1 and 4. When Fig. 5 is carefully compared with Fig. 1 one notes that in Fig. 5 the observed systemic velocities are systematically smaller than the predicted ones. This is because for the shorter Virgo distance galaxies in front get closer to Virgo and thus the dynamical influence of Virgo should be larger. In Fig. 1 we observe no such systematic effect. It is thus possible to say that our sample is more favourable to a long distance scale than to a short scale.
We also examine the behaviour of the Hubble ratios $`V_{\mathrm{corr}}/R_{\mathrm{gal}}`$ as a function of $`V_{\mathrm{corr}}`$ (cf. Eq. 16). In Fig. 6 the correction is based on $`R_{\mathrm{Virgo}}=21\,\mathrm{Mpc}`$ and in Fig. 7 on $`R_{\mathrm{Virgo}}=16\,\mathrm{Mpc}`$. The two values of $`H_0`$ used in Fig. 2 are also given as straight horizontal lines. In both diagrams we observe an increase in the Hubble ratios as $`V_{\mathrm{corr}}`$ increases, starting at $`V_{\mathrm{corr}}\approx 600\,\mathrm{km}\,\mathrm{s}^{-1}`$. In particular, for the shorter Virgo distance this effect is quite pronounced. Sandage and Tammann have on many occasions stressed that the value of the Hubble constant, $`H_0`$, should not increase without any clear physical reason. In the absence of such a reason, an increase in $`H_0`$ is a warning signal that there is a bias present (e.g. the Malmquist bias). This is also our tentative interpretation of the increase in $`H_0`$. One should also pay attention to the fact that in both cases the mean Hubble ratio is $`55\,\mathrm{km}\,\mathrm{s}^{-1}\,\mathrm{Mpc}^{-1}`$ for galaxies below $`V_{\mathrm{corr}}=600\,\mathrm{km}\,\mathrm{s}^{-1}`$.
However, part of the apparent increasing trend in $`H_0`$ could be an artefact of the velocity dispersion: if we overestimate the velocities, galaxies have a tendency to move upwards and to the right. What the nature of the suspected bias is, how it relates to the incompleteness bias of Lanoix et al. (1999c), and what the influence of the velocity effect is, are amongst the topics we discuss in the next paper in this series.
UOSTP-99-008
hep-th/9910135
Deformed Nahm Equation and a Noncommutative BPS Monopole
Dongsu Bak <sup>1</sup><sup>1</sup>1Electronic Mail: dsbak@mach.uos.ac.kr
Physics Department, University of Seoul, Seoul 130-743, Korea
> A deformed Nahm equation for the BPS equation in the noncommutative N=4 supersymmetric U(2) Yang-Mills theory is obtained. Using this, we constructed explicitly a monopole solution of the noncommutative BPS equation to the linear order of the noncommutativity scale. We found that the leading order correction to the ordinary SU(2) monopole lies solely in the overall U(1) sector and that the overall U(1) magnetic field has an expected long range component of magnetic dipole moment.
> (14.80.Hv,11.15.-q)
As shown recently, quantum field theories in noncommutative spacetime naturally arise as a decoupling limit of the worldvolume dynamics of D-branes in a constant NS-NS two-form background. But detailed dynamical effects due to the noncommutative geometry are still only partially understood.
In this note, we shall exploit the N=4 supersymmetric gauge theories on noncommutative $`𝐑^4`$ that is the worldvolume theory of D3-branes in the NS-NS two-form background. This theory was recently investigated to study the nature of monopoles and dyons in the noncommutative space. The energy and charge of monopoles satisfying the noncommutative BPS equations were identified and shown to agree with those of ordinary monopoles. Below, we shall concentrate on the construction of a self-dual monopole solution of the theory by generalizing ADHMN methods to the noncommutative case.
We begin with the noncommutative N=4 supersymmetric Yang-Mills theory. We shall restrict our discussion to the case of the $`U(2)`$ gauge group. Among the six Higgs fields, only one, $`\varphi `$, plays a role in the following discussion of a monopole. The bosonic part of the action is given by
$$S=-\frac{1}{4g_{\mathrm{YM}}^2}\int d^4x\,\mathrm{tr}\left(F_{\mu \nu }\star F^{\mu \nu }-2D_\mu \varphi \star D^\mu \varphi \right),$$
(1)
where the $``$-product is defined by
$$a(x)\star b(x)\equiv \left(e^{\frac{i}{2}\theta ^{\mu \nu }\partial _\mu \partial _\nu ^{\prime }}a(x)b(x^{\prime })\right)|_{x=x^{\prime }}$$
(2)
that respects the associativity of the product. We shall assume in the following that $`\theta _{0i}=\theta _{i0}=0`$. Then without loss of generality, one may take the only nonvanishing components to be $`\theta _{12}=-\theta _{21}\equiv \theta `$. $`F_{\mu \nu }`$ and $`D_\mu \varphi `$ are defined respectively by
$`F_{\mu \nu }\equiv \partial _\mu A_\nu -\partial _\nu A_\mu +i(A_\mu \star A_\nu -A_\nu \star A_\mu )`$
$`D_\mu \varphi \equiv \partial _\mu \varphi +i(A_\mu \star \varphi -\varphi \star A_\mu )`$ (3)
The four-vector potential and $`\varphi `$ belong to the $`U(2)`$ Lie algebra generated by $`T_4=\frac{1}{2}I_{2\times 2}`$ and $`(T_1,T_2,T_3)=\frac{1}{2}(\sigma _1,\sigma _2,\sigma _3)`$, normalized by $`\mathrm{tr}T_mT_n=\frac{1}{2}\delta _{mn}`$. The vacuum expectation value of the Higgs field $`\varphi `$ is taken to be $`T_3U`$ in the asymptotic region.
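As a quick consistency check of the $`\star `$-product of Eq. (2) — a sketch for illustration only, not part of the construction — one can expand to first order in $`\theta `$ and verify with sympy both the coordinate noncommutativity and the claimed associativity at this order:

```python
import sympy as sp

x, y, theta = sp.symbols('x y theta', real=True)

def star1(a, b):
    """Moyal star product on the (x, y) plane with theta^{12} = -theta^{21} = theta,
    truncated at first order in theta (the O(theta) term of the exponential in Eq. (2))."""
    return a*b + sp.I*theta/2*(sp.diff(a, x)*sp.diff(b, y) - sp.diff(a, y)*sp.diff(b, x))

# coordinate noncommutativity (exact for linear functions): x*y - y*x = i theta
print(sp.simplify(star1(x, y) - star1(y, x)))          # -> I*theta

# associativity holds order by order: the O(theta) associator vanishes identically
f, g, h = (sp.Function(n)(x, y) for n in 'fgh')
assoc = sp.expand(star1(star1(f, g), h) - star1(f, star1(g, h)))
print(sp.simplify(assoc.coeff(theta, 1)))              # -> 0
```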
As shown in Ref. , the energy functional
$`M={\frac{1}{2g_{\mathrm{YM}}^2}}\int d^3x\,\mathrm{tr}\left(E_i\star E_i+D_0\varphi \star D_0\varphi +B_i\star B_i+D_i\varphi \star D_i\varphi \right)\geq {\frac{1}{g_{\mathrm{YM}}^2}}\int _{r=\mathrm{\infty }}𝑑S_k\,\mathrm{tr}\,B_k\star \varphi ,`$ (4)
is bounded as in the case of the ordinary supersymmetric Yang-Mills theory. The saturation of the bound occurs when the BPS equation
$`B_i=D_i\varphi `$ (5)
is satisfied. The mass for the solution is
$`M={\displaystyle \frac{2\pi Q_M}{g_{\mathrm{YM}}^2}}U`$ (6)
where we define the magnetic charge $`Q_M`$ by
$`Q_M={\frac{1}{2\pi U}}\int _{r=\mathrm{\infty }}𝑑S_k\,\mathrm{tr}\,B_k\star \varphi .`$ (7)
As argued in Ref. , the charge is to be quantized at integer values even in the noncommutative case. This is because the fields in the asymptotic region are slowly varying and, hence, the standard argument of the topological quantization of the magnetic charge holds in the noncommutative theory. The main aim of this note is to investigate the detailed form of one self-dual monopole solution in the noncommutative case.
Before proceeding, we first show that a noncommutative monopole solution inevitably involves nonvanishing overall $`U(1)`$ parts. To see this, note that the solution for the monopole with $`\theta =0`$ is given by<sup>2</sup><sup>2</sup>2In the following, we shall set $`g_{\mathrm{YM}}=1`$ and $`U=1`$.
$`\stackrel{~}{\varphi }=𝐫\cdot \sigma \,{\frac{1}{2r}}\left(\mathrm{coth}r-{\frac{1}{r}}\right)`$ (8)
$`\stackrel{~}{A}_i=ϵ_{ijk}x^j\sigma ^k{\frac{1}{2r^2}}\left({\frac{r}{\mathrm{sinh}r}}-1\right).`$ (9)
We then expand the solution of the noncommutative BPS equation with respect to $`\theta `$ by
$`\varphi =\stackrel{~}{\varphi }+\theta \varphi _{(1)}+\theta ^2\varphi _{(2)}+\mathrm{\cdots }`$ (10)
$`A=\stackrel{~}{A}+\theta A_{(1)}+\theta ^2A_{(2)}+\mathrm{\cdots }.`$ (11)
The leading corrections to $`D_i\varphi `$ and $`B_i`$ are, respectively,
$`(B_i)_{(1)}=ϵ_{ijk}\left(i(\partial \stackrel{~}{A}_j\overline{\partial }\stackrel{~}{A}_k-\overline{\partial }\stackrel{~}{A}_j\partial \stackrel{~}{A}_k)+\partial _j(A_{(1)})_k+i[\stackrel{~}{A}_j,(A_{(1)})_k]\right)`$ (12)
$`(D_i\varphi )_{(1)}=i(\partial \stackrel{~}{A}_i\overline{\partial }\stackrel{~}{\varphi }-\overline{\partial }\stackrel{~}{A}_i\partial \stackrel{~}{\varphi }-\partial \stackrel{~}{\varphi }\overline{\partial }\stackrel{~}{A}_i+\overline{\partial }\stackrel{~}{\varphi }\partial \stackrel{~}{A}_i)+\partial _i\varphi _{(1)}+i[(A_{(1)})_i,\stackrel{~}{\varphi }]+i[\stackrel{~}{A}_i,\varphi _{(1)}],`$ (13)
where $`\partial `$ and $`\overline{\partial }`$ denote derivatives with respect to $`x+iy`$ and $`x-iy`$, respectively. The terms in the first parentheses follow from the evaluation of the $`\star `$-product of the ordinary monopole solution and can be computed explicitly. They are
$`iϵ_{ijk}\left(\partial \stackrel{~}{A}_j\overline{\partial }\stackrel{~}{A}_k-\overline{\partial }\stackrel{~}{A}_j\partial \stackrel{~}{A}_k\right)={\frac{1}{2r}}\left((r^2w^2)^{\prime }\delta _{3i}+(w^2)^{\prime }x^3x^i\right)I_{2\times 2}`$ (14)
$`i\left(\partial \stackrel{~}{A}_i\overline{\partial }\stackrel{~}{\varphi }-\overline{\partial }\stackrel{~}{A}_i\partial \stackrel{~}{\varphi }-\partial \stackrel{~}{\varphi }\overline{\partial }\stackrel{~}{A}_i+\overline{\partial }\stackrel{~}{\varphi }\partial \stackrel{~}{A}_i\right)={\frac{1}{r}}\left((r^2wh)^{\prime }\delta _{3i}+(wh)^{\prime }x^3x^i\right)I_{2\times 2},`$ (15)
where $`w=\frac{1}{2r^2}\left(\frac{r}{\mathrm{sinh}r}-1\right)`$ and $`h=\frac{1}{2r}\left(\mathrm{coth}r-\frac{1}{r}\right)`$. We note that these have overall $`U(1)`$ components only and that the two are different from each other. Hence, to satisfy the Bogomol’nyi equation, there should be overall $`U(1)`$ contributions in $`\varphi _{(1)}`$ or $`A_{(1)}`$ to cancel the difference, because the $`SU(2)`$ parts of $`\varphi _{(1)}`$ and $`A_{(1)}`$ do not produce $`U(1)`$ components of the field strengths at this leading order.
We now establish Nahm’s formalism in the noncommutative case to solve the BPS equation. The basic idea is simply to replace all the ordinary products of the standard derivation by $`\star `$-products. This fact was briefly argued already in Ref. , but the deformed Nahm equation below was not found there. To obtain the self-dual monopole solution, Nahm adapted the ADHM construction of instantons<sup>3</sup><sup>3</sup>3See also Ref. for the ADHM construction of noncommutative instanton solutions. to solve the self-dual Yang-Mills equation in Euclidean four-space such that the gauge fields are translationally invariant in the $`x_4`$-direction. We begin by reviewing the ADHMN construction and then follow a similar line of logic. The ADHMN construction of a charge-$`k`$ monopole starts with a matrix operator,
$`\mathrm{\Delta }^{\mathrm{\dagger }}={\frac{d}{d\tau }}I_{k\times k}\otimes I_{2\times 2}+I_{k\times k}\otimes \sigma _ix_i+T_i\otimes \sigma _i,`$ (16)
where $`T_i`$’s are $`k\times k`$ matrices. The following two conditions are required on $`\mathrm{\Delta }`$;
$`[\mathrm{\Delta }^{\mathrm{\dagger }}\mathrm{\Delta },I_{k\times k}\otimes \sigma _i]=0,\mathrm{\Delta }^{\mathrm{\dagger }}\mathrm{\Delta }:\mathrm{invertible}.`$ (17)
We then solve the equation
$`\mathrm{\Delta }^{\mathrm{\dagger }}V=0,`$ (18)
with a normalization condition
$`\int _{-\frac{1}{2}}^{\frac{1}{2}}𝑑\tau \,V^{\mathrm{\dagger }}V=I_{2k\times 2k},`$ (19)
where $`V`$ is a $`2k\times 2k`$ matrix. The gauge fields are given by
$`\stackrel{~}{\varphi }=\int _{-\frac{1}{2}}^{\frac{1}{2}}𝑑\tau \,\tau V^{\mathrm{\dagger }}V,\stackrel{~}{A}_i=i\int _{-\frac{1}{2}}^{\frac{1}{2}}𝑑\tau \,V^{\mathrm{\dagger }}\partial _iV.`$ (20)
One may directly verify that
$`F_{mn}=2\overline{\eta }_{mn}^i\int _{-\frac{1}{2}}^{\frac{1}{2}}𝑑\tau \int _{-\frac{1}{2}}^{\frac{1}{2}}𝑑\tau ^{\prime }\,V^{\mathrm{\dagger }}(\tau )I_{k\times k}\otimes \sigma _i(\mathrm{\Delta }^{\mathrm{\dagger }}\mathrm{\Delta })^{-1}(\tau ,\tau ^{\prime })V(\tau ^{\prime })`$ (21)
where the indices run from 1 to 4, we identify $`A_4`$ with $`\varphi `$, i.e. $`F_{i4}=D_i\varphi `$, and $`\overline{\eta }_{mn}^i=ϵ_{imn4}-\delta _{im}\delta _{4n}+\delta _{in}\delta _{4m}`$ is the self-dual ’t Hooft tensor.
In the noncommutative case, the derivation goes through once all the product operations are replaced by $``$-product operations. Namely, the matrix operator (16) remains the same while the conditions become
$`[\mathrm{\Delta }^{\mathrm{\dagger }}\star \mathrm{\Delta },I_{k\times k}\otimes \sigma _i]=0,\mathrm{\Delta }^{\mathrm{\dagger }}\star \mathrm{\Delta }:\mathrm{invertible}.`$ (22)
We have to solve
$`\mathrm{\Delta }^{\mathrm{\dagger }}\star V=0,`$ (23)
with a normalization condition
$`\int _{-\frac{1}{2}}^{\frac{1}{2}}𝑑\tau \,V^{\mathrm{\dagger }}\star V=I_{2k\times 2k}.`$ (24)
The gauge fields are now given by
$`\varphi =\int _{-\frac{1}{2}}^{\frac{1}{2}}𝑑\tau \,\tau V^{\mathrm{\dagger }}\star V,A_i=i\int _{-\frac{1}{2}}^{\frac{1}{2}}𝑑\tau \,V^{\mathrm{\dagger }}\star \partial _iV`$ (25)
These potentials then satisfy the noncommutative BPS equation. Namely, the field strength is the one obtained from (21) by replacing all the ordinary product operations with $``$-product operations, so the expression is manifestly self-dual.
Let us now work out the conditions in (22). The operator $`\mathrm{\Delta }^{\mathrm{\dagger }}\star \mathrm{\Delta }`$ reads
$`\mathrm{\Delta }^{\mathrm{\dagger }}\star \mathrm{\Delta }=-{\frac{d^2}{d\tau ^2}}I_{k\times k}\otimes I_{2\times 2}+(T_i+x^iI_{k\times k})(T_i^{\mathrm{\dagger }}+x^iI_{k\times k})\otimes I_{2\times 2}`$
$`+(T_i-T_i^{\mathrm{\dagger }})\otimes \sigma _i{\frac{d}{d\tau }}+({\frac{d}{d\tau }}T_i^{\mathrm{\dagger }}+iϵ_{ijk}T_jT_k^{\mathrm{\dagger }}-\theta \delta _{3i}I_{k\times k})\otimes \sigma _i.`$ (26)
The two conditions demand that $`T_i`$’s are Hermitian matrices and that
$`{\frac{d}{d\tau }}T_i=iϵ_{ijk}T_jT_k-\theta \delta _{3i}I_{k\times k}.`$ (27)
This equation is nothing but the Nahm equation for the ordinary Yang-Mills theory if $`\theta `$ is set to zero. This deformed Nahm equation can be reduced to the ordinary Nahm equation
$`{\displaystyle \frac{d}{d\tau }}\stackrel{~}{T}_i=iϵ_{ijk}\stackrel{~}{T}_j\stackrel{~}{T}_k.`$ (28)
introducing $`\stackrel{~}{T}_i`$ by $`T_i=\stackrel{~}{T}_i-\theta \tau \delta _{i3}I_{k\times k}`$. The boundary conditions at $`\tau =\pm 1/2`$ for the Nahm equation (28) are not changed from those of ordinary monopoles even in the noncommutative case, because they correspond to long-range effects for which the noncommutativity plays no role. Hence the boundary conditions are that the $`\stackrel{~}{T}_i`$’s have simple poles at the boundaries whose residues form an irreducible representation of $`SU(2)`$. The ordinary Nahm equation is naturally interpreted as describing supersymmetric ground states of the worldvolume theory of D-strings suspended between D3-branes. In this context, the modification $`-\theta \tau \delta _{i3}I_{k\times k}`$ implies that the D-strings are slanted with a slope $`\theta `$, where $`\tau `$ is identified with a spatial coordinate of the D-string worldvolume. The emerging picture is quite consistent with the direct analysis of a D1-brane in the NS-NS two-form background, where the same slope appears. Although interesting, we won’t exploit this relationship further here.
We now turn to the construction of actual solutions of the BPS equation. We consider the case $`k=1`$, i.e. one monopole. The deformed Nahm equation is solved trivially by $`T_i=-\theta \tau \delta _{3i}+c_i`$, where the $`c_i`$’s are constants related to the monopole position. We shall set $`c_i`$ to zero by invoking the translational invariance of the noncommutative Yang-Mills theory.
The equation (23) is explicitly
$`{\frac{d}{d\tau }}V+\sigma _ix_i\star V-\theta \tau \sigma _3V={\frac{d}{d\tau }}V+\sigma _ix_iV+\theta \widehat{O}V=0,`$ (29)
where we define $`\widehat{O}`$ by
$`\widehat{O}={\frac{i}{2}}(\sigma _1\partial _2-\sigma _2\partial _1)-\tau \sigma _3.`$ (30)
This equation is equivalent to a 3-dimensional Dirac equation with a time-dependent mass in a background magnetic field $`\theta `$<sup>4</sup><sup>4</sup>4The roles of magnetic field in open string theory and noncommutative geometry were explored in Ref.. . The solution with $`\theta =0`$ is simply
$`\stackrel{~}{V}(\tau ,𝐫)=e^{-\sigma \cdot 𝐫\tau }K(r),`$ (31)
where $`K(r)=\left(\frac{r}{\mathrm{sinh}r}\right)^{1/2}`$ is determined by the normalization condition, (19). One may solve the equation perturbatively by the Dyson series; the solution reads
$`V=\stackrel{~}{V}+\theta e^{-\sigma \cdot 𝐫\tau }\left(-\int _0^\tau 𝑑s_1\widehat{O}_I(s_1)K(r)+W_1(𝐫)\right)`$
$`+\theta ^2e^{-\sigma \cdot 𝐫\tau }\left(\int _0^\tau 𝑑s_1\widehat{O}_I(s_1)\left(\int _0^{s_1}𝑑s_2\widehat{O}_I(s_2)K(r)-W_1(𝐫)\right)+W_2(𝐫)\right)+\mathrm{\cdots },`$ (32)
where $`\widehat{O}_I(\tau )=e^{\sigma \cdot 𝐫\tau }\widehat{O}(\tau )e^{-\sigma \cdot 𝐫\tau }`$. All the integration constants $`W_n(𝐫)`$ are determined by the normalization condition, (24).
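As a numerical sanity check (a sketch with an arbitrary test point, not part of the construction), one can verify that $`\stackrel{~}{V}`$ of Eq. (31) indeed satisfies the normalization (19): since $`(\sigma \cdot \widehat{r})^2=1`$, the $`\tau `$-integral of $`e^{-2(\sigma \cdot 𝐫)\tau }`$ equals $`(\mathrm{sinh}r/r)I_{2\times 2}`$, which is exactly cancelled by $`K^2=r/\mathrm{sinh}r`$:

```python
import numpy as np
from scipy.integrate import quad
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def V0(tau, xvec):
    """Zeroth-order solution V = exp(-sigma.x tau) K(r) of Eq. (31)."""
    r = np.linalg.norm(xvec)
    K = np.sqrt(r/np.sinh(r))
    sdotx = xvec[0]*sx + xvec[1]*sy + xvec[2]*sz
    return expm(-sdotx*tau)*K

xvec = np.array([0.7, -0.4, 1.1])       # arbitrary test point
I = np.zeros((2, 2), dtype=complex)
for a in range(2):
    for b in range(2):
        f = lambda t: (V0(t, xvec).conj().T @ V0(t, xvec))[a, b]
        re, _ = quad(lambda t: f(t).real, -0.5, 0.5)
        im, _ = quad(lambda t: f(t).imag, -0.5, 0.5)
        I[a, b] = re + 1j*im
print(np.round(I, 10))                  # -> the 2x2 identity, i.e. Eq. (19)
```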
Explicit evaluation of $`W_1`$ by imposing the normalization condition leads to
$`W_1(𝐫)={\frac{K(r)}{4r^2}}\left(\left(r\phi -(S-1)\right)\sigma _3+{\frac{x_3}{r}}\left({\frac{r^2}{2}}-r\phi +(S-1)\right)\sigma \cdot \widehat{r}\right),`$ (33)
where $`\phi \equiv \mathrm{coth}r-\frac{1}{r}`$ and $`S\equiv \frac{r}{\mathrm{sinh}r}`$. The solution $`V_{(1)}`$ reads explicitly
$`{\frac{V_{(1)}}{K}}={\frac{x_3}{4r^2}}\left(2\tau \mathrm{cosh}(\tau r)-{\frac{\mathrm{sinh}(\tau r)}{r}}(r\phi +2)\right)-\mathrm{sinh}(\tau r)(2\tau +\phi \,\sigma \cdot \widehat{r}){\frac{\sigma _3}{4r}}`$
$`-{\frac{\tau ^2x_3}{2r}}\sigma \cdot \widehat{r}\,e^{-\sigma \cdot 𝐫\tau }+{\frac{1}{2r^3}}\left(r\tau e^{-\sigma \cdot 𝐫\tau }-\mathrm{sinh}(\tau r)\right)\left(\sigma \cdot 𝐫\,\sigma _3-x_3\right)`$
$`+{\frac{e^{-\sigma \cdot 𝐫\tau }}{4r^2}}\left(\left(r\phi -(S-1)\right)\sigma _3+{\frac{x_3}{r}}\left({\frac{r^2}{2}}-r\phi +(S-1)\right)\sigma \cdot \widehat{r}\right).`$ (34)
It is then straightforward to evaluate $`\varphi _{(1)}`$ and $`A_{(1)}`$ using (25); they are
$`\varphi _{(1)}=0,`$ (35)
$`(A_{(1)})_i={\frac{(S-1)}{8r^4}}\left(2r\phi -(S-1)\right)ϵ_{3ij}x_jI_{2\times 2}.`$ (36)
In this order, there is no correction to the Higgs field, which preserves the higher-dimensional geometric picture of the monopole as a D-string suspended between two D3-branes. Finally, using (12)-(13), the leading corrections to the magnetic field and $`D\varphi `$ are found to be
$`B_{(1)}=(D\varphi )_{(1)}=\left({\frac{1}{r}}(r^2G)^{\prime }\widehat{e}_3+G^{\prime }x_3\widehat{r}\right)I_{2\times 2},`$ (37)
where $`G=\frac{(S-1)\phi }{4r^3}`$. A few comments are in order. First, the self-duality at this order is manifest in the above, as it should be. As proved earlier, there are nonvanishing overall $`U(1)`$ components in the gauge connection; moreover, all the leading-order corrections lie in the overall $`U(1)`$ sector. The solution in (35)-(36) is unique up to zero-mode and gauge fluctuations, which may be shown to agree with those of the ordinary SU(2) monopole to $`O(\theta )`$. Note that the translational zero modes of the monopole are already fixed by setting $`c_i`$ to zero. The gauge fields, Higgs field and field strengths are nonsingular everywhere to this order. In particular, the magnetic field at the origin behaves as
$`\delta B=\theta \left({\displaystyle \frac{1}{36}}\widehat{e}_3+O(r^2)\right)I_{2\times 2}.`$ (38)
We see that a constant U(1) magnetic field is induced at the origin. In the asymptotic region, the correction behaves as<sup>5</sup><sup>5</sup>5The dipole term in this expression was first obtained in Ref. .
$`\delta B=\theta {\frac{\widehat{e}_3+3(\widehat{e}_3\cdot \widehat{r})\widehat{r}}{4r^3}}I_{2\times 2}+\theta {\frac{\widehat{e}_3-2(\widehat{e}_3\cdot \widehat{r})\widehat{r}}{2r^4}}I_{2\times 2}+\mathrm{exponential}\mathrm{corrections}.`$ (39)
This long-range field does not contribute to the magnetic charge in (7), and thus we see explicitly that the magnetic charge remains at integer values. The leading overall U(1) correction is the expected magnetic dipole contribution of the form $`\frac{𝐩+3(𝐩\cdot \widehat{r})\widehat{r}}{r^3}`$, with $`𝐩=\frac{\theta }{4}\widehat{e}_3I_{2\times 2}`$. As discussed below (28), $`\pm \frac{\theta }{2}\widehat{e}_3`$ are the end-point displacements of the D-string from the positions at $`\theta =0`$, and the U(1) magnetic charges with a proper normalization are $`\pm \frac{1}{2}`$ on the branes, so $`𝐩`$ agrees with the charges multiplied by the displacements. The subleading long-range term is not of the form of a quadrupole moment, and its physical origin is not clear.
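The small-$`r`$ and large-$`r`$ behaviors quoted in Eqs. (38) and (39) follow from elementary expansions of $`G`$. A short sympy sketch — which fixes only the magnitudes, since the overall sign depends on the conventions for $`\theta `$ and the orientation — reproduces the $`1/36`$ coefficient and the $`1/(4r^3)`$ dipole falloff:

```python
import sympy as sp

r = sp.symbols('r', positive=True)
phi = sp.coth(r) - 1/r          # phi as defined below Eq. (33)
S = r/sp.sinh(r)                # S as defined below Eq. (33)
G = (S - 1)*phi/(4*r**3)        # G of Eq. (37)

# coefficient of e_3-hat in Eq. (37): (1/r) d(r^2 G)/dr
Be3 = sp.diff(r**2*G, r)/r
print(sp.series(Be3, r, 0, 3))      # -> -1/36 + O(r**2): the 1/36 of Eq. (38)
print(sp.limit(r**3*G, r, sp.oo))   # -> -1/4: the 1/(4 r^3) falloff of Eq. (39)
```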
There are many directions in which to go further. In the large-$`\theta `$ limit, the structure of Eq. (29) seems to simplify, and solving it in detail may be helpful in understanding the nature of that limit. As said earlier, one may equivalently describe the noncommutative supersymmetric Yang-Mills theory by the ordinary Dirac-Born-Infeld theory with a magnetic field background. Understanding this relation was originally one of the main purposes of this note, but no progress has been made in this direction yet.
Our analysis is expected to be directly generalized to the cases of dyons and 1/4 BPS dyons. In addition, the leading effect of noncommutativity to the moduli dynamics of monopoles and 1/4 BPS dyons will be of interest.
Acknowledgments
This work is supported in part by Ministry of Education Grant 98-015-D00061 and KOSEF 1998 Interdisciplinary Research Grant 98-07-02-07-01-5.
Theory for the enhanced induced magnetization in coupled magnetic trilayers in the presence of spin fluctuations
## Abstract
Motivated by recent experiments, the effect of the interlayer exchange interaction $`J_{inter}`$ on the magnetic properties of coupled Co/Cu/Ni trilayers is studied theoretically. Here the Ni film has a lower Curie temperature $`T_{C,\mathrm{Ni}}`$ than the Co film in case of decoupled layers. We show that by taking into account magnetic fluctuations the interlayer coupling induces a strong magnetization for $`T>T_{C,\mathrm{Ni}}`$ in the Ni film. For an increasing $`J_{inter}`$ the resonance-like peak of the longitudinal Ni susceptibility is shifted to larger temperatures, whereas its maximum value decreases strongly. A decreasing Ni film thickness enhances the induced Ni magnetization for $`T>T_{C,\mathrm{Ni}}`$. The measurements cannot be explained properly by a mean field estimate, which yields a ten times smaller effect. Thus, the observed magnetic properties indicate the strong effect of 2D magnetic fluctuations in these layered magnetic systems. The calculations are performed with the help of a Heisenberg Hamiltonian and a Green’s function approach.
Recently, the element-specific magnetization and the longitudinal susceptibility of magnetic epitaxial Co/Cu/Ni trilayers grown on Cu(001) have been measured . The two ferromagnetic Ni and Co films are coupled by the indirect exchange interaction $`J_{inter}`$ across the nonmagnetic Cu layer, which exhibits an oscillatory behavior as a function of the thickness $`d_{\mathrm{Cu}}`$ of the spacer and an overall decay like $`d_{\mathrm{Cu}}^{-2}`$ . The thicknesses $`d_{\mathrm{Ni}}`$ and $`d_{\mathrm{Co}}`$ of the Ni and Co films are chosen in such a way that for a vanishing interlayer coupling the Ni film has a lower Curie temperature than the Co film, i.e. $`T_{C,\mathrm{Ni}}(d_{\mathrm{Ni}})<T_{C,\mathrm{Co}}(d_{\mathrm{Co}})`$. $`J_{inter}`$ induces for $`T>T_{C,\mathrm{Ni}}`$ a considerable magnetization in the Ni film, which has been measured to vanish 30–40 K above $`T_{C,\mathrm{Ni}}`$. In this work we show that the induced strong Ni magnetization can be obtained theoretically only by taking into account magnetic fluctuations in the Ni film. If these fluctuations are neglected in the calculations (for example, within a mean-field-theory (MFT) approach ), the resulting induced Ni magnetization $`M_{\mathrm{Ni}}(T)`$ for $`T>T_{C,\mathrm{Ni}}`$ is an order of magnitude smaller. Vice versa, the neglect of these fluctuations requires an unrealistically large value for $`J_{inter}`$ to yield the observed shift $`\mathrm{\Delta }T`$ of $`M_{\mathrm{Ni}}(T)`$ to larger temperatures. Generally, spin fluctuations diminish the magnetization of a two-dimensional (2D) magnetic system more strongly than that of bulk magnets . An external magnetic field suppresses the action of these fluctuations, resulting in a stronger increase of the magnetization in 2D than in bulk systems. Similarly, in the case of a coupled trilayer the interlayer coupling reduces the fluctuation effect, since it acts as an external magnetic field. Consequently, $`J_{inter}`$ has a pronounced effect on the Ni film magnetization. The magnetic behavior of such a system can be used to study the action of the strong 2D spin fluctuations.
To take into account the collective magnetic excitations (spin waves, magnons), we apply a many-body Green’s function approach, and use the so-called Tyablikov (or RPA-) decoupling . Since within this method interactions between magnons are partly taken into account, the whole temperature range of interest up to the Curie temperature can be considered. A Heisenberg-type Hamiltonian on an fcc(001) thin film with $`d=d_{\mathrm{Ni}}+d_{\mathrm{Co}}`$ monolayers (ML) is assumed with localized magnetic moments $`𝝁_i=\mu _i𝐒_i/S`$ on lattice sites $`i`$:
$`\mathcal{H}`$ $`=`$ $`-{\frac{1}{2}}{\sum _{i,j}}J_{ij}\,𝐒_i\cdot 𝐒_j-{\sum _i}𝐁\cdot 𝝁_i`$ (2)
$`+{\frac{1}{2}}{\sum _{i,j;\,i\ne j}}{\frac{1}{r^5}}\left[𝝁_i\cdot 𝝁_j\,r^2-3(𝐫\cdot 𝝁_i)(𝐫\cdot 𝝁_j)\right].`$
Quantum-mechanical spins with spin quantum number $`S=1`$ are assumed. Due to the dipole interaction the magnetization $`M_i(T)=\langle S_i^z\rangle `$ is directed in-plane, determining the quantization axis ($`z`$-direction). The external magnetic field $`𝐁=(0,0,B)`$ is applied parallel to this axis. $`J_{ij}`$ are the exchange couplings between nearest-neighbor spin pairs, which are chosen in such a way that they yield the observed Curie temperatures for the separate (i.e. decoupled) layers. We put $`J_{\mathrm{CoCo}}=398`$ K per bond to obtain $`T_{C,\mathrm{Co}}(2)=435`$ K for a Co film with $`d_{\mathrm{Co}}=2`$ ML . To account for the diminished interface magnetic state of the Ni film, we distinguish between exchange couplings in the interface layers and the film interior layers. With $`J_{\mathrm{NiNi}}^{interface}=30`$ K and $`J_{\mathrm{NiNi}}^{interior}=172`$ K per bond one obtains $`T_{C,\mathrm{Ni}}(5)=267`$ K for a Ni film with $`d_{\mathrm{Ni}}=5`$ ML . These numbers for the exchange couplings are somewhat lower than the corresponding values obtained from the bulk Curie temperatures. In addition, an interlayer exchange coupling $`J_{inter}`$ across the nonmagnetic Cu spacer is assumed between Ni and Co spins in the layers adjacent to Cu. Positive as well as negative values of $`J_{inter}`$ can be considered, preferring parallel ($`J_{inter}>0`$) or antiparallel ($`J_{inter}<0`$) magnetized Ni and Co layers. The last term in Eq.(2) is the magnetic dipole coupling between spins $`𝝁_i`$ and $`𝝁_j`$ separated by vectors $`𝐫=𝐫_j-𝐫_i`$, denoting $`r=|𝐫|`$. The slowly converging oscillating lattice sums are converted into rapidly converging ones with the help of Ewald summation . Layer-dependent magnetic moments $`\mu _i`$ are assumed . In particular, we put $`\mu _{\mathrm{Ni}}^{interface}=0.46\mu _B`$, $`\mu _{\mathrm{Ni}}^{interior}=0.61\mu _B`$, and $`\mu _{\mathrm{Co}}=2.02\mu _B`$ for all Co layers, where $`\mu _B`$ is the Bohr magneton. Lattice anisotropy terms are not considered here.
For the calculation of the layer-dependent magnetizations, $`M_i(T)`$, $`i=1\mathrm{}d`$, we consider the following two-times (commutator-) Green’s functions, which are written in spectral representation as
$$G_{ij}^{+-(n)}(\omega ,𝐤_{\parallel })=\langle \langle S_i^+;(S_j^z)^nS_j^{-}\rangle \rangle _{\omega ,𝐤_{\parallel }}=\langle \langle S_i^+;C_j^{(n)}\rangle \rangle _{\omega ,𝐤_{\parallel }}.$$
(3)
Here the $`i,j`$ refer to layer indices. Because we assume ferromagnetically ordered layers, the lateral periodicity has been used to apply a Fourier transformation into the 2D momentum space, $`𝐤_{\parallel }`$ being the 2D wave vector. The Green’s functions are determined by solving the familiar equation of motion. Higher-order Green’s functions are approximated by the Tyablikov (RPA) decoupling for the exchange and dipole interactions ($`i\ne k`$):
$$\langle \langle S_i^zS_k^+;C_j^{(n)}\rangle \rangle \approx \langle S_i^z\rangle \langle \langle S_k^+;C_j^{(n)}\rangle \rangle =M_i(T)G_{kj}^{+-(n)},$$
(4)
i.e. spin operators $`S_i^z`$ are replaced by their expectation values $`M_i(T)`$. Different integers $`n\le 2S-1`$ have to be considered in order to treat different spin quantum numbers $`S`$ . The equations of motion lead to a set of $`d`$ coupled linear equations for the Green’s functions. With the help of the spectral theorem the respective expectation values (or correlation functions) $`\langle (S_j^z)^nS_j^{-}S_i^+\rangle `$ are determined. The magnetization $`M_i(T)`$ is obtained from the usual relations between spin operators. By comparison with a recent quantum Monte Carlo calculation of a Heisenberg monolayer in an external magnetic field, it was shown that the applied Green’s function method yields a satisfactory result for the magnetization .
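The structure of such a self-consistent RPA calculation can be illustrated by a minimal sketch: a single $`S=1/2`$ square-lattice ferromagnetic monolayer with only nearest-neighbor exchange and a uniform Zeeman field $`b`$ (the dipole coupling, the $`S=1`$ spins and the multilayer structure of the actual calculation are omitted here; all numbers are illustrative). The small field, standing in for $`J_{inter}`$, restores a sizeable magnetization over a broad range above the zero-field ordering temperature — which is zero in strictly 2D by the Mermin-Wagner theorem:

```python
import numpy as np

def rpa_magnetization(T, b, J=1.0, nk=64, tol=1e-8):
    """Tyablikov (RPA) magnetization of an S = 1/2 square-lattice Heisenberg
    ferromagnet in a Zeeman field b (all energies in units of the nn exchange J).
    Magnon energies: E_k = b + 4 J <S^z> (1 - gamma_k), gamma_k = (cos kx + cos ky)/2;
    self-consistency: <S^z> = 1/(2(1 + 2 Phi)), Phi = Bose factor averaged over the BZ."""
    k = (np.arange(nk) + 0.5)*2*np.pi/nk
    kx, ky = np.meshgrid(k, k)
    gamma = 0.5*(np.cos(kx) + np.cos(ky))
    m = 0.5
    for _ in range(1000):
        Ek = b + 4.0*J*m*(1.0 - gamma)
        Phi = np.mean(1.0/np.expm1(Ek/T))
        m_new = 0.5/(1.0 + 2.0*Phi)
        if abs(m_new - m) < tol:
            break
        m = 0.5*(m + m_new)        # damped iteration for stability
    return m

for b in (1e-3, 1e-2, 1e-1):
    print(b, [round(rpa_magnetization(T, b), 3) for T in (0.5, 1.0, 1.5, 2.0)])

# Longitudinal susceptibility by a finite field step, as in Eq. (5) below:
chi = (rpa_magnetization(1.2, 2e-3) - rpa_magnetization(1.2, 1e-3))/1e-3
print("chi(T=1.2) ~", round(chi, 2))
```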
In Fig.(1) we present results for the Ni magnetization $`M_{\mathrm{Ni}}(T)`$ as a function of the temperature $`T`$ calculated with different interlayer couplings $`J_{inter}`$. We consider a Co/Cu/Ni trilayer with $`d_{\mathrm{Ni}}=5`$ Ni, $`d_{\mathrm{Cu}}=3`$ Cu, and $`d_{\mathrm{Co}}=3`$ Co monolayers, respectively. For comparison, experimental results for the same system are also shown . The layer-dependent magnetizations $`M_i(T)`$, $`i=1\mathrm{}d`$, are determined from an iterative procedure. Presented are the Ni magnetizations $`M_{\mathrm{Ni}}(T)`$ averaged over all Ni layers. We use the inflection point $`T_{infl}`$ of $`M_{\mathrm{Ni}}(T)`$ as a measure of the corresponding temperature shift $`\mathrm{\Delta }T(J_{inter})=T_{infl}-T_{C,\mathrm{Ni}}`$ of the Ni magnetization with respect to the decoupled case. One observes that already a small value of the interlayer coupling $`J_{inter}`$ produces a comparatively large $`\mathrm{\Delta }T(J_{inter})`$. For example, $`J_{inter}=1`$ K results in $`\mathrm{\Delta }T\approx 30`$ K. Such a value for $`J_{inter}`$ compares well with various results measured previously with different methods . Corresponding results have also been determined by us from an MFT approach. For the same value of $`J_{inter}`$, the $`\mathrm{\Delta }T(J_{inter})`$ obtained from this approximation is about 10 times smaller than the value resulting from the Green’s function method.
We stress that this strong difference is a result of the 2D character of the magnetic trilayer system. The action of an external magnetic field for $`T>T_C`$ is much more pronounced for a 2D magnet than for a corresponding bulk system . For the coupled trilayer system under consideration, the interlayer coupling $`J_{inter}`$ acts similarly to an external magnetic field. Therefore, for temperatures $`T>T_{C,\mathrm{Ni}}`$ close to the Curie temperature of the single Ni film, already a small $`J_{inter}`$ is sufficient to induce a marked Ni magnetization and the corresponding temperature shift $`\mathrm{\Delta }T`$. In contrast, within an MFT approach the exchange coupling alone results in a finite remanent magnetization for a 2D magnet and does not need the support of the dipole coupling or an external magnetic field. In this case a small interlayer coupling simply adds to the strong Ni-Ni exchange coupling and results in a correspondingly small value of $`\mathrm{\Delta }T`$.
We have tested the assumption that the interlayer coupling acts similarly to an external magnetic field. The results for $`M_{\mathrm{Ni}}(T)`$ of a single (decoupled) Ni film with $`d_{\mathrm{Ni}}=5`$ ML, with an external magnetic field of strength $`B=J_{inter}/\mu _{\mathrm{Ni}}`$ acting exclusively on the topmost Ni layer, are practically the same as for the corresponding coupled trilayer system.
Furthermore, we have calculated the induced Ni magnetization at $`T>T_{C,\mathrm{Ni}}`$ for different thicknesses $`d_{\mathrm{Ni}}`$ of the Ni film. Results for $`\mathrm{\Delta }T(d_{\mathrm{Ni}})`$ are shown in Fig.(2) for two different values of $`J_{inter}`$ and for $`1\le d_{\mathrm{Ni}}\le 6`$ ML. The other coupling parameters are the same. The resulting absolute value $`\mathrm{\Delta }T(d_{\mathrm{Ni}})`$ exhibits a maximum at about $`d_{\mathrm{Ni}}=4`$ ML, see the inset of Fig.(2). For comparison, the corresponding results calculated by the MFT approach are also shown. On the other hand, the relative temperature shift $`\mathrm{\Delta }T/T_{C,\mathrm{Ni}}(d_{\mathrm{Ni}})`$, scaled by the Curie temperature $`T_{C,\mathrm{Ni}}(d_{\mathrm{Ni}})`$ of the single Ni film, increases with decreasing thickness of the Ni film, see Fig.(2). This indicates the increasing importance of the magnetic fluctuations for a decreasing film thickness. Experimental results are also displayed for two different Cu spacer thicknesses.
In addition we investigate the longitudinal susceptibility $`\chi _{\mathrm{Ni}}(T)`$ of the Ni film. For this purpose $`\chi _i(T)`$ for the $`i`$th magnetic layer, $`i=1\mathrm{}d`$, is calculated from the difference of the magnetizations $`M_i(T,B)`$ with $`B=0`$ and $`B=2`$ G,
$$\chi _i^{zz}(T)\equiv \chi _i(T)=\frac{\partial M_i(T,B)}{\partial B}\approx \frac{\mathrm{\Delta }M_i(T,B)}{\mathrm{\Delta }B}.$$
(5)
In Fig.(3) the Ni susceptibility $`\chi _{\mathrm{Ni}}(T)`$ averaged over all Ni layers is displayed, corresponding to the results of Fig.(1). A resonance-like peak of $`\chi _{\mathrm{Ni}}(T)`$ is obtained for $`T>T_{C,\mathrm{Ni}}`$, as has been reported previously . With increasing interlayer coupling $`J_{inter}`$ the susceptibility peak is shifted to higher temperatures. Also, the maximum value of $`\chi _{\mathrm{Ni}}(T)`$ decreases strongly and its width increases markedly . For a strong $`J_{inter}`$ the Ni susceptibility is so small that it may be hardly measurable. A singularity of $`\chi _{\mathrm{Ni}}(T)`$ will occur at $`T=T_{C,\mathrm{Co}}`$, since there the induced Ni magnetization, although small, vanishes in accordance with the vanishing Co magnetization. Thus, $`T_{C,\mathrm{Co}}`$ corresponds to the true phase transition temperature of the coupled magnetic Co/Cu/Ni trilayer system .
In Refs. it was found for the Co/Cu/Ni trilayer system that the observed Ni remanent magnetization vanishes above a temperature $`T_{\mathrm{Ni}}^{*}`$, where $`T_{C,\mathrm{Ni}}<T_{\mathrm{Ni}}^{*}<T_{C,\mathrm{Co}}`$. On the other hand, either no susceptibility signal or only a small peak in $`\chi _{\mathrm{Ni}}(T)`$ could be measured at $`T_{\mathrm{Ni}}^{*}`$. This might be due, e.g., to the occurrence of a multidomain state or to a magnetic reorientation in the Ni film. A true phase transition in the thermodynamic sense corresponds to a nonanalytic behavior of the free energy, resulting in singularities of, e.g., the correlation length or the susceptibility, as found at $`T_{C,\mathrm{Co}}`$. Regardless of the particular behavior of the magnetic properties at $`T_{\mathrm{Ni}}^{*}`$, we emphasize that the observed strong induced Ni magnetization and the shift of the Ni susceptibility for $`T>T_{C,\mathrm{Ni}}`$ due to the interlayer coupling $`J_{inter}`$ are caused by the presence of magnetic fluctuations. To compare the measured and calculated temperature shifts $`\mathrm{\Delta }T`$ we have assumed $`T_{infl}\approx T_{\mathrm{Ni}}^{*}`$.
We have investigated asymmetric trilayers which exhibit different Curie temperatures for the decoupled Ni and Co films, e.g. for single magnetic layers or a thick nonmagnetic spacer. The interlayer coupling $`J_{inter}`$ influences mainly the magnetization of the Ni film, which has the lower ordering temperature, whereas $`T_{C,\mathrm{Co}}`$ stays practically constant. On the other hand, a symmetric system, e.g. a Ni/Cu/Ni trilayer with equal Ni film thicknesses or a periodic multilayer system, has a single Curie temperature $`T_C`$. Here $`J_{inter}`$ will enhance $`T_C`$ considerably, by amounts similar to those discussed for the asymmetric trilayer. Indeed, this has been observed for a Ni/Au multilayer system . In principle, by an appropriate combination of tri- and multilayers, for instance by varying the materials and the thicknesses of the magnetic and nonmagnetic layers, one might tune the magnetic properties to suit possible applications.
In summary, we have calculated the effect of the interlayer exchange coupling in a magnetic Co/Cu/Ni trilayer system by means of a many-body Green’s function approach for a Heisenberg Hamiltonian. $`J_{inter}`$ induces a considerable magnetization in the Ni film for $`T>T_{C,\mathrm{Ni}}`$ and shifts the Ni susceptibility peak to larger temperatures. Also, the width of the Ni susceptibility increases, whereas its maximum value decreases strongly. We have shown that for reasonable values of $`J_{inter}`$ the observed strong induced Ni magnetization can be obtained only if the magnetic fluctuations in these 2D systems are taken into account properly. Corresponding results calculated by an MFT approach, which neglects these fluctuations, yield a 10 times smaller increase and cannot explain the observed magnetic behavior of the Co/Cu/Ni trilayer system. The influence of the magnetic fluctuations becomes stronger for smaller Ni film thicknesses, see Fig.(2), indicating the 2D character of the important correlations. Note that we have investigated the effect of $`J_{inter}`$ on the magnetic properties solely by considering thermal fluctuations. Other possible influences such as magnetic noncollinearities are not discussed here.
This work has been supported by the DFG, Sonderforschungsbereich 290. Discussions with C. Timm are gratefully acknowledged. |
Distribution of Dark Matter in Bulge, Disk and Halo inferred from High-accuracy Rotation Curves<sup>1</sup><sup>1</sup>1Proc. ”Galaxy Dynamics: from early times to the present”, IAP Ap Meeting, 1999 July, ed. F.Combes, G.Mamon and V.Charmandaris, ASP Conf. Series
## 1. Introduction
The dark matter inferred from analyses of flat rotation curves dominates the mass of galaxies (Rubin et al 1980, 1982; Bosma 1981; Mathewson et al 1996; Persic et al 1996). However, the distributions of dark matter in the inner disk and bulge have not been thoroughly investigated yet because of the difficulty in observing rotation curves of the innermost part of galaxies. In order to investigate the inner kinematics and rotation properties of spiral galaxies, we have shown that the CO molecular line is most useful because of its high concentration in the center as well as its negligible extinction through nuclear gas disks (Sofue 1996, 1997; Sofue et al 1997, 1998, 1999). Recent high-dynamic-range CCD spectroscopy in optical lines has also made it possible to obtain high-accuracy rotation curves for the inner regions (Rubin et al 1997; Sofue et al 1998). In this article, we review the observations of high-accuracy rotation curves of spiral galaxies and discuss their general characteristics. Using the rotation curves, we derive the distribution of surface mass density (SMD), and discuss the radial variation of the mass-to-luminosity ratio and the dark mass fraction.
## 2. Universal Properties of Rotation Curves
Persic et al (1995, 1996), using H$`\alpha `$ position-velocity data from Mathewson et al (1996), have extensively studied the universal properties of rotation curves with particular concern for the outer disk and massive halo. However, inner rotation curves currently obtained by tracing peak-intensity velocities on optical spectra may have missed higher-velocity components in the central region, being affected by the bright bulge light. The inner kinematics is also not well traced by HI observations because of the lack of HI in the central regions (Bosma 1981a, b).
We have proposed to use the CO-line emission to overcome this difficulty, for its negligible extinction as well as its high concentration toward the center. Recent high-dynamic-range CCD spectroscopy in the H$`\alpha `$ and \[NII\] line emissions also provides us with accurate kinematics of the central regions (Rubin et al 1998; Sofue et al 1998; Bertola et al 1998). In deriving rotation curves, we applied the envelope-tracing method to the PV diagrams.
Fig. 1 shows well-sampled rotation curves obtained from these CO and CCD observations (Sofue et al 1999). Since the dynamical structure of a galaxy varies rapidly with radius toward the center, a logarithmic plot helps to overview the innermost kinematics. In fact, the logarithmic plots in Fig. 2 demonstrate their convenience for discussing the central kinematics. In such a plot, we may argue that high-mass galaxies show almost constant rotation velocities from the center to the outer edge.
We may summarize the universal properties of rotation curves in Fig. 1 and 2 as follows, which are similar to those for the Milky Way.
(1) Steep central rise: Massive galaxies show a very steep rise of rotation velocity in the central region. Rotation velocity is often found to be finite even in the nucleus. Small-mass galaxies tend to show milder, rigid-body increase of rotation velocity in the center.
(2) Sharp central peak and/or shoulder corresponding to the bulge.
(3) Broad maximum in the disk; and
(4) Flat rotation toward the edge of the galaxy due to the massive halo.
The steep nuclear rise of rotation is a universal property of massive Sb and Sc galaxies, regardless of morphological type. On the other hand, less massive galaxies show a rigid-body rise, except for the Large Magellanic Cloud (see the next section). The fact that almost all massive galaxies have a steep rise indicates that it is not due to non-circular motion by chance. If there is a bar, the interstellar gas is shocked and dense gas is bound to the shocked lane along the bar, while gas in high-velocity streaming motion is not observed to be in molecular form. Therefore, the observed velocities in the CO lines should manifest the velocities in the shocked lane, which are approximately equal to the pattern speed. Hence, the molecular line observations may underestimate the rotation velocity.
## 3. Surface-Mass Density
The mass distribution in galaxies has been obtained by assuming constant mass-to-luminosity ratios in individual components of the bulge and disk, using the luminosity profiles (e.g. Kent 1987). However, the M/L ratio may not be constant even in a single component, such as due to color gradient as well as the variation of the dark-matter fraction. Forbes (1992) has obtained the mass distribution as a function of the radius, where he has derived mean density within a certain radius (integral surface density). We have developed a method to derive a differential surface mass density as a function of the radius in order to compare the mass distribution with observed profiles of the surface luminosity (Takamiya and Sofue 1999):
Once an accurate rotation curve of a galaxy is given, we are able to calculate the surface mass density directly from the rotation velocity, $`V(r)`$, as follows (Takamiya and Sofue 1999). We assume that the ’true’ mass distribution in a real disk galaxy lies between two extreme cases: spherical and axisymmetric flat-disk distributions. For these two extreme cases, the SMD, $`\sigma (R)`$, is directly calculated as follows:
Spherical case: The mass $`M(r)`$ inside the radius $`r`$ is given by
$$M(r)=\frac{rV(r)^2}{G},$$
(1)
where $`V(r)`$ is the rotation velocity at $`r`$. Then the SMD $`\sigma _S(R)`$ at $`R`$ is calculated by,
$`\sigma _S(R)`$ $`=`$ $`2{\int _0^{\mathrm{\infty }}}\rho (r)𝑑z.`$ (2)
Here, $`r=\sqrt{R^2+z^2}`$ with $`z`$ the height from the galactic plane, and the volume mass density $`\rho (r)`$ is given by
$$\rho (r)=\frac{1}{4\pi r^2}\frac{dM(r)}{dr}.$$
(3)
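A minimal numerical sketch of the spherical case (Python with scipy; the rotation curve below is a toy model for illustration, not one of the observed galaxies, and the vertical integral is truncated at a finite height) is:

```python
import numpy as np
from scipy.integrate import quad

G = 4.302e-6                                   # kpc (km/s)^2 / M_sun

def V(r):                                      # toy curve: rigid-body rise -> flat
    return 220.0*r/np.sqrt(r**2 + 0.5**2)      # (r in kpc, V in km/s)

def M(r):                                      # Eq. (1)
    return r*V(r)**2/G

def rho(r, dr=1e-4):                           # Eq. (3), with numerical dM/dr
    return (M(r + dr) - M(r - dr))/(2*dr)/(4*np.pi*r**2)

def sigma_S(R, zmax=200.0):                    # Eq. (2), truncated at z = zmax
    val, _ = quad(lambda z: 2.0*rho(np.sqrt(R**2 + z**2)), 0.0, zmax, limit=200)
    return val

for R in (0.5, 1.0, 2.0, 5.0, 10.0):
    print(f"R = {R:4.1f} kpc: sigma_S = {sigma_S(R):.3e} M_sun/kpc^2")
```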
Flat-disk case: The SMD for a thin flat-disk, $`\sigma _D(R)`$, is derived by solving the Poisson’s equation:
$$\sigma _D(R)=\frac{1}{\pi ^2G}\left[\frac{1}{R}\int _0^R\left(\frac{dV^2}{dr}\right)_xK\left(\frac{x}{R}\right)𝑑x+\int _R^{\mathrm{\infty }}\left(\frac{dV^2}{dr}\right)_xK\left(\frac{R}{x}\right)\frac{dx}{x}\right],$$
(4)
where $`K`$ is the complete elliptic integral, which becomes very large when $`x\simeq R`$ (Binney & Tremaine 1987).
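A corresponding sketch for the flat-disk case (again a hedged illustration with the same toy rotation curve; the outer integral is truncated at an assumed disk edge) is given below. A practical caveat: scipy's `ellipk(m)` takes the *parameter* $`m=k^2`$, so $`K(x/R)`$ must be coded as `ellipk((x/R)**2)`; the integrable logarithmic singularity at $`x=R`$ is tolerated because `quad` never samples the interval endpoints.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import ellipk

G = 4.302e-6                                   # kpc (km/s)^2 / M_sun

def dV2dr(r, V, dr=1e-4):                      # numerical derivative of V^2(r)
    return (V(r + dr)**2 - V(r - dr)**2)/(2*dr)

def sigma_D(R, V, rmax=100.0):
    """Flat-disk SMD of Eq. (4); rmax is an assumed outer truncation radius."""
    I1, _ = quad(lambda x: dV2dr(x, V)*ellipk((x/R)**2), 0.0, R, limit=200)
    I2, _ = quad(lambda x: dV2dr(x, V)*ellipk((R/x)**2)/x, R, rmax, limit=200)
    return (I1/R + I2)/(np.pi**2*G)

V = lambda r: 220.0*r/np.sqrt(r**2 + 0.5**2)   # same toy curve as above
for R in (0.5, 1.0, 2.0, 5.0, 10.0):
    print(f"R = {R:4.1f} kpc: sigma_D = {sigma_D(R, V):.3e} M_sun/kpc^2")
```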
The accuracy of this method can be examined by calculating the SMD under the spherical and flat-disk assumptions for a known model potential and comparing with the ‘true’ SMD. We have used the Miyamoto-Nagai (MN) potential (Miyamoto & Nagai 1975). Fig. 3 shows the true SMD and the SMDs calculated under the spherical and flat-disk assumptions for an RC corresponding to the MN potential. The spherical case reproduces the true SMD well in the inner region. This is because a spherical component is dominant within the bulge. On the other hand, the flat-disk case mimics the true SMD in the disk region, which is also reasonable. Near the outer edge, the flat-disk case appears to be a better tracer of the true SMD. We stress that the results for the spherical and flat-disk assumptions do not differ from the true SMD by more than a factor of 1.5, except for the outermost region where the edge effect for the spherical case becomes significant.
We have calculated SMD profiles for the Sb and Sc galaxies for which accurate rotation curves are available, and show the results in Fig. 4. Sb and Sc galaxies show surface-mass distributions similar to each other. In the disk region, at radii 3 to 10 kpc, the SMD decreases exponentially outward. In the bulge region of some galaxies, the SMD shows a power-law decrease, much steeper than an exponential one. This indicates that the surface-mass concentration in the central region is higher than the luminosity concentration. The central activities appear not to be directly correlated with the mass distribution. Note that the present sample includes galaxies showing Seyfert or LINER activity, jets, and/or black holes.
Recently, high-resolution kinematics of the Large Magellanic Cloud in the HI line has been obtained by Kim et al (1998). Using their position-velocity diagrams, we have derived a rotation curve and calculated an SMD distribution around the kinematical center. Fig. 5 shows the thus-obtained mass distribution in the LMC. The SMD has a sharp peak near the kinematical center, indicating a dense massive core. It is surrounded by an exponential disk and a massive halo. The core component is not associated with any optical bulge-like component, and we call it a dark bulge. The dark bulge is significantly displaced from the stellar bar. Hence, such a dwarf galaxy as the LMC was found to show a mass-density profile similar to those of the spiral galaxies, except for the absolute values.
## 4. Mass-to-Luminosity Ratio and Dark-Matter Fraction
The surface mass density can be, then, directly compared with observed surface luminosity, from which we can derive the mass-to-luminosity ratio (M/L) as normalized to unity at $`R=2`$ kpc. Fig. 5 shows the thus obtained radial distributions of M/L for a disk assumption (Takamiya and Sofue 1999). In order to see qualitatively the variation of dark-matter fraction, we define a quantity, DMF, by
$$DMF=1-a/(M/L),$$
where $`a`$ is a constant and M/L is normalized to unity at $`R=2`$ kpc. Here, we assume that the DMF is 0.5 at $`R=2`$kpc, or $`a=0.5`$. Fig. 6 shows the thus obtained DMF profiles for the galaxies.
The figures indicate that the M/L and DMF are not constant at all, but vary significantly within a galaxy.
(1) M/L and DMF vary drastically within the central bulge. In some galaxies, they increase inward toward the center, suggesting a massive dark core. In other galaxies, they decrease toward the center, likely due to a luminosity excess such as from active nuclei.
(2) M/L and DMF gradually increase in the disk region, and the gradient increases with the radius.
(3) M/L and DMF increase drastically from the outer disk toward the outer edge, indicating the massive dark halo. In many galaxies, the dark halo can be seen nearly directly in this figure, where the M/L exceeds ten, and sometimes a hundred.
## 5. Discussion
Although the variation of the M/L ratio in the visible bands may partly be due to a gradient of the stellar population, the large amplitude of the M/L variation cannot be entirely attributed to a color gradient. The M/L and DMF profiles in Figs. 6 and 7 indicate that some galaxies (e.g., NGC 4527 and NGC 6946) show a very steep increase, by more than an order of magnitude, toward the center within a few-hundred-pc radius. Such a variation of M/L is hard to understand in terms of a color gradient in the bulge. Such a steep central increase of M/L may imply that the bulge contains an excess of dark mass inside a few hundred pc, which we call a “massive dark core”. The scale radius is of the order of 100–200 pc, and the mass is estimated to be $`M\sim RV^2/G\sim 10^9M_{\odot }`$. The massive dark cores may be objects linking the galactic bulge with the massive black holes in the nuclei (Miyoshi et al. 1995; Genzel et al. 1997; Ghez et al. 1998) and/or the massive core objects causing a central Keplerian RC (Bertola et al. 1998).
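As a quick order-of-magnitude check of this estimate (with illustrative values $`R=150`$ pc and $`V=170`$ km s<sup>-1</sup>; the velocity is an assumption, not a quoted measurement):

```python
# M ~ R V^2 / G with R in kpc and V in km/s, G in kpc (km/s)^2 / M_sun
G = 4.302e-6
R, V = 0.15, 170.0
print(f"M ~ {R*V**2/G:.1e} M_sun")   # -> ~1e9 M_sun, as quoted above
```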
## References
Bertola, F., Cappellari, M., Funes, J.G., Corsini, E.M., Pizzella, A., & Vega-Bertran, J.C. 1998, ApJ, 509, L93.
Binney, J., & Tremaine, S. 1987, Galactic Dynamics(Princeton Univ. Press)
Bosma, A. 1981, AJ, 86, 1825
Bosma, A. 1981, AJ, 86, 1791
Clemens, D.P. 1985, ApJ, 295, 42
Forbes, D.A. 1992, A&AS, 92, 583
Genzel, R., Eckart, A., Ott, T., & Eisenhauer, F. 1997, MNRAS, 291, 219
Ghez, A.M., Klein, B.L., Morris, M., & Becklin, E.E. 1998, ApJ, 509, 678.
Honma, M., & Sofue, Y. 1997, PASJ, 49, 453
Kent, S. M. 1987, AJ, 93, 816
Kim, S., Staveley-Smith, L., Dopita, M. A., Freeman, K. C., Sault, R. J., Kesteven, M. J., & McConnell, D. 1998, ApJ, 503, 674
Mathewson, D.S. and Ford, V.L. 1996 ApJS, 107, 97.
Miyamoto, M., & Nagai, R. 1975, PASJ, 27, 533
Miyoshi, M., Moran, J., Herrnstein, J., Greenhill, L., Nakai, N., Diamond, P., & Inoue, M. 1995, Nature, 373, 127.
Persic, M., Salucci, P., & Stel, F. 1996, MNRAS, 281, 27
Persic, M., & Salucci, P. 1995, ApJS, 99, 501
Rubin, V. C., Ford, W. K., & Thonnard, N. 1980, ApJ, 238, 471
Rubin, V. C., Ford, W. K., Thonnard, N. 1982, ApJ, 261, 439
Rubin, V., Kenney, J.D.P., Young, J.S. 1997 AJ, 113, 1250.
Rubin, V.C., Burstein, D., Ford, W.K.,JR., & Thonnard, N. 1985, ApJ, 289, 81
Sofue, Y. 1996, ApJ, 458, 120
Sofue, Y. 1997, PASJ, 49, 17
Sofue, Y. 1999, PASJ, 52, in press.
Sofue, Y., Tutui, Y., Honma, M., and Tomita, A., 1997, AJ, 114, 2428
Sofue, Y., Tomita, A., Honma, M., Tutui, Y., & Takeda, Y. 1998, PASJ, 50, 427
Sofue, Y., Tutui, Y., Honma, M., Tomita, A., Takamiya, T., Koda, J., & Takeda, Y. 1999, ApJ, 523, in press
Takamiya, T., & Sofue, Y. 1999, submitted to ApJ.
Progressive suppression of spin relaxation in 2D channels of finite width
## I Introduction
Spintronics, a nontrivial extension of conventional electronics, adds functionality by utilizing the carrier spin degree of freedom. Spin can potentially be used as a far more capacious quantum information storage cell, be involved in the transfer of information and in elaborate schemes of information processing, both quantum and classical, and be integrated with electric-charge counterparts in combined designs. Electron or nuclear spin manipulation is believed to be the key component of prospective realizations of a quantum computer (for example, see Ref. ).
Several devices have been proposed so far, including spin hybrid and field-effect transistors, tunneling structures with magnetic layers, and spin-based memory. The most notable success was obtained with giant magnetoresistance (GMR) devices, which rely on the variation of electron scattering in a multilayer stack of ferromagnetic films separated by nonmagnetic materials. Though the variation of the current through the structure is small, it is sufficient for detection, and the excellent sensitivity of the device to weak external magnetic fields (typically $`\sim 1\%`$ change of resistance per oersted) opened the door for massive data storage applications. Spin coherence is a pivotal prerequisite for the operability of the prospective spintronic devices.
Concerning the experimental realization of spintronic elements, most proposals rely on the injection of spin-polarized electrons from a ferromagnetic layer and suffer from the poor quality of the ferromagnet-semiconductor interfaces, which produce a large number of surface states causing strong spin relaxation and reducing the polarization of the injected electrons to only a few percent. Since the injected electron crosses the metal-semiconductor interface twice in spin-valve designs, i.e., at the source and drain terminals of the device, the importance of the interface problem multiplies. Furthermore, spin relaxation in the active region of the device adds to the complications as well.
In our study, we consider the possibilities for suppressing spin relaxation of the conduction electrons and evaluate different approaches. It is extremely interesting to investigate what happens to the spin relaxation in a 2D electron gas as a long strip of finite width is formed by electrostatic squeezing with the split-gate technique or by another method. These systems can be used to connect active elements in integrated chip designs and can even become a part of the active device as we continue to reduce the size of the element. Eventually we should reach the 1D limit exhibiting no spin relaxation. This transition to 1D behavior was recently documented in the computer simulation of the Datta and Das spin transistor reported by Bournel et al. that inspired our research. We concentrate our efforts on the definition of the cross-over regions from the point of view of the spin relaxation time and on the description of its behavior in the broad region around the cross-over points.
The rest of this paper is organized as follows: We start with a brief account of spin relaxation mechanisms in semiconductors in order to identify the most relevant. We discuss its transformation to lower-dimensional systems, as well as different possibilities to suppress destructive spin relaxation (Sec. II). The model of a narrow patterned 2D electron gas channel is described in Sec. III. Results of Monte Carlo simulation of spin relaxation in the channel are given in the next section (Sec. IV), along with a comprehensive discussion of the relaxation regimes in terms of the channel width and a value of the spin splitting of electron subbands (Sec. V). A brief summary follows at the end.
## II Mechanisms of spin relaxation
There are several mechanisms that can cause spin relaxation of conduction electrons (see Ref. for an up-to-date informal review):
i) The mechanism of D’yakonov and Perel’ (DP) takes into consideration that spin splitting of the conduction band in zinc-blende semiconductors at finite wave vectors is equivalent to an effective magnetic field that causes electron spin to precess. For electron experiencing random multiple scattering events the orientation of this effective field is random and that leads to the spin relaxation.
ii) Bir-Aronov-Pikus (BAP) processes involve a simultaneous flip of electron and hole spins due to electron-hole exchange coupling.
iii) Spin relaxation due to momentum relaxation is possible directly through spin-orbit coupling \[Elliot-Yafet (EY) process\].
iv) Spin relaxation can take place as a result of hyperfine interaction of electron spins with magnetic momenta of lattice nuclei, the hyperfine magnetic field being randomly changed due to the migration of electrons in the crystal.
BAP processes require a substantial hole concentration that is not available in unipolar doped structures. EY processes are suppressed in 2D environments — the target of our prime interest. Thus, we concentrate our efforts on the DP mechanism as the most relevant one for the case considered.
### A DP mechanism and change of the dimensionality
As a result of the relatively low zinc-blende crystal symmetry, the effective $`2\times 2`$ electron Hamiltonian for the conduction electrons contains spin dependent terms that are cubic in the electron wave vector $`𝒌`$
$$H=\eta ^{\prime }[\sigma _xk_x(k_y^2-k_z^2)+\text{ etc }].$$
(1)
The constant $`\eta ^{\prime }`$ reflects the strength of the spin splitting in the conduction band, whose value is determined by the details of the semiconductor band structure, $`\sigma _i(i=x,y,z)`$ are the Pauli matrices, and the other terms in Eq. (1) are obtained by cyclic permutation of the indices.
As the system dimensionality is changed from 3D to 2D by, for example, the extremely strong spatial confinement in the third direction — which can be achieved in semiconductor heterostructures — the character of the spin relaxation is modified as well. First of all, the average squared wave vector in the direction of the quantum confinement (axis $`z`$) is large, so the terms in the spin splitting involving $`k_z^2`$ will dominate. This results in the orientation of the effective magnetic field in the plane of the 2D electron gas ($`xy`$ plane). Still, the elementary rotations around random axes, all lying in one plane, do not commute with each other, so electrons reaching the same final destination by different trajectories will have different spin orientations. For longer times, more and more distinguishable trajectories become possible, and this leads to a progressive reduction of the averaged spin polarization of the electron ensemble. Since it is rather easy to design a structure with $`|k_z|>k_F`$, the splitting for a typical electron will be larger and the rate of spin relaxation will be enhanced. Similar behavior can be naively expected with the further reduction of the dimensionality to the 1D case. Let the axis $`x`$ be along the wire in what follows. Here again the main terms will contain the spatial-confinement multipliers $`k_y^2`$, $`k_z^2`$. The principal difference from the 2D case is that now all rotations are limited to a single axis direction, and they commute with each other. Apart from a systematic rotation, the spin polarization does not disappear with time. All particles, independently of the number and the sequence of the scattering events, that reach the same final point B will have the same spin orientation. This statement is relaxed if one allows intersubband scattering in the one-dimensional system. This type of scattering becomes progressively more important for wider and wider quantum wires with more and more subbands involved. Thus we recover, as one would predict, the 2D or 3D case in the limit of very thick quantum wires.
Actually, there are two types of terms that appear in the effective-mass Hamiltonian for the 2D electron gas: the bulk-asymmetry- and structure-asymmetry-induced (Bychkov–Rashba). They are of the same functional form
$$H=\eta \,𝝈\cdot [𝒌\times \widehat{𝒛}]\equiv \eta (\sigma _xk_y-\sigma _yk_x),$$
(2)
where $`\widehat{𝒛}`$ is a unit vector in the $`z`$ direction. We are not interested here in the origin of the spin splitting term in our model problem and simply assume its presence with a wave-vector dependence given by Eq. (2). (see Refs. for the band-structure calculation of the linear-in-$`k`$ spin splitting in heterostructures simultaneously treating bulk- and structure-induced asymmetry.)
This form of the spin Hamiltonian is equivalent to the precession of the spin in the effective magnetic field
$$H=\frac{\hbar }{2}𝝈\cdot 𝛀_{\mathrm{eff}},\text{ where }𝛀_{\mathrm{eff}}\equiv \eta _{\mathrm{DP}}\,𝒗\times \widehat{𝒛}.$$
(3)
Here the particle velocity is $`𝒗=\hbar 𝒌/m^{*}`$, and the obvious substitution $`\eta \to \eta _{\mathrm{DP}}`$ is made for convenience. $`\eta _{\mathrm{DP}}`$ is expressed in inverse length units: for a particle moving ballistically over the distance $`1/\eta _{\mathrm{DP}}`$, the spin rotates by an angle $`\varphi =1`$.
$`{\displaystyle \frac{d𝑺}{dt}}=𝛀_{\mathrm{eff}}\times 𝑺.`$The reciprocal effect of electron spin on the orbital motion through spin-orbit coupling can often be ignored due to the large electron kinetic energy in comparison to the typical spin splittings and strong change of the momentum in scattering events.
### B Possibilities to influence spin relaxation time
In addition to understanding the mechanisms governing spin depolarization of carriers, we wish to consider and assess the possibilities to actively influence these destructive processes in order to improve the parameters and gain new functionality of the future spintronic devices. The following are potential approaches for manipulating spin relaxation times:
i) A rather simple observation follows directly from the essence of the regime of motional narrowing (small elementary spin rotations during ballistic electron flights). Since $`\tau _S^{-1}\propto \tau _p\mathrm{\Omega }_{\mathrm{eff}}^2`$, a reduction of the momentum relaxation time, $`\tau _p`$, leads to a suppression of spin relaxation. On the other hand, this will increase broadening as well as decoherence, and can worsen device parameters.
ii) Since the bulk-asymmetry- and structure-asymmetry-induced spin splittings are additive with the same $`k`$-dependence, it is possible to tune combined spin splittings in the conduction band to a desired value through manipulation of the external electric field.
iii) Additional spin splitting, which is independent of the electron wave vector will fix the precession axis. An evident possibility here is the Zeeman effect. The time of spin relaxation scales in the presence of the external magnetic field, $`B`$, as
$`{\displaystyle \frac{1}{\tau _S(B)}}={\displaystyle \frac{1}{\tau _S(0)}}{\displaystyle \frac{1}{1+(\mathrm{\Omega }_\mathrm{L}\tau _p)^2}},`$where $`\mathrm{}\mathrm{\Omega }_\mathrm{L}=g\mu _\mathrm{B}B`$ is a Zeeman splitting of electron spin sublevels. This equation suggests that for $`\mathrm{\Omega }_\mathrm{L}\tau _p=1`$ spin relaxation time will double.
iv) There is a possibility of controlling the spin relaxation of the conduction electrons by doping. The first realization was reported in $`\delta `$-doped heterostructures, and a several-orders-of-magnitude longer spin memory has recently been observed in $`n`$-doped structures.
v) When the channel width, $`L`$, is comparable to the magnitude of the electron mean free path, $`L_p`$, from the classical point of view, the sequential alteration of one of the wave vector components should effectively reduce spin relaxation (reflective boundaries). Scattering on the boundaries (diffusive boundaries) will decrease $`\tau _p`$ as well and can potentially influence spin relaxation. Quantum mechanically, the channel narrowing leads to the quantization of the electron transverse motion in the strip and absence of spin relaxation without intersubband scattering.
The first four possibilities are considered to some extend in the scientific literature or are just evident consequences of the relaxation mechanisms. The fifth deserves a more thorough analysis. To check the effect of the patterning of the 2D electron gas into a strip of some large width $`L`$, we developed a simple Monte Carlo sumulator encompassing random scattering of the particles in the channel, reflection from the boundaries and spin rotation during free flight due to spin splitting of the conduction band. Now we describe in more detail our working model.
## III Model
As a model system we consider a strip of the 2D electron gas. The third dimension (axis $`z`$) is quantizing and it is absolutely irrelevant to our consideration of the particle movement in the real space, since the intersubband gap is larger than all other energy scales involved.
i) We will assume that all particles have the same velocity $`|𝒗|`$.
ii) Scattering is considered to be elastic and isotropic in order to retain model simplicity; the former assumption preserves velocity modulus, the later one eliminates any correlations between directions of the particle velocity before and after the scattering event.
iii) We neglect electron-electron interactions and consider all particles to be independent.
iv) An assumption that all scatterers are completely uncorrelated leads to an exponential distribution of times between any two consecutive scattering events, their average is called the momentum relaxation time, $`\tau _p`$. This time corresponds to the mean-free path $`L_p=|𝒗|\tau _p`$.
v) The problem spin Hamiltonian is given in the form of Eq. (2) and influences only spin coordinate. We ignore the reciprocal effect of the spin on the motion in the real space.
vi) The width of the channel is large, several or even tens of mean-free paths so that it is permissible to consider classically the electron real-space movement.
vii) At first we consider only reflecting boundary conditions at the borders of the 2D strip. Reflecting channel boundaries preserve longitudinal component of the particle velocity and change sign of the normal component in collisions. Later, we will compare our results with diffusive boundaries.
### A Types of experiments
For simplicity, in all of our experiments particles will be injected into the system at some particular point A (input terminal) at time $`t=0`$ with spin $`𝑺`$. As the particle experiences multiple scattering events a diffusive pattern of motion is formed with, for an isotropic system, a Gaussian distribution
$`\mathrm{\Gamma }(r){\displaystyle \frac{1}{r}}\mathrm{exp}\left({\displaystyle \frac{r^2}{r^2}}\right),`$that broadens as time increases: $`rL_p\sqrt{t/\tau _p}`$.
Evidently, there are multiple possibilities for experimental setups. The definition of spin relaxation, obtained in these experiments will vary correspondingly. Now, we consider several important possibilities:
i) The most informative type of experiment would be to obtain the average spin $`𝑺`$ as a function of time $`t`$ at each point B (output terminal). Results of all other experimental configurations can be derived by partial integration of this correlation function.
ii) At time $`t`$ average $`𝑺`$ is calculated for the whole ensemble independent of the real-space position of electrons. Optical experiments are considered likely to deliver information of this type, because of the limited possibilities to focus optical system and fundamental restrictions.
iii) Particles, reaching output terminal are removed from the system immediately, $`𝑺`$ is measured as a function, i.e., of the interterminal distance. Individual particles can spend a substantial time in the system, depending on their trajectories. This type of experiment is the most probable variant in electric experiments where points A and B can be identified as real device gates. Made from ferromagnetic materials, gates can inject polarized electrons and sense the polarization of the drain flux, delivering information about the average spin of carriers.
Let us now show, that our result for spin relaxation does indeed depend on the definition of the experiment. As a simple example we consider a pure 1D case. From the point of view of the first and third experiments there is no spin relaxation. There exists a systematic rotation of the spin proportional to the distance from the injection point A. Indepent of the number of scattering events and individual trajectories, all particles reaching point B will have exactly the same spin orientation, but it would be different for different choices of point B. Thus, the transverse component of the spin $`S_y(x)=S_y^0\mathrm{cos}(\eta _{\mathrm{DP}}x)`$. For the second realization we readily conclude that the average spin
$`S_y={\displaystyle 𝑑xS_y(x)\mathrm{\Gamma }(x)}`$ $``$ $`S_y^0\mathrm{exp}\left({\displaystyle \frac{1}{4}}\eta _{\mathrm{DP}}^2x^2\right)`$ (5)
$`S_y^0\mathrm{exp}\left({\displaystyle \frac{1}{4}}\eta _{\mathrm{DP}}^2L_p^2{\displaystyle \frac{t}{\tau _p}}\right),`$
that is an exponential decay of spin polarization in the spatially more and more broadening electron distribution.
Note that in the 2D case the same phenomenon of the systematic rotation of the electron spin takes place in addition to the (2D–3D specific) DP spin relaxation. This systematic rotation on the angle $`\mathit{\varphi }=\eta _{\mathrm{DP}}𝒓\times \widehat{𝒛}`$ for the particle real space transfer $`𝒓`$ is again independent on the detailes of individual trajectories.
## IV Results of simulation
Fig. 1a shows the time dependence of the average spin polarization $`2S_y`$ in the channel. It is found that apart from the systematic rotation this dependence is essentially the same for all points inside channel that have a substantial electron occupation, so it is possible to calculate reliably a value of spin polarization. The calculation is performed for the D’yakonov-Perel’ parameter $`\eta _{\mathrm{DP}}L_p=0.05`$ and different channel widths. For this particular graph the trajectories of $`N=5\times 10^3`$ electrons are traced that gives a standard deviation in the definition of the spin polarization on the order of $`\sqrt{N}10^2`$ (throughout our investigation $`N=10^310^5`$).
A strong dependence of the polarization decay on the channel width is obtained. The decay is found to be approximately exponential, apart from the small initial interval. This is the region where the diffusive regime of electron motion and spin rotation is not yet established ($`\sqrt{t/\tau _p}1`$). It overlaps or is followed by the region where the typical trajectory of a randomly scattered particle does not reaches channel boundaries yet. Here the relaxation should be essentially the same as in the unpatterned 2D gas.
This generally exponential temporal behavior of spin polarization justifies an introduction of the spin relaxation time, the dependence of this decay time on the channel width is presented in Fig. 1b. In case of sufficiently wide channels so defined relaxation time approaches the 2D limit of $`\tau _S^{2\mathrm{D}}`$, for narrow channels $`\tau _S`$ scales as $`L^2`$.
In Fig. 1c we summarize results of our simulation of channels with a fixed width and different values of D’yakonov-Perel’ term $`\eta _{\mathrm{DP}}`$. As the constant $`\eta _{\mathrm{DP}}`$ becomes larger and larger, the system behavior graduately goes out of the regime of motional narrowing since elementary rotations on the elementary free flights are not small anymore. This leads to a quick spin relaxation as reflected by the reduction and saturation of $`\tau _S`$. On the other side of this dependence is a steep increase in the relaxation time as $`\eta _{\mathrm{DP}}`$ decreases. We have found an intermediate region, where $`\tau _S^1`$ scales as a second power of the DP constant, that is followd by a forth-power dependence for very small values of $`\eta _{\mathrm{DP}}`$.
Thus, in the well-established regime of motional narrowing and in the limit of sufficiently narrow channels, we propose an asymptotic formula $`\tau _S\eta _{\mathrm{DP}}^4L^2`$.
To identify an effect of the particular choice of the boundary conditions we have simulated a channel with diffusive boundaries. The particle that reaches the channel border is scattered back into the channel with equal probabilities for all directions of the new particle velocity. The results of the calculation repeat closely the case of the reflective boudaries up to the very narrow channel with widths of only several mean free paths. No systematic deviation of the spin relaxation time is observed for wider channels.
## V Regimes of the DP relaxation in a channel
Thus, we distinguish the following regimes of spin relaxation in channels of finite width as we vary both independent parameters $`\eta _{\mathrm{DP}}`$ and $`L`$ of the problem in consideration:
First of all, for very large spin splitting ($`\eta _{\mathrm{DP}}L_p1`$) we violate a general condition for the motional narrowing regime for the DP spin relaxation. Each elementary rotation is not small and the information about the spin polarization is lost already after the first random scattering event (see as well Fig. 2 as a guide). For this regime $`\tau _S\tau _p`$ and is the shortest of all regimes.
When $`\eta _{\mathrm{DP}}`$ is small ($`\eta _{\mathrm{DP}}L_p<1`$), we are back in the regime of motional narrowing and a well known equation $`\tau _S^{2\mathrm{D}}\tau _p(\eta _{\mathrm{DP}}L_p)^2`$ defines the time of spin relaxation in the 2D system. Now we narrow the strip of the 2D electron gas. The behavior is unchanged until $`\eta _{\mathrm{DP}}L1`$. For smaller channel widths ($`\eta _{\mathrm{DP}}L<1`$) DP spin relaxation is suppresed very effectively, with $`\tau _S\tau _S^{2\mathrm{D}}(\eta _{\mathrm{DP}}L)^2`$. For $`L<L_p`$ the channel width $`L`$ acts as a new mean-free path in the system, substituting $`L_p`$ in equations. Actually, this region does not satisfy our assumption about classical motion of particles in real space; the transverse motion is quantized and this system should be considered as a quantum wire with multiple subbands. The cross-over points define broad regions of mixed behavior that becomes more definitive as we move out of them.
We conducted analysis presented here for spin relaxation in the most convenient from our point of view units. Now, we will check that the chosen range of parameters is well within the reasonable limits present in the contemporary heterostructures. For the asymmetric GaAs/AlGaAs quantum well with the electron concentration of $`10^{12}`$ sm<sup>-2</sup> in the channel the detected experimentally and confirmed by the calculation spin splitting in the conduction band is on the order of $`0.20.3`$ meV at the Fermi energy (see recent paper by Wissinger et. al and references therein). This 2D electron concentration corresponds to the $`k_F3\times 10^5`$ cm<sup>-1</sup>. Samples with $`L_p1/q=3\times 10^6`$ cm are readily available in laboratories. Furthermore, this splitting corresponds to $`\eta _{\mathrm{DP}}=69\times 10^4`$ cm<sup>-1</sup>. This set of parameters is on the border of the regimes of the motional narrowing and large elementary rotations. Reducing the spin splitting (parameter $`\eta _{\mathrm{DP}}`$) (by, for example, reduction of the electron concentration in the 2D electron gas and, thus, structure asymmetry due to the internal electric field) will push parameters into well established regime of motional narrowing, suitable for the experimental observation of the described phenomena. Datta and Das give an estimate of $`\pi /\eta _{\mathrm{DP}}=7\times 10^5`$ cm for some particular InGaAs/InAlAs heterostructures.
## VI Summary
In conclusion, we have investigated spin-dependent transport in semiconductor narrow 2D channels and explored the possibility of suppressing spin relaxation. Our approach is based on a Monte Carlo transport model and incorporates information on conduction band electron spins and spin rotation mechanisms. Specifically, an ensemble of electrons experiencing multiple scattering events is simulated numerically to analyze the decay of electron spin polarization in channels of finite width due to the D’yakonov-Perel’ mechanism. We have been able to identify different regimes of the spin relaxation in the 2D channels of finite width and obtain dependencies of the spin relaxation time on the width $`L`$ and DP parameter $`\eta _{\mathrm{DP}}`$. The most attractive for the future spintronic applications is a regime of the suppresed spin relaxation with the relaxation time, $`\tau _S`$, scaling as $`L^2`$.
This work was supported, in part, by the Office of Naval Research. We thank M. A. Stroscio for the critical reading of the manuscript. |
no-problem/9910/hep-ph9910368.html | ar5iv | text | # 1 Introduction
## 1 Introduction
History of physics provides many examples of how one studied phenomenon was used for investigating other phenomena. Here we discuss one more potential case of such a kind. Because of severe space limitations, this text can be only an extended summary of theoretical work dome in the direction. For brevity reasons as well, references are given only to the basic papers published by the present author. A more complete text may be found in . It contains also a longer list of references including related papers of different authors. Original papers are strongly recommended for interested readers.
Well-known for many years are strangeness oscillations in time evolution (and decays) of neutral kaons. They were applied to studies of the neutral kaons themselves to solve at least three kinds of experimental problems: 1) measurement of the tiny mass difference between two neutral kaon eigenstates, unattainable for conventional mass measurements; 2) identifying $`K_L`$ as the heavier eigenstate; 3) complete and unambiguous measurements of $`CP`$-violating parameters in various neutral kaon decays. As we will explain here, those strangeness oscillations can (and even should) be applied now to studies of heavier flavor hadrons (neutral and charged mesons, or even baryons).
## 2 Cascade decays
Cascade decay is a sequence of decays of both an initial state and its decay products. It consists of two or more stages which are usually considered to be quite independent of each other. However, cascade decays producing neutral kaons may have specific properties. In particular, time distribution at their secondary stage (i.e., for the neutral kaon decay) may depend on parameters of the primary decay.
The simplest example is given by decays of charmed particles (charged mesons or baryons). An essential point here, emphasized in , is that Cabibbo-favored and Cabibbo-suppressed transitions together produce in such decays a coherent mixture of $`K^0`$ and $`\overline{K}^0`$. Then the neutral kaon system evolves and decays. Its decay-time distribution depends on the initial relative content of $`K^0`$ and $`\overline{K}^0`$, i.e. on properties of the primary decay.
More complicated are cascades initiated by neutral $`D`$ or $`B`$ mesons. The initial meson here, before its primary decay, develops flavor oscillations. To the decay moment it transforms into some coherent mixture of, say, $`B^0`$ and $`\overline{B}^0`$. Its content influences the neutral kaon content in the system which emerges just after the primary decay. Thus, the secondary strangeness oscillations coherently continue oscillations of the initial flavor , and we really deal with coherent double-flavor oscillations. Decay time distributions for the secondary kaons are sensitive here to both the primary decay and mixing properties of initial mesons. Moreover, time distributions of the primary and secondary decays are non-factorisable. Detailed expressions and their discussion may be found in for $`B_d,B_s`$ initial mesons and in for $`D`$ mesons.
These examples demonstrate why and how the strangeness oscillations may be helpful in heavy flavor studies.
## 3 Problems for heavy flavors
In this section we briefly consider some problems in heavy flavor physics which may be solved by means of secondary kaon oscillations.
### 3.1 Identification of meson eigenstates
Neutral $`B`$ or $`D`$ mesons, similar to kaons, evolve in time as a mixture of two eigenstates. To discriminate them one needs to use some labels. There are (at least) three possible labels, just as for kaons: 1) shorter/longer lifetime; 2) lighter/heavier mass; 3) odd/even $`CP`$-parity, at least approximate (definition of the approximate $`CP`$-parities and discussion of their possible mode-dependence see in ). To achieve complete identification of the eigenstates one should be able to relate all these labels with each other.
There seems to be no way for direct experimental connection between lifetime and mass labels (see discussion in ). However, it is very easy to relate lifetime with $`CP`$-parity. One should only compare time distributions in decays of neutral heavy flavor mesons to final states of definite (and different) $`CP`$-parities. The secondary kaon oscillations allow to connect $`CP`$-parities and masses of the heavy flavor eigenstates , thus completing their identification. The procedure is similar to how it was done for kaons themselves, see discussion in . No other method for complete identification of the heavy flavor eigenstates has been suggested till now.
### 3.2 Parameters of $`CP`$-violation
$`CP`$-violation in evolution of neutral $`B`$ or $`D`$ mesons and their decays to final states of definite $`CP`$-parities can be parametrized in a manner similar to kaons. However, it was first noticed in , that measurements of the $`CP`$-violating parameters can be not complete: they lead to sign ambiguities if one cannot relate to each other all the eigenstate labels described in the preceding subsection. As explained in , various sign ambiguities found later by other authors (see references in ) have the same underlying nature. Thus, oscillations of secondary neutral kaons, providing complete identification of heavy flavor eigenstates, at the same tine eliminate ambiguities of $`CP`$-violating parameters. Note that measurements of kaon parameters $`\eta `$ would also show sign ambiguities if one did not know that $`K_L`$ is nearly $`CP`$-odd and heavier than $`K_S`$.
$`CP`$-violation in heavy flavor decays to final states with neutral kaons has interesting specific features. It combines, in a coherent way, contributions of two sources: the heavy flavor itself and the neutral kaons. This is true both for neutral flavored mesons and for any other flavored hadrons (charged mesons or even baryons) . Such properties may provide new possibilities for studying heavy flavor $`CP`$-violation through its ”direct” comparison with known kaon violation.
### 3.3 Separation of flavor transitions
A characteristic feature in decays of heavy flavor hadrons, in difference with strange hadrons, is possibility of several various underlying quark decays. They generate hadron decay modes with different flavor changes. Most interesting in the present context are charmed particle decays of the types $`D\overline{K}`$ (Cabibbo-favored) and $`DK`$ (doubly Cabibbo-suppressed). The suppressed decays have the amplitude smallness of order 5%. Several modes of decays $`DK^+`$ have, nevertheless, been observed experimentally.
Decays to charged kaons allow to compare only absolute values of suppressed and favored amplitudes. The situation is different for decays to neutral kaons. Amplitudes of decays to $`\overline{K}^0`$ and $`K^0`$ appear to be coherent, since the secondary decays go to common channels. Therefore, the relative phase of the amplitudes becomes measurable as well. It may be measured by means of secondary kaon oscillations in decays of charmed charged mesons or baryons (detailed formulas for time distributions see in in the limit $`\mathrm{\Delta }m_D,\mathrm{\Delta }\mathrm{\Gamma }_D0`$). In decays of neutral $`D`$-mesons the two flavor transitions may be separated from each other and from initial mixing effects by means of double-time distributions (over primary and secondary decay times) .
Again, as for the eigenstate identification, such approach is the only one suggested till now to measure the measurable relative phase. Detailed knowledge of the suppressed amplitudes would give new valuable information about CKM-matrix and $`CP`$-violation.
## 4 Conclusion
The several examples given above are sufficient to confirm that the secondary kaon oscillations may, indeed, have great analyzing power for heavy flavor physics.
## Acknowledgments
I thank I. Dunietz, B. Kayser, J.P. Silva and Z.-Z. Xing for discussions. |
no-problem/9910/hep-ph9910532.html | ar5iv | text | # DARK MATTER IN THE UNIVERSE
## 1 Introduction
Probably one of the most important discoveries of this Century was the discovery that the universe consists mostly of an unknown form of matter. This matter neither emit nor absorb light and got the name dark (or better to say, invisible) matter. It is observed only indirectly through its gravitational action and, though there are plenty of theoretical hypotheses, the nature of dark matter remains mysterious. First hints on existence of dark matter were found more than half of a century ago . Velocity dispersion of astronomical objects was larger than one would expect from observation of luminous matter. The fact that there is more mass than light in the universe, got a strong support only 40 years later. It was initiated by two groups and stimulated a burst of activity in the field. Now there are a large amount of accumulated astronomical data that unambiguously prove that the universe is dominated by an invisible matter or to be more precise there is much more gravity in the universe than all the visible matter could provide.
Very strong arguments in favor of invisible cosmic matter follow from the so called galactic rotational curves, i.e. from the observed dependence of velocities of gravitationally bound bodies on the distance from the visible center. A very well known example of rotational curves that have led to the seminal discovery of the Newton gravitational law is the distribution of velocities of planets in the Solar system (see fig. 1, taken from ref. ).
On the basis of this data it was concluded that gravitational forces drops down with distance as $`F1/r^2`$ and correspondingly, by the virial theorem, $`v^2(r)G_NM(r)/r`$, so that $`v1/\sqrt{r}`$ for point-like central mass; here $`M(r)`$ is the mass of gravitating matter inside the radius $`r`$. However measurements of rotational velocities of gas around galaxies produce a very different picture, $`v(r)`$ does not go down to zero with an increasing distance from the luminous center but tends to a constant value, see fig. 2 .
At the present day more than 1000 galactic rotational curves are measured (see e.g. ) and they show a similar behavior. It is quite a striking fact that rotational curves are very accurately flat at large distances, $`vconst`$. If such curves were observed at Kepler-Newton’s time one might conclude that the gravitational force did not obey the famous inverse square law but something quite different, $`F1/r`$, with the potential $`U\mathrm{ln}r`$. However it is very difficult, if possible at all, to modify beautiful general relativity at large distances in such a way that it would give $`1/r`$-forces. A normal interpretation of flat rotational curves is that there is an invisible matter around galaxies with mass density decreasing as
$`\rho 1/r^2`$ (1)
and correspondingly $`M(r)r`$. Such mass distribution could be in a self-gravitating isothermal gas sphere. However, if the dark matter particles do not possess a sufficiently strong self-interaction it is not clear how they would acquire thermal equilibrium.
It is not yet established how far the law (1) remains valid. If it is true up to neighboring galaxies, the average mass density of this invisible matter would be rather close to the critical one
$`\rho _c={\displaystyle \frac{3H^2}{8\pi G_N}}1.8610^{29}h_{100}\mathrm{g}/\mathrm{cm}^3`$ (2)
where $`h_{100}=H/100\mathrm{k}\mathrm{m}/\mathrm{s}/\mathrm{Mpc}`$ is dimensionless Hubble constant; by the most recent data $`h_{100}0.7`$ with the error bars of about 10-15%; for a review see ref. .
The contributions of different forms of matter to the cosmological mass/energy density according to the present day data is the following. The visible luminous matter contributes very little to total density :
$`\mathrm{\Omega }_{lum}=\rho _{lum}/\rho _c0.003h_{100}^1`$ (3)
There could be much more non-luminous baryons in the forms of faint stars, gas, etc (see below sec. 2) but the standard theory of primordial nucleosynthesis does not allow too high mass fraction of baryonic matter. It is probably a proper time and place to mention that George Gamow made a pioneering contribution to big bang nucleosynthesis. Abundances of light elements are sensitive to total number fraction of cosmic baryons, more precisely abundances of light elements depend upon the ratio of number densities of baryons to photons, $`\eta _{10}=10^{10}n_b/n_\gamma `$. Comparing theoretical predictions with observations one can deduce the value of this ratio at nucleosynthesis. The result crucially depends upon the observed abundance of deuterium since the latter is especially sensitive to $`\eta `$. There are two conflicting pieces of data: high and low deuterium, see discussion and references in the review . For low $`{}_{}{}^{2}H`$ regions the limits presented in ref. are:
$`\mathrm{\Omega }h_{100}^2=0.0150.023\mathrm{and}\eta _{10}=4.26.3`$ (4)
while for high $`{}_{}{}^{2}H`$:
$`\mathrm{\Omega }h_{100}^2=0.0040.01\mathrm{and}\eta _{10}=1.22.8`$ (5)
Most probably one or other of the above is incorrect and the predominant attitude is in favor of low deuterium. However, it would be extremely interesting if both are true, so that the abundance of primordial deuterium is different in different regions of the universe. A possible explanation of this phenomenon is a large and spatially varying neutrino degeneracy that predicts a large mass fraction of primordial helium, more than 50%, comparing to $`25\%`$ in normal deuterium regions (that were called ”low” above), and quite low helium, $`12\%`$, in the anomalously low deuterium regions .
Anyhow, independently of these subtleties, big bang nucleosynthesis strongly indicates that the mass fraction of normal baryonic matter in the universe is quite small (see also the discussion below in sec. 2). On the other hand, the amount of gravitating matter, found by different dynamical methods (for a review see ), gives $`\mathrm{\Omega }_m0.3`$. These methods are sensitive to clustered matter and do not feel uniformly distributed energy/mass density. Theoretical prediction based on inflationary model is $`\mathrm{\Omega }_{tot}=1\pm 10^4`$. This number may be compatible with the above quoted value for $`\mathrm{\Omega }_m`$ only if the rest of matter is uniformly distributed. The recent indications to a non-zero cosmological constant with
$`\mathrm{\Omega }_{vac}0.7`$ (6)
permit to fill the gap between 0.3 and 1. It is possibly too early to make a definite conclusion, since the result is very important and all possible checks should be done. Moreover the SN-data that led to the conclusion of non-zero $`\mathrm{\Lambda }`$ might be subject to a serious criticism . Still the combined different astronomical data quite strongly suggest that cosmological constant is indeed non-zero.
The attitude to a possibly non-vanishing cosmological constant from different prominent cosmologists and astrophysicists were and is quite diverse. For example Einstein, who ”invented” cosmological constant and introduced it into general relativity, considered it as the biggest blunder of his life. The attitude of Gamow was similar, he wrote in his autobiography book : ”$`\lambda `$ again rises its nasty head” On the other hand, Lemaitre and Eddington considered $`\mathrm{\Lambda }`$ very favorably. Moreover, a non-zero $`\mathrm{\Lambda }`$ (or what is the same, vacuum energy) should be quite naturally non-zero from a particle physicist’s point of view, though any theoretical estimate by far exceeds astronomical upper limits (see discussion in sec. 4).
To conclude, it seems very probable that the normal baryonic matter contributes only a minor fraction to the total mass/energy of the universe and we will discuss below possible forms of this yet unknown but dominant part of our world. It is not excluded that there is not a single form of dark matter. The data request several different ones and if it is indeed the case the mystery becomes even deeper. In particular, one has to understand the so called cosmic conspiracy: why different forms of dark matter give comparable contributions to $`\mathrm{\Omega }`$, while they naturally would differ by many orders of magnitude.
## 2 Baryonic dark matter.
Since an idea that there is a cosmic ocean of an absolutely unknown matter is quite drastic, one is inclined to look for less revolutionary explanations of the data. The first natural question is if all the dark matter, possibly excluding vacuum energy, could be the normal baryonic staff somehow hidden from observation. The relevant discussion of the cosmic baryon budget can be found in ref.
As we have already mentioned in the Introduction a very strong upper limit on the total amount of baryons in the universe follows from the Big Bang Nucleosynthesis. However this limit would be invalid if for example electronic neutrinos are strongly degenerate . A charge asymmetry in electronic neutrinos corresponding to dimensionless chemical potential $`\mu _{\nu _e}/T1`$ could significantly loosen the bound on baryonic mass density and make it close to the necessary $`0.3\rho _c`$.
However there are some other data that make it very difficult to have baryon dominated universe. Strong arguments against this possibility come from the theory of large scale structure formation. In the case of adiabatic perturbations that are characterized by approximate equality of density and temperature fluctuations, $`\delta \rho /\rho \delta T/T`$, there is too little time for cosmic structures to evolve. Indeed the perturbations in the baryonic matter could rise only after hydrogen recombination that took place rather late at redshift $`z10^3`$. After that the perturbations might rise only as the scale factor so to the present time they at most could be amplified by this factor of $`10^3`$. However, it is well known that the fluctuations of the CMB (cosmic microwave background) temperature are quite small, $`\delta T/T<\mathrm{a}\mathrm{few}\times 10^5`$. Hence even today the density fluctuations should be quite small in contrast to the observed developed structures with $`\delta \rho /\rho 1`$.
For isocurvature perturbations the fluctuations of CMB temperature are much smaller than density perturbations, $`\delta T/T\delta \rho /\rho `$, and this permits to avoid the above objection. However if it were the case, the spectrum of angular fluctuations of CMB would be quite different from the observed one. In particular, the first acoustic peak would be near $`l=400`$, while the data strongly indicates that this peak is close to $`l=200`$ in agreement with adiabatic theory (for a recent review and the list of references see e.g. ref. ). This argument can be avoided if the shift of the acoustic peak to higher $`l`$ is compensated by the curvature effects (I thank J. Silk for indication to this point).
Another weighty argument against baryonic universe is that it is practically impossible to conceal 90% of baryons. Baryonic matter strongly interacts with light and even if the baryons are nonluminous themselves, they would strongly absorb light. So baryonic matter should be observed either in emission or absorption lines. There is not much space for baryons to escape detection:
1. Cold gas or dust do not emit light but can be observed by absorption lines (Gunn-Peterson test).
2. Hot gas is seen by X-rays if it is clumped, if it is diffuse it would distort CMB spectrum.
3. Neutron stars or ”normal” black holes” that were produced as a result of stellar evolution, would contaminate interstellar medium by ”metals” (elements that are heavier than $`{}_{}{}^{4}He`$).
4. Dust is seen in infrared
According to ref. the total baryon budget is in the range:
$`0.007\mathrm{\Omega }_B0.041`$ (7)
with the best guess $`\mathrm{\Omega }_B=0.021`$ (for $`h_{100}=0.7`$).
A special search was performed for the so called MACHO’s (massive astrophysical compact halo objects). They may include brown dwarfs, low luminosity stars, primordial black holes. Such objects are not directly visible and they were looked for through gravitational micro-lensing . The search was pioneered by MACHO and EROS collaborations and at the present time about a hundred of such objects were found in the Galaxy and in the nearby halo. According to the EROS results the mass density of the micro-lenses with the masses in the interval $`(510^810^2)M_{}`$ is less than $`0.2\rho _{Halo}`$. The MACHO observations permit to make the conclusion that the masses of micro-lensing objects lies in the interval $`(0.11.0)M_{}`$ at 90% CL. The mean value of the mass is about $`0.5M_{}`$.
Instead of approaching to the resolution of the problem of dark matter, these observations made things even more mysterious and more interesting. A large mass of MACHO’s suggests that they could be the remnants of the usual stars (white dwarfs?). However it is difficult to explain their relatively large number density and distribution. They could be primordial black holes but in this case they are not necessarily baryonic. An intriguing possibility is that they are the so called mirror or shadow stars, i.e they are formed from a new form of matter that is related to ours only gravitationally and possibly by a new very weak interaction (see sec. 8).
Anyhow, baryons seem to contribute only a minor fraction to the total mass of the universe and some new form of matter should exist. There is no shortage of possible candidates but it remains unknown what one (or maybe ones) is (are) the real dominating entity.
## 3 Non-baryonic (exotic?) dark matter; what is it?
For an astronomer the classification of dark matter from the point of view of large scale structure formation is especially relevant. Independently of its physical nature cosmological dark matter can be of the following three types:
1. Hot dark matter (HDM). For this form of dark matter the structure can be originally formed only at very large scales, much larger than galactic size, $`l_{str}l_{gal}`$.
2. Cold dark matter (CDM). It is an opposite limiting case for which the structure is formed at the low scale, $`l_{str}l_{gal}`$.
3. Warm dark matter (WDM). This is an intermediate case when the characteristic scale of the structures is of the order of galactic size, $`l_{str}l_{gal}`$.
Somewhat separately there stands $`\mathrm{\Lambda }`$-term or, what is the same, vacuum energy. There are some rather strong indications that for a good description of the observed large scale structure several different forms of dark matter, including $`\mathrm{\Lambda }`$-term, may be necessary.
Another astronomically important feature of dark matter is its dissipation properties. If dark matter easily loose energy, the structure formation could proceed faster. In the opposite case the cooling of dark matter would be less efficient and the structures on small scales would not be formed. So from this point of view there could be two forms of dark matter, dissipationless and/or dissipative. The dominant part of physical candidates for dark matter particles are weakly interacting and thus dissipationless. However there are some, possibly more exotic, models supplying strongly interacting dark matter particles that could easily loose energy.
There are quite many physically possible, and sometimes even natural, candidates for dark matter particles. An abridged list of them in the order indicating the author’s preference is the following:
1. Massive neutrinos.
2. Non-compensated remnant of vacuum energy.
3. New not yet discovered, but theoretically predicted, elementary particles: lightest supersymmetric particle, axion, majoron, unstable but long-lived particles, super-heavy relics, etc. It is even possible to construct models in which the same kind particles would contribute e.g. both to hot and warm dark matter.
4. New shadow or mirror world.
5. Primordial black holes.
6. Topological defects (topological solitons).
7. Non-topological solitons.
8. Neither of the above.
It is quite possible that the last entry at the bottom of the list may happen after all to become the first.
## 4 Vacuum energy
The problem of vacuum energy is possibly the most striking in the contemporary physics. Any reasonable theoretical estimate disagree with the astronomical upper limits on $`\rho _{vac}`$ by 50-100 orders of magnitude (for a review see refs. ). In fact there are practically experimentally proven contributions into vacuum energy from the known in quantum chromodynamics (QCD) vacuum condensates of quarks and gluons. The existence of these condensates is necessary for correct description of hadron properties. In this sense the existence of these condensates is an experimental fact. So we have a fantastic situation: there are well established contributions into vacuum energy that are larger than the permitted value by the factor $`10^{47}`$. It may only mean that there is some extremely accurate mechanism that compensates this huge amount practically down to zero. Here ”zero” is in the scale of elementary particle physics; on astronomical scale the remaining vacuum energy may be quite significant. This compensation should be achieved by something that is not directly related to quarks and gluons because all the light fields possessing QCD interactions are known, while heavy fields cannot make a compensation with the desired accuracy.
It is tempting to assume that the curvature of space-time created by vacuum energy would generate a vacuum condensate of a new massless (or extremely light) field $`\mathrm{\Phi }`$ and the energy of the condensate would cancel down the original vacuum energy in accordance with the famous Le Chatelier’s principle. It is closely analogous to the axionic mechanism of natural CP-conservation in QCD. Generic features that one should expect from such compensating (or adjustment) mechanism are quite interesting. First, the compensation is never complete, the amount of non-compensated vacuum energy is always parametrically of the order of critical energy:
$`\mathrm{\Delta }\rho _{vac}=\rho _{vac}^{in}\rho _\mathrm{\Phi }(m_{Pl}^2/t^2),`$ (8)
but the coefficient of proportionality may be different at different stages of the evolution of the universe (e.g. at MD- and RD-stages). Another unusual feature is that the equation of state of the dark matter corresponding to $`\mathrm{\Delta }\rho _{vac}`$ may be very much different from the standard ones, $`p=\rho /3`$ at RD-stage or $`p=0`$ at MD-stage.
So hopefully such compensating mechanism may be able not only to cut the ”nasty head of $`\lambda `$” (using Gamow words) but also to extinguish it almost down to nothing with only a small tail remaining. In fact, it is exactly this small tail that induced such a strong negative reaction from Gamow, because it could be 100% cosmologically relevant. This demonstrates two sides of the cosmological constant problem. Astronomers put the question if it is cosmologically important, i.e. if $`\rho _{vac}`$ is not negligible in comparison with $`\rho _c`$. If the answer is affirmative, then another puzzling problem appears: why vacuum energy, that remains constant in the course of the cosmological expansion, is close today to $`\rho _c`$ which evolves as $`1/t^2`$? Particle physicists are more puzzled by the question why vacuum energy does not exceed $`\rho _c`$ by almost an infinite amount. However if this is somehow arranged, then the natural value should be precisely zero. So astronomical indications that $`\rho _{vac}`$ may be non-vanishing are of prime importance for all members of astro-particle community.
The compensation mechanism would successfully address both issues: it permits to compensate $`\rho _{vac}`$ to cosmologically acceptable value and gives a non-compensated remnant of the order of $`\rho _c`$ at any period of the history of the universe. However all that are predictions of a non-existing theory. Original compensating mechanism is based on a massless scalar field with the Lagrangian:
$`_0=\left(\mathrm{\Phi }\right)^2+\xi R\mathrm{\Phi }^2`$ (9)
where $`R`$ is the curvature scalar. For a certain choice of the sign of the constant $`\xi `$ the field $`\mathrm{\Phi }`$ becomes unstable in De Sitter background (the term $`\xi R^2`$ behaves as a negative mass squared) and a vacuum condensate of $`\mathrm{\Phi }`$ would evolve. The back-reaction of this condensate on the expansion results in a change from the exponential De Sitter regime to a more slow Friedman one, $`a(t)t^\alpha `$. So far so good, but this change of the regime was not achieved by the compensation of the vacuum energy. In fact the energy-momentum tensor of $`\mathrm{\Phi }`$ does not have the vacuum form, it is not proportional to the metric tensor $`g_{\mu \nu }`$. The slowing of the expansion is achieved by the decrease of the gravitational coupling constant with time, $`G_N1/t^2`$.
Other possible candidates on the role of the compensating field could be fields with higher spins, vector or tensor ones . More promising seems to be symmetric tensor field $`\mathrm{\Phi }_{\mu \nu }`$. Even the simplest possible Lagrangian:
$`_2=\mathrm{\Phi }_{\mu \nu ;\alpha }\mathrm{\Phi }^{\mu \nu ;\alpha }`$ (10)
gives rise to unstable solution of equations of motion and to development of vacuum condensate that compensates vacuum energy. In contrast to the energy-momentum tensor of the considered above scalar field, the energy-momentum tensor of $`\mathrm{\Phi }_{\mu \nu }`$ is of vacuum form, i.e. proportional to $`g_{\mu \nu }`$. Such a theory possesses a symmetry with respect to transformation $`\mathrm{\Phi }_{\mu \nu }\mathrm{\Phi }_{\mu \nu }+Cg_{\mu \nu }`$. This symmetry prevents from quantum generation of mass of $`\mathrm{\Phi }_{\mu \nu }`$ and may be helpful in some other respects. Still in the simplest versions of the model the gravitational coupling constant evolves with time in the same way as in the scalar field case . Presumably it is related to the breaking of Lorents invariance by the condensate. The model permits a generalization such that the vacuum field $`\mathrm{\Phi }_{\mu \nu }`$ is proportional to the metric tensor, $`g_{\mu \nu }`$, so that the condensate is Lorents invariant. However in any case the cosmology is far from being realistic. Thus, though the compensation mechanism shows some nice features, no workable model giving realistic cosmology is found at the present day.
Stimulated by the indications that the universe may expand with acceleration, i.e. that $`\rho _{vac}>0`$, a new constant parameter $`w`$, was introduced into the standard set of cosmological parameters . This parameter characterizes the equation of state of the cosmological matter:
$`p=w\rho `$ (11)
In the standard cosmology it is assumed that the universe is now dominated by non-relativistic matter, so that $`w=0`$. At an earlier stage relativistic matter was dominating and $`w=1/3`$. In the case of dominance of vacuum energy $`w=1`$. Two more examples giving a negative $`w`$ are the system of non-interacting cosmic strings with $`w=1/3`$ and also non-interacting domain walls with $`w=2/3`$. Since the source of gravity in General Relativity (in isotropic case) is $`(\rho +3p)`$, the universe would expand with acceleration (anti-gravity) if $`w<1/3`$.
In particular a model with a massless or extremely light scalar field was discussed that could give a negative $`w`$. Such field received the name ”quintessence”. For a homogeneous scalar field $`\varphi (t)`$ with a self-interaction potential $`U(\varphi )`$ the parameter $`w`$ is given by:
$`w={\displaystyle \frac{2U(\varphi )\dot{\varphi }^2}{2U(\varphi )+\dot{\varphi }^2}}`$ (12)
If the potential energy is larger than the kinetic one, $`w`$ would be negative. However in this model $`w`$ may be considered as a constant only approximately. A fundamental theory that requests an existence of such a field is missing so such model can be considered as a poor man phenomenology describing an accelerated expansion, more general than just that given by vacuum energy. A raison d’être for such a field could be the adjustment mechanism discussed above, that predicts an existence of non-compensated vacuum energy with an unusual equation of state. Simultaneously, as mentioned above, the adjustment mechanism may explain the puzzling fact that the contribution of quintessence into $`\mathrm{\Omega }`$ is close to 1.
One can see from eq. (12) that the lower limit for $`w`$ is $`w>1`$ and this is quite generic for any normal matter. However in ref. even a possibility of $`w<1`$ was discussed with an appropriate name ”cosmic phantom”. Such really striking equation of state could be realized in models with higher rank tensor fields but it gives rise to a very unusual cosmological singularity (see discussion in ref. ).
## 5 Neutrino.
Neutrino as a possible candidate for dark matter has the following two advantages. First, it is the only one that is definitely known to exist. Second, neutrino should have a non-zero mass. There are recent indications that at least one neutrino species has a mass about $`0.07`$ eV. However the second advantage is simultaneously a disadvantage, because the neutrino mass is normally too small for an appropriate description of the large scale structure of the universe. If cosmic background neutrinos of the $`a`$-th flavor have the standard cosmological abundance, $`n_{\nu _a}=3n_\gamma /11112/\mathrm{cm}^3`$, then their mass is is restricted by the Gerstein-Zeldovich bound:
$`{\displaystyle \underset{a}{}}m_{\nu _a}<94\mathrm{eV}\mathrm{\Omega }h_{100}^2`$ (13)
Such light neutrinos decoupled from cosmic plasma while they were relativistic and they erased all structures by free streaming at the scales below
$`M_{struc}{\displaystyle \frac{m_{Pl}^3}{m_\nu ^2}}10^{15}M_{}\left({\displaystyle \frac{10\mathrm{eV}}{m_\nu }}\right)^2`$ (14)
This is typical example of a hot dark matter. (A more accurate estimate gives somewhat smaller $`M_{struc}`$.)
On the other hand the Tremain-Gunn bound demands that neutrino mass is bounded from below:
$`m_\nu >50100\mathrm{eV}`$ (15)
This bound is a striking example of quantum effects on galactic scale: Fermi exclusion principle forbids too many neutrinos to accumulate in galactic halo, hence to carry all observed mass they should be sufficiently heavy.
The mismatch between the bounds (13) and (15) does not allow the standard neutrinos to constitute all dark matter in the universe. However, if neutrinos possess a new interaction somewhat stronger than the usual electroweak one, their cosmological number density would be smaller and the limit (13) would be less restrictive. Another possibility is that there are the so called sterile neutrinos that may be mirror or shadow neutrinos (see sec. 8) with the mass in keV range thus providing warm dark matter .
Some time ago a very heavy neutrino with the mass in GeV range was considered as a feasible candidate for cold dark matter. However the combined LEP result of precise measuring of $`Z^0`$ width permits only $`N_\nu =2.993\pm 0.011`$ for all neutral fermions with the normal weak coupling to $`Z^0`$ and mass below $`m_Z/245`$ GeV. So if heavy neutrinos, $`\nu _h`$, of the fourth generation exist their mass must be higher than 45 GeV. Most probably such particles should be unstable but if the corresponding leptonic charge is conserved or almost conserved and the charged companion of the heavy neutrino is heavier than $`\nu _h`$ they would be stable or very long lived.
The contribution of $`\nu _h`$ into cosmological energy density is determined by the cross-section of $`\nu _h\overline{\nu }_h`$-annihilation and has a rather peculiar behavior as a function on the $`\nu _h`$ mass. The corresponding $`\mathrm{\Omega }`$ is presented in fig. 3.
In the region of very small masses the ratio of number densities $`n_{\nu _h}/n_\gamma `$ does not depend upon the neutrino mass and $`\rho _{\nu _h}`$ linearly rises with mass. This gives the bound (13). For larger masses $`\sigma _{ann}m_{\nu _h}^2`$ and $`\rho _{\nu _h}1/m_{\nu _h}^2`$. This formally opens a window for $`m_{\nu _h}`$ above 2.5 GeV . A very deep minimum in $`\rho _{\nu _h}`$ near $`m_{\nu _h}=m_Z/2`$ is related to the resonance enhanced cross-section around $`Z`$-pole. Above $`Z`$-pole the cross-section of $`\overline{\nu }_h\nu _h`$-annihilation into light fermions goes down with mass as $`\alpha ^2/m_{\nu _h}^2`$ (as in any normal weakly coupled gauge theory). The corresponding rise in $`\rho _{\nu _h}`$ is shown by a dashed line. This would give the limit $`m_{\nu _h}<35`$ TeV . However for $`m_{\nu _h}>m_W`$ the contribution of the channel $`\overline{\nu }_h\nu _hW^+W^{}`$ leads to the rise of the cross-section with the increasing mass as $`\sigma _{ann}\alpha ^2m_{\nu _h}^2/m_W^4`$ . This would permit to keep $`\rho _{\nu _h}`$ well below $`\rho _c`$ for all masses above 2.5 GeV. The behavior of $`\rho _{\nu _h}`$ with this effect of rising cross-section included, is shown by the solid line till $`m_{\nu _h}=1.5`$ TeV. Above that it is continued as a dashed line. This rise with mass would break unitarity limit for partial wave amplitude when $`m_{\nu _h}`$ reaches 1.5 TeV (or 3 TeV for Majorana neutrino) . If one takes the maximum value of the S-wave cross-section permitted by unitarity, which scales as $`1/m_{\nu _h}^2`$, this would give rise to $`\rho _{\nu _h}m_{\nu _h}^2`$ and it crosses $`\rho _c`$ at $`m_{\nu _h}200`$ TeV. This behavior is continued by the solid line above 1.5 TeV. However for $`m_{\nu _h}\mathrm{a}\mathrm{few}TeV`$ the Yukawa coupling of $`\nu _h`$ to the Higgs field becomes strong and no reliable calculations of the annihilation cross-section has been done in this limit. Presumably the cross-section is much smaller than perturbative result and the cosmological bound for $`m_{\nu _h}`$ is close to several TeV. This possible, though not certain, behavior is presented by the dashed-dotted line.
## 6 Super-heavy relics.
Super-heavy quasi-stable particles with the mass around $`10^{13}`$ GeV were introduced recently in refs. to avoid the GKZ-cutoff for ultra-high energy cosmic rays. These particles could have produced at the end of inflation by coherent oscillations of the inflaton field (for possible mechanisms of production see e.g. ref. ). Some cosmological and astrophysical constraints on superheavy quasistable relics were discussed earlier in refs. . Such particles may have an interesting impact on structure formation and are discussed in more details in this conference by H. Ziaeepour. However their meta-stability is rather mysterious. As was argued many years ago by Zel’dovich , even if baryonic charge is microscopically conserved, proton may decay through formation and subsequent evaporation of a virtual black hole. In accordance with his estimate the proton should decay with the life-time:
$`\tau _p{\displaystyle \frac{1}{m_p}}\left({\displaystyle \frac{m_{Pl}}{m_p}}\right)^410^{45}\mathrm{years}`$ (16)
This estimate can be obtained as follows. The cross-section of the gravitational capture of a particle by the black hole with mass $`M`$ is equal to its Schwarzschild radius squared,
$`\sigma _{grav}r_g^2={\displaystyle \frac{M^2}{m_{pl}^4}}`$ (17)
where $`m_{Pl}=1.2\times 10^{19}`$ GeV is the Planck mass. For the virtual black hole state, which is formed in the process of the gravitational decay of a particle with mass $`m`$, the mass of black hole is around the initial particle mass, $`Mm`$. Assuming that all other dimensional parameters are also close to $`m`$ we obtain the result (16).
We can obtain another (and different) estimate for the proton life-time using the following arguments. The amplitude of the collapse of a particle $`x`$ with mass $`m_x`$ into black hole with the same mass is proportional to the overlap integral:
$`A_{coll}{\displaystyle d^3r\psi _x\mathrm{\Psi }_{BH}}`$ (18)
where $`\psi _x`$ and $`\mathrm{\Psi }_{BH}`$ are the wave functions of the particle and black hole. The particle wave function is localized on its Compton wave length, $`l_C=1/m_x`$, while the black hole wave function is localized at $`r_g=m_x/m_{Pl}^2`$. Evaluating this integral and assuming again that all other dimensional parameters are close to $`m_x`$ we obtain
$`\tau _x{\displaystyle \frac{1}{m_x}}\left({\displaystyle \frac{m_{Pl}}{m_x}}\right)^n=10^{24+19n}\left({\displaystyle \frac{\mathrm{GeV}}{m_x}}\right)^{n+1}\mathrm{sec}`$ (19)
where the power $`n`$ is equal to 6, in contrast to $`n=4`$ in eq. (16).
Later on this conjecture was supported by the arguments that quantum gravity effects should break all global symmetries , in particular due to formation of baby universes . Effective Lagrangian which describes these phenomena contains different terms with different powers of Planck mass, $`m_{Pl}^{4d}`$, where $`d`$ is called the dimension of the corresponding operator. In the examples considered above $`d`$ was equal to 6 and 7. The very dangerous terms are those with $`d=5`$. They would lead to the proton decay with life-time $`\tau _p10^{13}`$ sec, which is well below existing limits. This makes one to believe that the operator with $`d=5`$ do not appear in effective Lagrangian. Note, that the simple estimates presented above give $`d>5`$. If the particle decay is generated by the operator with dimension $`d`$ then its life-time is given by the expression (19) with $`n=2(d4)`$. Thus if we demand that the particle $`x`$ lives longer that the universe age, $`t_U10^{18}`$ sec, then its mass should be bounded from above:
$`m_x<10^{\left(19n42\right)/\left(n+1\right)}\mathrm{GeV}`$ (20)
If the Zeldovich estimate is correct then $`m_x<10^7`$ GeV. If we use the estimate of the present paper which gives $`n=6`$, then $`m_x<10^{10.3}`$ GeV. The condition that these particles are heavier than $`10^{13}`$ GeV, so that their decays explain the origin of ultra-energetic cosmic rays, demands a rather high value $`n>9`$. The dimension of the corresponding operators should be bigger than 8.5.
Of course the arguments presented above are not rigorous but still the gravitational decay mechanism looks very plausible. This mechanism is quite generic and does not depend upon the particle properties but only on their masses. This is related to the universality of gravitational interactions. Of course the presented estimates are rather naive and the unknown non-perturbative dynamics of quantum gravity may significantly change these results. It is possible in particular that the formation of a virtual black hole proceeds as some kind of tunneling process. In this case the decay probability might be suppressed as $`\mathrm{exp}(cm_{Pl}/m_x)`$ (where $`c`$ is a constant) and the discussed here mechanism would be ineffective.
A possible way to avoid the gravitational decay is to assume that the particle in question is the lightest in a family of particles possessing a conserved charge associated with a local (gauge) symmetry (similar to the electromagnetic $`U(1)`$). However, this would imply that the particle is absolutely stable. To avoid that, one would have to assume that the corresponding gauge symmetry is slightly broken in such a way that the gauge boson(s) acquires a tiny but non-zero mass. It is well known that black holes may have only hair related to long range forces, which in turn are associated with the zero mass of the particles transmitting the interactions. For example, the Coulomb field of an electrically charged black hole is maintained outside the gravitational radius only because the photon is strictly massless. If the photon had a non-zero mass, a black hole would not have electric hair even if electric charge were strictly conserved. A limiting transition from the case of a strictly massless photon to that with a small mass is realized through a long disappearance time of the hair; this time should be inversely proportional to the mass. So in principle there may exist very heavy and very long-lived particles if they possess a conserved charge but the corresponding gauge symmetry is slightly broken, so that the gauge boson acquires a tiny mass. The charge may remain strictly conserved but the particle would be unstable, in the same way as the proton becomes unstable due to the collapse into a black hole despite the conservation of baryonic charge in particle interactions without gravity. A possible way to realize such a model is to assume a nonminimal and gauge non-invariant coupling of the gauge bosons to gravity, for example of the form $`A_\mu ^2R`$ or $`A_\mu A_\nu R^{\mu \nu }`$, where $`R`$ is the curvature scalar and $`R^{\mu \nu }`$ is the Ricci tensor.
Barring this highly speculative possibility, we have to conclude that either the explanation of the highest energy cosmic rays by decays of ultra-heavy long-lived particles is impossible, because such particles should undergo fast ($`\tau _x<t_U`$) decay, or that the gravitational breaking of global symmetries is not as strong as we assumed above.
## 7 Lightest supersymmetric particle (LSP).
Low energy supersymmetry has at least two attractive features for a solution of the dark matter problem. First, the theory predicts the existence of new stable particles that could constitute cosmological dark matter. Second, with a natural scale of supersymmetry breaking around 1 TeV, the theory predicts that the LSP would give $`\mathrm{\Omega }_{LSP}\sim 1`$ without any fine tuning. A third feature, which makes this hypothesis especially attractive for experimentalists, is that for a large range of parameters of supersymmetric models these new stable particles are within the reach of sensitivity of different existing and planned search methods. This subject was recently reviewed in great detail in ref. , so I will be very brief here.
There are several possible candidates for the role of the dominating supersymmetric matter in the universe: the neutralino (a mixture of gauginos, $`\stackrel{~}{\gamma }+\stackrel{~}{Z}`$, and higgsinos, $`\stackrel{~}{h}_1+\stackrel{~}{h}_2`$); the sneutrino (a heavy supersymmetric partner of the neutrino); the gravitino (the supersymmetric partner of the graviton, with spin 3/2); the axino (the partner of the axion); messenger fields related to a hidden sector of the theory, etc. Such particles (at least some of them) can be searched for directly by registration in low background detectors (Ge, NaI, Xe, …) through the reaction $`N+\mathrm{Nuclei}\to \mathrm{recoil}`$. There are also indirect methods based on the search for the products of their annihilation in the Earth or in the Sun, producing high energy muons. At the present day only upper limits on the annihilation cross-section have been established, though there are indications of an annual modulation effect that may be a signature of dark matter.
A very interesting feature of neutralino annihilation in the galactic halo is the production of antimatter: not only anti-protons but also a noticeable fraction of anti-deuterium may be created. According to the calculations of ref. , the flux of $`\overline{D}`$ at low energy, below 1 GeV, would be much larger than the flux of the secondary $`\overline{D}`$ produced by normal cosmic ray collisions. The AMS mission could either register anti-deuterium from neutralino annihilation or exclude a significant fraction of the parameter space of the low energy SUSY models. There are also promising ways to register neutralino annihilation through the observation of energetic positrons or gamma rays (see ref. for the details).
A low energy supersymmetric extension of the minimal standard model is very natural from the particle physics point of view and supplies possibly the best candidate for the dark matter particles. In most versions of the model these particles would form weakly interacting cold dark matter, though in some cases warm dark matter is also possible. There is a very high experimental activity in the search for supersymmetric particles, and hopefully at the beginning of the next millennium they will be discovered or, if nature is not favorable, a large part of the parameter space will be excluded but the mystery of dark matter will still remain.
## 8 Mirror/shadow world
The idea that our world is doubled and there exists a similar or exactly the same world, coupled to ours only by gravity, was suggested long ago in connection with the conservation of parity, P, or combined parity, CP. Subsequently it was developed and elaborated in several papers . Its popularity greatly increased after it was found that superstring theories have a $`G\times G`$ internal symmetry group and the two identical worlds, corresponding to the two groups, communicate only through gravity . The considered models, however, were not confined to this simplest option. In addition to gravity, a new super-weak (but stronger than gravity) interaction was introduced between our and mirror particles. Moreover, different patterns of symmetry breaking in the two worlds were considered, so that physics in our world and in the mirror world (in this case better called the shadow world) became quite different.
At first sight the existence of a whole new world with the same or similar particle content would strongly distort the successful predictions of the standard big bang nucleosynthesis (BBN) theory. The latter permits no more than one additional light fermionic species in the cosmological plasma at $`T\sim 1`$ MeV (see e.g. ), while the completely symmetric mirror world would give slightly more than 7. However, as was argued in ref. , the temperature of the mirror matter after inflation could be smaller than the temperature of the usual matter, and thus the energy density of mirror matter during nucleosynthesis could be safely suppressed. Concrete mechanisms that could create a colder mirror world, if the symmetry between the worlds is broken, were considered e.g. in refs. . Another possible way to escape a conflict with BBN, by the generation of lepton asymmetry through neutrino oscillations, was discussed in ref. .
A new burst of interest in mirror/shadow matter arose after the MACHO collaboration announced that the mass of the micro-lenses they observed is close to the solar mass (see sec. 2). The natural idea that these objects may be built from mirror matter immediately attracted strong attention . In the case of exact symmetry between the worlds the properties of the stellar objects would be the same, but the process of structure formation could be quite different for the following two reasons. First, since the mirror matter is colder than the usual one, the mirror hydrogen recombination would occur considerably earlier and the structures might start forming earlier too. Second, the baryon asymmetry in the mirror world might be different from ours and this would have an important impact on the primordial chemical content of the universe and on galactic and stellar formation . The cosmological mass fraction of mirror baryons is unknown, but most probably they do not constitute all the dark matter in the universe. One peculiar feature of this matter is that it is strongly interacting and can easily lose energy through the emission of mirror photons. Structure formation with this kind of dark matter would be very different from the normal scenario with dissipationless cold dark matter. The cooling mechanisms, which are very essential for structure formation, could be either stronger or weaker. In particular, in a world with a very large fraction of mirror $`{}^{4}He`$ molecular cooling would be considerably less efficient.
There would be even more difference between the cosmology and astrophysics of our world and the mirror world if the mirror symmetry is broken . It could be the case that there are no stable nuclei in the mirror world and thus there could exist no mirror stars with a thermonuclear active core. If the mirror electrons are heavier than the usual ones, the mirror hydrogen binding energy would be larger and this would be another reason for earlier recombination. To study the history of stellar formation and evolution in such a distorted world would be a very interesting exercise that could reveal essential features of the underlying physics. Apart from a different astrophysics and new stellar size invisible bodies, the mirror world could provide sterile neutrinos that might explain the observed neutrino anomalies through the oscillations between our neutrinos and the sterile ones. In particular, among these sterile neutrinos there could be rather heavy ones with masses in the keV range that might be excellent candidates for warm dark matter.
## 9 Miscellanea
Because of lack of space and time I could not discuss many other interesting forms of dark matter. One of the favorites, the axion, is discussed at this conference by Yu. Gnedin. Topological and non-topological solitons may also be quite interesting options. Though the measurements of the angular fluctuations of the CMB seemingly exclude cosmic strings as the dominant part of cosmological dark matter, they still may give some contribution to the total mass of the universe. Non-topological solitons, $`Q`$-balls, have recently attracted renewed attention . Primordial black holes with a log-normal mass spectrum still remain an interesting possibility . There are some even more exotic candidates discussed in the literature; among them are such objects as superstrings giving super-heavy dark matter , domain walls with an “anti-gravitating” equation of state, $`p=-(2/3)\rho `$ , or even liquid or solid dark matter .
Unstable dark matter remains attractive, and though it was proposed at the beginning of the 80s , the main burst of activity came in the 90s . The basic idea of introducing unstable but long-lived particles was to increase the horizon length at the time of equality between matter and radiation and thereby to increase the power at large scales. Recently this idea was revived in another attempt to save a model of structure formation with pure cold dark matter . The model looks quite natural from the particle physics point of view if there exists a light scalar boson, a familon or majoron, so that a heavier neutrino, which may violate the Gerstein-Zeldovich bound, could decay into this boson and a lighter neutrino. It is also possible that a massive scalar boson decays into two light neutrinos. A very interesting scenario in the former case is that the scalar bosons are massive and their spectrum is two-component: energetic bosons coming from the decay and non-relativistic ones formed during a phase transition, similarly to axions. In this case the same particle may form both cold and hot (or warm) dark matter. A slightly different mechanism was proposed in ref. in the framework of string cosmology. It was argued there that weakly interacting non-thermal relics may be produced in the course of dilaton driven inflation with a double-peak spectrum that could simultaneously give cold and hot dark matter.
A very interesting form of dark matter is a self-interacting one. One possible example of the latter is given by the mirror or shadow world discussed above. A few more models of self-interacting dark matter with particles belonging to our world were considered in the literature; they involve either light bosons , e.g. majorons or familons, or neutrinos with an anomalous self-interaction . Observational evidence in favor of self-interacting dark matter was recently analyzed in ref. .
## 10 Conclusion
As we have seen, a set of independent arguments unambiguously proves that the main part of the matter in the universe is not visible and, moreover, this invisible matter does not consist of known elementary particles such as protons, neutrons, or neutrinos. The existence of this unknown form of matter is strong evidence in favor of new physics beyond the minimal standard $`SU(3)\times SU(2)\times U(1)`$-model (MSM). Possibly a low energy supersymmetric extension of the MSM solves the mystery of dark matter with the lightest supersymmetric particle (LSP), which quite probably could be stable. However, astronomical data indicate that one form of dark matter is not enough: besides cold dark matter, which might be provided by the LSP, there is a very strong quest for hot and/or warm dark matter. Moreover, a detailed description of rotation curves at small distances indicates that dark matter may be dissipative. Quite possibly there is one more ingredient of dark matter, related to vacuum energy, which makes the situation even more mysterious.
Even if there is only one form of dark matter, the cosmic conspiracy, namely the close values of $`\mathrm{\Omega }_{baryon}`$ and $`\mathrm{\Omega }_{DM}`$, is quite puzzling. It demands quite a strong fine-tuning in the fundamental particle theory, and at the present day no reasonable understanding of the phenomenon exists. The problem of cosmic conspiracy becomes tremendously deeper if there are several $`(>2)`$ forms of invisible matter with similar contributions to $`\mathrm{\Omega }`$.
An answer to the often asked question of what is the best bet for the dark matter particles reflects not so much our knowledge of the subject as the personal attitude of the respondent. Seemingly most votes would be given to the LSP, with the axion possibly next. An advantage of these two is that neither was invented ad hoc; both were predicted by particle theory independently of cosmology. By similar arguments mirror or shadow matter is also in good shape. However other candidates based on more complicated models may have better chances just because their properties are chosen in accordance with cosmological demands.
Ten years ago at one of the ”Rencontre de Moriond” meetings P. Peebles in his summary talk arranged a public opinion poll on how many dark matter candidates would survive to the end of the century. The stakes went up to double digit numbers. I have to admit that I voted for one dark matter candidate, the only real one that ”would be surely known”. It was an extremely over-optimistic point of view: today we have even more possible candidates than 10 years ago (none of the old ones has been removed from the list and quite a few new ones have come into being) and we still do not know which is/are the correct one(s).
Acknowledgments The work of A.D. was supported by Danmarks Grundforskningsfond through its funding of the Theoretical Astrophysical Center.
[Three-panel figure: (a) Vacuum state; (b) Coherent state; (c) Phase diffused coherent state.]
The measured Wigner functions for (a) the vacuum, (b) a weak coherent state with approximately one photon, and (c) a phase diffused coherent state. The photon statistics were collected on a polar grid spanned by 20 amplitudes, and 50 phases for plots (a) and (b), or 40 phases for plot (c). The duration of a single counting interval was 40 $`\mu `$s for (a) and (b) and 30 $`\mu `$s for (c). The measurements were performed for slightly different laser intensities, and the radial coordinate for each of the graphs was scaled separately.
Singularity confinement and algebraic entropy:
the case of the discrete Painlevé equations
Y. Ohta
Department of Applied Mathematics
Faculty of Engineering, Hiroshima University
1-4-1 Kagamiyama, Higashi-Hiroshima 739-8527, Japan
K. M. Tamizhmani
Department of Mathematics
Pondicherry University
Kalapet, Pondicherry, 605014 India
B. Grammaticos
GMPIB, Université Paris VII
Tour 24-14, 5<sup>e</sup> étage
75251 Paris, France
A. Ramani
CPT, Ecole Polytechnique
CNRS, UMR 7644
91128 Palaiseau, France
Abstract We examine the validity of the results obtained with the singularity confinement integrability criterion in the case of discrete Painlevé equations. The method used is based on the requirement of non-exponential growth of the homogeneous degree of the iterates of the mapping. We show that when we start from an integrable autonomous mapping and deautonomise it using singularity confinement, the degree growth of the nonautonomous mapping is identical to that of the autonomous one. Thus this low-growth based approach is compatible with the integrability of the results obtained through singularity confinement. The origin of the singularity confinement property and its necessary character for integrability are also analysed.
The singularity confinement property was proposed some years ago as a discrete integrability criterion. The essence of the method is the observation that in integrable mappings a spontaneously appearing singularity does not propagate ad infinitum under the action of the mapping but disappears (“is confined”) after some iteration steps. Thus singularity confinement appeared as a necessary condition for discrete integrability. However the sufficiency of the criterion was not unambiguously established. The attitude (of the present authors at least) has always been that if the singularity confinement condition were strong enough then it would suffice for integrability, in perfect analogy with the Painlevé-ARS property for continuous systems. This sufficiency of the singularity confinement criterion was recently challenged by Hietarinta and Viallet who produced explicit examples of mappings satisfying singularity confinement which are not integrable, to the point of exhibiting chaotic behaviour. Their approach is based on the relation of discrete integrability and the complexity of the evolution introduced by Arnold and Veselov. According to Arnold the complexity (in the case of mappings of the plane) is the number of intersection points of a fixed curve with the image of a second curve obtained under the mapping at hand. While the complexity grows exponentially with the iteration for generic mappings, it can be shown to grow only polynomially for a large class of integrable mappings. As Veselov points out, “integrability has an essential correlation with the weak growth of certain characteristics”. Thus the authors of proposed to directly test the degree of the successive iterates and introduced the notion of algebraic entropy. The method is appropriate for birational mappings. One starts by introducing homogeneous coordinates and studies the degree of the iterates. As Bellon and Viallet remark, the growth of the degree is invariant under coordinate changes though the degree itself is not. A generic (non-integrable) mapping leads to degrees that grow exponentially. The algebraic entropy is thus naturally defined as $`E=\mathrm{lim}_{n\to \mathrm{\infty }}\mathrm{log}(d_n)/n`$ where $`d_n`$ is the degree of the $`n`$-th iterate. Thus nonintegrable mappings have nonzero algebraic entropy. The conjecture in is that integrability implies polynomial growth, leading to zero algebraic entropy.
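As an aside, the definition of $`E`$ suggests an elementary numerical test: given the first few degrees $`d_n`$ of a birational mapping, the quantity $`\mathrm{log}(d_n)/n`$ either drifts to zero (polynomial growth) or levels off at a positive value (exponential growth). A minimal sketch (our own illustration; the two sample sequences are the degree sequences obtained below for mapping (2)):

```python
import math

def entropy_estimate(degrees):
    """Crude estimate of E = lim log(d_n)/n from a finite degree sequence."""
    return [math.log(d) / n for n, d in enumerate(degrees) if n > 0 and d > 0]

polynomial = [0, 1, 2, 5, 8, 13, 18, 25, 32, 41]   # integrable case below
exponential = [0, 1, 2, 5, 10, 21, 42, 85]         # generic case below

print(entropy_estimate(polynomial))    # slowly decaying towards 0
print(entropy_estimate(exponential))   # levelling off near log 2 ~ 0.69
```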
The main application of the singularity confinement approach was the derivation and study of discrete Painlevé equations (d-$`P`$’s) . In the light of the results of Hietarinta and Viallet, which have shown that the criterion used was not restrictive enough, one might be tempted to doubt the integrability of the mappings obtained (despite a considerable volume of integrability-confirming results). The aim of this paper is to show that these doubts are unjustified and to confirm the validity of the approach previously used, with the help of algebraic entropy techniques.
Let us first recall what has always been our approach to the derivation of d-$`P`$’s. We start from an autonomous system the integrability of which has been independently established. In the case of d-$`P`$’s, this system is the QRT mapping :
$$f^{(1)}(x_n)-(x_{n+1}+x_{n-1})f^{(2)}(x_n)+x_{n+1}x_{n-1}f^{(3)}(x_n)=0$$
$`(1)`$
When the $`f^{(i)}`$’s are quartic functions, satisfying specific constraints, the mapping (1) is integrable in terms of elliptic functions. Since the elliptic functions are the autonomous limits of the Painlevé transcendents, the mapping (1) is the appropriate starting point for the construction of the nonautonomous discrete systems which are the analogues of the Painlevé equations. The procedure we used, often referred to as ‘deautonomisation’, consists in finding the dependence of the coefficients of the quartic polynomials appearing in (1) with respect to the independent variable $`n`$, which is compatible with the singularity confinement property. Namely, the $`n`$-dependence is obtained by asking that the singularities are indeed confined. One rule that has always been used, albeit often tacitly, is that confinement must be implemented “the soonest possible”. What this rule really means is that the singularity pattern of the deautonomised mapping must be the same as the one of the autonomous mapping. Our claim is that a deautonomisation with a different singularity pattern (for instance a ‘later’ confinement) would lead to a non-integrable system. The reason why this deautonomisation procedure can be justified is the following. Since the autonomous starting point is integrable, it is expected that the growth of the degree of the iterates is polynomial. Now it turns out that the application of the singularity confinement deautonomisation corresponds to the requirement that the nonautonomous mappings lead to the same factorizations and subsequent simplifications and have precisely the same growth properties as the autonomous ones. These considerations will be made more transparent thanks to the examples we present in what follows.
Let us start with a simple case. We consider the mapping:
$$x_{n+1}+x_{n-1}=\frac{ax_n+b}{x_n^2}$$
$`(2)`$
where $`a`$ and $`b`$ are constants. In order to compute the degree of the iterates we introduce homogeneous coordinates by taking $`x_0`$=$`p`$, $`x_1`$=$`q/r`$, assuming that the degree of $`p`$ is zero, and compute the degree of homogeneity in $`q`$ and $`r`$ at every iteration. We could have of course introduced a different choice for $`x_0`$ but it turns out that the choice of a zero-degree $`x_0`$ considerably simplifies the calculations. We obtain thus the degrees: 0, 1, 2, 5, 8, 13, 18, 25, 32, 41, …. Clearly the degree growth is polynomial. We have $`d_{2m}=2m^2`$ and $`d_{2m+1}=2m^2+2m+1`$. This is in perfect agreement with the fact that the mapping (2) is integrable (in terms of elliptic functions), being a member of the QRT family of integrable mappings. (A remark is necessary at this point. In order to obtain a closed-form expression for the degrees of the iterates, we start by computing a sufficient number of them. Once the expression of the degree has been heuristically established we compute the next few ones and check that they agree with the analytical expression predicted.) We now turn to the deautonomisation of the mapping. The singularity confinement result is that $`a`$ and $`b`$ must satisfy the conditions $`a_{n+1}-2a_n+a_{n-1}=0`$, $`b_{n+1}=b_{n-1}`$, i.e. $`a`$ is linear in $`n`$ while $`b`$ is a constant with an even/odd dependence. Assuming now that $`a`$ and $`b`$ are arbitrary functions of $`n`$ we compute the degrees of the iterates of (2). We obtain successively 0, 1, 2, 5, 10, 21, 42, 85, …. The growth is now exponential, the degrees behaving like $`d_{2m-1}=(2^{2m}-1)/3`$ and $`d_{2m}=2d_{2m-1}`$, a clear indication that the mapping is not integrable in general. Already at the fourth iteration the degrees differ in the autonomous and nonautonomous cases. Our approach consists in requiring that the degree in the nonautonomous case be identical to the one obtained in the autonomous one. If we implement the requirement that $`d_4`$ be 8 instead of 10 we find two conditions $`a_{n+1}-2a_n+a_{n-1}=0`$, $`b_{n+1}=b_{n-1}`$, i.e. precisely the ones obtained through singularity confinement. Moreover, once these two conditions are satisfied, the subsequent degrees of the nonautonomous case coincide with those of the autonomous one. Thus this mapping, leading to polynomial growth, should be integrable, and, in fact, it is. As we have shown in , where we presented its Lax pair, equation (2) with $`a(n)=\alpha n+\beta `$ and $`b`$ constant (the even/odd dependence can be gauged out by a parity-dependent rescaling of the variable $`x`$) is a discrete form of the Painlevé I equation.
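The degree computation described above is easy to automate. The sketch below (assuming SymPy; our own code, and note that individual values can depend on how aggressively common factors are cancelled, so only the polynomial versus exponential character of the growth should be read off) iterates mapping (2) from $`x_0=p`$, $`x_1=q/r`$ and records the homogeneous degree in $`q`$ and $`r`$; swapping the update rule gives the analogous test for the other mappings considered below.

```python
import sympy as sp

p, q, r, b, alpha, beta = sp.symbols('p q r b alpha beta')

def hom_degree(x):
    """Homogeneous degree in q, r (p carries degree zero by convention)."""
    num, den = sp.fraction(sp.cancel(x))
    return max(sp.Poly(num, q, r).total_degree(),
               sp.Poly(den, q, r).total_degree())

def degree_sequence(a_of_n, n_max=5):
    x_prev, x_cur, degs = p, q / r, [0, 1]
    for n in range(1, n_max):
        # mapping (2): x_{n+1} = (a_n x_n + b)/x_n^2 - x_{n-1}
        x_prev, x_cur = x_cur, sp.cancel((a_of_n(n) * x_cur + b) / x_cur**2 - x_prev)
        degs.append(hom_degree(x_cur))
    return degs

print(degree_sequence(lambda n: alpha * n + beta))    # confining case: slow growth
print(degree_sequence(lambda n: sp.Symbol(f'a{n}')))  # generic case: fast growth
```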
Our second example is a multiplicative mapping:
$$x_{n+1}x_{n-1}=\frac{a_nx_n+b}{x_n^2}$$
$`(3)`$
where one can put $`b=1`$ through an appropriate gauge. In the autonomous case we obtain, starting with $`x_0`$=$`p`$ and $`x_1`$=$`q/r`$, successively the degrees: 0, 1, 2, 3, 4, 7, 10, 13, 16, 21, 26, …, i.e. again a quadratic growth. In fact, if $`n`$ is of the form $`4m+k`$ ($`k`$=0,1,2,3) the degree is given by $`d_n=4m^2+(2m+1)k`$. The deautonomisation of (3) is straightforward. We compute the successive degrees and find: 0, 1, 2, 3, 4, 7, 11, …. At this stage we require that a factorization occur in order to bring the degree $`d_6`$ from 11 to 10. The condition for this is $`a_{n+2}a_{n-2}=a_n^2`$, i.e. $`a`$ of the form $`a_{e,o}\lambda _{e,o}^n`$ with an even/odd dependence which can be easily gauged away. This condition is sufficient in order to bring the degrees of the successive iterates down to the values obtained in the autonomous case. Quite expectedly the condition on $`a`$ is precisely the one obtained by singularity confinement. The Lax pair of (3) can be easily obtained from our results in . We find that if we introduce the matrices: $`L_n=\left(\begin{array}{cccc}0& 0& \frac{k}{x_n}& 0\\ 0& 0& x_{n-1}& qx_{n-1}\\ hx_n& 0& 1& q\\ 0& \frac{hk_{n-1}}{x_{n-1}}& 0& 0\end{array}\right)`$ and $`M_n=\left(\begin{array}{cccc}0& \frac{x_n}{k(x_n+1)}& 0& 0\\ 0& 0& 1& 0\\ 0& 0& \frac{1}{x_n}& \frac{q}{x_n}\\ h& 0& 0& 0\end{array}\right)`$ we can obtain from the compatibility $`L_{n+1}M_n(h/q)=M_n(h)L_n`$ the equation $`x_{n+1}x_{n-1}=k_nk_{n+1}(x_n+1)/x_n^2`$, where $`k_{n+1}=qk_{n-1}`$, which is equivalent to (3) up to a gauge transformation.
The case of the mapping
$$x_{n+1}+x_{n-1}=a_n+\frac{b_n}{x_n}$$
$`(4)`$
has a more interesting deautonomisation. In this case we make a slightly different choice of homogeneous coordinates, which simplifies the results for the degrees of the iterates. We assume $`x_0=p/r`$, $`x_1`$=$`q/r`$ and compute the degree of homogeneity in $`p`$, $`q`$ and $`r`$. We find $`d_n`$=1, 1, 2, 3, 5, 8, 11, 15, 20, 25, 31, 38, …, i.e. if $`n`$ is of the form 3$`m`$ we have $`d_n`$=$`3m^2-m+1`$, for $`n`$=3$`m`$+1, $`d_n`$=$`3m^2+m+1`$, and for $`n`$=3$`m`$+2, $`d_n`$=$`3m^2+3m+2`$. In the generic nonautonomous case the corresponding degrees are 1, 1, 2, 3, 5, 8, 13, …. The requirement that $`d_6`$=11 leads to the conditions $`a_{n+1}=a_{n-1}`$ and $`b_{n+2}-b_{n+1}-b_{n-1}+b_{n-2}`$=0. Thus $`b`$ is linear with a ternary symmetry while $`a`$ is a constant (with an even/odd dependence which can be gauged away). This fully nonautonomous form of (4) is a discrete form of Painlevé IV studied in and where we have given its Lax pair.
We now turn to what is known as the “standard” discrete Painlevé equations and compare the results of singularity confinement to those of the algebraic entropy approach. We start with d-P<sub>I</sub> in the form:
$$x_{n+1}+x_n+x_{n-1}=a_n+\frac{b_n}{x_n}$$
$`(5)`$
The degrees of the iterates of the autonomous mapping are 0, 1, 2, 3, 6, 9, 12, 17, 22, …, i.e. a quadratic growth with $`d_{3m+k}`$=$`3m^2+(2m+1)k`$ for $`k=`$0,1,2, while those of the generic nonautonomous one are 0, 1, 2, 3, 6, 11, …. Requiring two extra factorisations at that level (so as to bring $`d_5`$ down to 9) we find the following conditions: $`a_{n+1}=a_n`$, so $`a`$ must be a constant, and $`b_{n+2}-b_{n+1}-b_n+b_{n-1}`$=0, i.e. $`b`$ is of the form $`b_n=\alpha n+\beta +\gamma (-1)^n`$, which are exactly the results of singularity confinement. Implementing these conditions we find that the autonomous and nonautonomous mappings have the same (polynomial) growth . Both are integrable, the Lax pair of the nonautonomous one, namely d-P<sub>I</sub>, having been given in .
For the discrete P<sub>II</sub> equation we have
$$x_{n+1}+x_{n-1}=\frac{a_nx_n+b_n}{x_n^2-1}$$
$`(6)`$
The degrees of the iterates in the autonomous case are $`d_n`$=0, 1, 2, 4, 6, 9, 12, 16, 20, … (i.e. $`d_{2m-1}`$=$`m^2`$, $`d_{2m}`$=$`m^2+m`$), while in the generic nonautonomous case we find the first discrepancy for $`d_4`$ which is now 8. To bring it down to 6 we find two conditions, $`a_{n+1}-2a_n+a_{n-1}`$=0 and $`b_{n+1}=b_{n-1}`$. This means that $`a`$ is linear in $`n`$ and $`b`$ is an even/odd constant, as predicted by singularity confinement. Once we implement these constraints, the degrees of the nonautonomous and autonomous cases coincide. The Lax pair of equation (6) in the nonautonomous form, i.e. d-P<sub>II</sub>, has been presented in .
The $`q`$-P<sub>III</sub> equation was obtained from the deautonomisation of the mapping:
$$x_{n+1}x_{n-1}=\frac{(x_n-a_n)(x_n-b_n)}{(1-c_nx_n)(1-x_n/c_n)}$$
$`(7)`$
In the autonomous case we obtain the degrees $`d_n`$=0, 1, 2, 5, 8, 13, 18, …, just like for equation (2), while in the generic nonautonomous case we have 0, 1, 2, 5, 12, …. For $`d_4`$ to be 8 instead of 12, one needs four factors to cancel out. The conditions are $`c_{n+1}=c_{n-1}`$ and $`a_{n+1}b_{n-1}=a_{n-1}b_{n+1}=a_nb_n`$. Thus $`c`$ is a constant up to an even/odd dependence, while $`a`$ and $`b`$ are proportional to $`\lambda ^n`$ for some $`\lambda `$, with an extra even/odd dependence, just as predicted by singularity confinement in . The Lax pair for $`q`$-P<sub>III</sub> has been presented in .
For the remaining three discrete Painlevé equations the Lax pairs are not known yet. It is thus important to have one more check of their integrability provided by the algebraic entropy approach. We start with d-P<sub>IV</sub> in the form:
$$(x_{n+1}+x_n)(x_{n-1}+x_n)=\frac{(x_n^2-a^2)(x_n^2-b^2)}{(x_n+z_n)^2-c^2}$$
$`(8)`$
where $`a`$, $`b`$ and $`c`$ are constants. If $`z_n`$ is constant we obtain for the degrees of the successive iterates $`d_n`$=0, 1, 3, 6, 11, 17, 24, …. The general expression of the growth is $`d_n`$=6$`m^2`$ if $`n=3m`$, $`d_n`$=6$`m^2+4m+1`$ if $`n=3m+1`$ and $`d_n`$=6$`m^2+8m+3`$ if $`n=3m+2`$. This polynomial (quadratic) growth is expected since in the autonomous case this equation is integrable, its solution being given in terms of elliptic functions. For a generic $`z_n`$ we obtain the sequence $`d_n`$=0, 1, 3, 6, 13, …. The condition for the extra factorizations to occur in the last case, bringing down the degree $`d_4`$ to 11, is for $`z`$ to be linear in $`n`$. We can check that the subsequent degrees coincide with those of the autonomous case.
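As a quick mechanical check (ours), the piecewise-quadratic formula just quoted indeed reproduces the listed sequence:

```python
# Trivial verification (ours) of the quoted degree formula for d-P_IV.
def d(n):
    m, k = divmod(n, 3)
    return (6*m*m, 6*m*m + 4*m + 1, 6*m*m + 8*m + 3)[k]

print([d(n) for n in range(7)])   # [0, 1, 3, 6, 11, 17, 24]
```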
For the $`q`$-P<sub>V</sub> we start from:
$$(x_{n+1}x_n-1)(x_{n-1}x_n-1)=\frac{(x_n^2+ax_n+1)(x_n^2+bx_n+1)}{(1-z_ncx_n)(1-z_ndx_n)}$$
$`(9)`$
where $`a`$, $`b`$, $`c`$ and $`d`$ are constants. If moreover $`z`$ is also a constant, we obtain exactly the same sequence of degrees $`d_n`$=0, 1, 3, 6, 11, 17, 24, … as in the d-P<sub>IV</sub> case. Again, this polynomial (quadratic) growth is expected since this mapping is also integrable in terms of elliptic functions. For the generic nonautonomous case we again find the sequence $`d_n`$=0, 1, 3, 6, 13, …. Once more we require a factorization bringing down $`d_4`$ to 11. It turns out that this entails a $`z`$ which is exponential in $`n`$, which then generates the same sequence of degrees as the autonomous case. In both the d-P<sub>IV</sub> and $`q`$-P<sub>V</sub> cases we find the $`n`$-dependence already obtained through singularity confinement. Since this results in a vanishing algebraic entropy we expect both equations to be integrable.
The final system we shall study is the one related to the discrete P<sub>VI</sub> equation:
$$\frac{(x_{n+1}x_n-z_{n+1}z_n)(x_{n-1}x_n-z_{n-1}z_n)}{(x_{n+1}x_n-1)(x_{n-1}x_n-1)}=\frac{(x_n^2+az_nx_n+z_n^2)(x_n^2+bz_nx_n+z_n^2)}{(x_n^2+cx_n+1)(x_n^2+dx_n+1)}$$
$`(10)`$
where $`a`$, $`b`$, $`c`$ and $`d`$ are constants. In fact the generic symmetric QRT mapping can be brought to the autonomous ($`z_n`$ constant) form of equation (10) through the appropriate homographic transformation. In the autonomous case, we obtain the degree sequence $`d_n`$=0, 1, 4, 9, 16, 25, …, i.e. $`d_n`$=$`n^2`$. Since mapping (10) is rather complicated we cannot investigate its full freedom. Still we were able to perform two interesting calculations. First, assume that in the rhs instead of the function $`z_n`$ a different function $`\zeta _n`$ appears. In this case the degrees grow like 0, 1, 5, …, and the condition to have $`d_2`$=4 instead of 5 is $`z_{n+1}z_{n-1}z_n^2=\zeta _n^4`$. Assuming this is true, we compute the degree $`d_3`$ of the next iterate and find $`d_3`$=13 instead of 9. To bring down $`d_3`$ to the value 9 we need $`z_n^2=\zeta _n^2`$, which up to a redefinition of $`a`$ and $`b`$ means $`z_n=\zeta _n`$. This implies $`z_{n+1}z_{n-1}=z_n^2`$, and $`z_n`$ is thus an exponential function of $`n`$, $`z_n`$=$`\lambda ^n`$ (which is in agreement with the results of ). Then a quartic factor drops out and $`d_3`$ is just 9. One can then check that the next degree is 16, just as in the autonomous case. Thus the $`q`$-P<sub>VI</sub> equation leads to the same growth as the generic symmetric QRT mapping and is thus expected to be integrable. As a matter of fact we were able to show that the generic asymmetric QRT mapping leads to the same growth $`d_n`$=$`n^2`$ as the symmetric one. This is not surprising, given the integrability of this mapping. What is interesting is that the growth of the generic symmetric and asymmetric QRT mappings is the same. Thus $`d_n`$=$`n^2`$ is the maximal growth one can obtain for the QRT mapping in the homogeneous variables we are using. As a matter of fact we have also checked that the asymmetric nonautonomous $`q`$-P<sub>VI</sub> equation, introduced in , led to exactly the same degree growth $`d_n`$=$`n^2`$.
Let us summarize our findings. In this paper, we have compared the method of singularity confinement and the approach based on the study of algebraic entropy when applied to the deautonomisation of integrable mappings. We have shown that in every case the confinement condition which ensured that the singularity patterns of the autonomous and non-autonomous cases are identical was precisely the one necessary in order to bring the growth down to the one obtained in the autonomous case. This validates the deautonomisation results obtained through singularity confinement, at least in the domain of d-$`P`$’s. This also suggests a strategy for the study of integrable mappings. We believe that, in the light of the present results, when one starts from an integrable autonomous mapping the deautonomisation can be performed solely with the help of singularity confinement, a procedure considerably simpler than the calculation of the algebraic entropy.
Our present investigation also sheds light on singularity confinement and its necessary character as a discrete integrability criterion. Let us go back to the example of mapping (2) with $`b=1`$. We start with $`x_0=p`$, $`x_1=q/r`$. Iterating further we find
$$x_2=\frac{r^2+aqr-pq^2}{q^2},x_3=\frac{qP_4}{r(r^2+aqr-pq^2)^2},x_4=\frac{(r^2+aqr-pq^2)P_6}{P_4^2},x_5=\frac{P_4P_9}{rP_6^2}$$
where the $`P_k`$’s are homogeneous polynomials in $`q`$, $`r`$ of degree $`k`$. (Remember that $`p`$ is of zero homogeneous degree in our convention.) The pattern now becomes clear. Whenever a new polynomial appears in the numerator of $`x_n`$ its square will appear in the denominator of $`x_{n+1}`$ and it will appear one last time as a factor of the numerator of $`x_{n+2}`$, after which it disappears due to factorisations. The singularities we are working with in the singularity confinement approach correspond to the zeros of any of these polynomials, which explains the pattern $`\{0,\mathrm{\infty }^2,0\}`$. The singularity confinement is intimately related to this factorisation, which plays a crucial role in the algebraic entropy approach. Let us suppose now that $`a`$ is a generic function of $`n`$. In this case we get the sequence:
$$x_2=\frac{r^2+a_1qr-pq^2}{q^2},x_3=\frac{qQ_4}{r(r^2+a_1qr-pq^2)^2},x_4=\frac{(r^2+a_1qr-pq^2)Q_7}{qQ_4^2}$$
$$x_5=\frac{qQ_4Q_{12}}{r(r^2+a_1qr-pq^2)Q_7^2}$$
where the $`Q_k`$’s are also homogeneous polynomials in $`q`$, $`r`$ of degree $`k`$. Now the simplifications that do occur are insufficient to curb the asymptotic growth. As a matter of fact, if we follow a particular factor we can check that it keeps appearing either in the numerator or the denominator (where its degree is alternatively 1 and 2). This corresponds to the unconfined singularity pattern $`\{0,\mathrm{\infty }^2,0,\mathrm{\infty },0,\mathrm{\infty }^2,0,\mathrm{\infty },\mathrm{\dots }\}`$. Once more, the confinement condition $`a_{n+1}-2a_n+a_{n-1}=0`$ is the condition for $`q`$ to divide exactly $`Q_7`$, for both $`q`$ and $`r^2+a_1qr-pq^2`$ to divide exactly $`Q_{12}`$, etc. Our analysis clearly shows why singularity confinement is necessary for integrability while not being sufficient in general. Still, in the case of integrable deautonomisation it does lead to the correct answer, which explains its success in the derivation of the discrete Painlevé equations.
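The cancellation mechanism just described can also be watched directly in an $`\epsilon `$-expansion. The sketch below (assuming SymPy; our own illustration with $`b=1`$) enters the singularity with $`x_n=\epsilon `$ and a generic pre-singularity value $`w=x_{n-1}`$; the residual $`\epsilon ^{-1}`$ pole in the third iterate is proportional to $`a_{n+2}-2a_{n+1}+a_n`$, so it vanishes exactly for the confining, linear choice of $`a_n`$ and survives for a generic one.

```python
import sympy as sp

eps, w, n = sp.symbols('epsilon w n')
alpha, beta = sp.symbols('alpha beta')

def iterate(a_values):
    """Apply x_{k+1} = (a_k x_k + 1)/x_k^2 - x_{k-1}, starting from w, eps."""
    x_prev, x_cur, out = w, eps, []
    for a in a_values:
        x_prev, x_cur = x_cur, sp.cancel((a * x_cur + 1) / x_cur**2 - x_prev)
        out.append(x_cur)
    return out

confined = iterate([alpha * (n + k) + beta for k in range(3)])
generic = iterate(sp.symbols('a1 a2 a3'))

for x in confined:
    print(sp.series(x, eps, 0, 2))        # eps^{-2}, then -eps, then finite
print(sp.series(generic[-1], eps, 0, 2))  # an eps^{-1} pole survives
```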
Acknowledgements. The financial help of the cefipra, through the contract 1201-1, is gratefully acknowledged. The authors are grateful to J. Fitch who provided them with a new (beta) version of reduce without which the calculations presented here would have been impossible. B. Grammaticos acknowledges interesting discussions with J. Hietarinta.
References
B. Grammaticos, A. Ramani and V. Papageorgiou, Phys. Rev. Lett. 67 (1991) 1825.
M.J. Ablowitz, A. Ramani and H. Segur, Lett. Nuov. Cim. 23 (1978) 333.
J. Hietarinta and C.-M. Viallet, Phys. Rev. Lett. 81 (1998) 325.
V.I. Arnold, Bol. Soc. Bras. Mat. 21 (1990) 1.
A.P. Veselov, Comm. Math. Phys. 145 (1992) 181.
M.P. Bellon and C.-M. Viallet, Algebraic Entropy, Comm. Math. Phys. to appear.
A. Ramani, B. Grammaticos and J. Hietarinta, Phys. Rev. Lett. 67 (1991) 1829.
G.R.W. Quispel, J.A.G. Roberts and C.J. Thompson, Physica D34 (1989) 183.
A.S. Fokas, B. Grammaticos and A. Ramani, J. Math. An. and Appl. 180 (1993) 342.
V.G. Papageorgiou, F.W. Nijhoff, B. Grammaticos and A. Ramani, Phys. Lett. A 164 (1992) 57.
B. Grammaticos, A. Ramani, V. Papageorgiou, Phys. Lett. A 235 (1997) 475.
B. Grammaticos, Y. Ohta, A. Ramani, H. Sakai, J. Phys. A 31 (1998) 3545.
A.R. Its, A.V. Kitaev and A.S. Fokas, Usp. Mat. Nauk 45,6 (1990) 135.
B. Grammaticos A. Ramani, J. Phys. A 31 (1998) 5787.
N. Joshi, D. Bartonclay, R.G. Halburd, Lett. Math. Phys. 26 (1992) 123.
B. Grammaticos, F.W. Nijhoff, V.G. Papageorgiou, A. Ramani and J. Satsuma, Phys. Lett. A185 (1994) 446.
M. Jimbo and H. Sakai, Lett. Math. Phys. 38 (1996) 145.
B. Grammaticos and A. Ramani, Phys. Lett. A 257 (1999) 288.
# Order parameter model for unstable multilane traffic flow
## I Introduction. Macroscopic models for multilane traffic dynamics
The study of traffic flow has actually formed a novel branch of physics since the pioneering works by Lighthill and Whitham , Richards , and, then, by Prigogine and Herman . It is singled out by the fact that, in spite of the motivated, i.e., non-physical, individual behavior of moving vehicles (they make up a so-called ensemble of “self-driven particles”, see, e.g., ), traffic flow exhibits a wide class of critical and self-organization phenomena met in physical systems (for a review see ). Besides, the methods of statistical physics turn out to be a useful basis for the theoretical description of traffic dynamics .
The existence of a new basic phase in vehicle flow on multilane highways called the synchronized motion was recently discovered by Kerner and Rehborn , significantly impacting the physics of traffic as a whole. In particular, it turns out that the spontaneous formation of moving jams on highways proceeds mainly through a sequence of two transitions: “free flow $`\to `$ synchronized motion $`\to `$ stop-and-go pattern” . All three traffic modes are phase states, meaning their ability to persist individually for a long time. Besides, the two transitions exhibit hysteresis , i.e., for example, the transition from the free flow to the synchronized mode occurs at a higher density and lower velocity than the inverse one. As follows from the experimental data , the phase transition “free flow $`\to `$ synchronized mode” is essentially a multilane effect. Recently Kerner assumed it to be caused by a “Z”-like form of the overtaking probability as a function of the car density.
The synchronized mode is characterized by substantial correlations in the car motion along different lanes because of the lane changing maneuvers. So, to describe such phenomena a multilane traffic theory is required. Several macroscopic models dealing with multilane traffic flow have been proposed, based on the gas-kinetic theory , on a compressible fluid model generalizing the approach by Kerner and Konhäuser , and on a model actually dealing with the time dependent Ginzburg-Landau equation .
All these models describe traffic flow in terms of its characteristics averaged over the road cross-section, namely, the car density $`\rho `$, mean velocity $`v`$, and, possibly, the velocity variance $`\theta `$, or ascribe these quantities to the vehicle flow at each lane $`\alpha `$ individually. In other words, the quantities $`\{\rho ,v,\theta \}_\alpha `$ are regarded as a complete system of the traffic flow state variables: if they are fixed then all the vehicle flow characteristics should be determined. The given models relate the self-organization phenomena actually to the vehicle flow instability caused by the delay in the driver response to changes in the motion of the nearest cars ahead. In fact, let us briefly consider their simplified version (cf. ) which, nevertheless, captures the basic features taken into account:
$`{\displaystyle \frac{\partial \rho }{\partial t}}+{\displaystyle \frac{\partial (\rho v)}{\partial x}}`$ $`=`$ $`0,`$ (1)
$`{\displaystyle \frac{\partial v}{\partial t}}+v{\displaystyle \frac{\partial v}{\partial x}}`$ $`=`$ $`-{\displaystyle \frac{1}{\rho }}{\displaystyle \frac{\partial 𝒫}{\partial x}}+{\displaystyle \frac{1}{\tau ^{\prime }}}(𝒰-v).`$ (2)
Here the former term on the right-hand side of Eq. (2), a so-called pressure term, reflects dispersion effects due to the finite velocity variance $`\theta `$ of the vehicles, and the latter one describes the relaxation of the current velocity, within the time $`\tau ^{\prime }`$, to a certain equilibrium value $`𝒰\{\rho ,\theta \}`$. In particular, for
$$𝒫\{\rho ,v,\theta \}=\rho \theta -\eta \frac{\partial v}{\partial x},$$
(3)
where $`\eta `$ is a “viscosity” coefficient and the velocity variance $`\theta `$ is treated as a constant, we are confronted with the Kerner-Konhäuser model . In the present form the relaxation time $`\tau ^{\prime }`$ characterizes the acceleration capability of the mean vehicle as well as the delay in the driver control over the headway (see, e.g., ). The value of the relaxation time is typically estimated as $`\tau ^{\prime }\sim 30`$ s because it is mainly determined by the mean time of vehicle acceleration, which is a slower process than the vehicle deceleration or the driver reaction to changes in the headway. The equilibrium velocity $`𝒰\{\rho \}`$ (here the fixed velocity variance $`\theta `$ is not directly shown in the argument list) is chosen by drivers keeping in mind safety, the readiness for risk, and the legal traffic regulations. For homogeneous traffic flow the equilibrium velocity $`𝒰\{\rho \}=\vartheta (\rho )`$ is regarded as a certain phenomenological function meeting the conditions:
$$\frac{d\vartheta (\rho )}{d\rho }<0\quad \text{and}\quad \rho \vartheta (\rho )\to 0\quad \text{as}\quad \rho \to \rho _0,$$
(4)
where $`\rho _0`$ is the upper limit of the vehicle density on the road. Since the drivers also anticipate risky maneuvers of distant cars, the dependence $`𝒰\{\rho \}`$ is nonlocal. In particular, it is reasonable to assume that the driver behavior is mainly governed by the car density $`\rho `$ at a certain distant “interaction point” $`x_a=x+L^{\prime }`$ rather than at the current one $`x`$, which gives rise to a new term in Eq. (2) based on the gas-kinetic theory . Here, for the sake of simplicity, following , we take this effect into account by expanding $`\rho (x+L^{\prime })`$ and, then, $`𝒰\{\rho \}`$ into the Taylor series, and write:
$$𝒰\{\rho \}=\vartheta (\rho )-v_0\frac{L^{\prime }}{\rho }\frac{\partial \rho }{\partial x},$$
(5)
where $`v_0`$ is a certain characteristic velocity of the vehicles. Then, linearizing the obtained system of equations with respect to small perturbations $`\delta \rho ,\delta v\propto \mathrm{exp}(\gamma t+ikx)`$, we obtain that the long-wave instability will occur if (cf. )
$$\tau ^{\prime }\left(\rho \vartheta _\rho ^{\prime }\right)^2>v_0L^{\prime }+\tau ^{\prime }\theta ,$$
(6)
in the long-wave limit the instability increment $`\text{Re}\gamma `$ depends on $`k`$ as
$$\text{Re}\gamma =k^2\left[\tau ^{\prime }(\rho \vartheta _\rho ^{\prime })^2-(v_0L^{\prime }+\tau ^{\prime }\theta )\right],$$
(7)
and the upper boundary $`k_{\text{max}}`$ of the instability region in the $`k`$-space is given by the expression:
$$k_{\text{max}}^2=\frac{\rho }{\tau ^{\prime }\eta }\left\{\left[\frac{\tau ^{\prime }(\rho \vartheta _\rho ^{\prime })^2}{(v_0L^{\prime }+\tau ^{\prime }\theta )}\right]^{1/2}-1\right\}$$
(here $`\vartheta _\rho ^{\prime }=d\vartheta (\rho )/d\rho `$). As follows from (6), the instability can occur when the delay time $`\tau ^{\prime }`$ exceeds a certain critical value $`\tau _c`$, and for $`\tau ^{\prime }<\tau _c`$ the homogeneous traffic flow is stable at least with respect to small perturbations. Moreover, the instability increment attains its maximum at $`k\sim k_{\text{max}}`$, so special conditions are required for a wide vehicle cluster to form . In particular, in the formal limit $`\tau ^{\prime }\to 0`$ we get from Eq. (2)
$$v=\vartheta (\rho )-\frac{D}{\rho }\frac{\partial \rho }{\partial x},$$
(8)
where $`D=v_0L^{\prime }`$ plays the role of the diffusion coefficient of vehicle perturbations. The substitution of (8) into (1) yields the Burgers equation
$$\frac{\partial \rho }{\partial t}+\frac{\partial \left[\rho \vartheta (\rho )\right]}{\partial x}=D\frac{\partial ^2\rho }{\partial x^2},$$
(9)
which describes a vehicle flux that is stable, at least, with respect to small perturbations in the vehicle density $`\rho `$.
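To make the stability discussion concrete, the criterion (6), the long-wave increment (7) and the cutoff $`k_{\text{max}}`$ can be evaluated numerically. The sketch below uses illustrative parameter values of our own choosing (not taken from any data set) and, purely for demonstration, a linear equilibrium velocity $`\vartheta (\rho )=v_0(1-\rho /\rho _0)`$ compatible with the conditions (4):

```python
import numpy as np

# Illustrative values (ours): relaxation time tau' [s], velocity variance
# theta [m^2/s^2], viscosity eta [m^2/s], v0 [m/s], L' [m], rho_0 [vehicles/m].
tau, theta, eta = 30.0, 20.0, 100.0
v0, L, rho0 = 30.0, 60.0, 0.1

def dvartheta(rho):                  # derivative of the assumed linear closure
    return -v0 / rho0

def unstable(rho):                   # instability criterion (6)
    return tau * (rho * dvartheta(rho))**2 > v0 * L + tau * theta

def growth_rate(rho, k):             # long-wave increment (7)
    return k**2 * (tau * (rho * dvartheta(rho))**2 - (v0 * L + tau * theta))

def k_max(rho):                      # upper edge of the unstable band
    ratio = tau * (rho * dvartheta(rho))**2 / (v0 * L + tau * theta)
    return np.sqrt(rho / (tau * eta) * (np.sqrt(ratio) - 1.0)) if ratio > 1 else 0.0

for rho in (0.02, 0.05, 0.08):
    print(rho, unstable(rho), growth_rate(rho, 1e-3), k_max(rho))
```

With these numbers the homogeneous flow is stable at low density and loses stability at moderate densities, with $`k_{\text{max}}`$ of order $`10^{-3}`$ m<sup>-1</sup>, i.e. perturbations on the kilometre scale grow first.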
However, the recent experimental data about traffic flow on German highways have demonstrated that the characteristics of the multilane vehicle motion are more complex (for a review see ). In particular, there are actually three types of synchronized mode: the totally homogeneous state, the homogeneous-in-speed one, and the totally heterogeneous one . Especially the homogeneous-in-speed state demonstrates the fact that, in contrast to the free flow, there is no direct relationship between the density and flux of vehicles in the synchronized mode, because their variations are totally uncorrelated . For example, an increase in the vehicle density can be accompanied by either an increase or a decrease in the vehicle flux, with the car velocity being practically constant. As a result, the synchronized mode corresponds to a two-dimensional region in the flow-density plane ($`j`$-$`\rho `$ plane) rather than to a certain line $`j=\vartheta (\rho )\rho `$ . Keeping in mind a hypothesis by Kerner about the metastability of each particular state in this synchronized mode region, it is natural to assume that there should be at least one additional state variable affecting the vehicle flux. The other important feature of the synchronized mode is the key role of some cars bunched together and traveling much faster than the typical ones, which enables us to regard them as a special car group . Therefore, in the synchronized mode the function of car distribution in the velocity space should have two maxima, and we will call such fast car groups platoons in speed.
Anomalous properties of the synchronized mode have been substantiated also in using single-car data. In particular, as the car density comes to the critical value $`\rho _c`$ of the transition “free flow $`\to `$ synchronized mode” the time-headway distribution exhibits a short time peak (at 0.8 sec.). This short time-headway corresponds to “…platoons of some vehicles traveling very fast–their drivers are taking the risk of driving “bumper-to-bumper” with a rather high speed. These platoons are the reason for occurrence of high-flow states in free traffic” . The platoons are metastable and their destruction gives rise to the congested vehicle motion . In the synchronized mode the weight of the short time-headways is smaller; however, almost every fourth driver falls below the 1-sec-threshold. In the vicinity of the transition “free flow $`\to `$ synchronized mode” the short time-headways have the greatest weight. In other words, at least near the given phase transition the traffic flow state is to be characterized by two different driver groups which separate from each other in the velocity space and, consequently, in multilane traffic flow there should be another relaxation process distinct from the one taken into account by the model (1), (2). In order to move faster than the statistically averaged car a driver should be permanently manoeuvring, passing by the cars moving ahead. The meeting of several such “fast” drivers seems to cause the platoon formation. Obviously, to drive in such a manner requires additional effort, so each driver needs a certain time $`\tau `$ to make the decision whether or not to take part in these manoeuvres. It is exactly the time $`\tau `$ that characterizes the relaxation processes in the platoon evolution. It should be noted that the overtaking manoeuvres are not caused by the control over the headway distance and, thus, the corresponding transient processes may be much slower than the driver response to variations in the headway to prevent possible traffic accidents.
The analysis of the obtained optimal-velocity function $`V(\mathrm{\Delta }x)`$ demonstrates its dependence not only on the headway $`\mathrm{\Delta }x`$ but also on the local car density. So, in congested flow the drivers supervise the vehicle arrangement, or at least try to do so, in a sufficiently large neighborhood covering several lanes.
Another unexpected fact is that what distinguishes the synchronized mode is not mainly the car velocities at different lanes being equal. In the observed traffic flow various lanes did not exhibit a substantial difference in the car velocity even in the free flow. In agreement with the results obtained by Kerner , the synchronized mode is singled out by small correlations between fluctuations in the car flow, velocity and density. There is only a strong correlation between the velocities at different lanes taken at the same time; however, it decreases sufficiently fast as the time difference increases. By contrast, there are strong long-time correlations between the flow and density for the free flow as well as the stop-and-go mode. In these phases the vehicle flow directly depends on the density .
Thereby, the free flow, the synchronized mode and the jammed motion seem to be qualitatively distinct from one another at the microscopic level. So, it is likely that to describe macroscopically the traffic phase transitions the set of the state variables $`\{\rho ,v,\theta \}_\alpha `$ should be completed with an additional parameter (or parameters) reflecting the internal correlations in the car dynamics. In other words, this parameter has to be due to the “many-body” effects in the car interaction, in contrast to such external variables as the mean car density and velocity, being actually the zeroth and first moments of the “one-particle” distribution function. Thus, it can be regarded as an independent state variable of traffic flow. The derivation of macroscopic traffic equations based on a Boltzmann-like kinetic approach has also shown that there is an additional internal degree of freedom in the vehicle dynamics .
In any case a theory of unstable traffic flow has to answer, in particular, the question of why its two phases, e.g., the free flow and the synchronized mode, can coexist and, thus, what the difference between them is, as well as why the separating transition region (Fig. 1) does not widen but keeps a certain thickness. Besides, it should specify the velocity $`u`$ of this region depending on the traffic phase characteristics. There is a general expression relating the transition region velocity $`u`$ to the density and mean velocity of cars in the free flow and in a developed car cluster, $`\rho _f`$, $`v_f`$, and $`\rho _{cl}`$, $`v_{cl}`$, respectively, that follows from the vehicle conservation , namely, the Lighthill–Whitham formula:
$$u=\frac{\rho _{cl}v_{cl}-\rho _fv_f}{\rho _{cl}-\rho _f}.$$
(10)
A specific model has to give additional relationships between the quantities $`u`$, $`\rho _f`$, $`v_f`$, and $`\rho _{cl}`$, $`v_{cl}`$ resulting from particular details of the car interaction. We note that a description similar to Eqs. (1), (2), dealing solely with the external parameters $`\{\rho ,v\}`$, does not actually make a distinction between the free-flow and congested phases, and their coexistence is due to the particular details of the car interaction.
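As a numerical illustration of Eq. (10) (the values are ours, chosen only for orientation):

```python
# Lighthill-Whitham front velocity between free flow and a dense cluster.
rho_f, v_f = 0.02, 30.0      # vehicles/m, m/s  (illustrative free flow)
rho_cl, v_cl = 0.08, 5.0     # vehicles/m, m/s  (illustrative cluster)

u = (rho_cl * v_cl - rho_f * v_f) / (rho_cl - rho_f)
print(u)   # about -3.3 m/s: the cluster boundary drifts upstream
```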
The transition “free flow $`\to `$ synchronized motion” is rather similar to aggregation phenomena in physical systems such as an undercooled liquid, where in the metastable phase (the undercooled liquid) the transition to a new ordered (crystalline) phase goes through the formation of small clusters. Keeping in mind this analogy, Mahnke et al. have proposed a kinetic approach based on a stochastic master equation describing the synchronized mode formation that deals with individual free cars and their clusters. The cluster evolution is governed by the attachment and evaporation of individual cars, and the synchronized mode is regarded as the motion of a large cluster.
To describe such phenomena in physical systems an effective macroscopic approach was developed, called the Landau theory of phase transitions , which introduces a certain order parameter $`h`$ characterizing the correlations, e.g., in the atom arrangement. In the present paper, following practically the spirit of the Landau theory, we develop a phenomenological approach to the description of the traffic flow instability that ascribes to the vehicle flux an additional internal parameter, which will also be called the order parameter $`h`$, and allows for the effect of lane changing on the vehicle motion. In this way the free flow and the congested phases become in fact distinct, and solely the conditions of their coexistence as well as the dynamics of the transition layer are the subject of specific models.
## II Order parameter and the individual driver behavior
We describe the vehicle flow on a multilane highway in terms of its characteristics averaged over the road cross-section, namely, by the car density $`\rho `$, the mean velocity $`v`$, and the order parameter $`h`$. The latter is the measure of the correlations in the car motion or, what is equivalent, of the car arrangement regularity forming due to lane changing by the “fast” drivers. Let us discuss the physical meaning of the order parameter $`h`$ in detail, considering the free flow, synchronized mode and jammed traffic individually (Fig. 2).
### A Physical meaning of the order parameter $`h`$ and its governing equation
When vehicles move on a multilane highway without changing lanes they interact practically with the nearest neighbors ahead only and, so, there should be no internal correlations in the vehicle flow at different lanes. Therefore, although under this condition the traffic flow can exhibit complex behaviour, for example, the “stop-and-go” waves can develop, it is actually of one-dimensional nature. In particular, the drivers that would prefer to move faster than the statistically mean driver will bunch up, forming platoons headed by a relatively slower vehicle. When the cars begin to change lanes for overtaking slow vehicles, the car ensembles at different lanes will affect one another. The cause of this interaction is that a car during a lane change manoeuvre occupies, in a certain sense, two lanes simultaneously, affecting the cars moving behind it at both lanes. Figure 2$`b`$ illustrates this interaction for cars 1 and 2 through car 4 changing the lanes. The drivers of both cars 1 and 2 have to regard car 4 as the nearest neighbor and, so, their motion will be correlated during the given manoeuvre and after it during the relaxation time $`\tau ^{\prime }`$. In the same way car 1 is affected by car 3 because the motion of car 4 directly depends on the behavior of car 3. The more frequently lane changing is performed, the more correlated is the traffic flow on a multilane highway. Therefore, it is reasonable to introduce the order parameter $`h`$ as the mean density of such car triplets normalized to its maximum possible value for the given highway and to regard it as a measure of the multilane correlations in the vehicle flow.
On the other hand the order parameter $`h`$ introduced in this way can be regarded as a measure of the vehicle arrangement regularity. Let us consider this question in detail for the free flow, synchronized mode, and jammed traffic individually. In the free flow the feasibility of overtaking makes the vehicle arrangement more regular because of platoon dissipation. So as the order parameter $`h`$ grows the free traffic becomes more regular. Nevertheless, in this case the density of the car mulitlane triplets remains relatively low, $`h1`$, and the vehicle ensembles should exhibit weak correlations. Whence it follows also that the mean car velocity $`\vartheta `$ is an increasing function of the order parameter $`h`$ in the free flow. In the jammed motion (Fig. 2$`c`$) leaving current lanes is hampered because of lack of room for the manoeuvres. So the car ensembles at different lanes can be mutually independent in spite of individual complex behavior. In the given case the order parameter must be small too, $`h1`$, but, in contrast, the car mean velocity should be a decreasing function of $`h`$. In fact, for highly dense traffic any lane change of a car requires practically that the neighbor drivers decelerating give place for this manoeuvres.
Figure 3 illustrates the transition “free flow $``$ synchronized mode”. As the car density grows in free flow, the “fast” drivers that at first overtake slow vehicles individually begin to gather into platoons headed by more “slow” cars among them but, nevertheless, moving faster than the statistically mean vehicle (Fig. 3$`a`$). The platoons are formed by drivers preferring to move as fast as possible keeping short headways without lane changing. Such a state of the traffic flow should be sufficiently inhomogeneous and the vehicle headway distribution has to contain a short headway spike as observed experimentally in . Therefore, even at a sufficiently high car density the free flow should be characterized by weak multilane correlations and not too great values of the order parameter $`h_f`$. The structure of these platoons is also inhomogeneous, they comprise cars whose drivers would prefer to move at different headways (for a fixed velocity) under comfortable conditions, i.e., when the cars moving behind a given car do not jam it or none of the vehicles moving on the neighboring lanes hinders its motion at the given velocity provided it changes the current lane. So, when the density of vehicles attains sufficiently high values and their mean velocity decreases remarkably with respect to the velocity on the empty highway some of the “fast” drivers can decide that there is no reason to move so slowly at such short headways requiring strain. Then they can either overtake the car heading the current platoon by changing lanes individually or leave the platoon and take vacant places (Fig. 3$`a`$). The former has to increase the multilane correlations and, in part, to decrease the mean vehicle velocity because the other drivers should give place for this manoeuvres in a sufficiently dense traffic flow. The latter also will decrease the mean vehicle velocity because these places were vacant from the standpoint of sufficiently “fast” drivers only but not from the point of view of the statistically mean ones preferring to keep longer headways in comparison with the platoon headways. Therefore, the statistically mean drivers have to decelerate, decreasing the mean vehicle velocity. The two manoeuvre types make the traffic flow more homogeneous dissipating the platoons and smoothing the headway distribution (Fig. 3$`b`$ and the low fragment). Besides, the single-vehicle experimental data show that the synchronized mode is singled out by long-distant correlations in the vehicle velocities, whereas the headway fluctuations are correlated only on small scales, which justifies the assumptions of the synchronized mode being a more homogeneous state than the free flow. We think that the given scenario describes the synchronized mode formation which must be characterized by a great value of the order parameter, $`h_s>h_f`$, and a lower velocity in comparison with the free flow at the same vehicle density.
In addition, whence it follows that, first, the left boundary of the headway distribution should be approximately the same for both the free flow and the synchronized mode near the phase transition, which corresponds the experimental data . Second, since in this case the transition from the free flow to the synchronized mode leads to the decrease in the mean velocity, the “fast” driver will see no reason to alter their behaviour and to move forming platoons again until the vehicle density decreases and the mean velocity grows enough. It is reasonable to relate this characteristics to the experimentally observed hysteresis in the transition “free flow $``$ synchronized mode” . Third, for a car to be able to leave a given platoon the local vehicle arrangement at the neighboring lane should be of special form and when an event of the vehicle rearrangement occurs its following evolution depends also on the particular details of the neighboring car configuration exhibiting substantial fluctuations. Therefore, the synchronized mode can comprise a great amount of local metastable states and correspond to a certain two-dimensional region on the flow-density plane ($`j\rho `$-plane) rather than a line $`j=\vartheta (\rho )\rho `$, which matches the experimental data and the modern notion of the synchronized mode nature . This feature seems to be similar to that met in physical media with local order, for example, in glasses where phase transitions are characterized by wide range of controlling parameters (temperature, pressure, etc.) rather than their fixed values (see, e.g., ).
This uncertainty of the synchronized mode, at least qualitatively, may be regarded as an effect of the internal fluctuations of the order parameter $`h`$ and at the first step we will ignore them assuming the order parameter $`h`$ to be determined in a unique fashion for fixed values of the vehicle density $`\rho `$ and the mean velocity $`v`$. Thus for a uniform vehicle flow we write:
$$\tau \frac{dh}{dt}=\mathrm{\Phi }(h,\rho ,v),$$
(11)
where $`\tau `$ is the time required of drivers coming to the decision to begin or stop overtaking manoeuvres and the function $`\mathrm{\Phi }(h,\rho ,v)`$ possesses a single stationary point $`h=h(\rho ,v)`$ being stable and, thus,
$$\frac{\mathrm{\Phi }}{h}>0.$$
(12)
The latter inequality is assumed to hold for all the values of the order parameter for simplicity. We note that equation (11) also allows for the delay in the driver response to changes on the road. However, in contrast with models similar to (1) and (2), here this effect is not the origin of the traffic flow instability and, thus, its particular description is not so crucial. Moreover, as discussed in the Introduction, the time $`\tau `$ characterizes the delay in the driver decision concerning to the lane changing but not the control over the headway, enabling us to assume $`\tau \tau ^{}`$.
The particular value $`h(v,\rho )`$ of the order parameter results from the compromise between the danger of accident during changing lanes and the will to move as fast as possible. Obviously, the lower is the mean vehicle velocity $`v`$ for a fixed value of $`\rho `$, the weaker is the lane changing danger and the stronger is the will to move faster. Besides, the higher is the vehicle density $`\rho `$ for a fixed value of $`v`$, the stronger is this danger (here the will has no effect at all). These statements enable us to regard the dependence $`h(v,\rho )`$ as a decreasing function of both the variables $`v`$, $`\rho `$ (Fig. 4) and taking into account inequality (12) to write:
$$\frac{\mathrm{\Phi }}{v}>0,\frac{\mathrm{\Phi }}{\rho }>0,$$
(13)
with the latter inequality stemming from the danger effect only.
Equation (11) describes actually the behavior of the drivers who prefer to move faster than the statistically mean vehicle and whose readiness for risk is greatest. Exactly this group of drivers govern the value of $`h`$. There is, however, another characteristics of the driver behavior, it is the mean velocity $`v=\vartheta (h,\rho )`$ chosen by the statistically averaged driver taking into account also the danger resulting from the frequent lane changing by the “fast” drivers. This characteristics is actually the same as one discussed in the Introduction but depends also on the order parameter $`h`$. So, as a function of $`\rho `$ it meets conditions (4). Concerning the dependence of $`\vartheta (h,\rho )`$ on $`h`$ we can state that generally this function should be increasing for small values of the car density, $`\rho \rho _0`$, because in the given case the lane changing practically makes no danger to traffic and all the drives can overtake vehicle moving at lower speed without risk. By contrast, when the vehicle density is sufficiently high, $`\rho \rho _0`$, only the most “impatient” drivers permanently change the lanes for overtaking, making an additional danger to the most part of other drivers. Therefore, in this case the velocity $`\vartheta (h,\rho )`$ has to decrease as the order parameter $`h`$ increases. For certain intermediate values of the vehicle density, $`\rho \rho _c`$, this dependence is to be weak. Fig. 5 shows the velocity $`\vartheta (h,\rho )`$ as a function of $`h`$ for different values of $`\rho `$, where, in addition, we assume the effect of order parameter $`h(0,1)`$ near the boundary points weak and set
$$\frac{\vartheta }{h}=0\text{at}h=0\text{and}h=1.$$
(14)
We will ignore the delay in the relaxation of the mean velocity to the equilibrium value $`v=\vartheta (h,\rho )`$ because the corresponding delay time characterizes the driver control over the headway and should be short, as already discussed above. Then the governing equation (11) for the order parameter $`h`$ can be rewritten in the form:
$$\tau \frac{dh}{dt}=\varphi (h,\rho );\varphi (h,\rho )\stackrel{\text{def}}{=}\mathrm{\Phi }[h,\rho ,\vartheta (h,\rho )].$$
(15)
For the steady state uniform vehicle flow the solution of the equation $`\varphi (h,\rho )=0`$ specifies the dependence $`h(\rho )`$ of the order parameter on the car density. Let us, now, study its properties and stability.
### B Nonmonotony of the $`h(\rho )`$ dependence and the traffic flow instability
To study the local characteristics of the right-hand side of Eq. (15) we analyze its partial derivatives
$`{\displaystyle \frac{\varphi }{h}}`$ $`=`$ $`{\displaystyle \frac{\mathrm{\Phi }}{h}}+{\displaystyle \frac{\mathrm{\Phi }}{v}}{\displaystyle \frac{\vartheta }{h}},`$ (16)
$`{\displaystyle \frac{\varphi }{\rho }}`$ $`=`$ $`{\displaystyle \frac{\mathrm{\Phi }}{\rho }}+{\displaystyle \frac{\mathrm{\Phi }}{v}}{\displaystyle \frac{\vartheta }{\rho }}.`$ (17)
As mentioned above, the value of $`\mathrm{\Phi }/\rho `$ is solely due to the danger during changing lanes, so this term can be ignored until the vehicle density $`\rho `$ becomes sufficiently high. In other words, in a certain region $`\rho <\rho _h\rho _0`$ the derivative $`\varphi /\rho (\mathrm{\Phi }/v)(\vartheta /\rho )<0`$ by virtue of (4) and (13). So, the local behavior of the function $`h(\rho )`$ (meeting the equality $`d\varphi =0`$ and, thus, $`dh/d\rho =(\varphi /\rho )(\varphi /h)^1`$) depends directly on the sign of the derivative $`\varphi /h`$, it is increasing or decreasing for $`\varphi /h>0`$ or $`\varphi /h<0`$, respectively.
For long-wave perturbations $`\mathrm{exp}\{ikx\}`$ of the car distribution on a highway the density $`\rho `$ can be treated as a constant at the lower order in $`k`$. Therefore, according to Eq. (15) the steady state traffic flow is unstable if $`\varphi /h<0`$.
Due to (12) and (14) the first term on the right-hand side of (16) is dominant in the vicinity of the lines $`h=0`$ and $`h=1`$, thus, in this region the curve $`h(\rho )`$ is increasing and the stationary state of the traffic flow is stable. For $`\rho <\rho _c`$ the value $`\vartheta /h>0`$ (Fig. 5), therefore, the whole region $`\left\{0<h<1,\mathrm{\hspace{0.33em}0}<\rho <\rho _c\right\}`$ corresponds to the stable car motion. However, for $`\rho >\rho _c`$ there can be a region of the order parameter $`h`$ where the derivative $`\varphi /h`$ changes the sign and the vehicle motion becomes unstable. Indeed, the solution $`v=\eta (h,\rho )`$ of the equation $`\mathrm{\Phi }(h,\rho ,v)=0`$ can be regarded as the mean vehicle velocity controlled by the “fast” drivers and is decreasing function of $`h`$ because of $`\eta /h=(\mathrm{\Phi }/h)/(\mathrm{\Phi }/v)^1`$. So, once such “active” drivers become to change the lanes to move faster, they will do this as frequently as possible especially if the mean velocity decreases, which corresponds to a considerable increase in $`h`$ for a small decrease in $`v`$. So, it is quite natural to assume that the value of $`\eta /h`$ for $`\rho >\rho _c`$ is sufficiently small and
$$\frac{\varphi }{h}=\frac{\mathrm{\Phi }}{v}\left(\frac{\vartheta }{h}\frac{\eta }{h}\right)<0.$$
(18)
Under these conditions the instability region does exist, the curve $`h(\rho )`$ can look like “S” (Fig. 6) and its decreasing branch corresponds to the unstable vehicle flow. The lower increasing branch matches the free flow state of the car motion, whereas the upper one should be related to the synchronized mode because it is characterized by the order parameter coming to unity.
### C Hysteresis and the fundamental diagram
The obtained dependence $`h(\rho )`$ actually describes the first order phase transition in the vehicle motion. Indeed, when increasing the car density exceeds the value $`\rho _1`$ the free flow becomes absolutely unstable and the synchronized mode forms through a sharp jump of the order parameter. If, however, after that the car density decreases the synchronized mode will persist until the car density attains the value $`\rho _2<\rho _1`$. It is a typical hysteresis and the region $`(\rho _2,\rho _1)`$ corresponds to the metastable phases of traffic flow.
Let us, now, discuss a possible form of the fundamental diagram $`j=j(\rho )`$ showing the vehicle flux $`j=\rho \vartheta [\rho ]`$ as a function of the car density $`\rho `$, where, by definition, $`\vartheta [\rho ]=\vartheta [h(\rho ),\rho ]`$. It should be pointed out that here we confine our consideration to the region of not too large values of the car density, $`\rho <\rho _h`$, where the transition “free flow $``$ synchronized mode” takes place. The transition “synchronized mode $``$ jammed traffic” will be discussed below. Fig. 7$`a`$ displays the dependence $`\vartheta (h,\rho )`$ of the mean vehicle velocity on the density $`\rho `$ for the fixed limit values of the order parameter $`h=0`$ and 1. For a small values of $`\rho `$ these curves practically coincide with each other. As the vehicle density $`\rho `$ grows and until it comes close to the critical value $`\rho _c`$ when the lane change danger becomes substantial, the velocity $`\vartheta (1,\rho )`$ practically does not depend on $`\rho `$. So at the point $`\rho _c`$ at which the curves $`\vartheta (1,\rho )`$ and $`\vartheta (0,\rho )`$ meet each other, $`\vartheta (1,\rho )`$ is to exhibit sufficiently sharp decrease in comparison with the latter one. Therefore, on one hand, the function $`j_1(\rho )=\rho \vartheta (1,\rho )`$ has to be decreasing for $`\rho >\rho _c`$. On the other hand, at the point $`\rho _c`$ for $`h1`$ the effect of the lane change danger is not extremely strong, it only makes the lane change ineffective, $`\vartheta /h0`$ (Fig. 5). So it is reasonable to assume the function $`j_0(\rho )=\rho \vartheta (0,\rho )`$ increasing near the point $`\rho _c`$. Under the adopted assumptions the relative arrangement of the curves $`j_0(\rho )`$, $`j_1(\rho )`$ is demonstrated in Fig. 7$`b`$, and Fig. 7$`c`$ shows the fundamental diagram of traffic flow resulting from Fig. 6 and Fig. 7$`b`$.
Concluding the present section we note that in the given description of the driver behavior governing the order parameter $`h`$ the vehicle flux $`j(h,\rho )=\rho \vartheta (h,\rho )`$ is an external characteristics of traffic flow. So, the obtained form of the fundamental diagram does not follows directly from the developed model, but can be interpreted sufficiently reasonable. It can be rigorously justified if the critical point $`\rho _c`$ corresponds to the maximum of the flux $`j(h^{},\rho )`$ for a certain fixed value $`h^{}`$ of the order parameter. In other words, when the road capacity is exhausted and the following increase in the vehicle density leads to a decrease in the vehicle flux the drivers divide into two groups, the majority prefer to move at their own lanes whereas the most “impatient” drivers change the lanes as frequently as possible, giving rise to the traffic instability. This problem, however, deserves an individual investigation.
## III Phase coexistence. Diffusion limited cluster motion
The previous section has considered uniform traffic flow, so, analyzed actually the individual characteristics of the free flow and the synchronized mode. In the present section we study their coexistence, i.e., the conditions under which a car cluster of finite size forms. This problem, however, requires that the traffic flow model be defined concretely. Therefore, in what follows we will consider a certain simple model which illustrates the characteristic features of the car cluster self-organization without complex mathematical manipulations.
As before, the model under consideration assumes the mean velocity relaxation to be immediate and modifies the governing equation (15) in such a way as to ascribe the order parameter $`h`$ to a local car group. In other words, we describe the vehicle flow by the Lighthill–Whitham equation with dissipation (see, e.g., and also Introduction), replace the time derivative in Eq. (15) by the particle derivative, and take into account that the order parameter cannot exhibit substantial variations over scales $`l\theta ^{1/2}\tau v_0\tau `$ ($`\theta `$ is the velocity variance, $`v_0`$ is the typical car velocity in the free flow). Namely, we write:
$`{\displaystyle \frac{\rho }{t}}+{\displaystyle \frac{\left[\rho \vartheta (h,\rho )\right]}{x}}`$ $`=`$ $`D{\displaystyle \frac{^2\rho }{x^2}},`$ (19)
$`\tau \left[{\displaystyle \frac{h}{t}}+\vartheta (h,\rho ){\displaystyle \frac{h}{x}}\right]`$ $`=`$ $`\widehat{}\{h\}\varphi (h,\rho )+\xi (x,t).`$ (20)
Let us discuss the meaning of the particular terms of the given model. The Burgers equation (19), as already discussed in Introduction, allows for the fact that drivers govern their motion taking into account not only the behavior of the nearest cars, but the state of traffic flow inside the whole field of their front view of length. The effective diffusivity $`D`$ can be estimated as $`DL^{}v_0`$, where $`L^{}l`$ is a front distance looked through by drivers assumed to be much greater that the scale $`l`$, so
$$D\tau lL^{}l^2.$$
(21)
The function $`\varphi (h,\rho )`$ is of the form
$$\varphi (h,\rho )\stackrel{\text{def}}{=}h(1h)[a(\rho )h],$$
(22)
where
$`a(\rho )=\{\begin{array}{ccc}1\hfill & \text{for}\hfill & \rho <\rho _c,\hfill \\ (\rho _c+\mathrm{\Delta }\rho )/\mathrm{\Delta }\hfill & \text{for}\hfill & \rho _c<\rho <\rho _c+\mathrm{\Delta },\hfill \\ 0\hfill & \text{for}\hfill & \rho >\rho _c+\mathrm{\Delta }.\hfill \end{array}`$
It describes such a driver behavior that $`h=0`$ and $`h=1`$ are the unique stable values of the order parameter for $`\rho <\rho _c`$ and $`\rho >\rho _c+\mathrm{\Delta }`$, respectively, whereas, for $`\rho _c<\rho <\rho _c+\mathrm{\Delta }`$ the points $`h=0`$, $`h=1`$ are both locally stable and there is an additional unstable stationary point, namely, $`h=a(\rho )`$. The term
$$\widehat{}\{h\}\stackrel{\text{def}}{=}l^2\frac{^2h}{x^2}+\frac{l}{\sqrt{2}}\frac{h}{x}$$
(23)
governs spatial variations in the field $`h(x,t)`$ and takes into account that drivers mainly follow the behavior of cars in front of them and cars moving at the rear cannot essentially affect them. The mean car velocity depends on $`h`$ and $`\rho `$ as
$$\rho \vartheta (h,\rho )\stackrel{\text{def}}{=}\rho \vartheta _0(1h)+[\rho _c\vartheta _0\nu (\rho \rho _c)]h.$$
(24)
The last term on the right-hand side of Eq. (20) characterizes the random fluctuations in the order parameter dynamics:
$`\xi (x,t)`$ $`=`$ $`0,`$ (25)
$`\xi (x,t)\xi (x^{},t^{})`$ $`=`$ $`\sigma ^2l\tau \delta (xx^{})\delta (tt^{}),`$ (26)
where $`\sigma `$ is their dimensionless amplitude. Expressions (22) and (24) gives the $`h(\rho )`$-dependence and the fundamental diagram shown in Fig. 8 simplifying the one presented in Fig 7.
If we ignore the random fluctuations of the order parameter $`h`$, i.e., set $`\sigma =0`$, then the model (19), (20) will give us an artificially long delay (much greater than $`\tau `$) in the order parameter variations from, for example, the unstable point $`h=0`$ to the stable point $`h=1`$. Such a delay can lead to a meaningless great increase of the vehicle density in the free flow without phase transition to congestion. In order to avoid this artifact and to allow for the effect of real fluctuations in the driver behavior we also will assume the amplitude $`\sigma `$ to obey the condition :
$$\left(\frac{l}{L^{}}\right)^{5/4}\sigma 1$$
(27)
($`\sigma 1`$, because, otherwise, the traffic flow dynamics would be totally random). It should be noted that small random variations of the order parameter $`h`$ near the points $`h=0`$, $`h=1`$ going into the regions $`h<0`$ and $`h>1`$, respectively, do not come into conflict with its physical meaning as the measure of the car motion correlations. Indeed, the chosen values $`h=0`$ and $`h=1`$ can describe a renormalization of real correlation coefficients $`\stackrel{~}{h}=\stackrel{~}{h}_1>0`$, $`\stackrel{~}{h}_2<1`$.
According to Eq. (20), for the order parameter $`h`$ the characteristic scale of its spatial variations is $`l`$, so, the layer $`\mathrm{}_h`$ separating the regions where $`h0`$ and 1 is of thickness about $`l`$. Due to inequality (21) the car density on such scales can be treated as constant. Therefore, the transition region $`_\rho `$ between practically the uniform free flow and the congested phase is of thickness determined mainly by spatial variations of the vehicle density and on such scales the layer $`\mathrm{}_h`$ can be treated as an infinitely thin interface. In addition, the characteristic time scale of the layer $`\mathrm{}_h`$ formation is about $`\tau `$, whereas it takes about the time $`\tau _\rho D/v_0^2\tau (L^{}/l)\tau `$ for the layer $`_\rho `$ to form. Thereby, when analyzing the motion of a wide car clusters we may regard the order parameter distribution $`h(x,t)`$ as quasi-stationary for a fixed value of the car density $`\rho `$.
Let us, now, consider two possible limits of the layer $`\mathrm{}_h`$ motion under such conditions.
### A Regular dynamics
In the region $`\rho _c<\rho <\rho _c+\mathrm{\Delta }`$ until the value of $`a(\rho )`$ comes close to the boundaries $`h=0`$ and $`h=1`$ the effect of the random fluctuations is ignorable. In this case by virtue of the adopted assumptions the solution of Eq. (20) that describes the layer $`\mathrm{}_h`$ moving at the speed $`u`$ is of the form:
$$h=\frac{1}{2}\left[1+\mathrm{tanh}\left(\frac{xut}{\lambda }\right)\right].$$
(28)
Here for the layer $`\mathrm{}_{01}`$ of the transition “free-flow $``$ synchronized mode” and for the layer $`\mathrm{}_{10}`$ of the opposite transition (Fig. 9)
$$\lambda _{01}=\frac{2\sqrt{2}}{\eta _v}l,\lambda _{10}=2\sqrt{2}\eta _vl$$
(29)
$`u_{01}`$ $`=`$ $`\vartheta _0{\displaystyle \frac{\mathrm{\Delta }_v}{2}}{\displaystyle \frac{l}{\sqrt{2}\eta _v\tau }}\left[1+\eta _v2a(\rho _i)\right],`$ (30)
$`u_{10}`$ $`=`$ $`\vartheta _0{\displaystyle \frac{\mathrm{\Delta }_v}{2}}{\displaystyle \frac{l}{\sqrt{2}\tau }}\left[2\eta _va(\rho _i)(\eta _v1)\right].`$ (31)
where we introduced the quantities:
$`\mathrm{\Delta }_v`$ $`=`$ $`\vartheta (0,\rho _i)\vartheta (1,\rho _i),`$
$`\eta _v`$ $`=`$ $`\left[1+\left({\displaystyle \frac{\tau \mathrm{\Delta }_v}{2\sqrt{2}l}}\right)^2\right]^{1/2}+{\displaystyle \frac{\tau \mathrm{\Delta }_v}{2\sqrt{2}l}}.`$
and $`\rho _i`$ is the corresponding value of the car density inside the layers $`\mathrm{}_{01}`$ and $`\mathrm{}_{10}`$.
Expressions (30), (31) describe the regular dynamics of the car cluster formation because the transition, for example, from the free flow to the synchronized phase at a certain point $`x`$ is induced by this transition at the nearest points. The dependence of the velocities $`u_{01}`$ and $`u_{10}`$ on the local car density $`\rho _i`$ is illustrated in Fig. 9. The characteristic velocities attained in this type motion can be estimated as
$`\vartheta _0u\mathrm{max}\{\vartheta _0\mathrm{\Delta }/\rho _c,l/\tau \},`$
so, under the adopted assumptions the regular dynamics does not allow for the sufficiently fast motion of the layers $`\mathrm{}_h`$ upstream.
### B Noise-induced dynamics
As the car density $`\rho `$ tends to the critical values $`\rho _c`$ or $`\rho _c+\mathrm{\Delta }`$ the value of $`a(\rho )`$ comes close to the boundaries $`a(\rho _c)=1`$ and $`a(\rho _c+\mathrm{\Delta })=0`$, and the point $`h=1`$ or $`h=0`$ becomes unstable, respectively. In this case the effect of the random fluctuations $`\xi (x,t)`$ plays a substantial role. Namely, the phase transition, for example, from the free flow to the synchronized motion (for $`\rho \rho _c+\mathrm{\Delta }`$) is caused by the noise $`\xi (x,t)`$ and equiprobably takes place at every point of the region wherein $`\rho \rho _c+\mathrm{\Delta }`$ rather than is localized near the current position of the layer $`\mathrm{}_{01}`$. Under these conditions the motion of the layers $`\mathrm{}_h`$ can be qualitatively characterized by an extremely high velocity in both the directions, which is illustrated in Fig. 9 by dashed lines.
We note that the noise-induced motion, in contrast to the regular dynamics, is to exhibit significant fluctuations in the displacement of the layer $`\mathrm{}_h`$ as well as in its forms. This question is, however, a subject for an individual study.
### C Diffusion limited motion of vehicle clusters
Let us, now, analyze the motion of a sufficiently large cluster that can form on a highway when the initial car density or, what is the same, the average car density $`\overline{\rho }`$ belongs to the metastable region, $`\overline{\rho }(\rho _c,\rho _c+\mathrm{\Delta })`$. The term “sufficiently large” means that the cluster dimension $`L`$ is assumed to be much greater than the front distance $`L^{}`$ looked through by drivers, so, they cannot look round the congestion as a whole. Exactly, in this case a quasi-local description of traffic flow similar to the differential equations (19), (20) is justified.
Converting to the frame $`y=xut`$ moving at the cluster velocity $`u`$, solving Eq. (19) individually for the free flow and the synchronized phase, and treating the layers $`\mathrm{}_h`$ as infinitely thin interfaces we get the following conclusion. Within the frameworks of the given model the car cluster moves upstream sufficiently fast, so, the motion of the layers $`\mathrm{}_{01}`$ and $`\mathrm{}_{10}`$ is governed by the noise $`\xi (x,t)`$. In this case the values of the car density at the layers $`\mathrm{}_{01}`$ and $`\mathrm{}_{10}`$ have to be $`\rho _j\rho _c+\mathrm{\Delta }`$ and $`\rho _f\rho _c`$, respectively. Thereby, the cluster velocity $`u`$ is mainly determined by the car redistribution governed by the diffusion type processes. The latter feature is the reason why we refer to the cluster dynamics under such conditions as to the diffusion limited motion. The transition region $`_{01}`$ between practically the uniform free flow state and the cluster contains the exponential increase of the vehicle density inside the free flow phase from the value $`\rho _f`$ far from the “interface” $`\mathrm{}_{01}`$ up to $`\rho _j\rho _c+\mathrm{\Delta }`$ at $`\mathrm{}_{01}`$,
$`\rho =\rho _f+(\rho _j\rho _f)\mathrm{exp}\{q_fy\},`$
where $`q_f=(\vartheta _0+\left|u\right|)/D1/L^{}`$ and the frame $`\{y\}`$ is attached to the “interface” $`\mathrm{}_{01}`$. The transition region $`_{10}`$ from the synchronized phase to the uniform free flow is to be localized inside the car cluster. So, it is characterized by the decrease in the vehicle density $`\delta \rho \mathrm{exp}\{q_jy\}`$, where $`q_j=(\left|u\right|\nu )/D`$, and the vehicle free flow leaving the cluster is uniform at all its points (Fig. 10$`a`$).
The cluster velocity is directly determined by the motion of the interface $`\mathrm{}_{01}`$. Therefore, assuming also the cluster dimension $`L`$ large in comparison with $`L^{}`$, from Eq. (19) we get the expression of the same form as the Lighthill–Whitham formula (10) relating the cluster velocity $`u`$ and the vehicle flux characteristics on the both sides of the layer $`_{01}`$. Whence it follows that at the first approximation:
$$u\nu ,$$
(32)
the value $`q_j=0`$, and the vehicle cluster is of the form shown in Fig. 10$`a`$ under the name “mesocluster”. Assuming the total number of cars on the highway of length $`L_{\text{rd}}`$ fixed we get the expression for the mesocluster dimension $`L`$:
$$L=2L_{\text{rd}}\frac{\overline{\rho }\rho _c}{\mathrm{\Delta }}.$$
(33)
However, this result is justified only for sufficiently small values of $`(\overline{\rho }\rho _c)/\mathrm{\Delta }1`$, when the cluster dimension is not too large, $`Lq_j1`$ (nevertheless, $`LL^{}`$). Exactly for this reason we refer to such clusters as mesoscopic ones. In order to study the opposite limit, $`Lq_j1`$, we have to take into account that the value $`\rho _f`$ is not rigorously equal to $`\rho _c`$ but practically is the root $`\rho _f^{}>\rho _c`$ of the equation $`u_{10}(\rho _f^{})=\nu `$. In this case the Lighthill–Whitham formula (10) gives the expression:
$`u\left[\nu +(\vartheta _0+\nu ){\displaystyle \frac{\rho _f^{}\rho _c}{\mathrm{\Delta }}}\right]`$
leading to the following estimates of the thickness $`1/q_j`$ of the transition region $`_{10}`$:
$`1/q_j{\displaystyle \frac{D\mathrm{\Delta }}{(\vartheta _0+\nu )(\rho _f^{}\rho _c)}}L^{}{\displaystyle \frac{\mathrm{\Delta }}{(\rho _f^{}\rho _c)}}.`$
The form of such a wide cluster is shown in Fig. 10$`a`$, its dimension is:
$$L=L_{\text{rd}}\frac{\overline{\rho }\rho _c}{\mathrm{\Delta }}.$$
(34)
and the region of the mean car density corresponding to this limit is specified by the inequality:
$$\frac{\rho ^{}\rho _c}{\mathrm{\Delta }}\frac{L^{}}{L_{\text{rd}}}\frac{\mathrm{\Delta }}{(\rho _f^{}\rho _c)}.$$
(35)
The resulting dependence of the cluster dimension on the mean car density $`\overline{\rho }`$ is illustrated in Fig. 10$`b`$.
## IV Phase transition “synchronized mode $``$ jam”. Brief discussion
In Sec. II we have considered the phase transition between the free flow and the synchronized mode. However, according to the experimental data there is an additional phase transition in traffic flow regarded as the transition between the synchronized motion and the jammed “stop-and-go” traffic. This transition occurs at extremely high vehicle densities $`\rho `$ coming close to the limit value $`\rho _0`$.
The present section briefly demonstrates that the developed model for the driver behavior also predicts a similar phase transition at high car densities. To avoid possible misunderstandings we, beforehand, point out that the model in its present form cannot describe details of the transition “synchronized mode $``$ jam” because we have not taken into account the delay in the driver response to variations in headway. The latter is responsible for the formation of the “stop-and-go” pattern, so, to describe the jammed traffic on multilane highways we, at least, should combine a governing equation for the order parameter $`h`$ and a continuity equation similar to (20), (19) with an equation for the car velocity relaxation similar to (2). This question, however, is worthy of individual study. Besides, the approximations used in Sec. III to characterize the synchronized mode at the car densities near $`\rho _c`$ do not hold here.
In Sec. II we have studied the dependence of the order parameter $`h`$ on the car density ignoring the first term on the right-hand side of Exp. (17) caused by the dangerous of lane changing. This assumption is justified when the car density is not to high. In extremely dense traffic flow, when the car density exceeds a certain value, $`\rho >\rho _h\rho _0`$, changing lanes becomes sufficiently dangerous and the function $`\mathrm{\Phi }(h,v,\rho )`$ describing the driver behavior is to depend strongly on the vehicle density in this region. In addition, the vehicle motion becomes sufficiently slow. Under such conditions the former term on the right-hand side of expression (17) should be dominant and, thus, $`\varphi /\rho >0`$. Therefore, the stable vehicle motion corresponding to $`\varphi /h>0`$ matches the decreasing dependence of the order parameter $`h(\rho )`$ on the vehicle density $`\rho `$ for $`\rho >\rho _h`$. So, as the vehicle density $`\rho `$ increases the curve $`h(\rho )`$ can again go into the instability region (in the $`h\rho `$-plane), which has to give rise to a jump from the synchronized mode with greater values of the order parameter to a new traffic state with its less values (Fig. 11). Obviously, this transition between the two congested phases also exhibit the same hysteresis as one describe in Sec. II.
We identify the latter traffic state with the jammed vehicle motion. Indeed, in extremely dense traffic lane change is practically depressed, making the car ensembles at different lanes independent of one another. So, in this case vehicle flow has to exhibit weak multilane correlations and we should ascribe to it small values of the order parameter $`h`$. It should be noted that the experimental single-vehicle data demonstrates strong correlations of variations in the traffic flux and the car density for both the free flow and the “stop-and-go” motion. By contrast, the synchronized mode is characterized by small values of the cross-covariance between flow, speed, and density. In other words, for the free flow and the “stop-and-go” motion the traffic flux $`j=\vartheta \rho `$ should depends directly on the car density $`\rho `$, as it must in the present model if we set $`h=0`$.
Finalizing the present section we point out that the given model treats the jammed phase as a “faster” vehicle motion then the synchronized mode at the same values of the order parameter. There is no contradiction with the usual view on the synchronized mode as a high flux traffic state. The latter corresponds to the traffic flow at the vehicle densities near the phase transition “free flow $``$ synchronized mode” rather than close to the limit value $`\rho _0`$. Besides, ordinary driver’s experience prompts that a highly dense traffic flow can be blocked at all if one of the cars begin to change the lanes. Nevertheless, in order to describe, at least qualitatively, the real features of the phase transition “synchronized mode $``$ stop-and-go waves” a more sophisticated model is required. The present description only relates it to the instability of the order parameter at high values of the vehicle density.
Besides, the present analysis demonstrates also the nonmonotonic behavior of the order parameter as the car density increases even if we ignore the hysteresis regions and focus our attention the stabel vehicle flow regions only. It should be noted that a similar nonmonotonic dependence of the lane change frequency on the car density as well as the platoon formation has been found in the cellular automaton model for two-lane traffic .
## V Closing remarks
Concluding the paper we recall the key points of the developed model.
We have proposed an original macroscopic approach to the description of multilane traffic flow based on an extended collection of the traffic flow state variables. Namely, in addition to such characteristics as the car density $`\rho `$ and mean velocity $`v`$ being actually the zeroth and first moments of the “one particle” distribution function we introduce a new variable $`h`$ called the “order parameter”. It stands for the internal correlations in the car motion along different lanes that are due to the lane changing manoeuvres. The order parameter, in fact, allows for the essentially “many-body” effects in the car interaction so it is treated as an independent state variable.
Taking into account the general properties of the driver behavior we have stated a governing equation for the order parameter. Based on current experimental data we have assume the correlations in the car motion on multilane highways to be due to a small group of “fast” drivers, i.e. the drivers who move substantially faster than the statistically mean vehicle continuously overtaking other cars. These “fast” cars, on one hand, increase individually the total rate of vehicle flow but, on the other hand, make the accident danger greater and, thus, cause the statistically mean driver to decrease the velocity. The competition of the two effects depends on the car density and the mean velocity and, as shown, can give rise to the traffic flow instability. It turns out that the resulting dependence of the order parameter on the car density describes in the same way the experimentally observed sequence of phase transitions “free flow $``$ synchronized motion $``$ jam” typical for traffic flow on highways . Besides, we have shown that both these transitions should be of the first order type and exhibit hysteresis, matching the experimental data . The synchronized mode is characterized by a large value of the order parameter, whereas the free flow and the jam match its small values. The latter feature enables us to treat the jam as a phase comprising the vehicle flows at different lane with weak mutual interaction because of the lane changing being depressed.
In order to illustrate the characteristic features of the car clusters that self-organizing under these conditions we have considered a simple model dealing only with the evolution of the car density and the order parameter. In particular, it is shown that in the steady state the car density inside the cluster and the free flow being in equilibrium with the cluster, as well as the velocity at which the cluster moves upstream are fixed and determined by the basic properties of the traffic flow. On the contrary, the size of the car cluster depends on the initial conditions.
Finally, we would like to underline that the developed model takes into account only one effect causing the traffic flow instability. The other, the delay in the driver control over the headway, seems to be responsible for the “stop-and-go” waves in the jammed phase (for a review of the continuum description of this phenomena see, e.g., ). So, combining the two approaches into one model it enables detail description of a wide class of phenomena occurring in the transitions from free flow to the heavy congested phase on highways. In this way the order parameter model could describe also the formation of a local jam on a highway whose boundaries comprise both of the phase transitions. In the present form it fails to do this because the free flow and the jammed traffic are characterized by small values of the order parameter.
Concerning a possible derivation of the order parameter model from the gas-kinetic theory we note that the appearance of the “fast” driver platoons demonstrates a substantial deviation of the car distribution function from the monotonic quasi-equilibrium form. So, to construct an adequate system of equations dealing with the moments of the distribution function a more sophisticated approximation is required.
## Acknowledgments
One of us, I. A. Lubashevsky, would like to acknowledge the hospitality of Physics Department of Rostock University during the stay at which this work was carried out. |
no-problem/9910/hep-th9910196.html | ar5iv | text | # KUNS-1612hep-th/9910196 Brane Configuration from Monopole Solution in Non-Commutative Super Yang-Mills Theory
## 1 Introduction
The fertile developments in string theory in this half a decade have enabled us to understand various perturbative and non-perturbative phenomena in field theories by geometrical intuitive pictures. This progress in string theory is now beyond the province of reproduction of the result of the field theories, that is, now the string theory has predictive power in various field theories.
One of the most intriguing examples is the 1/4 BPS dyon solution in 4-dimensional $`𝒩=4`$ super Yang-Mills theory (SYM). The study of the $`1/4`$ BPS states in SYM was triggered by the discovery of the stable string network in type IIB string theory . When this string network has its ends on D3-branes, these states preserve 1/4 supersymmetries of the original D3-brane worldvolume theory . After the study from the string theory side, there appeared some works in which explicit field theoretical solutions for the corresponding solitons were constructed. The properties of the solution favor the string theory interpretation with respect to their $`(p,q)`$-charges, masses and supersymmetries.
Recent topics of the string-theoretical realization of field theories are concerning the space non-commutativity . The non-commutative super Yang-Mills theory (NCSYM) is realized as a decoupling limit of the worldvolume theory on D3-branes in the non-trivial NS-NS 2-form background. Taking advantage of this equivalence, some basic properties of localized objects in this exotic field theory were analyzed . The brane configurations corresponding to the monopoles, dyons and 1/4 BPS dyons were constructed, and they were shown to have the same masses and supersymmetry properties as the ordinary SYM counterparts. One fascinating prediction from this brane technique is that the monopole in the NCSYM has non-locality $`\delta `$ due to the tilt of the D-string suspended between two D3-branes (see fig. 1). Believing that the brane configuration of this figure precisely captures the field theoretical properties, the configuration of the monopole in the NCSYM should reproduce the tilted line, as the eigenvalues of the Higgs field. The ends of the D-string appear to be magnetic charges, hence the field theoretical solution should contain dipole structure.
In this paper, we explicitly solve the BPS equation for the monopole of the NCSYM to the first non-trivial order in $`\theta _{ij}`$ which specifies the non-commutativity. This 1/2 BPS solution has the same mass as the ordinary SYM monopole, in agreement with the prediction by string theory. Solving the non-commutative eigenvalue equation in Sec. 3, we show in Sec. 4 that the solution actually reproduces the tilt of the suspended D-string. Examining the magnetic field, the dipole structure is also found. In the final section, we summarize the paper and give some discussions.
## 2 Non-commutative BPS equation and its solution
The Bogomol’nyi-Prasad-Sommerfield (BPS) monopole solution of the ordinary SYM is saturating a particular energy bound which is usually called the BPS bound. Since this bound is topologically sensible, the state saturating the bound is stable. Now in the case of the NCSYM, unfortunately the topological argument seems not to be valid due to the high complexity of the $``$-product. Even with this complexity, we can argue a similar mass bound as in the following .
The energy of the system without the electric field is given by
$`E=Tr{\displaystyle d^3x\left[\frac{1}{2}F_{ij}F_{ij}+D_i\mathrm{\Phi }D_i\mathrm{\Phi }\right]},`$ (2.1)
where we have defined the field strength and the covariant derivative as
$`F_{ij}`$ $`_iA_j_jA_iiA_iA_j+iA_jA_i,`$ (2.2)
$`D_i\mathrm{\Phi }`$ $`_i\mathrm{\Phi }iA_i\mathrm{\Phi }+i\mathrm{\Phi }A_i.`$ (2.3)
We have put the Yang-Mills coupling constant $`g_{\mathrm{YM}}`$ equal to unity. The $``$-product is defined with the non-commutativity parameter $`\theta _{ij}`$ by
$`(fg)(x)`$ $`f(x)\mathrm{exp}\left({\displaystyle \frac{i}{2}}\theta _{ij}\stackrel{}{_i}\stackrel{}{_j}\right)g(x)=f(x)g(x)+{\displaystyle \frac{i}{2}}\{f,g\}(x)+O(\theta ^2),`$ (2.4)
where $`\{f,g\}`$ is the Poisson bracket,
$`\{f,g\}(x)\theta _{ij}_if(x)_jg(x).`$ (2.5)
We shall take as the gauge group the simplest one $`U(2)`$. Note that the group $`SU(2)`$ is not allowed here since the algebra of any special unitary group is not closed when the multiplication is defined by the $``$-product. The energy (2.1) is bounded below by a surface integral as follows:<sup>*</sup><sup>*</sup>* In deriving the inequality (2.6), we assumed that $`F_{ij}\pm ϵ_{ijk}D_k\mathrm{\Phi }`$ decay sufficiently fast at the infinity so that we can apply the formula $`d^3x(fg)(x)=d^3xf(x)g(x)`$ for these quantities.
$`E`$ $`={\displaystyle \frac{1}{2}}Tr{\displaystyle d^3x\left[ϵ_{ijk}\left(F_{ij}D_k\mathrm{\Phi }+D_k\mathrm{\Phi }F_{ij}\right)+(F_{ij}\pm ϵ_{ijk}D_k\mathrm{\Phi })(F_{ij}\pm ϵ_{ijl}D_l\mathrm{\Phi })\right]}`$ (2.6)
$`Tr{\displaystyle d^3x_k\left[ϵ_{ijk}\left(F_{ij}\mathrm{\Phi }\right)\right]}.`$
If the solution of the non-commutative BPS (NCBPS) equation,
$`D_i\mathrm{\Phi }+{\displaystyle \frac{1}{2}}ϵ_{ijk}F_{jk}=0,`$ (2.7)
has the same asymptotic behavior as the ordinary BPS solution, then the energy remains the same. This fact will be confirmed to the first non-trivial order in $`\theta `$ by explicitly solving the NCBPS equation (2.7).
We shall solve the NCBPS equation (2.7) to $`O(\theta )`$ in the small $`\theta `$ expansion. Let us express the gauge field as
$`A_i\left(A_i^a+a_i^a\right){\displaystyle \frac{1}{2}}\sigma _a+\left(A_i^0+a_i^0\right){\displaystyle \frac{1}{2}}\text{ }\text{ ̵},`$ (2.8)
where the upper (lower) case component fields are of $`O(\theta ^0)`$ ($`O(\theta ^1)`$). The scalar $`\mathrm{\Phi }`$ is expressed in a similar manner. First, the $`O(\theta ^0)`$ part of the NCBPS equation (2.7) is nothing but the ordinary BPS equation, and we adopt the well-known BPS monopole solution as the $`O(\theta ^0)`$ part of the solution:
$`\mathrm{\Phi }^a={\displaystyle \frac{x^a}{r}}F(r),A_i^a=ϵ_{aij}{\displaystyle \frac{x_j}{r}}W(r),\mathrm{\Phi }^0=A_i^0=0,`$ (2.9)
where the the functions appearing in the solution are
$`F(r)C\mathrm{coth}(Cr){\displaystyle \frac{1}{r}},W(r){\displaystyle \frac{1}{r}}{\displaystyle \frac{C}{\mathrm{sinh}(Cr)}}.`$ (2.10)
The dimensionful parameter $`C`$ determines the mass scale of the monopole. For later convenience, we present the asymptotic behavior of these functions:
$`F(r)=C{\displaystyle \frac{1}{r}}+O(e^{Cr}),W(r)={\displaystyle \frac{1}{r}}+O(e^{Cr}).`$ (2.11)
Next let us proceed to the $`O(\theta )`$ part of the NCBPS equation (2.7). Plugging (2.8) into the NCBPS equation (2.7) and taking the $`O(\theta )`$ part, the $`U(1)`$ component reads
$`_i\phi ^0+{\displaystyle \frac{1}{2}}\{A_i^a,\mathrm{\Phi }^a\}+ϵ_{ijk}\left(_ja_k^0+{\displaystyle \frac{1}{4}}\{A_j^a,A_k^a\}\right)=0,`$ (2.12)
and does not contain the $`SU(2)`$ fields $`(a_i^a,\phi ^a)`$. On the other hand, the $`SU(2)`$ component of the $`O(\theta )`$ part of the NCBPS equation decouples from the $`U(1)`$ fields, and is in fact the linearized equation for the fluctuation $`(a_i^a,\phi ^a)`$ obtained from the ordinary BPS equation. Since any solution for $`(a_i^a,\phi ^a)`$ corresponds to a $`\theta `$-dependent gauge transformation on the BPS solution (2.9), we take $`a_i^a=\phi ^a=0`$ hereafter.
Now our task is to solve the equation (2.12). The ansatz for the BPS monopole solution (2.9) was the covariance under the rotation of the diagonal $`SO(3)`$ subgroup of $`SO(3)_{\mathrm{gauge}}\times SO(3)_{\mathrm{space}}`$. In order to solve eq. (2.12), we put the following ansatz for the $`U(1)`$ fields $`(a_i^0,\phi ^0)`$ respecting the covariance under the generalized rotation, in which we rotate also the parameter $`\theta _{ij}`$: The generalized rotational covariance for $`a_i^0`$ allows two other terms with different structures, $`ϵ_{ijk}\theta _{jk}`$ and $`x_iϵ_{jkl}\theta _{jk}x_l`$. However, eq. (2.12) implies the vanishing of these two terms.
$`a_i^0=\theta _{ij}x_jA(r),\phi ^0=\theta _{ij}ϵ_{ijk}x_kB(r),`$ (2.13)
where $`A(r)`$ and $`B(r)`$ are functions of $`r`$ to be determined. Putting these ansatz (2.13) and the explicit forms of the BPS solution (2.9) into the differential equation (2.12), we obtain the following two equations as the coefficients of different tensor structures:
$`A+B+rB^{}+{\displaystyle \frac{1}{4r^2}}W(W+2F)=0,`$ (2.14)
$`A^{}+2B^{}{\displaystyle \frac{d}{dr}}\left[{\displaystyle \frac{1}{4r^2}}W(W+2F)\right]=0.`$ (2.15)
The solution to eqs. (2.14) and (2.15) is given by
$`A(r)={\displaystyle \frac{1}{4r^2}}W(W+2F)2{\displaystyle \frac{c_1}{r^3}}+c_2,B(r)={\displaystyle \frac{c_1}{r^3}}+c_2,`$ (2.16)
with two arbitrary constant parameters, $`c_1`$ and $`c_2`$. The parts in (2.16) containing these constant parameters are actually solution to the homogeneous part of eq. (2.12):
$`_i\phi ^0+ϵ_{ijk}_ja_k^0=0.`$ (2.17)
Since the $`c_2`$ part of the scalar $`\phi ^0`$ diverges at the infinity, we put $`c_2=0`$. As for the $`c_1`$ part, a careful substitution into the left hand side of eq. (2.17) gives in fact a term proportional to $`_i_i(1/r)=4\pi \delta ^3(𝒓)`$. Hence the $`c_1`$ part is not a solution at the origin, and we shall also put $`c_1=0`$. Finally the desired solution of the equation (2.12) is
$`a_i^0=\theta _{ij}x_j{\displaystyle \frac{1}{4r^2}}W(W+2F),\phi ^0=0.`$ (2.18)
Note in particular that the scalar field receives no correction to this order. Since the whole solution has the same leading asymptotic behavior as the BPS solution (2.9), we find that the non-commutativity does not change the mass of the monopole.
## 3 Non-commutative eigenvalue equation
The configuration of the D-string suspended between two parallel D3-branes is described by the deformation of the surface of the D3-branes in the spirit of the BIon (Born-Infeld soliton) physics . The extent of this deformation of the D3-brane surface is given by the eigenvalues of the scalar field on the D3-branes. We saw in the previous section that there is no additional contribution of $`O(\theta )`$ in the scalar field configuration. However, since we are now dealing with the theory with the $``$-product, the eigenvalue problem should be different from that in the usual commutative case. In this section, we see that the $`O(\theta )`$ terms are actually generated in the eigenvalues of the scalar field.
We propose that the eigenvalue equation for a hermitian matrix valued function $`M`$ to be considered in the non-commutative case is
$`M𝒗=\lambda 𝒗,`$ (3.1)
where $`𝒗`$ and $`\lambda `$ are the eigenvector and the eigenvalue, respectively. Though there are other candidates for the non-commutative eigenvalue equation, the present one (3.1) has advantages over the others in various respects as we shall see in this and the final sections.
For solving (3.1) to $`O(\theta )`$, let us make the expansion
$`M=M_0+M_1,𝒗=𝒗_0+𝒗_1,\lambda =\lambda _0+\lambda _1,`$ (3.2)
where the subscript number specifies the order of $`\theta `$. Then, the $`O(\theta ^0)`$ part of (3.1) is $`M_0𝒗_0=\lambda _0𝒗_0`$, and the $`O(\theta )`$ part reads
$`M_0𝒗_1+M_1𝒗_0+{\displaystyle \frac{i}{2}}\{M_0,𝒗_0\}=\lambda _0𝒗_1+\lambda _1𝒗_0+{\displaystyle \frac{i}{2}}\{\lambda _0,𝒗_0\}.`$ (3.3)
Multiplying $`𝒗_0^{}`$ from the left, we obtain the formula which gives the $`O(\theta )`$ part of the eigenvalue:
$`\lambda _1={\displaystyle \frac{1}{𝒗_0^{}𝒗_0}}\left({\displaystyle \frac{i}{2}}𝒗_0^{}\{M_0\lambda _0\text{ }\text{ ̵},𝒗_0\}+𝒗_0^{}M_1𝒗_0\right).`$ (3.4)
In view of the application to the present NCBPS solution, let us consider the particular case with
$`M_0=m_0(𝒓)\widehat{x}_a\sigma _a(\widehat{x}_ix_i/r),M_1=0,`$ (3.5)
and hence $`\lambda _0=\pm m_0(𝒓)`$ and $`\widehat{x}_a\sigma _a𝒗_0=\pm 𝒗_0`$. Then, eq. (3.4) is calculated to give
$`\lambda _1={\displaystyle \frac{i}{2𝒗_0^{}𝒗_0}}{\displaystyle \frac{m_0(𝒓)}{r}}\theta _{ij}\left(𝒗_0^{}\sigma _i_j𝒗_0\widehat{x}_i𝒗_0^{}_j𝒗_0\right)={\displaystyle \frac{m_0(𝒓)}{2r^2}}\theta _i\widehat{x}_i,`$ (3.6)
with $`\theta _i(1/2)ϵ_{ijk}\theta _{jk}`$. Note that $`\lambda _1`$ (3.6) is independent of the sign of $`\lambda _0`$. We obtained the last expression of (3.6) using the explicit form $`𝒗_0=(x_1ix_2,\pm rx_3)^\mathrm{T}`$. However, the general formula for $`\lambda _1`$, eq. (3.4), is in fact invariant under the local phase and scale transformation of $`𝒗_0`$ due to the identity $`𝒗_0^{}\{M_0\lambda _0\text{ }\text{ }\text{ }\text{ }\text{ ̵},f\text{ }\text{ }\text{ }\text{ }\text{ ̵}\}𝒗_0=0`$ valid for any $`f(𝒓)`$. This corresponds to the fact that the eigenvalue $`\lambda `$ of (3.1) is invariant under the right multiplication $`𝒗𝒗h`$ for an arbitrary $`h(𝒓)`$. We shall discuss the gauge transformation property of the eigenvalues in the final section.
Let us evaluate various eigenvalues of the system using the formula (3.6). First, the scalar eigenvalues are obtained by substituting $`m_0(𝒓)=F(r)/2`$ as
$`\lambda _\mathrm{\Phi }=\pm {\displaystyle \frac{1}{2}}F(r){\displaystyle \frac{\theta _i\widehat{x}_i}{4r^2}}F(r)+O(\theta ^2).`$ (3.7)
Next, we shall consider the eigenvalues of the magnetic field $`B_i(1/2)ϵ_{ijk}F_{jk}`$ near the infinity $`r\mathrm{}`$. The magnetic field itself is given from the solution (2.18) as Since the definition of the magnetic field contains the $``$-product in itself, we should calculate also the Poisson bracket term. However, this term contributes only to the $`O(1/r^4)`$ part in (3.8).
$`B_i={\displaystyle \frac{\widehat{x}_i}{2r^2}}\widehat{x}_a\sigma _a+{\displaystyle \frac{C}{2r^3}}\left(\delta _{ij}3\widehat{x}_i\widehat{x}_j\right)\theta _j\text{ }\text{ ̵}+O\left({\displaystyle \frac{1}{r^4}}\right).`$ (3.8)
We would like to evaluate the $`O(\theta )`$ contribution to the eigenvalues by putting $`m_0(𝒓)=\widehat{x}_i/2r^2`$ and $`M_1(𝒓,\theta )=C\left(\delta _{ij}3\widehat{x}_i\widehat{x}_j\right)\theta _j\text{ }\text{ }\text{ }\text{ }\text{ ̵}/2r^3`$. Since $`m_0(𝒓)`$ in this case is $`O(1/r^2)`$, using the formula (3.6), the order of the correction to the eigenvalues from this part is found as $`O(1/r^4)`$. Thus near the infinity, the $`O(1/r^3)`$ part of the eigenvalues of the magnetic field is saturated by $`m_1`$.
## 4 Interpretation of the eigenvalues
In this section, we shall see how the eigenvalues (3.7) and (3.8) reproduce the brane configuration depicted in fig. 1. In the brane picture, the end of the D-string is seen as a magnetic charge in a single D3-brane worldvolume theory. The prediction from the brane configuration is that the magnetic charge on each end of the D-string is actually moved in different directions between the upper and lower D3-branes, as shown in fig. 1. This shift is specified by the spatial vector $`\delta _i`$.
Now, the eigenvalues of the magnetic field (3.8) indicate that the $`U(1)`$ part of the magnetic field exhibits a dipole structure. This structure is exactly the one expected from the brane picture above. Noting that the zero-th order solution (2.9) represents $`1/2`$ charge on the upper D3-brane and $`1/2`$ charge on the lower, it is easy to derive the non-locality $`\delta _i`$ as
$`\delta _i=C\theta _i.`$ (4.1)
This result verifies the prediction of ref. with the identification $`C=U`$.
Although the magnetic charges are expected to indicate only the locations of the ends of the D-string, the eigenvalues of the scalar field must reproduce not only the locations of the ends but also the slope of the D-string. In fact, the asymptotic behavior of the eigenvalues (3.7) is given using (2.11) as
$`\lambda _\mathrm{\Phi }=\pm {\displaystyle \frac{C}{2}}{\displaystyle \frac{1}{2r}}+\left({\displaystyle \frac{C}{4r^2}}+{\displaystyle \frac{1}{4r^3}}\right)\theta _i\widehat{x}_i+O(e^{Cr})`$
$`=\pm {\displaystyle \frac{C}{2}}{\displaystyle \frac{1}{2}}\left|x_i\left({\displaystyle \frac{C}{2}}{\displaystyle \frac{1}{2r}}\right)\theta _i\right|^1+O(e^{Cr}).`$ (4.2)
Eq. (4.2) implies first that in the upper and the lower D3-brane the end of the D-string sits at $`x_i=C\theta _i/2`$ and $`x_i=C\theta _i/2`$, respectively. Hence the non-locality is precisely given by $`\delta _i=C\theta _i`$, in agreement with the result (4.1) from the magnetic field. Secondly, in order to read off the slope of the D-string from (4.2), we rewrite it as
$`\lambda _\mathrm{\Phi }=\pm {\displaystyle \frac{C}{2}}-{\displaystyle \frac{1}{2}}\left|x_i-\lambda _\mathrm{\Phi }\theta _i\right|^{-1}+O(e^{-Cr}).`$ (4.3)
This equation means that, for a given value of $`\lambda _\mathrm{\Phi }`$, the corresponding worldvolume coordinate is located on a sphere with its center at $`x_i=\lambda _\mathrm{\Phi }\theta _i`$. Interpreting the trajectory of the center as the D-string, our analysis reproduces precisely the tilt of the suspended D-string.
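To see the tilt explicitly (our rewriting): as $`\lambda _\mathrm{\Phi }`$ sweeps its range, the centers of these spheres trace the straight line

$`x_i(\lambda _\mathrm{\Phi })=\lambda _\mathrm{\Phi }\theta _i,`$

whose endpoints $`x_i=\pm C\theta _i/2`$, reached as $`\lambda _\mathrm{\Phi }\to \pm C/2`$, reproduce the non-locality $`\delta _i=C\theta _i`$ of (4.1), while its direction $`\theta _i`$ gives the slope of the suspended D-string.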
In fig. 2 we present the curves of the eigenvalues of the scalar field. The thin straight lines represent the brane configuration of fig. 1. The dashed curves denote the eigenvalues of the scalar field with $`\theta =0`$. Comparing these with the bold curves representing the eigenvalues (3.7) with $`\theta \ne 0`$, we can read off the simple brane configuration of ref. . (The reason why the bold curves are cut off for small $`|x_1|`$ in fig. 2 is that the $`O(\theta )`$ term of the scalar eigenvalues (3.7) is actually divergent at the origin $`r=0`$. We shall discuss this problem in the next section.)
## 5 Summary and discussion
In this paper, we solved the BPS equation of the NCSYM to the first non-trivial order in $`\theta `$. Evaluating the eigenvalues of the solution, we explicitly showed that the solution exhibits the brane configuration of the tilted D-string, given in ref. . The magnetic field has a dipole structure, and the scalar field is correspondingly shifted so as to reproduce the tilted trajectory of the D-string.
Some comments are in order. Our first comment is on the gauge transformation property of the eigenvalue $`\lambda `$ in our non-commutative eigenvalue equation (3.1). Of course, the eigenvalue $`\lambda `$ in eq. (3.1) is never strictly invariant under the local gauge transformation of $`M`$,
$$M\to U^{-1}MU,$$ (5.1)
where $`U^{-1}`$ is the inverse of $`U`$ with respect to the $`*`$-product, $`UU^{-1}=U^{-1}U=\mathbf{1}`$. However, we can show that the eigenvalue has a fairly nice property under (5.1). Consider an infinitesimal gauge transformation $`\delta _ϵ`$ on $`M`$ with $`U=\mathbf{1}+iϵ`$ ($`ϵ^{}=ϵ`$),
$$\delta _ϵM=i(Mϵ-ϵM).$$ (5.2)
Letting $`\delta _ϵ`$ act on (3.1) and $`*`$-multiplying the resultant equation by $`𝒗^{}`$ from the left, we obtain
$`𝒗^{}\delta _ϵ\lambda 𝒗=i𝒗^{}(\lambda ϵ-ϵ\lambda )𝒗.`$ (5.3)
Taking the $`O(\theta )`$ part of (5.3) and using $`\delta _ϵ\lambda _0=0`$, $`\delta _ϵ\lambda _1`$ is given as
$$\delta _ϵ\lambda _1=𝒗_0^{}\{ϵ,\lambda _0\}𝒗_0,$$ (5.4)
for a normalized $`𝒗_0`$. Eq. (5.4) implies that, at least to $`O(\theta )`$, the gauge transformation corresponds to a coordinate transformation on the eigenvalue $`\lambda (𝒓)`$. In the $`U(2)`$ case with $`ϵ(𝒓)=ϵ_a(𝒓)\sigma _a+ϵ_0(𝒓)\mathbf{1}`$, the form of the coordinate transformation is $`\delta _ϵx_i=\theta _{ij}\left(\partial _jϵ_0+𝒗_0^{}\sigma _a𝒗_0\partial _jϵ_a\right)`$.
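As a consistency sketch (ours; the overall sign and numerical factors depend on the bracket convention fixed in Sec. 3, which lies outside this excerpt), take the Poisson bracket to be $`\{f,g\}=\theta _{ij}\partial _if\partial _jg`$. Then (5.4) becomes

$`\delta _ϵ\lambda _1=\theta _{ij}\left(\partial _iϵ_0+𝒗_0^{}\sigma _a𝒗_0\partial _iϵ_a\right)\partial _j\lambda _0,`$

which is precisely the change of $`\lambda _0`$ under the coordinate shift $`\delta _ϵx_i`$ quoted above, up to the sign fixed by the antisymmetry of $`\theta _{ij}`$.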
Therefore, we have shown that the eigenvalue of (3.1) for $`M`$ and the one for $`U^{-1}MU`$ are related by a coordinate transformation on the D3-branes and hence are physically equivalent, at least to the first non-trivial order in $`\theta `$. Recalling also that the eigenvalue of (3.1) is independent of the choice of the eigenvector and that the scalar eigenvalue (3.7) has an invariance under the simultaneous rotation of $`x_i`$ and $`\theta _{ij}`$, the eigenvalue equation (3.1) we have adopted is a satisfactory one. These good properties, in particular the gauge transformation property of $`\lambda `$, are expected to persist to higher orders in $`\theta `$.
Our second comment is on another type of non-commutative eigenvalue equation,
$`M𝒗=𝒗\lambda ,`$ (5.5)
where, compared with (3.1), $`𝒗`$ and $`\lambda `$ are interchanged on the right hand side. Eq. (5.5) has the property that the eigenvalue $`\lambda `$ is invariant under the gauge transformation (5.1) of $`M`$. However, for a given $`M`$ the eigenvalues are not unique and do depend on the choice of the eigenvectors $`𝒗`$. What is worse, in an analysis similar to that of Sec. 3 but adopting (5.5), we can show that it is impossible to obtain scalar eigenvalues possessing an invariance under the simultaneous rotation of $`x_i`$ and $`\theta _{ij}`$. These are the reasons why we did not adopt (5.5).
Our final comment concerns the singular nature of the scalar eigenvalues (3.7) at the origin $`r=0`$. Since we have adopted the eigenvalue equation (3.1), the matrix whose diagonal entries are these eigenvalues is no longer a solution of the BPS equation; hence, although the eigenvalues (4.2) diverge at the origin, the energy of the configuration is still finite. We would need a technique beyond the expansion in powers of $`\theta `$ to resolve the singularity at the origin.
Our analysis in this paper can straightforwardly be applied to the case of dyons. We obtain a result consistent with the non-locality predicted by . It would be an interesting subject to study the non-commutative 1/4 BPS dyons using our formulation.
## Note added
While writing this paper, we became aware of the paper which has an overlap concerning the solution of Sec. 2.
## Acknowledgments
We would like to thank A. Hashimoto, T. Hirayama, S. Imai, H. Kawai, I. Kishimoto, T. Kugo, Y. Okawa and S. Terashima for valuable discussions and comments. This work is supported in part by Grant-in-Aid for Scientific Research from Ministry of Education, Science, Sports and Culture of Japan (#3160, #09640346, #4633). The work of K. H. and S. M. is supported in part by the Japan Society for the Promotion of Science under the Predoctoral Research Program.
## 1 Introduction
Are the mechanisms of speculative trading basically the same in their different manifestations, that is to say, whether one considers stocks, property values, diamonds, futures contracts for commodities, postage stamps, etc? Econophysicists are probably inclined to answer affirmatively and to posit that all speculative markets are alike. D. Stauffer recently told me that he had not read in the econophysical literature any statement to the contrary. Yet, economists generally hold the opposite view; as a matter of fact most of them would even find that question somewhat weird. In any case little statistical evidence has so far been produced in support of one or the other claim. One earlier attempt in that direction was a paper that pointed out that the shapes of the price peaks belong to the same class of functions . In the present paper I analyze a specific feature of speculative trading which will be referred to as the price multiplier effect, and I show that it can be observed in various speculative markets for which adequate data exist, particularly in real estate, diamond and stamp markets. Before coming to this let us briefly discuss the following question: Why, instead of concentrating on just one market, is it important to consider different speculative markets? After all, understanding the stock market already poses a formidable challenge and it could seem a good strategy not to disperse efforts. The point is that once we know that speculative attitudes are basically the same in any market, the model we will build for the stock market will not be the same as if we had limited our knowledge to that market alone; common factors will be emphasized while idiosyncrasies will be discarded. As a matter of illustration, remember that historically it made a big difference to realize that the fall of an apple, the “fall” of the moon and the rising/falling of the tides are different facets of the same phenomenon. Similarly, if one can prove that the mechanisms of speculation are basically the same everywhere, this has far-reaching consequences. One of them is the idea that the phenomenon of speculative trading is a sociological rather than a purely economic phenomenon.
## 2 The price multiplier effect
### 2.1 Position of the problem
The crucial question regarding speculation can be formulated as follows: suppose you bought diamonds worth one million euros two years ago; assume that in the meantime their price has risen by 360%, but that in recent weeks there was a sudden 25% fall. Will you sell, keep your holding until the price goes up again, or even buy more diamonds? In the second and last cases speculation will be fueled and will continue at least for some time, with the result that prices will rise to even higher levels. On the contrary, if you sell, and if other traders react similarly, the bubble is likely to burst. The relevance of the previous question is further emphasized by the following observations: (i) the above question is not a mere “Gedanken experiment”; it is based on the speculative episode that seized the diamond market in the late 1970s, and naturally it can be rephrased in similar terms for any other speculative item. (ii) Economists would claim that the above question can be answered by relying on the standard principle of expected revenue maximization. While it can be argued that the future revenue of a stock is to some extent determined by the growth prospects of the company, this is much more questionable for diamonds. As a matter of fact, it is precisely to stress the fragility of the fundamentalists’ argument that we picked that example. (iii) Numerous computer simulations of the stock market have been proposed either by econophysicists or by economists; in that respect one should mention the following works . Often these simulations are based on astute mechanisms such as minority games or Ising-type interactions. Yet one would be on much firmer ground if these simulations could be built on realistic attitudes at the micro-economic level. Models in statistical physics are successful to the extent that their assumptions about atomic or molecular interaction are not too unrealistic. To the above question the present paper provides the following answers: (i) the behavior of a speculator $`(A)`$ who speculates on items worth 10,000 euros is not qualitatively different from that of a speculator $`(B)`$ who speculates (on the same market) on items worth 100 euros. This is reflected in the fact that the curves in Fig.1 have the same shape: same timing, same relaxation time; only the amplitudes are different. (ii) Speculator $`A`$ is significantly more “bullish” than speculator $`B`$. More specifically, the amplitude of the bubble (defined as the ratio between peak and initial price) is more or less proportional to the initial price. At first sight the first point could seem fairly evident. It would indeed be so for items whose prices differ only slightly, but for items whose prices differ by a factor of 100 it is no longer obvious that the attitude of the speculators should be the same. Speculator $`A`$ is probably a professional while speculator $`B`$ is likely to be a mere amateur. The second point means that speculation will be stronger for 2-carat diamonds than for 0.5-carat diamonds, and for 5-room flats than for one-room apartments. How much stronger will it be? This question is answered in the next paragraph.
### 2.2 Statistical evidence
#### 2.2.1 Qualitative evidence
Fig.1a,b,c,d presents the multiplier effect for four different speculative bubbles. Each graph refers to items whose prices are markedly different, the upper curve corresponding to the most costly item. The first figure concerns the price of one-carat diamonds; as one knows, the price of a diamond of given size depends upon its color and number of flaws. Colorless diamonds are the most costly; they are said to be of class D, while classes ranging from F to Z correspond to diamonds of decreasing quality. The upper curve in Fig.1a corresponds to class D, while the lower is for class G. In normal conditions, that is to say before and after the bubble, there is a ratio of two between the prices of D and G diamonds. For these diamonds the peak amplitudes (i.e. peak price / initial price) are 6.1 and 5.0 respectively. Fig.1b describes a speculative bubble for property values in Paris. What is shown is the price per square meter of apartments in the 7th (one of the most expensive) and 19th (one of the less expensive) districts respectively. Again the amplitude of the peak is larger for the more expensive item: 2.83 against 1.95. Fig.1c,d concern postage stamps. For our purpose postage stamps are of particular interest because their values range from a fraction of a euro to several thousand euros. During World War II there was a strong speculative bubble in France. Again we verify that the amplitude of the peak is larger for the item having the highest price; the figures are 5.6 and 1.9 respectively. Fig.1d refers to a speculative bubble for British stamps; again the most costly stamps have the largest peak amplitude: 4.9 for the 5,000 franc stamp against 2.2 for the 275 franc stamp. In this case we included an item for which there seems to be no speculative bubble at all. As a matter of fact, in stamp catalogues one reads that speculation only concerned stamps with a high face value. One may wonder why. In our explanatory framework the matter becomes simple: in fact speculation also affected the stamps with a low face value, but these stamps being much cheaper, their price peaks were so small as to remain at the level of the average price. In the next paragraph we will see that this explanation is not only qualitatively satisfactory but also quantitatively correct.
#### 2.2.2 Relationship between price and peak amplitude
From the above examples it is clear that peak amplitude increases with (initial) price. However, to get a more precise idea of that relationship one needs a statistical analysis of a larger sample of cases. The corresponding data are summarized in Appendix A. With $`A`$ denoting the peak amplitude and $`p_1`$ the initial price of the item, the relationship can be written in the form:
$$A=a\mathrm{ln}p_1+b$$
To give the values of $`b`$ an intrinsic meaning, one has to make the convention that $`p_1`$ must be expressed in a fixed currency. We made the choice to use euros (of January 1999) as our currency scale. To take an example, the francs of 1984 used for apartment prices in Paris were transformed into euros through the following steps:
Euro = (1/6.56) F1999; F1999 = (982/1408) F1984
The first equality is the official exchange rate between a franc of January 1999 and a euro of January 1999, while the second is based on a standard annual price index series.
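As an illustration of this conversion chain, here is a minimal helper (the function name is ours):

```python
# 1984 French francs -> 1999 francs (price index ratio 982/1408)
# -> euros of January 1999 (official rate 6.56 F per euro).
def f1984_to_euro1999(price_f1984):
    return price_f1984 * (982.0 / 1408.0) / 6.56

# e.g. the 1984 price of 11.6 kF per square meter in the 6th district
# (Appendix A.1):
print(f1984_to_euro1999(11.6))  # ~1.23, i.e. about 1,230 euros per square meter
```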
A linear least-squares fit gives the following estimates. For diamonds, due to a lack of data, it was not possible to perform a fit for a larger number of cases than in Fig.1a.
Table 1 Estimates for the parameters $`a`$ and $`b`$
$$\begin{array}{ccccc}\text{Item}& n& a& b& r\\ & \text{Number of}& & & \text{Coefficient of}\\ & \text{cases}& & & \text{correlation}\\ & & & \\ \text{Apartments in Paris}& 20& 0.60\pm 0.50& 1.7\pm 0.1& 0.48\\ & & & \\ \text{French stamps}& 6& 0.47\pm 0.38& 2.2\pm 0.7& 0.77\\ & & & \\ \text{British stamps}& 13& 0.39\pm 0.17& 2.3\pm 0.6& 0.80\end{array}$$
Two observations are in order: (i) in each case the correlation is significantly larger than zero; (ii) the estimates for $`a`$ and $`b`$ are fairly close.
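The British-stamp estimates can be checked directly against the data of Appendix A.3; the short sketch below (ours) uses the raw 1975-franc prices, which is legitimate for $`a`$ and $`r`$ since rescaling the currency multiplies $`p_1`$ by a constant and therefore only shifts the intercept $`b`$ (by $`a\mathrm{ln}k`$).

```python
import numpy as np

# Peak amplitudes A and 1975 prices p1 for the 13 British stamps of Appendix A.3.
p1 = np.array([13500, 8000, 6000, 2250, 1500, 1400, 300,
               275, 90, 3, 1.75, 1.50, 1.35])
A = np.array([4.87, 2.84, 5.47, 4.33, 5.05, 2.74, 5.44,
              2.19, 2.85, 0.36, 0.76, 0.86, 1.31])

a, b = np.polyfit(np.log(p1), A, 1)    # least-squares fit of A = a ln(p1) + b
r = np.corrcoef(np.log(p1), A)[0, 1]   # coefficient of correlation
print(a, r)  # slope ~0.4 and r ~0.8, consistent with Table 1 within the errors
```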
## 3 Conclusion
We have shown (at least for those markets for which data were available) that the relative amplitudes $`A`$ of speculative price peaks are larger for more costly items; more specifically, the relationship can be written in the form: $`A=0.5\mathrm{ln}p_1+b`$. Why is this so? Different mechanisms can be imagined. We will in this empirical paper refrain from proposing a detailed model; this will be done in a subsequent paper. Nevertheless it can be of interest to review some of the ideas on which such a model could be based. For the sake of illustration, let us consider the real estate market. We assume that there are two types of operators: (i) residents who buy and sell apartments for personal usage; we call them users; (ii) speculators and property developers who make money by selling and buying property. Such a distinction is in essence similar to the one made between “fundamentalists” and “noise traders” in the paper by Lux et al. . Now, it is not unreasonable to assume that, having limited means, the users will be deterred by too high prices. On the contrary, for someone who buys in order to sell 6 months later the price makes little difference; only profit matters. For instance, if the price of one-room apartments doubled, users would still be able to afford them; if the price of 5-room apartments doubled, they would become far too expensive. Consequently, one can expect that for costly goods the proportion of speculators in the market will be larger. The dynamic pricing behavior of speculators being more aggressive, one should not be surprised that these markets show greater amplification factors.

It is possible to check that scenario empirically, at least to some extent. In a separate paper we tried to estimate the proportion of speculators in different districts of Paris; for instance, for the two districts considered in Fig.1b, namely the 7th and the 19th, one gets 18% and 11% respectively; for all the districts the percentage of speculators varies from 10% (2nd district) to 36% (15th district). To get these estimates we used the fact that on average, i.e. for all the 20 districts, the proportion of speculators is about 20% (La Vie Française 18 April 1998). The regression between peak amplitudes and fraction of speculators $`f_s`$ reads: $`A=1.02f_s+2`$, the correlation being equal to 0.28. This test is not completely satisfactory because both the peak amplitudes and the proportions of speculators varied within too narrow limits: from 1.95 to 2.83 for the amplitudes and from 0.10 to 0.36 for the proportion of speculators. It will be the purpose of a subsequent paper to perform similar tests in other markets. The main difficulty in that matter is to find adequate statistics.

As a last point, one could wonder what (if any) are the implications of the multiplier effect for stock markets. In this case one should of course not reason in terms of share prices (these are of the order of 60 euros and are not very different from one stock to another) but in terms of share packages. For example, some small investors, called “odd lotters” in the jargon of finance, trade small portions of stocks, typically under one hundred at a time. Around 1955 odd lotters were known to be responsible for around 15% of stock trading on the New York Stock Exchange; today they account for less than 1% . In order to test the price multiplier effect one would need more detailed statistics about the size of transactions on given stocks.
Once again, finding adequate data turns out to be a major obstacle.
Acknowledgment

I am indebted to Mr. Nicolas Vuillet, diamond trader in Paris, for very interesting and stimulating discussions; they provided one of the starting points for the writing of this paper. Furthermore, I am grateful to the referee, thanks to whom the paper has been markedly improved.
## Appendix A: Statistical data
In this appendix we give the detailed data for each of the cases considered above. The peak amplitudes are always computed from deflated prices.
### A.1 Real estate bubble in Paris
Apartment prices in 1984 are expressed in thousand French francs per square meter.
$$\begin{array}{ccccccccccc}\text{District number}& (6)& (16)& (7)& (8)& (5)& (15)& (4)& (14)& (1)& (17)\\ \text{Price in 1984}& 11.6& 11.1& 10.3& 10.1& 9.71& 9.60& 9.45& 8.81& 8.03& 7.91\\ \text{Peak amplitude}& 2.47& 2.51& 2.83& 2.78& 2.38& 2.02& 2.43& 2.03& 2.71& 2.31\\ & & & & & & & & & & \\ \text{District number}& (12)& (2)& (13)& (3)& (19)& (11)& (9)& (20)& (18)& (10)\\ \text{Price in 1984}& 7.85& 7.37& 7.19& 6.99& 6.49& 6.45& 6.35& 6.11& 5.90& 5.51\\ \text{Peak amplitude}& 1.96& 2.19& 2.18& 2.63& 1.95& 2.16& 2.36& 2.05& 2.05& 2.27\end{array}$$
### A.2 French stamps
The stamp identification numbers refer to the Cérès catalogues. The 1938 prices are expressed in current French francs.
$$\begin{array}{ccccccc}\text{Stamp number}& (2)& (31)& (30)& (32)& (11)& (16)\\ \text{Price in 1938}& 5,000& 400& 350& 250& 60& 9\\ \text{Peak amplitude}& 5.56& 2.78& 2.64& 4.07& 3.70& 1.93\end{array}$$
### A.3 British stamps
The stamp identification numbers refer to the Yvert and Tellier catalogues. The 1975 prices are expressed in current French francs.
$$\begin{array}{cccccccc}\text{Stamp number}& (90)& (46)& (89)& (156)& (105)& (183)& (155)\\ \text{Price in 1975}& 13,500& 8,000& 6,000& 2,250& 1,500& 1,400& 300\\ \text{Peak amplitude}& 4.87& 2.84& 5.47& 4.33& 5.05& 2.74& 5.44\\ & & & & & & & \\ \text{Stamp number}& (286)& (238)& (355)& (239)& (106)& (512)& \\ \text{Price in 1975}& 275& 90& 3& 1.75& 1.50& 1.35& \\ \text{Peak amplitude}& 2.19& 2.85& 0.36& 0.76& 0.86& 1.31& \end{array}$$
References
(1) BERTERO (E.M.) 1987: Speculative bubbles and the market for precious metals. PhD thesis. London.
(2) BOUCHAUD (J.-P.), POTTERS (M.) 1997: Théorie des risques financiers. Alea-Saclay, Eyrolles. Paris.
(3) CALDARELLI (G.), MARSILI (M.), ZHANG (Y.-C.) 1997: Europhysics Letters 40,479.
(4) FAY (S.) 1982: The great silver bubble. Hodder and Stoughton. London.
(5) FEIGENBAUM (J.A.), FREUND (P.G.O.) 1996: International Journal of Modern Physics B 10,3737.
(6) LUX (T.), MARCHESI (M.) 1999: Nature 397, 498.
(7) MANTEGNA (R.N.), STANLEY (H.E.) 1997: Physica A 239,255.
(8) MANTEGNA (R.N.), STANLEY (H.E.) 1999: Scaling approach in finance. Cambridge University Press. Cambridge (in press).
(9) MASSACRIER (A.) 1978: Prix des timbres-postes français classiques de 1904 à 1975. A. Maury. Paris.
(10) OLIVEIRA (S.M. de), OLIVEIRA (P.M.C. de), STAUFFER (D.) 1999: Evolution, money, wars and computers. Teubner. Stuttgart. See especially chapter 4 about stock market models.
(11) ROEHNER (B.M.), SORNETTE (D.) 1998: The European Physical Journal B 4, 387.
(12) ROEHNER (B.M.) 1999: Regional Science and Urban Economics 29,73.
(13) SORNETTE (D.), JOHANSEN (A.) 1997: Physica A 245,411.
(14) STAUFFER (D.), OLIVEIRA (P.M.C.), BERNARDES (A.T.) 1999: International Journal for Theoretical and Applied Finance 2,83.
(15) TVEDE (L.) 1990: The psychology of finance. Norwegian University Press. Oslo.
Figure captions
Fig.1a Speculative bubble for polished diamonds. Solid line: one carat, D clarity; broken line: one carat, G clarity; the price data are deflated prices on the Antwerp market. The two figures under the title give the prices of both items at the beginning of the bubble. Sources: The Economist, Special report No 1126.
Fig.1b Speculative bubble for property values (Paris). Solid line: price of apartments in one of the most expensive districts; broken line: price of apartments in one of the less expensive districts; the price data are deflated prices per square meter. The two figures under the title give the prices of both items at the beginning of the bubble. Source: Chambre des Notaires.
Fig.1c Speculative bubble for French stamps. Solid line: price of one of the most expensive stamps; broken line: price of one of the cheapest stamps. The price data are deflated prices. The two figures under the title give the prices of both stamps at the beginning of the bubble. Source: Massacrier (1978).
Fig.1d Speculative bubble for British stamps. Solid line: price of one of the most expensive stamps; broken line: price of a less expensive stamp; dotted line: price of one of the cheapest stamps; the price data are deflated prices. The three figures under the title give the prices of the three stamps at the beginning of the bubble. Source: Catalogue Yvert and Tellier.
# Effects of doping on thermally excited quasiparticles in the high-$`T_c`$ superconducting state
## Abstract
The physical properties of low energy superconducting quasiparticles in high-$`T_c`$ superconductors are examined using magnetic penetration depth and specific heat experimental data. We find that the low energy density of states of quasiparticles of La<sub>2-x</sub>Sr<sub>x</sub>CuO<sub>4</sub> scales with $`\left(x-x_c\right)/T_c`$ to the leading order approximation, where $`x_c`$ is the critical doping concentration below which $`T_c=0`$. The linear temperature term of the superfluid density is renormalized by quasiparticle interactions and the renormalization factor times the Fermi velocity is found to be doping independent.
In high-$`T_c`$ superconductors (HTS), the low energy density of states (DOS) is linear due to the $`d_{x^2-y^2}`$-wave pairing symmetry. This linear DOS leads to a linear in-plane superfluid density (which is proportional to the inverse square of the magnetic penetration depth, $`\lambda _{ab}^{-2}`$) and a quadratic electronic specific heat at low temperatures. The experimental observations of these power-law temperature dependences provided some of the early evidence for the unconventional pairing symmetry and lent support to the Fermi liquid description of the high-$`T_c`$ superconducting state. Exploring the physical properties of the low energy quasiparticle excitations is of fundamental importance for understanding the high-$`T_c`$ mechanism. A central issue which is currently under debate is the nature of quasiparticle interactions and their effect on the physics of the superconducting state. Recently Mesot et al. calculated the slope of the superfluid stiffness using parameters obtained from angle-resolved photoemission (ARPES) measurements for Bi<sub>2</sub>Sr<sub>2</sub>CaCu<sub>2</sub>O<sub>8+x</sub> (BSCCO). They found that the renormalization factor to the superfluid stiffness is doping dependent and about a factor of 2 to 3 smaller than that for non-interacting quasiparticle systems.
In this paper we provide further evidence for the existence of strong quasiparticle interaction in the superconducting state and present a detailed analysis for the low energy DOS and other fundamental parameters of HTS. We calculate the doping dependence of these parameters using high quality penetration depth and specific heat data and discuss an interesting correlation between an energy scale derived from the low temperature superfluid response and the normal state pseudogap.
Let us first consider the low energy DOS of high-$`T_c`$ quasiparticles $`N(\omega )`$. If we denote the linear coefficient of $`N(\omega )`$ by $`\eta `$, then at low energies $`N(\omega )`$ is given by
$$N(\omega )\approx \eta \omega .$$ (1)
Since the low temperature penetration depth and specific heat are governed by thermally excited quasiparticles, it can be shown that the slopes of $`\lambda ^{-2}`$ and $`\gamma `$ at zero temperature are given by
$`{\displaystyle \frac{d\lambda _{ab}^{-2}}{dT}}|_{T\to 0}`$ $`=`$ $`-(4\pi \mathrm{ln}2)\eta k_B\left({\displaystyle \frac{e\beta v_F}{c}}\right)^2,`$ (2)
$`{\displaystyle \frac{d\gamma }{dT}}|_{T\to 0}`$ $`=`$ $`5.4\eta k_B^3,`$ (3)
where $`\gamma =C_v/T`$ is the specific heat coefficient and $`v_F`$ is the Fermi velocity. $`\beta `$ is a renormalization factor to the paramagnetic term in the superfluid density due to quasiparticle interactions or vertex corrections. For non-interacting quasiparticles $`\beta =1`$, and Eq. (2) is the standard BCS mean-field result. The above equations show that from experimental data of $`\lambda _{ab}`$ and $`C_v`$, we can determine the values of two important parameters: the linear coefficient of DOS $`\eta `$, and the product of the renormalization constant and Fermi velocity $`\beta v_F`$.
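To make the inversion concrete, the sketch below (ours) combines Eqs. (2) and (3) in Gaussian units. Both slopes must first be converted to per-unit-volume units, and the two input magnitudes are assumed, illustrative numbers rather than values quoted in this paper.

```python
import math

k_B = 1.381e-16   # Boltzmann constant, erg/K
e = 4.803e-10     # electron charge, esu
c = 2.998e10      # speed of light, cm/s

# Assumed, illustrative low-temperature slopes (per unit volume):
S_lam = 3e7       # |d(lambda_ab^-2)/dT| as T -> 0, in cm^-2 K^-1
S_gam = 46.0      # d(gamma)/dT as T -> 0, in erg K^-3 cm^-3

eta = S_gam / (5.4 * k_B**3)   # linear DOS coefficient, from Eq. (3)
beta_vF = (c / e) * math.sqrt(S_lam / (4 * math.pi * math.log(2) * k_B * eta))
print(f"beta*v_F ~ {beta_vF:.1e} cm/s")   # ~5e6 cm/s for these inputs
```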
Figure 1 shows the experimental result of $`T_cd\gamma /dT`$ at $`T=0K`$ as a function of doping for La<sub>2-x</sub>Sr<sub>x</sub>CuO<sub>4</sub> (LSCO). Note here we plot the product $`T_cd\gamma /dT`$ rather than $`d\gamma /dT`$ since in an ideal BCS superconductor with $`d`$-wave pairing, $`T_cd\gamma /dT(0K)`$ is proportional to the charge concentration. We find that $`T_cd\gamma /dT(0K)`$ increases monotonically with doping and the slope is smaller in the underdoped regime than in the overdoped one. The value of $`\eta `$ can be obtained from the data shown in Fig. 1. In the whole doping regime, we find that $`\eta `$ can be approximately fitted by
$$\eta \approx \frac{\eta _1(x-x_c)+\eta _2(x-x_c)^2}{T_c},$$ (4)
where $`x_c\approx 0.058`$ is approximately equal to the critical doping concentration below which $`T_c=0`$, $`\eta _1=13`$ $`mJ/(K^2k_B^3mol)`$ and $`\eta _2=87`$ $`mJ/(K^2k_B^3mol)`$. For other high-$`T_c`$ materials, the low temperature data for $`\gamma `$ generally contain a significant contribution from magnetic impurities and it is difficult, especially in the underdoped regime, to determine accurately the value of $`d\gamma /dT(0K)`$. However, from the data available we find that, within experimental errors, the doping dependence of $`\eta `$ is similar to that shown in Fig. 1.
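For orientation, the fit (4) can be evaluated directly; the sketch below (ours) returns the corresponding quantity of Fig. 1 via Eq. (3), with the doping level taken only as an assumed example.

```python
def Tc_dgamma_dT(x, x_c=0.058, eta1=13.0, eta2=87.0):
    """T_c * dgamma/dT at T=0 in mJ K^-2 mol^-1, from Eqs. (3) and (4);
    the factor k_B^3 cancels against the units of eta1 and eta2."""
    dx = x - x_c
    return 5.4 * (eta1 * dx + eta2 * dx**2)

print(Tc_dgamma_dT(0.15))  # ~10 mJ K^-2 mol^-1 near optimal doping
```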
Figure 2 shows the extrapolated experimental results of $`T_cd\lambda _{ab}^{-2}/dT`$ at $`0K`$ as a function of doping, $`x`$, for LSCO and YBa<sub>2</sub>Cu<sub>3</sub>O<sub>6+x</sub> (YBCO). The magnetic penetration depth data for LSCO and the specific heat data of Loram et al. shown in Fig. 1 were obtained from samples of the same batch. The similar doping dependence in $`T_cd\lambda _{ab}^{-2}/dT`$ and $`T_cd\gamma /dT`$ indicates that it is indeed the thermally excited quasiparticles which are responsible for the low temperature thermodynamic response of HTS. Two sets of penetration depth data are shown for YBCO. One set comes from the $`ac`$-susceptibility measurements of grain-aligned YBCO and the other from microwave measurements of detwinned YBCO single crystals . The $`ac`$-susceptibility technique measures the effective in-plane penetration depth, $`\lambda _{ab}`$, whereas the microwave experiment measures the penetration depth along two principal axes, $`\lambda _a`$ and $`\lambda _b`$. In order to compare these two sets of data, we have converted the single crystal data to the effective in-plane penetration depth by assuming $`\lambda _{ab}^2\approx \lambda _a\lambda _b`$ . The agreement between the two YBCO data sets is striking considering the difference in the type of samples and measurement techniques used by the two groups. For both LSCO and YBCO we find that within experimental errors $`T_cd\lambda _{ab}^{-2}/dT`$ increases almost linearly with doping.
Using the penetration depth and specific heat data for LSCO we have estimated $`\beta v_F`$ from the ratio of $`d\lambda ^{-2}/dT`$ and $`d\gamma /dT`$ . As shown in Fig. 3, $`\beta v_F`$ is almost doping independent and of the order of $`5`$–$`6\times 10^6cm/\mathrm{sec}`$. If we assume that the Fermi velocity of LSCO is equal to the Fermi velocity of BSCCO, i.e. $`v_F=2.5\times 10^7cm/\mathrm{sec}`$ , Fig. 3 suggests that $`\beta `$ is approximately equal to $`0.2`$, a value much smaller than that for non-interacting quasiparticles where $`\beta =1`$. Obviously this estimate for $`\beta `$ is crude since the Fermi velocity for LSCO may not be the same as that for BSCCO. Nevertheless, it indicates that the quasiparticle interaction is quite strong, in qualitative agreement with the analysis of Mesot et al. for BSCCO. Furthermore, if $`v_F`$ in LSCO is doping independent as in BSCCO, the result in Fig. 3 suggests that $`\beta `$ is also doping independent, which disagrees with the conclusions of Mesot et al. for BSCCO. We see no apparent physical reason why the doping dependence of $`\beta v_F`$ in these two high-$`T_c`$ systems should be different. It is likely that the difference is just due to the experimental uncertainty. To clarify this problem, more measurements of the doping dependence of the Fermi velocity and low energy DOS of quasiparticles for both systems are required.
Since $`\beta v_F`$ for LSCO is approximately doping independent, the above results indicate that in the underdoped regime $`\lambda _{ab}^{-2}(0)-\lambda _{ab}^{-2}(T)\propto (x-x_c)T/T_c`$. Experimentally, it was found that $`\lambda _{ab}^{-2}(0)`$ is approximately proportional to $`T_c`$ ; this therefore gives
$$\frac{\lambda _{ab}^{-2}(T)}{\lambda _{ab}^{-2}(0)}\approx 1-\alpha \frac{x-x_c}{T_c^2}T,$$ (5)
where $`\alpha `$ is a doping independent constant. This equation holds only at low temperatures and low doping. However, if we extrapolate $`\lambda _{ab}^{-2}(T)`$ to high temperatures using this equation, we find that $`\lambda _{ab}^{-2}(T)`$ becomes zero at a temperature $`T_0\propto T_c^2/(x-x_c)`$. It is interesting to note that $`T_0`$ has approximately the same doping dependence and order of magnitude as the temperature scale $`T^{}`$ of the normal state gap obtained from tunnelling, ARPES, and other measurements . If we interpret $`T_0`$ as the energy scale at which Cooper pairs begin to form and the difference between $`T_0`$ and $`T_c`$ as being due to the pair phase fluctuations, this result seems to be consistent with the widely discussed phase fluctuation picture. However, as both the XY order parameter fluctuations and the gauge fluctuations are important in the critical phase transition regime, this interpretation is not unique. Nevertheless, it suggests that from low temperature measurements of the superconducting state, one can also obtain information on phase fluctuations in the normal state.
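For completeness, the extrapolation temperature follows in one line from Eq. (5) (our rewriting): setting the right hand side to zero gives

$`T_0={\displaystyle \frac{T_c^2}{\alpha (x-x_c)}},`$

so that, with $`\alpha `$ doping independent, $`T_0`$ indeed carries the doping dependence $`T_c^2/(x-x_c)`$ quoted above.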
In conclusion, using low temperature experimental data of the magnetic penetration depth and electronic specific heat of HTS, we have estimated the effect of doping on the low energy DOS of quasiparticles and $`\beta v_F`$. Both $`T_cd\gamma /dT`$ and $`T_cd\lambda _{ab}^{-2}/dT`$ are found to increase with increasing doping whereas $`\beta v_F`$ is doping independent. The low temperature superfluid density $`\lambda _{ab}^{-2}(T)`$ extrapolates to zero at a temperature $`T_0`$ which has approximately the same doping dependence and order of magnitude as the onset temperature of the normal state gap.
We would like to thank J. W. Loram for supplying the specific heat data. CP thanks Trinity College, Cambridge for financial support. T.X. was partially supported by the Natural Science Foundation of China.
UN/ESA Workshops on Basic Space Science: An Update on Their Achievements
Hans J. Haubold
Programme on Space Applications
Office for Outer Space Affairs
United Nations
Vienna International Centre
P.O. Box 500
A-1400 Vienna, Austria
Email: haubold@kph.tuwien.ac.at
Abstract
During the second half of the twentieth century, expensive observatories have been erected at La Silla (Chile), Mauna Kea (Hawaii), Las Palmas (Canary Islands), and Calar Alto (Spain), to name a few. In 1990, at the beginning of The Decade of Discovery in Astronomy and Astrophysics (Bahcall ), the UN/ESA Workshops on Basic Space Science initiated the establishment of small astronomical telescope facilities, among them many particularly supported by Japan, in developing countries in Asia and the Pacific (Sri Lanka, Philippines), Latin America and the Caribbean (Colombia, Costa Rica, Honduras, Paraguay), and Western Asia (Egypt, Jordan, Morocco). The annual UN/ESA Workshops continue to pursue an agenda to network these small observatory facilities through similar research and education programmes and at the same time encourage the incorporation of cultural elements predominant in the respective cultures. Cross-cultural integration and multi-lingual scientific cooperation may well be a dominant theme in the new millennium (Pyenson ). This trend is supported by the notion that astronomy has deep roots in virtually every human culture, that it helps to understand humanity's place in the vast scale of the Universe, and that it increases the knowledge of humanity about its origins and evolution. Two of these Workshops have been organized in Europe (Germany 1996 and France 2000) to strengthen cooperation between developing and industrialized countries.
1. Introduction
Answering questions about the Universe challenges astronomers, fascinates a broad national audience and inspires young people to pursue careers in engineering, mathematics, and science. Basic space science research assists nations, directly and indirectly, in achieving societal goals. For example, studies of the Sun, the planets, and the stars have led to experimental techniques for the investigation of the Earth’s environment and to a broader perspective from which to consider terrestrial environmental concerns such as ozone depletion and the greenhouse effect .
Basic space science makes humanistic, educational and technical contributions to society. The most fundamental contribution of basic space science is that it provides modern answers to questions about humanity’s place in the Universe. Quantitative answers can now be found to questions about which ancient philosophers could only speculate. In addition to satisfying curiosity about the Universe, basic space science nourishes a scientific outlook in society at large. Society invests in basic space science research and receives an important dividend in the form of education, both formally through instruction in schools, colleges and universities, and more informally through television programmes, popular books and magazines, and planetarium presentations. Basic space science introduces young people to quantitative reasoning and also contributes to areas of more immediate practicality, including industry, medicine and the understanding of the Earth’s environment .
The international basic space science community has long shown leadership in initiating international collaboration and cooperation. Forums have been established on a regular basis in which the basic space science community has publicized its scientific achievements and the international character of astronomical and space science studies. The most recent such initiatives were the International Space Year (1992), with its elements Mission to Planet Earth and Mission to the Universe, and the Third United Nations Conference on the Exploration and Peaceful Uses of Outer Space (UNISPACE III), held from 19-30 July 1999 at the United Nations Office Vienna, Austria .
Despite the considerable progress made in the development of astronomy and basic space science, the observation has been made that of the 188 countries that are Member States of the United Nations, nearly 100 have professional or amateur astronomical organizations. Only about 60 of these countries, however, are sufficiently involved in astronomy to belong to the International Astronomical Union. Only about 20 countries, representing 15% of the world's population, have access to the full range of astronomical facilities and information. This does not include most of the Eastern European and Baltic countries, nor the countries of the former Soviet Union, whose fragile economies keep them from achieving their full potential, despite the excellence of their astronomical heritage and education .
2. First Cycle of Workshops: Regional Observations and Recommendations
In 1991, the United Nations, through its Programme on Space Applications of the UN Office for Outer Space Affairs, in cooperation with the European Space Agency, held its first Workshop on Basic Space Science in India for Asia and the Pacific region . Since then, such workshops have been held annually in the different regions around the world to make a unique contribution to the worldwide development of astronomy and space science, particularly in developing countries. Workshops were held in 1992 in Costa Rica and Colombia for Latin America and the Caribbean, in 1993 in Nigeria for Africa, and in 1994 in Egypt for Western Asia. In addition to the direct benefits of a common research workshop, a vital part of each of the workshops was the daily working group sessions, which provided participants with a forum in which observations and recommendations for developing basic space science in all its aspects, through regional and international cooperation, were made. The deliberations of these sessions and the observations and recommendations that emanated from them, region by region, are published as UN General Assembly documents that can be used to lobby governments and funding agencies to implement prospective follow-up projects of the workshops .
3. Second Cycle of Workshops: Implementing Follow-up Projects
Among the most important results emanating from the workshop series, starting with the first workshop in India in 1991, is that for each of the regions a number of follow-up projects were identified, mainly the establishment and operation of small astronomical telescope facilities, which have been gradually implemented over the course of the workshops. Examples of them are briefly listed below.
The Galactic Emission Mapping (GEM) project of researchers from Brazil, Colombia, Italy, Spain, and the United States, devised to obtain a full sky, multi-frequency, and high sensitivity survey of the galactic radio emission, led to the operation of the GEM radio telescope at an equatorial site in Colombia in 1995. Subsequently, in order to cover the northern and southern latitudes not visible from this equatorial site, the radio telescope was moved to Spain (IAC at Tenerife) and to Brazil (INPE at Sao Jose dos Campos) to continue radio frequency observations. Since 1995, the results obtained with GEM are
(i) a galactic radio emission database in the frequencies 408, 2300, and 5000 MHz;
(ii) an estimate of the sky temperature and spectral indices within that frequency range; and
(iii) an estimate of the galactic emission profile and quadrupole component .
In 1995, the workshop was held in Sri Lanka to inaugurate an astronomical telescope facility, based on a donation of an astronomical telescope (45-cm GOTO) from Japan to Sri Lanka, at the Arthur C. Clarke Institute for Modern Technologies (ACCIMT). The telescope is equipped with a photoelectric photometer, spectrograph, and an ordinary camera for imaging (recently, a CCD camera was installed), and necessary computer equipment. Young astronomers are being trained and educated for the operation of the telescope facility through comprehensive programmes at Bisei Observatory, Japan. Since 1995, the ACCIMT has served as the national centre for research, education, and popularization of astronomy in Sri Lanka .
In 1997, the workshop inaugurated the Central American Astronomical Observatory (utilizing a 40-cm Meade telescope) at the National Autonomous University of Honduras, Tegucigalpa, Honduras, with the dedication of the Telescopio Rene Sagastume Castillo at the Suyapa Observatory for Central American countries (Costa Rica, El Salvador, Guatemala, Honduras, Nicaragua, Panama) .
Following the recommendation of the workshop in Egypt in 1994, the long-awaited refurbishment and modernization of the 74” Kottamia telescope at Helwan, Egypt, will be finalized in 1999. This telescope, equipped with Cassegrain and Coude spectrographs, saw first light in 1964, is still the largest telescope in the region of Western Asia, and will be made available for regional and international cooperation in the near future. The agreement between the National Research Institute of Astronomy and Geophysics (NRIAG) and the Government of Egypt led to the replacement of the primary and secondary mirrors of the telescope by new mirrors of Zerodur glass ceramics. To improve the optical performance of the telescope system, a more efficient supporting system was also developed for the primary mirror .
The most recent UN/ESA Workshop on Basic Space Science: Scientific Exploration from Space was hosted by the Institute of Astronomy and Space Sciences at Al al-Bayt University from 13 to 17 March 1999 in Mafraq, Jordan. The major result of the working group sessions was the urgent recommendation to make the small astronomical telescope facility (40-cm Meade telescope) on the campus of Al al-Bayt University operational and to encourage the project of the construction of the 32-m Baquaa radio telescope at the University of Jordan, Amman .
4. Third Cycle of Workshops: Networking Telescopes and Beyond
Based on the request from the United Nations, the Foreign Office of the Government of Germany, through the German Space Agency (DLR), made it possible to hold a UN/ESA Workshop on Basic Space Science at the Max-Planck-Institute for Radioastronomy (MPIfR), Bonn, Germany, in 1996, for the benefit of Europe. This workshop analyzed the results of all previous Workshops on Basic Space Science, particularly the follow-up projects that emanated from the second workshop cycle, and charted the course to be followed in the future \[6-10\]. In addition to this objective, the workshop addressed scientific topics at the forefront of research in such diverse fields as photon, neutrino, gravitational-wave, and cosmic-ray astronomy. Taking into account that the past workshops had not yet led to the establishment of an astronomical facility in the African countries under consideration for such an effort, this workshop prepared the publication, on a regular basis, of an urgently needed bilingual newsletter (African Skies/Cieux Africains) for the space science community in Africa, a collaborative effort of astronomers from France and South Africa .
The forthcoming Ninth UN/ESA Workshop on Basic Space Science: Satellites and Networks of Telescopes as Tools for Global Participation in the Studies of the Universe, will be held at Toulouse, France, in June 2000. The organizers of the series of workshops have agreed, based on observations and recommendations of the past workshops, that the agenda of this workshop will focus on the following topics:
(i) Feasibility of the establishment of a World Space Observatory (WSO) .
(ii) Network of Oriental Robotic Telescopes (NORT) .
(iii) Networking of small astronomical telescopes to be preferentially utilized for observation of variable stars. The establishment of small astronomical telescope facilities with the sponsorship of Japan in Paraguay and the Philippines. Cooperation between small astronomical telescope facilities in terms of education and research programmes .
(iv) Research with astronomical data bases and the utilization of astrophysics data systems .
5. Results That Supplemented the Workshop Series
1992 had been designated as International Space Year (ISY) by a wide variety of national and international space organizations, including the United Nations. To help generate interest and support for planetariums as centres of culture and education, the United Nations in cooperation with the International Planetarium Society, as part of its International Space Year activities, published a guidebook on the Planetarium: A Challenge for Educators . Subsequently, this booklet was translated by national planetarium associations from English into Japanese, Slovakian, and Spanish, and is still available from the United Nations.
In 1993, the European Space Agency, through the United Nations, donated 30 personal computer systems for use at universities and research institutions in Cuba, Ghana, Honduras, Nigeria, Peru, and Sri Lanka.
In 1995, scientists from around the world gathered at the United Nations Headquarters in New York to discuss a broad range of scientific issues associated with near-Earth objects (NEOs). This gathering became known as the first United Nations International Conference on Near-Earth Objects . Subsequently, the European Space Agency sponsored a study of a global network for research on near-Earth objects with the purpose of designing and implementing a worldwide information and data exchange centre called the Spaceguard Central Node (SCN) in order to support follow-up activities after the detection of NEOs .
Acknowledgement
The author is deeply indebted to Dr. W. Wamsteker (European Space Agency) for his continual support and commitment in organizing the Workshops. The author would like to thank Prof. M. Kitamura (National Astronomical Observatory Tokyo), Dr. K.-U. Schrogl (German Space Agency Cologne), Dr. J. Andersen (International Astronomical Union Paris), and Prof. A.M. Mathai (McGill University Montreal) for their support of the Workshops.
References
Note: The author is writing in his personal capacity and the views expressed in this paper are those of the author and not necessarily those of the United Nations.
For a comprehensive review of social and economic dimensions of science as a collaborative effort see C.H. Lai (editor), Ideals and Realities: Selected Essays of Abdus Salam, 2nd edition (Singapore: World Scientific, 1987) and, focusing on the historic dimension of such endeavors, see .
See J.N. Bahcall, The Decade of Discovery in Astronomy and Astrophysics (Washington, D.C.: National Academy Press, 1991). For the interplay on how technology of astronomical instruments, astrophysics, and mathematics produce the remarkable picture of the Universe, in the course of history of humankind, see R. Osserman, Poetry of the Universe: A Mathematical Exploration of the Cosmos (New York: Doubleday, 1995).
See J.R. Percy and A.H. Batten, ”Chasing the dream”, Mercury, 1995, 24(2):15-18. For an elaboration on the United Nations contributions see H.J. Haubold and W. Wamsteker, ”Worldwide Development of Astronomy: The Story of a Decade of UN/ESA Workshops on Basic Space Science”, Space Technology, 1998, 18(4-6):149-156, and H.J. Haubold, ”UN/ESA Workshops on Basic Space Science: an initiative in the world-wide development of astronomy”, Journal of Astronomical History and Heritage, 1998, 1(2):105-121.
Subsequently, these workshops were co-organized by the Austrian Space Agency (ASA), French Space Agency (CNES), German Space Agency (DLR), European Space Agency (ESA), International Astronomical Union (IAU), International Centre for Theoretical Physics Trieste (ICTP), Institute of Space and Astronautical Science of Japan (ISAS), National Aeronautics and Space Administration of the United States (NASA), The Planetary Society (TPS), and the United Nations (UN).
A month-to-month update on results and new developments related to the UN/ESA Workshops on Basic Space Science is made available at the Workshop's World-Wide-Web site at http://www.seas.columbia.edu/~ah297/un-esa/. The Proceedings of the workshops were published in: (I) Conference Proceedings of the American Institute of Physics Vol. 245, American Institute of Physics, New York, 1992, pp. 350; (II) Earth, Moon, and Planets 63, No. 2 (1993)93-170; (III) Astrophysics and Space Science 214, Nos. 1-2 (1994)1-260; (IV) Conference Proceedings of the American Institute of Physics Vol. 320, American Institute of Physics, New York, 1994, 320pp.; (V) Earth, Moon, and Planets 70, Nos. 1-3 (1995)1-233; (VI) Astrophysics and Space Science 228, Nos. 1-2 (1995)1-405; and (VII) Astrophysics and Space Science 258, Nos. 1-2 (1998)1-394.
For a detailed report on the development of the GEM project, its scientific results, impacts on university education and research in Colombia, and references to the literature see S. Torres, “The UN/ESA Workshop on Basic Space Science in Colombia, 1992: What has been achieved since then?”, COSPAR Information Bulletin, 1999, No. 144, pp. 13-15. The World-Wide-Web site of GEM can be accessed at http://aether.lbl.gov/www/projects/GEM.
S. Gunasekara and P. de Alwis, ”The astronomy promotional programme at ACCIMT”, in Conference on Space Sciences and Technology Application for National Development: Proceedings, held at Colombo, Sri Lanka, 21-22 January 1999, Ministry of Science and Technology of Sri Lanka, pp. 143-146. The World-Wide-Web site of ACCIMT at http://www.slt.lk/accimt/page5.html is gradually incorporating results obtained with the telescope facility. See also the papers of Kitamura and Kogure, respectively, in .
The Observatory and its educational and scientific activities are presented on the World-Wide-Web site at http://www.unah.hondunet.net/unah.html. A recent photograph of the Observatory building is available at http://www.laprensahn.com/natarc/9812/n23001.htm.
S.M. Hasan, “Upgrading the 1.9-m Kottamia telescope”, African Skies, 1998, No. 2, pp. 16-17.
For all workshops, United Nations Reports on the organization of the respective Workshop have been published as UN General Assembly documents, see Report on the Eighth United Nations/European Space Agency Workshop on Basic Space Science: Scientific Exploration from Space, hosted by the Institute of Astronomy and Space Sciences at Al al-Bayt University on behalf of the Government of Jordan, A/AC.105/723, 18 May 1999, 8pp. A special World-Wide-Web site was developed for this workshop at http://www.planetary.org/news/Events/unispace.html.
The World-Wide-Web site of the Working Group for Space Science in Africa is http://da.saao.ac.za:80/~wgssa/. In this connection, see, for a detailed review of the workshop in Nigeria, held in 1994, L.I. Onuora, “The UN/ESA Workshop on Basic Space Science in Nigeria: Looking back”, COSPAR Information Bulletin, 1999, No. 144, pp. 15-16.
W. Wamsteker and R. Gonzales Riestra (editors), Ultraviolet astrophysics beyond the IUE final archive: Proceedings of the conference, held at Sevilla, Spain, 11-14 November 1997, European Space Agency SP-413, pp. 849-855. H. Gavaghan, “U.N. plans its future in space”, Science, 1999, 285, p. 819. See also Report on the Eighth United Nations/European Space Agency Workshop on Basic Space Science: Scientific Exploration from Space, hosted by the Institute of Astronomy and Space Sciences at Al al-Bayt University on behalf of the Government of Jordan, A/AC.105/723, 18 May 1999, 8pp.
F.R. Querci and M. Querci, “The network of oriental robotic telescopes”, African Skies, 1998, No. 2, pp. 18-21.
H. Gavaghan, “U.N. plans its future in space”, Science, 1999, 285, p. 819. M. Kitamura, “Provision of astronomical instruments to developing countries by Japanese ODA with emphasis on research observations by the donated 45-cm reflectors in Asia”, in Conference on Space Sciences and Technology Application for National Development: Proceedings, held at Colombo, Sri Lanka, 21-22 January 1999, Ministry of Science and Technology of Sri Lanka, pp. 147-152. T. Kogure, ”Stellar activity and needs for multi-site observations”, in Conference on Space Sciences and Technology Application for National Development: Proceedings, held at Colombo, Sri Lanka, 21-22 January 1999, Ministry of Science and Technology of Sri Lanka, pp. 124-131.
E.g., IUE Newly Extracted Spectra (INES), World-Wide-Web site at http://ines.vilspa.esa.es, which is a complete astronomical archive and data distribution system, representing the final activity of ESA in the context of the International Ultraviolet Explorer project.
E.g., the NASA Astrophysics Data System (ADS), World-Wide-Web site at http://adswww.harvard.edu, whose main resource is an abstract service which includes four sets of abstracts: (i) astronomy and astrophysics, (ii) instrumentation, (iii) physics and geophysics, and (iv) Los Alamos preprint server.
See World-Wide-Web site at http://www.seas.columbia.edu/~ah297/un-esa/planetarium.html.
See World-Wide-Web site at http://www.seas.columbia.edu/~ah297/un-esa/neo.html.
See World-Wide-Web site at http://spaceguard.ias.rm.cnr.it
L. Pyenson and S. Sheets-Pyenson, Servants of Nature: A History of Scientific Institutions, Enterprises, and Sensibilities, W.W. Norton & Company, New York, 1999, pp. XV+496.
The report on this UNISPACE III Conference is available electronically at http://www.un.or.at/OOSA; as part of the Technical Forum of UNISPACE III, comprising 38 scientific activities, an IAU/COSPAR/UN Special Workshop on Education in Astronomy and Basic Space Science was held, leading to conclusions and proposals contained in UN Document A/CONF.184/C.1/L.8.
# Determination of the weak phase $`\gamma =\mathrm{arg}(V_{ub}^{})`$
## 1 Theory of $`𝑩^\mathbf{\pm }\to 𝝅𝑲`$ decays
The hadronic decays $`B\to \pi K`$ are mediated by a low-energy effective weak Hamiltonian, whose operators allow for three distinct classes of flavor topologies: QCD penguins, trees, and electroweak penguins. In the Standard Model the weak couplings associated with these topologies are known. From the measured branching ratios one can deduce that QCD penguins dominate the $`B\to \pi K`$ decay amplitudes , whereas trees and electroweak penguins are subleading and of a similar strength . The theoretical description of the two charged modes $`B^\pm \to \pi ^\pm K^0`$ and $`B^\pm \to \pi ^0K^\pm `$ exploits the fact that the amplitudes for these processes differ in a pure isospin amplitude $`A_{3/2}`$, defined as the matrix element of the isovector part of the effective Hamiltonian between a $`B`$ meson and the $`\pi K`$ isospin eigenstate with $`I=\frac{3}{2}`$. In the Standard Model the parameters of this amplitude are determined, up to an overall strong phase $`\varphi `$, in the limit of SU(3) flavor symmetry . Using the QCD factorization theorem proved in , the SU(3)-breaking corrections can be calculated in a model-independent way up to nonfactorizable terms that are power-suppressed in $`\mathrm{\Lambda }/m_b`$ and vanish in the heavy-quark limit.
A convenient parameterization of the decay amplitudes $`𝒜_{+0}\equiv 𝒜(B^+\pi ^+K^0)`$ and $`𝒜_{0+}\equiv \sqrt{2}𝒜(B^+\pi ^0K^+)`$ is
$`𝒜_{+0}`$ $`=`$ $`P(1-\epsilon _ae^{i\gamma }e^{i\eta }),`$ (1)
$`𝒜_{0+}`$ $`=`$ $`P\left[1-\epsilon _ae^{i\gamma }e^{i\eta }-\epsilon _{3/2}e^{i\varphi }(e^{i\gamma }-\delta _{\mathrm{EW}})\right],`$
where $`P`$ is the dominant penguin amplitude defined as the sum of all terms in the $`B^+\pi ^+K^0`$ amplitude not proportional to $`e^{i\gamma }`$, $`\eta `$ and $`\varphi `$ are strong phases, and $`\epsilon _a`$, $`\epsilon _{3/2}`$ and $`\delta _{\mathrm{EW}}`$ are real hadronic parameters. The weak phase $`\gamma `$ changes sign under a CP transformation, whereas all other parameters stay invariant.
Based on a naive quark-diagram analysis one would not expect the $`B^+\pi ^+K^0`$ amplitude to receive a contribution from $`\overline{b}\overline{u}u\overline{s}`$ tree topologies; however, such a contribution can be induced through final-state rescattering or annihilation contributions . They are parameterized by $`\epsilon _a=O(\lambda ^2)`$. In the heavy-quark limit this parameter can be calculated and is found to be very small, $`\epsilon _a\approx 2\%`$ . In the future, it will be possible to put upper and lower bounds on $`\epsilon _a`$ by comparing the CP-averaged branching ratios for the decays $`B^\pm \pi ^\pm K^0`$ and $`B^\pm K^\pm \overline{K}^0`$ . Below we assume $`|\epsilon _a|\le 0.1`$; however, our results will be almost insensitive to this assumption.
The terms proportional to $`\epsilon _{3/2}`$ in (1) parameterize the isospin amplitude $`A_{3/2}`$. The weak phase $`e^{i\gamma }`$ enters through the tree process $`\overline{b}\overline{u}u\overline{s}`$, whereas the quantity $`\delta _{\mathrm{EW}}`$ describes the effects of electroweak penguins. The parameter $`\epsilon _{3/2}`$ measures the relative strength of tree and QCD penguin contributions. Information about it can be derived by using SU(3) flavor symmetry to relate the tree contribution to the isospin amplitude $`A_{3/2}`$ to the corresponding contribution in the decay $`B^+\pi ^+\pi ^0`$. Since the final state $`\pi ^+\pi ^0`$ has isospin $`I=2`$, the amplitude for this process does not receive any contribution from QCD penguins. Moreover, electroweak penguins in $`\overline{b}\overline{d}q\overline{q}`$ transitions are negligibly small. We define a related parameter $`\overline{\epsilon }_{3/2}`$ by writing $`\epsilon _{3/2}=\overline{\epsilon }_{3/2}\sqrt{1-2\epsilon _a\mathrm{cos}\eta \mathrm{cos}\gamma +\epsilon _a^2}`$, so that the two quantities agree in the limit $`\epsilon _a\to 0`$. In the SU(3) limit this new parameter can be determined experimentally from the relation
$$\overline{\epsilon }_{3/2}=R_1\left|\frac{V_{us}}{V_{ud}}\right|\left[\frac{2\text{B}(B^\pm \pi ^\pm \pi ^0)}{\text{B}(B^\pm \pi ^\pm K^0)}\right]^{1/2}.$$
(2)
SU(3)-breaking corrections are described by the factor $`R_1=1.22\pm 0.05`$, which can be calculated in a model-independent way using the QCD factorization theorem for nonleptonic decays . The quoted error is an estimate of the theoretical uncertainty due to corrections of $`O(\frac{1}{N_c}\frac{m_s}{m_b})`$. Using preliminary data reported by the CLEO Collaboration to evaluate the ratio of the CP-averaged branching ratios in (2) we obtain
$$\overline{\epsilon }_{3/2}=0.21\pm 0.06_{\mathrm{exp}}\pm 0.01_{\mathrm{th}}.$$
(3)
With a better measurement of the branching ratios the uncertainty in $`\overline{\epsilon }_{3/2}`$ will be reduced significantly.
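As a rough illustration of how $`\overline{\epsilon }_{3/2}`$ follows from the data via (2), a short numerical sketch may help; the branching ratios below are illustrative placeholders of the right order of magnitude, not the CLEO values used in the text.

```python
import math

# Evaluate eq. (2): eps_bar_3/2 = R1 * |Vus/Vud| * sqrt(2 B(pi pi0) / B(pi K0)).
# All inputs except R1 are assumed, illustrative numbers.
R1 = 1.22                  # SU(3)-breaking factor quoted in the text
Vus_over_Vud = 0.22 / 0.975
B_pipi0 = 0.54e-5          # B(B+- -> pi+- pi0), placeholder
B_piK0 = 1.4e-5            # B(B+- -> pi+- K0), placeholder

eps_32 = R1 * Vus_over_Vud * math.sqrt(2.0 * B_pipi0 / B_piK0)
print(f"eps_bar_3/2 = {eps_32:.2f}")   # ~0.24 for these inputs
```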
Finally, the parameter
$`\delta _{\mathrm{EW}}`$ $`=`$ $`R_2\left|{\displaystyle \frac{V_{cb}^{*}V_{cs}}{V_{ub}^{*}V_{us}}}\right|{\displaystyle \frac{\alpha }{8\pi }}{\displaystyle \frac{x_t}{\mathrm{sin}^2\theta _W}}\left(1+{\displaystyle \frac{3\mathrm{ln}x_t}{x_t-1}}\right)`$ (4)
$`=`$ $`(0.64\pm 0.09)\times {\displaystyle \frac{0.085}{|V_{ub}/V_{cb}|}},`$
with $`x_t=(m_t/m_W)^2`$, describes the ratio of electroweak penguin and tree contributions to the isospin amplitude $`A_{3/2}`$. In the SU(3) limit it is calculable in terms of Standard Model parameters . SU(3)-breaking as well as small electromagnetic corrections are accounted for by the quantity $`R_2=0.92\pm 0.09`$ . The error quoted in (4) includes the uncertainty in the top-quark mass.
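For orientation, a minimal numerical evaluation of (4) follows; the top mass, coupling and CKM inputs are assumptions chosen only to reproduce the quoted central value.

```python
import math

# Evaluate eq. (4) for delta_EW; all numerical inputs below are assumed.
R2 = 0.92                      # SU(3)-breaking and electromagnetic factor
alpha = 1.0 / 129.0            # electromagnetic coupling at the weak scale
sin2_theta_W = 0.231
m_t, m_W = 170.0, 80.4         # GeV
Vub_over_Vcb = 0.085
Vcs_over_Vus = 0.974 / 0.2205  # |Vcs| / |Vus|

x_t = (m_t / m_W) ** 2
ckm = Vcs_over_Vus / Vub_over_Vcb           # |Vcb* Vcs / (Vub* Vus)|
loop = (x_t / sin2_theta_W) * (1.0 + 3.0 * math.log(x_t) / (x_t - 1.0))
delta_EW = R2 * ckm * (alpha / (8.0 * math.pi)) * loop
print(f"delta_EW = {delta_EW:.2f}")         # ~0.65 with these inputs
```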
Important observables in the study of the weak phase $`\gamma `$ are the ratio of the CP-averaged branching ratios in the two $`B^\pm \pi K`$ decay modes,
$$R_{*}=\frac{\text{B}(B^\pm \pi ^\pm K^0)}{2\text{B}(B^\pm \pi ^0K^\pm )}=0.75\pm 0.28,$$
(5)
and a particular combination of the direct CP asymmetries,
$`\stackrel{~}{A}`$ $`=`$ $`{\displaystyle \frac{A_{\mathrm{CP}}(B^\pm \pi ^0K^\pm )}{R_{*}}}-A_{\mathrm{CP}}(B^\pm \pi ^\pm K^0)`$ (6)
$`=`$ $`-0.52\pm 0.42.`$
The experimental values of these quantities are derived using preliminary CLEO data reported in . The theoretical expressions for $`R_{*}`$ and $`\stackrel{~}{A}`$ obtained using the parameterization in (1) are
$`R_{*}^{-1}`$ $`=`$ $`1+2\overline{\epsilon }_{3/2}\mathrm{cos}\varphi (\delta _{\mathrm{EW}}-\mathrm{cos}\gamma )`$
$`+`$ $`\overline{\epsilon }_{3/2}^2(1-2\delta _{\mathrm{EW}}\mathrm{cos}\gamma +\delta _{\mathrm{EW}}^2)+O(\overline{\epsilon }_{3/2}\epsilon _a),`$
$`\stackrel{~}{A}`$ $`=`$ $`2\overline{\epsilon }_{3/2}\mathrm{sin}\gamma \mathrm{sin}\varphi +O(\overline{\epsilon }_{3/2}\epsilon _a).`$ (7)
Note that the rescattering effects described by $`\epsilon _a`$ are suppressed by a factor of $`\overline{\epsilon }_{3/2}`$ and thus reduced to the percent level. Explicit expressions for these contributions can be found in .
## 2 Lower bound on $`\gamma `$ and constraint in the $`(\overline{\rho },\overline{\eta })`$ plane
There are several strategies for exploiting the above relations. From a measurement of the ratio $`R_{*}`$ alone a bound on $`\mathrm{cos}\gamma `$ can be derived, implying a nontrivial constraint on the Wolfenstein parameters $`\overline{\rho }`$ and $`\overline{\eta }`$ defining the apex of the unitarity triangle . Only CP-averaged branching ratios are needed for this purpose. Varying the strong phases $`\varphi `$ and $`\eta `$ independently we first obtain an upper bound on the inverse of $`R_{*}`$. Keeping terms of linear order in $`\epsilon _a`$ yields
$`R_{*}^{-1}`$ $`\le `$ $`\left(1+\overline{\epsilon }_{3/2}|\delta _{\mathrm{EW}}-\mathrm{cos}\gamma |\right)^2+\overline{\epsilon }_{3/2}^2\mathrm{sin}^2\gamma `$ (8)
$`+2\overline{\epsilon }_{3/2}|\epsilon _a|\mathrm{sin}^2\gamma .`$
Provided $`R_{*}`$ is significantly smaller than 1, this bound implies an exclusion region for $`\mathrm{cos}\gamma `$ which becomes larger the smaller the values of $`R_{*}`$ and $`\overline{\epsilon }_{3/2}`$ are. It is convenient to consider instead of $`R_{*}`$ the related quantity
$$X_R=\frac{\sqrt{R_{*}^{-1}}-1}{\overline{\epsilon }_{3/2}}=0.72\pm 0.98_{\mathrm{exp}}\pm 0.03_{\mathrm{th}}.$$
(9)
Because of the theoretical factor $`R_1`$ entering the definition of $`\overline{\epsilon }_{3/2}`$ in (2) this is, strictly speaking, not an observable. However, the theoretical uncertainty in $`X_R`$ is so much smaller than the present experimental error that it can be ignored. The advantage of presenting our results in terms of $`X_R`$ rather than $`R_{*}`$ is that the leading dependence on $`\overline{\epsilon }_{3/2}`$ cancels out, leading to the simple bound $`|X_R|\le |\delta _{\mathrm{EW}}-\mathrm{cos}\gamma |+O(\overline{\epsilon }_{3/2},\epsilon _a)`$.
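A short sketch may make the interplay between (8) and (9) concrete; the parameter values are those quoted in the text, and the function below simply maximizes (8) over the strong phases.

```python
import math

# X_R from eq. (9) and the Standard Model upper bound on |X_R| that follows
# from maximizing eq. (8) over the strong phases.
def X_R(R_star, eps_32=0.21):
    return (1.0 / math.sqrt(R_star) - 1.0) / eps_32

def X_R_bound(gamma_deg, delta_EW=0.64, eps_32=0.21, eps_a=0.02):
    g = math.radians(gamma_deg)
    rhs = ((1.0 + eps_32 * abs(delta_EW - math.cos(g))) ** 2
           + eps_32 ** 2 * math.sin(g) ** 2
           + 2.0 * eps_32 * eps_a * math.sin(g) ** 2)
    return (math.sqrt(rhs) - 1.0) / eps_32

print(round(X_R(0.75), 2))                    # ~0.74, cf. the central value in (9)
for gamma in (0, 45, 90, 135, 180):
    print(gamma, round(X_R_bound(gamma), 2))  # bound grows towards gamma = 180
```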
In Figure 1 we show the upper bound on $`X_R`$ as a function of $`|\gamma |`$, obtained by varying the input parameters in the intervals $`0.15\le \overline{\epsilon }_{3/2}\le 0.27`$ and $`0.49\le \delta _{\mathrm{EW}}\le 0.79`$ (corresponding to using $`|V_{ub}/V_{cb}|=0.085\pm 0.015`$ in (4)). Note that the effect of the rescattering contribution parameterized by $`\epsilon _a`$ is very small. The gray band shows the current value of $`X_R`$, which clearly has too large an error to provide any useful information on $`\gamma `$. (Unfortunately, the $`2\sigma `$ deviation from 1 indicated by a first preliminary CLEO result has not been confirmed by the present data.) The situation may change, however, once a more precise measurement of $`X_R`$ becomes available. For instance, if the current central value $`X_R=0.72`$ were confirmed, it would imply the bound $`|\gamma |>75^{\circ }`$, marking a significant improvement over the indirect limit $`|\gamma |>37^{\circ }`$ inferred from the global analysis of the unitarity triangle including information from $`K`$-$`\overline{K}`$ mixing .
So far, as in previous work, we have used the inequality (8) to derive a lower bound on $`|\gamma |`$. However, a large part of the uncertainty in the value of $`\delta _{\mathrm{EW}}`$, and thus in the resulting bound on $`|\gamma |`$, comes from the present large error on $`|V_{ub}|`$. Since this is not a hadronic uncertainty, it is appropriate to separate it and turn (8) into a constraint on the Wolfenstein parameters $`\overline{\rho }`$ and $`\overline{\eta }`$. To this end, we use that $`\mathrm{cos}\gamma =\overline{\rho }/\sqrt{\overline{\rho }^2+\overline{\eta }^2}`$ by definition, and $`\delta _{\mathrm{EW}}=(0.24\pm 0.03)/\sqrt{\overline{\rho }^2+\overline{\eta }^2}`$ from (4). The solid lines in Figure 2 show the resulting constraint in the $`(\overline{\rho },\overline{\eta })`$ plane obtained for the representative values $`X_R=0.5`$, 0.75, 1.0, 1.25 (from right to left), which for $`\overline{\epsilon }_{3/2}=0.21`$ would correspond to $`R_{*}=0.82`$, 0.75, 0.68, 0.63, respectively. Values to the right of these lines are excluded. For comparison, the dashed circles show the constraint arising from the measurement of the ratio $`|V_{ub}/V_{cb}|=0.085\pm 0.015`$ in semileptonic $`B`$ decays, and the dashed-dotted line shows the bound implied by the present experimental limit on the mass difference $`\mathrm{\Delta }m_s`$ in the $`B_s`$ system . Values to the left of this line are excluded. It is evident from the figure that the bound resulting from a measurement of the ratio $`X_R`$ in $`B^\pm \pi K`$ decays may be very nontrivial and, in particular, may eliminate the possibility that $`\gamma =0`$. The combination of this bound with information from semileptonic decays and $`B`$-$`\overline{B}`$ mixing alone would then determine the Wolfenstein parameters $`\overline{\rho }`$ and $`\overline{\eta }`$ within narrow ranges (an observation of CP violation, such as the measurement of $`ϵ_K`$ in $`K`$-$`\overline{K}`$ mixing or $`\mathrm{sin}2\beta `$ in $`BJ/\psi K_S`$ decays, is however needed to fix the sign of $`\overline{\eta }`$), and in the context of the CKM model would prove the existence of direct CP violation in $`B`$ decays.
## 3 Extraction of $`\gamma `$
Ultimately, the goal is of course not only to derive a bound on $`\gamma `$ but to determine this parameter directly from the data. This requires fixing the strong phase $`\varphi `$ in (1), which can be achieved either through the measurement of a CP asymmetry or with the help of theory. A strategy for an experimental determination of $`\gamma `$ from $`B^\pm \pi K`$ decays has been suggested in . It generalizes a method proposed by Gronau, Rosner and London to include the effects of electroweak penguins. The approach has later been refined to account for rescattering contributions to the $`B^\pm \pi ^\pm K^0`$ decay amplitudes . Before discussing this method, we will first illustrate an easier strategy for a theory-guided determination of $`\gamma `$ based on the QCD factorization theorem for nonleptonic decays . This method does not require any measurement of a CP asymmetry.
### 3.1 Theory-guided determination
In the previous section the theoretical predictions for the nonleptonic $`B\pi K`$ decay amplitudes obtained using the QCD factorization theorem were used in a minimal way, i.e., only to calculate the size of the SU(3)-breaking effects parameterized by $`R_1`$ and $`R_2`$. The resulting bound on $`\gamma `$ and the corresponding constraint in the $`(\overline{\rho },\overline{\eta })`$ plane are therefore theoretically very clean. However, they are only useful if the value of $`X_R`$ is found to be larger than about 0.5 (see Figure 1), in which case values of $`|\gamma |`$ below $`65^{\circ }`$ are excluded. If it turned out that $`X_R<0.5`$, then it would be possible to satisfy the inequality (8) also for small values of $`\gamma `$, however, at the price of having a very large strong phase, $`\varphi \approx 180^{\circ }`$. But this possibility can be discarded based on the model-independent prediction that
$$\varphi =O[\alpha _s(m_b),\mathrm{\Lambda }/m_b].$$
(10)
A direct calculation of this phase to leading power in $`\mathrm{\Lambda }/m_b`$ yields $`\varphi \approx -11^{\circ }`$ . Using the fact that $`\varphi `$ is parametrically small, we can exploit a measurement of the ratio $`X_R`$ to obtain a determination of $`|\gamma |`$ – corresponding to an allowed region in the $`(\overline{\rho },\overline{\eta })`$ plane – rather than just a bound. This determination is unique up to a sign. Note that for small values of $`\varphi `$ the impact of the strong phase in the expression for $`R_{*}`$ in (7) is a second-order effect. As long as $`|\varphi |\le \sqrt{2\mathrm{\Delta }\overline{\epsilon }_{3/2}/\overline{\epsilon }_{3/2}}`$, the uncertainty in $`\mathrm{cos}\varphi `$ has a much smaller effect than the uncertainty in $`\overline{\epsilon }_{3/2}`$. With the present value of $`\overline{\epsilon }_{3/2}`$ this is the case as long as $`|\varphi |\le 43^{\circ }`$. We believe it is a safe assumption to take $`|\varphi |<25^{\circ }`$ (i.e., more than twice the value obtained to leading order in $`\mathrm{\Lambda }/m_b`$), so that $`\mathrm{cos}\varphi >0.9`$.
Solving the equation for $`R_{*}`$ in (7) for $`\mathrm{cos}\gamma `$, and including the corrections of $`O(\epsilon _a)`$, we find
$`\mathrm{cos}\gamma `$ $`=`$ $`\delta _{\mathrm{EW}}-{\displaystyle \frac{X_R+\frac{1}{2}\overline{\epsilon }_{3/2}(X_R^2-1+\delta _{\mathrm{EW}}^2)}{\mathrm{cos}\varphi +\overline{\epsilon }_{3/2}\delta _{\mathrm{EW}}}}`$ (11)
$`+{\displaystyle \frac{\epsilon _a\mathrm{cos}\eta \mathrm{sin}^2\gamma }{\mathrm{cos}\varphi +\overline{\epsilon }_{3/2}\delta _{\mathrm{EW}}}},`$
where we have set $`\mathrm{cos}\varphi =1`$ in the numerator of the $`O(\epsilon _a)`$ term. Using the QCD factorization theorem one finds that $`\epsilon _a\mathrm{cos}\eta \approx -0.02`$ in the heavy-quark limit , and we assign a 100% uncertainty to this estimate. In evaluating the result (11) we scan the parameters in the ranges $`0.15\le \overline{\epsilon }_{3/2}\le 0.27`$, $`0.55\le \delta _{\mathrm{EW}}\le 0.73`$, $`-25^{\circ }\le \varphi \le 25^{\circ }`$, and $`-0.04\le \epsilon _a\mathrm{cos}\eta \mathrm{sin}^2\gamma \le 0`$. Figure 3 shows the allowed regions in the $`(\overline{\rho },\overline{\eta })`$ plane for the representative values $`X_R=0.25`$, 0.75, and 1.25 (from right to left). We stress that with this method a useful constraint on the Wolfenstein parameters is obtained for any value of $`X_R`$.
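As a sketch of how such a scan works in practice, the snippet below evaluates (11) over the parameter ranges just quoted (a coarse grid rather than a full scan):

```python
import math

# Evaluate eq. (11) for cos(gamma), scanning the hadronic parameters over
# the ranges quoted above. 'eps_a_term' stands for eps_a*cos(eta)*sin^2(gamma).
def cos_gamma(X_R, eps_32, delta_EW, phi_deg, eps_a_term):
    denom = math.cos(math.radians(phi_deg)) + eps_32 * delta_EW
    return (delta_EW
            - (X_R + 0.5 * eps_32 * (X_R**2 - 1.0 + delta_EW**2)) / denom
            + eps_a_term / denom)

X_R = 0.72
values = [cos_gamma(X_R, e, d, p, a)
          for e in (0.15, 0.21, 0.27)
          for d in (0.55, 0.64, 0.73)
          for p in (-25.0, 0.0, 25.0)
          for a in (-0.04, 0.0)]
print(f"cos(gamma) in [{min(values):+.2f}, {max(values):+.2f}]")
# the central point gives cos(gamma) ~ 0.01, i.e. |gamma| ~ 89 degrees
```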
### 3.2 Model-independent determination
It is important that, once more precise data on $`B^\pm \pi K`$ decays become available, it will be possible to test the prediction of a small strong phase $`\varphi `$ experimentally. To this end, one must determine the CP asymmetry $`\stackrel{~}{A}`$ defined in (6) in addition to the ratio $`R_{*}`$. From (1) it follows that for fixed values of $`\overline{\epsilon }_{3/2}`$ and $`\delta _{\mathrm{EW}}`$ the quantities $`R_{*}`$ and $`\stackrel{~}{A}`$ define contours in the $`(\gamma ,\varphi )`$ plane, whose intersections determine the two phases up to possible discrete ambiguities . Figure 4 shows these contours for some representative values, assuming $`\overline{\epsilon }_{3/2}=0.21`$, $`\delta _{\mathrm{EW}}=0.64`$, and $`\epsilon _a=0`$. In practice, including the uncertainties in the values of these parameters changes the contour lines into contour bands. Typically, the spread of the bands induces an error in the determination of $`\gamma `$ of about $`10^{\circ }`$ . (A precise determination of this error requires knowledge of the actual values of the observables. Gronau and Pirjol find a larger error for the special case where the product $`|\mathrm{sin}\gamma \mathrm{sin}\varphi |`$ is very close to 1, which however is highly disfavored because of the expected smallness of the strong phase $`\varphi `$.) In the most general case there are up to eight discrete solutions for the two phases, four of which are related to the other four by a sign change $`(\gamma ,\varphi )\to (-\gamma ,-\varphi )`$. However, for typical values of $`R_{*}`$ it turns out that often only four solutions exist, two of which are related to the other two by a sign change. The theoretical prediction that $`\varphi `$ is small implies that solutions should exist where the contours intersect close to the lower portion in the plot. Other solutions with large $`\varphi `$ are strongly disfavored. Note that according to (1) the sign of the CP asymmetry $`\stackrel{~}{A}`$ fixes the relative sign between the two phases $`\gamma `$ and $`\varphi `$. If we trust the theoretical prediction that $`\varphi `$ is negative , it follows that in most cases there remains only a unique solution for $`\gamma `$, i.e., the CP-violating phase $`\gamma `$ can be determined without any discrete ambiguity.
Consider, as an example, the hypothetical case where $`R_{*}=0.8`$ and $`\stackrel{~}{A}=-15\%`$. Figure 4 then allows the four solutions where $`(\gamma ,\varphi )\approx (\pm 82^{\circ },\mp 21^{\circ })`$ or $`(\pm 158^{\circ },\mp 78^{\circ })`$. The second pair of solutions is strongly disfavored because of the large values of the strong phase $`\varphi `$. From the first pair of solutions, the one with $`\varphi \approx -21^{\circ }`$ is closest to our theoretical expectation that $`\varphi \approx -11^{\circ }`$, hence leaving $`\gamma \approx 82^{\circ }`$ as the unique solution.
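A crude numerical cross-check of this example is easy to set up: scan a $`(\gamma ,\varphi )`$ grid for points where both expressions in (7) reproduce the hypothetical measurements (a contour plot, as in Figure 4, would show the same intersections).

```python
import math

# Locate intersections of the R_* and A~ contours of eq. (7) on a grid,
# for the hypothetical inputs R_* = 0.8 and A~ = -15% used in the text.
eps, dew = 0.21, 0.64
R_target, A_target = 0.80, -0.15

def R_star(g, p):
    inv = (1.0 + 2.0 * eps * math.cos(p) * (dew - math.cos(g))
           + eps**2 * (1.0 - 2.0 * dew * math.cos(g) + dew**2))
    return 1.0 / inv

def A_tilde(g, p):
    return 2.0 * eps * math.sin(g) * math.sin(p)

solutions = [(gd, pd)
             for gd in range(-180, 181) for pd in range(-90, 91)
             for g, p in [(math.radians(gd), math.radians(pd))]
             if abs(R_star(g, p) - R_target) < 5e-3
             and abs(A_tilde(g, p) - A_target) < 5e-3]
print(solutions)   # clusters near (+-82, -+21) and (+-158, -+78)
```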
## 4 Sensitivity to New Physics
In the presence of New Physics the theoretical description of $`B^\pm \pi K`$ decays becomes more complicated. In particular, new CP-violating contributions to the decay amplitudes may be induced. A detailed analysis of such effects has been presented in . A convenient and completely general parameterization of the two amplitudes in (1) is obtained by replacing
$`P`$ $`\to `$ $`P^{\prime },\epsilon _ae^{i\gamma }e^{i\eta }\to i\rho e^{i\varphi _\rho },`$
$`\delta _{\mathrm{EW}}`$ $`\to `$ $`ae^{i\varphi _a}+ibe^{i\varphi _b},`$ (12)
where $`\rho `$, $`a`$, $`b`$ are real hadronic parameters, and $`\varphi _\rho `$, $`\varphi _a`$, $`\varphi _b`$ are strong phases. The terms $`i\rho `$ and $`ib`$ change sign under a CP transformation. New Physics effects parameterized by $`P^{\prime }`$ and $`\rho `$ are isospin conserving, while those described by $`a`$ and $`b`$ violate isospin symmetry. Note that the parameter $`P^{\prime }`$ cancels in all ratios of branching ratios and thus does not affect the quantities $`R_{*}`$ and $`X_R`$ as well as any CP asymmetry. Because the ratio $`R_{*}`$ in (5) would be 1 in the limit of isospin symmetry, it is particularly sensitive to isospin-violating New Physics contributions.
New Physics can affect the bound on $`\gamma `$ derived from (8) as well as the extraction of $`\gamma `$ using the strategies discussed above. We will discuss these two possibilities in turn.
### 4.1 Effects on the bound on $`\gamma `$
The upper bound on $`R_{*}^{-1}`$ in (8) and the corresponding bound on $`X_R`$ shown in Figure 1 are model-independent results valid in the Standard Model. Note that the extremal value of $`R_{*}^{-1}`$ is such that $`|X_R|\le (1+\delta _{\mathrm{EW}})`$ irrespective of $`\gamma `$. A value of $`|X_R|`$ exceeding this bound would be a clear signal for New Physics .
Consider first the case where New Physics may induce arbitrary CP-violating contributions to the $`B\pi K`$ decay amplitudes, while preserving isospin symmetry. Then the only change with respect to the Standard Model is that the parameter $`\rho `$ may no longer be as small as $`O(\epsilon _a)`$. Varying the strong phases $`\varphi `$ and $`\varphi _\rho `$ independently, and allowing for an arbitrarily large New Physics contribution to $`\rho `$, one can derive the bound
$$|X_R|\le \sqrt{1-2\delta _{\mathrm{EW}}\mathrm{cos}\gamma +\delta _{\mathrm{EW}}^2}\le 1+\delta _{\mathrm{EW}}.$$
(13)
Note that the extremal value is the same as in the Standard Model, i.e., isospin-conserving New Physics effects cannot lead to a value of $`|X_R|`$ exceeding $`(1+\delta _{\mathrm{EW}})`$. For intermediate values of $`\gamma `$ the Standard Model bound on $`X_R`$ is weakened; but even for large values $`\rho =O(1)`$, corresponding to a significant New Physics contribution to the decay amplitudes, the effect is small.
If both isospin-violating and isospin-conserving New Physics contributions are present and involve new CP-violating phases, the analysis becomes more complicated. Still, it is possible to derive model-independent bounds on $`X_R`$. Allowing for arbitrary values of $`\rho `$ and all strong phases, one obtains
$`|X_R|`$ $`\le `$ $`\sqrt{(|a|+|\mathrm{cos}\gamma |)^2+(|b|+|\mathrm{sin}\gamma |)^2}`$ (14)
$`\le `$ $`1+\sqrt{a^2+b^2}\le {\displaystyle \frac{2}{\overline{\epsilon }_{3/2}}}+X_R,`$
where the last inequality is relevant only in cases where $`\sqrt{a^2+b^2}\gg 1`$. The important point to note is that with isospin-violating New Physics contributions the value of $`|X_R|`$ can exceed the upper bound in the Standard Model by a potentially large amount. For instance, if $`\sqrt{a^2+b^2}`$ is twice as large as in the Standard Model, corresponding to a New Physics contribution to the decay amplitudes of only 10–15%, then $`|X_R|`$ could be as large as 2.6 as compared with the maximal value 1.8 allowed in the Standard Model. Also, in the most general case where $`b`$ and $`\rho `$ are nonzero, the maximal value $`|X_R|`$ can take is no longer restricted to occur at the endpoints $`\gamma =0^{\circ }`$ or $`180^{\circ }`$, which are disfavored by the global analysis of the unitarity triangle . Rather, $`|X_R|`$ would take its maximal value if $`|\mathrm{tan}\gamma |=|\rho |=|b/a|`$.
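The first inequality in (14) is easy to explore numerically; the sketch below maximizes it over $`\gamma `$ for a Standard Model-like point and for couplings twice as large (using the upper end $`\delta _{\mathrm{EW}}\approx 0.79`$ of the range quoted earlier).

```python
import math

# Maximize the first bound in eq. (14) over gamma for given (a, b).
def X_R_max(a, b):
    return max(math.sqrt((abs(a) + abs(math.cos(math.radians(g))))**2
                         + (abs(b) + abs(math.sin(math.radians(g))))**2)
               for g in range(0, 181))

print(round(X_R_max(0.79, 0.0), 2))   # ~1.8: Standard Model-like maximum
print(round(X_R_max(1.58, 0.0), 2))   # ~2.6: couplings twice as large
```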
The present experimental value of $`X_R`$ in (9) has too large an error to determine whether there is any deviation from the Standard Model. If $`X_R`$ turns out to be larger than 1 (i.e., at least one third of a standard deviation above its current central value), then an interpretation of this result in the Standard Model would require a large value $`|\gamma |>91^{\circ }`$ (see Figure 1), which would be difficult to accommodate in view of the upper bound implied by the experimental constraint on $`B_s`$-$`\overline{B}_s`$ mixing, thus providing evidence for New Physics. If $`X_R>1.3`$, one could go a step further and conclude that the New Physics must necessarily violate isospin .
### 4.2 Effects on the determination of $`\gamma `$
A value of the observable $`R_{*}`$ violating the bound (8) would be an exciting hint for New Physics. However, even if a future precise measurement gives a value that is consistent with the Standard Model bound, $`B^\pm \pi K`$ decays provide an excellent testing ground for physics beyond the Standard Model. This is so because New Physics may cause a significant shift in the value of $`\gamma `$ extracted using the strategies discussed in Section 3, leading to inconsistencies when this value is compared with other determinations of $`\gamma `$.
A global fit of the unitarity triangle combining information from semileptonic $`B`$ decays, $`B`$-$`\overline{B}`$ mixing, CP violation in the kaon system, and mixing-induced CP violation in $`BJ/\psi K_S`$ decays provides information on $`\gamma `$ which in a few years will determine its value within a rather narrow range . Such an indirect determination could be complemented by direct measurements of $`\gamma `$ using, e.g., $`BDK^{(*)}`$ decays (see below), or using the triangle relation $`\gamma =180^{\circ }-\alpha -\beta `$ combined with a measurement of $`\alpha `$. We will assume that a discrepancy of more than $`25^{\circ }`$ between the “true” $`\gamma =\text{arg}(V_{ub}^{*})`$ and the value $`\gamma _{\pi K}`$ extracted in $`B^\pm \pi K`$ decays will be observable after a few years of operation at the $`B`$ factories. This sets the benchmark for sensitivity to New Physics effects.
In order to illustrate how big an effect New Physics could have on the extracted value of $`\gamma `$ we consider the simplest case where there are no new CP-violating couplings. Then all New Physics contributions in (12) are parameterized by the single parameter $`a_{\mathrm{NP}}\equiv a-\delta _{\mathrm{EW}}`$. A more general discussion can be found in . We also assume for simplicity that the strong phase $`\varphi `$ is small, as suggested by (10). In this case the difference between the value $`\gamma _{\pi K}`$ extracted from $`B^\pm \pi K`$ decays and the “true” value of $`\gamma `$ is to a good approximation given by
$$\mathrm{cos}\gamma _{\pi K}\simeq \mathrm{cos}\gamma -a_{\mathrm{NP}}.$$
(15)
In Figure 5 we show contours of constant $`X_R`$ versus $`\gamma `$ and $`a`$, assuming without loss of generality that $`\gamma >0`$. Obviously, even a moderate New Physics contribution to the parameter $`a`$ can induce a large shift in $`\gamma `$. Note that the present central value of $`X_R\approx 0.7`$ is such that values of $`a`$ less than the Standard Model result $`a\approx 0.64`$ are disfavored, since they would require values of $`\gamma `$ exceeding $`100^{\circ }`$, in conflict with the global analysis of the unitarity triangle .
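The size of the effect follows directly from (15); a two-line sketch (the reference value $`\gamma =76^{\circ }`$ is an arbitrary choice for illustration):

```python
import math

# Shift of the extracted angle, eq. (15): cos(gamma_piK) ~ cos(gamma) - a_NP.
def gamma_piK(gamma_deg, a_NP):
    c = math.cos(math.radians(gamma_deg)) - a_NP
    return math.degrees(math.acos(max(-1.0, min(1.0, c))))  # clipped to physical range

for a_NP in (-0.3, -0.1, 0.1, 0.3):
    print(f"a_NP = {a_NP:+.1f}  ->  gamma_piK = {gamma_piK(76.0, a_NP):.0f} deg")
```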
### 4.3 Survey of New Physics models
In , we have explored how physics beyond the Standard Model could affect purely hadronic FCNC transitions of the type $`\overline{b}\overline{s}q\overline{q}`$ focusing, in particular, on isospin violation. Unlike in the Standard Model, where isospin-violating effects in these processes are suppressed by electroweak gauge couplings or small CKM matrix elements, in many New Physics scenarios these effects are not parametrically suppressed relative to isospin-conserving FCNC processes. In the language of effective weak Hamiltonians this implies that the Wilson coefficients of QCD and electroweak penguin operators are of a similar magnitude. For a large class of New Physics models we found that the coefficients of the electroweak penguin operators are, in fact, due to “trojan” penguins, which are neither related to penguin diagrams nor of electroweak origin.
Specifically, we have considered: (a) models with tree-level FCNC couplings of the $`Z`$ boson, extended gauge models with an extra $`Z^{\prime }`$ boson, supersymmetric models with broken R-parity; (b) supersymmetric models with R-parity conservation; (c) two-Higgs–doublet models, and models with anomalous gauge-boson couplings. Some of these models have also been investigated in . In case (a), the resulting electroweak penguin coefficients can be much larger than in the Standard Model because they are due to tree-level processes. In case (b), these coefficients can compete with the ones of the Standard Model because they arise from strong-interaction box diagrams, which scale relative to the Standard Model like $`(\alpha _s/\alpha )(m_W^2/m_{\mathrm{SUSY}}^2)`$. In models (c), on the other hand, isospin-violating New Physics effects are not parametrically enhanced and are generally smaller than in the Standard Model.
For each New Physics model we have explored which region of parameter space can be probed by the $`B^\pm \pi K`$ observables, and how big a departure from the Standard Model predictions one can expect under realistic circumstances, taking into account all constraints on the model parameters implied by other processes. Table 1 summarizes our estimates of the maximal isospin-violating contributions to the decay amplitudes, as parameterized by $`|a_{\mathrm{NP}}|`$. They are the potentially most important source of New Physics effects in $`B^\pm \pi K`$ decays. For comparison, we recall that in the Standard Model $`a\approx 0.64`$. Also shown are the corresponding maximal values of the difference $`|\gamma _{\pi K}-\gamma |`$. As noted above, in models with tree-level FCNC couplings New Physics effects can be dramatic, whereas in supersymmetric models with R-parity conservation isospin-violating loop effects can be competitive with the Standard Model. In the case of supersymmetric models with R-parity violation the bound (14) implies interesting limits on certain combinations of the trilinear couplings $`\lambda _{ijk}^{\prime }`$ and $`\lambda _{ijk}^{\prime \prime }`$, as discussed in .
## 5 Alternative approaches and recent developments
We will now review recent developments regarding other approaches to determining $`\gamma `$, focusing mainly on proposals that can be pursued at the first-generation $`B`$ factories.
### 5.1 Variants of the $`B^\pm \pi K`$ strategy
The first proposal for constraining $`\gamma `$ using CP-averaged $`B\pi K`$ branching ratios was made by Fleischer and Mannel , who suggested considering the ratio
$$R=\frac{\tau (B^+)}{\tau (B^0)}\frac{\text{B}(B^0\pi ^{\mp }K^\pm )}{\text{B}(B^\pm \pi ^\pm K^0)}=1.11\pm 0.33.$$
(16)
Neglecting the small rescattering contribution to the $`B^\pm \pi ^\pm K^0`$ decay amplitudes as well as electroweak penguin contributions yields
$`R`$ $`\approx `$ $`1-2\epsilon _T\mathrm{cos}\varphi _T\mathrm{cos}\gamma +\epsilon _T^2`$ (17)
$`=`$ $`\mathrm{sin}^2\gamma +(\epsilon _T-\mathrm{cos}\varphi _T\mathrm{cos}\gamma )^2+\mathrm{sin}^2\varphi _T\mathrm{cos}^2\gamma `$
$`\ge `$ $`\mathrm{sin}^2\gamma ,`$
where $`\epsilon _T`$ is a real parameter of similar magnitude as $`\overline{\epsilon }_{3/2}`$, and $`\varphi _T`$ is a strong phase. If the ratio $`R`$ were found significantly less than 1, the above inequality would imply an exclusion region around $`\gamma =90^{\circ }`$.
Unlike the parameter $`\overline{\epsilon }_{3/2}`$ in $`B^\pm \pi K`$ decays, the quantity $`\epsilon _T`$ is not constrained by SU(3) symmetry and cannot be determined experimentally. The strategy proposed in is to eliminate this quantity in deriving a bound on $`\gamma `$. This weakens the handle on the weak phase except for the particular case where $`\epsilon _T\approx \mathrm{cos}\varphi _T\mathrm{cos}\gamma `$. The neglect of electroweak penguin and rescattering corrections is questionable and has given rise to some criticism . Yet, although the bound (17) is theoretically not as clean as the corresponding bound (8) on the ratio $`R_{*}`$, a precise measurement of the ratio $`R`$ can provide for an interesting consistency check. Various refinements and extensions of the original Fleischer–Mannel strategy are discussed in .
Some authors have suggested eliminating the small rescattering contribution to the $`B^\pm \pi ^\pm K^0`$ decay amplitudes, parameterized by $`\epsilon _a`$ in (1), by assuming SU(3) symmetry and exploiting amplitude relations connecting $`B\pi K`$ and $`B\pi \pi `$ decays with other decay modes, such as $`B^+K^+\overline{K}^0`$, $`B^+\pi ^+\eta ^{(\prime )}`$, $`B_sK\pi `$ , or $`B^+K^+\eta ^{(\prime )}`$, $`B_s\pi ^0\eta ^{(\prime )}`$ . Note that approaches based on $`B_s`$ decays have to await second-generation $`B`$ factories at hadron colliders. Besides relying on the assumption of SU(3) flavor symmetry the above proposals suffer from theoretical uncertainties related to $`\eta `$-$`\eta ^{\prime }`$ mixing. Given that the rescattering contribution in question is expected to be very small , and that this expectation can be tested experimentally using $`B^\pm K^\pm \overline{K}^0`$ decays , neglecting $`\epsilon _a`$ or putting an upper bound on it will most likely be a better approximation than neglecting the potentially much larger SU(3)-breaking corrections in the above strategies.
### 5.2 SU(3) relations between $`B_d`$ and $`B_s`$ decay amplitudes
Fleischer has recently suggested using the U-spin ($`d\leftrightarrow s`$) subgroup of flavor SU(3) to derive relations between various decays into CP eigenstates, such as
$`B_dJ/\psi K_S`$ $`\leftrightarrow `$ $`B_sJ/\psi K_S,`$
$`B_dD_d^+D_d^{-}`$ $`\leftrightarrow `$ $`B_sD_s^+D_s^{-},`$
$`B_d\pi ^+\pi ^{-}`$ $`\leftrightarrow `$ $`B_sK^+K^{-}.`$ (18)
The first two relations provide access to the weak phase $`\gamma `$, while the third one is sensitive to both $`\beta `$ and $`\gamma `$. Although this strategy involves $`B_s`$ decays and thus cannot be pursued at the first-generation $`B`$ factories, we discuss it because of its general nature.
Consider the example of $`B_d,B_sJ/\psi K_S`$ decays, which are governed by an interference of tree and penguin topologies. The sensitivity to $`\gamma `$ arises from the presence of top- and up-quark penguins. A general parameterization of the decay amplitudes $`A_{d,s}\equiv 𝒜(B_{d,s}J/\psi K_S)`$ is
$`A_d`$ $`=`$ $`A^{\prime }\left(1+\mathrm{tan}^2\theta _Cϵ^{\prime }e^{i\varphi ^{\prime }}e^{i\gamma }\right),`$
$`A_s`$ $`=`$ $`A\left(1+ϵe^{i\varphi }e^{i\gamma }\right),`$ (19)
where $`\theta _C`$ is the Cabibbo angle, $`\varphi ^{(\prime )}`$ are strong phases, and the penguin contributions are proportional to parameters $`ϵ^{(\prime )}\approx 0.2`$. Exact U-spin symmetry would imply $`A^{\prime }=A`$, $`ϵ^{\prime }=ϵ`$, and $`\varphi ^{\prime }=\varphi `$. In that limit $`\gamma `$ could be determined (with discrete ambiguities) by measuring the direct and mixing-induced CP asymmetries $`A_{\mathrm{CP}}^{\mathrm{dir}}(B_sJ/\psi K_S)`$ and $`A_{\mathrm{CP}}^{\mathrm{mix}}(B_sJ/\psi K_S)`$ as well as the ratio of the CP-averaged $`B_d,B_sJ/\psi K_S`$ decay rates. These observables would suffice to fix $`ϵ`$, $`\varphi `$ and $`\gamma `$. Assuming $`A^{\prime }=A`$, this parameter cancels out.
This approach is interesting in that it is general and can be applied to several different decay modes . Assuming the theoretical uncertainties related to U-spin breaking can be controlled, it will provide several independent determinations of $`\beta `$ and $`\gamma `$. However, a question that needs to be addressed in future work is how important the SU(3)-breaking corrections leading to $`A^{\prime }\ne A`$ are, given that $`ϵ\approx 0.2`$ is expected to be a small parameter.
### 5.3 Dalitz-plot analysis in $`B^\pm \pi ^\pm \pi ^+\pi ^{-}`$ decays
There have been several proposals for obtaining information on the weak phases $`\alpha `$ and $`\gamma `$ from an analysis of the Dalitz plot in $`B3\pi `$ decays. In fact, the approach of Quinn and Snyder based on the decays $`B\rho \pi 3\pi `$ is considered to offer one of the most promising ways to measure $`\alpha `$ . The strategy of Bediaga et al. (see also ) for learning $`\gamma `$ is to fit the measured Dalitz distribution $`\text{d}^2\mathrm{\Gamma }/\text{d}m_1^2\text{d}m_2^2`$ in the decays $`B^+\pi ^+\pi ^+\pi ^{-}`$, where $`m_i^2=(p_{\pi _i^+}+p_{\pi ^{-}})^2`$, to an ansatz of the form
$$\left|\underset{i}{\sum }a_ie^{i\theta _i}F_i(m_1^2,m_2^2)\right|^2.$$
(20)
$`F_i(m_1^2,m_2^2)`$ are appropriate kinematic functions for resonance or continuum contributions. From the fit one extracts a set $`\{a_i,\theta _i\}`$ of real amplitudes and complex phases. Performing a similar fit to the CP-conjugated decays $`B^{-}\pi ^{-}\pi ^{-}\pi ^+`$ gives parameters $`\{a_i,\overline{\theta }_i\}`$. The complex phases are sums of strong and weak phases, and the latter ones are determined from the differences $`\varphi _i=\frac{1}{2}(\theta _i-\overline{\theta }_i)`$. The weak phase $`\gamma `$ enters through the interference of the $`\chi _c\pi ^\pm `$ resonance state with other resonance channels (e.g., $`\rho ^0\pi ^\pm `$, $`f_0\pi ^\pm `$, etc.) and nonresonant contributions. The associated CKM factors are $`V_{cb}^{*}V_{cd}`$ and $`V_{ub}^{*}V_{ud}`$, respectively.
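To make the ansatz (20) concrete, here is a minimal sketch of such an intensity model; the relativistic Breit-Wigner line shape and the resonance parameters are illustrative assumptions (real analyses use more refined shapes plus a nonresonant term).

```python
import numpy as np

# Coherent-sum intensity of eq. (20) with Breit-Wigner kinematic functions.
def breit_wigner(m2, m0, width):
    return 1.0 / (m0**2 - m2 - 1j * m0 * width)

def intensity(m1sq, m2sq, params):
    # params: (a_i, theta_i, m0_i, Gamma_i); here F_i depends on m1^2 only
    amp = sum(a * np.exp(1j * th) * breit_wigner(m1sq, m0, w)
              for a, th, m0, w in params)
    return np.abs(amp) ** 2

params = [(1.0, 0.0, 0.770, 0.150),   # rho(770)-like broad term (illustrative)
          (0.5, 1.2, 3.415, 0.010)]   # chi_c0-like narrow term (illustrative)
print(intensity(0.60, 1.0, params))   # GeV^2 units assumed
```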
A theoretical problem inherent in this approach is the “penguin pollution”, i.e., the fact that $`\overline{b}\overline{d}q\overline{q}`$ penguin transitions contaminate the analysis. If the penguin/tree ratio is assumed to be at most 20%, then the resulting error in the extraction of $`\gamma `$ is bound to be less than $`11^{\circ }`$ . Unfortunately, the recent data on $`B\pi K`$ and $`B\pi \pi `$ decays reported by the CLEO Collaboration suggest that the penguin/tree ratio may be significantly larger than 20%.
The feasibility of this method profits from the fact that no tagging is required (only charged $`B`$-meson decays are considered), the final state consists of three charged pions (no $`\pi ^0`$ reconstruction is needed), and a Dalitz plot analysis typically does not require very large data samples. The authors of estimate that with only 1000 events one could obtain a resolution of $`\mathrm{\Delta }\gamma \approx 20^{\circ }`$. Potential problems of the approach are that the size of the interference term depends on the yet unknown $`B^\pm \chi _c\pi ^\pm \pi ^\pm \pi ^+\pi ^{-}`$ branching ratio, and that contamination from nonresonant channels may, in the end, require larger data samples.
### 5.4 Extracting $`\gamma `$ in $`BDK^{(*)}`$ decays
$`BDK^{(*)}`$ decays were originally considered to be the “classical” way for determining $`\gamma `$. Later, it was realized that this is a very challenging route, which places high demands on experiment and theory. We discuss this strategy because it provides an extraction of $`\gamma `$ from tree processes alone, which is unlikely to be affected much by New Physics.
The original idea of Gronau and Wyler (see also ) was to use the interference of the amplitudes for the decays $`B^+\overline{D}^0K^+`$ and $`B^+D^0K^+`$ occurring if the charm meson is detected as a CP eigenstate $`D_1^0=\frac{1}{\sqrt{2}}(D^0+\overline{D}^0)`$. The first decay is due to the quark transition $`\overline{b}\overline{c}u\overline{s}`$ proportional to $`V_{cb}^{*}V_{us}`$, whereas the second one is due to the decay $`\overline{b}\overline{u}c\overline{s}`$ proportional to $`V_{ub}^{*}V_{cs}`$. The relative phase of these two combinations of CKM elements is $`\gamma `$. Ideally, one would measure the six rates for the decays $`B^+\overline{D}^0K^+`$, $`B^+D^0K^+`$, $`B^+D_1^0K^+`$, and their CP conjugates. Then, using isospin triangles, $`\gamma `$ could be determined in a theoretically clean way.
This strategy is hindered by the fact that it is not possible experimentally to determine the rate of the doubly Cabibbo-suppressed decay $`B^+D^0K^+`$ followed by $`D^0K^{-}\pi ^+`$, because its combined branching ratio is similar to that of the transition $`B^+\overline{D}^0K^+`$ followed by the doubly Cabibbo-suppressed decay $`\overline{D}^0K^{-}\pi ^+`$ . Several approaches have been suggested to circumvent this problem ; however, they are challenging from an experimental point of view because precise measurements of very small decay rates are required.
Recently, some authors have suggested using isospin relations combined with certain dynamical assumptions (the neglect of annihilation contributions relative to color-suppressed tree amplitudes) to eliminate the “difficult” $`B^+D^0K^+`$ and $`B^{-}\overline{D}^0K^{-}`$ rates in favor of six other $`B,\overline{B}DK`$ rates . Unfortunately, it is difficult to gauge the accuracy of the assumptions made in these proposals.
Considering the various options that have been discussed it appears that measuring $`\gamma `$ at the first-generation $`B`$ factories using $`BDK^{(*)}`$ decays is very challenging , and more demanding than a determination based on $`B^\pm \pi K`$ decays. On the other hand, to have two independent determinations of $`\gamma `$ from these two classes of decays would be extremely important. Whereas $`\gamma `$ measured in $`BDK^{(*)}`$ decays is likely to be the “true” phase of the CKM matrix, the angle $`\gamma _{\pi K}`$ determined in $`B^\pm \pi K`$ decays probes loop processes and may easily be affected by New Physics. As discussed in Section 4 and summarized in Table 1, comparing the two determinations would provide a very sensitive probe for physics beyond the Standard Model.
## 6 Conclusions
Among the strategies for determining the weak phase $`\gamma =\text{arg}(V_{ub}^{*})`$ of the quark mixing matrix, approaches based on rate measurements in $`B\pi K`$ decays play an important role and have received a lot of attention recently. The corresponding branching ratios are “large” and the final states are “easy” to detect experimentally. Using isospin, Fierz and flavor symmetries together with the fact that nonleptonic $`B`$ decays into two light mesons admit a heavy-quark expansion, a largely model-independent description of certain observables concerning the charged modes $`B^\pm \pi K`$ can be obtained. Various proposals exist for extracting information on $`\gamma `$ and on the Wolfenstein parameters $`\overline{\rho }`$ and $`\overline{\eta }`$ using these decays. In the future, this will allow us to determine $`\gamma `$ with a theoretical uncertainty of about $`10^{\circ }`$. When combined with an alternative measurement of $`\gamma `$ using other decays such as $`BDK^{(*)}`$, this will provide for a sensitive probe of physics beyond the Standard Model.
###### Acknowledgments.
It is a pleasure to thank the SLAC Theory Group for the hospitality extended to me during my visit earlier this year. I am grateful to M. Beneke, G. Buchalla, Y. Grossman, A. Kagan, J. Rosner and C. Sachrajda for collaboration on parts of the work reported here. This research was supported by the Department of Energy under contract DE–AC03–76SF00515.
# Off-Equilibrium Dynamics at Very Low Temperatures in 3d Spin Glasses
## 1 Introduction
The nature of the low temperature phase of finite dimensional spin glasses is still a subject of controversy .
Recently Bray, Moore, Bokil and Drossel have questioned many of the numerical results obtained with Monte Carlo methods in the three dimensional Edwards-Anderson (EA) model .
Inspired by the study of the Migdal-Kadanoff approximation (MKA) of the EA model, they argued that the numerical results, which were obtained at temperatures $`T\simeq 3/4T_c`$, could be strongly affected by finite size effects and that one should go to sizes larger than the crossover length $`L^{*}`$ in order to see the right (droplet) behavior. They found (in the framework of the MKA) that the crossover length is $`L^{*}\simeq 100`$ for $`T\simeq 0.7T_c`$ and that it decreases for lower temperatures: $`L^{*}\simeq 10`$ when $`T\simeq 0.5T_c`$ (see also the comment ).
This is perhaps the main motivation that pushed us to study the EA model in the very low temperature region: to verify whether the behavior already found at $`T\simeq 0.75T_c`$ persists at $`T\simeq 0.5T_c`$. In fact at these temperatures we can simulate (using off-equilibrium techniques) a system of size larger than $`L^{*}`$ (in this paper, we will present data for sizes $`L=24`$ and $`L=64`$).
The numerical data shown in this paper have been measured in the off-equilibrium dynamical regime. This way of probing the system properties, apart from being much more similar to the experimental procedure, does not present the thermalization problems of a simulation performed at equilibrium, which would be insurmountable obstacles at such low temperatures. The efficiency of this way of measuring has been largely tested in the recent past . Moreover the off-equilibrium dynamics of spin glasses has received a great deal of attention in recent years, both from the experimental and from the analytical points of view.
Taking the measurements in the off-equilibrium regime we are able to compare the droplet model (DM) and the Mean Field like theory (MF) on two grounds: the off-equilibrium regime itself and the equilibrium one, which can be obtained in the limit of very large times. We can take this limit quite safely thanks to the very large times reached in our simulations.
A preliminary analysis based on the data at temperatures $`T=0.7`$ and $`T=0.35`$ was reported in reference . In this paper we present an extended analysis based on nine different temperatures, obtaining a precise temperature dependence of the dynamical critical exponent in order to allow an accurate comparison with recent experiments.
## 2 The model and the numerical method
We have simulated the Gaussian Ising spin glass on a three-dimensional cubic lattice of volume $`L^3`$ with periodic boundary conditions. The Hamiltonian of the system is
$$\mathcal{H}=-\underset{<ij>}{\sum }\sigma _iJ_{ij}\sigma _j.$$
(1)
We denote by $`<ij>`$ the sum over nearest neighbor pairs. $`J_{ij}`$ are Gaussian variables with zero mean and unit variance.
We focus our attention on the study of the point-point overlap correlation function computed at distance $`x`$ and time $`t`$
$$G(x,t)=\frac{1}{L^3}\underset{i}{\sum }\overline{\langle \sigma _{i+x}\tau _{i+x}\sigma _i\tau _i\rangle _t}.$$
(2)
where $`\sigma `$ and $`\tau `$ are two real replicas (systems which evolve with the same disorder) and the index $`i`$ runs over all the points of the lattice. As usual we denote by $`\overline{(\cdots )}`$ the average over the disorder and, in this context, $`\langle (\cdots )\rangle _t`$ is the average over the dynamical process until time $`t`$ (for a given realization of the disorder). The two replicas ($`\sigma `$ and $`\tau `$) evolve with different random numbers.
The simulation has been performed in a similar way to the experimental procedure: the system is prepared in a high temperature configuration (actually the initial configurations were chosen at random, i.e. $`T=\mathrm{\infty }`$) and suddenly it is quenched below the (estimated) critical temperature, $`T_c=0.95(3)`$ . Immediately we start taking the measurements, which obviously depend on time. The equilibrium behavior is recovered in the large time limit. We have used as dynamical process the standard Metropolis method.
We have simulated 4 samples (8 systems) of an $`L=64`$ lattice, measuring the correlation function at times $`t=100\times 2^k`$ (with $`k=0,\mathrm{\dots },13`$) and temperatures $`T=0.9`$, $`0.8`$, $`0.7`$, $`0.6`$, $`0.5`$, $`0.4`$, and $`0.35`$. In addition, we have simulated 4096 samples of an $`L=24`$ lattice measuring at times $`t=2^k`$ (with $`k=7,\mathrm{\dots },19`$) and at three temperatures: $`T=0.7`$, $`0.5`$ and $`0.35`$.
For the study of the fluctuation-dissipation relation we have used $`L=64`$ systems and we have simulated them for more than $`10^7`$ Monte Carlo steps. All the simulations have been performed with the help of the parallel computer APE100 .
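For readers who want to experiment, the following is a minimal sketch (in no way the production code, which ran on APE100) of the protocol just described: a random start, i.e. a quench from $`T=\mathrm{\infty }`$, followed by Metropolis dynamics of the 3d Gaussian EA model; lattice size, temperature and number of sweeps are scaled down drastically.

```python
import numpy as np

# Minimal sketch of the quench protocol: random (T = infinity) initial
# configuration, then Metropolis dynamics of the 3d Gaussian EA model
# with periodic boundary conditions (checkerboard update).
rng = np.random.default_rng(0)
L, T, sweeps = 8, 0.7, 50
# J[a][x,y,z] couples site (x,y,z) to its forward neighbor along axis a
J = [rng.normal(size=(L, L, L)) for _ in range(3)]
s = rng.choice([-1, 1], size=(L, L, L)).astype(float)

def local_field(s):
    """h_i = sum over the 6 nearest neighbors j of J_ij s_j."""
    h = np.zeros((L, L, L))
    for a in range(3):
        h += J[a] * np.roll(s, -1, axis=a)    # forward neighbor along axis a
        h += np.roll(J[a] * s, 1, axis=a)     # backward neighbor along axis a
    return h

x, y, z = np.indices((L, L, L))
for sweep in range(sweeps):
    for parity in (0, 1):
        # spins of one parity interact only with the other parity, so they
        # can be updated simultaneously with the Metropolis rule
        dE = 2.0 * s * local_field(s)         # cost of flipping each spin
        accept = rng.random((L, L, L)) < np.exp(-np.clip(dE, 0.0, None) / T)
        flip = ((x + y + z) % 2 == parity) & accept
        s[flip] *= -1.0

print("energy per spin:", -(s * local_field(s)).sum() / (2 * L**3))
```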
## 3 Results on the correlation function
First, we analyze the correlation functions computed with the $`L=64`$ runs. The study of the numerical data suggests the following Ansatz for the time and spatial dependences of the correlation function
$$G(x,t)=\frac{\text{const}}{x^\alpha }\mathrm{exp}\left[-\left(\frac{x}{\xi (t)}\right)^\delta \right],$$
(3)
where $`\xi (t)`$ is the dynamical correlation length. The numerical data clearly show that the dynamical correlation length depends on the time following a power law $`\xi (t)=Bt^{1/z}`$ where $`z`$ is the dynamical critical exponent. The exponents $`\alpha `$, $`\delta `$ and $`z`$ and the amplitude $`B`$ could, in principle, depend on the temperature. However we obtain (see below) that $`\alpha `$, $`\delta `$ and $`B`$ are almost temperature independent, while $`z(T)`$ is inversely proportional to $`T`$.
In Table 1 we report the results of our fits (always done using the CERN routine MINUIT ). We remark that, for a given temperature, we have fitted our numerical data to the Ansatz of Eq.(3) in two steps. In the first step we fix the distance in the correlation function and we perform the following three-parameter fit in the variable $`t`$
$$\mathrm{log}G(x,t)=A(x)-B(x)t^{-\delta /z}.$$
(4)
We have found that $`\delta /z`$ is independent of $`x`$. The second step has been to extract from $`A(x)`$ and $`B(x)`$ the exponents $`\alpha `$ and $`\delta `$ and the amplitude $`B`$ using the formulæ: $`A(x)=\text{const}-\alpha \mathrm{log}x`$ and $`B(x)=B^{-\delta }x^\delta `$.
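The two-step procedure is easy to reproduce; below is a sketch on synthetic data generated from the Ansatz itself (the parameter values are just the ones of the fits, and scipy's curve_fit stands in for MINUIT):

```python
import numpy as np
from scipy.optimize import curve_fit

# Two-step fit of the Ansatz (3), demonstrated on synthetic data.
alpha, delta, z, B = 0.5, 1.42, 12.4, 1.0
times = 100.0 * 2.0 ** np.arange(14)
xs = np.arange(1, 13, dtype=float)
G = lambda x, t: x**(-alpha) * np.exp(-(x / (B * t**(1.0 / z)))**delta)

# step 1: at fixed x, fit log G(x,t) = A(x) - B(x) * t**(-delta/z)
def step1(t, A, Bx, w):
    return A - Bx * t**(-w)

A_x, B_x = [], []
for x in xs:
    popt, _ = curve_fit(step1, times, np.log(G(x, times)), p0=(0.0, 1.0, 0.1))
    A_x.append(popt[0]); B_x.append(popt[1])

# step 2: A(x) = const - alpha*log(x)  and  B(x) = B**(-delta) * x**delta
slope_A = np.polyfit(np.log(xs), A_x, 1)[0]
slope_B = np.polyfit(np.log(xs), np.log(B_x), 1)[0]
print("alpha ~", -slope_A, "  delta ~", slope_B)
```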
The resulting values for $`z(T)`$ (see Table 1) can be fitted to a power law (using all the temperatures of Table 1) obtaining
$$z(T)=6.4(6)T^{-0.96(20)}.$$
(5)
From the previous fit we can guess a simpler law for the dynamical critical exponent, $`z(T)=a/T`$ (this law was found by Kisker et al. for the $`\pm 1`$ three dimensional spin glass, see references ; moreover it was guessed for the Gaussian model, using numerical data taken at temperatures $`T=0.7`$ and $`T=0.35`$, in reference ), obtaining
$$z(T)=\frac{6.2(3)}{T}.$$
(6)
This kind of behavior suggests that the low temperature dynamics in spin glasses is dominated mainly by activated processes with free energy barriers diverging logarithmically with the size of the system.
We can finally write down the dependence of the dynamical correlation length on the time as well as on the temperature:
$$\xi (t,T)\propto t^{T/6.2(3)}=t^{0.161(8)T}=t^{0.153(12)T/T_c},$$
(7)
where we have assumed that the temperature of the phase transition is $`T_c=0.95(3)`$ . The agreement of the previous formula with the experiments is very good. We recall that in experiments the following dependence for the dynamical correlation length was found
$$\xi (t,T)\propto t^{0.170T/T_g}.$$
(8)
where $`T_g`$ is the experimental critical temperature (the authors of this result do not quote the error in the exponent).
A further check of Eq.(3) would be the collapse of the data (measured at different times and different temperatures) when plotting $`G(x,t)x^\alpha `$ versus $`x/t^{1/z(T)}`$. To this purpose we use the data from 4 samples of the $`64^3`$ runs, together with those measured on 4096 samples of $`24^3`$ runs. We remark that, in the $`24^3`$ runs, the volume is nearly 19 times smaller than in the $`L=64`$ runs, but we have computed 1000 times more samples and so we expect the errors to be smaller.
In Fig. 1 we plot the correlation function for two low temperatures ($`T=0.5`$ and $`0.35`$) using as variables $`x/\xi (t)`$ and $`x^\alpha G(x,t)`$ (we have taken $`\alpha =0.5`$, see Table 1). In the plots we use the data from both runs ($`L=24`$ and $`L=64`$) and they superimpose perfectly. In the insets we present the same data in a log-linear scale in order to let the reader evaluate better the collapse. It is clear that the scaling is impressive even at the lowest temperature $`T=0.35`$. We can also state that the finite size effects are negligible for the lattice sizes used.
Scaling arguments tell us that the most general scaling form for the correlation function is (for large $`x`$ and $`t`$)
$$G(x,t)\simeq x^{-\alpha }𝒢\left(\frac{x}{\xi (t)}\right),$$
(9)
where the scaling function $`𝒢(y)`$ is smooth. Moreover, in the scaling regime, $`𝒢(y)`$ should depend neither on temperature nor on the lattice size. Note that in our Ansatz (3) we have chosen an exponential function for the scaling function: $`𝒢(y)\propto \mathrm{exp}(-y^\delta )`$, and we show that it fits the data very well. However to check that our estimates of $`\alpha `$ and $`\xi (t)`$ are correct we do not need to know $`𝒢(y)`$. We can simply plot $`x^\alpha G(x,t)`$ versus $`x/\xi (t)`$ (as it was done in Fig. 1) and check how well the data collapse.
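As a sanity check of this statement, a short sketch on synthetic data (generated again from the Ansatz, with the fitted exponents) verifies that the rescaled curves at different times coincide:

```python
import numpy as np

# Collapse test: x**alpha * G(x,t) plotted against x/xi(t) is t-independent.
alpha, delta, z = 0.5, 1.42, 12.4
xi = lambda t: t**(1.0 / z)
G = lambda x, t: x**(-alpha) * np.exp(-(x / xi(t))**delta)

xs = np.arange(1.0, 13.0)
for t in (1e2, 1e4, 1e6):
    u, y = xs / xi(t), xs**alpha * G(xs, t)
    # every curve lies on the same master curve exp(-u**delta)
    print(t, np.allclose(y, np.exp(-u**delta)))
```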
In order to check the temperature independence of $`𝒢(y)`$ we show in Fig. 2 the scaling function for three different temperatures ($`T=0.35,0.5`$ and $`0.7`$), together with the exponential function $`\mathrm{exp}[-y^{1.42(2)}]`$ (see Table 1) obtained through the fitting procedure. It is clear that the scaling function is really temperature-independent and it can be very well approximated by the exponential function as we have chosen in our Ansatz.
Another interesting issue is the extrapolation of the correlation function to infinite time. In this limit we can compare again our numerical results with the predictions of the droplet model and with that of the RSB theory. In the former the extrapolated correlation function tends to the value $`q_{\mathrm{EA}}^2`$ for large distances, whereas the RSB prediction is a pure power law going asymptotically to zero . Our Ansatz, which describes perfectly the numerical data, supports the RSB prediction even for the lowest temperatures.
Nevertheless, we have tried to fit our data with a functional dependence compatible with the droplet model, that is, $`G(x,t)=G_{\mathrm{\infty }}(x)𝒢(x/\xi (t))`$, where $`G_{\mathrm{\infty }}(x)=Ax^{-\alpha }+C`$. If $`C=0`$, then the previous formula is exactly our Ansatz (and it implies a breaking of the replica symmetry), while if $`C=q_{\mathrm{EA}}^2`$ then it would support a droplet picture. Fitting the data to the previous formula, $`G_{\mathrm{\infty }}(x)`$, we have found that at every temperature, and even at $`T=0.35`$ (our lowest temperature), the best value for $`C`$ is always compatible with zero. At very low temperatures, i.e. $`T=0.35`$, the Edwards-Anderson order parameter is so close to one ($`q_{\mathrm{EA}}\simeq 1`$) that we can safely distinguish between the two competing theories. In fact in the droplet like formula we have that $`G_{\mathrm{\infty }}(x)`$ is almost constant (actually it slowly decreases from 1 to $`q_{\mathrm{EA}}^2\simeq 1`$, but for all our purposes it can be considered as a constant) and so we simply should fit the data into the scaling formula of Eq.(9) without the power-law prefactor in order to check the correctness of the droplet model.
In Fig. 3 we present the data rescaled with the formula suggested by the droplet model (left plot) and with that implied by RSB (right plot). It is clear that the RSB prediction fits the numerical data much better. Note that the data errors are sufficiently small to affirm safely that the data in the left plot show no collapse at all. A scaling plot like the left one has been recently presented by Komori et al. in (see also reference ). We believe that the rather poor collapse of their data (see Fig. 5 in ) is due to the fact that they neglect the factor $`x^\alpha `$ in the scaling formula. A much better collapse would be obtained by plotting $`\sqrt{x}G(x,t)`$ versus $`x/\xi (t)`$ (see ).
## 4 Fluctuation dissipation relation at very low temperatures
Now, we present the results of the analysis based on the generalization of the fluctuation-dissipation theorem (FDT) in the out of equilibrium regime . In this section we will focus on the scaling properties of the aging region and the violation of fluctuation-dissipation at very low temperatures.
A preliminary analysis was done in , studying the violation of FDT at temperature $`T=0.7`$. Here we have simulated different lower temperatures and so, as a byproduct, we can study the scaling properties of the violation of FDT.
For the sake of conciseness, we do not repeat all the formalism and we address the interested reader to one of the previous publications on the subject . Here we simply recall the main formulæ that we use. As usual we define the integrated response to a very small external field as
$$\chi (t,t_w)=\underset{h_0\to 0}{lim}\frac{1}{h_0}\int _{t_w}^tR(t,t^{\prime })h(t^{\prime })\text{d}t^{\prime },$$
(10)
where $`h(t)=h_0\theta (t-t_w)`$ and $`R(t,t^{\prime })=\frac{1}{N}\sum _i\frac{\delta \langle s_i(t)\rangle }{\delta h(t^{\prime })}`$. The autocorrelation function is defined as
$$C(t,t_w)=\frac{1}{N}\underset{i}{\sum }s_i(t)s_i(t_w).$$
(11)
Relating these two functions, in the large times limit, via
$$T\chi (t,t_w)=S(C(t,t_w)),$$
(12)
we have that, at equilibrium, the fluctuation-dissipation theorem (FDT) holds and $`S(C)=1-C`$, while in the aging regime the function $`S(C)`$ can be linked to the equilibrium overlap distribution through $`P(q)=-\frac{\partial ^2S(C)}{\partial C^2}|_{C=q}`$ .
Models that, in the frozen phase, do not show any breaking of the replica symmetry have, at the equilibrium level, a static $`P(q)=\delta (q-q_{\mathrm{EA}})`$, which dynamically corresponds to the absence of response in the aging regime. This means that, plotting $`\chi (t,t_w)`$ versus $`C(t,t_w)`$, we obtain a horizontal line in the range $`C\le q_{\mathrm{EA}}`$ (in the quasi-equilibrium regime, $`C\ge q_{\mathrm{EA}}`$, it always holds that $`T\chi =1-C`$ independently of the model).
In Fig. 4 we show the results for different temperatures in the usual plot $`\chi (t,t_w)`$ versus $`C(t,t_w)`$. Note that in this plot the FDT line is $`\chi =(1-C)/T`$ and so it is different for different temperatures. It is quite clear that, even for very large times, the curves are far from being horizontal when they leave the FDT line. This result gives more evidence in favor of a replica symmetry breaking in the very low temperature phase of the 3D EA model .
We present the data for different temperatures on a single plot in order to make more evident the fact that the numerical data seem to stay on the same curve once the system enters into the aging regime, i.e. when the points leave the FDT line. This kind of behavior has been observed in the four-dimensional EA model and it is reminiscent of the mean-field solution.
Indeed in the SK model, using the Parisi-Toulouse (PAT) hypothesis , it can be shown that
$$S(C)=\begin{cases}1-C& \text{for }C\ge q_{\mathrm{EA}}(T),\\ T\sqrt{1-C}& \text{for }C\le q_{\mathrm{EA}}(T).\end{cases}$$
(13)
The formula can be easily generalized assuming a generic power law behavior in the aging regime: $S(C)=TA(1-C)^B$ (the mean-field value for the exponent is $B=1/2$).
We use this generalization to fit the data and obtain very good results. The best fit parameters have been estimated from the collapse of the data reported in Fig. 4 and they are $A\simeq 0.7$ and $B\simeq 0.41$ (to be compared with the mean-field values $A=1$ and $B=1/2$, and with those obtained for the 4D EA model, $A\simeq 0.52$ and $B\simeq 0.41$).
In order to show the validity of the fitting formula, we present in Fig. 5 the collapse of the scaled data using the variables $x=(1-C)T^{-\varphi }$ and $y=\chi T^{1-\varphi }$, where $\varphi =\frac{1}{1-B}\simeq 1.7$. It is easy to see that, if the previous scaling holds, the data should stay on two power laws: $y=x$ and $y=Ax^B$ in the quasi-equilibrium and aging regimes, respectively. The two power laws are reported in Fig. 5.
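The value of $\varphi $ follows from requiring both branches to become temperature-independent in the scaled variables. In the quasi-equilibrium regime $\chi =(1-C)/T$, so $y=\chi T^{1-\varphi }=(1-C)T^{-\varphi }=x$. In the aging regime $\chi =A(1-C)^B$, so $y=A(1-C)^BT^{1-\varphi }=Ax^BT^{1-\varphi (1-B)}$, which loses its explicit $T$ dependence only if $1-\varphi (1-B)=0$, i.e. $\varphi =1/(1-B)$.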
Even if we may expect a breakdown of the assumed scaling for large values of the scaling variable $x$ (i.e. the scaled data are no longer described by a power law), we note that over quite a large range the collapse is very good and very well approximated by a power law. Moreover, we remark that the collapse has been obtained by adjusting only one parameter.
## 5 Discussion
We have studied the off-equilibrium dynamics of the three-dimensional Gaussian spin glass in the very low temperature phase. In particular we have studied the scaling properties of the dynamical overlap correlation functions and of the violation of the fluctuation-dissipation theorem.
We have tried to fit our correlation functions to the functional form predicted by the droplet model, but the fits were poor. Moreover, a correlation length diverging as a power law in time implies, as was noted by Rieger, barriers diverging not as $L^\psi $ (as predicted by the droplet model with the lower bound $\psi \ge \theta \simeq 0.2$) but as $\log L$. This latter result implies $\psi =0$, hence violating the droplet lower bound.
It is interesting to note that the experimental data could be fitted to the droplet formula assuming that $\psi =\theta $. However, while both the results of numerical simulations and the experiments are in very good agreement with a power-law fit for $\xi (t,T)$, the numerical fit assuming a droplet formula for $\xi (t,T)$ disagrees with the experimental fit assuming the same hypothesis.
As noted above, our final result for the dynamical correlation length is in very good agreement with the experimental result.
We remark that the same scenario (power-law dependence of $\xi (t,T)$ and linear dependence of $1/z$ on temperature) also emerges in four and six dimensions. In the latter case it was found that $z(T)=4T_c/T$ ($z=4$ at the transition is the value predicted by Mean Field), while in the former $z(T)=5.5T_c/T$. Moreover, in both these dimensions the overlap correlation function constrained to zero overlap follows a pure power law, as in three dimensions.
If we take the time to infinity in our Ansatz for the overlap-overlap correlation function we obtain a pure power-law decay $G(x)\propto x^{-\alpha }$ with $\alpha \simeq 0.5$, with only a small dependence of $\alpha $ on the temperature in the whole spin glass phase. We recall again that the droplet prediction is $G(x)\to q_{\mathrm{EA}}^2$, in contradiction with our numerical correlation functions (this fact has already been noted). Instead, the pure power-law behavior is supported by the Gaussian approximation using the Mean Field solution.
One could argue that the simulated temperatures are not low enough and the times and sizes not large enough to see the “true” (droplet) behavior of the EA model. However, as we stressed in the Introduction, $L=64$ is large enough for temperatures as low as $T=0.35$ and $T=0.5$. Moreover, our large-time extrapolations are very safe because the measurements have been taken over six time decades.
We have shown numerical results that contradict the droplet predictions in a wide range of temperatures ($0.35\le T\le 0.9$). In particular we point out that our results (both for the correlation functions and for the violation of FDT) at a very low temperature, $T=0.35$, support a Mean Field picture.
Finally, we remark that using the (PAT) Mean Field scaling relations for the $P(q)$ we have obtained a very good scaling plot of the violation of fluctuation-dissipation (as in four dimensions). This provides further strong evidence that the low temperature phase is well described by Mean Field.
## 6 Acknowledgments
JJRL is partially supported by CICyT AEN97-1693. JJRL wishes to thank L.A. Férnandez and A. Muñoz Sudupe for interesting suggestions. Moreover we wish to thank A. J. Bray for pointing out an arithmetic error in equation (7). |
# Non-renormalizability of a SSB gauge theory induced by a non-linear fermion-Higgs coupling of canonical dimension 4
## Acknowledgments
I am indebted to Ion I. Cotăescu and Geza Ghemes for stimulating discussions and criticisms. I also thank Atilla Farkas for reading the manuscript. |
# Reflection and noise in the low spectral state of GX 339-4
## 1 Introduction
It was found recently, for a large sample of Seyfert AGNs and several observations of Galactic X-ray binaries, that the amplitude of the reflected component is generally correlated with the slope of the primary power law emission (Zdziarski et al. (1999)). Based on the numerous RXTE/PCA observations of Cyg X-1, Gilfanov et al. (1999) (hereafter Paper I) showed that this correlation is strong for multiple observations of this source and that the spectral parameters are also tightly correlated with the characteristic noise frequency. In particular, an increase of the QPO centroid frequency is accompanied by a steepening of the slope of the Comptonized radiation and an increase of the amplitude of the reflected component. Studying the fast variability of the reflected emission, Revnivtsev et al. (1999) showed that its amplitude is suppressed with respect to that of the primary emission at frequencies higher than 1–10 Hz.
GX 339-4 is a bright and well-studied X-ray binary. It is usually classified as a black hole candidate and in many aspects is very similar to Cyg X-1 (see e.g. Tanaka&Lewin (1995), Trudolyubov et al. (1998), Zdziarski et al. (1998), Wilms et al. (1999), Nowak et al. (1999)). Investigations of the connections between the spectral and timing properties of Cyg X-1 (e.g. Gilfanov et al. (1999)) and GX 339-4 (e.g. Ueda et al (1994)) indicate that the sources could be similar from this point of view as well. In this paper we expand the analysis of correlations between the spectral and temporal characteristics of the X-ray emission of GX 339-4 in the low spectral state using Rossi X-ray Timing Explorer data, and show that this source demonstrates the same behavior that was previously observed from Cyg X-1.
## 2 Observations, data reduction and analysis
We used the publicly available data of GX 339-4 observations with RXTE/PCA from 1996–1997 performed during the low spectral state of the source. Our sample includes 23 observations from the proposals 10068, 20056, 20181 and 20183 with a total exposure time of $\sim$130 ksec (Table 1). Only observations from the proposal P20183 had sufficient energy and timing resolution to perform Fourier frequency resolved spectral analysis. Therefore the frequency resolved analysis was carried out only for $\sim$61 ksec of the data.
The data screening was performed following the RXTE GOF recommendations: offset angle $<0.02^{\circ}$, Earth elevation angle $>10^{\circ}$, electron contamination value (the “electron ratio”) for any of the PCUs $<0.1$. The data from all PCUs were used for the analysis. The energy spectra were extracted from the PCA mode “Standard 2” (128 channels, 16 sec time resolution) and averaged over each observation. The Fourier frequency resolved spectral analysis used “Good Xenon” data (256 energy channels, 1 $\mu$s time resolution). The response matrices were built using standard RXTE FTOOLS 4.2 tasks (Jahoda 1999). The background spectra for the conventional spectral analysis were constructed with the help of the “VLE” based model (Stark 1999). The background contribution to the frequency resolved spectra is negligible in the frequency and energy ranges of interest. A uniform systematic uncertainty of 0.5% was added quadratically to the statistical error in each energy channel. The value of the systematic uncertainty was chosen based on the deviations of the PCA Crab spectra from a power law model (see e.g. Wilms et al. (1999)).
The energy spectra were fit in the 3–20 keV energy range with a spectral model identical to that of Paper I. The model consisted of a power law without high energy cutoff, a superposed continuum reflected from a neutral medium ($pexrav$ model in XSPEC, see Magdziarz & Zdziarski (1995)), and an intrinsically narrow emission line at the energy 6.4 keV. No ionization effects were taken into account. The binary system inclination angle was fixed at $\theta =45^o$ (e.g. Zdziarski et al. (1998)), and the iron abundance at the solar value. In such a model the amplitude of the reflected component is characterized by the reflection scaling factor $R\approx \Omega /2\pi$, an approximate measure of the solid angle subtended by the reflector. In the simplest geometry of an isotropic point source above an infinite reflecting plane, the reflection scaling factor $R$ is equal to 1. In order to approximately account for the smearing of the reflection features due to e.g. relativistic and ionization effects, the reflected continuum and the fluorescent line were convolved with a Gaussian. Its width was a free parameter of the fit. The uncertainties in Table 1 represent 1$\sigma$ confidence intervals for the model parameters. The error bars on the values of the equivalent width of the line were calculated by propagation of errors from the line flux value.
The power spectra of GX 339-4 in the low spectral state feature a prominent QPO peak whose frequency typically varies between 0.1 and 0.5 Hz (Fig.1). We therefore used its centroid to parameterize the characteristic noise frequency. The power spectra were approximated with a model consisting of two band-limited noise components (Lorentzians centered at zero frequency) and a comparatively narrow Lorentzian profile (the QPO peak).
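As an illustration of this fitting procedure, a sketch in Python follows; the synthetic data and parameter values below are placeholders, not our measured power spectra.

```python
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(f, norm, f0, width):
    """Lorentzian profile; centring at f0 = 0 gives a band-limited noise term."""
    return norm * (width / (2.0 * np.pi)) / ((f - f0)**2 + (width / 2.0)**2)

def psd_model(f, n1, w1, n2, w2, nq, fq, wq):
    # Two zero-centred Lorentzians (band-limited noise) plus a QPO peak.
    return (lorentzian(f, n1, 0.0, w1) + lorentzian(f, n2, 0.0, w2)
            + lorentzian(f, nq, fq, wq))

# Synthetic stand-in for an averaged, rms-normalised power spectrum.
rng = np.random.default_rng(1)
freq = np.logspace(-2, 1, 200)
truth = (0.10, 0.3, 0.05, 3.0, 0.02, 0.35, 0.08)
err = 0.05 * psd_model(freq, *truth)
power = psd_model(freq, *truth) + rng.normal(0.0, err)

popt, pcov = curve_fit(psd_model, freq, power, sigma=err, p0=truth)
f_qpo = popt[5]  # QPO centroid: the characteristic noise frequency used here
```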
The Fourier frequency resolved spectra were obtained following the prescription of Revnivtsev, Gilfanov & Churazov (1999) and were approximated with the same model as the averaged spectra, except that the width of the Gaussian used to model the smearing of the reflection features was fixed at the value of 0.7 keV.
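A minimal sketch of this prescription (the actual analysis also involves the PCA instrument response and Poisson-noise corrections, which are omitted here): for each energy channel one computes the rms-normalized power spectrum of its light curve and integrates it over the frequency band of interest; the resulting rms flux per channel forms the frequency resolved spectrum.

```python
import numpy as np

def freq_resolved_spectrum(rates, dt, f_lo, f_hi):
    """Fourier frequency resolved spectrum of a set of light curves.

    rates : array (n_chan, n_time) of count rates, one row per energy channel
    dt    : time bin size (s);  f_lo, f_hi : frequency band (Hz)
    Returns the absolute rms of each channel in the band, i.e. a "count
    spectrum" of the emission varying on the chosen time scales.
    """
    n_chan, n = rates.shape
    freqs = np.fft.rfftfreq(n, dt)
    band = (freqs >= f_lo) & (freqs < f_hi)
    df = freqs[1] - freqs[0]
    spec = np.empty(n_chan)
    for c in range(n_chan):
        x = rates[c]
        ft = np.fft.rfft(x - x.mean())
        # fractional-rms-squared normalisation (Poisson noise not subtracted)
        psd = 2.0 * dt * np.abs(ft) ** 2 / (n * x.mean() ** 2)
        spec[c] = x.mean() * np.sqrt(np.sum(psd[band]) * df)
    return spec
```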
## 3 Results and their uncertainties
The results of the energy and power spectra approximation are presented in Table 1 and Fig.2. As can be seen from Fig.2, the main temporal and spectral parameters – the characteristic noise frequency, the slope of the underlying power law and the amplitude of the reflected component – change in a correlated way. A steepening of the spectrum is accompanied by an increase of the reflection amplitude and an increase of the QPO centroid frequency. Such behavior is very similar to that found in Paper I for Cyg X-1.
The spectral model is obviously oversimplified and does not include several important effects such as ionization of the reflecting media, deviations of the primary emission spectrum from the power law, the exact shape of the relativistic smearing of the reflection features etc. These effects might affect the best fit parameters and could lead to the appearance of artificial correlations between them. Particularly sensitive to the choice of the spectral model is the reflection scaling factor $R\approx \Omega /2\pi$. As is well known, there is some degeneracy between the amplitude of reflection $R$ and the photon index $\Gamma$ of the underlying power law determined from the spectral fits, especially if the spectral fitting is done in a limited energy range (e.g. Zdziarski et al. (1999)). This degeneracy might result in a slight positive correlation between the best fit values of $R$ and $\Gamma$ which is in part due to statistical noise and in part due to an inadequate choice of the spectral model. The statistical part of this degeneracy is illustrated in Fig.2 by a 2-dimensional confidence contour for one of the points in the $R$–$\Gamma$ plane. As can easily be seen from Fig.2, it is correctly represented by the error bars assigned to the points.
In order to estimate the contribution of the second, systematic part of the $R$–$\Gamma$ degeneracy we compare two pairs of observations with different and close best fit values of the reflection factor $R$. We plot in Fig.3 the ratios of the count spectra for each pair. As is clearly seen from Fig.3, the spectrum having the larger best fit value of the reflection scaling factor (and of the equivalent width of the line) shows more pronounced reflection signatures – the fluorescence line at 6.4 keV followed by the absorption edge, and an increase due to the Compton reflected continuum at larger energies. Thus we conclude that although the best fit values of the model parameters might not represent the exact values of the physically important quantities, our spectral model does correctly rank the spectra according to the amplitude of the reflection signatures, and the correlations shown in Fig.2 are not artificial.
The Fourier frequency resolved spectra illustrate the energy dependence of the amplitude of the X-ray flux variation on a given time scale. As was stressed by Revnivtsev et al. (1999) and Gilfanov et al. 2000b, the interpretation of Fourier frequency resolved spectra is in general not straightforward and requires certain a priori assumptions. One area where the method can be used efficiently, and which at present cannot be accessed by conventional spectroscopy, is the study of the fast variability of the reflected emission. Indeed, variations of the parameters in the Comptonization region, for instance, lead to variations of the spectral shape of the Comptonized radiation, which are imprinted in the Fourier frequency resolved spectra. Their shape, however, might differ significantly from any of the Comptonized spectra they resulted from, and no easily interpretable results could be obtained via conventional spectral fits. The shape of the reflection signatures, on the other hand, and especially that of the fluorescent line, is generally subject to significantly less variation. Therefore they can be easily identified in the frequency resolved spectra and their amplitude can be measured. The absence or presence of the reflection signatures in a Fourier frequency resolved spectrum signals the absence or presence of variations of the reflected emission at the given frequency. Their amplitude would in principle measure the amplitude of the variations of the reflected flux relative to the variations of the primary emission. With that in mind we show in Fig.4 the Fourier frequency resolved spectra of GX 339-4 in two frequency ranges. A significant decrease of the amplitude of the reflection features with frequency is apparent. The quantitative dependence of the reflection amplitude on the Fourier frequency is shown in Fig.5. Within the available statistical accuracy this dependence is qualitatively and quantitatively similar to that found for Cyg X-1 (Revnivtsev et al. (1999)).
## 4 Discussion
We analyzed 23 observations of GX 339-4 with RXTE/PCA performed in 1996–1997 during the low spectral state of the source. Using a simple spectral model we found that the pattern of temporal and spectral variability of GX 339-4 confirms the previous findings of Ueda et al (1994), obtained with the GINGA observatory, and is nearly identical to that of Cyg X-1 (Revnivtsev et al. (1999), Paper I). This indicates that such a pattern might be common for accreting black holes in the low spectral state. In particular:
1. The characteristic noise frequency, the slope of the Comptonized spectrum and the amplitude of the reflected component change in a correlated way. An increase of the noise frequency is accompanied by an increase of the amplitude of the reflected component and a steepening of the Comptonized spectrum.
2. The Fourier frequency resolved spectral analysis showed that the short term variations of the reflected flux are suppressed in comparison with the variations of the primary Comptonized flux at frequencies above 1–10 Hz.
As was discussed e.g. in Gilfanov et al. 1999 and Zdziarski et al. (1999), the correlation of the spectral index (the photon index $\Gamma$ of the underlying power law) with the amplitude of the reflected component hints at a possible relation between the solid angle subtended by the reflector and the influx of soft photons to the Comptonization region. The existence of such a relation suggests that the reflecting medium is the primary source of the soft seed photons for the Comptonizing region. It could be explained e.g. within the framework of the disk-spheroid models of the accretion flow (see e.g. Poutanen et al. (1997)) – a hot quasi-spherical Comptonization region near the compact object surrounded by an optically thick cold accretion disk terminating at some radius $R_{\mathrm{in}}$. In such a geometry a decrease of the inner radius of the optically thick cold accretion disk should result in: 1) a decrease of the temperature of the inner hot region, because of the increased influx of cold seed photons, and 2) an increase of the solid angle subtended by this accretion disk (reflector) as seen from the central hard source. This would lead to the correlation of the spectral index and the amplitude of the reflection. In addition, if we assume that the characteristic frequencies of aperiodic variations of the source flux are proportional to the Keplerian frequency at the inner boundary of the optically thick accretion disk, we obtain an additional correlation: the characteristic frequency of the source’s power spectrum will positively correlate with the amplitude of the reflection. This is what we observe in the cases of Cyg X-1 and GX 339-4. We should note, however, that the above explanation of the observed $R$–$\Gamma$ correlation is not unique. For example, similar dependencies can be produced in the model of an active corona above the accretion disk (see e.g. Beloborodov (1999), Gilfanov et al. 2000a).
The frequency resolved spectral analysis was introduced in Revnivtsev et al. (1999). In that paper we showed that the reflected component in the spectrum of Cyg X-1 is less variable than the underlying continuum at frequencies above $\sim$1 Hz. Similar behavior, but with less statistical significance, was found in the case of GX 339-4. Similar to the discussion in Revnivtsev et al. (1999), we can assume here that the time variations of the reflected component in the spectrum of GX 339-4 could be smeared out by the finite light-crossing time of the reflector (see the more extended discussion in Gilfanov et al. 2000b). Alternatively, the observed behavior could be explained by non-uniformity within the Comptonizing region. For example, if the short time scale variations appear in the geometrically inner part of the accretion flow (the hot spheroid) and give rise to significantly weaker, if any, reflected emission than the longer time scale events (originating in the outer regions), then we will see no reflection features at high Fourier frequencies. In turn, the smaller amplitude of the reflection in the inner regions of the accretion flow might be caused by the screening of the reflector from the innermost regions by the outer parts of the spheroid.
###### Acknowledgements.
This research has made use of data obtained through the High Energy Astrophysics Science Archive Research Center Online Service, provided by the NASA/Goddard Space Flight Center. MR acknowledges partial support from RBRF grant 97-02-16264. |
# Interaction of Infall and Winds in Young Stellar Objects
## 1 Introduction
Molecular outflows are commonly observed in association with young stars. Although the precise mechanism generating the outflows is poorly understood, it is generally believed that the molecular mass is driven and excited into emission by a wind emanating from the inner circumstellar disk close to the star, and driven by accretion energy (Calvet & Gullbring, 1998). Recent observations have explored how these outflows form close to the driving source (Chandler et al., 1996) and how they might evolve in time (Velusamy & Langer, 1998). These and other investigations indicate that molecular outflows are not clearly explained in terms of any single current model.
Molecular outflow models are principally divided into two groups; those in which outflows are driven by highly-collimated winds or “jets”, and those in which outflows are driven by wide-angle winds. Masson & Chernin (1992, 1993) and Chernin & Masson (1995) compared jet driven and wind driven models of molecular outflows. They conclude that jet-driven models are a better fit than wide-angle wind models because observations show very little outflow mass moving at the highest velocities. A jet with moderate post-shock cooling would transfer its momentum to the ambient molecular material largely through the small cross-section of its head, while a wide wind has a much larger driving area. On the other hand, jet-driven models have so far had some difficulty producing the cross-sectional widths of many observed outflows. To reproduce these morphologies, investigators have invoked mechanisms such as turbulent entrainment (Stahler, 1994; Raga & Cabrit, 1993), and precession (Cliffe et al., 1995; Suttner et al., 1997). These strategies imply filled outflow cavities and shock structures that aren’t generally observed (Chandler et al., 1996). Wide angle wind models can produce wind-blown bubbles with wide bow shocks and empty cavities. It is, however, difficult for the simplest versions of these models to reproduce the observed momentum distribution. Li & Shu (1996) have suggested that a wide angle wind whose properties vary as a function of polar angle may help solve the momentum problem. Such a wind requires a structure that has higher density along an axis: more like a jet. Thus at present, models and observations leave it unclear as to what is really the driving mechanism of molecular outflows. With these issues in mind, the present paper examines a wide angle wind model through hydrodynamic simulations.
The early model used by Chernin and Masson to make their case against wide angle winds is that presented by Shu et al. (1991). It applies a “snowplow” scenario where a central wind sweeps up ambient material and compresses it into a thin shell. The calculation is performed under the critical assumptions of isothermal shock dynamics and strong mixing between the post-shock wind and ambient gas. The model successfully produces collimated bipolar wind-blown bubbles when relatively denser “equatorial” regions prevent the shell from expanding at low latitudes as quickly as it does along the pole.
New features have been added to the basic wide angle-wind blown bubble model. Close to the young star, effects such as gravity from the central star and the density distribution of the inner envelope become crucial in influencing the shape and evolution of the outflow. This has been investigated in detail by Wilkin & Stahler (1998), who model a quasi-static balance between a steady central wind, gravity, and a time-dependent angularly varying infall. In essence the bubble is considered as a series of accretion shocks. Using the momentum balance across the fixed shell face in both normal and transverse directions, the authors solve for the density and flow pattern within the shell. Because of the quasi-static assumption, the time-dependence of the bubble shape and size is directly linked to the way in which the infall changes in time. The stability of such a situation remains to be investigated. While they found that significant collimation can occur as the environmental density distribution becomes more flattened, the timescale for this is $`10^5`$ yr. They point out that this is much longer than what is observed. This result reveals the importance of performing dynamic, time-dependent calculations.
While considerable progress has been made in the wide angle wind scenario for molecular outflows there remain many aspects which have not yet been explored. In particular the full time-dependent multi-dimensional nature of the flows has not been examined. Solving cylindrically symmetric non-linear, time-dependent fluid equations by numerical simulation we obtain significantly different behavior from that obtained in previous wide angle wind blown bubble models. The shape of the envelope, namely the gradient of density with polar angle, not only affects the shape of the outer shock and resulting molecular shell, but also the shape of the inner (wind) shock. This can have important dynamical consequences as has been shown in Frank & Mellema (1996) and Mellema & Frank (1997).
It seems difficult to avoid the the conclusion that some intrinsic collimation of the initial mass ejection is required, given the evidence for highly-collimated jets over long scales (Bally et al., 1997; Bachiller, 1996). This is particularly true for emission-line jets of optically-visible T Tauri stars (e.g., Stapelfeldt et al. 1997; see also the review by Reipurth & Heathcote 1997), where there is little evidence for the presence of enough external (dusty) medium for initial hydrodynamic collimation of ionized jets to be effective. On the other hand, it should also be recognized that even models of mass ejection with collimated flows often have some wide-angle component. This is true not only of the magnetic “X-wind” model of Shu and collaborators (Shu et al., 1994; Najita & Shu, 1994; Shang et al., 1998), in which a dense axial flow is surrounded by a lower-density dispersing flow; it is also true of the original self-similar MHD disk wind (Blandford & Payne, 1982; Pudritz & Norman, 1983; Königl, 1989). For this reason it is worth exploring the interaction of wide-angle flows with the ambient medium.
In this paper we consider the time-dependent interaction of an infalling envelope with an expanding wind. Our focus is spherically-symmetric winds as an example of a maximally wide angle wind, though we do calculate two cases using aspherical winds for comparison. Collimation of the resulting flow will be enhanced if the infalling envelope is not spherically-symmetric. We take one of the simplest models for such non-spherical collapse, the model of Hartmann et al. (1996), from an initial self-gravitating sheet with no magnetic field.
In § 2 we present the physics and the computational scheme used for the simulations. The simulation results are examined in § 3. In § 4 we compare our simulations to the snowplow model of outflows, discuss the possible role and importance of turbulence in these models, and make comparisons of the simulations to observations. We conclude in § 5.
## 2 Computation
### 2.1 Methods
The simulation code solves the Euler ideal fluid equations with source (sink) terms for central point-source gravity and radiative cooling. The equations, written in conservative form, are:
$`{\displaystyle \frac{\partial \rho }{\partial t}}+\nabla \cdot (\rho 𝐮)=0,`$ (1)
$`{\displaystyle \frac{\partial (\rho 𝐮)}{\partial t}}+\nabla \cdot (\rho \mathrm{𝐮𝐮})+\nabla p=-\rho \nabla \mathrm{\Phi },`$ (2)
$`{\displaystyle \frac{\partial ϵ}{\partial t}}+\nabla \cdot \left[𝐮(ϵ+p)\right]=-\mathrm{\Lambda }(\rho ,T)-\rho 𝐮\cdot \nabla \mathrm{\Phi }.`$ (3)
where $`\rho `$ is the fluid mass density at a point, and $`𝐮`$ is the velocity. The pressure $`p`$ is related to the energy density $`ϵ`$ and its kinetic ($`ϵ_{kn}`$) and thermal components ($`ϵ_{th}`$) through the relations
$`ϵ=ϵ_{kn}+ϵ_{th},`$ (4)
$`ϵ_{kn}={\displaystyle \frac{1}{2}}\rho u^2,`$ (5)
$`ϵ_{th}={\displaystyle \frac{p}{(\gamma -1)}}.`$ (6)
The adiabatic index $`\gamma =5/3`$ is that for a monatomic gas. The temperature is determined through the ideal gas equation of state, with the particle mass set to that of atomic hydrogen:
$$p=\frac{\rho kT}{m_H}.$$
(7)
The cooling source term $`\mathrm{\Lambda }(\rho ,T)`$ includes a Dalgarno-McCray 1972 cooling curve which is applied above 6000 K, and a Lepp and Shull 1983 cooling curve applied below 6000 K to simulate the effects of both high and low temperature cooling.
Gravity in these models is purely due to a source at the origin. Self-gravity of the fluid is not included. The gravity source term is written in terms of the gradient of the potential $\mathrm{\Phi }$. This force is specified via the constant central source mass $M_{\ast }$ as
$$\nabla \mathrm{\Phi }=\frac{GM_{\ast }}{r^2}\widehat{r},$$
(8)
The radius $`𝐫`$ is the vector from the origin to the fluid point.
The equations are solved on an Eulerian grid using an operator split numerical method, where several operators are applied per timestep, each one simulating a different aspect of the physics. We also employ fluid tracking to determine what part of the initial grid a fluid parcel came from. The grid covers a quarter meridional plane of a cylinder and has assumed axial symmetry plus reflective symmetry across an equatorial plane. The basic hydrodynamics equations without source terms are applied via the numerical Total Variation Diminishing (TVD) method of Harten 1983 as implemented in dimensionally split form by Ryu et al. (1995). The source term for radiative cooling is applied explicitly to first order via an exponential as described in Mellema & Frank (1997) where:
$$ϵ_{th}^{n+1}=ϵ_{th}^n\mathrm{exp}\left(-\frac{\mathrm{\Lambda }^n(\rho ^n,T^n)}{ϵ_{th}^n}\mathrm{\Delta }t\right).$$
(9)
The superscripts represent a value at the $`n`$th and $`(n+1)`$th timestep, and $`\mathrm{\Delta }t`$ represents the amount of time between timestep $`n`$ and $`n+1`$. Gravity is applied through a first order explicit Euler operator which updates the velocities and the kinetic energy density. The fluid is evolved by rotational velocity through another first order explicit Euler operator.
The timesteps are adjusted to satisfy the Courant condition on the sourceless hydrodynamics, and simultaneously to be less than a reasonable fraction of the cooling time, $ϵ_{th}/3\mathrm{\Lambda }(\rho ,T)$. Because of the high densities in the ambient medium ($n>10^8\,\mathrm{cm}^{-3}$) the cooling time step can become relatively small and the number of time steps required to complete the simulation relatively large. This produces additional diffusion in the solutions, causing our flows to appear rather smooth. By artificially reducing the cooling in some test simulations, we were able to obtain sharper features. The gross morphology and evolution of the flow however remained qualitatively the same as those with the full cooling shown in § 3.
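A schematic of the source-term operators follows; this is a sketch only, since the actual code couples these steps to the dimensionally split TVD hydrodynamic update and to the full tabulated cooling curves.

```python
import numpy as np

def cooling_update(e_th, Lam, dt):
    """Exponential cooling operator of eq. (9); stable for any dt."""
    return e_th * np.exp(-Lam * dt / e_th)

def gravity_kick(rho, mom_r, r, GM, dt):
    """First-order explicit Euler update of the radial momentum density
    from the point-mass potential of eq. (8)."""
    return mom_r - rho * (GM / r**2) * dt

def timestep(dt_courant, e_th, Lam, cool_fraction=1.0 / 3.0):
    """Courant step, further limited by a fraction of the local cooling time."""
    return min(dt_courant, cool_fraction * np.min(e_th / Lam))
```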
We applied several tests to the code. We compared momentum conserving wind blown bubbles with the solutions obtained by Koo & McKee (1992) to test the basic hydrodynamics with strong cooling. We performed radiative shock tube tests along the axis using a simplified power law cooling function and compared these against semi-analytic shock solutions to test the cooling operator. To test gravity we reproduced the Bondi accretion solution in the code. To test gravity and rotational velocity, we maintained a Keplerian ring in an orbit around a central mass. In general, the code was able to recover the analytical solutions to better than 10%.
### 2.2 Model
The scenario we applied involved driving a spherically symmetric wind into an infalling non-spherical envelope. While a spherical wind may not be operating in many young stellar object systems, we seek in this investigation to explore the shaping of the outflow through wind/environment interactions and to make contact with previous analytic studies. We hope to relax the assumption of a spherical wind in a later paper. The parameters used are given in Table 1 and are based on typical values for low mass young stellar objects. A radially directed steady wind of velocity $200\,km/s$ and mass flux $10^{-7}\,M_{\odot }/yr$ was imposed at each timestep on the cells of the grid within an origin-centered sphere of radius equal to 10% of the radial size of the simulation. We chose an environment given by Hartmann et al. (1996), derived from their simulations of a collapsing, rotating, axially symmetric sheet. In this model, the collapse of the initially-flattened protostellar cloud gives rise to a highly anisotropic density distribution in the infalling envelope, which in turn helps strongly collimate the initially spherical flow. As Hartmann et al. point out, whether or not this model is correct in detail, the general property of non-spherical clouds to flatten or become more anisotropic as gravitational collapse proceeds lends credence to the idea of aspherical infall. The anisotropic stresses of magnetic fields can also produce non-spherical protostellar clouds, and Li & Shu (1996) argue that such initial configurations can also help make outflows more collimated. (However, we note that the static configurations suggested by Li & Shu may not be particularly relevant to outflow sources, because some collapse must have already occurred to produce the central mass responsible for the outflow. As shown by Hartmann et al. (1994), even if the initial density configuration is not toroidal or magnetically-dominated, a toroidal density distribution will result from collapse.)
The density is given by equations (8), (9) and (10) of Hartmann et al. (1996), which modify the calculations of Cassen & Moosman (1981) and Terebey et al. (1984) (referred to hereafter as TSC). The velocity distribution is taken from Ulrich (1976). The equations are rewritten in a slightly different but equivalent form in the appendix. They produce a flattened infalling toroidal density distribution with an equator to pole density contrast $\rho _e/\rho _p>1000$. This is larger than what is produced by the Cassen & Moosman and TSC models. The TSC model was used by Wilkin & Stahler (1998).
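For reference, a sketch of the underlying rotating-infall solution, written in what we believe is the standard Ulrich (1976) form; the sheet models of Hartmann et al. (1996) multiply this by an additional flattening factor which is omitted here, and all numbers below are illustrative, not the simulation parameters of Table 1.

```python
import numpy as np

G, Msun, yr, AU = 6.674e-8, 1.989e33, 3.156e7, 1.496e13   # cgs units

def ulrich_density(r, theta, Mdot, Mstar, r_c):
    """Ulrich (1976) envelope density (cgs) at radius r, polar angle theta
    (upper hemisphere, 0 < theta < pi/2).  r_c is the centrifugal radius.

    The launch angle theta_0 of the streamline through (r, theta) solves
        mu0**3 + (r/r_c - 1) mu0 - (r/r_c) mu = 0,   mu = cos(theta).
    """
    xi, mu = r / r_c, np.cos(theta)
    roots = np.roots([1.0, 0.0, xi - 1.0, -xi * mu])
    real = roots[np.abs(roots.imag) < 1e-10].real
    mu0 = real[(real > 0.0) & (real <= 1.0)][0]
    return (Mdot / (4.0 * np.pi) * (G * Mstar * r**3) ** -0.5
            * (1.0 + mu / mu0) ** -0.5
            * (mu / (2.0 * mu0) + (r_c / r) * mu0**2) ** -1.0)

rho = ulrich_density(50 * AU, np.pi / 3, 1e-6 * Msun / yr, 0.5 * Msun, 50 * AU)
print(rho / 1.673e-24)   # equivalent hydrogen number density in cm^-3
```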
The environment near the center of the sheet models used in this paper is a torus of high equatorial density ($6\times 10^8\,m_H/cm^3$) with a half-maximum opening angle of almost 180 degrees. In the hydrodynamic collimation simulations of Frank & Mellema and Mellema & Frank a “fat” torus was used with opening angles of $90^{\circ}$, similar to those obtained by Li & Shu (1996) for magnetized collapse. These previous simulations produced outflows with strong collimation. It is noteworthy that, as we shall see in § 3, it is possible to produce strongly collimated flows even with the wide opening angles of the sheet distribution.
We investigate the influence of ram pressure balance between the wind and the ambient gas by holding the wind speed constant and varying the ratio of the inflow mass flux to the wind mass flux (denoted $f^{\prime }$):
$$f^{\prime }=\frac{\dot{M}_i}{\dot{M}_w}.$$
(10)
We chose to hold the wind velocity constant because young stellar object wind and jet velocities are well observed to be a few hundred kilometers per second (Bachiller, 1996; Shu et al., 1991), while the range of the mass flux ratio $f^{\prime }$ is less well known. We have focused on models with $f^{\prime }=10,20,30,40,$ and $50$. This was accomplished by adjusting the infall mass flux and fixing the outflow mass flux. We also simulated cases where the inflow mass flux was reduced relative to the wind, and obtained basically the same results with only minor differences. We note that a measure of the effect of varying $f^{\prime }$ can be seen by calculating the angle $\theta _e$ at which the ram pressures are equal in our simulation. For $f^{\prime }=10$ we find $\theta _e=1.568$ radians (measured from the axis), which is less than $6\%$ of a grid cell from the equator. For $f^{\prime }=50$ we find $\theta _e=1.519$ radians, which is less than $130\%$ of a grid cell from the equator. Thus it is clear that the wind will be able to push all material away from the equator in the $f^{\prime }=10$ case, while the $f^{\prime }=50$ simulations should experience some material attempting to cross the inner wind sphere.
We note here the basis of our attack on this problem. The theoretical issue of the interaction of a stellar wind with an infalling environment is complex. There are a number of parameters controlling both the wind and the infalling environment. In choosing how to address the issues involved one must ask: which parameters are important enough, a priori, to warrant attention? Which connect directly to issues raised in previous studies? Which allow for a numerically clean solution, i.e. one whose specification of initial and boundary conditions does not impose serious transients or artifacts on the flow? In taking on this problem we have adopted a strategy which, hopefully, allows us to address important aspects of the physics while leaving others for future works. In the present paper we focus primarily on a relatively simple treatment of initial/boundary conditions (i.e. the wind and infalling environment) in order to clarify how a full solution of the governing equations differs from the more idealized solutions explored in previous works. We have constructed a focused set of numerical experiments that address key issues not explored in previous studies. In future works we plan to open other dimensions of the parameter space, including a more detailed treatment of aspherical winds.
## 3 Results
### 3.1 Spherical Winds/Aspherical Environments
The sampling in $f^{\prime }$ described in the previous section can be thought of as a sampling from a relatively strong wind to a relatively weak wind, as the results will show. In addition to exploring different steady wind cases, the simulations serve as a crude initial exploration of what might be expected in models with fully time-dependent wind mass loss rates. The details of time-dependence are particularly important for FU Orionis stars (Hartmann & Kenyon, 1996) where the observations show considerable variation in the wind, likely related to large fluctuations in the accretion rate of material through the disk. In this section, we start with a general overview of the results and continue by considering each simulation in more detail.
The principal results can be inferred from Fig. 1, showing density snapshots of all five simulations. In the $f^{\prime }=10$ case we see the strong wind overwhelms the infall and creates a wide conical cavity. As $f^{\prime }$ increases, the wind becomes relatively weaker, the infall mass flux dominates and, as a result, the wind becomes more confined and collimated. Even in the $f^{\prime }=50$ case, however, the wind is still strong enough to avoid being completely cut off. In simulations where we increase the infall flux such that $f^{\prime }=100$, the wind is completely choked by the infall, with its momentum slowly diffusing into the surrounding environment. Because of the diffusion and the direct interaction of the infall with the artificial wind boundary, we are not confident in the simulations after the wind becomes choked. Therefore we have only placed a lower limit ($f^{\prime }=50$) on the infall to wind flux ratio at which the wind is cut off.
In the strong wind ($f^{\prime }=10$) case (see Fig. 2), the wind pushes its way through the environment but is still shaped by it. By the time the swept up ambient material reaches the top of the grid (at $\sim 180$ y: see Figs. 3 and 4) it has a conical shape reminiscent of the outflows seen in Velusamy & Langer (1998) and Chandler et al. (1996). The outflow bubble opens with an angle of about $30^{\circ}$ to the vertical. The supersonic wind and molecular material are each compressed at the inner and ambient shocks respectively, forming a thin layer about the contact discontinuity at $20^{\circ}$ to the vertical. The dynamics of this region are dominated by the fact that the radially directed wind material hits the inner shock obliquely. Post-shock wind is redirected and forced to flow along the contact discontinuity. Eventually this high speed gas flows up the walls of the conical cavity and reaches a cusp at the top of the bubble. There the focused high speed flow moves ahead of the bubble, encircling its nearly spherical cap with a conical “jet”. While the development of such a pattern is readily understood in terms of basic radiative oblique shock physics, it is not clear that the cusps and resulting jets would be stable in real three dimensional bubbles.
It is interesting that as the bubble evolves, a region of warm gas ($T\sim 10^5\,K$) develops at the top, pushing the inner and outer shocks apart there. Usually wind blown bubbles are classified as either adiabatic or radiative depending on whether the cooling time is longer or shorter than the expansion time of the bubble. We believe that this top region does not fall into either category, and that instead we are seeing the development of a partially radiative bubble. This third classification, which lies between the adiabatic and radiative cases, was originally explored by Koo & McKee (1992). Such a case occurs when the wind material resupplies thermal energy generated at the shock faster than radiative cooling can remove it, but when the cooling time is still shorter than the bubble expansion time. To see if the system is partially radiative at the top of the $f^{\prime }=10$ bubble we need to compare the cooling time $t_c$ with the wind crossing time $t_x$.
Given that the top of the bubble has an essentially spherical shape we can use the analysis of 1-D bubbles to write the cooling and crossing times in terms of the distance from the wind source, $R$, plus other simulation parameters for the $f^{\prime }=10$ case. A rough estimate of the inner shock speed is $v_s=v_w=200\,km/s$, which produces a temperature of $8.8\times 10^5\,K$. The Dalgarno-McCray cooling curve at this temperature gives a $\mathrm{\Lambda }(n,T)/n^2$ of $1.65\times 10^{-22}\,erg\,cm^3/s$. Using $\dot{M}_w=10^{-7}\,M_{\odot }/yr$ and the factor of 4 density increase behind a strong shock we can determine the cooling time from the following equation:
$$t_c=ϵ_{th}/\mathrm{\Lambda }(n,T)\approx 4100\,\mathrm{s}\,(R/\mathrm{AU})^2.$$
(11)
The crossing time, $`t_x`$, is given simply in terms of the radial distance from the origin to the shell and the wind velocity:
$$t_x=\frac{R}{v_w}\approx 7.5\times 10^5\,\mathrm{s}\,(R/\mathrm{AU}).$$
(12)
Note that the crossing time grows linearly and the cooling time grows quadratically. Thus the cooling time eventually surpasses the crossing time. For the parameters given above this occurs when the shock is at a distance of about $`182AU`$ or $`2.7\times 10^{15}cm`$ which is well inside the simulation domain. We find that this is approximately the distance where the simulated bubble begins to exhibit its cap of warm postshock wind material. We believe that this is the first time that the partially radiative bubble phase has been seen in a simulation, confirming the prediction of Koo & McKee.
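As a quick numerical check of these estimates, using only the numbers quoted above:

```python
import numpy as np

Msun, yr, AU = 1.989e33, 3.156e7, 1.496e13       # cgs
k_B, m_H = 1.381e-16, 1.673e-24
Mdot_w, v_w = 1e-7 * Msun / yr, 2.0e7            # wind parameters used above
T_s, L_over_n2 = 8.8e5, 1.65e-22                 # shock temperature, Lambda/n^2

def times(R_AU):
    R = R_AU * AU
    n = 4.0 * Mdot_w / (4.0 * np.pi * R**2 * v_w * m_H)  # post-shock density
    t_c = 1.5 * n * k_B * T_s / (L_over_n2 * n**2)       # eq. (11)
    t_x = R / v_w                                        # eq. (12)
    return t_c, t_x

print(times(1.0))     # ~ (4.1e3 s, 7.5e5 s): cooling wins close in
print(times(183.0))   # the two times cross near R ~ 180 AU
```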
The weaker $f^{\prime }=20$ case is seen in Fig. 1 to also develop a conical outer shock, but the overall shape is more confined or collimated than in the previous case. The cusp which forms at the edge of the bubble is more pronounced. This effect may be due to the higher inertia of the shocked wind flow. At low latitudes material is channeled from regions closer to the origin where the density is higher. In addition, the polar regions of the bubble are expanding more slowly due to the relatively lower wind mass loss rate. Thus the refracted wind material emerges from the top of the bubble with higher momentum relative to the polar cap in this model than in the $f^{\prime }=10$ case, allowing the cusp to propagate farther from the cap. Another difference at the pole is that less warm post-shock gas exists in this case. This effect can also be attributed to the slower speed of the polar sectors of the bubble. A slower outer shock speed implies a faster inner shock speed (in the frame of the wind). This in turn leads to higher post-shock temperatures and stronger cooling in the shocked wind. Note that the inner shock forms an angle of about $10^{\circ}$ to the vertical and the outer shock (bubble) opens roughly to $20^{\circ}$.
At weaker winds ($f^{\prime }=30$) the inner shock becomes more confined and the flow is even more collimated. The cusp and resulting conical “jet” are more pronounced than seen previously. Again this is expected because the cap of the bubble expands at lower velocities. Figure 5 shows the details of the flow pattern for this case. Since the $f^{\prime }=20$ and $f^{\prime }=30$ models are similar, the figure emphasizes the points concerning the flow pattern made above. The wind shock opens about $2^{\circ}$ to the vertical and the bubble opens about $20^{\circ}$.
In the second weakest case ($f^{\prime }=40$) the inner shock becomes almost elliptical or “bullet” shaped. The tip of the shock at the axis is suspect because of the imposition of reflective boundary conditions and may not form if the cylindrical symmetry is relaxed (see for example Stone & Norman 1994). The wind is strongly focused by the oblique inner shock, producing an almost vertically directed high velocity flow. The resulting bubble is very well collimated. In this case the conical cusp is not as pronounced and the dynamics look very much like a jet. At the tip of the outflow (along the axis) the inner shock is acting in the same manner as a jet shock, decelerating vertically directed wind and shunting it off in the direction transverse to the propagation of the outflow. The full opening angle of the outer shock of the bubble is about $35^{\circ}$ ($17.5^{\circ}$ to the vertical).
The weakest wind case ($f^{\prime }=50$) is shown in Figs. 6, 7. It exhibits very strong collimation. The inner shock is strongly prolate, closes back on itself at a relatively small distance from the central source, and redirects all the wind into the collimated outflow. There is no extended region of freely expanding wind. This occurs because inflowing material is overwhelming the wind at low latitudes, inhibiting its expansion everywhere except at the poles. Even though the opening angle of the bubble’s outer shock at the base is $35^{\circ}$, the resulting outflow will appear cylindrical because of the focusing.
The trend of the evolutionary timescales of the different simulations allows us to infer the dynamical consequences of collimation. The timescales are taken from the snapshots in Fig. 1, which were chosen such that the bubbles are of comparable height. As the wind becomes weaker ($f^{\prime }=10$ to $f^{\prime }=50$) the timescales go $\sim 100$ y, $200$ y, $220$ y, $240$ y, $220$ y. The bubbles decelerate as the wind weakens. The effect saturates, however, and then reverses as the collimation becomes stronger. At first glance this reversal may be unexpected, because as winds become weaker they should supply less momentum to the bubble. However, as the wind becomes focused into a jet, the net flux of momentum, $F=\rho v^2A$, is injected into the environment across a smaller angular extent $A$. Thus the tip of a highly collimated bubble expands at a higher velocity than would occur for a spherical bubble.
We shifted from the “strong wind” to “weak wind” cases in these simulations by increasing the infall mass flux while fixing the wind velocity. If instead we shift from $f^{\prime }=10$ to $f^{\prime }=50$ by decreasing the mass flux of the wind and keeping the infall mass flux fixed, we find that the results are qualitatively the same. The post inner shock region expands slightly faster due to the lower density and reduced cooling there, but the effect is not significant. The results are, therefore, determined solely by the parameter $f^{\prime }$.
The results of the simulations show the complicated way in which the environment shapes the outflow. This occurs directly at the outer shock by providing an inertial gradient along the shock face. It also occurs indirectly by affecting the shape of the inner shock. Shaping the inner shock leads to shock focusing of wind material, creates cusps at the tips of the bubble and, in the most extreme cases, leads to focusing into a jet through prolate inner shocks. In all cases, the simulations show the “internal” flow of post inner shock wind material plays an important role in the dynamics.
The results also show the way in which outflows affect the environment. The winds carve out evacuated cavities of potentially large angular extent and sweep up ambient material at a high rate. The rate of mass swept up ranges from $\sim 1\times 10^{-6}\,M_{\odot }/yr$ down to a few times $10^{-7}\,M_{\odot }/yr$ as the wind becomes weaker. Because the winds become more collimated and on the whole expand much more slowly as they weaken, ambient mass is swept up at a slower rate. The wider angle winds are able to collect mass much more rapidly even though the envelope has less material in it.
### 3.2 Aspherical Winds/Spherical Environments
Although they are not the focus of this study, we have included the results of simulations with aspherical winds injected into a spherical environment for comparison. We note that addressing the issue of aspherical winds means specifying a model for those winds. This raises a number of questions, such as: should a simple sinusoidal variation be used or something akin to the cylindrical stratification seen in wide-angle models like those of the X-winds? If we focus on the former, should we vary only the density from pole to equator or only the velocity? If both vary, should the momentum or the energy flux be held constant across the stellar surface? Given these complexities we have chosen a highly simplified set of boundary conditions for these models and present them as a means of contrasting the results of the previous section.
The distribution in the environment is given by Terebey et al. (1984) with the same infall parameters as used in the strong-wind simulation described above: essentially it is the previously described sheet distribution except unflattened. The wind velocity distribution is given by
$$v(\theta )=v_w(\mathrm{cos}\theta )^{\mathrm{log}(1/2)/\mathrm{log}(\mathrm{cos}\theta _c)},$$
(13)
where $\theta $ is the polar angle, $v_w=200$ km/s is the maximum (polar) velocity, and $\theta _c$ is the angle at which the speed falls to half its maximum. The wind mass density at every point is kept the same as in the strong-wind simulation.
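A short sketch verifying that the exponent in equation (13), as written above, indeed places the half-maximum at $\theta _c$:

```python
import numpy as np

def v_wind(theta, v_w=200.0, theta_c=np.radians(70.0)):
    """Wind speed (km/s) versus polar angle, half of maximum at theta_c."""
    p = np.log(0.5) / np.log(np.cos(theta_c))
    return v_w * np.cos(theta) ** p

print(v_wind(0.0))                                         # 200.0 on the pole
print(v_wind(np.radians(70.0)))                            # 100.0 at theta_c
print(v_wind(np.radians(30.0), theta_c=np.radians(30.0)))  # 100.0
```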
The density resulting from injecting a mildly focused wind ($\theta _c=70^{\circ}$) is shown in Figure 8. Even at $300$ years, the bubble is much more limited in size than in the sheet environment cases: the expansion speed is lower and very little warm gas exists to push the bubble outwards. The focusing is also much more limited because of the environment.
The density snapshot resulting from injecting a more focused wind ($\theta _c=30^{\circ}$) into the spherical environment is shown in Figure 9. Here the focusing is pronounced, but the bubble extent at 300 years is still comparatively small because of the greater mass in the polar regions. As is to be expected, a focused wind in a spherical environment can also produce a focused outflow. Detailed results, however, such as the precise bubble shape, should not be taken as final because the infall material is able to press up against the artificial wind boundary.
These simulations show that the shape of a wind blown bubble is not unique to a single set of boundary conditions. As one might expect, we do see collimated outflows emerging from collimated winds, though there are some differences from the spherical wind case in terms of propagation speed and post-shock temperatures. These results show that the issue is always one of wind momentum or ram pressure injection as a function of angle. In the spherical-wind/aspherical-infall case, shock focusing helps redirect the wind ram pressure towards the pole. This is explored in more detail in the next section. In the aspherical-wind/spherical-infall case the wind momentum must be strongly collimated a priori to produce a narrow outflow.
Thus flow shapes are not unique nor should one expect them to be. However, as we will explore in the next section, the simulations presented in section 3.1 demonstrate an effectiveness of wide-angle wind models not seen in previous studies.
## 4 Discussion
In this section we discuss the results of our simulations in light of previous analytical models of YSO wind blown bubbles. We also attempt to establish some points of contact between our models and molecular outflow observations. In making these comparisons we must note a limitation of the present simulations, namely that they formally refer to only very small spatial and temporal scales. The overwhelming majority of observed molecular outflow structures have typical lifetimes of $10^4$–$10^5$ yr, in contrast to the $\sim 10^2$ yr time sequences shown here. In addition, for flows that are much larger than $\sim 0.1$ pc, it is very likely that the bulk of the swept-up interstellar material originally resided in regions outside the protostellar core, let alone the inner infall regions. Nevertheless, there are a couple of observations of younger and smaller outflows (Chandler et al., 1996) where our simulations may be more directly relevant. In addition some of the physical results may be generalized to larger-scale situations.
### 4.1 Comments on the Shu et al. “Snowplow” Outflow Model
In Shu et al. (1991) a simple, elegant “snowplow” model of the formation of bipolar molecular outflows is presented. The basic features of this model were a central wind with a mass flux varying with polar angle and an environment with a density that varies with polar angle and falls off in radius as $`1/r^2`$. A number of simplifying assumptions are invoked. Cooling is assumed instantaneous so that the bubble is a purely momentum-driven thin shell. The interaction of the ambient material and the wind is taken to be fully mixed and ballistic such that the dynamics at a given polar angle are independent of those at nearby angles. In addition, the swept up wind mass is taken to be negligible.
In the snowplow model, changing the ratio of total infall to wind mass flux ($f^{\prime }$) only affects the bubble’s expansion timescale but does not change the self-similar shape. The $1/r^2$ ambient profile leads to a simple expression for the bubble shape, and the variation with polar angle in the extent of the bubble is purely determined by the angular variations of the wind and ambient medium. The simulations, however, show a strong variation of bubble shape with $f^{\prime }$. This discrepancy implies that some dynamical aspects of the simulations are not captured in the snowplow model.
The most important physical difference between the simulations and the snowplow model is that the snowplow model does not account for the post inner shock redirection of wind material. Pressure gradients in the post-shock shell that come about because of the variation in the ambient density, and the resulting variations in shock obliquity and shock speed, have no consequence in the snowplow model, but have a profound effect in the simulations. If we were to somehow model the non-radial direction of the postshock wind and use that in the snowplow model we might obtain better agreement. The question becomes whether one should use the flow inside or outside the inner wind shock as the input wind to the snowplow model. While this may seem, on one level, to be a question of where one starts the calculation, more fundamentally it is an issue of what initial physics one includes to determine the wind distribution. Li & Shu (1996) make a modification of the snowplow model in this way by incorporating the distribution of the wind after it has been accelerated and partially focused by magnetic launching.
If dynamically important physics in wide wind scenarios is left out of the snowplow model, this raises the question of whether the model is useful in predicting observations. Masson & Chernin (1992) and Li & Shu (1996) have used the snowplow model to calculate distributions of mass and velocity to argue whether or not wide angle winds can drive outflows. Our simulations, however, show that even if the driver is initially a wide angle wind, it can be focused by a nonlinear interaction between the wind and environment. Our simulations thus appear to blur the distinction between wide angle winds and jets as the sole driving mechanism of the outflow.
### 4.2 Mixing and momentum transfer
A strong assumption used in the analytic models of Shu et al. (1991) and Wilkin & Stahler (1998) is that postshock wind and postshock ambient material become fully mixed on a dynamically short timescale. This allows the shell to be treated as a single fluid and greatly simplifies the calculations. To support the assumption, Wilkin & Stahler consider supersonic shear flows within the postshock regions and state that some form of instability will generate turbulence and rapidly mix the wind and ambient fluids. Our simulations have some diffusive mixing of fluids due to numerical effects, but no complete mixing occurs and there is no turbulence modeled in the code. Since the simulations show the post-shock shear flow influencing the shape of the bubble, it is worth discussing when turbulent mixing of ambient and wind material will occur and, if it does occur, how it might affect the bubble’s evolution.
#### 4.2.1 Turbulent length scale
The onset of turbulence in jets and bubbles is still poorly understood, but one estimate of the momentum transfer rate between material in a jet and a stationary ambient medium has been performed by Cantó & Raga (1991). From this, they calculate the length, $`L_t`$, a laminar jet can travel before it is subsumed in a turbulent boundary layer finding that $`L_t`$ scales linearly with jet Mach number, and inversely with the jet radius $`R_j`$. At Mach 10 they find $`L_t/R_j170`$. The calculations were performed assuming slab symmetry and are applicable to our supersonic postshock wind regions. Raga et al. (1995) extend the results to the case where both fluids are moving and find that the spreading rate is actually reduced.
Using the result of Cantó and Raga we can estimate when the flows inside the bubble become turbulent. For this purpose we calculate the width of the shell of post-shock wind in a slightly aspherical isothermal wind-blown bubble. The calculation is simplified by the nearly spherical shape, but is still consistent with shock focusing because even mildly aspherical bubbles will produce focused flows (Frank & Mellema 1996). If density $`\rho _{ws}`$ in the postshock region is constant, the mass in that region, $`M_{ws}`$, can be written in terms of the volume $`V_{ws}`$ as:
$$M_{ws}=V_{ws}\rho _{ws}.$$
(14)
If the speed of the shock $`v_s`$ is small compared with that of the wind, $`M_{ws}`$ can also be written
$$M_{ws}=\dot{M}_wt,$$
(15)
where $`\dot{M}_w`$ is the mass flux of the wind and $`t`$ is the age of the bubble. If the shock is relatively thin and the bubble is not too aspherical then we can approximate the volume as $`V_{ws}4\pi R_s\mathrm{\Delta }R_{ws}`$, where $`R_s`$ is the shock radius and $`\mathrm{\Delta }R_{ws}`$ is the shock width. In terms of the shock Mach number $`\stackrel{~}{M}`$ and the preshock wind density $`\rho _w`$ the isothermal condition requires that $`\rho _{ws}=\rho _w\stackrel{~}{M}^2=\stackrel{~}{M}^2\dot{M}_w/(4\pi R_s^2v_w)`$ (Shu, 1992). Equating (14) and (15) and solving for the shock width gives:
$$\mathrm{\Delta }R_{ws}=\frac{tv_w}{\stackrel{~}{M}^2}.$$
(16)
Once we calculate a shock speed, we can calculate the relative width of the bubble, $`\mathrm{\Delta }R_{ws}/R_s`$. For simplicity we choose the ambient environment to fall as $`1/r^2`$, which, as mentioned in the previous section, implies a constant shock velocity of $`v_s=\sqrt{(\dot{M}_w/\dot{M}_a)v_w(a/2)}`$. Choosing typical values of $`v_w=100km/s`$, $`\dot{M}_w/\dot{M}_a=1/10`$, $`\stackrel{~}{M}=100`$, $`a=0.2km/s`$, we obtain:
$$\frac{\mathrm{\Delta }R_{ws}}{R_s}=\left(\frac{\dot{M}_w}{\dot{M}_a}\frac{a}{2}\right)^{\frac{1}{2}}\frac{\sqrt{v_w}}{\stackrel{~}{M}^2}\frac{1}{100}.$$
(17)
Note the high Mach number is a result of the fact that in an aspherical bubble the shock creating the shear flow will be oblique and therefore will not strongly decelerate the wind material. In addition the Mach number is an inverse function of the sound speed. The strong cooling in the postshock region keeps this speed low, which also helps to keep the Mach number high. The simulated shear flow has velocity on order Mach 10 (as opposed to the compression Mach number $`\stackrel{~}{M}=100`$) and we can couple equation (17) with the result of Cantó and Raga, to find that
$$\frac{\mathrm{\Delta }R_{ws}}{R_s}\frac{L_t}{\mathrm{\Delta }R}1.$$
(18)
This suggests that the flow becomes turbulent only on a size comparable to that of the bubble, and that the focused tangential flow will have time to influence the dynamics before mixing occurs.
Two features this calculation doesn’t take into account further reduce the chance that turbulence will dominate the dynamics. First, the continuous supply of fresh wind material along the length of the shear flow is ignored as it was in the Cantó & Raga estimate. This effect would allow unmixed wind material to be continually resupplied at high latitudes, possibly at a rate faster than the turbulence incorporates material. Second, magnetic fields are ignored. Fields may thread the shell especially when the bubble is small. There is some evidence that toroidal fields, which are present in magnetocentrifugal wind launching models, retard the entrainment of envelope material (Rosen et al., 1999). Poloidal fields at moderate Mach numbers can also become wrapped up in vortices in the mixing fluids, strengthening the field which in turn stabilizes the shear flow (Frank et al., 1996).
We have neglected effects in our calculation that may play a mediating role. The most important lie in the crude way in which we treat the geometry of the non-spherical shock. We also haven’t considered the early transition from a spherical to an elliptical bubble, when the shear flow Mach number will be more modest. Also, when the shear is stronger, centrifugal forces in the bubble may be important. The details of what instabilities form and how they grow in these geometries could affect the growth rate of the turbulence. Thus while the estimate (18) is suggestive, how quickly the flow becomes turbulent merits further study.
#### 4.2.2 Direction of the turbulent flow
Taking the opposite view, we can assume that turbulence will be important dynamically at some time. How then would the resulting mixed fluid behave? Wilkin & Stahler (1998) assume the postshock flow is dominated by the shocked ambient gas and will be carried down towards the disk. While there is much more mass in the ambient material, the momentum in the wind material is quite high and the flow could be driven poleward. Below we calculate the direction a mixed flow would take in a wind blown bubble shell using a simple model.
In figure 10 we show preshock and postshock regions for the ambient and wind material with a mixing layer sandwiched between. These regions denoted by subscripts $`a`$,$`w`$, and $`L`$ respectively. We assume the infall $`v_a`$ and wind velocities $`v_w`$ are constant and the turbulent boundary layer grows into the postshock regions at the same constant velocity $`v_0`$. In this framework the question becomes: is $`v_L`$ positive (i.e.poleward) or negative (i.e.equatorward)?
Mass conservation implies:
$$\rho _L2v_0=\rho _{sa}v_0\rho _{sw}(v_0)\rho _L=\frac{1}{2}(\rho _{sw}+\rho _{sa}).$$
(19)
Likewise, momentum conservation for the component moving parallel to the faces of the boundaries implies
$$(\rho _Lv_L)2v_0=(\rho _{sa}v_{sa})v_0(\rho _{sw}v_{sw})(v_0)\mathrm{\hspace{0.33em}2}\rho _Lv_L=\rho _{sa}v_{sa}+\rho _{sw}v_{sw}.$$
(20)
The preshock densities are determined from the mass flux and velocity through $`\rho =\dot{M}/(4\pi R_s^2v)`$, assuming the shell is thin. By taking the shocks to be isothermal, we obtain the postshock densities by multiplying the preshock densities by $`\stackrel{~}{M}^2`$ where $`\stackrel{~}{M}`$ is the Mach number. Because the fluid velocity parallel to the shock remains unchanged through the front, $`v_{sa}=v_a\mathrm{sin}\theta _{sa}`$, and $`v_{sw}=v_w\mathrm{sin}\theta _{sw}`$. Incorporating these velocities and solving the equations (19) and (20) for $`v_L`$ gives:
$$v_L=\frac{1}{2\rho _l}\frac{\dot{M}_a\stackrel{~}{M}_a^{\mathrm{\hspace{0.33em}2}}}{4\pi R_s^2}\mathrm{sin}\theta _{sa}\left(\frac{\dot{M}_w}{\dot{M}_a}\left(\frac{\stackrel{~}{M}_{sw}}{\stackrel{~}{M}_{sa}}\right)^2\frac{\mathrm{sin}\theta _{sw}}{\mathrm{sin}\theta _{sa}}1\right).$$
(21)
We are most interested in whether the mixed flow moves in the direction of the wind or the infall. If the wind and ambient velocities are mostly aligned across the shell, then $`\mathrm{sin}\theta _{sa}=\mathrm{sin}\theta _{sw}`$, and the condition for the mixed flow moving in the direction of the wind is:
$$\left(\frac{\dot{M}_w}{\dot{M}_a}\left(\frac{\stackrel{~}{M}_{sw}}{\stackrel{~}{M}_{sa}}\right)^2\right)>1.$$
(22)
For typical values of $`\stackrel{~}{M}_{sw}100`$, $`\stackrel{~}{M}_{sa}10`$, $`\dot{M}_w/\dot{M}_a=1/10`$, the left hand side of (22) is $`10`$. Thus, the flow moves in the direction of the wind.
We find that the large momentum input of the wind would tend to accelerate the mixed material poleward. At the pole, material collides and sprays ahead of the bubble. This is the Cantó focusing mechanism Cantó & Rodríguez (1980), called conical converging flows, which operates in simulations under a variety of circumstances (Mellema & Frank, 1997). Material would be cleared and accelerated above the pole leading to a more elongated bubble, and possibly a very narrow jet.
Ignoring shear flows simplifies calculations greatly. We have argued, however, they are important dynamically in bubbles driven into stratified environments whether turbulence is dominant or not. Future models will have to consider such flows in more detail.
### 4.3 Properties of young outflows
As noted in the introduction, Chandler et al. (1996) point to three qualities that models of molecular outflows must reproduce to match their observations of TMC-1 and TMC-1A. They require conical outflow lobes close to the star, evacuated outflow cavities, and moderate $`30^{}40^{}`$ opening angles, and they argue that existing models do not explain these effects.
We find, however, that our simulations create outflows meeting the requirements. The opening angle of bubbles in the $`f^{}=20`$ to $`f^{}=50`$ cases are $`35^{}40^{}`$. To be sure, these angles are determined from the density profiles of the simulations and should in the end be determined from synthetic emission maps. However, the angles are not as wide as might naively be expected from a wide angle wind. The similarity to TMC-1, TMC-1A and other young outflows is encouraging because these simulations are on a small spatial and time scale (300 years, $`10^{16}cm`$) and would hopefully match up more readily with younger outflows such as these.
The opening angle of the bubble in the $`f^{}=10`$ case is $`60^{}`$, which is too wide to compare with TMC-1 and TMC-1A, but wider outflows have been observed. The outflow around IRS1 in Barnard 5 has an opening angle of about $`125^{}`$ (Velusamy & Langer, 1998). Although these outflows may be explained by multiple non-aligned driving winds or jets, very strong wide angle winds may be at work in such situations.
### 4.4 Mass vs. velocity relations.
Masson & Chernin (1992) used CO line intensities of NGC 2071, L1551, and HH 46-47 and assumptions about optical depth to obtain the amount of mass per velocity channel in these outflows. They measured a power law in this mass versus velocity relationship of $`\mathrm{\Delta }m/\mathrm{\Delta }v=v^\gamma `$ with an exponent of $`\gamma =1.8`$. With such a steep slope little mass is accelerated to high velocities, leading Chernin and Masson to conclude that any high velocity driver of the molecular outflows must have a small cross-section and consequently cannot be a wide angle wind. Calculations of the observed mass vs. velocity relation in other outflows have been calculated with similar results but with interesting differences. Chandler et al. (1996) have pointed out that it is hard to justify fitting the distributions in their observations with a single power law, although they do try to fit them with two. They find that some “average” power law curve obtains a $`\gamma 1.8`$. Bally et al. (1999) bring up the issue of sensitivity of the transformation from line intensities to masses to assumptions of optical depth. The universality of a mass vs. velocity relationship may therefore be in question, whether the relationship is even a power law at all, and whether the mass has been properly calculated. We are interested however in making some comparison of models and observations, so it is worth discussing the mass-velocity relation calculated from the simulations.
Figure 11 shows the histograms of ambient mass as a function of projected velocity from our simulations. We able to unambiguously identify the ambient material through the code’s fluid tracking and independently by the location of the contact discontinuity. The swept up masses are 1-2 orders of magnitude lower than the masses shown in the TMC1 and TMC1A observations of Chandler et al. (1996). However, our numerical simulations are limited in the length of time considered and the spatial region that can be encompassed while maintaining adequate resolution. Thus, our simulations correspond to sweeping up much less envelope material than has occurred in most observed outflow regions. Quantitatively, our models correspond to extremely young, rare systems; qualitatively, many of the effects we show should be present in older objects.
As found in the Chandler et al. (1996) observations the simulation curves are not well fit by a single power law. The curves are logically split into two parts: a “shallow” portion that can be fit on the simulation plots roughly by a power law of exponent ranging from $`1.5`$ to $`1.3`$ and a steeper portion. The shallow portion matches well with the TMC1 and TMC1A results of -1.26 and -0.75 respectively, and is fairly close to the Chernin & Masson value of $`\gamma 1.8`$. These fits are subject to some small variation from the choice of where one portion begins and ends, and the size of the velocity bins, but varying among reasonable choices for these only changes the exponents by a tenth or so. The steeper portions of the simulation curves are fit with exponents steeper than $`10`$, to be compared with an observation of $`3`$ (TMC1 and TMC1A) and a jet-model value of $`5.6`$ (Zhang & Zheng, 1997). There appears, however, to be some flattening of the steep portion as the system evolves in the $`f^{}=50`$ case, which is the most focused model. The slope flattens from $`19`$ to about $`7`$ to $`3.25`$ by $`220`$ years. The focusing accelerates the polar tip and sweeps up increasingly more mass to higher velocities. This acceleration might also occur in other less focused cases when the outflow “breaks out” of the protostar’s cloud core into much less dense material. If this breakout flattens the mass vs. velocity curve at high velocity then it could explain the observationally derived curves from older outflows.
The shallow and steep portions of the mass-velocity curves are connected by a “hump,” which does not appear in the observationally determined curves. This feature is also seen in the mass-velocity plot of Zhang & Zheng (1997) and we find that in our simulations it is largely due to the innermost portions of the ambient shock - those farthest from an observer. Optical depth effects could play a role in translating this part of the curve to an accurate integrated CO line profile.
The mass vs. velocity relations for the simulations look promising, with one caveat being that questions remain about this method of matching observations to models. Another caveat is that extrapolating these results to those of large-scale outflows (larger than 0.1 pc) is not straightforward. Large outflows are likely to involve significantly more material from regions exterior to the cloud than material in the infall in the infall region, and thus the simulations here may refer only to structures on 100 - 1000 AU scales. The Chandler et. al. observations are more relevant in this case. Noting these provisos, the models have mass vs. velocity relations whose shallow portions are about as steep as those quoted by Chernin & Masson, and the steeper portions may flatten as the flow focuses or breaks out of the cloud core. The simulated relations have striking similarities to the observationally derived curves despite the fact that the model systems are driven by wide angle winds.
## 5 Conclusion
We have performed radiative simulations of the interaction of a central spherically symmetric wind with an infalling environment under conditions similar to those found around young stellar objects. We include gravity from the protostar and rotation about the axis. We have examined the effect of varying the relative infall to wind mass flux on the morphology and kinematics of the resulting outflow.
In our simulations, shaping by the toroidal YSO environment of the wind shock, in addition to the outer molecular shock, focuses the outflow, leading to morphologies and kinematics not explored in earlier analytical models. Most importantly, the outflows become progressively more collimated as the central wind becomes weaker relative to the infall. Stronger winds can reverse the inflow of material creating the wide evacuated conical cavities seen in some observations of molecular outflows. Weaker winds are focused more tightly and form elongated, bipolar structures. The simplified snowplow model for these outflows does not explain these variations. Thus, conclusions made from the snowplow model that wide angle winds cannot drive molecular outflows should be reexamined.
We suggest that the effects of turbulent mixing of wind and ambient material, which are not treated explicitly in the simulations, will not significantly alter the results. The mixing is shown to be a relatively inefficient process. The shear speed of the post-shock wind in the simulations is high and we estimate that the poleward focused flow will traverse a large part of the wind blown bubble’s circumference before mixing becomes significant. If the mixing does occur, the momentum in the wind is large enough to force the flow towards the poles and possibly form a conical converging flow.
We find a favorable comparison of the mass vs. velocity relations derived from the simulations to those from observations. The shallow portion of the curve at low velocity can be roughly fit by a power law of exponent from about $`1.5`$ to $`1.3`$. The steeper portion at high velocity is fit by an exponent of $`10`$ or steeper, although the most focused of our simulations shows a gradual flattening. This leads us to speculate that when the other simulations run to breakout of the cloud core there might be further focusing and a similar transition in the mass vs. velocity curve.
These simulations show that directed outflows can result from wide angle winds. The nonlinear interaction of wind and environment leads to dynamics that are not easily modeled outside of simulation, but which lead to behavior similar to what is seen in young outflows. Properties that have been previously been proscribed to jet driven systems can be found in the wide angle winds. Further synthetic observations need to be performed on simulations like those in this paper to indicate the other ways in which the wide angle wind model offers an explanation for molecular outflows.
For low mass YSOs it is probable that wide angle winds will be generated by some form of MHD mechanism associated with an accretion disk (Ouyed & Pudritz, 1997) or disk-star boundary layer (Shu et al., 1994). Although our simulations are hydrodynamic and involve a maximally uncollimated (i.e.isotropic) wind, the shaping of outflows should be general and occur even in the presence of magnetic fields. This is because shock focusing could occur even when the shocks are magnetized. In addition, MHD driven wide angle wind models will likely enhance the outflow collimation in two ways. The first is because these models typically collimate the wind early, in the “wind sphere” region we do not model, to send more momentum along the poles. The second is because toroidal fields are generated and carried out by the wind in such models, providing radially directed hoop stresses in the wind post-shock region.
In future papers we hope to explore in more detail the emissive properties of wide wind driven simulated outflows and make a more direct comparison with observations, as well as study long term behavior. Additionally, these studies of the relative magnitude of the wind have laid the foundation for an examination into how time variation in the wind affects the outflow. With time dependence of the source, which we expect on observational grounds, the outflows can be extremely complicated kinematically. Finally a more detailed exploration of the parameters associated with the asphericity of the wind should be explored allowing a better link between its shape and the resulting properties of the outflows.
We wish to thank Thomas Gardiner for his helpful discussions. This work was supported by NSF Grant AST-0978765 and the University of Rochester’s Laboratory for Laser Energetics through a Frank J. Horton Fellowship and their computing facilities.
## Appendix A Collapsing Sheet Model
In this appendix we briefly recapitulate the equations of the sheet density distribution described in Hartmann et al. (1996).
The collapsing sheet is described as a function of spherical radius from the origin $`r`$ and cosine of the polar angle $`\mu `$. In spherical coordinates the mass density and velocity is:
$`{\displaystyle \frac{\rho }{\rho _n}}({\displaystyle \frac{r}{R_c}},\mu )`$ $`=`$ $`\left(\mathrm{sech}(\eta \mu _0)\right)^2\left({\displaystyle \frac{\eta }{\mathrm{tanh}(\eta )}}\right){\displaystyle \frac{\rho _{CMU}}{\rho _n}}({\displaystyle \frac{r}{R_c}},\mu )`$ (A1)
$`{\displaystyle \frac{\rho _{CMU}}{\rho _n}}({\displaystyle \frac{r}{R_c}},\mu )`$ $`=`$ $`\left({\displaystyle \frac{r}{R_c}}\right)^{3/2}\left(1+{\displaystyle \frac{\mu }{\mu _0}}\right)^{1/2}\left({\displaystyle \frac{\mu }{\mu _0}}+{\displaystyle \frac{2\mu _0}{\frac{r}{R_c}}}\right)^1`$ (A2)
$`{\displaystyle \frac{v_r}{v_k}}({\displaystyle \frac{r}{R_c}},\mu )`$ $`=`$ $`({\displaystyle \frac{r}{R_c}})^{1/2}\left(1+{\displaystyle \frac{\mu }{\mu _0}}\right)^{1/2}`$ (A3)
$`{\displaystyle \frac{v_\theta }{v_k}}({\displaystyle \frac{r}{R_c}},\mu )`$ $`=`$ $`({\displaystyle \frac{r}{R_c}})^{1/2}(\mu _0\mu )\left({\displaystyle \frac{\mu _0+\mu }{\mu _0(1\mu ^2)}}\right)^{1/2}`$ (A4)
$`{\displaystyle \frac{v_\varphi }{v_k}}({\displaystyle \frac{r}{R_c}},\mu )`$ $`=`$ $`({\displaystyle \frac{r}{R_c}})^{1/2}\left(1{\displaystyle \frac{\mu }{\mu _0}}\right)^{1/2}\left({\displaystyle \frac{1\mu _0^2}{1\mu ^2}}\right)^{1/2}.`$ (A5)
where $`\mu _0=\mu _0(r/R_c,\mu )`$ specifies the location (cosine of polar angle) on a reference sphere at some radius $`r_0`$ which a gas parcel at the given coordinates came from. It is defined implicitly by:
$$\frac{r}{R_c}=\frac{1\mu _0^2}{1\frac{\mu }{\mu _0}}.$$
(A6)
The parameter $`\eta `$ is the ratio of the distance inside-out collapse has progressed into the sheet $`r_0`$, to the scale height $`H`$ of the initial, static, self-gravitating sheet. It also describes the degree of flattening in the central density distribution. The value of $`\eta `$ is taken as constant during the simulations. Other basic constants include the centrifugal radius, $`R_c`$, the Keplerian velocity at the centrifugal radius, $`v_k`$, and a density $`\rho _n`$. These in turn can be defined using the mass of the (forming) central star $`M_{}`$, and an angular velocity at the radius $`r_0`$ of $`\dot{\varphi }_0`$.
$`R_c`$ $`=`$ $`{\displaystyle \frac{\dot{\varphi }_0^2r_0^4}{GM_{}}}`$ (A7)
$`v_k`$ $`=`$ $`\sqrt{{\displaystyle \frac{GM_{}}{R_c}}}`$ (A8)
$`\rho _n`$ $`=`$ $`{\displaystyle \frac{\dot{M}_a}{4\pi (GM_{}R_c^{\mathrm{\hspace{0.33em}3}})^{1/2}}}`$ (A9) |
no-problem/9910/astro-ph9910035.html | ar5iv | text | # Gravitational clustering in a 𝐷-dimensional Universe.
## Abstract
We consider the problem of gravitational clustering in a $`D`$-dimensional expanding Universe and derive scaling relations connecting the exact mean two-point correlation function with the linear mean correlation function, in the quasi-linear and non-linear regimes, using the standard paradigms of scale-invariant radial collapse and stable clustering. We show that the existence of scaling laws is a generic feature of gravitational clustering in an expanding background, in all dimensions except $`D=2`$ and comment on the special nature of the 2-dimensional case. The $`D`$-dimensional scaling laws derived here reduce, in the 3-dimensional case, to scaling relations obtained earlier from $`N`$-body simulations. Finally, we consider the case of clustering of 2-dimensional particles in a 2-$`D`$ expanding background, governed by a force $`GM/R`$, and show that the correlation function does not grow (to first order) until much after the recollapse of any shell.
I. Introduction
The temporal evolution of a system consisting of a large number of particles interacting with each other via Newtonian gravity is a formidable problem to tackle. Such a system has, in fact, no “final” state of thermodynamic equilibrium as there is no limit to its phase space volume. The phase volume available can be continuously increased by the separation of particles into a collapsed core and a dispersed halo, with the core becoming more and more tightly bound and the halo particles moving to larger and larger distances. In fact, the only stable configuration of such a system consists of a tightly coupled binary, with all the other particles at infinite distance.
The above situation changes drastically if the background of the system (i.e. the space in which the particles are moving) is itself expanding. In this case, the expansion provides a “stabilising” influence on the evolution and can result in the formation of stable structures in which the effects of gravitational collapse are balanced by the expansion. Further, there also exists the possibility that the outmoving halos may be captured by other compact cores, leading to the build up of larger structures. Thus, the issue of gravitational clustering of a large number of particles appears more tractable in the case of a background expanding (in general) with a time-dependant scale factor. This problem is, of course, of immense physical relevance as there currently exists strong evidence that the matter density in the Universe is dominated by collisionless dark matter particles, which interact solely by gravity. Thus, if the length scales of interest are much smaller than the Hubble scale (and particle velocities are non-relativistic), the formation of large-scale structure in the Universe is well described by the above picture, making it worthy of investigation.
In the linear regime, where deviations from uniformity are small, perturbative techniques are used to obtain the time evolution of the system parameters, for example, the correlation functions. These methods, of course, fail in the quasi-linear and non-linear regimes and there is, as yet, no clear analytical picture of the behaviour of the system in these stages. However, in the case of 3+1 dimensions, there exists a set of scaling relations relating the linear and non-linear correlation functions (, ). These scaling laws are not particularly well understood but appear to be validated by numerical simulations (, ).
A better understanding of a physical problem is sometimes attained by treating it in a more general manner in an arbitrary number of dimensions as one may then be able to separate the generic features of the problem (as in, issues arising from the nature of the interaction) from results which stem from its dimensionality. This has been seen earlier, for example, in the case of the Ising model. In the current letter, we address the issue of gravitational clustering in a $`D`$-dimensional, expanding Universe and attempt to derive the scaling relations, using the well-known paradigms of scale-invariant radial infall in the quasi-linear phase and stable clustering in the non-linear regime.
II. Clustering in $`D`$-dimensions
We consider the case of a $`D`$-dimensional Universe, expanding with a time-dependant scale factor $`a(t)`$. Further, we use the maximally symmetric Robertson-Walker metric (in $`D+1`$ dimensions) and specialise to flat space ($`k=0`$). The equations governing the evolution of $`a(t)`$ are then (see )
$$\frac{\dot{a}^2}{a^2}=\frac{2\kappa (D)}{D(D1)}\rho _b$$
(1)
$$\frac{\ddot{a}}{a}+\frac{D2}{2}\left(\frac{\dot{a}}{a}\right)^2=\frac{\kappa (D)}{(D1)}p$$
(2)
where $`\rho _b`$ and $`p`$ are, respectively, the density and pressure of the background Universe and $`\kappa `$ is the constant in the Einstein equations, which, in general, can depend on the dimension $`D`$. We note that equations (1) and (2) are obtained by imposing the constraints of homogeneity and isotropy on the Einstein equations in $`(D+1)`$ dimensions; the situation is thus manifestly isotropic. Further, it can be seen that the behaviour in the $`D=2`$ case is likely to be special, due to the presence of the $`(D2)`$ coefficient in the second term of equation (2). We emphasise that the present work considers the clustering of $`D`$-dimensional particles in a $`D`$-dimensional expanding Universe. Earlier studies of 2-$`D`$ clustering in the literature (see, for example, ) treated the clustering of infinite needles in a 3-dimensional background; this will be commented upon later.
Given an equation of state, $`p(\rho _b)`$, one can solve equations (1) and (2) for the evolution of $`a(t)`$. For pressureless dust, with $`p=0`$, this implies that $`\rho _b`$ is given by
$$\rho _ba^D$$
(3)
Next, we incorporate the effects of perturbations on the smooth background by considering the evolution of the above system starting from Gaussian initial conditions with an initial power spectrum $`P_{in}(k)`$. The two-point correlation function, $`\xi (a,x)`$, is defined as the Fourier transform of the power spectrum. We will, for convenience, work with the mean two-point correlation function, $`\overline{\xi }(a,x)`$, defined by
$$\overline{\xi }(a,x)=\frac{D}{x^D}_0^x𝑑yy^{D1}\xi (a,y)$$
(4)
and attempt to relate the exact $`\overline{\xi }(a,x)`$ in the quasi-linear and non-linear regimes to the mean correlation function calculated from linear theory, at the same epoch.
The equation for the conservation of pairs can be written in $`D`$-dimensions as
$$\frac{\xi }{t}+\frac{1}{ax^{D1}}\frac{}{x}\left[x^{D1}v(1+\xi )\right]=0$$
(5)
where $`v(a,x)`$ is the mean relative pair velocity at scale $`x`$ and time $`t`$. In terms of $`\overline{\xi }(a,x)`$, this gives
$$\left[\frac{}{A}h(a,x)\frac{}{X}\right]\mathrm{ln}(1+\overline{\xi })=Dh(a,x)$$
(6)
Here, $`h(a,x)=v/\dot{a}x`$, $`X=\text{ln }x`$ and $`A=\text{ln }a`$. The above equation can be further simplified by defining $`F=\mathrm{ln}\left[x^D(1+\overline{\xi })\right]`$, yielding
$$\left[\frac{}{A}h(a,x)\frac{}{X}\right]F=0$$
(7)
The characteristic curves of this equation, on which $`F`$ is a constant, satisfy the condition $`\mathrm{ln}\left[x^D(1+\overline{\xi })\right]=\mathrm{constant}`$, i.e.
$$x^D(1+\overline{\xi })=l^D$$
(8)
where $`l`$ is some other length scale. (Note that $`x^D(1+\overline{\xi })`$ is proportional to the number of neighbours of a given particle, within a $`D`$-dimensional sphere of radius $`ax`$; the above equation expresses the conservation of pairs in this sphere .) When the evolution is linear at all relevant scales, $`\overline{\xi }(a,x)1`$ and $`xl`$. However, in the non-linear regime, $`\overline{\xi }(a,x)1`$ at some scale $`x`$; clearly $`xl`$. Thus, the behaviour of clustering at a scale $`x`$ is determined by the transfer of power from a larger scale $`l`$ along the characteristics defined by equation (7). This suggests that one should try to relate the true $`\overline{\xi }(a,x)`$ to the linear correlation function $`\overline{\xi }_L(a,l)`$, evaluated at a different scale.
The paradigm of scale-invariant radial collapse (see , ) will be used to carry out the above procedure, in the quasi-linear regime. Consider the evolution of a $`D`$-dimensional, spherically symmetric, overdense region, containing a mass $`M`$. In general, such a region will initially expand with the background Universe until the excess gravitational force due to its enclosed mass causes it to collapse back upon itself. Thus, the radius of the region will initially rise, reach a maximum and then decrease. The equation of motion for the radius, $`R(t)`$, of such a $`D`$-dimensional, spherical region is ()
$$\frac{d^2R}{dt^2}=\frac{2(D2)}{D^2}(1+\delta _i)\frac{l^2}{t_i^2}\frac{l^{D2}}{R^{D1}}$$
(9)
where we have replaced for the mass, $`M`$, in terms of $`\delta _i`$, $`l`$ and $`t_i`$, i.e. the initial density contrast, shell radius and time, respectively. We note that the acceleration, $`(d^2R/dt^2)`$, is proportional to $`1/R^{D1}`$, in $`D`$ dimensions. The energy integral for equation (9) is
$$E=\frac{1}{2}\left(\frac{dR}{dt}\right)^2\frac{2(1+\delta _i)}{D^2}\frac{l^2}{t_i^2}\left(\frac{l}{R}\right)^{D2}$$
(10)
The above expression is, of course, not valid for $`D=2`$; this case will be treated separately later. We evaluate the energy constant, $`E`$, by requiring that the velocity satisfies the unperturbed expansion at $`t=t_i`$, i.e. $`dR/dT=lH_i=2l/Dt_i`$, where $`H_i=2/Dt_i`$ is the Hubble parameter. This gives
$$\left(\frac{dR}{dt}\right)^2=\frac{4(1+\delta _i)}{D^2}\frac{l^2}{t_i^2}\left(\frac{l}{R}\right)^{D2}\frac{4\delta _i}{D^2}\frac{l^2}{t_i^2}$$
(11)
At turnaround, $`R=R_M`$ and $`dR/dt=0`$. Thus
$$\left(\frac{\delta _i}{1+\delta _i}\right)=\left(\frac{l}{R_M}\right)^{D2}$$
(12)
or, since $`\delta _i1`$,
$$R_M=l\delta _i^{1/(2D)}$$
(13)
In the quasi-linear regime, we expect clustering to take place in a region surrounding density peaks of the linear regime, i.e. around regions such as the spherical region considered above. Making the usual assumption that the typical density profile around such a peak is equal to the average profile around a mass point, we can write this density profile as
$$\rho (x)\rho _\mathrm{b}(1+\overline{\xi })$$
(14)
Thus, the initial density contrast, $`\delta _i(l)\overline{\xi }_L(l)`$, in the initial epoch, when linear theory is valid. Since $`R_M=l\delta _i^{1/(2D)}`$, clearly $`R_M\overline{\xi }_L^{1/(2D)}`$. In the scale-invariant, radial collapse picture, each shell can be approximated as contributing an effective radius proportional to $`R_M`$. Taking the final effective radius, $`x`$, as proportional to $`R_M`$, the final mean correlation function is given by
$$\overline{\xi }(x)\rho \frac{M}{x^D}\frac{l^D}{\left[l\overline{\xi }_L^{1/(2D)}\right]^D}\left[\overline{\xi }_L(l)\right]^{D/(D2)}$$
(15)
Thus, the final correlation function $`\overline{\xi }_{\mathrm{QL}}`$ at $`x`$ is the $`D/(D2)`$ power of the initial correlation function at $`l`$, where $`l^Dx^D\left[\overline{\xi }_L(l)\right]^{D/(D2)}x^D\overline{\xi }_{\mathrm{QL}}(x)`$, which is the form required by equation (8), if $`\overline{\xi }_{\mathrm{QL}}1`$.
Let us turn our attention, next, to the non-linear regime; here, we use the ansatz of stable clustering ($`h1`$ for $`\overline{\xi }1`$), which is physically well-motivated as it seems reasonable to expect stable, bound systems to form under the joint influence of gravity and the expansion (). Such systems would neither expand nor contract and would hence have peculiar velocities equal and opposite to the Hubble expansion, i.e. $`v^i=\dot{a}x^i`$. (We emphasise that the stable clustering hypothesis is an ansatz and might well fail if mergers of structures are important.) Stable clustering requires that virialized systems should maintain their densities and sizes in proper co-ordinates, $`r=ax`$. This would require the correlation function, in $`D`$ dimensions, to have the form $`\overline{\xi }_{\mathrm{NL}}(a,x)=a^DF(ax)`$, where $`F`$ is some function and the $`a^D`$ factor arises from the decrease in the background density. We assume that $`\overline{\xi }_{\mathrm{NL}}(a,x)`$ is a function of $`\overline{\xi }_L(a,l)`$, where $`l^Dx^D\overline{\xi }_{\mathrm{NL}}(a,x)`$, from equation (8). This relation can be written as
$$\overline{\xi }_{\mathrm{NL}}(a,x)=a^DF(ax)=U\left[\overline{\xi }_L(a,l)\right]$$
(16)
where $`U(p)`$ is an unknown function of its argument, which is to be determined. The density contrast and mean correlation function have the relation $`\overline{\xi }\delta ^2`$, in the linear regime; this arises from the definition of $`\xi `$ as the Fourier transform of $`\delta ^2`$. Now, in a flat $`D`$-dimensional matter-dominated Universe, $`\delta a^{D2}`$ (), in the linear regime. Thus, $`\overline{\xi }_L\delta _l^2a^{2(D2)}`$. We can hence write $`\overline{\xi }_L(a,l)=a^{2(D2)}Q\left[l^D\right]`$, where $`Q`$ is a function of $`l`$ alone. But, $`l^Dx^D\overline{\xi }_{\mathrm{NL}}(a,x)=x^Da^DF(ax)=r^DF(r)`$. This implies
$`\overline{\xi }_{\mathrm{NL}}(a,x)`$ $`=`$ $`a^DF(r)=U\left[\overline{\xi }_L(a,l)\right]`$ (17)
$`=`$ $`U\left[a^{2(D2)}Q\left[r^DF(r)\right]\right]`$ (18)
Considering the above relation as a function of $`a`$ at constant $`r`$, we obtain
$$Ba^D=F\left[Ca^{2(D2)}\right]$$
(19)
where $`B`$ and $`C`$ are constants. One must hence have
$$F(p)p^{D/2(D2)}$$
(20)
Thus, in the extreme non-linear regime, we obtain
$$\overline{\xi }_{\mathrm{NL}}(a,x)\left[\overline{\xi }_L(a,l)\right]^{D/2(D2)}$$
(21)
The above analysis shows that the exact mean correlation function can be expressed in terms of the linear mean correlation function by the relation
$$\overline{\xi }(a,x)=\overline{\xi }_L(a,l)(\text{linear})$$
(22)
$$\overline{\xi }(a,x)\left[\overline{\xi }_L(a,l)\right]^{D/(D2)}(\text{quasi-linear})$$
(23)
$$\overline{\xi }(a,x)\left[\overline{\xi }_L(a,l)\right]^{D/2(D2)}(\text{non-linear})$$
(24)
In the case of 3 dimensions, the above equations reduce to $`\overline{\xi }(a,x)\left[\overline{\xi }_L(a,l)\right]^3`$ and $`\left[\overline{\xi }_L(a,l)\right]^{3/2}`$ in the quasi-linear and non-linear regimes, respectively; these are in reasonable agreement with simulations (, ; see, however, ). Thus, scaling laws appear to be a generic feature of gravitational clustering in an expanding background, in all dimensions (except $`D=2`$).
The 2-dimensional case is special and different from all other dimensions, as, in this case, equation (9) gives
$$\frac{d^2R}{dt^2}=0$$
(25)
i.e. the correlation function does not evolve at all in 2-$`D`$ and no structures are formed. The special nature of the $`D=2`$ case appears to arise from the structure of the Poisson equation, due to the fact that it contains derivatives of the second order. It has been noted earlier that scaling relations can arise in 2-$`D`$ numerical simulations of gravitational clustering (). These simulations, however, do not consider the case of 2-$`D`$ particles evolving in a 2-$`D`$ background but instead treat the system as consisting of a set of infinitely long, parallel needles, in a 3-$`D`$ expanding background, with particles considered as arising at the intersections of the needles with any plane orthogonal to them. The mass elements in the needles interact via the usual $`1/r^2`$ force; however, the interaction between the needles themselves is governed by a $`1/r`$ force. The simulations consider the clustering of the needles by taking a slice through a plane orthogonal to them. This situation is, of course, clearly anisotropic as clustering is considered as taking place in 2 dimensions while the background Universe expands in 3 dimensions. In this case, one can show numerically () that $`R_M=l/\delta _i`$ in the quasi-linear regime and arrive at the relevant scaling relations by an analysis similar to the above.
We finally consider the case of 2-$`D`$ particles in a 2-$`D`$ expanding background and assume a force law
$$\frac{d^2R}{dt^2}=\frac{GM}{R}$$
(26)
The above form, of course, cannot be obtained in a self-consistent manner, by taking limits of the Einstein equations for an expanding Universe; however, it has the form $`d^2R/dt^2R^1`$, the correct dependence for $`D=2`$ in equation (9). The energy integral of equation (26) is
$$E=\frac{1}{2}\left(\frac{dR}{dt}\right)^2GM\text{ln}R$$
(27)
The mass $`M`$, enclosed within the shell, is given by
$$M=(1+\delta _i)\rho _b\pi l^2$$
(28)
where $`l`$ is the initial shell radius. Here, we will assume $`\rho _bt^2`$, the usual result for $`D`$ dimensions. Setting $`dR/dt=H_il`$ initially, and using $`H_i1/t_i`$, we obtain
$$R_M=l\text{exp}\left[A(1+\delta _i)\right]$$
(29)
where $`A`$ is a constant; i.e. $`R_M/l=const.`$, to leading order in $`\delta _i`$. Again taking the final effective radius, of the 2-dimensional shell, as proportional to $`R_M`$, the final mean correlation function is given by
$$\overline{\xi }(x)\rho \frac{M}{x^2}\frac{l^2}{l^2}\text{const.}$$
(30)
Thus, the correlation function does not grow (to first order) until much after turnaround. This is an interesting result which needs to be verified by simulations.
The above situation is similar to the case of a 3-$`D`$ network of parallel cosmic strings (space around them is flat and $`\rho _ba^2`$). Structure will clearly not grow if the network is expanding uniformly; however, if the strings had some initial peculiar velocities, then two strings which pass on either side of a third would move towards each other because the angle around the third string is less than $`2\pi `$.
III. Conclusions
In the present paper, the standard paradigms of scale-invariant radial collapse and stable clustering have been used to derive scaling relations connecting the exact mean two-point correlation function and the linear mean correlation function, for a $`D`$-dimensional expanding Universe. The existence of scaling laws is found to be a generic feature of gravitational clustering in an expanding background, in all dimensions, except $`D=2`$. Further, the scaling laws derived here are in agreement, in the 3-dimensional case, with scaling relations obtained earlier from $`N`$-body simulations ($`\overline{\xi }\overline{\xi }_L^3`$ and $`\overline{\xi }\overline{\xi }_L^{3/2}`$, in the quasi-linear and non-linear regimes, respectively.). Finally, we have considered the case of clustering of 2-$`D`$ particles in a 2-dimensional expanding background, governed by a force -GM/R, and show, by a similar analysis, that the correlation function does not grow, to first order, until much after turnaround of any shell. |
no-problem/9910/quant-ph9910063.html | ar5iv | text | # Bell’s inequalities for states with positive partial transpose
## I Introduction
One of the basic questions asked early in the development of quantum information theory was about the nature of entanglement. Extreme cases were always clear enough: a 2 qubit singlet state was the paradigm of entangled state , whereas product states and mixtures thereof were obviously not, but merely “classically correlated” . But in the wide range between it was hardly clear where a meaningful boundary between the entangled and the non-entangled could be drawn. Still today some boundaries are not completely known, although, of course, general structural knowledge about entanglement has increased dramatically in the last few years. The present paper is devoted to settling the relationship between two entanglement properties discussed in the literature.
To fix ideas we will start by recalling some properties one might identify with “entanglement” and the known relations between them. For simplicity in this introduction we will choose the setting of bipartite quantum systems, i.e., quantum systems whose Hilbert space is written as a tensor product $`=_1_2`$. Moreover, we consider finite dimensional spaces only, leaving the appropriate extensions to infinite dimensions to the reader. All properties listed refer to a density matrix $`\rho `$ on this space. It turns out to be simpler to define the entanglement properties in terms of there negations, i.e., the various degrees of “classicalness”:
(S) A state is called separable or “classically correlated”, if it can be written as a convex combination of tensor product states. Otherwise, it is simply called “entangled”.
(B) Before 1990 perhaps the only mathematically sharp criterion for entanglement were the Bell inequalities in their CHSH form . A state is said to satisfy these Bell inequalities if, for any choice of operators $`A_i,A_i^{}`$ on $`_i`$ ($`i=1,2`$) with $`\mathrm{𝟏}A_i,A_i^{}\mathrm{𝟏}`$, we have
$$\mathrm{tr}\rho \left(A_1(A_2+A_2^{})+A_1^{}(A_2A_2^{})\right)2.$$
(1)
It is easy to see that (S)$``$(B).
(M) Bell’s inequalities are traditionally derived from an assumption about the existence of local hidden variables. The same assumptions lead to an infinite hierarchy of correlation inequalities , and it seems natural to base a notion of entanglement not on an arbitrary choice of inequality (e.g. CHSH) from this hierarchy. So we say that $`\rho `$ admits a local classical model, if it satisfies all inequalities from this hierarchy. Then (S)$``$(M)$``$(B). It was shown in that (M)$`\Rightarrow ̸`$(S), and this was perhaps the first indication that different types of entanglement might have to be distinguished.
(U) A key step for the development of entanglement theory was a paper by Popescu , showing that by suitable local filtering operations applied to maybe several copies of a given $`\rho `$, one could sometimes obtain a new state $`\rho ^{}`$ violating a Bell inequality, even though $`\rho `$ admitted a local hidden variable model, and hence satisfied the full hierarchy of Bell inequalities. Let us call a state undistillible, if it is impossible to obtain from it a two qubit state violating the CHSH inequality, by any process of local quantum operations (i.e., operations acting only on one subsystem), perhaps allowing classical communication, and several copies of the state as an input. What Popescu showed was that (M)$`\Rightarrow ̸`$(U).
(P) The idea of distillation was later taken to much greater sophistication , and for a while the natural conjecture seemed to be that not only (S)$``$(U) (which is trivial to see), but that these two should be equivalent. The counterexample was provided in . They used a property (P), which had been proposed by Peres as a necessary condition for separability (i.e., (S)$``$(P)), which turned out to be also sufficient in the qubit case . This condition (P) is that $`\rho `$ has positive partial transpose, i.e., $`\rho ^{T_1}`$ is a positive semi-definite operator. Here the partial transpose $`A^{T_1}`$ of an operator $`A`$ on $`=_1_2`$ is defined in terms of matrix elements with respect to some basis by
$$k\mathrm{}|A^{T_1}|mn=m\mathrm{}|A|kn.$$
(2)
Equivalently,
$$(\underset{\alpha }{}A_\alpha B_\alpha )^{T_1}=\underset{\alpha }{}A_\alpha ^TB_\alpha ,$$
(3)
where the superscript $`T`$ stands for transposition in the given basis. It was shown that (P)$``$(U), and the counterexample in worked by establishing (U) and not-(S) in this example. States of this kind are now called bound entangled.
There are further interesting properties, like usefulness for teleportation , but the above are sufficient for explaining the problem addressed in this paper. To summarize, it is known that (S)$``$(P)$``$(U), and (S)$``$(M)$``$(B). For pure states all conditions are equivalent, and for systems of two qubits (U)$``$(S), but (M)$`\Rightarrow ̸`$(S).
For multi-partite systems, i.e., systems with Hilbert space $`_1_2\mathrm{}_n`$, the properties (S),(M),(U) immediately make sense. For (B) there may be several choices of inequalities following from (M). The inequalities we use in this paper are discussed in detail in the next section. Partial transposition (P) is an intrinsically bipartite concept. The strongest version of (P) in multi-partite systems is the one we use below: the positivity of partial transposes with respect to every subsystem.
Then the implication chains (S)$``$(P)$``$(U), and (S)$``$(M)$``$(B) hold as in the bipartite case. However, no direct relations were known so far between these chains, even in the bipartite case. It seems likely that the violation of (B) is a fairly strong property, perhaps implying distillibility. This certainly seems to be the intuition of Peres in who conjectures that
$$(\mathrm{M})(\mathrm{P}).$$
(4)
We will refer to this statement as Peres’ Conjecture. It should be noted, however, that neither we nor Peres gave a sharp mathematical formulation, particularly of the way the model is required to cover not only one pair but also tensor products and distillation processes. Some such condition is certainly needed (and implicitly assumed by Peres), because otherwise the implication $`(\mathrm{M})(\mathrm{P})`$ would fail already for two qubits . It is not entirely clear from how strongly Peres is committed to (4). Personally, we would not place a large bet on it. However, we do follow Peres’ lead in seeing here an interesting line of inquiry. Indeed, the present paper is devoted to proving one special instance of the conjecture, namely the implication (P)$``$(B), for general multi-partite systems, where (P) is taken as the positivity of every partial transpose, and (B) is taken as the $`n`$-particle generalization of the CHSH inequality proposed by Mermin , and further developed by Ardehali , Belinskii and Klyshko and others .
## II Mermin’s generalization of the CHSH inequalities
Like the CHSH inequalities, Mermin’s $`n`$-party generalization refers to correlation experiments, in which each of the parties is given one subsystem of a larger system, and has the choice of two $`\pm 1`$-valued observables to be measured on it. The expectations of such an observable are given in quantum mechanics by a Hermitian operator $`A`$ with spectrum in $`[1,1]`$, and with a choice of $`A_k,A_k^{}`$ at site $`k`$ the raw experimental data are the $`2^n`$ expectation values of the form $`\mathrm{tr}(\rho A_1A_2^{}\mathrm{}A_n)`$ with all possible choices $`A_k`$ vs. $`A_k^{}`$ at all the sites.
If we look only at a single site, the possible pairs of expectation values (with fixed $`A,A^{}`$ but variable $`\rho `$) lie in a square. It will be very useful for the construction of the inequalities and the proof of our result to consider this square as a set in the complex plane: after a suitable linear transformation (a $`\pi /4`$-rotation and a dilation) we can take it as the square $`𝒮`$ with the corners $`\pm 1`$ and $`\pm i`$. The pair of expectation values of $`A`$ and $`A^{}`$ is thus replaced by the single complex number $`\mathrm{tr}(\rho a)`$, where
$`a`$ $`=`$ $`{\displaystyle \frac{1}{2}}\left((A+A^{})+i(A^{}A)\right)`$ (5)
$`=`$ $`e^{i\pi /4}(A+iA^{})/\sqrt{2}.`$ (6)
The idea of this transformation is that the square $`𝒮`$ has a special property: products of complex numbers $`z_k𝒮`$ lie again in $`𝒮`$. This is evident for the corners (they form a group) and follows for the full square by convex combination. Suppose now that $`\rho =_{k=1}^n\rho _k`$ is a product state. Then the operator $`b=_{k=1}^na_k`$ has expectation $`\mathrm{tr}(\rho b)=_{k=1}^n\mathrm{tr}(\rho _ka_k)𝒮`$. Since the expectation is linear in $`\rho `$, the same follows for any separable state, i.e., any convex combination of product states. The statement “$`\mathrm{tr}(\rho b)𝒮`$” is essentially Mermin’s inequality, although not yet written as an inequality. Note that the argument given here implies also that this statement (written out in correlation expressions involving $`A_k,A_k^{}`$) holds in any local classical model, because in a classical theory every pure state of a composite system is automatically a product, hence every state is separable. Thus Mermin’s inequality indeed belongs to the broad category of Bell’s inequalities.
To write “$`\mathrm{tr}(\rho b)𝒮`$” as a bona fide set of inequalities, we just have to undo the transformation (5), i.e., we introduce operators $`B,B^{}`$ such that (5) is satisfied with $`(b,B,B^{})`$ substituted for $`(a,A,A^{})`$. The operators $`B,B^{}`$ are usually called Bell operators, and Mermin’s inequality simply becomes
$$|\mathrm{tr}(\rho B)|1\text{resp.}|\mathrm{tr}(\rho B^{})|1.$$
(7)
Writing out $`B`$ and $`B^{}`$ explicitly in terms of tensor products of $`A_k,A_k^{}`$ gives the usual CHSH inequality (1) for $`n=2`$, and becomes arbitrarily cumbersome for large $`n`$. It is also not helpful for our purpose. The above derivation also gets rid of the case distinction “$`n`$ odd/even”, which has troubled the early derivations. In fact, Mermin first missed a factor $`\sqrt{2}`$ for even $`n`$, which was later obtained by Ardehali who in turn missed the same factor for odd $`n`$. Inequalities equally sharp for even and odd $`n`$ were established in and in .
## III Violations of Mermin’s inequality in Quantum Mechanics
The idea of combining $`A,A^{}`$ in the non-hermitian operator $`a`$ has a long tradition for the CHSH case . Its power is not only in organizing the inequalities (only linear transformations among operators are needed for that purpose), but in the possibility of bringing in the non-commutative algebraic structure of quantum mechanics to analyze the possibility of violations in the quantum case. In this section we discuss these violations, at the same time building up the machinery needed in the proof of our result. We will need the following expressions:
$`a^{}a`$ $`=`$ $`{\displaystyle \frac{1}{2}}(A^2+A^2)+{\displaystyle \frac{i}{2}}[A,A^{}]`$ (8)
$`aa^{}`$ $`=`$ $`{\displaystyle \frac{1}{2}}(A^2+A^2){\displaystyle \frac{i}{2}}[A,A^{}]`$ (9)
$`a^2a^2`$ $`=`$ $`i(A^2A^2)`$ (10)
It is clear from the first line that although $`\mathrm{tr}(\rho a)`$ lies in $`𝒮`$, and hence in the unit circle for all $`\rho `$, the operator norm $`a=a^{}a^{1/2}`$ may be $`>1`$. Therefore, the tensor product operator $`b`$ may have a norm increasing exponentially with $`n`$. This is the key to the quantum violations of Mermin’s inequality.
The largest possible commutators, i.e., operators saturating the obvious bound $`[A,A^{}]2AA^{}`$ are just Pauli matrices. A good choice is $`A_k=(\sigma _x+\sigma _y)/\sqrt{2}`$ and $`A_k^{}=(\sigma _x\sigma _y)/\sqrt{2}`$ for all $`k`$. Then $`a_k=\sqrt{2}v`$, where $`v=\left(\begin{array}{cc}0& 0\\ 1& 0\end{array}\right)`$. It is readily verified that $`v^n`$ acts in the two-dimensional space spanned by $`e_1^n`$ and $`e_2^n`$ exactly as $`v`$ acts in the space spanned by the two basis vectors $`e_1,e_2𝐂^2`$. With the same identification of two-dimensional subspaces $`b=2^{n/2}v^n`$ acts like $`2^{(n1)/2}a`$, so the possible expectations $`\mathrm{tr}(\rho b)`$, with $`\rho `$ supported in this subspace span the exponentially enlarged square $`2^{(n1)/2}𝒮`$.
In order to show that $`2^{(n1)/2}`$ is the maximal possible violation (in analogy to Cirel’son’s bound for the CHSH inequality), but also in preparation of the proof of our main result it is useful to consider the following general technique for getting upper bounds on $`\mathrm{tr}(\rho b)`$. It has been used in the CHSH case by Landau , among others. Note first that $`\mathrm{tr}(\rho B)`$ and $`\mathrm{tr}(\rho B^{})`$ are affine functionals of each $`A_k`$ or $`A_k^{}`$. Hence if we maximize the expectations of Bell operators by varying some $`A_k`$ or $`A_k^{}`$, keeping $`\rho `$ fixed, we may as well take $`A_k`$ extremal in the convex set of Hermitian operators with $`\mathrm{𝟏}A_k\mathrm{𝟏}`$. That is to say, we may assume $`A_k^2=A_k^2=\mathrm{𝟏}`$ for all $`k`$. Taking tensor products of Equation (8) and expanding the product we find
$`b^{}b`$ $`=`$ $`{\displaystyle \underset{k=1}{\overset{n}{}}}\left(\mathrm{𝟏}+{\displaystyle \frac{i}{2}}[A_k,A_k^{}]\right)`$ (11)
$`=`$ $`{\displaystyle \underset{\beta }{}}{\displaystyle \underset{k\beta }{}}{\displaystyle \frac{i}{2}}[A_k,A_k^{}],`$ (12)
where the sum is over all subsets $`\beta \{1,\mathrm{},n\}`$, and only factors different from $`\mathrm{𝟏}`$ are written in the tensor product. In particular, the term for $`\beta =\mathrm{}`$ is $`\mathrm{𝟏}`$. For $`bb^{}`$ we get a similar sum with an additional factor $`(1)^{|\beta |}`$, where $`|\beta |`$ denotes the cardinality of the set $`\beta `$. From Equation (10) we find $`a_k^2=a_k^2`$, and $`b^2=b^2`$, by taking tensor products. Again by applying (10), to $`(b,B,B^{})`$ this time, we find that $`B^2=B^2`$. In fact, by adding Equations (8) and (9) and inserting Equation (12) we get
$`B^2=B^{\prime 2}`$ $`=`$ $`\frac{1}{2}(b^{\dagger }b+bb^{\dagger })`$ (13)
$`=`$ $`\sum _{\beta \mathrm{even}}\bigotimes _{k\in \beta }\frac{i}{2}[A_k,A_k^{\prime }].`$ (14)
By the variance inequality $`|\mathrm{tr}(\rho B)|^2\le \mathrm{tr}(\rho B^2)`$, the expectation of the right hand side is an upper bound on the square of the largest violation of Mermin’s inequality. There are two immediate applications: since each term in the sum has norm at most one, the norm of the sum is bounded by the number of terms, i.e., $`2^{n-1}`$. This shows the analog of Cirel’son’s inequality, i.e., that the violation discussed above is indeed maximal. The second application is to the case that all commutators vanish. Then only the term for $`\beta =\mathrm{\varnothing }`$ survives, and there is no violation of the inequality. Our result, to be stated and proved in the next section, is a refinement of this idea.
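The identities used above can also be verified numerically for random observables. The sketch below (same assumed conventions as in the previous example) checks Equations (8)-(10) for arbitrary Hermitian $`A,A^{\prime }`$, Equations (13)-(14) for extremal observables with $`A_k^2=A_k^{\prime 2}=\mathrm{𝟏}`$, and the resulting bound $`\|B\|^2\le 2^{n-1}`$:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(1)
I2 = np.eye(2)

def rand_herm(d=2):
    z = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    return (z + z.conj().T) / 2

def rand_extremal(d=2):
    """Hermitian with eigenvalues +-1, i.e. A^2 = 1."""
    z = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    u, _ = np.linalg.qr(z)
    return u @ np.diag(rng.choice([-1.0, 1.0], size=d)) @ u.conj().T

def comb(A, Ap):          # assumed combination a = ((A+A') + i(A'-A))/2
    return 0.5 * ((A + Ap) + 1j * (Ap - A))

def comm(X, Y):
    return X @ Y - Y @ X

# Eqs. (8)-(10) hold for arbitrary Hermitian A, A':
A, Ap = rand_herm(), rand_herm()
a = comb(A, Ap)
ad = a.conj().T
assert np.allclose(ad @ a, 0.5 * (A @ A + Ap @ Ap) + 0.5j * comm(A, Ap))
assert np.allclose(a @ ad, 0.5 * (A @ A + Ap @ Ap) - 0.5j * comm(A, Ap))
assert np.allclose(ad @ ad - a @ a, 1j * (A @ A - Ap @ Ap))

def kron_all(ms):
    out = np.array([[1.0 + 0j]])
    for m in ms:
        out = np.kron(out, m)
    return out

# Eqs. (13)-(14) and the bound, for extremal observables:
n = 3
As = [rand_extremal() for _ in range(n)]
Aps = [rand_extremal() for _ in range(n)]
b = kron_all([comb(X, Y) for X, Y in zip(As, Aps)])
B = 0.5 * ((1 + 1j) * b + (1 - 1j) * b.conj().T)   # assumed Hermitian part

B2 = B @ B
assert np.allclose(B2, 0.5 * (b.conj().T @ b + b @ b.conj().T))  # Eq. (13)

S = np.zeros_like(B2)
for k in range(0, n + 1, 2):                        # even-size subsets beta
    for beta in combinations(range(n), k):
        S += kron_all([0.5j * comm(As[j], Aps[j]) if j in beta else I2
                       for j in range(n)])
assert np.allclose(B2, S)                           # Eq. (14)

print(np.linalg.norm(B, 2) ** 2, "<=", 2 ** (n - 1))
```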
## IV Positive Partial Transposes and Main Result
We now apply the technique of the previous section to the partial transpose. More specifically, for any density operator $`\rho `$ and any subset $`\alpha \subseteq \{1,\dots ,n\}`$, let $`\rho ^{T_\alpha }`$ denote the partial transpose of all sites belonging to $`\alpha `$. Suppose now that $`\rho ^{T_\alpha }`$ is positive semi-definite, and hence again a density matrix. Then we can apply the variance inequality to $`\rho ^{T_\alpha }`$ and $`B^{T_\alpha }`$, obtaining:
$`(\mathrm{tr}\rho B)^2`$ $`=`$ $`\left(\mathrm{tr}\rho ^{T_\alpha }B^{T_\alpha }\right)^2\le \mathrm{tr}\left(\rho ^{T_\alpha }(B^{T_\alpha })^2\right)`$ (15)
$`=`$ $`\mathrm{tr}\left(\rho \left((B^{T_\alpha })^2\right)^{T_\alpha }\right)`$ (16)
We note that $`[A^T,A^{\prime T}]^T=-[A,A^{\prime }]`$ and thus
$$\left((B^{T_\alpha })^2\right)^{T_\alpha }=\sum _{\beta \mathrm{even}}(-1)^{|\alpha \cap \beta |}\bigotimes _{k\in \beta }\frac{i}{2}[A_k,A_k^{\prime }].$$
(17)
Note that it does not matter whether we transpose $`\alpha `$ or its complement.
Now consider a partition of $`\{1,\dots ,n\}`$ into $`p`$ nonempty and disjoint subsets $`\alpha _1,\dots ,\alpha _p`$. Let us denote by $`𝒫`$ the collection of all unions of these basic sets together with the empty set, so that $`𝒫`$ has $`2^p`$ elements. We assume that $`\rho ^{T_\alpha }\ge 0`$ for all $`\alpha \in 𝒫`$. For $`p=1`$ this is no constraint at all, because the full transpose of $`\rho `$ is always positive. At the other extreme, for $`p=n`$, this assumption means the positivity of every partial transpose.
We now take the expectation value of Equation (17), and average over the $`2^p`$ resulting terms. The coefficient of the $`\beta ^{\mathrm{th}}`$ term then becomes
$$2^{-p}\sum _{\alpha \in 𝒫}(-1)^{|\alpha \cap \beta |}=2^{-p}\prod _{m=1}^{p}\left(1+(-1)^{|\alpha _m\cap \beta |}\right),$$
(18)
which is proved by writing the sum over $`𝒫`$ as a sum over $`p`$ two-valued variables, labeling the alternative “$`\alpha _m\subseteq \alpha `$ or $`\alpha _m\not\subseteq \alpha `$”, and using that the parity $`(-1)^{|\alpha \cap \beta |}`$ is the product of the parities corresponding to the $`\alpha _m`$. Clearly, the expression (18) is one iff $`|\alpha _m\cap \beta |`$ is even for all $`m`$ and zero otherwise. Let us call such sets $`\beta `$ “$`𝒫`$-even”. There are
$$\prod _m2^{|\alpha _m|-1}=2^{n-p}$$
(19)
such sets. Hence we get the bound
$`(\mathrm{tr}\rho B)^2`$ $`\le `$ $`\sum _{\beta \ 𝒫\text{-even}}\mathrm{tr}\left(\rho \bigotimes _{k\in \beta }\frac{i}{2}[A_k,A_k^{\prime }]\right)`$ (20)
$`\le `$ $`2^{n-p}.`$ (21)
That this bound is optimal is evident from evaluating it on a tensor product of pure states, each maximally violating Mermin’s inequality on one partition element $`\alpha _m`$, i.e., states as discussed in Section III.
To summarize, we have established the best bound
$$|\mathrm{tr}(\rho B)|\le 2^{(n-p)/2}$$
(22)
on violations of Mermin’s inequalities, under the assumption that the partial transposes $`\rho ^{T_\alpha }`$ are positive for all $`\alpha \subseteq \{1,\dots ,n\}`$ subordinated to a partition into $`p`$ subsets. This includes three special cases: for $`p=1`$ it is the analogue of Cirel’son’s inequality, for $`p=n`$ it proves our claim that the inequalities are satisfied if all partial transposes are positive, and for partitions of the form $`\{1\},\dots ,\{m\},\{m+1,\dots ,n\}`$, we obtain the result by Gisin et al. using Mermin’s inequalities to test for the number $`m`$ of independent qubits.
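The content of the bound can be illustrated numerically, again under the conventions assumed in the sketches of Section III. The partial transpose is implemented by index reshuffling; the maximally violating GHZ-type state turns out to have a negative partial transpose across a cut, whereas a product state, which is PPT for every partition (so $`p=n`$), respects $`|\mathrm{tr}(\rho B)|\le 1`$:

```python
import numpy as np

rng = np.random.default_rng(7)

def partial_transpose(rho, n, alpha):
    """Transpose the qubit sites listed in alpha (rho acts on (C^2)^(x)n)."""
    t = rho.reshape([2] * (2 * n))           # axes: (i_1..i_n, j_1..j_n)
    perm = list(range(2 * n))
    for k in alpha:                           # swap bra/ket index of site k
        perm[k], perm[n + k] = perm[n + k], perm[k]
    return t.transpose(perm).reshape(2 ** n, 2 ** n)

n = 4
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
A, Ap = (sx + sy) / np.sqrt(2), (sx - sy) / np.sqrt(2)
a = 0.5 * ((A + Ap) + 1j * (Ap - A))          # assumed combination

b = np.array([[1.0 + 0j]])
for _ in range(n):
    b = np.kron(b, a)
B = 0.5 * ((1 + 1j) * b + (1 - 1j) * b.conj().T)

# Maximally violating GHZ-type state: NPT across the first cut.
psi = np.zeros(2 ** n, dtype=complex)
psi[0], psi[-1] = 1 / np.sqrt(2), np.exp(1j * np.pi / 4) / np.sqrt(2)
rho = np.outer(psi, psi.conj())
print("violation:", np.vdot(psi, B @ psi).real)               # 2^{(n-1)/2}
print("min eig of PT:",
      np.linalg.eigvalsh(partial_transpose(rho, n, [0])).min())  # < 0

# A product state is PPT for every partition (p = n): |tr(rho B)| <= 1.
kets = [rng.normal(size=2) + 1j * rng.normal(size=2) for _ in range(n)]
kets = [k / np.linalg.norm(k) for k in kets]
prod = np.array([1.0 + 0j])
for k in kets:
    prod = np.kron(prod, k)
rho_p = np.outer(prod, prod.conj())
print("product state:", abs(np.trace(rho_p @ B)), "<= 1")
```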
# Star formation history of early–type galaxies in low density environments
## 1 Introduction
Hierarchical clustering scenarios predict that the early-type galaxies we see today in large virialized structures and in low density environments formed at different epochs (Ellis 1998 and references therein). Cluster objects formed through major merging events at redshift $`z>3`$, whereas early-type galaxies now at the border of large structures formed significantly later, at $`z<1`$.
The morphology of early-type galaxies shows many relationships of different kinds with the surrounding environment. The population of early-type galaxies in clusters appears to be homogeneous, in contrast to the large heterogeneity shown by objects inhabiting low density media. Although the bulk of the galaxies visible today in low density environments were already in place at $`z\sim 0.5`$ (Griffith et al. 1994), i.e. nearly half the Hubble time ago, a large number of them show peculiarities such as signatures of interaction, fine structures etc. (see Schweizer 1992; Reduzzi et al. 1996 and references therein) which may hint at more recent activity.
In the above framework, early-type galaxies are currently interpreted as the final product of major/minor merging events (Schweizer 1992; Barnes 1996). Nevertheless, both from observational and theoretical points of view, evidence has grown over the years that different mechanisms have also played a significant role in shaping their final structure. In clusters, galaxy encounters are fast enough to make mergers less probable than harassments (Moore et al. 1996). In low density environments encounters are most likely to result in a merger (Barnes & Hernquist 1992) but dissipative mechanisms (Bender 1997) and weak-interaction events (Thomson 1991) could also contribute to the structural evolution of a galaxy.
This paper is the fourth of a series dedicated to the study of typical galaxies in low density environments, i.e. galaxies showing signatures of present/past interactions. The sample is composed of 21 shell-galaxies and 30 members of interacting pairs, most of which show fine structures. Among them, shell-galaxies represent a class of objects exhibiting signatures of past interactions, i.e. minor/major mergers (Schweizer 1992) or weak interactions (Thomson & Wright 1990; Thomson 1991). Pair-members are instead objects with ongoing interaction.
Longhetti et al. (1998a) measured 19 line-strength indices in the nuclear regions of the galaxies in our sample. Sixteen indices belong to the group defined by Worthey (1992) and G93 (González 1993) and include H$`\beta `$, Mg2 and some Fe features. All indices were transformed into the Lick–IDS system. Furthermore, three indices particularly sensitive to recent star formation (Rose 1984, 1985; Leonardi & Rose 1996), i.e. $`\mathrm{\Delta }`$4000, H$`\delta `$/FeI and CaII(H+K), were added to the list. Longhetti et al. (1998b) derived the inner kinematics of the sample and corrected the line–strength indices for central velocity dispersion (Longhetti et al. 1998a).
The comparison of these galaxies with those in Virgo and Fornax (Rampazzo et al. 1999a) singled out a group of pair-galaxies in our sample with peculiar behavior in the (log R<sub>e</sub>, $`\mu _e`$) plane and in the $`\kappa `$ space. These galaxies, apparently missing among cluster early-type objects, are tidally stretched and most likely in early stages of interaction. This finding highlights the large scatter of the line–strength indices when compared with the galaxy structural properties. As an example, the correlation between galaxy shapes (as measured by the a4/a parameter) and ages (as deduced from H$`\beta `$) that was advanced by de Jong & Davies (1997) is not confirmed by the Rampazzo et al. (1999a,b) study.
The purpose of this paper is to cast light on the past star formation (SF) history of shell- and pair-galaxies in LDE as traced by their line strength indices, with particular attention to the role played by dynamical interactions in triggering star formation. The comparison with the results obtained for galaxies in denser environments (Burstein et al. 1984; Pickles 1985; Rose 1995; Bower et al. 1990; G93) will help us to understand the effects of the environment on the formation/evolution of early-type galaxies.
In addition to this, we address the problem of the origin of shell structures by analyzing the evolutionary history of the stellar populations hosted by the nucleus of the interacting galaxy. Numerical simulations of dynamical interactions among galaxies still yield contrasting explanations for the occurrence of shells (Weil & Hernquist 1993; Thomson 1991). In this study, we seek to assess the duration of the shell phenomenon by means of the age of the last episode of star formation, as inferred from the nuclear indices.
The paper is organized as follows. In Sect. 2 we compare indices for Single Stellar Populations (SSPs) obtained using fitting functions by different authors (Buzzoni et al. 1992, 1994; Worthey 1992; Idiart et al. 1995) and a unique source of isochrones (Bertelli et al. 1994). By doing so, we are able to quantify the uncertainties affecting line strength indices calculations. Based on this preliminary comparison, we adopt the fitting functions of Worthey (1992). In Sect. 3 we introduce the sample of galaxies of G93 adopted as the reference template. In Sect. 4, firstly we compare the observational line strength indices for the nuclear region of galaxies in our sample with the theoretical predictions, and secondly we compare our sample of shell- and pair-galaxies with that of normal elliptical galaxies by G93. Notes on individual galaxies in the H$`\beta `$ vs. \[MgFe\] diagram are reported in Sect. 5. A tentative explanation of the distribution of galaxies in the H$`\beta `$ vs. \[MgFe\] diagram is presented in Sect. 6 both for the present sample and the template. The interpretation is based on statistical simulations of the observations. Finally, Sect. 7 summarizes the results.
## 2 Line-strength indices for SSPs
The technique to calculate line-strength indices of SSPs is amply described in Worthey (1994), Bressan et al. (1996) and Tantalo et al. (1998). The latter improved their previous models by introducing a differential method particularly suited to understand the role of the three main parameters driving the strength of selected indices, namely age, metallicity and enhancement in the abundance of $`\alpha `$ elements with respect to the solar partition. The reader is referred to those articles and references therein for a detailed description of the method. Here we limit ourselves to summarize a few basic properties of the models in usage. The galaxy indices are based on the isochrones of Bertelli et al. (1994), which provide the relative number of stars per elemental area of the Hertzsprung-Russell Diagram (HRD), the atlas of theoretical stellar spectra by Kurucz (1992), which is used to calculate the energy distribution in the continuum of each pass-band of interest, and a set of fitting functions.
The elemental areas of the HRD along the path drawn by an isochrone are taken sufficiently small so that all stars in each of them have the same effective temperature, gravity, luminosity and chemical composition.
The Kurucz (1992) library is extended at high temperatures by black-bodies and at temperatures cooler than 3500 K (late K and M type) by implementing the libraries of Lancon & Rocca-Volmerange (1992), Straizys & Sviderskiene (1972), and Fluks et al. (1994), see Bressan et al. (1994) and Tantalo et al. (1996) for all the details.
The fitting functions depend on stellar effective temperature, gravity and metallicity. They are used to calculate the line strength indices for each elemental area of the HRD. Finally a suitable integration technique yields the total indices for each SSP (isochrone).
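The bookkeeping behind this integration can be illustrated with a short sketch. Both the isochrone table and the fitting function below are hypothetical stand-ins for the Bertelli et al. (1994) isochrones and the Worthey (1992) empirical relations; the point is only the flux- and number-weighted combination of the per-star indices:

```python
import numpy as np

# Hypothetical elemental areas of one isochrone: each row is
# (T_eff [K], log g, continuum flux in the index band, relative N stars).
isochrone = np.array([
    # T_eff   log_g  f_cont  n_stars
    [5800.0,   4.4,   1.0,   1.0e3],   # main sequence
    [5200.0,   3.5,   3.0,   1.0e2],   # subgiants
    [4400.0,   2.0,  25.0,   1.0e1],   # red giants
])

def fit_fn_ew(teff, logg, feh=0.0):
    """Hypothetical fitting function: index EW (Angstrom) of one star.
    Stands in for an empirical Lick relation EW(T_eff, log g, [Fe/H])."""
    return 2.0 + 800.0 / teff - 0.1 * logg + 0.5 * feh

def ssp_index(iso, feh=0.0):
    teff, logg, f_cont, n = iso.T
    ew = fit_fn_ew(teff, logg, feh)
    # Flux-weighted combination: absorbed flux over total continuum flux.
    return np.sum(n * f_cont * ew) / np.sum(n * f_cont)

print("integrated EW [A]:", ssp_index(isochrone))
```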
Direct measurement of line strength indices on the integrated spectral energy distribution of individual SSPs is not possible, because the Kurucz (1992) library contains spectra at low resolution. A different strategy would be to replace this library with another one containing medium resolution spectra, on which the direct measurement of the indices could be made. This possibility, explored in a forthcoming paper, is not considered further here.
### 2.1 Uncertainties on the line-strength indices
In this section we compare line-strength indices computed by adopting fitting functions from different authors, namely Worthey (1992) and Worthey et al. (1994), otherwise known as the Lick-system, Buzzoni et al. (1992, 1994), and Idiart et al. (1995).
The Lick fitting functions refer to 21 line–strength indices (Worthey 1992; Gorgas et al. 1993; Worthey et al. 1994) and are based on a library of stellar spectra containing about 400 stars, observed at the Lick Observatory between 1972 and 1984 with an Image Dissector Scanner (IDS). In the following we adopt the Worthey (1992) fitting functions, extended however to high temperature stars ($`T_{eff}\ge `$ 10000 K) as reported in Longhetti et al. (1998a).
The Buzzoni et al. (1992) fitting functions for the Mg2, Fe5270 and H$`\beta `$ indices rest on a library of spectra of 74 stars. Buzzoni et al. (1994) do not consider the dependence of H$`\beta `$ on metallicity.
Idiart et al. (1995) present calibrations for Mg2, Mgb and H$`\beta `$, based on a library of 170 stars, among which 89 are new observations, and the remaining are from Faber et al. (1985) and Gorgas et al. (1993) data. The sample spans the metallicity range $`3.0<`$\[Fe/H\]$`<0.2`$, the gravity range $`0.7<log(g)<5.0`$ and the temperature range $`3800`$K $`<T_{eff}<6500`$K.
We construct integrated narrow band indices for SSPs of solar metallicity by applying the different fitting functions above to the same set of stellar isochrones. This allows us to single out the effects of different empirical relations and evaluate the corresponding uncertainty.
The comparison between the three different sets of models for the indices in common is shown in Fig. 1.
Good agreement between the Worthey (1992) and Buzzoni et al. (1992, 1994) calibrations exists for the H$`\beta `$ index of SSPs older than 1 Gyr, where the mean difference between the two sources is $`\sim `$2% (see Table 1). For SSPs younger than 1 Gyr the fitting functions by Buzzoni et al. (1992, 1994) slightly overestimate the H$`\beta `$ index with respect to those by Worthey (1992). Furthermore, the Mg2 and Fe5270 indices, more sensitive to changes in metallicity, agree only for SSPs older than 3 Gyr. For younger SSPs, they differ by large factors. It is worth recalling that Buzzoni et al. (1992, 1994) considered their calibration to be applicable only to old SSPs.
The calibrations by Idiart et al. (1995) fit the behavior of Mg2, Mgb and H$`\beta `$ for stellar temperatures between 3800 K and 6500 K. The features measured by the Mg2 and Mgb indices are strongly dominated by relatively cool stars, both in young and old SSPs. On the contrary, the H$`\beta `$ index mainly reflects the contribution of main sequence and turn off stars of a stellar population.
Unfortunately, at ages younger than 2 Gyr the effective temperatures of these stars are higher than 6500 K, the upper limit of those fitting functions (Bertelli et al. 1994). Therefore, values obtained for the H$`\beta `$ index adopting the calibration of Idiart et al. (1995) are reliable only for relatively old SSPs. Indeed, when only SSPs older than 2 Gyr are compared, good agreement between the Idiart et al. (1995) and Worthey et al. (1994) calibrations of H$`\beta `$ is found (see Fig. 1). In contrast, for ages older than 10 Gyr, the effect of main sequence stars cooler than 3800 K (for which the index has been extrapolated out of the range of validity of the calibrations) can be seen in the growing disagreement between the predictions of Idiart et al. (1995) and those of the Lick system.
The predictions for Mg2 and Mgb do not suffer from this limitation in temperature, therefore they can be extended also to young SSPs as shown in Fig. 1. The Mg2 and Mgb indices derived from the Idiart et al. (1995) fitting functions are systematically lower than those from the Worthey (1992) calibrations, with an offset of 0.02 mag for Mg2 and 0.36 Å for Mgb. These systematic shifts are probably due to the different sample of stars adopted by Idiart et al. (1995), which leads to a different calibration of the indices with respect to the stellar gravity.
In Table 1 we present the detailed comparison of the results obtained using Worthey (1992), Buzzoni et al. (1992, 1994), and Idiart et al. (1995): Table 1 lists $`\mathrm{\Delta }`$% (the average difference between the two sets of index values) and $`\sqrt{\mathrm{\Delta }^2}`$% (the square root of the average quadratic difference) for different values of the age, as indicated.
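For reference, the two statistics are simply the mean and the rms of the percentage differences between the two sets of index values computed for the same SSPs; a minimal sketch (the numbers below are illustrative, not the actual model values):

```python
import numpy as np

def calib_stats(idx_a, idx_b):
    """Mean and rms percentage difference between two sets of index
    values computed for the same SSPs with two fitting functions."""
    d = 100.0 * (idx_a - idx_b) / idx_b
    return d.mean(), np.sqrt((d ** 2).mean())

# Example: H-beta from two calibrations on the same grid of ages.
hbeta_w = np.array([2.10, 1.85, 1.62, 1.55])   # e.g. Worthey (1992)
hbeta_b = np.array([2.15, 1.88, 1.60, 1.58])   # e.g. Buzzoni et al.
print("Delta%% = %.2f, sqrt(<Delta^2>)%% = %.2f"
      % calib_stats(hbeta_w, hbeta_b))
```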
In the following we will adopt Worthey (1992) as the reference calibration, and consider the average offset between Buzzoni et al. (1992) and Worthey (1992) predictions, as representative of the uncertainty of the models. Indices calculated with this set of fitting functions are reported in Table 2 for three different metallicities, and for ages between $`5\times 10^7`$ yr and 15 Gyr.
## 3 Remarks on the observational data
### 3.1 The reference sample by G93
G93 obtained long-slit spectroscopic data for a sample of 41 elliptical galaxies. He derived kinematic profiles and line-strength indices for the nuclear regions (within 1/8 of the effective radius) and for a wider coverage of the galaxy area (within 1/2 of the effective radius), thus providing information on the radial gradients in these quantities. The G93 sample of galaxies is the reference frame to which our data will often be compared in the course of this paper. In order to make the G93 data fully consistent with ours, we start from his raw data within the central 5″, to which we apply the correction for velocity dispersion (see below). The corrected data are listed in Table 3.
### 3.2 Correcting G93 for velocity dispersion
The correction of the G93 data for velocity dispersion is made using the method of Longhetti et al. (1998a) for the sake of internal consistency. We remind the reader that G93 corrected his data for velocity dispersion, but using a different method. Although Longhetti et al. (1998b) demonstrated that good agreement exists between our kinematic parameters and those of G93, and that the differences brought about by the two methods for correcting indices for velocity dispersion are statistically small, yet they may lead to systematic discrepancies between the two data sets in the \[MgFe\] - H$`\beta `$ diagram. Since our aim is to single out physical differences between normal (G93) and interacting or post-interacting galaxies, we need to clean the data for all possible effects of spurious nature.
### 3.3 Effects of emission lines on G93 data
Although the G93 sample was selected to exclude galaxies with large amounts of gas, a substantial fraction of the ellipticals in this list (28/40) clearly shows evidence of emission (EW $`\ge `$ 0.15Å) in the \[OIII\](5007Å) line, at least in the central parts of the galaxies. Emission is not strong (only one galaxy, NGC 4278, has \[OIII\](5007Å) emission larger than 1Å) but it can seriously affect the interpretation of the H$`\beta `$ index. To cope with this difficulty, G93 tried to correct the H$`\beta `$ index of his galaxies for emission contamination, adopting a linear relation between the intensity of the H$`\beta `$ emission line and that of the forbidden line \[OIII\](5007Å). This correlation was found empirically, by measuring the two emission lines in a sample of galaxies. However, the measurement of the H$`\beta `$ emission line is not an easy task, because of the possible contamination by the absorption component, and the correct evaluation of the effect of the H$`\beta `$ emission line on the H$`\beta `$ index requires detailed models of the emission/absorption processes. Therefore, instead of applying uncertain corrections, we prefer to leave H$`\beta `$ unchanged and always keep in mind that the G93 data adopted in the present work are not corrected for this effect.
### 3.4 The sample of shell- and pair-galaxies
All the data in our sample refer to the central 5″ portion of the objects and are therefore homogeneous with those of G93. Nevertheless, we have checked how the indices would change when the 5″ region in our data is smaller or larger than 1/8 of the effective radius. No sizable difference has been noticed, so that no correction for aperture is applied.
Furthermore, all observational indices have been corrected for velocity dispersion ($`\sigma `$). This strictly follows the same procedure as in Longhetti et al. (1998a) so that no details are given here.
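The logic of such a correction can be sketched as follows: a zero-velocity template is broadened by a Gaussian of width $`\sigma `$, and the ratio of the index measured before and after broadening gives the correction factor. The spectrum below is synthetic and all numbers are purely illustrative; the actual correction of Longhetti et al. (1998a) is derived from stellar templates and applied index by index:

```python
import numpy as np

C = 2.99792458e5                               # km/s

# Synthetic rest-frame template: continuum = 1 plus one absorption line.
loglam = np.linspace(np.log(4800.0), np.log(5000.0), 4000)
lam = np.exp(loglam)
flux = 1.0 - 0.6 * np.exp(-0.5 * ((lam - 4861.0) / 2.0) ** 2)

def broaden(flux, loglam, sigma):
    """Convolve with a Gaussian of width sigma (km/s); ln-lambda grid."""
    dv = (loglam[1] - loglam[0]) * C           # km/s per pixel
    nk = int(5 * sigma / dv) + 1
    x = np.arange(-nk, nk + 1) * dv
    kern = np.exp(-0.5 * (x / sigma) ** 2)
    kern /= kern.sum()
    return np.convolve(flux, kern, mode="same")

def ew(flux, lam, lo, hi):
    m = (lam > lo) & (lam < hi)
    # Equivalent width with pseudo-continuum = 1 in this toy spectrum.
    return np.sum((1.0 - flux[m])[:-1] * np.diff(lam[m]))

ew0 = ew(flux, lam, 4848.0, 4877.0)
for sigma in (100.0, 200.0, 300.0):
    ews = ew(broaden(flux, loglam, sigma), lam, 4848.0, 4877.0)
    print("sigma = %3.0f km/s  correction factor = %.3f" % (sigma, ew0 / ews))
```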
Finally, Table 4 lists all the data for our sample that will be used in the analysis below.
## 4 Galaxy indices: theory versus data
In this section we compare the observational indices of the galaxies in our sample with those from model calculations.
### 4.1 The $`\sigma `$, Mg2, Fe diagnostic
In Fig. 2 the velocity dispersion and two iron indices (Fe5270 and Fe5335) are shown as a function of the Mg2 index, separately for the sample of shell-galaxies and pair-members.
Shell-galaxies show a narrower range of values with respect to pair-members. Apparently the shell-sample does not contain low velocity dispersion objects. Furthermore all shell-galaxies lie above the universal $`\sigma `$ vs. Mg2 relation by Bender et al. (1993). On the contrary, the pair-members nicely follow this relation.
The behavior in the other index-index diagrams is quite similar for the two samples, even if they differ in some details. First, shell-galaxies tend to have both strong Mg2 and strong Fe indices at the same time, whereas pair-galaxies have a broader distribution both in Mg2 and in the Fe indices. Furthermore, in our sample, the weak-lined galaxies (i.e. those with low values of the metal indices), which are pair-members only, are almost consistent with the theoretical prediction, while the strong-lined galaxies fall below it. In fact, it has long been known (e.g. Worthey et al. 1994) that the relation between the Fe indices and Mg2 tends to be flatter than that traced by the evolutionary path of SSPs. This fact has been interpreted as evidence of enhancement of $`\alpha `$ elements with respect to the solar partition (perhaps caused by Type II supernova contamination). We will come back to this topic later.
The peculiar $`\sigma `$ vs. Mg2 diagram of shell-galaxies conforms to what is found for field galaxies with the fine structure index $`\mathrm{\Sigma }`$ of Schweizer (1992) larger than 2 (Bender et al. 1993). However in our sample the effect is more significant.
Among the galaxies showing fine structures, shells are believed to be a signature of past strong dynamical interaction (Barnes 1996). If this mechanism is responsible for the above displacement, several explanations are possible: (i) the major effect of the dynamical interaction is on the velocity dispersion and other structural quantities rather than on the photometric properties of shell-galaxies; in such a case the object moves vertically away from the universal $`\sigma `$ vs. Mg2 relation and maintains its position in the index-index planes. (ii) As suggested by dynamical models of strong unbound encounters, the central velocity dispersion of the interacting objects remains unchanged and the main effect is a burst of star formation altering only the spectro-photometric properties, i.e. displacing them toward bluer, younger values of Mg2. Most likely a combination of the two effects is at work.
Though our sample is far from being statistically complete, we argue that it could sketch two consecutive phases of the accretion process: interaction and merging. We note that our selection criteria are biased toward pair-objects obeying the universal $`\sigma `$ vs. Mg2 relation.
### 4.2 On the enhancement of $`\alpha `$ elements
Assuming that Fe5270 and Mg2 are good indicators of the iron and magnesium abundances $`Z_{Fe}`$ and $`Z_{Mg}`$, respectively, G93 and Worthey et al. (1992) inferred that the \[Mg/Fe\] ratio changes with the mass of galaxies. Giant galaxies are characterized by a high \[Mg/Fe\] ratio, whereas ordinary galaxies by a lower, nearly solar ratio.
Several scenarios have been proposed by G93, Worthey (1992), Worthey et al. (1992) to explain the observed Fe vs. Mg2 relation and the inferred over-abundance of Mg passing from ordinary to giant (bright) early-type galaxies (see also Matteucci 1997 for a recent review of the subject).
All of these interpretations stand on the notion that heavy elements (like Fe) are predominantly generated by Type I super-novae, whereas lighter elements (like O, Mg, Si) are produced by Type II super-novae. As Type I and II SNe have progenitors with very different stellar masses (intermediate, low-mass stars the former, and massive stars the latter), the production of O, Mg, Si occurs much earlier than the bulk production of Fe.
The goal is reached by supposing that star formation stops much earlier in giant galaxies than in compact ones. G93’s suggestion is to some extent supported by the simple interpretation of the distribution of early-type galaxies in his sample on the H$`\beta `$ vs. \[MgFe\] plane (see below). Furthermore, considering the velocity dispersion $`\sigma `$ as a measure of the galaxy mass, he argued that massive galaxies appear to be older than low mass ones.
Incidentally, this opposes the standard galactic wind scenario proposed long ago by Larson (1974) to explain the color-magnitude relation of early-type galaxies.
It is worth noting that short-lived star formation in giant (massive) galaxies with respect to that in the compact (less massive) ones is not fully supported by major current theories of galaxy formation. In fact, in the gravitational collapse scenario, star formation in giant elliptical galaxies is expected to last longer than in the compact galaxies, because in the former the dynamical time scale is larger (about 3 times) than in the latter. Conversely, in the hierarchical scenario, where big galaxies are the result of several mergers, global star formation is expected to last for long periods of time. In any case, both scenarios predict giant galaxies to be younger than compact galaxies.
To cast light on this intriguing affair, Borges et al. (1995) have recently provided a new empirical calibration of the Mg2 index, that takes explicitly into account the relative abundance of the Mg element. They express the Mg2 index as a function of effective temperature, gravity, \[Fe/H\] and \[Mg/Fe\]. Tantalo et al. (1998) included the new calibration of Borges et al. (1995) in their models of synthesis of stellar populations, and they calculated the expected integrated Mg2 index for SSPs of various ages and metallicities, assuming the same isochrone for different abundance ratios at a given total metallicity. From their analysis of the G93 sample, younger galaxies seem to be moderately more metal rich and less \[$`\alpha `$/Fe\] enhanced than the older ones (see figures 5 and 6 in Tantalo et al. 1998). The present study adopts the “standard” fitting functions, based on solar abundance ratios, and thus it cannot follow their detailed abundance analysis. Nevertheless, their results will appear to be strengthened when a larger sample of galaxies is analyzed in the \[MgFe\] vs. H$`\beta `$ diagram (see next section).
### 4.3 The $`\sigma `$ vs. H$`\beta `$ diagnostic
According to Worthey (1992) and Bressan et al. (1996) MgI, MgH and FeI indices are good metallicity indicators whereas H$`\beta `$ is more suited to age determinations.
According to G93, in the case of SSPs, H$`\beta `$ is a good age indicator as it reflects the temperature of the turn-off stars. In galaxies, recent experiments with composite stellar populations suggest that H$`\beta `$ yields a sort of mean age of the stellar populations, as it tends to over-weight the age of the young stellar component with respect to the age of the bulk stars (see e.g. Bressan et al. 1996).
The two panels of Fig. 3 make evident the different behavior of our sample (right) and G93 sample (left panel) in the $`\sigma `$ vs. H$`\beta `$ plane. G93 found a correlation between $`\sigma `$ and H$`\beta `$ indicating that normal ellipticals, characterized by a shallower gravitational potential, are predominantly young objects as indicated by their high value of H$`\beta `$ (see also Fig. 11 in Bressan et al. 1996).
In our sample there are a number of galaxies with high $`\sigma `$ and H$`\beta `$ values at the same time. This is particularly true for the sample of shell-galaxies. This finding implies the existence of a family of galaxies with deep gravitational potentials and relatively young ages.
This can be a consequence of the interaction experienced by these galaxies that increases both $`\sigma `$ and H$`\beta `$ (thus making them appear younger).
Rampazzo et al. (1999a) have shown that shell-galaxies seem actually to belong to the family of giant galaxies, as far as their effective radius and surface brightness are concerned (see Fig.2 in Rampazzo et al. 1999a). In contrast, pair-members have a much wider distribution, going from normal to giant galaxies.
Following G93, we would expect that shell-galaxies (with high $`\sigma `$ and high mass in turn) are old stellar systems, in which the young component formed during a recent interaction and/or merger episode is either too old or too weak to be able to affect the global Mg indices. The fact that the strong-lined pair-members show the same behavior as shell-galaxies (i.e. deviation from the Fe vs. Mg2 relation and likely high \[Mg/Fe\] ratios), whereas the weak-lined ones agree with the above relation and likely have normal \[Mg/Fe\] ratios, leads us to argue that a much larger spread in age ought to exist among pair-members as compared to shell-galaxies. Checking this point requires the use of H$`\beta `$ as an age indicator (see below).
As our sample is composed of objects in interaction and/or post-interaction stages, the mean age of their stellar populations is less meaningful with respect to that of the normal ellipticals of G93. In fact, an old age for the giant galaxies does not exclude the possible presence of a young component formed in a recent (or ongoing) event involving only a small percentage of the total mass of the galaxy.
### 4.4 H$`\beta `$ vs. \[MgFe\] diagnostic
Fig. 4 presents the shell- and pair-galaxies of our sample (dots and squares, respectively) in the \[MgFe\] - H$`\beta `$ plane and compares them with theoretical models.
Three SSPs (dashed lines) with different chemical compositions, i.e. \[Y=0.250 Z=0.004\], \[Y=0.280 Z=0.020\] and \[Y=0.335 Z=0.05\], are plotted as a function of the age, and lines of constant age are also indicated (dotted-dashed).
In the same diagram we plot the G93 galaxies (asterisks) and the mean value for the early-type galaxies of the Virgo cluster (large open circle). The two sets of data share the same properties but differ in some important aspects. Pair- and shell-galaxies show a global metallicity similar to that of the normal G93 galaxies, and no difference between the shell-galaxies and pair-members appears in this diagram. At the same time we notice that the H$`\beta `$ values for our sample extend up to the top of the \[MgFe\] - H$`\beta `$ plane, where normal galaxies are not found.
There is a group of galaxies whose H$`\beta `$ is much lower than predicted by models of old age (say 15 Gyr). These galaxies are likely to have H$`\beta `$ affected by contamination of the H$`\beta `$ emission line whose net effect is to decrease the value of H$`\beta `$ by partially filling up the corresponding absorption feature.
The last thing to note is that the distribution of galaxies in this plane, of shell- and pair-objects in particular, is significantly steeper than the path followed by SSPs. The locus of real galaxies stretches almost vertically along the H$`\beta `$ axis and covers a narrow range of \[MgFe\].
## 5 Individual galaxies on the H$`\beta `$ vs. \[MgFe\] plane
This section is dedicated to discussing in some detail the location of groups of galaxies on the H$`\beta `$ \- \[MgFe\] plane, trying firstly to understand the reason for their position and secondly to decipher the underlying age. Particular attention is paid to some of the galaxies in the sample, the shell-galaxies in particular. In fact, for a few of them, independent estimates of the age of the last burst of star formation, based on morphological observations and dynamical models, can be found in the literature. Consistency between photometric and dynamical age estimates would provide useful insight on the connection between dynamical processes and star formation.
### 5.1 Galaxies showing emission lines
Some of the galaxies of our sample show clear emission lines. In particular, among the pair galaxies, we find a detectable \[OII\](3727Å) line in 10 objects (in RR278a this line can be detected but its measurement is very uncertain because of the high noise of the corresponding spectrum around 3700Å). A subsample of 3 out of these 10 galaxies (RR24a, RR24b and RR278a) shows also the \[OIII\](5007Å) line and H$`\beta `$ emission. Among shell galaxies, the \[OII\](3727Å) line can be detected in 7 objects, 4 of which (NGC 7135, NGC 1210, NGC 6958 and NGC 2945) show also the \[OIII\](5007Å) line, whereas H$`\beta `$ emission can be detected (even if measured with great uncertainty) only in NGC 7135.
In Fig. 4 the four galaxies (3 pair objects and 1 shell galaxy) showing H$`\beta `$ in emission are not shown (their H$`\beta `$ index being negative).
As far as the remaining galaxies with measured \[OIII\](5007Å) emission (NGC 1210, NGC 6958 and NGC 2945) are concerned, adopting the G93 correction they would shift by $`\delta `$LogH$`\beta `$=+0.27, +0.67 and +3.4, respectively.
Given the high expected correction we have decided not to display these galaxies in Fig. 4.
For the galaxies showing only \[OII\](3727Å) line in emission (3 shell galaxies and 7 pair members), we used open symbols, in order to stress that the value of the H$`\beta `$ index can be affected by the emission contamination. However, by considering that the threshold of \[OIII\](5007Å) detection in our spectra is about 0.1Å, we estimate that in these latter galaxies the correction to the observed Log(H$`\beta `$) is less than +0.15 dex.
As previously pointed out, no correction for emission has been applied to the values of H$`\beta `$ shown in Fig. 4.
### 5.2 Galaxies with low \[MgFe\] values
RR 317b and RR 62a have unusually low values of \[MgFe\] and very different values of H$`\beta `$, very high in RR 317b and very low in RR 62a. Furthermore, while RR 317b shows evidence of emission \[OII\](3727Å), no detection is found in RR 62a (Longhetti et al. 1998b). Rampazzo et al. (1999a) have shown that in the $`\mu _e`$ \- R<sub>e</sub> plane both galaxies occupy the same area as ordinary galaxies. In addition to this, RR 62a lies outside the Hamabe-Kormendy (1987) relation which holds for bright galaxies. All this implies that both galaxies are not giant objects. According to Gorgas et al. (1993) dwarf galaxies populate the left part of the H$`\beta `$ \- \[MgFe\] plane showing a metallicity lower than in giant ellipticals. In this context, the location of RR 62a and RR 317b can be explained.
### 5.3 Galaxies with high H$`\beta `$ values
Bressan et al. (1996) investigated the evolutionary path in the H$`\beta `$ vs. \[MgFe\] plane of a galaxy in which a recent burst of star formation is superposed to a much older population.
As soon as the burst begins, the galaxy runs away from the locus of quiescence toward the top of the H$`\beta `$ \- \[MgFe\] plane following a nearly vertical path. As soon as the burst is over, on a short time scale, the galaxy goes back toward its original position sliding along a path that is also nearly vertical. A loop is performed. See the numerical experiment described by Bressan et al. (1996), in which a burst with 1% of the galaxy mass engaged in the star forming activity and duration of 10<sup>8</sup> years is calculated and displayed in the H$`\beta `$ vs. \[MgFe\] plane (Figs. 7 & 8 in Bressan et al. 1996). Since the burst intensity and duration may vary from one case to another, a large variety of loops are possible in the H$`\beta `$ vs. \[MgFe\] plane. Stronger and longer bursts induce wider loops and longer recovery time scales.
Post-starburst objects, such as the shell-galaxies, are expected to follow closely the distribution of the starburst simulations and to depart somewhat from the region occupied by the normal galaxies of G93. This could be the case for the galaxies with very high values of H$`\beta `$ (LogH$`\beta >`$ 0.35) present in our sample, namely NGC 813, NGC 2865, NGC 5018, ESO 2400100b, ESO 1070040, RR 187b, RR101b. None of these galaxies show emission lines in their spectra, apart from ESO 2400100b which shows only the \[OII\](3727Å) line. With lower H$`\beta `$ (0.25 $`<`$ LogH$`\beta <`$ 0.35), but a different position with respect to the G93 sample, there are also NGC 1316, NGC 6776 (with \[OII\](3727Å) emission detected in their spectra), RR 225a, RR 387b, RR 409a, RR297a, RR101a and RR 405b.
Pair-members. None of the pair-members in this area show emission lines in the spectrum, apart from RR101a that shows the \[OII\](3727Å) line. All systems they belong to show signs of interaction at least in one of the members. From the morphological signatures and the absence of emission lines we suggest that these pairs have already passed through the strongest interaction phase and are now recovering from a burst that likely occurred less than 1 Gyr ago.
ESO 2400100. Longhetti et al. (1998a) showed that ESO 2400100 is actually made up of two components separated by 5″ and 200 km s<sup>-1</sup>. This feature is present in three independent spectra. Probably what we are seeing is an ongoing merger accompanied by star formation as suggested by the very high value of H$`\beta `$. The component ESO 2400100a falls, however, in the area of normal galaxies.
NGC 2865. This shell-galaxy is at the top of the \[MgFe\] vs. H$`\beta `$ plane. Our data, interpreted in the light of the Bressan et al. (1996) simulations, suggest that a burst of star formation has occurred very recently, probably less than 1 Gyr ago. This galaxy has been observed by Schiminovich et al. (1995), who derive from VLA images a correlation between the HI distribution and the fine structures (shells, tails, and loops) hosted by the galaxy. The authors argue that the lack of HI gas in the center could be explained by a burst of star formation. The origin of the shells and the evolution of NGC 2865 remain in any case unclear. The question to be asked is why the \[MgFe\] - H$`\beta `$ plane hints at a recent burst, while the correlation between shells and stellar motions favors a $`\sim `$7 Gyr old merger.
NGC 1316. The position of this galaxy in the H$`\beta `$ vs. \[MgFe\] diagram suggests that it has undergone a recent burst of star formation. Very recently, Mackie & Fabbiano (1998) studied the evolution of its gas and stars in the optical and X-ray bands. They emphasized the presence in NGC 1316 of a complex tidal-tail system that hampers the accurate reconstruction of its past history (mergers). Speculating about some observational indications which hint at an efficient conversion of HI into other gas phases, Mackie & Fabbiano (1998) suggest that a low-mass, gas-rich galaxy could have started merging $`\sim `$0.5 Gyr ago. A longer time scale is proposed by Schweizer (1980), who explains NGC 1316 as the product of several mergers of gas-rich galaxies over the past $`\sim `$2 Gyr.
### 5.4 Galaxies with normal \[MgFe\] and H$`\beta `$ values
In the area occupied by the majority of the G93 galaxies, in particular those which are members of the Virgo cluster, we find both shell-galaxies and pair-members. The age of the bulk stellar population in these galaxies can be estimated to be as old as 10-15 Gyr. If a star-burst phenomenon has ever occurred, its signatures on the stellar populations of the galaxies located in this area of the diagram have already faded away. Shell-galaxies falling in this region (NGC 1549, NGC 1553, IC 5105, NGC 6849) suggest that shells are long-lasting morphological features, which can be detected also after the star formation signatures have disappeared.
## 6 More on the H$`\beta `$ vs. \[MgFe\] plane
Starting from the idea of Bressan et al. (1996) that the distribution of galaxies in the H$`\beta `$ vs. \[MgFe\] plane may reflect secondary episodes of star formation superposed on an old stellar component, we present here the results of simulations designed to clarify this point. In the analysis below, we will neglect any effects due to \[$`\alpha `$/Fe\]. As a matter of fact, first, no calibration accounting for different \[Mg/Fe\] ratios is available for the Mgb index (in analogy to that of Borges et al. (1995) for the Mg2 index). Furthermore, any increase in \[Mg/Fe\] is expected to be associated with a decrease in \[Fe/H\], and the two effects likely parallel each other in the final determination of \[MgFe\]. Therefore, \[MgFe\] can safely be considered a good, global indicator of metallicity (see also G93).
We have constructed models aimed at predicting the statistical distribution of 80 post-burst galaxies and at reproducing the observational one displayed in Fig. 4. Following the selection criteria explained in the previous section, the observational sample to be matched is composed of about 20 shell galaxies, 20 pair members and 40 “normal” galaxies from G93.
We simplify the complex star formation history of an early–type galaxy to a bulk population made of old stars on which a more recent burst of activity with different age and intensity is superposed. The old stars are in turn represented by a SSP whose age is randomly chosen between $`T_1`$ and $`T_2`$, parameters in the models. We are aware that this oversimplified scheme neglects the complex star formation histories of real galaxies and the ensuing mixture of stars with different chemical composition.
Integrated indices of SSPs, resulting from multi-population models, have been investigated by Idiart et al. (1996) and Bressan et al. (1996). In particular, the complex chemical modeling used by Bressan et al. (1996) suggests that field early–type galaxies are confined within a narrow range of metallicities, around the mean value. For the sake of simplicity, we will use the mean metallicity (regardless of the physical process by which galaxies are enriched in metals, e.g. closed-box, infall, etc.) to rank galaxies as a function of this parameter.
The young stellar component, formed during the recent burst of star formation, is represented by a SSP whose age is randomly varied between 0.1 Gyr and $`T_3`$. The value of $`T_3`$ has been fixed to 1 Gyr for 20 models in order to represent the 20 pair members. These latter in fact seem to be characterized by a very young stellar component generated by the ongoing dynamical interaction (Longhetti et al. 1999).
For the remaining 60 objects, $`T_3`$ is allowed to arbitrarily vary from young to old ages (up to $`T_2`$). The strength of the secondary burst, represented by the percentage of the total mass turned into stars, is randomly chosen between 0 (no secondary stellar activity) and $`f`$, a parameter of the models. Since we are interested in guessing the minimum threshold above which the secondary episode has a sizable effect on the line strength indices, we will consider only the case in which the secondary episode involves a minor fraction of the galaxy mass.
The metallicity $`Z`$ of both stellar components is randomly varied from Z=0.015 (75% of the solar value) to Z=0.025 (1.25 times the solar value). We have also calculated a set of simulations in which the metallicity is supposed to increase linearly from Z=0.015 to Z=0.04 (twice the solar value) over the time interval $`T_1`$–$`T_2`$.
Finally, random errors $`\mathrm{\Delta }`$Log(\[MgFe\]) $`\sim `$ 0.011 and $`\mathrm{\Delta }`$Log(H$`\beta `$) $`\sim `$ 0.02 are applied to the model indices to better simulate the observational data.
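The essential logic of these simulations can be sketched in a few lines. The SSP indices below are crude illustrative scaling relations (not the actual model grid of Table 2), and the luminosity weighting of the two components is an assumed $`L/M`$ scaling; only the bookkeeping (random old age between $`T_1`$ and $`T_2`$, random burst age and strength, optional age-metallicity relation, simulated errors) follows the text:

```python
import numpy as np

rng = np.random.default_rng(0)

def ssp_hbeta(age_gyr, z):
    """Illustrative stand-in for the SSP H-beta index (fades with age)."""
    return 4.0 * age_gyr ** -0.35 * (z / 0.02) ** -0.1

def ssp_mgfe(age_gyr, z):
    """Illustrative stand-in for the SSP [MgFe] index."""
    return 3.5 * (z / 0.02) ** 0.4 * (age_gyr / 10.0) ** 0.25

def z_of_age(t, t1=4.0, t2=16.0):
    """Assumed age-metallicity relation: mean Z grows with cosmic time,
    from Z=0.015 for the oldest to Z=0.04 for the youngest objects."""
    return 0.04 - (0.04 - 0.015) * np.clip((t - t1) / (t2 - t1), 0.0, 1.0)

def mock_galaxy(t1, t2, t3, f_max, use_amr=False):
    t_old = rng.uniform(t1, t2)          # age of the bulk population
    t_b = rng.uniform(0.1, t3)           # age of the secondary burst
    f = rng.uniform(0.0, f_max)          # mass fraction in the burst
    z_old = z_of_age(t_old) if use_amr else rng.uniform(0.015, 0.025)
    z_b = z_of_age(t_b) if use_amr else z_old
    # Luminosity weights; the 10/t factor is a crude assumed L/M scaling.
    w_old, w_b = (1.0 - f) * 10.0 / t_old, f * 10.0 / t_b
    hb = (w_old * ssp_hbeta(t_old, z_old) + w_b * ssp_hbeta(t_b, z_b)) \
         / (w_old + w_b)
    mgfe = (w_old * ssp_mgfe(t_old, z_old) + w_b * ssp_mgfe(t_b, z_b)) \
           / (w_old + w_b)
    # Simulated observational errors on the logarithmic indices.
    return (np.log10(mgfe) + rng.normal(0.0, 0.011),
            np.log10(hb) + rng.normal(0.0, 0.02))

# Panel (b)-like run: old, nearly coeval bulk + burst of at most 2%.
sample_b = [mock_galaxy(14.0, 16.0, 14.0, 0.02) for _ in range(80)]
# Panel (d)-like run: extended bulk formation + age-metallicity relation.
sample_d = [mock_galaxy(4.0, 16.0, 16.0, 0.02, use_amr=True)
            for _ in range(80)]
```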
In Fig. 5 we show an example of the 80 simulated galaxies for 4 different models. The aim is to highlight the effects of the underlying basic parameters, namely the epoch at which the bulk stars have been formed, the strength of the more recent burst episode, if any, and its age.
In panels (a) and (b) galaxies are conceived as old, nearly coeval systems, their population being approximated by a single SSP with age between 14 and 16 Gyr; this corresponds to the current view of elliptical galaxies in rich clusters (Bower et al. 1998). The maximum intensity of the superposed burst amounts to 10% and 2% of the total mass, panels (a) and (b) respectively.
Inspection of the results shown in panel (a) indicates that model galaxies with a burst engaging up to 10% of the total mass predict too high values of H$`\beta `$ and too many young objects. In contrast, the models of panel (b) span the range of H$`\beta `$ values indicated by the observations. In both cases, however, the expected distribution of objects with respect to the H$`\beta `$ index is at variance with the observational one. Indeed, models of this type predict a bimodal distribution, whereby the old galaxies (those for which the burst is almost as old as the bulk of their stellar populations) clump together in the lower portion of the diagram, whereas the “young” objects (those with very young bursts) form a tail extending to high values of H$`\beta `$. The real clump is even narrower than displayed if one considers the effect of the simulated errors.
The tail of “young” models agrees quite well with the observations, thus suggesting that the upper part of the diagram is populated by objects which experienced a recent, minor burst of star formation (involving less than 2% of the total mass).
The burst alone cannot, however, explain the smooth distribution observed at low values of H$`\beta `$. The reason is that the index H$`\beta `$ of a composite stellar population (2% of the mass in “young” stars and the remaining 98% in old stars), soon after the maximum excursion toward high values of H$`\beta `$, fades out very rapidly as the young population ages. Therefore catching a galaxy at intermediate values is highly improbable.
Slowing down the evolutionary rate of the “young” component by increasing the percentage of mass involved in the young burst produces an uncomfortably large fraction of objects in the upper part of the diagram, e.g. panel (a).
Since a sort of fine tuning between the old and young component is not easy to understand, the viable alternative is that the old component spreads over significantly longer periods than assumed in the above simulations. To this aim, we present the models shown in panels (c) and (d). The old population in these models has an average age that spreads over a significant fraction of the Hubble time. This is meant to indicate that either the object has been growing for such a long time with a low star formation rate, or that its major star formation activity was not confined to an early epoch. The young component is allowed to occur as in the simulations of panel (b). It is immediately evident that the new models much better reproduce the distribution of galaxies all over the range of H$`\beta `$.
However, problems remain if the metallicity is randomly chosen. Indeed, the model galaxies always tend to follow the path of an SSP, in these simulations the SSP with Z=0.02 as shown in panel (c). As already anticipated, real galaxies seem to follow a path steeper than that of an SSP. We consider this point as evidence of an underlying relation between the age and metallicity of the bulk population of stars. Panel (d) shows our final experiments, where the average metallicity of the bulk stars has been supposed to increase linearly with time from Z=0.015 to Z=0.04. The broad range of metallicity is required by the large scatter in Log(\[MgFe\]). If this suggestion is sound, it would imply that young early-type galaxies in the field are on average more metal-rich than old systems, with an average rate of metal enrichment $`\mathrm{\Delta }`$Log(Z)/$`\mathrm{\Delta }`$Log(t) $`\simeq `$ $`-`$0.7. If confirmed, the vertical distribution of galaxies in this diagram is therefore the trace of a global metal enrichment taking place in galaxies during all the star forming episodes.
## 7 Summary and conclusions
In this paper firstly we have compared the line strength indices for SSPs that one would obtain using different calibrations in literature, namely Worthey (1992), Worthey et al. (1994), Buzzoni et al. (1992, 1994) and Idiart et al. (1995). Secondly, with the aid of the Worthey (1992) and Worthey et al. (1994) calibrations, the sample of shell- and pair-galaxies by Longhetti et al. (1998a,b), and the sample of G93 for normal elliptical galaxies have been systematically analyzed, looking at the position of all these galaxies in various diagnostic planes. The aim is to cast light on the star formation history that took place in these systems with particular attention to those (shell- and pair-objects) for which the occurrence of dynamical interaction is evident. Finally, from comparing normal to dynamically interacting galaxies we attempt to understand the reasons for their similarities and differences.
The results of this study can be summarized as follows:
(1) The various calibrations of the line strength indices as a function of the basic stellar parameters (effective temperature, gravity and metallicity) lead to quite different results. Specifically, the Buzzoni et al. (1992, 1994) calibrations for Mg2 and Fe5270 agree with those by Worthey (1992) and Worthey et al. (1994) only for SSPs older than 3 Gyr. For H$`\beta `$ the agreement is also good if one excludes all ages younger than about 1 Gyr. The Idiart et al. (1995) calibrations can be compared with the others only for a limited range of SSP ages. For the Mgb and Mg2 indices we find a roughly constant offset, which could be attributed to different properties (e.g., metallicity, gravity) of the calibrating sample of stars. For the purposes of this study we have adopted Worthey (1992) and Worthey et al. (1994) as the reference calibrations.
(2) The comparison of the Mg and Fe indices (specifically Mg2, Fe5270 and Fe5335) and the velocity dispersion $`\sigma `$ of normal, shell- and pair-galaxies suggests firstly a different behavior of shell-galaxies with respect to pair-objects and secondly that strong-lined galaxies are likely to have super-solar \[Mg/Fe\] abundance ratios. Various kinds of star formation histories leading to super-solar \[Mg/Fe\] are examined at the light of current understanding of the mechanisms of galaxy formation. None of these is however able to give a self-consistent explanation of the \[$`\alpha `$\]-enhancement problem.
(3) The same galaxies are analyzed in the H$`\beta `$ vs. \[MgFe\] plane and compared to the G93 set of data. The shell- and pair-objects have the same distribution in this diagnostic plane as the normal galaxies even if they show a more pronounced tail toward high H$`\beta `$ values.
(4) The H$`\beta `$ vs. \[MgFe\] plane is divided into several sub-regions, which are carefully inspected in order to look for all plausible causes that would justify the positions of the galaxies.
(5) As shell- and pair-galaxies share the same region of the H$`\beta `$ vs. \[MgFe\] plane occupied by normal galaxies, we suggest that a common physical cause is at the origin of their distribution. The star formation history in these objects is investigated with the aid of very simple galaxy models.
We find that the tail at high H$`\beta `$ values can be ascribed to secondary bursts of star formation which, in the case of shell- and pair-galaxies, can be easily attributed to interaction/acquisition events whose signatures are well evident in their morphology.
(6) A typical model where the burst is superimposed on an otherwise old and coeval population is however not able to reproduce the smooth distribution of galaxies in the H$`\beta `$ vs. \[MgFe\] plane. This kind of model would predict an outstanding clump at low H$`\beta `$ values, contrary to what is observed. Models in which the bulk of the star formation happened over a significant fraction of the Hubble time (4 Gyr $`\le `$ t<sub>old</sub> $`\le `$ 16 Gyr) better match the observed diagram.
(7) In this context, the peculiar, smooth and almost vertical distribution of galaxies (normal, shell- pair-objects) in the H$`\beta `$ vs. \[MgFe\] plane is interpreted as the trace of the increase of the average metallicity accompanying all star forming events. This could be the signature of a metal enrichment happening on a cosmic scale.
(8) Although deciphering the position of galaxies in the H$`\beta `$ vs \[MgFe\] plane to infer the age of the constituent stellar populations is a difficult task, due to possible blurring caused by the secondary stellar activity, we may still draw some conclusions about the duration of the shell phenomenon. Specifically, since shell-galaxies can be found in the same region as old normal galaxies, we may say that shells are a morphological feature able to persist for long periods of time, much longer than the star forming activity that likely accompanied their formation. Among current dynamical models in which shell-structures can be formed, the weak interaction mechanism of Thomson & Wright (1990) and Thomson (1991) naturally predicts long-lived shells without particular hypotheses on the type of encounters.
###### Acknowledgements.
ML acknowledges the kind hospitality of the Astronomical Observatory of Brera (Milan) and Padua during her PhD thesis, and the support by the European Community under TMR grant ERBFMBI-CT97-2804. CC wishes to acknowledge the friendly hospitality and stimulating environment provided by MPA in Garching, where this paper was completed during his leave of absence from the Astronomy Department of Padua University. This study has been financially supported by the European Community under TMR grant ERBFMRX-CT96-0086.
# Data analysis techniques for stereo IACT systems
## Abstract
Based on data and Monte-Carlo simulations of the HEGRA IACT system, improved analysis techniques were developed for the determination of the shower geometry and shower energy from the multiple Cherenkov images. These techniques allow, e.g., the selection of subsamples of events with better than 3’ angular resolution, which are used to limit the rms radius of the VHE emission region of the Crab Nebula to less than 1.5’. For gamma-rays of the Mrk 501 data sample, the energy can be determined to typically 10% and the core location to 2-3 m.
Systems of imaging atmospheric Cherenkov telescopes (IACTs) for TeV gamma-ray astronomy allow the stereoscopic reconstruction of air showers, and provide improved angular resolution, energy resolution, and rejection of backgrounds such as showers induced by cosmic rays, local muons, or random triggers caused by night-sky background light. For systems with more than two telescopes, the shower parameters are overdetermined, allowing important cross-checks of the performance of the telescope system and of the reconstruction algorithms. In particular, the event-by-event determination of the position of the core permits a direct measurement of the effective detection areas, and an estimate of the systematic errors in the flux measurement \[hofmann\_kruger; mrk501\].
In this talk, I will cover recent developments concerning improved algorithms to reconstruct the shower direction and shower energy, and their tests using data from the HEGRA IACT system \[performance; mrk501\]. Detailed information as well as a more complete list of references can be found in \[recopaper; erespaper; crabsizepaper\].
Reconstruction of the shower geometry \[recopaper\]. The traditional reconstruction algorithm used in HEGRA determines the shower direction by intersecting the axes of all Cherenkov images, regardless of the quality of the individual images (Fig. 1(a)). In particular in events combining some bright images with dim images, the latter, with their poorly determined image parameters, can spoil the angular resolution. The angular resolution can be improved by estimating, for each image, the errors on the image parameters and by properly propagating these errors (Fig. 1(b)). In addition, one can use the shape of the image, in particular the width/length ratio, to estimate the distance d between the image centroid and the source, and use this information to derive, for each telescope, an error ellipse for the source location (actually, two ellipses, because of the head-tail ambiguity). The ellipses from different telescopes are then combined to locate the source (Fig. 1(c)). Finally, another approach (d) is to fit the intensity distribution of the images using a set of image templates, rather than parameterizing the images by their Hillas parameters. Fig. 1(e) shows the angular resolution achieved for different event classes. As expected, techniques (b)-(d) outperform the simplest algorithm (a). The fit (d) is generally best, but the improvement compared to the much simpler and faster algorithm (c) is not dramatic. In addition to an improvement in the angular resolution, algorithms (b)-(d) provide, for each event, an estimate of the angular resolution (Fig. 2(a)), which can be used, e.g., to reject poorly reconstructed events.
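A minimal sketch of the weighted least-squares intersection underlying algorithms (a) and (b): each image contributes a line through its centroid along the major axis, and the source estimate minimizes the weighted sum of squared perpendicular distances. The weights and the geometry below are invented for illustration; in the real analysis the weights would encode the per-image parameter errors, and the same machinery locates the core in the ground plane:

```python
import numpy as np

def intersect_axes(centroids, directions, weights):
    """Least-squares intersection of image axes in a common (x, y) plane.
    Each axis passes through its image centroid along the major-axis
    direction; the returned point minimizes the weighted sum of squared
    perpendicular distances to all axes."""
    M = np.zeros((2, 2))
    v = np.zeros(2)
    for c, u, w in zip(centroids, directions, weights):
        u = u / np.linalg.norm(u)
        P = np.eye(2) - np.outer(u, u)     # projector onto axis normal
        M += w * P
        v += w * P @ c
    return np.linalg.solve(M, v)

# Three images whose axes all point back at the source (0.5, -0.2),
# with slightly perturbed orientations and unequal weights.
src = np.array([0.5, -0.2])
cs = [np.array([1.8, 0.6]), np.array([-0.9, 1.1]), np.array([0.2, -1.5])]
ds, ws = [], []
for i, c in enumerate(cs):
    ang = np.arctan2(*(c - src)[::-1]) + np.radians([0.5, -1.0, 2.0][i])
    ds.append(np.array([np.cos(ang), np.sin(ang)]))
    ws.append(1.0 / [0.5, 1.0, 2.0][i] ** 2)   # w = 1/sigma^2 per image
print(intersect_axes(cs, ds, ws))              # close to (0.5, -0.2)
```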
Size of VHE emission region of the Crab Nebula crabsizepaper . Well-reconstructed events reach an angular resolution on the same scale as the characteristic size of the Crab Nebula. One can use such events to search for evidence for an extended VHE emission region. Fig. 2(b) shows the angular distribution of events with an estimated angular error of less than 3’ in each projection, relative to the direction to the Crab. The width of the distribution is, within statistical errors, identical to the width expected for a point source on the basis of simulations (Fig. 2(c)) and to the width of the gamma-ray distribution observed for Mrk 501. Therefore, we can only give an upper limit on the size of the emission region. Including systematic effects, e.g. due to pointing errors, we find a 99% upper limit on the rms radius $`\sqrt{<r^2>}`$ of the TeV emission region of 1.5’. This value is comparable to the radius at radio wavelengths, but significantly larger than the size at x-ray energies. Standard models for the VHE gamma-ray emission of the Crab Nebula assume that the same electron population is responsible for x-rays via synchrotron emission, and for TeV gamma-rays via the IC process, and predict a small TeV emission region, well below the experimental limit. Possible hadronic production of gamma-rays, on the other hand, could take place at significantly larger distances from the pulsar.
Core determination erespaper . The shower core is usually located by intersecting the image axes, starting from the telescope locations. The precision of the core determination is therefore given by the precision with which the image axes can be determined, typically $`O(5^{\circ })`$. If the source location is known, as is the case, e.g., for the Mrk 501 data sample, one can alternatively determine the image axis as the line connecting the image of the source and the image centroid. With a typical distance between the source and the image centroid of $`1^{\circ }`$ and a measurement of the centroid to $`O(0.02^{\circ })`$, the image axis is then known to $`O(1^{\circ })`$. Using this technique, Monte Carlo simulations predict that the precision for the shower core improves from about 6 m to 10 m for the normal method, to about 2 m to 3 m, depending on the core distance (in each case, properly taking into account the errors on the measured image parameters). The exact knowledge of the core position is particularly important for the energy determination, when the observed light yield is translated into an energy estimate.
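The geometric part of the core reconstruction can be sketched as follows, assuming each image axis defines a line in the ground plane through the telescope position (equal weights here; in practice each line would be weighted by its image quality, and the axes must not all be parallel):

```python
import numpy as np

def intersect_axes(telescope_pos, axis_dirs):
    """Least-squares intersection of lines (t_i, u_i): the point minimizing
    the summed squared perpendicular distances to all image axes."""
    A = np.zeros((2, 2))
    b = np.zeros(2)
    for t, u in zip(telescope_pos, axis_dirs):
        u = np.asarray(u, float) / np.linalg.norm(u)
        P = np.eye(2) - np.outer(u, u)   # projector perpendicular to the axis
        A += P
        b += P @ np.asarray(t, float)
    return np.linalg.solve(A, b)         # estimated core location (metres)
```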
Energy determination erespaper . Earlier studies comparing event-by-event the light yield observed in different telescopes indicated correlated fluctuations in the light yield of individual showers hofmann\_kruger . Monte-Carlo studies point to the fluctuation in the height of the shower maximum as the primary source of these correlated fluctuations. Fig. 3(a) illustrates that for distances up to about 100 m from the shower axis, the light yield varies significantly with the height of the shower; only beyond the Cherenkov radius of about 120 m is the light yield stable. An obvious approach to improve the energy resolution is therefore to measure the height $`h_{max}`$ of the shower maximum, and to include it as an additional parameter, writing $`E_i=f(size_i,r_i,h_{max})`$, where $`size_i`$ is the image size measured in telescope $`i`$ at a distance $`r_i`$ from the shower axis. With an IACT system, the height of the shower maximum, or, more precisely, the height of maximum Cherenkov emission, can be determined essentially by triangulation, using the relation between the distance $`d_i`$ from the image to the source, $`r_i`$, and $`h_{max}`$: $`d_i\sim r_i/h_{max}`$. The actual algorithm erespaper uses a slightly more complicated relation, reflecting the fact that light arriving at small $`r_i`$ is generated by the tail of the shower rather than by particles near the shower maximum. The algorithm reaches a resolution in shower height of 530 to 600 m rms.
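Schematically, the triangulation amounts to a one-parameter least-squares fit of the small-angle relation $`d_i\sim r_i/h_{max}`$ (a simplified stand-in for the more complicated relation actually used); the lookup $`f`$ below is a placeholder for the simulation-derived table:

```python
import numpy as np

def estimate_hmax(d_rad, r_m):
    """Fit s = 1/h_max in d_i = s * r_i by least squares
    (d_i converted to radians, r_i in metres)."""
    d, r = np.asarray(d_rad, float), np.asarray(r_m, float)
    s = np.sum(r * d) / np.sum(r * r)
    return 1.0 / s                      # height of shower maximum, metres

def estimate_energy(sizes, r_m, h_max, f):
    """Average the per-telescope estimates E_i = f(size_i, r_i, h_max);
    f stands in for the Monte-Carlo-derived lookup table."""
    return np.mean([f(s, r, h_max) for s, r in zip(sizes, r_m)])
```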
Fig. 3(b) illustrates the effect of the various possible improvements to the energy resolution. Whereas the conventional algorithm provides a resolution of 18% to 22% for the 1 TeV to 30 TeV range, the shower-height correction provides a resolution of about 12% to 14%, and the combination of the shower height correction with the improved core determination assuming a known source yields 9% to 12% resolution.
Before applying this technique to the actual data to obtain improved energy spectra, one needs to make sure that systematic effects are under control at a level consistent with the improved resolution. While the redundant data from the IACT system provide sufficient information to check this, the analysis is not yet finished. A first test of the new method with Mrk 501 data results in a spectrum consistent with earlier analyses, possibly with a slightly steeper spectrum in the cutoff region beyond 6 TeV.
Summary. The analysis algorithms discussed here represent clear improvements over the first-generation algorithms used in the reconstruction of data from the HEGRA IACT system; it is also clear that further improvements are possible and that at this point we do not fully use all the information provided by multiple IACT images of an air shower. The algorithms do not only improve the angular resolution and the energy resolution; they also help to boost the significance of faint signals. For example, instead of simply counting all events reconstructed within a certain angular distance from a source, one can form a weighted sum, weighting events according to their expected signal-to-background ratio, as determined event-by-event from the estimated angular error and misidentification probability. First tests of such methods indicate an increase in the significance for the detection of weak sources by up to 80%.
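One simple way to build such a weighted statistic (an assumption on our part, since the exact prescription is not spelled out above) is to sum per-event weights in the on- and off-source regions; the variance of a weighted Poisson sum is the sum of the squared weights, so a significance follows directly:

```python
import numpy as np

def weighted_significance(w_on, w_off, alpha):
    """Significance of a weighted on/off measurement; alpha is the usual
    on/off exposure ratio, and each entry of w_on, w_off is one event's
    weight (e.g. its expected signal-to-background ratio)."""
    w_on, w_off = np.asarray(w_on, float), np.asarray(w_off, float)
    excess = w_on.sum() - alpha * w_off.sum()
    variance = (w_on**2).sum() + alpha**2 * (w_off**2).sum()
    return excess / np.sqrt(variance)
```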
Acknowledgments. Many of the members of the MPIK CT group have contributed in one way or another to the development and tests of the advanced analysis techniques discussed here; in particular, I. Jung, A. Konopelko, H. Lampeitl, H. Krawczynski and G. Pühlhofer should be mentioned. |
no-problem/9910/hep-lat9910010.html | ar5iv | text | # B and D semileptonic decays to light mesons FERMILAB-CONF-99/241-T
## 1 Introduction
We report on the progress made in our study of $`B`$ and $`D`$ meson semileptonic decays. A description of the analysis and some more preliminary results are in Refs. . The CKM matrix element $`|V_{ub}|`$ plays an important rôle in over-constraining the unitarity triangle but is only determined to $`\sim 20\%`$. The advent of $`B`$-factories will reduce the experimental error in exclusive decays considerably, which must be matched by more precise theoretical determinations of the nonperturbative contribution. With the increase in data, experiments will be able to study the $`q^2`$ distribution in $`B`$ and $`D`$ semileptonic decays, as shown by the CLEO collaboration, which recently presented a new analysis of $`B\to \rho l\nu `$ . Thus, lattice and experimental data could be combined in a range of $`q^2`$s where both are reliable, making the model-dependent extrapolation to $`q^2=0`$, traditionally done in lattice analyses, redundant.
A summary of our work is as follows. We have results at three lattice spacings ($`\beta =5.7,5.9`$ and $`6.1`$) with heavy quarks at the $`b`$ and $`c`$ quark masses and a light quark (daughter and spectator) at the strange quark mass. This allows us to study the lattice spacing dependence of the matrix elements $`\langle s\overline{s}(p)|V_\mu |B_s(0)\rangle `$ and $`\langle s\overline{s}(p)|V_\mu |D_s(0)\rangle `$, and experience leads us to believe the $`a`$-dependence does not change after chiral extrapolation, see Ref. . We have additional light quarks at $`\beta =5.9`$ and $`5.7`$ allowing chiral extrapolations on these lattices. The quark fields are rotated but the perturbative matching coefficients we use are those of light quarks; the full mass-dependent calculation is underway. The final results are at $`\beta =5.9`$, after chiral extrapolation.
The matrix elements are extracted from three-point correlation functions in which the heavy meson is at rest and the light meson has momentum of (0,0,0), (1,0,0), (1,1,0), (1,1,1) and (2,0,0), in units of $`2\pi /aL`$. In an approach different from that of other groups, we study the $`a`$-dependence and perform the chiral extrapolations in terms of the matrix elements and not the form factors.
For $`B`$ and $`D`$ decays the form factors are determined from matrix elements in the usual way,
$`\langle \pi (p)|V_\mu |B(p^{})\rangle =f^+(q^2)\left[p^{}+p-{\displaystyle \frac{m_B^2-m_\pi ^2}{q^2}}q\right]^\mu +f^0(q^2){\displaystyle \frac{m_B^2-m_\pi ^2}{q^2}}q^\mu `$ (1)
and thence we determine the partial widths
$$\frac{d\mathrm{\Gamma }}{d|\stackrel{}{p}_\pi |}=\frac{2m_{B,D}G_F^2|V_{ub}|^2}{24\pi ^3}\frac{|\stackrel{}{p}_\pi |^4}{E_\pi }\left|f^+(q^2)\right|^2.$$
(2)
## 2 $`B\to \pi l\nu `$
We interpolate the spatial and temporal matrix elements, extracted from fits to the three-point functions, to fixed physical momenta in the range {0.1,$`\mathrm{}`$ ,1.0} GeV as shown in Figure 1.
A significant systematic error is evident in the interpolation of the matrix elements. The temporal component, $`V_4`$, is defined at zero momentum so all physical momenta are obtained by interpolation. In contrast, the spatial matrix element is not: its first point is $`p_{lat}(1,0,0)\sim 0.7`$ GeV, and all momenta below this are obtained by extrapolation (see Fig. 1). Thus for lighter quarks the temporal component captures the effect of the nearby $`B^*`$ pole at $`p=0`$ and rises rapidly, whereas the spatial component misses this effect. Now one has a choice: introduce a model, e.g. pole dominance, to reproduce the behaviour of the matrix elements in the vicinity of the $`B^*`$ pole, or impose a cut in momentum, below which the extrapolation of the spatial matrix element is considered unreliable. We wish to avoid any model dependence so we make a cut in momenta at $`p=0.4`$ GeV.
At $`\beta =5.7`$ and $`5.9`$ the matrix elements are extrapolated to the chiral limit. The data at $`\beta =5.9`$ are discussed here (the findings are similar at $`\beta =5.7`$). Three functional forms were compared in the chiral extrapolations: linear, quadratic and including a term $`\sqrt{m_q}`$. For $`0.4\le p\le 0.8`$ GeV the best fit to the data was the quadratic form. At $`p>0.8`$ GeV all three functions resulted in unreliable fits, with bad $`\chi ^2/d.o.f.`$, due to large cutoff effects and an increasingly noisier signal. Therefore the range of momenta we consider is $`0.4\le p\le 0.8`$ GeV. The error due to extrapolation is estimated by considering the spread in results from the three possible fit forms in this range.
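For concreteness, the comparison of the three fit forms can be sketched as follows (the exact parameterizations and the use of scipy are illustrative assumptions, not the code behind the quoted numbers):

```python
import numpy as np
from scipy.optimize import curve_fit

# Candidate chiral-extrapolation forms for a matrix element at fixed momentum,
# as a function of the light-quark mass m (or a proxy such as m_pi^2).
forms = {
    "linear":    lambda m, a, b:    a + b * m,
    "quadratic": lambda m, a, b, c: a + b * m + c * m**2,
    "sqrt":      lambda m, a, b, c: a + b * np.sqrt(m) + c * m,
}

def chiral_limit(name, m_q, M, M_err):
    """Fit the chosen form to data (m_q, M +/- M_err); evaluate it at m = 0."""
    popt, _ = curve_fit(forms[name], m_q, M, sigma=M_err, absolute_sigma=True)
    return forms[name](0.0, *popt)
```

The spread of the three `chiral_limit` values then serves as the extrapolation error, as described above.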
Figure 2 shows that the $`a`$-dependence of our results (with strange light quarks) is very mild. The matrix elements show a similarly mild $`a`$-dependence . The scale to determine the physical momenta is set from the spin-averaged 1P-1S splitting in Charmonium. The quenched approximation introduces a dependence on the quantity used to set the scale and this is often used to estimate quenching effects. We repeated the procedure with $`a^{-1}(f_\pi )`$ and found only a small effect which is included in our error estimates, with the caveat that it is almost certainly an underestimate of quenching. The partial width of $`B\to \pi l\nu `$, for $`0.4\le p\le 0.8`$, is shown as the shaded region in Figure 3. This width, with its statistical error, is: $`2m_B\int _{p=0.4}^{p=0.8}dp_\pi |p_\pi |^4|f^+(p_\pi )|^2/E_\pi =12.17(11)`$.
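Numerically, the quoted quantity is a one-dimensional integral over the tabulated form factor; a minimal sketch (with a physical pion mass inserted by hand as an assumption):

```python
import numpy as np

def width_integral(p, f_plus, m_B, m_pi=0.14):
    """2*m_B * integral dp  p^4 |f+(p)|^2 / E_pi over a momentum grid p (GeV),
    e.g. restricted to 0.4 <= p <= 0.8 GeV as in the text."""
    p = np.asarray(p, float)
    f_plus = np.asarray(f_plus, float)
    E_pi = np.sqrt(p**2 + m_pi**2)
    return 2.0 * m_B * np.trapz(p**4 * f_plus**2 / E_pi, p)
```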
## 3 $`D\to \pi l\nu /D\to Kl\nu `$
In Ref. it was suggested that calculating the ratio of partial widths $`D\to \pi l\nu /D\to Kl\nu `$ is a nice way to reduce the uncertainty on $`|V_{cd}|/|V_{cs}|`$ from its current $`\sim 17\%`$. The Focus Collaboration at Fermilab expects to have $`𝒪(10^6)`$ fully reconstructed $`D`$ decays, so the experimental error will be considerably reduced. By calculating a ratio of rates it is expected that many of the theoretical uncertainties will cancel. In particular, the renormalisation factors cancel, eliminating the perturbative uncertainty. The analysis of $`D`$ decays proceeds as described for $`B`$ decays, with further details in Ref. . Here, we report on the update since last year. The chiral extrapolations at $`\beta =5.9`$ have been done for the pion and kaon final states so we can now calculate the ratio $`D\to \pi l\nu /D\to Kl\nu `$, as shown in Figure 4.
The ratio of partial widths in the range $`0.2\le p_\pi \le 0.7`$ is 1.61(19), where the error is statistical. We also calculate the ratio $`B\to \pi l\nu /D\to \pi l\nu `$, shown in Figure 5. With the expected experimental precision in $`D`$ decays this ratio has the advantage that many uncertainties are reduced and therefore may prove an interesting avenue for a determination of $`|V_{ub}|`$.
## 4 Conclusions
In conclusion we present a preliminary summary of the systematic errors contributing to the theoretical error in $`|V_{ub}|`$ and $`|V_{cd}|/|V_{cs}|`$. The uncertainty due to the perturbative matching is not yet included. Adding in quadrature gives an error of $`10\%`$ on $`|V_{ub}|`$ and $`13\%`$ on $`|V_{cd}|/|V_{cs}|`$ and we expect to improve upon this in the final result.

| Source of error | $`|V_{ub}|`$ | $`|V_{cd}|/|V_{cs}|`$ |
| --- | --- | --- |
| Statistics | 4% | 5% |
| $`\chi `$-extrapolation, p-interpolation | 8% | 10% |
| $`a`$-dependence | 5% | 5% |
| Determining $`a^{-1}`$ | 3% | 3% |
| Fits, excited-state contamination | 2% | 2% |
| $`m_Q`$-dependence | 1% | 1% |
Acknowledgements
Fermilab is operated by Universities Research Association, Inc. for the U.S. Department of Energy. |
no-problem/9910/cond-mat9910075.html | ar5iv | text | # Nonlinear response scaling of the two-dimensional XY spin glass in the Coulomb-gas representation
to appear in Phys. Rev. B
## Abstract
Vortex critical dynamics of the two dimensional XY spin glass is studied by Monte Carlo methods in the Coulomb-gas representation. A scaling analysis of the nonlinear response is used to calculate the correlation length exponent $`\nu `$ of the zero-temperature glass transition. The estimate, $`\nu =1.3(2)`$, is in agreement with a recent estimate in the phase representation using the same analysis and indicates that the relevant length scale for vortex motion is set by the spin-glass correlation length and that spin and chiralities may order with different correlation length exponents.
It is well known that vector spin glasses, such as the XY spin glass, have a chirality order parameter with Ising-like symmetry in addition to the continuous degeneracy associated with global spin rotation . Chirality arises from quenched-in vortices due to frustration effects in each elementary cell of the lattice which contains an odd number of antiferromagnetic bonds. The interplay between spin and chiral variables has always received considerable attention because of the possibility of separate spin-glass and chiral-glass ordering due to freezing of spins and chiral variables, respectively . A possible separation of spin and chiral variables also arises in the frustrated XY model with weak disorder . While in three dimensions the existence of a finite temperature transition is under current investigation , in two dimensions there is a consensus that the transition only occurs at zero temperature. Associated with the zero temperature transition there is a correlation length which increases with decreasing temperatures as $`\xi \sim T^{-\nu }`$. However, the possibility of different spin and chiral glass short range correlation lengths $`\xi _s`$ and $`\xi _c`$ with different critical exponents $`\nu _s`$ and $`\nu _c`$ has not been resolved satisfactorily.
First evidence that spin and chiral glass correlation length exponents are different in two dimensions was reported by Kawamura and Tanemura from domain wall calculations. Various estimates of these exponents give approximate values of $`\nu _s\approx 1`$ and $`\nu _c\approx 2`$ but the error bars are usually quite large and a single exponent scenario may not be ruled out, which would be consistent with analytical work for a particular type of disorder distribution and more recent numerical work on domain wall scaling behavior at zero temperature. These calculations are usually performed in a representation of the XY spin glass model in terms of the orientational angle of the two-component XY spins. In this representation, spin glass order can be directly identified as long range order in appropriate phase correlations while the chiral variables are built from nearest neighbor phase correlations. In numerical simulations, the dynamics of the chiral variables are then determined by the phase variables and equilibration problems may prevent an adequate study of the vortex correlations in the system.
An alternative approach, which allows one to study the vortex dynamics directly, can be obtained from the Coulomb-gas representation . Recently, Bokil and Young used Monte Carlo simulations in the vortex representation to obtain an estimate of the chiral glass correlation length exponent and found $`\nu _c=1.8\pm 0.3`$. This agrees with previous estimates within the error bar. It also supports the scenario in which spin and chiral glass variables order with different critical exponents if one accepts earlier estimates of the spin glass exponent $`\nu _s\approx 1`$, obtained in the phase representation. However, a determination of the spin glass correlation length exponent from simulations in the vortex representation should also be of interest, as it is not completely clear how such a length scale shows up in the dynamical behavior of vortices. In particular, since the XY spin glass model is currently being used as a model for granular high-$`T_c`$ superconductors containing ’$`\pi `$ ’ junctions , which leads to quenched-in vortices even in the absence of external magnetic field, a natural question arises as to which correlation length, $`\xi _s`$ or $`\xi _c`$, is actually probed by transport measurements. In the measurements, the response of the vortices to an applied force can be observed as the voltage response to an applied driving current which acts as a Lorentz force on the vortices . The vortex response, or resistive behavior, is therefore determined by vortex mobility and the current-voltage scaling is expected to be controlled by the relevant divergent length scale, which could be either $`\xi _s`$ or $`\xi _c`$.
The question of the relevant correlation length for vortex response is also of interest for the three dimensional XY spin glass. Recent simulations of the vortex dynamics in three dimensions showed evidence of a resistive phase transition at finite temperatures which was attributed to glass ordering of chiralities while the spins remain disordered. This would be consistent with the scenario of a finite temperature chiral glass transition in the absence of a spin glass transition, which has been proposed previously from calculations in the phase representation of the XY spin glass model. This interpretation, however, is only justified if the relevant length scale for vortex dynamics is determined by the chiral glass correlation length $`\xi _c`$. On the other hand, since it is well known that vortex motion leads to phase incoherence, one expects that vortex dynamics should probe the spin glass correlation length $`\xi _s`$ and therefore the resistive transition should correspond instead to a spin glass transition at finite temperatures. In fact, this is supported by more recent domain-wall calculations suggesting a spin glass transition at finite temperatures in three dimensions .
In the absence of a precise agreement among the various studies of the XY spin glass model, as mentioned above, and in view of its relevance for vortex dynamics, the additional numerical results presented below may help to settle some issues.
In this work, we study the vortex critical dynamics of the two dimensional XY spin glass by Monte Carlo methods in the Coulomb-gas representation . A scaling analysis of the nonlinear response is used to calculate the correlation length exponent $`\nu `$ of the zero-temperature glass transition. The estimate, $`\nu =1.3(2)`$, is in agreement with a recent estimate in the phase representation using the same analysis and indicates that the relevant length scale for vortex motion is set by the spin-glass correlation length $`\xi _s`$ and that spin and chiralities may order with different correlation length exponents.
We consider the XY spin glass on a square lattice defined by the Hamiltonian
$$H=-\underset{<ij>}{\sum }J_{ij}s_is_j=-J\underset{<ij>}{\sum }\mathrm{cos}(\theta _i-\theta _j-A_{ij})$$
(1)
where $`\theta _i`$ is the phase of a two-component classical spin of unit length, $`s_i=(\mathrm{cos}\theta _i,\mathrm{sin}\theta _i)`$, $`J>0`$ is a coupling constant and $`A_{ij}`$ has a binary distribution, $`0`$ or $`\pi `$, with equal probability, corresponding to a coupling constant $`J_{ij}=J`$ or $`-J`$, respectively, between the spins $`s_i`$ and $`s_j`$. The sum is over all nearest-neighbor pairs. This Hamiltonian also describes an array of Josephson junctions where there is a phase shift of $`\pi `$ across a fraction of the junctions, as in models of d-wave ceramic superconductors .
To study the vortex dynamics it is convenient to rewrite the above Hamiltonian in the Coulomb-gas representation
$$H_{cg}=2\pi ^2J\underset{r,r^{}}{\sum }(n_r-f_r)G_{rr^{}}^{}(n_r^{}-f_r^{})$$
(2)
which can be obtained following a standard procedure in which Eq. (1) is replaced by a periodic Gaussian model separating spin-wave and vortex variables. The Coulomb interaction is given by $`G_{rr^{}}^{}=G(r-r^{})-G(0)`$, where $`G`$ is the lattice Green’s function.
$$G(r)=\frac{1}{L^2}\underset{k}{\sum }\frac{\mathrm{exp}(ikr)}{4-2\mathrm{cos}k_x-2\mathrm{cos}k_y}$$
(3)
and $`L`$ is the system size. $`G^{}(r)`$ diverges as $`-\mathrm{log}|r|/2\pi `$ at large separations $`r`$. The vortices are represented by integer charges $`n_r`$ at the sites $`r`$ of the dual lattice and the frustration effects of $`A_{ij}`$ by quenched random charges $`f_r`$ given by the directed sum of the $`A_{ij}`$ around the plaquette, $`f_r=\underset{plaq.}{\sum }A_{ij}/2\pi `$. The charges are constrained by the neutrality condition $`\underset{r}{\sum }(n_r-f_r)=0`$. For the XY spin glass, the charges $`f_r`$ have a correlated random distribution of integer and half-integer values. Other random distributions can represent different models. A uniform distribution describes the gauge glass model where $`A_{ij}`$ in Eq. (1) is a continuous variable in the range $`[0,2\pi ]`$ and an uncorrelated continuous distribution can describe arrays of superconducting grains with random flux or arrays of mesoscopic metallic grains with random offset charges .
We study the nonequilibrium response of the vortices in the XY spin glass by Monte Carlo simulations of the Coulomb gas under an applied electric field . An electric field $`E`$ represents an applied force acting on the vortices and gives an additional contribution to the energy in Eq. (2) of the form $`\underset{r}{\sum }En_rx_r`$ for $`E`$ in the $`x`$ direction. A finite $`E`$ sets an additional length scale in the problem since thermal fluctuations alone, of typical energy $`kT`$, lead to a characteristic length $`l\sim kT/E`$ over which single charge motion is possible. Thus, increasing $`E`$ will probe smaller length scales. Crossover effects are then expected when $`l`$ is of the order of the relevant correlation length for independent charge motion. As vortex motion leads to phase incoherence, we thus expect that the scaling behavior of the nonlinear response will probe the spin glass correlation length $`\xi _s`$ of the original model and allow an estimate of the thermal critical exponent $`\nu _s`$. This dynamical approach complements previous equilibrium calculations in the vortex representation of the XY spin glass where only the chiral glass correlation length was studied .
In the dynamical simulations, the Monte Carlo time is identified as the real time $`t`$ and we take the unit of time $`dt=1`$ corresponding to a complete Monte Carlo pass through the lattice. A Monte Carlo step consists of adding a dipole of unit charges and unit length to a nearest neighbor charge pair $`(n_i,n_j)`$, using the Metropolis algorithm. Choosing a nearest-neighbor pair $`i,j`$ at random, the step consists in changing $`n_i\to n_i-1`$ and $`n_j\to n_j+1`$, corresponding to the motion of a charge by a unit length from $`i`$ to $`j`$. If the change in energy is $`\mathrm{\Delta }U`$, the move is accepted with probability min$`\{1,\mathrm{exp}(-\mathrm{\Delta }U/kT)\}`$. The external electric field $`E`$ biases the added dipole, leading to a current $`I`$ as the net flow of charges in the direction of the electric field if the charges are mobile. The current $`I`$ is calculated as
$$I(t)=\frac{1}{L}\underset{i}{\sum }\mathrm{\Delta }Q_i(t)$$
(4)
after each Monte Carlo pass through the lattice, where $`L`$ is the lattice size and $`\mathrm{\Delta }Q_i(t)=1`$ if a charge at site $`i`$ moves one lattice spacing in the direction of the field $`E`$ at time $`t`$, $`\mathrm{\Delta }Q_i(t)=-1`$ if it moves in the opposite direction and $`\mathrm{\Delta }Q_i(t)=0`$ otherwise. Periodic boundary conditions are used. Most calculations were performed for $`L=32`$ and compared to a smaller system of $`L=16`$ but size dependence was not significant in the temperature range studied. The current density $`J`$ is defined as $`J=I/L`$. The linear response is given by the linear conductance $`G_L=lim_{E\to 0}J/E`$ which can be obtained from the fluctuation-dissipation relation as
$$G_L=\frac{1}{2kT}\int dt<I(0)I(t)>$$
(5)
without imposing the external field $`E`$. In the calculations, the integral is replaced by a sum over successive Monte Carlo sweeps through the lattice with the unit of time $`dt=1`$. We use typically $`4\times 10^4`$ Monte Carlo steps to compute averages and $`20`$ different realizations of disorder.
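A schematic Python version of the dynamics and of the conductance estimator just described (a sketch under the stated rules, not the code used for this work: the energy change $`\mathrm{\Delta }U`$, which involves the long-range interaction $`G^{}`$ and the field term, is delegated to a user-supplied function, and $`k_B=1`$):

```python
import numpy as np

def mc_sweep(n, T, delta_U, rng):
    """One pass of dipole (Metropolis) updates on an L x L charge lattice n;
    returns the current I(t) of Eq. (4), the net charge flow along x per L.
    delta_U(n, i, j) must return the energy change of n_i -> n_i - 1,
    n_j -> n_j + 1, including Coulomb and electric-field contributions."""
    L = n.shape[0]
    flow = 0
    for _ in range(L * L):
        i = (rng.integers(L), rng.integers(L))
        axis, step = rng.integers(2), rng.choice((-1, 1))
        j = list(i); j[axis] = (j[axis] + step) % L; j = tuple(j)
        dU = delta_U(n, i, j)
        if dU <= 0 or rng.random() < np.exp(-dU / T):   # Metropolis rule
            n[i] -= 1; n[j] += 1
            if axis == 0:                               # x = field direction
                flow += step
    return flow / L

def linear_conductance(I, T, t_max):
    """Estimate G_L from the recorded current series I (one entry per sweep,
    dt = 1) via Eq. (5), summing the autocorrelation symmetrically in t."""
    I = np.asarray(I, float)
    N = len(I)
    acf = np.array([np.mean(I[: N - t] * I[t:]) for t in range(t_max + 1)])
    return (acf[0] + 2.0 * acf[1:].sum()) / (2.0 * T)
```

Disorder averaging then repeats this procedure over independent realizations of the quenched charges $`f_r`$, as described in the text.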
To analyze the numerical results we need a scaling theory of the nonlinear response near a second-order phase transition. A detailed scaling theory has been described in the context of the current-voltage characteristics of vortex-glass models but it can be directly applied to the present case. Since the glass transition occurs at $`T=0`$ with a power-law divergent correlation length $`\xi \sim T^{-\nu }`$ and the external field introduces an additional length scale $`l\sim kT/E`$, the dimensionless ratio $`(J/E)/G_L`$ can be cast into a simple scaling form in terms of the dimensionless argument $`\xi /l`$,
$$\frac{J/E}{G_L}=F(E/T^{1+\nu })$$
(6)
where $`F`$ is a scaling function with $`F(0)=1`$. This scaling form indicates that a crossover from linear behavior, when $`F(x)\approx 1`$, to nonlinear behavior, when $`F(x)>>1`$, is expected to occur when $`x\sim 1`$, which leads to a characteristic field $`E_c\sim T^{1+\nu }`$ at which nonlinear behavior sets in.
The nonlinear response $`J/E`$ and an Arrhenius plot of the linear conductance $`G_L`$ are shown in Fig. 1. The data show the expected behavior for a $`T=0`$ transition. The ratio $`J/E`$ in Fig. 1(a) tends to a finite value for small $`E`$, corresponding to the linear conductance $`G_L`$ in Fig. 1(b) with an activated behavior. This activated behavior is consistent with a zero temperature transition and a finite correlation length at nonzero temperatures, which leads to a finite energy barrier $`U`$ for vortex motion. In general, an energy barrier exponent $`\psi `$ can also be defined from $`U\sim \xi ^\psi `$ for a temperature dependent energy barrier. The pure Arrhenius activated behavior in Fig. 1(b) is consistent with an exponent $`\psi \approx 0`$. As can be seen from Fig. 1(a), there is a smooth crossover from linear behavior, when $`J/E`$ is roughly a constant, to nonlinear behavior for increasing $`E`$ at each temperature, which appears at smaller $`E`$ for decreasing temperatures, in agreement with the expected crossover behavior at a characteristic field $`E_c\sim T^{1+\nu }`$ .
We now verify the scaling hypothesis of Eq. (6) and obtain a numerical estimate of the thermal correlation length exponent $`\nu `$ using two different methods. Fig. 2(a) shows the temperature dependence of $`E_c`$, defined as the value of $`E`$ at which $`(J/E)/G_L`$ deviates from linear behavior, reaching a fixed value of $`2`$. From the expected power-law behavior for the crossover field $`E_c\sim T^{1+\nu }`$ we obtain a direct estimate of $`\nu =1.4(2)`$ in a log-log plot. From a scaling plot of the nonlinear response according to Eq. (6), $`\nu `$ can also be obtained by adjusting its value so that the best data collapse is obtained, as shown in Fig. 2(b). The data collapse supports the scaling behavior and provides an independent estimate of $`\nu =1.3`$. From the two independent estimates we finally obtain $`\nu =1.35\pm 0.2`$.
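The collapse estimate can be sketched as a one-parameter search; the figure of merit below (binned variance of the log-curves) is our own illustrative choice, not necessarily the criterion used for Fig. 2(b):

```python
import numpy as np

def collapse_quality(nu, datasets):
    """datasets = [(T, E_array, ratio_array), ...] with ratio = (J/E)/G_L;
    a smaller return value means a better collapse of Eq. (6) for this nu."""
    xs, ys = [], []
    for T, E, ratio in datasets:
        xs.append(np.log(np.asarray(E) / T**(1.0 + nu)))
        ys.append(np.log(np.asarray(ratio)))
    x, y = np.concatenate(xs), np.concatenate(ys)
    edges = np.linspace(x.min(), x.max(), 20)
    idx = np.digitize(x, edges)
    return np.mean([y[idx == k].var()
                    for k in np.unique(idx) if (idx == k).sum() > 1])

# nu_best = min(np.arange(1.0, 1.8, 0.02),
#               key=lambda v: collapse_quality(v, datasets))
```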
Our estimate of $`\nu =1.35\pm 0.2`$ from the scaling analysis of the vortex response is consistent with previous estimates of the spin glass correlation length exponent $`\nu _s`$ obtained in the phase representation of the XY spin glass . These calculations give numerical estimates with comparable uncertainties ranging from $`\nu _s=1`$ to $`1.2`$. It also agrees with a recent calculation in the phase representation of Eq. (1), $`\nu =1.1\pm 0.2`$, using the same scaling analysis . This suggests that the relevant length scale for vortex dynamics is set by the spin glass correlation length which determines short range phase coherence. Since the chiral glass correlation length exponent has been estimated to be significantly larger, in the range $`\nu _c=1.8`$ to $`2.6`$, it also supports the scenario in which the phase and chiral variables in the XY spin glass are decoupled on large length scales and order with different correlation length exponents. However, as the error bars of these estimates are quite large, a single critical exponent may not be completely ruled out on purely numerical grounds and further work will be necessary to completely settle this issue.
It should be noted that the decoupled scenario for spin and chiral variables near the same transition temperature, suggested by our numerical results, does not contradict general arguments of renormalization-group theory, which allows for the possibility of nontrivially decoupled fixed points. In fact, since the model has continuous and Ising-like symmetries, one would expect that the effective Hamiltonian describing the critical behavior could be written in terms of a disordered ferromagnetic XY-spin model, with a zero-temperature transition, coupled to an Ising spin glass model, representing phase coherence and chiral degrees of freedom, respectively. Ferromagnetic XY spin models with a zero temperature transition do exist as, for example, in diluted XY models at the percolation threshold . Although the exact form of the effective XY and Ising Hamiltonians and the coupling term are not known, the continuous and discrete symmetries of the model are consistent with an energy density coupling of the form $`\underset{r}{\sum }E_s(r)E_c(r)`$, where $`E_s`$ and $`E_c`$ are the local energy densities for the XY spins and chirality, respectively. Such a coupling term is known to occur in the effective Hamiltonian of frustrated XY models with weak disorder . For a stable decoupled fixed point, the coupling term should be an irrelevant perturbation, corresponding to an eigenvalue $`\lambda =2-x<0`$ evaluated at the unperturbed fixed point, where $`2x`$ is the correlation function exponent. Using the scaling relation $`x=2-1/\nu `$ for the energy density correlations, and the proposed numerical values for the correlation length exponents $`\nu _s=1.3`$ and $`\nu _c=2`$, we find indeed that $`\lambda =2-x_s-x_c<0`$, as required for a stable decoupled fixed point. If the transition at $`T=0`$ corresponds to a decoupled fixed point then phase and chiral variables can order with different correlation length exponents. These arguments by no means show that a decoupled transition is actually realized in the XY spin glass, but they make plausible the assumption of distinct divergent correlation lengths at the same transition temperature used in our analysis of the numerical data.
Finally, our calculation for the two-dimensional XY spin glass, which indicates that vortex dynamics probe mainly the spin glass correlation length rather than the chiral glass correlation length, also suggests that the finite temperature resistive transition observed recently by Wengel and Young in numerical simulations in the vortex representation of the three-dimensional XY spin glass model should be attributed to spin glass ordering. This is in fact consistent with a more recent calculation indicating that the lower critical dimension for spin-glass ordering may be just above $`3`$.
This work was supported by Fundação de Amparo à Pesquisa do Estado de São Paulo, FAPESP (Proc. 99/02532-0). |
no-problem/9910/astro-ph9910412.html | ar5iv | text | # ICM PHYSICS AT HIGH REDSHIFT
## 1 Introduction: What is the ICM?
In 1970, the UHURU satellite revealed that clusters are the 2nd most powerful X-ray sources in the sky, after the QSOs (Forman et al 1972). Their emission was soon recognized to be of thermal origin, produced by a very diffuse gas at a temperature of several tens of million degrees trapped in the cluster gravitational potential. During the last two decades, due to the X-ray observatories EINSTEIN (1978), GINGA (1987), ROSAT (1990) and ASCA (1993), our knowledge of the properties of this gas, commonly called “Intra-Cluster Medium” (ICM), has made tremendous progress. The relative weights of the three cluster components are now quite confidently estimated:
Galaxies - what is seen in the optical - accounting for $`<5\%`$ of the cluster total mass ($`M_{tot}`$)

Gas - what is seen in the X-ray - $`\sim 15\%M_{tot}`$, with densities of the order of 1 atom/litre, temperatures of $`5\times 10^7`$$`10^8`$ K and abundances $`\sim 0.3`$ solar

Dark Matter - what is not seen - $`\sim 80\%M_{tot}`$
Quoted values are for a “medium” cluster having a mass of a few $`10^{14}M_{}`$ ($`H_o`$ = 50 km/s/Mpc); except for very small (compact) groups, the observed variations in abundances and gas mass fractions are small. From this, the cluster gravitational potential is defined by the distribution of the Dark Matter plus that of the X-ray gas. Amazingly, it turns out that the galaxies, whose grouping originally led to the term “cluster of galaxies”, have a negligible contribution in these gravitationally bound systems. From the theoretical point of view, clusters are described by only one parameter: they are (virialized ?) entities having masses of the order of $`10^{14}10^{15}M_{}`$; the mass, however, is not a direct observable but is easily deduced from the X-ray temperature (Henry 1997 and references therein).
For historical observational reasons, the ICM is generally first considered through its X-ray properties. We thus start by summarizing the main results obtained from X-ray data and the basic properties of the ICM. Section 3 presents the information provided by radio wavelengths: we shall describe how it complements the X-ray data in our understanding of ICM physics. Then, we briefly address a few more theoretical issues, such as the state of the ICM, which may have a decisive impact on the data interpretation. A relevant question is also the role of galaxies: are they only test particles in the cluster gravitational potential? In Section 5, we present a few simple low-$`z`$ correlations which appear to constrain galaxy activity and, correlatively, the state of the ICM at high redshift. The conclusion summarizes the current status and presents prospects for further X-ray studies of the ICM at high redshift and of very low density structures such as small galaxy groups or filaments.
## 2 What X-ray PHYSICS today?
In this Section, we recall the bases of cluster X-ray astronomy, as most of the audience at the Conference are of “optical” origin. Let’s first stress that X-ray telescopes count individual photons and, so far, have measured their energy with an accuracy of the order of 50% (ROSAT) to 20% (ASCA), with angular resolutions ranging from 3 arcmin (ASCA) to 25-5 arcsec (ROSAT).
### 2.1 Basic physics with $`10^3`$ photons
A few thousand X-ray photons received from a galaxy cluster enable the determination of the global properties of the ICM: surface brightness profile and some morphological information (2D), electron density profile (3D, $`n_{gas}(r)`$), average metallicity and temperature. It is also possible to estimate cluster masses under the assumptions of hydrostatic equilibrium, spherical symmetry and an ideal gas:
$$\frac{dP_{gas}}{dr}=-n_{gas}\frac{d\mathrm{\Phi }_{grav}}{dr}$$

$$\mu m_HP_{gas}=n_{gas}kT$$

$$M_{tot}(<r)=-\frac{kTr}{\mu m_HG}\left(\frac{d\mathrm{log}n_{gas}}{d\mathrm{log}r}+\frac{d\mathrm{log}T}{d\mathrm{log}r}\right)$$
The enclosed mass within a given radius is readily obtained from the last equation, assuming a constant average temperature. Subsequently, two puzzling results have emerged from the observations: (1) as mentioned above, a gas fraction of the order of 15-20% is usually found in rich clusters (Arnaud & Evrard 1999 and references therein). If clusters are representative of the rest of the universe, this is in severe conflict with the results of primordial nucleosynthesis giving $`0.04<\mathrm{\Omega }_b<0.06`$ … for $`\mathrm{\Omega }_o`$ = 1 (the so-called “baryon catastrophe”, White et al 1993); (2) there appears to be a factor 2 discrepancy between X-ray and lensing mass estimates for some clusters, the X-ray mass being too small unless temperatures were twice those actually measured; however the agreement is good for apparently relaxed clusters (Allen 1998).
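To make the last equation concrete, here is a minimal sketch (cgs units; the isothermal $`\beta `$-model density profile is an illustrative assumption, not something implied by the equations above):

```python
import numpy as np

G, k_B, m_H = 6.67e-8, 1.38e-16, 1.67e-24   # cgs
mu = 0.6                                    # mean molecular weight (assumed)

def hydrostatic_mass(r, T, r_c, beta):
    """M(<r) in grams for an isothermal beta-model,
    n_gas(r) propto (1 + (r/r_c)**2)**(-3*beta/2); r, r_c in cm, T in K."""
    x2 = (r / r_c) ** 2
    dlogn = -3.0 * beta * x2 / (1.0 + x2)   # dlog(n_gas)/dlog(r)
    return -(k_B * T * r) / (mu * m_H * G) * dlogn

# e.g. hydrostatic_mass(3.1e24, 5e7, 7.7e23, 0.7) / 2e33 ~ 3e14 solar masses,
# i.e. a "medium" cluster within 1 Mpc, consistent with the numbers of Sec. 1.
```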
### 2.2 Advanced physics with $`10^4`$ photons
More than 10 000 collected photons enable the basic analysis described above to be applied to cluster sub-regions such as sectors or concentric rings. There are, however, critical observational limitations to this procedure, namely: it is far from trivial to recover the “true” 3D temperature and abundance profiles from the 2D projected X-ray data, having, moreover, an instrumental PSF which depends both on the photon off-axis position and energy. With the aid of sophisticated deconvolution procedures, ROSAT and ASCA data have provided interesting information about the radial variation of temperature (such as the existence of a possible universal temperature profile) and abundances in a series of nearby clusters (Markevich et al 1998).
Some, apparently relaxed, clusters show a strong X-ray peak at their very center associated with a cooler temperature, a phenomenon commonly known as a “cooling flow” (CF) (see Fabian 1994 for a review): once a cluster-size object has collapsed, the hot gas tends to reach hydrostatic equilibrium. But it radiates energy and cools (the cooling coefficient, $`\mathrm{\Gamma }_X`$, or X-ray emissivity, is proportional to the square of the density). The more it radiates, the more it contracts and the more its density increases. This has the immediate effect of most strongly enhancing the cluster central density, and thus its X-ray brightness, and, ultimately, would lead to the so-called “cooling catastrophe”. Because of the pressure gradient which develops, the gas is assumed to flow inwards, in order to stay isobaric. The fate of the accumulating gas at the center is however an enigma, as the star formation rates usually found to be associated with the cluster dominant galaxy, through optical emission lines and blue nebulae, are 10 to 100 times lower than inferred from the X-ray mass flow rate estimates. ASCA spectral observations of CF clusters need at least two temperatures to be correctly fitted and reveal the presence of a central Hydrogen absorption (Allen et al 1996), hence suggesting that the ICM is actually multi-phase and could cool to very low temperatures. We shall come back to this question in Sec. 4.
### 2.3 What is HIGH REDSHIFT for ’99 cluster X-ray astronomy?
To summarize the Introduction: presently - i.e. with ASCA and ROSAT - basic physics is achievable out to $`z\sim 0.5`$ whereas advanced physics is performed out to $`z\sim 0.1`$. Note, also, that it is currently possible to detect clusters out to $`z\sim 1`$ (with a few tens of photons).
The validity of the hydrostatic equilibrium assumption is certainly questionable but, with the present data, hard to test. The comparison with lensing mass estimates - free of this hypothesis - suggests that it may not always be verified. We shall see in the next sections that the physics of the ICM is actually much more complex than suggested from the X-ray viewpoint alone.
## 3 Radio observations and magnetic field
### 3.1 Individual synchrotron radio sources
Radio galaxies produce relativistic particles that interact with the ICM. Indeed, head-tail (HT) and wide-angle-tail (WAT) galaxies are exclusively found in cluster environments: while the straight shape of the HT galaxies (usually located at the cluster periphery $`\Rightarrow `$ large peculiar velocity) is explained by dynamical friction against the ICM, the origin of the bending of the WATs (always the cluster dominant galaxy $`\Rightarrow `$ almost at rest in the cluster frame) is still unclear and one has to invoke buoyancy forces or winds in the ICM.
The combined examination of high resolution X-ray and radio maps reveals how complex the mutual interactions can be. For instance, the radio morphology of the central galaxy of the Perseus cooling flow cluster, NGC 1275, shows a clear anticorrelation with the surrounding X-ray gas emissivity, hence indicating that the relativistic particles repel the hot gas (Böhringer et al 1993). To the contrary, the central galaxy of Virgo, M87, exhibits a similarly obvious correlation; here the excess of X-ray thermal emission could be produced by a cool gas component due to the combined action of the radio jets and the magnetic field (Böhringer et al 1995). Note that recent 90 cm observations of M87 show a very extended and bubble-like structure, suggesting the presence of an “outflow” (Owen et al 1999).
To quantify the interactions of the radio lobes with the ICM, one way is to test the equipartition hypothesis, which is satisfied if the energy density of the relativistic particles equals that of the magnetic field or, in other words: when the minimum pressure deduced from the radio data equals the X-ray pressure. Although the estimate of the minimum radio pressure is subject to a series of hypotheses - certainly more ad hoc than for the X-ray pressure derivation - there appears to be a large variety of situations (Liang et al 1997 and references therein), as suggested by the above morphological considerations.
### 3.2 Diffuse cluster sources
In addition to cluster radio galaxies, extended diffuse radio emission is observed on cluster scales. These so-called halo or relic sources are rare events; only some 20 cases are known. They all have a steep radio spectrum and are found in merging or post-merging clusters (for a review see Feretti et al 1995). The nearest example is the Coma cluster (Feretti 1999); one of the most remarkable and well studied cases is found in A2256 (Röttgering et al 1994; for the X-ray temperature map: Briel & Henry 1994), the most distant one being in A1300 ($`z=0.3`$, Reid et al 1999). A possible explanation for the origin of halos would be the acceleration of cosmic rays, contained in the ICM, at the merging front. Indeed, only a few hundredths of the shock power would be enough to produce the energy of the electron population of the radio halo (Böhringer et al 1992).
The comparison of the X-ray and radio maps of A85 revealed an X-ray bump coinciding with an extended Very Steep Spectrum Radio Source not associated with a galaxy. Here, the diffuse X-ray component is likely not to be thermal, but produced by the inverse Compton effect: the primeval 3K CMB photons, scattering off the relativistic electrons of the diffuse radio source, can produce the X-ray emission at the observed level (Bagchi et al 1998).
### 3.3 Cooling flows
HST/WFPC2 observations of the CF cluster A1795 revealed a chain of blue knots coinciding with the radio lobes of the cD galaxy (Pinkey et al 1996). A simple interpretation suggests that star formation is triggered by the radio activity in clumps where the multi-phase ICM is cooler and denser. As mentioned earlier, cooling flows are thought to be considerable sources of cold gas, which can be detected in the optical and X-ray either in emission or in absorption; moreover, the dense inflowing gas may also feed an AGN. Assuming that the production of relativistic electrons by the central engine is related to the gas accretion rate, one can imagine a feedback mechanism inhibiting the accretion, thus leading to a cyclic regime with phases of accretion where the radio activity is turned off and phases of production of particles and no accretion; this would explain the observed fraction of CF clusters possessing a central radio source ($`\sim 70\%`$) and why so little star formation is usually observed in the optical (Tucker & David 1997).
### 3.4 $`{}^{57}Fe`$ Hyperfine transition
Sunyaev & Churazov (1984) have predicted the existence of a hyperfine transition for the lithium-like ion $`{}^{57}Fe^{+23}`$ at $`\sim `$ 0.3 cm. Its spontaneous emission coefficient is significantly higher than that of HI and would thus partly compensate for the low abundance of the $`{}^{57}Fe`$ isotope. The detection of this line would be most interesting for our knowledge of the ICM: direct insight into the dynamics and turbulent motions, an alternative estimate of the metal radial abundance, and an accurate and straightforward measurement of the cluster redshift. However, the predicted intensity of the line is well below 1 mK for a strong CF cluster like A85 and, so far, the line has not been detected (Liang et al 1997).
### 3.5 Magnetic field
Magnetic fields are estimated via Faraday rotation, mostly in very nearby clusters to ensure the presence of enough background radio sources. The strength of the field is of the order of a few $`\mu `$G with a scale length of about 10 kpc (Kim et al 1991). Strong rotation measures are found in CF clusters. Faraday rotation and gamma-ray observations have led Ensslin et al (1997) to show that the cluster radio galaxies could sustain the ICM with injection of cosmic rays and magnetic field. Non-thermal pressure could then affect the simplistic cluster mass calculation in X-ray, requiring a higher total mass, and thus reduce the discrepancy between X-ray and lensing estimates; it would also tune down the “baryon catastrophe” (cf Sec. 2.1).
### 3.6 The Sunyaev Zel’dovich effect
The Sunyaev Zel’dovich (S-Z) effect is due to Inverse Compton scattering of the CMB photons on the ICM electrons and, thus, produces a distortion of the CMB blackbody spectrum, when observed through a cluster. The amplitude of the thermal effect measured as a CMB temperature variation, is:
$$y=\int \sigma _Tn_e\frac{kT_e}{m_ec^2}dl$$
where $`n_e,T_e,l`$ are, respectively, the ICM electron density and temperature, and the pathlength along the line of sight through the cluster, and $`\sigma _T`$ is the Thomson cross-section. The $`y`$ parameter is thus representative of the cluster electron pressure. The effect is independent of the cluster distance and, despite its weakness ($`\mathrm{\Delta }T_{MBG}/T_{MBG}\sim 10^{-4}`$), it is now possible to map clusters with radio interferometers (Carlstrom 1999). There is a recent review by Birkinshaw (1999) on the topic, so we shall just mention a few points here, in direct connection with ICM physics.
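As a sanity check of the quoted amplitude, a sketch for a uniform isothermal slab (a crude stand-in for the line-of-sight integral above; the numbers in the comment are illustrative assumptions):

```python
sigma_T = 6.65e-25                 # Thomson cross-section, cm^2
k_B, m_e_c2 = 1.38e-16, 8.19e-7    # erg/K and erg (= 511 keV)

def y_slab(n_e, T_e, path_cm):
    """Comptonization parameter for constant n_e (cm^-3), T_e (K) over path_cm."""
    return sigma_T * n_e * (k_B * T_e / m_e_c2) * path_cm

# n_e = 3e-3 cm^-3, T_e = 8e7 K, path = 1 Mpc = 3.1e24 cm  ->  y ~ 8e-5;
# the Rayleigh-Jeans decrement dT/T = -2y is then ~ -1.7e-4, consistent
# with the ~1e-4 weakness quoted above.
```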
The S-Z decrement (which is independent of distance), combined with the knowledge of the X-ray temperature and electron density radial profiles (distance dependent) was proposed as a straightforward way to measure $`H_o`$. Actually, there are numerous limiting factors in the use of this method, some of them being of technical nature, others directly related to the ICM complexity.
-1- The S-Z effect differs significantly from the X-ray data in the derived properties of the cluster atmosphere: $`SZ\propto n_eT_e`$ while $`\mathrm{\Gamma }_X\propto n_e^2T_e^{1/2}`$. They are thus likely to have different angular structures, which should provide information on the local variations of temperature and density in the cluster gas. Improvements in X-ray technology, which provides spatially resolved spectroscopy, have largely superseded S-Z information. However, the S-Z signal drops off less rapidly than the X-ray surface brightness ($`S_X`$) and thus could still be an important probe of structure in the outer parts of clusters; but the current sensitivity of S-Z instruments is too low relative to the sensitivity of X-ray images and spectra for this to be useful in most cases.
-2- Where the cluster contains a radio source (particularly a radio halo) the S-Z effect is of particular interest since it provides a direct measurement of the electron pressure near that radio source and so can be used to test whether the dynamics of the radio emitting plasma are strongly affected by the external gas pressure.
-3- On the smallest scales (where ICM structures are unresolved by X-ray or radio telescopes) the structures are better described by a clumping of the gas and the S-Z effect and $`S_X`$ scale differently. For instance, if clumping is isobaric, with the pressure in clumps the same as outside, then the S-Z effect will show no variations in regions where the gas is strongly clumped while $`S_X`$ will increase as $`n_e^{3/2}`$. No useful results on the clumping of cluster gas have been reported in the literature to date.
-4- One direct use of the S-Z effect is as a probe of the gas mass enclosed within the telescope beam. The surface mass density in gas can be related to the $`y`$ parameter if $`T_e`$ is constant. This can be compared directly with the mass estimates derived from the study of gravitational shear in clusters.
## 4 Some theoretical considerations
Having presented the main observational properties of the ICM, we show here by two examples, how current results and views need to be sharpened by theoretical considerations.
### 4.1 The state of the ICM
The thermodynamical state of the ICM is of primary importance in the study of the formation and evolution of cosmic structures, especially at high redshift. During the collapse, the gas velocity becomes supersonic, and shock fronts form at about the virial radius and separate the infalling gas from the inner gas already at virial temperature. In the dense central region, cooling is likely to play a dominant role. In the outer regions, the density is rather low and allows an adiabatic treatment of the gas dynamics and, at the same time, non-equilibrium thermodynamics occurs in this hot and diffuse plasma. Takizawa (1998) and Chièze et al (1998) have shown that whereas the hydrostatic and thermodynamical equilibrium assumptions are valid in the central regions, they are both violated in the outer regions: a significant decoupling between electrons and ions occurs between $`r_{200}/2`$ and the shock front ($`r_{200}`$ is the radius where the density equals 200 times the critical density; the shock front is located roughly at $`2r_{200}`$). The maximum departure is located at $`r_{200}`$ and is about 20%. Therefore, the usual assumption of thermodynamical equilibrium between electrons and ions breaks down in this region. This could lead to a thermodynamical decoupling of 50%, which means that the observed electron temperature (measured in the X-ray) would underestimate the true dynamical temperature by a factor of 2. Therefore, the error in cluster mass estimates due to a departure from thermodynamical equilibrium could be as large as a factor of 2.
### 4.2 The multi-phase medium
In Sec. 2.2 we mentioned observational evidences for a “several-temperature” medium in cooling flow clusters. Actually, Nulsen as early as 1986 predicted the existence of a multi-phase medium on cluster scales. A collapsed object radiates energy, but the cooling process is highly unstable and leads to the formation of cool and dense blobs and filaments, evolving from an initial temperature fluctuation distribution. As cooling proceeds, a net mass flow from the hot phase to the cool phase develops, although some hot, diffuse gas always remains in the cluster. This remaining X-ray emitting gas inhibits the “cooling catastrophe” at the cluster center by increasing entropy and creating a core. After a few cooling times, a self-similar evolution may build up, allowing the description of a multi-phase cooling flow using very simple analytical models (Teyssier 1996). Cooling processes will be much more efficient (dominating) in groups. The remaining hot and diffuse gas has an X-ray luminosity significantly reduced compared to the equivalent “adiabatic” cluster or group. The X-ray mass fraction is also strongly influenced by this multi-phase evolution.
## 5 A glimpse at high-redshift conditions
A few basic correlations between cluster properties ($`L_X,T_X,\sigma _v`$…) have been investigated out to $`z\sim 0.4`$. There does not appear to be a conspicuous evolution, either in the correlations (e.g. Mushotzky & Scharf 1997) or in the cluster luminosity function alone (e.g. Rosati et al 1998, out to $`z\sim 0.8`$). However, these local correlations, described below, provide clues about the state of the ICM in the era of cluster formation.
The $`L_{Bol}`$$`T`$ relationship
A clear correlation between temperatures and X-ray luminosities exists for X-ray clusters. The scatter is large, but it is remarkable that the correlation extends from very luminous clusters down to groups, thus over the luminosity range $`5\times 10^{45}5\times 10^{42}`$ erg/s, following $`L_{Bol}\propto T_X^{3.4}`$ (David et al 1993). Arnaud & Evrard (1999) have shown that, restricting to non-CF clusters and to objects having well measured temperatures, the correlation is very tight, with a slope $`\sim 3`$. However, this average relationship is rather far from that expected for a simple gravitational collapse, namely: $`L_{Bol}\propto T_X^2`$. Non-gravitational “preheating” processes, such as shocks or feedback from supernova explosions at early stages, have been proposed in order to raise the temperature of the groups before they merge into clusters (Kaiser 1991). The energy contribution due to winds will be larger in cool clusters (groups) than in hot ones, thus producing the observed steepening of the $`L_{Bol}`$$`T`$ relationship in the local universe. More quantitatively, hydrodynamical simulations show that including feedback raises the entropy of the ICM, preventing it from collapsing to densities as high as those obtained in the pure infall model (Metzler & Evrard 1994). The effect is most pronounced in subclusters formed at high redshift, clusters with feedback being always less luminous; but, then, they experience a more rapid luminosity evolution. Also, the simulations suggest the presence of a radial iron abundance gradient as a consequence of ejection which takes place while galaxies have a gradient in number density with respect to the primordial gas (as the ICM is slightly hotter and therefore more extended than the galaxy distribution).
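The quoted slopes correspond to straight-line fits in log-log space; schematically (an illustration, not the fitting procedure of the cited papers):

```python
import numpy as np

def lbol_t_slope(T_keV, L_bol, L_err=None):
    """Slope of log10(L_bol) vs log10(T): ~3.4 for the observed relation,
    2 for pure gravitational scaling; optional weighting by 1/sigma(log L)."""
    x, y = np.log10(T_keV), np.log10(L_bol)
    w = None
    if L_err is not None:
        w = np.asarray(L_bol) * np.log(10) / np.asarray(L_err)
    slope, intercept = np.polyfit(x, y, 1, w=w)
    return slope, intercept
```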
The $`\sigma _v`$$`T`$ relationship
Assuming that galaxy orbits are isotropic, that the gas and galaxies occupy the same potential well and that gravity is the only source of energy, we can predict that the galaxy velocity dispersion ($`\sigma _v`$) and the temperature of the ICM should be correlated as $`\sigma _v\propto T^{0.5}`$. It is found that $`\sigma _v\propto T^{0.61}`$ or $`T^{0.76}`$ (Bird et al 1995). Simulations show that both energetic winds and galaxy velocity bias can result in the observed $`\sigma _v`$$`T`$ relation.
The $`R_{isophot}`$$`T`$ relationship
From a sample of ROSAT PSPC clusters, Mohr & Evrard (1997) have shown that there is a tight relation between some measure of the cluster X-ray isophotal size and its temperature. This indicates that the ICM structure outside the core regions is a well-behaved function of the temperature, with no evidence that CFs significantly affect the relationship, and suggests a tight correlation between the temperature and the virial mass. However, the slope of the $`R_{isophot}`$$`T`$ relationship is steeper than the slope inferred from numerical simulations. Again here, introducing galaxy feedback produces the kind of ICM structural changes required to steepen the correlation.
Statistical and evolutionary properties of cluster samples are potentially powerful tools to constrain cosmology (Blanchard, these proceedings). Conversely, the three above-mentioned local correlations (all involving the temperature parameter) indicate that statistically complete surveys extending to $`z\sim 0.51`$ would provide constraining power on the amount of feedback in clusters, although in a cosmologically dependent fashion, which still needs to be understood.
## 6 Conclusion
We have presented the main aspects of the current knowledge about the ICM and its physics, primarily from the X-ray point of view. Hydrostatic equilibrium seems a reasonable working hypothesis within the (inner) regions of “regular” clusters explored so far… at least if one is satisfied with an uncertainty of a factor of two, which may be considered excellent by astrophysical accuracy standards, or may point to some degree of over-simplification. Indeed, information provided at radio wavelengths complements and refines this view, suggesting that phenomena such as turbulence and magnetic fields may play a non-negligible role; Sunyaev-Zel’dovich effect studies are potentially a powerful complement to X-ray data.
In addition, recent cluster observations in the extreme UV (Sarazin & Lieu 1998) and in hard X-rays (Fusco-Femiano et al 1999) show emission in excess of what is expected from a purely thermal spectrum extrapolated from the X-ray band. The excesses would be due, respectively, to inverse Compton scattering by low-energy cosmic rays present in the ICM, and to thermal bremsstrahlung produced by an electron population with energies above 50 keV, accelerated by turbulence.
Despite their negligible mass, galaxies appear to play an important role in the physics of the ICM, especially at high redshift: (i) feedback from star-forming activity preheats the ICM via winds and supernova explosions prior to cluster formation; the effect of individual galaxies is stronger in groups than in clusters and remains conspicuous in the low-$`z`$ group population (Davis et al 1999); (ii) the metals found in the ICM originate in galaxies: thanks to ASCA’s good energy resolution, enabling the measurement of the oxygen abundance, it has been shown that type II supernovae are responsible for a large part of the enrichment of the ICM (Loewenstein & Mushotzky 1996); (iii) radio galaxies produce relativistic particles that interact strongly with the ICM; (iv) the role of the central (radio) galaxy appears to be crucial in CF clusters. To conclude, we emphasize that the physical state of the ICM is inextricably linked with galaxy formation and, hence, with star formation.
Investigating ICM X-ray physics at high redshift will require large collecting areas and good energy resolution. This is what is to be offered by XMM, the 2nd ESA Cornerstone mission. The basic physics mentioned in Sect. 2 will be accessible out to $`z\sim 1`$, while advanced temperature mapping will be achievable for the more nearby clusters out to the “virial radius”, where complicated phenomena are supposed to take place (Sec. 4). With moderate exposure times it will be possible to detect the entire cluster/group population out to $`z\sim 0.5`$ and to study in detail the effect of the feedback and cooling processes (Pierre et al 1999). Correspondingly, the very high spatial and spectral resolution of Chandra should enable the study of fine morphological ICM details in nearby objects, as well as of possible soft X-ray absorption lines (in QSO spectra) from warm/cold gas present in clusters or cosmic filaments (Cen & Ostriker 1999).
###### Acknowledgements.
It is a pleasure to thank Romain Teyssier for many interesting discussions and S. Madden for useful comments.
# Radiative decays of light vector mesons
## 1 Introduction
Radiative decays of $`\rho ,\omega `$ and $`\varphi `$ mesons were studied in the SND experiment at the VEPP-2M $`e^+e^-`$ collider . Approximately $`20\times 10^6`$ $`\varphi `$, $`2\times 10^6`$ $`\rho `$ and $`1\times 10^6`$ $`\omega `$ mesons were collected. The results for the $`\varphi `$-meson decays were obtained mainly from the part of the data collected during 1996, when approximately $`8\times 10^6`$ $`\varphi `$ mesons were produced, with a corresponding integrated luminosity of 4.3 pb<sup>-1</sup>.
The SND detector is well suited to studying the radiative decays of light vector mesons, $`e^+e^-\rightarrow \mathrm{V}\rightarrow \pi ^0\gamma ,\eta \gamma ,\eta ^{\prime }\gamma `$, where $`\mathrm{V}=\rho ,\omega ,\varphi `$. In particular, the large calorimeter solid angle, about 90% of $`4\pi `$, allows multi-photon final states to be selected with high efficiency.
## 2 Decays $`\varphi \rightarrow \eta \gamma ,\pi ^0\gamma `$
The decay $`\varphi \rightarrow \eta \gamma `$ was studied in the main final states of the $`\eta `$ meson: $`3\gamma `$, $`3\pi ^0\gamma `$, $`\pi ^+\pi ^-\pi ^0\gamma `$, which together cover about 94% of all $`\eta `$-meson decays. The $`3\gamma `$ final state also allows a measurement of the probability of the decay $`\varphi \rightarrow \pi ^0\gamma `$.
### 2.1 $`3\gamma `$ final state
In this final state two radiative decays of the $`\varphi `$ meson were studied, $`e^+e^-\rightarrow \eta \gamma \rightarrow \gamma \gamma \gamma `$ and $`e^+e^-\rightarrow \pi ^0\gamma \rightarrow \gamma \gamma \gamma `$, with the main background coming from non-resonant QED three-quanta annihilation $`e^+e^-\rightarrow \gamma \gamma \gamma (\mathrm{QED})`$.
The preliminary selection required the presence of three or four reconstructed photons; a cut on the total energy deposition in the calorimeter, $`0.7\cdot 2E_{beam}<E_{tot}<1.2\cdot 2E_{beam}`$; a cut on the total momentum of the photons, $`|\sum _i\stackrel{}{P}_i|<0.2E_{tot}/c`$; and a minimal photon energy of 50 MeV. To suppress spurious signals in the calorimeter, which appear mainly in the crystals closest to the beam, additional restrictions were imposed on the energies and angles of the reconstructed photons: the polar angle of photons with energies of 50–100 MeV was required to be in the range $`45^{\circ }<\theta <135^{\circ }`$, while for photons with energies above 100 MeV it was required to be in the range $`27^{\circ }<\theta <153^{\circ }`$.
To distinguish between the processes $`\eta \gamma \rightarrow 3\gamma `$, $`\pi ^0\gamma \rightarrow 3\gamma `$ and $`e^+e^-\rightarrow 3\gamma (\mathrm{QED})`$, a kinematic fit was used. About 18000 events were selected for the process $`\eta \gamma \rightarrow 3\gamma `$ and about 1700 events for the process $`\pi ^0\gamma \rightarrow 3\gamma `$, with corresponding efficiencies of 44% and 14%.
### 2.2 $`7\gamma `$ final state
In this final state the main background comes from the process $`\varphi \rightarrow K_SK_L\rightarrow 2\pi ^0+X`$. The selection criteria included the presence of 6–8 reconstructed photons and cuts on the total energy deposition and the momentum balance. The most energetic photon in the event is the recoil photon, with an energy of about 360 MeV; therefore, a restriction on the energy of the most energetic photon was imposed.
The selection efficiency for this final state was about 32%. Approximately 10000 events were selected, with a background lower than 1%.
### 2.3 $`\pi ^+\pi ^-\pi ^0\gamma `$ final state
The main background for this final state comes from the decay $`\varphi \rightarrow \pi ^+\pi ^-\pi ^0`$ with spurious hits in the calorimeter.
First, events with two charged particles and three or more photons were selected. Then cuts on the distances between the charged tracks and the beam axis, and on the space angle between the charged tracks (to reject events of the process $`\varphi \rightarrow K_SK_L\rightarrow \pi ^+\pi ^-+X`$), were applied. To suppress the background a kinematic fit was used.
The selection efficiency for this final state was about 18%. Approximately $`20\times 10^6`$ $`\varphi `$-meson decay events were processed.
### 2.4 Analysis
For the description of the cross section of the processes $`e^+e^-\rightarrow P\gamma `$, where $`P`$ is a pseudoscalar meson, the following expression was used:
$$\sigma (s)=\frac{F(s)}{s^{3/2}}\left|\sum _{V=\rho ,\omega ,\varphi }\sqrt{\sigma _{VP\gamma }\frac{m_V^3}{F(m_V^2)}}\frac{m_V\mathrm{\Gamma }_Ve^{\mathrm{i}\phi _V}}{m_V^2-s-\mathrm{i}\sqrt{s}\mathrm{\Gamma }_V(s)}\right|^2,$$
(1)
where $`\sigma _{VP\gamma }=12\pi B(V\rightarrow e^+e^-)B(V\rightarrow P\gamma )/m_V^2`$ is the cross section of the process $`e^+e^-\rightarrow V\rightarrow P\gamma `$ at the maximum of the vector resonance $`V`$, and $`F(s)=[(s-m_P^2)/(2\sqrt{s})]^3`$ is the phase space factor for the process $`e^+e^-\rightarrow P\gamma `$. The relative phases of the vector mesons were taken to be $`\phi _\rho =\phi _\omega =0`$ and $`\phi _\varphi =180^{\circ }`$ for the $`\eta \gamma `$ decay, and $`\phi _\varphi =(158\pm 11)^{\circ }`$ for the $`\pi ^0\gamma `$ decay.
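For illustration, formula (1) can be evaluated numerically as in the sketch below. The widths are taken as energy independent, $`\mathrm{\Gamma }_V(s)=\mathrm{\Gamma }_V`$, and the peak cross sections $`\sigma _{VP\gamma }`$ are left as placeholder values; both are simplifying assumptions of this sketch, not inputs of the actual fit.

```python
import numpy as np

# Coherent vector-meson sum of Eq. (1) for e+e- -> eta gamma.
# Masses and widths in GeV (PDG values); sigma (peak cross sections) and
# the relative phases phi follow the text, with sigma set to placeholders.
MESONS = {
    "rho":   dict(m=0.7755,  G=0.1494,  sigma=1.0, phi=0.0),
    "omega": dict(m=0.78265, G=0.00849, sigma=1.0, phi=0.0),
    "phi":   dict(m=1.01946, G=0.00425, sigma=1.0, phi=np.pi),
}
M_ETA = 0.5479  # GeV

def phase_space(s, m_p=M_ETA):
    """F(s) = [(s - m_P^2) / (2 sqrt(s))]^3."""
    return ((s - m_p**2) / (2.0 * np.sqrt(s))) ** 3

def sigma_pgamma(s):
    amp = 0j
    for v in MESONS.values():
        norm = np.sqrt(v["sigma"] * v["m"]**3 / phase_space(v["m"]**2))
        amp += (norm * v["m"] * v["G"] * np.exp(1j * v["phi"])
                / (v["m"]**2 - s - 1j * np.sqrt(s) * v["G"]))
    return phase_space(s) / s**1.5 * np.abs(amp)**2

E = np.linspace(0.60, 1.06, 300)       # c.m. energy, GeV
cross_section = sigma_pgamma(E**2)     # arbitrary units with sigma = 1
```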
The fit gave the following results for the decay $`\varphi \rightarrow \eta \gamma `$ in the different final states:
$`3\gamma `$: $`\mathrm{BR}(\varphi \rightarrow \eta \gamma )=(1.338\pm 0.012\pm 0.052)\%`$ (2)
$`7\gamma `$: $`\mathrm{BR}(\varphi \rightarrow \eta \gamma )=(1.296\pm 0.024\pm 0.057)\%`$ (3)
$`\pi ^+\pi ^-\pi ^0`$: $`\mathrm{BR}(\varphi \rightarrow \eta \gamma )=(1.259\pm 0.030\pm 0.059)\%`$ (4)
The main sources of systematic error were the luminosity measurement (2.5%), the uncertainty in $`\mathrm{BR}(\varphi \rightarrow e^+e^-)`$ (3%), the MC efficiency determination (1–2%), the uncertainties in the branching ratios of the $`\eta `$-meson decays (1–2%), and the model dependence (1.5%).
Combining the branching ratios (2), (3) and (4), some systematic errors cancel, and we obtain $`\mathrm{BR}(\varphi \rightarrow \eta \gamma )=(1.304\pm 0.049)\%`$. This result agrees with the world average $`\mathrm{BR}(\varphi \rightarrow \eta \gamma )=(1.26\pm 0.06)\%`$ and has a smaller error. The remaining systematic error comes mainly from the error in the branching ratio $`\mathrm{BR}(\varphi \rightarrow e^+e^-)`$.
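For reference, a naive inverse-variance combination of the three channels (statistical and systematic errors added in quadrature, correlations ignored - an assumption that differs from the treatment above, where part of the systematics is common to all channels) reproduces the central value:

```python
import numpy as np

# Results (2)-(4), in percent: central value, statistical, systematic error.
br   = np.array([1.338, 1.296, 1.259])
stat = np.array([0.012, 0.024, 0.030])
syst = np.array([0.052, 0.057, 0.059])

err = np.hypot(stat, syst)          # total error per channel
w = 1.0 / err**2
mean = np.sum(w * br) / np.sum(w)
sigma = 1.0 / np.sqrt(np.sum(w))
print(f"BR(phi -> eta gamma) = ({mean:.3f} +/- {sigma:.3f}) %")
# Central value ~1.304%, as quoted; the quoted error (0.049%) is larger
# because the common systematic part does not average down.
```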
For the decay $`\varphi \rightarrow \pi ^0\gamma `$ the following result was obtained: $`\mathrm{BR}(\varphi \rightarrow \pi ^0\gamma )=(1.226\pm 0.036_{-0.089}^{+0.096})\times 10^{-3}`$. The uncertainty in the phase $`\phi _\varphi `$ for the process $`\varphi \rightarrow \pi ^0\gamma `$ contributes a systematic error of about 6% to this decay.
## 3 Decay $`\varphi \rightarrow \eta ^{\prime }\gamma `$
The first observation of this decay was made at VEPP-2M in the CMD-2 experiment . The measurement of this decay at SND was performed with $`\eta ^{\prime }`$ decaying into $`\pi ^+\pi ^-\eta `$ and $`\eta `$ into two $`\gamma `$’s. The background for this final state comes from the processes $`e^+e^-\rightarrow \eta \gamma \rightarrow \pi ^+\pi ^-\pi ^0\gamma `$, $`e^+e^-\rightarrow \pi ^+\pi ^-\pi ^0`$ and $`e^+e^-\rightarrow \omega \pi ^0\rightarrow \pi ^+\pi ^-\pi ^0\pi ^0`$. For the analysis, events with two charged tracks and three photons were selected. To suppress the background a dedicated selection algorithm was developed, based on the kinematics of all these processes; it is described in detail in ref. . Owing to the strict cuts, the selection efficiency for the $`\pi ^+\pi ^-3\gamma `$ final state of the process under study was 5.5%.
A total of $`5.2_{-2.2}^{+2.6}`$ events was found, corresponding to the branching ratio $`\mathrm{BR}(\varphi \rightarrow \eta ^{\prime }\gamma )=(6.7_{-2.9}^{+3.4})\times 10^{-5}`$. The systematic error, not included in the above errors, is about 15% and is determined mainly by the uncertainty in the efficiency estimation.
## 4 Preliminary results on the decays $`\rho ,\omega \rightarrow \eta \gamma `$
To study the decays $`\rho ,\omega \rightarrow \eta \gamma `$, the final state $`\eta \rightarrow 3\pi ^0\rightarrow 6\gamma `$ was chosen, because the physical background in the energy region of the $`\rho `$ meson is absent for this final state. The analysis of these decays is similar to that of the decay $`\varphi \rightarrow \eta \gamma \rightarrow 3\pi ^0\gamma \rightarrow 7\gamma `$. The spectrum of the recoil mass of the most energetic photon in events of the process under study is shown in Fig. 2, and the Born cross section of the process $`e^+e^-\rightarrow \eta \gamma `$ in the energy region of the $`\rho `$ meson is presented in Fig. 2. The fit was performed with formula (1) with fixed phases $`\phi _\rho =\phi _\omega =0`$, $`\phi _\varphi =180^{\circ }`$. The resulting branching ratios are $`\mathrm{BR}(\omega \rightarrow \eta \gamma )=(5.9\pm 1.0)\times 10^{-4}`$ and $`\mathrm{BR}(\rho \rightarrow \eta \gamma )=(2.0\pm 0.4)\times 10^{-4}`$. These results agree with the table values $`\mathrm{BR}(\omega \rightarrow \eta \gamma )=(6.5\pm 1.0)\times 10^{-4}`$ and $`\mathrm{BR}(\rho \rightarrow \eta \gamma )=(2.4_{-0.9}^{+0.8})\times 10^{-4}`$ , and the result for the decay $`\rho \rightarrow \eta \gamma `$ has a smaller error.
## 5 Summary of the results
The results presented in this work are summarized in the table:
| Decay | BR (SND) | BR (PDG) |
| --- | --- | --- |
| $`\varphi \rightarrow \eta \gamma `$ | $`(1.304\pm 0.049)\%`$ | $`(1.26\pm 0.06)\%`$ |
| $`\varphi \rightarrow \pi ^0\gamma `$ | $`(1.226\pm 0.036_{-0.089}^{+0.096})\times 10^{-3}`$ | $`(1.31\pm 0.13)\times 10^{-3}`$ |
| $`\varphi \rightarrow \eta ^{\prime }\gamma `$ | $`(6.7_{-2.9}^{+3.4})\times 10^{-5}`$ | $`(12_{-5}^{+7})\times 10^{-5}`$ |
| $`\omega \rightarrow \eta \gamma `$ | $`(5.9\pm 1.0)\times 10^{-4}`$ | $`(6.5\pm 1.0)\times 10^{-4}`$ |
| $`\rho \rightarrow \eta \gamma `$ | $`(2.0\pm 0.4)\times 10^{-4}`$ | $`(2.4_{-0.9}^{+0.8})\times 10^{-4}`$ |
## 6 Acknowledgement
The work is partially supported by RFBR (Grants No 96-15-96327, 99-02-17155, 99-02-16815, 99-02-16813) and STP “Integration” (Grant No 274).
# Finite thermal conductivity in 1d lattices
## A The rotator model
The simplest example of a classical-spin 1d model with nearest-neighbour interactions lies in the class (1) with $`V(x)=1-\mathrm{cos}(x)`$ and $`U=0`$. This model can be read also as a chain of $`N`$ coupled pendula, where the $`p_i`$’s and the $`q_i`$’s represent action-angle variables, respectively. It has been extensively studied as an example of a chaotic dynamical system that becomes integrable in both the small and high energy limits, when it reduces to a harmonic chain and to free rotators, respectively. In the two integrable limits, the relaxation to equilibrium slows down very rapidly for most of the observables of thermodynamic interest (e.g., the specific heat) . As a consequence, the equivalence between ensemble and time averages is established over accessible time scales only inside a limited interval of the energy density $`\epsilon `$. Here we shall discuss heat conduction for values of the energy density corresponding to strongly chaotic behaviour.
## B Numerical analysis of the thermal conductivity
The most natural and direct way to determine $`\kappa `$ consists in simulating a real experiment, by coupling the left and right extrema of the chain with two thermal baths at temperatures $`T_L>T_R`$, respectively. In our simulations we have used Nosé-Hoover models of thermostats , both because they can be easily implemented (integrating the resulting equations with a standard algorithm) and because of the smaller finite-size effects (due to the unavoidable contact resistance).
With this setting, a non-equilibrium stationary state sets in, characterized by a non-vanishing heat flux $`J`$:
$$J=\frac{1}{N}\sum _ij_i=\frac{1}{N}\sum _i\frac{p_i}{2}\left(f_{i+1}+f_i\right)$$
(2)
where $`f_i=-\partial V(q_{i+1}-q_i)/\partial q_i=\mathrm{sin}(q_{i+1}-q_i)`$ is the interaction force and $`j_i`$ is the local flux at site $`i`$. The total heat flux $`J`$ has to be averaged over a sufficiently long time span to get rid of fluctuations and to ensure convergence to the stationary regime. This can be tested by monitoring the average heat flux and looking at the scale of its fluctuations. As a result, we have verified that $`2\times 10^6`$ time units are sufficient to guarantee fluctuations of a few percent in the worst cases.
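A minimal sketch of the flux evaluation for given arrays of angles and momenta is shown below; it uses interior sites only, the handling of the fixed boundaries being ignored as a simplifying assumption of this illustration.

```python
import numpy as np

def heat_flux(q, p):
    """Instantaneous heat flux of Eq. (2) for the rotator chain."""
    f = np.sin(q[1:] - q[:-1])            # f_i = sin(q_{i+1} - q_i)
    j = 0.5 * p[:-2] * (f[:-1] + f[1:])   # j_i = (p_i/2)(f_i + f_{i+1})
    return j.sum() / len(q)

# Example: a random microstate of a short chain.
rng = np.random.default_rng(0)
q = rng.uniform(-np.pi, np.pi, 64)
p = rng.normal(0.0, 1.0, 64)
print(heat_flux(q, p))   # fluctuates around zero in equilibrium
```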
The thermal conductivity is determined by assuming the Fourier law, i.e. from the relation $`J=-\kappa \nabla T`$, where $`\nabla T`$ denotes the imposed thermal gradient. The simulations have been performed for $`T_L=0.55`$, $`T_R=0.35`$, and chain lengths ranging from $`N=32`$ to 1024, with fixed boundary conditions. The equations of motion have been integrated with a 4th-order Runge-Kutta algorithm and a time step $`\mathrm{\Delta }t=0.01`$. The results, reported in Fig. 1, clearly reveal convergence to a $`\kappa `$-value approximately equal to 7 (see the circles). The dotted line represents the best fit to the data with the function $`a+b/N`$: the agreement is very good, showing that finite-size corrections to $`\kappa `$ are of order $`O(1/N)`$, as is to be expected because of the thermal contacts. However, more important than the numerical value of the conductivity is its finiteness, in spite of momentum conservation.
In order to independently test the correctness of our results, we have performed direct microcanonical simulations, which allow the thermal conductivity to be determined through the Green-Kubo formula :
$$\kappa =\frac{1}{T^2}\int _0^{\infty }C_J(t)\,dt$$
(3)
where $`C_J(t)=N\langle J(t)J(0)\rangle `$ is the flux autocorrelation function at equilibrium and $`T`$ is the temperature. A correct application of the above formula requires fixing the energy density $`\epsilon `$ in such a way that the kinetic temperature (defined as $`T=\langle p^2\rangle `$, in agreement with the virial theorem) is close to the average value of the temperature in the previous simulations. The choice $`\epsilon =0.5`$ turns out to be reasonable, as it corresponds to $`T\simeq 0.46`$. In the absence of thermal baths, the equations of motion are symplectic, so that we have now preferred to use a 6th order McLachlan-Atela integration scheme (with periodic boundary conditions).
The correlation function has been computed by exploiting the Wiener-Khinchin theorem, i.e. by anti-transforming the Fourier power spectrum. The result of the time integration is almost independent of $`N`$ for $`N>128`$. The gray region in Fig. 1 corresponds to the expected value of $`\kappa `$ taking into account statistical fluctuations. There is not only a clear confirmation of a finite conductivity, but the numerical value obtained with this technique is in close agreement with the direct estimates.
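A sketch of this procedure is given below: the autocorrelation is obtained from the (zero-padded) periodogram of the flux time series, and the Green-Kubo integral is truncated at a time t_max where the correlation has decayed into noise - the truncation being a practical choice of this illustration, not something dictated by formula (3).

```python
import numpy as np

def kappa_green_kubo(J, dt, N, T, t_max):
    """Estimate kappa from an equilibrium flux series J sampled every dt."""
    J = np.asarray(J) - np.mean(J)
    n = len(J)
    # Wiener-Khinchin: autocorrelation from the power spectrum.
    S = np.abs(np.fft.rfft(J, n=2 * n)) ** 2
    C = np.fft.irfft(S)[:n] / np.arange(n, 0, -1)   # unbiased estimate
    C *= N                                          # C_J(t) = N <J(t)J(0)>
    m = min(n, int(t_max / dt))
    return np.trapz(C[:m], dx=dt) / T**2
```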
## C Dynamics in the mode space
In order to clarify the difference between the dynamics of the present model and that of FPU-type systems, we have investigated the evolution of the low-frequency Fourier modes. In Fig. 2 we report the power spectra of some long-wavelength modes. For the sake of comparison, the same quantities are reported for a diatomic FPU chain, which is characterized by anomalous transport. At variance with the FPU model, in the rotator chain there is no sharp peak (a signal of effective propagation of correlations ). Quite differently, the low-frequency part of the spectrum is described very well by a Lorentzian with halfwidth $`\gamma =Dk^2`$ ($`D\simeq 4.3`$). This represents an independent proof that energy diffuses, as one expects whenever the Fourier law is established.
## D Temperature dependence of the thermal conductivity
The most natural question arising from these results concerns the reason for the striking difference with other symmetric models such as the FPU-$`\beta `$ system. As long as each $`(q_{i+1}-q_i)`$ remains confined to the same valley of the potential, there cannot be any qualitative difference with the models previously studied in the literature. Jumps through the barrier, however, appear to act as localized random kicks that contribute to scattering the low-frequency modes and thus to a finite conductivity. If this intuition is correct, one should find analogies between the temperature dependence of the conductivity and that of the average escape time from the potential well. To this aim, we have computed $`\kappa `$ for different temperature values by performing microcanonical simulations with various energy densities. From the data reported in Fig. 3, one notices a divergence for $`T\rightarrow 0`$ of the type $`\kappa \propto \mathrm{exp}(\alpha /T)`$ with $`\alpha \simeq 1.2`$. Even more convincing evidence of this behaviour is provided by the temperature dependence of the average escape time (see the triangles in Fig. 3), with an exponent $`\alpha \simeq 2`$. The latter behaviour can be explained by assuming that the jumps are the result of activation processes. Accordingly, the probability of their occurrence is proportional to $`\mathrm{exp}(-\mathrm{\Delta }V/T)`$, where $`\mathrm{\Delta }V`$ is the barrier height and the Boltzmann constant is set equal to 1 (as implicitly done throughout this Letter). Since $`\mathrm{\Delta }V=2`$, the whole interpretation is consistent. Moreover, in the absence of jumps, the dependence of the conductivity on the length should be the same as in FPU systems, i.e. $`\kappa \propto N^{2/5}`$. Therefore, a low-frequency mode travelling along the chain should experience a conductivity of order $`\overline{N}^{2/5}`$, where $`\overline{N}`$ is the average separation between jumps. Under the assumption of a uniform distribution of phase jumps, their spatial separation is of the same order as their time separation, so that we can expect $`\kappa \propto \mathrm{exp}[2\mathrm{\Delta }V/(5T)]`$. On the one hand, this heuristic argument explains why and how such jumps contribute to normal transport. On the other hand, the numerical disagreement between the observed and expected values of the exponent $`\alpha `$ (1.2 vs. 0.8) indicates that our analysis needs refinement. In fact, we should notice, e.g., that in the low-energy limit nonlinearities become negligible, implying that deviations from the asymptotic law $`\kappa \propto N^{2/5}`$ should become relevant.
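The Arrhenius-like divergence is a straight line in $`\mathrm{ln}\kappa `$ versus $`1/T`$, so the exponent can be extracted with a simple linear fit; the data points in the sketch below are purely illustrative placeholders generated to be consistent with $`\alpha \simeq 1.2`$, not the measured values of Fig. 3.

```python
import numpy as np

# Placeholder (T, kappa) points, mimicking kappa ~ exp(alpha/T).
T = np.array([0.30, 0.35, 0.40, 0.45, 0.50])
kappa = np.array([55.0, 31.0, 20.0, 14.0, 11.0])

alpha, ln_k0 = np.polyfit(1.0 / T, np.log(kappa), 1)
print(f"alpha = {alpha:.2f}")   # ~1.2 for these illustrative points
```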
## E Further checks
In order to test the conjecture that jumps are responsible for normal heat transport, we have investigated some other models. First, we have considered a double-well potential $`V(x)=-x^2/2+x^4/4`$ (the same as in FPU but with a different sign for the harmonic term). The results of the direct simulations are reported in Fig. 1 (see triangles) for a temperature corresponding again to a quarter of the barrier height. The finiteness of the conductivity and its numerical value are confirmed by the computation of $`\kappa `$ through the Green-Kubo formula (see the light-grey shaded region).
Finally, we have considered an asymmetric version of the rotator model, namely $`V(x)=A-\mathrm{cos}(x)+0.4\mathrm{sin}(2x)`$, where $`A`$ is fixed in such a way that the minimum of the potential energy is zero, and the temperature again corresponds to one quarter of the barrier height. In this case too the conductivity is finite, confirming our empirical idea that the jumps are responsible for breaking the coherence of the energy flux and, in turn, for the finite conductivity. However, from the point of view of Ref. , this result is quite unexpected, since, in view of the model asymmetry, one expects the pressure $`\varphi =\sum _i\langle f_i\rangle /N`$ to be non-zero. Nonetheless, microcanonical simulations show that, although the distribution of forces is definitely asymmetric, their average value is numerically 0. This can be understood by noticing that, in view of the boundedness of the potential, the system cannot sustain any compression. Therefore, no contradiction exists with Ref. .
In conclusion, in this Letter we have reported the first evidence of normal heat transport in 1d systems with momentum conservation. Such behaviour appears to be connected with jumps between neighbouring potential valleys. From the dynamical point of view, it is natural to ask what are the peculiar properties of such jumps that make them so different from other types of nonlinear fluctuations that may occur in single-well potentials. The only clearly distinctive feature that we have found is the “hyperbolic” type of behaviour in the vicinity of a maximum of the potential, which has to be contrasted with the typical “elliptic” character of the oscillations around the minima. We hope to be able to understand in the future whether this is truly the reason for the difference between FPU and rotator systems.
We thank S. Lepri for several profitable discussions. One of us (AP) thanks Gat Omri for some remarks about our last comments. ISI in Torino is acknowledged for the kind hospitality during the Workshop on ”Complexity and Chaos” 1998, where part of this work was performed.
## 1 Introduction
Electroweak symmetry (EWS) is a gauged chiral symmetry of the standard model (SM) fermions, broken by a mechanism which at this stage remains elusive. Influenced by the breaking of electromagnetism in superconductors through the dynamical formation of an electron pair condensate, and by the breaking of chiral symmetry in QCD through a quark condensate, it is natural to propose that electroweak symmetry may be broken by a dynamically driven fermion condensate. If the responsible dynamics were a gauge interaction, then the logarithmic running of the gauge coupling would naturally provide a separation between the Planck and electroweak scales, i.e. a solution to the hierarchy problem. The most obvious such extension to the standard model in this vein is technicolour \- essentially a repeat of QCD but with a strong interaction scale of order the weak scale. Such models run into problems, though, because in a broken gauge theory there is a violation of the decoupling theorem . The sector responsible for the symmetry breaking gives large contributions to parameters in the low energy theory applicable at LEP and is, at least naively, incompatible with the data. The most natural mechanism for the generation of the standard model fermion masses in this context is a feed-down mechanism involving broken gauge interactions, extended technicolour (ETC) . ETC also runs into grave trouble - in its case accommodating sufficient isospin breaking to generate the large top-bottom mass splitting without contradicting the precision data . In this article we discuss recent model building that overcomes many of the failures of technicolour .
Why, given the failures of the archetypal model and the existence of other well-motivated solutions of the hierarchy problem (supersymmetry and large extra dimensions), should one persevere with dynamical symmetry breaking models? Firstly, I believe that the motivation that inspired technicolour remains even with its fall; but, most importantly, I do not believe it is the place of theory to claim the exclusion of entire paradigms. In the current era of particle physics it is the theorist’s role to provide as wide a variety of viable models as possible for experiment to eventually differentiate between.
The dynamical models I discuss below are intended to provide insight into how dynamical symmetry breaking might manifest in nature and hence inspire experimental searches. The biggest success of these most recent models is that they are compatible with the precision data because they have a decoupling limit in which low energy predictions are precisely those of the standard model.
I will begin in Section 2 by reviewing technicolour and the pitfalls that must be avoided. The first example of a dynamical EWS breaking model with a decoupling limit was top condensation, which I review in Section 3 - the model is, though, ruled out by the small measured top mass. In Section 4 I describe recent models that successfully implement a decoupling limit in dynamical symmetry breaking models . Finally, in Section 5 I discuss the experimental limits on the scale of the new physics proposed in these models, both from precision data and from direct searches at the Tevatron. Much of the work reported here was carried out with Gustavo Burdman and Sekhar Chivukula in .
## 2 Technicolour and its failures
The simplest model of dynamical EWS breaking is technicolour . We assume there is an $`SU(N_{TC})`$ gauge group acting on, say, a single electroweak doublet of left handed “techniquarks”, $`(U,D)_L`$, and two electroweak singlet right handed techniquarks, $`U_R`$, $`D_R`$. The techniquarks are massless, so there is an $`SU(2)_L\times SU(2)_R`$ chiral symmetry. We assume the asymptotically free $`SU(N)`$ group becomes strong at a scale $`𝒪`$(1 TeV), generating techniquark condensates, $`\langle \overline{U}U\rangle ,\langle \overline{D}D\rangle \ne 0`$, which break the chiral symmetry to the vector subgroup. The chiral EWS is broken. The Goldstones eaten by the W and Z are the Goldstones of chiral symmetry breaking, the technipions, and the weak scale $`v`$ is traded for the technipion decay constant, $`F_\pi `$. Such a model would be characterized by the discovery of technihadrons at the TeV scale.
This sort of model, though, gets into trouble with the precision electroweak data from LEP and SLD . In a broken gauge theory, particles with masses violating the gauge symmetry do not decouple. These effects enter through oblique corrections, which can be parameterized by the three parameters $`S,T,U`$ . The $`S`$ parameter turns out essentially to count the number of such massive particles. One can estimate the contribution to $`S`$ from massive, strongly interacting fermions by scaling up ordinary QCD data to the appropriate scale . The result for technifermions is
$$\mathrm{\Delta }S_{TC}\simeq 0.1\,N_{TC}N_D$$
(1)
where $`N_D`$ is the number of doublets. The experimental limit on $`S`$ (assuming a heavy higgs) is $`-0.27\pm 0.12`$ ! Even a very minimal one-doublet SU(2) technicolour theory appears ruled out. Of course, it is possible that there are other pieces of new physics contributing to $`S`$ with negative sign, or that the naive scaling of QCD data might be inappropriate to the technicolour dynamics. In any case, at most a relatively minimal technicolour sector seems possible.
Breaking EWS is not the only job a technicolour model must accomplish: the SM fermions must also be given their masses. The usual mechanism considered is extended technicolour, in which at high scales the technicolour group is unified with the flavour symmetries of the SM fermions. This larger symmetry is assumed to be broken down to technicolour, leaving massive gauge bosons which can feed the technifermion condensate down to provide the SM fermion masses. One finds
$$m_f\simeq \frac{g_{ETC}^2}{M_{ETC}^2}\langle \overline{T}T\rangle \simeq \frac{g_{ETC}^2}{M_{ETC}^2}4\pi F_{TC}^3$$
(2)
The gauging of the SM flavour symmetries is, though, a dangerous game, and one may expect to find flavour changing neutral currents in the theory mediated by single gauge boson exchange. Suppressing such contributions to $`K^0\overline{K}^0`$ mixing requires $`M_{ETC}\gtrsim 600`$ TeV. Such an ETC gauge boson can, however, only generate a fermion mass of 0.5 MeV, which is well short of the second family quark masses. This is a long standing problem with ETC.
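The mismatch can be checked in a couple of lines; taking $`g_{ETC}\sim 1`$ and $`F_{TC}\sim 246`$ GeV (the full weak scale - an assumption made only for this estimate):

```python
import math

g_etc = 1.0        # assumed O(1) ETC coupling
F_tc = 246.0       # GeV; full weak scale, assumed for this estimate
M_etc = 600e3      # GeV: the FCNC lower bound quoted above

m_f = g_etc**2 / M_etc**2 * 4.0 * math.pi * F_tc**3
print(f"m_f ~ {m_f * 1e3:.2f} MeV")   # ~0.5 MeV, as stated in the text
```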
A second problem is the generation of the large top mass. A 175 GeV fermion mass would require an ETC gauge boson of order 1 TeV. The interactions of this light gauge boson must violate custodial isospin, since the bottom quark is so much lighter than the top. Including such an isospin violating gauge boson in the loops of technifermions generating the W and Z masses gives contributions to $`\mathrm{\Delta }\rho (=\alpha T)`$ of order $`12\%`$ - two orders of magnitude above the experimental limit!
The lessons of technicolour appear to be that there are no extra electroweak doublets beyond the SM and that the SM fermion masses do not result from a simple feeddown mechanism.
## 3 Top condensation and its failure
Top condensation models were a first attempt to avoid the excessive baggage of technicolour. Inspired by the large top mass, it was suggested that the top may play a unique role in EWS breaking - perhaps the “top is the technifermion” and it is a $`\overline{t}t`$ condensate that breaks EWS. The simplest model is to introduce a four fermion interaction acting on the top
$$\mathcal{L}=\frac{\kappa }{M^2}\overline{\psi }_Lt_R\overline{t}_R\psi _L$$
(3)
where we might imagine some broken gauge theory providing the origin of the interaction. At least at large N the model can be solved, and the behaviour of the condensate as a function of $`\kappa `$ is shown in Fig 1. There is a critical coupling at which chiral symmetry breaking switches on, and at large $`\kappa `$ the condensate flattens out at of order the scale $`M`$. If $`M\sim 1`$ TeV then arranging $`\kappa `$ so that the correct top mass/EWS breaking scale is realized is relatively easy; if $`M\gg 1`$ TeV then one must fine tune $`\kappa \rightarrow \kappa _c`$ to achieve such a low scale as $`v`$. Below the scale $`M`$ the effective theory contains a higgs boson which is a bound state of the top quark. The higgs mass at tree level can be calculated by resumming top loops in the four-top scattering amplitude and at large N is given by $`2m_t`$. One may also estimate the relation between $`m_t`$ and $`v`$ through a loop diagram, and here is where the theory runs into trouble: to generate $`v\simeq 250`$ GeV requires $`m_t\simeq 600`$ GeV (assuming $`M\simeq 5`$ TeV)!
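The criticality structure of Fig. 1 can be reproduced with a schematic large-N gap equation. Writing $`g=\kappa /\kappa _c`$ and $`x=m/M`$, a standard NJL-type parameterisation (used here purely for illustration, not taken from the text) is $`1=g[1-x^2\mathrm{ln}(1+1/x^2)]`$, which has only the trivial solution for $`g<1`$ and a condensate growing to of order $`M`$ for $`g>1`$:

```python
import numpy as np
from scipy.optimize import brentq

def dynamical_mass(g):
    """Solve the schematic gap equation 1 = g*(1 - x^2*log(1 + 1/x^2))."""
    if g <= 1.0:
        return 0.0                       # chiral symmetry unbroken
    f = lambda x: g * (1.0 - x**2 * np.log1p(1.0 / x**2)) - 1.0
    return brentq(f, 1e-9, 10.0 * np.sqrt(g))   # x = m/M

for g in (0.5, 1.05, 2.0, 5.0):
    print(f"kappa/kappa_c = {g:4.2f}  ->  m/M = {dynamical_mass(g):.3f}")
# m/M switches on at g = 1 and grows to of order one, as in Fig. 1.
```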
It is possible to combine top condensation and technicolour to lessen the $`\rho `$ parameter problems that technicolour alone suffered: one allows EWS to be broken by technicolour whilst a direct top condensate supplies the top mass without a light ETC gauge boson. Such models, though, retain most of the troublesome baggage of technicolour.
## 4 Viable dynamical symmetry breaking models
We move now to discuss models that are compatible with all low energy data. The archetype was provided by Dobrescu and Hill in their top see-saw model . Their model provides a mechanism for reconciling $`m_t`$ with the idea that a top condensate provides the entirety of $`v`$. A similar idea was proposed in .
The trick is to use the left handed top as a “technifermion” but to introduce a new field $`\chi _R`$ to be the right handed technifermion. $`\chi _R`$ has the same quantum numbers as the $`t_R`$ and is bound into a massive Dirac fermion with a partner $`\chi _L`$ which shares its quantum numbers ($`m_\chi \sim 3`$ TeV). We now imagine an interaction of the form (3) that is strong and drives a $`\overline{t}_L\chi _R`$ condensate that breaks EWS at the scale $`v`$: a mass of order 600 GeV has been generated between $`t_L`$ and $`\chi _R`$. To generate the top mass we include a mass term between $`\chi _L`$ and $`t_R`$ \- this is gauge invariant, so we would have to explain why it wasn’t there if it wasn’t! The result of all these masses, if we choose the EWS singlet masses correctly, is a see-saw-like mass spectrum with a massive eigenstate (1-5 TeV) and a light eigenstate, the top (175 GeV). It may seem a bit strange that $`\chi _R`$, which is part of a Dirac fermion with a mass of several TeV, can participate in dynamics that gives rise to the scale $`v`$. For this to be possible we require that the scale $`M`$ in (3) be larger than $`\chi `$’s Dirac mass term, so that the dynamics really lies above that scale. The fact that the scale $`v`$ emerges hints at a degree of fine tuning - in this sense it is best if $`M`$ is not too large.
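A sketch of the resulting spectrum is given below, with illustrative mass entries (in GeV) chosen only so that the light eigenvalue lands near the physical top mass - apart from the quoted 600 GeV and few-TeV scales, none of these numbers is taken from the text.

```python
import numpy as np

# See-saw mass matrix in the (t, chi) basis: rows (t_L, chi_L),
# columns (t_R, chi_R). 600 GeV is the dynamical EWS-breaking mass;
# 3000 GeV is the chi Dirac mass; 875 GeV is the gauge-invariant
# chi_L t_R term, tuned here so the light eigenvalue comes out near m_t.
M = np.array([[0.0,   600.0],
              [875.0, 3000.0]])

heavy, light = np.linalg.svd(M, compute_uv=False)
print(f"heavy ~ {heavy:.0f} GeV, light ~ {light:.0f} GeV")
# light ~ 165 GeV, close to the see-saw estimate 600*875/3000 = 175 GeV;
# heavy ~ 3.2 TeV, in the quoted 1-5 TeV range.
```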
The higgs in these models is a bound state of $`t_L`$ and $`\chi _R`$ and, at large N, as in the top condensate model, has a tree-level mass of twice the EWS breaking mass, i.e. 1.2 TeV. This mass does not take into account the running of the quartic coupling between $`M`$ and the weak scale. This running is quite strong and for large values of $`M`$ will display the fixed point behaviour of the SM couplings. We expect for $`M<10`$ TeV that the physical higgs mass will be between 400-600 GeV.
The appealing aspect of this model is that it has a decoupling limit. $`\chi `$ is an electroweak singlet and so its mass may be taken to infinity, where it will decouple completely from the low energy theory, leaving the SM as the effective field theory. To maintain the physical top mass, the ratio of the mass term between $`\chi _L`$ and $`t_R`$ to the $`\chi _L\chi _R`$ mass must be kept constant in this limit. Of course, taking the extreme limit $`M\rightarrow \infty `$ introduces fine tuning, as discussed above, but if $`M\sim 3`$ TeV or above the decoupling is almost complete and the fine tuning “barely” present .
Extending this type of model to include masses for all the SM fermions is relatively easy. One example is the flavour universal EWS breaking model . The top mass is no longer a direct measure of $`v`$ in the see-saw model, so there is in fact no reason to use it, or it alone, to break EWS. In the flavour universal model all the SM fermions participate equally in EWS breaking. We introduce, for each SM fermion, two Dirac singlet fermions ($`\chi `$ and $`\omega `$) with masses of 3 TeV or so and the quantum numbers of that SM fermion’s right handed spinor. A strong interaction is assumed to cause condensation between the left handed SM fermion and its $`\chi _R`$ field, while a mass term is included between $`\omega _L`$ and the right handed SM fermion. The SM fermion mass then results from mass mixing between the two massive singlets.
The SM fermion masses are simply the result of mass terms, $`\stackrel{~}{M}`$, chosen in the singlet sector - the problem of flavour is deferred to a higher scale. This mechanism, introduced in , is essentially a way of introducing yukawa couplings into dynamical models. Since all the SM fermions participate equally in EWS breaking, the EWS breaking masses between the SM fermions and the singlet sector are reduced by a factor of $`\sqrt{N_D/3}`$, and the higgs mass is approximately 350-450 GeV (with running of the quartic coupling the mass could be as low as 300 GeV).
More complete models of both the top see-saw and the flavour universal EWS breaking model exist in the literature. The origin of the strong coupling is broken, strong, gauged flavour symmetries. For example, to generate a top condensate one must have an interaction that acts solely on the top - one possibility is to gauge the SU(3) colour group of the top separately from that of the rest of the standard model . At the 3 TeV or so scale this extended gauge symmetry is broken to the SM, leaving a colour octet of massive, strongly interacting gauge bosons. These top colour interactions are responsible for the top condensation. To distinguish between the top and bottom quarks these interactions must be chiral. The flavour universal models suggest the gauging (and then breaking) of the chiral family symmetry groups of the standard model , or of the full SU(12) flavour symmetry of the standard model left handed fermions . One might worry that, as in ETC, FCNCs will be generated. In fact, if we are careful to preserve the $`SU(3)^5`$ chiral flavour symmetry of the standard model, which is responsible for the SM’s GIM mechanism, then the GIM mechanism persists above the weak scale and these symmetries can be gauged at scales of only a few TeV . Since this class of model requires different interactions for the left handed doublets from the right handed fermions, such a scheme is very natural in this context.
## 5 Experimental limits
The dynamical symmetry breaking models described above have been engineered to have a decoupling limit and hence to avoid making an experimental prediction! However, the desire to avoid fine tuning requires that the scale of the new physics is actually not too far above the weak scale. It is therefore possible to place meaningful lower limits on the scale of the dynamics from precision EW data and direct search limits from the Tevatron.
The new physics in the models enters the precision data in two ways. Firstly, there is mixing between the SM fermions and the EWS singlets, which gives rise to corrections to the SM Z-fermion couplings of the form
$$\delta g_f\simeq \frac{e}{s_\theta c_\theta }Q_fs_{\theta _w}^2\,m_{mix}^2/m_\chi ^2$$
(4)
where $`m_{mix}`$ is the mixing mass, which we expect to be of order a few hundred GeV, and $`m_\chi `$ is the Dirac mass of the singlet. Assuming flavour universal mixing, a fit to the data places limits of 1.9 and 2.6 TeV on $`m_\chi `$ for $`m_{mix}=100`$ and 200 GeV, respectively.
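For orientation, formula (4) can be evaluated directly; the electroweak inputs below ($`s_{\theta _w}^2\simeq 0.23`$, $`\alpha \simeq 1/128`$ at the Z pole) are standard values assumed only for this estimate.

```python
import math

S2 = 0.231                               # sin^2(theta_w), assumed
E = math.sqrt(4.0 * math.pi / 128.0)     # e at the Z pole, assumed

def delta_g(Q_f, m_mix, m_chi):
    """Coupling shift of Eq. (4) from fermion-singlet mixing."""
    return E / math.sqrt(S2 * (1.0 - S2)) * Q_f * S2 * (m_mix / m_chi) ** 2

# Up-type quark with m_mix = 200 GeV at the quoted limit m_chi = 2.6 TeV:
print(f"{delta_g(2.0/3.0, 200.0, 2600.0):.1e}")   # ~7e-4 coupling shift
```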
The flavour gauge bosons may also correct the vertices of the SM fermions they act on, and provide corrections to the $`\rho /T`$ parameter through loops of top quarks. In we have performed a global fit to the Z-pole data including these effects. The $`95\%`$ confidence level limits on the mass scale of the new interactions in a variety of models, when their coupling equals the critical coupling of the NJL model, are
$$\begin{array}{cc}\text{Top colour}\hfill & M(\kappa _c)\geq 1.3\text{ TeV}\\ \text{Left handed quark family symmetry}\hfill & M(\kappa _c)\geq 2\text{ TeV}\\ \text{Left handed SU(12) flavour symmetry}\hfill & M(\kappa _c)\geq 2\text{ TeV}\end{array}$$
(5)
These bounds assume a 100 GeV higgs mass but are in fact fairly insensitive to it, since the higgs mass enters the precision variables only logarithmically whilst the mass scale of these corrections enters quadratically. The precision data currently favour a low standard model higgs mass, $`m_h\lesssim 260`$ GeV (the bound rises to 400 GeV if the SLD forward-backward asymmetry measurement is not included), whilst the models we have discussed have a higgs mass in the $`300-600`$ GeV range. The precision limit is, though, extremely sensitive to new physics - in particular, positive contributions to the $`\delta \rho /T`$ parameter, such as are provided by these flavour gauge bosons, can restore heavier higgs masses to agreement with the precision data. For example, the top coloron model with $`M(\kappa _c)`$ of order 2 TeV is compatible with a 400 GeV higgs.
Direct search limits on the flavour gauge bosons may be obtained from the Tevatron Run I data. Top colour gives enhanced top production; the larger flavour symmetry models enhance $`q\overline{q}`$ production and can make use of the bottom quark content of the proton to make single top events; finally, the SU(12) flavour model, which involves the leptons, gives contributions to Drell-Yan production. These limits are currently under study and place bounds of 1-3 TeV on the flavour gauge bosons. The expectation is that Run II limits will be competitive with the precision data and will probe scales of order 3 TeV and above in these dynamical symmetry breaking models.
## 6 Conclusions
The idea that EWS is broken by a dynamically generated fermion condensate offers the possibility of a natural and low scale extension of the SM. Technicolour was the obvious first model to propose, since it is simply a repeat of QCD; however, the precision data are incompatible with an extended symmetry breaking sector. Top condensation was proposed as a dynamical symmetry breaking model with a minimum of new physics and provides a natural explanation for a heavy top - in fact, the top turns out to be too heavy in this scheme. The top see-saw model resolves this problem by introducing singlet fermions and a see-saw mass mechanism that gives a lightest mass eigenstate that can be interpreted as the top quark. The flavour universal symmetry breaking model extends the idea to include masses for the full set of SM fermions. These models have a decoupling limit, since all the additional fields beyond the SM fields are EWS singlets. The dynamics is assumed to result from broken gauged flavour symmetries. Precision data and Tevatron direct searches put a lower limit of order a few TeV on these models; the Tevatron at Run II has discovery potential.
The models discussed, though, are not complete models in any sense. The origin of the SM fermion masses is deferred, for example, and the assumption that the dynamics results from gauge interactions broken close to their strong-coupling scale, yet still able to generate condensation themselves, requires as yet unproven gauge dynamics . The hope is that the models provide examples of how dynamical symmetry breaking might be realized in nature, and guidance for experimental searches. In the end, experiment must surely be an essential guide to the true model of EWS breaking.
Acknowledgements: NE’s work is supported by a PPARC Advanced Fellowship.
# Re-observing the EUV emission from Abell 2199: in situ measurement of background distribution by offset pointing
## Abstract
The EUV excess emission from the clusters A2199 and A1795 remains an unexplained astrophysical phenomenon. There has been many unsuccessful attempts to ‘trivialize’ the findings. In this Letter we present direct evidence to prove that the most recent of such attempts, which attributes the detected signals to a background non-uniformity effect, is likewise excluded. We address the issue by a re-observation of A2199 which features a new filter orientation, usage of a more sensitive part of the detector and, crucially, includes a background pointing at $``$ 2<sup>o</sup> offset - the first in situ measurement of its kind. We demonstrate quantitatively two facts: (a) the offset pointing provides an accurate background template for the cluster observation, while (b) data from other blank fields do not. We then performed point-to-point subtraction of the in situ background from the cluster field, with appropriate propagation of errors. The resulting cluster radial profile is consistent with that obtained by our original method of subtracting a flat asymptotic background. The emission now extends to a radius of 20 arcmin; it confirms the rising prominence of EUV excess beyond $``$ 5 arcmin as previously reported.
Subject headings: Galaxies: clusters: general; instrumentation: detectors; methods: data analysis, statistical; radiation mechanisms: thermal.
The origin of diffuse EUV and soft X-ray emission from the Virgo and Coma clusters of galaxies (Lieu et al 1996a,b), detected at a level higher than that expected from the hot intracluster medium (i.e. the cluster soft excess, or CSE, syndrome), has remained unsolved. The most serious theoretical puzzle is presented by the EUVE data of the rich clusters Abell 1795 and 2199, where the CSE emission was found to be very soft (with luminous EUV excess unaccompanied by any excess in the 1/4-keV band of the ROSAT PSPC) and absent from the cluster centers (Mittaz, Lieu & Lockman 1998; Lieu, Bonamente & Mittaz 1999, hereafter abbreviated as LBM). The reported phenomena have profound implications irrespective of whether the emission turns out to be thermal, non-thermal, or some other origin (Cen & Ostriker 1999, Sarazin 1999, Lieu et al 1999 and references therein). It is therefore not surprising that questions concerning the observational integrity of CSE are still occasionally raised, especially with respect to A1795 and A2199. The area of data analysis being scrutinized recently is the subtraction of the EUVE detector background, as the inferred signals at large cluster radii, being only a small fraction of the background, are sensitive to this procedure.
To understand the potential problems involved in analysing data from the EUVE DS detector, we first describe the essential aspects of the DS background behavior which can affect the analysis of cluster data. We emphasize that here and after, unless otherwise specifically stated, only raw data (i.e. the output product of the EUVE standard telemetry processing pipeline, as publically archived) have been employed. The formidable problems which confront usage of additionally manipulated data will be enlisted shortly.
The portion of the DS occupied by the Lex/B (69 - 190 eV) filter is rectangular in shape, and the sensitivity of the detector to background photons and particles is not spatially uniform across this area (note that this sensitivity non-uniformity is not a vignetting effect, given the obvious lack of azimuthal symmetry around boresight). In Figure 1 we show contours of the background as obtained by accumulating detector images of three extragalactic pointings, viz. 2EUVE J1100+34.4, 2EUVE J0908+32.6, and M15, totalling an exposure of $`\sim `$ 85 ksec, comparable to the longest exposure EUVE has for any single observation of an individual cluster. All bright sources have been removed. The figure results from a multi-scale wavelet analysis and reconstruction (see Slezak, Durret & Gerbal 1994) of the detector image, retaining features (on every spatial scale) which have a minimum significance level of 3 $`\sigma `$. It can be seen from Figure 1 that there is no evidence of extended emission at any scale, and that the background exhibits large scale gradients at the extremities of the y-axis. Note that this spatial pattern is typical of the DS Lex/B background for commensurate exposures. Thus, although such a 2-D view offers firm assurance that cluster glows are inherently not present in background fields, it also reveals the potential dangers of a 1-D view, viz. that any radial surface brightness profile centered at or near boresight (where clusters are usually observed) could lead to an underestimate of the background if this is taken from an annulus with large (inner and outer) radii which encompasses the bands of lower background.
Nonetheless, a simple way of minimizing this difficulty does exist, and (though not explicitly stated) has been the manner in which the analysis leading to our past publications was performed. The same procedure is also adopted in the present work. It involves truncating areas of large and small y coordinate. There are no strict criteria on how to execute this, although our approach is to first plot the total detector counts as a function of y, after they have been summed over all values of x, and then place the y-limits at the extreme ends of the ‘plateau’ region. To illustrate, we show in Figure 2 the y-profile of the composite detector image of many long exposure fields (Mrk 421, NGC 5548, PKS 2155-304, PSR J0108-1431, and the aforementioned three fields, totalling an exposure of 0.736 Msec). It will be shown below, using the current data as an example, that the method leads to a well-behaved background profile.
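A sketch of the truncation step is given below, for a detector image stored as a 2-D counts array indexed as image\[y, x\]; the indexing convention and the 90% plateau threshold are assumptions of this illustration, since in practice the limits were placed by eye at the ends of the plateau.

```python
import numpy as np

def plateau_y_limits(image, frac=0.9):
    """Return the y-limits of the 'plateau' of the summed-over-x profile."""
    yprof = image.sum(axis=1)            # total counts at each y
    level = np.median(yprof)             # representative plateau level
    ok = np.nonzero(yprof >= frac * level)[0]
    return ok.min(), ok.max()            # keep only y pixels in this range
```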
Other potential issues are the use of a template background and the effect of data manipulation. A detailed treatment of this subject is given in Kaastra (1999), but we summarize the main points here. A commonly used technique of ‘processing’ the raw data, known as pulse height (PH) thresholding, takes advantage of the fact that, of the two main components of the DS background, photon and particle, the former has a much narrower range of PH. By selecting events within this range, it is possible in principle to suppress the particle background and improve the data signal-to-noise. In reality, however, the method only works for point sources. For diffuse emission, which illuminates a large area of the detector, the detector-position dependence of this photon PH window complicates matters. Specifically, the entire window shifts towards higher PH as one moves from the central region of the detector, where cluster centroids are usually located, to the outer regions, where fainter radiation is detected from large cluster radii.
The common practice is to use a fixed PH window everywhere, even though this window corresponds only to the PH distribution of photon events at or near the detector center. (A trivial way of PH thresholding the DS data, sometimes employed to remove detector artifacts, is to apply only a lower PH threshold; it introduces minimal bias to the data because off-axis photons, which are peaked at higher PH, are no longer excluded.) The result is a loss of photons, and consequently a decrease in the thresholded background, with increasing radius. This is the reason why Bowyer, Berghöffer, and Korpela 1999 (hereafter abbreviated as BBK) recently reached the erroneous conclusion that cluster EUV emission, including even the radiation of the hot gas, is a mere background effect. In fact, a map of the spatial distribution of the peak PH value of the photon PH window (Kaastra 1999) explains precisely why BBK find much steeper background gradients (relative to the raw data) across the DS detector. One could, of course, undertake the elaborate task of applying position dependent PH windows, as in the case of Kaastra (1999), who also subtracted a template background obtained from blank field pointings at random times and directions. Even after such efforts, systematic uncertainties at a known level remain, because of the heavy data manipulations involved and the problems related to the usage of a background template which is not obtained in situ (see below).
The decisive way of addressing these issues is to measure the true background distribution of the detector region occupied by the cluster. We therefore scheduled an EUVE re-observation of A2199 in February 1999, which consisted of a 47 ksec exposure of the cluster (with the cluster center placed at 11.5 arcmin off-axis along the detector +x direction), immediately followed by an 11 ksec exposure of a blank sky region (on-axis J2000 coordinates of RA = 245.523<sup>o</sup>, DEC = 40.326<sup>o</sup>, $`\sim `$ 2<sup>o</sup> offset from the cluster) whilst maintaining the same roll (azimuthal) angle of the Deep Survey (DS) detector. Within the context of the CSE this is the first direct approach to the EUVE DS detector background problem, as it involved a spatially and temporally contiguous observation - the data thus provided represent the most relevant background map for the purpose of subtraction.
The merits of using an offset pointing to estimate the cluster background are clear: the offset pointing background differed from that of the cluster by only 4%, which should be contrasted with the $`\sim `$ 300% dynamic range of DS raw background values over the entire EUVE mission. Moreover, the two datasets were found to maintain their spatial distributions of event counts and pulse height (PH) over areas outside the cluster location, whereas a broad variety of behavior exists within the archival database. To elaborate the latter point, we computed the difference between the radial profiles of the cluster and offset fields, with the center of the annular system located at 15 arcmin from boresight along the detector -x direction. With this choice of center one avoids the effects of the cluster emission on the profile of the cluster field, since A2199 is $`\sim `$ 27 arcmin away on the other side of the detector. Thus one expects the subtraction to yield zero signal everywhere. This is indeed the case, as can be seen in Figure 3a, where the data follow a flat and vanishing profile with $`\chi _{red}^2(13)=`$ 1.18, and the r.m.s. deviation is consistent with Poisson statistics - the residual point-to-point systematics are $`\sim `$ 0.13 % of the pre-subtracted flux. In contrast, when the procedure is repeated using the present cluster field but another offset background field, acquired in the same month by pointing to a direction $`\sim `$ 2<sup>o</sup> away from the Virgo cluster, the subtraction is not satisfactory, with $`\chi _{red}^2(13)=`$ 4.41 and a large difference between the r.m.s. and Poisson errors, indicating that the residual point-to-point systematics are at a level higher than the random uncertainties. Thus, e.g., a region of apparent extended emission spanning $`\sim `$ 6 arcmin is evident in the innermost 10 arcmin of the subtracted profile, see Figure 3b. These comparisons clearly demonstrate the difficulties in building a background template from data that are merely contemporaneous with the cluster observation: the only reliable template is that of a time-contiguous, in situ, background.
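The flatness test amounts to a reduced $`\chi ^2`$ of the subtracted profile against zero, with Poisson errors propagated from the raw counts of both fields. A minimal sketch is given below; the per-annulus counts, exposures, areas and re-normalization factor are inputs, and the treatment of the degrees of freedom is an assumption of the sketch.

```python
import numpy as np

def chi2_red_flat(c_on, c_off, exp_on, exp_off, area, renorm=1.0):
    """Reduced chi^2 of (cluster - renorm*offset) radial profile vs zero."""
    b_on = c_on / (exp_on * area)                 # counts -> surface brightness
    b_off = renorm * c_off / (exp_off * area)
    var = (c_on / (exp_on * area) ** 2            # Poisson variance, propagated
           + c_off * (renorm / (exp_off * area)) ** 2)
    diff = b_on - b_off
    return np.sum(diff**2 / var) / (len(c_on) - 1)
```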
In Figure 4a we show the radial profile of the offset background field, now with the center of the annuli placed at the detector position x = +11.5 arcmin occupied by the centroid of A2199 during the cluster pointing. The data are consistent with a flat distribution out to $`\sim `$ 28 arcmin. Quantitatively, the difference between the 0 - 15 and 15 - 28 arcmin background levels is 1.0 $`\pm `$ 1.4 %.
As to the cluster field, we show in Figure 4b the sky radial profile of the raw data, taken from the same region of the DS detector as that of the background (offset) pointing. Comparison of Figure 4b with Figure 4a suggests that the cluster emission extends to $`\sim `$ 21 arcmin, and one could proceed to obtain the cluster signals by subtracting a flat background as determined from the 21 - 28 arcmin region of the cluster field. In the present work, however, we adopt the most conservative approach by performing a point-to-point subtraction of the offset background, with propagation of the background errors. The cluster emission profile thus obtained is shown in Figure 5a, and the resulting soft excess profile in Figure 5b.
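The point-to-point subtraction itself, with the asymptotic re-normalization and propagated Poisson errors, can be summarized as follows (a sketch: c_cl and c_bg are raw counts per annulus, and outer is a boolean mask selecting the 21 - 28 arcmin annuli).

```python
import numpy as np

def subtract_in_situ(c_cl, c_bg, exp_cl, exp_bg, area, outer):
    """Cluster signal = cluster brightness - rescaled offset brightness."""
    b_cl = c_cl / (exp_cl * area)
    b_bg = c_bg / (exp_bg * area)
    k = b_cl[outer].mean() / b_bg[outer].mean()   # match asymptotic levels
    signal = b_cl - k * b_bg
    err = np.hypot(np.sqrt(c_cl) / (exp_cl * area),
                   k * np.sqrt(c_bg) / (exp_bg * area))
    return signal, err
```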
The rising importance of the CSE with radius, as reported in LBM (also Mittaz, Lieu, and Lockman 1998, on A1795), is confirmed by our re-observation of A2199. However, when compared with the original data (Figure 5b here versus Figure 1b of LBM) the CSE beyond 10 arcmin exhibits a steeper upward trend than previously. The reason is that the new data reveal more emission in this region. The difference can be attributed to several features of the re-observation: (a) the availability of the offset pointing enabled us to perform a more accurate background subtraction; (b) the cluster was exposed at the +11.5 arcmin off-axis location, where the DS detector has relatively higher quantum efficiency because it has not been as exposed to bright sources as the usual locations near boresight (Kaastra 1999, see also Figure 1); and (c) the orientation of the DS detector, with the x-axis almost parallel to the celestial N-S direction, is at approximately right angles to that of the first observation - in this way the emission within 10 - 15 arcmin south-east of the cluster center, evident in the current data, was previously missed because it corresponded to large detector y-values, i.e. areas of lower sensitivity. To illustrate, we show in Figure 6 a combined image of the two observations (now totalling an exposure of 100 ksec), adaptively smoothed in the manner described in LBM. The emission in the extreme south-east was not evident from the EUV contours of LBM.
The rising trend of the CSE suggests that its origin is a warm intracluster medium, with cooler gas residing at larger cluster radii, and it carries large mass implications. Do the EUV signals represent the hitherto undetected baryons necessary to bridge the gap between theoretical and observational cosmology? As an example, we calculated the mass of the emitting gas between cluster radii of 12 and 15 arcmin, within the context of an equilibrium thermal model. The large EUV excess there, which is not accompanied by the detection of any soft X-ray excess in the 1/4-keV band of the ROSAT PSPC, constrains the gas temperature to kT $``$ 10 eV; the gas mass is then $``$ 10<sup>14</sup> M<sub>⊙</sub>, on par with that of the dark matter in the same region of the cluster (Siddiqui, Stewart, & Johnstone 1998).
R. Lieu gratefully acknowledges support from NASA’s ADP and EUVE-GI programs.
References
Bowyer, S., Berghöffer, T., & Korpela, E. 1999, in Proc. Workshop on Thermal and Relativistic Plasmas in Clusters of Galaxies, Schlöss Ringberg, Germany, ed. H. Böhringer (astro-ph/9907127).
Cen, R., & Ostriker, J.P. 1999, ApJ, 514, L1.
Kaastra, J.S., Lieu, R., & Mittaz, J.P.D. 1999, A&A, in preparation.
Lieu, R., Mittaz, J.P.D., Bowyer, S., Lockman, F.J., Hwang, C.-Y., & Schmitt, J.H.M.M. 1996a, ApJ, 458, L5.
Lieu, R., Mittaz, J.P.D., Bowyer, S., Breen, J.O., Lockman, F.J., Murphy, E.M., & Hwang, C.-Y. 1996b, Science, 274, 1335.
Lieu, R., Ip, W.-I., Axford, W.I., & Bonamente, M. 1999, ApJ, 510, L25.
Lieu, R., Bonamente, M., & Mittaz, J.P.D. 1999, ApJ, 517, L91.
Mittaz, J.P.D., Lieu, R., & Lockman, F.J. 1998, ApJ, 498, L17.
Morrison, R., & McCammon, D. 1983, ApJ, 270, 119.
Sarazin, C.L. 1999, ApJ, 520, 529.
Siddiqui, H., Stewart, G.C., & Johnstone, R.M. 1998, A&A, 334, 71.
Slezak, E., Durret, F., & Gerbal, D. 1994, AJ, 108, 1996.
Figure Captions
Figure 1. Greyscale map of the surface brightness of the DS as obtained by co-adding the detector image of three separate pointings (see text for details) totalling an exposure of $``$ 85 ksec. This is essentially a sensitivity map of the detector (to background events). The contours are scaled linearly between the maximum and minimum brightness values, which differ by 13 %. The normal convention adopted for the detector axes is also indicated, with tickmarks in units of detector pixels (13 pixels $``$ 1 arcmin) and with the on-axis (boresight) position at pixel coordinates (1024,1024).
Figure 2. The total intensity distribution of the DS along the detector y-axis. The dashed line represents a strip where bright point sources affected the background - this region is therefore not shown. The dotted lines mark those y-limits beyond which the profile steepens considerably. In computing radial profiles only the y pixels within these limits were used. The data were obtained by merging many detector images (see text).
Figure 3. Radial profile of the A2199 cluster field, after subtraction of the corresponding profile of a background field. Both profiles are centered at an identical highly off-axis detector position far away from the detector region employed to observe A2199 (see text). 3a: the background field is that of the A2199 offset pointing; to eliminate the slight ($``$ 4 %) difference between the backgrounds of the cluster and offset pointings, the subtraction was performed after the offset profile was re-normalized such that the average asymptotic (21 - 28 arcmin) background agrees between the two fields - a procedure adopted to extract the actual cluster signals (Figure 5). 3b: the background field is that of a very different sky direction (see text), with the same re-normalization having been applied. Both plots are binned in 3 arcmin intervals to reveal any large-scale features produced by the subtraction. The data used are raw (i.e. no PH thresholding) and areas of known detector artifacts were excluded from analysis. This statement also applies to Figures 4 and 5.
Figure 4. Radial profiles of the offset (4a) and cluster (4b) pointings of the A2199 observation. The radius is measured from the detector position occupied by the emission centroid of A2199 in the cluster field. The dotted line represents the average brightness of the 21 - 28 arcmin region. To allow closer inspection of the cluster emission, the data here are plotted at higher resolution than in Figure 3, with the offset profile displayed in 2 arcmin bins to reduce the larger random uncertainties which result from the smaller exposure.
Figure 5. Radial profile of the A2199 cluster EUV signal (5a) and soft excess (5b), obtained by point-to-point subtraction of the background (offset) profile, with propagation of errors from the background data and after scaling away the $``$ 4 % difference between the 21 - 28 arcmin average background levels of the source and offset pointings. In 5b the C-band fluxes correspond to PSPC channels 18 - 41, and the expected Lex/B to C-band ratio of the hot ICM emission, given by the dashed line, was computed in the manner of LBM (Figure 1b, except that the corresponding line there was lower because the interstellar absorption code adopted here (Morrison & McCammon 1983) removes less of the extragalactic EUV signal than the code used previously). The lower fluxes in the inner radii may be due to intrinsic absorption.
Figure 6. Adaptively smoothed image of the two EUVE observations of A2199. Contours are cluster EUV emission brightness in units of photons arcmin<sup>-2</sup> s<sup>-1</sup>.
“Convergent observations” with the stereoscopic HEGRA CT system
## Abstract
Observations of air showers with the stereoscopic HEGRA IACT system are usually carried out in a mode where all telescopes point in the same direction. Alternatively, one could take into account the finite distance to the shower maximum and orient the telescopes such that their optical axes intersect at the average height of the shower maximum. In this paper we show that this “convergent observation mode” is advantageous for the observation of extended sources and for surveys, based on a small data set taken with the HEGRA telescopes operated in this mode.
The HEGRA collaboration is operating a system of currently five imaging atmospheric Cherenkov telescopes (IACTs) for the stereoscopic observation of VHE cosmic $`\gamma `$-rays. The telescope system is located on the Canary Island La Palma, at the Observatorio del Roque de los Muchachos (ORM), at 2.2 km asl. The system telescopes feature a mirror area of 8.5 m<sup>2</sup> and a focal length of 5 m and are equipped with 271-pixel photomultiplier cameras. Based on the multiple views obtained for each shower, the orientation of the shower axis in space as well as the location of the shower core can be determined. Compared to single IACTs, stereoscopic IACT systems provide superior angular resolution, energy resolution, and background rejection.
During typical observations with the HEGRA IACT system, the optical axes of all telescopes are parallel and either point directly to the source, or – in the so-called wobble mode – at a point displaced by $`\pm 0.5^{}`$ in declination relative to the source. In the latter case, the rate in a region displaced by the same distance from the optical axis, but in the opposite direction, is used to estimate off-source background rates.
With all telescopes pointing in exactly the same direction, both the operation of the system and the data analysis are simplified, but one may ask whether the detection characteristics could be improved by canting the telescopes towards each other, such that their optical axes intersect roughly at the height of the shower maximum. Such an alignment of telescopes would guarantee that the most luminous region of an air shower is optimally viewed by all telescopes simultaneously.
The two alternatives are shown in Fig. 1, which also serves to illustrate the trigger characteristics of IACT arrays. To first approximation, an individual IACT will trigger on an air shower if two conditions are fulfilled:
1. the telescope has to be located within the light pool of the shower, with its typical radius of about 120 m, and
2. the shower maximum has to be within the field of view of the camera.
For the HEGRA telescopes, with their $`4.3^{}`$ field of view, the latter condition implies that the shower maximum – at TeV energies typically located 6 km above the telescopes – should be within 225 m from the optical axis of the telescope. For showers propagating parallel to the optical axis, the second condition is automatically fulfilled, once a telescope is within the light pool of the air shower. The camera field of view adds an additional constraint only for showers at angles of more than $`1^{}`$ relative to the optical axis (at least as far as triggering is concerned – to avoid truncation of images, one may want to require in addition in the subsequent image analysis that the centroid of the image is at least $`0.5^{}`$ away from the edge of the field of view; this will still result in a field of view of about 175 m radius at the shower maximum). Therefore, canting of telescopes is most likely not an issue for studies of point sources near the center of the field of view; it may however be important for the observation of extended sources as well as for surveys of larger areas of the sky, where it is important to maximize sensitivity over a large solid angle. For two telescopes, the situation is relatively obvious from Fig. 1: for parallel pointing, the range of locations of the shower maximum, and hence the accessible solid angle, is much more restricted than for canted telescopes. For three or more telescopes, the conclusion depends on the locations of the telescopes and the trigger conditions; parallel pointing will reduce the solid angle for a coincidence of all $`N`$ telescopes, but may increase the angle if only 2 out of $`N`$ telescopes are required in the trigger and in the subsequent analysis.
Estimates of detection rates were carried out for the actual geometry of the HEGRA IACT system, with telescopes located at three of the four corners of a square of about 100 m side length, and another telescope located at the center of the square (the remaining corner telescope had at that time – summer 1998 – an older camera and was not yet included in the IACT system). The rate estimates were based on the simplified model discussed above, assuming a radius of the light pool of about 120 m and a usable field of view (without edge distortion of the images) of $`3.6^{}`$. Two cases were compared: 1) parallel optical axes of all telescopes, and 2) telescopes canted such that the axes intersect in a height of 6 km, with the nominal pointing of the IACT system defined as the pointing of the central telescope. The results – event rates for a given source flux and observation time as a function of the distance of the source from the optical axis of the central telescope – are shown in Fig. 2(a)-(d). The simulations indicate almost identical total detection rates for the two pointing modes (Fig. 2(a)). However, for sources more than $`0.5^{}`$ from the optical axis of the system, the “convergent observation mode” provides a significantly larger fraction of four-telescope events (Fig. 2(d)). Since both the angular resolution and the cosmic-ray rejection improve with the number of triggered telescopes, the convergent mode is clearly favorable.
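A toy Monte Carlo along the lines of this simplified model can be written in a few lines: throw shower cores, apply the light-pool and field-of-view conditions for each telescope, and histogram the trigger multiplicity for the two pointing modes. The sketch below uses the telescope layout and the numerical values quoted in the text; the event generator and throw area are illustrative only.

```python
import numpy as np

TELS = np.array([[0, 0], [100, 0], [0, 100], [50, 50]], float)  # positions (m)
R_POOL = 120.0                 # Cherenkov light-pool radius (m)
H_MAX = 6000.0                 # height of the shower maximum (m)
R_FOV = 175.0                  # usable FOV radius at 6 km (m), from the text

def multiplicity(psi_deg, convergent, n=200000, seed=1):
    """Histogram of trigger multiplicities for showers from a point
    source `psi_deg` off the system axis (small-angle approximation)."""
    rng = np.random.default_rng(seed)
    cores = rng.uniform(-400, 400, (n, 2))               # shower cores (m)
    xmax = cores + [H_MAX * np.radians(psi_deg), 0.0]    # shower max at 6 km
    # optical-axis position at 6 km: above each telescope (parallel mode)
    # or above the central telescope for all of them (convergent mode)
    axes = np.tile(TELS[3], (len(TELS), 1)) if convergent else TELS
    in_pool = np.linalg.norm(cores[:, None] - TELS, axis=2) < R_POOL
    in_fov = np.linalg.norm(xmax[:, None] - axes, axis=2) < R_FOV
    return np.bincount((in_pool & in_fov).sum(axis=1), minlength=5)

for psi in (0.0, 0.5, 1.0, 1.5):
    for conv in (False, True):
        c = multiplicity(psi, conv)
        mode = "convergent" if conv else "parallel  "
        print(f"offset {psi:3.1f} deg, {mode}: N>=2 {c[2:].sum():6d}, N=4 {c[4]:6d}")
```

Even this crude model reproduces the qualitative pattern of Fig. 2: comparable total rates, but a relative gain in four-telescope events at large source offsets for the convergent geometry.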
To verify these model predictions experimentally, three hours of observations of the Crab nebula were performed with canted telescopes. The cosmic-ray background provides a uniform flux of particles and allows one to study detection rates as a function of the distance to the optical axis of the IACT system. At least qualitatively, these characteristics should be similar for hadronic showers and for the more interesting $`\gamma `$-ray induced showers. The limited observation time and low counting rate prevented us from scanning the field of view using the Crab nebula as a $`\gamma `$-ray source.
Fig. 3 illustrates the effect of the canting: the positions of the images in the different cameras coincide, whereas for the parallel pointing mode they are displaced by about $`\delta d/h1^{}`$ along the direction connecting the locations of the telescopes. Here, $`d`$ is the spacing of the telescopes and $`h`$ the height of the shower maximum. As a consequence, in convergent mode it is unlikely that images in some of the telescopes are truncated; either all telescopes have well-contained images, or all images suffer from edge problems.
Due to slight differences in the weather conditions, the telescope trigger rates varied somewhat between the observations taken in convergent tracking mode and the reference data set. Since the pointing mode may influence the trigger rates, one cannot simply normalize the data sets on the basis of the raw trigger rates. To derive the correct normalization factor, the field of view in convergent mode was artificially truncated to $`2.4^{}`$ diameter, and in the reference data set – with parallel pointing – the convergent pointing was mocked up by selecting in software a $`2.4^{}`$ field of view shifted by the canting angle. After these software trigger cuts, the detection rates can be compared, resulting in a $`24\pm 2\%`$ correction.
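The cross-normalization amounts to applying the same circular software aperture to both data sets, with the aperture center shifted by the canting angle in the parallel-pointing reference run, and comparing the surviving rates. A sketch follows; the event lists and exposure times are invented for illustration, and the canting angle is estimated from the 6 km intersection height and the corner-to-center telescope distance.

```python
import numpy as np

def rate_in_fov(cam_x, cam_y, duration, center=(0.0, 0.0), diam=2.4):
    """Event rate surviving a circular software FOV cut (degrees)."""
    r = np.hypot(np.asarray(cam_x) - center[0], np.asarray(cam_y) - center[1])
    return np.count_nonzero(r < diam / 2.0) / duration

# canting angle for a corner telescope ~71 m from the center at 6 km
cant = np.degrees(np.arctan(70.7 / 6000.0))        # about 0.67 deg

rng = np.random.default_rng(3)
conv = rng.uniform(-2.15, 2.15, (2, 50000))        # convergent-mode image positions
para = rng.uniform(-2.15, 2.15, (2, 46000))        # parallel-mode reference run
norm = rate_in_fov(*conv, 3600.0) / rate_in_fov(*para, 3600.0, center=(cant, 0.0))
print("normalization factor:", round(norm, 3))
```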
The corrected cosmic-ray detection rates as a function of the angle relative to the system axis are shown in Fig. 2(e) for the total rate, and separately for 2-telescope (f), 3-telescope (g), and 4-telescope events (h). Here, only images within the central $`3.6^{}`$ of the field of view were accepted, to exclude truncated images. The observed pattern matches that predicted by the simple model: very similar total rates, but a clear enhancement of the 4-telescope rate at larger angles in the case of the convergent pointing mode, at the expense of 2- and 3-telescope events. For the 4-telescope events, the diameter of the effective field of view is increased by about $`0.8^{}`$.
During the data taking in convergent mode, the Crab nebula was positioned $`0.5^{}`$ off the system optical axis. Under these conditions, one would not expect any difference in detection rates between the two modes, and indeed the Crab signals in the two modes are well consistent within the statistical errors of about 20%.
In summary, for stereoscopic IACT systems, the convergent tracking mode – canting the telescopes towards each other such that their optical axes intersect at the height of the shower maximum – improves the detection capabilities in particular for sources near the edge of the field of view, and is advised for observations of extended sources and for surveys.
Acknowledgments. The support of the German Ministry for Research and Technology BMBF and of the Spanish Research Council CICYT is gratefully acknowledged. We thank the Instituto de Astrofisica de Canarias for the use of the site and for providing excellent working conditions. We gratefully acknowledge the technical support staff of Heidelberg, Kiel, Munich, and Yerevan.
Clustering in the Caltech Faint Galaxy Redshift Survey
## 1 Introduction
The defining features of the Caltech Faint Galaxy Redshift Survey that distinguish it from existing and ongoing or planned surveys are that it reaches fainter magnitudes ($`K20`$ mag, $`R24`$ mag) and that completeness is emphasized rather than sparse sampling over a large field. Objects are observed irrespective of morphology so as not to exclude AGNs or other unusual extragalactic sources.
The CFGRS is working on two fields. The first field is centered at J005325+1234. The central survey field measures $`2\times 7.3`$ arcmin<sup>2</sup> with a statistical sample containing 195 infrared-selected objects complete to $`K=20`$ mag. After several seasons of observing, a redshift completeness of 84% was achieved. There are 139 galaxies with redshifts as well as 24 spectroscopically confirmed Galactic stars and 32 objects for which spectroscopic redshifts cannot be assigned (most of these are EROs) in the $`K20`$ sample. There are 13 additional objects with redshifts (including two more stars) that are just outside the field boundary or are within the field but too faint. Thus in this field there are 150 galaxies with redshifts, which lie in the range [0.173, 1.44].
In addition, six-color broad band photometry from 0.36 to 2.2$`\mu `$ is available for all these objects. For the set of galaxies with redshifts, rest frame spectral energy distributions can be derived (assuming a cosmological model) which reach far into the UV for the higher redshift objects.
A suite of papers containing all the material for the field at J005325+1234 and an analysis thereof has been published (Cohen et al. 1999a, Cohen et al. 1999b, Pahre et al. 1999).
Our second field is the HDF-North (Williams et al. 1996). Here, as described by Cohen et al. (2000), after several years of effort, the spectroscopy is 92% complete to $`R=24`$ in the HDF itself, and 92% complete to $`R=23`$ in the Flanking Fields covering a circle with diameter 8 arcmin centered on the HDF. Cohen et al. (2000) contains a large collection of previously unpublished redshifts as well as a compilation of all published data for the region of the HDF; see Cohen et al. (2000) for details. The total sample of objects with redshifts in the region of the HDF now exceeds 660.
Accompanying the spectroscopic paper is a four-color photometric catalog of the region of the HDF with high quality astrometric positions, Hogg et al. (2000). The $`R`$ band of this catalog was used to define the spectroscopic sample completeness and the spectroscopic sample has been matched onto this $`R`$-band catalog.
To facilitate work on clustering at a scale larger than that permitted by the small solid angle subtended by the two main fields, each of the two main fields is surrounded by outrigger fields separated by up to 30 arcmin from the main field. Redshifts exist for about 500 galaxies in the outrigger fields around J005325+1234 and about 300 galaxies in the outrigger fields 30 arcmin N, S, E and W of the HDF.
We now focus exclusively on the issue of galaxy clustering in these samples.
## 2 Evidence for the Presence of Groups to $`z1.2`$
There is clear evidence for the presence of bound groups of galaxies out to $`z1.2`$. The best place to see this is in the HDF sample. We use a Gaussian kernel smoothing to define groups. A $`\sigma `$ of 300 km s<sup>-1</sup> is used for the signal and a much larger smoothing width is used to define the mean distribution in $`z`$ of the sample. The ratio of the two functions is the overdensity. Figure 1 shows the overdensity of the HDF sample as a function of $`z`$ for $`z<1.25`$. The membership and velocity dispersions of the groups are calculated as well; see Cohen et al. (2000) for details. Every field that we have looked at, both the main fields and the outrigger fields, shows similar small scale structure.
Figure 1. The overdensity is shown as a function of $`z`$ for the HDF sample. The thin line is the heavily smoothed distribution of galaxies.
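The group finder is simply the ratio of two Gaussian-kernel estimates of the redshift distribution, one with $`\sigma `$ = 300 km s<sup>-1</sup> and one much broader. A minimal sketch follows; the broad smoothing width, the peak threshold, and the toy redshift catalog are illustrative choices, not those of the actual analysis.

```python
import numpy as np

C_KMS = 299792.458

def overdensity(z, z_grid, sigma_v=300.0, sigma_broad=30000.0):
    """Ratio of a narrow to a broad Gaussian-kernel estimate of the
    redshift distribution; widths in km/s, dz = sigma (1 + z) / c.
    Normalization constants cancel in the ratio."""
    def kde(width):
        dz = width * (1.0 + z) / C_KMS             # per-galaxy kernel width
        return (np.exp(-0.5 * ((z_grid[:, None] - z) / dz) ** 2) / dz).sum(axis=1)
    return kde(sigma_v) / kde(sigma_broad)

# toy catalog: two tight "groups" on a smooth background
rng = np.random.default_rng(2)
z = np.concatenate([rng.uniform(0.1, 1.2, 300),
                    rng.normal(0.58, 0.001, 15),
                    rng.normal(0.85, 0.001, 12)])
grid = np.linspace(0.1, 1.2, 2000)
rho = overdensity(z, grid)
peaks = (rho > 3.0) & (rho >= np.roll(rho, 1)) & (rho >= np.roll(rho, -1))
print("overdensity peaks near z =", np.round(grid[peaks], 3))
```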
## 3 A Formal Analysis of Small Scale Clustering
David Hogg has carried out an analysis of the two-point correlation function from the CFGRS database as it existed roughly 6 months ago. His results are summarized in Figure 2, which shows the correlation length deduced from our work for the 1 x 1 deg<sup>2</sup> area of the field at J005325+1234 and for the 8 arcmin diameter field of the region of the HDF. The results are shown for three bins in $`z`$, and compared to those of several recent surveys.
This figure supports and extends earlier work which suggested that the correlation length decreases with $`z`$. There are several important caveats. There appear to be substantial field-to-field variations, particularly in the lower $`z`$ bins, with the HDF showing abnormally low correlation lengths. Perhaps this has occurred because the HDF was selected to be devoid of bright galaxies. Also galaxies whose spectra are dominated by absorption lines are more strongly clustered than are galaxies with emission lines.
Figure 2. The correlation length is shown as a function of $`z`$ for the HDF sample and for the 1x1 deg<sup>2</sup> sample in the field J005325+1234. Published measurements for $`r_0`$ are all converted to H<sub>0</sub> = 60 km s<sup>-1</sup> Mpc<sup>-1</sup>.
## 4 Evidence for the Presence of Large Scale Structure
There are two ways to approach this issue using our data. Given the very large sample in the HDF we can assume each group/small cluster represents the intersection of a “wall” similar to those seen locally (e.g. de Lapparent, Geller & Huchra 1986) with the line of sight to the HDF. One then examines the distance in comoving coordinates between adjacent redshift peaks in the statistically complete sample of groups to obtain Figure 3. This figure (from Cohen et al. 2000) shows that a characteristic separation of about 70 Mpc in our cosmology (H<sub>0</sub> = 60 km s<sup>-1</sup> Mpc<sup>-1</sup>, $`\mathrm{\Omega }_M=0.3`$, $`\mathrm{\Lambda }=0`$) seems reasonable.
Figure 3. The difference in comoving distance along the line of sight between adjacent redshift peaks is shown for the HDF sample smoothed by a Gaussian with $`\sigma `$ = 10 Mpc. If differences less than 50 Mpc are omitted, the dashed line is obtained. The vertical lines have a spacing of 68 Mpc. (H<sub>0</sub> = 60 km s<sup>-1</sup> Mpc<sup>-1</sup>.)
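For the cosmology quoted above, the comoving distance is a one-dimensional integral and the wall separations are then simple differences. A sketch follows; the listed peak redshifts are invented for illustration, not the measured HDF peaks.

```python
import numpy as np
from scipy.integrate import quad

H0, OMEGA_M = 60.0, 0.3            # km/s/Mpc; open model with Lambda = 0
OMEGA_K = 1.0 - OMEGA_M
C_KMS = 299792.458

def comoving_distance(z):
    """Line-of-sight comoving distance (Mpc)."""
    e = lambda zz: np.sqrt(OMEGA_M * (1 + zz) ** 3 + OMEGA_K * (1 + zz) ** 2)
    return (C_KMS / H0) * quad(lambda zz: 1.0 / e(zz), 0.0, z)[0]

# invented peak redshifts, for illustration only
peaks = [0.32, 0.41, 0.52, 0.68, 0.85, 0.96, 1.02]
d = np.array([comoving_distance(z) for z in peaks])
print("adjacent peak separations (Mpc):", np.round(np.diff(d), 1))
```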
The second approach is to look for correlations between the main field and the outrigger fields across the full 1 x 1 deg<sup>2</sup> area on the sky. While such attempts are still preliminary, the results are tantalizing. First, one needs to establish what tolerance is allowed in $`\mathrm{\Delta }(z)`$ across this field. This is discussed in Cohen et al. (2000); it is about 0.012 for $`z1`$. Figure 4 shows a comparison of the HDF with the sum of the outrigger fields around the HDF. It provides suggestive evidence for partial coherence across a scale of 1 degree. A lot more work, and of course a bigger sample of objects with redshifts, is going to be required to produce a definitive result.
David Hogg and Roger Blandford are working on a formal analysis of this data. The non-continuous sample fields and the selection effects for the sample within each of the fields make this quite difficult.
Figure 4. The overdensity group finding function is shown for the HDF for $`0.5<z<1.1`$. The overdensity function for the four outrigger fields taken together is shown shifted upward. The short horizontal line denotes the expected tolerance in matching peaks across fields 1 deg apart.
## 5 Summary
Our survey demonstrates very strongly that galaxies are clustered on a small scale at all redshifts up to at least $`z1.2`$, and that these structures are very similar in many respects such as size, velocity dispersion, and total luminosity to groups and small clusters of galaxies as seen locally.
In terms of large scale structure, our survey suggests a characteristic scale near 70 Mpc (for H<sub>0</sub> = 60 km s<sup>-1</sup> Mpc<sup>-1</sup>). The physical processes that could lead to such a scale being imprinted on the fluctuation spectrum are discussed in Szalay (1999). The characteristic scale we have found is significantly smaller than the scales deduced locally: 128 Mpc from the LCRS (Shectman et al. 1996) by Doroshkevich et al. (1996), and 215 Mpc by Broadhurst et al. (1990). We do not find the strict periodicity claimed by Broadhurst et al.
We offer up one final comment. Because of the ubiquitous presence of groups of galaxies, and the relatively small $`z`$ separation between them, it is going to be quite difficult to use photometric redshift techniques to work on galaxy clustering problems. Photo-$`z`$s can be accurate to perhaps 10%, adequate for galaxy luminosity functions, but that is not sufficient here. This problem is going to have to be resolved the old-fashioned way, with real spectroscopic redshifts, big surveys, and large samples. It’s going to take a while, even with the current generation of 8 - 10 m telescopes.
I thank my collaborators Roger Blandford of Caltech and David Hogg of the Institute for Advanced Study, with whom most of this work was done. I am grateful for partial support from STScI/NASA grant AR-06337.12-94A.
Helium mixtures in nanotube bundles
## Abstract
An analogue of Raoult’s law is determined for the case of a <sup>3</sup>He–<sup>4</sup>He mixture adsorbed in the interstitial channels of a bundle of carbon nanotubes. Unlike the case of He mixtures in other environments, the ratio of the partial pressures of the coexisting vapor is found to be a simple function of the ratio of concentrations within the nanotube bundle.
PACS numbers: 61.48.+c, 67.60.-g, 67.70.+n
Helium atoms are strongly attracted to and absorbed within nanotube bundles. For tubes at the experimentally observed diameter of $``$14 Angstroms, the most energetically favorable sites lie within interstitial channels bounded by three nanotubes; these tubes form a hexagonal array. For a system of adsorbed <sup>4</sup>He atoms, the individual atoms are highly localized by the periodic potential due to the surrounding nanotubes. The He atoms’ mutual interactions induce a condensation which is well described by an anisotropic lattice gas model. Because of the well-localized atomic states, the transition temperature to the condensed state is nearly the same for <sup>3</sup>He and <sup>4</sup>He. Here we analyze a mixture of these isotopes and show that the system forms an ideal solution, wherein the ratio of the partial pressures of the coexisting vapors of the two components satisfies an analogue of Raoult’s law.
Within the grand canonical ensemble, the term contributed to the partition function by any specific quantities N<sub>3</sub> and N<sub>4</sub> of <sup>3</sup>He and <sup>4</sup>He is
$$p(N_3,N_4)\mathrm{exp}(\beta [N_3(\mu _3ϵ_3)+N_4(\mu _4ϵ_4)])\frac{N!}{N_3!N_4!}Z_I.$$
(1)
Here $`N=N_3+N_4\theta N_s`$, $`\theta `$ is the total occupation fraction of the adsorption sites. $`N_s`$ is the number of adsorption sites, $`ϵ_{3,4}`$ is the single particle energy in the interstitial channel, $`\beta =1/(k_BT)`$ is the inverse temperature, $`\mu _{3,4}`$ is the chemical potential of <sup>3,4</sup>He, and $`Z_I`$ is the canonical partition function of $`N`$ indistinguishable He atoms (omitting the single-particle energy). The energy $`ϵ_{3,4}`$ assumes a single value for each species because the lowest He bands are very narrow ($``$0.2 K) and well-separated ($``$ 100 K) from higher bands. We can write
$$Z_I=\mathrm{exp}(\beta Nf_I),$$
(2)
where $`f_I(\theta )`$ is the Helmholtz free energy per atom for such a system of interacting indistinguishable particles. We need not specify the form of $`f_I(\theta )`$ here; its behavior was described in previous work using an anisotropic lattice gas model (based in turn on studies by Fisher and Graim and Landau).
The equilibrium number of <sup>3</sup>He atoms, $`<N_3>`$, follows from maximizing $`p(N_3,N_4)`$:
$$\left[\frac{\mathrm{ln}p(N_3,N_4)}{N_3}\right]_{T,\mu _{3,4},N_4}=0.$$
(3)
Hence we obtain the condition for the chemical potential:
$`\mu _3`$ $`=`$ $`ϵ_3+\beta ^1\mathrm{ln}x+f_I(\theta )+\theta f_I^{}(\theta )`$ (4)
$`=`$ $`ϵ_3+\beta ^1\mathrm{ln}x+g^{}(\theta )`$ (5)
$`g(\theta )`$ $``$ $`\theta f_I(\theta ),`$ (6)
where $`x=N_3/(N_3+N_4)`$ is the <sup>3</sup>He concentration and a prime refers to differentiation with respect to $`\theta `$. The coexisting three dimensional vapor (assumed ideal) satisfies
$$\mu _3=\beta ^1\mathrm{ln}(n_3\lambda _3^3)=\beta ^1\mathrm{ln}(\beta P_3\lambda _3^3),$$
(7)
where $`n_3`$ is the <sup>3</sup>He particle density in the vapor phase, $`P_3`$ is the <sup>3</sup>He partial pressure, and $`\lambda _3=(2\pi \mathrm{}^2\beta /m_3)^{1/2}`$ is the de Broglie wavelength for the <sup>3</sup>He atoms. The resulting isotherm is then
$$P_3\lambda _3^3=x\mathrm{exp}[\beta (ϵ_3+g^{}(\theta ))].$$
(8)
In similar fashion,
$$P_4\lambda _4^3=(1x)\mathrm{exp}[\beta (ϵ_4+g^{}(\theta ))],$$
(9)
which yields a remarkably simple relation between the isotopic partial pressures:
$$\frac{P_3}{P_4}=\left(\frac{3}{4}\right)^{3/2}\frac{x}{1x}e^{\beta (ϵ_3ϵ_4)}.$$
(10)
The ratio of partial pressures is independent of both $`\theta `$ and the form of the interaction between the atoms. These quantities disappear from the pressure ratio since the interaction between He atoms is nearly isotope-independent due to the strong atomic localization. This ratio is an expression of Raoult’s law of solutions. Because $`ϵ_3ϵ_417`$ K, $`P_3>P_4`$ except at small $`x`$ and high $`T`$.
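Eq. (10) is trivial to evaluate numerically. A short illustration follows, taking the single-particle energy difference of 17 K from the text and assuming <sup>3</sup>He to be the less strongly bound isotope, so that the Boltzmann factor enhances P<sub>3</sub> at low temperature.

```python
import numpy as np

DELTA_E = 17.0   # (epsilon_3 - epsilon_4)/k_B in kelvin, from the text

def pressure_ratio(x, T):
    """P3/P4 of Eq. (10): mass factor, concentration ratio, and the
    Boltzmann factor of the single-particle energy difference."""
    return (3.0 / 4.0) ** 1.5 * x / (1.0 - x) * np.exp(DELTA_E / T)

for T in (2.0, 5.0, 10.0, 20.0):
    print(f"T = {T:4.1f} K, x = 0.1: P3/P4 = {pressure_ratio(0.1, T):.3g}")
```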
This result coincides with that obtained from a noninteracting (band) model. In this case the average number of <sup>4</sup>He particles is
$$N_4=𝑑ϵ\frac{𝒩(ϵ)}{e^{\beta (ϵ\mu _4)}1}$$
(11)
where $`𝒩(ϵ)`$ is the density of states as a function of the single-particle energy $`ϵ`$. In the limiting case of very low coverage relevant to this noninteracting model (i.e. $`exp(\beta \mu )>>1`$),
$$N_4e^{\beta \mu _4}𝑑ϵ𝒩(ϵ)e^{\beta ϵ}.$$
(12)
For the present case of a very narrow band we can approximate the density of states by a delta function to obtain
$$N_4e^{\beta (\mu _4ϵ_4)}\frac{L}{a},$$
(13)
where $`a`$ is the lattice constant and $`L`$ is the total length of interstitial channel. The chemical potential is then:
$$\mu _4=ϵ_4+\beta ^1\mathrm{ln}\left(\frac{N_4a}{L}\right).$$
(14)
Taking the same vapor chemical potential as before, and following a similar analysis for <sup>3</sup>He, one again obtains Eqn. (10) as the isotopic ratio of the partial pressures. This confirms our expectation that the pressure ratio in a localized lattice gas model with arbitrary interactions coincides with that obtained in the appropriate noninteracting band model. At high density the noninteracting model fails, so a comparison is not appropriate.
These results depend upon the isotope-independence of the He–He interaction, which we now address in detail. Three effects could contribute to an isotope dependence in the He–He interaction: differences in zero-point motion (primarily along the axis of the channel), the magnetic interaction in <sup>3</sup>He, and the exchange interaction. The contribution from zero-point motion can be described by a Hartree interaction,
$$V_H(a)=𝑑\stackrel{}{r}_1n_1(\stackrel{}{r}_1)𝑑\stackrel{}{r}_2n_2(\stackrel{}{r}_2)u(r_{12}),$$
(15)
where $`n_1(\stackrel{}{r}_1)`$ and $`n_2(\stackrel{}{r}_2)`$ are the densities at neighboring sites separated by a distance $`a`$ in the same channel, and $`u(r_{12})`$ is the interaction potential between two atoms. Since the densities are well-confined, we can Taylor expand the integrand (with $`z_{1,2}`$ the axial coordinate and $`\rho _{1,2}`$ the transverse radial coordinate). Keeping terms to second order, we obtain
$$V_H(a)=u(a)+\frac{1}{2a}u^{}(a)(<\rho _1^2>+<\rho _2^2>)+\frac{1}{2}u^{\prime \prime }(a)(<z_1^2>+<z_2^2>).$$
(16)
Using the single-particle wavefunctions for <sup>3</sup>He and <sup>4</sup>He in the interstitial channel of an (18,0) tube lattice, we obtain a correction $`\mathrm{\Delta }VV_H(a)u(a)`$ of -0.488 K, -0.490 K, and -0.489 K, respectively, for the <sup>4</sup>He–<sup>4</sup>He, <sup>3</sup>He–<sup>3</sup>He, and <sup>3</sup>He–<sup>4</sup>He interactions. Although the magnitude of this correction reaches 25% of the bare He–He interaction, it introduces only a tiny distinction between the isotopes.
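The structure of Eq. (16) can be illustrated numerically with a Lennard-Jones He–He potential. In the sketch below, the well parameters are standard literature values, while the site spacing and the Gaussian spreads are rough placeholder guesses rather than the actual channel wavefunctions, so the output only roughly approximates the $``$ -0.49 K shift quoted above.

```python
import numpy as np

EPS, SIG = 10.22, 2.556       # He-He Lennard-Jones depth (K) and size (angstrom)

def u(r):
    return 4.0 * EPS * ((SIG / r) ** 12 - (SIG / r) ** 6)

def du(r, h=1e-4):            # numerical first derivative
    return (u(r + h) - u(r - h)) / (2.0 * h)

def d2u(r, h=1e-4):           # numerical second derivative
    return (u(r + h) - 2.0 * u(r) + u(r - h)) / h ** 2

def hartree_shift(a, rho2, z2):
    """Delta V = V_H(a) - u(a) from Eq. (16), with equal spreads
    <rho^2> = rho2 and <z^2> = z2 (angstrom^2) on both sites."""
    return du(a) / (2.0 * a) * (2.0 * rho2) + 0.5 * d2u(a) * (2.0 * z2)

a = 4.2                        # site spacing (angstrom); placeholder guess
print("u(a)    =", round(u(a), 3), "K")
print("Delta V =", round(hartree_shift(a, rho2=0.10, z2=0.10), 3), "K")
```

The negative sign of the shift comes from the second derivative of the potential, which is negative beyond the inflection point of the Lennard-Jones well.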
The magnetic energy of <sup>3</sup>He is also negligible ($``$ nK) on this scale due to the very small nuclear moment of <sup>3</sup>He. As to the exchange of He atoms between different sites, one might expect this to be significantly different for the two species. However, the very similar effective masses and band widths for the two isotopes suggest that the effects of exchange, too, are nearly the same for both species.
In summary, we have found a simple expression for the ratio of partial pressures of the ambient vapor of He isotopes when exposed to adsorption sites within bundles of carbon nanotubes. To our knowledge, this expression has no precedent in the field of <sup>3</sup>He–<sup>4</sup>He mixtures. In other known adsorption situations, the single-particle wave functions are not sufficiently localized to justify the present assumption that the effects of interactions are nearly isotope-independent. One can contrast the localized wave functions in the present case with those of He on graphite, wherein the states are quite delocalized. The band masses are $`m^{}/m18`$ in the nanotube environment and $`1.05`$ on graphite. It is plausible that other nanoscale porous media, such as zeolites, exhibit similar behavior of adsorbed He isotopic mixtures.
We thank V. Bakaev and W. A. Steele for helpful discussions. This research has been supported by the Army Research Office, the National Science Foundation under grant DMR-9876232, and the donors of the Petroleum Research Fund, administered by the American Chemical Society.
A study of the superconducting gap in $`R`$Ni<sub>2</sub>B<sub>2</sub>C ($`R`$ = Y, Lu) single crystals by inelastic light scattering
## I Introduction
The rare-earth borocarbides are believed to be conventional BCS-type superconductors with a number of interesting properties that are not fully understood. Borocarbides are similar to intermetallic superconductors such as V<sub>3</sub>Si and Nb<sub>3</sub>Sn in that they have a relatively strong electron-phonon coupling constant and a moderately large density of states at the Fermi level. While it is generally agreed that the electron-phonon interaction is the underlying mechanism for superconductivity in these systems, the responsible phonons are claimed to be either a high-energy phonon (boron A<sub>1g</sub> near 850 cm<sup>-1</sup>) or a low-energy soft-mode phonon. Additionally, the borocarbides have relatively simple tetragonal crystal structure, yet they show a variety of physical properties that have been active research topics over the past years. For example, some of the borocarbides exhibit magnetic ordering and superconductivity at around the same temperature, offering the possibility of studying the interplay between superconductivity and magnetism.
Even the non-magnetic rare-earth borocarbides $`R`$Ni<sub>2</sub>B<sub>2</sub>C ($`R`$ = Y, Lu) show rich superconducting phenomena, such as anisotropic upper critical fields and angular dependence of in-plane magnetization, a hexagonal-to-square vortex lattice transition, and phonon-softening at a finite wave vector where strong Fermi-surface nesting is suggested. The results of inelastic neutron scattering measurements are on soft $`q0`$ phonons, which would not be detected using Raman spectroscopy.
A study of the low-energy excitation spectra would clearly lead to a more complete understanding of the electronic interaction in these systems. The superconducting gap has been a major subject of research in superconductivity. Up to now, the gap in borocarbides has been studied by tunneling spectroscopy and infra-red (IR) reflectivity which are insensitive to the wave vector-space dependence of the gap. Electronic Raman spectroscopy is sensitive to the wave vector dependence of the gap. When the optical penetration depth exceeds the superconducting coherence length, it yields a susceptibility for two quasi-particles of essentially zero total wave vector. The quasi-particles that contribute to the response are those with wave vectors close to the Fermi surface for which the Raman vertex has a large modulus squared. This vertex equals the projection onto the polarization vectors of the incident and scattered photons of a tensor closely related to the inverse effective mass tensor. Manipulation of the polarization vectors changes the Raman symmetry and selectively probes the gap near certain regions of the Fermi surface. Raman spectroscopy studies have played an important role in understanding superconductivity of conventional and exotic superconductors. For example, superconductivity-induced redistribution of the electronic continua of the high-T<sub>c</sub> superconductors led to important clues as to the d-wave nature of the order parameter. However, there remain unsolved puzzles regarding different values of the 2$`\mathrm{\Delta }`$ peak positions in different scattering geometries, the power-law behavior of the scattering response below 2$`\mathrm{\Delta }`$ and its relation to the d-wave nature of the order parameter, and to the role of impurities.
Until now there has been only one kind of “conventional” superconductors ($`A15`$) whose gap was measured by electronic Raman, while there are numerous oxide superconductors measured by the same technique. In other words, we do not really know how the conventional superconductors behave in the electronic Raman studies. Without a sound background for comparison, many of the studies of “unconventional” behavior of the exotic superconductors would be meaningless. From number of aspects (including our own measurements), borocarbides seem to behave conventionally. Thus borocarbides would be suitable choice of superconductor for investigation usingelectronic Raman spectroscopy.
In this paper, we report an observation of clear redistribution of the electronic continua in different scattering geometries in the $`R`$Ni<sub>2</sub>B<sub>2</sub>C ($`R`$ = Y, Lu) borocarbides in the superconducting state. 2$`\mathrm{\Delta }`$-like peaks were observed in the superconducting state in A<sub>1g</sub>, B<sub>1g</sub>, and B<sub>2g</sub> Raman symmetry from single crystals. The peaks in A<sub>1g</sub> and B<sub>2g</sub> symmetries are significantly sharper and stronger than the peak in B<sub>1g</sub> symmetry. The temperature dependence of the frequencies of the peaks shows typical BCS-type behavior of the superconducting gap, but the apparent values of the $`2\mathrm{\Delta }`$ gap are strongly anisotropic for both systems. In contrast to what we normally expect from conventional superconductors, we find reproducible scattering strength below the 2$`\mathrm{\Delta }`$-like peak in all the scattering geometries. This is the first Raman measurement that shows finite scattering below the gap-like peaks of the “conventional” superconductors. Previous measurements on $`A15^{}s`$ were too noisy to detect any finite scattering below the gap.
## II Experiment
All the samples measured in this work were single crystals grown by the flux-growth method. Samples were characterized by various measurements, including resistivity versus temperature, magnetization versus temperature, and neutron scattering. The crystal structure of $`R`$Ni<sub>2</sub>B<sub>2</sub>C is body-centered tetragonal, space group $`I4/mmm`$, and phononic Raman analyses have been made earlier. Raman spectra were obtained in a pseudo-backscattering geometry using a custom-made subtractive triple-grating spectrometer designed for very small Raman shifts and ultra-low intensities. 3 mW of 6471 Å Kr-ion laser light was focused onto a spot of $`100\times 100`$ $`\mu `$m<sup>2</sup>, which results in heating of the spot above the temperature of the sample environment. The temperature of the spot on the sample surface was estimated by solving the heat-diffusion equation. $`\mathrm{\Delta }`$T, the rise in temperature, is proportional to the inverse of the thermal conductivity of the sample. $`\mathrm{\Delta }`$T is largest at the lowest temperatures because the thermal conductivity is smaller there. The estimated $`\mathrm{\Delta }`$T is 2.7 K at 4 K and 0.9 K at 14 K for the YNi<sub>2</sub>B<sub>2</sub>C single crystal. The ambient temperature at which the Raman continua begin to show the redistribution was determined to be 14 K, which is in agreement with this estimate. All the spectra have been corrected for the Bose factor. They are therefore proportional to the imaginary part of the Raman susceptibility.
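As an aside, the size of the heating effect can be reproduced with the textbook spreading-resistance estimate $`\mathrm{\Delta }`$T $``$ P/(4$`\kappa `$a) for a heated spot of radius a on a semi-infinite solid. This stands in for the full heat-diffusion solution used in the text, and the absorbed fraction and thermal conductivities below are illustrative guesses of the right order of magnitude, not the authors' values.

```python
def spot_heating(power_w, absorbed, kappa, radius_m):
    """Steady-state rise of a laser-heated spot on a semi-infinite
    solid via the spreading-resistance estimate dT ~ P_abs/(4 kappa a)."""
    return absorbed * power_w / (4.0 * kappa * radius_m)

# 3 mW on a ~100x100 um^2 spot (effective radius ~50 um); the absorbed
# fraction and conductivities are order-of-magnitude guesses
for T, kappa in ((4.0, 2.8), (14.0, 8.0)):
    dT = spot_heating(3e-3, 0.5, kappa, 50e-6)
    print(f"T = {T:4.1f} K, kappa = {kappa} W/m/K -> dT = {dT:.2f} K")
```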
## III Results and Discussion
Superconductivity-induced changes in the B<sub>2g</sub> and B<sub>1g</sub> spectra of YNi<sub>2</sub>B<sub>2</sub>C taken at different temperatures are shown in Fig. 1. The quality of the sample surfaces was good enough that Raman spectra could be taken down to $``$10 cm<sup>-1</sup>. Sharp drops of the spectra below 10 cm<sup>-1</sup> (for B<sub>2g</sub> and B<sub>1g</sub>) or 20 cm<sup>-1</sup> (for A<sub>1g</sub>) are due to cutoff of the filtering stage, and the sharp rises near zero frequency are due to the laser line (6471 Å). The relative strengths of the spectra in different scattering geometries are meaningful although the absolute values of the scattering strength are in arbitrary units in all the figures. In the superconducting state (T $``$ 7 K), the spectra show a clear redistribution of the scattering weight: depletion of the weight at low frequencies ($``$ 40 cm<sup>-1</sup>) and the accumulation of the weight above $``$ 40 cm<sup>-1</sup>, resulting in sharp gap-like peaks in both B<sub>2g</sub> and B<sub>1g</sub> symmetries. The gap-like feature is more prominent and sharper in B<sub>2g</sub> symmetry than in B<sub>1g</sub> symmetry. The sharpness of the B<sub>2g</sub> peak suggests that its origin may be different from that of B<sub>1g</sub>.
The peak at 200 cm<sup>-1</sup> is the Ni B<sub>1g</sub> phonon mode, whose height (1250 on the same scale) far exceeds that of the B<sub>1g</sub> electronic Raman peaks. Previous Raman measurements on borocarbides concentrated mainly on phononic peaks, and the gap features were not observed in earlier electronic Raman measurements. Electronic Raman spectra from borocarbides are so weak that a sensitive spectrometer with good stray-light rejection is required.
The peak frequency $`\omega _p`$ of the gap feature at 7 K, read from the spectra, is normalized so that it equals the reduced BCS gap $`\mathrm{\Delta }`$(T=7 K)/$`\mathrm{\Delta }_0`$ (= 0.95), where $`\mathrm{\Delta }_0`$ = $`\mathrm{\Delta }`$(T = 0), at the reduced temperature of 0.45 (= 7/15.5). The values of the peak frequency extrapolated to 0 K, $`\omega _p`$(0), thus obtained are 40.1 cm<sup>-1</sup> for B<sub>2g</sub> and 48.9 cm<sup>-1</sup> for B<sub>1g</sub>. The normalized peak frequencies $`\omega _p`$/$`\omega _p`$(0), as functions of the reduced temperature, follow the curve for $`\mathrm{\Delta }`$(T)/$`\mathrm{\Delta }_0`$ predicted by the BCS theory, as shown in the insets of the figure. This suggests that the peak features arise from the opening of the superconducting gap and provides some evidence that borocarbides are conventional BCS-type superconductors.
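The normalization step can be made explicit by solving the universal weak-coupling BCS gap equation for $`\mathrm{\Delta }`$(T)/$`\mathrm{\Delta }_0`$ and dividing the measured peak frequency by it. A sketch follows; it assumes the weak-coupling value $`\mathrm{\Delta }_0`$ = 1.764 k<sub>B</sub>T<sub>c</sub>, and the 7 K peak frequency used is an illustrative input rather than a measured number.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

def gap_ratio(t):
    """Universal weak-coupling BCS Delta(T)/Delta(0) at t = T/Tc,
    assuming Delta(0) = 1.764 k_B Tc."""
    if t <= 0.0:
        return 1.0
    def residual(d):                       # d = Delta/Delta0
        bd = 1.764 * d / t                 # Delta/(k_B T)
        f = lambda y: 1.0 / (np.sqrt(y * y + 1.0) *
              (np.exp(min(bd * np.sqrt(y * y + 1.0), 700.0)) + 1.0))
        return np.log(1.0 / d) - 2.0 * quad(f, 0.0, 50.0)[0]
    return brentq(residual, 1e-4, 0.999999)

t = 7.0 / 15.5                             # reduced temperature of the 7 K data
omega_p_7K = 38.1                          # illustrative peak frequency (cm^-1)
print("Delta(t)/Delta0 =", round(gap_ratio(t), 3))
print("extrapolated omega_p(0) =", round(omega_p_7K / gap_ratio(t), 1), "cm^-1")
```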
One of the advantages of the borocarbide superconductors is that their upper critical fields ($`H_{c2}6T`$ at 6 K) can be easily reached using commercial superconducting magnets. We applied strong magnetic fields (up to 7 T) parallel to the c-axis and investigated their effect on the gap-like features of both YNi<sub>2</sub>B<sub>2</sub>C and LuNi<sub>2</sub>B<sub>2</sub>C crystals at low temperatures. The gap-like features were completely suppressed by the strong magnetic fields, confirming that these features are indeed induced by superconductivity (see Fig. 2a). A full analysis of the magnetic-field dependence of the peak frequencies and of the features below the gap will be presented in a separate manuscript.
Unlike other gap-probing measurements such as tunneling experiments and x-ray photoemission spectroscopy (XPS), Raman spectroscopy is able to measure the anisotropy of the gap at a high resolution of a few $`meV`$. In addition, electronic Raman spectroscopy has an advantage over infra-red spectroscopy in studying the gap of superconductors in their clean limit. Figure 2 shows Raman spectra in A<sub>1g</sub>, B<sub>1g</sub>, and B<sub>2g</sub> geometries of YNi<sub>2</sub>B<sub>2</sub>C and LuNi<sub>2</sub>B<sub>2</sub>C in the superconducting and normal states. Spectra in E<sub>g</sub> geometry were difficult to acquire from either fractured or polish-then-etched ac surfaces, even if those surfaces were optically flat. The A<sub>1g</sub> spectra reflect a weighted average over the Fermi surface with a weighting function of $`k_x^2+k_y^2`$ symmetry; the B<sub>1g</sub> spectra, $`k_x^2k_y^2`$ symmetry; and the B<sub>2g</sub> spectra, $`k_xk_y`$ symmetry. The polarizations used were such that a contribution of A<sub>2g</sub> symmetry is possible in all three cases; however, the A<sub>2g</sub> spectrum is expected to be negligible. The most apparent phenomenon is the anisotropy in the intensity and sharpness of the gap-like features. The A<sub>1g</sub> and B<sub>2g</sub> peaks are significantly stronger and sharper than the B<sub>1g</sub> peak. Together with the fact that the A<sub>1g</sub> and B<sub>2g</sub> peaks lie at lower frequencies than the B<sub>1g</sub> peak, as detailed below, this suggests that the sharp A<sub>1g</sub> and B<sub>2g</sub> peaks might be a reflection of collective modes. Further investigation is necessary to clarify the origin of the peaks.
The B<sub>1g</sub> gap energy is considerably larger than the B<sub>2g</sub> gap energy signifying perhaps that the gap parameters are anisotropic for both YNi<sub>2</sub>B<sub>2</sub>C and LuNi<sub>2</sub>B<sub>2</sub>C. The apparent gap anisotropy is more pronounced in YNi<sub>2</sub>B<sub>2</sub>C than in LuNi<sub>2</sub>B<sub>2</sub>C. The values extrapolated at T = 0 are $`\mathrm{\Delta }_0`$ (A<sub>1g</sub>) = 2.5 meV (2$`\mathrm{\Delta }_0`$/kT<sub>c</sub> = 3.7), $`\mathrm{\Delta }_0`$ (B<sub>1g</sub>) = 3.0 meV (2$`\mathrm{\Delta }_0`$/kT<sub>c</sub> = 4.5), and $`\mathrm{\Delta }_0`$ (B<sub>2g</sub>) = 2.5 meV (2$`\mathrm{\Delta }_0`$/kT<sub>c</sub> = 3.7) for YNi<sub>2</sub>B<sub>2</sub>C, while $`\mathrm{\Delta }_0`$ (A<sub>1g</sub>) = 3.0 meV (2$`\mathrm{\Delta }_0`$/kT<sub>c</sub> = 4.4), $`\mathrm{\Delta }_0`$ (B<sub>1g</sub>) = 3.3 meV (2$`\mathrm{\Delta }_0`$/kT<sub>c</sub> = 4.8), and $`\mathrm{\Delta }_0`$ (B<sub>2g</sub>) = 3.1 meV (2$`\mathrm{\Delta }_0`$/kT<sub>c</sub> = 4.5) for LuNi<sub>2</sub>B<sub>2</sub>C. These values are different from the values from other measurements. For instance, an IR reflectivity measurement reports $`\mathrm{\Delta }_0`$ = 3.46 meV for polycrystalline YNi<sub>2</sub>B<sub>2</sub>C and $`\mathrm{\Delta }_0`$ = 2.68 meV for single crystal LuNi<sub>2</sub>B<sub>2</sub>C.
There is reproducible spectral weight below the $`2\mathrm{\Delta }`$ peak in both symmetries for both the YNi<sub>2</sub>B<sub>2</sub>C and LuNi<sub>2</sub>B<sub>2</sub>C systems. Special care was taken in studying the features below the gap: samples with high-quality surfaces were selected, and higher resolution and longer exposure times were employed. Figure 3 shows Raman spectra of YNi<sub>2</sub>B<sub>2</sub>C (B<sup>10</sup> isotope) in B<sub>2g</sub> and B<sub>1g</sub> symmetries. In this particular case, the cutoff of the filtering stage in the spectrometer was 6 cm<sup>-1</sup>. As far as the spectral weight below the $`2\mathrm{\Delta }`$ peak is concerned, samples with different boron isotopes give the same results. Both B<sub>2g</sub> and B<sub>1g</sub> spectra show finite Raman scattering below the gap, and it appears to be roughly linear in the Raman frequency below 30 cm<sup>-1</sup>. For the B<sub>2g</sub> spectrum, there is an abrupt slope change at about 30 cm<sup>-1</sup>. Similar linear-in-frequency behavior was observed in the spectra of LuNi<sub>2</sub>B<sub>2</sub>C below 35 cm<sup>-1</sup>. The possibility of instrumental broadening is ruled out by measuring the spectral widths of Kr-ion plasma lines, which are about 2.5 cm<sup>-1</sup> in the spectral range of these measurements.
Such a feature below the gap could arise from different causes. We eliminate the possibilities of a) an extrinsic phase on the sample surfaces and b) inhomogeneity of the probing laser beam, and argue that this sub-gap feature is intrinsic to the borocarbide superconductors. This would constitute the first definitive experimental observation of finite Raman scattering below the gap feature for “conventional” superconductors. Previous measurements on $`A15`$ compounds were simply too noisy to detect any finite scattering below the gap.
a) The borocarbide single crystals do have extrinsic phases, mostly from the flux. In our measurements, we were able to avoid such flux phases. If the sub-gap feature were due to an extrinsic phase, its intensity would depend on the measurement conditions, e.g., on the fraction of flux under the focused laser spot. All the spectra showed the same intensity, independent of where the spot was located. X-ray photoemission spectroscopy (XPS) analyses show that the flux phase on the surface of the single crystals is Ni<sub>3</sub>B or Ni<sub>3</sub>BC. Scanning electron microscopy (SEM) did not reveal any small-grained structures on the surface or edges of the single crystals. Auger analyses did not show any fine structures either. Furthermore, they showed that chemical compositions did not vary significantly from point to point on the surface. All these analyses suggest that there is no fine-grained extrinsic phase contributing to the Raman spectra. In addition, we actually measured the Raman spectra of the flux phase itself (Fig. 3a). Even if all the Raman intensity above the gap feature is assumed to be due to the flux (an over-estimation of the flux contribution), a finite Raman intensity below the peak feature is always obtained (an under-estimation of the intrinsic contribution), as shown in Fig. 3b.
b) An experimental situation where part of the measured spot is normal (or close to the normal state) and another part is deep in the superconducting state, due to inhomogeneity of the laser spot, could lead to finite scattering intensity below the gap. This scenario is excluded as follows: 1. In the measurements, spatial filtering of the excitation laser source was used to make the incident beam as homogeneous as possible. 2. We used several different laser intensities and confirmed the same feature below the gap. The overall intensity scaled with the incident power as long as the heating effect was small, so that the temperature of the measured spot remained well below T<sub>c</sub>. 3. The sharpness of the B<sub>2g</sub> peak itself indicates that the actual temperature of the measured spot is quite uniform and well below T<sub>c</sub>. Otherwise, we should not be able to see such a steep slope below the B<sub>2g</sub> peak. Therefore, we believe that the finite Raman intensity below the peak feature is intrinsic.
Cuprate superconductors show finite Raman response below the 2$`\mathrm{\Delta }`$-like peak even in the superconducting state. The lack of the complete suppression of the Raman response below the 2$`\mathrm{\Delta }`$ peak has been regarded as evidence that the superconducting gap function of cuprates has nodes where the gap vanishes. In simple BCS picture no density of states is allowed below the superconducting gap; thus, no Raman scattering beyond small smearing effects is expected below the gap in BCS-type superconductors. However, in this work, finite Raman scattering is observed below the gap of borocarbides.
It is interesting to note that Kim finds finite low-energy states below the gap in s-wave superconductors by considering a finite-ranged pairing interaction energy. The quasiparticle density of states calculated numerically resembles the B<sub>2g</sub> Raman spectra shown in Fig. 3. The underlying mechanism that gives rise to the linear-in-frequency behavior below the 2$`\mathrm{\Delta }`$ gap for the borocarbides, which are believed to be conventional BCS-type superconductors, needs to be clarified further.
## IV Conclusion
In conclusion, sharp redistributions of the continuum of Raman spectra were observed in different geometries for the $`R`$Ni<sub>2</sub>B<sub>2</sub>C ($`R`$ = Y, Lu) systems upon going into the superconducting states. 2$`\mathrm{\Delta }`$-like features were observed in the A<sub>1g</sub>, B<sub>1g</sub>, and B<sub>2g</sub> spectra of the $`R`$Ni<sub>2</sub>B<sub>2</sub>C ($`R`$= Y, Lu) single crystals in the superconducting states. The peak features in A<sub>1g</sub> and B<sub>2g</sub> geometry are sharper and stronger than those in B<sub>1g</sub> geometry. The features are suppressed by applied magnetic fields, suggesting that they are indeed closely related to the gap parameter 2$`\mathrm{\Delta }`$. The temperature dependence of the peaks of the 2$`\mathrm{\Delta }`$-like features shows typical BCS-type behavior of the gap values as a function of temperature. However, the apparent values of the gap are anisotropic, and the anisotropy is more pronounced in YNi<sub>2</sub>B<sub>2</sub>C than in LuNi<sub>2</sub>B<sub>2</sub>C. Finite Raman scattering below the gap was experimentally observed in these “conventional” superconductors. The finite scattering is intrinsic and it appears to be linear in the Raman frequency. The lack of complete suppression of the Raman response below the 2$`\mathrm{\Delta }`$ peak of apparent BCS-type superconductors needs to be further addressed.
###### Acknowledgements.
We thank G. Blumberg, T. P. Devereaux, H.-L. Liu, K.-S. Park, and M. Rübhausen for help and valuable discussions. ISY and MVK were partially supported under NSF 9705131 and through the STCS under NSF 9120000. ISY was also supported by KOSEF 95-0702-03-01-3. SLC was supported by DOE under grant DEFG02-96ER45439. ISY, BKC, and SIL were supported by the Creative Research Initiative Project of the Ministry of Science and Technology, Korea. Ames Laboratory is operated by the U.S. Department of Energy by Iowa State University under Contract No. W-7405-Eng-82.
Current address: Department of Material Science and Technology, K-JIST, Kwangju, 500-712, Korea
Reply to the comment by Park and Ho
S. Carlip<sup>*</sup>
<sup>*</sup>Email: carlip@dirac.ucdavis.edu
Department of Physics
University of California
Davis, CA 95616
USA
Park and Ho correctly point out an error in the surface term (9) of Ref. , and the consequent lack of antisymmetry in the Poisson brackets (13) of that paper. The root of the problem is that the surface term fails to make the action sufficiently “differentiable”: the diffeomorphisms of interest, given by eqns. (17)-(18) of Ref. , do not satisfy the boundary conditions (described after eqn. (10)) needed in order for the total surface term in the variation of the action to vanish. An interesting alternative has been proposed by Soloviev, who suggests an approach to boundary terms that does not require “differentiability” of the action, but it is not clear to me whether this completely resolves the problem.
Despite this error, however, the overall conclusion of Ref. remains valid. The relevant surface terms have been analyzed more thoroughly, in a manifestly covariant manner, in Ref. , where it is shown that the Virasoro algebra of Ref. is reproduced with the correct values of central charge and $`L_0`$ needed to explain black hole entropy. The details of the relation between the covariant phase space approach of Ref. and the canonical approach of Ref. have not yet been worked out, however. Moreover, as Ref. demonstrates, there are still unanswered questions concerning the proper generic boundary conditions for a black hole horizon, so the subject cannot be considered closed.
While the the error pointed out by Park and Ho is a real one, their analysis is slightly misleading in one respect. Their criticism (b) implicitly assumes that because a nonrotating black hole has no preferred angular direction, diffeomorphisms like those of eqn. (18) of Ref. should be taken to be independent of angular coordinates. But as observed in Ref. , the generator of diffeomorphisms (with the correct boundary terms) exists only when an angular dependence is included. For a static black hole, this angular dependence is nearly arbitrary—different choices lead to isomorphic Virasoro algebras—but it cannot be neglected. As a result, criticism (b) is not valid.
The Comment by Park and Ho also raises an interesting issue that deserves to be investigated further. Their “anomalous boundary contribution” to the transformation of, for example, $`g_{rr}`$ is determined by looking at the variation $`\delta L[\xi ]`$ of the generator of diffeomorphisms and extracting the coefficient of $`\delta \pi ^{rr}`$. This certainly seems to be a valid procedure. On the other hand, however, the relevant variation $`\delta L[\xi ]`$ is
$$\delta L[\xi ]\sim \int _{r=r_+}\frac{1}{f}n_r\widehat{\xi }^rg_{rr}\delta \pi ^{rr},\qquad \text{with}\quad \pi ^{rr}=\frac{1}{f}\sqrt{\sigma }\sigma ^{\alpha \beta }K_{\alpha \beta }.$$
(1)
If one uses the boundary conditions (3) and (4) of Ref. , one finds that this variation vanishes. Park and Ho obtain a nonzero contribution to $`\delta g_{rr}`$ because they divide $`\delta L[\xi ]`$, which vanishes at $`r=r_+`$, by $`\delta \pi ^{rr}`$, which also vanishes.
Now, in the standard approach of Regge and Teitelboim to surface terms and ADM mass, one computes the variation $`\delta L[\xi ]`$, takes an appropriate limit to go to a boundary such as spatial infinity, and then uses the limiting variation to determine a boundary term. What Park and Ho have demonstrated is that this process does not necessarily “commute” with the process of functionally differentiating to obtain Poisson brackets. It may be that this paradox can be resolved by correctly incorporating boundary conditions into the definition of Dirac brackets, but further investigation seems warranted.
Acknowledgements
This work was supported in part by Department of Energy grant DE-FG03-91ER40674.
hep-th/9910213
# Comments on “Entropy of 2D Black Holes from Counting Microstates”
by
Mu-In Park (electronic address: mipark@physics.sogang.ac.kr) and Jae Hyung Yee
Department of Physics, Yonsei University, Seoul 120-749, Korea
ABSTRACT
We point out that a recent analysis by Cadoni and Mignemi on the statistical entropy of 2D black holes has a serious error in identifying the Virasoro algebra which invalidates its principal claims.
PACS Nos: 04.70.Dy, 11.25.H, 11.30.-j, 04.20.Fy
9 September 1999
Recently Cadoni and Mignemi presented a microscopical derivation of the entropy of the two-dimensional (2D) black holes. They have shown that the canonical algebra of the asymptotic symmetry of 2D Anti-de Sitter (ADS) space included the Virasoro algebra. Using this algebra and Cardy's formula they obtained the statistical entropy, which agreed, up to a factor of $`\sqrt{2}`$, with the thermodynamical result. In this Comment, we point out that their analysis has a serious error which invalidates its principal claims.
Let us consider the problem in detail. The authors' aim was to obtain a microscopic derivation of the entropy of the 2D black holes in ADS space from the microstate counting procedure of the three-dimensional ADS space, à la Brown-Henneaux and Strominger (BHS). In this approach, a basic ingredient is the infinite-dimensional conformal symmetry, for some (infinitely large or finite) boundaries, which is described by the Virasoro algebra with the “classical” central charge . The strategy is to apply Cardy's formula for the asymptotic density of states to obtain the microscopic entropy, though the true validity of the formula may be debatable .
As the simplest example of a 2D gravity theory that admits a 2D black hole solution in ADS space they considered the Jackiw-Teitelboim model . Using the Regge-Teitelboim procedure , they obtained the differentiable Hamiltonian (‘differentiable’ meaning that no boundary terms appear in the field variation)
$$H[\chi ]=\int dx\left(\chi ^{\perp }\mathcal{H}+\chi ^{\parallel }\mathcal{H}_x\right)+J[\chi ]$$
for the surface deformation parameters $`\chi ^{\perp }=N\chi ^t,\chi ^{\parallel }=\chi ^x+N^x\chi ^t`$ with the lapse and shift functions $`N,N^x`$ and the Killing vectors $`\chi ^t,\chi ^x`$ along the tangent vectors $`\partial /\partial t,\partial /\partial x`$ respectively. Here, note that the boundary charge $`J[\chi ]`$ is defined only at a boundary “point”, $`x\rightarrow \mathrm{}`$, and no other integration variables are present. With this Hamiltonian, the central charge is read off usually from the Dirac bracket algebra
$`\{J[\chi ],J[\omega ]\}_{DB}=J[[\chi ,\omega ]]+c(\chi ,\omega )`$ (1)
or from the variation of $`J[\chi ]`$ under the surface deformations
$`\delta _\omega J[\chi ]=J[[\chi ,\omega ]]+c(\chi ,\omega )`$ (2)
with the surface deformation Lie bracket $`[\chi ,\omega ]`$ and its corresponding central term $`c(\chi ,\omega )`$. \[The Dirac bracket for the dynamical variables $`A,B`$ is defined usually as $`\{A,B\}_{DB}=\{A,B\}-\{A,\mathrm{\Gamma }_\alpha \}C_{\alpha \beta }^{-1}\{\mathrm{\Gamma }_\beta ,B\}`$ with the second-class constraints $`\mathrm{\Gamma }_\alpha `$ ($`\mathrm{det}C_{\alpha \beta }\ne 0`$, $`C_{\alpha \beta }\equiv \{\mathrm{\Gamma }_\alpha ,\mathrm{\Gamma }_\beta \}`$) and the inverse $`C_{\alpha \beta }^{-1}`$ of $`C_{\alpha \beta }`$. In our case of (1), it is implicitly assumed that the gauge-fixing conditions $`F_\gamma \approx 0`$ are introduced to make the (first-class) energy and momentum constraints $`\mathcal{H}\approx 0,\mathcal{H}_x\approx 0`$ become the second-class constraints set $`\mathrm{\Gamma }=\{\mathcal{H},\mathcal{H}_x,F_\gamma \}`$.\]
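As a toy illustration of this definition only, consider a minimal sketch in a four-dimensional phase space, with the second-class pair chosen purely for illustration and unrelated to the gravitational constraints above:

```python
import sympy as sp

# Toy phase space (q1, p1, q2, p2) with canonical Poisson brackets.
q1, p1, q2, p2 = sp.symbols('q1 p1 q2 p2')
coords, momenta = [q1, q2], [p1, p2]

def pb(A, B):
    """Canonical Poisson bracket {A, B}."""
    return sum(sp.diff(A, q)*sp.diff(B, p) - sp.diff(A, p)*sp.diff(B, q)
               for q, p in zip(coords, momenta))

# Second-class constraints Gamma = (q2, p2); C_ab = {Gamma_a, Gamma_b}.
Gamma = [q2, p2]
C = sp.Matrix(2, 2, lambda a, b: pb(Gamma[a], Gamma[b]))

def dirac(A, B):
    """Dirac bracket {A,B}_DB = {A,B} - {A,Gamma_a} C^{-1}_ab {Gamma_b,B}."""
    Cinv = C.inv()
    corr = sum(pb(A, Gamma[a])*Cinv[a, b]*pb(Gamma[b], B)
               for a in range(2) for b in range(2))
    return sp.simplify(pb(A, B) - corr)

print(dirac(q1, p1))  # 1: the unconstrained sector keeps its bracket
print(dirac(q2, p2))  # 0: the constrained pair is consistently eliminated
```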
But there are serious mismatches with our purpose: a) Because the algebra is defined only at one (boundary) point, there is no room for the infinite tower of symmetry generators obtained through the Fourier-series expansion of the algebra, as required in the BHS procedure. b) $`J[\chi ]`$ is time-dependent in general, which means that $`J[\chi ]`$ is not a conserved quantity. This is in contrast to the usual fact that $`J[\chi ]`$ is a (boundary part of a) conserved Noether charge . Moreover, the algebra and the central term $`c(\chi ,\omega )`$ are now time-dependent, and eventually a “time-dependent entropy” $`S\propto \sqrt{c(\chi ,\omega )}`$ would be expected from Cardy's formula in general. In order to resolve this unwanted situation they introduced the time-integrated charge $`\widehat{J}`$
$`\widehat{J}[\chi ]=\frac{\lambda }{2\pi }\int _0^{2\pi /\lambda }dt\,J[\chi ]`$ (3)
with a time period of $`2\pi /\lambda `$, and they obtained a Virasoro algebra in the asymptotic symmetry algebra of 2D black holes. But this procedure is an erroneous one. Let us explain our argument in detail.
We start our argument by considering Eq. (2) in their suggested framework. It is true that the (overall) time integration of (2) has a definite meaning as
$`\widehat{\delta _\omega J[\chi ]}=\widehat{J}[[\chi ,\omega ]]+\widehat{c}(\chi ,\omega )`$ (4)
with the central charge (23) of Ref. , and this looks like a Virasoro algebra. But the problem with this method is that the left-hand side of (4) cannot be written as
$`\{\widehat{J}[\chi ],\widehat{J}[\omega ]\}_{DB},`$ (5)
which is essential for the interpretation of the time integrated (1) as a Virasoro algebra: Let us consider
$`\{J[\chi ],H[\omega ]\}_{DB}=\{J[\chi ],J[\omega ]\}_{DB}=\delta _\omega J[\chi ]`$ (6)
which implies that $`H[\omega ]`$ is the correct (asymptotic) symmetry generator, i.e.,
$`\{\varphi (x),H[\omega ]\}_{DB}=\delta _\omega \varphi (x).`$ (7)
for the symmetry transformation $`\delta _\omega \varphi `$ of the field $`\varphi `$. \[This Dirac bracket, as well as the Poisson bracket in Eq. (19) of Ref. , can be well-defined, in contrast to the claims of Cadoni and Mignemi following the work of Ref. .\] Here it is an important fact that the Dirac brackets are all defined at equal times, because the constituting Poisson brackets are equal-time in nature for a local field theory:
$$\{A,B\}=\int dx\left(\frac{\delta A}{\delta \varphi _k(x,t)}\frac{\delta B}{\delta \pi ^k(x,t)}-\frac{\delta A}{\delta \pi ^k(x,t)}\frac{\delta B}{\delta \varphi _k(x,t)}\right)$$
for the canonically conjugate pairs $`\varphi _k(x,t),\pi ^k(x,t)`$. If we perform the (overall) time integration of (1), as in (3), the left-hand side becomes
$`\frac{\lambda }{2\pi }\int _0^{2\pi /\lambda }dt\{J[\chi (t)],J[\omega (t)]\}_{DB}`$
or
$`\left(\frac{\lambda }{2\pi }\right)^2\int _0^{2\pi /\lambda }dt^{\prime }\int _0^{2\pi /\lambda }dt\frac{2\pi }{\lambda }\delta (t-t^{\prime })\{J[\chi (t)],J[\omega (t)]\}_{DB}.`$
It is then clear that (6) cannot be written as (5) because of the factor $`\frac{2\pi }{\lambda }\delta (t-t^{\prime })`$ in the integrand. This is a direct consequence of (6), which is a well-known fact related to the Noether theorem in field theory; if their claim equating (4) and (5) were correct, the time-integrated quantity $`\widehat{H}[\chi ](\widehat{J}[\chi ])`$ should necessarily be treated as the symmetry generator, in contrast to (7).
In conclusion, they made a serious mistake by identifying (4) with (5), and hence their way of applying Cardy's formula to obtain the black hole entropy (25) of Ref. with the misidentified $`c`$ (central charge) and $`l_0`$ (lowest eigenvalue of the Virasoro generator) cannot be justified.
MIP would like to thank Prof. Steven Carlip, Drs. Gung-won Kang and Hyuk-jae Lee for several helpful discussions. MIP was supported in part by a postdoctoral grant from the Natural Science Research Institute, Yonsei University in the year 1999 and in part by the Korea Research Foundation under 98-015-D00061.
# Ground states of the atoms H, He,…, Ne and their singly positive ions in strong magnetic fields: The high field regime
## I Introduction
The behavior and properties of atoms in strong magnetic fields is a subject which has attracted the interest of many researchers. Partially this interest is motivated by the astrophysical discovery of strong fields on white dwarfs and neutron stars . On the other hand, the competition of the diamagnetic and Coulomb interaction, characteristic for atoms in strong magnetic fields, causes a rich variety of complex properties which are of interest on their own.
Investigations on the electronic structure in the presence of a magnetic field appear to be quite complicated due to the intricate geometry of these quantum problems. Most of the investigations in the literature focus on the hydrogen atom (for a list of references see, for example, ). These studies provided us with a detailed understanding of the electronic structure of the hydrogen atom in magnetic fields of arbitrary strengths. As a result the absorption features of certain magnetic white dwarfs could be explained and this allowed for a modeling of their atmospheres (see ref. for a comprehensive review of atoms in strong magnetic fields and their astrophysical applications up to 1994 and ref. for a more recent review on atoms and molecules in external fields). On the other hand there are a number of magnetic white dwarfs whose spectra remain unexplained and cannot be interpreted in terms of magnetized atomic hydrogen. Furthermore new magnetic objects are discovered (see, for example, Reimers et al in the course of the Hamburg ESO survey) whose spectra await to be explained. Very recently significant progress has been achieved with respect to the interpretation of the observed spectrum of the prominent white dwarf GD229 which shows a rich spectrum ranging from the UV to the near IR. Extensive and precise calculations on the helium atom provided data for many excited states in a broad range of field strengths . The comparison of the stationary transitions of the atom with the positions of the absorption edges of the observed spectrum yielded strong evidence for the existence of helium in the atmosphere of GD229 .
For atoms with several electrons there are two decisive factors which enrich the possible changes in the electronic structure with varying field strength compared to the one-electron system. First we have a third competing interaction which is the electron-electron repulsion and second the different electrons feel very different Coulomb forces, i.e. possess different one particle energies, and consequently the regime of the intermediate field strengths appears to be the sum of the intermediate regimes for the separate electrons.
There exist a number of investigations on two-electron atoms in the literature (see ref. and references therein). For systems with more than two electrons, however, investigations are very scarce . Some of them use the adiabatic approximation in order to investigate the very high field regime. These works contain a number of important results on the properties and structure of several multielectron atoms. Being very useful for high fields, the adiabatic approach hardly allows one to describe the electronic structure with decreasing field strength: in particular, the core electrons of multi-electron atoms feel a strong nuclear attraction which is dominated by the external field only at very high field strengths. In view of this there is a need for further quantum mechanical investigations on multi-electron atoms, particularly in the intermediate to high-field regime.
The ground states of atoms in strong magnetic fields have different spatial and spin symmetries in the different regions of the field strengths. We encounter, therefore, a series of changes, i.e. crossovers, with respect to their symmetries with varying field strength. The simplest case is the helium atom which possesses two ground state configurations: the singlet zero- and low-field ground state $`1s^2`$ and the fully spin-polarized high-field ground state $`1s2p_{-1}`$. In the Hartree-Fock approximation the transition point between these configurations is given by the field strength $`\gamma =0.711`$. (If not indicated otherwise we use in the following atomic units for all quantities. In particular, the magnetic field $`\gamma =B/B_0`$ is measured in units $`B_0=\mathrm{\hbar }c/ea_0^2=2.3505\times 10^5\mathrm{T}=2.3505\times 10^9\mathrm{G}`$.) In previous works we have investigated the series of transitions of the ground state configurations for the complete range of field strengths for the lithium and carbon atoms as well as the ion $`\mathrm{Li}^+`$ . The evolution and appearance of these crossovers and the involved configurations become more and more intricate with increasing number of electrons of the atom. Currently the most complicated atomic system with a completely known sequence of ground state electronic configurations for the whole range of magnetic field strengths is the neutral carbon atom . Its ground state experiences six crossovers involving seven different electronic configurations which belong to three groups of different spin projections $`S_z=-1,-2,-3`$ onto the magnetic field. This series of ground state configurations was extracted from results of numerical calculations for more than twenty electronic configurations selected via a detailed analysis on the basis of general energetical arguments. The picture of these transitions is especially complicated at relatively weak and intermediate fields. Due to this circumstance the comprehensive investigation of the structure of ground states of atoms is a complex problem which has to be solved for each atom separately. On the other hand, the geometry of the atomic wave functions is simplified for sufficiently high magnetic fields: Beyond some critical field strength the global ground state is given by a fully spin polarized configuration. This allows us to push the current state of the art and to study the ground states of the full series of neutral atoms and singly charged positive ions with $`Z\le 10`$, i.e. the sequence H, He, Li, Be, B, C, N, O, F, Ne, in the domain of high magnetic fields. For the purpose of this investigation we define the high field domain as the one where the ground state electronic configurations are fully spin polarized (Fully Spin Polarized (FSP) regime). The latter fact supplies an additional advantage for calculations performed in the Hartree-Fock approach, because our one-determinantal wave functions are eigenfunctions of the total spin operator $`\mathbf{S}^2`$. Starting from the high-field limit we will investigate the electronic structure and properties of the ground states with decreasing field strength until we reach the first crossover to a partially spin polarized (PSP) configuration, i.e. we focus on the regime of field strengths for which fully spin polarized configurations represent the ground state.
## II Method
The numerical approach applied in the present work coincides with that of our previous investigations . Refs. contain some more details of the mesh techniques. We solve the electronic Schrödinger equation for the atoms in a magnetic field under the assumption of an infinitely heavy nucleus (see below for comments on finite nuclear mass corrections) in the (unrestricted) Hartree-Fock approximation. The solution is established in cylindrical coordinates $`(\rho ,\varphi ,z)`$ with the $`z`$-axis oriented along the magnetic field. We prescribe to each electron a definite value of the magnetic quantum number $`m_\mu `$. Each one-electron wave function $`\mathrm{\Psi }_\mu `$ depends on the variables $`\varphi `$ and $`(\rho ,z)`$ as follows
$`\mathrm{\Psi }_\mu (\rho ,\varphi ,z)=(2\pi )^{-1/2}e^{im_\mu \varphi }\psi _\mu (z,\rho )`$ (1)
where $`\mu `$ indicates the numbering of the electrons. The resulting partial differential equations for $`\psi _\mu (z,\rho )`$ and the formulae for the Coulomb and exchange potentials have been presented in ref..
The one-particle equations for the wave functions $`\psi _\mu (z,\rho )`$ are solved by means of the fully numerical mesh method described in refs. . The feature which distinguishes the present calculations from those described in ref. is the method for the calculation of the Coulomb and exchange integrals. In the present work as well as in ref. we obtain these potentials as solutions of the corresponding Poisson equation.
Our mesh approach is flexible enough to yield precise results for arbitrary field strengths. Some minor decrease of the precision appears in very strong magnetic fields. With respect to the electronic configurations possessing high absolute values of the magnetic quantum numbers of the outer electrons, some minor computational problems arose also at lower field strengths. Both these phenomena are due to the large differences between the binding energies $`ϵ_{B\mu }`$ of one-electron wave functions belonging to the same electronic configuration
$`ϵ_{B\mu }=(m_\mu +|m_\mu |+2s_{z\mu }+1)\gamma /2-ϵ_\mu `$ (2)
where $`ϵ_\mu `$ is the one electron energy and $`s_{z\mu }`$ is the spin $`z`$-projection. The precision of our results depends, of course, also on the number of the mesh nodes and can be improved in calculations with denser meshes. Most of the present calculations are carried out on sequences of meshes with the maximal number of nodes being $`65\times 65`$.
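A sketch of how (2) is evaluated in practice may be useful (the numerical orbital energies below are illustrative placeholders, not computed values):

```python
# Binding energy (2) in atomic units.
def binding_energy(m, s_z, gamma, eps):
    """eps_B = (m + |m| + 2*s_z + 1)*gamma/2 - eps, with eps the
    one-electron energy and s_z the spin z-projection."""
    return (m + abs(m) + 2*s_z + 1)*gamma/2.0 - eps

gamma = 1000.0
# For the tightly bound orbitals (m <= 0, s_z = -1/2) the Landau term
# vanishes and the binding energy reduces to -eps_mu.
for m, eps in [(0, -40.0), (-1, -25.0), (-2, -18.0)]:
    print(m, binding_energy(m, -0.5, gamma, eps))
```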
## III Relevant properties in the high field regime
In this section we provide some qualitative considerations on the problem of the ground states of multi-electron atoms in the high field limit. These considerations present a starting point for the combined qualitative and numerical considerations given in the following section. At very high field strengths the nuclear attraction energies and HF potentials (which determine the motion along the $`z`$ axis) are small compared to the interaction energies with the magnetic field (which determines the motion perpendicular to the magnetic field and is responsible for the Landau zonal structure of the spectrum). Thus in the limit $`\gamma \rightarrow \mathrm{}`$, all the one-electron wave functions of the ground state belong to the lowest Landau zones, i.e. $`m_\mu \le 0`$ for all the electrons, and the system must be fully spin-polarized, i.e. $`s_{z\mu }=-\frac{1}{2}`$. For the Coulomb central field the one-electron levels form quasi-1D Coulomb series with the binding energy $`E_B=\frac{1}{2n_z^2}`$ for $`n_z>0`$, whereas $`E_B(\gamma \rightarrow \mathrm{})\rightarrow \mathrm{}`$ for $`n_z=0`$, where $`n_z`$ is the number of nodal surfaces of the wave function crossing the $`z`$ axis. In the limit $`\gamma \rightarrow \mathrm{}`$ the ground state wave function must be formed of the tightly bound single-electron functions with $`n_z=0`$. The binding energies of these functions decrease as $`|m|`$ increases and, thus, the electrons must occupy orbitals with increasing $`|m|`$ starting with $`m=0`$.
In the language of the Hartree-Fock approximation the ground state wave function of an atom in the high-field limit is a fully spin-polarized set of single-electron orbitals with no nodal surfaces crossing the $`z`$ axis and with non-positive magnetic quantum numbers decreasing from $`m=0`$ to $`m=-N+1`$, where $`N`$ is the number of electrons. For the carbon atom, mentioned above, this Hartree-Fock configuration is $`1s2p_{-1}3d_{-2}4f_{-3}5g_{-4}6h_{-5}`$ with $`S_z=-3`$. For the sake of brevity we shall in the following refer to these ground state configurations in the high-field limit, i.e. the configuration generated by the tightly bound hydrogenic orbitals $`1s,2p_{-1},3d_{-2},4f_{-3},\mathrm{}`$, as $`|0_N`$. The states $`|0_N`$ possess the complete spin polarization $`S_z=-N/2`$. Decreasing the magnetic field strength, we can encounter a series of crossovers of the ground state configuration associated with transitions of one or several electrons from orbitals with the maximal values of $`|m|`$ to other orbitals with a different spatial geometry of the wave function but the same spin polarization. This means the first few crossovers can take place within the space of fully spin polarized configurations. We shall refer to these configurations by noting only the difference with respect to the state $`|0_N`$. This notation can, of course, also be extended to non-fully spin polarized configurations. For instance the state $`1s^22p_{-1}3d_{-2}4f_{-3}5g_{-4}`$ with $`S_z=-2`$ of the carbon atom will be briefly referred to as $`|1s^2`$, since the default is the occupation of the hydrogenic series $`1s,2p_{-1},3d_{-2},\mathrm{}`$ and only deviations from it are recorded by our notation.
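The labels of the default hydrogenic occupation can be generated mechanically; a minimal sketch (the continuation of the spectroscopic letters beyond $`h`$ is our assumption):

```python
# Orbitals of the high-field ground state |0_N: m = 0, -1, ..., -N+1,
# each with n = |m| + 1, l = |m| and no nodes crossing the z axis.
def ground_state_labels(N):
    letters = "spdfghiklm"  # spectroscopic l labels (continuation assumed)
    return ["%d%s_{%d}" % (abs(m) + 1, letters[abs(m)], m)
            for m in range(0, -N, -1)]

print(ground_state_labels(6))
# ['1s_{0}', '2p_{-1}', '3d_{-2}', '4f_{-3}', '5g_{-4}', '6h_{-5}']
```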
In the following considerations we shall often refer to subsets of electronic states which possess different spin polarizations. As indicated above we will denote the set of electronic states with $`S_z=-N/2`$ as the FSP subset. Along with the global ground state it is expedient to consider also what we call local ground states which are the energetically lowest states with some definite degree of the spin polarization. For the purpose of the present work we need to know the local ground state of the subset of electronic states with $`S_z=-N/2+1`$ (which is the only partially spin polarized subset considered in this paper and which is referred to as subset PSP) in the high-field regime. This knowledge is necessary for the evaluation of the point of the crossover between the FSP and PSP ground states, i.e. for the determination of the critical field strengths at which the global ground state changes its spin polarization from $`S_z=-N/2`$ to $`S_z=-N/2+1`$. For sufficiently high fields the $`|1s^2`$ state is the local ground state of the PSP subset of electronic states.
## IV Ground state electronic configurations in the high-field regime
Let us start with the high field limit and the state $`|0_N`$ and subsequently consider possible ground state crossovers which occur with decreasing magnetic field strength. In the high-field regime we have by definition only crossovers due to changes of the spatial orbitals and no spin-flip crossovers. According to the goals of the present work we investigate the possible global ground state configurations belonging to the subset FSP and determine the transition points to the subset PSP. Since the detailed study of the latter subset of states for arbitrary field strengths goes beyond the scope of the present work we consider first only the $`|1s^2`$ state of this subset, which is the local ground state of the subset PSP for sufficiently strong fields. Then we investigate the FSP ground states with decreasing field strength until we reach the point of crossover with the energy of the configuration $`|1s^2`$. Subsequently we need to consider other electronic configurations of the PSP set in order to determine the complete picture of the energy levels as a function of the field strength near the spin-flip crossover and, possibly, to correct the position of this point (the latter is necessary if the state $`|1s^2`$ is not the lowest one of the subset PSP at the spin-flip point).
Let us consider the ground state transitions within the subset FSP with decreasing field strength. The first of these transitions occurs when the binding energy associated with the outermost orbital ($`m_N=-N+1`$) becomes less than the binding energy of one of the orbitals with $`n_z>0`$. Due to the circumstance that all the orbitals with $`n_z>0`$ are not occupied in the high-field ground state configuration, it is reasonable to expect the transition of the outermost electron to one of the orbitals with $`m=0`$ and either $`n_z=1`$ (i.e. the $`2p_0`$ orbital) or $`n_z=2`$ (i.e. the $`2s`$ orbital). The decision between these two possibilities cannot be taken on the basis of qualitative arguments. For the hydrogen atom or hydrogen-like ions in a magnetic field the $`2p_0`$ orbital is more strongly bound than the $`2s`$ orbital for any field strength. On the other hand, owing to the electronic screening of the nuclear charge in multi-electron atoms in field-free space the $`2s`$ orbital tends to be more tightly bound than the $`2p_0`$ orbital. Thus, we have two competing mechanisms and numerical calculations are required for the decision between the possible $`|0_N\rightarrow |2p_0`$ and $`|0_N\rightarrow |2s`$ crossovers to a new local FSP ground state. Our calculations for the $`|2s`$ state, presented below in table VI for neutral atoms and in table X for positive ions, show that the state $`|2s`$ becomes more tightly bound than the $`|2p_0`$ state only for rather weak field strengths, where this state cannot be the ground state of the corresponding atom or ion due to the presence of more tightly bound non-fully spin polarized states. As a result the first intermediate ground state of the subset FSP, i.e. the state beside the $`|0_N`$ state which might be involved in the first crossover of the ground state with decreasing field strength, is the $`|2p_0`$ state. Calculations for the subset PSP (see below) show indeed that this state is the global ground state in a certain regime of field strengths for the neutral atoms with $`Z\ge 6`$, i.e. C, N, O, F and Ne, as well as their positive ions $`\mathrm{C}^+`$, $`\mathrm{N}^+`$, $`\mathrm{O}^+`$, $`\mathrm{F}^+`$, $`\mathrm{Ne}^+`$. For the atoms He, Li, Be and B ($`Z\le 5`$) as well as for the ions $`\mathrm{Li}^+`$, $`\mathrm{Be}^+`$ and $`\mathrm{B}^+`$ the state $`|1s^2`$ becomes more tightly bound than $`|0_N`$ for fields stronger than those associated with the $`|0_N\rightarrow |2p_0`$ crossover, and the $`|2p_0`$ state never becomes the global ground state of these atoms and ions. Thus, both neutral atoms and positive ions $`\mathrm{A}^+`$ with $`Z\le 5`$ have only one fully spin polarized ground state configuration $`|0_N`$, which represents the global ground state above some critical field strength.
The question about a possible second intermediate fully spin polarized ground state occurring with further decreasing field strength arises for neutral atoms and positive ions with $`Z\ge 6`$ which possess the intermediate fully spin polarized ground state $`|2p_0`$. This second state could be either a state containing an additional orbital with $`n_z=1`$, which would result in the $`|2p_03d_{-1}`$ configuration, or a state with an additional $`s`$-type orbital, i.e. $`|2s2p_0`$. The third possibility of the simultaneous transition of the electron with the magnetic quantum number $`m_{N-1}=-N+2`$ to the $`3d_{-1}`$ orbital and the electron in the $`2p_0`$ orbital to the $`2s`$ orbital, which gives the $`|2s3d_{-1}`$ configuration, can be excluded from the list of possible ground state configurations without a numerical investigation. The reason is that the $`3d_{-1}`$ orbital is for any field strength more weakly bound than the $`2p_0`$ orbital, and thus the $`|2s2p_0`$ configuration possesses a lower energy than the $`|2s3d_{-1}`$ configuration for arbitrary magnetic field strengths. When comparing the configurations $`|2s2p_0`$ and $`|2p_03d_{-1}`$ we can make use of what we have learned (see above) from the competing $`|2p_0`$ and $`|2s`$ configurations at higher field strengths: The $`2s`$ orbital is energetically preferable at weak magnetic fields whereas the $`3d_{-1}`$ orbital yields energetically lower configurations in the strong field regime. Thus, we perform calculations for the $`|2p_03d_{-1}`$ configuration for many field strengths and then perform, at much fewer field strengths, calculations to check the energy of the $`|2s2p_0`$ configuration in order to obtain the correct lowest energy and state of the set FSP.
The behavior of the energy levels described in the previous paragraph is illustrated in Figure 1. In this figure the energy curves for four possible fully spin polarized electronic configurations and two energy curves for the PSP subset of the neon ($`Z=10`$) atom are presented. This figure shows, in particular, the energy curve of the high field ground state $`|0_N`$ which intersects with the curve $`E_{|2p_0}(\gamma )`$ at $`\gamma =159.138`$. The latter energy remains the lowest in the FSP subset until the intersection of this curve with $`E_{|2p_03d_{-1}}(\gamma )`$ at $`\gamma =40.537`$. This intersection occurs at higher field strength than the intersection of the curves $`E_{|2p_0}(\gamma )`$ and $`E_{|1s^2}(\gamma )`$, which is at $`\gamma =38.060`$. On the other hand, the control calculations for the state $`|2s2p_0`$, not presented in Figure 1, show that its total energy for $`\gamma =38.060`$ is larger than the energy $`E_{|2p_03d_{-1}}`$. According to the previous argumentation this means that the state $`|2s2p_0`$ is not the global ground state of the Ne atom for any magnetic field strength. Furthermore the state $`|2p_03d_{-1}`$ is a candidate for becoming the global ground state of the neon atom in some bounded regime of the field strength. However, we have not yet performed (see below) a detailed investigation of the lowest energy curves of the PSP subset which is essential to take a definite decision on the global ground state configurations. For neutral atoms with $`6\le Z\le 9`$ and positive ions $`\mathrm{A}^+`$ with $`6\le Z\le 10`$ the energies of the states $`|2p_03d_{-1}`$ and $`|2s2p_0`$ at the points of intersections of the curves $`E_{|2p_0}(\gamma )`$ and $`E_{|1s^2}(\gamma )`$ are higher than the energies of the states $`|2p_0`$ and $`|1s^2`$. This leads to the conjecture that no neutral atoms with $`Z<10`$ and positive ions with $`Z\le 10`$ can possess more than two different fully spin polarized ground state configurations in the complete range of field strengths.
The above concludes our considerations of the fully spin polarized ground state configurations. To prove or refute the above conjecture we have to address the question of the lower boundary of the fully spin polarized domain, i.e. the lowest field strength, at which a fully spin-polarized state represents the ground state of the atom considered. It is evident that this boundary value of the field strength is given by the crossover from a fully spin polarized to a non-fully spin polarized ground state with decreasing field strength.
First of all we have to check if the state $`|1s^2`$ has the lowest energy of all the states of subset PSP at the point of intersection of the curve $`E_{|1s^2}(\gamma )`$ with the corresponding energy curve for the local ground state configuration of subset FSP. Following our considerations for the fully spin polarized case we can conclude that calculations have to be performed first of all for the states $`|1s^22p_0`$ and $`|1s^22s`$.
The numerical calculations show that for atoms with $`Z\le 6`$ and ions with $`Z\le 7`$ the state $`|1s^2`$ becomes the ground state while lowering the spin polarization from the maximal absolute value $`S_z=-N/2`$ to $`S_z=-N/2+1`$. For heavier atoms and ions we first remark that the state $`|1s^2`$ is not the energetically lowest one in the PSP subset at magnetic fields at which its energy becomes equal to the energy of the lowest FSP state. For these atoms and ions the state $`|1s^22p_0`$ lies lower than $`|1s^2`$ at these field strengths. One can see this behavior for the neon atom in Figure 1. The second possible PSP local ground state $`|1s^22s`$ (not presented in Figure 1) proves to be less tightly bound at these fields. These facts allow in the following a definite clarification of the picture of the global ground state configurations in the high field regime. For atoms with $`Z\ge 7`$ and positive ions with $`Z\ge 8`$ the intersection points between the state $`|1s^22p_0`$ and the energetically lowest state in the FSP subspace have to be calculated. As a result, the spin-flip crossover occurs at higher fields than it would if $`|1s^2`$ were the lowest state in the PSP subspace. In particular, the spin-flip crossover for the neon atom is found to be slightly higher than the point of the crossover $`|2p_0\rightarrow |2p_03d_{-1}`$, and, therefore, this atom has in the framework of the Hartree-Fock approximation only two fully spin polarized configurations, likewise other neutral atoms and positive ions with $`6\le Z\le 10`$. The above conjecture concerning $`|2p_03d_{-1}`$ is therefore refuted and this FSP configuration never represents the global ground state in the high field regime for all neutral atoms and positive ions with $`Z\le 10`$. It should be noted that the situation for the neon atom can be regarded as a transient one due to the closeness of the intersection $`|2p_0\rightarrow |2p_03d_{-1}`$ to the intersection $`|2p_0\rightarrow |1s^22p_0`$. This means that we can expect the configuration $`|2p_03d_{-1}`$ to be the global ground state for the sodium atom ($`Z=11`$). In addition an investigation of the neon atom carried out on a more precise level than the Hartree-Fock method could also introduce some corrections to the picture described above.
After obtaining the new spin flip points for atoms with $`7\le Z\le 10`$ and ions with $`8\le Z\le 10`$ (which are transition points between the $`|2p_0`$ and $`|1s^22p_0`$ states) one has to check them with respect to the next (in the order of decreasing field strengths) possible PSP local ground state configurations. Analogously to the FSP subset these configurations are $`|1s^22p_03d_{-1}`$ and $`|1s^22s2p_0`$. The numerical calculations show that their energies lie higher than the energy of the $`|1s^22p_0`$ configuration at the spin flip points and they are therefore excluded from the list of the global ground states considered here.
The final picture of the crossovers of the global ground state configurations is presented in tables I (for the neutral atoms) and II (for the positive ions $`\mathrm{A}^+`$). The corresponding values of the field strengths belonging to the points of crossover are underlined in these tables. The field strengths for other close-lying crossovers which actually do not affect the scenario of the changes of the global ground state are also presented in these tables. In a graphical form these results are illustrated in Figures 2 (neutral atoms) and 3 (ions). Shown are the critical field strengths belonging to the crossovers of selected states of the atoms (ions) as functions of the nuclear charge. The filled symbols mark the crossovers of the energy levels which correspond to the actual transitions of the ground state configurations, whereas the analogous non-filled symbols correspond to magnetic field strengths of crossovers not associated with changes in the ground state but in excited states. One can see in these figures the dependencies of the field strengths for various types of crossovers on the charge of the nucleus. In particular, one can see many significant crossovers for $`Z=10`$ lying very close to each other on the $`\gamma `$ axis. This peculiarity, in combination with the behavior of the curve $`\gamma (Z)`$ for the $`|2p_0\rightarrow |2p_03d_{-1}`$ crossover, allows one to expect the configuration $`|2p_03d_{-1}`$ to become a ground state configuration for $`Z>10`$.
Some summarizing remarks with respect to the global ground state configurations in the high field regime are in order. The atoms and positive ions with $`Z\le 5`$ have one ground state configuration $`|0_N`$. The atoms and ions with $`6\le Z\le 10`$ possess two high field configurations. The C atom ($`Z=6`$) plays an exceptional role in the sense that it is the only atom which shows the ground state crossover $`|2p_0\rightarrow |1s^2`$ involving the $`|1s^2`$ state as a global ground state.
## V Numerical Results and Discussion
Tables III–X contain numerical values of the total energies of the neutral atoms and positive ions obtained in our Hartree-Fock calculations. Tables III, IV, V and VI contain the energies of the neutral atoms in the states $`|0_N`$, $`|2p_0`$, $`|1s^2`$ and $`|2s`$, respectively. The analogous results for the ions $`\mathrm{A}^+`$ are presented in tables VII, VIII, IX and X (the results are for the states $`|0_N`$, $`|2p_0`$, $`|1s^2`$ and $`|2s`$). The energies associated with the points of crossover for the global ground state both in neutral atoms and in their singly positive ions are presented in table XI. These energy values provide us with the ionization energies at the transition points. Combined with the data of the previous tables they provide the behavior of the ionization energies of the atoms and the total energies of the atoms and positive ions in the complete high-field region.
In Figure 4 we present the ionization energies of neutral atoms divided by the ionization energy of the hydrogen atom as a function of the magnetic field strength. All the curves for multi-electron atoms at $`\gamma <600`$ lie lower than the curve for hydrogen at the corresponding field strengths. But for $`\gamma >1500`$ the ionization energies of all atoms exceed the ionization energy for the hydrogen atom. Moreover, with growing nuclear charge we observe a stronger increase of the ionization energy for stronger fields accompanied by a shift of the starting point for the growth to the regime of stronger magnetic fields. This strengthening of the binding of the multi-electron atoms at strong magnetic fields may be considered as a hint for increasingly favorable conditions for the formation of the corresponding negative ions.
Figure 5 presents the ionization energies for the $`|0_N`$ states for various field strengths depending on the nuclear charge $`Z`$, i.e. for all atoms H, He,…, Ne. All the field strengths presented in this figure are above the first crossover to another global ground state configuration. Thus, the ionization energies in this figure represent the differences between the energies of the high-field ground states of the neutral atoms and the corresponding singly charged positive ions. The curve for $`\gamma =2000`$ can be considered as the prototype example for the general properties of the dependencies $`E_{\mathrm{Ion}}(Z)`$. For small values of $`Z`$ this curve shows increasing values for $`E_{\mathrm{Ion}}`$ with increasing $`Z`$, then it has a maximum at $`Z=5`$ and for $`Z>5`$ it decreases with increasing $`Z`$. Analogous curves for lower field strengths have their maxima at lower values of $`Z`$. At $`\gamma =1000`$ the ionization energy shows its maximal value at $`Z=2`$, whereas the ionization energies for $`\gamma =500`$ and $`\gamma =200`$ decrease monotonically with increasing $`Z`$. On the other hand, for $`\gamma =5000`$ and $`\gamma =10000`$ we obtain a monotonically increasing behavior of the ionization energy for the whole range $`1\le Z\le 10`$ of nuclear charges investigated in the present work. The behavior described above results from a competition of two different physical mechanisms which affect the binding energy of the outermost electron in the high-field ground state Hartree-Fock configuration. The first mechanism is the lowering of the binding energy of the outermost electron with increasing absolute value of its magnetic quantum number $`|m|`$, provided that this electron feels a constant nuclear charge. The latter assumption is a rough approximation to the case of relatively weak fields when the inner $`Z-1`$ electrons screen more or less effectively the Coulomb field of the nucleus. The second and opposite tendency is associated with the decrease of the efficiency of this screening in extremely strong magnetic fields due to the fact that the geometry of the wave functions tends to be one-dimensional in these fields. As a result the effect of increasing effective nuclear charge exceeds the effect of the growth of $`|m|`$ with increasing $`Z`$ for the high-field ground state configurations. Continuing this qualitative consideration we point out that at each fixed $`\gamma `$ the influence of the magnetic field on the inner electrons becomes less and less significant as $`Z`$ increases, which is due to the dominance of the Coulomb attraction potential of the nucleus over the magnetic field interaction. This has to result in a significant screening of the nuclear charge by these electrons. As a result the functions $`E_{\mathrm{Ion}}(Z)`$ for strong fields defined on the whole interval $`1\le Z<+\mathrm{}`$ have maxima at some values of $`Z`$ and decrease for sufficiently large values of $`Z`$.
Next we provide a comparison of the present results with adiabatic HF calculations which were carried out for multi-electron atoms in refs. . We compare our results on the Hartree-Fock electronic structure of atoms in strong magnetic fields with results obtained by Neuhauser et al via a one-dimensional adiabatic Hartree-Fock approximation. The calculations in this work were carried out for the four field strengths $`\gamma =42.544`$, $`\gamma =212.72`$, $`\gamma =425.44`$ and $`\gamma =2127.2`$. For $`Z\le 9`$ and all these field strengths, and for $`Z=10`$ at the three larger values of these fields, the Hartree-Fock wave functions of the ground states are reported to be fully spin polarized with no nodes crossing the $`z`$ axis. This conclusion differs from our result for $`\gamma =42.544`$. According to our calculations at $`\gamma =42.544`$ the wave functions without nodes crossing the $`z`$ axis represent the ground states of atoms with $`Z\le 7`$ (i.e. H, He, Li, Be, B, C and N) whereas for the atoms with $`8\le Z\le 10`$ (i.e. O, F and Ne) the wave functions of the ground states are fully spin polarized with one nodal surface crossing the $`z`$ axis. A numerical comparison of our results with those of refs. is shown in table XII. All our values lie lower than the values of these adiabatic calculations. Since our total energies are upper bounds to the exact values we consider our HF results as being closer to the exact values compared to the results of the adiabatic HF calculations. Therefore, on the basis of our calculations combined with the results of we can obtain an idea of the degree of the applicability of the adiabatic approximation for multi-electron atoms for different field strengths and nuclear charges. It is well known that the precision of the adiabatic approximation decreases with decreasing field strength. The increase of the relative errors with decreasing field strength is clearly visible in the table. On the other hand, the relative errors of the adiabatic approximation possess the tendency to increase with growing $`Z`$, which is manifested by the scaling transformation $`E(Z,\gamma )=Z^2E(1,\gamma /Z^2)`$ (e.g. ) well known for hydrogen-like ions. The behavior of the inner electrons is to some extent similar to the behavior of the electrons in the corresponding hydrogen-like ions. Therefore their behavior is to lowest order similar to the behavior of the electron in the hydrogen atom at magnetic field strength $`\gamma /Z^2`$, i.e. this behavior can be described less accurately by the adiabatic approximation at large $`Z`$ values. The absolute values of the errors in the total energy associated with the adiabatic approximation are in many cases larger than the corresponding values of the ionization energies.
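The hydrogenic scaling quoted above already indicates where the adiabatic approximation must deteriorate; a minimal numerical sketch:

```python
# Scaling E(Z, gamma) = Z**2 * E(1, gamma / Z**2): an inner electron at
# nuclear charge Z and field gamma behaves to lowest order like hydrogen
# at the reduced field gamma / Z**2.
def effective_hydrogen_field(Z, gamma):
    return gamma / Z**2

for Z in (1, 2, 6, 10):
    print(Z, effective_hydrogen_field(Z, 42.544))
# At Z = 10 the effective field is only ~0.43 a.u., far from the
# strong-field regime where the adiabatic treatment of the core is reliable.
```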
To conclude this section we discuss briefly three issues which could affect the precision of the results presented above. These issues are electron correlations, effects due to the finite nuclear mass and relativistic corrections. For all these effects we have to distinguish between their influence on the total energy and on other quantities like the ionization energy and the field strength for the crossover of the energy levels. In most cases their influence on the latter values is much smaller due to the fact that they involve differences of total energies for quantum states possessing a similar atomic core. Let us start by addressing the problem of the electronic correlations which is the critical problem for the precision of the Hartree-Fock calculations. The final evaluation of the correlation effects is possible only on the basis of exact calculations going beyond the Hartree-Fock approximation. Therefore we can give here only qualitative arguments based on the geometry of the wave function and on existing calculations for less complicated systems. The dependence of the ratio of the correlation energy and the total binding energy for the two ground state configurations of the helium atom has been investigated in ref. . This ratio for the $`1s^2`$ state decreases with growing $`\gamma `$ from $`1.4\%`$ at $`\gamma =0`$ to about $`0.6\%`$ at $`\gamma =100`$. The same ratio for the $`1s2p_{-1}`$ state (high field ground state configuration) increases with growing $`\gamma `$. It remains, however, for all the field strengths considered essentially smaller than the values for the $`1s^2`$ state. This result for the helium atom in strong magnetic fields allows us to speculate that for the field strengths considered here the correlation energy for atoms and positive ions heavier than the helium atom does not exceed its corresponding value without field. Due to the similar geometry of the inner shells in the participating electronic configurations we do not expect a major influence of the correlation effects both on the field strengths of the crossovers of the ground state configurations within the subsets FSP or PSP and on the ionization energies if the states of a neutral atom and the positive ion belong to the same subset. On the other hand, the properties associated with configurations from different subsets (for instance values of the spin-flip crossover field strengths) can be affected more strongly by correlation effects.
Our second issue is the influence of the finite nuclear mass on the results presented above. A discussion of this problem is provided in ref. and references therein. Importantly, there exists a well-defined procedure which tells us how to relate the energies for infinite nuclear mass to those with a finite nuclear mass. The corresponding equations are exact for hydrogen-like systems and provide the lowest order mass corrections $`O\left(\frac{m}{M}\right)`$ ($`m`$ and $`M`$ are the electron and total mass, respectively) for general atoms/ions. Essentially they consist of a redefinition of the energy scale (atomic units $`\rightarrow `$ reduced atomic units, due to the introduction of the reduced mass) and an additional energy shift $`(1/M_0)\gamma (M+S_z)`$ where $`M_0`$ is the nuclear mass. The first effect can simply be ‘included’ in our results by taking the energies in reduced a.u. instead of a.u. The mentioned shift can become relevant for high fields. However, it can easily be included in the total energies presented here. We emphasize that it plays a minor role in the region of the crossovers of the ground state configurations and decreases significantly with increasing mass of the atom (nucleus).
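A minimal sketch of the shift term (here we read $`M`$ in the shift as the total magnetic quantum number of the state, and the numerical inputs are illustrative):

```python
# Additional finite-nuclear-mass energy shift (1/M_0) * gamma * (M + S_z),
# with M_0 the nuclear mass in units of the electron mass (atomic units).
def mass_shift(gamma, M_total, S_z, M_nuc):
    return gamma * (M_total + S_z) / M_nuc

# Example: a fully polarized two-electron state (M = -1, S_z = -1) with a
# helium-4 nucleus, M_nuc ~ 7294 electron masses, at gamma = 500.
print(mass_shift(500.0, -1, -1, 7294.0))   # ~ -0.137 a.u.
```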
Relativistic calculations for the hydrogen atom and hydrogen-like ions were performed by Lindgren and Virtamo and Chen and Goldman . Our considerations are based on the work by Chen and Goldman which contains results for the $`1s`$ and $`2p_{-1}`$ states for a broad range of magnetic field strengths. Interpolating their results for the $`1s`$ state and using well known scaling transformations we can conclude that in the least favorable case of $`Z=10`$ relativistic corrections $`\delta E=(E^{\mathrm{relativistic}}-E^{\mathrm{non}\mathrm{relativistic}})/|E^{\mathrm{non}\mathrm{relativistic}}|`$ have to be of the order $`4\times 10^{-4}`$ for $`\gamma =200`$ and $`2\times 10^{-4}`$ for $`\gamma =10^4`$. The relativistic corrections for the $`2p_{-1}`$ state at relatively strong fields appear to be of the same order of magnitude or smaller than for the $`1s`$ state. Thus, making a reasonable assumption that relativistic corrections for both inner and outer electrons are similar to those in the hydrogen-like ions with a properly scaled nuclear charge we can evaluate $`|\delta E|\lesssim 4\times 10^{-4}`$ for $`Z=10`$ and less for lower $`Z`$ values. The same relative correction can be expected also for the ionization energies and energy values used for the determination of the crossovers of the electronic configurations.
## VI Summary
In the present work we have applied our 2D Hartree-Fock method to the magnetized neutral atoms H, He, Li, Be, B, C, N, O, F and Ne in the high field regime which is characterized by fully spin-polarized electronic shells. Additionally we have studied the crossover from fully spin polarized to partially spin polarized global ground state configurations. The highest field strength investigated was $`\gamma =10000`$. Our single-determinant Hartree-Fock approach supplies us with exact upper bounds for the total energy. A comparison with adiabatic calculations in the literature shows the decrease of the precision of the latter with growing $`Z`$.
The investigation of the geometry of the spatial part of the electronic wave function demonstrates that in the high-field limit this wave function is a composition of the lowest Landau orbitals with absolute values of the magnetic quantum number growing from $`|m|=0`$ up to $`|m|=N-1`$, where $`N`$ is the number of the electrons: i.e. we have the series $`1s`$, $`2p_{-1}`$, $`3d_{-2}`$,…For atoms with $`2\le Z\le 5`$ these states of type $`1s2p_{-1}3d_{-2}\mathrm{}`$ represent the complete set of the fully spin-polarized ground state configurations. Heavier atoms $`6\le Z\le 10`$ have one intermediate ground state configuration associated with the low-field end of the fully spin polarized region. This state contains one $`2p_0`$ type orbital (i.e. the orbital with a negative $`z`$ parity and $`|m|=0`$) instead of the orbital with the positive $`z`$ parity and the maximal value of $`|m|`$. Extrapolating our data as a function of the nuclear charge $`Z`$ we expect that a third fully spin polarized ground state configuration occurs first for $`Z=11`$, i.e. the sodium atom. The third configuration is suggested to be the $`|2p_03d_{-1}`$ state. The critical field strength which provides the crossover from the partially spin polarized to the fully spin polarized regime depends sensitively on the changes of the geometry of the wave functions. Indeed a number of different configurations have been selected as candidates for ground states in the crossover regime and only concrete calculations could provide us with a final decision on the energetically lowest state of the non-fully spin polarized electronic states. Generally speaking all the spin-flip crossovers mentioned above involve a pairing of the $`1s`$ electrons, i.e. the pair of orbitals $`1s^2`$. The carbon atom ($`Z=6`$) plays an exceptional role since it is the only neutral atom which possesses two fully spin polarized configurations and the $`|1s^2`$ as a global non-fully spin polarized ground state configuration. The spin-flip crossover of the carbon atom preserves the total magnetic quantum number. All other atoms N, O, F and Ne ($`7\le Z\le 10`$) possess instead the $`|1s^22p_0`$ configuration as a non-fully spin polarized ground state for strong fields. We have determined the positions, i.e. field strengths, of the crossovers of the ground states. Beyond this, total energies have been provided for many field strengths for several low-lying excited states.
An analogous investigation has been carried out for singly charged positive ions with $`2\le Z\le 10`$. The structure of the fully spin polarized ground state configurations for these ions is the following: The ions with $`3\le Z\le 5`$ have one fully spin polarized ground state configuration analogous to the high-field limit of the neutral atoms. For $`6\le Z\le 10`$, analogously to the neutral atoms, there exist two fully spin polarized ground state configurations. Depending on the value of the nuclear charge number $`Z`$ the spin-flip transitions associated with the lowering of the spin polarization with decreasing field strength lead also to wave functions of different spatial symmetries. These data, combined with the data for neutral atoms, allow us to obtain the ionization energies of the atoms. The dependencies of the ionization energies on the nuclear charge at fixed field strength generally exhibit maxima at certain values of $`Z`$. The positions of these maxima shift to larger values of $`Z`$ with increasing field strength. We provide some qualitative arguments explaining this behavior of $`E_{\mathrm{Ion}}(Z)`$. Finally we have given some remarks on the interactions going beyond the present level of investigation, i.e. correlations and finite nuclear mass effects as well as relativistic corrections.
Acknowledgments
Financial support by the Deutsche Forschungsgemeinschaft is gratefully acknowledged.
Figure Captions
Figure 1. The total energies (in atomic units) of the relevant states of the neon atom under consideration for the determination of the ground state electronic configurations for the high field regime.
Figure 2. The magnetic field strengths (a.u.) corresponding to crossovers of energy levels in neutral atoms as functions of the nuclear charge. The filled symbols mark crossovers between global ground state configurations.
Figure 3. Same as Figure 2 but for the singly positive ions.
Figure 4. Ionization energies of neutral atoms divided by the ionization energy of the hydrogen atom as a function of the magnetic field strength (a.u.).
Figure 5. Ionization energies of the states $`|0_N`$ of the neutral atoms ($`1\le Z\le 10`$) for different magnetic field strengths.
# Bäcklund-type superposition and free particle $`n`$-susy partners
## Abstract
The higher order susy partners of Schrödinger Hamiltonians can be explicitly constructed by iterating a nonlinear difference algorithm coinciding with the Bäcklund superposition principle used in soliton theory. As an example, it is applied to the construction of new higher order susy partners of the free particle potential, which can be used as a handy tool in soliton theory.
Recent studies confirm that the higher order supersymmetric (susy) partners of Schrödinger Hamiltonians are most easily constructed by a simple algebraic tool named the intertwining technique . One of the keys of this method is an algebraic nonlinear expression which links solutions of different Riccati equations (see, e.g. \[2–4\]). In a previous paper , we have studied the application of this method to the free particle potential. The ‘building blocks’ of some of the resulting potentials are the well known soliton solutions of the Korteweg-de Vries (KdV) equation: $`\kappa ^2\mathrm{sech}^2[\kappa (x-a)]`$ and $`\kappa ^2\mathrm{csch}^2[\kappa (x-a)]`$. In this work we shall sketch the main steps of the approach in order to present some of the potentials derived in .
First, consider the intertwining relationship $`H_1A_1=A_1H_0`$, where the intertwiner $`A_1`$ is the first order differential operator $`A_1=\frac{d}{dx}+\beta _1(x,ϵ)`$. All the available information concerning the Hamiltonians $`H_0=-\frac{1}{2}\frac{d^2}{dx^2}+V_0(x)`$ and $`H_1=-\frac{1}{2}\frac{d^2}{dx^2}+V_1(x,ϵ)`$ is encoded in the beta function, which satisfies the Riccati equation
$$-\beta _1^{\prime }(x,ϵ)+\beta _1^2(x,ϵ)=2[V_0(x)-ϵ].$$
(1)
The arbitrary integration constant $`ϵ`$ plays the role of a factorization energy. It is very simple to check that the potentials are related by the first order susy relationship
$$V_1(x,ϵ)=V_0(x)+\beta _1^{\prime }(x,ϵ).$$
(2)
Equations (1) and (2) are necessary and sufficient conditions for the Hamiltonians to be factorized as $`H_0-ϵ=(1/2)A_1^{\dagger }A_1`$ and $`H_1-ϵ=(1/2)A_1A_1^{\dagger }`$. Suppose now that $`V_0(x)`$ is a known solvable potential with eigenvalues $`E_n`$ and eigenfunctions $`\psi _n`$, $`n=0,1,2,\mathrm{}`$ Let us assume that we have found a general solution of (1) for a given factorization energy $`ϵ_1\ne E_n`$ for all $`n`$. Then, the potential $`V_1(x,ϵ_1)`$ is also given \[1–3\]. The iteration of this procedure starts by considering now $`V_1(x,ϵ_1)`$ as the known solvable potential and looking for a new one, $`V_2(x,ϵ_1,ϵ)`$, satisfying the second order susy relationship
$$V_2(x,ϵ_1,ϵ)=V_1(x,ϵ_1)+\beta _2^{\prime }(x,ϵ_1,ϵ).$$
(3)
Therefore, the new beta function must fulfill the Riccati equation
$$-\beta _2^{\prime }(x,ϵ_1,ϵ)+\beta _2^2(x,ϵ_1,ϵ)=2[V_1(x,ϵ_1)-ϵ],$$
(4)
where $`ϵ`$ is again an arbitrary factorization energy. The corresponding solution is given by
$$\beta _2(x,ϵ_1,ϵ)=\beta _1(x,ϵ_1)\frac{2(ϵ_1ϵ)}{\beta _1(x,ϵ_1)\beta _1(x,ϵ)}.$$
(5)
The finite difference expression (5) is a nonlinear superposition of two general solutions of (1), one for each factorization energy $`ϵ_1`$ and $`ϵ`$, transforming equation (1) into (4) by the change of $`V_0(x)`$ by $`V_1(x,ϵ_1)`$, and $`\beta _1(x,ϵ)`$ by $`\beta _2(x,ϵ_1,ϵ)`$. This transformation can be used to link the higher order susy partners of $`V_0(x)`$ with the first order superpotentials $`\beta _1(x,ϵ)`$, just by solving (1) for different values of the factorization energy $`ϵ`$. For instance, providing $`n`$ different general solutions of (1), one for each $`ϵ_k`$, $`k=1,2,\mathrm{},n`$, we are able to iterate $`n1`$ times the algorithm (5) acquiring a new beta function in each step, given by
$$\beta _k(x,ϵ_k)=\beta _{k1}(x,ϵ_{k1})\frac{2(ϵ_{k1}ϵ_k)}{\beta _{k1}(x,ϵ_{k1})\beta _{k1}(x,ϵ_k)},k=2,3,\mathrm{}n.$$
(6)
We have adopted here an abreviated notation making explicit only the dependence of $`\beta _k`$ on the factorization constant introduced in the very last step, keeping implicit the dependence on the previous factorization constants (henceforth, the same criterion will be used for any other symbol depending on $`k`$ factorization energies). Therefore, given any initial potential $`V_0`$, the corresponding $`n`$-susy partner potential $`V_n`$ can be writen as
$$V_n(x,ϵ_n)=V_0(x)+\underset{k=1}{\overset{n}{}}\beta _k^{}(x,ϵ_k),$$
(7)
provided that the master equations for $`\beta _k`$ and $`V_k`$ are given by
$$\beta _k^{}(x,ϵ_k)+\beta _k^2(x,ϵ_k)=2[V_{k1}(x,ϵ_{k1})ϵ_k],k=1,2,\mathrm{},n,$$
(8)
$$V_k(x,ϵ_k)=V_{k1}(x,ϵ_{k1})+\beta _k^{}(x,ϵ_k),k=1,2,\mathrm{},n.$$
(9)
Now, let us stress that every general solution of the Riccati equation (1), for a given $`ϵ`$, depends on an additional implicit integration parameter $`\alpha `$, hence, the process acumulates as many of these integration parameters as many general solutions of (1) have been used.
Observe the coincidence of our nonlinear algorithm (6) and the Wahlquist and Estabrook superposition principle expression (see equation (16) of ), derived from the Bäcklund transformation (BT) of the KdV equation $`w_t=6w_x^2w_{xxx}`$; subscripts $`t`$ and $`x`$ denote partial derivatives. The method has been typically used to generate new, multisoliton solutions $`w_{12},\mathrm{},w_{(n)}`$ of the KdV equation from a given one-soliton solution $`ww_1`$ of the same equation. It is thus quite interesting that the validity of the same algorithm in the intertwining problem (supersymmetry) is much easier to demonstrate without worrying at all about the nonlinear equations! Moreover, its physical applicability in susy seems much wider. Thus, e.g., the singular solutions of KdV (singular water waves) would be of marginal physical interest. The singular potentials in the Schrödinger equation are not! Therefore, the possibility of reducing the $`n`$-th intertwining iteration to the multiple applications of the Bäcklund superposition principle means that $`n`$-susy could be a universal method generating the “multisoliton deformations” of any initial potential.
We shall now focus on the vacuum case, presenting some simplifications which the method offers in deriving the $`n`$-susy partners for the potential $`V_0(x)=0`$. In this case, the Riccati equation (1) has the general solution
$$\beta _1(x,ϵ)=\sqrt{2ϵ}\mathrm{cot}[\sqrt{2ϵ}(x\alpha )],$$
(10)
where $`\alpha `$ is an integration constant (in general complex). It is well known that the superpotential (10) gives four different first order susy partners of $`V_0(x)=0`$ by taking different values of $`ϵ`$ and $`\alpha `$. This information is sumarized in Table I.
TABLE I. The four different real superpotentials $`\beta _1`$ comming out from (10), depending on the values of $`ϵ`$ and the integration parameter $`\alpha `$. In each case S means singular, R regular, $`\mathrm{P}`$ periodic, and N null. The parameters $`a`$ and $`b`$ are arbitrary real numbers.
| | | | | |
| --- | --- | --- | --- | --- |
| Case | $`ϵ`$ | $`\sqrt{2ϵ}`$ | $`\alpha `$ | $`\beta _1(x,ϵ)`$ |
| S | $`ϵ<0`$ | $`i\sqrt{2|ϵ|}=i\kappa `$ | $`a`$ | $`\kappa \mathrm{coth}[\kappa (xa)]`$ |
| R | $`ϵ<0`$ | $`i\sqrt{2|ϵ|}=i\kappa `$ | $`b{\displaystyle \frac{i\pi }{2\kappa }}`$ | $`\kappa \mathrm{tanh}[\kappa (x+b)]`$ |
| $`\mathrm{P}`$ | $`ϵ>0`$ | $`\sqrt{2ϵ}=k`$ | $`a`$ | $`k\mathrm{cot}[k(xa)]`$ |
| N | $`0`$ | $`0`$ | $`a`$ | $`{\displaystyle \frac{1}{xa}}`$ |
| | | | | |
As an example, notice that the regular case (R) leads to the well known modified Pöschl-Teller type susy partner $`V_1^R(x,ϵ)=\kappa ^2\mathrm{sech}^2[\kappa (x+b)]`$, while the null case (N) leads to the potential barrier $`V_1^N(x,0)=(xa)^2`$. Now, in order to give an example of second order susy partner potentials $`V_2(x,ϵ)`$, let us consider the superpotentials R and S as given in Table I. By introducing them in (5) and (3) we get
$$V_2(x,ϵ_2)=(\kappa _1^2\kappa _2^2)\frac{\kappa _1^2\mathrm{csch}^2[\kappa _1(x+b)]+\kappa _2^2\mathrm{sech}^2[\kappa _2(xa)]}{(\kappa _1\mathrm{coth}[\kappa _1(x+b)]+\kappa _2\mathrm{tanh}[\kappa _2(xa)])^2}.$$
(11)
The potential (11) has two finite wells which can be modulated by changing the values of $`\kappa _1`$ and $`\kappa _2`$ under the condition $`\kappa _2<\kappa _1`$. A Taylor expansion of (11) shows a singularity at $`x=a`$ when $`\kappa _2>\kappa _1`$. The case $`\kappa _2=\kappa _1`$ gives a potential $`V_2(x,ϵ_1)=0`$.
Let us remark that, for the periodic superpotentials $`\beta _1`$ in Table I, equation (7) leads to a natural classification of two kinds of potentials depending on the parity of $`n`$. For $`n`$ even, the periodic superpotential $`\beta _1`$ does not appear as a separate term in (7), affecting only one of denominators. The resulting susy partners have only a finite quantity of singularities. This fact has been used by Stalhofen by constructing potentials with bound states embedded in the continuum. On the other hand, for $`n`$ odd, the function $`\beta _1`$ is a separate term in the sum (7) and its global effect is not canceled by any similar term. The corresponding susy partners become singular periodic potentials.
In conclusion, the nonlinear difference algorithm (6) allows the construction of higher order susy partners of any initial potential $`V_0(x)`$, provided that a certain number of solutions of (1) have been given. This finite difference algorithm generalizes the superposition principle reported in extending its applications to the susy construction of new solvable potentials. In particular, the higher order susy partners $`V_n(x,ϵ_n)`$ of the free particle potential represent a wide set of transparent wells in the terms discussed in \[7–9\], as well as multisoliton solutions of the KdV equation as given in .
This work was performed under the auspices of CONACyT (Mexico) and DGES project PB94-1115 from Ministerio de Educación y Cultura (Spain), as well as by Junta de Castilla y León (CO2/197). BM and ORO acknowledge the kind hospitality at Departamento de Física Teórica, Universidad de Valladolid (Spain). ORO wishes to acknowledge partial finantial support from the ICSSUR’99 Organizing Committee.
References
* D.J. Fernández C, Int. J. Mod. Phys. A 12, 171 (1997);
D.J. Fernández C., M.L. Glasser and L.M. Nieto, Phys. Lett. A 240, 15 (1998)
* D.J. Fernández C., V. Hussin and B. Mielnik, Phys. Lett. A 244, 1 (1998);
J.O. Rosas-Ortiz, J. Phys. A 31, L507 (1998); J. Phys. A 31, 10163 (1998);
D.J. Fernández C. and V. Hussin, J. Phys. A 32, 3603 (1999)
* B. Mielnik, L.M. Nieto and O. Rosas-Ortiz, The finite difference algorithm for higher order supersymmetry, Universidad de Valladolid Preprint, Spain (1998)
* V. E. Adler, Physica D 73, 335 (1994)
* H. D. Wahlquist and F. B. Estabrook, Phys. Rev. Lett. 31, 1386 (1973)
* C. S. Gardner, J. M. Greene, M. D. Kruskal and R. Miura, Phys. Rev. Lett. 19, 1095 (1967)
* V.B. Matveev and M.A. Salle, Darboux Transformations and Solitons, Springer-Verlag, Berlin (1991)
* A. Stahlhofen, Phys. Rev. A 51, 934 (1995)
* B.N. Zakhariev and V. M. Chabanov, Inverse Problems 13, R47 (1997) |
no-problem/9910/astro-ph9910262.html | ar5iv | text | # Coherent Mechanisms of Pulsar Radio Emission
## 1. Emission mechanisms
We have shown (Lyutikov, Blandford & Machabeli 1999b) that pulsar radiation may be generated by two kinds of electromagnetic plasma instabilities – cyclotron-Cherenkov instability, developing at the anomalous Doppler resonance
$$\omega (𝐤)k_{}V_{}+\omega _B/\gamma _{res}=0$$
(1)
and Cherenkov-drift instability, developing at the Cherenkov-drift resonance
$$\omega (𝐤)k_{}V_{}k_xu_d=0$$
(2)
The cyclotron-Cherenkov instability is responsible for the generation of the core-type emission and the Cherenkov-drift instability is responsible for the generation of the cone-type emission. These electromagnetic instabilities are the strongest instabilities in the pulsar magnetosphere (Lyutikov 1999a).
Both instabilities are maser-type in a sense that an induced emission dominates over spontaneous. From a classical viewpoint, a random incoming wave forces a particle to emit a wave ”in phase” with initial one, so that the resulting intensity is proportional to $`N^2`$ where $`N`$ is a number of the interfering waves. This is how coherence of the radiation is produced: masers naturally produce coherent waves. For the operation of a maser some kind of population inversion condition should be satisfied: there should be more emitting that absorbing particles.
The cyclotron instability can develop on both primary beam and on the tail of the plasma distribution while the Cherenkov-drift instability develops on the rising part of the primary beam distribution function. The free energy for the growth of the instability comes from the nonequilibrium anisotropic distribution of fast particles.
From the microphysical point of view both emission process habe more similarities with Cherenkov-type emission than with synchrotron or curvature emission. In the case of Cherenkov-type process the emission may be attributed to the electromagnetic polarization shock front that develops in a dielectric medium due to the passage of a charged particle with speed larger than phase speed of waves in a medium. It is a collective emission process in which all particles of plasma take part.
Interestingly, in a cyclotron-Cherenkov emission process an emitting particle undergoes a transition up in Landau levels, thus, population inversion condition in this case requires more particles on the lower levels - this condition is satisfied by the one dimensional distribution of particles in pulsar magnetosphere. The cyclotron-Cherenkov emission is not new in astrophysics: it is exactly by this resonance that cosmic rays produce Alfvén wave in the interstellar shocks. In addition, the laboratory devices called Slow Wave Electron Cyclotron Masers use this resonance to produce high power microwave emission.
Emission of a charged particle propagating in a medium with a curved magnetic field differs from conventional Cherenkov, cyclotron or curvature emission and includes, to some extent, the features of each of these mechanisms. We have developed a formalism for considering an emissivity of a particle in a curved field in a medium (Lyutikov, Machabeli & Blandford 1999b). The resulting process may be called a coherent curvature emission. The Cherenkov-drift instability that operates in pulsars is related to some low frequency electromagnetic instabilities in TOKAMAKs; the ultrarelativistic energies in pulsars change it to a high frequency one.
Both instabilities develop in a limited region on the open field lines at large distances from the surface $`r10^9`$ cm. The size of the emission region is determined by the curvature of the magnetic field lines, which limits the length of the resonant wave-particle interaction and growth of the wave. The location of the cyclotron-Cherenkov instability is limited to the field lines with large curvature, while the Cherenkov-drift instability occurs on the field lines with the radius of curvature limited both from above and from below. There are two possible locations of the Cherenkov-drift instability: in a ringlike region around the straight field lines and in the region of swept back field lines (Fig. 1). Thus, both instabilities produce narrow pulses, though they operate at radii where the opening angle of the open field lines is large.
The proposed theory is capable of explaining many observational results:
* Energetic: approximately $`1\%`$ of the beam energy is transfered into radiation
* Waves are emitted approximately half way through the light cylinder; curvature of filed lines controls the emission regions (cyclotron absorption may also be important)
* Radius-to-frequency mapping for core $`r\nu ^{1/6}`$
* Width-frequency dependence is controlled by the combination of the growth rate of the instability and the structure of the field line
W, rad
MHz
* Fundamental modes are linearly polarized
* Different distribution function of secondary electrons and positrons results in circular polarization for some $`\theta <\theta ^{}`$. For the resonance on the beam the circular polarization reaches maximum in the center. For the resonance on the tail particles the sense of the polarization changes due to the curvature drift.
* Higher degree of circular polarization at high frequencies: higher frequencies resonate only with the beam, lower frequencies can resonate with the tail particles where both types of charges are present.
* Spectra of the core emission are controled by the quasilinear diffusion, which gives a spectral index $`\alpha =2`$ (Lyutikov 1998a) . Other nonlinear processes, like Raman scattering (Lyutikov 1998b) may also be important.
* Large emitting size and high altitudes (Gwinn et al. 1997, Smirnova et al. 1995, Kijak & Gil 1998)
* High energy emission: development of the cyclotron instability excites gyration; reemission on the normal Dopper resonance boosts the frequency into the optical-UV-soft X-ray region $`\omega \omega _B\gamma `$.
One of the most challenging consequences of our model is that emission is generated at high altitudes. Below we list both observational and theoretical arguments that support this:
* ”Wide beam” geometry: correlation in intensity and position angle in widely separated pulses (Manchester 1995)
* Emission bridge between some widely separated pulses
* Extra peaks in Crab at high frequencies
* Alignment of Radio and High Energy emission in Crab and partially in Vela
* Large emission size of Vela (500 km) (Gwinn et al. 1997) (also Smirnova et al. 1995, Kijak & Gil 1998)
* For small $`r`$ typical plasma frequencies $`\omega _p,\omega _B\omega `$ (Kunzl et al. 1998); excitaion of wave with $`\omega \omega _p`$ is impossible (Lyutikov 1999b)
Predictions of our model are:
* ”frequency incoherent maser”: at each moment emission consists of narrow frequency features
* increase of circular polarization with frequency (at higher frequencies the cyclotron instability on the beam with one sign of charge dominates over cyclotron instability on the tail particles, where both signs of charge present)
* linear polarization of cone $``$ to B plane (may be resolved using interstellar scintillations similar to Smirnova et al. 1996)
The weak points of the model are: (i) the development of the cyclotron instability requires that the plasma be relatively dense and slow streaming ($`\gamma _p10`$) - this is unusual but not unreasonable; (ii) width-period dependence - now it is determined not only by the geometrical factors, but also by the plasma parameter (growth rates).
## References
Gwinn, C.R. et al. 1997, ApJ, 483, L53
Kijak J & Gil J. 1998, MNRAS, 299, 855
Kunzl T., Lesch H., Jessner A. & von Hoensbroech A., 1999, ApJ, 505, L139
Lyutikov M. 1998a, MNRAS, 298, 1198
Lyutikov M. 1998b, Phys Rev E 58, 2474
Lyutikov M., Blandford R.D. & Machabeli G.Z. 1999a, ApJ, 512, 804
Lyutikov, Machabeli & Blandford 1999b, MNRAS, 305, 338
Lyutikov 1999a, J. Plas. Phys., in print
Lyutikov 1999b, submitted to MNRAS
Manchester R., J. 1995, Astroph & Astron. 16, p.107
Smirnova et al. 1996, ApJ, 462, 289 |
no-problem/9910/astro-ph9910415.html | ar5iv | text | # Prospects for solving basic questions in galaxy evolution with a next generation radio telescope
## 1 Introduction
A number of indicators point to the period from $`z3`$ to $`z1`$ as the time when galaxies assembled. This period of some 3 billion years witnessed a maximum in the rate at which stars have formed and a peak in number of quasars and powerful radio galaxies. The luminous star forming galaxies that trace the rise and fall of star formation through this epoch represent a tiny fraction of the protogalactic systems that exist at this time, since neutral hydrogen clouds are known to have been abundant, amounting to far more mass than was then in stars. This leaves the bulk of the baryons destined to join galaxies below the threshold for viewing by today’s telescopes, and it means that our perception of this important epoch of history lacks a clear observational picture of the sequence and timing of events that has occurred in the coalescence of mass to form galactic potential wells and their present contents of stars, gas and dust.
Theoretical simulations, which are successfully tuned to produce the $`z=0`$ large scale structure starting from initial conditions that are consistent with the level of fluctuation in the microwave background, predict mass distribution functions for the protogalaxies through this period of galaxy formation. These simulations, which are based on the gravitational collapse of a Cold Dark Matter dominated Universe, demonstrate a hierarchical clustering that leads to the desired $`z0`$ large-scale behavior and shows galaxies forming by the dissipationless merging of low mass dark matter mini-halos halos and the subsequent accretion of condensing, dissipational gas (cf. White & Frenk 1991, Kauffman 1996, Ma et al 1997).
The postponement of the assembly of massive galaxies in the CDM models is somewhat at odds with observations showing powerful radio galaxies at redshift above $`z=5`$ (van Breugel et al 1999) and evidence for large gaseous disks in well-formed potentials at $`z2.5`$ (Prochaska & Wolfe 1999). A crucial test of the formation ideas will be to measure the sizes of galaxies and their total dynamical masses as a function of redshift in order to define time sequence of galaxy formation.
The next section reviews the considerable indirect evidence that bears on this epoch, leading to the conclusion that a large radio telescope, capable of sensing the abundant cool gas confined to the evolving potential wells, will clarify the history of galaxies through this critical period. Subsequent sections address the specific observational tests.
## 2 Evolution of global properties $`z=4`$ to $`z=0`$
A number of indicators trace global properties of the Universe through the epoch of most vigorous assembly of galaxies. Fig. 1 shows four of these plotted as a function of redshift. (1) The star formation rate SFR density computed for color-selected, star forming galaxies (cf Madau et al 1996, Calzetti and Heckman 1999) shows a steep rise with redshift to $`z1`$ with a modest decline at higher redshifts. More recent observational evidence (Steidel et al 1999) favors a flat SFR density above $`z=1`$ to at least $`z=4`$, once corrections have been made for extinction. (2) The comoving space density of luminous optically selected quasars reaches a maximum at $`z2`$, implying that mass is being redistributed within the evolving galaxies to efficiently feed nuclear activity. (3) Studies of absorption lines against the ultraviolet continua of bright high redshift quasars provide probes of the comoving density of neutral gas, through studies of the damped Lyman-$`\alpha `$ DLa line of HI (Wolfe et al 1986), as well as (4) the ionized galaxy halo gas that is sensed in CIV (Steidel 1990). Quasar absorption lines are especially relevant for the studies of normal galaxies, since they are not biased toward the luminous objects at the peak of the luminosity function for a chosen redshift, but rather the quasar absorption lines provide a democratic selection of the common, gas rich objects that represent the less rapidly evolving protogalaxies that are contain the bulk of the baryons destined to eventually be locked into galaxies at $`z=0`$.
The DLa lines are especially relevant to the discussion here, since the quantities of cool neutral gas sensed in these high redshift absorbers exceeds the local HI comoving density $`\mathrm{\Omega }_g(z=0)`$ by a factor of at least five (Wolfe et al 1995, Lanzetta 1995), and the HI is a viable target for radio studies in the 21cm line of HI. The DLa class comprises neutral gas layers with H<sup>0</sup> column densities above $`2\times 10^{20}`$cm<sup>-2</sup>, while the MgII selection chooses column densities of H<sup>0</sup> down to levels $`10^{18}`$cm<sup>-2</sup>.
The HI content contained in the DLa population of absorbing cloud suffers uncertainty in the high redshift regime. The same clouds that absorb in the Lyman-$`\alpha `$ line, also show absorption by common metals that originated in a prior generation of stars. The same generation of stars will have produced dust, and the resulting extinction may cause lines of sight with DLa absorbers to be selected against when samples of high $`z`$ quasars are composed for spectroscopic study (Pei et al 1999). A detailed appraisal of the extinction leads Pei et al (1999) to conclude that the DLa statistics may underestimate the neutral gas density at $`z2.5`$ by a factor of two to three, for a total factor of 10 to 15 above the present density $`\mathrm{\Omega }_g(z=0)`$.
A large uncertainty also remains in the low $`0<z<1.7`$ DLa measures of $`\mathrm{\Omega }_g(z)`$ since the Lyman-$`\alpha `$ line is not shifted in the optical window observable with ground based telescopes until $`z>1.7`$. This is a period of galaxy evolution when the SFR subsided, perhaps in response to depleted supplies of neutral gas from which to form stars. During this period, star formation may have changed from being fueled with largely primordial material to star formation relying on reprocessing an interstellar medium of built from stellar mass loss (Kennicutt et al 1994) of earlier generations of relatively long lived stars. An understanding of this transition is important in predicting the gas-consumption time (Roberts 1963) and the duration of star formation into the future. These uncertainties could be eliminated by using a SKA class telescope to observe the HI emission directly in a survey to redshift $`z=1.5`$ to perform the kinds of HI census that is being made for the local Universe now by surveys conducted with telescopes such as Arecibo (Zwaan et al 1997) and Parkes (Staveley-Smith et al 1996).
Currently operational telescopes map the detailed gas kinematics of nearby spiral galaxies in the 21cm emission line, and the analyses show that galaxies rich in neutral gas have well ordered rotation of a cool disk, much like the disk of the Milky Way. The dynamical analysis of the velocity fields of the quiescent rotating disks has been a important tool in the measurement of the total mass in galaxies and has served to specify the presence and distribution of dark matter in galaxies (cf van Albada et al 1985, Broeils 1992). In this application, the HI is a highly diagnostic tracer of the galactic potential. Even the crude resolution obtained with single-dish telescopes yields a “mass indicator” by measurement of the profile width. The settled and quiescent nature of neutral gas layers makes them a more reliable tracer of gravitational potentials than emission line gas in HII regions around star forming regions, where stellar winds and expanding shock fronts compete with gravitation in determining the gas kinematics.
The apparent absence of isolated neutral gas clouds without an associated galaxy in the nearby Universe (Zwaan et al 1997, Spitzak and Schneider 1999) suggests that the neutral gas relies on a confining potential to maintain the gas at sufficient density that the it can remain neutral in the face of an ionizing background radiation field. Perhaps under the conditions of “adequate” confinement for preservation of neutrality, it is difficult to avoid the instabilities that lead to cooling, collapse and star formation. In such a picture, the shallower potential wells of the lower mass dwarf galaxies would be the places where the HI is more gently confined and evolutionary processes would generally proceed more slowly. Indeed, this is the domain inhabited by the dimmest of the gas-rich LSB galaxies. The expectation is that the DLa clouds at high redshift must also be confined in order to remain neutral, and they too will be tracers of their confining potentials.
In many respects, the neutral clouds giving rise to the the DLa absorbers have similar properties to the interstellar media of gas rich galaxies. They typically have a mix of cold clouds and a thinner, turbulent medium (Briggs & Wolfe 1983, Lanzetta & Bowen 1992, Prochaska & Wolfe 1997) whose physical conditions vary from mildly ionized to the more highly ionized “halo” gas characterized by the CIV absorption lines and represented by well studied lines of sight through the Milky Way halo. Metal abundances in the DLa clouds show considerable variance around the expected trend of enrichment over time (Pettini et al 1997), and there is now evidence that the DLa clouds are a distinct population (Pettini et al 1999) apart from the active star forming galaxies found through color selection (Steidel et al 1999). The onset of the CIV absorption (see Fig. 1) is another symptom of wide-spread star formation, as metals are produced and redistributed in the ISM and halos (Steidel 1990, Sembach et al 1999).
## 3 Gas-rich clouds vs. active star forming galaxies
An interesting comparison can be made between the observed sizes of the high-$`z`$ star forming galaxies (Giavalisco et al 1996a) and the interception cross-sections for uv absorption by different ions (cf. Steidel 1993). The Lyman-break color-selection technique for identifying the star forming galaxies produces candidates with a density on the sky of $`1`$ arcmin<sup>-2</sup> for objects with redshifts predominantly in the range $`2.6z3.4`$ (Steidel et al 1998). The comoving density of $`LL_{}`$ galaxy “sites,” computed for this redshift range, amounts to $`2`$ arcmin<sup>-2</sup> (for a cosmological model with $`\mathrm{\Omega }_o=0.2`$). Fig. 2 shows the cross section for absorption lines that every $`L_{}`$ galaxy site would necessarily present, if ordinary galaxies are to explain the observed incidence of absorption lines. Thus, absorption line statistics indicate $``$2 times the absorption cross sections shown in Fig. 2 for every Lyman-break galaxy. At low and intermediate redshifts ($`z<1`$), the association of the metal line absorbers with the outer regions of galaxies has been well established for the MgII class of absorber by observation of galaxies close to the lines of sight to the background quasars (Le Brun et al 1993). The CIV selected systems are consistent with galaxy halo cloud properties (Petijean & Bergeron 1994, Guillemin & Bergeron 1997), and ionized high velocity cloud (HVC) analogs for the CIV cloud population exist in the halo of the Milky Way (Sembach et al 1999). A few of the DLa absorbers are also identified with galaxies at intermediate redshift (Steidel et al 1994, Steidel et al 1995). The optical identification of the DLa systems has gone more slowly than for the MgII, for example, because DLa absorbers are rarer (as indicated by the relative cross sections), and the majority of the surveys for DLa systems have been conducted with ground-based telescopes, which find $`z>1.7`$ systems that are difficult to associate with galaxies. Curiously, some of the studies of the lowest redshift DLa absorbers have failed to provide any optical identification to sensitive limits (Rao & Turnshek 1998).
The overall implication of a comparison between the physical sizes of the Lyman-break galaxies and the absorption-line cross sections for the high-$`z`$ Universe is that there is a substantial population of metal-enriched gaseous objects, possibly accompanied by a tiny pocket of stellar emission, that can well go unnoticed in deep optical images. The nature of these invisible absorbers remains a puzzle. Do the basic skeletons of today’s $`L^{}`$ galaxies exist already at redshifts greater than 3 as partially filled gravitational potential wells of dark matter – each well binding a star forming nuclear region, a large disk of neutral gas, and an extended halo structure of ionized, metal-enriched gas? Or, are the statistical cross sections for absorption plotted in Fig. 2 actually the integrals of many much smaller absorption cross sections created by much less strongly bound, small clouds that coalesce steadily since $`z5`$ to form the large galaxies at $`z=0`$? In the latter case, the star-forming Lyman Break population would represent a very tiny fraction of the protogalaxy population. Will all galaxies pass through such a star-forming phase, or will they be accreted to the onto the LB objects? Would the tiny DLa clouds need to be bound in dark matter mini-halos in order to avoid photoionization as has occurred in the intergalactic medium? Are the DLa clouds clustered together or are they bound to the outskirts of the LB galaxies? These are important questions since the DLa population appears to be a gravitationally confined population containing enough baryons to produce the stellar matter at $`z=0`$, and if these clouds are largely cool and unevolved, their kinematic and dynamic properties can be studied directly in no way other than at radio wavelengths.
## 4 The big questions
The big question is centered on the sequence for construction of the galaxy population. Do galaxies form in a hierarchical manner as described by the CDM simulations? Can merging and accretion be gentle enough to produce the cool flattened disk galaxies like the spiral population at $`z0`$?
Fig. 3 is a schematic view of the observational consequences for the two scenarios – one with large objects already in place at high $`z`$ and one relying on hierarchical merging. In order to build up the cross section required by the DLa statistics at $`z2.5`$, there must be several disks per unit $`\mathrm{\Delta }z`$ per square arcminute along a randomly chosen direction. The left panel in Fig. 3 illustrates this idea with disks whose comoving number density is constant over time from $`z=4`$ to $`z=0`$ and whose diameters are adjusted to match the DLa interception probabilities (Lanzetta et al 1995). There is an $``$50% probability of interception by a DLa over a redshift path from 0 to 4. The right panel in Fig. 3 has individual spherically shaped clouds that decline in radius toward higher redshifts as $`r(1+0.7z)^{3/2}`$, while maintaining the same integral interception cross section per unit redshift as in the left panel. This requires that there be many more clouds to build up the required integral cross section. The metal line systems, such as CIV, have larger cross section, as shown in Fig. 2, and a similar diagram plotted for CIV statistics would provide near complete coverage of the sky, with a significant probability of multiple CIV absorbers along a single line of sight, as is observed (Steidel 1990).
The angular size scales of the DLa absorbers range over a few arcseconds, which is a good match to the sub-arcsecond resolution obtained by a radio interferometer array with baselines of a few kilometers to a few hundred kilometers. As depicted in Fig. 2, these objects are everywhere in the sky, so a sufficiently sensitive telescope would find them in surveys of randomly chosen fields. A further consequence of the existence of vast numbers of these objects is that they should frequently be seen in absorption against background radio sources. Roughly one half of the radio galaxies and quasars at $`z4`$ will lie behind DLa clouds.
The prospects for making definitive emission and absorption experiments are discussed in the following sections. These observations include detection and mapping of HI emission from individual high redshift $`z>3`$ galaxies, as well as statistical studies of the HI content contained in sub-classes of optically selected objects. Measuring 21cm absorption against background sources might be an effective way to settle the question of spatial extent and dynamical mass content of the DLa population; the absorption experiments could be tackled during the next few years – with existing telescope arrays.
A consequence of hierarchical formation models is that galaxies should undergo repeated merging and accretion events. At $`z0`$ the most noticeable merging systems are also bright far infrared emitters and often are hosts for OH megamasers. The brightest megamasers are so luminous that they could be seen throughout the $`z<5`$ Universe. This may provide a means to directly measure the galaxy merging rate as a function of time, without bias due to dust obscuration.
## 5 Monitoring the HI content of the Universe
The expectation for detection of HI-rich galaxies at different redshifts is straightforward to calculate. To provide the framework for these estimates, Fig. 4 summarizes simulations of the number of detected signals per unit of survey solid angle per unit of radio frequency. These units are chosen to facilitate comparison between the sky density of OH megamasers and more common but less luminous emission from ordinary galaxies in the 21cm line. The detection rates are computed for a range of cosmological models. Computed sensitivity levels assume signal profile widths of 300 km s<sup>-1</sup> with optimally smoothed spectra. The detection threshold levels are: 1 mJy, 0.2 mJy, 5$`\mu `$Jy, and 0.75$`\mu `$Jy. The levels at 5$`\mu `$Jy and 0.75$`\mu `$Jy are 7$`\sigma `$ for 8 and 360 hour integrations respectively with a telescope with the nominal specifications suggested for SKA (Taylor & Braun 1999).
The calculation is based on the number density of galaxies of given HI mass described by the HI mass function of Zwaan et al (1999), truncated at the low mass end at $`10^5`$M. This mass function follows a Schechter form, which has power law rise toward low masses with index $`\alpha =1.2`$ and a knee at $`M_{HI}^{}=10^{9.8}`$M ($`H_o=75`$ km s<sup>-1</sup> Mpc<sup>-1</sup>) above which the mass function cuts off sharply with exponential dependence.
A SKA Deep Field HI survey would yield $`10^5`$ gas rich galaxies from a 1 square degree field. The most numerous detections would probably fall in the redshift range $`0.8<z<2`$. These objects would be excellent tracers of large scale structure.
### 5.1 Low redshift: $`z<1`$
For low redshifts, the Zwaan et al mass function should provide a reasonably reliable estimate of the detection rate. Through the redshift range from $`z1`$ to $`z=0`$, the Deep Field survey would observe the decline of the HI mass content of the Universe, as mass is increasingly locked up in stars. The difference between the star formation rate density (as measured in optical surveys) and gas consumption rate will specify the role of stellar mass loss in replenishing the ISM and prolonging star formation into the future.
### 5.2 Direct detection of protogalaxies at $`z2`$
As a framework for discussion, the calculation of Fig. 4 has adopted a constant comoving density of non-evolving galaxies. This cannot be correct at high redshifts. We know that there is more neutral mass at $`z2.5`$ than at present. Whether this will also translate into an increased number of detections above what is specified by these calculations is not clear. Whether the HI is parceled in large or small masses will be the deciding factor. In order to illustrate the difficulty in measuring small HI masses predicted by hierarchical clustering scheme, the figure has dots drawn to indicate the highest redshift at which non-evolved $`M_{HI}^{}(z=0)`$ and 0.1$`M_{HI}^{}(z=0)`$ galaxies would be detected, for the $`\mathrm{\Omega }_m=0.2`$, $`\mathrm{\Omega }_\mathrm{\Lambda }=0`$ cosmology. For a SKA Deep Field requiring 360 hours of integration, the $`M_{HI}^{}`$ galaxies are detected to redshift $`z2.6`$ and an 0.1$`M_{HI}^{}`$ to $`z0.9`$.
The goal of observing reshifts around $`z2.5`$ would be resolve the considerable uncertainty in HI mass distribution. The large excess of HI that we know exists could simply increase all galaxy sites by a factor of 10 in mass, implying that an 0.1$`M_{HI}^{}`$ galaxy at present evolved from a system of 1$`M_{HI}^{}`$ at $`z=2.5`$, and these would be easily detected, along with the 10$`M_{HI}^{}`$ around the knee in the mass function. On the other hand, a hierarchical picture would have increased numbers, say by a factor 10, of objects with some fraction of the HI mass now measured in galaxies nearby. Note that as galaxies merge and stars form, the stellar populations form a sink for neutral baryons – merging ten $`M_{HI}=10^9`$M protogalaxies is likely to produce a luminous $`z=0`$ galaxy with only $`10^9`$M or less of HI mass, since much of the mass is destined to be consumed in star formation.
Individual protogalactic clumps with $`M_{HI}<10^8`$M would not be detected by even the long integrations of a Deep Field survey. However, the coalescing clumps may be clustered sufficiently to create detectable signals.
### 5.3 A statistical measure of HI in the Lyman Break Population
The detection of the small, protogalactic HI masses common to hierarchical models at redshifts around $`z3`$ will be difficult, even with a SKA class telescope. On the other hand, considerable progress might be made with a straightforward statistical method, much sooner than the construction of a new radio telescope. Several current generation aperture synthesis telescopes (the Westerbork Synthesis Radio Telescope and the Giant Metrewave Radio Telescope in India) are equipped to observe the 21cm line redshifted to $`z3`$. The field of view of these telescopes can survey several square degrees of sky in a single integration with sufficient angular resolution to avoid confusion among the LB galaxies. If an adequate catalog of LB galaxies could be constructed for such a synthesis field, with of order $`10^4`$ identified LB objects with celestial coordinates and redshifts, then the radio signals could be stacked, to obtain a statistical measure of the HI content of the LB population. This would allow the “average HI content per LB galaxy site” to be determined.
## 6 Sizes and kinematics of the DLa Clouds
### 6.1 Emission observations
For nearby galaxies, synthesis mapping techniques provide measurements of the extent of the HI emission and kinematics that help to clarify the structure of the galaxies and the distribution of their dark matter component. These elegant maps and subsequent analysis typically describe the nearby galaxies with peak flux densities in the integral profiles of a few 100 mJy with sensitivity levels around $`\sigma 1`$ mJy per velocity channel ($``$10 km s<sup>-1</sup> wide).
The prospects for obtaining this level of sensitivity with $`S/N100`$ in high redshift galaxies are not good. Fig. 5 summarizes the expected $`S/N`$ for $`M_{HI}^{}=10^{9.8}`$M galaxies as a function of redshift. For this example, the galaxies have total velocity spreads of 300 km s<sup>-1</sup>, which is observed with channel spacing of 30 km s<sup>-1</sup>. A SKA 360 hour integration cannot achieve $`S/N>50`$ for redshifts greater than 0.6 and hits $`S/N=10`$ for $`z=1.25`$.
### 6.2 Absorption observations
Fortunately, definitive measurements can be obtained through high spatial resolution observations of absorption against extended background radio sources. Fig. 3 shows that roughly half of the radio sources with redshift greater than 4 will lie behind DLa absorbing layers. SKA sensitivities will permit absorption experiments against random high redshift radio sources in every field. In fact, a standard tool for determining a lower limit to redshifts for optically blank field radio sources may be to examine the radio spectrum for narrow absorption lines of redshifted HI.
In fact, considerable progress in assessing the extent and kinematics of the DLa class of quasar absorption line system can be made before the commissioning of SKA with minimal technical adaptation of existing radio facilities. The technique requires background radio quasars or high redshift radio galaxies with extended radio continuum emission. Some effort needs to be invested in surveys to find redshifted 21cm line absorption against these types of sources. These surveys can either key on optical spectroscopy of the quasars to find DLa systems for subsequent inspection in the 21cm line, or they can make blind spectral surveys in the 21cm line directly, once the new wideband spectrometers that are being constructed for at Westerbork and the new Green Bank Telescope are completed. Then radio interferometers with suitable angular resolution at the redshifted 21cm line frequency must be used to map the absorption against the extended background source. This would involve interferometer baselines of only a few hundred kilometers – shorter than is typically associated with VLBI techniques, but longer than the VLA and GMRT baselines. The shorter spacings in the European VLBI Network and the MERLIN baselines would form an excellent basis for these experiments, although considerable effort will be required to observe at the interference riddled frequencies outside the protected radio astronomy bands.
Fig. 6 shows an example of how these experiments might work. The top panel shows contours for the radio source 3C196. Brown and Mitchell (1983) discovered a 21cm line in absorption at $`z=0.437`$ against this source in a blind spectral survey. The object has been the target of intensive optical and UV spectroscopy (summarized by Cohen et al 1996), as well as HST imaging to identify the the intervening galaxy responsible for the absorption (Cohen et al 1996, Ridgway and Stockton 1997). Fig. 6 includes a dashed ellipse in the top panel to indicate the approximate extent and orientation of the galaxy identification.
The second panel from the top in Fig. 6 illustrates the 21cm line emission spectrum typical of nearby HI-rich disk galaxies, observed by a low resolution (“single-dish”) beam that does not resolve the gaseous structure in the galaxy. The rotation of a galaxy with a flat rotation curve produces the velocity field shown to the left of the spectrum
For disk systems observed in absorption, the information accessible to the observer is less, since we can only hope to ever learn about the gas opacity and kinematics for regions that fall in front of background continuum. This restricts our knowledge to zones outlined in the third panel of Fig. 6. The “restricted emission” spectrum is drawn to illustrate what fraction of the galaxies gas content might be sensed by a sensitive synthesis mapping observation. A comparison to the total gas content in the upper spectrum suggests that much of the important information (velocity spread, for example) would be measured by a synthesis map of the absorption against background source.
The single-dish spectrum of the absorption lines observed for an object like 3C196 is weighted by the regions where the background continuum has the highest brightness. As shown in the lower panel, this weighting emphasizes the bright spots in the radio lobes. Clearly sensitive mapping will better recover the information lost in the integral spectrum produced by a low angular resolution observation. A preliminary look at recent observations of the $`z=0.437`$ absorber in 3C196 can be found in de Bruyn et al (1997).
## 7 Direct measurement of the merger rate
OH megamasers occur in the nuclear regions of merging and heavily interacting galaxies. The galaxies are characterized by disturbed morphologies, strong far infrared emission and heavy extinction at the center. The brighter OH megamasers can be detected easily at cosmological distances. Due to the heavy obscuration, these objects are not especially eye catching, but their strong FIR flux has led to the identification of a representative sample.
Briggs (1998) argued that the OH megamasers may be a useful tool in direct measurement of the galaxy merger rate over time, since these sources would be expected to turn up in radio spectroscopic surveys, such as a SKA Deep Field. Simply counting the number of OH megamasers per volume as a function of redshift would specify the number of galaxies in the merging phase at that time. The selection is both immune to obscuration and unbiased with respect to redshift since the entire radio spectrum (aside from regions of strong rfi) can be covered with a single telescope.
Fig. 4 shows estimates for the detection rate for OH megamaser sources for comparison with the HI emission from ordinary galaxies. The OH calculations use a constant comoving density of OH megamaser hosts and the local OH megamaser luminosity function. For the levels of sensitivities reached by current radio telescopes (detection levels 0.2 to 1 mJy), the OH detections should dominate for frequencies below $``$1000 MHz. Given the expectation that mergers were more numerous in the past, there may be a much high detection rate for OH emission through the range $`1<z<3`$ that shown in the figure (Briggs 1998).
## 8 Conclusion
Radio mapping in the redshifted HI line with modest spatial resolution radio interferomters promises to resolve basic questions about how galaxies assemble and evolve. By observing the cool neutral gas that traces gravitational potential wells of forming galaxies, the 21cm line provides not only a measure of the neutral gas content of the Universe over cosmic time scales but also a method to weigh the dark matter halos.
## Acknowledgements
The author is grateful to A.G. de Bruyn and J.M. van der Hulst for valuable discussions.
## References |
no-problem/9910/cond-mat9910226.html | ar5iv | text | # Pseudogaps in the 2D half-filled Hubbard model
## Introduction
For over a decade it has been recognized that the normal state properties of high-$`T_c`$ superconductors are unusual and appear to have non-Fermi liquid characteristics. One of the most remarkable features of the normal state is a suppression of the density of states at the Fermi energy in a temperature regime above $`T_c`$ in underdoped samples. Angular resolved photoemission experiments show that this pseudogap in the spectral function has a d-wave anisotropy, the same symmetry as the superconducting order parameter in these materials. This, along with theories that short-ranged spin fluctuations mediate pairing in the high-$`T_c`$ cuprates, emphasizes the importance of understanding the normal state, insulating phase.
It is thought by many that the two-dimensional Hubbard model, or closely related models, should capture the essential physics of the high-$`T_c`$ cuprates. Yet, despite years of effort, neither the precursor pseudogap nor d-wave superconducting order have been conclusively seen in the Hubbard model.
Intuitively, one may expect that the Hubbard model should show pseudogap behavior. At half-filling, the ground state of the 2D Hubbard model is an antiferromagnetic insulator and the spectrum is therefore gapped. However, the Mermin-Wagner theorem precludes any transition at finite $`T`$, so as the temperature is lowered one may anticipate that a pseudogap will develop. This question has been previously addressed in the 2D Hubbard insulator by finite-size lattice Quantum Monte Carlo (QMC) and approximate many-body techniques . The results have been contradictory and inconclusive as to the existence of a pseudogap at low temperatures, due to limitations of these techniques.
Using the recently developed Dynamical Cluster Approximation (DCA) we find that at sufficiently low temperatures a pseudogap opens in the single particle spectral weight $`A(𝐤,\omega )`$ of the 2D Hubbard model with a simultaneous destruction of the Fermi liquid state due to critical fluctuations above the $`T=0`$ transition temperature. This occurs in the weak-to-intermediate coupling regime $`U<W`$, where $`U`$ is the on-site Coulomb energy and $`W`$ the non-interacting band width.
Using finite-sized techniques, it is difficult to determine if a gap persists in the thermodynamic limit. At half filling, finite-size QMC calculations display a gap in their spectra as soon as the correlation length exceeds the lattice size, so they tend to overestimate the pseudogap as it would appear in the thermodynamic limit. Finite-size scaling is complicated by the lack of an exact scaling ansatz for the gap and the cost of performing simulations of large systems. Calculations employing Dynamical Mean Field Approximation (DMFA) in the paramagnetic phase do not display this behavior since they take place in the thermodynamic limit rather than on a finite-size lattice. However, the DMFA lacks the non-local spin fluctuations often believed to be responsible for the pseudogap. The Dynamical Cluster Approximation (DCA) is a fully causal approach which systematically incorporates non-local corrections to the DMFA by mapping the problem onto an embedded impurity cluster of size $`N_c`$. $`N_c`$ determines the order of the approximation and provides a systematic expansion parameter $`1/N_c`$. While the DCA becomes exact in the limit of large $`N_c`$ it reduces to the DMFA for $`N_c=1`$. Thus, the DCA differs from the usual finite size lattice calculations in that it is a reasonable approximation to the lattice problem even for a “cluster” of a single site. Like the DMFA, the DCA solution remains in the thermodynamic limit, but the dynamical correlation length is restricted to the size of the embedded cluster. Thus the DCA tends to underestimate the pseudogap.
## Method
The DCA is based on the assumption that the lattice self energy is weakly momentum dependent. This is equivalent to assuming that the dynamical intersite correlations have a short spatial range $`bL/2`$ where $`L`$ is the linear dimension of the cluster. Then, according to Nyquist’s sampling theorem, to reproduce these correlations in the self energy, we only need to sample the reciprocal space at intervals of $`\mathrm{\Delta }k2\pi /L`$. Therefore, we could approximate $`G(𝐊+\stackrel{~}{𝐤})`$ by $`G(𝐊)`$ within the cell of size $`\left(\pi /L\right)^D`$ (see, Fig. 1) centered on the cluster momentum $`𝐊`$ (wherever feasible, we suppress the frequency labels) and use this Green function to calculate the self energy. Knowledge of these Green functions on a finer scale in momentum is unnecessary, and may be discarded to reduce the complexity of the problem. Thus the cluster self energy can be constructed from the coarse-grained average of the single-particle Green function within the cell centered on the cluster momenta:
$$\overline{G}(𝐊)\frac{N_c}{N}\underset{\stackrel{~}{𝐤}}{}G(𝐊+\stackrel{~}{𝐤}),$$
(1)
where $`N`$ is the number of points of the lattice, $`N_c`$ is the number of cells in the cluster, and the $`\stackrel{~}{𝐤}`$ summation runs over the momenta of the cell about the cluster momentum $`𝐊`$ (see, Fig. 1). For short distances $`rL/2`$ the Fourier transform of the Green function $`\overline{G}(r)G(r)+𝒪((r\mathrm{\Delta }k)^2)`$, so that short ranged correlations are reflected in the irreducible quantities constructed from $`\overline{G}`$; whereas, longer ranged correlations $`r>L/2`$ are cut off by the finite size of the cluster.
This coarse graining procedure and the relationship of the DCA to the DMFA is illustrated by a microscopic diagrammatic derivation of the DCA. For Hubbard-like models, the properties of the bare vertex are completely characterized by the Laue function $`\mathrm{\Delta }`$ which expresses the momentum conservation at each vertex. In a conventional diagrammatic approach $`\mathrm{\Delta }(𝐤_1,𝐤_2,𝐤_3,𝐤_4)=_𝐫\mathrm{exp}\left[i𝐫(𝐤_1𝐤_2+𝐤_3𝐤_4)\right]=N\delta _{𝐤_1+𝐤_2,𝐤_3+𝐤_4}`$ where $`𝐤_1`$ and $`𝐤_2`$ ($`𝐤_3`$ and $`𝐤_4`$) are the momenta entering (leaving) each vertex through its legs of $`G`$. However as $`D\mathrm{}`$ Müller-Hartmann showed that the Laue function reduces to
$`\mathrm{\Delta }_D\mathrm{}(𝐤_1,𝐤_2,𝐤_3,𝐤_4)=1+𝒪(1/D)\text{.}`$ (2)
The DMFA assumes the same Laue function, $`\mathrm{\Delta }_{DMFA}(𝐤_1,𝐤_2,𝐤_3,𝐤_4)=1`$, even in the context of finite dimensions. Thus, the conservation of momentum at internal vertices is neglected. Therefore we may freely sum over the internal momenta at each vertex in the generating functional $`\mathrm{\Phi }_{DMFA}`$. This leads to a collapse of the momentum dependent contributions to the functional $`\mathrm{\Phi }_{DMFA}`$ and only local terms remain.
The DCA systematically restores the momentum conservation at internal vertices. As discussed above the Brillouin-zone is divided into $`N_c=L^D`$ cells of size $`(2\pi /L)^D`$. Each cell is represented by a cluster momentum $`𝐊`$ in the center of the cell. We require that momentum conservation is (partially) observed for momentum transfers between cells, i.e. for momentum transfers larger than $`\mathrm{\Delta }k=2\pi /L`$, but neglected for momentum transfers within a cell, i.e less than $`\mathrm{\Delta }k`$. This requirement can be established by using the Laue function
$$\mathrm{\Delta }_{DCA}(𝐤_1,𝐤_2,𝐤_3,𝐤_4)=N_c\delta _{𝐌(𝐤_1)+𝐌(𝐤_3),𝐌(𝐤_2)+𝐌(𝐤_4)}$$
(3)
where $`𝐌(𝐤)`$ is a function which maps $`𝐤`$ onto the momentum label $`𝐊`$ of the cell containing $`𝐤`$ (see, Fig. 1).
With this choice of the Laue function the momenta of each internal leg may be freely summed over the cell. This is illustrated for the second-order term in the generating functional in Fig. 2. Thus, each internal leg $`G(𝐤_1)`$ in a diagram is replaced by the coarse–grained Green function $`\overline{G}(𝐌(𝐤_1))`$, defined by Eq. 1
The diagrammatic sequences for the generating functional and its derivatives are unchanged; however, the complexity of the problem is greatly reduced since $`N_cN`$. We showed previously that the DCA estimate of the lattice free-energy is minimized by the approximation $`\mathrm{\Sigma }(𝐤)\overline{\mathrm{\Sigma }}(𝐌(𝐤))`$, where $`\delta \mathrm{\Phi }_{DCA}/\delta \overline{G}=\overline{\mathrm{\Sigma }}`$.
The cluster problem is then solved by usual techniques such as QMC, the non-crossing approximation or the Fluctuation-Exchange approximation. Here we employ a generalization of the Hirsh-Fye QMC algorithm to solve the cluster problem. The initial Green function for this procedure is the bare cluster Green function $`𝒢(𝐊)^1=\overline{G}(𝐊)^1+\overline{\mathrm{\Sigma }}(𝐊)`$ which must be introduced to avoid over-counting diagrams. The QMC estimate of the cluster self energy is then used to calculate a new estimate of $`\overline{G}(𝐊)`$ using Eq. 1. The corresponding $`𝒢(𝐊)`$ is used to reinitialize the procedure which continues until the self energy converges to the desired accuracy.
## Results
We study the 2D Hubbard Hamiltonian:
$`H`$ $`=`$ $`t{\displaystyle \underset{i,j,\sigma }{}}(c_{i\sigma }^{}c_{j\sigma }+c_{j\sigma }^{}c_{i\sigma })`$ (5)
$`+U{\displaystyle \underset{i}{}}(n_i{\displaystyle \frac{1}{2}})(n_i{\displaystyle \frac{1}{2}})\mu {\displaystyle \underset{i,\sigma }{}}n_{i\sigma }.`$
where $`c_{i\sigma }^{}(c_{i\sigma })`$ creates (destroys) an electron at site $`i`$ with spin $`\sigma `$, $`U`$ is the on-site Coulomb potential, and $`n_{i\sigma }=c_{i\sigma }^{}c_{i\sigma }`$ is the number operator. We set the overlap integral $`t=1`$ and measure all energies in terms of $`t`$. We work at $`\mu =0`$ where the system is half-filled ($`n=1`$). We choose $`U=5.2`$, which is well below the value $`UW`$ believed to be necessary to open a Mott-Hubbard gap.
We also calculate the angle integrated dynamical spin susceptibility shown in the inset. It does not have a pseudogap, as expected for the half-filled model since the spin-wave spectrum is gapless. Since a (spin) charge gap is generally defined as one which appears in the (spin) charge dynamics or thermodynamics, we conclude that the pseudogap is only in the charge response and is due to short-ranged antiferromagnetic spin correlations.
Fig. 3 shows the spectral density $`A(𝐤,\omega )`$, and the real $`\mathrm{Re}\mathrm{\Sigma }(𝐤,\omega )`$ and imaginary $`\mathrm{Im}\mathrm{\Sigma }(𝐤,\omega )`$ parts of the self-energy for the 2D Hubbard model via the DCA with a paramagnetic host at the Fermi surface $`X`$ point $`𝐤=(\pi ,0)`$ for a 64-site cluster ($`N_c=64`$) at various temperatures. We obtain the spectral function $`A(𝐤,\omega )`$ via the Maximum Entropy Method (MEM). As the temperature is lowered, the system first builds a Fermi-liquid-like peak in $`A(𝐤,\omega )`$. By $`\beta =2.6`$, a pseudogap begins to develop in $`A(𝐤,\omega )`$. The pseudogap builds as the temperature is further lowered.
Fig. 4 shows the spectral function $`A(𝐤,\omega )`$ at the half-filled Fermi surface point $`𝐤=(\pi /2,\pi /2)`$. The qualitative features are similar to those in Fig. 3, but the pseudogap opens at a lower temperature and the distance between the peaks is smaller than that seen at the $`X`$ point. This behavior is reminiscent of the anisotropy of the pseudogap observed experimentally in the insulating and in the superconducting cuprates, although the anisotropy found here is not large enough to be comparable with the experimental one. We speculate that the anisotropy seen here may be due to a difference in the number of states near the Fermi energy at these two points in the zone.
The DCA self-energy spectra in Figs. 3 & 4 support the spectral evidence. At the $`X`$ point, the slope of the real part $`\mathrm{Re}\mathrm{\Sigma }(𝐤,\omega )`$ becomes positive for $`\beta \geq 2.6`$, i.e. below the temperature at which we observed the opening of a pseudogap. This signals the appearance of two new solutions of the quasiparticle equation $`\mathrm{Re}(\omega -ϵ_𝐤-\mathrm{\Sigma }(𝐤,\omega ))=0`$ in addition to the strongly damped solution at $`\omega =0`$ which is also present in the noninteracting system. These two new quasiparticle solutions for the same $`𝐤`$-vector indicate precursor effects of the onset of antiferromagnetic ordering, which entails a doubling of the unit cell. They are referred to as shadow states and are caused by antiferromagnetic spin fluctuations in the paramagnetic state. At these temperatures, the imaginary part $`\mathrm{Im}\mathrm{\Sigma }(𝐤,\omega )`$ displays a local minimum at $`\omega =0`$, indicating the breakdown of Fermi-liquid behavior. We note that a different conclusion was previously reached in a FLEX study, which found that $`\mathrm{Im}\mathrm{\Sigma }(𝐤,\omega )`$ has a local minimum at $`\omega =0`$ which was not accompanied by an opening of a pseudogap. Since the pseudogap is due to short-range spin correlations, we conclude that FLEX underestimates these correlations.
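The appearance of the extra solutions can be illustrated with a short numerical sketch. This is a toy model of our own: we mimic the low-temperature self-energy at the Fermi surface by a single antiferromagnetic-fluctuation pole, $`\mathrm{\Sigma }(\omega )=\mathrm{\Delta }^2/(\omega +i\gamma )`$; none of the numbers are fitted to the QMC data.

```python
import numpy as np

# toy self-energy with an AF-fluctuation pole (illustrative numbers)
Delta, gamma = 1.0, 0.3          # pseudogap scale and damping
eps_k = 0.0                      # at the Fermi surface, X point

w = np.linspace(-3, 3, 6000)     # even count keeps w = 0 off the grid
re_sigma = Delta**2 * w / (w**2 + gamma**2)   # Re Sigma(w)
f = w - eps_k - re_sigma                      # quasiparticle equation

# count sign changes of f(w): each one brackets a solution
roots = w[:-1][np.sign(f[:-1]) != np.sign(f[1:])]
print(roots)   # three crossings when Delta > gamma:
               # w ~ 0 and the two shadow states at +-sqrt(Delta^2-gamma^2)
```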
It is instructive to compare the DCA results with those obtained by finite-size QMC calculations. Fig. 5 shows the spectral density $`A(\pi ,0,\omega )`$ obtained by analytically continuing both finite-size and DCA QMC data. In spite of the difference in the two methods, the information they provide is complementary. In the finite-size results, (b), we see a similar opening of a pseudogap. However, as the length of the antiferromagnetic (AF) correlations reaches the longest length on the finite-size lattice, the system develops a full gap. Thus, the finite-size QMC overestimates the size of the gap. In the DCA results, (a), the pseudogap emerges as soon as $`N_c>1`$. The temperature $`T^{}`$ at which the pseudogap first becomes apparent in the spectra, as well as the full width $`\mathrm{\Delta }`$ measured from peak to peak, is plotted in the inset. Both $`T^{}`$ and $`\mathrm{\Delta }`$ increase with $`N_c`$. Since the DCA calculation remains in the thermodynamic limit, a full gap due to antiferromagnetic correlations alone cannot open until their correlation length diverges. However, since these correlations are restricted to the size of the cluster, the DCA systematically underestimates the size of the gap. Thus, if a pseudogap exists in the DCA for finite $`N_c`$, it should persist in the limit $`N_c\to \infty `$.
In summary, we have employed the recently developed DCA to study the long-standing question of whether the half-filled Hubbard model has a pseudogap due to AF spin fluctuations. We find conclusive evidence of a pseudogap in the charge dynamics and have shown unambiguously that the $`T=0`$ phase transition of the half-filled model is preceded by the opening of a pseudogap in $`A(𝐤_F,\omega )`$, accompanied by pronounced non-Fermi-liquid behavior in $`\mathrm{\Sigma }(𝐤_F,\omega )`$.
## Acknowledgments
We would like to acknowledge useful conversations with P. van Dongen, B. Gyorffy, M. Hettler, H.R. Krishnamurthy, R.R.P. Singh and J. Zaanen. This work was supported by the National Science Foundation grants DMR-9704021, DMR-9357199, and the Ohio Supercomputing Center.
# The Generalized Uncertainty Principle from Quantum Geometry
## 1 Introduction
The problem of reconciling Quantum Mechanics (QM) with General Relativity is one of the tasks of modern theoretical physics which, until now, has not yet found a consistent and satisfactory solution. The difficulty arises since general relativity deals with the events which define the world–lines of particles, while QM does not allow the definition of a trajectory; in fact the determination of the position of a quantum particle involves a measurement which introduces an uncertainty into its momentum (Wigner, 1957; Saleker, 1958; Feynman, 1965).
These conceptual difficulties have their origin, as argued in Refs. (Candelas, 1983; Donoghue, 1984, 1985), in the violation, at the quantum level, of the weak principle of equivalence on which general relativity is based. Such a problem becomes more involved in the formulation of a quantum theory of gravity, owing to the non–renormalizability of general relativity when one quantizes it as a local Quantum Field Theory (QFT) (Birrel, 1982).
Nevertheless, one of the most interesting consequences of this unification is that in quantum gravity there exists a minimal observable distance on the order of the Planck distance, $`l_P=\sqrt{G\hbar /c^3}\simeq 10^{-33}`$ cm, where $`G`$ is the Newton constant. The existence of such a fundamental length is a dynamical phenomenon due to the fact that, at Planck scales, there are fluctuations of the background metric, i.e. a limit of the order of the Planck length appears when quantum fluctuations of the gravitational field are taken into account.
In absence of a theory of quantum gravity, one tries to analyze quantum aspects of gravity retaining the gravitational field as a classical background, described by general relativity, and interacting with matter field. This semiclassical approximation leads to QFT and QM in curved space-time and may be considered as a preliminary step towards the complete quantum theory of gravity. In other words, we take into account a theory where geometry is classically defined while the source of Einstein equations is an effective stress–energy tensor where contributions of matter quantum fields, gravity self–interactions, and quantum matter–gravity interactions appear (Birrel, 1982).
Besides, the canonical commutation relations between the momentum operator $`p^\nu `$ and the position operator $`x^\mu `$, which in Minkowski space-time are $`[x^\mu ,p^\nu ]=i\hbar \eta ^{\mu \nu }`$, can be generalized in a curved space-time with metric $`g_{\mu \nu }`$ as
$$[x^\mu ,p^\nu ]=i\hbar g^{\mu \nu }(x).$$
(1)
Eq. (1) contains gravitational effects of a particle in the first quantization scheme. Its validity is confined to curved space–times that are asymptotically flat, so that the metric tensor can be decomposed as $`g_{\mu \nu }=\eta _{\mu \nu }+h_{\mu \nu }`$, where $`h_{\mu \nu }`$ is the (local) perturbation to the flat background (Ashtekar, 1990). We note that the usual commutation relations between position and momentum operators in Minkowski space–time are obtained by using the vierbein formalism, i.e. by projecting the commutator and the metric tensor on the tangent space.
As is well known, a theory containing a fundamental length on the order of $`l_P`$ (which can be related to the extension of particles) is string theory. It provides a consistent theory of quantum gravity and allows one to avoid the above mentioned difficulties. In fact, unlike point particle theories, the existence of a fundamental length plays the role of a natural cut–off. In such a way, the ultraviolet divergences are avoided without appealing to renormalization and regularization schemes (Green, 1987).
Besides, by studying string collisions at planckian energies and through a renormalization group type analysis (Veneziano, 199; Amati, 1987, 1988, 1989, 1990; Gross, 1987, 1988; Konishi, 1990; Guida, 1991; Yonega, 1989), the emergence of a minimal observable distance leads to the generalized uncertainty principle
$$\mathrm{\Delta }x\geq \frac{\hbar }{2\mathrm{\Delta }p}+\frac{\alpha }{c^3}G\mathrm{\Delta }p.$$
(2)
Here, $`\alpha `$ is a constant. At energies much below the Planck mass, $`m_P=\sqrt{\hbar c/G}\simeq 10^{19}`$ GeV/c<sup>2</sup>, the extra term in Eq. (2) is irrelevant and the Heisenberg relation is recovered, while, as we approach the Planck energy, this term becomes relevant and, as said, it is related to the minimal observable length.
The purpose of this paper is to recover the generalized uncertainty principle, Eq. (2), in the framework of Quantum Geometry theory (Caianiello, 1979, 1980a, 1980b, 1992). This theory tries to incorporate quantum aspects into space–time geometry so that one–particle QM may acquire a geometric interpretation. Its formulation is based on the fact that the position and momentum operators are represented as covariant derivatives with an appropriate connection in an eight–dimensional manifold, and the quantization is geometrically interpreted as curvature of phase space.
A consequence of this geometric approach is the existence of a maximal acceleration, defined as the upper limit $`𝒜`$ to the proper acceleration experienced by massive particles along their worldlines (Caianiello, 1981, 1982, 1984). It can be interpreted as mass–dependent, $`𝒜_m=2mc^3/\hbar `$ ($`m`$ is the mass of the particle), or as a universal constant, $`𝒜=m_Pc^3/\hbar `$ ($`m_P`$ is the Planck mass). Since the regime of validity of (2) is at Planck scales, in order to derive it from quantum geometry we will consider the maximal acceleration depending on the Planck mass.
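For orientation, the numerical size of this universal bound is easy to check; the short calculation below is ours, with standard physical constants rounded for illustration:

```python
import math

hbar = 1.0546e-34      # J s
c    = 2.9979e8        # m / s
G    = 6.674e-11       # m^3 / (kg s^2)

m_P = math.sqrt(hbar * c / G)   # Planck mass
A   = m_P * c**3 / hbar         # maximal acceleration

print(m_P)   # ~ 2.18e-8 kg  (~ 1.2e19 GeV/c^2)
print(A)     # ~ 5.6e51 m/s^2, i.e. of order 1e52 m/s^2,
             # the value quoted in the Conclusions
```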
The existence of a maximal acceleration has several implications for relativistic kinematics (Scarpetta, 1984), the energy spectrum of a uniformly accelerated particle (Caianiello, 1990a), the Schwarzschild horizon (Gasperini, 1989), the expansion of the very early universe (Caianiello, 1991), tunneling from nothing (Capozziello, 1993; Caianiello, 1994), and the mass of the Higgs boson (Kuwata, 1996). It also makes the metric observer–dependent, as conjectured by Gibbons and Hawking (Gibbons, 1977), and leads, in a natural way, to hadronic confinement (Caianiello, 1988). Besides, the regularizing properties of the maximal acceleration have recently been analyzed in Ref. (Feoli, 1999), and its applications in the framework of string theory have been studied in Refs. (Feoli, 1993; McGuigan, 1994).
Moreover, concrete experimental tests of the consequences of the maximal acceleration have been proposed in Refs. (Caianiello, 1990; Papini, 1995a; Lambiase, 1998).
Limiting values for the acceleration were also derived by several authors on different grounds and applied to many branches of physics (Brandt, 1983, 1984, 1989; Das, 1980; Frolov, 1991; Papini, 1992, 1995b; Pati, 1992; Sanchez, 1993; Toller, 1988, 1990, 1991; Vigier, 1991; Wood, 1989, 1992).
The paper is organized as follows. In Section 2, we briefly discuss quantum geometry theory, recalling only the main topics used in this paper. Section 3 is devoted to deriving the generalized uncertainty principle from quantum geometry. Conclusions are discussed in Section 4.
## 2 Quantum Geometry Theory
Quantum geometry includes the effects of the maximal acceleration on the dynamics of particles by enlarging the space-time manifold to the eight-dimensional space-time tangent bundle TM, i.e. $`M_8=V_4\otimes TV_4`$, where $`V_4`$ is the background space–time equipped with the metric $`g_{\mu \nu }`$. In this way, the invariant line element defined in $`M_8`$ is generalized as
$$d\stackrel{~}{s}^2=g_{AB}dX^AdX^B,\qquad A,B=1,\dots ,8,$$
(3)
where the coordinates of $`M_8`$ are
$$X^A=\left(x^\mu ;\frac{c^2}{𝒜}\frac{dx^\mu }{ds}\right),\qquad \mu =1,\dots ,4.$$
(4)
$`ds`$ is the usual infinitesimal line element, $`ds^2=g_{\mu \nu }dx^\mu dx^\nu `$, $`𝒜`$ is the maximal acceleration, and
$$g_{AB}=g_{\mu \nu }\oplus g_{\mu \nu }.$$
(5)
From Eq. (5), it follows that the generalized line element (3) can be written as
$$d\stackrel{~}{s}^2=g_{\mu \nu }(dx^\mu dx^\nu +\frac{c^4}{𝒜^2}d\dot{x}^\mu d\dot{x}^\nu ).$$
(6)
An embedding procedure can be developed (Caianiello, 1990b) in order to find the effective space-time geometry in which a particle moves when the constraint of the maximal acceleration is present. In fact, if we find the parametric equations that relate the velocity field $`\dot{x}^\mu `$ to the first four coordinates $`x^\mu `$, we can calculate the effective four-dimensional metric $`\stackrel{~}{g}_{\mu \nu }`$ induced on the hypersurface locally embedded in $`M_8`$. For a particle of mass $`m`$ accelerating along its worldline, Eq. (6) implies that it behaves dynamically as if it were embedded in a space–time with the metric
$$d\stackrel{~}{s}^2=\left(1+c^4\frac{\ddot{x}^\sigma \ddot{x}_\sigma }{𝒜^2}\right)ds^2,$$
(7)
or, in terms of metric tensor
$$\stackrel{~}{g}_{\mu \nu }=\left(1+c^4\frac{\ddot{x}^\sigma \ddot{x}_\sigma }{𝒜^2}\right)g_{\mu \nu },$$
(8)
that depends on the squared length of the (spacelike) four–acceleration, $`|\ddot{x}|^2=g_{\sigma \rho }\ddot{x}^\sigma \ddot{x}^\rho `$. Particularly interesting is the case in the absence of gravity, $`g_{\mu \nu }=\eta _{\mu \nu }`$ which corresponds to a flat background. In this case, any accelerating particle experiences a gravitational field given by
$$\stackrel{~}{g}_{\mu \nu }=\left(1+c^4\frac{\ddot{x}^\sigma \ddot{x}_\sigma }{𝒜^2}\right)\eta _{\mu \nu }=\eta _{\mu \nu }+h_{\mu \nu },$$
(9)
where $`h_{\mu \nu }=c^4(\ddot{x}^\sigma \ddot{x}_\sigma /𝒜^2)\eta _{\mu \nu }`$ is the quantum (local) perturbation to the Minkowskian metric. From Eq. (9) it follows that
$$\stackrel{~}{g}^{\mu \nu }\simeq \left(1-c^4\frac{\ddot{x}^\sigma \ddot{x}_\sigma }{𝒜^2}\right)\eta ^{\mu \nu }.$$
(10)
Nevertheless, we stress that this curvature is not induced by matter through the conventional Einstein equations; it is due to the motion in momentum space and vanishes in the limit $`\hbar \to 0`$. Thus, it represents a quantum correction to the given background geometry, which, henceforth, we will assume flat.
## 3 Generalized Uncertainty Principle
Let us now derive the generalized uncertainty principle (2) starting from relation (1), where the metric tensor is induced by the acceleration of a massive particle in a high energy scattering process.
According to the hypothesis that microscopic space-time should be regarded as a four-dimensional hypersurface locally embedded in the larger eight-dimensional manifold, as discussed in the previous section, accelerated particles can be associated with four-dimensional hypersurfaces whose curvature is, in general, non vanishing. At this semiclassical level, the effective space-time geometry experienced by interacting particles is curved.
Inserting (9) into (1), one gets
$$[x^\mu ,p^\nu ]=i\hbar \left(1+c^4\frac{(\ddot{x}^\sigma \ddot{x}_\sigma )_m}{𝒜^2}\right)^{-1}\eta ^{\mu \nu }.$$
(11)
The right-hand side is understood as a c–number function. The term $`(\ddot{x}^\sigma \ddot{x}_\sigma )_m`$ is the mean value of the squared length of the four–acceleration, which takes into account the quantum fluctuations of the metric.
Since $`\ddot{x}^\mu =(1/mc)\,\delta p^\mu /\delta s`$, where $`\delta p^\mu `$ is the transferred momentum, it follows that
$$(\ddot{x}^\sigma \ddot{x}_\sigma )_m\simeq \frac{1}{m^2c^2\delta s^2}\left[\frac{p^ip^j}{|\stackrel{}{p}|^2}-\delta ^{ij}\right](\delta p^i\delta p^j)_m,\qquad i,j=1,2,3,$$
(12)
where the high energy limit $`E\gg m`$ has been used. Due to the average over the product of transferred momenta, one can assume
$$(\delta p^i\delta p^j)_m\simeq \mathrm{\Delta }p^2\delta ^{ij},$$
(13)
then Eq. (12) reads as
$$(\ddot{x}^\sigma \ddot{x}_\sigma )_m\simeq -2\frac{\mathrm{\Delta }p^2}{m^2c^2\delta s^2}.$$
(14)
$`\mathrm{\Delta }p`$ is the transferred momentum along the $`x`$-direction.
As is well known, two non–commuting operators $`A`$ and $`B`$ defined in a Hilbert space satisfy, for any given state, the uncertainty relation
$$\mathrm{\Delta }A\,\mathrm{\Delta }B\geq \frac{1}{2}|\langle [A,B]\rangle |.$$
If $`A=x^\mu `$ and $`B=p^\nu `$, Eqs. (10) and (11) allow us to write
$$\mathrm{\Delta }x^\mu \mathrm{\Delta }p^\nu \geq \frac{\hbar }{2}|\eta ^{\mu \nu }|\left|1-c^4\frac{(\ddot{x}^\sigma \ddot{x}_\sigma )_m}{𝒜^2}\right|.$$
(15)
From Eq. (14) and for $`\mu =\nu =1`$, one obtains
$$\mathrm{\Delta }x\mathrm{\Delta }p\geq \frac{\hbar }{2}+\frac{\hbar c^2}{m^2𝒜^2\delta s^2}\mathrm{\Delta }p^2.$$
(16)
For $`𝒜=m_Pc^3/\hbar `$, where $`m_P=(\hbar c/G)^{1/2}`$, and $`\delta s\simeq \lambda _c=\hbar /mc`$, with $`\lambda _c`$ the Compton length, it becomes
$$\mathrm{\Delta }x\mathrm{\Delta }p\geq \frac{\hbar }{2}+\frac{\alpha }{c^3}G\mathrm{\Delta }p^2,$$
(17)
that is, we recover Eq. (2); $`\alpha `$ is a free parameter. Eq. (17) is the result which we wanted: the geometrical interpretation of QM through a quantization model formulated in an eight–dimensional manifold, implying the existence of an upper limit on the acceleration of particles, leads to the generalized uncertainty principle of string theory.
It is worthwhile to note that, in the last term of (17), the dependence on $`\hbar `$ disappears. Thus this term is not related to quantum fluctuations but, as in the uncertainty principle for strings, is due to the intrinsic extension of particles.
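As a consistency check (a short derivation of our own, not part of the original argument), one may minimize the right-hand side of Eq. (17), written as $`\mathrm{\Delta }x\geq \hbar /(2\mathrm{\Delta }p)+(\alpha /c^3)G\mathrm{\Delta }p`$, with respect to $`\mathrm{\Delta }p`$. The minimum occurs at
$$\mathrm{\Delta }p_{}=\sqrt{\frac{\hbar c^3}{2\alpha G}},$$
where the two terms contribute equally, so that the minimal observable length is
$$\mathrm{\Delta }x_{min}=\sqrt{\frac{2\alpha \hbar G}{c^3}}=\sqrt{2\alpha }\,l_P,$$
of the order of the Planck length, as anticipated in the Introduction.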
## 4 Conclusions
Starting from the uncertainty principle of QM written in a space–time, where the effective geometry is induced by the acceleration of particles moving along their worldlines, the generalized uncertainty principle of string theory has been derived.
In this model we have assumed the maximal acceleration to be a universal constant expressed in terms of the Planck mass, whose value is $`𝒜\simeq 10^{52}`$ m/sec<sup>2</sup>. As expected, it becomes relevant at very high energies, where the emergence of a minimal observable distance occurs.
Unlike string theory, in which the extension of particles is introduced ab initio, in quantum geometry such an extension is taken into account through the constraint of the maximal acceleration, that is, by modifying the geometry in which an accelerating particle moves.
In this sense, we can state that the geometrical formulation of QM is an alternative approach to studying the physics of extended objects.
However, we note that we have not used any second quantization scheme or full QFT approach in deriving our generalized uncertainty principle; nevertheless, the result is indicative of the fact that quantum geometry is an alternative scheme leading to physically interesting results.
# Interactions Between Charged Rods Near Salty Surfaces
## 1 Introduction
Recent experiments on various systems containing DNA strands and charged surfaces have raised an interest in understanding how these surfaces screen the electrostatic fields of the DNA and the resulting interactions between strands. The interest emanates from sources varying from understanding prokaryotic DNA replicationprokaryotic , to non-viral gene therapygene ; safinya ; bruinsma ; helmut , and even DNA chip technologychips .
In this paper we treat this problem using a two-dimensional salt solution modelderjaguin ; tots ; velazquez ; our ; sam to account for the charged surface (membrane), while the DNA strands are modeled as negatively charged rigid rods. We treat charge neutral systems where the overall charge of the surface and the rod is zero, thus focusing either on overall neutrally charged mixed lipid fluid membranesmixed , or highly charged surfaces to which the counter-ions are strongly bound and therefore treated within a two-dimensional geometry. We neglect the possible dependence on dielectric properties of the components in the experimental systems mentioned above in order to simplify the theoretical picture. The effects of the thickness and the dielectric properties of the layers will be published elsewherenext .
We focus on very simple geometries in order to understand how the DNA strand and surface charges interact and screen. The geometry is that of one, infinite, salty surface decorated with either one or two DNA strands. We calculate the charge distribution around the DNA and the resulting interaction between the two strands. We calculate the interaction assuming the DNA strands are slightly raised above the surface (See Fig. 1).
The theory we use to obtain our analytical results is within the Debye-Hückel approximationlandau : We minimize the free energy of the system with respect to the charge densities and use the result in the Poisson equation (thus obtaining the Poisson-Boltzmann (PB) equation) which we linearize with respect to the electrostatic potential. Solving this equation leads to the optimized self consistent charge distribution which we can insert back into the free energy in order to obtain the resulting interactions. In the next section we introduce the model and the formal results, and in the following section we apply it to find the interactions between two strands. We compare these results with new simulations of two-dimensional salt solutions and we conclude with a discussion on the limits of applicability and relevance of this model.
## 2 Model
The free energy of a system of fixed and mobile charges includes electrostatic terms and entropic terms:
$$F_{elec}=\frac{1}{2}\frac{e^2}{ϵ}\int \left(\frac{\sigma (\stackrel{}{r})\sigma (\stackrel{}{r}^{})}{|\stackrel{}{r}-\stackrel{}{r}^{}|}+\frac{\mathrm{\Sigma }(\stackrel{}{r})\mathrm{\Sigma }(\stackrel{}{r}^{})}{|\stackrel{}{r}-\stackrel{}{r}^{}|}+2\frac{\sigma (\stackrel{}{r})\mathrm{\Sigma }(\stackrel{}{r}^{})}{|\stackrel{}{r}-\stackrel{}{r}^{}|}\right)𝑑\stackrel{}{r}\,𝑑\stackrel{}{r}^{},$$
(1)
$$S=-\int \left(\sigma _+(\stackrel{}{r})(\mathrm{log}(\sigma _+(\stackrel{}{r})a_0)-1)+\sigma _{}(\stackrel{}{r})(\mathrm{log}(\sigma _{}(\stackrel{}{r})a_0)-1)\right)𝑑\stackrel{}{r}.$$
(2)
Here $`\sigma _+`$ and $`\sigma _{}`$ are the number densities of the positive and negative mobile charges, where the total mobile charge density is given by $`e\sigma =e\sigma _+-e\sigma _{}`$, and $`e\mathrm{\Sigma }=e\mathrm{\Sigma }_+-e\mathrm{\Sigma }_{}`$ are the fixed charge densities.
Minimizing the Grand Potential:
$$G=F_{elec}-k_BTS-\mu \int \left(\sigma _++\sigma _{}\right)𝑑\stackrel{}{r},$$
(3)
with respect to the mobile charge densities, $`\sigma _+`$ and $`\sigma _{}`$, yields:
$$\sigma _+(\stackrel{}{r})=a_0^{-1}e^{\frac{-e\varphi (\stackrel{}{r})+\mu }{k_BT}},\qquad \sigma _{}(\stackrel{}{r})=a_0^{-1}e^{\frac{e\varphi (\stackrel{}{r})+\mu }{k_BT}}.$$
(4)
Here the chemical potentials of the positive and negative charges are taken to be equal, both denoted $`\mu `$, since we treat the thermodynamic limit of an infinite system where the average numbers of positive and negative charges are equal far from the rods, and
$$\varphi (\stackrel{}{r})=\frac{e}{ϵ}\int \left(\frac{\sigma _+(\stackrel{}{r}^{})-\sigma _{}(\stackrel{}{r}^{})+\mathrm{\Sigma }_+(\stackrel{}{r}^{})-\mathrm{\Sigma }_{}(\stackrel{}{r}^{})}{|\stackrel{}{r}-\stackrel{}{r}^{}|}\right)𝑑\stackrel{}{r}^{},$$
(5)
is the resulting electrostatic potential. Inserting these results into the free energy we find the formal expression:
$$G=\frac{1}{2}\int \left(\mathrm{\Sigma }(\stackrel{}{r})-\sigma (\stackrel{}{r})\right)\varphi (\stackrel{}{r})𝑑\stackrel{}{r}.$$
(6)
In this expression we have dropped constant terms that do not depend on the exact geometry of the system since we are interested in how the energy depends on the distances between the charges objects. We will use this expression in the next section to calculate the interactions in this system. Note that the mobile charge density, $`\sigma `$, enters with an opposite sign to what one would have naively guessed to be the interaction. This is due to the fact that this term enters as an entropic contribution and therefore indicates how the entropy has been reduced (and thus the free energy increased) due to the arrangement of the mobile charges around the fixed charges.
By inserting the distributions of Eq. 4 in the Poisson equation we get the Poisson Boltzmann equationsambook :
$$ϵ\nabla ^2\varphi (\stackrel{}{r})=-4\pi e\left(\left(e^{\frac{-e\varphi (\stackrel{}{r})+\mu }{k_BT}}-e^{\frac{e\varphi (\stackrel{}{r})+\mu }{k_BT}}\right)a_0^{-1}+\mathrm{\Sigma }(\stackrel{}{r})\right)\delta (z).$$
(7)
The $`\delta `$ function was introduced because the charges are confined to the surface at $`z=0`$. Linearizing this equation yields the Debye-Hückel (DH) equationsambook :
$$ϵ\nabla ^2\varphi (\stackrel{}{r})=\left(\frac{1}{\lambda }\varphi (\stackrel{}{r})-4\pi e\mathrm{\Sigma }(\stackrel{}{r})\right)\delta (z);\qquad \frac{1}{\lambda }=\frac{8\pi e^2e^{\frac{\mu }{k_BT}}}{k_BTa_0}.$$
(8)
Solving Eq. 8 for a given fixed charge distribution $`\mathrm{\Sigma }(\stackrel{}{r})`$ yields the mobile charge distribution and the electrostatic potentials and fields. In our case the fixed charge is that of a uniformly charged stiff rod (model DNA). We first solve Eq. 8 for a fixed point charge $`Q`$ (i.e., $`\mathrm{\Sigma }(\stackrel{}{r})=Q\delta (\rho )\delta (z)`$) and then, since the problem is linear, we integrate to find the corresponding solution for an infinitely long rod.
The potential that solves Eq. 8 has two contributions: a singular part, $`\varphi _{sing}=\frac{Q}{ϵ\sqrt{\rho ^2+z^2}}`$, which solves the equation for a single point charge, $`ϵ\nabla ^2\varphi _{sing}(\stackrel{}{r})=-4\pi eQ\delta (\rho )\delta (z)`$. The second contribution, $`\psi `$, arises from the mobile charges on the surface and must satisfy the remaining equation:
$$ϵ\nabla ^2\psi (\stackrel{}{r})=\frac{1}{\lambda }\left(\varphi _{sing}+\psi \right)\delta (z).$$
(9)
Eq. 9 is the Laplace equation, $`\nabla ^2\psi =0`$, away from the plane, with the special boundary condition at $`z=0`$:
$$ϵ\frac{\partial \psi (0_+)}{\partial z}-ϵ\frac{\partial \psi (0_{})}{\partial z}=\frac{1}{\lambda }\left(\psi +\varphi _{sing}\right)|_{z=0}.$$
(10)
We solve for $`\psi `$ with a family of solutions of the Laplace equation:
$$\psi (\stackrel{}{r})=\int _0^{\mathrm{\infty }}\alpha _qe^{-q|z|}J_0(q\rho )𝑑q.$$
(11)
Using the identity $`\varphi _{sing}=\frac{Q}{ϵ\sqrt{\rho ^2+z^2}}=\frac{Q}{ϵ}\int _0^{\mathrm{\infty }}e^{-q|z|}J_0(q\rho )𝑑q`$, the boundary condition (Eq. 10) is easily satisfied with $`\alpha _q=-\frac{Q/ϵ}{2qϵ\lambda +1}`$, and the total electrostatic potential of the system is given by:
$$\varphi _{point}=\varphi _{sing}+\psi =Q\int _0^{\mathrm{\infty }}e^{-q|z|}J_0(q\rho )\frac{2q\lambda }{2ϵq\lambda +1}𝑑q.$$
(12)
This potential can now be integrated over a line to give the potential of a charged rod on a salty surface:
$$\varphi _{rod}=\int \varphi _{point}(\stackrel{}{r})𝑑y=2\tau \int _0^{\mathrm{\infty }}e^{-q|z|}\frac{\mathrm{cos}(qx)}{q}\frac{2q\lambda }{2ϵq\lambda +1}𝑑q,$$
(13)
where $`\tau `$ is the charge per unit length on the bare rod (in the case of DNA: $`\tau \simeq -e/1.7`$ Å). Within this model the resulting charge distribution on the surface is found to be:
$$\sigma (x)=-\frac{1}{2\pi \lambda }\varphi _{rod}=-\frac{\tau }{\pi }\int _0^{\mathrm{\infty }}\frac{\mathrm{cos}(qx)\,dq}{2ϵq\lambda +1}.$$
(14)
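Eq. (14) is easy to evaluate numerically. The short script below is our own illustration, with arbitrary parameter values in units where $`ϵ=1`$; it computes the induced charge profile by direct quadrature and checks the large-distance behavior discussed in the next section:

```python
import numpy as np
from scipy.integrate import quad

eps, lam, tau = 1.0, 1.0, -1.0   # dielectric constant, screening length
                                 # and line charge; illustrative units

def sigma(x):
    """Induced surface charge, Eq. (14)."""
    # Fourier-type integral; the 'cos' weight invokes a dedicated
    # QUADPACK rule for oscillatory semi-infinite integrands
    val, _ = quad(lambda q: 1.0 / (2.0*eps*q*lam + 1.0),
                  0.0, np.inf, weight="cos", wvar=x)
    return -tau / np.pi * val

for x in (5.0, 10.0, 20.0):      # distances well beyond lam
    print(x, sigma(x) * x**2)    # ~ constant: the 1/x^2 tail of the profile
```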
## 3 Interactions
In this section we calculate the effective interactions between the components in the system. The surface charges, which distribute themselves around the fixed charges, screen to some extent their fields and thus the direct interactions. However, the screening is not as effective as that of a three-dimensional salt solution where the exponential screening leads to a finite ranged interaction. In the case of a two-dimensional salt, although reduced, the fields are still long rangedderjaguin . This can be seen when we take the limit of large distances from the rod and calculate the fields resulting from Eq. 13our :
$$E_x(x\gg \lambda ,z=0)\simeq \frac{8ϵ^2\tau \lambda ^2}{x^3},\qquad E_z(x=0,z\gg \lambda )\simeq \frac{2ϵ\tau \lambda }{z^2}.$$
(15)
These effective fields are dipolar in nature, rather than the usual $`1/r`$ field of a charged line. However, they are not exponentially screened. (Close to the rod, at distances $`<\lambda `$, the electrostatic potential is not screened and therefore is logarithmic, as is the case for a bare charged rod.)
### 3.1 Interaction between two neighboring rods
In order to calculate the interaction energy between two charged rods adsorbed on a salty surface we use Eq. 6 for the free energy of the system with the charge distribution of two rods separated by a distance $`D`$. We make use of the fact that the DH equation (Eq. 8) is linear, so that we can solve for each rod separately and then superimpose the potentials and charge distributions of the combined system of both rods. In order to compare with the simulation results, which will be described in the following section, we have to modify the problem slightly so that the rods are at least slightly raised above the surface and thus do not interfere with the charge distribution around each other. (This is a requirement of the simulated system in order to allow ions to move from one side of the rod to the other.) For a rod raised by a small (compared with $`\lambda `$) distance $`d`$ above the surface, the amplitudes of the modes in $`\psi `$ are now modified to $`\alpha _q(d)=-\frac{\tau e^{-dq}}{ϵ(2qϵ\lambda +1)}`$. For rods that are close to each other ($`D<\lambda `$) we know that the fields will lead to a logarithmic inter-rod interaction. However, the fields farther away are more complicated and the resulting interaction is not obvious.
The interaction can be calculated numerically as a function of the distance between the two rods; however, when the rods are separated by a distance $`D\gg \lambda `$ the potential (Eq. 13) and the charge distribution $`\sigma `$ (Eq. 14) can be analytically approximated, yielding a simple form for the interaction energy as a function of the distance $`D`$. Once this approximation is made it can be shown that $`\varphi (x,z=0)\sim 1/x^2`$ and $`\sigma (x)\sim 1/x^2`$, and the integrals in Eq. 6 are simplified to yield the interaction per unit length of the rods as a function of distance:
$$G(D\gg \lambda )\simeq \frac{8(\tau \lambda )^2}{ϵD^2}\left(1+\frac{3}{4}\frac{d}{\lambda }+\frac{1}{4}\left(\frac{d}{\lambda }\right)^2\right).$$
(16)
Here $`D`$ is the inter-rod distance and $`d`$ is the distance between the rods and the charged surface (Fig. 1).
The interaction is similar to that of two rods of dipoles at a distance $`D`$ apart. However, we cannot say that the charge distribution is actually dipolar since the field in the perpendicular direction ($`E_z`$) behaves differently.
In general one usually expects the interaction between charged objects in solution to be dominated by the osmotic pressure of the solute, in this case the two-dimensional salt solution. However, in this two-dimensional case the imperfectly screened electrostatic interactions dominate over the osmotic pressure, which decays more quickly as a function of $`D`$ ($`\mathrm{\Pi }_{osmotic}\sim k_BT(\sigma _+(D/2)+\sigma _{}(D/2))\sim O(\varphi ^2)\sim O(D^{-4})`$). Differentiating Eq. (16) leads to the correct two-dimensional pressure between the rods, which is dominated by electrostatic contributions:
$$\mathrm{\Pi }=-\frac{\delta G}{\delta D}=\frac{16(\tau \lambda )^2}{ϵD^3}\left(1+\frac{3}{4}\frac{d}{\lambda }+\frac{1}{4}\left(\frac{d}{\lambda }\right)^2\right)$$
(17)
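The relation between Eqs. (16) and (17) is a one-line differentiation; as a purely algebraic check of our own, it can be verified symbolically:

```python
import sympy as sp

D, d, lam, tau, eps = sp.symbols("D d lam tau eps", positive=True)

G = 8*(tau*lam)**2/(eps*D**2) * (1 + sp.Rational(3, 4)*d/lam
                                 + sp.Rational(1, 4)*(d/lam)**2)   # Eq. (16)
Pi = -sp.diff(G, D)                                               # Eq. (17)

expected = 16*(tau*lam)**2/(eps*D**3) * (1 + sp.Rational(3, 4)*d/lam
                                         + sp.Rational(1, 4)*(d/lam)**2)
print(sp.simplify(Pi - expected))   # -> 0: the two expressions agree
```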
## 4 Numerical Simulations
In order to verify the predictions of the above theory, we have performed numerical Brownian dynamics simulations of the effective interactions between charged stiff (infinitely long) rods (along the $`y`$ direction) above (in the $`z`$ direction) a surface, $`(x,y,z=0)`$, in which charged monovalent particles can move (see Fig. 1). The simulations are performed at a temperature $`T=300`$ K and with a uniform dielectric constant $`ϵ=80`$, simulating the continuum properties of bulk water. To simulate a non-zero salt concentration in the surface, we have applied periodic boundary conditions in the plane $`(x,y)`$, with a periodicity of $`(L_x,L_y)=(400,40)`$, where length is normalized to $`r_0=1`$ Å. The normalized long range interactions between rods in this partially periodic system are thus given by Mashl\_JCP\_110 ,
$$U_{rr}(x,z)=-\frac{\tau _1\tau _2}{ϵ}\mathrm{ln}\left(2\mathrm{cosh}\left(2\pi \frac{d}{L_x}\right)-2\mathrm{cos}\left(2\pi \frac{D}{L_x}\right)\right),$$
(18)
and the normalized interaction, $`U_{cr}(x,z)`$, between a point charge and a rod is given by replacing $`\tau _2`$ with $`q/L_y`$ in the above expression, $`q`$ being the fractional charge of the point charge. Energy is here normalized to $`E_0=e^2/4\pi ϵ_0`$. The corresponding interaction energy between point particles in partially periodic media can be found in Ref. Jensen\_MolPhys . We further employ a short range repulsive interaction potential (in units of kcal/mol) between ions in the plane:
$$U_{LJ}(r)=\{\begin{array}{ccc}4\epsilon \left(\frac{\rho }{r}\right)^6\left[\left(\frac{\rho }{r}\right)^6-1\right]+\epsilon \hfill & ,& r<2^{1/6}\rho \\ 0\hfill & ,& \text{otherwise}\end{array}$$
(21)
with $`\rho =4`$ and $`\epsilon =0.01`$.
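For concreteness, the two pair potentials can be coded directly. This is a minimal sketch of ours in the reduced units of the simulation; we keep the symbols $`d`$ and $`D`$ of Eq. (18) (vertical offset and in-plane separation), and only the variable names and test values are our own choices:

```python
import numpy as np

Lx = 400.0     # periodic box length along x, in units of r0 = 1 A
eps_d = 80.0   # dielectric constant of water

def U_rr(D, d, tau1, tau2):
    """Rod-rod interaction of Eq. (18) in the partially periodic box:
    D is the in-plane separation, d the vertical offset of the rods."""
    return -(tau1 * tau2 / eps_d) * np.log(
        2.0*np.cosh(2.0*np.pi*d/Lx) - 2.0*np.cos(2.0*np.pi*D/Lx))

def U_LJ(r, rho=4.0, eps_lj=0.01):
    """Truncated-and-shifted repulsive core of Eq. (21) (WCA form),
    in kcal/mol; it vanishes continuously at r = 2**(1/6) * rho."""
    r = np.asarray(r, dtype=float)
    s6 = (rho / r)**6
    u = 4.0*eps_lj*s6*(s6 - 1.0) + eps_lj
    return np.where(r < 2.0**(1.0/6.0)*rho, u, 0.0)

print(U_LJ(2.0**(1.0/6.0)*4.0))   # 0.0: continuous at the cutoff
```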
Figure 2 shows the simulation results for two rods of charge density $`\tau =-e/2`$ Å, at a distance $`d=10`$ Å above the surface, and with a distance $`D`$ between them. The simulations have been performed by initiating the positions of the in-plane ions randomly and allowing them to equilibrate for $`>10^5`$ time steps ($`dt=0.005`$ in normalized time units) before averaging the mean forces over $`>10^6`$ steps. We show the attractive mean forces between the rods and the surface (Fig. 2a) as well as the repulsive mean force between the rods (Fig. 2b) for several different ionic strengths of the surface. The prevailing trend is that the mean force between the rods decays as $`D^{-3}`$ for large $`D`$ and that the mean force between the rods and the surface approaches the asymptotic (single rod limit) as $`D^{-2}`$ for large $`D`$. This is in agreement with the predictions given by Eqs. 16 and 17. It should be noted that, given the periodic boundary conditions necessary for simulating a non-zero concentration of salt in the surface, we do not have complete freedom to consider the limit $`D\to \infty `$. The largest possible $`D`$ is given by $`L_x/2`$ (where all forces between the rods, in the $`x`$ direction, are zero by symmetry) and one should therefore only consider simulation results for $`D`$ somewhat less than $`L_x/2`$ in order to obtain meaningful results to be compared to the theory, which does not consider a periodic array of particles and rods. The results show some signs of insufficient averaging for large $`D`$. Clearly, this is due to the very small mean forces that we try to evaluate, combined with the rather small number of simulated particles from which the averages are generated. The small systems are necessitated by the long range interactions, which require all charged objects to interact with all other charged objects, thereby increasing the simulation time by the square of the number of charged objects simulated. Since the averaging only gets linearly better with the number of particles, we are limited to small systems if we want to perform calculations of the electrostatics carefully. Even so, the simulations do overall agree with the predictions. In fact, the two main predictions, namely that the inter-rod force and the rod-surface force asymptotically behave as $`D^{-3}`$ and $`D^{-2}`$, respectively, seem to be very robust results even though they were derived in the thermodynamic limit of a large number of overall salt ions in the plane. The three different cases shown in Fig. 2 represent: only counter-ions to the rods; only salt at a concentration of $`1/400`$ Å<sup>-2</sup> (charge neutrality is satisfied by evenly smearing the rod counter-charge on the surface); and the two combined. Even the case where only counter-ions are present shows reasonable agreement between simulations and theory. We have also verified that good comparisons between simulations and theory hold for other values of $`d`$, the distance between rods and surface. The general trend is that as $`d`$ increases, so does the distance, $`D`$, at which the mean forces approach the predicted slopes shown in Fig. 2.
## 5 Conclusions
We have presented both theoretical and simulation results for the interactions between charged DNA-like rods near a salty surface. We have focused on the polarization of the surface charge distribution and how it affects the fields and resulting interaction between two such rods. Our main conclusion is that when the mobile charges are confined, as is the case in our treatment, to a two-dimensional surface, they do not exponentially screen the fields, and hence the effective interaction between the rods is not finite-ranged. Both theory and simulations show consistent power law interactions both for the force between the rods and for the force they apply to the charged surface.
Despite the fact that the limits of applicability of the theory and the simulation are not the same (the theory is valid in the limit of a large number of ions and assumes an infinite surface with just two rods, while the simulation is restricted, for the reasons mentioned in section 4, to a small number of particles and periodic boundary conditions), we still find a finite region of inter-rod distances where the two agree fairly well. This agreement indicates that although the theory is approximate it is robust nonetheless.
The systems we treat in this paper may seem relatively artificial because the charges are confined to the surface and there is no, or very little, residual bulk salt in the surrounding water solution. In addition, we do not take into account the effects of the non-zero thickness of lipid membranes and the relatively low (compared to the surrounding water) dielectric constant of the lipid. However, despite these limitations, we can apply these results to some experimentally investigated systems. Specifically, in recent x-ray experimentssafinya that studied DNA-cationic lipid complexes, the structures that were found were formed by layers of membranes intercalated by ordered DNA domains. Most of these experiments were performed at very low bulk salt concentration and, moreover, because the counter-ions were trapped between the membranes, they were effectively restricted to two-dimensional space. (The lower dimensionality of the space is “measured” relative to the fixed charged objects that polarize the surrounding solute. In this case we are studying the screening of the DNA, and therefore the space available to the ions for redistributing themselves is effectively two-dimensional compared with the DNA, despite the fact that the ions are almost point-like in comparison with the $`20\AA `$ diameter of the DNA.) This experimental limit may also be valid in biological layered structures such as the Golgi apparatus. Although the effects of dielectric discontinuities in these electrostatic systems can be nontrivial (we treat these elsewherenext ), they do not change the main results for distances $`D\gg \lambda `$, and we do not expect the power law forces to change in this regime.
We are grateful to Cyrus Safinya, Ilya Koltover and Gerard Wang for discussing with us their experimental results. We thank Helmut Schiessel, Dov Levine and Nily Dan for useful discussions. PP and RM acknowledge the partial support of the MRL Program of the National Science Foundation under Award No. DMR99-72246 in addition to Awards 8-442490-21587 and 8-442490-21825. NGJ acknowledges support by the Director, Office of Advanced Scientific Computing Research, Division of Mathematical, Information, and Computational Sciences of the U.S. Department of Energy under contract number DE-AC03-76SF00098.
# Magnetization in short-period mesoscopic electron systems
## I Introduction
Several different probes have been used to investigate the properties of the two-dimensional electron gas (2DEG) in the quantum Hall regime, transport and optical experiments, or equilibrium methods including capacitance and magnetization measurements, to name only a few. The magnetization of a high mobility homogeneous 2DEG has been measured by two methods. One method uses sensitive mechanical torque magnetometers.Eisenstein85:875 ; Wiegers97:3238 More recently, a low-noise superconducting quantum interference device (SQUID) has also been used.Grundler98:693 ; Meinel98a ; Meinel99:819 These precision measurements reveal many-body effects such as the exchange enhancement at odd filling factors Ando74:1044 and the fractional quantum Hall effect and, in addition, an unidentified effect around filling factor $`\nu \approx 2`$ that might be connected to skyrmions.Meinel99:819
These experimental techniques are expected to be used soon for measuring the magnetization in lateral superlattices. The magnetization has been calculated for a disordered homogeneous 2DEG within the Hartree-Fock approximation (HFA)MacDonald86:2681 and within a statistical model for inhomogeneities corresponding to a Hartree approximation (HA). Gudmundsson87:8005 The results show strong exchange effects, already observed in experiments, Wiegers97:3238 ; Meinel98a ; Meinel99:819 and manifestations of the screening properties of the 2DEG.
Concerning periodic systems, the magnetization and the persistent current have been calculated by Kotlyar et al. for a finite array of quantum dots using a Mott-Hubbard model for the electron-electron interactions, both intra-dot and inter-dot.Kotlyar97:R10205 ; kotlyar98:3989 In an infinite lateral superlattice, defined by a potential periodic in two directions (electric modulation), the energy spectrum in the presence of a magnetic field can only be calculated when the ratio of the magnetic flux through a lattice cell to the unit flux quantum, $`\varphi /\varphi _0`$, is a rational number. The unit cell can then be enlarged to have an integer number of flux quanta flowing through it.Hofstadter76:2239 In principle, the magnetization of a 2DEG in a lateral superlattice can be evaluated in the thermodynamic limit as the negative derivative of the free energy with respect to the magnetic field $`B`$,MacDonald86:2681 ; Gudmundsson87:8005 since the energy spectrum is always a continuous function of the flux. Hofstadter76:2239 ; Ketzmeric98:9881 ; kotlyar98:3989 The inclusion of the Coulomb interaction in the model, within a self-consistent scheme such as the HA Gudmundsson95:16744 or the HFA,Manolescu99:5426 does severely limit the possibility of effectively changing the value of $`B`$ by a small amount in order to take a numerical derivative. Nevertheless, such a thermodynamic calculation can be done for a modulation that varies only along one spatial direction, i.e. for an array of quantum wires.
In this paper we shall evaluate the magnetization of a 2DEG in a periodic potential corresponding to a weak density modulation in both, or in only one spatial direction. In other words our system is either an array of dots or antidots, or of parallel quantum wires, in most cases with a strong overlap. We first consider a finite system with boundaries, and then the unbound system.
For the finite system we are able to calculate the total magnetization. For the infinite system with a 2D potential we shall rather use the definition applicable to a mesoscopic system with a phase coherence length $`L_\varphi `$ much larger than the spatial period of the square unit cell $`L`$. It is clear to us that in this manner we are calculating the contribution to the magnetization due to the periodic modulation, neglecting the contribution stemming from the edge of a real system. In an experiment the total magnetization is indeed measured and we would have a hard time arguing that the edge contribution is statistically insignificant in the thermodynamic limit. Our way out of this dilemma is to compare several results that we can obtain.
First, we compare the results for the finite system of various sizes, by heuristically separating the contribution of the bulk and edge current distributions to the total magnetization. Second, we compare quantitatively and qualitatively the bulk contribution in the finite system to the magnetization produced by one unit cell in the infinite system. We point out that experimentally it may be possible to measure only the contribution to the magnetization caused by the periodic modulation by placing the entire SQUID-loop well within the sample. And third, we compare to the total thermodynamic magnetization of an infinite system which is periodic in only one spatial direction and thus not subject to commensurability difficulties.
Our calculations show that the bulk contributions to the magnetization are strongly dependent on the presence or absence of the exchange interaction in the models, supporting the view of Meinel et al. that magnetization is an ideal probe of the many-body effects in a 2DEG.Meinel99:819 Magnetization has been calculated for quantum Hall systems in the fractional regime with higher order approximations that reproduce more reliably exchange and correlation effects.Haussmann97:9684 ; MacDonald98:R10171 Here we focus our attention on relatively large structured systems with several Landau bands included and thus have to resort to the HFA in order to make the calculation tractable in CPU-time. Recent comparisons between results from exact numerical diagonalization and the HFA show that in high Landau levels the two approaches agree on the formation of charge density waves.Rezayi99:03258
We shall consider periodic potentials of a short period, 50 nm, which means short with respect to the present technical possibilities, but still realistic. In this case, for GaAs parameters and for magnetic fields in the range of a few tesla, the screening effects due to the direct Coulomb interaction are weak. However, the exchange effects remain strong, and they amplify the single-particle energy dispersion.Manolescu99:5426 Therefore the presence of the periodic potential should become prominent in the magnetization even for weak amplitudes. And last, but not least, by avoiding strong screening we benefit from a shorter computational time.
## II Models
The magnetization is calculated within three models in the paper, self-consistently with respect to the electron-electron Coulomb interaction: a finite model using the HA, an infinite model periodic in both spatial directions, using the unrestricted Hartree-Fock approximation (UHFA), and an infinite model periodic in only one spatial direction, using the standard HFA.
### II.1 Finite 2DEG
The model for a finite system consists of a laterally confined 2DEG. A hard wall potential ensures that the electrons stay in the square region
$$\mathrm{\Sigma }=\left\{(x,y)\,|\,0<x<L_x,\;0<y<L_y\right\},$$
(1)
the wave functions being zero at the boundary. An external modulating potential and a perpendicular magnetic field are applied. The potential has the form
$$V_{\text{sq}}(𝐫)=V_0\left\{\mathrm{sin}\left(\frac{n_x\pi x}{L_x}\right)\mathrm{sin}\left(\frac{n_y\pi y}{L_y}\right)\right\}^2,$$
(2)
where $`n_x`$ and $`n_y`$ count the number of dots in the $`x`$ and $`y`$ directions, respectively, giving in total $`N_c=n_xn_y`$ unit cells. The Schrödinger equation is solved by expanding the eigenfunctions in Fourier sine-series and the expansion coefficients are found by diagonalizing the Hamiltonian matrix. The electron interaction is taken into account in the Hartree approximation.
The total magnetization can be calculated according to the definition for the orbital and the spin component of the magnetization,Desbois98
$$M_o+M_s=\frac{1}{2c𝒜}\int _𝒜d^2r\,\left(𝐫\times 𝐉(𝐫)\right)\cdot \widehat{𝐧}-\frac{g\mu _B}{𝒜}\int _𝒜d^2r\,\sigma _z(𝐫),$$
(3)
where $`𝒜`$ is the total area of the system. The equilibrium local current is evaluated as the quantum thermal average of the current operator,
$$\widehat{𝐉}=-\frac{e}{2}\left(\widehat{𝐯}\,|𝐫\rangle \langle 𝐫|+|𝐫\rangle \langle 𝐫|\,\widehat{𝐯}\right),$$
(4)
with the velocity operator $`\widehat{𝐯}=[\widehat{𝐩}+(e/c)𝐀(𝐫)]/m^{*}`$, $`𝐀`$ being the vector potential. Even though the magnetic field $`B`$ can be varied freely in this model, we have used the definition of the orbital magnetization (3) rather than evaluating the derivative of the free energy with respect to the magnetic field.
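On a discretized grid, the orbital part of Eq. (3) reduces to a weighted sum over the current-density field. The following minimal sketch is our own, with all inputs synthetic; it only illustrates the bookkeeping, not the self-consistent Hartree calculation itself:

```python
import numpy as np

def orbital_magnetization(Jx, Jy, x, y, c=1.0):
    """Evaluate M_o = (1/2cA) Int d^2r (r x J) . n, Eq. (3),
    for a current density sampled on a rectangular grid.
    Jx, Jy : (nx, ny) arrays, in-plane current density
    x, y   : 1d coordinate arrays defining the grid
    """
    X, Y = np.meshgrid(x, y, indexing="ij")
    rxj = X * Jy - Y * Jx                 # z component of r x J
    dx, dy = x[1] - x[0], y[1] - y[0]
    area = (x[-1] - x[0]) * (y[-1] - y[0])
    return rxj.sum() * dx * dy / (2.0 * c * area)

# synthetic test: a narrow clockwise ring current
x = y = np.linspace(-1.0, 1.0, 201)
X, Y = np.meshgrid(x, y, indexing="ij")
r = np.hypot(X, Y)
ring = np.exp(-((r - 0.8) / 0.05)**2)
Jx, Jy = ring * Y / (r + 1e-12), -ring * X / (r + 1e-12)
print(orbital_magnetization(Jx, Jy, x, y))  # negative: diamagnetic sense
```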
### II.2 Periodic 2DEG in two directions
The two-dimensional modulation of the system is a square lattice of quantum dots ($`V_0>0`$) or antidots ($`V_0<0`$) determined by the static external potential
$$V_{\text{QAD}}(𝐫)=V_0\left\{\mathrm{sin}\left(\frac{gx}{2}\right)\mathrm{sin}\left(\frac{gy}{2}\right)\right\}^2,$$
(5)
or a simple cosine-modulation defined by
$$V_{\text{per}}(𝐫)=V_0\left\{\mathrm{cos}\left(gx\right)+\mathrm{cos}\left(gy\right)\right\},$$
(6)
where $`g`$ is the length of the fundamental inverse lattice vectors, $`𝐠_1=2\pi \widehat{𝐱}/L`$, and $`𝐠_2=2\pi \widehat{𝐲}/L`$. The Bravais lattice defined by $`V_{\text{QAD}}`$ or $`V_{\text{per}}`$ has a periodic length $`L`$ and the inverse lattice is spanned by $`𝐆=G_1𝐠_1+G_2𝐠_2`$, with $`G_1,G_2\in 𝐙`$. The commensurability condition between the magnetic length $`\ell `$ and the period $`L`$ requires magnetic-field values of the form $`B=pq\varphi _0/L^2`$, with $`pq\in 𝐍`$, and $`\varphi _0=hc/e`$ the magnetic flux quantum.Silberbauer92:7355 ; Gudmundsson95:16744 Arbitrary rational values can, in principle, be obtained by resizing the unit cell in the Bravais lattice.Hofstadter76:2239
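To see which fields this condition selects for the 50 nm period used here, one can evaluate $`B=pq\varphi _0/L^2`$ directly. This small numerical aside is ours, with standard constants (the flux quantum is written as $`h/e`$ in SI units; it is the same physical flux as the Gaussian $`hc/e`$ above):

```python
h    = 6.626e-34    # J s
hbar = 1.055e-34    # J s
e    = 1.602e-19    # C
m_e  = 9.109e-31    # kg

L    = 50e-9        # lattice period, m
phi0 = h / e        # flux quantum, ~ 4.14e-15 Wb

for pq in (1, 2):
    B = pq * phi0 / L**2                 # commensurate field
    w_c = e * B / (0.067 * m_e)          # GaAs effective mass m* = 0.067 m_e
    print(pq, B, hbar * w_c / e * 1e3)   # B in T, hbar*w_c in meV

# pq = 1 gives B ~ 1.65 T; pq = 2 gives B ~ 3.31 T and a cyclotron
# energy hbar*w_c ~ 5.7 meV -- the "few tesla" range quoted above
```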
For this model we evaluate the contribution of the periodic modulation to $`M_o`$ and $`M_s`$, rather than the total magnetization. Using in Eq. (3) the periodicity of the current and spin densities, and the reflection symmetry of the unit cell, we reduce the integrations to a single cell. Obviously, in the absence of the modulation $`𝐉(𝐫)\equiv 0`$ and the orbital contribution vanishes.
The ground-state properties of the interacting 2DEG in a perpendicular homogeneous magnetic field $`𝐁=B\widehat{𝐳}`$ and the periodic potential are calculated within the UHFA for the Coulomb interacting electrons at a finite temperature.Gross91 ; Palacios94:5760 The approximation is unrestricted in the sense that the single-electron states do not have to be eigenstates of $`\widehat{\sigma }_z`$.
### II.3 Periodic 2DEG in one direction
The modulation is defined by the potential
$$V_{\text{per}}(x)=V_0\mathrm{cos}\left(\frac{2\pi x}{L}\right),$$
(7)
describing an array of parallel quantum wires. In this case there is no restriction on the magnetic-field values, the magnetic flux through one lattice cell always being infinite. The ground state is calculated in the HFA, by diagonalizing the Hamiltonian in the Landau basis, and by expanding the matrix elements in Fourier series. Therefore we can evaluate directly the total magnetization $`M=M_o+M_s`$ by the thermodynamic formula appropriate for the canonical ensemble,
$$M=-\frac{1}{𝒜}\frac{d}{dB}(E-TS),$$
(8)
where $`E`$ is the total energy, and $`S`$ the entropy. We shall assume the temperature is sufficiently low to neglect the second term of Eq. (8). In order to obtain more realistic results we also assume a small disorder broadening of the Landau levels, which we take into account with a Gaussian model for the spectral function.MacDonald86:2681
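Since for the one-dimensional modulation $`B`$ can be varied continuously, Eq. (8) can be evaluated with a plain centered difference. The stub below is our own sketch; total_energy stands in for the self-consistent HFA total energy per sample area:

```python
def magnetization(total_energy, B, dB=1e-3, area=1.0):
    """Centered-difference estimate of Eq. (8) at low temperature,
    M = -(1/A) dE/dB, neglecting the entropy term.

    total_energy : callable B -> E(B), ground-state energy from the
                   self-consistent calculation (a stand-in here)
    """
    dE = total_energy(B + dB) - total_energy(B - dB)
    return -dE / (2.0 * dB * area)

# usage with any smooth model for E(B):
E = lambda B: 0.5 * B**2          # toy energy, for illustration only
print(magnetization(E, 2.0))      # -> -2.0 = -dE/dB at B = 2
```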
## III Results
The numerical calculations are performed with GaAs parameters, $`m^{*}=0.067`$, and $`\kappa =12.4`$. In the case of the infinite periodic modulation, in the UHFA, HFA, or the HA, the bare $`g`$-factor is -0.44, and it is set equal to zero in the model of the finite 2DEG in the HA. Mostly for numerical reasons we keep a finite temperature, which for the models with a 2D potential is 1 K, and for the 1D potential is 0.2 K. In all cases the length of the unit cell is $`L=50`$ nm.
### III.1 Finite system with 2D potential
The magnetization for the finite system is shown in Fig. 1. The system size is progressively increased, starting from a single cell of 50$`\times `$50 nm<sup>2</sup>, to a system of 5$`\times `$5 cells, keeping the unit-cell size constant. Each cell is defined by one period of the modulation potential (2). When the system consists of more than one cell we, ad hoc, divide the magnetization into an edge part $`M_e`$ with a contribution only from the first row of cells around the system, and a bulk part $`M_b`$ with the contribution from the rest of the cells. Below we shall see that generally the magnetization $`M_b`$ does approach the orbital magnetization expected for a large system as the number of cells $`N_c`$ is increased. The variable on the $`x`$-axis of the figure, $`N/N_c`$, i.e. the number of electrons in a single cell, can approximately be interpreted as a filling factor for the 3$`\times `$3 and the 5$`\times `$5 system. This is confirmed by the evolution of the chemical potential $`\mu `$ through the single electron Hartree-states depicted in Fig. 2. For even-integer values of $`N/N_c`$, $`\mu `$ jumps through “gaps” of sparsely distributed edge states separating states concentrated into precursors of Landau bands. The bulk $`M_b`$ and the edge $`M_e`$ contributions to the magnetization as well as the total magnetization are of similar magnitudes. However, the oscillations of $`M_b`$ around zero are more symmetric than those of $`M_e`$. This is because the direction of the edge current is more commonly as expected from the classical clockwise motion of the electrons around the sample, thus giving a preferred sign to $`M_e`$. Modestly increasing the size, from 3$`\times `$3 to 5$`\times `$5 unit cells, clearly gives the finite system more of the character of an extended system. The bulk magnetization for the large system is small when $`\nu `$ is not close to even integers due to the strong screening of the modulation potential away from the edges of the system. This is confirmed by Fig. 3 showing $`M_o`$ of the noninteracting 5$`\times `$5 system. The structures around even integer values of $`\nu `$ are less steep than for the interacting system. In the presence of the interaction the energy gaps are reduced by screening, which is self-consistently determined by the density of states around $`\mu `$. When $`\mu `$ lies within one “band” (i.e. $`\nu `$ is not an even integer) the Coulomb repulsion forces the electron density to spread out more evenly, shifting the “effective filling” $`N/N_c`$ a small amount. A slight change in the magnetic flux in the finite system only shifts the relation between the number of electrons and the effective filling factor.Gudmundsson87:8005
The current distribution for a 5$`\times `$5 system is shown in Fig. 4, which reveals a strong edge current, but also a bulk current structured self-consistently in a complex way by the modulation, the interaction, and the location of $`\mu `$ with respect to the energy levels. This interplay of complex bulk contributions with the effects of the edge currents raises the question of what the effects of a modulation on the magnetization are in an extended electron system.
### III.2 Infinite system with 2D potential
Next, we turn our attention to the infinite system, that is modulated in two directions, and calculate the contribution to the magnetization from one unit cell. In Fig. 5 the total energy is shown as a function of the filling factor $`\nu `$ for the extended periodic 2DEG in the UHFA and the HA for the case $`pq=2`$. Two magnetic flux quanta flow through the unit cell and each Landau band is split into two subbands which in turn are doubly spin split. Filling factor two means thus that both spin states of one Landau band are occupied, and in total four subbands are below the Fermi level. The modulation with $`V_0=1.0`$ or 0.1 meV is small compared to $`\mathrm{}\omega _c=5.71`$ meV. The minima in the total energy for the UHFA reflect the strong exchange interaction for electrons, added to nearly filled Landau bands or subbands thereof. Fig. 6 compares the total magnetization, Eq. (3), and its components $`M_o`$ and $`M_s`$ calculated within the UHFA, with the total magnetization according to the HA. The main difference between the results of these two approximations is the sharp reduction in the magnetization caused by the exchange interaction around odd integer filling factors. In this region the enhanced spin splitting of the subbands is larger than the subband splitting caused by the modulation. The order of the subbands (with respect to spin and magnetic subband index) and their curvature thus leads to $`M_o`$ being of same sign for $`\nu =2.5`$ and $`\nu =3.5`$. The behavior is thus different around the even and odd values of $`\nu `$. Later we see that this is not in the case of $`pq=1`$. Just like for the total energy the different modulation strengths result in minor changes in $`M_o`$, because the energy dispersion of the Landau bands is determined by the exchange energy rather than by the external or screened potentials. Manolescu99:5426
The light effective mass and the small $`g`$-factor of electrons in GaAs cause the spin contribution $`M_s`$ to be an order of magnitude smaller than the orbital one. A comparison of $`M_s`$ in these two approximations can be seen in Fig. 7. In the case of the UHFA the exchange interaction always leads to the maximum spin polarization. In the HA the spin splitting of the Landau bands is only the bare Zeeman gap, here about 0.07 meV, i. e. much smaller than the intra-band energy dispersion which is of the order of the modulation amplitude, $`V_0=1`$ meV. Therefore the chemical potential is never able to lie only in one spin subband. New electrons are being added to the system concurrently, to both spin states, resulting in a reduced polarization.
In the case of one flux quantum through a unit cell ($`pq=1`$) each Landau band consists of only two subbands, with different spin quantum numbers. This simpler band structure is reflected in the magnetization (see Fig. 8). Here the spin bands hosting $`\mu `$ for the filling ranges $`2.5\leq \nu <3`$ and $`3<\nu \leq 3.5`$ have opposite curvature, causing a maximum and a minimum in $`M_o`$, respectively. For the lower $`\nu `$ range the Fermi level ($`\mu `$) is in the lower spin subband of the second Landau-band pair, see Fig. 9. Here the Fermi contours ("Fermi surface") encircle the energy maxima at the edges of the Brillouin zone (hole orbits), while for the higher $`\nu `$ range they close around the minimum of the upper spin subband (electron orbits).
In the case of two flux quanta each Landau band is fourfold split, as can be seen in Fig. 10. For $`\nu =2.0`$ only the splitting due to the modulation is clearly visible, but for $`\nu `$ not equal to an even integer the spin splitting gets more enhanced due to the strong exchange force. Furthermore, we see that the subbands repel each other around the $`\mathrm{\Gamma }`$ point (center of the Brillouin zone), resulting in a different appearance as $`\nu `$ is swept from 2.8 to 3.3. In contrast to the strong $`\nu `$-dependent interaction effects on the energy spectra, we show in Fig. 11 the "static" energy bands of the noninteracting system, which are independent of $`\nu `$ and the location of $`\mu `$.
Around $`\nu =1`$ the order of the Landau subbands is unusual. The states below the Fermi level are those of the Landau subband $`(n,\sigma )=(0,+)`$, $`n`$ being the orbital and $`\sigma `$ the spin quantum numbers. But the Landau subbands above, i. e. $`(0,-)`$ and $`(1,+)`$, are overlapped due to the strong exchange enhancement of the gaps between all the subbands with the same $`n`$ but opposite $`\sigma `$. Therefore the first states which are populated for $`\nu >1`$ belong to the subband $`(1,+)`$, while the subband $`(0,-)`$ is populated a bit later (for slightly higher $`\nu `$). This situation is well known in the atomic physics of complex atoms, and has also been demonstrated in quantum dots, with a current-spin density-functional approach [Steffens98:529], which in principle is more reliable than our UHFA. In our case we observe this spin-state inversion as a small anomaly in $`M_s`$ around $`\nu =1`$, Fig. 8, where the maximum of $`M_s`$ is shifted to $`\nu >1`$.
The fact that the magnetization calculated by Eq. (3) for a unit cell in an infinite doubly modulated system reflects only the contribution of the modulation is seen in Fig. 12, where $`M`$ for a dot and an antidot modulation of the same strength are mirror symmetric around zero for low $`\nu `$. For higher $`\nu `$ the mixing of the Landau bands due to the Coulomb interaction slightly skews the mirror symmetry. For a homogeneous system ($`V_0=0`$) the persistent current in the definition of the magnetization (3) vanishes, and thus so does $`M_o`$. A similar effect was seen for the bulk magnetization of the finite system in Fig. 1 for noninteger values of $`\nu `$. Due to the enhancement of the subband width caused by the exchange force, the transition to zero magnetization with decreasing modulation does not happen in a smooth linear fashion [Manolescu99:5426].
### III.3 Correspondences between the finite and the infinite systems
Now we come back to the question of how the magnetizations of the finite and the infinite system are related. The magnetization for the finite system can be calculated by either Eq. (3) or Eq. (8), which give the same results for the low temperature assumed here. The orbital magnetization $`M_o`$ of the finite system of different sizes is compared in Fig. 13 to the result for a single unit cell in an extended system with the same type of modulation, but with the interaction accounted for in different ways. If we first look at the results for the infinite periodic system we notice that the largest variation of $`M_o`$ occurs for the UHFA and the smallest for the HA, with the noninteracting case in between. This is in accordance with the screening properties mentioned earlier: in the HA the modulation is screened more effectively than in the UHFA. This result is in agreement with the simplistically defined bulk magnetization $`M_b`$ of the finite system seen in Fig. 1, the main difference being the sharp variation of $`M_b`$ at even integer values of $`\nu `$. The presence of these sharp variations indicates that even though the current density is only integrated over the "bulk" of the system, the underlying energy spectrum is affected by the chemical potential $`\mu `$ traversing its edge states. For the unit cells of the two different systems to give the same magnetization, the finite system would have to be even larger, exhausting our means of computation.
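As an illustration of the thermodynamic route, the sketch below evaluates a magnetization of the Eq. (8) type, assuming Eq. (8) is the standard thermodynamic definition $`M=-\partial F/\partial B`$ at fixed electron number, by a central finite difference of the total energy. The single-particle spectrum used here is only a placeholder (equidistant levels growing linearly with $`B`$) standing in for the self-consistent Hartree or Hartree-Fock energies of the text; all of its parameters are assumptions made for the example.

```python
import numpy as np

def total_energy(B, n_electrons=10, e_per_T=1.0, n_levels=200):
    # Placeholder spectrum: levels E_j = (j + 1/2) * e_per_T * B (meV),
    # standing in for the self-consistent single-particle energies.
    levels = (np.arange(n_levels) + 0.5) * e_per_T * B
    # Zero-temperature limit: occupy the lowest n_electrons states.
    return np.sort(levels)[:n_electrons].sum()

def magnetization(B, dB=1e-4):
    # Thermodynamic definition, M = -dF/dB at fixed N (F -> total energy at T = 0),
    # evaluated with a central finite difference.
    return -(total_energy(B + dB) - total_energy(B - dB)) / (2.0 * dB)

print(magnetization(2.0))  # meV/T in these units
```

Replacing the placeholder spectrum by the actual Hartree (or UHFA) eigenvalues would reproduce the comparison between Eqs. (3) and (8) discussed above.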
The magnetization of both systems (and also of the 1D modulated system, see the next subsection) compares well with results derived from an older model of statistical inhomogeneities in a 2DEG that was used to explain effects caused by oscillating Landau level width due to the electrostatic screening [Gudmundsson87:8005].
The main differences between the magnetization of our finite and infinite periodic systems are: i) the asymmetry around the zero line in the case of the finite system, and ii) the absence of the steep variation of $`M_o`$ around even integer filling factors $`\nu `$ for the infinite system. Earlier we saw that the asymmetry is influenced by the contribution from the "edge" of the finite system. The second difference can also be traced to the edge states. In the extended model there are no edge states between the Landau subbands. Their shape and curvature can thus change sharply with the motion of the chemical potential $`\mu `$ through them; the self-consistent screening and exchange effects minimize any gaps that might evolve around $`\mu `$, which in turn prevents any sharp jumps in $`M_o`$. With this in mind it is clear that the magnetization for a realistic (large, but finite) modulated system is not simply the sum of the magnetizations produced by two independent subsystems, the edge and the bulk. The Coulomb interaction makes the separation of the contributions to $`M`$ a nontrivial problem, which can only be settled by experiment. In addition, we have seen that the self-consistent motion of $`\mu `$ through the energy bands depends strongly on the approximation used for the electron interaction.
### III.4 Infinite system with 1D potential
We have noticed in our calculations that for the system sizes considered here the magnetization of the finite system is not strongly dependent on whether the modulation is assumed 2D, as here, or 1D [PhysicaE]. For an infinite system with a 1D modulation we can calculate the thermodynamically defined magnetization (8) presented in Fig. 14. As mentioned before, we see here that $`M_o`$ for the finite system, especially when it is enlarged, bears a strong similarity with $`M_o`$ for the infinite 1D modulated 2DEG.
The calculation of the ground state for a modulated 2DEG with arbitrary magnetic fields is impossible for the 2D potential, due to the commensurability restrictions. The problem can be circumvented for a 1D modulated system, to which we now turn our attention in order to formulate predictions of experimental results that can be used to test the importance of the exchange interaction. For such a system we have access to the total magnetization, i. e. bulk plus edge, in the thermodynamic limit. In Fig. 14 we display results for the infinite system with a 1D potential, obtained with Eq. (8).
First we show the sawtooth profile in the absence of a modulation potential, reflecting the instability of the Fermi level in the energy gaps. The exchange interaction determines the spin splitting for odd filling factors, but also amplifies the jumps for even filling factors by almost a factor of two. The reason is the enhancement effect on both the spin gaps and the Landau gaps. For the same reason, in the presence of a modulation the jumps are only slightly reduced. Similarly to the case of the 2D modulation, the exchange interaction also increases the energy dispersion of the Landau bands for noninteger filling factors, by lowering the energy of the occupied states, see Figs. 9 and 10. Hence, the band width depends on the position of the Fermi level inside an energy band. This fact prevents the coincidence of the Fermi level with a band edge (top or bottom), resulting in sharp cusps for $`\nu `$ close to integers. The sharpness is an effect of the exchange interaction in the vicinity of a van Hove singularity.
Some amount of disorder may indeed broaden such cusps. In addition, the magnetization jumps may now slightly increase, because of the smearing of the band edges by disorder, which helps the Fermi level to enter or to leave a Landau band. When the Fermi level lies in the middle of a band screening effects are important. In principle screening is important when the modulation period is much bigger than the magnetic length and/or when high Landau bands, with extended wave functions, are occupied. However, even here we can see some oscillations in the upper bands, with orbital quantum number $`n=2`$.
Increasing the modulation amplitude from $`V_0=1.5`$ meV to $`V_0=5`$ meV, we first see that the magnetization for the bands with $`n=1`$ is relatively stable. Just like in Figs. 5 and 6, this is because the exchange amplification of the energy dispersion depends on the filling factor, rather than on the modulation amplitude [Manolescu99:5426]. The spin splitting survives now only for $`\nu =3`$, and it is abruptly suppressed for $`\nu \geq 5`$. A similar suppression occurs for $`V_0=1.5`$ meV, but at a higher $`\nu `$, and it can be explained by the inflation of the wave functions in high Landau levels, which rapidly equilibrates the number of spin-up and spin-down electrons and destroys the enhancement of the spin gap [Manolescu95:1703]. Such a suppression effect has recently been observed in the Shubnikov–de Haas peaks in magnetotransport experiments on short-period modulated systems [Petit97:225].
## IV Final remarks
We have calculated the magnetization of periodic systems and discussed, with examples, various of its properties arising from the system boundaries, the periodicity, and the Coulomb interaction. We have compared the results for the finite and infinite systems and for the periodic systems in one and two spatial directions. Our aim is to provide information for understanding magnetization measurements in mesoscopic systems, which are expected to become a new direction of experimental investigations.
Unlike in other types of experiments, such as transport or electromagnetic absorption, magnetization measurements seem to offer better and more direct access to the intrinsic, quantum electronic structure of the system. In transport experiments this access is often mediated by complicated electron-impurity interactions, and in far-infrared absorption usually the classical collective motion of the electron system is dominant. We have identified in the magnetization several properties of the energy spectrum which are absent or incompletely observed in transport or absorption measurements, like the exchange enhancement of the energy dispersion or the curvature of the Landau bands. According to a recent prediction, the exchange effects may also determine hysteresis properties when acting on the energy dispersion, either by varying the modulation amplitude, or by varying the Zeeman splitting in tilted magnetic fields, keeping the filling factor constant [Manolescu99:5426]. Magnetization measurements could be the best-suited tool for probing such effects. The present calculation further indicates that the sought-after delicate internal structure of the Landau bands, such as the Hofstadter butterfly [Hofstadter76:2239; Gudmundsson95:16744; Gudmundsson96:5223R], in a doubly periodic 2DEG might be better observed by magnetization than by transport experiments [Schloesser96:683].
###### Acknowledgements.
This research was supported by the Icelandic Natural Science Foundation and the University of Iceland Research Fund. In particular we acknowledge additional support from the Student Research Fund (S. E.), NATO Science Fellowship (A. M.), and Graduiertenkolleg "Physik Nanostrukturierter Festkörper" (V. G.). |
no-problem/9910/cond-mat9910112.html | ar5iv | text | # Metal-insulator transition of isotopically enriched neutron-transmutation-doped 70Ge:Ga in magnetic fields
## I Introduction
Semiconductors with a random distribution of doping impurities have been studied extensively over the past three decades in order to probe the nature of the metal-insulator transition (MIT) in disordered electronic systems. The value of the critical exponent $`\mu `$ of the conductivity for the metallic side of the transition, however, still remains controversial. The exponent $`\mu `$ is defined by
$$\sigma (0)=\sigma ^{*}(N/N_c-1)^\mu $$
(1)
in the critical regime of the MIT $`(0<N/N_c-1\ll 1)`$. Here, $`\sigma (0)`$ is the zero-temperature conductivity, $`\sigma ^{*}`$ is a prefactor, $`N`$ is the impurity concentration, and $`N_c`$ is the critical concentration for the MIT. An exponent of $`\mu \approx 0.5`$ has been found in a number of nominally uncompensated semiconductors \[Si:P, Si:As, Si:Sb, Ge:As, and <sup>70</sup>Ge:Ga (Refs. REFERENCES and REFERENCES)\]. This value is considerably smaller than the results of numerical calculations ($`\mu =1.2`$–$`1.6`$; e.g., Refs. REFERENCES and REFERENCES) for the Anderson transition purely driven by disorder. Therefore, electron-electron interaction, which is undoubtedly present in doped semiconductors, must be relevant to the nature of the MIT, at least when impurity compensation is absent. A conclusion that has been reached over the years is that one has to deal simultaneously with disorder and electron-electron interaction in order to understand the MIT in doped semiconductors.
According to theories on the MIT which take into account both disorder and electron-electron interaction, the critical exponent $`\mu `$ does not depend on the details of the system, but depends only on the universality class to which the system belongs. Moreover, there is an inequality $`\nu \geq 2/3`$ for the critical exponent $`\nu `$ of the correlation length. The inequality is expected to apply generally to disordered systems, irrespective of the presence of electron-electron interaction. Hence, if one assumes the Wegner relation $`\mu =\nu `$, which is derived for systems without electron-electron interaction, $`\mu \approx 0.5`$ violates the inequality. This discrepancy has been known as the conductivity critical exponent puzzle. Kirkpatrick and Belitz have claimed that there are logarithmic corrections to scaling in universality classes with time-reversal symmetry, i.e., when the external magnetic field is zero, and that $`\mu \approx 0.5`$, found at $`B=0`$, should be interpreted as an “effective” exponent which is different from a “real” exponent satisfying $`\mu \geq 2/3`$. Therefore, comparison of $`\mu `$ with and without the time-reversal symmetry, i.e., with and without external magnetic fields, becomes important. Experimentally, $`\mu \approx 1`$ has been found for magnetic inductions $`B`$ on the order of one tesla for nominally uncompensated semiconductors: Ge:Sb, Si:B, and Si:P (Ref. REFERENCES). Since these systems result in different values of $`\mu `$ ranging from 0.5 to 1.0 at $`B=0`$, the applied magnetic field changes the value of $`\mu `$ for certain systems (Si:B and Si:P), while it changes little or not at all for the other (Ge:Sb). In this work, we aim to achieve a complete understanding of the effect of magnetic fields on the MIT in uncompensated semiconductors by studying the critical behavior of the zero-temperature conductivity as a function of both $`N`$ (doping-induced MIT) and $`B`$ (magnetic-field-induced MIT) in magnetic fields up to 8 T for the <sup>70</sup>Ge:Ga system. To our knowledge, the MIT in Si or Ge has not been analyzed as a function of $`B`$. Concerning the critical point,
$$N_c(B)-N_c(0)\propto B^\beta $$
(2)
with $`\beta =0.5`$ was obtained for Ge:Sb, while $`\beta =1.7\pm 0.4`$ for Si:B. The exponent $`\beta `$ characterizes the phase (metal or insulator) diagram on the $`(N,B)`$ plane, and provides information on the nature of the MIT in magnetic fields. The above experimental results, however, imply that $`\beta `$ could be completely different even though $`\mu `$ in magnetic fields is the same. Hence, a determination of $`\beta `$ for various systems is important in order to probe the effect of magnetic fields.
In our earlier studies, we obtained $`\mu =0.5`$ at $`B=0`$ for <sup>70</sup>Ge:Ga. This result was obtained from precisely doped samples with a perfectly random distribution of impurities; our <sup>70</sup>Ge:Ga samples were prepared by neutron-transmutation doping (NTD), in which an ideally random distribution of dopants is inherently guaranteed down to the atomic level. For the case of melt- (or metallurgically) doped samples that have been employed in most of the previous studies, the spatial fluctuation of $`N`$ due to dopant striations and segregation can easily be on the order of 1% or more across a typical sample for the four-point resistance measurement (length of $`\sim 5`$ mm or larger), and hence, it will not be meaningful to discuss physical properties in the critical regime (e.g., $`|N/N_c-1|<0.01`$), unless one evaluates the macroscopic inhomogeneity in the samples and its influence on the results. A homogeneous distribution of impurities is important also for experiments in magnetic fields. Using the same series of <sup>70</sup>Ge:Ga samples that was employed in our previous study, we show here that the critical exponent of the conductivity is 1.1 in magnetic fields for both the doping-induced MIT and the magnetic-field-induced MIT. The phase diagram on the $`(N,B)`$ plane is successfully constructed, and $`\beta =2.5`$ is obtained for <sup>70</sup>Ge:Ga.
## II Experiment
All of the <sup>70</sup>Ge:Ga samples were prepared by neutron-transmutation doping (NTD) of isotopically enriched <sup>70</sup>Ge single crystals. We use the NTD process since it is known to produce the most homogeneous, perfectly random dopant distribution down to the atomic level. The concentration $`N`$ of Ga acceptors is determined from the time of irradiation with thermal neutrons. The concentration $`N`$ is proportional to the irradiation time as long as the same irradiation site and the same power of a nuclear reactor are employed. Details of the sample preparation and characterization are described elsewhere. In this work 12 samples that are metallic in zero magnetic field are studied. (See Table I.) The conductivity of the samples in zero magnetic field has been reported in Refs. REFERENCES and REFERENCES.
We determined the electrical conductivity of the samples at low temperatures between 0.05 K and 0.5 K using a <sup>3</sup>He-<sup>4</sup>He dilution refrigerator. Magnetic fields up to 8 T were applied in the direction perpendicular to the current flow by means of a superconducting solenoid.
## III Results
### A Temperature dependence of conductivity
Figure 1 shows the temperature dependence of the conductivity of Sample B2 for several values of the magnetic induction $`B`$. Application of the magnetic field decreases the conductivity and eventually drives the sample into the insulating phase. This property can be understood in terms of the shrinkage of the wave function due to the magnetic field.
The temperature variation of the conductivity $`\sigma (N,B,T)`$ of a disordered metal at low temperatures is governed mainly by electron-electron interaction, and can be written in zero magnetic field as
$$\sigma (N,0,T)=\sigma (N,0,0)+m(N,0)T^{1/2}.$$
(3)
When $`g\mu _BB\gg k_BT`$, i.e., in strong magnetic fields at low temperatures, the conductivity shows another $`T^{1/2}`$ dependence
$$\sigma (N,B,T)=\sigma (N,B,0)+m_B(N,B)T^{1/2}.$$
(4)
Here, one should note that these equations are valid only in the limits of $`[\sigma (N,0,T)-\sigma (N,0,0)]\ll \sigma (N,0,0)`$ or $`[\sigma (N,B,T)-\sigma (N,B,0)]\ll \sigma (N,B,0)`$. It is for this reason that we have observed a $`T^{1/3}`$ dependence rather than the $`T^{1/2}`$ dependence at $`B=0`$ in <sup>70</sup>Ge:Ga as the critical point \[$`\sigma (N,0,0)=0`$\] is approached from the metallic side. However, Fig. 1 shows that the $`T^{1/2}`$ dependence holds when $`B\neq 0`$ even around the critical point. Hence, we use Eq. (4) to evaluate the zero-temperature conductivity $`\sigma (N,B,0)`$ in magnetic fields.
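As an illustration of this extrapolation, the following minimal sketch fits Eq. (4) to synthetic data by a linear least-squares fit in $`\sqrt{T}`$; all numerical values are invented for the example and do not correspond to any sample in Table I.

```python
import numpy as np

# Synthetic conductivity data following Eq. (4): sigma(T) = sigma(0) + m_B * sqrt(T),
# valid in the regime g*mu_B*B >> k_B*T (illustrative numbers only).
T = np.linspace(0.05, 0.25, 9)                               # temperature in K
sigma = 12.0 + 8.5 * np.sqrt(T)                              # hypothetical data in S/cm
sigma += np.random.default_rng(0).normal(0.0, 0.05, T.size)  # measurement noise

# Linear fit of sigma against sqrt(T); the intercept is sigma(N, B, 0).
m_B, sigma0 = np.polyfit(np.sqrt(T), sigma, 1)
print(f"sigma(N,B,0) = {sigma0:.2f} S/cm, m_B = {m_B:.2f} S cm^-1 K^-1/2")
```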
According to an interaction theory for a disordered metal, $`m`$ and $`m_B`$ are given by
$$m=\frac{e^2}{\hbar }\frac{1}{4\pi ^2}\frac{1.3}{\sqrt{2}}\left(\frac{4}{3}-\frac{3}{2}\stackrel{~}{F}\right)\sqrt{\frac{k_B}{\hbar D}}$$
(5)
and
$$m_B=\frac{e^2}{\hbar }\frac{1}{4\pi ^2}\frac{1.3}{\sqrt{2}}\left(\frac{4}{3}-\frac{1}{2}\stackrel{~}{F}\right)\sqrt{\frac{k_B}{\hbar D}},$$
(6)
respectively, where $`D`$ is the diffusion constant and $`\stackrel{~}{F}`$ is a dimensionless parameter characterizing the Hartree interaction. Since $`m_B`$ is independent of $`B`$, the conductivity for various values of $`B`$ plotted against $`T^{1/2}`$ should appear as a group of parallel lines. This is approximately the case as seen in Fig. 1 at low temperatures (e.g., $`T<0.25`$ K). Values of $`m`$ (for $`B=0`$) and $`m_B`$ at $`B=4`$ T differ from each other considerably for all the samples as listed in Table I. This implies that $`\stackrel{~}{F}`$ is of the order of unity according to Eqs. (5) and (6). In order to support this finding, we shall estimate $`\stackrel{~}{F}`$ within the context of the Thomas-Fermi approximation.
The parameter $`\stackrel{~}{F}`$ is related to the average $`F`$ of the screened Coulomb interaction on the Fermi surface as
$$\stackrel{~}{F}=-\frac{32}{3}\left[\frac{1+3F/4-(1+F/2)^{3/2}}{F}\right].$$
(7)
The Thomas-Fermi approximation gives
$$F=\frac{\mathrm{ln}(1+x)}{x},$$
(8)
where
$$x=(2k_F/\kappa )^2,$$
(9)
with the Fermi wave vector
$$k_F=(3\pi ^2N)^{1/3},$$
(10)
and the screening wave vector in SI units
$$\kappa =\sqrt{3e^2Nm^{*}/(ϵϵ_0\hbar ^2k_F^2)}.$$
(11)
For Ge, the relative dielectric constant $`ϵ`$ is 15.8 and the effective mass $`m^{*}`$ of a heavy hole is $`0.34m_e`$, where $`m_e`$ is the electron rest mass. Hence $`x=1.1\stackrel{~}{N}^{1/3}`$, where $`\stackrel{~}{N}`$ is in units of 10<sup>17</sup> cm<sup>-3</sup>. Thus, in the concentration range covered by the samples, the Thomas-Fermi approximation gives $`0.48<\stackrel{~}{F}<0.55`$, which is consistent with the experimental finding that $`\stackrel{~}{F}`$ is of the order of unity.
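A direct numerical transcription of Eqs. (7)–(11), in the reduced form $`x=1.1\stackrel{~}{N}^{1/3}`$ quoted above for Ge, is sketched below. The overall sign convention of Eq. (7) is taken such that $`\stackrel{~}{F}\approx F`$ for small $`F`$ (so $`\stackrel{~}{F}>0`$), and the densities looped over are illustrative, not the actual Table I values.

```python
import numpy as np

def F_of_x(x):
    # Eq. (8): Thomas-Fermi average of the screened Coulomb interaction.
    return np.log(1.0 + x) / x

def F_tilde(F):
    # Eq. (7), with the overall sign chosen so that F_tilde ~ F for small F.
    return -(32.0 / 3.0) * (1.0 + 0.75 * F - (1.0 + 0.5 * F) ** 1.5) / F

for N17 in (4.0, 6.0, 9.0, 12.0):      # density in units of 1e17 cm^-3 (illustrative)
    x = 1.1 * N17 ** (1.0 / 3.0)       # Eqs. (9)-(11) reduced to x = 1.1 * N^(1/3) for Ge
    print(f"N = {N17:5.1f}e17 cm^-3:  F_tilde = {F_tilde(F_of_x(x)):.3f}")
```

For densities of a few times 10<sup>17</sup> cm<sup>-3</sup> this yields $`\stackrel{~}{F}\approx 0.5`$, of the order quoted in the text.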
### B Doping-induced metal-insulator transition
The zero-temperature conductivity $`\sigma (N,B,0)`$ of the <sup>70</sup>Ge:Ga samples in various magnetic fields obtained by extrapolation of $`\sigma (N,B,T)`$ to $`T=0`$ based on Eq. (4) is shown in Fig. 2. Here, $`\sigma (N,B,0)`$ is plotted as a function of the normalized concentration:
$$n\equiv [\sigma (N,0,0)/\sigma ^{*}(0)]^{2.0}.$$
(12)
Since the relation between $`N`$ and $`\sigma (N,0,0)`$ was established for <sup>70</sup>Ge:Ga in Ref. REFERENCES as $`\sigma (N,0,0)=\sigma ^{*}(0)[N/N_c(0)-1]^{0.50}`$ where $`N_c(0)=1.861\times 10^{17}\mathrm{cm}^{-3}`$, $`n`$ is equivalent to $`N/N_c(0)-1`$. Henceforth, we will use $`n`$ instead of $`N`$ because employing $`n`$ reduces the scattering of the data caused by several experimental uncertainties, and it further helps us concentrate on observing how $`\sigma (N,B,0)`$ varies as $`B`$ is increased. Similar evaluations of the concentration have been used by various groups. In their approach, the ratio of the resistance at 4.2 K to that at 300 K is used to determine the concentration. The dashed curve in Fig. 2 is for $`B=0`$, which merely expresses Eq. (12), and the solid curves represent fits of
$$\sigma (N,B,0)=\sigma _0(B)[n/n_c(B)-1]^{\mu (B)}.$$
(13)
The exponent $`\mu (B)`$ increases from 0.5 with increasing $`B`$ and reaches a value close to unity at $`B\approx 4`$ T. For example, $`\mu =1.03\pm 0.03`$ at $`B=4`$ T and $`\mu =1.09\pm 0.05`$ at $`B=5`$ T. When $`B\geq 6`$ T, three-parameter \[$`\sigma _0(B)`$, $`n_c(B)`$, and $`\mu (B)`$\] fits no longer give reasonable results because the number of samples available for the fit decreases with increasing $`B`$. Hence, we give the solid curves for $`B\geq 6`$ T assuming $`\mu (B)=1.15`$.
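The kind of three-parameter fit described here can be sketched as follows; the data points are hypothetical stand-ins for the extrapolated zero-temperature conductivities at one fixed $`B`$, and the fit function is exactly Eq. (13).

```python
import numpy as np
from scipy.optimize import curve_fit

def sigma_crit(n, sigma0, n_c, mu):
    # Eq. (13): sigma(N,B,0) = sigma0 * (n/n_c - 1)^mu on the metallic side.
    return sigma0 * np.clip(n / n_c - 1.0, 0.0, None) ** mu

# Hypothetical (n, sigma) points for one value of B (illustrative only).
n = np.array([0.35, 0.45, 0.60, 0.80, 1.10, 1.50])
sig = np.array([2.1, 5.8, 11.0, 18.0, 28.5, 42.0])     # S/cm

popt, _ = curve_fit(sigma_crit, n, sig, p0=(10.0, 0.3, 1.0))
print("sigma0 = %.1f S/cm, n_c = %.3f, mu = %.2f" % tuple(popt))
```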
### C Magnetic-field-induced metal-insulator transition
We show $`\sigma (N,B,0)`$ as a function of $`B`$ in Fig. 3 for three different samples. When the magnetic field is weak, i.e., the correction $`\mathrm{\Delta }\sigma _B(N,B,0)\equiv \sigma (N,B,0)-\sigma (N,0,0)`$ due to $`B`$ is small compared with $`\sigma (N,0,0)`$, the field dependence of $`\mathrm{\Delta }\sigma _B(N,B,0)`$ looks consistent with the prediction by the interaction theory,
$$\mathrm{\Delta }\sigma _B(N,B,0)=-\frac{e^2}{\hbar }\frac{\stackrel{~}{F}}{4\pi ^2}\sqrt{\frac{g\mu _BB}{2\hbar D}}\propto -\sqrt{B}.$$
(14)
In larger magnetic fields, $`\sigma (N,B,0)`$ deviates from Eq. (14) and eventually vanishes at some magnetic induction $`B_c`$. For the samples in Fig. 3, we tuned the magnetic induction to the MIT with a resolution of 0.1 T. We fit an equation similar to Eq. (13),
$$\sigma (N,B,0)=\sigma _0^{\prime }(n)[1-B/B_c(n)]^{\mu ^{\prime }(n)},$$
(15)
to the data close to the critical point. As a result we obtain $`\mu ^{\prime }=1.1\pm 0.1`$ for all of the three samples. The value of $`\mu ^{\prime }`$ depends on the choice of the magnetic-field range to be used for the fitting, and this fact leads to the error of $`\pm 0.1`$ in the determination of $`\mu ^{\prime }`$.
### D Phase diagram in magnetic fields
From the critical points $`n_c(B)`$ and $`B_c(n)`$, the phase diagram at $`T=0`$ is constructed on the $`(N,B)`$ plane as shown in Fig. 4. Here, $`n_c(B)`$ for $`B\geq 6`$ T shown by triangles are obtained by assuming $`\mu =1.15`$. The vertical solid lines associated with the triangles represent the range of values over which $`n_c(B)`$ have to exist, i.e., between the highest $`n`$ in the insulating phase and the lowest $`n`$ in the metallic phase. Solid diamonds represent $`B_c`$ for the three samples in which we have studied the magnetic-field-induced MIT in the preceding subsection. Estimations of $`B_c`$ for the other samples are also shown by open boxes with error bars.
The boundary between metallic phase and insulating phase is expressed by a power-law relation:
$$n=CB^\beta .$$
(16)
From the eight data points denoted by the solid symbols, we obtain $`C=(1.33\pm 0.17)\times 10^{-3}\mathrm{T}^{-\beta }`$ and $`\beta =2.45\pm 0.09`$ as shown by the dotted curve.
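Since Eq. (16) is linear on a log-log plot, $`C`$ and $`\beta `$ follow from a simple linear regression; a minimal sketch with invented critical points (not the actual Fig. 4 data):

```python
import numpy as np

# Illustrative (B_c, n_c) pairs assumed to follow Eq. (16): n = C * B^beta.
B_c = np.array([3.0, 4.0, 5.0, 6.0, 7.0, 8.0])                       # tesla
n_c = 1.33e-3 * B_c ** 2.45                                          # synthetic boundary
n_c *= np.exp(np.random.default_rng(1).normal(0.0, 0.02, B_c.size))  # scatter

beta, lnC = np.polyfit(np.log(B_c), np.log(n_c), 1)
print(f"beta = {beta:.2f}, C = {np.exp(lnC):.2e} T^-beta")
```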
## IV Discussion
### A Scaling of zero-temperature conductivity in magnetic fields
Now we shall consider the relationship between the two critical exponents: $`\mu `$ for the doping-induced MIT and $`\mu ^{\prime }`$ for the magnetic-field-induced MIT. Suppose that a sample with normalized concentration $`n`$ has a zero-temperature conductivity $`\sigma `$ at $`B\neq 0`$ and that $`[n/n_c(B)-1]\ll 1`$ or $`[1-B/B_c(n)]\ll 1`$. From Eqs. (13) and (15), we have two expressions for $`\sigma `$:
$$\sigma =\sigma _0(n/n_c-1)^\mu $$
(17)
and
$$\sigma =\sigma _0^{\prime }(1-B/B_c)^{\mu ^{\prime }}.$$
(18)
On the other hand, we have from Eq. (16)
$$n/n_c=(B/B_c)^{-\beta }=[1-(1-B/B_c)]^{-\beta }\approx 1+\beta (1-B/B_c)$$
(19)
in the limit of $`(1-B/B_c)\ll 1`$. This equation can be rewritten as
$$(n/n_c-1)/\beta \approx (1-B/B_c).$$
(20)
Using Eqs. (17), (18), and (20), we obtain
$$\sigma _0^{\prime }(1-B/B_c)^{\mu ^{\prime }}\approx \beta ^\mu \sigma _0(1-B/B_c)^\mu .$$
(21)
Since Eq. (21) has to hold for arbitrary $`B`$, the following relations
$$\sigma _0^{\prime }=\beta ^\mu \sigma _0$$
(22)
and
$$\mu ^{\prime }=\mu $$
(23)
are derived.
In Fig. 5, we see how well Eq. (23) holds for the present system. In Sec. III C, we have already shown that $`\mu ^{\prime }=1.1\pm 0.1`$ is practically independent of $`n`$. Concerning the exponent $`\mu `$, however, its dependence on $`B`$ has not been ruled out completely even for the highest $`B`$ we used in the experiments. This is mainly because the number of available data points at large $`B`$ is not sufficient for a precise determination of $`\mu `$. In Fig. 5, the results of the doping-induced MIT for $`B\geq 4`$ T (solid symbols) and the magnetic-field-induced MIT for three different samples (open symbols) are plotted. Here, we plot $`\sigma (N,B,0)/[\beta ^\mu \sigma _0(B)]`$ vs $`[n/n_c(B)-1]/\beta `$ with $`\beta =2.5`$ and $`\mu =1.1`$ for the doping-induced MIT, and $`\sigma (N,B,0)/\sigma _0^{\prime }(n)`$ vs $`[1-B/B_c(n)]`$ for the magnetic-field-induced MIT. Figure 5 clearly shows that the data points align exceptionally well along a single line describing a single exponent $`\mu =\mu ^{\prime }=1.1`$.
We saw in Fig. 2 that $`\mu `$ apparently takes smaller values in $`B\leq 3`$ T, which seemingly contradicts the above consideration. We can understand this as follows. We find that the critical exponent $`\mu `$ in zero magnetic field is 0.5, which is different from the values of $`\mu `$ in magnetic fields. Hence, one should note whether the system under consideration belongs to the “magnetic-field regime” or not. In systems where the MIT occurs, there are several characteristic length scales: the correlation length, the thermal diffusion length, the inelastic scattering length, the spin scattering length, the spin-orbit scattering length, etc. As for the magnetic field, it is characterized by the magnetic length $`\lambda \equiv \sqrt{\hbar /eB}`$. When $`\lambda `$ is smaller than the other length scales, the system is in the “magnetic-field regime.” As the correlation length $`\xi `$ diverges at the MIT, $`\lambda <\xi `$ holds near the critical point, no matter how weak the magnetic field is. When the field is not sufficiently large, the “magnetic-field regime,” where we assume $`\mu =1.1`$ to hold, is restricted to a narrow region of concentration. Outside the region, the system crosses over to the “zero-field regime” where $`\mu =0.5`$ is expected. This is what is seen in Fig. 2.
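For orientation, the magnetic length $`\lambda \equiv \sqrt{\hbar /eB}`$ is easily evaluated over the field range of this experiment (the comparison length scales themselves depend on the sample and are not computed here):

```python
import numpy as np

hbar = 1.054571817e-34   # J s
e = 1.602176634e-19      # C

for B in (1.0, 4.0, 8.0):                      # tesla
    lam_nm = np.sqrt(hbar / (e * B)) * 1e9     # magnetic length in nm
    print(f"B = {B:.0f} T: lambda = {lam_nm:.1f} nm")
```

At $`B=1`$ T this gives $`\lambda \approx 26`$ nm, shrinking as $`B^{-1/2}`$ toward higher fields.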
The constant critical exponent in $`B\neq 0`$ yields a scaling of the form
$$\sigma (N,B,0)=\stackrel{~}{\sigma }(n,B)f(n/B^\beta ),$$
(24)
where $`\stackrel{~}{\sigma }(n,B)`$ is a prefactor which is irrelevant to the transition. The values of the prefactor are listed in Table II. Here, we list $`\sigma ^{*}`$ in Eq. (1) instead of $`\sigma _0`$; $`\sigma ^{*}`$ in $`B\neq 0`$ is calculated from $`\sigma _0`$ as
$$\sigma ^{*}=(1+n_c^{-1})^\mu \sigma _0$$
(25)
because the relation between $`n/n_c-1`$ and $`N/N_c-1`$ is given by
$$(n/n_c-1)=(1+n_c^{-1})(N/N_c-1).$$
(26)
The values of $`\sigma ^{*}`$ for zero field and for other doped semiconductors are also given in Table II. The values are normalized to Mott’s minimum metallic conductivity defined by
$$\sigma _{\mathrm{min}}\equiv C_\mathrm{M}(e^2/\hbar )N_c^{1/3},$$
(27)
where we assume $`C_\mathrm{M}=0.05`$ as Rosenbaum et al. did. The prefactor $`\sigma ^{*}`$ can also be defined for the magnetic-field-induced MIT. We propose to define it, based on Eqs. (22) and (23), as
$$\sigma ^{*}\equiv [(1+n^{-1})/\beta ]^{\mu ^{\prime }}\sigma _0^{\prime }.$$
(28)
Using this definition, we calculated $`\sigma ^{*}`$ for the three <sup>70</sup>Ge:Ga samples in which we studied the magnetic-field-induced MIT. The ratios $`\sigma ^{*}/\sigma _{\mathrm{min}}`$ are 10, 8, and 6 for Samples A3, B2, and B4, respectively. It is reasonable that the ratios $`\sigma ^{*}/\sigma _{\mathrm{min}}`$ for the three samples and those in Table II are of the same order of magnitude. Note that Mott’s minimum metallic conductivity $`\sigma _{\mathrm{min}}`$ depends on both $`B`$ and the system through the critical concentration $`N_c`$. \[See Eq. (27).\]
A similar scaling form was studied theoretically by Khmel’nitskii and Larkin. They considered a noninteracting electron system starting from
$$\sigma (N,B,0)\sim \frac{e^2}{\hbar \xi }f(B^\alpha \xi ),$$
(29)
where $`\xi `$ is the correlation length. They claimed that the argument of the function $`f`$ should be a power of the magnetic flux through a region with dimension $`\xi `$. This means
$$\sigma (N,B,0)\sim \frac{e^2}{\hbar \xi }f(\xi /\lambda ),$$
(30)
where $`\lambda \equiv \sqrt{\hbar /eB}`$ is the magnetic length, and hence, $`\alpha =1/2`$. In order to discuss the shift of the MIT due to the magnetic field, they rewrote Eq. (30) as
$$\sigma (N,B,0)\sim \frac{e^2}{\hbar \lambda }\varphi (t\lambda ^{1/\nu }),$$
(31)
based on the relation in zero magnetic field
$$\xi \propto t^{-\nu }.$$
(32)
Here, $`t`$ is a measure of distance from the critical point in zero field, e.g.,
$$t\equiv [N/N_c(0)-1].$$
(33)
The zero point of the function $`\varphi `$ gives the MIT, and the shift of the critical point for the MIT equals
$$N_c(B)-N_c(0)\propto B^{1/2\nu }.$$
(34)
Thus, $`\beta =1/(2\nu )`$ results. Rosenbaum, Field, and Bhatt reported $`\beta =0.5`$ and $`\mu =1`$ in Ge:Sb, which satisfies this relation, when one assumes the Wegner relation $`\mu =\nu `$. In the present system, however, this relation does not hold, as long as we assume the Wegner relation. Experimentally, we find $`\beta =2.5`$, while $`1/(2\nu )=1/(2\mu )=1`$ for <sup>70</sup>Ge:Ga at $`B=0`$. The relation does not hold in Si:B, either ($`\beta =1.7`$ while $`\mu =0.65`$ at $`B=0`$).
### B Critical exponents
Finally, we shall discuss the possible origin for the crossover of $`\mu =0.5`$ at $`B=0`$ to $`\mu =1.1`$ at $`B\neq 0`$. According to theories for the MIT dealing with both disorder and electron-electron interaction, systems can be categorized into four universality classes by the symmetry they have, as listed in Table III, and the value of the critical exponent depends only on the universality class. For classes MF, MI, and SO, the nonlinear sigma model receives some restrictions from the symmetry breaker and the critical exponent $`\nu `$ for the correlation length is calculated for $`d=2+\epsilon `$ dimensions as
$$\nu =\frac{1}{\epsilon }[1+O(\epsilon )].$$
(35)
Assuming the Wegner relation
$$\mu =\nu (d-2),$$
(36)
Eq. (35) yields
$$\mu =1+O(\epsilon ),$$
(37)
which means $`\mu \approx 1`$ when $`\epsilon \ll 1`$. For $`d=3`$ $`(\epsilon =1)`$, however, the theoretical result tells us very little about the value of $`\mu `$. A calculation for class G is difficult because there is no restriction for the nonlinear sigma model. The value in Table III for $`d=3`$ is merely an approximate one. We believe that “$`\nu \approx 0.75`$” should be treated as less accurate than “$`\nu =1+O(1)`$” for the other classes. So, we conclude that values of $`\mu `$ can only be determined at this time by experimental measurements.
Ruling out an ambiguity due to an inhomogeneous distribution of impurities, we have established $`\mu =0.5`$ at $`B=0`$ and $`\mu =1.1`$ at $`B\neq 0`$ for <sup>70</sup>Ge:Ga. Since <sup>70</sup>Ge:Ga is a $`p`$-type semiconductor, it is most likely categorized as class SO. Electrical transport properties of $`p`$-type semiconductors are governed by holes at the top of the valence band, where spin-orbit coupling partially removes the degeneracy and shifts the split-off band down in energy by 0.29 eV in Ge and 0.044 eV in Si. External magnetic fields change the universality class to which a system belongs by breaking the time-reversal symmetry. According to Ref. REFERENCES, a system in class SO at $`B=0`$ belongs to class MI at $`B\neq 0`$ even if it contains no magnetic impurities. The change of $`\mu =0.5`$ to $`\mu =1.1`$ observed in <sup>70</sup>Ge:Ga due to the application of a magnetic field should be understood in such a context. A similar phenomenon was also found in Si:B. (See Table II.)
## V Conclusion
We have measured the electrical conductivity of NTD <sup>70</sup>Ge:Ga samples in magnetic fields up to $`B=8`$ T in order to study the doping-induced MIT (in magnetic fields) and the magnetic-field-induced MIT. For both types of MIT, the critical exponent of the conductivity is 1.1, which is different from the value 0.5 at $`B=0`$. The change of the critical exponent caused by the applied magnetic fields supports a picture in which $`\mu `$ varies depending on the universality class to which the system belongs. The phase diagram has been determined in magnetic fields for the <sup>70</sup>Ge:Ga system.
## Acknowledgments
We are thankful to T. Ohtsuki for valuable discussions, J. W. Farmer for the neutron irradiation, and V. I. Ozhogin for the supply of the Ge isotope. All the low-temperature measurements were carried out at the Cryogenic Center, the University of Tokyo. M. W. would like to thank Japan Society for the Promotion of Science (JSPS) for financial support. The work at Keio was supported in part by a Grant-in-Aid for Scientific Research from the Ministry of Education, Science, Sports, and Culture, Japan. The work at Berkeley was supported in part by the Director, Office of Energy Research, Office of Basic Energy Science, Materials Sciences Division of the U. S. Department of Energy under Contract No. DE-AC03-76SF00098 and in part by U. S. NSF Grant No. DMR-97 32707. |
no-problem/9910/nucl-th9910017.html | ar5iv | text | # 1 Introduction
Although the Vavilov-Cherenkov effect is a well-established phenomenon widely used in physics and technology (see. e.g., Frank’s book ), a lot of its aspects remain uninvestigated up to now. In particular, it is not clear how a transition from the sub-light velocity regime to the the super-light one takes place. Some time ago (Tyapkin , Zrelov et al. ) it was suggested that alongside with the usual Cherenkov and bremsstrahlung ($`BS`$) shock waves, there should exist a shock wave associated with the charged particle passing the medium light velocity $`c_n`$. The consideration presented there was pure qualitative without any formulae and numerical results. It was based on the analogy with the phenomena occurring in acoustics and hydrodynamics. It seems to us that this analogy is not complete and, therefore, it cannot be considered as a final proof.
Usually, treating the Vavilov-Cherenkov effect, one considers the charge motion in an infinite medium with a constant velocity $`v>c_n`$. In the absence of $`\omega `$ dispersion, there is no electromagnetic field ($`EMF`$) before the Cherenkov cone accompanying the charge, an infinite $`EMF`$ on the Cherenkov cone itself and finite values of the $`EMF`$ strengths behind it (). In this case, information concerning the transition effects arising when charge velocity coincides with $`c_n`$ is lost (except for the existence of the Cherenkov shock wave ($`CSW`$) itself).
The accelerated motion of the point charge in a vacuum was first considered by Schott (). Yet, his qualitative consideration was pure geometric, not allowing numerical investigations.
Tamm () approximately solved the following problem: A point charge rests at a fixed point of medium up to some moment $`t=t_0`$ after which it exhibits the instant infinite acceleration and moves uniformly with the velocity greater than the light velocity in the medium. At the moment $`t=t_0`$ the charge decelerates instantly and rests afterwards. Later, this problem was numerically investigated by Ruzicka and Zrelov (). The analytic solution of this problem in the absence of dispersion has been found in . However, in all these studies the information concerning the transition effects arising when charge velocity surpasses the light velocity in medium was lost (due to the instant charge acceleration).
Another possibility is to use smooth accelerated and decelerated charge motion as a tool for the studying the above-mentioned transition effects. Previously, the straight-line motion of a point charge with constant acceleration ($`z=at^2`$) has been considered in . This motion law is obtained from the relativistic equation
$$m\frac{d}{dt}\frac{v}{\sqrt{1\beta ^2}}=eE,v=\frac{dz}{dt},\beta =v/c,$$
(m is the rest mass) for the following electric field directed along the $`z`$ axis
$$E_z=\frac{2ma}{e(14az/c^2)^{3/2}}.$$
$`(1.1)`$
For the case of accelerated motion it has been found there that two shock waves arise when charge velocity coincides with $`c_n`$. The first of them is the well-known Cherenkov shock wave $`C_M`$ having the form of the finite Cherenkov cone and propagating with the velocity of the charge. The second of these waves ($`C_L`$), closing the Cherenkov cone and propagating with the velocity $`c_n`$, is just the shock wave the existence of which was qualitatively predicted by Tyapkin () and Zrelov et al.(). These two waves form an indivisible entity. As time goes, the dimensions of this complex grow, but its form remains essentially the same. The singularities of the $`C_L`$ and $`C_M`$ shock waves are the same and much stronger than the singularity of the $`BS`$ shock wave arising from the beginning of the charge motion.
For the case of decelerated motion it has been found in the same reference that an additional shock wave arises at the moment when the charge velocity coincides with $`c_n`$. This wave being detached from the charge exists even after termination of its motion. It propagates with the velocity $`c_n`$ and has the same singularity as $`CSW`$.
The drawback of this consideration is that the electric field (1.1) maintaining charge motion tends to $`\mathrm{}`$ as $`z`$ approaches $`c^2/4a`$. This singularity makes the creation of electric field (1.1) be rather problematic. This, in turn, complicates the experimental verification of the shock waves mentioned above.
Here, we consider the straight-line motion of a point charge in a constant uniform electric field (which is much easier to create than the singular electric field (1.1)) and evaluate $`EMF`$ arising from such a motion. The arising motion law is manifestly relativistic. We suggest this motion law to be given, disregarding the energy losses and the medium influence on a moving charge.
Qualitatively, we confirm the results obtained in concerning the existence of the shock waves associated with the surpassing the light medium velocity.
In the present approach, we take the refractive index to be independent of $`\omega `$. This permits us to solve the problem under consideration explicitly. The price for disregarding of the $`\omega `$ dependence is the divergence of integrals quadratic in the Fourier transforms of field strengths (such as the total energy flux).
This consideration is on the same footing as the first Tamm and Frank papers on the Vavilov-Cherenkov effect in which the dispersion and the influence of energy losses on the uniform point charge motion were not taken into account. In spite of this, these papers correctly predicted the location of the Vavilov-Cherenkov singularity. The subsequent inclusion of dispersion only slightly changed these results.
Another argument for the simplified treatment of the charge accelerated motion (i.e., without $`\omega `$ dispersion) is due to Refs. where the uniform motion of a charge in medium with a standard $`\omega `$ dependence of electric permittivity ($`ϵ(\omega )=1+\omega _L^2/(\omega _0^2\omega ^2+ip\omega )`$) was considered. It was shown there that such a $`\omega `$ dependence of $`ϵ`$ removes singularities of the field strengths and leads to the appearance of many maxima of the radiated energy flux behind the moving charge. However, the main radiation maximum is at the same position as in the absence of $`\omega `$ dispersion. Further, despite the $`\omega `$ dispersion, the critical charge velocity (independent of frequency and dependent on medium properties) exists below and above of which the radiation spectrum differs drastically. It turns out that for the uniform charge motion the main features of the Cherenkov-Vavilov radiation are the same with and without dispersion. Thus, we hope the same is true for the $`EMF`$ radiated by the accelerated charge moving in medium.
## 2 Statement of the physical problem
Let a point charge move inside the medium with the polarizabilities $`ϵ`$ and $`\mu `$ along the given trajectory $`\stackrel{}{\xi }(t)`$. Then, its $`EMF`$ at the observation point ($`\rho ,z`$) is given by the Lienard-Wiechert potentials (see, e.g.,)
$$\mathrm{\Phi }(\stackrel{}{r},t)=\frac{e}{ϵ}\underset{i}{}\frac{1}{|R_i|},\stackrel{}{A}(\stackrel{}{r},t)=\frac{e\mu }{c}\underset{i}{}\frac{\stackrel{}{v}_i}{|R_i|},div\stackrel{}{A}+\frac{ϵ\mu }{c}\dot{\mathrm{\Phi }}=0$$
$`(2.1)`$
Here
$$\stackrel{}{v}_i=(\frac{d\stackrel{}{\xi }}{dt})|_{t=t_i},R_i=|\stackrel{}{r}\stackrel{}{\xi }(t_i)|\stackrel{}{v}_i(\stackrel{}{r}\stackrel{}{\xi }(t_i))/c_n$$
and $`c_n`$ is the light velocity inside the medium ($`c_n=c/\sqrt{ϵ\mu }`$). The summing in (2.1) is performed over all physical roots of the equation
$$c_n(tt^{})=|\stackrel{}{r}\stackrel{}{\xi }(t^{})|$$
$`(2.2)`$
To preserve the causality, the time of radiation $`t^{}`$ should be smaller than the observation time $`t`$. Obviously, $`t^{}`$ depends on the coordinates $`\stackrel{}{r},t`$ of the observation point $`P`$. With the account of (2.2) one gets for $`R_i`$
$$R_i=c_n(tt_i)\stackrel{}{v}_i(\stackrel{}{r}\stackrel{}{\xi }(t_i))/c_n$$
$`(2.3)`$
Consider the motion of the charged point-like particle of the rest mass $`m`$ inside the medium according to the motion law ()
$$z(t)=\sqrt{z_0^2+c^2t^2}+C.$$
It may be realized in a constant electric field $`E`$ directed along the $`Z`$ axis: $`z_0=|mc^2/eE|>0`$. Here $`C`$ is an arbitraty constant. We choose it from the condition $`z(t)=0`$. Therefore,
$$z(t)=\sqrt{z_0^2+c^2t^2}z_0$$
$`(2.4)`$
This law of motion, being manifestly relativistic, corresponds to constant proper acceleration .
The charge velocity is given by
$$v=\frac{dz}{dt}=c^2t(z_0^2+c^2t^2)^{1/2}.$$
Clearly, it tends to the light velocity in vacuum as $`t\mathrm{}`$. The retarded times $`t^{}`$ satisfy the following equation:
$$c_n(tt^{})=[\rho ^2+(z+z_0\sqrt{z_0^2+c^2t^2})^2]^{1/2}$$
$`(2.5)`$
It is convenient to introduce the dimensionless variables
$$\stackrel{~}{t}=ct/z_0,\stackrel{~}{z}=z/z_0,\stackrel{~}{\rho }=\rho /z_0$$
$`(2.6)`$
Then,
$$\alpha (\stackrel{~}{t}\stackrel{~}{t}^{})=[\stackrel{~}{\rho }^2+(\stackrel{~}{z}+1\sqrt{1+\stackrel{~}{t}^2})^2]^{1/2},$$
$`(2.7)`$
where $`\alpha =c_n/c`$ is the ratio of the light velocity in medium to that in vacuum. In order not to overload exposition we drop the tilde signs
$$\alpha (tt^{})=[\rho ^2+(z+1\sqrt{1+t^2})^2]^{1/2}$$
$`(2.8)`$
For the treated one-dimensional motion the denominators $`R_i`$ are given by
$$R_i=\frac{z_0}{\alpha \sqrt{1+t_i^2}}[\alpha ^2(tt_i)\sqrt{1+t_i^2}t_i(z+1\sqrt{1+t_i^2})]$$
$`(2.9)`$
We consider the following two problems :
I. A charged particle rests at the origin up to a moment $`t^{}=0`$. After that it is accelerated in the positive direction of the $`Z`$ axis.
II. A charged particle decelerates moving from $`z=\mathrm{}`$ to the origin. After the moment $`t^{}=0`$ it rests there.
It is easy to check that the moving charge acquires the light velocity $`c_n`$ at the moments $`t_l=\pm \alpha /\sqrt{1\alpha ^2}`$ for the accelerated and decelerated motion, resp. The position of a charge at those moments is $`z_l=1/\sqrt{1\alpha ^2}1`$.
It is our aim to investigate space-time distribution of $`EMF`$ arising from such particle motions. For this, we should solve Eq.(2.8). Taking its square we obtain the fourth degree algebraic equation relative to $`t^{}`$. Solving it, we find space-time domains where the $`EMF`$ exists. It is just this way of finding the $`EMF`$ which was adopted in . It was shown in the same reference that there is another, much simpler approach for recovering $`EMF`$ singularities (it was extensively used by Schott ). We seek the zeros of the denominators $`R_i`$ entering into the definition of the electromagnetic potentials (2.1). They are obtained from the equation
$$\alpha ^2(tt^{})\sqrt{1+t^2}t^{}(z+1\sqrt{1+t^2})=0$$
$`(2.10)`$
We rewrite (2.8) in the form
$$\rho ^2=\alpha ^2(tt^{})^2(z+1\sqrt{1+t^2})^2.$$
$`(2.11)`$
Recovering $`t^{}`$ from (2.10) and substituting it into (2.11) we find the surfaces $`\rho (z,t)`$ carrying the singularities of the electromagnetic potentials. They are just shock waves which we seek. It turns out that $`BS`$ shock waves (i.e., moving singularities arising from the beginning or termination of a charge motion) are not described by Eqs. (2.10) and (2.11). The physical reason for this is that on these surfaces $`BS`$ field strengths, not potentials, are singular (). The simplified procedure mentioned above for recovering of moving $`EMF`$ singularities is to find solutions of (2.10) and (2.11) and add to them ”by hand ” the positions of $`BS`$ shock waves defined by the equation $`r=\alpha t,r=\sqrt{\rho ^2+z^2}`$. The equivalence of this approach to the complete solution of (2.8) has been proved in where the complete description of the $`EMF`$ (not only its moving singularities as in the present approach) of a moving charge was given. It was shown there that the electromagnetic potentials exhibited infinite (for the Cherenkov and the shock waves under consideration) jumps when one crosses the above singular surfaces. Correspondingly, field strengths have $`\delta `$-type singularities on these surfaces while the space-time propagation of these surfaces describes the propagation of the radiated energy flux.
## 3 Numerical results
We consider the typical case when the ratio $`\alpha `$ of the light velocity in medium to that in vacuum is equal to 0.8.
### 3.1 Accelerated motion
For the first of the treated problems ( uniform acceleration of the charge resting at the origin up to a moment $`t=0`$) only positive retarded times $`t_i`$ have a physical meaning (negative $`t_i`$ correspond to the charge resting at the origin). The resulting configuration of the shock waves for the typical observation time $`t=2`$ is shown in Fig.1. We see on it:
i) The Cherenkov shock wave $`C_M`$ having the form of the Cherenkov cone;
ii) The shock wave $`C_L`$ closing the Cherenkov cone and describing the shock wave emitted from the point $`z_l=(1\alpha ^2)^{1/2}1`$ at the moment $`t_l=\alpha (1\alpha ^2)^{1/2}`$ when the velocity of a charge coincides with the light velocity in medium;
iii) The $`BS`$ shock wave $`C_0`$.
It turns out that the surface $`C_L`$ is approximated with good accuracy by the spherical surface $`\rho ^2+(zz_l)^2=(tt_l)^2`$ (shown by the short-dash curve $`C`$) It should be noted that only the part of $`C`$ coinciding with $`C_L`$ has physical meaning.
On the internal sides of the surfaces $`C_L`$ and $`C_M`$ electromagnetic potentials acquire infinite values. On the external side of $`C_M`$ lying outside $`C_0`$ the magnetic vector potential is zero (as there are no solutions of Eqs. (2.10),(2.11) there), while the electric scalar potential coincides with that of the resting charge. On the external sides of $`C_L`$ and on the part of the $`C_M`$ surface lying inside $`C_0`$ the electromagnetic potentials have finite values (as bremsstrahlung has reached these space regions).
In the negative $`z`$ semi-space the experimentalist will detect only the $`BS`$ shock wave. In the positive $`z`$ semi-space, for the sufficiently large times ($`t>2\alpha /(1\alpha ^2)`$) the observer close to the $`z`$ axis will detect the Cherenkov shock wave $`C_M`$ first, the $`BS`$ shock wave $`C_0`$ later and, finally, the shock wave $`C_L`$ originating from the surpassing the medium light velocity. For the observer more remoted from the $`z`$ axis the $`BS`$ shock wave $`C_0`$ arrives first, then $`C_M`$ and finally $`C_L`$ (Fig. 1). For large distances from the $`z`$ axis the observer will see only the $`BS`$ shock wave.
The positions of the shock waves for different observation times are shown in Fig. 2. The dimension of the Cherenkov cone is zero for $`tt_l`$ and continuously rises with time for $`t>t_l`$. The physical reason for this is that the $`C_L`$ shock wave closing the Cherenkov cone propagates with the light velocity $`c_n`$, while the head part of the Cherenkov cone $`C_M`$ attached to the charged particle propagates with the velocity $`v>c_n`$. It is seen that for small observation times ($`t=2`$ and $`t=4`$) the $`BS`$ shock wave $`C_0`$ (pointed curve) precedes $`C_M`$. Later, $`C_M`$ overtakes (this happens at the moment $`t=2\alpha /(1\alpha ^2)`$) and partly surpasses $`BS`$ shock wave $`C_0`$ ($`t=8`$). However, the $`C_L`$ shock wave is always behind $`C_0`$ (as both of them propagate with the velocity $`c_n`$, but $`C_L`$ is born at the later moment $`t=t_l`$ ). The picture similar to the $`t=8`$ case remains essentially the same for later times.
### 3.2 Decelerated motion
Now we turn to the second problem (uniform deceleration of the charged particle along the positive $`z`$ semi-axis up to a moment $`t=0`$ after which it rests at the origin). In this case, only negative retarded times $`t_i`$ have a physical meaning (positive $`t_i`$ correspond to the charge resting at the origin).
For the observation time $`t>0`$ the resulting configuration of the shock waves is shown in Fig. 3 where one sees the $`BS`$ shock wave $`C_0`$ arising from the termination of the charge motion (at the moment $`t=0`$) and the blunt shock wave $`C_M`$ into which the $`CSW`$ transforms after the termination of the motion. The head part of $`C_M`$ is described with good accuracy by the sphere $`\rho ^2+(zz_l)^2=(t+t_l)^2`$ corresponding to the fictious shock wave $`C`$ emitted from the point $`z_l=(1\alpha ^2)^{1/2}1`$ at the moment $`t_l=\alpha (1\alpha ^2)^{1/2}`$ when the velocity of the decelerated charge coincides with the medium light velocity. Only part of $`C`$ coinciding with $`C_M`$ has physical meaning. The electromagnetic potentials vanish outside $`C_M`$ (as no solutions exist there) and acquire infinite values on the internal part of $`C_M`$. Therefore, the surface $`C_M`$ represents the shock wave. As a result, for the decelerated motion after termination of the particle motion ($`t>0`$) one has the shock wave $`C_M`$ detached from a moving charge and the $`BS`$ shock wave $`C_0`$ arising from the termination of the particle motion. After the $`C_0`$ wave reaches the observer, he will see the electrostatic field of a charge at rest and bremsstrahlung from remoted parts of charge trajectory.
The positions of shock waves for different times are shown in Fig. 4 where one sees how the acute $`CSW`$, attached to the moving charge ( $`t=2`$), transforms into the blunt shock wave detached from it ($`t=8`$). Pointed curves mean the $`BS`$ shock waves described by the equation $`r=\alpha t`$.
For the decelerated motion and $`t<0`$ (i.e., before termination of the charge motion) physical solutions exist only inside the Cherenkov cone $`C_M`$ ( $`t=2`$ on Fig. 4). On the internal boundary of the Cherenkov cone the electromagnetic potentials acquire infinite values. On their external boundaries the electromagnetic potentials are zero (as no solutions exist there). When the charge velocity coincides with $`c_n`$ the $`CSW`$ leaves the charge and transforms into the $`C_M`$ shock wave which propagates with the velocity $`c_n`$ ($`t=2,\mathrm{\hspace{0.33em}4}`$ and $`8`$ on Fig. 4). As it has been mentioned, the blunt head parts of these waves are approximated with a good accuracy by the surface $`\rho ^2+(zz_l)^2=(t+t_l)^2`$ corresponding to the fictious shock wave emitted at the moment when the charge velocity coincides with the light velocity in the medium.
In the negative $`z`$ semi-space the experimentalist will detect the blunt shock wave first and $`BS`$ shock wave (shock dash curve) later.
In the positive $`z`$ semi-space, for the observation point close to the $`z`$ axis the observer will see the $`CSW`$ first and $`BS`$ shock wave later. For larger distances from the $`z`$ axis he will see at first the blunt shock $`C_M`$ into which the $`CSW`$ degenerates after the termination of the charge motion and the $`BS`$ shock wave later (Fig. 4).
It should be mentioned about the continuous radiation which reaches the observer between the arrival of the above shock waves and about the continuous radiation and the electrostatic field of a charge at rest after the arrival of the last shock wave. They are easily restored from the complete exposition presented in for the $`z=at^2`$ motion law.
## 4 Conclusion
We have investigated the space-time distribution of the electromagnetic field arising from the accelerated manifestly relativistic charge motion. This motion is maintained by the constant electric field. Probably, this field is easier to create in gases (than in solids in which the screening effects are essential) where the Vavilov-Cherenkov effect is also observed. We have confirmed the intuitive predictions made by Tyapkin () and Zrelov et al. () concerning the existence of the new shock wave (in addition to the Cherenkov and bremsstrahlung shock waves) arising when the charge velocity coincides with the light velocity in medium. For the accelerated motion, this shock wave forms indivisible unity with Cherenkov’s shock wave. It closes the Vavilov-Cherenkov radiation cone and propagates with the light velocity in the medium. For the decelerated motion, the above shock wave detaches from a moving charge when its velocity coincides with the light velocity in medium.
The quantitative conclusions made in for a less realistic external electric field maintaining the accelerated charge motion are also confirmed. We have specified under what conditions and in which space-time regions the above-mentioned new shock waves exist. It would be interesting to observe these shock waves experimentally.
# A Model for the Parton Distribution in Nuclei
## Abstract
We have extended a recently proposed model of parton distributions in nucleons to the case of nucleons in nuclei.
PACS numbers: 12.38.Aw; 12.38.Lg; 12.39.-x
Keywords: Parton distributions; Hadron structure; QCD; Nuclear structure
Recently a simple model for parton distributions in hadrons has been presented . They are derived from a spherically symmetric, Gaussian distribution, the width of which reflects, via the Heisenberg uncertainty relation, the hadronic size. Two distinct parts are distinguished in a hadron: a “bare” hadron (identified with valence quarks and gluons) and hadronic fluctuations (identified with pions, which are later the source of sea partons). The parton density distributions were calculated numerically using a Monte Carlo technique, and good agreement with deep inelastic scattering data was reported .
In this paper we have used the same method to calculate parton distributions in nuclei. Preserving the simplicity of the model and using standard nuclear structure, we were able to successfully describe $`F_2(x)`$ in a nucleus over the whole range of $`x`$. The only changes introduced were dictated by the presence of other hadrons around the one under investigation, in a similar fashion as in our previous work on this subject . However, in the present case we were also able to describe the region of small $`x`$, which shows effects of shadowing and is therefore of particular interest to the physics of heavy ion collisions .
Let us first recollect the main features of the model , which we shall follow as closely as possible (in particular, the choice of kinematics and notation will be the same). The hadron is visualised there as a composition of a bare hadron and hadronic fluctuations. The former is made out of valence quarks and gluons and the latter are the source of sea quarks and gluons. They are formed mainly by the pion. One starts with the hadron at rest, in which frame all partons are supposed to be distributed according to a spherically symmetric, Gaussian distribution. Such a form is natural because of the large number of interactions binding partons in the hadron. The only parameter here is the width of this distribution, the value of which is expected to be of the order of a few hundred MeV (both from the Heisenberg uncertainty relation applied to the hadron size and from the primordial transverse momentum of partons observed in deep inelastic collisions). This parameter encompasses also all perturbative QCD effects present due to the initial state emission and therefore depends on the scale $`Q_0^2`$ (footnote: Actually, when confronting flavour dependent data the number of parameters is enlarged because each quark flavour can have its own width; the same is true for gluons, cf. . In such a situation the percentage of each species in the momentum sum rule is also a kind of parameter.). The goal of this approach is not so much the full wave function of the hadron as the probability of finding a single parton with four-momentum $`k`$ probed by an external current with four-momentum $`q`$. Therefore all other partons are treated collectively as a single remnant with four-momentum $`r`$. Because the above prescription provides us only with the three-momentum of the probed parton, it is assumed that the energy component is equal to the parton (current) mass plus a Gaussian variation with the same width as above. It means that the parton can be off-shell at the scale $`Q_0^2`$ and fluctuates with a life-time given by the nucleon radius.
The reaction takes place in a coordinate system in which the negative $`z`$-direction points along the current which probes the hadron. One uses the light cone momentum fraction $`x`$ of the parton, defined by $`k_+=xp_+`$ (where $`p`$ is the four-momentum of the hadron). The final four-momentum of the scattered parton, denoted by $`j`$, must satisfy the condition $`0<j^2<W^2`$ (where $`W`$ is the invariant mass of the hadronic system). This is equivalent to $`0<x<1`$. When the masses of quarks are neglected the same condition must be satisfied by the four-momentum $`r`$ of the remnant: $`0<r^2<W^2`$. The parton distributions are then calculated by a Monte Carlo code (footnote: In our work we have used a simplified version of the model , with the same distributions for all valence quarks and without evolution in $`Q^2`$. We shall also not address gluon distributions, which in were assumed to have the same basic Gaussian shape as the valence quarks. The presence of gluons will be accounted for only in the momentum sum rule, where part of the momentum will always be allocated to the gluonic component.). The momentum $`k`$ of the parton to be probed by a current with virtuality $`Q_0^2`$ and four-momentum $`q`$ is chosen from the Gaussian distribution described before. Actually the value of $`Q_0^2`$ is a free parameter expected to be of the order of $`1`$ GeV<sup>2</sup>. This makes it possible to calculate the four-momenta $`j`$ and $`r`$. Events are accepted if the exact kinematical constraints mentioned above are fulfilled. In this way one obtains as a result the valence parton distribution $`f_v(x;Q_0^2)`$ (“bare” hadron) (footnote: One has to remember, however, that it is the so-called Bjorken variable $`x=x_{Bj}=\frac{Q^2}{2pq}`$ which is accessible experimentally, whereas starts from the light cone target rest frame variable $`x=x_{LC}=\frac{k^+}{p^+}`$ (where $`p=(M,0,0,0)`$) with a fixed resolution $`Q^2=Q_0^2`$. Whenever one is interested in some higher $`Q^2`$, the QCD evolution in the initial state must be performed. In this case the $`x=x_{Bj}`$ picked out of the proton at $`Q^2`$ will be smaller than the $`x_{LC}=k^+/p^+`$ one has started with at $`Q_0^2`$. The remaining energy is radiated into the final state as jets. If the invariant mass squared of all jets in the final state is $`M_X^2`$, then $`x_{LC}=x_{Bj}\left(1+\frac{M_X^2}{Q^2}\right)`$. In this case, if one only scatters a quark with no further radiation, then $`x_{LC}=x_{Bj}\left(1+\frac{m_q^2}{Q^2}\right)\approx x_{Bj}`$. This clarifies the limits of applicability of the model used here.). The sea parton distribution is given by the pionic component of the nucleon: $`f_s(x;Q_0^2)=\int \frac{dy}{y}f_\pi (y;Q_0^2)f_{pion}(x/y;Q_0^2)`$. It is the convolution of the pion distribution function in the nucleon, $`f_\pi (x;Q_0^2)`$, and the parton structure of the pion, $`f_{pion}(x;Q_0^2)`$. The latter is obtained from the same Gaussian primordial distribution as used for the valence partons. The characteristic behaviour of the sea partons is then derived from the pion distribution in the nucleon, which was again parametrized by a Gaussian distribution with a smaller width, equal to $`0.052`$ GeV, reflecting the smallness of the pion mass. The total nucleon structure function is then equal to: $`F(x;Q_0^2)=f_v(x;Q_0^2)+f_s(x;Q_0^2)`$.
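The sampling procedure just described is compact enough to sketch in a few lines. The following is a minimal illustration, not the authors' actual code: $`\sigma _q=0.18`$ GeV is the width quoted in the text, while the nucleon mass, the current quark mass and the simplified acceptance (only $`0<x<1`$, without the remnant constraint) are our own assumptions.

```python
import numpy as np

# Minimal sketch of the valence-parton sampling described above
# (illustrative values; sigma_q is quoted in the text, the rest are
# assumptions made only for this sketch).
M = 0.938          # nucleon mass in its rest frame, GeV (assumed)
sigma_q = 0.18     # width of the primordial Gaussian distribution, GeV
m_q = 0.005        # current quark mass, GeV (assumed)
rng = np.random.default_rng(0)

def sample_valence_x(n_events):
    """Draw light-cone fractions x = k^+/p^+ in the target rest frame."""
    # z-component of the spherically symmetric Gaussian three-momentum
    kz = rng.normal(0.0, sigma_q, size=n_events)
    # energy = current mass plus a Gaussian variation of the same width,
    # so the parton may be off-shell at the scale Q0^2
    k0 = m_q + rng.normal(0.0, sigma_q, size=n_events)
    # the current points along -z, so k^+ = k0 + kz while p^+ = M
    x = (k0 + kz) / M
    # exact kinematics would also require 0 < r^2 < W^2 for the remnant;
    # here we keep only the 0 < x < 1 constraint
    return x[(x > 0.0) & (x < 1.0)]

x_v = sample_valence_x(100_000)
print(f"accepted {x_v.size} events, <x> = {x_v.mean():.3f}")
```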
We shall now apply this model to deep inelastic collisions with nuclei, $`l+A\to l^{\prime }+X`$. As usual in such cases our aim will be the nuclear structure function $`F_2^A(x_A)`$, which shows a characteristic pattern: shadowing at small $`x`$, followed by antishadowing at $`x\approx 0.1÷0.3`$, followed in turn by a very pronounced dip around $`x\approx 0.7`$ and a kind of cumulative effect for $`x>1`$. This subject already has a long history and a vast literature (cf. for a review). Our aim is to see if, and under what conditions, the model proposed in for free nucleons can also be applied to nuclei. In the approach presented here, from the many possibilities mentioned in , we have selected the picture in which the collision proceeds on nucleons bound in nuclei, looking for nuclear partonic distributions as a sum of distributions of bound nucleons. The corresponding nuclear structure function can then be written as a simple convolution of nuclear and nucleonic components:
$$\frac{1}{x_A}F_2^A(x_A)=A\int dy_A\int \frac{dx}{x}\,\delta (x_A-y_Ax)\,\rho ^A(y_A)\,F_2^N(x).$$
(1)
Here $`F_2^N(x)`$ denotes the structure function of a nucleon inside the nucleus and $`\rho ^A(y_A)`$ is the nucleon distribution function in the nucleus. The variables $`x_A/A`$ and $`y_A/A`$ are the corresponding Bjorken variables for the quarks and nucleons in the nucleus, respectively (with nucleonic and nuclear longitudinal momenta given by $`p^+`$ and $`P_A^+`$). The presence of the nucleus is summarised here by the nucleon distribution function $`\rho `$, and the simplest approach one could think of would be to use available information on it and keep the nucleonic structure functions unchanged. For example, in the framework of the relativistic mean field (RMF) theory leptons interact with nucleons of density $`\rho ^A`$, which are moving in constant average scalar $`U_S`$ and vector $`U_V`$ potentials (defined in the rest frame of the nucleus as $`U_S=400\frac{\rho }{\rho _0}`$ MeV and $`U_V=300\frac{\rho }{\rho _0}`$ MeV, in both cases with $`\rho _0=0.17`$ fm<sup>-3</sup>). As a result (cf. ), in the Relativistic Fermi Gas approximation the following form of the nuclear density function is obtained:
$$\rho ^A(y_A)=\frac{4}{\rho }\int \frac{d^4p}{(2\pi )^4}S_N(p^0,𝐩)\left[1+\frac{p_z}{E^{*}(p)}\right]\delta \left(y_A-\frac{p_0+p_z}{\mu }\right).$$
(2)
Here $`\rho `$ is the nuclear density (which is different from the density of infinite nuclear matter $`\rho _0`$); $`S_N=n(p)\delta \left\{p^0-[E^{*}(p)+U_V]\right\}`$ denotes the nucleon spectral function, with $`n(p)`$ being the Fermi distribution of nucleon momenta inside the nucleus, $`p\in (0,p_F)`$. The corresponding Fermi momentum is $`p_F=(\frac{3}{2}\rho \pi ^2)^{1/3}`$. The nucleon chemical potential is $`\mu =M_A/A`$ (which in the RMF can be shown to be $`\mu =E^{*}(p_F)+U_V`$), and $`E^{*}(p)=\sqrt{p^2+m^{*2}}`$ with $`m^{*}=m-U_S`$. After integration, (2) simplifies to
$$\rho ^A(y_A)=\frac{3}{4}\frac{\left[v_A^2-(y_A-1)^2\right]}{v_A^3}.$$
(3)
Here $`v_A=p_F/E_F^{*}`$ and $`y_A`$ takes the values given by the inequality: $`0<(E_F^{*}-p_F)<\mu y_A<(E_F^{*}+p_F)`$. In this way the motion of the nucleon inside the nucleus is parametrized by two parameters: the nuclear density $`\rho `$, which determines the Fermi momentum $`p_F`$, and the nucleon chemical potential $`\mu `$. Of these two, the nucleon chemical potential is essentially constant (and equal to $`\mu =m-8`$ MeV), except for a few very light nuclei. The nuclear density $`\rho `$, due to the finite size of the nucleus, can vary from $`\rho \approx 0.1`$ fm<sup>-3</sup> for light nuclei to $`\rho \approx 0.17`$ fm<sup>-3</sup> for heavy nuclei and nuclear matter. Taking $`\rho \approx 0.12`$ fm<sup>-3</sup> for the average density of nucleons in <sup>56</sup>Fe, with $`p_F=240`$ MeV, and using (3) results in the dashed curve in Fig. 1, which is completely off the data. This means that the RMF approach to deep inelastic scattering on nuclei fails, at least in the form presented here. One could play with the nuclear parameters, for example by increasing the chemical potential $`\mu `$ to make the dip structure around $`x\approx 0.5÷0.7`$ more pronounced and more similar to the experimental data. This would, however, be difficult to justify, and the shadowing/antishadowing structure seen in the data at smaller $`x`$ would remain unexplained. There are then two possibilities: either to keep the nucleon structure functions unchanged and try to change $`\rho `$ by including correlations and/or the cluster structure of nucleons inside the nucleus, or to decide to change the nucleonic structure functions. Because the first approach will essentially not affect the shadowing (or even the antishadowing) region of $`x`$, we have opted for the second possibility. In what follows we shall therefore use, for the nuclear structure function, the nuclear density $`\rho ^A`$ as given by eq. (2) with $`\mu =m-8`$ MeV and with $`U_S=U_V=0`$ (i.e., $`m^{*}=m`$). This is to avoid possible double counting of nuclear interactions, which in our model can be expressed either by the potentials $`U_{V,S}`$ (treated now as free parameters) or by the changes of the free nucleon structure functions discussed below. Because we have decided (as said above) for the latter possibility, we have consistently set the values of the potentials equal to zero. It means that we are considering here a model of a Fermi gas with modified nucleons.
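Two quick checks on this parametrization may be helpful. With $`U_V=0`$ (so that $`\mu =E_F^{*}`$ and the support of (3) is $`|y_A-1|\le v_A`$), the density (3) integrates to one, and the quoted Fermi momentum indeed follows from the average density:

$$\int _{1-v_A}^{1+v_A}\frac{3}{4}\frac{v_A^2-(y_A-1)^2}{v_A^3}\,dy_A=1,\qquad p_F=\left(\frac{3}{2}\pi ^2\rho \right)^{1/3}\hbar c\approx 240\text{ MeV for }\rho =0.12\text{ fm}^{-3}.$$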
Our Monte Carlo calculation for collisions on nuclei has been performed in the same way as in . The only difference was that now the nucleon three-momentum is not fixed but chosen from the distribution $`\rho ^A(y_A)`$ as given by eq. (2), i.e., we account for the Fermi motion of nucleons in the nucleus. This introduces a kinematical medium effect produced by the Lorentz transformation of the parton momenta from the nucleonic to the nuclear rest frame. This Fermi motion of nucleons also affects the invariant mass $`W`$, the four-momentum $`r=p-j`$ and the Bjorken variable $`x=k_+/p_+`$, and in this way influences the calculated structure function $`F_2^B(x,p)`$ of the bound nucleon, making it momentum dependent. Analytically it would mean that one extends our simple convolution formula (1), replacing $`F_2^N(x)`$ by the momentum dependent $`F_2^B(x,p)`$ and integrating over the momentum $`p`$, i.e., instead of (1) one uses the formula
$$F_2^A(x_A)=\frac{4A}{\rho }\int dy_A\int \frac{d^4p}{(2\pi )^4}S_N(p)\left[1+\frac{p_z}{E^{*}(p)}\right]\delta \left(y_A-\frac{p_+}{\mu }\right)F_2^B(\frac{x_A}{y_A},p).$$
(4)
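A few lines suffice to sketch how this Fermi smearing enters in practice. This is again only an illustration of the procedure (not the authors' code), using the parameter values quoted in the text ($`\rho =0.12`$ fm<sup>-3</sup>, $`p_F=240`$ MeV) and $`U_S=U_V=0`$:

```python
import numpy as np

# Rejection-sample nucleon fractions y_A from the parabolic density (3)
# (with U_S = U_V = 0, so m* = m and the support is |y_A - 1| <= v_A);
# a free-nucleon x would then be smeared as x_A = y_A * x, cf. eq. (1).
rng = np.random.default_rng(1)
p_F, m_N = 0.240, 0.938                 # GeV
v_A = p_F / np.sqrt(p_F**2 + m_N**2)    # v_A = p_F / E_F*

def sample_yA(n):
    y = rng.uniform(1.0 - v_A, 1.0 + v_A, size=4 * n)
    rho = 0.75 * (v_A**2 - (y - 1.0)**2) / v_A**3
    # uniform envelope at the peak value rho(y=1) = 0.75 / v_A
    keep = rng.uniform(0.0, 0.75 / v_A, size=y.size) < rho
    return y[keep][:n]

yA = sample_yA(50_000)
print(f"<y_A> = {yA.mean():.4f}, rms spread = {yA.std():.4f}")  # ~1, ~v_A/sqrt(5)
```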
The other modifications mentioned before must be made by suitable changes in the parameters of the original nucleonic structure functions. Because we do not differentiate here between partons of different flavour, there are only three such parameters which can be argued to be affected by the nuclear medium: the widths of the original Gaussians (i.e., the transverse primordial distributions), the same and equal to $`\sigma _q=0.18`$ GeV for both valence quarks (“bare” hadron) and nucleonic pions (sea quarks), and the width of the pionic distribution in the nucleon, equal to $`\sigma _\pi =0.052`$ GeV. It turns out that, in order to get a reasonable description of the data, it is enough to modify only $`\sigma _q`$ (keeping it again the same for both types of quarks) by decreasing it to the value $`\sigma _q=0.165`$ GeV, and to center the pionic energy distribution not at the pion mass $`m_\pi =0.14`$ GeV but at $`m_\pi =0`$. This can be seen in Fig. 1, where we present the results of calculations of the ratio $`R(x)=F_2^A(x)/F_2^D(x)`$ for <sup>56</sup>Fe (performed using the average value of the nuclear density $`\rho =0.12`$ fm<sup>-3</sup>, $`p_F=240`$ MeV). These results are presented by open squares. Such a change in the width of the initial partonic distribution can be naturally explained by the expected swelling of nucleons in nuclear matter . A smaller spread of momenta caused by the internucleonic interactions corresponds, due to the uncertainty relation, to a bigger spread in coordinate space (the “vacuum” in a nucleus is obviously different from that outside the nucleus). Although the idea is similar to what was proposed a long time ago in , the effect of swelling is this time modelled not by changing the bound nucleon mass but by changing the intrinsic motion of partons inside such a nucleon. Concerning the change made in the center of the energy distribution of nucleonic pions, one can argue, following for example , that in nuclear matter such pions, interacting strongly with nuclear matter and coupled to the nuclear collective modes, seem indeed to have vanishing effective mass. In any case such a change is essential in getting agreement with the data. With only a swollen nucleon and without decreasing $`m_\pi `$ (understood as the position of the maximum in the energy distribution of nucleonic pions) one obtains the curve denoted by open diamonds in Fig. 1. The other parameters were left unchanged (in particular, the fraction of momentum carried by the sea quarks is left at $`7.7\%`$ as in ). Contrary to what was presented in , this time we have also reproduced (at least partially) the shadowing/antishadowing region of $`x`$. It has to be admitted here, however, that there is a price for obtaining these results (see open squares in Fig. 1). The momentum sum rule is now violated by $`2\%`$, which in our case means that this fraction of the nucleonic momentum is taken by gluons (in addition to what they already had in free nucleons). If we strictly impose the momentum sum rule (i.e., if we assume that there is no additional transfer of momentum to the gluons), this results in changing the momentum carried by the sea quarks to $`9.7\%`$ and spoils the agreement with the data in the vicinity of the antishadowing region (cf. Fig. 1, open triangles).
To conclude, we have demonstrated that the simple model of partonic distributions in nucleons developed recently in Ref. can also successfully describe partonic distributions in nuclei by: $`(i)`$ using the standard description of nucleonic Fermi motion, $`(ii)`$ changing by a small amount (about $`10\%`$) the Gaussian widths of the initial partonic components, and $`(iii)`$ centering the energy distribution of nucleonic pions at $`m_\pi =0`$ instead of $`0.14`$ GeV. These changes correspond to the valence quark distributions in nuclei being shifted towards lower values of $`x`$ and the sea quark distribution being relatively spread out and shifted towards higher values of $`x`$. Also, nucleons in a nucleus allocate a bit more momentum to the gluonic component. Altogether, by these simple means a good agreement with the data was obtained for the whole range of $`x`$, except at really small values, $`x\lesssim 0.06`$, where such small changes are definitely not sufficient and the convolution model is hardly justifiable . We consider it remarkable that by changing only one parameter describing the primordial diffusiveness of partons (in momentum space) in the free nucleon, and by shifting the pionic distribution in the way suggested by nuclear matter calculations, we were able to describe the experimental data in a fairly reasonable way in a very broad region of $`x`$. Actually, our results are quite robust to changes in the initial values of the parameters describing partonic distributions in the free nucleon (for example, a $`20\%`$ change in the quark distribution in the pion does not change the results presented in Fig. 1). Our analysis is obviously simplified, as we have not differentiated between quark flavours and have accounted for the presence of gluons only via the momentum sum rule. Also, our convolution approach, cf. eq. (1), is not suitable for small values of $`x`$. But already at this level it shows that the model of is highly suitable for application to nuclear partonic distributions because of the simplicity and cogency of the parametrization chosen there.
Acknowledgements:
We are grateful to Prof. L.Łukaszuk for many helpful discussions. This work has been supported in part by the Committee for Scientific Research, grant 2-P03B-048-12.
Figure Captions:
* Results for $`R(x)=F_2^A(x)/F_2^D(x)`$ for $`{}_{}{}^{56}Fe`$. Data are from (full diamonds) and from (full circles). See text for explanations.
# Inclusive η′ production in B decays and the Enhancement due to charged technipions
## Abstract
The new contributions to the charmless B decay $`B\to X_s\eta ^{\prime }`$ from the unit-charged technipions $`P^\pm `$ and $`P_8^\pm `$ are estimated. The technipions can provide a large enhancement to the inclusive branching ratio: $`Br(B\to X_s\eta ^{\prime })\approx 7\times 10^{-4}`$ for $`m_{p1}=100`$ GeV and $`m_{p8}=250`$–$`350`$ GeV, when the effect of the QCD gluon anomaly is also taken into account. The new physics effect is essential to interpret the CLEO data.
PACS numbers: 12.60.Nz, 12.15.Ji, 13.20.Jf
Recently CLEO has reported a very large branching ratio for the inclusive production of $`\eta ^{}`$:
$`Br(B\to \eta ^{\prime }X_s)=(6.2\pm 1.6\pm 1.3)\times 10^{-4},\text{ for }2.0\le E_{\eta ^{\prime }}\le 2.7\text{ GeV}`$ (1)
and a corresponding large exclusive branching ratio
$`Br(B^\pm \to \eta ^{\prime }K^\pm )=(7.1_{-2.1}^{+2.5}\pm 0.9)\times 10^{-5}`$ (2)
where the acceptance cut was used to reduce the background from events with charmed mesons. Using the Standard Model (SM) factorization one finds $`Br(B\to \eta ^{\prime }X_s)\approx (0.5\text{–}2.5)\times 10^{-4}`$ including the experimental cut, with the largest yields corresponding to a fairly limited region of parameter space; this is much smaller than the observed inclusive rate in eq. (1).
Up to now a number of interpretations have been proposed to account for the observed large branching ratio of $`B\to \eta ^{\prime }X_s`$ and/or the exclusive branching fraction $`Br(B^\pm \to \eta ^{\prime }K^\pm )`$. These include: (a) conventional $`b\to sq\overline{q}`$ with constructive interference between the $`u\overline{u}`$, $`d\overline{d}`$ and $`s\overline{s}`$ components of the $`\eta ^{\prime }`$, (b) $`b\to c\overline{c}s`$ decay with an enhanced $`c\overline{c}`$ content of the $`\eta ^{\prime }`$ , (c) $`b\to sg^{}\to sg\eta ^{\prime }`$ from the QCD gluon anomaly, or from both the QCD gluon anomaly with running $`\alpha _s`$ and new physics effects , (d) non-spectator effects .
From the above works, the following major points about the inclusive and exclusive branching ratios $`Br(B\to \eta ^{\prime }X_s)`$ and $`Br(B^\pm \to \eta ^{\prime }K^\pm )`$ can be drawn:
1. The SM factorization can, in principle, account for the exclusive $`\eta ^{\prime }`$ yield without the need for new physics. Although a SM “cocktail” solution for the large inclusive rate $`Br(B\to \eta ^{\prime }X_s)`$, involving contributions from several mechanisms, is still possible, the intervention of new physics in the form of enhanced chromo-magnetic dipole operators provides a simple and elegant solution to the puzzle in question. On the other hand, the short-distance $`b\to sg\eta ^{\prime }`$ subprocess most probably does not affect the exclusive $`B\to \eta ^{\prime }K`$ branching ratios.
2. The observed inclusive branching fraction is larger than what is expected from scenario (a). Furthermore, the data show that the invariant mass spectrum $`M(X_s)`$ of the particles recoiling against the $`\eta ^{\prime }`$ peaks above $`2`$ GeV.
3. The large inclusive rate may be connected to the standard model QCD penguins via the gluon anomaly, which leads to the subprocess $`b\to sg^{}\to sg\eta ^{\prime }`$. Taking a constant $`gg\eta ^{\prime }`$ vertex form factor $`H(0,0,m_{\eta ^{\prime }}^2)`$ , the observed large branching ratio in eq. (1) can be achieved. But as argued by Hou and Tseng , if one considers the running of $`\alpha _s`$ implicit in $`H(0,0,m_{\eta ^{\prime }}^2)`$, the result presented in ref. will be reduced by roughly a factor of 3. In other words, the new physics effect is essential to interpret the observed large inclusive rate.
4. As pointed out by Kagan and Petrov , the $`m_{\eta ^{\prime }}^2/(q^2-m_{\eta ^{\prime }}^2)`$ dependence of the $`gg\eta ^{\prime }`$ coupling should be considered. Including this dependence nominally reduces the former result to $`1.6\times 10^{-5}`$ including the cut, which is significantly smaller than the observed inclusive rate. This fact strengthens the need for new physics.
5. It is possible to enhance the chromo-dipole $`bsg`$ coupling through new physics at the TeV scale without jeopardizing the electro-dipole $`bs\gamma `$ coupling. One explicit example in the framework of the Minimal Supersymmetric Standard Model (MSSM) has been studied in ref. .
In this letter we will show that the unit-charged technipions $`P^\pm `$ and $`P_8^\pm `$, which appear in almost all nonminimal technicolor models, can provide the required enhancement to account for the observed large rate $`Br(B\to \eta ^{\prime }X_s)`$.
In the framework of the SM, the loop-induced effective $`bsg`$ coupling was calculated a long time ago ,
$`\mathrm{\Gamma }_\mu ^{SM}=g_s\frac{G_F}{4\sqrt{2}\pi ^2}V_{is}^{*}V_{ib}\overline{s}T^a\left[F_1^i(q^2\gamma _\mu -q_\mu \not{q})-iF_2^i\sigma _{\mu \nu }q^\nu (m_sL+m_bR)\right]b`$ (3)
where $`g_s`$ is the strong coupling constant, $`V`$ is the CKM matrix, $`T^a=\lambda ^a/2`$ with $`\lambda ^a`$ the Gell-Mann matrices, $`q=p_b-p_s`$, and the charge radius form factors $`F_1^i`$ and the dipole moment form factors $`F_2^i`$ ($`i=u,c,t`$) are
$`F_1^i=\frac{x_i}{12}\left[y_i+13y_i^2-6y_i^3\right]+\left[\frac{2y_i}{3}-\frac{x_i}{6}(4y_i^2+5y_i^3-3y_i^4)\right]\mathrm{ln}[x_i],`$ (4)
$`F_2^i=\frac{x_i}{4}\left[y_i+3y_i^2+6y_i^3\right]+\frac{3x_i^2y_i^3}{2}\mathrm{ln}[x_i]`$ (5)
where $`x_i=m_i^2/M_W^2`$ and $`y_i=1/(x_i-1)`$ for $`i=u,c,t`$.
In the framework of technicolor theory, the new effective $`bsg`$ coupling can be derived by replacing the internal W lines in the one-loop diagrams that induce the $`bsg`$ coupling in the SM with charged technipion lines. Using the gauge and effective Yukawa couplings as given in refs. , one finds the new effective $`bsg`$ coupling induced by $`P^\pm `$ and $`P_8^\pm `$:
$`\mathrm{\Gamma }_\mu ^{New}=g_s\frac{G_F}{4\sqrt{2}\pi ^2}V_{is}^{*}V_{ib}\overline{s}T^a\left[F_1^{(i,New)}(q^2\gamma _\mu -q_\mu \not{q})-iF_2^{(i,New)}\sigma _{\mu \nu }q^\nu (m_sL+m_bR)\right]b`$ (6)
with
$`F_1^{New}(\xi _i,\eta _i)=\frac{D^{\prime }(\xi _i)}{3\sqrt{2}G_FF_\pi ^2}+\frac{8D^{\prime }(\eta _i)}{3\sqrt{2}G_FF_\pi ^2}`$ (7)
$`F_2^{New}(\xi _i,\eta _i)=-\left[\frac{D(\xi _i)}{3\sqrt{2}G_FF_\pi ^2}+\frac{8D(\eta _i)+E(\eta _i)}{3\sqrt{2}G_FF_\pi ^2}\right]`$ (8)
and
$`D(\xi )=\frac{5+19\xi -20\xi ^2}{24(1-\xi )^3}+\frac{4\xi ^2-2\xi ^3}{4(1-\xi )^4}\mathrm{ln}[\xi ]`$ (9)
$`D^{\prime }(\xi )=\frac{7-29\xi +16\xi ^2}{72(1-\xi )^3}-\frac{3\xi ^2-2\xi ^3}{12(1-\xi )^4}\mathrm{ln}[\xi ]`$ (10)
$`E(\eta )=\frac{12-15\eta -5\eta ^2}{8(1-\eta )^3}+\frac{9\eta -18\eta ^2}{4(1-\eta )^4}\mathrm{ln}[\eta ]`$ (11)
where $`\xi =m_{p1}^2/m_t^2`$ and $`\eta =m_{p8}^2/m_t^2`$, and $`m_{p1}`$ and $`m_{p8}`$ denote the masses of the color-singlet and color-octet technipions $`P^\pm `$ and $`P_8^\pm `$, respectively. The technipion decay constant is $`F_\pi =123`$ GeV in the One-Generation Technicolor Model (OGTM) . $`G_F`$ is the Fermi coupling constant, $`G_F=1.16639\times 10^{-5}\text{ GeV}^{-2}`$.
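For orientation, the mass dependence of $`F_2^{New}`$ can be traced numerically directly from eqs. (8)-(11). The sketch below uses the loop functions exactly as transcribed above (the overall signs are those of our transcription and should be checked against the original preprint, since they fix the interference with the SM term), together with $`m_t=180`$ GeV and $`F_\pi =123`$ GeV from the text:

```python
import numpy as np

# Evaluate F_2^New of eq. (8) from the loop functions (9) and (11) as
# transcribed above; signs follow our transcription, not a verified source.
G_F, F_pi, m_t = 1.16639e-5, 123.0, 180.0   # GeV units

def D(x):
    return (5 + 19*x - 20*x**2) / (24*(1 - x)**3) \
        + (4*x**2 - 2*x**3) / (4*(1 - x)**4) * np.log(x)

def E(x):
    return (12 - 15*x - 5*x**2) / (8*(1 - x)**3) \
        + (9*x - 18*x**2) / (4*(1 - x)**4) * np.log(x)

def F2_new(m_p1, m_p8):
    xi, eta = (m_p1/m_t)**2, (m_p8/m_t)**2
    return -(D(xi) + 8*D(eta) + E(eta)) / (3*np.sqrt(2)*G_F*F_pi**2)

for m8 in (250.0, 400.0, 600.0):
    print(f"m_p8 = {m8:5.0f} GeV:  F2_new = {F2_new(100.0, m8):+.2f}")
```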
Comparing the effective $`bsg`$ coupling $`\mathrm{\Gamma }_\mu ^{New}`$ in eq.(6) with the $`\mathrm{\Gamma }_\mu ^{SM}`$ in eq.(3), one can see that the form factors $`F_1^{(i,New)}`$ and $`F_2^{(i,New)}`$ are the counterparts of the $`F_1^i`$ and $`F_2^i`$ in the SM. The new form factors $`F_1^{(i,New)}`$ and $`F_2^{(i,New)}`$ describe the contributions to the decay $`bsg`$ from the charged technipions $`P^\pm `$ and $`P_8^\pm `$.
In the numerical calculation we use the branching ratio formula for $`B\to \eta ^{\prime }+X_s`$ with the gluon anomaly as given in ref. ,
$`\frac{d^2Br(b\to \eta ^{\prime }sg)}{dxdy}=0.2\left[\frac{g_s(m_b)}{4\pi ^2}\right]^2\frac{a_g^2m_b^2}{4}\left[|\mathrm{\Delta }F_1|^2c_0+Re(\mathrm{\Delta }F_1F_2^{*})\frac{c_1}{y}+|F_2|^2\frac{c_2}{y^2}\right]`$ (12)
where the $`0.2`$ comes from $`(V_{cb}^2G_F^2m_b^5)/(192\pi ^3)\approx 0.2\mathrm{\Gamma }_B`$ via the standard trick of relating to $`B_{s.l.}`$ (see ref. ). The factors $`c_0,c_1`$ and $`c_2`$ in eq. (12) are
$`c_0=\left[2x^2y+(1-y)(y-x^{\prime })(2x+y-x^{\prime })\right]/2,`$
$`c_1=(1-y)(y-x^{\prime })^2,`$
$`c_2=\left[2x^2y^2-(1-y)(y-x^{\prime })(2xy-y+x^{\prime })\right]/2`$ (13)
where $`x=m^2/m_b^2`$, with $`m`$ the physical recoil mass against the $`\eta ^{\prime }`$ meson, $`y=q^2/m_b^2`$ with $`q=p_b-p_s`$, and $`x^{\prime }=m_{\eta ^{\prime }}^2/m_b^2`$. The term $`\mathrm{\Delta }F_1`$ is defined as $`\mathrm{\Delta }F_1=F_1(x_t)-F_1(x_c)`$. The factor $`a_g=\sqrt{N_f}\alpha _s(\mu )/(\pi f_{\eta ^{\prime }})`$ is the effective anomaly coupling $`H(q^2,k^2,m_{\eta ^{\prime }}^2)`$ as defined in ref. , and $`f_{\eta ^{\prime }}=131`$ MeV. For the running of $`\alpha _s`$ we use the two-loop approximation as given, for instance, in ref. .
Fig. 1 shows the mass dependence of the form factors in the SM and in the OGTM. The dot-dashed line corresponds to $`\mathrm{\Delta }F_1=5.25`$ in the SM, while the long-dashed line shows $`\mathrm{\Delta }F_1^{New}=F_1^{New}(\xi _t,\eta _t)-F_1^{New}(\xi _c,\eta _c)\approx 4.6`$ in the OGTM, assuming $`m_{p1}=100`$ GeV and $`m_{p8}=250`$–$`600`$ GeV. The short-dashed line is $`F_2=0.2`$ in the SM for $`m_t=180`$ GeV and $`m_W=80.2`$ GeV, while the solid curve is $`F_2^{New}`$ in the OGTM, assuming $`m_{p1}=100`$ GeV and $`m_{p8}=250`$–$`600`$ GeV. It is easy to see that the size of $`F_2^{New}`$ can be much larger than that of $`F_2`$ in the SM for a light color-octet technipion. Furthermore, the $`P_8^\pm `$ dominates the total contribution to $`F_1^{New}`$ and $`F_2^{New}`$.
Because we do not know the “correct” form of the $`gg\eta ^{\prime }`$ vertex form factor $`H(q^2,k^2,m_{\eta ^{\prime }}^2)`$, we consider the following two cases separately.
Case-1: We consider the effect due to the running of $`\alpha _s`$ as well as the new contribution from the charged technipions.
After the inclusion of the running of $`\alpha _s`$ one finds $`Br(B\to \eta ^{\prime }X_s)\approx 3.4\times 10^{-4}`$ including the cut, as shown in Fig. 2 (the dot-dashed line). The horizontal band in Fig. 2 represents the CLEO data in eq. (1). The long-dashed curve corresponds to the total inclusive branching ratio $`Br(B\to \eta ^{\prime }X_s)`$ when the new physics effects are also included. Numerically, $`Br(B\to \eta ^{\prime }X_s)=(48.9\text{–}5.7)\times 10^{-4}`$ for $`m_{p1}=100`$ GeV and $`m_{p8}=(250\text{–}600)`$ GeV. The theoretical prediction is well consistent with the CLEO data for $`m_{p8}\gtrsim 350`$ GeV. The color-octet technipion $`P_8^\pm `$ dominates the total new contribution: the increase due to the $`P^\pm `$ is only about $`10\%`$ at the level of the corresponding branching ratio.
Case-2: We consider the effect of the $`m_{\eta ^{\prime }}^2/(q^2-m_{\eta ^{\prime }}^2)`$ suppression and the new physics contribution from $`P^\pm `$ and $`P_8^\pm `$.
When the new suppression factor $`m_{\eta ^{\prime }}^2/(q^2-m_{\eta ^{\prime }}^2)`$ is taken into account one finds $`Br(B\to \eta ^{\prime }X_s)=2.3\times 10^{-5}`$ including the cut, as shown by the short-dashed line in Fig. 2, which is much smaller than the CLEO measurement. When the new contributions from the charged technipions are included, the inclusive branching ratio $`Br(B\to \eta ^{\prime }X_s)`$ can be greatly enhanced, as illustrated by the solid curve in Fig. 2. Numerically, $`Br(B\to \eta ^{\prime }X_s)=(15.2\text{–}0.7)\times 10^{-4}`$ for $`m_{p1}=100`$ GeV and $`m_{p8}=(250\text{–}600)`$ GeV. The theoretical prediction is consistent with the CLEO data for $`m_{p8}\approx 280`$ GeV. The new physics effect is essential to interpret the CLEO data in Case-2. Again, the color-octet technipion $`P_8^\pm `$ dominates the total contribution, as in Case-1.
In this letter we have shown a concrete example in which the observed large rate $`Br(B\to \eta ^{\prime }X_s)`$ can be explained by the new physics contributions from the unit-charged technipions $`P^\pm `$ and $`P_8^\pm `$. Because the major properties of the technipions in different technicolor models are generally very similar, the analytical and numerical results obtained in this letter are representative and can easily be extended to other technicolor models.
In this letter, we first evaluated the new one-loop penguin diagrams with internal $`P^\pm `$ and $`P_8^\pm `$ lines and obtained the new form factors $`F_1^{New}(\xi _i,\eta _i)`$ and $`F_2^{New}(\xi _i,\eta _i)`$ which describe the new physics contributions to the decay in question. The size of $`F_2^{New}`$ can be rather large for relatively light charged technipions. Second, we combined the new form factors $`F_i^{New}`$ ($`i=1,2`$) with their counterparts $`F_1`$ and $`F_2`$ in the SM and used them in the numerical calculation. We finally calculated the inclusive branching ratios for both Case-1 and Case-2. As illustrated in Fig. 2, the unit-charged technipions can provide a large enhancement, accounting for the large rate $`Br(B\to \eta ^{\prime }X_s)`$ observed by CLEO.
The authors would like to thank Dongsheng Du, Kuangta Chao and Yadong Yang for helpful discussions. This work is supported by the National Natural Science Foundation of China under Grant No.19575015 and by the funds from Henan Scientific Committee.
Figure Captions
The dot-dashed line shows $`\mathrm{\Delta }F_1=5.25`$ in the SM, the long-dashed line corresponds to $`\mathrm{\Delta }F_1^{New}`$, the short-dashed line is $`F_2^{SM}=0.2`$, and the solid curve is $`F_2^{New}`$ induced by the unit-charged technipions.
The horizontal solid band represents the CLEO data $`Br(B\to \eta ^{\prime }X_s)=(6.3\pm 2.1)\times 10^{-4}`$. The dot-dashed (short-dashed) line is the SM prediction in Case-1 (Case-2), while the long-dashed and solid lines show the total inclusive branching ratio $`Br(B\to \eta ^{\prime }X_s)`$ when the new physics effects are included in Case-1 and Case-2, respectively.
# The Spectrum and Isometric Embeddings of Surfaces of Revolution
## 1. Introduction
The problem of isometrically embedding $`(S^2,g)`$ in $`(\mathbb{R}^3,can)`$ has a long history which goes back at least as far as 1916. In that year Weyl , and in the years since Nirenberg , Heinz , Alexandrov , and Pogorelov , to name a few, proved embedding theorems of various orders of differentiability in the case where the Gauss curvature is positive. A recent result of Hong and Zuily addresses the case of non-negative curvature. But, of course, not every metric on $`S^2`$ admits such an isometric embedding. The reader may refer to Greene , wherein one finds examples of smooth metrics on $`S^2`$ for which there is no $`C^2`$ isometric embedding in $`(\mathbb{R}^3,can)`$.
In the presence of examples such as Greene’s, one might naturally ask if there exist intrinsic geometric conditions on metrics which obstruct such isometric embeddings. Inasmuch as the above mentioned embedding theorems require, at least, non-negativity of the Gauss curvature, one must look for embedding obstructed metrics among those with some negative curvature. Of course, having some negative curvature is not enough, but one might hope that some stronger condition, associated with the existence of negative curvature at a point, might satisfy our requirements. The purpose of this paper is to provide, in a special case, conditions on the spectrum of the Riemannian manifold which are intrinsic obstructions to the above isometric embedding problem.
It is no surprise that the spectrum might make an appearance in this subject. There is an extensive literature which associates the spectrum with (more generally) isometric immersions (see, for example, or , among others). Much of this work relates the spectrum to the mean curvature, and is associated with the Willmore conjecture. By way of comparison, the embedding problem of this paper is almost trivial, but it is the new and more intrinsic relation between the spectrum and embeddability which we hope the reader will find interesting.
In the case of $`S^1`$ invariant metrics on $`S^2`$ (i.e. surfaces of revolution), one can prove that, while the first eigenvalue must be bounded above by $`8\pi /\mathrm{area}`$ (Hersch’s theorem ), the first $`S^1`$ invariant eigenvalue can be arbitrarily large. At the same time, however, there is an upper bound, depending on the metric, for the first $`S^1`$ invariant eigenvalue. We will prove that it is this upper bound which, upon exceeding a certain critical value, becomes an obstruction to isometric embedding into $`(\mathbb{R}^3,can)`$ (necessarily, the same condition ensures that there is some negative curvature). As a result, if the first $`S^1`$ invariant eigenvalue becomes too large, then the surface cannot be isometrically embedded in $`(\mathbb{R}^3,can)`$. (See also Abreu and Freitas .)
Another characteristic of the spectrum of a surface of revolution is that the eigenspaces are even dimensional unless the eigenvalue happens to correspond to an $`S^1`$ invariant eigenvalue. As a result, one way of increasing the first $`S^1`$ invariant eigenvalue is to insist that the multiplicities be even up to a certain point. This leads to a result, proved in Section 5, that even multiplicity for the first 4 distinct eigenvalues is an obstruction to isometric embeddability.
In the last section, we will remark on how these results give a generalization of a well known corollary of the Gauss-Bonnet theorem regarding the existence of points with negative curvature.
The author is indebted to Andrew Hwang for a brief but enlightening conversation about momentum coordinates.
## 2. Isometric embeddings and Momentum Coordinates
First, we will discuss a formulation of the condition on the metric which ensures that an $`S^1`$ invariant metric on $`S^2`$ may be isometrically embedded into $`(\mathbb{R}^3,can)`$. This condition is well known and quite elementary. The reader will find our treatment to be essentially equivalent to that of Besse , p. 95-105.
Let $`(M,g)`$ be an $`S^1`$ invariant Riemannian manifold which is diffeomorphic to $`S^2`$. We will assume the metric to be $`C^{\infty }`$. Since $`(M,g)`$ has an effective $`S^1`$ isometry group, there are exactly two fixed points. We call the fixed points $`np`$ and $`sp`$ and let $`U`$ be the chart $`M\setminus \{np,sp\}`$. On $`U`$ the metric has the form $`ds\otimes ds+a^2(s)d\theta \otimes d\theta `$, where $`s`$ is the arclength along a geodesic connecting $`np`$ to $`sp`$ and $`a(s)`$ is a function $`a:[0,L]\to \mathbb{R}^+`$ satisfying $`a(0)=a(L)=0`$ and $`a^{\prime }(0)=1=-a^{\prime }(L)`$.
It is easy to see that isometric embeddings of such metrics can be parametrized as follows:
(2.1)
$$\{\begin{array}{ccc}\psi ^1& =& a(s)\mathrm{cos}\theta \hfill \\ \psi ^2& =& a(s)\mathrm{sin}\theta \hfill \\ \psi ^3& =& \pm \int ^s\sqrt{1-(a^{\prime })^2(t)}\,dt\hfill \end{array}$$
It is evident from this formula that $`(M,g)`$ can be isometrically $`C^1`$ embedded in $`(\mathbb{R}^3,can)`$ if and only if
(2.2)
$$|a^{\prime }(s)|\le 1\quad \text{for all}\quad s\in [0,L].$$
We will find it convenient to make a change of variables to so-called momentum coordinates (see Hwang and Singer ). These are given by a diffeomorphism $`(s,\theta )\mapsto (x,\theta )`$, where $`x\equiv \varphi :[0,L]\to [-1,1]`$ is defined by:
(2.3)
$$x\equiv \varphi (s)\equiv \int _c^sa(t)\,dt.$$
If we let $`f(x)\equiv (a^2\circ \varphi ^{-1})(x)`$, then in the new coordinates the metric on the chart $`U`$ takes the form
(2.4)
$$g=\frac{1}{f(x)}dx\otimes dx+f(x)d\theta \otimes d\theta $$
where $`(x,\theta )\in (-1,1)\times [0,2\pi )`$. In these coordinates the conditions at the endpoints translate to $`f(-1)=0=f(1)`$ and $`f^{\prime }(-1)=2=-f^{\prime }(1)`$. In this form, it is easy to see that this metric has area $`4\pi `$ and that its Gauss curvature is given by $`K(x)=-(1/2)f^{\prime \prime }(x)`$. It is also worth observing that the function $`f(x)`$ is the square of the length of the Killing field (infinitesimal isometry) $`\partial /\partial \theta `$ on the chart $`U`$. The canonical (i.e. constant curvature) metric is obtained by taking $`f(x)=1-x^2`$.
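For instance, for the canonical choice $`f(x)=1-x^2`$ these formulas give

$$K(x)=-\frac{1}{2}f^{\prime \prime }(x)=1,\qquad \mathrm{area}=\int _0^{2\pi }\int _{-1}^1\sqrt{\frac{1}{f}\cdot f}\;dx\,d\theta =4\pi ,\qquad |f^{\prime }(x)|=|-2x|\le 2,$$

so the round sphere has constant curvature one, area $`4\pi `$, and (anticipating Proposition 2.1 below) admits the isometric embedding.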
Now using (2.2) and the definition for $`f`$ it is a simple exercise from calculus to prove:
###### Proposition 2.1.
Let $`(M,g)`$, with metric $`g`$ as in (2.4), be diffeomorphic to $`S^2`$. $`(M,g)`$ can be isometrically $`C^1`$ embedded in $`(\mathbb{R}^3,can)`$ if and only if $`|f^{\prime }(x)|\le 2`$ for all $`x\in [-1,1]`$. ∎
We will end this section with the comment that it is also very easy to see that (in our special case) non-negative curvature implies isometric embeddability, since $`K(x)\ge 0`$ implies that $`f^{\prime }(x)`$ is a non-increasing function on $`[-1,1]`$ with maximum $`2`$ and minimum $`-2`$. One can find essentially the same comment on p. 106 of .
## 3. Some properties of the spectrum
In the interest of presenting a self-contained exposition we will review some of the relevant facts about the spectrum (eigenvalues) of a surface of revolution in this section. The interested reader may consult , and for further details.
Let $`\mathrm{\Delta }`$ denote the scalar Laplacian on a surface of revolution $`(M,g)`$, where $`g`$ is given by (2.4), and let $`\lambda `$ be any eigenvalue of $`\mathrm{\Delta }`$. We will use the symbols $`E_\lambda `$ and $`dimE_\lambda `$ to denote the eigenspace for $`\lambda `$ and its multiplicity, respectively. In this paper the symbol $`\lambda _m`$ will always mean the $`m`$-th distinct eigenvalue. We adopt the convention $`\lambda _0=0`$. Since $`S^1`$ (parametrized here by $`0\le \theta <2\pi `$) acts on $`(M,g)`$ by isometries, and because $`dimE_{\lambda _m}\le 2m+1`$ (see for the proof), the orthogonal decomposition of $`E_{\lambda _m}`$ has the special form
$$E_{\lambda _m}=\bigoplus _{k=-m}^{k=m}e^{ik\theta }W_k$$
in which $`W_k(=W_{-k})`$ is the “eigenspace” (it might contain only $`0`$) of the ordinary differential operator
$$L_k=-\frac{d}{dx}\left(f(x)\frac{d}{dx}\right)+\frac{k^2}{f(x)}$$
with suitable boundary conditions. It should be observed that $`dimW_k\le 1`$, a value of zero for this dimension occurring when $`\lambda _m\notin SpecL_k`$.
It is easy to see that $`Spec(\mathrm{\Delta })=\bigcup _kSpecL_k`$; consequently the spectrum of $`\mathrm{\Delta }`$ can be studied via the spectra $`SpecL_k=\{0<\lambda _k^1<\lambda _k^2<\mathrm{}<\lambda _k^j<\mathrm{}\}`$ for each $`k`$. The eigenvalues $`\lambda _0^j`$ in the case $`k=0`$ above are called the $`S^1`$ invariant eigenvalues, since their eigenfunctions are invariant under the $`S^1`$ isometry group. If $`k\ne 0`$ the eigenvalues are called $`k`$-equivariant or simply of type $`k\ne 0`$. Each $`L_k`$ has a Green’s operator, $`\mathrm{\Gamma }_k:(H^0(M))^{\perp }\to L^2(M)`$, whose spectrum is $`\{1/\lambda _k^j\}_{j=1}^{\infty }`$, and whose trace is defined by $`tr\mathrm{\Gamma }_k\equiv \sum _j1/\lambda _k^j`$.
###### Proposition 3.1 (See and ).
With the notations as above:
$$tr\mathrm{\Gamma }_k=\{\begin{array}{cc}\frac{1}{2}\int _{-1}^1\frac{1-x^2}{f(x)}dx& \text{if }k=0\hfill \\ \frac{1}{|k|}& \text{if }k\ne 0\hfill \end{array}.$$
If $`area(M,g)=4\pi `$ and $`\sum _{j=1}^{\infty }\frac{1}{\lambda _0^j}\le \frac{\pi ^2}{16}`$ then there exist points $`p\in M`$ such that $`K(p)<0`$.
For all $`k`$ and $`j`$, $`\lambda _{-k}^j=\lambda _k^j`$.
For all $`k\ge 1`$ and $`j\ge 0`$, $`\lambda _{k+j}\le \lambda _k^{j+1}`$; and $`\lambda _1\le \lambda _0^1`$.
$`dimE_{\lambda _m}`$ is odd if and only if $`\lambda _m`$ is an $`S^1`$ invariant eigenvalue. ∎
###### Remarks.
1.) One must be careful with the definition of $`tr\mathrm{\Gamma }_0`$, since $`\lambda _0=0\in SpecL_0`$. To avoid this difficulty we studied the $`S^1`$ invariant spectrum of the Laplacian on 1-forms in and then observed that the non-zero eigenvalues are the same for functions and 1-forms.
2.) A slight modification of the proof of Proposition 3.1 ii.) (in ) reveals that $`\sum _{j=1}^{\infty }\frac{1}{\lambda _0^j}\le \frac{\pi ^2}{16}`$ implies that $`(M,g)`$ cannot be isometrically embedded in $`(\mathbb{R}^3,can)`$. The reader will find that the results of this paper are an improvement on this idea.
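As a sanity check of i), for the round metric $`f(x)=1-x^2`$ one has $`\lambda _0^j=j(j+1)`$ (the Legendre eigenvalues), and both sides of the trace formula agree by a telescoping sum:

$$\frac{1}{2}\int _{-1}^1\frac{1-x^2}{1-x^2}dx=1=\sum _{j=1}^{\infty }\frac{1}{j(j+1)},\qquad \sum _{l=|k|}^{\infty }\frac{1}{l(l+1)}=\frac{1}{|k|}\text{ for }k\ne 0,$$

the second sum reproducing the stated value of $`tr\mathrm{\Gamma }_k`$ since $`SpecL_k`$ on the round sphere consists of the eigenvalues $`l(l+1)`$ with $`l\ge |k|`$.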
## 4. A sharp upper bound for the first eigenvalue
In we derived sharp upper bounds for all of the distinct eigenvalues of a surface of revolution diffeomorphic to $`S^2`$. Those estimates were obtained using the $`k`$-type eigenvalues for $`k\ne 0`$. In this section we will obtain a sharp bound for $`\lambda _1`$ using the $`S^1`$ invariant spectrum. In contrast with the more general result of Hersch , the reader will find that this bound exhibits, more explicitly, its dependence on the metric. This fact will play an important rôle in embedding problems.
###### Proposition 4.1.
Let $`(M,g)`$ be an $`S^1`$ invariant Riemannian manifold of area $`4\pi `$ which is diffeomorphic to $`S^2`$, with metric (2.4). Let $`\lambda _0^1`$ be the first non-zero $`S^1`$ invariant eigenvalue for this metric, then
$$\lambda _0^1\le \frac{3}{2}\int _{-1}^1f(x)\,dx$$
and equality holds if and only if $`(M,g)`$ is isometric to $`(S^2,can)`$.
###### Proof.
The minimum principle associated with the first $`S^1`$ invariant eigenvalue problem,
(4.1)
$$L_0u=-\frac{d}{dx}\left(f(x)\frac{du}{dx}\right)=\lambda _0^1u,$$
states that
(4.2)
$$\lambda _0^1\le \frac{\int _{-1}^1f(x)(\frac{du}{dx})^2dx}{\int _{-1}^1u^2dx}$$
for all $`S^1`$ invariant functions $`u\in C^{\infty }(M)`$ with $`u\perp \mathrm{ker}L_0`$. Equality holds if and only if $`u`$ is an eigenfunction for $`\lambda _0^1`$. Since $`\mathrm{ker}L_0`$ consists of constant functions and $`\int _{-1}^1x\cdot 1\,dx=0`$, we see that $`u(x)=x`$ is an admissible solution of (4.2), and therefore $`\lambda _0^1\le \frac{3}{2}\int _{-1}^1f(x)\,dx`$. Equality holds if and only if $`u(x)=x`$ is the first $`S^1`$ invariant eigenfunction. In that case, upon substitution of $`u(x)=x`$ into (4.1) we obtain the equivalent equation $`f^{\prime }(x)=-\lambda _0^1x`$. Recalling that $`f(x)`$ and $`f^{\prime }(x)`$ must satisfy certain boundary conditions forces $`\lambda _0^1=2`$ and yields the unique solution $`f(x)=1-x^2`$. In other words, $`g=can`$. ∎
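Explicitly, the Rayleigh quotient of the test function $`u(x)=x`$ used above is

$$\frac{\int _{-1}^1f(x)\cdot 1^2\,dx}{\int _{-1}^1x^2\,dx}=\frac{3}{2}\int _{-1}^1f(x)\,dx,\qquad \text{since }\int _{-1}^1x^2\,dx=\frac{2}{3}.$$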
Because of Proposition 3.1 iv.), we have the immediate corollary:
###### Corollary 4.2.
Let $`(M,g)`$ be an $`S^1`$ invariant Riemannian manifold of area $`4\pi `$ which is diffeomorphic to $`S^2`$, with metric (2.4). Let $`\lambda _1`$ be the first, non-zero, distinct eigenvalue for this metric, then
$$\lambda _1\le \frac{3}{2}\int _{-1}^1f(x)\,dx$$
and equality holds if and only if $`(M,g)`$ is isometric to $`(S^2,can)`$. ∎
## 5. Spectral obstructions to isometric embeddings
In we used the trace formula of Proposition 3.1 i.) to show that there exist surfaces of revolution with arbitrarily large first $`S^1`$ invariant eigenvalue. This fact, together with Proposition 4.1 of the last section, shows that as $`\lambda _0^1`$ increases so does the integral $`\int _{-1}^1f(x)\,dx`$. This fact is the key to the results of this section, but first we will prove a lemma which gives lower bounds for our eigenvalues.
###### Lemma 5.1.
Let $`f(x)`$ and $`\lambda _k^m`$ be defined as above; then for all $`m`$
$$\lambda _k^m>\{\begin{array}{cc}2m\left[\int _{-1}^1\frac{1-x^2}{f(x)}dx\right]^{-1}& \text{if }k=0\hfill \\ m|k|& \text{if }k\ne 0\hfill \end{array}.$$
###### Proof.
From Proposition 3.1 i.), $`\frac{1}{2}\int _{-1}^1\frac{1-x^2}{f(x)}dx=\sum _{j=1}^{\infty }\frac{1}{\lambda _0^j}`$ and $`\frac{1}{|k|}=\sum _{j=1}^{\infty }\frac{1}{\lambda _k^j}`$. Each of the sequences $`\left\{\lambda _k^j\right\}_{j=1}^{\infty }`$ is positive and strictly increasing, so by truncating the above series after $`m`$ terms and then replacing each term with the smallest one we obtain
$$\frac{1}{2}\int _{-1}^1\frac{1-x^2}{f(x)}dx>\frac{m}{\lambda _0^m}\quad \text{and}\quad \frac{1}{|k|}>\frac{m}{\lambda _k^m}.$$
This produces the desired inequalities. ∎
As was observed in , the $`k=0`$, $`m=1`$ case of this inequality, together with the minimal restrictions on the function $`f`$ is enough to ensure that there exist surfaces of revolution with arbitrarily large $`\lambda _0^1`$. Because of this, we can be confident that the next two results are non-vacuous.
###### Proposition 5.2.
Let $`(M,g)`$ be an $`S^1`$ invariant Riemannian manifold of area $`4\pi `$ which is diffeomorphic to $`S^2`$, and let $`\lambda _0^1`$ be its first non-zero $`S^1`$ invariant eigenvalue. If $`\lambda _0^1>3`$ then $`(M,g)`$ cannot be isometrically $`C^1`$ embedded in $`(\mathbb{R}^3,can)`$.
###### Proof.
By Proposition 4.1, since $`\lambda _0^1>3`$, we have $`\int _{-1}^1f(x)\,dx>2`$. Upon integrating by parts we find $`\int _{-1}^1xf^{\prime }(x)\,dx<-2`$, so that
$$2<\left|\int _{-1}^1xf^{\prime }(x)\,dx\right|\le \int _{-1}^1|x||f^{\prime }(x)|\,dx\le \underset{x\in [-1,1]}{\mathrm{max}}|f^{\prime }(x)|.$$
So there exists $`x_0\in [-1,1]`$ with $`|f^{\prime }(x_0)|>2`$, thus, by Proposition 2.1, precluding the possibility of an isometric embedding. ∎
Since non-embeddable metrics have some negative curvature, we have the immediate corollary:
###### Corollary 5.3.
Let $`(M,g)`$ be an $`S^1`$ invariant Riemannian manifold of area $`4\pi `$ which is diffeomorphic to $`S^2`$, let $`K`$ be its Gauss curvature, and let $`\lambda _0^1`$ be its first $`S^1`$ invariant eigenvalue. If $`\lambda _0^1>3`$ then there exists a point $`p\in M`$ such that $`K(p)<0`$. ∎
###### Remarks.
Rafe Mazzeo and Steve Zelditch have brought to our attention a recent result of Abreu and Freitas , which is a significant improvement on Proposition 5.2. They prove, with the same hypotheses as Proposition 5.2 and using the notation of this paper, that for metrics isometrically embedded in $`(\mathbb{R}^3,can)`$, $`\lambda _0^j<\xi _j^2/2`$ for all $`j`$, where $`\xi _j`$ is a positive zero of a certain Bessel function or its derivative. In particular, $`\lambda _0^1<\xi _1^2/2\approx 2.89`$. We have left Proposition 5.2 in the paper since its proof is so easy, and because the eigenvalue bound contained therein is sufficient for proving the main theorem (Theorem 5.5) below.
As we allow the first $`S^1`$ invariant eigenvalue to increase one might suspect that, so to speak, some small eigenvalues with even multiplicity are “left behind”. This suggests that we might find an obstruction to embedding if the first few eigenvalues have even multiplicities. We will soon see that even multiplicities for the first four eigenvalues will constitute such an obstruction, but first it would be a good idea to know if metrics with this property exist. This is the subject of:
###### Theorem 5.4.
There exist metrics on $`S^2`$ whose first four distinct non-zero eigenvalues have even multiplicity.
###### Proof.
To prove this theorem we will find an $`S^1`$ invariant metric of area $`4\pi `$ with this property.
By Proposition 3.1 v.), $`dimE_{\lambda _m}`$ is even if and only if $`\lambda _m`$ is not an $`S^1`$ invariant eigenvalue, i.e. if and only if $`\lambda _m\ne \lambda _0^j`$ for any $`j`$. It is now clear that the first four multiplicities are even if and only if $`\lambda _4<\lambda _0^1`$, and, by Proposition 3.1 iv.), this will occur if our metric satisfies $`\lambda _4^1<\lambda _0^1`$. Using a variational principle, as in , for the operator $`L_4`$, we obtain the upper bound:
$$\lambda _4^1\le \frac{\int _{-1}^1\left[f(x)\left(\frac{du}{dx}\right)^2+\frac{4^2}{f(x)}u^2\right]dx}{\int _{-1}^1u^2dx}$$
for all $`u\in C^{\infty }(-1,1)`$ such that $`u(-1)=u(1)=0`$.
Comparing this upper bound with the lower bound on $`\lambda _0^1`$ provided by Lemma 5.1, the proof of this theorem may now be reduced to finding a function $`f`$ and a suitable test function $`u`$ such that
(5.1)
$$\frac{\int _{-1}^1\left[f(x)\left(\frac{du}{dx}\right)^2+\frac{16}{f(x)}u^2\right]dx}{\int _{-1}^1u^2dx}<2\left[\int _{-1}^1\frac{1-x^2}{f(x)}dx\right]^{-1}$$
We claim that $`f(x)=\frac{10(1-x^2)}{1+9x^{36}}`$ and $`u(x)=\sqrt{1-x^2}`$ satisfy the inequality (5.1).
It is not difficult to see that $`2\left[\int _{-1}^1\frac{1-x^2}{f(x)}dx\right]^{-1}=\frac{185}{23}>8`$ for this choice of $`f(x)`$. So the right hand side of (5.1) is greater than 8. Calculating the left hand side of (5.1) for this choice of $`f(x)`$ and $`u(x)`$ yields:
$`\frac{\int _{-1}^1\left[f(x)\left(\frac{du}{dx}\right)^2+\frac{16}{f(x)}u^2\right]dx}{\int _{-1}^1u^2dx}=\frac{3}{4}\left[10\int _{-1}^1\frac{x^2}{1+9x^{36}}dx+\frac{8}{5}\int _{-1}^1(1+9x^{36})dx\right]`$
$`<\frac{3}{4}\left[10\cdot \frac{2}{3}+\frac{16}{5}\cdot \frac{46}{37}\right]=\frac{1477}{185}<8,`$
where the first integral in brackets has been estimated in the obvious way (its denominator is bounded below by 1, and $`\int _{-1}^1x^2\,dx=2/3`$). Since the left hand side is less than 8 and the right hand side is greater than 8, the proof is finished. ∎
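These bounds are easily confirmed numerically. A minimal check (a sketch using scipy, with the simplification $`(1-x^2)/f(x)=(1+9x^{36})/10`$) gives a left hand side of about 7.1 against $`185/23\approx 8.04`$ on the right:

```python
from scipy.integrate import quad

# Numerical check of inequality (5.1) for f(x) = 10(1-x^2)/(1+9x^36)
# and u(x) = sqrt(1-x^2); the integrands follow the displayed computation.
i1, _ = quad(lambda x: 10*x**2/(1 + 9*x**36), -1, 1)
i2, _ = quad(lambda x: 1.6*(1 + 9*x**36), -1, 1)     # 8/5 = 1.6
lhs = 0.75*(i1 + i2)                                  # upper bound on lambda_4^1
i3, _ = quad(lambda x: (1 + 9*x**36)/10, -1, 1)       # (1-x^2)/f(x), simplified
rhs = 2.0/i3                                          # lower bound on lambda_0^1
print(f"lhs = {lhs:.3f} < 8 < rhs = {rhs:.3f}")       # ~7.1 < 8 < ~8.04
```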
The proof of this theorem is hardly optimal since there are, certainly, many such metrics. We also believe that using a similar technique, one should be able to find metrics whose first $`m`$ distinct eigenvalues have even multiplicity for arbitrary $`m`$, but we will not address these problems here.
###### Theorem 5.5.
Let $`(M,g)`$ be an $`S^1`$ invariant Riemannian manifold which is diffeomorphic to $`S^2`$ and let $`\lambda _m`$ be its $`m`$-th distinct eigenvalue. If $`dimE_{\lambda _m}`$ is even for $`1\le m\le 4`$ then $`(M,g)`$ cannot be isometrically $`C^1`$ embedded in $`(\mathbb{R}^3,can)`$.
###### Proof.
Without loss of generality, we may assume the area of the metric is $`4\pi `$. As seen in the proof of Theorem 5.4, the first four eigenvalues have even multiplicity if and only if $`\lambda _4<\lambda _0^1`$. This result will then follow from Proposition 5.2 as long as we can prove that $`\lambda _4>3`$. This is most easily accomplished by contradiction.
Assume $`\lambda _4\le 3`$, so that $`0<\lambda _1<\lambda _2<\lambda _3<\lambda _4\le 3`$. Now each $`\lambda _i`$ for $`1\le i\le 4`$ must satisfy $`\lambda _i=\lambda _k^l`$ for some $`k\ne 0`$ (since the multiplicities are even, none of these eigenvalues is $`S^1`$ invariant, by Proposition 3.1 v.)) and $`l\ge 1`$. However, by Lemma 5.1, $`\lambda _k^l>l|k|`$, so if $`\lambda _i=\lambda _k^l\le 3`$ it must be the case that $`l|k|\le 2`$. By Proposition 3.1 iii.), $`\lambda _k^l=\lambda _{-k}^l`$, so there are only three (possibly) distinct eigenvalues with these properties, and their values coincide with $`\lambda _1^1`$, $`\lambda _2^1`$, and $`\lambda _1^2`$. There are, therefore, at most three distinct values for the four distinct eigenvalues $`\lambda _i`$, $`1\le i\le 4`$, but this contradicts the pigeonhole principle. ∎
Again there is an immediate corollary:
###### Corollary 5.6.
Let $`(M,g)`$ be an $`S^1`$ invariant Riemannian manifold which is diffeomorphic to $`S^2`$, let $`K`$ be its Gauss curvature, and let $`\lambda _m`$ be its $`m`$-th distinct eigenvalue. If $`dimE_{\lambda _m}`$ is even for $`1\le m\le 4`$ then there exists a point $`p\in M`$ such that $`K(p)<0`$, ∎
and, contrapositively, a kind of partial converse for Weyl type theorems:
###### Corollary 5.7.
A surface of revolution which is diffeomorphic to $`S^2`$ and isometrically embedded in $`(\mathbb{R}^3,can)`$ has the property that at least one of its first four non-zero distinct eigenvalues has odd multiplicity. ∎
We observe that this is a property which these metrics share with those of constant positive curvature.
## 6. Remarks on classical surface theory
In this final section we leave behind the question of embeddability and focus our attention on the way in which Corollary 5.6 can be viewed as an extension of one of the corollaries of the Gauss-Bonnet theorem.
Let $`(M,g)`$ be any compact, orientable, boundaryless surface with metric $`g`$. We recall that the Euler characteristic, $`\chi (M)`$, and the curvature $`K`$ are related by the Gauss-Bonnet Theorem:
$$2\pi \chi (M)=\int _MK,$$
so that one has the well known result:
###### Proposition 6.1.
If $`\chi (M)\le 0`$ then there exists a point $`p\in M`$ such that $`K(p)\le 0`$. ∎
Via the Hodge-DeRham isomorphism, one can restate the Gauss-Bonnet theorem as follows:
Let $`\lambda _{q,j}`$ be the $`j`$-th distinct eigenvalue of the Laplacian acting on $`q`$-forms and $`E_{\lambda _{q,j}}`$ its “eigenspace” (this vector space may consist of the zero vector only). Then
(6.1)
$$\frac{1}{2\pi }\int _MK=2-dimE_{\lambda _{1,0}}.$$
Of course $`dimE_{\lambda _{1,0}}`$ is simply twice the genus of the surface, since $`\lambda _{1,0}=0`$. But this form of the Gauss-Bonnet formula does allow us to observe that: if $`dimE_{\lambda _{1,0}}`$ is even (this is automatic) and positive, then there exists a point $`p\in M`$ such that $`K(p)\le 0`$.
In case $`dimE_{\lambda _{1,0}}>0`$, $`M`$ is not a sphere. So these results tell us how to get some non-positive curvature by adding handles to the sphere.
If we don’t want to add handles to the sphere, it is Corollary 5.6 which tells us, at least in the case of surfaces of revolution, how to obtain some negative curvature by changing the dimension of the Euclidean space into which it embeds.
Collecting the foregoing ideas together, one can state a result which gives a unified, if not quite complete, answer to the question of the existence of non-positive curvature, in other words: a generalization of Proposition 6.1 which includes surfaces with Euler characteristic 2.
###### Corollary 6.2.
Let $`(M,g)`$ be an orientable, compact, boundaryless surface with metric $`g`$, isometry group $`\mathcal{I}(M,g)`$ and $`j`$-th distinct $`q`$-form eigenvalue $`\lambda _{q,j}`$. If, for some $`q\in \{|dim\mathcal{I}(M,g)-1|,1\}`$, $`dimE_{\lambda _{q,|1-q|j}}`$ is even and positive for all $`j`$ such that $`1\le j\le 4`$, then there exists a point $`p\in M`$ such that $`K(p)\le 0`$.
###### Proof.
If $`(M,g)`$ satisfies the hypothesis for $`q=1`$ then the statement of this result is simply Proposition 6.1, as can be seen from Equation (6.1). Hence there exists a point $`p\in M`$ such that $`K(p)\le 0`$.
If $`(M,g)`$ satisfies the hypothesis for $`q=|dim\mathcal{I}(M,g)-1|`$, then, as is well known, we must consider all cases with $`dim\mathcal{I}(M,g)\le 3`$. If $`dim\mathcal{I}(M,g)=0`$ or $`2`$, then again $`q=1`$ and the proof is the same as in the previous case. If $`dim\mathcal{I}(M,g)=3`$ then $`(M,g)=(S^2,can)`$ (see , p. 46-47), so that $`K>0`$ and constant, and the statement of this result is simply, as we already know, that one of the first four 0- or 2-form eigenvalues has odd multiplicity (in fact they all have odd multiplicity). Finally, if $`dim\mathcal{I}(M,g)=1`$ then $`q=0`$. If, in this case, the hypothesis holds for $`q=0`$ only, then $`M`$ is, topologically, the sphere and thus the statement of this theorem reduces to Corollary 5.6. ∎
One cannot help but ponder the possibility that one can remove the a priori assumption of $`S^1`$ invariance since, according to legend, only surfaces of revolution would have a lot of even multiplicities anyway. Also, we might hazard an even more provocative conjecture: perhaps there is a formula which relates integrals of geometric invariants to multiplicities of non-zero eigenvalues in a way similar to Formula (6.1). One could perhaps use heat kernel asymptotics to explore this. But this would be the subject of another treatise. |
no-problem/9910/cond-mat9910011.html | ar5iv | text | # Arrested States of Solids
## I Nonequilibrium Structures in Solids
What determines the final microstructure of a solid under changes of temperature or pressure? This is an extremely complex issue, since a rigid solid finds it difficult to flow along its free energy landscape to settle into a unique equilibrium configuration. Solids often get stuck in long-lived metastable or jammed states because the energy barriers that need to be surmounted in order to get unstuck are much larger than $`k_BT`$.
Such nonequilibrium solid structures may be obtained either by quenching from the liquid phase across a freezing transition (see Ref. for a comprehensive review), or by cooling from the solid phase across a structural transition. Unlike the former, nonequilibrium structures resulting from structural transformations do not seem to have attracted much attention amongst physicists, apart from Refs., possibly because the microstructures and mechanical properties obtained appear nongeneric and sensitively dependent on details of processing history.
Metallurgical studies have, however, classified some of the more generic nonequilibrium microstructures obtained in solid (parent/austenite) - solid (product/ferrite) transformations depending on the kind of shape change and the mobility of atoms. To cite a few:
* Martensites are the result of solid state transformations involving shear and no atomic transport. Martensites occur in a wide variety of alloys, polymeric solids and ceramics, and exhibit very distinct plate-like structures built from twinned variants of the product.
* Bainites are similar to martensites, but in addition possess a small concentration of impurities (e.g. carbon in iron) which diffuse and preferentially dissolve in the parent phase.
* Widmanstätten ferrites result from structural transformations involving shape changes and are accompanied by short range atomic diffusion.
* Pearlites are a eutectic mixture of bcc Fe and the carbide consisting of alternating stripes.
* Amorphous alloys, a result of a fast quench, typically possess some short range ordering of atoms.
* Polycrystalline materials of the product phase are a result of a slower quench across a structural transition and display macroscopic regions of ordered configurations of atoms separated by grain boundaries.
That the morphology of a solid depends on the detailed dynamics across a solid-solid transformation, has been recognised by metallurgists who routinely use time-temperature-transformation (TTT) diagrams to determine heat treatment schedules. The TTT diagram is a family of curves parametrized by a fraction $`\delta `$ of transformed product. Each curve is a plot of the time required to obtain $`\delta `$ versus temperature of the quench (Fig. 1). The TTT curves for an alloy of fixed composition may be viewed as a ‘kinetic phase diagram’. For example, starting from a hot alloy at $`t=0`$ equilibrated above the transition temperature (upper left corner) one could, depending on the quench rate (obtained from the slope of a line $`T(t)`$), avoid the nose of the curve and go directly into the martensitic region or obtain a mixture of ferrite and carbide when cooled slowly.
It appears from these studies that several qualitative features of the kinetics and morphology of microstructures are common to a wide variety of materials. This would suggest that there must be a set of general principles underlying such nonequilibrium solid-solid transformations. Since most of the microstructures exhibit features at length scales ranging from $`100\AA `$ to $`100\mu m`$, it seems reasonable to describe the phenomena at the mesoscopic scale, wherein the solid is treated as a continuum. Such a coarse-grained description would ignore atomic details and instead involve effective continuum theories based on symmetry principles, conservation laws and broken symmetry.
Let us state the general problem in its simplest context. Consider a solid state phase diagram exhibiting two different equilibrium crystalline phases separated by a first order boundary (Fig. 2). An adiabatically slow quench from $`T_{in}`$ to $`T_{fin}`$ across the phase boundary, in which the cooling rate is so small that at any instant the solid is in equilibrium corresponding to the instantaneous temperature, would clearly result in an equilibrium final product at $`T_{fin}`$. On the other hand, an instantaneous quench would result in a metastable product bearing some specific relation to the parent phase. The task is to develop a nonequilibrium theory of solid state transformations which would relate the nature of the final arrested state and the dynamics leading to it to the type of structural change, the quench rate and the mobility of atoms.
In this article we concentrate on the dynamical and structural features of a class of solid-solid transformations called Martensites. Because of its commercial importance, martensitic transformations are a well studied field in metallurgy and material science. Several classic review articles and books discuss various aspects of martensites in great detail . The growing literature on the subject is a clear indication that the dynamics of solid state transformations is still not well understood. We would like to take this opportunity to present, for discussion and criticism, our point of view on this very complex area of nonequilibrium physics .
We next review the phenomenology of martensites and highlight generic features that need to be explained by a nonequilibrium theory of solid state transformations.
## II Phenomenology of Martensites
One of the most studied alloys undergoing martensitic transformations is iron-carbon . As the temperature is reduced, Fe with less than $`0.02`$% C undergoes an equilibrium structural transition (Fig. 2) from fcc (austenite) to bcc (ferrite) at $`T_c=910^{\circ }`$C . An adiabatic cooling across $`T_c`$ nucleates a grain of the ferrite which grows isotropically, leading to a polycrystalline bcc solid. A faster quench from $`T_{in}>T_c`$ to $`T_{fin}<M_s<T_c`$ (where $`M_s`$ is the martensite start temperature) produces instead a rapidly transformed metastable phase called the martensite, preempting the formation of the equilibrium ferrite. It is believed that martensites form by a process of heterogeneous nucleation. On nucleation, martensite ‘plates’ grow radially with a constant front velocity of order $`10^5`$ cm/s, comparable to the speed of sound. Since the transformation is not accompanied by the diffusion of atoms, either in the parent or the product, it is called a diffusionless transformation. Electron microscopy reveals that each plate consists of an alternating array of twinned or slipped bcc regions of size $`100`$ Å. Such martensites are called acicular martensites.
The plates grow to a size of approximately $`1\mu m`$ before they collide with other plates and stop. Most often the nucleation of plates is athermal; the amount of martensite nucleated at any temperature is independent of time. This implies that there is always some retained fcc, partitioned by martensite plates. Optical micrographs reveal that the jammed plates lie along specific directions known as habit planes. Martensites, characterised by such a configuration of jammed plates, are long lived since the elastic energy barriers for reorganisation are much larger than $`k_BT`$.
A theoretical analysis of the dynamics of the martensitic transformation in Fe-C is complicated by the fact that the deformation is 3-dimensional (Bain strain) with 3 twin variants of the bcc phase. Alloys like In-Tl, In-Pb, Mn-Fe and high-$`T_c`$ ceramics, however, offer the simplest examples of martensitic transformations, having only two twin variants. For instance, the high-$`T_c`$ cuprates undergo a tetragonal to orthorhombic transformation . The orthorhombic phase can be obtained from the tetragonal phase by a two-dimensional deformation, essentially a square to rhombus transition. Experiments indicate that all along the kinetic pathway, the local configurations can be obtained from a two-dimensional deformation of the tetragonal cell. This would imply that the movement of atoms is strongly anisotropic and confined to the ab-plane. Thus as far as the physics of this transformation is concerned, the ab-planes are in perfect registry (no variation of the strain along the c-axis). In the next two sections we shall discuss our work on the dynamics of the square to rhombus transformation in 2 dimensions using a molecular dynamics simulation and a coarse-grained mode coupling theory.
## III Molecular Dynamics Simulation of Solid-Solid Transformations
Our aim in this section will be to study the simplest molecular dynamics (MD) simulation of the square to rhombus transformation. We would like to use the simulation results to construct the complete set of coarse grained variables needed in a continuum description of the dynamics of solid state transformations. We carry out the MD simulation in the constant $`NVT`$ ensemble using a Nosé-Hoover thermostat ($`N=12000`$) .
Our MD simulation is to be thought of as a ‘coarse-grained’ MD simulation, where the effective potential is a result of a complicated many-body interaction. One part of the interaction is a purely repulsive two-body potential $`V_2(r_{ij})=v_2/r_{ij}^{12}`$ where $`r_{ij}`$ is the distance between particles $`i`$ and $`j`$. The two-body interaction favours a triangular lattice ground state. In addition, triplets of particles interact via a short range three-body potential $`V_3(𝐫_i,𝐫_j,𝐫_k)=v_3w(r_{ij},r_{jk},r_{ik})[\mathrm{sin}^2(4\theta _{ijk})+\mathrm{sin}^2(4\theta _{jki})+\mathrm{sin}^2(4\theta _{kij})]`$ where $`w(r)`$ is a smooth short-range function and $`\theta _{ijk}`$ is the bond angle at $`j`$ between particles $`(ijk)`$. Since $`V_3`$ is minimised when $`\theta _{ijk}=0`$ or $`\pi /2`$, the three-body term favours a square lattice ground state. Thus at sufficiently low temperatures, we can induce a square to triangular lattice transformation by tuning $`v_3`$. The phase diagram in the $`(T,v_3)`$ plane is exhibited in Fig. 3.
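To make the form of this interaction concrete, here is a minimal sketch of how the two- and three-body terms could be evaluated. The cutoff function `w` below is our own placeholder (the text only specifies a smooth short-range function), and the sketch is an illustration of the stated formulas, not the simulation code actually used.

```python
import numpy as np

def pair_energy(r, v2=1.0):
    """Purely repulsive two-body term V2 = v2 / r^12 (favours a triangular lattice)."""
    return v2 / r**12

def w(r_ij, r_jk, r_ik, r_cut=1.8):
    """Placeholder smooth short-range cutoff: 1 near contact, 0 beyond r_cut."""
    r = max(r_ij, r_jk, r_ik)
    return 0.0 if r >= r_cut else (1.0 - (r / r_cut)**2)**2

def bond_angle(a, b, c):
    """Bond angle at vertex b of the triplet (a, b, c)."""
    u, v = a - b, c - b
    cosang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.arccos(np.clip(cosang, -1.0, 1.0))

def triplet_energy(ri, rj, rk, v3=1.0):
    """Three-body term v3 * w * [sin^2(4 th_ijk) + sin^2(4 th_jki) + sin^2(4 th_kij)].
    It vanishes for bond angles 0 and pi/2, so it favours a square lattice."""
    r_ij = np.linalg.norm(rj - ri)
    r_jk = np.linalg.norm(rk - rj)
    r_ik = np.linalg.norm(rk - ri)
    angles = (bond_angle(ri, rj, rk),   # theta_ijk, at j
              bond_angle(rj, rk, ri),   # theta_jki, at k
              bond_angle(rk, ri, rj))   # theta_kij, at i
    return v3 * w(r_ij, r_jk, r_ik) * sum(np.sin(4.0 * t)**2 for t in angles)
```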
We define elastic variables, coarse-grained over a spatial block of size $`\xi `$ and a time interval $`\tau `$, from the instantaneous positions $`𝐮`$ of the particles. These include the deformation tensor $`\partial u_i/\partial x_k`$, the full nonlinear strain $`ϵ_{ij}`$, and the vacancy field $`\varphi =\rho -\overline{\rho }`$ ($`\rho =`$ coarse grained local density, $`\overline{\rho }=`$ average density). We have kept track of the time dependence of these coarse-grained fields during the MD simulation.
Consider two ‘quench’ scenarios — a high and low temperature quench (upper and lower arrows in Fig. 3 respectively) across the phase boundary. In both cases the solid is initially at equilibrium in the square phase.
The high temperature quench across the phase boundary induces a homogeneous nucleation (i.e., strain inhomogeneities created by thermal fluctuations are sufficient to induce critical nucleation) and growth of a triangular region. The product nucleus grows isotropically, with size $`R\sim t^{1/2}`$. A plot of the vacancy/interstitial field shows that, at these temperatures, they diffuse fast to their equilibrium value (vacancy diffusion obeys an Arrhenius form $`D_v=D_0\mathrm{exp}(-A/k_BT)`$, where $`A`$ is an activation energy, and so is larger at higher temperatures). The final morphology is a polycrystalline triangular solid.
The low temperature quench, on the other hand, needs defects (either vacancies or dislocations) to seed nucleation in an appreciable time. This heterogeneous nucleation initiates an embryo of the triangular phase, which grows anisotropically along specific directions (Fig. 4). Two aspects are immediately apparent: the growing nucleus is twinned, and the front velocities are high. Indeed the velocity of the front is constant and roughly half the velocity of longitudinal sound. A plot of the vacancy/interstitial field shows a high concentration at the parent-product interface. The vacancy field now diffuses very slowly and so appears to get stuck to the interface over the time scale of the simulation. If we force the vacancies and interstitials to annihilate each other, then the anisotropic twinned nucleus changes in the course of time to an isotropic untwinned one!
Therefore the lessons from the MD simulation are: (1) There are at least two scenarios of nucleation of a product in a parent, depending on the temperature of quench. The product grows via homogeneous nucleation at high $`T`$, and via heterogeneous nucleation at low $`T`$. (2) The complete set of slow variables necessary to describe the nucleation of solid-solid transformations should include the strain tensor and defects (vacancies and dislocations) which are generated at the parent-product interface at the onset of nucleation. (3) The relaxation times of these defects dictate the final morphology. At high temperatures the defects relax fast and the grains grow isotropically with a diffusive front. The final morphology is a polycrystalline triangular solid. At low temperatures the interfacial defects (vacancies) created by the nucleating grain relax slowly and get stuck at the parent-product interface. The grains grow anisotropically along specific directions. The critical nucleus is twinned and the front grows ballistically (with a velocity comparable to the sound speed). The final morphology is a twinned martensite.
## IV Mode Coupling Theory of Solid-Solid Transformations
Armed with the lessons from the MD simulation, let us now construct a continuum elastic theory of solid-state nucleation. The analysis follows in part the theories of Krumhansl et al. , but has important points of departure. The procedure is to define a coarse grained free energy functional in terms of all the relevant ‘slow’ variables. From the simulation results, we found that every configuration is described in terms of the local (nonsingular) strain field $`ϵ_{ij}`$, the vacancy field $`\varphi `$, and singular defect fields like the dislocation density $`b_{ij}`$. These variables are essentially related to the phase and amplitudes of the density wave describing the solid $`\{\rho _𝐆\}`$.
It is clear from the simulation that the strain tensor, defined with respect to the ideal parent, gets to be of $`O(1)`$ in the interfacial region between the parent and the product. Thus we need to use the full nonlinear strain tensor $`ϵ_{ij}=(\partial _iu_j+\partial _ju_i+\partial _iu_k\partial _ju_k)/2`$. Further, since the strain is inhomogeneous during the nucleation process, the free energy functional should have derivatives of the strain tensor $`\partial _kϵ_{ij}`$ (this has unfortunately been termed ‘nonlocal strain’ by some authors).
In general, the form of the free energy functional can be very complicated, but in the context of the square-to-rhombus transformation, the free energy density may be approximated by a simple form,
$$f=c(\nabla ϵ)^2+ϵ^2-aϵ^4+ϵ^6+\chi _v\varphi ^2+\chi _db^2+k_dbϵ$$
(1)
where $`ϵ`$ is the nonzero component of the strain corresponding to the transformation between a square and a rhombus, $`\varphi `$ is the vacancy field and $`b`$ is the dislocation density (we have dropped the tensor indices for convenience). The tuning parameter $`a`$ induces a transition from a square (described by $`ϵ=0`$) to a rhombus ($`ϵ=\pm e_0`$).
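As a quick consistency check on this form (keeping only the uniform strain terms, i.e., neglecting the gradient, vacancy and dislocation contributions, and with the $`-aϵ^4`$ sign as written above), the square and rhombus minima are degenerate precisely when

```latex
f_0(\epsilon)=\epsilon^2-a\,\epsilon^4+\epsilon^6,\qquad
f_0'(e_0)=0 \;\text{ and }\; f_0(e_0)=0
\;\Longrightarrow\; e_0^2=\frac{2}{a},\quad a=2,\quad e_0^2=1 ,
```

so that $`a<2`$ favours the square ($`ϵ=0`$), $`a>2`$ favours the rhombus ($`ϵ=\pm e_0`$), and the transition driven by the tuning parameter $`a`$ is first order, as required.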
Starting with $`ϵ=0`$ corresponding to the equilibrium square parent phase at a temperature $`T>T_c`$, we quench across the structural transition. The initial configuration of $`ϵ`$ is now metastable at this lower temperature, and would decay towards the true equilibrium configuration by nucleating a small ‘droplet’ of the product. As we saw in the last section, as soon as a droplet of the product appears embedded in the parent matrix, atomic mismatch at the parent-product interface gives rise to interfacial defects like vacancies and dislocations.
Let us confine ourselves to solids for which the energy cost of producing dislocations is prohibitively large. This would imply that the interfacial defects consist of only vacancies and interstitials. The dynamics of nucleation, now written in terms of $`ϵ`$, $`𝐠`$ (the conserved momentum density) and the vacancy field $`\varphi `$, are complicated . For the present purpose, all we need to realise is that $`\varphi `$ couples to the strain and is diffusive with a diffusion coefficient $`D_v`$ depending on temperature.
As in the MD simulation, we find that the morphology and growth of the droplet of the product depend critically on the diffusion of these vacancies. If the temperature of quench is high, $`\varphi `$ diffuses to zero before the critical nucleus size is attained and the nucleus eventually grows into an equilibrium (or polycrystalline) triangular solid. In this case, the nucleus grows isotropically with $`R\sim t^{1/2}`$. However a quench to lower temperatures results in a low vacancy diffusion coefficient. In the limit $`D_v\to 0`$, the $`\varphi `$-field remains frozen at the moving parent-product interface. In this case a constrained variational calculation of the morphology of the nucleus shows that it is energetically favourable to form a twinned martensite rather than a uniform triangular structure. The growth of the twinned nucleus is not isotropic, but along habit planes. Lastly, the growth along the longer direction is ballistic with a velocity proportional to $`\sqrt{\chi _v}`$ (of the order of the sound velocity). All these results are consistent with the results of the previous section and with martensite phenomenology. Let us try and understand, in more physical terms, why the growing nucleus might want to form twins.
As soon as a droplet of the triangular phase of dimension $`L`$ is nucleated, it creates vacancies at the parent-product interface. The free energy of such an inclusion is $`F=F_{bulk}+F_{pp}+F_\varphi `$. The first term is simply the bulk free energy gain, equal to $`-\mathrm{\Delta }FL^2`$ where $`\mathrm{\Delta }F`$ is the free energy difference between the square and triangular phases. The next two terms are interfacial terms. $`F_{pp}`$ is the elastic contribution to the parent-product interface coming from the gradient terms in the free energy density Eq. 1, and is equal to $`4\sigma _{pp}L`$, where $`\sigma _{pp}`$ is the surface tension at the parent-product interface. $`F_\varphi `$ is the contribution from the interfacial vacancy field glued to the parent-product interface and is proportional to $`\varphi ^2L^2`$ (since the atomic mismatch should scale with the amount of parent-product interface). This last contribution dominates at large $`L`$, setting a prohibitive price on the growth of the triangular nucleus. The solid gets around this by nucleating a twin with a strain opposite to the one initially nucleated, thereby reducing $`\varphi `$. Indeed for an equal size twin, $`\varphi \approx 0`$ on the average, and this leads to a much lower interfacial energy $`F_\varphi \sim L`$. However the solid now pays the price of having created an additional twin interface whose energy cost is $`F_{tw}=\sigma _{tw}L`$.
Considering now an (in general) anisotropic inclusion of length $`L`$, width $`W`$ consisting of $`N`$ twins, the free energy calculation goes as
$$F=-\mathrm{\Delta }FLW+\sigma _{pp}(L+W)+\sigma _{tw}NW+\beta \left(\frac{L}{N}\right)^2N$$
(2)
where the last term is the vacancy contribution. Minimization with respect to $`N`$ gives $`L/N\sim W^{1/2}`$, a relation that is known for 2-dimensional martensites like In-Tl.
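The minimisation itself is elementary; only the last two terms of Eq. (2) depend on $`N`$:

```latex
\frac{\partial F}{\partial N}
=\sigma_{tw}W-\beta\,\frac{L^2}{N^2}=0
\quad\Longrightarrow\quad
N=L\sqrt{\frac{\beta}{\sigma_{tw}W}},
\qquad
\frac{L}{N}=\sqrt{\frac{\sigma_{tw}}{\beta}}\;W^{1/2},
```

i.e., the twin size $`L/N`$ grows as the square root of the plate width $`W`$.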
Our next task is to solve the coupled dynamical equations with appropriate initial conditions numerically, to obtain the full morphology phase diagram as a function of the type of structural change, the parameters entering the free energy functional and kinetic parameters like $`D_v`$.
It should be mentioned that our theory takes off from the theories of Krumhansl et al. , in that we write the elastic energy in terms of the nonlinear strain tensor and its derivatives. In addition we have shown that the process of creating a solid nucleus in a parent generates interfacial defects which evolve in time. The importance of defects has been stressed by a few metallurgists . We note also that the parent-product interface is studded with an array of vacancies with a separation equal to the twin size. This implies that the strain decays exponentially from the interface over a distance of order $`L/N`$. This has been called the ‘fringing field’ in Ref. . Krumhansl et al. obtain this by imposing boundary conditions on the parent-product interface, whereas here it appears dynamically.
## V Patterning in Solid-Solid Transformations: Growth and Arrest
So far we have discussed the nucleation and growth of single grains. This description is clearly valid at very early times, for as time progresses the grains grow to a size of approximately $`1\mu m`$ and start colliding, whereupon in most alloys they stop. Optical micrographs of acicular martensites reveal that the jammed plates lie along habit planes that criss-cross and partition the surrounding fcc (parent) matrix.
Can we quantify the patterning seen in martensite aggregates over a scale of a millimeter? A useful measure is the size distribution of the martensite grains embedded in a given volume of the parent. The appropriate (but difficult!) calculation at this stage would be the analogue of a Becker-Döring theory for nucleation in solids. In the absence of such a theory, we shall take a phenomenological approach.
Clearly the size distribution $`P(l,t)`$ depends on the spatio-temporal distribution $`I`$ of nucleation sites and the growth velocity $`v`$. We have analysed the problem explicitly in a simple 2-dimensional context. Since the nucleating martensitic grains are highly anisotropic and grow along certain directions with a uniform velocity, a good approximation is to treat the grains as lines or rays. These rays (lines) emanate from nucleation sites along certain directions, and grow with a constant velocity $`v`$. The rays stop on meeting other rays and eventually, after a time $`T`$, the 2-dimensional space is fragmented by $`N`$ colliding rays. The size distribution of rays, expressed in terms of a scaling variable $`y=y(I,v)`$, has two geometrical limits — the $`𝚪`$-fixed point (at $`y=0`$) and the $`𝐋`$-fixed point (at $`y=\mathrm{\infty }`$). The $`𝚪`$-fixed point corresponds to the limit where the rays nucleate simultaneously with a uniform spatial distribution. The stationary distribution $`P(l)`$ is a Gamma distribution with an exponentially decaying tail. The $`𝐋`$-fixed point corresponds to the limit where the rays are nucleated sequentially in time (and uniformly in space) and grow with infinite velocity. By a mapping onto a multifragmentation problem, Ben-Naim and Krapivsky were able to derive the exact asymptotic form for the moments of $`P(l)`$ at the $`𝐋`$-fixed point. The distribution function $`P(l)`$ has a multiscaling form, characterised by its moments $`\langle l^q\rangle \sim N^{\mu (q)}`$ where $`\mu (q)=(q+2-\sqrt{q^2+4})/2`$. At intermediate values of the scaling variable $`y`$, there is a smooth crossover from the $`𝚪`$-fixed point to the $`𝐋`$-fixed point with a kinematical crossover function and crossover exponents.
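Evaluating this exponent at a few integer $`q`$ (with the minus sign restored in the expression above) makes the multiscaling explicit: $`\mu (q)`$ is strictly concave, not linear, in $`q`$.

```latex
\mu(q)=\frac{q+2-\sqrt{q^2+4}}{2}:\qquad
\mu(0)=0,\qquad
\mu(1)=\frac{3-\sqrt{5}}{2}\approx 0.382,\qquad
\mu(2)=2-\sqrt{2}\approx 0.586\neq 2\mu(1) .
```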
The emergence of scale invariant microstructures in martensites as arising out of a competition between the nucleation rate and growth is a novel feature well worth experimental investigation. There have been similar suggestions in the literature, but as far as we know there have been no direct visualization studies of the microstructure of acicular martensites using optical micrographs. Recent acoustic emission experiments on the thermoelastic reversible martensite Cu-Zn-Al may be argued to provide indirect support for the above claim , but the theory of acoustic emission in martensites is not understood well enough to make such an assertion with any confidence.
## VI Open Questions
We hope this short review makes clear how far we are in our understanding of the dynamics of solid-solid transformations. A deeper understanding of the field will only come about with systematic experiments on carefully selected systems. For instance, a crucial feature of our nonequilibrium theory of martensitic transformations is the existence of a dynamical interfacial defect field. In conventional Fe based alloys, the martensitic front grows incredibly fast, making it difficult to test this using in situ transmission electron microscopy. Colloidal solutions of polystyrene spheres (polyballs), however, are excellent systems for studying materials properties. Polyballs exhibiting fcc $`\to `$ bcc structural transitions have been seen to undergo twinned martensitic transformations. The length and time scales associated with colloids are large, making it convenient to study these systems using light scattering and optical microscopy.
In this article we have focussed on a small part of the dynamics of solid state transformations, namely the dynamics and morphology of martensites. Even so our presentation here is far from complete and there are crucial unresolved questions that we need to address.
Let us list the issues as they appear following a nucleation event.
The physics of heterogeneous nucleation in solids is very poorly understood. For instance, it appears from our simulations that the morphology of the growing nucleus depends on the nature of the defects seeding the nucleation process (e.g., vacancies, dislocations and grain boundaries). In addition several martensitic transformations are associated with correlated nucleation events and autocatalysis. Though these features are not central to the issue of martensites, such a study would lead to a better understanding of the origins of athermal, isothermal and burst nucleation. This in conjunction with a ‘Becker-Döring theory’ for multiple martensite grains would be a first step towards a computation of the TTT curves.
We still do not understand the details of the dynamics of twinning and how subsequent twins are added to the growing nucleus. Moreover the structure and dynamics of the parent-product interface and of the defects embedded in it have not been clearly analysed.
It would be desirable to have a more complete theory which displays a morphology phase diagram (for a single nucleus) given the type of structural transition and the kinetic, thermal and elastic parameters.
Certain new directions immediately suggest themselves. For instance, the role of carbon in interstitial alloys like Fe-C leading to the formation of bainites ; the coupling of the strain field to an external stress and the associated shape memory effect ; and finally the nature of tweed phases and pre-martensitic phenomena (associated with the presence of quenched impurities).
It is clear that the study of the dynamics of solid-solid transformations and the resulting long-lived morphologies lies at the intersection of metallurgy, material science and nonequilibrium statistical mechanics. The diversity and richness of phenomena make this an extremely challenging area of nonequilibrium physics.
## VII Acknowledgements
We thank Yashodhan Hatwalne for a critical reading of the manuscript.
Figure Caption
Fig. 4 MD snapshot of (a) the nucleating grain at some intermediate time initiated by the low temperature quench across the square-triangle transition. The dark(white) region is the triangular(square) phase respectively. Notice that the nucleus is twinned and highly anisotropic. (b) the vacancy (white)/interstitial (black) density profile at the same time as (a). Notice that the vacancies and interstitials are well separated and cluster around the parent-product interface. |
no-problem/9910/cond-mat9910083.html | ar5iv | text | # Surface Properties of a Selective Disaggregation Model
## Abstract
The scaling properties of one-dimensional surfaces generated by selective disaggregation processes are studied through Monte Carlo simulations. The model presented here for the deconstruction process takes into account the nearest-neighbor interaction energies. These interactions are considered to be the energy barrier the detaching particle has to overcome in order to desorb from the surface. The resulting one-dimensional surface has self-affine properties, and the scaling exponents of the surface roughness are functions of the interaction parameter $`J/kT`$, where $`J`$ is an interaction energy, $`k`$ is the Boltzmann constant and $`T`$ is the temperature. The dependence of the width exponents on the interaction parameter is analyzed in detail.
PACS: 05.40 68.35Bs 68.35Ct
There are three fundamental physical processes that give rise to the morphology of a surface: deposition, surface diffusion and desorption. The characteristics of the interfaces generated by the combination of deposition and surface diffusion have been well studied during the past decades . In growth models in particular, particles are added to the surface and then allowed to relax by different mechanisms. Many of these models have been shown to lead to the formation of self-affine surfaces, characterized by scaling exponents. From a theoretical point of view, the studies dedicated to the self-affine interfaces generated by growth models can be considered to follow two main branches: studies of the properties of discrete models, and studies of continuous models. The first were dedicated mainly to the properties of computational models in which the growth proceeds on an initially empty lattice representing a d-dimensional substrate. At each time step, the height of a lattice site is increased by units (usually one unit) representing the incoming particles. Different models differ only in the relaxation mechanisms proposed to capture specific experimental characteristics. The models are then classified according to the values of the scaling exponents into several universality classes. The continuous models, on the other hand, are based on stochastic differential equations of the type
$$\partial h/\partial t=F-\nabla \cdot 𝐣+noise,$$
(1)
in which $`h(r,t)`$ is the thickness of the film deposited onto the surface during time $`t`$, $`F`$ denotes the deposition rate and $`𝐣`$ denotes the current along the surface which in turn depends on the local surface configuration. When the surface current represents some experimental nonlinear equilibrium processes it gives rise to different types of nonlinear terms. The noise term corresponds to the fluctuations in the growth rate and, in general, is assumed to be uncorrelated. The differential equations are solved, either numerically or analytically (when it is possible), and the scaling exponents are determined. If the values of the exponents are similar to those of the discrete models, it is said that both (discrete and continuous models) belong to the same universality class, although the formal connection between both approaches is still an open question.
Desorption or detaching processes, despite their technological importance, have not received the same attention, maybe because they were considered to be the reciprocal of the growth models, so that no new characteristics were expected to appear.
There are many technological processes that depend on the details of the etching phenomena, and only recently have a few of them been modeled in order to understand the phenomena at an atomic level . As an example, corrosion is among the most important disaggregation processes, given its economic importance. A basic fact is that the corrosion attack is more intense at high temperatures and strongly depends on the type of material under attack. It would be desirable to take these characteristics into account in basic desorption models. In this paper we present a discrete disaggregation model, which can be considered as a generalization of a recently reported model . We claim that our model is much more realistic than previous ones because we incorporate an interaction parameter proportional to the nearest-neighbor energies. The main characteristic of the model presented here is the dependence of the values of the scaling exponents on the interaction parameter.
It is well known that most chemical reactions that take place at a surface or interface, as well as those processes that lead to particle removal, are activated processes. The particle has to overcome an energy barrier in order to detach. In a first approach, this energy barrier can be considered to be proportional to the number of bonds the particle has at its location on the surface. The energy barrier can then be written as $`nJ`$, where $`n`$ is the number of nearest neighbors of the particle and $`J`$ is the bonding energy. Following this line, it is usual to represent the probability for a particle to overcome a barrier of height $`nJ`$ as $`p_{(n)}=\mathrm{exp}(-nJ/kT)`$, where $`T`$ is the temperature and $`k`$ is the Boltzmann constant. In our model, the resulting self-affine interface shows different values of the scaling exponents for different values of $`J/kT`$, as a consequence of the dependence of the desorption probability on the interaction parameter.
In our computational model, the surface is represented by a one-dimensional array of integers specifying the number of atoms in each column. In order to maintain the solid-on-solid character of the model, the detaching processes do not give rise to overhangs. The simulations were carried out in the following way: each column of the one-dimensional array is initially filled enough to allow the deconstruction process to take place during the whole simulational time, i.e., no column height may become negative at the end of the simulation. By initially filling all columns with the same amount of atoms, the simulations start with a flat surface at $`t=0`$. In order to simulate the selectivity of the deconstruction process, a column $`K`$ is picked at random. Then, the site with the greatest number of open bonds is selected among the site $`s(K)`$ lying at the top of the column $`K`$ and the sites $`s(K-1)`$ and $`s(K+1)`$ at the top of each of the neighboring columns. After selecting the column with the greatest number of open bonds, the detaching probability for the particle at the top of the selected column is calculated as $`p(n)=\mathrm{exp}(-nJ/kT)`$. The particle is detached from the selected position if a random number $`r`$ is less than $`p(n)`$. Thus, the detaching probability depends on the number of nearest neighbors $`n`$ of the atom in the direction $`x`$ running parallel to the interface, i.e., in our model $`n=0`$, $`1`$ or $`2`$. The interaction energy corresponding to the neighbor in the direction $`y`$, perpendicular to the interface, is not taken into account since it is present for all detaching atoms. In this way the atom with the fewest neighbors $`n`$ has the greatest probability to desorb from the surface.
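The update rule is simple enough to state as code. The following is only a sketch of the algorithm just described (our paraphrase, not the program actually used); periodic boundaries and the tie-breaking choice among equally bonded candidates are our own assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def n_neighbors(h, i):
    """Lateral nearest neighbors (0, 1 or 2) of the atom at the top of column i;
    a side bond exists if the adjacent column is at least as high (periodic BC)."""
    L = len(h)
    return int(h[(i - 1) % L] >= h[i]) + int(h[(i + 1) % L] >= h[i])

def mc_step(h, J_over_kT):
    """One detachment attempt: pick a random column K, select among
    {K-1, K, K+1} the top atom with the most open bonds (fewest neighbors),
    and detach it with probability p(n) = exp(-n J/kT)."""
    L = len(h)
    K = rng.integers(L)
    i = min(((K - 1) % L, K, (K + 1) % L), key=lambda c: n_neighbors(h, c))
    if rng.random() < np.exp(-n_neighbors(h, i) * J_over_kT):
        h[i] -= 1          # solid-on-solid: heights only, no overhangs

def width(h):
    """Interface width omega, Eq. (2)."""
    return np.sqrt(np.mean(h**2) - np.mean(h)**2)

# usage: flat start, columns deep enough never to empty
h = np.full(1000, 10**6, dtype=np.int64)
for _ in range(50 * len(h)):    # time advances by 1/L per attempt
    mc_step(h, J_over_kT=1.0)
print(width(h))
```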
Following the studies performed on growth models, we expect the surface generated by the process described above to present self-affine properties. We are therefore mainly interested in the scaling properties of the width (or roughness), defined as
$$\omega (L,t)=\left[\langle h^2(x,t)\rangle _x-\langle h(x,t)\rangle _x^2\right]^{1/2},$$
(2)
where $`\langle \cdot \rangle _x`$ denotes an average over the entire lattice of size $`L`$, $`h(x,t)`$ is the height of the column at position $`x`$, $`\langle h(x,t)\rangle _x`$ is the average surface height and $`t`$ is the time, roughly proportional to the number of removed monolayers $`m`$.
As for growth processes, we assume that the roughness obeys the Family-Vicsek scaling law , valid for self-affine interfaces,
$$\omega (L,t)=L^\alpha f\left(\frac{t}{L^z}\right),$$
(3)
where $`\alpha `$ is the roughness exponent, because $`\omega \sim L^\alpha `$ when $`t\gg L^z`$, and $`\beta `$ is the growth exponent, because $`\omega \sim t^\beta `$ when $`t\ll L^z`$. In our case there is, of course, no growth, but an exponent $`\beta `$ can be defined in the same way. $`z=\alpha /\beta `$ is the dynamic exponent.
In our model, the relevant characteristic is the dependence of the scaling exponents $`\alpha `$ and $`\beta `$ on the interaction parameter $`J/kT`$, and this characteristic is investigated in detail here. The exponent $`\beta `$ was determined by measuring the slope of the log-log plot of $`\omega (L,t)`$ vs $`t`$ at early times, and the results presented below were obtained for a system size of $`L=5000`$. The exponent $`\alpha `$, on the other hand, was determined using the height-height correlation function, defined as $`c(r,\tau )=\langle [h(x,t)-h(x+r,t+\tau )]^2\rangle ^{1/2}`$. The correlation function is known to scale in the same way as the width $`w(L,t)`$; in particular, for $`t\gg L^z`$ and $`r\ll L`$ the correlation function behaves as $`c(r,0)\sim r^\alpha `$. The results presented below for the exponent $`\alpha `$ were obtained by measuring the slope of the log-log plot of $`c(r,\tau )`$ vs $`r`$ in the limit $`r\to 0`$. For all the presented values of the interaction parameter $`J/kT`$, the lattice size used was $`L=1000`$, because the correlation function calculation must be done at long times in order to allow the width to reach its saturation value. Averages were taken over as many independent runs as necessary to reduce the statistical error below $`0.4\%`$ in all the simulations.
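Continuing the disaggregation sketch given earlier, the correlation function and the small-$`r`$ slope can be extracted along these lines (the ten-point fit window is an arbitrary choice for illustration):

```python
import numpy as np

def height_corr(h, r_max=100):
    """c(r, 0) = sqrt(<[h(x) - h(x + r)]^2>) for r = 1 .. r_max (periodic)."""
    rs = np.arange(1, r_max + 1)
    c = np.array([np.sqrt(np.mean((h - np.roll(h, r))**2.0)) for r in rs])
    return rs, c

# with h from the previous sketch, after saturation (t >> L^z):
rs, c = height_corr(h)
alpha = np.polyfit(np.log(rs[:10]), np.log(c[:10]), 1)[0]   # slope at small r
```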
In Fig. 1 a log-log plot of the interface width versus time is presented. Time is advanced by $`1/L`$ when a particle at the upper monolayer is asked to detach. In Fig. 1 the upper points, with the label RND, were obtained by removing the particle at the top of randomly selected columns. Their slope is 0.5 and they are presented as a reference. The other plots correspond to the behavior of the width for various increasing values of the interaction parameter $`J/kT`$. It can be seen that the slope is smaller for larger values of $`J/kT`$. This behavior is quantitatively verified by the plot in Fig. 2. In this graph, the values of the exponent $`\beta `$ are plotted as a function of $`J/kT`$. The values of $`\beta `$ were obtained from the slopes of the lines fitting the log-log plots of the width vs time. Although only a few values of $`J/kT`$ are plotted, there is clearly a continuous variation of the exponent $`\beta `$ with the interaction parameter. It can be seen that the value of $`\beta `$ decreases as the interaction parameter increases but, after a certain crossover, a limiting value $`\beta \approx 0.26`$ is reached and the exponent does not decrease for further increments in $`J/kT`$. The height-height correlation function for long times is plotted in Fig. 3 for the same values of $`J/kT`$ as shown in Fig. 1. The dependence of the slope of $`c(r,0)`$ on the interaction parameter is evident. The values of the exponent $`\alpha `$ were obtained from the slopes of the lines through the points for $`r\to 0`$. The values of $`\alpha `$ are plotted in Fig. 4 as a function of $`J/kT`$. As for $`\beta `$, the same limiting effect can be observed, and the value of the exponent also decreases as the interaction increases for the lowest values of the parameter.
The presented results can be interpreted in several ways. For fixed $`J`$, they show that below a certain temperature the scaling properties of the interface do not change. For fixed temperature, it can be seen that materials with greater interaction energies present more resistance to etching. However, these are not linear effects. On the other hand, the behavior of the width of the surface is in agreement with the intuitive picture of a corrosion process, for instance: the roughness of the surface tends to decrease with increments of the interaction parameter, indicating that if the interaction energy is large or the temperature is low, the surface is less affected by the attack, although the exponent $`\beta `$ eventually reaches a saturation value.
In summary, the model presented here captures the basic behavior of chemical disaggregation processes and shows how the self-affine characteristics of the surface are affected by the inclusion of an interaction parameter. The model cannot be associated with any known continuous model, because there is a parameter that continuously controls the values of the scaling exponents; no continuous model shows this characteristic. In continuous growth models the values of the exponents are independent of any parameter involved in the stochastic differential equations. A possible equivalent discrete growth model would be a deposition model in which the particles stick at the sites with the smallest number of bonds; however, this would result in an unrealistic deposition model. The model presented here incorporates a realistic physical parameter, the interaction $`J/kT`$, which appears in many atomistic processes and, since the values of the scaling exponents depend on this parameter, the model does not fall into any known universality class. There are interesting extensions of the present work that can be pointed out: the inclusion of next-nearest-neighbor interactions (competitive and not competitive) and the case of deconstruction in $`2+1`$ dimensions. These cases are under study and the results will be published elsewhere. |
no-problem/9910/cond-mat9910126.html | ar5iv | text | # One and two dimensional tunnel junction arrays in weak Coulomb blockade regime - absolute accuracy in thermometry
## Abstract
We have investigated one and two dimensional (1D and 2D) arrays of tunnel junctions in partial Coulomb blockade regime. The absolute accuracy of the Coulomb blockade thermometer is influenced by the external impedance of the array, which is not the same in the different topologies of 1D and 2D arrays. We demonstrate, both by experiment and by theoretical calculations in simple geometries, that the 1D structures are better in this respect. Yet in both 1D and 2D, the influence of the environment can be made arbitrarily small by making the array sufficiently large.
Coulomb blockade thermometry (CBT) was invented five years ago and has since been established as an accurate and practical means to determine absolute temperature . Until recently only one dimensional (1D) arrays were discussed. An interesting suggestion to use two dimensional (2D) arrays was put forward by Bergsten et al. in , where limitations in measurement rate and tolerance to fabrication failures were discussed and shown to be even less restrictive than in 1D arrays. A large 2D array consisting of 256 $`\times `$ 256 tunnel junctions yielded absolute accuracy of better than 0.3 % at temperatures from two to four kelvin. This is somewhat better, although statistical variations from sample to sample in 2D arrays have not been reported, than what has been achieved with 1D arrays consisting of only 20 junctions in series (0.5 % standard deviation in absolute accuracy). Besides considering practical thermometry with sensors having not excessively many junctions, it is very interesting to understand the influence of the electromagnetic environment on tunnelling in arrays. In the case of a small tunnel junction and weak tunnelling ($`R_\mathrm{T}\gg R_\mathrm{Q}`$, with $`R_\mathrm{T}`$ the junction resistance and $`R_\mathrm{Q}=h/4e^2\approx 6.5`$ k$`\mathrm{\Omega }`$) this has been treated by the phase correlation theory with the harmonic oscillator bath as the environment , and it has been recently extended to describe in detail double junction structures and long 1D arrays . A general approach which includes strong tunnelling in 1D arrays is due to Golubev and Zaikin with emphasis on the low temperature limit. Our purpose here is to show experimental data on a few topologically different sets of arrays, and to discuss the theory in some of the less complicated cases.
CBT is based on partial blockade of tunnelling in the regime where the charging energy of single electrons, $`E_\mathrm{C}\equiv 2\frac{N-1}{N}\frac{e^2}{2C}`$ ($`N`$ is the number of junctions and $`C=C_i`$ is the capacitance of the ith junction in a homogeneous array), the thermal energy, $`k_\mathrm{B}T`$, and the electrostatic energy difference across the array, $`eV`$, where $`V`$ is the bias voltage, compete. The significant property is that the full conductance curve, $`G/G_\mathrm{T}`$, against $`V`$, or in more general terms against $`v\equiv eV/Nk_\mathrm{B}T`$, can be calculated with a universal result for not too large values of the ratio $`u\equiv E_\mathrm{C}/k_\mathrm{B}T`$, which is the expansion parameter. Here, $`G_\mathrm{T}\equiv R_\mathrm{T}^{-1}`$ is the conductance of the array at large transport voltages. The basic result is the linear one,
$$G/G_\mathrm{T}=1-ug(v)$$
(1)
with known universal corrections for not small values of $`u`$. Here $`g(x)=[x\mathrm{sinh}(x)-4\mathrm{sinh}^2(x/2)]/8\mathrm{sinh}^4(x/2)`$ is the function introduced in Ref. . The main result is that the full width at half minimum in this linear regime, $`V_{1/2,0}`$, has the value
$$V_{1/2,0}=5.439Nk_\mathrm{B}T/e$$
(2)
with again a known correction $`\mathrm{\Delta }V_{1/2}=V_{1/2}-V_{1/2,0}`$, which is proportional to the normalised depth of the conductance dip, $`\mathrm{\Delta }G/G_\mathrm{T}`$:
$$\mathrm{\Delta }V_{1/2}/V_{1/2,0}=0.39211\mathrm{\Delta }G/G_\mathrm{T}.$$
(3)
An important feature of CBT is that it is not significantly influenced by random background charges when $`u\lesssim 1`$. The half width $`V_{1/2}`$, which is proportional to $`T`$ at small values of $`u`$, serves as an absolute measure of temperature. $`\mathrm{\Delta }G/G_\mathrm{T}`$, in turn, is a secondary thermometric parameter, inversely proportional to $`T`$ (again in the linear regime):
$$\mathrm{\Delta }G/G_\mathrm{T}=E_\mathrm{C}/6k_\mathrm{B}T.$$
(4)
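Equations (1)-(4) are straightforward to verify numerically. The sketch below (illustrative code, not the analysis actually used for the data) recovers the 5.439 coefficient of Eq. (2) from the universal function $`g`$, and inverts Eq. (2), with the Eq. (3) correction, to turn a measured half width into a temperature:

```python
import numpy as np
from scipy.optimize import brentq

def g(x):
    """Universal dip shape entering Eq. (1)."""
    return (x * np.sinh(x) - 4.0 * np.sinh(x / 2.0)**2) / (8.0 * np.sinh(x / 2.0)**4)

# g(0) = 1/6, so the dip depth is u/6 (Eq. (4)) and half minimum occurs at g(v) = 1/12:
v_half = brentq(lambda v: g(v) - 1.0 / 12.0, 1.0, 10.0)
print(2.0 * v_half)                 # ~5.439, the coefficient of Eq. (2)

def temperature(V_half, N, dG_over_GT=0.0):
    """Invert Eq. (2), including the small Eq. (3) correction for a finite dip."""
    e, kB = 1.602176634e-19, 1.380649e-23
    V0 = V_half / (1.0 + 0.39211 * dG_over_GT)
    return e * V0 / (5.439 * N * kB)

# for the N = 8 1D arrays below, V_1/2 = 16.68 mV gives ~4.43 K, i.e. the +4%
# overestimate relative to the 4.25 K bath discussed in the text.
print(temperature(16.68e-3, N=8, dG_over_GT=0.01))
```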
The formulae (1 \- 4) are obeyed quantitatively with high precision in 1D arrays consisting of a large number of junctions. Yet in simpler structures, with, e.g., $`N<10`$ junctions in a 1D array, the influence of the external impedance on tunnelling is significant , and the conductance dip broadens. The origin of this is that Eqs. (1 \- 4) are based on the assumption that the arrays are perfectly voltage biased at the ends, which is not the case due to the nonzero external on-chip impedance. If we neglect the influence of the environment and non-sequential tunnelling, we can, however, analyse our arrays using Eqs. (1 \- 4) with no extra corrections, if we assume that the structures are fairly uniform. In simple cases, like with $`N`$ junction 1D arrays, these results are easy to obtain analytically , but with 2D arrays we have had to resort to Monte Carlo simulations.
Topologically different 1D and 2D arrays which we will discuss here are shown schematically in Fig. 1. The bias is connected between the left and the right end (busbars). The practical $`N`$-junction 1D arrays used for Coulomb blockade thermometry consist of $`M`$ nominally identical arrays in parallel. The parallel connection does not affect the results of Eqs. (1 - 4), but it lowers the impedance of the sensors by $`1/M`$ and in this way makes them more suitable for practical measurements . In the basic ”aligned” 2D structure the ”equipotential” islands of the neighbouring chains are connected to each other by a tunnel junction whose parameters, i.e., $`R_\mathrm{T}`$ and capacitance $`C`$, are equal to those of the rest of the junctions in the structure. In the ”diagonal” 2D structure we denote the number of connections at each busbar by $`M^{\prime }`$.
As the simplest illustrative example of comparing 1D and 2D structures, let us take 1D and 2D arrays with $`N=2`$, $`M=2`$. They are schematically shown in Fig. 1 (d). We denote such a 1D structure by II and the 2D structure by H. They were fabricated on nitridised silicon chips by standard electron beam lithography and two angle shadow evaporation of aluminum, with oxidation to create the tunnel barrier in between. The size of the junctions was nominally 0.2 $`\times `$ 0.6 $`\mu `$m<sup>2</sup>, and three different geometries of both types of samples were employed to check the influence of the internal structure of the array and of its termination. For example, island size was varied from 1 $`\mu `$m up to 10 $`\mu `$m.
Four II structures and seven H structures were measured at $`T=`$ 4.25 K. The inset of Fig. 2 shows a typical example of the conductance of a II structure and an H structure fabricated simultaneously on the same chip. In both cases $`\mathrm{\Delta }G/G_\mathrm{T}\approx 0.01`$, which implies very small corrections to the linear result of Eqs. (1) and (2) (see Eq. (3)). Although not very apparent from the two curves in the inset of Fig. 2, the generic features are: (i) the dip of the H structure is wider than that of the II structure, (ii) the shapes of the dips are not the same; simple scaling of the two curves does not make them overlap, and (iii) the dips are nearly equally high, when the junction size is the same in II and H. As will be discussed in what follows, (i) - (iii) are not consistent with the simple sequential tunnelling result with zero external impedance. The simple two junction result with no external impedance would predict $`V_{1/2,0}=`$ 4.0 mV at 4.25 K. The main result here, instead, is that $`V_{1/2}`$ of the II and H structures cluster around different values, both higher than $`V_{1/2,0}`$, i.e., at $`4.53\pm 0.02`$ mV ($`V_{1/2}/V_{1/2,0}=1.13\pm 0.005`$) for the II and at $`4.79\pm 0.05`$ mV ($`V_{1/2}/V_{1/2,0}=1.20\pm 0.01`$) for H, respectively. The internal geometry of the array did not influence the results. This result already suggests that the corrections to the basic results of Eqs. (1 - 4) are larger for 2D type structures.
And, to make a comparison between all the different array types of Fig. 1, several samples of different types with $`N=8`$ were measured. The number of parallel connections was $`M=9`$ in 1D and in ”aligned” 2D; the ”diagonal” 2D arrays had $`M^{\prime }=4`$. Simple theory (Eq. (2)) predicts $`V_{1/2,0}=`$ 16.0 mV. Typical conductance curves measured at 4.25 K are shown in Fig. 2 for 1D and ”aligned” 2D arrays. Also for these samples $`\mathrm{\Delta }G/G_\mathrm{T}\approx 0.01`$, well within the linear regime of Eqs. (1) and (2). The histogram of $`V_{1/2}`$ of all the samples measured is shown in Fig. 3 (a). The 1D arrays show a $`+4`$ % correction to $`V_{1/2,0}`$ ($`V_{1/2}=16.68\pm 0.04`$ mV on the average). The ”diagonal” 2D arrays have a width of $`V_{1/2}=18.20\pm 0.14`$ mV, i.e., it is 14 % wider than the theory would predict. The ”aligned” 2D arrays show $`V_{1/2}=19.0\pm 0.27`$ mV, which exceeds $`V_{1/2,0}`$ by 19 %. Thus, at least in the case of arrays with a relatively small charging peak ($`\mathrm{\Delta }G/G_\mathrm{T}\approx 1`$ %), the 1D structures are more suitable for thermometry if judged based on the faster decrease of the size ($`N`$) dependent correction observed. A few arrays with smaller junctions were also measured: the difference from the simple theory becomes smaller for all arrays, but we have not done this systematically enough for quantitative conclusions. Our basic conclusion on the $`N`$ dependence is further supported by the measurements of $`N=20`$, $`M=21`$ and $`N=30`$, $`M=31`$ 1D and ”aligned” 2D arrays: the data are collected in Fig. 3.
To interpret the results we apply the existing phase correlation theory of single electron tunnelling . Before doing that, we first examine the relation between the conductances of the structures H and II when the external impedance of these structures is assumed to be vanishingly small. We can then use the ”orthodox” theory of single electron tunnelling , and derive the conductance curve either analytically in the high temperature limit ($`u<1`$) , or numerically by a Monte Carlo simulation . The obvious result is that in both structures the conductance curve has the form given by Eq. (1), but the capacitance seen by one of the islands to the bias lines is $`2C`$ in II, whereas in H it is $`8C/3`$. Thus, $`E_\mathrm{C}`$ of H is smaller by a factor of 3/4, and so is the depth $`\mathrm{\Delta }G/G_\mathrm{T}=u/6`$. The width of the dip is not different in the two structures H and II. The corrections in $`V_{1/2}`$ observed experimentally are thus due to less trivial reasons.
The most obvious corrections to the basic orthodox result arise either from the higher order tunnelling events (co-tunnelling and other non-sequential processes) or from the impedance of the environment. The first possibility can be ruled out easily: the higher order events are important in the limit of small tunnel resistances ($`R_\mathrm{T}<R_\mathrm{Q}\approx 6.5`$ k$`\mathrm{\Omega }`$), but our results did not show correlation with the tunnel resistance of the samples, which varied from 15 k$`\mathrm{\Omega }`$ up to 50 k$`\mathrm{\Omega }`$. Therefore, we analysed our observations based on the phase correlation theory (i.e., perturbatively) in the H and II circuits.
The tunnelling rates $`\mathrm{\Gamma }_j^\pm `$ through the junction $`j`$ in the two directions ($`\pm `$) are given by
$$\mathrm{\Gamma }_j^\pm (\delta F_j^\pm )=(1/e^2R_\mathrm{T})\int _{-\mathrm{\infty }}^{+\mathrm{\infty }}𝑑E\frac{E}{1-\mathrm{exp}(-E/k_\mathrm{B}T)}P_j(\delta F_j^\pm -E).$$
(5)
Here $`\delta F_j^\pm `$ is the free energy change in the tunnelling event and $`P_j(E)=\frac{1}{2\pi \hbar }\int _{-\mathrm{\infty }}^{+\mathrm{\infty }}𝑑te^{[J_j(t)+i\frac{E}{\hbar }t]}`$ is the probability density for the tunnelling electron to exchange energy $`E`$ with its electromagnetic environment, where in turn, $`J_j(t)`$ is the phase-phase correlation function:
$$J_j(t)=2\int _0^{+\mathrm{\infty }}\frac{d\omega }{\omega }\frac{\mathrm{Re}[Z_{\mathrm{t},j}(\omega )]}{R_\mathrm{K}}\{\mathrm{coth}(\frac{\hbar \omega }{2k_\mathrm{B}T})[\mathrm{cos}(\omega t)-1]-i\mathrm{sin}(\omega t)\}.$$
(6)
The essential parameter is the real part of the impedance $`Z_{\mathrm{t},j}`$ seen by junction $`j`$. This can be obtained as the equivalent impedance of the surrounding circuit in parallel with the capacitance of this junction . All other junctions are described as pure capacitors, based on the assumption of sequential tunnelling.
To describe the environment of the junction circuit we have used a simple resistive impedance. The circuits of II are identical to what has been described in Refs. with resistance $`R_\mathrm{e}`$ at both ends of the double junction(s), see inset in Fig. 3 (c). Similarly we assume that there is an impedance $`R_\mathrm{e}`$ connected at each termination of the H structure, as shown also in the inset. The impedances $`\mathrm{Re}[Z_{\mathrm{t},j}(\omega )]`$ in Eq. (6) can then be easily calculated and the result is $`\mathrm{Re}[Z_{\mathrm{t},j}(\omega )]=\frac{R_\mathrm{e}}{2}\frac{1}{1+(\omega /\omega _\mathrm{c})^2}`$ for all the four junctions in II, and $`\mathrm{Re}[Z_{\mathrm{t},j}(\omega )]=\frac{9R_\mathrm{e}}{16}\frac{1+(1/3)(\omega /\omega _\mathrm{c})^2}{1+(5/4)(\omega /\omega _\mathrm{c})^2+(1/4)(\omega /\omega _\mathrm{c})^4}`$ for the symmetrically positioned four junctions in H, and $`\mathrm{Re}[Z_{\mathrm{t},j}(\omega )]=\frac{R_\mathrm{e}}{4}\frac{1}{1+(1/4)(\omega /\omega _\mathrm{c})^2}`$ for the interconnecting central junction in H. The cut-off frequency is defined as $`\omega _\mathrm{c}\equiv (R_\mathrm{e}C)^{-1}`$. We calculated the current-voltage characteristics of the II and H structures as functions of $`R_\mathrm{e}`$ with two values, $`C=6`$ fF and $`C=3`$ fF, corresponding to the estimated range of capacitances of the samples measured, and at $`T=4.25`$ K. Despite the fact that we used just a resistive environment, we can draw some, at least qualitative, conclusions based on the results depicted in Fig. 3 (c), where the half width of the corresponding conductance dip from the calculation has been plotted against $`R_\mathrm{e}`$. Since the ”unintentional” on-chip impedance is typically a fraction of the free space impedance $`\approx `$ 377 $`\mathrm{\Omega }`$, i.e., on the order of 100 $`\mathrm{\Omega }`$, we can assume that we are close to or below the maximum of $`V_{1/2}`$ on the $`R_\mathrm{e}`$ scale for both II and H structures. Therefore the width of the H structure well exceeds that of the II structure. If we assume that we are exactly at the maximum in both cases, II and H, even the quantitative agreement between the measured and the calculated half widths is good: from the calculation we obtain $`V_{1/2}=`$ 4.9 mV and $`V_{1/2}=`$ 4.6 mV for H and II structures, respectively. The coincidence between experiment and theory may be partly accidental, but gives grounds for the observed difference in the measured characteristics of H and II. The additional qualitative observation that the influence of the environment is most pronounced for very weak Coulomb blockade ($`\mathrm{\Delta }G/G_\mathrm{T}\approx 0.01`$) can be understood by noting that the position of the maximum of $`V_{1/2}`$ on the $`R_\mathrm{e}`$ scale is approximately inversely proportional to $`E_\mathrm{C}`$, but its magnitude is almost unchanged. The analysis of the larger 2D arrays in a dissipative environment is markedly more complicated and we have not tried that. The larger scatter in the data of 2D arrays in Fig. 3 (b) may indicate stronger $`R_\mathrm{e}`$ dependence than in 1D, similarly to what is predicted in Fig. 3 (c) for $`N=2`$.
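For orientation, these three expressions are easy to tabulate; the sketch below (illustration only) reproduces their zero-frequency limits $`R_\mathrm{e}/2`$, $`9R_\mathrm{e}/16`$ and $`R_\mathrm{e}/4`$, the larger dc impedance seen by the junctions in H being what broadens its conductance dip:

```python
import numpy as np

def ReZ_II(w, Re):        # all four junctions in II; w = omega / omega_c
    return (Re / 2.0) / (1.0 + w**2)

def ReZ_H_outer(w, Re):   # the four symmetrically positioned junctions in H
    return (9.0 * Re / 16.0) * (1.0 + w**2 / 3.0) / (1.0 + 1.25 * w**2 + 0.25 * w**4)

def ReZ_H_center(w, Re):  # the interconnecting central junction in H
    return (Re / 4.0) / (1.0 + 0.25 * w**2)

w = np.logspace(-2, 2, 200)
print(ReZ_II(0.0, 1.0), ReZ_H_outer(0.0, 1.0), ReZ_H_center(0.0, 1.0))  # 0.5 0.5625 0.25
```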
In summary, we have investigated the influence of the cross connections in arrays of small tunnel junctions on their current voltage characteristics. We have observed that the multiply linked circuit structure in 2D arrays leads to strong influence of the external electromagnetic environment on the tunnelling junction imbedded in the array. This affects drastically the thermometric properties of not very extended junction arrays in Coulomb blockade thermometry in favour of 1D structures.
We thank Antti Manninen, Andrei Zaikin, Per Delsing, Jari Kinaret, Klavs Hansen and Tobias Bergsten for discussions, and the National Graduate School in Materials Physics for support.
FIGURE CAPTIONS
Fig. 1. Schematic presentation of the different array structures investigated: (a) 1D, (b) ”aligned” 2D, (c) ”diagonal” 2D arrays, and (d) the double junction II (left) and the coupled double junction H (right) structures.
Fig. 2. Measured normalised conductance $`G/G_T`$ vs. bias voltage $`V`$ for the topologies (a) and (b) of Fig. 1 with $`N=8`$ and $`M=9`$, measured at 4.25 K. Inset shows the corresponding conductance curves of the II and H structures in Fig. 1 (d).
Fig. 3. Summary of the experimental and theoretical results: (a) Histogram of the measured $`V_{1/2}`$ of the samples with $`N=8`$, $`M=9`$, and $`M^{}=4`$ (at 4.25 K). The filled black bars are for 1D arrays, the grey ones are for ”diagonal” 2D arrays, and the open ones for the ”aligned” 2D arrays. (b) the half width of the conductance curve for 1D (filled circles) and ”aligned” 2D (triangles) arrays against $`N`$. The ”crossed” 2D data ($`N=8`$, $`M^{}=4`$) are shown by a cross. The lines are just guiding the eye. In (c) we show the circuit models used in the calculations, and the theoretically obtained half widths for the two structures II and H as functions of the external impedance (resistance) $`R_e`$. In all cases $`T=4.25`$ K, and all the junctions are identical with $`C=6`$ fF ($`C=3`$ fF) for the solid (dashed) lines. Changing either $`C`$ or $`T`$ simply scales the values on the horizontal axis; the main parameter in the calculation is $`[\mathrm{}/(R_\mathrm{e}C)]/(k_\mathrm{B}T)`$. |
no-problem/9910/hep-th9910125.html | ar5iv | text | # 1 Introduction and Summary
## 1 Introduction and Summary
One of the most striking elements in recent developments in our understanding of gauge theories and string theories is the ubiquitous appearance of S-dualities in theories with 8 or more supercharges. S-duality denotes the exact equivalence of a theory at weak coupling to another theory at strong coupling. It can be described in general as a set of identifications on the space of couplings of a theory (or theories). Well-known examples in four-dimensional field theory are the $`N=4`$ supersymmetric Yang-Mills theories and finite $`N=2`$ theories . In $`N=4`$ theories, for example, S-duality identifies theories with gauge couplings $`\tau `$ and $`1/\tau `$.
The S-dualities of classes of scale invariant $`N=2`$ gauge theories with simple gauge groups and product gauge groups were derived by embedding those theories in higher rank asymptotically free gauge theories. The coupling space of the scale invariant theory was realized as a submanifold of the Coulomb branch of the asymptotically free theory. These embedding arguments by themselves do not necessarily capture all possible S-dualities—there may be further identifications of the coupling space—since they only show that a submanifold of the Coulomb branch of the appropriate asymptotically free theory is some multiple cover of the true coupling space. One place where we know such further identifications must exist are in theories with $`SU(2)`$ gauge group factors: for in the limit that the other factors decouple, the remaining $`SU(2)`$ factor must have the full $`SL(2,𝐙)`$ duality of , rather than the subgroup $`\mathrm{\Gamma }^0(2)SL(2,𝐙)`$ which emerges from the embedding argument. The purpose of this letter is to explore these further S-dualities in a scale invariant $`N=2`$ gauge theory with $`SU(2)\times SU(2)`$ gauge group.
The specific theory we focus on has massless hypermultiplets in the representations $`(\mathrm{𝟐},\mathrm{𝟐})(\mathrm{𝟐},\mathrm{𝟏})(\mathrm{𝟐},\mathrm{𝟏})(\mathrm{𝟏},\mathrm{𝟐})(\mathrm{𝟏},\mathrm{𝟐})`$ of $`SU(2)\times SU(2)`$. It has two exactly marginal complex gauge couplings, $`\tau _1`$ and $`\tau _2`$, which are conveniently parameterized by $`f_k=e^{i\pi \tau _k}`$ (so that weak coupling is at $`f_k=0`$). The new S-dualities we find act as a 20-fold identification on $`𝐂^2\{f_1,f_2\}`$, and are described explicitly in eqns. (5558) below. The resulting coupling space has a single $`𝐙_3`$ orbifold fixed point, complex lines of $`𝐙_2`$ orbifold fixed points intersecting in an $`S_3`$ orbifold point, and no further strong coupling singularities. The weak coupling singularities have the expected structure: in the limit that one coupling vanishes, the S-duality group acts as $`SL(2,𝐙)`$ on the other coupling; nevertheless, the total coupling space is not simply the Cartesian product of two $`SL(2,𝐙)`$ fundamental domains.
This paper is organized as follows. In the next section we review the proof of the S-duality of the $`SU(2)`$ gauge theory , clarifying in what sense the $`SL(2,𝐙)`$ group of identifications on the coupling space can be recovered. In section 3 we study the low energy effective action on the Coulomb branch of our scale invariant $`SU(2)\times SU(2)`$ theory. We derive two different forms of the curve encoding this effective action by embedding the theory in either an $`SU(n)\times SU(n)`$ or an $`SU(2n)\times Sp(2n)`$ theory. Demanding that the resulting curves describe equivalent low energy physics implies a non-trivial mapping between the coupling parameters that appear in each description. In section 4 we use this mapping and the results of to prove that there are the “extra” S-duality identifications described above.
## 2 Deriving the SL(2,Z) duality of the SU(2) theory
The $`N=2`$ theory with $`SU(2)Sp(2)`$ gauge group and four massless fundamental hypermultiplets is a scale invariant theory with an exactly marginal coupling, the complex gauge coupling $`\tau `$, taking values in the classical coupling space $`_{cl}=\{\tau |\text{Im}\tau >0\}`$. In evidence was presented, in the form of the invariance of the low energy effective action, that the true coupling space of this theory should be the classical space further identified under the transformations $`T:\tau \tau +1`$ and $`S:\tau 1/\tau `$. This gives the coupling space as $`=_{cl}/SL(2,𝐙)`$, and $`SL(2,𝐙)`$ is said to be the S-duality group of the theory.<sup>1</sup><sup>1</sup>1We only discuss the S-duality action on marginal couplings and not on masses or other operators, and so will ignore the distinction between $`SL(2,𝐙)`$ and $`PSL(2,𝐙)`$ in what follows.
On the other hand, the duality identifications manifest in the low energy effective action of this $`SU(2)`$ gauge theory derived from either the M-theory construction of or the geometrical engineering of do not comprise the full $`SL(2,𝐙)`$ S-duality group conjectured in . It was shown in that the true coupling space of the scale invariant $`SU(2)`$ gauge theory can be derived from its different covering spaces represented by submanifolds of Coulomb branches of two different embeddings of this theory in higher rank asymptotically free theories. In this section we review this argument and clarify the relation between the geometry of the covering of the coupling space and the S-duality group.
Consider first the scale invariant $`SU(2)`$ theory with four massless hypermultiplets in the fundamental representation. The Coulomb branch of the theory is described by the curve
$$y^2=(v^2u)^24fv^4,$$
(1)
parameterized by the gauge coupling $`f`$ and the gauge invariant adjoint vev $`u`$, a local coordinate on the Coulomb branch. $`f`$ is a function of the coupling such that $`fe^{i\pi \tau }`$ at weak coupling.<sup>2</sup><sup>2</sup>2In the $`N=2`$ theories discussed here it is convenient to define the coupling as $`\tau =\frac{\vartheta }{\pi }+i\frac{8\pi }{g^2}`$, differing by a factor of two from the usual definition. Embedding this theory into the asymptotically free $`SU(3)`$ model with $`4`$ quarks and scaling on the Coulomb branch of the latter (while tuning appropriately the masses of the quarks) to the scale invariant $`SU(2)`$ theory one identifies the coupling space $`_{SU}=\{f\}`$ with $`𝐏^1`$ with two punctures and an orbifold point: a weak coupling singularity $`f=0`$, an “ultra-strong” coupling point at $`f=1/4`$, and a $`𝐙_2`$ orbifold singularity at $`f=\mathrm{}`$.
On the other hand, this scale invariant $`N=2`$ $`SU(2)`$ gauge theory can be thought of as an $`Sp(2)`$ theory with $`4`$ massless fundamental flavors, whose curve reads
$$y^2=x(xv)^24gx^3.$$
(2)
The coupling space $`_{Sp}=\{g\}`$ of this theory was derived in from its embedding in asymptotically free $`Sp(4)`$ theory with $`4`$ massless hypermultiplets by tuning on the Coulomb branch of the latter to the scale invariant $`Sp(2)`$ theory. One then finds that $`_{Sp}`$ is again the complex manifold $`𝐏^1`$ with two punctures and an orbifold point: a weak coupling singularity at $`g=0`$, an “ultra-strong” singularity at $`g=1/4`$, and a $`𝐙_2`$ orbifold singularity at $`g=\mathrm{}`$.
Both the $`SU(2)`$ and the $`Sp(2)`$ descriptions of the scale invariant theory must describe the same physics. In particular, their low energy effective actions described by the complex structure of the curves (1) and (2) must be the same. We therefore look for an $`SL(2,𝐂)`$ transformation on $`x`$ which maps the zeros of the right sides to one another. Of the six distinct such mappings only two map weak coupling to weak coupling, and imply the identification
$$f=\frac{4\sqrt{g}(1+2\sqrt{g})}{(1+6\sqrt{g})^2}.$$
(3)
Choosing different signs of the square root gives two maps between $`_{Sp}`$ and $`_{SU}`$, which induce the nontrivial identification $`𝒰`$ on $`_{SU}`$
$$𝒰(f)=\frac{\gamma +2}{(\gamma +3)^2}$$
(4)
where $`\gamma `$ is a root of
$$0=f\gamma ^2+\gamma +1.$$
(5)
This gives two maps from $`_{SU}`$ to itself, one for each $`\gamma `$ satisfying (5). Thus these identifications imply at least a three-fold identification on $`_{SU}`$ (the original point and its two images). In fact, a little algebra shows that the orbit of a generic point under $`𝒰`$ is just this set of three points, so $`_{SU}`$ is a triple cover of the true coupling space of the scale invariant $`SU(2)`$ theory. In particular, the identifications (4) map the “strong coupling” point $`f=1/4`$ to the $`f=0`$ weak coupling singularity, and map the $`𝐙_2`$ point $`f=\mathrm{}`$ to the point $`f=2/9`$. In addition, there is a new fixed point under these identifications, namely $`f=1/3`$, which it is easy to check is a $`𝐙_3`$ orbifold point. The net result is that with these further identifications, the coupling space becomes topologically a sphere with three special points: the weak coupling puncture (the image of $`f=0`$ or $`1/4`$), a $`𝐙_2`$ orbifold point (the image of $`f=2/9`$ or $`\mathrm{}`$), and a $`𝐙_3`$ orbifold point (the image of $`f=1/3`$). Since the map (4) is analytic, the true coupling space inherits a complex structure from that of the punctured $`f`$-sphere. The order of the orbifold points reflects the nature of the singularity in the complex structure at the punctures.
This argument shows that there are indeed more identifications on the coupling space than were apparent in either the $`SU(2)`$ form of the curve (1) or the $`Sp(2)`$ form of the curve (2). But it might not be clear from this argument how to actually see the $`SL(2,𝐙)`$ structure of the duality group. For this we need an intrinsic definition of what we mean by duality group. Since having an S-duality group $`\mathrm{\Gamma }`$ means that the coupling space is given by $`=_{cl}/\mathrm{\Gamma }`$, and the classical coupling space $`_{cl}`$ is simply connected, we can define
$$\mathrm{\Gamma }=\pi _1().$$
(6)
When $``$ has orbifold singularities, $`\pi _1()`$ should be understood in the orbifold sense , meaning that the generator $`U`$ of $`\pi _1()`$ corresponding to looping about a $`𝐙_n`$ orbifold point satisfies $`U^n=1`$.
The true $`SU(2)`$ coupling space deduced above has the complex structure of a sphere with one puncture, a $`𝐙_2`$ orbifold point, and a $`𝐙_3`$ orbifold point. Thus the S-duality group $`\pi _1()`$ has two generators which we can take to be $`U`$, generating loops around the $`𝐙_2`$ point, and $`V`$, generating loops around the $`𝐙_3`$ point, and satisfying $`U^2=V^3=1`$. There are no other constraints since we know that going around the weak coupling puncture is a $`\theta `$-angle rotation, which does not correspond to any orbifold identification. But $`SL(2,𝐙)`$, considered as an abstract infinite discrete group, can be presented as the group with two generators $`S`$ and $`T`$ satisfying only the relations $`S^2=(ST)^3=1`$. So, identifying $`S=U`$ and $`ST=V`$, we see that the S-duality group is isomorphic to $`SL(2,𝐙)`$.
## 3 Curves for the SU(2) x SU(2) theory
In preparation for our discussion of S-duality in the $`SU(2)\times SU(2)`$ scale invariant theory, we must first make a somewhat lengthy technical detour to derive useful forms for the curves whose complex structure encodes the low energy physics of the Coulomb branch of the theory. The different curves we need are those arising from viewing the $`SU(2)\times SU(2)`$ theory as part of an $`SU(n)\times SU(n)`$ series or as part of an $`SU(2n)\times Sp(2n)`$ series. The goal of this section is to derive an explicit map between the couplings of the two versions of the theory—the analog of eq. (3) above. This map is summarized at the end of this section for those who prefer to skip the technicalities.
We start by briefly reviewing the derivation of the $`SU\times SU`$ curves from an M5 brane configuration in M-theory. In subsection 3.2 we then derive curves for the $`SU\times Sp`$ series with fundamental matter using an M5 brane configuration on $`𝐑^7\times Q`$ where $`Q`$ is the Atiyah-Hitchin manifold, corresponding to a negatively charged O6 orientifold in a type IIA string picture. In subsection 3.3 we specialize to vanishing bare masses for the matter hypermultiplets in the $`SU(2)\times SU(2)`$ and $`SU(2)\times Sp(2)`$ curves, develop hyperelliptic forms for both curves, and then derive the mapping of parameters matching the two. In subsection 3.4 we summarize the results of this section relevant for our discussion of S-duality.
### 3.1 SU x SU curves
Consider the scale invariant $`SU(n)\times SU(n)`$ theory with one hypermultiplet in the bifundamental, $`n`$ in the first $`SU(n)`$ fundamental, and $`n`$ in the second $`SU(n)`$ fundamental. This can be realized as a IIA brane configuration by placing three NS5 branes along the $`x^{0\mathrm{}5}`$ directions separated in $`x^6`$ but located at equal values in $`x^{7\mathrm{}9}`$, and $`n`$ D4 branes along the $`x^{0\mathrm{}3}`$ and $`x^6`$ directions suspended between neighboring pairs of NS5 branes. The fundamental matter is incorporated by including $`n`$ semi-infinite D4 branes extending to the right and left in the $`x^6`$ direction.
It is easy to lift such a brane configuration to an M-theory curve
$$F(t,v)p(v)t^3+q(v)t^2+r(v)t+s(v)=0,$$
(7)
where $`v=x^4+ix^5`$, $`t=\mathrm{exp}\{(x^6+ix^{10})/R\}`$, $`x^{10}`$ is the eleventh dimension of radius $`R`$. $`p`$, $`q`$, $`r`$ and $`s`$ are polynomials of degree $`n`$:
$`p`$ $`=`$ $`{\displaystyle \underset{i=1}{\overset{n}{}}}(vm_i^{(1)}M),`$ (8)
$`q`$ $`=`$ $`B_1{\displaystyle \underset{j=1}{\overset{n}{}}}(vb_jM),`$ (9)
$`r`$ $`=`$ $`B_2{\displaystyle \underset{k=1}{\overset{n}{}}}(va_k+M),`$ (10)
$`s`$ $`=`$ $`{\displaystyle \underset{\mathrm{}=1}{\overset{n}{}}}(vm_{\mathrm{}}^{(2)}+M).`$ (11)
The leading coefficients of $`p`$ and $`s`$ are set to $`1`$ by rescaling $`t`$ and $`v`$, and by a shift in $`v`$ we set $`_ka_k=_jb_j=0`$. Interpreting the positions in the $`v`$ plane of the D4 branes as mass parameters or Coulomb branch vevs, we find that the $`m_i^{(1)}`$ and $`m_{\mathrm{}}^{(2)}`$ are the bare masses of the fundamentals of the first and the second $`SU`$ factors, $`M`$ is the bifundamental mass, and the traceless $`a_k`$ and $`b_j`$ are the eigenvalues of the adjoint vevs of the first and the second $`SU`$ factors.
The $`B_i`$ in (11) encode the gauge couplings through the relative asymptotic positions of the NS5 branes in the IIA picture. These positions are given by the roots of $`F(t,v)=0`$ for large $`v`$, that is, the roots of $`t^3+B_1t^2+B_2t+1=0`$. The relative positions of these roots are unaffected by the $`𝐙_3`$ transformation of the coefficients $`B_i`$
$$(B_1,B_2)(\omega ^pB_1,\omega ^{2p}B_2)p=1,2,$$
(12)
where $`\omega =e^{2\pi i/3}`$. Thus the space $`_{SUSU}`$ of inequivalent couplings that enters into the low-energy physics on the Coulomb branch of this $`SU(n)\times SU(n)`$ theory is the space $`𝐂^2\{B_1,B_2\}`$ modded by the $`𝐙_3`$ action (12). Furthermore, in addition to the $`𝐙_3`$ orbifold fixed point at $`B_1=B_2=0`$, this space has singularities whenever the asymptotic positions of the M5 branes collide—whenever $`0=2718B_1B_2B_1^2B_2^2+4B_1^3+4B_2^3`$—as well as weak-coupling singularities whenever one of the NS5 branes goes off to infinity: $`B_1\mathrm{}`$ or $`B_2\mathrm{}`$. Indeed, the space of $`SU\times SU`$ couplings can be parameterized by the $`𝐙_3`$-invariant combinations $`f_1B_1/B_2^2`$ and $`f_2B_2/B_1^2`$, which have been chosen to correspond to the normalization of the $`SU(2)`$ coupling $`f`$ used in (1), so that they are related to gauge couplings at weak coupling as $`\{f_1,f_2\}\{e^{i\pi \tau _1},e^{i\pi \tau _2}\}`$.
We can check this identification of the coupling parameters (as well as our implicit identification of the vevs and bare masses in the $`SU\times SU`$ curve) by decoupling one of the $`SU`$ factors by taking one of the NS5 branes off to infinity. For example, we can decouple the first $`SU`$ factor by setting $`B_2=f_2B_1^2`$ with $`f_2`$ finite, and sending $`B_1\mathrm{}`$. The $`SU\times SU`$ curve (7) then becomes, after rescaling $`tB_1t`$ and dividing by $`B_1^3`$,
$$0=t\left(p(v)t^2+\frac{q(v)}{B_1}t+\frac{r(v)}{B_1^2}\right).$$
(13)
The overall factor of $`t`$ is for the decoupled brane, and the remaining polynomial becomes, using (11),
$$0=\underset{i=1}{\overset{n}{}}(vm_i^{(1)}M)t^2+\underset{j=1}{\overset{n}{}}(vb_jM)t+f_2\underset{k=1}{\overset{n}{}}(va_k+M).$$
(14)
Multiplying by $`_{i=1}^n(vm_i^{(1)}M)`$, changing variables to $`y=2t_{i=1}^n(vm_i^{(1)}M)+_{j=1}^n(vb_jM)`$, shifting $`vv+M`$, and identifying $`M_i=m_i^{(1)}`$ for $`i=1,\mathrm{},n`$ and $`M_i=a_{in}2M`$ for $`i=n+1,\mathrm{},2n`$, gives the scale invariant $`SU`$ curve found in .
### 3.2 SU x Sp curves
Consider the scale invariant $`SU(2n)\times Sp(2n)`$ theory with one hypermultiplet in the bifundamental, $`2n`$ in the $`SU(2n)`$ fundamental, and 2 in the $`Sp(2n)`$ fundamental. This can be realized as a IIA brane configuration in the presence of an O6 orientifold plane of negative RR charge . The O6<sup>-</sup> plane is the fixed point of a $`𝐙_2`$ quotient which acts on the space-time coordinates as $`x^{4,5,6}x^{4,5,6}`$, and thus extends along the $`x^{0\mathrm{}3}`$ and $`x^{7\mathrm{}9}`$ directions, and is located at $`x^{4\mathrm{}6}=0`$. It is convenient to work on the double cover, by including mirror images for all branes, where the O6<sup>-</sup> plane has RR charge -8 in D6 brane units. The $`SU(2n)\times Sp(2n)`$ gauge theory is then constructed by placing two NS5 branes (and their mirror images) along the $`x^{0\mathrm{}5}`$ directions separated in $`x^6`$ but located at equal values in $`x^{7\mathrm{}9}`$, and $`2n`$ D4 branes along the $`x^{0\mathrm{}3}`$ and $`x^6`$ directions suspended between neighboring pairs of NS5 branes. The fundamental matter is incorporated by including D6 branes parallel to the O6<sup>-</sup> plane: two between the O6<sup>-</sup> plane and the first NS5 brane, and $`2n`$ between the two NS5 branes (as well as their mirror images).
Following , we can derive the curve for this brane configuration by first moving the D6 branes to left and right infinity, whereupon they drag D4 branes behind them upon passing through any NS5 branes . Also, we can represent the O6<sup>-</sup> plane as a “neutral” O6 plane by pulling in 2 D6 branes (and their mirror images) from infinity to cancel the O6<sup>-</sup> plane RR charge. Upon passing through the NS5 branes, the D6 branes create two D4 branes between the NS5 branes and four between the NS5 brane and the O6<sup>-</sup> plane (as well as their mirror images). Thus the final configuration is simply four NS5 branes crossed by $`2n+4`$ infinite D4 branes, all arranged symmetrically with respect to the origin of $`x^{4\mathrm{}6}`$.
It is easy to lift such a brane configuration to the M-theory curve
$$F(t,v)p(v)t^4+q(v)t^3+r(v)t^2+q(v)t+p(v)=0$$
(15)
where $`v=x^4+ix^5`$, $`t=\mathrm{exp}\{(x^6+ix^{10})/R\}`$, $`x^{10}`$ is the eleventh dimension of radius $`R`$; $`p`$, $`q`$ and $`r`$ are polynomials of degree $`2n+4`$, $`r(v)=r(v)`$, and the $`𝐙_2`$ identification is lifted to
$$(v,t)(v,1/t).$$
(16)
The condition that there be an O6<sup>-</sup> plane implies that this curve be non-singular on the Atiyah-Hitchin space. As discussed in , this in turns implies that $`(^{\mathrm{}}F/v^{\mathrm{}})|_{v=0}`$ has a zero of order $`4\mathrm{}`$ at $`t=1`$ for $`\mathrm{}=0,\mathrm{},3`$, giving
$`p`$ $`=`$ $`{\displaystyle \underset{i=1}{\overset{2}{}}}(vm_i)^2{\displaystyle \underset{j=1}{\overset{2n}{}}}(v\mu _jM),`$ (17)
$`q`$ $`=`$ $`4p[0]+2vp[1]+A_1v^2{\displaystyle \underset{i=1}{\overset{2}{}}}(vm_i){\displaystyle \underset{k=1}{\overset{2n}{}}}(va_kM),`$ (18)
$`r`$ $`=`$ $`6p[0]+2v^2(q[2]p[2])+A_2v^4{\displaystyle \underset{\mathrm{}=1}{\overset{n}{}}}(v^2b_{\mathrm{}}^2),`$ (19)
where $`p[n]`$ refers to the coefficient of $`v^n`$ in $`p(v)`$. Interpreting the positions in the $`v`$ plane of the D4 branes as mass parameters or Coulomb branch vevs, we find that the $`m_i`$ are the bare masses of the two $`Sp`$ fundamentals, $`\mu _j`$ are the masses of the $`SU`$ fundamentals, $`M`$ is the bifundamental mass, the traceless $`a_k`$ are the eigenvalues of the $`SU`$ adjoint vev, and the $`b_{\mathrm{}}`$ likewise for the $`Sp`$ adjoint vev.
The $`A_i`$ in (19) encode the gauge couplings through the relative asymptotic positions of the NS5 branes in the IIA picture. These positions are given by the roots of $`F(t,v)=0`$ for large $`v`$, that is, the roots of $`t^4+A_1t^3+A_2t^2+A_1t+1=0`$. The relative positions of these roots are unaffected by the $`𝐙_2`$ transformation of the $`A_i`$ coefficients
$$(A_1,A_2)(A_1,A_2).$$
(20)
Thus the space $`_{SUSp}`$ of inequivalent couplings that enters into the low-energy physics on the Coulomb branch of this $`SU(2n)\times Sp(2n)`$ theory is the space $`𝐂^2\{A_1,A_2\}`$ modded by the $`𝐙_2`$ action (20). Furthermore, in addition to the line of $`𝐙_2`$ orbifold fixed points at $`A_1=0`$, this space has strong coupling singularities whenever the asymptotic positions of the M5 branes collide, which is when $`A_2+2=\pm 2A_1`$ or $`A_1^2=4A_28`$, as well as weak coupling singularities whenever one of the M5 branes goes off to infinity: $`A_1\mathrm{}`$ or $`A_2\mathrm{}`$. Indeed, the space of $`SU\times Sp`$ couplings can be parameterized by the $`𝐙_2`$-invariant combinations $`g_1A_2/A_1^2`$ and $`g_2A_1^2/A_2^2`$, which have been chosen to correspond to the normalization of the $`SU`$ and $`Sp`$ couplings used in the last section, so that they are related to gauge couplings at weak coupling as $`\{g_1,g_2\}\{e^{i\pi \tau _1},e^{i\pi \tau _2}\}`$ where $`\tau _1`$ is the $`SU`$ coupling and $`\tau _2`$ the $`Sp`$ coupling.
We can check this identification of the coupling parameters (as well as our implicit identification of the vevs and bare masses) in the $`SU\times Sp`$ curve by decoupling the $`Sp`$ factor ($`g_1`$ fixed, $`A_i\mathrm{}`$) or the $`SU`$ factor ($`g_2`$ fixed, $`A_i\mathrm{}`$). Decoupling the $`Sp`$ factor leads to considerations very similar to those discussed above in the case of the $`SU\times SU`$ curve, so we consider only the decoupling of the $`SU`$ factor. This decoupling is also interesting since it involves passing from the $`\{v,t\}`$ space which is a double cover of the orbifold space, to the single-valued coordinates which resolve the orbifold singularity appropriately. We will need to do the same change of variables on the $`SU(2)\times Sp(2)`$ curve in the next subsection.
The $`SU\times Sp`$ curve (15) then becomes
$$0=t\left(q(v)t^2+r(v)t+q(v)\right).$$
(21)
The overall factor of $`t`$ is for the decoupled brane, and the remaining polynomial becomes
$$0=\sqrt{g_2}\underset{i=1}{\overset{2n+2}{}}(vM_i)t+\left[2\sqrt{g_2}\underset{i=1}{\overset{2n+2}{}}M_i+v^2\underset{\mathrm{}=1}{\overset{n}{}}(v^2b_{\mathrm{}}^2)\right]+\sqrt{g_2}\underset{i=1}{\overset{2n+2}{}}(v+M_i)\frac{1}{t},$$
(22)
where we have used (19), divided by $`A_2v^2/t`$, and defined $`M_i=m_i`$ for $`i=1,2`$ and $`M_i=a_{i2}`$ for $`i=3,\mathrm{},2n+2`$. In order to compare this curve with previously derived genus-$`n`$ $`Sp(2n)`$ curves, we must divide out the orbifold identifications (16). To do this, define the invariant coordinates
$`x`$ $`=`$ $`v^2`$
$`y`$ $`=`$ $`[t(1/t)]v^1`$
$`z`$ $`=`$ $`[t+(1/t)+2]v^2,`$ (23)
which are related by
$$y^2=xz^24z.$$
(24)
Note that the change of variables (3.2) is singular when $`v=0`$; it serves to resolve the orbifold singularities at $`v=0`$, $`t=\pm 1`$ so that the resulting space has the complex structure of the Atiyah-Hitchin space , which is the appropriate M-theory resolution of the O6<sup>-</sup> plane singularity . In these variables, the curve (22) becomes
$$0=xP_0(x)z+xP_1(x)y2P_0(x)+2\sqrt{g_2}\underset{i=1}{\overset{2n+2}{}}M_i+x\underset{\mathrm{}=1}{\overset{n}{}}(xb_{\mathrm{}}^2),$$
(25)
where we have defined $`P_0`$ and $`P_1`$ by $`\sqrt{g_2}_{i=1}^{2n+2}(vM_i)=P_0(v^2)+vP_1(v^2)`$. Making the change of variables $`\stackrel{~}{y}=P_0y+xP_1z2P_1`$ and $`\stackrel{~}{z}=xP_1y+xP_0z2P_0`$, (24) and (25) become
$`\stackrel{~}{z}`$ $`=`$ $`2\sqrt{g_2}{\displaystyle \underset{i=1}{\overset{2n+2}{}}}M_ix{\displaystyle \underset{\mathrm{}=1}{\overset{n}{}}}(xb_{\mathrm{}}^2),`$
$`x\stackrel{~}{y}^2`$ $`=`$ $`\stackrel{~}{z}^24g_2{\displaystyle \underset{i=1}{\overset{2n+2}{}}}(xM_i^2),`$ (26)
where we have used the identity $`P_0^2xP_1^2=g_2_{i=1}^{2n+2}(xM_i^2)`$. Eliminating $`\stackrel{~}{z}`$ in (3.2) then gives the $`Sp(2n)`$ curve found in .
### 3.3 SU(2) x SU(2) and SU(2) x Sp(2) scale invariant curves
We now specialize to the $`SU(2)\times SU(2)`$ and $`SU(2)\times Sp(2)`$ theories which are of interest for the S-duality argument.
Consider first the $`SU(2)\times SU(2)`$ scale invariant theory with zero bare masses for the hypermultiplets. From (7, 11) the Coulomb branch of this theory is described by
$$t^3v^2+B_1t^2(v^2u_1)+B_2t(v^2u_2)+v^2=0,$$
(27)
where $`u_1=b_1b_2`$ and $`u_2=a_1a_2`$ denote the Coulomb branch moduli of the two $`SU(2)`$’s. To study degenerations of (27) on the Coulomb branch it is convenient to represent it as a double cover of the complex $`t`$ plane:
$$v^2=\frac{t(B_1u_1t+B_2u_2)}{(t^3+B_1t^2+B_2t+1)}.$$
(28)
The change of variables
$$y=(t^3+B_1t^2+B_2t+1)v$$
(29)
takes (28) to the hyperelliptic form
$$y^2=t(B_1u_1t+B_2u_2)(t^3+B_1t^2+B_2t+1).$$
(30)
We pause here to discuss the validity of changes of variables like (29), which we will use again below on the $`SU(2)\times Sp(2)`$ curve, and which we also used in the decoupling checks of the last subsections. It is important that the complex structures of curves related by these changes of variables are the same since we will match the parameters of the $`SU(2)\times SU(2)`$ and $`SU(2)\times Sp(2)`$ curves by comparing the complex structures of their hyperelliptic forms. The issue is the apparent singularity of the change of variables (29) whenever $`t^3+B_1t^2+B_2t+1=0`$. In fact this change of variables, when properly understood, is not singular on the curve, and so the resulting hyperelliptic curve (30) is equivalent to (has the same complex structure as) the prior curve (28).
The key point lies in the treatment of the points at infinity on the curves. Let us generalize to a situation where we have a curve of the form
$$v^2\underset{j=1}{\overset{m}{}}(tf_j)=\underset{i=1}{\overset{m}{}}(te_i),$$
(31)
which we would like to think of as representing a Riemann surface of genus $`m1`$. Thought of as a curve embedded in $`𝐂^2=\{v,t\}`$, though, (31) is non-compact, going off to infinity as $`tf_j`$ and $`t\mathrm{}`$. We can compactify this curve by replacing the $`\{v,t\}`$ space with an appropriate projective space; the correct choice of projective space is determined by demanding that the genus of the resulting compact surface indeed be $`m1`$. This is achieved if each infinity $`tf_i`$ is replaced by a single point, while the $`t\mathrm{}`$ infinity is compactified at two distinct points. The appropriate projective space which does this is the direct product of two Riemann spheres, $`𝐏^1\times 𝐏^1`$, which can be defined as $`𝐂^4=\{u,v,s,t\}`$ modulo the identifications $`\{u,v,s,t\}\{u,v,\lambda s,\lambda t\}`$ for $`\lambda 𝐂^{}`$, and $`\{u,v,x,z\}\{\mu u,\mu v,s,t\}`$ for $`\mu 𝐂^{}`$. The curve is homogenized to $`v^2_{j=1}^m(tf_js)=u^2_{i=1}^m(te_is)`$. The infinities of the $`\{v,t\}=𝐂^2`$ space are compactified to two (intersecting) copies of $`𝐏^1`$ in $`𝐏^1\times 𝐏^1`$, while the homogeneous curve intersects these “infinities” at the points $`\{u,v,s,t\}=\{0,1,1,f_j\}`$ (corresponding to $`tf_j`$) and $`\{1,\pm 1,0,1\}`$ (corresponding to $`t\mathrm{}`$).
We are interested in the change of variables $`y=v_{j=1}^m(tf_j)`$ which in homogeneous coordinates can be written $`y=(v/u)_{j=1}^m(tf_js)`$, $`x=t`$, $`z=s`$. The $`𝐏^1\times 𝐏^1`$ identifications on $`\{u,v,s,t\}`$ imply $`\{y,x,z\}\{\lambda ^my,\lambda x,\lambda z\}`$ for $`\lambda 𝐂^{}`$, which defines a point in the weighted projective space $`𝐏_{(m,1,1)}^2`$. This is a smooth space except for a $`𝐙_m`$ orbifold singularity at the point $`\{y,x,z\}=\{1,0,0\}`$. The change of variables thought of as a map from $`𝐏^1\times 𝐏^1𝐏_{(m,1,1)}^2`$ is singular on the $`𝐏^1`$ at $`v=\mathrm{}`$ which is mapped to the $`𝐙_m`$ orbifold point of $`𝐏_{(m,1,1)}^2`$, except for the points $`\{v,t\}=\{\mathrm{},f_i\}`$ which are blown up to the $`𝐏^1`$ of points $`\{y,x,z\}=\{,f_i,1\}`$.
The image of the homogeneous curve under this change of variables is the genus $`m1`$ hyperelliptic curve
$$y^2=\underset{i=1}{\overset{m}{}}(xe_iz)(xf_iz),$$
(32)
which does not intersect the $`𝐙_m`$ orbifold point of $`𝐏_{(m,1,1)}^2`$ if $`_ie_if_i0`$. In particular, the $`𝐏^1\times 𝐏^1`$ curve approaches the points $`\{u,v,s,t\}=\{0,1,1,f_j\}`$ in such a way that their images in $`𝐏_{(m,1,1)}^2`$ miss the orbifold point. Therefore the change of variables is a holomorphic mapping between the abstract Riemann surfaces, and so equates their complex structures.
In the case of the $`SU(2)\times SU(2)`$ curve (28) the $`f_i`$ are roots of $`t^3+B_1t^2+B_2t+1=0`$, while the $`e_i`$ are $`0`$, $`(B_2u_2)/(B_1u_1)`$, and $`\mathrm{}`$. The branch points at zero and infinity are harmless as can be seen by the fact that an $`SL(2,𝐂)`$ transformation on the $`\{s,t\}`$ $`𝐏^1`$ preserves the complex structure of the curve and can be used to move all branch points to finite points on the $`t`$ plane.
We return now to discuss the $`SU(2)\times Sp(2)`$ theory. From (15) and (19) the curve of the scale invariant $`SU(2)\times Sp(2)`$ theory with zero hypermultiplet masses is given by
$$v^2t^4+A_1(v^2v_1)t^3+A_2(v^2v_2)t^2+A_1(v^2v_1)t+v^2=0,$$
(33)
where $`v_1=a_1a_2`$ and $`v_2=b_1^2`$ are Coulomb branch moduli of the $`SU`$ and $`Sp`$ factor respectively. This curve is of the form (31) with $`m=4`$ (and one $`f_j`$ at infinity), thus describing a genus 3 Riemann surface (as is also clear from its brane construction). It was supposed to be equivalent to the $`SU(2)\times SU(2)`$ curve, which was genus 2. The reason for the mismatch is that the $`SU(2)\times Sp(2)`$ curve was constructed on the double cover of the O6<sup>-</sup> plane orbifold space.
Changing to single-valued variables on the orbifold space via (3.2), which parameterize the non-singular Atiyah-Hitchin space (the M theory resolution of the space transverse to the O6<sup>-</sup> plane ), gives the curve (33) as the intersection of the surfaces
$`y^2`$ $`=`$ $`xz^24z,`$
$`0`$ $`=`$ $`x((xz2)^22)+A_1(xv_1)(xz2)+A_2(xv_2).`$ (34)
Change variables by $`s=xz2`$, leaving $`x`$ and $`y`$ unchanged. Then the curve becomes the intersection
$`xy^2`$ $`=`$ $`s^24,`$
$`0`$ $`=`$ $`x(s^22)+A_1(xv_1)s+A_2(xv_2).`$ (35)
This change of variables is singular at $`x=0`$ which is a direction at infinity on the curve. As in the discussion above, as long as we treat the “points” at infinity correctly so as to preserve the genus of the curve, the complex structure will be preserved by the change of variables. Eliminating $`x`$ from (3.3) gives the curve
$$y^2=\frac{(s^24)(s^2+A_1s+A_22)}{(A_1v_1s+A_2v_2)}.$$
(36)
($`x`$ was the right variable to eliminate since only $`x`$ is single valued on the Atiyah-Hitchin space, which is a double cover of the $`y`$-$`z`$ plane.) Finally, by the type of change of variables discussed above, $`w=(A_1v_1s+A_2v_2)y`$, the genus 2 curve emerges in the hyperelliptic form
$$w^2=(A_1v_1s+A_2v_2)(s^24)(s^2+A_1s+A_22).$$
(37)
Since the $`SU(2)\times SU(2)`$ and $`SU(2)\times Sp(2)`$ theories are physically identical, the two genus 2 hyperelliptic curves (30) and (37) must have the same complex structure as a function of the couplings and vevs. Thus there must be an $`SL(2,𝐂)`$ transformation relating $`t`$ and $`s`$ which maps the branch points of (30) to those of (37). If we map the branch points at infinity to each other, and the branch point at $`s=2`$ to the one at $`t=0`$, then we must find a linear transformation $`4\beta t=s+2`$ and a map between the vevs and couplings which satisfies
$$(A_1v_1s+A_2v_2)(s2)(s^2+A_1s+A_22)(B_1u_1t+B_2u_2)(t^3+B_1t^2+B_2t+1),$$
(38)
for some $`\beta `$. Since the theory is scale-invariant, we can choose an arbitrary relative scaling of the $`u`$ and $`v`$ vevs so that $`u_1=v_1`$. We then find the following relations between couplings,
$`A_1`$ $`=`$ $`8+4\beta B_1,`$
$`A_2`$ $`=`$ $`30+24\beta B_1+16\beta ^2B_2,`$
$`0`$ $`=`$ $`1+\beta B_1+\beta ^2B_2+\beta ^3,`$ (39)
while the vevs are related by $`v_1=u_1`$ and $`(A_2/A_1)v_2=2u_1+4\beta (B_2/B_1)u_2`$. These matching relations are the main result of this section. They can be inverted to read
$`B_1`$ $`=`$ $`(A_18)/(4\alpha ),`$
$`B_2`$ $`=`$ $`(A_26A_1+18)/(16\alpha ^2),`$
$`16\alpha ^3`$ $`=`$ $`2A_1A_22,`$ (40)
for the couplings, with the vevs related by $`u_1=v_1`$ and $`(B_2/B_1)u_2=[(A_2/A_1)v_22v_1]/(4\alpha )`$, corresponding to a map $`4\alpha t=s+2`$ between the curves.
Finally, one can easily check that in the weak coupling limits, the above matching of parameters reduces to the appropriate identifications. For example, decoupling the $`SU(2)`$ factor of the $`SU(2)\times Sp(2)`$ theory by sending $`A_i\mathrm{}`$ keeping $`g_2=A_1^2/A_2^2`$ fixed (and thus $`g_10`$), (3.3) implies that the $`SU(2)\times SU(2)`$ couplings go as
$`f_1{\displaystyle \frac{B_1}{B_2^2}}`$ $``$ $`{\displaystyle \frac{4\sqrt{g_2}(1+2\sqrt{g_2})}{(1+6\sqrt{g_2})^2}},`$
$`f_2{\displaystyle \frac{B_2}{B_1^2}}`$ $``$ $`0,`$ (41)
which recovers precisely the mapping (4) between the $`SU(2)`$ and $`Sp(2)`$ couplings used in section 2, and is a non-trivial consistency check on the calculations of this section.
### 3.4 Summary of SU(2) x SU(2) low energy coupling spaces
We now summarize what we have just derived about the space of couplings of the $`SU(2)\times SU(2)`$ theory as they appear in the low energy effective actions on the Coulomb branch described by the $`SU(2)\times SU(2)`$ and $`SU(2)\times Sp(2)`$ curves. We denote these two spaces of couplings by $`_{SUSU}`$ and $`_{SUSp}`$ respectively.
#### 3.4.1 $`_{SUSU}`$
The $`SU(2)\times SU(2)`$ low energy effective action is described by two complex couplings $`B_1`$ and $`B_2`$ which parameterize an $`_{SUSU}𝐂^2/S_3`$ orbifold space. The $`S_3`$ orbifold identifications are generated by the $`𝐙_3`$ element
$$𝒫:(B_1,B_2)(\omega B_1,\omega ^2B_2),$$
(42)
where $`\omega `$ is a cube root of unity, as well as by the $`𝐙_2`$ element
$$𝒬:(B_1,B_2)(B_2,B_1)$$
(43)
which simply interchanges the two $`SU(2)`$ factors. Resulting from the $`S_3`$ identifications, $`_{SUSU}`$ has three lines of $`𝐙_2`$ orbifold singularities when $`B_1=\omega B_2`$ which intersect in an $`S_3`$ orbifold point at $`B_1=B_2=0`$. $`_{SUSU}`$ also has strong-coupling singularities when
$$0=2718B_1B_2B_1^2B_2^2+4B_1^3+4B_2^3$$
(44)
as well as weak-coupling singularities when $`B_1\mathrm{}`$ or $`B_2\mathrm{}`$. The $`𝐙_3`$-invariant couplings
$$f_1\frac{B_1}{B_2^2}\mathrm{and}f_2\frac{B_2}{B_1^2},$$
(45)
are related to the $`\{\tau _1,\tau _2\}`$ gauge couplings of the two $`SU(2)`$ factors by $`\{f_1,f_2\}\{e^{i\pi \tau _1},e^{i\pi \tau _2}\}`$ at weak coupling.
#### 3.4.2 $`_{SUSp}`$
The $`SU(2)\times Sp(2)`$ curve, though describing the same theory, has a very different space of couplings, $`A_1`$ and $`A_2`$, parameterizing the orbifold space $`_{SUSp}𝐂^2/𝐙_2`$. The $`𝐙_2`$ identification acts as
$$:(A_1,A_2)(A_1,A_2),$$
(46)
and gives rise to a line of $`𝐙_2`$ orbifold fixed points in $`_{SUSp}`$ when $`A_1=0`$. In addition, $`_{SUSp}`$ has strong coupling singularities when
$$A_2+2=\pm 2A_1\text{or}A_1^2=4A_28,$$
(47)
as well as weak-coupling singularities when $`A_1\mathrm{}`$ or $`A_2\mathrm{}`$. The $`𝐙_2`$-invariant couplings
$$g_1\frac{A_2}{A_1^2}\mathrm{and}g_2\frac{A_1^2}{A_2^2},$$
(48)
are related to the $`\{\tau _1,\tau _2\}`$ gauge couplings of the $`SU(2)`$ and $`Sp(2)`$ factors, respectively, by $`\{g_1,g_2\}\{e^{i\pi \tau _1},e^{i\pi \tau _2}\}`$ at weak coupling.
#### 3.4.3 $`_{SUSU}_{SUSp}`$ map
Finally, the low energy $`SU(2)\times SU(2)`$ and $`SU(2)\times Sp(2)`$ descriptions of the theory are found to be equivalent as long as the parameters of one theory are mapped to those of the other by $`𝒯:_{SUSp}_{SUSU}`$ defined by
$$\left(\begin{array}{c}B_1\\ B_2\end{array}\right)=𝒯\left(\begin{array}{c}A_1\\ A_2\end{array}\right)\left(\begin{array}{c}(A_18)/(4\alpha )\\ (A_26A_1+18)/(16\alpha ^2)\end{array}\right)$$
(49)
where
$$16\alpha ^3=2A_1A_22,$$
(50)
or its inverse
$$\left(\begin{array}{c}A_1\\ A_2\end{array}\right)=𝒯^1\left(\begin{array}{c}B_1\\ B_2\end{array}\right)\left(\begin{array}{c}8+4\beta B_1\\ 30+24\beta B_1+16\beta ^2B_2\end{array}\right)$$
(51)
with
$$0=1+\beta B_1+\beta ^2B_2+\beta ^3.$$
(52)
## 4 S-duality in the SU(2) x SU(2) theory
We will now derive the enlarged S-duality group of the $`SU(2)\times SU(2)`$ theory. The idea is a straightforward generalization of the strategy used for a single $`SU(2)`$ factor reviewed in section 2: the $`SU(2)\times SU(2)`$ model can be reached by flowing down from both the $`SU(n)\times SU(n)`$ and $`SU(2n)\times Sp(2n)`$ series. Denoting by $``$ the true coupling space of the $`SU(2)\times SU(2)`$ theory, we therefore expect to find some multiple cover $`_{SUSU}`$ of $``$ as the coupling space realized by flowing down in the $`SU(n)\times SU(n)`$ series, and a different multiple cover $`_{SUSp}`$ by flowing down in the $`SU(2n)\times Sp(2n)`$ series. We then use the equivalence of the two descriptions of the theory to deduce a map identifying $`_{SUSU}`$ with $`_{SUSp}`$. If this map is not a simple one-to-one map, then we thereby deduce extra identifications leading to the “smaller” coupling space $``$ and therefore a larger S-duality group $`\pi _1()`$.
The determination of $`_{SUSU}`$ and $`_{SUSp}`$ is easy, as we have already done it in . There we showed that the embedding argument leads to a coupling space for the $`SU(n)\times SU(n)`$ theory which is precisely the $`_{SUSU}`$ described above in eqns. (4244), and likewise that the $`SU(2n)\times Sp(2n)`$ theory coupling space is the $`_{SUSp}`$ described above in eqns. (4647). The map between $`_{SUSU}`$ and $`_{SUSp}`$ is then the one derived at length in the last section, and summarized in eqns. (4952). As this map is obviously not one-to-one, we have therefore found new S-duality identifications on the $`SU(2)\times SU(2)`$ coupling space, which is what we aimed to show.
The remainder of this section will be devoted to understanding some properties of the $`_{SUSU}_{SUSp}`$ map $`𝒯`$ (4952), and thereby of the the resulting enlarged S-duality group, $`\mathrm{\Gamma }=\pi _1()`$. We will refer to the $`𝐂^2`$ of $`B_i`$ parameters as $`𝐂_B^2`$ and of $`A_i`$ parameters $`𝐂_A^2`$. To see algebraically the extra identifications induced on $`_{SUSU}`$ by the map $`𝒯:𝐂_A^2𝐂_B^2`$, we use it to construct maps from $`𝐂_B^2`$ to itself. Note first that $`𝒯`$ and $`𝒯^1`$ each have three image points corresponding to the three different values that $`\alpha `$ or $`\beta `$ can take. In the case of $`𝒯`$, the three $`\alpha `$’s differ only by cube root of unity phases, and the three image points in $`𝐂_B^2`$ are related by the $`𝐙_3`$ identification $`𝒫`$ (42). One the other hand, the image of $`𝒯^1`$ is generically three distinct points in $`𝐂_A^2`$ unrelated by the $`𝐙_2`$ identification $``$ (46). However, the images under $`𝒯^1`$ of three points in $`𝐂_B^2`$ related by $`𝒫`$ are all the same three points in $`𝐂_A^2`$, since a $`𝒫`$ action on the $`B_i`$ just rotates the roots of (52), leaving (51) invariant.
Since the $`𝒯`$ map commutes with $`𝒫`$, we can formulate the identifications directly on $`𝐂_f^2𝐂_B^2/\{𝒫\}`$ with coordinates $`f_i`$ given by (45). In these variables $`𝒯`$ becomes
$$\left(\begin{array}{c}f_1\\ f_2\end{array}\right)=𝒯\left(\begin{array}{c}A_1\\ A_2\end{array}\right)=\left(\begin{array}{c}4(A_18)(2A_1A_22)(A_26A_1+18)^2\\ (A_26A_1+18)(A_18)^2\end{array}\right),$$
(53)
and $`𝒯^1`$ reads
$$\left(\begin{array}{c}A_1\\ A_2\end{array}\right)=𝒯^1\left(\begin{array}{c}f_1\\ f_2\end{array}\right)=\left(\begin{array}{c}(8f_2+4\gamma )/f_2\\ (30f_2+24\gamma +16\gamma ^2)/f_2\end{array}\right),$$
(54)
where $`\gamma `$ is a root of
$$0=f_1\gamma ^3+\gamma ^2+\gamma +f_2.$$
(55)
Note that while $`𝒯`$ is a single map, $`𝒯^1`$ is generically three maps, one for each $`\gamma `$ satisfying (55). Nevertheless, it is easy to check that $`𝒯𝒯^1`$ maps points in $`𝐂_f^2`$ to themselves, and it follows that repeated applications of $`𝒯`$ and $`𝒯^1`$ generate no further identifications between $`𝐂_A^2`$ and $`𝐂_f^2`$.
Since $`_{SUSU}𝐂_f^2/\{𝒬\}`$ and $`_{SUSp}𝐂_A^2/\{\}`$, where $`𝒬`$ and $``$ are the $`𝐙_2`$ identifications $`𝒬:f_1f_2`$, and $`:A_1A_1`$, further identifications will arise upon combining $`𝒯`$ with $``$ and $`𝒬`$. It is algebraically easiest to work on $`𝐂_f^2`$ where there are three generators of non-trivial maps involving $`𝒯`$, namely $`𝒮_i𝒯𝒯^1`$. Explicitly, this map reads
$$𝒮_i\left(\begin{array}{c}f_1\\ f_2\end{array}\right)=\left(\begin{array}{c}(4f_2+\gamma )(3f_2+2\gamma +\gamma ^2)(6f_2+3\gamma +\gamma ^2)^2\\ f_2(6f_2+3\gamma +\gamma ^2)(4f_2+\gamma )^2\end{array}\right),$$
(56)
where $`\gamma `$ is a root of (55). The subscript on $`𝒮`$ denotes the three different choices of roots for $`\gamma `$, which lead generically to three different image points in $`𝐂_f^2`$. Thus the new S-duality identifications $`𝒮_i`$ that we have found imply at least a four-fold identification on $`𝐂_f^2`$ (the original point and its three images). Furthermore, the orbit of a given point under repeated applications of the $`𝒮_i`$ can be shown to be just this set of four points. The $`𝐙_2`$ identification $`𝒬`$ on $`𝐂_f^2`$ does not commute with the $`𝒮_i`$, though it can be shown that for a given $`𝒮_i`$, there exist an $`𝒮_j`$ and an $`𝒮_k`$ such that $`𝒮_i𝒬𝒮_j𝒬𝒮_k𝒬=1`$. The minimum orbit of a generic point satisfying these relations comprises 20 points, as shown in Fig. 1. In fact this is the generic orbit in $`𝐂_f^2`$ under the complete set of identifications generated by $`𝒮_i`$ and $`𝒬`$, as checked numerically.
In summary, because of the algebraic complexity of the $`𝒮_i`$ generators, we have been unable to find a simpler description of the resulting true coupling space $``$ than
$$𝐂_f^2/\{𝒬,𝒮_i\}$$
(57)
with punctures at points satisfying (44) which reads in the $`f_i`$ coordinates:
$$0=14f_14f_2+18f_1f_227f_1^2f_2^2,$$
(58)
as well as weak coupling singularities when $`f_1f_2=0`$. For clarity, we emphasize that $`\pi _1()`$—the S-duality group of $``$—is not just the group generated by $`𝒬`$ and $`𝒮_i`$. There are many reasons for this: $`𝐂_f^2`$ already has $`𝐙_3`$ orbifold points at $`f_1=f_2=0`$ and $`f_1=f_2=\mathrm{}`$; $`𝒬`$ and $`𝒮_i`$ act with fixed points; there are also strong and weak coupling punctures on $`𝐂_f^2`$; finally, $`𝒬`$ and $`𝒮_i`$ do not even generate a group since there is no consistent labeling of the $`𝒮_i`$—the three roots of (55)—on the whole of $`𝐂_f^2`$.
We can, however, argue that $``$ is not just the Cartesian product of two copies of the fundamental domain of $`SL(2,𝐙)`$ as one might naively have guessed. If it were this product, $``$ would have (complex) lines of $`𝐙_3`$ fixed points, whereas it is straightforward to check that $`𝒬`$ and $`𝒮`$ only have isolated $`𝐙_3`$ fixed points which occur when $`(f_1,f_2)`$ is one of
$$(\frac{1}{3},0),(0,\frac{1}{3}),(\frac{1}{3},\frac{1}{3}),(\frac{37+i\sqrt{3}}{98},\frac{37i\sqrt{3}}{98}),(\frac{37i\sqrt{3}}{98},\frac{37+i\sqrt{3}}{98}).$$
(59)
In fact, these five points are all identified under $`𝒬`$ and $`𝒮_i`$, so there is only a single $`𝐙_3`$ fixed point in $``$.
Note that the first two entries in (59) are the $`𝐙_3`$ points, identified in section 2, on the coupling space of one $`SU(2)`$ factor in the limit where the other is decoupled. The orbit of the $`𝐙_2`$ points $`f=2/9`$, $`\mathrm{}`$, of a single $`SU(2)`$ factor also includes points at strong coupling:
$$(\frac{2}{9},0),(0,\frac{2}{9}),(\mathrm{},0),(0,\mathrm{}),(\frac{1}{3},\frac{1}{3}),(1,1),(\frac{5}{16},\frac{8}{25}),(\frac{8}{25},\frac{5}{16}).$$
(60)
In fact, there are whole (complex) lines of $`𝐙_2`$ fixed points. Though they are hard to characterize explicitly, they all seem to be images of the $`𝒬`$ fixed line $`f_1=f_2`$ under $`𝒮_i`$. These images intersect in an $`S_3`$ orbifold point whose orbit in $`𝐂_f^2`$ is
$$(\mathrm{},\mathrm{}),(\frac{3}{8},\frac{1}{3}),(\frac{1}{3},\frac{3}{8}),(\frac{1}{2},\frac{1}{2}).$$
(61)
These examples illustrate the interesting feature of the $`𝒬`$ and $`𝒮_i`$ maps that they equate “strong coupling” punctures—$`f_i`$ satisfying (58)—with weak coupling punctures satisfying $`f_1f_2=0`$. This is true in general: all strong coupling punctures are so identified with weak coupling points. To see this, note that $`f_1`$ and $`f_2`$ satisfy (58) precisely when two roots of (55) coincide. A double root of $`\gamma `$ satisfies (55) and its first derivative: $`0=3f_1\gamma ^2+2\gamma +1`$. Rewriting this as $`f_1\gamma ^3=(2\gamma ^2+\gamma )/3`$ and substituting into (55) gives $`\gamma ^2+2\gamma +3f_2=0`$. But, by (56), this implies that $`𝒮_i(f_1)=0`$ for this choice of the root $`\gamma `$, and thus that the strong coupling puncture is mapped to a weak coupling singularity. Thus S-duality identifications remove all “ultra-strong” coupling points from $``$, just as in the case of the $`SL(2,𝐙)`$ duality of a single $`SU(2)`$ factor.
Also, the point $`(1/3,1/3)`$ is special as its image under $`𝒮_i`$ depends on how one approaches it. Generically, its image is the point $`(1/3,0)`$, but if one approaches it along the particular direction $`(f_1,f_2)=(1/3)(1+ϵ,1+ϵ+kϵ^{3/2})`$, then its image under $`S_i`$ is the whole $`(f,0)`$ plane, where $`f`$ depends on $`k`$.
Finally, one should bear in mind that all our arguments only show that the extra identifications on $`𝐂_f^2`$ leading to $``$ are necessary, but do not imply that there are no further identifications on $``$. In principle one could rule out the existence of further identifications from the low energy effective action on the Coulomb branch by showing that the low energy data is different at distinct points of $``$. Though this is beyond the scope of the present paper, one piece of evidence for there being no further identifications on $``$ is the fact, checked above (3.3), that in the limit where one of the $`SU(2)`$ factors decouples, $``$ already encodes the full $`SL(2,𝐙)`$ S-duality group of the other $`SU(2)`$ factor.
## Acknowledgments
It is a pleasure to thank R. Maimon, J. Mannix, A. Shapere, G. Shiu, H. Tye and P. Yi for helpful discussions. This work is supported in part by NSF grants PHY94-07194, PHY95-13717, and PHY97-22022. The work of PCA is supported in part by an A. P. Sloan Foundation fellowship. |
no-problem/9910/physics9910051.html | ar5iv | text | # Multiphoton Radiative Recombination of Electron Assisted by Laser Field
## I INTRODUCTION
It is well known that the laser plasma emits photons with frequencies which are different from the frequency of the incident laser beam. For a number of applications an emission of high energy photons is the most interesting phenomenon. The known mechanisms which can be responsible for the high energy photo-production could be identified as the following three ones: the high harmonic generation, laser stimulated bremsstrahlung and laser assisted recombination. These processes differ by the initial and final states of the active electron. In the harmonic generation the initial state electron occupies a (laser-dressed) atomic bound state, usually it is the ground atomic state. In the final state of this reaction the electron can occupy either the same bound state, or some excited or even ionized state. The laser stimulated bremsstrahlung is a free-free transition during which the electron is scattered by an atom in the laser field. During the scattering the electron emits a high-energy quantum. In the laser assisted recombination (LAR) the electron starts in the laser-dressed continuum, but ends up in the bound state. The process of harmonic generation is currently studied very actively with important advancements both in theory and experiment (for reviews see Refs. ). The laser stimulated bremsstrahlung plays the very important role in plasma physics, see recent experimental and theoretical works.
The subject of the present study is the LAR process. As far as we know, it has not yet received a proper attention in the literature, although its importance for kinetics of laser plasma and its emission spectrum was indicated before . From the point of view of the high-energy photo-production the LAR possesses an advantage over the stimulated bremsstrahlung because in LAR the electron impact energy is totally transferred to the high energy quanta.
The conventional (laser-field free) radiative recombination of the continuum electron to the bound state is a well studied process which is inverse to the photoionization. The frequency of the emitted photon is uniquely defined by the energy conservation law. When a similar process occurs in the presence of an intensive laser field the radiation spectrum becomes much more richer since the recombination may be accompanied by absorption or emission of laser quanta. Therefore the emitted photon spectrum represents a sequence of equidistant lines separated by the laser frequency $`\omega `$. The recent review by Hahn on the electron recombination mentions only one, very special version of LAR process, namely one–photon LAR when the laser is tuned in resonance with the energy of free-bound electron transition, and only emission of photons with this particular energy is considered. The study of this special case was initiated quite long ago and remains active in connection with the processes in the storage rings , formation of positronium and antihydrogen and even with possible cosmological manifestations . In all these theoretical studies the laser field was presumed to be weak and its influence on an initial and/or final electron states was neglected, except Refs. which are commented below.
For production of high-energy photons it is very interesting to extend the mentioned above studies allowing for the multiphoton absorption during LAR. Obviously the multiphoton processes can happen with high probability only in a strong laser field. From this point of view there arises a necessary to fulfill a systematic study of LAR in a strong laser field in multiphoton regime. This paper makes a first step in this direction.
An additional, and rather unexpected inspiration for the present study arises from the fact that LAR comprises one of the steps in the three-step quantum scheme of high harmonic generation. This scheme has recently been firmly established, see Ref. and bibliography therein. The major statement of is that the high harmonic generation can be described as the multiphoton ionization of an atomic electron which is followed by the LAR of this electron with the parent atomic particle. From this point of view the LAR plays a role of ’a part’ of the problem of the high harmonic generation, which is important not only for the dense laser plasma, but also for photo-production from individual atoms in strong laser fields, where the harmonic generation is the major source for high energy photons.
The present study is devoted mostly to the patterns of intensities in the emitted photon spectrum depending on the laser field strength. We comment also on the influence of laser field on the total recombination cross section. Here, as well as in other applications, the laser field is intensive and LAR proceeds in substantially multiphoton regime.
Consider an electron in the laser-dressed continuum state $`\mathrm{\Phi }_𝐩(t)`$ with the translational momentum $`𝐩`$. Its recombination to the bound state generally results in the emission of photons with the frequencies $`\stackrel{~}{\mathrm{\Omega }}_M`$ defined from
$`\stackrel{~}{\mathrm{\Omega }}_M={\displaystyle \frac{1}{2}}𝐩^2+{\displaystyle \frac{F^2}{4\omega ^2}}\epsilon _a+M\omega ,`$ (1)
where $`\epsilon _a`$ is the quasienergy of the field-dressed bound state $`\mathrm{\Phi }_a(t)`$, $`𝐅`$ is the amplitude of the electric field strength in the laser wave, $`F^2/(4\omega ^2)`$ is the electron quiver energy in the laser field, $`M`$ is an integer. Hereafter we use atomic system of units unless stated otherwise. In the zero-laser-field limit ($`F0`$) only emission of the photon with the frequency
$`\mathrm{\Omega }_{F0}={\displaystyle \frac{1}{2}}𝐩^2+\left|E_a\right|`$ (2)
is allowed with $`E_a`$ being the bound state energy. The presence of an intensive laser field makes possible multiphoton processes when laser quanta are absorbed from the field or transmitted to it, with the amplitude
$`C_M(𝐩)={\displaystyle \frac{1}{T}}{\displaystyle \underset{0}{\overset{T}{}}}𝑑t\mathrm{\Phi }_a(t)\mathrm{exp}(i\stackrel{~}{\mathrm{\Omega }}_Mt)\widehat{d}_\mathit{ϵ}\mathrm{\Phi }_𝐩(t),\widehat{d}_\mathit{ϵ}=\mathit{ϵ}𝐫,`$ (3)
where $`T=2\pi /\omega `$ is the laser field period, and in the dipole momentum operator $`\widehat{d}_\mathit{ϵ}`$ the unit vector $`\mathit{ϵ}`$ selects polarization of emitted radiation. The LAR cross section is
$`\sigma _M(𝐩)={\displaystyle \frac{4}{3p}}{\displaystyle \frac{\left(\stackrel{~}{\mathrm{\Omega }}_M\right)^3}{c^3}}\left|C_M(𝐩)\right|^2,`$ (4)
where $`c`$ is the velocity of light. The cross section (4) refers to the process of spontaneous LAR, since it is presumed that incident electromagnetic field with the frequency $`\stackrel{~}{\mathrm{\Omega }}_M`$ is absent. In case if such a probe field is present, generally it would be amplified in course of propagation through the medium containing free electrons. There is a number of theoretical works devoted to calculation of related gain in case of one-photon LAR. A recent paper by Zaretskii and Nersesov explores the amplification in case of multiphoton LAR. Generally these studies imply some assumptions regarding the medium properties and result in the expressions for the rate of stimulated transitions via that of spontaneous transitions and some characteristics of laser beam and the experimental arrangement . The present paper provides analysis of spontaneous LAR whereas issues of radiation amplification are beyond its scope.
## II KELDYSH-TYPE APPROXIMATION
We develop the Keldysh-type approximation where the interaction of the continuum electron with the atomic core is neglected, i.e. the laser-dressed electron continuum state $`\mathrm{\Phi }_𝐩`$ is approximated by the well-known Volkov state. The laser wave is assumed to be linear polarized with the electric field strength $`𝐅(t)=𝐅\mathrm{cos}\omega t`$. Explicit expression for the Volkov functions is conveniently cast as
$`\mathrm{\Phi }_𝐩(𝐫,t)`$ $`=`$ $`\chi _𝐩(𝐫,t)\mathrm{exp}\left(i\overline{E}_𝐩t\right),`$ (5)
$`\chi _𝐩(𝐫,t)`$ $`=`$ $`\mathrm{exp}\left\{i\left[(𝐩+𝐤_t)𝐫{\displaystyle _0^t}\left(E_𝐩(\tau )\overline{E}_𝐩\right)𝑑\tau +{\displaystyle \frac{\mathrm{𝐩𝐅}}{\omega ^2}}\right]\right\},`$ (6)
where the factor $`\chi _𝐩(𝐫,t)`$ is time-periodic with the period $`T`$,
$`𝐤_t`$ $`=`$ $`{\displaystyle \frac{𝐅}{\omega }}\mathrm{sin}\omega t,`$ (7)
$`E_𝐩(t)`$ $`=`$ $`{\displaystyle \frac{1}{2}}\left(𝐩+𝐤_t\right)^2,`$ (8)
$`\overline{E}_𝐩`$ $`=`$ $`{\displaystyle \frac{1}{T}}{\displaystyle _0^T}E_𝐩(\tau )𝑑\tau ={\displaystyle \frac{1}{2}}p^2+{\displaystyle \frac{F^2}{4\omega ^2}}.`$ (9)
For the final bound state the field-free expression is employed
$`\mathrm{\Phi }_a(𝐫,t)`$ $`=`$ $`\phi _a(𝐫)\mathrm{exp}(iE_at),`$ (10)
$`H_a\phi _a(𝐫)`$ $`=`$ $`E_a\phi _a(𝐫),`$ (11)
where $`H_a`$ is the effective atomic Hamiltonian in the single active electron approximation. The final bound state (10) is always available if electron collides with a positive ion. In case of collision with a neutral atom we assume existence of a stable negative ion. By substituting formulae (5)–(10) into (3) one can see that the integrand is a periodic function of time provided the emitted photon frequency $`\stackrel{~}{\mathrm{\Omega }}_M`$ satisfies (1) with integer $`M`$ and $`\epsilon _a`$ substituted by $`E_a`$. The lowest possible frequency of the emitted photon is $`\eta \omega `$, where
$`\eta ={\displaystyle \frac{1}{\omega }}\left({\displaystyle \frac{1}{2}}p^2E_a+{\displaystyle \frac{F^2}{4\omega ^2}}\right)\mathrm{Ent}\left[{\displaystyle \frac{1}{\omega }}\left({\displaystyle \frac{1}{2}}p^2E_a+{\displaystyle \frac{F^2}{4\omega ^2}}\right)\right]`$ (12)
with $`\mathrm{Ent}(x)`$ being an integer part of $`x`$ ($`0\eta <1`$). In the subsequent development we redefine labeling of emitted photon channels and instead of $`\stackrel{~}{\mathrm{\Omega }}_M`$ (1) employ the notation
$`\mathrm{\Omega }_m=(m+\eta )\omega (m0).`$ (13)
The new label $`m`$ differs from the old one $`M`$ by an additive integer. We find the labeling by $`m`$ more convenient since it is rigidly related to the low-frequency edge of the emitted photon spectrum: $`m=0`$ corresponds to the lowest photon frequency $`\eta \omega `$.
By using the Fourier transformation formula (3) is rewritten as
$`C_m(𝐩)={\displaystyle \frac{1}{T}}{\displaystyle \underset{0}{\overset{T}{}}}𝑑t\mathrm{exp}\left\{i\left[(m+\eta )\omega tS(t)\right]\right\}\stackrel{~}{\phi }_a^{(ϵ)}\left(𝐩𝐤_t\right),`$ (14)
where $`S(t)`$ is the classical action
$`S(t)={\displaystyle \frac{1}{2}}{\displaystyle ^t}𝑑\tau \left(𝐩+𝐤_\tau \right)^2E_at.`$ (15)
The function $`\stackrel{~}{\phi }_a^{(ϵ)}(𝐪)`$ is defined as
$`\stackrel{~}{\phi }_a^{(ϵ)}(𝐪)=i\left(\mathit{ϵ}_𝐪\right)\stackrel{~}{\phi }_a(𝐪).`$ (16)
where $`\stackrel{~}{\phi }_a(𝐪)`$ is the Fourier transform of the bound state wave function $`\varphi _a(𝐫)`$:
$`\stackrel{~}{\phi }_a(𝐪)={\displaystyle d^3𝐫\mathrm{exp}(i\mathrm{𝐪𝐫})\varphi _a(𝐫)}.`$ (17)
For the bound state wave function we use an asymptotic expression
$`\varphi _a(𝐫)A_ar^{\nu 1}\mathrm{exp}(\kappa r)Y_{lm}(\widehat{𝐫})(r1/\kappa ),`$ (18)
where $`\kappa =\sqrt{2|E_a|}`$, $`\nu =Z/\kappa `$, $`Z`$ is the charge of the atomic residual core ($`\nu =Z=0`$ for a negative ion), $`l`$ is the active electron orbital momentum in the initial state and $`\widehat{𝐫}`$ is the unit vector. The coefficients $`A_a`$ are tabulated for many negative ions . The Fourier transform $`\stackrel{~}{\phi }_a(𝐪)`$ (17) is singular at $`q^2=\kappa ^2`$ with the asymptotic behavior for $`q\pm i\kappa `$ defined by the long-range asymptote (18) in the coordinate space
$`\stackrel{~}{\phi }_a(𝐪)=4\pi A_a(\pm 1)^lY_{lm}(\widehat{𝐪}){\displaystyle \frac{(2\kappa )^\nu \mathrm{\Gamma }(\nu +1)}{(q^2+\kappa ^2)^{\nu +1}}},`$ (19)
where $`(\pm 1)^l`$ corresponds to $`q\pm i\kappa `$. In particular, for a negative ion ($`\nu =0`$) with the active electron in an $`s`$ state ($`l=0`$) we have from (19)
$`\stackrel{~}{\phi }_a(𝐪)`$ $`=`$ $`\sqrt{4\pi }A_a{\displaystyle \frac{1}{(q^2+\kappa ^2)}},`$ (20)
$`\stackrel{~}{\phi }_a^{(ϵ)}(𝐪)`$ $`=`$ $`i\left(\mathit{ϵ}\widehat{𝐪}\right)\sqrt{4\pi }A_a{\displaystyle \frac{2q}{(q^2+\kappa ^2)^2}}`$ (21)
($`\widehat{𝐪}𝐪/q`$ is unit vector).
## III ADIABATIC APPROACH TO STIMULATED RECOMBINATION
The time integral in (14) can be evaluated using the saddle point method. This amounts to the adiabatic approximation when the phase $`(m+\eta )\omega tS(t)`$ in (14) is assumed to be large. The position of saddle points in the complex $`t`$-plane is governed by equation
$`S^{}(t_{m\mu })\mathrm{\Omega }_m=0,`$ (22)
or, more explicitly,
$`{\displaystyle \frac{1}{2}}\left(𝐩+𝐤_{t_{m\mu }}\right)^2=E_a+(m+\eta )\omega .`$ (23)
It is convenient to single out in the electron momentum vector $`𝐩=𝐩_{}+𝐩_{}`$ components parallel ($`𝐩_{}`$) and perpendicular ($`𝐩_{}`$) to the electric field vector $`𝐅`$. Then Eq.(23) is rewritten as
$`{\displaystyle \frac{1}{2}}\left(p_{}+k_{t_{m\mu }}\right)^2=E_a{\displaystyle \frac{1}{2}}p_{}^2+(m+\eta )\omega .`$ (24)
For each value of $`m`$ this equation has a number of solutions $`t_{m\mu }`$ distinguished by the extra subscripts $`\mu `$. In the saddle point approximation the time integration in formula (3) is cast as
$`C_m(𝐩)`$ $`=`$ $`{\displaystyle \frac{1}{T}}{\displaystyle \underset{\mu }{}}\sqrt{{\displaystyle \frac{2\pi }{iS^{\prime \prime }\left(t_{m\mu }\right)}}}\mathrm{exp}\left\{i\left[\mathrm{\Omega }_mt_{m\mu }S(t_{m\mu })\right]\right\}\stackrel{~}{\phi }_a^{(ϵ)}\left(𝐩𝐤_{t_{m\mu }}\right),`$ (25)
where summation is to be taken over the saddle points $`t_{m\mu }`$ operative in the contour integration $`\left[𝐤_{t_{m\mu }}=(𝐅/\omega )\mathrm{sin}\omega t_{m\mu }\right]`$).
The saddle points are found from Eq.(24) as
$`\mathrm{sin}\omega t_{m\mu }={\displaystyle \frac{\omega }{F}}\left(p_{}\pm \sqrt{2(m+\eta )\omega \kappa ^2p_{}^2}\right).`$ (26)
The subscript $`\mu `$ labels solutions differing by the choice of the sign in (26) and sign in $`\mathrm{cos}\omega t_{m\mu }=\pm \sqrt{1\mathrm{sin}^2\omega t_{m\mu }}`$. There are four solutions per the laser field cycle (i.e for $`0\mathrm{Re}t_{m\mu }<T`$).
In order to elucidate the meaning of the saddle point equation (24) we rewrite it as
$`E_\mathrm{p}(t_{m\mu })E_a=\mathrm{\Omega }_m.`$ (27)
It shows that the photons are preferentially emitted at the moment of time when instantaneous continuum electron energy $`E_\mathrm{p}(t)`$ (8) is separated from the bound state energy $`E_a`$ by the energy of the emitted photon $`(m+\eta )\omega `$. The LAR process is most effective when this occurs at some real moment of time, i.e. the saddle points $`t_{m\mu }`$ are real-valued. This regime corresponds to the classically allowed radiation. It can happen only for some part of the emitted photon spectrum, i.e. only in some domain of $`m`$. Outside it, when $`t_{m\mu }`$ possesses an imaginary part, the emission is strongly suppressed. Remarkably, within the classically allowed domain the intensity of emitted lines could vary very significantly as detailed below.
The necessary condition of classically allowed radiation,
$`\mathrm{\Omega }_m>\left|E_a\right|+{\displaystyle \frac{1}{2}}p_{}^2,`$ (28)
makes real the right hand side of formula (26). Details of classically allowed emission depend on the relation between the electron translational momentum component $`p_{}`$ and the momentum $`F/\omega `$ acquired by the electron in its quiver motion in the laser field. In the fast electron regime, $`p_{}>F/\omega `$, the term $`\frac{1}{2}(p_{}+k_t)^2`$ never passes zero as time $`t`$ varies. As a result, the saddle point equation (23) has two or zero real-valued solutions per field cycle (in the classically allowed and forbidden domains respectively, see Fig. 1a). In the slow electron case, $`p_{}<F/\omega `$, the $`\frac{1}{2}(p_{}+k_t)^2`$ passes via zero. Due to this circumstance, as seen from Fig. 1b, for some interval of photon frequencies $`\mathrm{\Omega }_m`$ the equation (23) has four real-valued solutions whereas for higher $`\mathrm{\Omega }_m`$ only two solutions exist. Consequently, in this case the classically allowed domain is subdivided in two parts. The related LAR regimes are discussed below in more detail.
### A Fast electron regime: $`p_{}>F/\omega `$
Here one has to choose the upper sign in formula (26) in order to get a real-valued saddle point. The condition $`\left|\mathrm{sin}\omega t_{m\mu }\right|1`$ is straightforwardly reduced to
$`{\displaystyle \frac{1}{2}}\left(p_{}{\displaystyle \frac{F}{\omega }}\right)^2+\left|E_a\right|+{\displaystyle \frac{1}{2}}p_{}^2\mathrm{\Omega }_m{\displaystyle \frac{1}{2}}\left(p_{}+{\displaystyle \frac{F}{\omega }}\right)^2+\left|E_a\right|+{\displaystyle \frac{1}{2}}p_{}^2.`$ (29)
In this photon frequency interval only one pair of real saddle points $`t_{m\mu }`$ exists per field cycle, see Fig 1. These two saddle points are to be included into summation over $`\mu `$ in (25). The phase difference between the two terms in (25) varies with $`m`$. As a result $`\left|C_m(𝐩)\right|^2`$ oscillates between zero and some envelope function $`\mathrm{\Xi }(m)`$ defined as
$`\mathrm{\Xi }(m)={\displaystyle \frac{8\pi }{T^2S^{\prime \prime }}}\left|\stackrel{~}{\phi }_a^{(ϵ)}\left(𝐩𝐤_{t_{m\mu }}\right)\right|^2,`$ (30)
$`\left|\stackrel{~}{\phi }_a^{(ϵ)}\left(𝐩𝐤_{t_{m\mu }}\right)\right|^2=\pi A_a^2{\displaystyle \frac{2(m+\nu )\omega \kappa ^2p_{}^2}{(m+\nu )^4\omega ^4}},`$ (31)
$`S^{\prime \prime }=F\sqrt{2(m+\nu )\omega \kappa ^2p_{}^2}\sqrt{1{\displaystyle \frac{\omega ^2}{F^2}}\left(p_{}\sqrt{2(m+\nu )\omega \kappa ^2p_{}^2}\right)^2}.`$ (32)
As could be anticipated, the function $`\mathrm{\Xi }(m)`$ has weak singularities at the boundaries of the classically allowed region. The extension of the classically allowed region on the photon frequency scale is $`2p_{}F/\omega `$ with its center located at $`\mathrm{\Omega }_\mathrm{c}=\frac{1}{2}p^2+\left|E_a\right|+F^2/(2\omega ^2)`$. For vanishing laser field $`\mathrm{\Omega }_\mathrm{c}`$ tends to the limit (2) and the classically allowed domain shrinks to the single line. The condition that a single line dominates in the photon spectrum could be formulated as $`2p_{}F/\omega ^21`$.
Fig. 2 illustrates evolution of the spectrum pattern with the laser intensity $`I`$. We consider electrons with the energy $`E_{\mathrm{el}}=\frac{1}{2}p^2`$ equal to 1 eV ($`p=0.271`$) in the laser field with the frequency $`\omega =0.0043`$ and different intensities. The electron momentum $`𝐩`$ is directed along the laser field strength $`𝐅`$ ($`p_{}=0`$). The electron recombines to the bound state of H<sup>-</sup> ion ($`\kappa =0.2354`$, $`A_a=0.75`$ ). The emission amplitudes are obtained by numerical evaluation of the time integral in (14). The laser field intensities $`I=10^{11},\mathrm{\hspace{0.25em}10}^{10},\mathrm{\hspace{0.25em}10}^9,\mathrm{\hspace{0.25em}10}^8,\mathrm{\hspace{0.25em}10}^7`$ W/cm<sup>2</sup> corresponds to the values of parameter $`2pF/\omega ^2`$ respectively $`49.4,\mathrm{\hspace{0.25em}15.6},\mathrm{\hspace{0.25em}4.94},\mathrm{\hspace{0.25em}1.56},\mathrm{\hspace{0.25em}0.494}`$. For the weakest field considered ($`I=10^7`$ W/cm<sup>2</sup>) the intensity of the principal line in the spectrum ($`m=14`$) exceeds more than 50 times these of adjacent satellites. For $`I=10^8`$ W/cm<sup>2</sup> this ratio is substantially smaller ($`5`$). When laser field is increased by an order of magnitude, the dip in the emitted photon spectrum appears at $`m=15`$. This is the first manifestation of the oscillatory structure in the spectrum due to interference of two contributions in (25). For $`I=10^{10}`$ W/cm<sup>2</sup> the structure becomes well manifested. At last, for $`I=10^{11}`$ W/cm<sup>2</sup> the structure becomes well developed and extended. In the latter case, in fact, the situation is beyond the fast electron regime; it will be discussed in the next subsection.
The semiclassical formula (25) is applicable when the classically allowed domain is sufficiently broad on the frequency scale. Fig. 3 shows the photon spectrum in the well manifested semiclassical regime ($`E_{\mathrm{el}}=10`$ eV, $`I=10^{11}`$ W/cm<sup>2</sup>, $`p=0.857`$, $`F/\omega =0.392`$). In the classically allowed domain ($`31m187`$) the quantities $`\left|C_m(p)\right|^2`$ obtained by numerical evaluation of the integral (14) over time (circles) oscillate violently due to the interference effects. Outside this region $`\left|C_m(p)\right|^2`$ decrease very rapidly. Note that the most efficient emission occurs at the edges of the classically allowed interval. This effect is completely analogous to enhancement of the probability density near the turning points for the quantum particle moving in the potential well. The envelope function (30) (solid curve) reproduces well this overall behavior. The saddle point approximation (25) allows us to reproduce well the oscillatory structure (squares in Fig. 3). Within the classically-allowed domain the summation in this formula runs over two real-valued saddle points $`t_{m\mu }`$. As $`m`$ varies approaching the domain border, two saddle points lying at the real-$`t`$ axis approach each other and eventually merge at the boundary. After that they separate again moving perpendicular to the real axis in the complex $`t`$-plane. The latter situation corresponds to the classically forbidden, or tunneling regime where only one saddle point is to be included in the summation over $`\mu `$ in (26) (namely, that which ensures exponential decrease of $`\left|C_m(p)\right|^2`$ outside the classically-allowed domain). The transition between two regimes could be described by the Airy function. We do not pursue here the detailed description of this, rather standard situation. In particular, Fig. 3, the results shown by squares in Fig. 3 are obtained using the plain semiclassical formula (25) with two or one saddle points included as discussed above; the deviations from the numerical results are seen to be essential only in a very narrow transitions region. Since the numerical evaluation of integral (14) over time is not difficult, we employ the adiabatic approach in order to obtain better insight into the pattern of emitted radiation spectrum, but not for producing an alternative method to evaluate the amplitudes.
### B Slow electron regime: $`p_{}<F/\omega `$
In this case the real-valued result for $`t_{m\mu }`$ is provided by both upper and lower sign in the expression (26). It is easy to see from Fig. 1b that the classically allowed region of photon frequencies is subdivided in two domains. The first of them, with one pair of real-valued saddle points $`t_{m\mu }`$, corresponds to $`\mathrm{\Omega }_m`$ lying in the interval (29). At smaller photon frequencies, another subdomain is defined by the condition
$`\left|E_a\right|+{\displaystyle \frac{1}{2}}p_{}^2\mathrm{\Omega }_m{\displaystyle \frac{1}{2}}\left(p_{}{\displaystyle \frac{F}{\omega }}\right)^2+\left|E_a\right|+{\displaystyle \frac{1}{2}}p_{}^2.`$ (33)
Here two pairs of real saddle points $`t_{m\mu }`$ exist. The spectrum for this situation is illustrated by Fig. 4 ($`E_{\mathrm{el}}=0.1`$ eV, $`I=10^{11}`$ W/cm<sup>2</sup>, $`p=0.0857`$, $`F/\omega =0.392`$). The classically allowed domain lies in the interval $`7m32`$, with the four-saddle-point regime being operative for $`7m17`$, and the two-saddle point regime for $`18m32`$. The results of numerical calculations shown by circles suggest that the oscillations in $`\left|C_m(p)\right|^2`$ or $`\sigma _m(p)`$ proceed with two different frequencies, the higher frequency being characteristic for the four-saddle-point domain. The plain semiclassical formula (25) (squares) essentially reproduces this structure. Of course, it is not designed for accurate description of a transition between the two-saddle-point and four-saddle-point regimes where the deviations are seen to be larger. A special, more sophisticated treatment is required here, but such complications are not pursued in the present study as argued above. The non-standard situation emerges also at the left edge of the classically allowed interval where all saddle points simultaneously move from the real axis into the complex $`t`$ plane. This transition region could not be described by a simple Airy-type pattern that is known to give a monotonous decrease in the classically forbidden domain; on the contrary, the numerical results reveal some structure in this region, see Fig. 4. Bearing all this in mind it is not unexpected that the plain semiclassical approximation (25) essentially fails near the left border of the classically allowed domain.
It is worthwhile to mention also another region where the standard semiclassical approximation fails. Namely, for $`\mathrm{\Omega }=0`$ the saddle point positions coincide with the poles of the function $`\stackrel{~}{\phi }_a^{(ϵ)}`$. The situation when an exact coincidence occurs is tractable rather easily . Somewhat more effort is required to obtain uniform description of a transition between this case and a situation when the saddle point and the pole are well separated, as presumed in simple formula (25). Again, such sophistication are beyond the scope of the present study.
At last, Fig. 5 shows a transient situation between the fast and slow electron regimes ($`E_{\mathrm{el}}=1`$ eV, $`I=10^{11}`$ W/cm<sup>2</sup>, $`p=0.271`$, $`F/\omega =0.392`$). Here only two harmonics ($`m=7,\mathrm{\hspace{0.25em}8}`$) correspond to the four-saddle-point regime. The remaining part of the classically-allowed domain, $`9m56`$ corresponds to two-saddle-point regime. Most of the spectrum is well described by the plain saddle-point approximation (25) and covered by the envelope function (30), albeit the highest peak at $`m=9`$ exceeds it, as being in the region of the transition between the two and four-saddle point regimes. Quite paradoxically, the low-frequency classically forbidden region with well manifested structure exhibits much higher emission intensities as compared with the large-frequency edge of the classically allowed domain.
## IV CONCLUSION
As discussed in the Introduction, the LAR is one of the processes responsible for emission of high energy photons by the laser plasma. Surprisingly, it has not yet received attention of researchers. This is particularly unsatisfactory since the other processes leading to high energy photons (harmonic generation and laser stimulated bremsstrahlung) are currently under active scrutiny. The present paper could be considered as a first step to start filling this gap. The theory in many aspects is parallel to the treatment of multiphoton ionization (MPI) where the Keldysh approximation is known to provide an important insight and quantitatively reliable results. The origin of differences between MPI and LAR lies in the kinematics: in MPI process the allowed electron energy in the continuum are robustly defined by the parameters of the system (initial electron binding energy, laser field frequency and strength), whereas in the LAR the continuum electron energy is arbitrary. This rather trivial observation results in important consequences of physical character. They are particularly lucid in the adiabatic regime when laser frequency is sufficiently small. The ionization is a tunneling process for all above-threshold channels. On the contrary, in the LAR there is a domain of photon frequencies for which emission is allowed classically.
The Keldysh-type approximation allowed us to describe evolution of the LAR spectrum as the laser field varies, from the single line with only weak satellites in the low-field limit to the broad pattern of equidistantly spaced harmonics in the strong field case. In the adiabatic approximation (i.e. the saddle point method) the photon spectrum is subdivided into classically allowed and classically forbidden domains, with the line intensities being highest at the boundaries of the former region. Concerning the quantitative side of the problem, the adiabatic approach is less efficient for the LAR process as compared with the treatment of above threshold ionization (ATI). The reason is that in the latter case the saddle point method is well applicable in its most simple form, whereas for LAR process some technical complications emerge. The difference stems from the fact that ATI process always effectively occurs at complex-valued moments of time, whereas for LAR this is generally not the case, and several regimes could be operative with the transition regions between them. Albeit not drastic, these complications to our opinion hardly warrant necessary cumbersome analytical involvements, bearing in mind that the numerical calculations are quite simple and straightforward. Nevertheless the saddle point method remains very useful for understanding the intensity patterns in the emitted photon spectrum.
An additional assumption of the present study, that in principle could be easily abandoned, is the use of asymptotic expression (18) for the final bound state wave function. Again, in the LAR process the situation is less favorable for this approximation as compared with the ATI process. This is because, as discussed in detail earlier , the long-range asymptote of the bound state wave function governs ATI amplitudes, whereas LAR process is more sensitive to the wave function behavior in the entire coordinate space.
As is pictured by Fig. 2, the amount of noticeable lines in the photon spectrum increases with the laser field strength, but the intensity of each individual line decreases in average. The cross section of the electron transition into the bound state summed over all emitted photon channels is $`\sigma _{\mathrm{tot}}(p)=_{m>0}\sigma _m(p)`$. It exhibits only very weak dependence on the laser filed intensity $`I`$ . For instance, in the particular case of Fig. 2 we obtain for $`\sigma _{\mathrm{tot}}(p)`$ the values $`3.8510^6`$, $`3.8510^6`$, $`3.8910^6`$, $`3.6410^6`$, $`3.410^6`$ for the laser field intensities $`I=10^{11},\mathrm{\hspace{0.25em}10}^{10},\mathrm{\hspace{0.25em}10}^9,\mathrm{\hspace{0.25em}10}^8,\mathrm{\hspace{0.25em}10}^7`$ W/cm<sup>2</sup>. Recent calculations of the laser-assisted antihydrogen formation in positron-antiproton collisions employed Coulomb-Volkov wave function for the initial electron continuum state $`\mathrm{\Phi }_𝐩`$ and the laser-perturbed wave function for the bound state. The authors considered only one-photon LAR process and concluded that the LAR cross section decreases for the stronger laser fields. The present results indicate that if the multiphoton processes are included, then the total LAR cross section is essentially independent on laser field intensity.
Thus the effect of a laser on the recombination process looks very straightforward. The total cross section of recombination essentially is not changed by a laser field, but is redistributed over equidistant pattern in photon spectrum that becomes broader as the laser intensity increases.
###### Acknowledgements.
This work has been supported by the Australian Research Council. V. N. O. acknowledges the hospitality of the staff of the School of Physics of UNSW where this work has been carried out. |
no-problem/9910/astro-ph9910357.html | ar5iv | text | # Enhanced Cloud Disruption by Magnetic Field InteractionAnimations and color images from this work have been posted at URL http://www.msi.umn.edu/Projects/twj/mhd3d/
## 1 Introduction
This letter focuses on the results of the first three-dimensional (3-D) study of the ballistic interaction of a moderately supersonic dense cloud with a warmer magnetized tenuous medium. Magnetic fields are ubiquitous in astrophysical environments and cannot be neglected in any realistic ISM related study. Mac Low et al. (1994) and Jones et al. (1996) have studied the two-dimensional (2-D) magnetohydrodynamics (MHD) of cosmic bullets, extending earlier gasdynamical simulations and addressing the importance of magnetic field in preventing the cloud disruption due to fluid instabilities. In particular Jones et al. (1996) found that a region of strong magnetic pressure (or magnetic bumper) develops at the nose of the cloud during its motion through a magnetized medium, whenever its velocity is perpendicular to the initial unperturbed background field (transverse case). This enhancement was attributed to a strong stretching of the magnetic field lines. Miniati et al. (1999b) further investigated the problem pointing out that the behavior observed in the transverse case was typical, except in the case of a very small angle ($`<30^o`$) between the cloud velocity and the initial magnetic field. In addition, the development of this magnetic shield in front of the cloud was recognized to play a crucial role in the outcome of cloud collisions (Miniati et al. 1999a), because it dramatically reduced the degree of cloud disruption. Further, it was important in terms of exchange of magnetic and kinetic energy between different phases in the ISM (Miniati et al. 1999b) and to highlight cosmic ray electrons (Jones et al. 1994).
While supersonic motion of an individual cloud is obviously idealized, there are numerous astronomical contexts which can be illuminated by its example. For example, MHD wind-cloud interaction may be important for the evolution of synchrotron emission processes from planetary nebulae (Dgani & Soker 1998). Recently a multi-phase structure has been proposed for the dynamical state of the cosmic intra cluster medium (Fabian 1997), in order to explain EUV radiation in excess to what expected from the hot gas there (e.g., Bowyer, Lieu & Mittaz 1998). Although the issue has not been settled yet, this suggestion reinforces the necessity to understand the proper evolution of clouds in a multi-phase medium.
All the previous MHD cloud-motion results were based on 2-D numerical simulations, so the importance of following them up with more realistic 3-D calculations is clear. Stone and collaborators (Stone & Norman 1992; Xu & Stone 1995) have reported 3-D hydrodynamical simulations of shocked gas clouds. They found qualitative agreement with analogous 2-D gasdynamical simulations. However, as we shall see, introduction of nonisotropic Maxwell stresses produces very significant effects that differ in 3-D from what has been seen in previous, 2-D MHD simulations.
## 2 Numerics & Problem Setup
The numerical computation is based on a total variation diminishing (TVD) scheme for ideal MHD (Ryu & Jones 1995; Ryu, Jones & Frank 1995; Ryu et al. 1998). The cloud, initially spherical, is set in motion with respect to the uniform ambient medium with a velocity aligned along the $`x`$-axis. Its velocity is $`u_c=Mc_s`$, where $`c_s`$ is the sound speed in the ambient medium and the Mach number is $`M=1.5`$. The initial cloud density is $`\rho _c=\chi \rho _i`$, with $`\rho _i`$ the intercloud density and $`\chi =100`$. The direction of the magnetic field is chosen along the $`y`$-axis. Its intensity is conveniently expressed in terms of the familiar parameter $`\beta =p/p_B`$, where $`p`$ is the hydrodynamic pressure and $`p_B=B^2/(8\pi )`$ is the magnetic pressure. In these numerical simulations we have considered both the cases of a strong field ($`\beta =4`$) and a weak field ($`\beta =100`$). To be able to compare with pure hydrodynamic effects, a case with $`\beta =\mathrm{}`$ (no magnetic field) has also been computed. The calculations have been performed using a $`416\times 208\times 416`$ zone box containing one quarter of the physical space with radial symmetry applied in the $`y`$-$`z`$ plane around the $`x`$ axis. The initial cloud radius spanned 26 zones. Additional details on the numerical setup can be found in a companion paper describing quantitative results (Gregori et al. 1999). Since the cloud motion is supersonic, its motion leads to the formation of a forward, bow shock and a reverse, crushing shock propagating through the cloud. The approximate time for the latter to cross the cloud is referred to as the “crushing time”<sup>1</sup><sup>1</sup>1 This form of the crushing time differs from the one of Klein et al. (1994) by a factor of 2 since our definition is based on the cloud diameter instead of the cloud radius. We use this definition since it more closely measures the actual time before the crushing shock emerges. (e.g., Jones et al. 1994): $`\tau _{cr}=2R_c\chi ^{1/2}/Mc_s`$, where $`R_c`$ is the initial cloud radius. As we will see, the crushing time is a relevant dynamical quantity since it is proportional to the time over which disruptive instabilities develop.
## 3 Cloud Disruption
As the cloud moves through the ISM, its surface is subjected to several instability mechanisms: namely, Kelvin-Helmholtz (K-H), Rayleigh-Taylor (R-T) and, at start-up, the Richtmyer-Meshkov (R-M) instability. These instabilities, but especially the R-T instability, will ultimately disrupt the entire cloud. Grid-scale noise provides the seed for initial development of such instabilities (Kane et al. 1999 and references therein). At the same time, finite numerical resolution may suppress the large wavenumber (small scale) mode components that have the fastest initial growth rate (Chandrasekhar 1961). The K-H instability is related with the shearing motion at the separation between two fluids. Typically, we should expect such an instability to develop on the lateral boundaries of the cloud. From our simulations we can see that its effects are generally limited, especially considering that, in the MHD case, the development of the instability tends to be suppressed by the magnetic field for wave vectors in the $`x`$-$`y`$ plane (Chandrasekhar 1961; Jones et al. 1996). The most disruptive instability for a supersonic cloud is the R-T. It develops at the interface between two fluids when the lighter accelerates the heavier one. In the limiting case of an impulsive acceleration, as at the beginning of our simulations where the clouds are set into instantaneous motion, the R-M instability applies. However, the linear growth of the R-M instability and the presence of a thin boundary layer on the initial cloud makes the R-M instability relatively unimportant. Before a time $`\tau _{cr}`$ the cloud interface remains R-T stable, since there is no ongoing acceleration of the cloud body to drive this instability. When the crushing shock exits the cloud, however, a pressure gradient between the front and the rear is established. The cloud body is then decelerated, and this induces the R-T growth at its front interface (e.g., Mac Low et al. 1994; Jones et al. 1994; Kane et al. 1999).
In the general hydrodynamic case a R-T perturbation of wavenumber $`k`$ grows with a characteristic time $`\tau _{RT}(gk)^{1/2}`$, where $`g`$ is the cloud deceleration (Chandrasekhar 1961). Following Klein et al. (1994), we can estimate the deceleration as
$$g\frac{\rho _iu_c^2}{\rho _cR_c},$$
(1)
with the cloud velocity $`u_c`$ and the intercloud medium assumed at rest. It is easy to show that the most destructive modes, those with the largest wavelength, $`k2\pi /R_c`$, develop on a timescale $`\tau _{RT}\tau _{cr}`$. Thus, in the absence of magnetic field influences, the cloud will be disrupted only on a time scale of several $`\tau _{cr}`$ (Jones et al. 1996; Xu & Stone 1995).
## 4 Results
It is usually pointed out in the literature that in the interaction of a cloud-like object with a magnetized wind, the stretching of magnetic field lines draping around the body of the cloud is limited by their ability to escape by slipping around it (e.g., Mac Low et al. 1994; Dgani & Soker 1998). On the contrary, our results show that the magnetic field influences the development of the cloud deformations produced by instabilities so that the field lines become trapped in such deformations. This trapping of the field lines is indeed the most important physical process that has been revealed from our 3-D simulations. Its main consequence is then the development of a region of strong magnetic pressure at the leading edge of the cloud. This is clearly visible in Fig. Enhanced Cloud Disruption by Magnetic Field Interaction<sup>1</sup><sup>1</sup>affiliation: Animations and color images from this work have been posted at URL http://www.msi.umn.edu/Projects/twj/mhd3d/, which shows that even at simulation end most of the “overrun” field lines are indeed kept within the R-T fingers without slipping away.
This behavior is different from what is seen at the earth’s magnetopause or as predicted for the magnetopause of comets (e.g., Yi et al. 1996). There, field lines do seem to slip past the object, so that it behaves somewhat like a rigid body. Indeed, for both of those cases the impacted body is stiff, because compression produces an increasingly greater restoring force.
In Fig. Enhanced Cloud Disruption by Magnetic Field Interaction<sup>1</sup><sup>1</sup>affiliation: Animations and color images from this work have been posted at URL http://www.msi.umn.edu/Projects/twj/mhd3d/ we compare the cloud evolution for the three studied cases of the field intensity through 3-D volume rendering images of gas density. The four pictures in the (first, second, third) columns correspond to different evolutionary times for the progression of cases: $`\beta =(\mathrm{},100,4)`$ respectively. The most significant conclusion is that the presence of a strong background magnetic field ($`\beta =4`$) dramatically modifies the dynamical evolution of the cloud. For the hydrodynamic simulation the cloud is initially uniformly crushed in the direction aligned with its motion and the disruptive instabilities, which are radially symmetric because of the imposed symmetry, become evident only by the end of the simulation ($`t=2.25\tau _{cr}`$). Our resolution is not sufficient to allow really fine K-H instabilities to develop on the cloud perimeter such as those evident in the hydrodynamical shocked-cloud results of Xu & Stone (1995), but the global evolutions in our hydrodynamic simulations are comparable. By contrast with this behavior, the presence of a strong field ($`\beta =4`$ case) causes the cloud to be additionally compressed along the $`y`$-axis by the draping magnetic field lines. As a result the cloud is “extruded” in a direction orthogonal to the plane containing its motion and the initial field. This behavior is already well visible after one crushing time.
As noted earlier, magnetic fields inhibit the R-T instabilities in the $`x`$-$`y`$ plane (Jones et al. 1996). The development of this instability only along the $`z`$-axis, then produces the C-shaped structure of the cloud (last panel in Fig. Enhanced Cloud Disruption by Magnetic Field Interaction<sup>1</sup><sup>1</sup>affiliation: Animations and color images from this work have been posted at URL http://www.msi.umn.edu/Projects/twj/mhd3d/), as opposed to the symmetrical “sombrero” surface of the hydrodynamical case. In addition, the presence of the magnetic field enhances the growth of the R-T instability. In the strong field case ($`\beta =4`$), this can be assessed by replacing the ram pressure $`\rho _iu_c^2`$ with an effective pressure $`\rho _iu_c^2+p_B`$ in eq. (1). This is motivated by the fact that, for small $`\beta `$, $`p_B`$ becomes comparable to or larger than the ram pressure $`\rho _iu_c^2`$ during times $`t\tau _{cr}\tau _{RT}`$ (see also Miniati et al. 1999b). Taking into account this additional magnetic term then shortens the time scale for the development of the instability by a factor $`2`$. The instability enters the nonlinear stage sooner and its exponential growth determines the dramatic differences from the hydrodynamical case as shown in Fig. Enhanced Cloud Disruption by Magnetic Field Interaction<sup>1</sup><sup>1</sup>affiliation: Animations and color images from this work have been posted at URL http://www.msi.umn.edu/Projects/twj/mhd3d/. In this sense, our results describe the cloud disruption by a magnetically enhanced R-T like instability.
A weak field ($`\beta =100`$) changes the cloud dynamics in a similar way, but more slowly and less dramatically. With a limited growth of a C-shaped structures, the development of the R-T instability in front of the cloud looks typical, although it still shows evidence of the asymmetric Maxwell stresses. Also at $`t\tau _{cr}`$ the magnetic pressure is still a small fraction of the ram pressure in front of the cloud. Therefore, although the shape of the cloud undergoes some deformation compared to the hydrodynamical case, the growth rate of the instability has not been enhanced significantly. Further discussions on the magnetic energy evolution and its effect on the cloud morphology are given by Gregori et al. (1999).
## 5 Summary
In this paper we have presented the first results of a series of 3-D MHD numerical simulations of cloud motion in a multi-phase interstellar medium. We have considered a spherical cloud that moves transverse to the magnetic field, with two different cases for its initial strength; namely $`\beta =4`$ and $`\beta =100`$.
Both the weak ($`\beta =100`$) and strong field ($`\beta =4`$) simulations showed a comparable behavior with a substantial enhancement of the magnetic pressure at the leading edge of the cloud as a result of field line stretching there. This confirms and extends previous 2-D results (Jones et al. 1996; Miniati et al. 1999b). The importance of field line stretching to field amplification and cloud motion can now be fully appreciated, since the slipping of the field lines around the cloud (which was prevented in 2-D simulations by their geometry) turned out to be a minor effect in our 3-D simulations. In general moving clouds are reshaped by the field lines draping around their bodies, generating elongated structures oriented perpendicular to both the background magnetic field and the flow velocity. This is also important in terms of cloud collisions because it means that if two clouds are approaching each other moving through the same large scale magnetic field, their elongated, cylindrical bodies will tend to collide with the longer axis aligned. This helps validate previous 2-D MHD cloud collision calculations (Miniati et al. 1999a) and sets important constraints for future work on that subject.
The main result of this study is that contrary to 2-D geometry where fluid instabilities were prevented by the growth of a strong magnetic field, in 3-D they are instead considerably hastened by it. This was clearly shown in Fig. Enhanced Cloud Disruption by Magnetic Field Interaction<sup>1</sup><sup>1</sup>affiliation: Animations and color images from this work have been posted at URL http://www.msi.umn.edu/Projects/twj/mhd3d/ where the dramatic difference between the purely hydrodynamic and the $`\beta =4`$ MHD case is attributed to the accelerated development of a R-T instability in the latter. It was also shown that for $`\beta =4`$ the timescale for the cloud disruption is reduced by a factor $`2`$.
This work is supported at the University of Minnesota by the NSF through grants AST 96-19438 and INT95-11654, by NASA grant NAG5-5055, and by the Minnesota Supercomputing Institute. Work by DR is supported in part by KOSEF through grant 981-0203-011-2.
FIGURE CAPTIONS |
no-problem/9910/math9910174.html | ar5iv | text | # The Hodge polynomial of ℳ̄_{3,1}
## Abstract.
We give formulas for the Dolbeault numbers of $`\overline{}_{3,1}`$, using the first author’s calculations of the weights of the cohomology of $`_{2,2}`$ and $`_{2,3}`$ and the second author’s calculation of the weights of the cohomology of $`_{3,1}`$.
The authors thank the Scuola Normale, Pisa, where this paper was written. The work of the first author is partially funded by the NSF under grant DMS-9704320
Recall that the Serre polynomial of a quasi-projective variety $`X`$, or more generally, of a Deligne-Mumford stack whose coarse moduli space is a quasi-projective variety, is the polynomial
$$𝖾(X)=\underset{i,j}{}u^iv^j\underset{k=0}{\overset{\mathrm{}}{}}(1)^kh^{i,j}(H_c^k(X,)).$$
If a finite group $`\mathrm{\Gamma }`$ acts on $`X`$, the equivariant Serre polynomial $`𝖾_\mathrm{\Gamma }(X)`$ is defined in the same way, and takes values in $`R(\mathrm{\Gamma })`$. For background to all of this, see .
If $`X`$ is smooth and projective, or more generally, if $`X`$ is a smooth Deligne-Mumford stack whose coarse moduli space is projective, then the Serre polynomial of $`X`$ equals its Hodge polynomial:
$$𝖾(X)=\underset{i,j}{}(u)^i(v)^jh^{i,j}(X).$$
In particular, Poincaré duality implies the functional equation
$$𝖾(X)(u,v)=q^{dim(X)}𝖾(X)(u^1,v^1),$$
where throughout this paper, $`q`$ denotes $`uv`$. All of this is also true equivariantly.
If $`2g2+n>0`$, let $`_{g,n}`$ be the moduli stack of smooth projective curves of genus $`g`$ together with $`n`$ distinct marked points, and let $`\overline{}_{g,n}`$ be the moduli stack of stable curves of (arithmetic) genus $`g`$ with $`n`$ distinct marked smooth points ; these stacks are smooth, of dimension $`3g3+n`$, and the coarse moduli space of $`\overline{}_{g,n}`$ is projective. Furthermore, there is a flat map $`\overline{}_{g,n+1}\overline{}_{g,n}`$ which represents $`\overline{}_{g,n+1}`$ as the universal curve over $`\overline{}_{g,n}`$. In particular, $`\overline{}_{g,1}`$ is the universal stable curve of genus $`g`$. In this paper, we calculate $`𝖾(\overline{}_{3,1})`$. In particular, we show that $`h^{i,j}(\overline{}_{3,1})=0`$ unless $`i=j`$.
There is a standard identification of $`R(\mathrm{SS}_n)`$ with the space of symmetric functions (in an infinite number of variables $`\{x_ii0\}`$) of degree $`n`$, which we denote by $`\mathrm{\Lambda }_n`$. If $`X`$ is a quasi-projective variety with action of $`\mathrm{SS}_n`$, let $`𝖾_n(X)`$ be its Serre polynomial, thought of as an element of $`\mathrm{\Lambda }_n[u,v]`$.
There is a formula permitting the calculation of the $`\mathrm{SS}_n`$-equivariant Hodge polynomial of $`\overline{}_{g,n}`$ in terms of the $`\mathrm{SS}_m`$-equivariant Serre polynomials of the moduli stacks $`_{h,m}`$, where $`hg`$ and $`2h+m2g+n`$.
Let $`\widehat{\mathrm{\Lambda }}`$ be the completion of the ring of symmetric functions:
$$\widehat{\mathrm{\Lambda }}=\underset{n=0}{\overset{\mathrm{}}{}}\mathrm{\Lambda }_n.$$
We may identify $`\widehat{\mathrm{\Lambda }}`$ with the algebra of power series in the variables
$$h_n=\underset{i_1\mathrm{}i_n}{}x_{i_1}\mathrm{}x_{i_n}.$$
Introduce an auxilliary variable $`\lambda `$, and let $`\widehat{\mathrm{\Lambda }}[[\lambda ]]`$ be the ring of power series in $`\lambda `$ with coefficients in the ring of symmetric functions. We may identify $`\widehat{\mathrm{\Lambda }}`$ with the algebra of power series in the variables
$$p_n=\underset{i=0}{\overset{\mathrm{}}{}}x_i^n.$$
The ring $`\widehat{\mathrm{\Lambda }}[[\lambda ]]`$ is a (special) lambda-ring: the Adams operations on $`\widehat{\mathrm{\Lambda }}[[\lambda ]]`$ are determined by the formulas $`\psi _k(p_n)=p_{kn}`$ and $`\psi _k(\lambda )=\lambda ^k`$. Let $`\mathrm{𝖤𝗑𝗉}`$ be the operation
$$\mathrm{𝖤𝗑𝗉}(f)=\underset{k=0}{\overset{\mathrm{}}{}}\sigma _k(f)=\mathrm{exp}\left(\underset{k=0}{\overset{\mathrm{}}{}}\frac{\psi _k(f)}{k}\right):\lambda \widehat{\mathrm{\Lambda }}[[\lambda ]]1+\lambda \widehat{\mathrm{\Lambda }}[[\lambda ]],$$
and let $`\mathrm{𝖫𝗈𝗀}`$ be its inverse, given by the formula
$$\mathrm{𝖫𝗈𝗀}(f)=\underset{k=0}{\overset{\mathrm{}}{}}\frac{\mu (k)}{k}\psi _k(f):1+\lambda \widehat{\mathrm{\Lambda }}[[\lambda ]]\lambda \widehat{\mathrm{\Lambda }}[[\lambda ]].$$
Let $`\mathrm{\Delta }`$ be the differential operator on $`\widehat{\mathrm{\Lambda }}`$ (and by extension of coefficients, on $`\widehat{\mathrm{\Lambda }}[[\lambda ]]`$) give by the formula
$$\mathrm{\Delta }=\underset{k=1}{\overset{\mathrm{}}{}}\lambda ^{2k}\left(\frac{k}{2}\frac{^2}{p_k^2}+\frac{}{p_{2k}}\right).$$
We may define two elements of $`\widehat{\mathrm{\Lambda }}[[\lambda ]]`$, by the formulas
$`\mathrm{\Psi }`$ $`={\displaystyle \underset{2g2+n>0,n>0}{}}\lambda ^{2g2+n}𝖾_{\mathrm{SS}_n}(\overline{}_{g,n})\text{and}`$
$`\mathrm{\Phi }`$ $`={\displaystyle \underset{2g2+n>0,n>0}{}}\lambda ^{2g2+n}𝖾_{\mathrm{SS}_n}(\overline{}_{g,n}).`$
The following is Theorem (8.13) of .
###### Theorem 1.
$`\mathrm{\Psi }=\mathrm{𝖫𝗈𝗀}\left(\mathrm{exp}(\mathrm{\Delta })\mathrm{𝖤𝗑𝗉}(\mathrm{\Phi })\right)`$
We see that to use this formula to calculate $`𝖾(\overline{}_{3,1})`$, we need the Serre polynomial of $`_{3,1}`$, together with the following equivariant Serre polynomials:
| $`_{g,n}`$ | $`𝖾_{\mathrm{SS}_n}(_{g,n})`$ |
| --- | --- |
| $`_{0,3}`$ | $`s_3`$ |
| $`_{0,4}`$ | $`qs_4s_{2^2}`$ |
| $`_{0,5}`$ | $`q^2s_5qs_{32}+s_{31^2}`$ |
| $`_{0,6}`$ | $`q^3s_6q^2s_{42}+q(s_{41^2}+s_{321})(s_{41^2}+s_{3^2}+s_{2^21^2})`$ |
| $`_{0,7}`$ | $`q^4s_7q^3s_{52}+q^2(s_{3^21}+s_{51^2}+s_{421})`$ |
| | $`q(s_{51^2}+s_{43}+s_{421}+s_{2^31}+s_{41^3}+s_{3^21}+s_{321^2})`$ |
| | $`+s_{321^2}+s_{52}+s_{31^4}+s_{32^2}+s_{421}`$ |
| $`_{1,1}`$ | $`qs_1`$ |
| $`_{1,2}`$ | $`q^2s_2`$ |
| $`_{1,3}`$ | $`q^3s_3s_{1^3}`$ |
| $`_{1,4}`$ | $`q^4s_4q^2s_4qs_{21^2}+s_{31}`$ |
| $`_{1,5}`$ | $`q^5s_5q^3(s_5+s_{41})+q^2(s_{32}s_{31^2})`$ |
| | $`+q(s_{41}+s_{32}+s_{31^2})(s_5+s_{32}+s_{2^21}+s_{1^5})`$ |
| $`_{2,1}`$ | $`q^4s_1+q^3s_1`$ |
| $`_{2,2}`$ | $`q^5s_2+q^4(s_2+s_{1^2})s_2`$ |
| $`_{2,3}`$ | $`q^6s_3+q^5(s_3+s_{21})q^4s_3q(s_3+s_{21})`$ |
Here, if $`\mu =(\mu _1\mathrm{}\mu _{\mathrm{}})`$ is a partition of $`n`$, $`s_\mu `$ denotes the associated Schur polynomial, given by the Jacobi-Trudi formula $`s_\mu =det(h_{\mu _i+ji})`$. The above Serre polynomials are calculated in (for genus $`0`$ and $`1`$) and (for genus $`2`$).
Applying Theorem 1 (and taking advantage of J. Stembridge’s symmetric function package SF for maple), we see that
$`𝖾(\overline{}_{3,1})`$ $`=𝖾(_{3,1})+3q^6+15q^5+29q^4+29q^3+16q^2+4q.`$
###### Theorem 2.
The Hodge polynomial $`𝖾(\overline{}_{3,1})`$ of $`\overline{}_{3,1}`$ equals
$$q^7+5q^6+16q^5+29q^4+29q^3+16q^2+5q+1.$$
###### Proof.
We must show that $`𝖾(_{3,1})=q^7+2q^6+q^5+q+1`$. This almost, but not quite, follows from . There, it was shown that the polynomial
$$\underset{i,j,k}{}u^iv^jt^kh^{i,j}(H_c^k(_{3,1},))=q^7t^{14}+q^62t^{12}+q^5t^{10}+q(t^8+t^7)+2t^6,$$
which would imply that $`𝖾(_{3,1})=q^7+2q^6+q^5+2`$.
Unfortunately, there was a small error in that calculation; namely, two terms were omitted from the calculation (4.8) of the Poincaré polynomial of $`W\backslash T^{}`$ in case $`R`$ is of type $`E_7`$, associated to the Dynkin subdiagrams $`D_6\times A_1`$ and $`D_4\times A_1^2`$ of $`E_7`$. Once they are taken into account (the first contributes $`t^7`$, the second $`qt^8`$), we see that
$`{\displaystyle \underset{i,j,k}{}}u^iv^jt^kh^{i,j}(H^k(_{3,1},))`$ $`=q^7t^{14}+q^62t^{12}+q^5t^{10}+qt^8+t^7`$
$`+(aqt+b)(t^7+t^6),`$
where $`a`$ and $`b`$ equal $`0`$ or $`1`$. This proves the theorem. ∎ |
no-problem/9910/hep-lat9910039.html | ar5iv | text | # Lattice QCD and chiral mesons
## Abstract
The standard QCD action is improved by the addition of irrelevant operators built with chiral composites. An effective Lagrangian is derived in terms of auxiliary fields, which has the form of the phenomenological chiral Lagrangians. Our improved QCD action appears promising for numerical simulations as the pion physics is explicitely accounted for by the auxiliary fields.
QCD in the high energy limit is efficiently described by the perturbative expansion in the gauge coupling constant. For the low energy properties instead, we do not have an equally satisfactory theory.
At the phenomenological level a description of the low energy physics related to the symmetry breaking of the chiral invariance is obtained by using the phenomenological chiral Lagrangians , but a direct derivation from the basic theory of quarks and gluons is still lacking.
At a more fundamental level the lattice formulation has allowed us to investigate the quark confinement and the spontaneous breaking of chiral invariance, but due to the complexities related to the definition of chiral fermions , in numerical simulations the chiral limit is achieved only through a fine-tuning procedure. Even though these difficulties can be overcome, we remain unable to unify the treatment of high energy and low energy properties.
For these reasons we have developed an approach based on the use of quark composites as fundamental variables. The idea behind it is that a significant part of the binding of the hadrons can be accounted for in this way, so that the “residual interaction” is sufficiently weak for a perturbative treatment.
The quark-composites approach is in principle fairly general, since it allows us to treat all the hadrons composite of quarks, but for technical reasons the composites with the quantum numbers of mesons and barions are treated in a different way. The nucleonic composites, for instance, naturally satisfy the Berezin integration rules and we derived the substitution rules which allow us to replace polynomials of the quark fields by appropriate polynomials of these composites in the partition function. The mesonic composites instead, due to the complexity of the integral over even elements of a Grassmann algebra, are replaced by auxiliary fields by means of the Stratonovich-Hubbard transformation . Even though our effective action is not renormalizable by power counting, due to the renormalizability of QCD only a finite number of free parameters can be generated by the counterterms because of the BRS identities.
We assume the modified partition function
$$Z=[dV][d\overline{\lambda }d\lambda ]\mathrm{exp}[S_{YM}S_qS_C],$$
(1)
where $`S_{YM}`$ is the Yang-Mills action, $`S_q`$ is the action of the quark field and $`S_C`$ is a four fermions irrelevant operator which provides the kinetic terms for the quark composites with the quantum numbers of the chiral mesons. $`\lambda `$ is the quark field while the gluon field is associated to the link variables $`V`$. Differentials in square brackets are understood to be the product of the differentials over the lattice sites and the internal indices. All the fields live in an euclidian lattice of spacing $`a`$. We introduce the following notation for the sum over the lattice
$$(f,g)=a^4\underset{x}{}f(x)g(x).$$
(2)
In this notation the quark action is
$$S_Q=(\overline{\lambda },Q\lambda ).$$
(3)
We do the Wilson choice for the quark wave operator
$$Q=Q_DQ_W,Q_D=\gamma _\mu \overline{}_\mu ,Q_W=a\frac{r}{2}\mathrm{}.$$
(4)
The symmetric derivative $`\overline{}_\mu `$ and the Laplacian $`\mathrm{}`$ are covariant and are defined in terms of the right/left derivatives
$$(_\mu ^\pm )_{xy}=\pm \frac{1}{a}\left(\delta _{x\pm \widehat{\mu },y}V_{\pm \mu }(x)\delta _{xy}\right)$$
(5)
according to
$$\overline{}_\mu =\frac{1}{2}\left(_\mu ^++_\mu ^{}\right),\mathrm{}=\underset{\mu }{}_\mu ^+_\mu ^{}$$
(6)
with standard conventions for the link variables.
The chiral composites are the pions and the sigma
$$\stackrel{}{\widehat{\pi }}=ia^2k_\pi \overline{\lambda }\gamma _5\stackrel{}{\tau }\lambda ,\widehat{\sigma }=a^2k_\pi \overline{\lambda }\lambda .$$
(7)
$`\gamma _5`$ is assumed hermitian, the $`\stackrel{}{\tau }`$’s are the Pauli matrices and a factor of dimension (length)<sup>2</sup>, necessary to give the composites the dimension of a scalar, has been written in the form $`a^2k_\pi `$ for convenience (see below).
Since for massless quarks the QCD action is chirally invariant, the action of the chiral mesons must be, apart from a linear breaking term, $`O(4)`$ invariant. It must then have the form
$$S_C=\frac{1}{4}(\widehat{\mathrm{\Sigma }}^{},C\widehat{\mathrm{\Sigma }})\frac{1}{4}(\widehat{\mathrm{\Sigma }}^{}+\widehat{\mathrm{\Sigma }})m_q,$$
(8)
where
$$\widehat{\mathrm{\Sigma }}=\widehat{\sigma }+i\stackrel{}{\tau }\stackrel{}{\widehat{\pi }}\gamma _5,A=\text{tr}^{isospin}A.$$
(9)
There is a long history of 4-fermions interactions and their relation to QCD, starting from the Nambu-Jona-Lasinio and Gross-Neveu models, until the so-called chirally extended QCD or $`\chi `$QCD (see also for example ).
Our demand of irrelevance of $`S_C`$, needed to maintain the renormalization properties of QCD, and heuristic considerations, based on experience with simple, solvable models, lead to the following form of the wave operator of the chiral composites
$$C=\frac{\rho ^4}{a^4}\frac{1}{\mathrm{}+\rho ^2/a^2},$$
(10)
where $`\rho `$ is a dimensionless parameter. The irrelevance by power counting of $`S_C`$ requires that in the continuum limit $`\rho `$ do not to vanish and $`k_\pi `$, as well as the product $`k_\pi \rho `$, do not diverge. Under these conditions, as a consequence of the explicit dependence on the lattice spacing of $`C`$ and the composites, operators of dimension higher than 4 are accompanied by the appropriate powers of the cut-off. All these parameters are obviously subject to renormalization.
We replace the chiral composites by the auxiliary fields
$$\mathrm{\Sigma }=\mathrm{\Sigma }_0i\gamma _5\stackrel{}{\tau }\stackrel{}{\mathrm{\Sigma }}$$
(11)
by means of the Stratonovich-Hubbard transformation . Ignoring, as we will systematically do in the sequel, field independent factors, the partition function can be written
$`Z`$ $`=`$ $`{\displaystyle [dV]\left[\frac{d\mathrm{\Sigma }}{\sqrt{2\pi }}\right]\mathrm{exp}\left[S_{YM}S_0\right]}`$ (12)
$`{\displaystyle [d\overline{\lambda }d\lambda ]\mathrm{exp}\left[(\overline{\lambda },(DQ)\lambda )\right]}`$
$`=`$ $`{\displaystyle [dV]\left[\frac{d\mathrm{\Sigma }}{\sqrt{2\pi }}\right]\mathrm{exp}\left[S_{YM}\stackrel{~}{S}\right]}`$
with
$$\stackrel{~}{S}=S_0\text{Tr}\mathrm{ln}(DQ)$$
(13)
“Tr” is the trace over both space and internal degrees of freedom and we introduced the functions of the auxiliary fields
$`S_0`$ $`=`$ $`{\displaystyle \frac{1}{4}}\rho ^4(\mathrm{\Sigma }^{},\left(a^4C\right)^1\mathrm{\Sigma }),`$ (14)
$`D`$ $`=`$ $`k_\pi \left[\rho ^2\mathrm{\Sigma }m_q\right].`$ (15)
The pion field can be introduced by the transformation
$`\mathrm{\Sigma }`$ $`=`$ $`R\left[{\displaystyle \frac{1\gamma _5}{2}}U+{\displaystyle \frac{1+\gamma _5}{2}}U^{}\right]`$
$`R^2`$ $`=`$ $`\mathrm{\Sigma }_0^2+\stackrel{}{\mathrm{\Sigma }}^2`$
$`U`$ $`=`$ $`\mathrm{exp}\left({\displaystyle \frac{i}{f_\pi }}\stackrel{}{\tau }\stackrel{}{\pi }\right)SU(2).`$ (16)
The volume element
$$\left[\frac{d\mathrm{\Sigma }}{\sqrt{2\pi }}\right]=\left[\frac{dR}{\sqrt{2\pi }}\right][dU]\mathrm{exp}\underset{x}{}3\mathrm{ln}R$$
(17)
provides the Haar measure $`[dU]`$ over the group. The effective action takes the form
$`\stackrel{~}{S}`$ $`=`$ $`{\displaystyle \underset{\mu }{}}{\displaystyle \frac{1}{4}}(_\mu ^+(RU^{}),_\mu ^+(RU))`$ (18)
$`+{\displaystyle \frac{\rho ^2}{2a^2}}(R,R)\text{Tr }\mathrm{ln}\left(DQ\right).`$
If the radial field $`R`$ acquires a nonvanishing expactation value $`\overline{R}`$, we set
$$\overline{R}=f_\pi $$
(19)
so that the first term of $`\stackrel{~}{S}`$ is the kinetic part of the chiral action while the radial field should not be dynamical because of its divergent mass (in the continuum limit). It can then be shown that the fermionic determinant can be expanded in powers of the derivatives of $`U`$, as appropriate to a Goldstone field, and of the explicit chiral symmetry breaking terms.
If we could naively forget about the Wilson term in the lattice action of the fermions, it would be very easy to evaluate
$$\overline{R}=\sqrt{\frac{\mathrm{\Omega }}{a\rho }}=f_\pi ,$$
(20)
where $`\mathrm{\Omega }=24`$ is the number of quark components. If we neglect the fluctuations of $`R`$ and we put
$$B=\frac{1}{2a^2f_\pi },$$
(21)
the first term of $`\stackrel{~}{S}`$ can be identified with the leading term of the chiral models
$$_2=\frac{1}{4}f_\pi ^2\left[_\mu U^{}^\mu U2m_qBU+U^{}\right]$$
(22)
and
$`m_\pi ^2`$ $`=`$ $`2m_qB`$ (23)
$`k_\pi 0|\overline{\lambda }\lambda |0`$ $`=`$ $`2f_\pi ^2B`$ (24)
while the fermionic determinant has an (hopping) expansion in inverse powers of $`f_\pi `$ and $`k_\pi `$ which provides corrections to $`_2`$. Furthermore the quarks are perturbatively confined in this vacuum because their effective mass
$$M_q=m_qk_\pi \rho ^2\overline{R}=m_qk_\pi \rho ^2f_\pi $$
(25)
is proportional to the inverse of the expansion parameters.
But since, as it is well known, in the absence of the Wilson term the lattice action is describing 16 fermionic species which cancel the abelian anomaly, let us go back to the full theory in the presence of the term $`Q_W`$. Let us set $`U=U^{}=1`$ and $`R=\overline{R}`$ and study the classical potential
$$\left[\overline{R}\right]=\frac{1}{2}\frac{\rho ^2}{a^2}\overline{R}^2\frac{1}{2a^4N^4}\text{ Tr }\mathrm{ln}P$$
(26)
where $`a^4N^4`$ is the lattice volume
$$P=M_q^22M_qQ_W+Q_W^2Q_D^2[Q_D,Q_W]$$
and the effective quark mass $`M_q`$ has been given above.
$`[\overline{R}]`$ is still a function of the fluctuating gauge fields. The variation of the partition function with respect to $`\overline{R}`$ yields the stationarity equations
$$\frac{\rho ^2}{a^2}\overline{R}=\frac{k_\pi \rho ^2}{a^4N^4}\text{Tr }P^1\left(Q_WM_q\right)$$
(27)
where $``$ is the functional average over the Yang Mills measure. Whenever a nonvanishing solution to these equations exists in the limit of vanishing of the explicit breaking terms, we have a spontaneous breaking of the chiral symmetry in QCD.
Such solution must satisfy the condition on the effective quark mass $`M_Q`$
$$\underset{a0}{lim}\frac{aM_q}{r}=0.$$
(28)
This is the standard condition, necessary also with our effective action , for the chiral anomaly to be correctly reproduced.
The pion mass turns out to be
$$m_\pi ^2=\frac{k_\pi \rho ^2\overline{R}}{f_\pi ^2}\frac{1}{a^4N^4}\text{Tr }P^1\left(Q_Wm_q\right)$$
(29)
and it is entirely due to the explicit breakings of the chiral invariance, namely the quark mass term and the Wilson term. It is convenient to introduce a subtracted mass which makes explicit the chiral limit
$$m_q=m+\delta m$$
(30)
with
$$\delta m\text{Tr }P^1=\text{Tr }P^1Q_W,$$
(31)
so that
$$m_\pi ^2=m\frac{k_\pi \rho ^2\overline{R}}{f_\pi ^2}\frac{1}{a^4N^4}\text{Tr }P^1.$$
(32)
Since the quark condensate in the present case is given by
$$0|\overline{\lambda }\lambda |0=\frac{1}{a^4N^4}\text{Tr }P^1\left(k_\pi \rho ^2\overline{R}m\right),$$
(33)
we get the Gell-Mann-Oakes-Renner relation
$$m_\pi ^2=\frac{1}{f_\pi ^2}m0|\overline{\lambda }\lambda |0.$$
(34)
Eq. (33) shows a contribution to the effective quark mass proportional to the quark-condensate, that has therefore the same origin of the nonperturbative contribution to the quark mass first studied by Politzer (see also ).
In conclusion, with our choice of the irrelevant 4-fermion $`S_C`$ we get the desired link between QCD and the chiral lagrangians, provided there exists a nontrivial solution for the expectation value of the radial field $`R`$ which is related to the spontaneous symmetry breaking of chiral invariance. It is indeed easy to show that the fermionic determinant can be expanded in powers of the derivatives of the auxiliary fields and the explicit chiral symmetry breaking terms. To complete the derivation of the chiral models from QCD it remains to show that these latter will generate only interactions of the chiral theories. This requires an investigation of the the additive renormalizations which are induced by the Wilson term $`Q_W`$ through the appropriate Ward identities. It could be of interest, also in this context, the use of Ginsparg-Wilson lattice fermions in order to preserve as much as possible the explicit chiral invariance.
From a numerical point of view, the evaluation of the fermionic determinant should be faster in presence of the auxiliary fields, as they already provide the propagation of the pions. Some support to the above can be found in . |
no-problem/9910/hep-ph9910520.html | ar5iv | text | # EVENT GENERATORS FOR LINEAR COLLIDER PHYSICS
## 1 Introduction
In any simulation of an experimental analysis at an $`e^+e^{}`$ collider, we must begin with a sample of physics events to be analyzed. To produce these, we need an event generator. This program encodes our knowledge of Standard Model background processes and our expectations for signal processes. In this article, I will review the current variety of event generators available for simulations studies at future linear colliders.
There are a number of goals that an event generator might be expected to fulfill. It should realistically represent possible signal processes and Standard Model backgrounds. It should take care of the superposition of QCD and fragmentation effects onto electroweak cross sections. It should give high accuracy, for precision studies. And, it should have the flexibility to include new reactions of arbitrary and exotic form. These goals generally conflict with one another, or else are achieved only at the expense of a high level of complexity. So we needed different tools optimized for these various tasks.
In this report, I review the event generators which address these issues that were presented at Sitges. The major simulation programs described in this article are listed in Table 1. This table includes, for each program, a Web address where download information and documentation can be found.
## 2 Workhorses
Among event generators, first place must be given to the general purpose programs PYTHIA and HERWIG. Both programs were originally written to test ideas about QCD jet phenomena and hadronization. But both have now evolved into general-purpose codes incorporating all of the basic Standard Model processes in $`e^+e^{}`$ annihilation and a variety of nonstandard reactions.
The most important aspect of PYTHIA and HERWIG is that they fully simulate QCD final state state effects. Given a system of two or more partons with large invariant mass, these programs generate a QCD parton shower and then simulate the hadronization of the final array of partons. The shower algorithm is not exact at higher orders in $`\alpha _s`$ but does generate an approximately correct set of hard jets radiated from the original parton system. The hadronization step is carried out by different algorithms in the the two programs, but, in both cases, the description of hadronization has been tuned to fit the $`e^+e^{}`$ annihilation data. These features imply that QCD final-state interactions have been included in a way that extrapolates correctly to high energy.
PYTHIA allows any parton-level generator to be included as a hard subprocess. The generator must specific the color routing in the final state and the order in which parton showers are to be generated. One caution with this approach is that order $`\alpha _s`$ corrections can raise or lower the overall normalization of the cross section. This effect cannot be included in the hadronization but must be accounted for externally.
PYTHIA can run with a given initialization at a variety of $`e^+e^{}`$ center of mass energies. This allows the program to be linked to a generator of initial-state $`e^{}`$ and $`e^+`$ energy distributions, such as CIRCE or PYBMS, to simulation the effect of beamstrahlung. Initial state polarization is not included in the current version. Final state spin correlations are included for some but not all processes; notably, spin correlations are included for the very important process $`e^+e^{}W^+W^{}`$.
PYTHIA and HERWIG both include generators for the process $`\gamma \gamma `$ hadrons, including hard, soft, and ‘resolved’ components. The third of these refers to processes in which partons in the photon undergo a hard scattering. The relative magnitudes of these three components are not understood from theory, but it is important to understand this problem to compute the high-rate ‘minijet’ background in which a $`\gamma \gamma `$ collision produces a low-mass hadronic system. New data on high energy $`\gamma \gamma `$ processes from LEP 2 and on $`\gamma p`$ processes from HERA—which contain much of the same physics–should allow systematic tuning of these generators.
## 3 Polarization
To introduce the next sections of this report, I must digress on the subject of polarization. Polarization has a central role in the LC physics. On one hand, because the LC will operate in the energy region well above the $`Z^0`$ where it becomes manifest that left- and right-handed have completely different quantum numbers, all standard and non-standard cross sections will depend strongly on polarization. On the other hand, since it is difficult to measure polarization effects in the hadronic environment, polarization provides many new observables that cannot be studied at the LHC. Thus it is important that both initial- and final-state polarization be included properly in LC event generators.
How can polarization be included in physics simulations? The traditional approach is to generalize cross section formulae to include polarization asymmetries. However, this rapidly becomes cumbersome. Generators that take polarization seriously typically work at the amplitude level. Even if one does not include polarization, it is useful to work with amplitudes in any complex Feynman diagram computation, since if there are $`N`$ terms in the expression for the amplitude, there are $`N^2`$ in the expression for the cross section. Any method that makes use of amplitudes to compute the cross section can be structured so that the polarization-dependence is available for free.
There are two common approaches for including polarization in the cross-section formulae used in event generators. The first approach is the helicity-amplitude paradigm. In this approach, one computes amplitudes for transitions between states of definite helicity. These amplitudes are then linked together to provide the complete amplitude for a process that turns the initial $`e^+e^{}`$ state into the final decay products. Finally, the complete amplitude is squared to provide the event weight.
The second approach is the CALKUL paradigm. In this approach, one concentrates on amplitudes with massless particles in the inital and final states, the typical situation for $`e^+e^{}`$ annihilation when top quarks, $`W`$ bosons, and other heavy particles have decayed to their final products. Then it is possible to compactly represent the amplitude for the whole process of production and decay in terms of spinor products,
$$12=\overline{u}_L(p_1)v_R(p_2),\left[12\right]=\overline{u}_R(p_1)v_L(p_2).$$
(1)
The spinor products can in turn be computed directly for the set of initial and final massless four-vectors in a given event.
It is important to note that there are no important polarization effects associated with hadronization, except that the $`\tau `$ decay depends strongly on $`\tau `$ polarization. This effect should be accounted explicitly by decaying $`\tau `$’s through the simulation program TAUOLA.
The use of helicity amplitudes to systematically describe LC physics was pioneered by the JLC group, using the programs HELAS, for automatic Feynman diagram computation, and BASES/SPRING, to provide weight-1 events. The current version of their package is PHYSSIM in Table 1.
The need to build up systematically the full complexity of LC reactions—including beamstrahlung, initial-state radiation, initial- and final-state polarization effects, and hadronization—has led the authors of almost all the simulation programs to embrace object-oriented programming for their future versions. One relatively simple generator, pandora, already segregates the beam and $`e^+e^{}`$ reaction information into separate C
```
++-\ classes which interact through a simple interface.
Further details can be found in ref. \cite{mypandora}.
\section{Supersymmetry}
The next few sections of this report will discuss generators devoted to
specific problems of LC physics. The first of these is the coherent
representation of supersymmetry processes.
The full set of processes in $\ee$ annnihilation to two supersymmetric
particles is now available in three
different programs, the supersymmetric extension of PYTHIA,\cite{SPYTH}
the latest release of ISAJET,\cite{Paige} and the SUSYGEN program written
for LEP 2 studies.\cite{Ghodbane} ISAJET was the first to include
polarization
```
dependent cross sections. The new version also correctly includes the matrix elements for 3-body decays. SUSYGEN gives a complete treatment of initial and final polarization effects using the helicity-amplitude paradigm and even allows for nonzero phases in the $`A`$ and gaugino mass parameters. ISAJET and SUSYGEN explicitly include parametrizations of beamstrahlung. All three programs allow input of a general set of supersymmetry parameters. Given the model-independent character of LC measurements, this is an important feature. PYTHIA and ISAJET also include facilities that compute the supersymmetry spectrum from an underlying model; SUSYGEN includes an interface to spectrum calculations with SUSPECT.
## 4 Precision Standard Model
For calculation of Standard Model background processes at a LC, it is not sufficient to consider $`e^+e^{}`$ annihilation to on-shell 2-body final states. Backgrounds to new physics typically come from higher-order corrections in which additional fermions are produced or from $`e^+e^{}W^+W^{}`$ processes in which one $`W`$ boson fluctuations far off the mass shell. In fact, these reactions are not distinct and one must include all $`e^+e^{}4`$ fermion Feynman diagrams in order to obtain a gauge-invariant result.
This is already an issue at LEP 2 and a very serious effort has been made to provide 4-fermion event generators for the LEP 2 experiments. The status of generators for 4-fermion and $`W`$ pair physics has recently been described by Bardin, et al. These programs typically use the CALKUL paradigm. Though their implemetations are slightly different, they agree excellently among themselves and with the LEP 2 for configurations of four fermions all at large relative momenta. For brevity, I have included only three of these programs, EXCALIBUR, KORALW,, and WPHACT, in Table 1.
Two unresolved conceptual problems in the simulation of 4-fermion processes are the treatment of the $`W`$ width for an off-shell $`W`$ and the correct inclusion of transverse momentum for almost-collinear initial state radiation. In both cases, there is no simple prescription which is gauge-invariant. This leads to discrepancies among the various programs in certain specific kinematic regions. For example, for the process $`e^+e^{}e^+\nu d\overline{u}`$ (very low mass single $`W^{}`$ production), the various generators give 10% differences in the predicted cross section when $`m(d\overline{u})`$ is as small as a few GeV.
Additional challenges can be found in higher-order processes. For the study of the standard process $`e^+e^{}t\overline{t}`$ and also for many searches, one needs an event generator for $`e^+e^{}6`$ fermions. Accomando, Ballestrero, and Pizzio have computed the relevant cross sections in a suitable form and are preparing a new generator, SIXPHACT. Alternatively, methods are now available to allow one to perform the computation automatically; I will describe these in the next section.
At the same time, it is necessary to reach for higher accuracy in the simulation of 2-fermion final states. KORALZ, by Jadach, Ward, and Was, achieved an accuracy of 0.1% in the calculation of the small-angle Bhabha scattering cross section, and this was essential for the precision cross section normalization at LEP 1. At higher energies, it is necessary to treat multiple photon emissions coherently, and the precision calculations must be extended to larger forward angles. Jadach and collaborators have just released a new program KK which addresses these issues.
## 5 Do it yourself
Eventually, workers in LC physics will have a need for event generators for processes that have not been included in the standard programs. The traditional recourse in this case has been to find a friendly theorist with time on his hands. Today, however, amother course is made available by the GRACE and COMPHEP programs. These encode the Feynman rules of the standard model and certain extensions, and allow encoding of arbitrary additional Lagrangian couplings, and then automatically generate the numerical sum of Feynman diagrams. The authors of GRACE have made available a supersymmetric extension which already includes all 239 $`e^+e^{}3`$-body supersymmetry processes, and all 1424 $`e^+e^{}3`$-body processes. A typical process might involve the summation of 100 tree diagrams. The calculations are done at the amplitude level, allowing the event generation to include full spin correlations using the helicity-amplitude paradigm. These programs can also compute one-loop corrections by evaluating diagrams in terms of the standard set of one-loop integrals defined by Passarino and Veltman. A more complete discussion of these systems can be found in Perret-Gallix’s contribution to this conference.
## 6 Conclusions
In this report, I have tried to summarize the array of programs that are now available to perform event generation for LC physics. These range from the general-purpose generators PYTHIA and HERWIG, to specific tools for supersymmetry and multi-fermion simulations, to tools for automatic generation of events for arbitrary physics processes. For the future, we expect to see trends toward object-oriented and modular programs, toward detailed high-accuracy computation of standard background processes, and toward further automation of complex calculations. We are well on the way to the level of accuracy and generality that will be needed for the LC physics program.
## Acknowledgments
I am grateful to the authors of the programs described above for their help in organizing this review. This work was supported by the US Department of Energy under contract DE–AC03–76SF00515.
## References |
no-problem/9910/cond-mat9910130.html | ar5iv | text | # Commensurability oscillations due to pinned and drifting orbits in a two-dimensional lateral surface superlattice
## Abstract
We have simulated conduction in a two-dimensional electron gas subject to a weak two-dimensional periodic potential, $`V_x\mathrm{cos}(2\pi x/a)+V_y\mathrm{cos}(2\pi y/a)`$. The usual commensurability oscillations in $`\rho _{xx}(B)`$ are seen with $`V_x`$ alone. An increase of $`V_y`$ suppresses these oscillations, rather than introducing the additional oscillations in $`\rho _{yy}(B)`$ expected from previous perturbation theories. We show that this behavior arises from drift of the guiding center of cyclotron motion along contours of an effective potential. Periodic modulation in the magnetic field can be treated in the same way.
The behavior of electrons in a periodic potential lies at the heart of solid state physics and continues to yield surprises. Motion in a controllable one (1D) or two-dimensional (2D) potential can be studied with a lateral surface superlattice (LSSL). The electrons typically lie in a high-mobility 2D gas in a semiconducting heterostructure, and the periodic potential is applied through an array of metal gates whose bias can be varied. Alternatively a patterned stressor may be used, in which case the dominant potential is piezoelectric; this has a lower symmetry than the stressor, which will prove important.
The aim of using LSSLs is often to explore quantum-mechanical effects, such as Bloch oscillation and the Hofstadter butterfly, but the period of the potential is too long in most current devices. Instead, the dominant effects seen in 1D LSSLs are commensurability oscillations (COs) in the magnetoresistance . These can be explained semiclassically from interference between cyclotron motion and the periodic potential. Consider a sinusoidal potential energy, $`V(x)=V_x\mathrm{cos}(2\pi x/a)`$. The interference causes a drift along the equipotentials, parallel to the $`y`$ axis, which contributes to the conductivity $`\sigma _{yy}`$ and the resistivity $`\rho _{xx}`$:
$$\frac{\mathrm{\Delta }\rho _{xx}^{(1\mathrm{D})}(V_x)}{\rho _0}=\left(\frac{\pi l}{a}\right)^2\left(\frac{V_x}{E_\mathrm{F}}\right)^2J_0^2\left(\frac{2\pi R_c}{a}\right).$$
(1)
Here $`J_0`$ is a Bessel function of the first kind, $`\rho _0`$ is the resistivity at $`B=0`$, $`l`$ is the mean free path, $`R_c=v_\mathrm{F}/\omega _\mathrm{c}`$ is the cyclotron radius, $`\omega _\mathrm{c}=eB/m`$ is the cyclotron frequency, $`E_\mathrm{F}`$ is the Fermi energy and $`v_\mathrm{F}`$ the Fermi velocity. No effect on $`\rho _{yy}`$ is expected in this approach. Quantum-mechanical analysis yields a similar result but with small contributions to $`\rho _{yy}`$. Overall agreement between theory and experiments on 1D LSSLs is excellent, even for the strong piezoelectric potentials in strained LSSLs .
Now consider a simple 2D potential energy,
$$V(x,y)=V_x\mathrm{cos}(2\pi x/a)+V_y\mathrm{cos}(2\pi y/a).$$
(2)
To avoid ambiguity we take $`V_xV_y`$. An extension of the semiclassical theory shows that $`V_x`$ continues to generate oscillations in $`\rho _{xx}(B)`$ according to Eq. (1), and $`V_y`$ has the same effect on $`\rho _{yy}(B)`$. In contrast to 1D potentials, there is little confirmation of this plausible result. An early experiment used a holographic technique in two steps. A 1D grating was produced first, and showed strong COs in the longitudinal resistivity as expected. The sample was next illuminated with an orthogonal 1D pattern to produce a 2D grid. However, this combined pattern did not produce similar, strong COs in both $`\rho _{xx}`$ and $`\rho _{yy}`$, as expected from the extension to the semiclassical model; much weaker oscillations with the opposite phase were seen instead. Many subsequent measurements have used different modulation techniques. Virtually the only common feature between them is that the COs are generally weak, confirming this most significant feature of the holographic experiment.
We have performed simulations of conduction in a 2D LSSL to address this issue, and find that $`V_x`$ and $`V_y`$ do not contribute independently. Instead, the introduction of $`V_y`$ suppresses the oscillations in $`\rho _{xx}`$ rather than inducing oscillations in $`\rho _{yy}`$. We explain this with a simple picture based on drift of the guiding center of cyclotron motion along contours of an effective potential. Trajectories can drift or be pinned, and the latter suppress the magnetoresistance. Remarkable behavior is found when higher Fourier components dominate this potential, which should be detectable in experiments.
To simulate conduction we solved the classical equations of motion for electrons moving in the potential energy given by Eq. (2) and a normal magnetic field, with a constant probability of isotropic scattering. The superlattice had period $`a=200\mathrm{nm}`$ in GaAs with $`3\times 10^{15}\mathrm{m}^2`$ electrons of mobility $`50\mathrm{m}^2\mathrm{V}^1\mathrm{s}^1`$. The resistivity tensor was deduced from the velocity autocorrelation function and its diagonal elements are plotted as a function of the magnetic field in Fig. 1. We held $`V_x=1\mathrm{meV}`$ and raised $`V_y`$ from zero to $`V_x`$. In the 1D limit, $`V_y=0`$, the usual oscillations are seen in $`\rho _{xx}`$ with no structure in $`\rho _{yy}`$, in excellent agreement with Eq. (1). Recall that the existing theory predicts that an increase of $`V_y`$ from zero should induce oscillations in $`\rho _{yy}`$ without affecting $`\rho _{xx}`$. Instead, we see no oscillations in $`\rho _{yy}`$ while those in $`\rho _{xx}`$ are suppressed. Most strikingly, there are no commensurability oscillations at all in the symmetric 2D limit where $`V_y=V_x`$. The only structure that remains is a positive magnetoresistance at low fields. This arises from magnetic breakdown and it has recently been suggested that successive breakdowns may lead to further oscillations of quantum-mechanical origin.
An explanation of this behavior follows from the trajectories taken by electrons in the simulation. Typical examples starting from different points are shown in Figs. 2(a) and (b) for $`V_x=1\mathrm{meV}`$ and $`V_y=\frac{1}{2}\mathrm{meV}`$. The magnetic field $`B=0.72\mathrm{T}`$, corresponding to the largest peak in Fig. 1. There is no scattering and the trajectories run for $`100\mathrm{ps}`$, considerably longer than the lifetime $`\tau =19\mathrm{ps}`$ if scattering had been included. There is no sign of the chaos seen in weaker magnetic fields . The underlying motion is clearly a cyclotron orbit and the overall trajectories can be divided into two classes. Fig. 2(a) shows the cyclotron orbit drifting along $`y`$. This is perpendicular to the wavevector of the stronger potential component, and is the only type of trajectory seen in the 1D limit, $`V_y=0`$. Such motion contributes to $`\sigma _{yy}`$ and $`\rho _{xx}`$. No electrons were found to drift along $`x`$ and we therefore expect no effect on $`\sigma _{xx}`$ and $`\rho _{yy}`$. Trajectories of the second class are pinned, as in Fig. 2(b). The cyclotron orbit is distorted and precesses but makes no overall displacement. Such orbits make no contribution to conduction in the limit of large $`\omega _\mathrm{c}\tau `$ and therefore suppress the magnetoresistance. All trajectories become pinned in the symmetric 2D limit, $`V_y=V_x`$, quenching the COs. This analysis of the trajectories therefore shows that COs are reduced in magnitude because of pinned orbits, and are seen only in $`\rho _{xx}`$ if $`V_x>V_y`$.
The trajectories are only weakly distorted from regular cyclotron motion. We therefore focus on motion of the guiding center , which drifts at a velocity given by
$$𝐯^{(\mathrm{d})}(X,Y)=V^{\mathrm{eff}}(X,Y)\times 𝐁/eB^2.$$
(3)
The effective potential energy $`V^{\mathrm{eff}}(X,Y)`$ is the periodic potential energy (Eq. 2) averaged over the perimeter of a cyclotron orbit centered on $`(X,Y)`$, which reduces $`V_x`$ and $`V_y`$ equally by a factor of $`J_0(2\pi R_\mathrm{c}/a)`$. This depends on magnetic field through the cyclotron radius $`R_\mathrm{c}`$. Eq. (3) shows that the guiding center drifts along contours of $`V^{\mathrm{eff}}(X,Y)`$. Two examples are plotted in Fig. 3. All contours are closed for a symmetric effective potential with $`V_x^{\mathrm{eff}}=V_y^{\mathrm{eff}}`$ \[Fig. 3(a)\]. All trajectories are therefore pinned as in Fig. 2(b). Fig. 3(b) shows the effect of breaking the symmetry with $`V_y^{\mathrm{eff}}=\frac{1}{2}V_x^{\mathrm{eff}}`$. This introduces a band of open contours, shaded in the plot, running parallel to the $`Y`$ axis. The guiding center can drift along these and the deviations of the contours from straight lines leads to the lateral oscillations seen in Fig. 2(a).
The change in conductivity can be estimated from
$$\mathrm{\Delta }\sigma _{\mu \nu }=\frac{e^2m\tau }{\pi \mathrm{}^2}\overline{v}_\mu ^{(\mathrm{d})}\overline{v}_\nu ^{(\mathrm{d})}_{\mathrm{orbits}}.$$
(4)
It is assumed that only drifting orbits need be considered. We first find the average drift velocity $`\overline{𝐯}^{(\mathrm{d})}`$ for each orbit, which leaves only $`\overline{v}_y^{(\mathrm{d})}`$ in our case. The square $`[\overline{v}_y^{(\mathrm{d})}]^2`$ is then averaged over all orbits in the unit cell to give $`\mathrm{\Delta }\sigma _{yy}`$. A difficulty is that Eq. (4) is valid only if the lifetime $`\tau `$ is much larger than the periods of the drifting orbits. This always fails for trajectories on the boundary of the open region because they go through stagnation points in the middle of each edge of the unit cell, but is easier to satisfy for the majority of orbits.
The open orbits are complicated and we therefore make several approximations to estimate Eq. (4). Start from the 1D limit, $`V_y=0`$, in which case all orbits drift with $`\overline{v}_y^{(\mathrm{d})}(X)=(2\pi V_x^{\mathrm{eff}}/eBa)\mathrm{sin}(2\pi X/a)`$. The introduction of $`V_y`$ affects this in two ways. First, the fraction of the unit cell occupied by drifting orbits is reduced to
$$P_{\mathrm{drift}}=1\frac{8}{\pi ^2}_0^{\pi /2}\mathrm{arcsin}\left(\sqrt{\frac{V_y}{V_x}}\mathrm{sin}\theta \right)𝑑\theta .$$
(5)
We replace the areas of drifting orbits shown in Fig. 3(b) by bands along $`Y`$ of the same area centered on $`X=\frac{1}{4}a`$ and $`\frac{3}{4}a`$, and average $`[\overline{v}_y^{(\mathrm{d})}(X)]^2`$ over the remaining area.
The second effect of $`V_y`$ is to make the drifting orbits sinuous, which reduces their average velocity along $`Y`$. The most rapid orbit is through the symmetry point $`(\frac{1}{4},\frac{1}{4})a`$. Its period is increased by a factor of $`(2/\pi )K`$ compared with $`V_y=0`$, where $`K`$ is the complete elliptic integral of first kind with modulus $`k=V_y/V_x`$. We apply this factor to all orbits. Combining the two effects lead to the approximate resistivity
$$\frac{\mathrm{\Delta }\rho _{xx}(V_x,V_y)}{\mathrm{\Delta }\rho _{xx}^{(1\mathrm{D})}(V_x)}\frac{\pi ^2}{4K^2}\left[P_{\mathrm{drift}}+\frac{\mathrm{sin}(\pi P_{\mathrm{drift}})}{\pi }\right].$$
(6)
This is plotted in the inset to Fig. 1. It reduces correctly to the symmetric and 1D limits, and agrees well with the simulations.
These results show that COs are much harder to observe in 2D potentials because the symmetry must be broken. They are also less robust in 2D. If $`V_y=0`$ the guiding center drifts along $`Y`$, which does not change the potential seen by the electron. In 2D, however, the potential changes and the picture based on the guiding center will be valid only if its drift during one cyclotron period is much smaller than the unit cell. This leads to the condition $`eBa2\pi (mV_x^{\mathrm{eff}})^{1/2}`$ for COs to exist, which is similar to that for normal diffusion rather than chaos in a much stronger, symmetric potential . Using the envelope of the Bessel function to relate $`V_x^{\mathrm{eff}}`$ and $`V_x`$ allows this to be rewritten as $`v_\mathrm{F}(eBa)^3m(2\pi V_x)^2`$. This becomes $`B0.3\mathrm{T}`$ for the conditions used in Fig. 1, which is in reasonable agreement with the simulations. We would also expect the COs to become more robust as $`V_y`$ is reduced, which is seen. Motion becomes chaotic when the condition on $`B`$ is violated and a typical trajectory is shown in Fig. 2(c) for $`B=0.28\mathrm{T}`$.
A general feature that follows from the drift of the guiding center along contours, Eq. (3), is that there can be only one average direction of drift. By symmetry this must be along $`X`$ or $`Y`$ if only $`V_x`$ and $`V_y`$ are present, so it is impossible to have oscillations in both $`\rho _{xx}`$ and $`\rho _{yy}`$. New features appear when further Fourier components are added to the potential. Consider the simplest ‘diagonal’ component, $`V_{1,1}\mathrm{cos}[2\pi (x+y)/a]`$. The average over a cyclotron orbit gives an effective potential energy $`V_{1,1}^{\mathrm{eff}}=V_{1,1}J_0(2\sqrt{2}\pi R_\mathrm{c}/a)`$. This has a different dependence on magnetic field from $`V_x`$ and $`V_y`$, and $`V_{1,1}`$ can therefore dominate the behavior near zeroes of $`J_0(2\pi R_\mathrm{c}/a)`$ where the fundamental terms vanish.
The effect of raising $`V_{1,1}^{\mathrm{eff}}`$ from zero is displayed in Fig. 4, holding $`V_y^{\mathrm{eff}}=\frac{1}{2}V_x^{\mathrm{eff}}`$. The region of open contours parallel to $`Y`$ is distorted and shrinks as $`V_{1,1}^{\mathrm{eff}}`$ rises. It collapses when $`V_{1,1}^{\mathrm{eff}}=V_x^{\mathrm{eff}}`$, all orbits are pinned, and COs are quenched. Open contours reappear when $`V_{1,1}^{\mathrm{eff}}>V_x^{\mathrm{eff}}`$ but are now parallel to the diagonal $`Y=X`$. This diagonal drift induces a peak in both $`\rho _{xx}`$ and $`\rho _{yy}`$ instead of the expected minimum in $`\rho _{xx}`$. Such a mechanism may contribute to the antiphase effects seen in some experiments , particularly if the fundamental potential components are well balanced, and the corresponding COs are suppressed. Note, however, that no COs of any period will be seen within this model unless there is some asymmetry in the potentials.
Commensurability oscillations can also be induced by a 2D periodic magnetic field, $`B_z=B_0+B_\mathrm{m}(x,y)`$ . This can be analyzed in the same way with an effective potential energy given by $`V_\mathrm{m}^{\mathrm{eff}}(X,Y)=E_\mathrm{F}B_\mathrm{m}^{\mathrm{eff}}(X,Y)/B_0`$. Here the magnetic field $`B_\mathrm{m}^{\mathrm{eff}}(X,Y)`$ is averaged over the area of a cyclotron orbit centered on $`(X,Y)`$, rather than its perimeter. This introduces a factor of $`2J_1(\theta )/\theta `$ with $`\theta =2\pi R_\mathrm{c}/a`$ for the fundamental Fourier components, rather than $`J_0(\theta )`$. Experiments show that COs in a square array of magnetic elements appear only in the direction of in-plane magnetization in accord with contours of $`B_\mathrm{m}`$ presented there.
We have shown that the commensurability oscillations in a two-dimensional superlattice are quite different from the superposition of one-dimensional results. The addition of the potential energy $`V_y\mathrm{cos}(2\pi y/a)`$ suppresses the oscillations in $`\rho _{xx}(B)`$ due to $`V_x\mathrm{cos}(2\pi x/a)`$ alone, rather than adding new oscillations in $`\rho _{yy}(B)`$. There are no oscillations at all in a symmetric potential, $`V_x=V_y`$, provided that only open orbits contribute to conduction. An asymmetric potential is therefore needed to observe commensurability oscillations. This might seem to present difficulty, as most devices have symmetric patterns, but real structures contain strain that induces an asymmetric potential through the piezoelectric effect . The behavior in two dimensions can be explained from the drift of the guiding center of cyclotron motion along contours of an effective potential, as in one dimension, but the coupling of motion along $`x`$ and $`y`$ causes perturbation theory to fail. This coupling also reduces the stability of commensurability oscillations, and motion becomes chaotic at low magnetic fields. Higher Fourier components in the potential lead to characteristic signatures in both $`\rho _{xx}`$ and $`\rho _{yy}`$ near minima of the fundamental oscillations, and their detection would verify the theory presented here.
It is a pleasure to thank C. J. Emeleus, B. Milton, and S. Chowdhury for many illuminating conversations. We also acknowledge correspondence with R. R. Gerhardts, who has similar work in progress, and the support of the U. K. EPSRC. |
no-problem/9910/gr-qc9910041.html | ar5iv | text | # Untitled Document
July 1999 INFNCATH-9908
Primary scalar hair in dilatonic theories
with modulus fields
S. Mignemi e-mail: mignemi@cagliari.infn.it
Dipartimento di Matematica, Università di Cagliari
viale Merello 92, 09123 Cagliari, Italy
and
INFN, Sezione di Cagliari
ABSTRACT
We study the general spherical symmetric solutions of dilaton-modulus gravity nonminimally coupled to a Maxwell field, using methods from the theory of dynamical systems. We show that the solutions can be classified by the mass, the electric charge, and a third parameter which we argue can be related to a scalar charge. The global properties of the solutions are discussed.
Introduction. It is known that black hole solutions of charged dilaton-gravity models, as those arising in effective string theory, present a quite different behaviour from the Reissner-Nordström solutions of general relativity \[1-2\]. This is essentially due to the non-minimal coupling of the dilaton to the Maxwell field, which spoils the validity of the standard no-hair theorems and hence allows for the presence of a non-trivial dilaton field. In spite of this, the dilaton charge is not an independent parameter, but is still a function of the mass and the electric charge of the black hole and has henceforth sometimes been called a secondary hair.
In effective four-dimensional string theory, however, further scalar fields are present besides the dilaton, as for example the moduli coming from compactification of higher dimensions, which are non-minimally coupled to the Maxwell field . The introduction of these fields may change the properties of the black holes. A simplified model which takes into account one modulus has been studied some time ago . It was shown that an exact spherically symmetric black hole solution of the field equations can be found by requiring that the dilaton and the modulus are proportional. However, this restriction is not necessary, and it would be interesting to investigate the properties of the most general spherically symmetric solutions. In general, it is not possible to find these solutions in analytic form. (The field equations can in fact be cast in the form of a Toda molecule system of first order differential equations, which is exactly solvable only in a few special cases). However, the qualitative behaviour of the solutions and some quantitative results can be obtained by studying the Toda dynamical system. In particular, the metric and the scalar fields will necessarily be regular at all the points of the integral curves except critical points. Consequently, in order to determine the global properties of the solutions, as the structure of their horizons and asymptotic regions, it suffices to study their behaviour at the critical points of the dynamical system. One drawback of this method is that only the exterior region of the black hole can be studied. The interior may be however investigated numerically by continuing the solutions beside the horizon.
In this paper we undertake the investigation of the general solutions of the model introduced in using this approach, and show that in general there exists a three-parameter family of asymptotically flat black hole solutions. This result is interesting because the third parameter can be presumably related to a scalar charge, giving therefore an example of primary scalar hair. In addition to these solutions, the model also admits as a limiting case a two-parameter family of non-asymptotically flat black hole degenerate solutions of the kind discussed in . We also discuss the properties of extremal black hole solutions, which are of great interest in recent developments of string and membrane theories.
The paper is organized as follows. In section 1 we describe the model and obtain the dynamical system associated with the field equations. In section 2 we discuss the exact black hole solutions, obtained for special values of the parameters. In section 3 we study the dynamical system in its generality, while in section 4 we discuss the physical properties of its solutions.
1. The action and the field equations. We study the action :
$$S=d^4x\sqrt{g}\left[R2(\mathrm{\Phi })^2\frac{2}{3}(\mathrm{\Sigma })^2(e^{2\mathrm{\Phi }}+\lambda ^2e^{2q\mathrm{\Sigma }/3})F^2\right]$$
$`(1.1)`$
where $`\mathrm{\Phi }`$ and $`\mathrm{\Sigma }`$ are the 4-dimensional dilaton and modulus respectively, $`F`$ is the Maxwell field strength, and $`q`$ and $`\lambda `$ are coupling parameters. This action has been obtained by dimensional reduction of heterotic string effective action , with the addition of a non-minimal coupling term for the modulus, arising from integrating out heavy modes .
The field equations ensuing from (1.1) are
$$\begin{array}{cc}& R_{\mu \nu }=2_\mu \mathrm{\Phi }_\nu \mathrm{\Phi }+\frac{2}{3}_\mu \mathrm{\Sigma }_\nu \mathrm{\Sigma }+2(e^{2\mathrm{\Phi }}+\lambda ^2e^{2q\mathrm{\Sigma }/3})\left(F_\mu ^\rho F_{\nu \rho }\frac{1}{4}F^2g_{\mu \nu }\right)\hfill \\ & ^\mu [(e^{2\mathrm{\Phi }}+\lambda ^2e^{2q\mathrm{\Sigma }/3})F_{\mu \nu }]=0\hfill \\ & ^2\mathrm{\Phi }=\frac{1}{2}e^{2\mathrm{\Phi }}F^2\hfill \\ & ^2\mathrm{\Sigma }=\frac{q\lambda ^2}{2}e^{2q\mathrm{\Sigma }/3}F^2\hfill \end{array}$$
$`(1.2)`$
A spherically symmetric solution can be found with Maxwell field strength
$$F_{mn}=Qϵ_{mn}m,n=2,3$$
$`(1.3a)`$
and a metric of the form
$$ds^2=e^{2\nu }(dt^2+e^{4\rho }d\xi ^2)+e^{2\rho }d\mathrm{\Omega }^2$$
$`(1.3b)`$
where $`\nu `$, $`\rho `$, $`\mathrm{\Phi }`$ and $`\mathrm{\Sigma }`$ are functions of the ”radial” coordinate $`\xi `$. Defining a new function $`\zeta =\nu +\rho `$, the field equations (2) take the simpler form
$$\begin{array}{ccc}\hfill \zeta ^{\prime \prime }& =e^{2\zeta }\hfill & (1.4a)\hfill \\ \hfill \mathrm{\Phi }^{\prime \prime }& =Q_1^2e^{2\nu 2\mathrm{\Phi }}\hfill & (1.4b)\hfill \\ \hfill \mathrm{\Sigma }^{\prime \prime }& =qQ_2^2e^{2\nu 2q\mathrm{\Sigma }/3}\hfill & (1.4c)\hfill \\ \hfill \nu ^{\prime \prime }& =Q_1^2e^{2\nu 2\mathrm{\Phi }}+Q_2^2e^{2\nu 2q\mathrm{\Sigma }/3}\hfill & (1.4d)\hfill \end{array}$$
with $`Q_1^2=Q^2`$ and $`Q_2^2=\lambda ^2Q^2`$, subject to the constraint
$$\zeta ^2\nu ^2\mathrm{\Phi }^2\frac{1}{3}\mathrm{\Sigma }^2+Q_1^2e^{2\nu 2\mathrm{\Phi }}+Q_2^2e^{2\nu 2q\mathrm{\Sigma }/3}e^{2\zeta }=0$$
$`(1.5)`$
A first integral of eq.(1.4a) is given by
$$\zeta ^2=e^{2\zeta }+a^2$$
where $`a^2`$ is an integration constant, which has been chosen to be non-negative because otherwise one would obtain solutions with no asymptotic region, which are not of interest to us. For the moment we consider only strictly positive values of $`a`$. As we shall see, the limit $`a0`$, corresponds to extremal solutions. Integrating again, with a suitable choice of the origin of $`\xi `$, one gets
$$e^\zeta =\frac{2ae^{a\xi }}{1e^{2a\xi }}$$
$`(1.6)`$
where $`a`$ can be chosen to be positive without loss of generality. Moreover, from the remaining eqs. (1.4), one obtains the relation
$$\frac{1}{q}\mathrm{\Sigma }^{\prime \prime }+\mathrm{\Phi }^{\prime \prime }+\nu ^{\prime \prime }=0$$
which can be integrated, to read
$$\mathrm{\Sigma }^{}=q(\nu ^{}+\mathrm{\Phi }^{}+c)$$
$`(1.7)`$
with $`c`$ an integration constant. In view of (1.4) and (1.7), defining
$$\chi =\nu \mathrm{\Phi }\eta =\nu \frac{q}{3}\mathrm{\Sigma }$$
$`(1.8)`$
the field equations can be put in the ”Toda molecule” form
$$\begin{array}{cc}\hfill \chi ^{\prime \prime }=& 2Q_1^2e^{2\chi }+Q_2^2e^{2\eta }\hfill \\ \hfill \eta ^{\prime \prime }=& Q_1^2e^{2\chi }+\frac{3+q^2}{3}Q_2^2e^{2\eta }\hfill \end{array}$$
$`(1.9)`$
In terms of $`\chi `$ and $`\eta `$, the derivatives of the fields $`\mathrm{\Phi }`$, $`\mathrm{\Sigma }`$ and $`\nu `$ are given by
$$\begin{array}{cc}\hfill \mathrm{\Phi }^{}=& \frac{3}{3+2q^2}\left(\eta ^{}\frac{3+q^2}{3}\chi ^{}\frac{q^2}{3}c\right)\hfill \\ \hfill \mathrm{\Sigma }^{}=& \frac{3q}{3+2q^2}(\chi ^{}2\eta ^{}c)\hfill \\ \hfill \nu ^{}=& \frac{3}{3+2q^2}\left(\eta ^{}+\frac{q^2}{3}\chi ^{}\frac{q^2}{3}c\right)\hfill \end{array}$$
$`(1.10)`$
and eq.(1.5) can be written
$$a^2\frac{3}{3+2q^2}\left[\frac{3+q^2}{3}\chi ^2+2\eta ^22\eta ^{}\chi ^{}+\frac{q^2}{3}c^2\right]+Q_1^2e^{2\chi }+Q_2^2e^{2\eta }=0$$
$`(1.11)`$
The equations (1.9) with the constraint (1.11) can be solved exactly in a few special cases, which are reported in the following section.
In the general case, they can be recast in the form of a 3-dimensional system of first-order differential equations. If we define the variables
$$X=\chi ^{},Y=\eta ^{},Z=|Q_2|e^\eta $$
then the constraint (1.10) can be considered as a definition of $`e^{2\chi }`$. Eliminating the term $`e^{2\chi }`$ from eqs. (1.9), one obtains the system:
$$\begin{array}{cc}\hfill X^{}=& Z^2+2P(X,Y,Z)\hfill \\ \hfill Y^{}=& \frac{3+q^2}{3}Z^2+P(X,Y,Z)\hfill \\ \hfill Z^{}=& YZ\hfill \end{array}$$
$`(1.12)`$
where
$$P(X,Y,Z)=Q_2^2e^{2\chi }=\frac{1}{3+2q^2}\left[(3+q^2)X^2+6Y^26XY3B\right]Z^2$$
$`(1.13)`$
with $`B=\frac{3+2q^2}{3}a^2\frac{q^2}{3}c^2`$.
2. Exact solutions. A. The $`Q_2=0`$ case. This limit case corresponds to minimal coupling of $`\mathrm{\Sigma }`$, i.e. $`\lambda 0`$. By the no-hair theorem, the regular solutions should have constant $`\mathrm{\Sigma }`$, as we shall verify. When $`Q_2=0`$, the equations (1.9) take the form
$$\chi ^{\prime \prime }=2Q_1^2e^{2\chi },\eta ^{\prime \prime }=Q_1^2e^{2\chi }$$
$`(2.1)`$
The first equation can be integrated to give
$$\chi ^2=2Q_1^2e^{2\chi }+b^2$$
$`(2.2)`$
with $`b`$ an integration constant. Moreover, comparing the two equations (2.1),
$$\eta ^{}=\frac{1}{2}(\chi ^{}k)$$
$`(2.3)`$
with $`k`$ an arbitrary constant. The constraint equation (1.11) becomes then
$$a^2\frac{b^2}{2}\frac{3}{3+2q^2}\left[\frac{1}{2}k^2+\frac{q^2}{3}c^2\right]=0$$
$`(2.4)`$
Integrating again (2.2), one gets
$$Q_1e^\chi =\frac{\sqrt{2}bAe^{b\xi }}{1A^2e^{2b\xi }}$$
$`(2.5)`$
with $`A`$ an integration constant.
From these results and the relations (1.10), one can now write down the general solution in terms of the physical fields. Rather than giving all the explicit expressions, let us first consider the ”radial” metric function $`e^\rho =e^{\zeta \nu }`$. As $`\xi 0`$, $`e^\rho \mathrm{}`$, and hence one can identify this limit with spatial infinity. As $`\xi \mathrm{}`$, instead, one has from (1.6), (1.10) and (2.5),
$$e^\rho \mathrm{const}\times \mathrm{exp}\left[\left(a\frac{b}{2}+\frac{3}{3+2q^2}\left(\frac{1}{2}k+\frac{q^2}{3}c\right)\right)\xi \right]$$
$`(2.6)`$
which implies that for $`\xi \mathrm{}`$, $`e^\rho 0`$, giving rise to a singularity, except in the special case when the constant factor in the exponential vanishes, in which case $`e^\rho \mathrm{const}`$ as $`\xi \mathrm{}`$. In conjunction with (2.4), this request singles out a unique real solution for the parameters, given by $`a=b=c=k`$.
In order to analyze the metric, it is useful to write it in a Schwarzschild-like form, by introducing a new radial coordinate $`r`$, such that $`dr=e^{2\zeta }d\xi `$. In the new coordinates ,
$$ds^2=e^{2\nu }dt^2+e^{2\nu }dr^2+e^{2\rho }d\mathrm{\Omega }^2$$
$`(2.7)`$
where the metric functions are now viewed as functions of $`r`$. With a suitable choice of the origin of $`r`$, one has then
$$e^{2a\xi }=\frac{rr_+}{rr_{}},e^{2\zeta }=(rr_+)(rr_{}),1A^2e^{2a\xi }=(1A^2)\frac{r}{rr_{}}$$
$`(2.8)`$
with $`r_+=2a/(1A^2)`$, $`r_{}=2aA^2/(1A^2)`$. Moreover, if one chooses $`A`$ such that $`Q_1=2aA/(1A^2)`$, the physical fields read, in terms of the new radial coordinate,
$$\begin{array}{cc}\hfill e^{2\nu }& =1\frac{r_+}{r}e^{2\rho }=r^2\left(1\frac{r_{}}{r}\right)\hfill \\ \hfill e^{2\mathrm{\Phi }}& =1\frac{r_{}}{r}e^{2\mathrm{\Sigma }}=\mathrm{const}\hfill \end{array}$$
$`(2.9)`$
This is nothing but the well-known GHS solution \[1-2\]. It describes asymptotically flat black holes with mass $`r_+/2`$ and charge $`Q_1^2=r_+r_{}/2`$. The surface $`r=r_+`$ is a horizon while the point $`r=r_{}`$ is a singularity.
Qualitatively different solutions arise in the special case $`A=1`$. In this case, $`e^{2\zeta }e^{2\chi }`$, and choosing the origin of $`r`$ such that $`r_+=2a`$, one gets
$$\begin{array}{cc}\hfill e^{2\nu }& =rr_+e^{2\rho }=r\hfill \\ \hfill e^{2\mathrm{\Phi }}& =re^{2\mathrm{\Sigma }}=\mathrm{const}\hfill \end{array}$$
$`(2.10)`$
This solution is not asymptotically flat , but still possesses a horizon at $`r_+`$ and is singular at the origin. It has been investigated in detail in ref. .
Another important limit is reached when $`a=0`$ and corresponds to extremal black holes with $`r_{}=r_+`$. In fact, in that case $`e^{2\zeta }=\xi ^2`$, and proceeding as before one can show that the existence of a regular horizon implies that also the parameters $`b`$, $`c`$ and $`k`$ must vanish. Hence, one has $`e^{2\chi }=(\xi +A)^2`$, with $`A`$ an integration constant. Defining a new coordinate $`r=A(1A\xi ^1)`$, the metric functions can be finally cast in the form (2.9), with $`r_+=r_{}=A`$.
B. The $`Q_1=0`$ case. This case corresponds to minimal coupling of $`\mathrm{\Phi }`$ and can be considered the limit of (1.1) for $`\lambda \mathrm{}`$. By the no-hair theorem, the regular solutions must have constant $`\mathrm{\Phi }`$. The field equations are now
$$\chi ^{\prime \prime }=Q_2^2e^{2\eta },\eta ^{\prime \prime }=\frac{3+q^2}{3}Q_2^2e^{2\eta }$$
$`(2.11)`$
Proceeding as before, one gets
$$\begin{array}{cc}\hfill Q_2e^\eta & =\sqrt{\frac{3}{3+q^2}}\frac{2bAe^{b\xi }}{1A^2e^{2b\xi }}\hfill \\ \hfill \chi ^{}& =\frac{3}{3+q^2}(\eta ^{}k)\hfill \end{array}$$
$`(2.12)`$
where $`b`$, $`A`$, $`k`$ are integration constants, together with the constraint
$$a^2\frac{3}{3+q^2}b^2\frac{3}{3+2q^2}\left[\frac{3}{3+q^2}k^2+\frac{q^2}{3}c^2\right]=0$$
$`(2.13)`$
We look again for regular black hole solutions. For this purpose, we consider the asymptotic behaviour of $`e^\rho `$ as $`\xi \mathrm{}`$, which is now
$$e^\rho \mathrm{const}\times \mathrm{exp}\left[\left(a\frac{3}{3+q^2}b+\frac{3}{3+2q^2}\left(\frac{q^2}{3+q^2}k+\frac{q^2}{3}c\right)\right)\xi \right]$$
$`(2.14)`$
A horizon can only occur when the coefficient of $`\xi `$ in the exponential vanishes, in which case $`e^\rho \mathrm{const}`$ as $`\xi \mathrm{}`$. This condition, together with (2.13) implies that $`a=b=c=3k/q^2`$.
Defining a new coordinate $`r`$ as before, for $`A`$ such that $`Q_2=\sqrt{\frac{3}{3+q^2}}\frac{2aA}{1A^2}`$, one gets finally
$$\begin{array}{cc}\hfill e^{2\nu }& =\left(1\frac{r_+}{r}\right)\left(1\frac{r_{}}{r}\right)^{\frac{3q^2}{3+q^2}}e^{2\rho }=r^2\left(1\frac{r_{}}{r}\right)^{\frac{2q^2}{3+q^2}}\hfill \\ \hfill e^{2\mathrm{\Phi }}& =\mathrm{const}e^{2\mathrm{\Sigma }}=\left(1\frac{r_{}}{r}\right)^{\frac{6q}{3+q^2}}\hfill \end{array}$$
$`(2.15)`$
These solutions have not been considered previously, but essentially coincide with the generalized GHS solutions\[1-2\], where now $`\mathrm{\Sigma }`$ plays the role of the dilaton. They describe asymptotically flat black holes with mass $`\frac{1}{2}\left(r_++\frac{3q^2}{3+q^2}r_{}\right)`$ and charge $`Q_2^2=\frac{3}{3+q^2}r_+r_{}`$. A horizon occurs at $`r=r_+`$ and a singularity at $`r=r_{}`$. The extremal limit $`r_+=r_{}`$ is achieved when $`a=b=c=k=0`$.
Also the limit $`A=1`$ is special and describes non-asymptotically flat black holes . For $`A=1`$ one has, in fact,
$$\begin{array}{cc}\hfill e^{2\nu }& =(rr_+)r^{\frac{3q^2}{3+q^2}}e^{2\rho }=r^{\frac{2q^2}{3+q^2}}\hfill \\ \hfill e^{2\mathrm{\Phi }}& =\mathrm{const}e^{2\mathrm{\Sigma }}=r^{\frac{6q}{3+q^2}}\hfill \end{array}$$
$`(2.16)`$
Metrics of this form have been investigated in .
C. The case $`\chi ^{}=\eta ^{}`$. The last case in which exact solutions can be obtained is given by the condition $`\eta =\chi +\mathrm{const}`$, which corresponds to the solutions found in . Setting $`e^{2\eta }=K^2e^{2\chi }`$, the field equations become
$$\chi ^{\prime \prime }=(2Q_1^2+K^2Q_2^2)e^{2\chi }=\left(Q_1^2+\frac{3+q^2}{3}K^2Q_2^2\right)e^{2\chi }$$
$`(2.17)`$
Hence, $`K^2=\frac{3}{q^2}\frac{Q_1^2}{Q_2^2}=\frac{3}{\lambda ^2q^2}`$, and
$$Q_1e^\chi =\sqrt{\frac{q^2}{3+2q^2}}\frac{2bAe^{b\xi }}{1A^2e^{2b\xi }}$$
$`(2.18)`$
where $`b`$ and $`A`$ are integration constants. The constraint (1.5) reduces to
$$a^2\frac{3+q^2}{3+2q^2}b^2\frac{q^2}{3+2q^2}c^2=0$$
$`(2.19)`$
The solution possesses a horizon if
$$a\frac{3+q^2}{3+2q^2}b+\frac{q^2}{3+2q^2}c=0$$
$`(2.20)`$
From (2.19) and (2.20), one obtains $`a=b=c`$.
In terms of the coordinate $`r`$ defined above, choosing $`A`$ such that $`Q_1=\sqrt{\frac{q^2}{3+2q^2}}\frac{2aA}{1A^2}`$, the metric functions read
$$\begin{array}{cc}\hfill e^{2\nu }& =\left(1\frac{r_+}{r}\right)\left(1\frac{r_{}}{r}\right)^{\frac{3}{3+2q^2}}e^{2\rho }=r^2\left(1\frac{r_{}}{r}\right)^{\frac{2q^2}{3+2q^2}}\hfill \\ \hfill e^{2\mathrm{\Phi }}& =\left(1\frac{r_{}}{r}\right)^{\frac{2q^2}{3+2q^2}}e^{2\mathrm{\Sigma }}=(3/q^2\lambda ^2)^{\frac{3}{q}}\left(1\frac{r_{}}{r}\right)^{\frac{6q}{3+2q^2}}\hfill \end{array}$$
$`(2.21)`$
(Notice that we have exchanged the definition of $`r_+`$ and $`r_{}`$). These solutions describe asymptotically flat black holes of mass $`\frac{1}{2}\left(r_++\frac{3}{3+2q^2}r_{}\right)`$ and charge $`Q_1^2=\frac{q^2}{3+2q^2}r_+r_{}`$ . Also in this case the extremal black holes are obtained for vanishing $`a`$, $`b`$ and $`c`$.
In the special case $`A=1`$, the solutions reduce to
$$\begin{array}{cc}\hfill e^{2\nu }& =(rr_+)r^{\frac{3}{3+2q^2}}e^{2\rho }=r^{\frac{2q^2}{3+2q^2}}\hfill \\ \hfill e^{2\mathrm{\Phi }}& =r^{\frac{2q^2}{3+2q^2}}e^{2\mathrm{\Sigma }}=(3/q^2\lambda ^2)^{\frac{3}{q}}r^{\frac{6q}{3+q^2}}\hfill \end{array}$$
$`(2.22)`$
and describe non-asymptotically flat black holes .
3. The dynamical system. The dynamical system (1.12) is similar to analogous systems studied in several contexts , but differs from these because the critical points at finite distance lie on a compact curve. It is easy to see, in fact, that all the critical points at finite distance are placed at the intersection between the plane $`Z=0`$ and the hyperboloid $`P=0`$, with $`P`$ defined in (1.13), which (except in some degenerate cases) is an ellipse.
In particular, the plane $`Z=0`$ corresponds to the limit $`Q_1=0`$. The system is invariant under $`ZZ`$, but the $`Z<0`$ half-space is simply a copy of the positive $`Z`$ half-space and has no physical significance. Hence, we shall not consider it in the following.
The hyperboloid $`P=0`$ contains the trajectories corresponding to the limit $`Q_2=0`$. If $`B>0`$ it is one-sheeted, while it is two-sheeted if $`B<0`$. We shall consider only the former case. It is easy to see, in fact, that when $`B<0`$, the hyperboloid does not intersect the plane $`Z=0`$ and therefore there are no critical points at finite distance. It follows that the solutions of the dynamical system are of oscillatory type, and do not lead to reasonable black hole geometries. Moreover, the physically relevant solutions are those in the exterior of the hyperboloid , which corresponds to $`|Q_2|e^\chi >0`$, i.e. to the external region of the black hole. Finally, we notice that in the limit $`B=0`$, the hyperboloid reduces to a cone and the only critical point at finite distance is the origin of the coordinates. This limit corresponds to extremal black hole solutions.
As noted above, when $`B>0`$, the intersection of the hyperboloid $`P=0`$ with the plane $`Z=0`$ is given by an ellipse. More precisely, for every
$$|X_0|\sqrt{\frac{9B}{(3+2q^2)(3+q^2)}}$$
there is a critical point at $`X=X_0`$, $`Y=Y_0`$, $`Z=0`$, where $`Y_0`$ is given in terms of $`X_0`$ by the solution of the quadratic equation
$$(3+q^2)X_0^2+6Y_0^26X_0Y_03B=0$$
$`(3.1)`$
The characteristic equation for small perturbations,
$$\begin{array}{cc}\hfill X=X_0+x,& |x|1\hfill \\ \hfill Y=Y_0+y,& |y|1\hfill \\ \hfill Z=Z_0+z,& |z|1\hfill \end{array}$$
has eigenvalues $`0`$, $`2X_0`$ and $`Y_0`$. Hence, each point in the $`Z=0`$ plane satisfying (3.1) with $`X_0>0`$, $`Y_0>0`$, repels a 2-dimensional bunch of solutions in the full 3-dimensional phase space, while solutions of (3.1) with $`X_0<0`$, $`Y_0<0`$ are attractors. The points with $`X_0>0`$, $`Y_0<0`$ or $`X_0<0`$, $`Y_0>0`$ act as saddle points. The presence of a vanishing eigenvalue is due of course to the fact that there is a continuous set of critical points lying on a curve. The critical points correspond to $`\xi \mathrm{}`$ for trajectories starting from the ellipse, and to the limit $`\xi \mathrm{}`$ for trajectories ending at the ellipse.
The trajectories in the $`Z=0`$ plane, which correspond to the exact solutions discussed in sec. 2.A, are given by lines of equation $`Y=\frac{1}{2}(Xk)`$, with $`k`$ a constant. The lines which do not intersect the ellipse of critical points, i.e. those with $`|k|>\sqrt{B}`$, correspond to oscillatory behaviour of $`X`$ and $`Y`$ and are not of interest to us. Notice that the extremal trajectories, for which $`|k|=\sqrt{B}`$ are tangent to the ellipse at $`X=0`$.
The projection of the trajectories corresponding to the exact solutions of section 2.B from the hyperboloid to the $`Z=0`$ plane presents a similar phase portrait. The trajectories satisfy in this case the equation $`Y=\frac{3+q^2}{3}(Xk^{})`$ and only trajectories with $`|k^{}|<\sqrt{B/3}`$ cross the ellipse. The projections of the extremal trajectories corresponding to $`|k^{}|=\sqrt{B/3}`$ are now tangent to the ellipse at $`Y=0`$.
For completeness, we notice that the solutions of sect. 2.c are given by the hyperbola of equation
$$(3+q^2)(3+2q^2)Z^23(3+q^2)X^2=9B,$$
$`(3.2)`$
lying in the plane $`X=Y`$.
We pass now to consider the behaviour of the metric functions $`e^\rho `$, $`e^\nu `$, for $`\xi \mathrm{}`$ . In this limit, $`e^{2\chi }e^{2X_0\xi }`$, $`e^{2\eta }e^{2Y_0\xi }`$, and hence,
$$\begin{array}{cc}\hfill e^\rho & \mathrm{exp}\left[\frac{1}{3+2q^2}((3+2q^2)a+q^2cq^2X_03Y_0)\xi \right]\hfill \\ \hfill e^{2\nu }& \mathrm{exp}\left[\frac{2}{3+2q^2}(q^2c+q^2X_0+3Y_0)\xi \right]\hfill \end{array}$$
$`(3.3)`$
In general, the radius $`e^\rho 0`$ as $`\xi \mathrm{}`$, except in the special case
$$(3+2q^2)a+q^2cq^2X_03Y_0=0$$
$`(3.4)`$
This equation, combined with (3.1), gives the only real solution $`X_0=Y_0=a=c`$. In this case $`e^\rho \mathrm{const}`$ as $`\xi \mathrm{}`$ . When these conditions are not satisfied, the metric function $`e^{2\nu }`$ is singular near the critical points, giving rise to a singularity as $`e^\rho 0`$.
Also when the relation (3.4) is satisfied, the metric function $`e^{2\nu }`$ behaves singularly near the critical points, but this can be shown to be simply a coordinate singularity by computing the curvature invariants, which tend to a constant value as $`\xi \mathrm{}`$ when (3.4) holds. Therefore, all the trajectories starting from the point $`X_0=Y_0=\sqrt{\frac{3}{3+q^2}B}`$ correspond to solutions with regular horizon, provided $`c=a`$.
To complete the analysis of the phase space we must also investigate the nature of the critical points on the surface at infinity. This can be done by defining new coordinates $`u`$, $`y`$, and $`z`$ such that infinity corresponds to $`u=0`$:
$$u=\frac{1}{X},y=\frac{Y}{X},z=\frac{Z}{X}$$
Then eqs. (1.12) take the form
$$\begin{array}{cc}\hfill \dot{u}& =-(z^2+2p)u\hfill \\ \hfill \dot{y}& =-(z^2+2p)y+\frac{3+q^2}{3}z^2+p\hfill \\ \hfill \dot{z}& =(y-z^2-2p)z\hfill \end{array}$$
$`(3.5)`$
where we have defined $`p=P/X^2`$ and a dot denotes $`ud/d\xi `$. The critical points with $`u=0`$ can be classified in three categories:
1) Two critical points, which we denote $`L_{1,2}`$ placed at $`y=1/2`$, $`z=0`$, i.e.
$$X=\pm \infty ,Y=\frac{X}{2},Z=0$$
These are the endpoints of the trajectories lying in the $`Z=0`$ plane. The analysis of stability shows that the point with $`X>0`$ (resp. $`X<0`$) acts as an attractor (resp. repellor) both on the trajectories coming from finite distance and on the two-dimensional bunch of trajectories lying on the surface at infinity.
2) Two critical points $`M_{1,2}`$ lie at $`y=(3+q^2)/3`$, $`z^2=(3+q^2)/3`$, i.e.
$$X=\pm \infty ,Y=\frac{3+q^2}{3}X,Z=\sqrt{\frac{3+q^2}{3}}X$$
These are the endpoints of the trajectories lying on the hyperboloid $`P=0`$. The analysis of stability shows that also in this case the point with $`X>0`$ (resp. $`X<0`$) attracts (resp. repels) both the trajectories coming from finite distance and those lying on the surface at infinity.
3) Two critical points $`N_{1,2}`$ lie at $`y=1`$, $`z^2=3/(3+2q^2)`$, i.e.
$$X=\pm \infty ,Y=X,Z=\sqrt{\frac{3}{3+2q^2}}X$$
These are the endpoints of the hyperbola (3.2) in the $`X=Y`$ plane. The points with $`X>0`$ (resp. $`X<0`$) act as attractors (resp. repellors) on the trajectories coming from finite distance and as saddle points on the trajectories at infinity.
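As a sketch of how the three families follow from the fixed-point conditions of the reconstructed system (3.5) (the restored signs there are an assumption of this transcription), set $`\dot{y}=\dot{z}=0`$ at $`u=0`$:

$$L_{1,2}:\ z=0\ \Rightarrow \ \dot{y}=-2py+p=0\ \Rightarrow \ y=\frac{1}{2},$$

$$M_{1,2}:\ p=0\ \text{(on the hyperboloid }P=0\text{)}\ \Rightarrow \ \dot{z}=(y-z^2)z=0\ \Rightarrow \ y=z^2=\frac{3+q^2}{3},$$

$$N_{1,2}:\ y=1,\ z^2=\frac{3}{3+2q^2}\ \Rightarrow \ p=\frac{y-z^2}{2}=\frac{q^2}{3+2q^2},\ \text{and indeed}\ \dot{y}=-1+\frac{3+q^2}{3+2q^2}+\frac{q^2}{3+2q^2}=0.$$

These values give $`v_0=z_0^2+2p_0=(3+q^2)/3`$ at $`M_{1,2}`$ and $`v_0=1`$ at $`N_{1,2}`$; at $`L_{1,2}`$ the stationarity conditions leave $`p_0`$ undetermined (it is fixed by the constraint $`P`$ instead), and the value $`p_0=1/2`$, i.e. $`v_0=1`$, is the one that reproduces the asymptotics quoted in sect. 4.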
Fig. 1: The phase space at infinity.
In fig. 1 we sketch the pattern of trajectories on the surface at infinity. The point at infinity is reached for $`\xi \to \xi _0`$, where $`\xi _0`$ is a finite constant. It is easy to see that for $`\xi \to \xi _0`$, the functions $`\chi `$ and $`\eta `$ behave as
$$e^\chi \sim |\xi -\xi _0|^{-1/v_0}\qquad e^\eta \sim |\xi -\xi _0|^{-y_0/v_0}$$
where $`v_0\equiv z_0^2+2p_0`$, the subscript $`0`$ indicating the value taken at the critical points. Hence, if $`\xi _0\ne 0`$, for $`\xi \to \xi _0`$, the metric functions behave as
$$e^\nu \sim e^{-\rho }\sim |\xi -\xi _0|^{-\frac{3}{3+2q^2}(y_0+\frac{q^2}{3})v_0^{-1}}$$
$`(3.6)`$
$$e^\mathrm{\Phi }\sim |\xi -\xi _0|^{\frac{3}{3+2q^2}(y_0-\frac{3+q^2}{3})v_0^{-1}}\qquad e^\mathrm{\Sigma }\sim |\xi -\xi _0|^{\frac{3q}{3+2q^2}(1-2y_0)v_0^{-1}}$$
The following picture of the phase space emerges: a family of trajectories starts at the ellipse and ends at one of the critical points $`L_1`$, $`M_1`$, $`N_1`$. Another family of trajectories starts at one of the critical points $`L_2`$, $`M_2`$ or $`N_2`$ and ends at the ellipse. Moreover, there are trajectories which never intersect the ellipse, connecting the critical points at $`X=-\infty `$ to those at $`X=+\infty `$. Of all the trajectories, only those starting at points of the ellipse such that $`X_0=Y_0`$ can correspond to regular solutions.
For completeness, we observe that most of the trajectories lying in the interior of the hyperboloid join $`M_1`$ to $`M_2`$, but we shall not study them in detail because they are devoid of physical significance.
4. Discussion. We finally discuss the implications of the phase space portrait of the previous section on the physical properties of the solutions. For this purpose, it is useful to define a new radial coordinate $`r`$ such that $`dr=e^{2\zeta }d\xi `$, as in sec. 2. One has:
$$r=\frac{r_+-r_{-}e^{2a\xi }}{1-e^{2a\xi }}$$
$`(4.1)`$
where we have defined $`r_+=2a(1-e^{-2a\xi _0})^{-1}`$, $`r_{-}=2ae^{-2a\xi _0}(1-e^{-2a\xi _0})^{-1}`$. In this way it is easy to identify the range of variation of $`\xi `$ with the corresponding physical regions of the spacetime.
For $`\xi \to -\infty `$, $`r\to r_+`$, and for $`\xi \to +\infty `$, $`r\to r_{-}`$, while for $`\xi \to 0`$, $`r\to \infty `$. Moreover, for $`\xi \to \xi _0`$, which without loss of generality we shall assume non-negative, $`r\to 0`$, except when $`\xi _0`$ vanishes.
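Under the signs restored in (4.1) above (again a reconstruction of this transcription), these statements can be checked in one line:

$$r-r_+=\frac{(r_+-r_{-})e^{2a\xi }}{1-e^{2a\xi }},\qquad r-r_{-}=\frac{r_+-r_{-}}{1-e^{2a\xi }},\qquad r_+-r_{-}=2a,$$

so that $`r(\xi _0)=0`$ follows from $`r_+=r_{-}e^{2a\xi _0}`$, while $`dr/d\xi =(r-r_+)(r-r_{-})`$, which identifies $`e^{2\zeta }`$ with $`(r-r_+)(r-r_{-})`$ under this reconstruction.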
If $`\xi _0\ne 0`$, we can identify the trajectories starting at the ellipse and ending at the point at infinity (reached at $`\xi =\xi _0`$) with the exterior region of the black hole $`r>r_+`$. If the condition (3.4) is satisfied, these solutions possess a regular horizon. Moreover, they are asymptotically flat, since $`e^{2\chi }`$ and $`e^{2\eta }`$ tend to a constant as $`\xi \to 0`$. One can calculate the behaviour of these solutions as $`r\to r_+`$. From (3.3) and (3.4), one sees that for regular solutions $`e^{2\nu }\sim e^{2a\xi }`$ and hence $`e^{2\nu }\sim (r-r_+)`$ for $`r\to r_+`$. In the same way one can see that the scalar fields are constant in that limit. It may be noticed that eq. (3.6) implies that in the unphysical limit $`r\to 0`$, $`e^{2\nu }\sim e^{-2\rho }`$.
With our conventions, the trajectories starting at $`X=-\infty `$ and ending at the ellipse correspond to the unphysical region $`0<r<r_{-}`$. Unfortunately, since $`r_{-}`$ is in general a singularity, one cannot single out the trajectories corresponding to physical solutions by requiring the regularity of the curvature invariants near that point, as for $`r_+`$. Moreover, with our methods, we are not able to connect the solutions in the region $`r>r_+`$ with those in $`r<r_+`$ and then to discuss their behaviour at $`r=r_{-}`$. This may however be achieved by using numerical methods.
The case $`\xi _0=0`$ needs a separate discussion. The solutions are no longer asymptotically flat, but their behaviour for $`r\to \infty `$ can be obtained from the $`\xi \to 0`$ limit, which in our case turns out to be
$$e^\nu \sim |\xi |^{-\frac{3}{3+2q^2}(y_0+\frac{q^2}{3})v_0^{-1}}\qquad e^\rho \sim |\xi |^{\frac{3}{3+2q^2}(y_0+\frac{q^2}{3})v_0^{-1}-1}$$
$$e^\mathrm{\Phi }\sim |\xi |^{\frac{3}{3+2q^2}(y_0-\frac{3+q^2}{3})v_0^{-1}}\qquad e^\mathrm{\Sigma }\sim |\xi |^{\frac{3q}{3+2q^2}(1-2y_0)v_0^{-1}}$$
Moreover, since for $`\xi \to 0`$, $`r\sim |\xi |^{-1}`$, it follows that for $`r\to \infty `$ the solutions behave in one of the following three ways, depending on the critical points where they terminate,
| | $`e^{2\nu }`$ | $`e^{2\rho }`$ | $`e^{2\mathrm{\Phi }}`$ | $`e^{2\mathrm{\Sigma }}`$ |
| --- | --- | --- | --- | --- |
| $`L_{1,2}`$ | $`r`$ | $`r`$ | $`r`$ | $`\mathrm{const}`$ |
| $`M_{1,2}`$ | $`r^{6/(3+q^2)}`$ | $`r^{2q^2/(3+q^2)}`$ | $`\mathrm{const}`$ | $`r^{6q/(3+q^2)}`$ |
| $`N_{1,2}`$ | $`r^{(6+2q^2)/(3+2q^2)}`$ | $`r^{2q^2/(3+2q^2)}`$ | $`r^{2q^2/(3+2q^2)}`$ | $`r^{6q/(3+2q^2)}`$ |
These patterns coincide with those of the exact solutions (2.10), (2.16) or (2.22): hence all solutions of the system (1.2) are either asymptotically flat or possess the same asymptotic behaviour as one of the exact non-flat solutions. Moreover, from the discussion of the phase space at infinity in sect. 3, it follows that the points $`N_{1,2}`$ are unstable, so that most trajectories of this class actually behave like the solutions (2.10) or (2.16) for $`r\to \infty `$.
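For illustration, the $`M_{1,2}`$ row can be recovered from the $`\xi \to 0`$ asymptotics displayed above (with the exponent signs restored in this transcription), using $`y_0=(3+q^2)/3`$, $`v_0=z_0^2=(3+q^2)/3`$ and $`r\sim |\xi |^{-1}`$:

$$\frac{3}{3+2q^2}\left(y_0+\frac{q^2}{3}\right)v_0^{-1}=\frac{3}{3+2q^2}\cdot \frac{3+2q^2}{3}\cdot \frac{3}{3+q^2}=\frac{3}{3+q^2},$$

so that $`e^{2\nu }\sim r^{6/(3+q^2)}`$ and $`e^{2\rho }\sim r^{2-6/(3+q^2)}=r^{2q^2/(3+q^2)}`$; moreover $`y_0-(3+q^2)/3=0`$ gives $`e^{2\mathrm{\Phi }}\sim \mathrm{const}`$, while $`1-2y_0=-(3+2q^2)/3`$ gives $`e^{2\mathrm{\Sigma }}\sim r^{6q/(3+q^2)}`$, in agreement with the table.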
As noticed above, the other relevant limiting case, $`B=0`$, corresponds to the extremal black hole limit. In this case, the only critical point at finite distance is the origin of coordinates, and all the eigenvalues of the linearized equations vanish. This degeneracy corresponds to a power-law behaviour of the variables $`X`$, $`Y`$ and $`Z`$ near the critical point: $`X\sim \alpha \xi ^{-1}`$, $`Y\sim \beta \xi ^{-1}`$, $`Z\sim \xi ^{-\beta }`$, $`\sqrt{P}\sim \xi ^{-\alpha }`$. One can easily see from the field equations that the only possible values for $`\alpha `$ and $`\beta `$ are $`\alpha =1,\beta =\frac{1}{2}`$, $`\alpha =1,\beta =1`$, and $`\alpha =\frac{3}{3+q^2},\beta =1`$, which coincide with those of the exact extremal solutions of section 2. One can also check numerically that only the values $`\alpha =1,\beta =1`$ are stable, so that all the trajectories, except the exact ones, behave near the critical point at the origin like the solutions (2.21) (case C). This is interesting, because from the previous discussion we know that this limit corresponds to the horizon of the extremal black hole. Now, it is well known that in the cases A and C of section 2, the extremal “string” metric $`d\widehat{s}^2=e^{2\varphi }ds^2`$ has a “near-horizon” limit in which the metric function $`e^{2\rho }`$ becomes constant, and hence the spacetime decouples into the direct product of two 2-dimensional spaces, while this is not true for case B. But since all solutions except A and B behave like C near the horizon, we can conclude that solution B is the only one for which $`e^{2\rho }`$ is not constant near the horizon.
Before concluding this section, it is important to remark that the qualitative properties of the phase space and hence of the solutions are unaffected by the value of the parameter $`q`$, which is therefore essentially irrelevant for our discussion.
5. Conclusions. From the previous discussion it follows that there is a large class of asymptotically flat regular black hole solutions of the field equations (1.2). These are characterized by three parameters: mass, electric charge (or equivalently $`r_+`$ and $`r_{-}`$, or $`a`$ and $`\xi _0`$), and a third parameter which classifies the different trajectories starting from the critical points $`X_0=Y_0=\sqrt{\frac{3}{3+q^2}B}`$, $`Z_0=0`$. We conjecture that the third parameter can be related to (a combination of) the scalar charges of the dilaton and the modulus. This conjecture cannot be checked explicitly because only in a few special cases can the solution be written in an analytic form.
The presence of an independent scalar charge would represent a novelty in the context of the no-hair results. In fact, in the known cases of dilaton gravity with non-minimal dilaton-Maxwell coupling, even if the dilaton is non-trivial, its charge is not an independent parameter, but is related to the mass and electric charge of the black hole (secondary hair). In our case of two non-minimally coupled scalar fields, it seems instead that a new independent charge is needed in order to classify the solutions.
Another interesting result is that in the extremal limit all the solutions, except the unphysical case of a minimally coupled dilaton, have the same behaviour near the horizon, decoupling into the product of two 2-dimensional spaces. This is interesting since such a behaviour is required in recent attempts at calculating black hole entropy by counting microstates of a conformal field theory.
Finally, we have clarified the role of non-asymptotically flat solutions, which were first discussed in the case of ordinary dilaton gravity, and shown that in our model they form a two-parameter family whose asymptotic behaviour can assume only three possible forms.
Acknowledgements
I wish to thank David Wiltshire for interesting discussions and Massimiliano Porcu for help with the numerical calculations. This work was partially supported by a coordinated research project of the University of Cagliari.
References
G.W. Gibbons, K. Maeda, Nucl. Phys. B 298, 741 (1988); D. Garfinkle, G.T. Horowitz and A. Strominger, Phys. Rev. D 43, 3140 (1991); J.D. Bekenstein, Phys. Rev. D 5, 1239 (1972); Phys. Rev. D 5, 2403 (1972); V. Kaplunowsky, Nucl. Phys. B 307, 145 (1988); J. Dixon, V. Kaplunowsky and J. Louis, Nucl. Phys. B 355, 649 (1991); M. Cadoni, S. Mignemi, Phys. Rev. D 48, 5536 (1993); K.C.K. Chan, J.H. Horne and R. Mann, Nucl. Phys. B 447, 441 (1995); E. Witten, Phys. Lett. B 155, 151 (1985); D.L. Wiltshire, Phys. Rev. D 36, 1634 (1987); Phys. Rev. D 44, 1100 (1991); S. Mignemi and D.L. Wiltshire, Class. Quantum Grav. 6, 987 (1989); S.B. Giddings and A. Strominger, Phys. Rev. D 46, 627 (1992); M. Cadoni and S. Mignemi, Nucl. Phys. B 427, 669 (1994); See for example J. Maldacena, J. Michelson and A. Strominger, JHEP 9902, 011 (1999) and references therein.
# Aging in glassy systems: new experiments, simple models, and open questions
## 1 Introduction
A great variety of systems exhibit slow ‘glassy’ dynamics: glasses of all kinds, but also spin-glasses or dipolar glasses. Another very interesting class of systems is that of pinned ‘defects’ such as Bloch walls, vortices in superconductors, charge density waves, dislocations, etc. interacting with randomly placed impurities. Other systems where surprisingly slow dynamics can occur are soft glassy materials, such as foams, dense emulsions or granular materials, which have attracted much attention recently. Of particular interest is the so-called aging phenomenon observed in the response function of these glassy systems. This response is either to a magnetic field in the case of disordered magnets, to an electric field in the case of dipolar glasses, or to an applied stress in the case of, e.g., glassy polymers or dense emulsions. The basic phenomenon is that the response is waiting time dependent, i.e. depends on the time $`t_w`$ one has waited in the low temperature phase before applying the perturbation. Qualitatively speaking, these systems stiffen with age: the longer one waits, the smaller the response to an external drive, as if the system settled into deeper and deeper energy valleys as time elapses. More precisely, if a system is cooled in zero field and left at $`T_1`$ (below the glass temperature $`T_g`$, understood here as the temperature below which the relaxation time becomes larger than the experimental time scales; this temperature may or may not correspond to a true phase transition) during a time $`t_w`$ before applying an external field (or stress), then the time dependent magnetisation saturates after a time comparable to $`t_w`$ itself. In other words, the time dependent magnetisation (or strain) takes the following form:
$$M(t_w+t,t_w)\simeq M_{\mathrm{st}}(t)+M_{\mathrm{ag}}\left(\frac{t}{t_w}\right),$$
(1)
where $`M_{\mathrm{st}}(t)`$ is a ‘fast’ part (strictly speaking, $`M_{\mathrm{st}}(t)`$ should be written $`M_{\mathrm{st}}(t/\tau _0)`$, where $`\tau _0`$ is a microscopic time scale), and $`M_{\mathrm{ag}}`$ is the slow, aging part. Analogously, if a polymer glass is left at $`T_1`$ during a time $`t_w`$ before applying an external stress, the subsequent strains develop on a time scale given by $`t_w`$ itself. Note that the decomposition of the dynamics into a fast part and a slow part often also holds above (but close to) $`T_g`$, where one speaks of $`\beta `$-relaxation and $`\alpha `$-relaxation, respectively. The time scale $`\tau (T)`$ for the $`\alpha `$-relaxation is however finite and waiting time independent above $`T_g`$. This time becomes effectively infinite for $`T<T_g`$, and is therefore replaced by the age of the system $`t_w`$. In other words, Eq. (1) is the continuation in the glass phase of the precursor two-step relaxation commonly observed above $`T_g`$.
### Aging in correlation functions
One can also, at least numerically, observe aging in the correlation functions. For example, the dynamic structure factor of – say – a Lennard-Jones system, defined as:
$$S_q(t_w+t,t_w)=\left\langle e^{i\vec{q}\cdot [\vec{r}(t+t_w)-\vec{r}(t_w)]}\right\rangle ,$$
(2)
where $`\vec{r}(t)`$ is the position of a tagged particle at time $`t`$, exhibits interesting aging properties. Note that the corresponding experiments are much harder to realize, since the measurement of the structure factor typically takes several minutes or so. In order not to mix together different waiting times, one should redo the experiment several times, heating the system back above $`T_g`$ and cooling it down again, until the number of runs is sufficient to obtain a good average for a fixed $`t_w`$. This point will be further discussed below – see Eq. (6).
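As an illustration of Eq. (2), here is a minimal sketch of how the (self part of the) aging structure factor can be accumulated from stored particle trajectories in such a simulation; the array layout and the function name are ours, not those of any particular simulation package:

```python
import numpy as np

def aging_structure_factor(r, i_w, i_t, q):
    """Self part of S_q(t_w + t, t_w), Eq. (2), for tagged particles.

    r   : positions, array of shape (n_times, n_particles, 3), one aging run
    i_w : time index of the waiting time t_w (measured since the quench)
    i_t : index offset corresponding to the additional time t
    q   : wavevector, array of shape (3,)
    """
    dr = r[i_w + i_t] - r[i_w]      # displacements between t_w and t_w + t
    phases = np.exp(1j * dr @ q)    # e^{i q.[r(t_w+t) - r(t_w)]}, per particle
    return phases.mean().real       # average over particles, one quench only

# As stressed in the text, each waiting time requires a fresh quench;
# one then averages over independent runs:
# S = np.mean([aging_structure_factor(run, i_w, i_t, q) for run in runs])
```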
In equilibrium, the correlation and the response function are related through the fluctuation-dissipation theorem. Interestingly, this theorem does not hold in general for the aging part of the correlation and response. A generalisation of this theorem has been provided in the context of some mean-field spin-glass models, where the true temperature of the system is replaced by an effective temperature, higher than that of the thermal bath (and possibly time dependent). Quite a lot of effort has been devoted to measuring this effective temperature in glassy systems, either numerically or experimentally.
## 2 Different types of aging
### 2.1 Aging in the a.c. susceptibility
Aging can also be seen in a.c. susceptibility measurements (response to an oscillating field at frequency $`\omega `$). These have the advantage that the perturbing field can then be extremely small. On the other hand, since one must wait for at least one period before taking a measurement, the time sector available in these experiments is confined to $`\omega t_w>1`$ (corresponding, in the language of the time dependent magnetisation discussed above, to the short time region $`t<t_w`$). The a.c. susceptibility typically takes the following form:
$$\chi ^{\prime \prime }(\omega ,t_w)=\chi _{\mathrm{st}}^{\prime \prime }(\omega )+f(t_w)\chi _{\mathrm{ag}}(\omega )$$
(3)
where $`\chi _{\mathrm{st}}(\omega )`$ is the stationary contribution, obtained after an infinite waiting time. Depending on the system, the aging part behaves quite differently. For example, in spin glasses (sg), both the functions $`f`$ and $`\chi _{\mathrm{ag}}`$ behave similarly, as a power law:
$$f(t_w)\chi _{\mathrm{ag}}(\omega )|_{\mathrm{sg}}\simeq A(\omega t_w)^{x-1}\qquad x\simeq 0.7{-}0.9.$$
(4)
This behaviour is the counterpart, in frequency space, of the $`t/t_w`$ scaling reported above. Some dipolar glasses (dg), close to a ferroelectric transition, reveal a very different behaviour, since in this case one has:
$$f(t_w)\chi _{\mathrm{ag}}(\omega )|_{\mathrm{dg}}\simeq At_w^{x-1}\qquad \text{or}\qquad B\mathrm{log}t_w,$$
(5)
i.e., an aging part which is nearly frequency independent. Glycerol, on the other hand, shows an intermediate behaviour: the aging part is frequency dependent, but the frequency dependence is weaker than the waiting time dependence.
### 2.2 Role of the thermal history
If a system ages, then by definition it is out of equilibrium. Therefore, one might worry that the properties that one measures at $`T_1`$ actually depend strongly on the whole thermal history of the system. Here again, different systems behave very differently. A naive argument, based on the idea that thermal activation over high energy barriers plays a central role in aging, would suggest that the age of a system cooled very slowly before reaching $`T_1`$ should be much larger than the age of the same system cooled more rapidly. In other words, cooling the system more slowly allows energy barriers to be surmounted more efficiently at higher temperatures, and thus brings the system closer to equilibrium at $`T_1`$. This is precisely what happens for dg systems. One can actually use non-uniform cooling protocols, where the cooling rate is either slow or fast only in the vicinity of $`T_g`$, and then fixed to a constant value for the last few Kelvins – see Fig. 1. In the case of dg, one sees very clearly that the cooling rate when crossing $`T_g`$ is the crucial quantity which determines how well the system is able to equilibrate. When the system is cold, the dynamics is essentially frozen on experimental time scales, and therefore the precise value of the cooling rate there is of minor importance.
Surprisingly, the situation is completely reversed for sg. In these systems, the value of the cooling rate in the vicinity of $`T_g`$ is completely irrelevant, and the observed a.c. susceptibility is to a large degree independent of the cooling rate, except for the very last Kelvins. This is illustrated in Fig. 1. The naive picture of a system crossing higher and higher barriers to reach an optimal configuration therefore certainly needs to be reconsidered for these systems.
### 2.3 Rejuvenation and Memory in temperature cycling experiments
An even more striking effect has been observed first in spin-glasses, and more recently in a variety of other systems. Upon cooling, say from $`T_1`$ to $`T_2<T_1`$, the system rejuvenates and returns to a zero age configuration even after a long stop at the higher temperature $`T_1`$. This is tantamount to saying that the thermal history is irrelevant, as we pointed out above: stopping at a higher temperature is more or less equivalent to cooling the system more slowly. The interesting point, however, is that the system at the same time remembers perfectly its past history: when heating back to $`T_1`$, the value of the a.c. susceptibility is seen to match precisely the one it had reached after the first passage at this temperature, as if the stay at $`T_2`$ had not affected the system at all: see Fig. 2. The paradox is that the system did significantly evolve at $`T_2`$, since a significant decrease of $`\chi ^{\prime \prime }(\omega ,t_w)`$ is also observed at $`T_2`$. The effect would be trivial if no evolution was observed at $`T_2`$: in this case, one would say that the system is completely frozen at the lowest temperature, and then all observables should indeed recover their previous value when the system is heated back. The puzzle comes from the coexistence of perfect memory on the one hand, and rejuvenation on the other (this rejuvenation has often been identified with some kind of ‘chaotic’ evolution of the spin-glass order with temperature; as we shall discuss below, this might be misleading, and we prefer calling the effect ‘rejuvenation’ rather than ‘chaos’). Very similar effects have now been seen in different systems: in different spin glasses, some dipolar glasses, PMMA, and very recently disordered ferromagnets. This last case is interesting because the system is a so-called reentrant spin-glass: the system is ferromagnetic at high temperatures, and then becomes a spin-glass at lower temperatures. One can thus compare in detail the aging effects in both phases. The ‘rejuvenation and memory’ turns out to be very similar in both phases, except that memory is only partial in the ferromagnetic phase: only when the system is left at $`T_2`$ for a rather short time does one keep the memory. In spin glasses, as soon as $`T_1-T_2`$ is greater than a few degrees, the memory is kept intact, at least over experimentally accessible time scales.
### 2.4 Deviations from $`t/t_w`$ scaling
We have mentioned above that in many systems such as spin-glasses or polymer glasses, the aging part of the response function scales approximately as $`t/t_w`$, meaning that the effective relaxation time of a system below the glass transition is set by its age $`t_w`$. This scaling is actually only approximate. It appears that a better rescaling of the experimental data is achieved by plotting the data as a function of the difference $`(t+t_w)^{1-\mu }-t_w^{1-\mu }`$, with $`\mu <1`$. For $`\mu =0`$, $`t_w`$ drops out, and this describes a usual time-translation invariant response function, while the limit $`\mu \to 1`$ corresponds to a $`t/t_w`$ scaling. It is easy to see that the effective relaxation time grows as $`t_w^\mu `$, i.e. more slowly than the age $`t_w`$ itself for $`\mu <1`$ (‘sub-aging’). Note that for dimensional reasons, $`t_w^\mu `$ should in fact read $`t_w^\mu \tau _0^{1-\mu }`$, where $`\tau _0`$ is a microscopic time scale. Therefore, deviations from the simple $`t/t_w`$ scaling would mean that the microscopic time scale is still relevant to the aging dynamics, even for asymptotically long times. Such a behaviour is predicted in some mean-field models of spin-glasses, and also in simpler ‘trap’ models, to be discussed below.
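To make the two limits explicit (a standard rewriting, added here for clarity): dividing the scaling variable by $`1-\mu `$ does not change the rescaling, and

$$\lim _{\mu \to 1}\frac{(t+t_w)^{1-\mu }-t_w^{1-\mu }}{1-\mu }=\mathrm{log}\left(1+\frac{t}{t_w}\right),\qquad \frac{(t+t_w)^{1-\mu }-t_w^{1-\mu }}{1-\mu }\bigg|_{\mu =0}=t,$$

so that $`\mu \to 1`$ indeed reproduces a pure $`t/t_w`$ scaling, while $`\mu =0`$ gives a time-translation invariant response.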
However, one should be very careful in interpreting any empirical value of $`\mu `$ less than one, because several artefacts can induce such an effective sub-aging behaviour. A first possible artefact is due to the external field applied to the system to measure its response. Experimentally, one finds that the higher the field (or the external stress), the smaller the value of $`\mu `$: an external field destroys the aging effect. The extrapolation of $`\mu `$ to zero field is somewhat ambiguous, especially because of a second possible artefact, which is a bad separation between the ‘fast’ relaxation part in Eq. (1) and the slow aging part. This comes from the fact that the so-called ‘fast’ part $`M_{\mathrm{st}}(t)`$ is not that fast. In spin glasses, it decays as $`(\tau _0/t)^\alpha `$, where $`\alpha `$ is a very small exponent, of the order of $`0.05`$. Hence, this leads to a rather substantial ‘tail’ even in the experimental region where $`t=10^{12}\tau _0`$, which pollutes the aging part and leads to an effective value of $`\mu <1`$. However, even after carefully removing this fast part, the value of $`\mu `$ in spin-glasses still appears to be stuck slightly below the value $`1`$. A third reason for this to be so is finite size effects. One expects that for a system of finite size, aging will be interrupted after a finite time, when the system has fully explored its phase space. Therefore, after a possibly very long ‘ergodic’ time, the response of the system has to revert to being time-translation invariant, corresponding to $`\mu =0`$. An idea, advocated early on and recently reconsidered by Orbach, is that a sample made of small grains of different sizes will lead to an effective value of $`\mu <1`$ because the ergodic times of some of the grains enter the experimental time window.
As a last possible artefact, let us mention the case of dynamical light scattering experiments where the signal is recorded while the system is aging. In this case, instead of measuring $`S_q(t_w+t,t_w)`$, one actually measures:
$$\tilde{S}_q(t_w+t,t_w)=\frac{1}{𝒯}\int _0^𝒯dt^{\prime }S_q(t_w+t^{\prime }+t,t_w+t^{\prime })$$
(6)
where $`𝒯`$ is the integration time needed to obtain a reliable signal. It is easy to see that even if $`S_q(t_w+t,t_w)`$ is a function of $`t/t_w`$, the averaging over $`t^{\prime }`$ will lead to an effective value of $`\mu `$ smaller than one, tending to one only if $`𝒯\ll t_w`$. It is possible that this mechanism can explain the value of $`\mu \simeq 0.5`$ determined for a colloidal glass, where $`𝒯\sim t_w`$.
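A short way to see the effect (our rewriting of Eq. (6)): if the un-averaged signal is a pure function of $`t/t_w`$, say $`S_q(t_w+t,t_w)=f(t/t_w)`$, then the measured quantity is

$$\tilde{S}_q(t_w+t,t_w)=\frac{1}{𝒯}\int _0^𝒯dt^{\prime }f\left(\frac{t}{t_w+t^{\prime }}\right),$$

which is no longer a function of $`t/t_w`$ alone: for $`𝒯\sim t_w`$ the effective age is smeared over the interval $`[t_w,2t_w]`$, mimicking $`\mu <1`$, and only for $`𝒯\ll t_w`$ does the pure scaling survive.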
Before ending this section, let us finally add an extra comment concerning the case $`\mu >1`$ (‘super-aging’). In some simple coarsening models of aging, discussed below, it can be argued that the relevant scaling variable should be $`\mathrm{log}(t+t_w)/\mathrm{log}(t_w)`$ rather than $`t/t_w`$. This corresponds to an effective value of $`\mu `$ greater than one. Other mechanisms leading to $`\mu >1`$ can be found; see below.
## 3 Simple models of aging
### 3.1 Domain growth in pure systems
A simple case where aging effects do appear is phase ordering in pure systems. Take for example an Ising ferromagnet suddenly quenched from high temperatures to a temperature below the transition temperature. The system then wants to order, and has to choose between the up phase and the down phase. Obviously, this takes some time, and the dynamics proceeds via domain growth. After a time $`t_w`$ after the quench, the typical distance between domain walls is $`\xi (t_w)`$, which grows as a power law of time, i.e. relatively quickly, even at low temperatures. A given spin, far from the domain walls, will thus have to wait for a time $`t`$ such that $`\xi (t_w+t)\simeq 2\xi (t_w)`$ to flip from – say – the up phase to the down phase and decorrelate. For a power-law growth, this means that the effective relaxation time is of the order of $`t_w`$ itself, corresponding to $`\mu =1`$. This behaviour is confirmed by several exactly soluble models of coarsening, such as the Ising model on a chain, or the ‘spherical’ model, where one can compute the correlation function explicitly to find:
$$C(t_w+t,t_w)=C_{\mathrm{st}}(t)+C_{\mathrm{ag}}\left(\frac{t}{t_w}\right).$$
(7)
One can also compute several other quantities, such as the effect of an external magnetic field $`h`$ on the aging properties. One then sees that when $`\xi (t_w)^{-1}`$ is smaller than $`h`$, the driving force due to the curvature of the domain walls is superseded by the driving force due to the external field. In this situation, the favoured phase quickly invades the whole system and aging is stopped. This leads, for very small fields and moderate time scales, to an effective value of the exponent $`\mu <1`$, as discussed in section 2.4. The main quantity of interest for comparison with experiments, however, is the response function. The result is that the aging part of the response function vanishes as $`\xi (t_w)^{-1}`$ as $`t_w`$ becomes large. In terms of the a.c. susceptibility, one finds that:
$$\chi ^{\prime \prime }(\omega ,t_w)=\chi _{\mathrm{st}}^{\prime \prime }(\omega )+\xi (t_w)^{-1}\chi _{\mathrm{ag}}^{\prime \prime }(\omega t_w).$$
(8)
Intuitively, this result means that the aging part of the susceptibility only comes from the domain walls, while the spins in the bulk of the domains contribute to the stationary part $`\chi _{\mathrm{st}}^{\prime \prime }(\omega )`$. Since the density of spins belonging to domain walls decreases as $`\xi ^{d-1}/\xi ^d=\xi ^{-1}`$, the aging contribution decreases with time as the density of walls.
Domain growth in pure systems is driven by surface tension and does not require thermal activation; the aging effects in these systems are therefore hard to detect experimentally, since the typical size of the domains soon reaches its maximum value [set either by the size of the system or by magneto-static (or other) considerations]. Similarly, one does not expect the cooling rate to have a major influence on the coarsening of the system.
### 3.2 Domain growth in random systems
More interesting is the situation in disordered ferromagnets (for example in the presence of quenched random fields or random bonds). In this case, the impurities act as pinning sites for the domain walls. The problem of elastic objects (such as domain walls, but also vortices in superconductors, dislocations, etc.) pinned by random impurities has been the subject of intense work in the past decade, both from a static point of view (where the typical equilibrium conformation of such objects is investigated) and from the point of view of their dynamics (relaxation to equilibrium, response to an external driving force, creep and the depinning transition). Actually, these systems constitute ‘baby’ spin glasses: frustration is present because of the competition between pinning, which tends to distort the structure, and elasticity, which tends to reduce the deformations. The main result of the theory is the appearance of a typical pinning energy $`E_p(\ell )`$ associated with the linear size $`\ell `$ of the piece of domain wall that attempts to reconform. This energy scale grows as a power of $`\ell `$:
$$E_p(\ell )\sim \mathrm{\Upsilon }(T)\ell ^\theta ,$$
(9)
where $`\mathrm{\Upsilon }`$ is a (temperature dependent) energy scale, and $`\theta `$ an exponent which depends on the dimensionality of the structure (1D for dislocations, 2D for domain walls, etc.) and on the correlations of the pinning field. Using a very naive Arrhenius law for thermal activation, this means that reconformation events that occur on a scale $`\ell `$ take a typical time:
$$t(\ell )\sim \tau _0\mathrm{exp}\left[\frac{\mathrm{\Upsilon }(T)\ell ^\theta }{T}\right].$$
(10)
Equivalently, the size over which the system can equilibrate after a time $`t_w`$ only grows logarithmically:
$$\ell (t_w)\sim \left[\frac{T}{\mathrm{\Upsilon }(T)}\mathrm{log}\left(\frac{t_w}{\tau _0}\right)\right]^{1/\theta }$$
(11)
In particular, the typical size of the growing domains is expected to grow logarithmically with time, that is, extremely slowly (actually, for domain growth, the exponent of the logarithm is probably larger than $`1/\theta `$, but this does not matter for the present qualitative discussion). Accordingly, one expects the aging part of the correlation function to be a function of $`\mathrm{log}(t_w+t)/\mathrm{log}(t_w)`$. This very slow growth means that the density of domain walls only decays slowly with time, and therefore that the aging contribution to the susceptibility is still significant even after macroscopic times.
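A minimal numerical illustration of Eqs. (10) and (11) follows; the parameter values are arbitrary, chosen only for illustration, and are not taken from any experiment:

```python
import numpy as np

tau0 = 1e-12                        # microscopic attempt time (s)
T, Upsilon, theta = 0.3, 1.0, 0.5   # temperature, pinning scale, barrier exponent

def ell(t):
    # length scale equilibrated after time t, Eq. (11)
    return ((T / Upsilon) * np.log(t / tau0)) ** (1.0 / theta)

def t_of_ell(l):
    # Arrhenius time needed to reconform a region of size l, Eq. (10)
    return tau0 * np.exp(Upsilon * l ** theta / T)

for t in (1.0, 1e3, 1e6):           # seconds
    l = ell(t)
    # doubling the equilibrated length costs an enormous factor in time,
    # which is the strong hierarchy of time scales discussed in the text:
    print(f"t = {t:.0e} s   ell(t) = {l:6.1f}   t(2*ell)/t = {t_of_ell(2 * l) / t:.1e}")
```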
Although very simple, the exponential relation between length scales and time scales given by Eq. (10) has far-reaching consequences: the dynamics becomes, in a loose sense, hierarchical. This is illustrated in Fig. 3. The object evolves between metastable configurations which differ by flips of regions of size $`\ell _1`$ in a time $`t(\ell _1)`$ that, because of the exponential dependence in Eq. (10), is much shorter than the time needed to flip a region of size $`\ell _2>\ell _1`$. Therefore, the dynamics of the short wavelengths happens on a time scale such that long wavelengths are effectively frozen. As we shall explain below, this feature is, in our eyes, a major ingredient to understand the coexistence of rejuvenation and memory. Another important consequence is the fact that domain growth becomes a very intermittent process: once an event on the scale of the domain size $`\xi `$ has happened, the details of the conformation on scales $`\ell <\xi `$ start evolving between nearby metastable states, while the overall pattern formed by the domains on scale $`\xi `$ hardly changes.
The equation (10) also allows one to define a very important quantity which we call, by analogy with the glass temperature $`T_g`$, the ‘glass length’ $`\ell _g`$, through $`\mathrm{\Upsilon }(T)\ell _g^\theta =𝒜T`$. The factor $`𝒜`$ is rather arbitrary; the choice $`𝒜=35`$ corresponds to a time of $`1000`$ seconds if $`\tau _0=10^{-12}`$ seconds. In analogy with the glass temperature $`T_g`$, one sees that length scales larger than $`\ell _g`$ cannot be equilibrated on reasonable time scales, while length scales smaller than $`\ell _g`$ are fully equilibrated. Qualitatively speaking, the equilibrated modes contribute to the stationary part of the correlation and/or response function, while the glassy modes $`\ell >\ell _g`$ contribute to the aging part. Therefore, the strong hierarchy of time scales induced by the exponential activation law allows equilibrated modes and aging modes to coexist.
Finally, it is easy to understand that the logarithmic growth law, Eq. (11), leads to a strong cooling rate dependence of the typical size of the domains: since the growth law is essentially that of pure systems as long as $`\xi \ll \ell _g(T)`$, a longer time spent at higher temperatures (where $`\ell _g`$ is large) obviously allows the domains to grow larger before getting pinned at lower temperatures.
### 3.3 Diffusion in a random potential 1: The Sinai model
It is useful to introduce a toy model for the dynamics of a pinned domain wall: the motion of a point particle in a random potential. This can be thought of as a reduction of the problem to the dynamics of the center of mass of the pinned object. From the study of these pinned objects, it is known that reconformations on scale $`\ell `$ typically change the position of the center of mass $`X`$ by an amount $`\ell ^\zeta `$, where $`\zeta `$ is a certain exponent, analogous to the exponent $`\theta `$ defined above. Since the energy changes by an amount $`\mathrm{\Upsilon }\ell ^\theta `$, the statistics of the random potential $`V(X)`$ acting on the center of mass $`X`$ must be such that:
$$\left[V(X)-V(X^{\prime })\right]^2\sim \mathrm{\Upsilon }^2|X-X^{\prime }|^{2\theta /\zeta }\qquad |X-X^{\prime }|\ll L^\zeta ,$$
(12)
where $`L`$ is the total transverse size of the object. For larger distances, $`[V(X)-V(X^{\prime })]^2`$ saturates to a finite value, since the impurities encountered by the pinned object become uncorrelated.
An interesting example is provided by a line in a plane (i.e. a domain wall in two dimensions) in the presence of impurities. In this case, the exponents $`\zeta `$ and $`\theta `$ are exactly known and read $`\zeta =2/3`$, $`\theta =1/3`$, leading to $`[V(X)-V(X^{\prime })]^2\sim \mathrm{\Upsilon }^2|X-X^{\prime }|`$. If one assumes that the statistics of $`V(X)`$ is Gaussian, then the potential $`V(X)`$ is a random walk, and the model under consideration is precisely the well known Sinai model, for which a large number of analytical results are known.
One can also define in this model a ‘glass scale’ $`X_g`$ such that, at temperature $`T_1`$:
$$\mathrm{\Upsilon }(T_1)\sqrt{X_g}=𝒜T_1\qquad \text{or}\qquad X_g=\left(\frac{𝒜T_1}{\mathrm{\Upsilon }(T_1)}\right)^2,$$
(13)
where we have taken into account a possible temperature dependence of $`\mathrm{\Upsilon }`$. For $`X\ll X_g`$, the motion is quasi-free and the dynamics is ‘fast’ (diffusive). For times corresponding to distances larger than $`X_g`$, the energy barriers strongly impede the motion, and lead to a slow logarithmic sub-diffusion:
$$t\sim \tau _0\mathrm{exp}\left[\frac{V(X)}{T_1}\right]\qquad X(t)\sim X_g\mathrm{log}^2\left(\frac{t}{\tau _0}\right).$$
(14)
More precisely, all the particles initially launched at $`X=0`$ will, after time $`t`$, be located in the deepest energy well available at that time, which is at a distance $`\mathrm{log}^2t`$ from the initial point. The relative distance between these particles, however, does not grow with time. The deepest trap available is so much more favourable than the others that the relative distance between all particles remains typically of order $`X_g`$: this is the Golosov phenomenon. Within the deepest well, on the other hand, the probability is roughly uniform, since the energy landscape there is shallow in comparison with $`T_1`$. One can actually argue, on the basis of exact results for this model, that the (intra-well) response of the particle to a small oscillating external field should behave as:
$$\chi _{\mathrm{ag}}(\omega ,t_w)\sim \frac{\mathrm{log}^3t_w}{\omega t_w}$$
(15)
where $`t_w`$ is the time since the quench from very high temperatures. This shows that within this simplified model, the response is not exactly a function of $`\omega t_w`$: there are logarithmic corrections. This means that the scaling variable is again, over a limited range of $`t_w`$, of the form $`\omega t_w^\mu `$ with $`\mu <1`$. In Fig. 4, we show the results of a numerical simulation performed with H. Yoshino, where $`\chi (\omega ,t_w)`$ is computed for the Sinai model at various frequencies, as a function of $`t_w`$. We have shown the slope $`1/t_w`$ for comparison, and rescaled the different curves using the value $`\mu =0.9`$ in the inset.
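For concreteness, here is a bare-bones version of such a simulation (our own minimal sketch of Metropolis dynamics in a lattice Sinai potential; this is not the actual code behind Fig. 4, and temperature and size are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
L, T = 100_000, 0.4                          # lattice size, temperature
V = np.cumsum(rng.choice([-1.0, 1.0], L))    # random-walk (Sinai) potential

x = x0 = L // 2
for t in range(1, 2**20 + 1):
    xn = min(max(x + rng.choice((-1, 1)), 1), L - 2)   # reflecting edges
    if rng.random() < np.exp(min(0.0, -(V[xn] - V[x]) / T)):
        x = xn                               # Metropolis acceptance step
    if t & (t - 1) == 0:                     # log-spaced output
        print(t, abs(x - x0))                # compare with the log^2(t) law
```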
Now, the very interesting property of the Sinai model is the fact that the potential $`V(X)`$ is strictly self-affine. This means that the statistics of the potential at small scales is identical, up to a scaling factor, to the statistics of the potential at larger scales (see Fig. 5). Therefore, when the temperature is lowered from $`T_1`$ to $`T_2<T_1`$ after a time $`t_{w1}`$, the particle has to restart its search for the most favourable well, much as it had done when the temperature first reached $`T_1`$ from high temperatures. Since the probability distribution within the well is uniform at the moment of the second temperature change, it indeed corresponds, effectively, to a high temperature quench. The point however is that the particles cannot leave the deep well they had reached at $`T_1`$ before a time exceedingly long compared to $`t_{w1}`$. This time $`t_{w2}^{*}`$ is the time needed to overcome the depth of the well reached at $`t_{w1}`$, and is given by:
$$t_{w2}^{*}=\tau _0\left(\frac{t_{w1}}{\tau _0}\right)^\beta \qquad \text{with}\qquad \beta =\frac{T_1\mathrm{\Upsilon }(T_2)}{T_2\mathrm{\Upsilon }(T_1)}.$$
(16)
Consider the case where the pinning energy vanishes above a certain temperature $`T_c`$, as $`\mathrm{\Upsilon }(T)\sim (T_c-T)^\omega `$, with a certain exponent $`\omega `$. (This is the case, for example, for domain walls near the Curie temperature.) Then taking $`\omega =1`$, $`T_1=0.9T_c`$ and $`T_2=0.8T_c`$, one finds that $`\beta =2.25`$. Therefore, if $`\tau _0=10^{-12}`$ second and $`t_{w1}=1`$ second, one finds that $`t_{w2}^{*}`$ is astronomically long, equal to $`10^{15}`$ seconds! In other words, the particle is completely trapped, even when the temperature only changes by $`10\%`$.
Therefore, this model provides a tantalizing scenario for the ‘rejuvenation and memory’ effect: as the temperature is lowered, new details of the potential appear, in a self-similar manner, and the aging dynamics over the barriers starts afresh. On the other hand, the particle remains effectively forever in the well that it had reached at $`T_1`$. Therefore, when the system is heated back to $`T_1`$, perfect memory is recovered. This scenario is very similar to the one advocated for spin glasses on the basis of a hierarchical landscape, inspired by Parisi’s mean field solution. We believe that the Sinai model offers a precise basis for such a picture. Note that there is no ‘chaos’ involved in this model: rejuvenation occurs because previously equilibrated modes are thrown out of equilibrium, but in the very same energy landscape.
### 3.4 Diffusion in a random potential 2: The trap model
The above model assumes that the random potential is Gaussian, with a correlation function compatible with direct scaling arguments, which lead to $`V(X)\sim X^{\theta /\zeta }`$. A slightly different picture emerges from the replica variational theory, which predicts a highly non-Gaussian effective pinning potential acting on the center of mass of the pinned object. In this description, the potential is a succession of local parabolas (corresponding to locally favourable configurations, or ‘traps’) matching at singular points. The full ‘replica symmetry breaking’ scheme needed to reproduce the correct scaling of the potential (i.e. $`V(X)\sim X^{\theta /\zeta }`$) means that the potential is actually a hierarchy of parabolas within parabolas, etc. This hierarchy directly corresponds to the existence of different length scales $`\ell `$, each one characterized by an energy scale $`E_{\ell }\sim \mathrm{\Upsilon }\ell ^\theta `$. An important difference with the above Sinai model is the statistics of the depths of the different valleys of the pinning potential: the prediction of the replica approach is that the deepest valleys obey the so-called Gumbel extreme value statistics:
$$P_{\ell }(E)\underset{E\gg E_{\ell }}{\simeq }\frac{1}{E_{\ell }}\mathrm{exp}\left(-\frac{E}{E_{\ell }}\right).$$
(17)
Assuming that the time needed to hop out of a trap of depth $`E`$ is $`\tau =\tau _0\mathrm{exp}(E/T)`$, one finds that the above exponential distribution of trap depths induces a power-law distribution of trapping times:
$$P_{\ell }(\tau )\underset{\tau \gg \tau _0}{\simeq }\frac{\tau _0^{x_{\ell }}}{\tau ^{1+x_{\ell }}}\qquad x_{\ell }=\frac{T}{E_{\ell }}\sim \left(\frac{\ell _g}{\ell }\right)^\theta .$$
(18)
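The power law follows from a one-line change of variables, spelled out here for clarity: with $`\tau =\tau _0e^{E/T}`$, i.e. $`E=T\mathrm{log}(\tau /\tau _0)`$,

$$P_{\ell }(\tau )=P_{\ell }(E)\frac{dE}{d\tau }=\frac{1}{E_{\ell }}\left(\frac{\tau }{\tau _0}\right)^{-T/E_{\ell }}\frac{T}{\tau }=x_{\ell }\frac{\tau _0^{x_{\ell }}}{\tau ^{1+x_{\ell }}},\qquad x_{\ell }=\frac{T}{E_{\ell }}.$$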
The model of a random walk between independent traps whose release times are given by Eq. (18) has been investigated in detail, and more recently in the context of the rheology of soft glassy materials. The interesting result is that the dynamics of this simple model is time-translation invariant as long as $`x_{\ell }>1`$ (high temperature phase, corresponding to small length scales), but becomes aging when the average trapping time diverges, i.e. when $`x_{\ell }<1`$ (large length scales). In this case, the dynamics becomes extremely intermittent, since the system spends most of its time in the deepest available trap.
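A minimal simulation sketch of this point (our own illustration, with $`\tau _0=1`$): draw independent trapping times from Eq. (18) and measure the probability that no hop occurs between $`t_w`$ and $`2t_w`$; for $`x_{\ell }<1`$ this probability depends on $`t_w`$ essentially only through the ratio $`t/t_w`$:

```python
import numpy as np

rng = np.random.default_rng(0)
x = 0.5                              # trap exponent x_l < 1: aging regime

def frac_stuck(t_w, n_runs=20_000):
    """P(no hop between t_w and 2 t_w) for a renewal process with
    P(tau) ~ tau^-(1+x), tau >= tau_0 = 1 (Pareto trapping times)."""
    stuck = 0
    for _ in range(n_runs):
        t = 0.0
        while t <= t_w:              # find the first hop occurring after t_w
            t += rng.pareto(x) + 1.0
        stuck += t > 2.0 * t_w       # still in the same trap at time 2 t_w
    return stuck / n_runs

for t_w in (1e2, 1e4):
    print(t_w, frac_stuck(t_w))      # roughly t_w independent: t/t_w scaling
```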
One can compute, within this simple model, the a.c. susceptibility (or frequency dependent elastic modulus) to find (note that the result quoted above for the Sinai model formally corresponds to $`x_{\ell }=0`$, a result that could have been anticipated, up to logarithmic corrections):
$$\chi _{\ell }^{\prime \prime }(\omega ,t_w)=A(x_{\ell })(\omega \tau _0)^{x_{\ell }-1}\quad (x_{\ell }>1)\qquad \text{and}\qquad \chi _{\ell }^{\prime \prime }(\omega ,t_w)=A(x_{\ell })(\omega t_w)^{x_{\ell }-1}\quad (x_{\ell }<1).$$
(19)
The case where all length scales evolve in parallel therefore leads to a total susceptibility given by $`\chi ^{\prime \prime }(\omega ,t_w)=\chi _{\mathrm{st}}^{\prime \prime }(\omega )+\chi _{\mathrm{ag}}^{\prime \prime }(\omega t_w)`$ with:
$$\chi _{\mathrm{st}}^{\prime \prime }(\omega )=\sum _{\ell <\ell _g}𝒢(\ell )A(x_{\ell })(\omega \tau _0)^{x_{\ell }-1}\qquad \chi _{\mathrm{ag}}^{\prime \prime }(\omega t_w)=\sum _{\ell >\ell _g}𝒢(\ell )A(x_{\ell })(\omega t_w)^{x_{\ell }-1},$$
(20)
where $`𝒢`$ is a ‘form factor’ counting the number of available modes at scale $`\ell `$. A very interesting consequence of Eq. (20) is that in the low frequency, long waiting time limit (more precisely when $`\omega \tau _0\ll 1`$ and $`\omega t_w\gg 1`$), the sum over $`\ell `$ is dominated by the region $`\ell \approx \ell _g`$, for which $`x_{\ell }\approx 1`$. For a fairly general function $`𝒢`$, one expects both the stationary and the aging part of $`\chi ^{\prime \prime }`$ to behave asymptotically as $`1/\mathrm{log}\omega `$. This can be translated, using the fluctuation-dissipation theorem, into a noise spectrum $`S(\omega )\sim 1/(\omega \mathrm{log}\omega )`$, i.e. so-called $`1/f`$ noise, ubiquitous in glassy systems. This mechanism suggests that $`1/f`$ noise should generically exhibit a slowly evolving component. Following the same argument, one also expects the aging contribution to $`\chi ^{\prime \prime }`$ to decay as a small power of $`t_w`$, eventually reaching a $`1/\mathrm{log}(\omega t_w)`$ behaviour for $`\mathrm{log}(\omega t_w)\gg 1`$. As mentioned above (see Eq. (4)), data on spin glasses typically give $`1-x_{\ell }\simeq 0.1{-}0.3`$ for $`\omega t_w`$ in the range $`1{-}10^4`$.
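The $`1/\mathrm{log}`$ behaviour can be made explicit in one line (our rewriting): approximating the sums by integrals over $`x_{\ell }`$ with a smooth weight near $`x_{\ell }=1`$,

$$\int _1dx(\omega \tau _0)^{x-1}\simeq \frac{1}{|\mathrm{log}\omega \tau _0|},\qquad \int _0^1dx(\omega t_w)^{x-1}\simeq \frac{1}{\mathrm{log}(\omega t_w)},$$

valid for $`\omega \tau _0\ll 1\ll \omega t_w`$, which also gives $`S(\omega )\sim \chi ^{\prime \prime }(\omega )/\omega \sim 1/(\omega \mathrm{log}\omega )`$.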
The strictly hierarchical nature of the landscape also leads, by construction, to the ‘rejuvenation and memory’ effect: when the temperature is reduced from $`T_1`$ to $`T_2`$, the ‘glass length scale’ moves down from $`\ell _{g1}`$ to $`\ell _{g2}`$. Modes corresponding to $`\ell _{g2}<\ell <\ell _{g1}`$, which were equilibrated at $`T_1`$, become aging at $`T_2`$ (hence the ‘rejuvenation’), while the modes such that $`\ell >\ell _{g1}`$, which were aging at $`T_1`$, become effectively frozen at $`T_2`$ (hence the ‘memory’). This is illustrated in Fig. 6.
Let us finally discuss how values of $`\mu \ne 1`$ can arise within the trap model. If one assumes that the traps visited during the evolution of the system are all different, as implicitly done above, then $`\mu =1`$. This is not true if, for example, the geometry of the trap connectivity is of low dimension, for example if the traps sit on a one-dimensional structure. In this case, one can show that $`\mu =1/(1+x)`$ for $`x<1`$. One can also look at simpler models, where a particle makes directed hops on a line on which traps have a deterministic lifetime $`\tau (n)`$ that grows with the distance $`n`$ from the origin. The position $`N(t_w)`$ of the particle at time $`t_w`$ is therefore given by:
$$t_w=\sum _{n=1}^N\tau (n),$$
(21)
since we have assumed that the walk is directed towards $`n>0`$. The subsequent evolution of the particle, over a time $`t\lesssim \tau (N)`$, will be a function of $`t/\tau (N)`$. Taking $`\tau (n)\sim n^\beta `$ immediately leads to $`\mu =\beta /(\beta +1)<1`$. If $`\tau (n)`$ grows faster than exponentially with $`n`$, then one finds that the effective value of $`\mu `$ is larger than one, because of the presence of logarithmic corrections which lead to superaging.
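The intermediate step is elementary (spelled out here for clarity): for $`\tau (n)\sim n^\beta `$,

$$t_w=\sum _{n=1}^N\tau (n)\sim \frac{N^{\beta +1}}{\beta +1}\qquad \Rightarrow \qquad \tau (N)\sim N^\beta \sim t_w^{\beta /(\beta +1)},$$

so that the effective relaxation time after $`t_w`$ grows as $`t_w^\mu `$ with $`\mu =\beta /(\beta +1)`$.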
## 4 Back to experiments
Equipped with the above theoretical ideas, one can return to the experimental results presented in the first sections, and see how far one can go in their interpretation.
### 4.1 Disordered Ferromagnets
From the discussion above, one expects that aging in disordered ferromagnets can be understood in terms of the superposition of slow domain growth and domain wall reconformations in the pinning field created by the disorder. Correspondingly, the aging part of the susceptibility is expected to behave as:
$$\chi _{\mathrm{ag}}^{\prime \prime }(\omega ,t_w)=\frac{1}{\xi (t_w)}\left[\chi _{w\mathrm{st}}^{\prime \prime }(\omega )+\chi _{w\mathrm{ag}}^{\prime \prime }(\omega ,t_w)\right],$$
(22)
where $`\xi (t_w)`$ is the (slowly growing) domain size, $`\chi _{w\mathrm{st}}^{\prime \prime }`$ the stationary mobility of the domain walls at frequency $`\omega `$, and $`\chi _{w\mathrm{ag}}^{\prime \prime }`$ the aging contribution coming from the reconformation modes, expected to scale as $`\omega t_w`$. This expression accounts well for the observations made in random-ferromagnet-like systems; in particular:
* The aging part of the response is quite sensitive to the cooling rate, and decreases when cooling is slower and/or when the waiting time increases. This dependence is logarithmic, and directly reflects the behaviour of $`\xi (t_w)`$.
* For small frequencies, $`\chi _{w\mathrm{ag}}^{\prime \prime }`$ dominates and one observes an approximate $`\omega t_w`$ scaling of the aging part of $`\chi `$, up to log-corrections coming from the $`1/\xi `$ factor. On the other hand, for larger frequencies, the reconformation contribution becomes negligible and the $`\omega t_w`$ scaling breaks down completely, as observed experimentally.
* One observes rejuvenation and memory, induced by the slow reconformation of the walls. However, memory is only recovered if the time spent at the lower temperature $`T_2`$ is short enough, such that the overall position of the domain walls has not changed significantly. In the other limit, i.e. when the walls can move substantially, the impurities interacting with the walls are completely renewed, and memory is lost.
The fact that aging in glycerol bears some resemblance to aging in disordered ferromagnets suggests that some kind of domain growth might also be present in structural glasses. The precise nature of this domain growth is at this stage not very clear, but it is certainly a very interesting subject to explore further.
### 4.2 Spin-glasses
The interpretation of the aging experiments in spin-glasses is not as transparent; this is directly related to the fact that the correct theoretical picture for spin-glasses in physical dimensions (as opposed to mean-field) is still very controversial. The simplest description is the droplet theory, where one essentially assumes that a spin-glass is some kind of ‘disguised ferromagnet’, in the sense that there are only two stable phases for the system, which one can (by convention) call ‘up’ and ‘down’. The dynamics of the system after a quench can then again be thought of in terms of domain growth in a disordered system, with the difference that the ‘pattern’ that is progressively invading the system is itself random. Correspondingly, the energy of a ‘domain’ grows with its size $`\ell `$ as $`\ell ^\theta `$, where $`\theta `$ is smaller than the value $`d-1`$ which holds in a pure ferromagnet. This is the scenario proposed (in a dynamical context) by Fisher and Huse, and further investigated by Koper and Hilhorst. However, this scenario immediately stumbles on a first difficulty, in that it would predict a very strong cooling rate dependence of the susceptibility which, as shown in Fig. 1, is definitely not observed.
A way out of this contradiction is to argue that even if at any given temperature there are only two stable phases in competition (i.e. one pattern and its spin reversed), the favoured pattern changes chaotically with temperature. More precisely, if the temperature changes by $`\mathrm{\Delta }T`$, then the precise relative arrangement of the spins is preserved for length scales less than a ‘chaos’ length $`\ell _{\mathrm{\Delta }T}\sim \mathrm{\Delta }T^{-y}`$, and completely destroyed at larger length scales. (Here, $`y`$ is a new exponent related to $`\theta `$.) If this is the case, then all the aging achieved at higher temperatures is useless to bring the system closer to equilibrium at $`T_1`$, where domain growth has to restart from scratch. This also explains how the system rejuvenates upon a small temperature change.
The problem with this interpretation is that: (i) chaos with temperature has not been convincingly established, either theoretically or numerically. In particular, there seems to be no ‘chaos’ with temperature in the mean-field Sherrington-Kirkpatrick model; and (ii) if the evolution at $`T_2`$ consists of growing new domains of the $`T_2`$ phase everywhere in space, it is difficult to imagine how this does not completely destroy the correlations built at temperature $`T_1`$. A way out would be to say that the new $`T_2`$-phase only nucleates around particular nucleation sites and grows very slowly, in such a way that a substantial region of space is still filled by the $`T_1`$ phase. However, this would mean that it is impossible to describe the dynamics of the system in terms of two phases only, as assumed in the droplet model. At any temperature, the configuration would necessarily be a mixture of all the phases corresponding to the previous temperatures encountered during the thermal history of the system.
The ‘droplet’ theory is radically different from the picture emerging from the Parisi solution of the mean-field Sherrington-Kirkpatrick model. There, one can show that the number of ‘phases’ in which the system can organize is very large. More precisely, there are configurations of nearly equal energy which differ by the flip of a finite fraction of the total number of spins. A consistent picture of how this scenario applies to finite dimensional spin-glasses is however still missing. A recent interesting suggestion made by Houdayer and Martin is that these different phases differ by the reversal of large non-compact, sponge-like objects. These objects have a linear dimension equal to the size of the system, but are not space-filling. Rather, their boundary separates an ‘interior’ from an ‘exterior’ which form a bi-continuous structure. If this is the case, then by definition this boundary is a kind of domain wall with zero surface tension. This is crucial in the sense that these ‘domain walls’ can hop from one metastable configuration to another (much as in a disordered ferromagnet) but with no overall tendency to coarsen.
In other words, the ordered phase of spin-glasses is, in a sense, full of permanent domain wall-like defects (these permanent defects are probably very similar to the ‘active’ droplets of Fisher and Huse). These domain walls can only be precisely defined in reference to the true ground state of the system; their physical reality should however be thought of as particularly ‘mobile’ regions of spins; the precise positions of these ‘walls’ define the possible metastable states of the sample.
After a quench to low temperatures, the system coarsens to get rid of the excess intensive energy, as has been beautifully demonstrated numerically. However, the state left behind is still a ‘soup’ of walls with zero surface tension. The density of these walls is large and does not decay with time, and therefore provides the major contribution to the aging signal. The rejuvenation and memory effect can be understood in terms of the progressive quenching of smaller and smaller length scales of these walls as the temperature decreases, as we argued above for disordered ferromagnets. This interpretation is actually motivated by the fact that the temperature cycling experiments in the ferromagnetic phase and in the spin-glass phase of a single ‘reentrant’ spin-glass sample reveal very similar features. The only difference is that memory is perfect in spin-glasses, as if no coarsening was present.
Let us insist once more on the fact that the above mechanism for rejuvenation is very different from the ‘chaos’ hypothesis. Within the trap model, aging is the phenomenon which occurs when the Boltzmann weight, initially uniformly scattered among many microstates, has to ‘condense’ into a small fraction of them. It is the dynamic counterpart of the entropy crisis transition that takes place in the Random Energy Model .
Obviously, the above discussion is very speculative and a deeper understanding of aging in spin-glasses is very much needed. We however believe that the idea that the ordered phase of a spin-glass contains a large number of pinned, zero-tension walls, which reconform in their disordered landscape, is a useful picture.
## 5 Conclusion
In these lectures, we have tried to review the most striking experimental results on aging in a variety of disordered systems, which reveal similar features but also important differences. We have argued that a generic model that reproduces many of these features is that of pinned defects in a disordered environment. The fact that energy barriers grow with the size of the reconforming regions immediately leads to a strong hierarchy of time scales. In particular, long wavelength aging modes and short wavelength equilibrated modes coexist in the system and offer a simple mechanism to explain the rejuvenation/memory effect. These properties can be discussed within simplified models where the dynamics of the whole pinned object is reduced to that of its center of mass in a disordered potential. The main difference between random ferromagnets and spin-glasses seems to lie in the fact that while domains slowly grow in the former case (thereby progressively reducing the density of domain walls), the fraction of ‘domain walls’ in spin-glasses appears to remain constant in time.
We have not attempted to discuss here the dynamical mean-field models, which have been much studied recently. These models are also able to reproduce many of the interesting features of aging, including the rejuvenation/memory effect. Furthermore, they allow one to make precise predictions on the possible violations of the Fluctuation-Dissipation Theorem, and on the relation between this violation and the non-trivial overlap distribution function which appears in the static solution of these models. Finally, the exact dynamical equations of these models at high temperature are very similar to those of the Mode-Coupling Theory for fragile glasses. Therefore, one can study by analogy the extension of the Mode-Coupling equations to the glass phase, and obtain, within this framework, interesting results on the aging properties of fragile glasses. However, the relevance of these mean-field theories to finite-dimensional systems is not obvious, especially at low temperatures. This is because these models actually describe the diffusion of a single particle in a very high dimensional disordered potential. In high dimensions, the particle is never trapped in the bottom of a valley: there are always directions along which to escape. (This is true at least for discontinuous spin-glasses; the geometrical interpretation for ‘continuous’ spin-glasses is less clear.) The aging dynamics is dominated by the fact that the average number of unstable directions decreases with time, not by the fact that typical energy barriers grow with time. The inclusion of true activated effects in these mean-field (or Mode-Coupling) equations is still very much an open problem.
## Acknowledgements
The ideas presented here owe a lot to many colleagues and friends with whom I have discussed these matters over the years, in particular F. Alberici, L. Cugliandolo, D. Dean, V. Dupuis, P. Doussineau, J. Hammann, J. Kurchan, A. Levelut, M. Mézard, P. Nordblad, E. Vincent and H. Yoshino. Interesting discussions with J. L. Barrat, A. Bray, M. Cates, D. S. Fisher, J. Houdayer, W. Kob, Ph. Maass, E. Marinari, O. Martin, R. Orbach, E. Pitard and P. Sollich must also be acknowledged. Finally, I want to thank Mike Cates for inviting me to give these lectures in St Andrews, and for a most enjoyable and fruitful collaboration over the years.
Instantons in curvilinear coordinates
## The third connection
The years that have passed since the discovery of instantons have not brought an answer to the question of the role of instantons in QCD. As long as confinement remains a puzzle, all references to instantons at long scales are ambiguous. Indications may come from studies of instanton effects in phenomenological models. These could tell whether confinement may seriously affect pseudoparticles, and vice versa.
Common confinement models look most natural in non-Cartesian coordinate frames. The obvious choice for bags is 3+1-cylindrical, i.e. 3-spherical + time, coordinates, while strings would prefer 2+2-cylindrical (2+1-cylindrical + time) geometry. Nevertheless instantons were usually discussed in the Cartesian frame (which is ideal in vacuum). The purpose of the present work is to draw attention to the problem and to develop the adequate technique. We shall generalize to curvilinear coordinates the multi-instanton solutions of ’tHooft and of Jackiw, Nohl & Rebbi, and simplify the formulae by a gauge transformation. Presently I do not know whether the procedure is good for other topological configurations, but I would expect that it makes sense for the ADHM solution. (I am grateful to L. Lipatov and S. Moch for the questions.)
We start from the basics of curvilinear coordinates in Sect. 1.1 and introduce the first two connections, namely the Levi-Civita connection and the spin connection. In Sect. 1.2 we describe the multi-instanton solutions. In Sect. 2 we rewrite instantons in non-Cartesian coordinates and propose the gauge transform that makes the formulae compact. The price will be the appearance of the third, so-called compensating, gauge connection. The example of the $`O(4)`$-spherical coordinates is sketched in Sect. 3. Singularities of the gauged solution are discussed in Sect. 4. The last part summarizes the results.
## 1 Basics
### 1.1 Curvilinear coordinates
We shall consider flat 4-dimensional euclidean space-time that may be parametrized either by the set of Cartesian coordinates $`x^\mu `$ or by curvilinear ones called $`q^\alpha `$. The $`q`$-frame is characterized by the metric tensor $`g_{\alpha \beta }(q)`$:
$$ds^2=dx_\mu ^2=g_{\alpha \beta }(q)dq^\alpha dq^\beta .$$
(1)
In the $`q`$-frame the derivatives $`\partial /\partial x^\mu `$ should be replaced by the covariant ones, $`D_\alpha `$. For example, the derivative of a covariant vector $`A_\beta `$ is:
$$D_\alpha A_\beta =\partial _\alpha A_\beta -\mathrm{\Gamma }_{\alpha \beta }^\gamma A_\gamma .$$
(2)
The function $`\mathrm{\Gamma }_{\beta \gamma }^\alpha `$ is called the Levi-Civita connection. It can be expressed in terms of the metric tensor ($`g_{\alpha \beta }g^{\beta \gamma }=\delta _\alpha ^\gamma `$):
$$\mathrm{\Gamma }_{\beta \gamma }^\alpha =\frac{1}{2}g^{\alpha \delta }\left(\frac{\partial g_{\delta \beta }}{\partial q^\gamma }+\frac{\partial g_{\delta \gamma }}{\partial q^\beta }-\frac{\partial g_{\beta \gamma }}{\partial q^\delta }\right).$$
(3)
Often it is convenient to use instead of $`g_{\alpha \beta }`$ the four vectors $`e_\alpha ^a`$ called the vierbein:
$$g_{\alpha \beta }(q)=\delta _{ab}e_\alpha ^a(q)e_\beta ^b(q).$$
(4)
Multiplication by $`e_\alpha ^a`$ converts coordinate (Greek) indices into the vierbein (Latin) ones,
$$A^a=e_\alpha ^aA^\alpha .$$
(5)
Covariant derivatives of quantities with Latin indices are defined in terms of the spin connection $`R_{\alpha b}^a(q)`$,
$$D_\alpha A^a=\partial _\alpha A^a+R_{\alpha b}^aA^b.$$
(6)
The two connections $`\mathrm{\Gamma }_{\alpha \delta }^\beta `$ and $`R_{\alpha b}^a`$ are related to each other as follows:
$$R_{\alpha b}^a=e_\beta ^a\partial _\alpha e_b^\beta +e_\beta ^a\mathrm{\Gamma }_{\alpha \gamma }^\beta e_b^\gamma =e_\beta ^a(D_\alpha e^\beta )_b.$$
(7)
### 1.2 Instantons
We shall discuss pure euclidean Yang-Mills theory with the $`SU(2)`$ gauge group. The vector potential is $`\widehat{A}_\mu =\frac{1}{2}\tau ^aA_\mu ^a`$, where $`\tau ^a`$ are the Pauli matrices. The (Cartesian) covariant derivative is $`D_\mu =\partial _\mu -i\widehat{A}_\mu `$, and the action has the form:
$$S=\int \frac{\mathrm{tr}\widehat{F}_{\mu \nu }^2}{2g^2}\,d^4x=\int \frac{\mathrm{tr}\widehat{F}_{\alpha \beta }\widehat{F}^{\alpha \beta }}{2g^2}\sqrt{g}\,d^4q,$$
(8)
where $`g=\mathrm{det}\,g_{\alpha \beta }`$. The formula for the gauge field strength $`\widehat{F}_{\alpha \beta }`$ is universal:
$$\widehat{F}_{\alpha \beta }(\widehat{A})=\partial _\alpha \widehat{A}_\beta -\partial _\beta \widehat{A}_\alpha -i[\widehat{A}_\alpha ,\widehat{A}_\beta ].$$
(9)
The action is invariant under gauge transformations,
$$\widehat{A}_\mu \to \widehat{A}_\mu ^\mathrm{\Omega }=\mathrm{\Omega }^{\dagger }\widehat{A}_\mu (x)\mathrm{\Omega }+i\mathrm{\Omega }^{\dagger }\partial _\mu \mathrm{\Omega },$$
(10)
where $`\mathrm{\Omega }`$ is a unitary $`2\times 2`$ matrix, $`\mathrm{\Omega }^{\dagger }=\mathrm{\Omega }^{-1}.`$
The field equations have selfdual ($`F_{\mu \nu }=\stackrel{~}{F}_{\mu \nu }=\frac{1}{2}ϵ_{\mu \nu \lambda \sigma }F^{\lambda \sigma }`$) solutions known as instantons. The most general explicit selfdual configuration, found by Jackiw, Nohl and Rebbi, is:
$$\widehat{A}_\mu (x)=\frac{\widehat{\eta }_{\mu \nu }^{-}}{2}\partial _\nu \mathrm{ln}\rho (x),$$
(11)
where $`\widehat{\eta }_{\mu \nu }`$ is the matrix version of ’tHooft’s $`\eta `$-symbol:
$$\widehat{\eta }_{\mu \nu }^\pm =-\widehat{\eta }_{\nu \mu }^\pm =\{\begin{array}{cc}\tau ^aϵ^{a\mu \nu };& \mu ,\nu =1,2,3;\\ \pm \tau ^a\delta ^{\mu a};& \nu =4.\end{array}$$
(12)
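Since the manipulations below hinge entirely on the algebra of these symbols, a short numerical check may be useful. The following sketch is our addition, not part of the original text; it lets the index 3 play the role of the euclidean ‘4’ direction, builds $`\widehat{\eta }^\pm `$ as 4×4 arrays of 2×2 matrices, and verifies antisymmetry and (anti-)self-duality.

```python
import numpy as np
from itertools import permutations

# Pauli matrices tau^1, tau^2, tau^3
tau = np.array([[[0, 1], [1, 0]],
                [[0, -1j], [1j, 0]],
                [[1, 0], [0, -1]]], dtype=complex)

def levi_civita(n):
    eps = np.zeros((n,) * n)
    for p in permutations(range(n)):
        inv = sum(p[i] > p[j] for i in range(n) for j in range(i + 1, n))
        eps[p] = (-1) ** inv
    return eps

eps3, eps4 = levi_civita(3), levi_civita(4)

def eta_hat(sign):
    """'tHooft symbols of eq. (12); mu,nu = 0..3, index 3 <-> euclidean '4'."""
    e = np.zeros((4, 4, 2, 2), dtype=complex)
    for mu in range(3):
        for nu in range(3):
            e[mu, nu] = np.tensordot(eps3[:, mu, nu], tau, axes=1)
        e[mu, 3] = sign * tau[mu]      # the nu = 4 entry
        e[3, mu] = -sign * tau[mu]     # antisymmetry
    return e

for sign, name in ((+1, "eta^+"), (-1, "eta^-")):
    e = eta_hat(sign)
    dual = 0.5 * np.einsum('mnls,lsij->mnij', eps4, e)   # (1/2) eps_{mn ls} eta_{ls}
    print(name, "antisymmetric:", np.allclose(e, -e.transpose(1, 0, 2, 3)),
          "| duality eigenvalue %+d:" % sign, np.allclose(dual, sign * e))
```

Both symbols come out antisymmetric, with $`\widehat{\eta }^+`$ self-dual and $`\widehat{\eta }^-`$ anti-self-dual, as used throughout the text.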
The widely used regular and singular instanton gauges, as well as the famous ’tHooft Ansatz, may be cast into a form similar to (11).
Our aim is to generalize the solution (11) to curvilinear coordinates. We shall not refer to the explicit form of $`\rho (x)`$ and the results will be applicable to all the cases.
## 2 Multi-instantons in curvilinear coordinates
### 2.1 The problem and the solution
It is not a big deal to transform the covariant vector $`\widehat{A}_\mu ,`$ (11), to $`q`$-coordinates. However this makes the constant numerical tensor $`\widehat{\eta }_{\mu \nu }`$ coordinate-dependent
$$\widehat{\eta }_{\mu \nu }\to \widehat{\eta }_{\alpha \beta }=\widehat{\eta }_{\mu \nu }\frac{\partial x^\mu }{\partial q^\alpha }\frac{\partial x^\nu }{\partial q^\beta }.$$
(13)
We propose to factorize the coordinate dependence by means of the gauge transformation such that
$$\mathrm{\Omega }^{\dagger }\widehat{\eta }_{\alpha \beta }\mathrm{\Omega }=e_\alpha ^ae_\beta ^b\widehat{\xi }_{ab}.$$
(14)
Here $`\widehat{\xi }_{ab}`$ is a constant numerical matrix tensor,
$$\widehat{\xi }_{ab}=\delta _a^\mu \delta _b^\nu \widehat{\eta }_{\mu \nu }.$$
(15)
It takes the place of $`\widehat{\eta }_{\mu \nu }`$ in non-Cartesian coordinates.
It can be shown that the matrix $`\mathrm{\Omega }`$ does exist provided that the $`x`$-frame and the $`q`$-frame have the same orientation and the two sides of (14) are of same duality.
The gauge-rotated instanton field is the sum of the two pieces:
$$\widehat{A}_\alpha ^\mathrm{\Omega }(q)=\frac{1}{2}e_\alpha ^a\widehat{\xi }_{ab}e^{b\beta }\partial _\beta \mathrm{ln}\rho (q)+i\mathrm{\Omega }^{\dagger }\partial _\alpha \mathrm{\Omega }.$$
(16)
The first addend is almost traditional and does not depend on the $`\mathrm{\Omega }`$-matrix whereas the second one carries information about the $`q`$-frame. It is entirely of geometrical origin. We call it the compensating connection because it compensates the coordinate dependence of $`\widehat{\eta }_{ab}=e_a^\alpha e_b^\beta \widehat{\eta }_{\alpha \beta }`$ and reduces it to the constant $`\widehat{\xi }_{ab}`$.
So far we have not specified the duality of the $`\widehat{\eta }`$-symbol. However, the $`\mathrm{\Omega }`$-matrices and compensating connections for $`\widehat{\eta }^+`$ and $`\widehat{\eta }^-`$ are different. In general, $`\widehat{A}_\alpha ^{\mathrm{comp}\pm }`$ are respectively the selfdual and antiselfdual projections of the spin connection onto the gauge group:
$$\widehat{A}_\alpha ^{\mathrm{comp}\pm }=i\mathrm{\Omega }_\pm ^{\dagger }\partial _\alpha \mathrm{\Omega }_\pm =\frac{1}{4}R_\alpha ^{ab}\widehat{\xi }_{ab}^\pm .$$
(17)
The last formula does not contain $`\mathrm{\Omega }`$ that has dropped out of the final result. In order to write down the multi-instanton solution one needs only the vierbein and the associated spin connection.
### 2.2 Triviality of the compensating field.
The fact of the compensating connection $`\widehat{A}^{\mathrm{comp}}`$, (17), being a pure gauge is specific to the flat space. It turns out that the field strength $`\widehat{F}_{\alpha \beta }(\widehat{A}^{\mathrm{comp}})`$ is related to the Riemann curvature of the space-time $`R_{\alpha \beta }^{\gamma \delta }`$:
$$\widehat{F}_{\alpha \beta }(\widehat{A}^{\mathrm{comp}\pm })=\frac{1}{4}R_{\alpha \beta }^{\gamma \delta }\widehat{\xi }_{\gamma \delta }^\pm .$$
(18)
Thus $`\widehat{F}_{\alpha \beta }(\widehat{A}^{\mathrm{comp}})=0`$ provided that $`R_{\alpha \beta }^{\gamma \delta }=0`$. Simple changes of variables $`x^\mu \to q^\alpha `$ do not generate curvature, and $`\widehat{A}^{\mathrm{comp}}`$ is a pure gauge. However this is not the case in curved space-times.
### 2.3 Duality and topological charge
As long as we limit ourselves to identical transformations, the vector potential (16) must satisfy the classical field equations. However, the duality equation looks different in a non-Cartesian frame. If written with coordinate indices it is:
$$\widehat{F}_{\alpha \beta }=\frac{\sqrt{g}}{2}ϵ_{\alpha \beta \gamma \delta }\widehat{F}^{\gamma \delta }.$$
(19a)
Still it retains the familiar form in the vierbein notation:
$$\widehat{F}_{ab}=\frac{1}{2}ϵ_{abcd}\widehat{F}^{cd}.$$
(19b)
The topological charge is given by the integral
$$q=\frac{1}{32\pi ^2}\int ϵ_{\alpha \beta \gamma \delta }\,\mathrm{tr}\,\widehat{F}^{\alpha \beta }\widehat{F}^{\gamma \delta }\,d^4q,$$
(20a)
which in the vierbein notation becomes
$$q=\frac{1}{32\pi ^2}\int ϵ_{abcd}\,\mathrm{tr}\,\widehat{F}^{ab}\widehat{F}^{cd}\sqrt{g}\,d^4q.$$
(20b)
The general expression for $`\widehat{F}_{\alpha \beta }`$ in a non-Cartesian frame is rather clumsy, but it simplifies for one instanton. The vector potential in regular gauge is ($`r^2=x_\mu ^2`$):
$$\widehat{A}_\mu ^I=\frac{\widehat{\eta }_{\mu \nu }^+}{2}\partial _\nu \mathrm{ln}\left(r^2+\rho ^2\right).$$
(21)
The combined coordinate and $`\mathrm{\Omega }_+`$ gauge transformations convert it into
$$\widehat{A}_\alpha ^I=\frac{1}{2}e_\alpha ^a\widehat{\xi }_{ab}e^{b\beta }\partial _\beta \mathrm{ln}(r^2+\rho ^2)+\widehat{A}^{\mathrm{comp}+},$$
(22)
and the field strength becomes plainly selfdual:
$$\widehat{F}_{ab}(\widehat{A}^I)=-\frac{2\rho ^2\,\widehat{\xi }_{ab}^+}{\left(r^2+\rho ^2\right)^2},$$
(23)
This generalizes the regular gauge to arbitrary non-Cartesian coordinates.
## 3 Example
We shall consider an instanton placed at the origin of the $`O(4)`$-spherical coordinates. Those are the radius and three angles: $`q^\alpha =(\chi ,\varphi ,\theta ,r)`$. The polar axis is aligned with $`x^1`$ and
$`x^1`$ $`=`$ $`r\mathrm{cos}\chi ;`$ (24a)
$`x^2`$ $`=`$ $`r\mathrm{sin}\chi \mathrm{sin}\theta \mathrm{cos}\varphi ;`$ (24b)
$`x^3`$ $`=`$ $`r\mathrm{sin}\chi \mathrm{sin}\theta \mathrm{sin}\varphi ;`$ (24c)
$`x^4`$ $`=`$ $`r\mathrm{sin}\chi \mathrm{cos}\theta .`$ (24d)
The vierbein and the metric tensor are diagonal:
$$e_\alpha ^a=\mathrm{diag}(r,r\mathrm{sin}\chi \mathrm{sin}\theta ,r\mathrm{sin}\chi ,\mathrm{\hspace{0.17em}1}).$$
(25)
Now one may start from the vector potential (21) and carry out the entire procedure step by step. In addition to the calculation of the instanton part, this includes calculating $`\mathrm{\Gamma }_{\beta \gamma }^\alpha `$, finding $`R_\alpha ^{ab}`$ and, finally, computing $`\widehat{A}^{\mathrm{comp}+}`$. The $`\widehat{\xi }_{ab}^+`$-symbol coincides with $`\widehat{\eta }_{ab}^+`$, (12, 15). The result is:
$`\widehat{A}_\chi ^I`$ $`=`$ $`{\displaystyle \frac{\tau _x}{2}}\left({\displaystyle \frac{r^2-\rho ^2}{r^2+\rho ^2}}\right);`$ (26a)
$`\widehat{A}_\varphi ^I`$ $`=`$ $`{\displaystyle \frac{\tau _x}{2}}\mathrm{cos}\theta +{\displaystyle \frac{\tau _y}{2}}\mathrm{sin}\chi \mathrm{sin}\theta \left({\displaystyle \frac{r^2-\rho ^2}{r^2+\rho ^2}}\right)`$ (26b)
$`+{\displaystyle \frac{\tau _z}{2}}\mathrm{cos}\chi \mathrm{sin}\theta ;`$
$`\widehat{A}_\theta ^I`$ $`=`$ $`{\displaystyle \frac{\tau _y}{2}}\mathrm{cos}\chi +{\displaystyle \frac{\tau _z}{2}}\mathrm{sin}\chi \left({\displaystyle \frac{r^2-\rho ^2}{r^2+\rho ^2}}\right);`$ (26c)
$`\widehat{A}_r^I`$ $`=`$ $`0.`$ (26d)
The corresponding field strength is given by (23).
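The chain (3) → (7) → (17) used in this example is mechanical enough to be automated. The fragment below is our own symbolic cross-check, not part of the original text; it assumes the coordinate ordering $`q^\alpha =(\chi ,\varphi ,\theta ,r)`$ and the diagonal vierbein (25), and prints the non-vanishing spin-connection components (up to sign conventions), which reproduce the trigonometric structure of (26); contracting them with $`\widehat{\xi }_{ab}^+`$ as in (17) then yields $`\widehat{A}^{\mathrm{comp}+}`$ directly.

```python
import sympy as sp

chi, phi, theta, r = q = sp.symbols('chi phi theta r', positive=True)

# Diagonal vierbein of eq. (25); row = Latin index a, column = Greek index alpha
e = sp.diag(r, r*sp.sin(chi)*sp.sin(theta), r*sp.sin(chi), 1)
einv = e.inv()          # inverse vierbein e_a^alpha
g = e.T * e             # metric tensor, eq. (4)
ginv = g.inv()

def Gamma(al, be, ga):
    """Levi-Civita connection Gamma^al_{be ga}, eq. (3)."""
    return sp.simplify(sum(ginv[al, de]*(sp.diff(g[de, be], q[ga])
                                         + sp.diff(g[de, ga], q[be])
                                         - sp.diff(g[be, ga], q[de]))
                           for de in range(4))/2)

def R(al, a, b):
    """Spin connection R_{al b}^a, eq. (7)."""
    val = sum(e[a, be]*sp.diff(einv[be, b], q[al]) for be in range(4))
    val += sum(e[a, be]*Gamma(be, al, ga)*einv[ga, b]
               for be in range(4) for ga in range(4))
    return sp.simplify(val)

for al in range(4):
    for a in range(4):
        for b in range(a + 1, 4):
            comp = R(al, a, b)
            if comp != 0:
                print(f"R_{q[al]}^({a+1},{b+1}) = {comp}")
```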
## 4 Singularities
Note that the vector field (26) is singular, since neither $`\widehat{A}_\theta ^I`$ nor $`\widehat{A}_\varphi ^I`$ goes to zero near the polar axes $`\chi =0`$ and $`\theta =0`$. These singularities are produced by the gauge transformation and must not affect observables. However, they may show up in gauge-variant quantities. We shall demonstrate this for the Chern–Simons number.
The topological charge, (20), can be represented by the surface integral $`q=\oint K^\alpha \,dS_\alpha `$. Here,
$$K^\alpha =\frac{ϵ^{\alpha \beta \gamma \delta }}{16\pi ^2}\mathrm{tr}\left(\widehat{A}_\beta \widehat{F}_{\gamma \delta }+\frac{2i}{3}\widehat{A}_\beta \widehat{A}_\gamma \widehat{A}_\delta \right).$$
(27)
Even though $`q`$ is invariant, $`K^\alpha `$ depends on the gauge. Consider the Cartesian instanton in the $`\widehat{A}_4=0`$ gauge. The two contributions to the topological charge come from the $`x_4=\pm \mathrm{\infty }`$ hyperplanes, $`q=N_{\mathrm{CS}}(+\mathrm{\infty })-N_{\mathrm{CS}}(-\mathrm{\infty })`$, and the quantity
$$N_{\mathrm{CS}}(t)=\int _{x_4=t}K^4\,dS_4$$
(28)
is called the Chern–Simons number. The instanton is a transition between two 3-dimensional vacua with $`\mathrm{\Delta }N_{\mathrm{CS}}=1`$.
Analysis of (26) reveals a striking resemblance to this case. By coincidence, here again $`\widehat{A}_4=0`$, (26d). This suggests interpreting $`r`$ as a time coordinate, attributing the Chern–Simons number $`N_{\mathrm{CS}}(r)`$ to the sphere of radius $`r`$. A naive expectation would be that $`\mathrm{\Delta }N_{\mathrm{CS}}=N_{\mathrm{CS}}(r)|_0^{\mathrm{\infty }}`$ gives the topological charge. However this is not true, and $`\mathrm{\Delta }N_{\mathrm{CS}}=\frac{1}{2}`$. The second half of $`q`$ is contributed by the singularities at $`\theta =0,\pi `$. The $`\mathrm{\Omega }`$-transform has affected the distribution of $`N_{\mathrm{CS}}`$.
We conclude that in our approach gauge variant quantities depend on coordinate frame and may be localized at the singularities of the $`\mathrm{\Omega }`$-transform. This may be one more way to simplify calculations with the help of curvilinear coordinates.
## Summary
We have shown that explicit (multi-)instanton solutions can be generalized to curvilinear coordinates. The gauge transformation converts the coordinate-dependent $`\widehat{\eta }_{ab}`$-symbol into the constant $`\widehat{\xi }_{ab}`$. The gauge potential is a sum of the instanton part and the compensating gauge connection, (16).
The compensating gauge connection can be computed in the three steps:
1. One starts from the calculation of the Levi-Civita connection $`\mathrm{\Gamma }_{\beta \gamma }^\alpha `$, (3).
2. Covariant differentiation of the vierbein, (7), leads to the spin connection $`R_\alpha ^{ab}`$.
3. Contraction of the spin connection with the appropriate $`\widehat{\xi }_{ab}`$ gives the compensating gauge potential, (17).
The advantage of our solution is that it is constructed directly from geometrical quantities, i.e. the vierbein and the spin connection. Another attractive feature is the relation between gauge-variant quantities and the coordinate frame.
## Acknowledgments
I would like to acknowledge financial support from the Ministry of Science of the Russian Federation, as well as that of the Organizing Committee, which made possible my participation in the conference. It is a pleasure to thank the Chairman, Professor S. Narison, and the staff for their personal care of the participants.
Dwarf Elliptical Galaxies in the Perseus Cluster
## 1. Overview
Dwarf elliptical galaxies appear in abundance in galaxy clusters, environments with strong gravitational tides. These low-luminosity, low-surface-brightness objects dominate clusters by number, giving clusters very steep luminosity functions. A basic question to ask is: where did these dwarfs come from, and how can they survive in rich clusters? Also, what is the relationship between the formation and evolution of giant galaxies and dwarfs?
To help answer these questions, we obtained images of Perseus, a rich galaxy cluster with a radial velocity of 5500 km/s (D = 75 Mpc with H_0 = 75 km/s/Mpc). Morphological identification of dEs is tricky, and most previous studies pick their dE sample based on colors, typically 1.2 < (B−R) < 1.6. However, given the relative proximity of Perseus, it is possible to reverse this procedure, that is, to determine the dE sample morphologically and then derive physical properties for this morphologically selected sample.
Our major preliminary conclusions for this study are:
- Dwarf candidates in Perseus appear to have a wide range of colors, making any simple, one-time formation scenario unlikely.
- We find that the dEs, including the nucleated dEs (dE,N), are consistently the same color throughout the dwarf. This indicates that the stellar populations are likely homogeneous in these dwarfs.
A Perseus cluster field of 35 kpc × 35 kpc showing the high number of dwarfs in WIYN R-band observations is presented in Figure 1. We are able to identify 85 candidate dEs in our Perseus cluster images. This gives a number density of 1000 dEs/Mpc². The basic properties of the dwarfs in Perseus are as follows: absolute-magnitude range −14 < M_R < −11; colors 0.67 < (B−R) < 2.6; central surface brightness 22 < μ_R < 25 mag arcsec⁻².
The surface brightnesses and magnitudes are standard and agree with other observations of dwarf ellipticals, but the colors of the dEs in Perseus cover a wide range. There is a slight correlation between the absolute magnitude of the brighter dEs in Perseus and the (B−R) color (Figure 2). The line in Figure 2 is the relationship between (B−R) and magnitude found for dEs in the Coma cluster by Secker et al. (1997).
Nucleated dEs are a well-established morphological sub-class, and appear as bright dEs with a concentrated center. These nucleated cores are generally believed to be very massive star clusters, usually about 50 pc in size. The colors of these nucleated cores have typically been found to be indistinguishable from the remainder of the galaxy. We verify this for the Perseus dEs.
What is the origin of these dEs? Following Merritt (1984), galaxies in clusters are subject to long-term tidal forces from the cluster core, which typically contains ∼10^14 M_⊙ within a radius of 200–300 kpc. These should tidally truncate extreme dE dwarfs at R ∼ 3–10 kpc; we find the largest dEs in Perseus and Coma to be about this size. Tides therefore may control dE sizes near cluster cores. Likewise, galaxies on high-eccentricity orbits cross a cluster core in a time comparable to the internal orbital periods of stars in their outer envelopes. These systems may experience tidal shocking, resulting in dynamical heating of the dwarf and possibly removal of mass. Collisions treated in the impact approximation predict episodic heating and some mass loss. Sometimes collisions will occur at low relative velocities; these can be more damaging, and may possibly produce dEs with a range of colors from small disk galaxies.
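The quoted truncation radii can be checked with a quick back-of-envelope estimate. The snippet below is illustrative only and not part of the observational analysis: it assumes the standard Jacobi-radius scaling r_t ≈ R_orb (m/3M)^(1/3), the cluster mass quoted above, and hypothetical dwarf masses of 10^8–10^10 M_⊙.

```python
# Illustrative Jacobi-radius estimate: r_t ~ R_orb * (m_dwarf / (3 M_cl))**(1/3)
R_orb = 250.0                      # kpc; orbital radius within the cluster core region
M_cl = 1e14                        # solar masses enclosed within ~R_orb (quoted above)
for m_dwarf in (1e8, 1e9, 1e10):   # hypothetical dE masses, in solar masses
    r_t = R_orb * (m_dwarf / (3.0 * M_cl)) ** (1.0 / 3.0)
    print(f"m_dwarf = {m_dwarf:.0e} Msun  ->  r_t = {r_t:.1f} kpc")
```

For dwarf masses of 10^9–10^10 M_⊙ the estimate indeed falls in the 3–10 kpc range quoted above.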
## References
Secker, Harris, Plummer, 1997, PASP, 109, 1377
Merritt, D., 1984, ApJ, 276, 26
A novel iterative strategy for protein design
## I Introduction
Since the late 50’s it has been established that the native state of a protein is entirely and uniquely encoded by its amino acid sequence. One of the fundamental issues in molecular biology is understanding the relation between protein sequence and native structure. Remarkably, this relation is not symmetric: while a given sequence folds into a single structure, a given structure can be encoded by several homologous sequences. The problem of predicting the native state of a sequence is commonly known as ”protein folding”. Its solution amounts to minimizing the energy of the given peptidic chain over all possible conformations. The design problem, i.e. finding the sequence(s) that fold into a desired structure, has also been given a simple and rigorous formulation. At the ”physiological” temperature $`\beta _p^{-1}`$, the sequences that correctly design a given target structure, $`\mathrm{\Gamma }_t`$ (their native state), maximize the Boltzmann probability,
$$P(s,\mathrm{\Gamma }_t,\beta _p)=\mathrm{exp}\left\{-\beta _p[H_s(\mathrm{\Gamma }_t)-F_s(\beta _p)]\right\},$$
(1)
where $`s=(s_1,s_2,\mathrm{\dots },s_L)`$ represents the amino acid sequence (amino acid $`s_1`$ at the first position in $`\mathrm{\Gamma }`$, …, $`s_L`$ at the last position in $`\mathrm{\Gamma }`$) and $`H_s`$ is the energy of $`s`$ housed on $`\mathrm{\Gamma }`$. $`F_s`$ in eq. (1) is the conformational free energy of the sequence $`s`$,
$$F_s(\beta )=-\beta ^{-1}\mathrm{ln}\left\{\sum _\mathrm{\Gamma }\mathrm{exp}[-\beta H_s(\mathrm{\Gamma })]\right\},$$
(2)
where the summation is taken over all possible conformations the sequence $`s`$ can assume without violating steric constraints. Maximizing (1) poses some serious technical difficulties since, in principle, it entails an exploration of both sequence and structure space. Some simplifications have been used in the past in order to limit the space of sequences; this is conveniently done by subdividing amino acids into a limited number, $`q\le 20`$, of classes. Some approximation schemes have also been used to reduce the difficulty of calculating $`F_s`$. Reasonable success has been obtained, for example, by postulating a suitable functional dependence of $`F_s`$ on $`s`$. Recently, it has also been argued that, despite the huge number of conformations $`\mathrm{\Gamma }`$, the most significant contribution to (2) comes from the closest competitors of $`\mathrm{\Gamma }_t`$. These are limited in number, since they are among the ones sharing a subset of native contacts with $`\mathrm{\Gamma }_t`$.
In this article we propose a new technique for designing a given structure, $`\mathrm{\Gamma }_t`$, using a minimal set of structures for calculating (2). The technique is simple to implement and does not require constraining the sequence composition and/or searching for solutions with the lowest energy. Several exact tests have been implemented in order to assess the performance of the new method with respect to previously proposed techniques.
## II Theory: the iterative design scheme
In order to discuss the design method we introduce a Hamiltonian function $`H_s(\mathrm{\Gamma })`$ depending only on coarse-grained degrees of freedom. A commonly used form is the one in terms of the contact matrix $`\mathrm{\Delta }_2(\stackrel{}{r}_i,\stackrel{}{r}_j)`$, which is 1 when $`|\stackrel{}{r}_i-\stackrel{}{r}_j|<a`$, with $`a`$ in the range 6–8 Å, and 0 otherwise. Other forms which smoothly interpolate between 0 and 1 are also used in practice. Two amino acids, $`s_i`$ and $`s_j`$, which are in contact contribute to the energy an amount $`ϵ_2(s_i,s_j)`$, a phenomenological symmetric matrix (see e.g. refs. \[\]). Many-body interactions can also be easily included in terms of generalized contact maps $`\mathrm{\Delta }_k(\stackrel{}{r}_{i_1},\mathrm{\dots },\stackrel{}{r}_{i_k})`$ depending only on relative distances and on extra energy parameters $`ϵ_k(s_{i_1},\mathrm{\dots },s_{i_k})`$. Thus the energy can be written as
$$H_s(\mathrm{\Gamma })=\sum _{k\ge 2}\sum _{i_1<i_2<\mathrm{\dots }<i_k}ϵ_k(s_{i_1},\mathrm{\dots },s_{i_k})\mathrm{\Delta }_k(\stackrel{}{r}_{i_1},\mathrm{\dots },\stackrel{}{r}_{i_k}).$$
(3)
Two structures which have the same values for all the $`\mathrm{\Delta }_k`$’s, i.e. the same generalised contact map, will be regarded as identical. This useful coarse-graining procedure neglects the fine-structure fluctuations (e.g. due to thermal excitations) and, for a sequence $`s`$ with native state $`\mathrm{\Gamma }`$, allows one to define its folding temperature $`\beta _F^{-1}`$ such that
$$P(s,\mathrm{\Gamma },\beta )>1/2,$$
(4)
for all $`\beta >\beta _F`$. Conversely, all $`s`$’s satisfying inequality (4) have their unique ground state in $`\mathrm{\Gamma }`$ and folding temperature greater than $`\beta ^{-1}`$.
Based on this observation, the novel strategy for protein design can be formulated in terms of a scheme similar in spirit to the one described in ref. \[\]. The essence of the procedure relies on the fact that the sum in (2) is carried out only over a limited set of structures $`D`$. Initially, $`D`$ contains only the target structure itself and another structure with a different contact map and a similar degree of compactness (chosen at random or with other criteria). Upon iterating the procedure, several design solutions will be identified and stored in a set $`S`$. This set is, of course, initially empty. The steps to be iterated are as follows:
1. An optimization procedure, like simulated annealing, is used to explore sequence space and isolate the sequence $`\overline{s}`$ (not already included in $`S`$), such that
$$\beta [H_{\overline{s}}(\mathrm{\Gamma }_t)-\stackrel{~}{F}_{\overline{s}}(\beta )]<\mathrm{ln}2.$$
(5)
$`\stackrel{~}{F}`$ is calculated approximately by restricting the sum in (2) to the competitive structures held in $`D`$:
$$\stackrel{~}{F}_s(\beta )=-\beta ^{-1}\mathrm{ln}\{\sum _{\mathrm{\Gamma }\in D}\mathrm{exp}[-\beta H_s(\mathrm{\Gamma })]\}.$$
(6)
2. Then the lowest-energy state(s) $`\overline{\mathrm{\Gamma }}`$ of $`\overline{s}`$ are identified and the corresponding energy is compared with that obtained by $`\overline{s}`$ on $`\mathrm{\Gamma }_t`$. By definition, if $`\overline{\mathrm{\Gamma }}\ne \mathrm{\Gamma }_t`$ and $`H(\overline{s},\overline{\mathrm{\Gamma }})\le H(\overline{s},\mathrm{\Gamma }_t)`$, then $`\overline{s}`$ is not a solution to the design problem and $`\overline{\mathrm{\Gamma }}`$ is added to $`D`$. Otherwise, $`\overline{s}`$ is added to the set of known solutions, $`S`$.
The iterative procedure is repeated from step 1. The scheme stops when it is impossible to find sequences satisfying (5) that are not already included in $`S`$, or when a sufficient number of solutions has been retrieved. It is easy to see, using (5) and (6), that in step 2 it can never happen that a newly chosen $`\overline{\mathrm{\Gamma }}\ne \mathrm{\Gamma }_t`$ is already contained in $`D`$. Thus, at each iteration, new information is collected, either in the form of a putative solution (added to $`S`$) or of a new decoy (added to $`D`$).
Notice that, if the exact form of $`F_s`$ were used instead of (6), then the sequences in $`S`$ would have a folding temperature greater than $`\beta ^{-1}`$. However, since approximation (6) leads to systematically overestimating $`F_s(\beta )`$, it is not guaranteed that the selected sequences have folding temperature greater than $`\beta ^{-1}`$. The inequality should however be satisfied to a better extent for larger decoy sets.
The method outlined here is rigorous, and its iterative application allows, in principle, the extraction of all sequences designing a given structure. Its practical implementation may encounter difficulties at step 2, where it is required to find the low-energy conformation(s) of a sequence. Sequences selected at step 1 with a low $`\beta `$ will correspondingly have a high folding temperature and are expected to be good folders. Hence, it is plausible that identifying the corresponding low-energy states is much simpler than solving the general folding problem. In fact, we have gathered numerical evidence showing that the strategy can be stopped as soon as one finds a structure on which the attained energy is lower than on $`\mathrm{\Gamma }_t`$ (even if the true native state has a still lower energy). Notice that it is still necessary to have a folding technique in order to test whether the design procedure is successful. Our iterative scheme is able to use the information from failed attempts in order to improve the design at subsequent iterations.
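To make the bookkeeping of the loop concrete, here is a minimal, self-contained toy implementation (ours, not from the paper): ‘structures’ are random contact maps rather than lattice conformations, the folding step is an exact minimization over this small toy ensemble, and the annealing schedule is deliberately crude. It is meant only to show how decoys and solutions accumulate over the iterations.

```python
import numpy as np

rng = np.random.default_rng(0)
L, Q, NSTRUCT, BETA = 12, 4, 40, 2.0
eps = -rng.random((Q, Q)); eps = (eps + eps.T) / 2   # symmetric attractive eps_2(s_i, s_j)
iu, ju = np.triu_indices(L, k=2)                     # non-local residue pairs

def random_map():                                    # toy structure: a random contact map
    c = np.zeros((L, L), dtype=int)
    while c.sum() < 2 * L:
        i, j = rng.choice(L, 2, replace=False)
        if abs(i - j) > 1:
            c[i, j] = c[j, i] = 1
    return c

structs = [random_map() for _ in range(NSTRUCT)]
target = structs[0]

def energy(s, c):                                    # pairwise energy, eq. (3) with k = 2
    return float((c[iu, ju] * eps[s[iu], s[ju]]).sum())

def F_tilde(s, decoys):                              # restricted free energy, eq. (6)
    en = np.array([energy(s, c) for c in decoys])
    return -np.log(np.exp(-BETA * en).sum()) / BETA

def anneal(decoys, nsweeps=800):                     # step 1: explore sequence space
    s = rng.integers(Q, size=L)
    cost = lambda u: BETA * (energy(u, target) - F_tilde(u, decoys))
    c0 = cost(s)
    for t in range(nsweeps):
        T = max(0.05, 1.0 - t / nsweeps)             # crude linear cooling
        s1 = s.copy(); s1[rng.integers(L)] = rng.integers(Q)
        c1 = cost(s1)
        if c1 < c0 or rng.random() < np.exp((c0 - c1) / T):
            s, c0 = s1, c1
    return s, c0

decoys, solutions = [target, structs[1]], set()
for it in range(30):
    s, c0 = anneal(decoys)
    if c0 >= np.log(2):                              # criterion (5) not met
        continue
    en = [energy(s, c) for c in structs]             # step 2: exact 'folding' on the toy set
    best = int(np.argmin(en))
    if best == 0:
        solutions.add(tuple(int(a) for a in s))      # target is the ground state: a solution
    elif not any(np.array_equal(structs[best], d) for d in decoys):
        decoys.append(structs[best])                 # store the newly found competitor
    print(f"iter {it:2d}: |D| = {len(decoys):2d}  |S| = {len(solutions):2d}")
```

Early iterations typically enlarge D; once the strongest competitors are in the decoy set, almost every further iteration produces a new solution, which is the behaviour reported below for the lattice models.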
## III Methods: implementation and test of the procedure
To carry out a rigorous and exhaustive test of the proposed strategy we have restricted the space of structures by discretizing the positions of the amino acids, $`\stackrel{}{r}_i`$. We choose to follow the common practice of restricting the $`\stackrel{}{r}_i`$’s to occupy the nodes of a cubic lattice. This simplification allows for an exhaustive search of the whole conformation space for chains of a few dozen residues, albeit at the expense of an accurate representation of protein structures, as discussed in ref. \[\]. To mimic the high degree of compactness found in naturally occurring proteins, we first considered all the maximally compact self-avoiding walks of length $`L=27`$ embedded in a $`3\times 3\times 3`$ cube. There are 103346 distinct oriented walks modulo rotations and reflections. This restriction is a good approximation if the interaction energies between amino acids are sufficiently negative, so that compact conformations are favoured over loose ones. Without loss of generality we adopt a Hamiltonian where only pairwise interactions are considered (corresponding to $`k=2`$ in (3)). If the interaction energies are sufficiently attractive it is guaranteed that the native state is compact. Step 2 of the iterative procedure was carried out in two distinct ways. In a first attempt we found the true lowest-energy state of $`s`$ by exhaustive search. In a second attempt we tried to mimic the difficulty of finding the ground state in a realistic context and hence carried out a random partial exploration of the structure space. Although the first method was expected to be more efficient than the second, their performance turned out to be almost identical, as we discuss below.
The four target conformations used to test the procedure are given in Table I and shown in Figs. 1a-d. We used three possible choices for the $`ϵ`$’s. First, we adopted the standard 2-class HP model with $`ϵ_{\mathrm{HH}}=-1-\alpha `$ and $`ϵ_{\mathrm{HP}}=ϵ_{\mathrm{PP}}=-\alpha `$. $`\alpha `$ is a suitable constant ensuring that native conformations are compact. Since all conformations considered here have the same number of contacts, the value of $`\alpha `$ is irrelevant and will be omitted from now on. The second case is a 6-class model whose $`ϵ`$’s are shown in Table II. For the last case we considered the full repertoire of 20 amino acids and used the Miyazawa and Jernigan energy parameters given in Table 3 of ref. \[\]. With the standard HP parameters, structures $`\mathrm{\Gamma }_1`$–$`\mathrm{\Gamma }_4`$ have varying degrees of designability. The latter is defined as the number of sequences admitting them as unique ground states. Hence, the encodability of $`\mathrm{\Gamma }_1`$ and $`\mathrm{\Gamma }_2`$ is poor and average, respectively, while $`\mathrm{\Gamma }_3`$ and $`\mathrm{\Gamma }_4`$ have very large encodability. It was shown that the degree of encodability is mainly a geometrical property of the structure and is not too sensitive to the number of amino acid classes or to the values of the interaction parameters. For this reason we expect the relative encodability of $`\mathrm{\Gamma }_1`$–$`\mathrm{\Gamma }_4`$ to remain different when using all three sets of parameters.
## IV Results and discussion
The “dynamical” performance of the algorithm can be seen in Figs. 2a-c. The plots show the number of solutions retrieved as a function of the number of iterations at a “physiological” temperature equal to $`0.1,10.0`$ and $`0.7`$ for 2, 6 and 20 classes of amino acids, respectively. The different values of the physiological temperature are related to the different energy scales of the interactions.
It can be seen that, after an initial transient, the performance of the method (given by the slope of the curves) is very high. In particular, for a large number of classes, it is nearly equal to 1 for all structures. Table III provides a quantitative summary of the performance of the method. For the HP model (first column of Table III) the method was iterated until it could not find further solutions with (estimated) folding temperature greater than 0.1. For the cases of 6 and 20 classes, a very large number of solutions exist. Hence, we stopped the procedure after 1000 or 500 iterations, depending on the number of classes.
An appealing feature is that the extracted solutions show no bias for sequence composition (see Fig. 2d) or ground-state energy. This can be seen in Fig. 3, where we have plotted the energies of 1000 designed solutions of fixed composition for the 6-class case. Solutions do not exhibit packing around the minimum energy ($`-830`$) and their energy spread is fairly wide (the estimated maximum energy is $`-170`$). Furthermore, for each extracted sequence we also calculated its folding temperature, to compare it with $`1/\beta `$. As we remarked, if all the significant competitors of $`\mathrm{\Gamma }_t`$ were included in $`D`$, then sequences satisfying (5) should have folding temperatures greater than $`1/\beta `$. As shown in the typical plot of Fig. 4, this is almost always the case, ensuring that solutions can be extracted with a desired thermal stability. An alternative measure of the thermal stability, connected to the cooperativity and rapidity of the folding process, is the $`Z_{score}`$. For a sequence $`s`$ designing structure $`\mathrm{\Gamma }`$, the $`Z_{score}`$ is defined as:
$$Z_{score}=\frac{\langle H_s\rangle -H_s(\mathrm{\Gamma })}{\sigma _s},$$
(7)
where $`\langle H_s\rangle `$ is the average energy over the maximally compact conformations and $`\sigma _s`$ is the standard deviation of the energy in this ensemble. Fig. 5 shows a scatter plot of extracted solutions for target structure $`\mathrm{\Gamma }_1`$ in the 20-letter case. It can be seen that there exist solutions with very high $`Z_{score}`$ throughout the displayed energy range. This proves the usefulness of the novel design technique, which has no bias in native-state energy. In fact, it allows one to collect equally good folders with a wide range of native-state energies (and hence very different sequences). This ought to be useful in realistic design contexts, where among all putative design solutions one may wish to retain those with specific amino acids in key protein sites. The ability to select sequences across the whole energy range highlights the efficiency of the technique. In fact, as shown in Fig. 6, away from the lowest-energy edge, the fraction of good sequences over the total number with the same energy is minuscule (note the logarithmic scale). Our method is able to span the whole energy range without being restricted to the sequences of minimal energy, which are a negligible fraction of the total solutions.
Finally, we analysed the degree of mutual similarity between the extracted solutions. For the 6-class case, the sequence similarity between solutions was rather low, around 20%, as can be seen in Fig. 7. This rules out the possibility that the solutions correspond to a few point mutations of a single prototype sequence.
One of the most significant features of the novel design procedure is that the number of structures in $`D`$ used to calculate the approximate free energy (6) can be kept to a negligible fraction of the total structures and yet allow a very efficient design. This is proved even more strikingly by a further test of our design strategy in the whole space of both compact and non-compact conformations. We carried out a design of structure $`\mathrm{\Gamma }_2`$ by using the HP parameters with the constant $`\alpha `$ set to 0. This amounts to allowing non-compact conformations to be native states. Since it is unfeasible to explore this enlarged structure space, step 2 was carried out with a stochastic Monte Carlo process, as described in refs. \[\], which generated dynamically growing low-energy conformations at a suitable fictitious Monte Carlo temperature. The correctness of the putative solutions was checked by using an algorithm known as Constrained Hydrophobic Core Construction (CHCC). The algorithm relies on an efficient pruning of the complete search tree in finding possible low-energy conformations for a sequence. At the heart of the algorithm is the observation that the most energetically convenient arrangement for the hydrophobic monomers is a compact, cubic-like core. This ideal situation may not be reachable for arbitrary sequences, due to frustration effects; these are taken systematically into account by building a compact core with a number of cavities sufficient to expose P singlets (i.e. a P flanked by two H monomers in the sequence) on the surface, which is energetically more effective than burying them in the core. Then, exhaustive search algorithms are used to check the compatibility of a sequence with cores of increasing surface area (i.e. decreasing energy). A detailed description of the method can be found in refs. \[\]. The time required by CHCC to find the ground-state energy of a sequence increases significantly, on average, with the number of H residues. For this reason we limited the search for design solutions to sequences with $`n_H\le 13`$. The solutions, obtained in about one hundred iterations, appear in Table IV. All 23 extracted solutions had $`\mathrm{\Gamma }_2`$ as the unique ground state among the compact structures, and 17 of them retained $`\mathrm{\Gamma }_2`$ as the ground state even when non-compact structures are considered. Given the vastness of the enlarged structure space, this represents a remarkable result.
## V Conclusions
We have presented a novel approach to protein design that encompasses negative design features. Taking the latter into proper account has been shown to be crucial for successful protein design. From a practical point of view this amounts to calculating the conformational free energy of all sequences which are candidate solutions. This computationally intensive task is kept to a minimum in our scheme thanks to the identification of a limited number of structures which are close competitors of the target conformation. The strategy is easy to implement and has been tested on minimalist models. The method appears to be very efficient and reliable for a variety of different sets of amino acid interactions. Contrary to other design techniques, the extracted solutions show no bias in sequence composition or native-state energy, and can be chosen to have a desired thermal stability.
We thank J. Banavar, G. Morra, F. Seno and G. Settanni for discussions and suggestions. We acknowledge support from the Istituto Nazionale di Fisica della Materia.
Controlled quantum evolutions and transitions
Nicola Cufaro Petroni
INFN Sezione di Bari, INFM Unità di Bari and
Dipartimento Interateneo di Fisica dell’Università e del Politecnico di Bari,
via Amendola 173, 70126 Bari (Italy)
CUFARO@BA.INFN.IT
Salvatore De Martino, Silvio De Siena and Fabrizio Illuminati
INFM Unità di Salerno,
INFN Sezione di Napoli - Gruppo collegato di Salerno and
Dipartimento di Fisica dell’Università di Salerno
via S.Allende, 84081 Baronissi, Salerno (Italy)
DEMARTINO@PHYSICS.UNISA.IT, DESIENA@PHYSICS.UNISA.IT
ILLUMINATI@PHYSICS.UNISA.IT
ABSTRACT: We study the non-stationary solutions of Fokker–Planck equations associated to either stationary or non-stationary quantum states. In particular, we discuss the stationary states of quantum systems with singular velocity fields. We introduce a technique that allows arbitrary evolutions ruled by these equations to account for controlled quantum transitions. As a first significant application we present a detailed treatment of the transition probabilities and of the controlling time-dependent potentials associated to the transitions between the stationary, the coherent, and the squeezed states of the harmonic oscillator.
1. Introduction
In a few recent papers the analogy between diffusive classical systems and quantum systems has been reconsidered from the standpoint of the stochastic simulation of quantum mechanics, and particular attention has been devoted there to the evolution of the classical systems associated to a quantum wave function when the conditions imposed by the stochastic variational principle are not satisfied (non-extremal processes). The problem studied in those papers was the convergence of an arbitrary evolving probability distribution, solution of the Fokker–Planck equation, toward a suitable quantum distribution. It was pointed out that, while the correct convergence is achieved for a few quantum examples, these results cannot be considered general, as shown by some counterexamples: in fact, not only for particular non-stationary wave functions (as for a minimal uncertainty packet), but also for stationary states with nodes, one does not recover in a straightforward way the correct quantum asymptotic behaviour. For stationary states with nodes the problem is that the corresponding velocity field to consider in the Fokker–Planck equation shows singularities at the locations of the nodes of the wave function. These singularities effectively separate the available interval of the configurational variables into non-communicating sectors which trap any amount of probability initially assigned to them and make the system non-ergodic.
In a more recent paper it has been shown that for transitive systems with stationary velocity fields (as, for example, a stationary state without nodes) we always have an exponential convergence to the correct quantum probability distribution associated to the extremal process, even if we initially start from an arbitrary non extremal process. These results can also be extended to an arbitrary stationary state if we separately consider the process as confined in every region of the configuration space between two subsequent nodes.
In the same paper it has been further remarked that, while the non-extremal processes should be considered virtual, as the non-extremal trajectories of classical Lagrangian mechanics are, they can become physical, real solutions if we suitably modify the potential in the Schrödinger equation. The interest of this remark lies not so much in the fact that non-extremal processes are exactly what is lacking in quantum mechanics in order to interpret it as a totally classical theory of stochastic processes (for example in order to have a classical picture of a double-slit experiment), but rather in the much more interesting possibility of engineering and controlling physically realizable evolutions of quantum states. This observation would be of great relevance, for instance, to the study and the description of (a) transitions between quantum states, (b) possible models for quantum measurements, and (c) control of the dynamics of quantum-like systems (for instance charged beams in particle accelerators).
In particular, case (c) is being studied in the framework of Nelson stochastic mechanics, which is an independent and self–consistent reformulation of quantum mechanics and can be applied in other areas of physical phenomenology. For instance, it can usefully account for systems not completely described by the quantum formalism, but whose evolution is however strongly influenced by quantum fluctuations, i.e. the so–called mesoscopic or quantum–like systems. This behaviour characterizes, for example, the beam dynamics in particle accelerators, and there is evidence that it can be described by the stochastic formalism of Nelson diffusions, since in these quantum–like systems trajectories and transition probabilities acquire a clear physical meaning, at variance with the case of quantum mechanics.
On the other hand, quantum behaviours can be simulated by means of classical stochastic processes in a by now well defined and established framework. A stochastic variational principle provides a foundation for this, in close analogy with classical mechanics and field theory. In this scheme the deterministic trajectories of classical mechanics are replaced by the random trajectories of diffusion processes in configuration space. The programming equations derived from the stochastic variational principle are formally identical to the equations of the Madelung fluid, the hydrodynamical equivalent of the Schrödinger equation. On this basis, it is possible to develop a model whose phenomenological predictions coincide with those of quantum mechanics for all the experimentally measurable quantities. Within this interpretative code, stochastic mechanics is nothing but a quantization procedure, different from the canonical one only formally, but completely equivalent from the point of view of the physical consequences: a probabilistic simulation of quantum mechanics, providing a bridge between this fundamental physical theory and stochastic differential calculus. However, it is well known that the central objects in the theory of classical stochastic processes, namely the transition probability densities, seldom play any observable role in stochastic mechanics and must be considered as a sort of gauge variable. Several generalizations of Nelson stochastic quantization have recently been proposed to allow for the observability of the transition probabilities: for instance, stochastic mechanics could be modified by means of non-constant diffusion coefficients; alternatively, it has been suggested that the stochastic evolution might be modified during the measurement process.
The aim of the present paper is instead to show how the transition probabilities associated to Nelson diffusion processes can play a very useful role in standard quantum mechanics, in particular with regard to describing and engineering the dynamics of suitably controlled quantum evolutions and transitions. More precisely, we consider the following problem in the theory of quantum control: given an initial probability distribution $`\rho _i`$ associated to an arbitrarily assigned quantum state $`\psi _i`$, we study its time evolution with the drift associated to another arbitrarily assigned quantum state $`\psi _f`$, to determine the controlling time–dependent potential $`V_c(x,t)`$ such that, I) at any instant of time the evolving probability distribution is that associated to the wave function solution of the Schrödinger equation in the potential $`V_c(x,t)`$, and that, II) asymptotically in time the evolving distribution converges to the distribution $`\rho _f`$ associated to $`\psi _f`$.
After introducing the formalism of Nelson stochastic mechanics to describe quantum evolutions in Sections 2 and 3, we provide in Sections 3 and 4 a self–contained review of the Sturm–Liouville problem for the Fokker–Planck equation and the techniques of solution for the Nelson diffusions associated both to nonstationary and stationary quantum states. In Section 5 we discuss in detail the example of the harmonic oscillator, explicitely solving for the transition probability densities of the ground and of the low lying excited states. Sections 6, 7 and 8 are devoted to the study and the solution of the problem outlined above, discussing the potentials associated to the definition of controlled quantum evolution, and modelling transitions. Two explicit examples are studied in detail: the controlled transition between the invariant probability densities associated to the ground and the first excited state of the harmonic oscillator, and the controlled evolution between pairs of coherent or squeezed wave packets. In these cases the problem can be solved completely, yielding the explicit analytic form of the evolving transition probabilities and of the evolving controlling potentials at all times. Finally, in Section 9 we present our conclusions and discuss possible future extensions and applications of the technique introduced in the present paper, with regard to the discussion of anharmonic quantum and quantum–like systems, the role of instabilities in the initial conditions, and the implementation of optimization procedures.
2. Fokker-Planck equations and quantum systems
Here we will recall a few notions of stochastic mechanics in order to fix the notation. The configuration of a classical particle is promoted to a vector Markov process $`\xi (t)`$ taking values in $`𝐑^3`$. This process is characterized by a probability density $`\rho (𝐫,t)`$ and a transition probability density $`p(𝐫,t|𝐫^{\prime },t^{\prime })`$, and its components satisfy an Itô stochastic differential equation of the form
$$d\xi _j(t)=v_{(+)j}(\xi (t),t)dt+d\eta _j(t),$$
$`(2.1)`$
where $`v_{(+)j}`$ are the components of the forward velocity field. However here the fields $`v_{(+)j}`$ are not given a priori, but play the role of dynamical variables and are consequently determined by imposing a specific dynamics. The noise $`\eta (t)`$ is a standard Wiener process independent of $`\xi (t)`$ and such that
$$𝐄_t\left(d\eta _j(t)\right)=0,𝐄_t\left(d\eta _j(t)d\eta _k(t)\right)=2D\delta _{jk}dt,$$
$`(2.2)`$
where $`d\eta (t)=\eta (t+dt)-\eta (t)`$ (for $`dt>0`$), $`D`$ is the diffusion coefficient, and $`𝐄_t`$ are the conditional expectations with respect to $`\xi (t)`$. In what follows, for the sake of notational simplicity, we will limit ourselves to the case of one-dimensional trajectories, but the results that will be obtained can be immediately generalized to any number of dimensions. We will suppose for the time being that the forces are defined by means of purely configurational potentials, possibly time-dependent, $`V(x,t)`$. A suitable definition of the Lagrangian and of the stochastic action functional for the system described by the dynamical variables $`\rho `$ and $`v_{(+)}`$ allows one to select the processes which reproduce the correct quantum dynamics. In fact, while the probability density $`\rho (x,t)`$ satisfies, as usual, the forward Fokker–Planck equation associated to the stochastic differential equation (2.1)
$$\partial _t\rho =D\partial _x^2\rho -\partial _x(v_{(+)}\rho )=\partial _x(D\partial _x\rho -v_{(+)}\rho ),$$
$`(2.3)`$
the following choice for the Lagrangian field
$$L(x,t)=\frac{m}{2}v_{(+)}^2(x,t)+mD\partial _xv_{(+)}(x,t)-V(x,t),$$
$`(2.4)`$
enables one to define a stochastic action functional
$$𝒜=\int _{t_0}^{t_1}𝐄\left[L(\xi (t),t)\right]dt,$$
$`(2.5)`$
which leads, through the stationarity condition $`\delta 𝒜=0`$, to the equation
$$\partial _tS+\frac{(\partial _xS)^2}{2m}+V-2mD^2\frac{\partial _x^2\sqrt{\rho }}{\sqrt{\rho }}=0.$$
$`(2.6)`$
The field $`S(x,t)`$ is defined as
$$S(x,t)=\int _t^{t_1}𝐄\left[L(\xi (s),s)|\xi (t)=x\right]ds+𝐄\left[S_1\left(\xi (t_1)\right)|\xi (t)=x\right],$$
$`(2.7)`$
where $`S_1(\cdot )=S(\cdot ,t_1)`$ is an arbitrary final condition. By introducing the function $`R(x,t)\equiv \sqrt{\rho (x,t)}`$ and the de Broglie Ansatz
$$\psi (x,t)=R(x,t)\mathrm{e}^{iS(x,t)/2mD},$$
$`(2.8)`$
equation (2.6) takes the form
$$\partial _tS+\frac{(\partial _xS)^2}{2m}+V-2mD^2\frac{\partial _x^2R}{R}=0,$$
$`(2.9)`$
and the complex function $`\psi `$ satisfies the Schrödinger–like equation
$$i(2mD)\partial _t\psi =\widehat{H}\psi =-2mD^2\partial _x^2\psi +V\psi .$$
$`(2.10)`$
If the diffusion coefficient is chosen to be
$$D=\frac{\hbar }{2m},$$
$`(2.11)`$
we recover exactly the Schrödinger equation of quantum mechanics. Different choices of $`D`$ allow instead to describe the effective quantum–like dynamics of more general systems.
On the other hand, if we start from the (one–dimensional) Schrödinger equation (2.10) with the de Broglie Ansatz (2.8) and the diffusion coefficient (2.11), separating the real and the imaginary parts as usual in the hydrodynamical formulation, we recover equations (2.3) and (2.6) with $`\rho =R^2=|\psi |^2`$ and the forward velocity field
$$v_{(+)}(x,t)=\frac{1}{m}\partial _xS+\frac{\hbar }{2m}\partial _x(\mathrm{ln}R^2).$$
$`(2.12)`$
3. The Sturm–Liouville problem and the solutions of the Fokker–Planck equation
Let us recall here a few generalities about the techniques of solution of the Fokker–Planck equation with $`D`$ and $`v_{(+)}`$ two time-independent, continuous and differentiable functions defined for $`x\in [a,b]`$ and $`t\ge t_0`$, such that $`D(x)>0`$ and $`v_{(+)}(x)`$ has no singularities in $`(a,b)`$. The Fokker–Planck equation then reads
$$\partial _t\rho =\partial _x^2(D\rho )-\partial _x(v_{(+)}\rho )=\partial _x\left[\partial _x(D\rho )-v_{(+)}\rho \right].$$
$`(3.1)`$
The conditions imposed on the probabilistic solutions are of course
$$\begin{array}{cc}\hfill \rho (x,t)\ge 0,& a<x<b,\ t_0\le t,\hfill \\ \hfill \int _a^b\rho (x,t)\,dx=1,& t_0\le t,\hfill \end{array}$$
$`(3.2)`$
and from the form of (3.1) the second condition takes the form
$$\left[\partial _x(D\rho )-v_{(+)}\rho \right]_{a,b}=0,\qquad t_0\le t.$$
$`(3.3)`$
Suitable initial conditions will be added to produce the required evolution: for example the transition probability density $`p(x,t|x_0,t_0)`$ will be selected by the initial condition
$$\underset{t\to t_0^+}{lim}\rho (x,t)=\rho _{in}(x)=\delta (x-x_0).$$
$`(3.4)`$
It is also possible to show by direct calculation that
$$h(x)=N^{-1}\mathrm{e}^{{\scriptscriptstyle -\int [D^{\prime }(x)-v_{(+)}(x)]/D(x)\,dx}},\qquad \left(N=\int _a^b\mathrm{e}^{{\scriptscriptstyle -\int [D^{\prime }(x)-v_{(+)}(x)]/D(x)\,dx}}\,dx\right)$$
$`(3.5)`$
is always an invariant (time-independent) solution of (3.1) satisfying the conditions (3.2) (here the prime symbol denotes differentiation). One should observe however that relation (3.1) is not in the standard self–adjoint form; this fact notwithstanding, if we define the new function $`g(x,t)`$ by means of
$$\rho (x,t)=\sqrt{h(x)}g(x,t),$$
$`(3.6)`$
it is easy to show that $`g(x,t)`$ obeys an equation of the form
$$\partial _tg=\mathcal{L}g,$$
$`(3.7)`$
where the operator $`\mathcal{L}`$, acting on positive normalizable functions $`\phi (x)`$ and defined by
$$\mathcal{L}\phi =\frac{\mathrm{d}}{\mathrm{d}x}\left[r(x)\frac{\mathrm{d}\phi (x)}{\mathrm{d}x}\right]-q(x)\phi (x),$$
$`(3.8)`$
with
$$\begin{array}{cc}\hfill r(x)& =D(x)>0,\hfill \\ \hfill q(x)& =\frac{\left[D^{\prime }(x)-v_{(+)}(x)\right]^2}{4D(x)}-\frac{\left[D^{\prime }(x)-v_{(+)}(x)\right]^{\prime }}{2},\hfill \end{array}$$
$`(3.9)`$
is now self–adjoint. By separating the variables by means of $`g(x,t)=\gamma (t)G(x)`$ we have $`\gamma (t)=\mathrm{e}^{-\lambda t}`$, while $`G`$ must be a solution of a typical Sturm-Liouville problem associated to the equation
$$\mathrm{L}G(x)+\lambda G(x)=0,$$
$`(3.10)`$
with the boundary conditions
$$\begin{array}{cc}& \left[D^{\prime }(a)-v_{(+)}(a)\right]G(a)+2D(a)G^{\prime }(a)=0,\hfill \\ & \left[D^{\prime }(b)-v_{(+)}(b)\right]G(b)+2D(b)G^{\prime }(b)=0.\hfill \end{array}$$
$`(3.11)`$
It is easy to see that $`\lambda =0`$ is always an eigenvalue for the problem (3.10) with (3.11), and that the corresponding eigenfunction is $`\sqrt{h(x)}`$ as defined from (3.5).
For the differential problem (3.10) with (3.11), the simple eigenvalues $`\lambda _n`$ constitute an infinite, monotonically increasing sequence, and the corresponding eigenfunctions $`G_n(x)`$ have $`n`$ simple zeros in $`(a,b)`$. This means that $`\lambda _0=0`$, corresponding to the eigenfunction $`G_0(x)=\sqrt{h(x)}`$ which never vanishes in $`(a,b)`$, is the lowest eigenvalue, and that all the other eigenvalues are strictly positive. Moreover the eigenfunctions form a complete orthonormal set in $`L^2\left([a,b]\right)`$. As a consequence, the general solution of equation (3.1) satisfying the conditions (3.2) will have the form
$$\rho (x,t)=\sum _{n=0}^{\infty }c_n\mathrm{e}^{-\lambda _nt}\sqrt{h(x)}\,G_n(x),$$
$`(3.12)`$
with $`c_0=1`$ for normalization (remember that $`\lambda _0=0`$). The coefficients $`c_n`$ for a particular solution are selected by an initial condition
$$\rho (x,t_0^+)=\rho _{in}(x),$$
$`(3.13)`$
and are then calculated from the orthonormality relations as
$$c_n=\int _a^b\rho _{in}(x)\frac{G_n(x)}{\sqrt{h(x)}}\,dx.$$
$`(3.14)`$
In particular for the transition probability density we have from (3.4) that
$$c_n=\frac{G_n(x_0)}{\sqrt{h(x_0)}}.$$
$`(3.15)`$
Since $`\lambda _0=0`$ and $`\lambda _n>0`$ for $`n\geq 1`$, the general solution (3.12) of (3.1) has a precise time evolution: all the exponential factors in (3.12) vanish as $`t\to +\infty `$, with the only exception of the constant $`n=0`$ term, so that exponentially fast we will always have
$$\lim _{t\to +\infty }\rho (x,t)=c_0\sqrt{h(x)}G_0(x)=h(x).$$
$`(3.16)`$
Therefore the general solution always relaxes in time toward the invariant solution $`h(x)`$. As a consequence, the eigenvalues $`\lambda _n`$ which solve the Sturm–Liouville problem define the physical time scales of the decay. By the structure of equations (3.7)–(3.11) we see that tuning the physical parameters that enter the diffusion coefficient and the forward velocity field yields different sets of eigenvalues, and hence different sets of time scales. The rate of convergence can therefore be tuned to give fast decay, slow decay, or even, on proper observational scales, quasi–metastable behaviours, according to what kind of physical evolution between quantum states one wants to realize. This point will be further discussed and elucidated in Section 6.
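This spectral picture is easy to probe numerically. The following sketch (our own minimal finite–difference construction, not taken from the text) discretizes the Fokker–Planck generator for the illustrative choice of constant $`D`$ and $`v_{(+)}(x)=-\omega x`$ — the Ornstein–Uhlenbeck case met in Section 5 — and recovers the expected ladder of decay rates $`\lambda _n\approx n\omega `$.

```python
import numpy as np

omega, D = 1.0, 0.5                 # illustrative values (our assumption)
L, N = 8.0, 801                     # domain [-L, L], number of grid points
x = np.linspace(-L, L, N)
dx = x[1] - x[0]

# d_t rho = D rho'' - (v+ rho)' with v+ = -omega x, i.e.
#         = D rho'' + omega x rho' + omega rho
# (central differences; the density is negligible at the far boundaries)
A = (np.diag((-2.0 * D / dx**2 + omega) * np.ones(N))
     + np.diag(D / dx**2 + omega * x[:-1] / (2.0 * dx), 1)
     + np.diag(D / dx**2 - omega * x[1:] / (2.0 * dx), -1))

# Decay rates lambda_n are minus the (real) eigenvalues of the generator
lam = np.sort(-np.linalg.eigvals(A).real)
print(lam[:5])                      # expect approximately 0, 1, 2, 3, 4 times omega
```

The hard cutoff at $`\pm L`$ stands in for the reflecting boundary conditions; it is harmless here because the invariant density is exponentially small at the edges.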
4. Processes associated to stationary quantum states
Let us now consider a Schrödinger equation (2.10) with a time–independent potential $`V(x)`$ which gives rise to a purely discrete spectrum and bound, normalizable states. Let us introduce the following notations for the stationary states, eigenvalues and eigenfunctions:
$$\begin{array}{cc}\hfill \psi _n(x,t)& =\varphi _n(x)\mathrm{e}^{-iE_nt/\hbar },\hfill \\ \hfill \widehat{H}\varphi _n& =-\frac{\hbar ^2}{2m}\varphi _n^{\prime \prime }+V\varphi _n=E_n\varphi _n.\hfill \end{array}$$
$`(4.1)`$
Taking into account relation (2.11), the previous eigenvalue equation can also be recast in the following form
$$D\varphi _n^{\prime \prime }=\frac{V-E_n}{\hbar }\varphi _n.$$
$`(4.2)`$
For these stationary states the probability densities are the time–independent, real functions
$$\rho _n(x)=|\psi _n(x,t)|^2=\varphi _n^2(x),$$
$`(4.3)`$
while the phase and the amplitude of $`\psi _n`$ are from (2.8)
$$S_n(x,t)=-E_nt,\qquad R_n(x,t)=\varphi _n(x),$$
$`(4.4)`$
so that the associated velocity fields are from (2.12)
$$v_{(+)n}(x)=2D\frac{\varphi _n^{}(x)}{\varphi _n(x)}.$$
$`(4.5)`$
Each $`v_{(+)n}`$ is time–independent and presents singularities at the nodes of the associated eigenfunction. Since the $`n`$–th eigenfunction of a quantum system with bound states has exactly $`n`$ simple nodes $`x_1,\dots ,x_n`$, the coefficients of the Fokker–Planck equation (2.3) are not defined at these $`n`$ points, and it is necessary to solve it in separate intervals by imposing the correct boundary conditions connecting the different sectors. In fact these singularities effectively separate the real axis into $`n+1`$ sub–intervals with walls impenetrable to the probability current. Hence the process will not have a unique invariant measure and will never cross the boundaries fixed by the singularities of $`v_{(+)}(x)`$: if the process starts in one of the sub–intervals, it will always remain there.
As a consequence, the normalization integral (3.2) (with $`a=-\infty `$ and $`b=+\infty `$) is the sum of $`n+1`$ integrals over the sub–intervals $`[x_k,x_{k+1}]`$ with $`k=0,1,\dots ,n`$ (where we understand, to unify the notation, that $`x_0=-\infty `$ and $`x_{n+1}=+\infty `$). Hence for $`n\geq 1`$ equation (2.3) must be restricted to each interval $`[x_k,x_{k+1}]`$ with the integrals
$$\int _{x_k}^{x_{k+1}}\rho (x,t)\,dx,$$
$`(4.6)`$
constrained to a constant value for $`t\geq t_0`$. This constant is not, in general, equal to one (only the sum of these $`n+1`$ integrals amounts to one) and, since the separate intervals cannot communicate, it will be fixed by the choice of the initial conditions. Therefore, due to the singularities appearing in the forward velocity fields $`v_{(+)n}`$ for $`n\geq 1`$, we deal with a Fokker–Planck problem with barriers. The boundary conditions associated to (2.3) then require the conservation of probability in each sub–interval $`[x_k,x_{k+1}]`$, i.e. the vanishing of the probability current at the end points of the interval:
$$\left[D\partial _x\rho -v_{(+)}\rho \right]_{x_k,x_{k+1}}=0,\qquad t\geq t_0.$$
$`(4.7)`$
To obtain a particular solution one must specify the initial conditions. In particular, we are interested in the transition probability density $`p(x,t|x_0,t_0)`$, which is singled out by the initial condition (3.4), because the asymptotic convergence in $`L^1`$ of the solutions of equation (2.3) is ruled by the asymptotic behaviour of $`p(x,t|x_0,t_0)`$ through the Chapman–Kolmogorov equation
$$\rho (x,t)=\int _{-\infty }^{+\infty }p(x,t|y,t_0)\rho (y,t_0^+)\,dy.$$
$`(4.8)`$
It is clear at this point that in every interval $`[x_k,x_{k+1}]`$ (whether finite or infinite) we can write the solution of equation (2.3) along the guidelines sketched in Sect. 3. We must only keep in mind that in $`[x_k,x_{k+1}]`$ we already know the invariant, time–independent solution $`\varphi _n^2(x)`$, which is never zero inside the interval itself, with the exception of the end points $`x_k`$ and $`x_{k+1}`$. Hence, as we have seen in the general case, with the substitution
$$\rho (x,t)=\varphi _n(x)g(x,t),$$
$`(4.9)`$
we can reduce (2.3) to the form
$$\partial _tg=\mathrm{L}_ng,$$
$`(4.10)`$
where $`\mathrm{L}_n`$ is now the self–adjoint operator defined on $`[x_k,x_{k+1}]`$ by
$$\mathrm{L}_n\phi (x)=\frac{\mathrm{d}}{\mathrm{d}x}\left[r(x)\frac{\mathrm{d}\phi (x)}{\mathrm{d}x}\right]-q_n(x)\phi (x),$$
$`(4.11)`$
with
$$r(x)=D>0;\qquad q_n(x)=\frac{v_{(+)n}^2(x)}{4D}+\frac{v_{(+)n}^{\prime }(x)}{2}.$$
$`(4.12)`$
Equation (4.10) is solved by separating the variables, so that we immediately have $`\gamma (t)=\mathrm{e}^{-\lambda t}`$, while the spatial part $`G(x)`$ of $`g`$ must be a solution of
$$\mathrm{L}_nG(x)+\lambda G(x)=0,$$
$`(4.13)`$
with the boundary conditions
$$\left[2DG^{\prime }(x)-v_{(+)n}(x)G(x)\right]_{x_k,x_{k+1}}=0.$$
$`(4.14)`$
The general behaviour of the solutions of this Sturm–Liouville problem, obtained as expansions in the system of the eigenfunctions of (4.13), has already been discussed in Section 3. In particular we deduce from (3.12) that for the stationary quantum states (more precisely, in every subinterval defined by two subsequent nodes) all the solutions of (2.3) always converge in time toward the correct quantum solution $`|\varphi _n|^2`$. As a further consequence, any quantum solution $`\varphi _n^2`$ defined on the entire interval $`(-\infty ,+\infty )`$ will be stable under deviations from its initial condition.
5. An explicit example: the harmonic oscillator
To provide an explicit evolution of the probability and the transition probability densities of stochastic mechanics, we consider in detail the example of a harmonic oscillator associated to the potential
$$V(x)=\frac{m}{2}\omega ^2x^2,$$
$`(5.1)`$
with energy eigenvalues
$$E_n=\hbar \omega \left(n+\frac{1}{2}\right),\qquad n=0,1,2,\dots .$$
$`(5.2)`$
Introducing the notation
$$\sigma _0^2=\frac{\hbar }{2m\omega },$$
$`(5.3)`$
the time–independent part of the eigenfunctions (4.1) reads
$$\varphi _n(x)=\frac{1}{\sqrt{\sigma _0\sqrt{2\pi }2^nn!}}\mathrm{e}^{-x^2/4\sigma _0^2}H_n\left(\frac{x}{\sigma _0\sqrt{2}}\right),$$
$`(5.4)`$
where $`H_n`$ are the Hermite polynomials. The corresponding forward velocity fields for the lowest lying levels are:
$$\begin{array}{cc}\hfill v_{(+)0}(x)& =-\omega x,\hfill \\ \hfill v_{(+)1}(x)& =2\frac{\omega \sigma _0^2}{x}-\omega x,\hfill \\ \hfill v_{(+)2}(x)& =4\omega \sigma _0^2\frac{x}{x^2-\sigma _0^2}-\omega x,\hfill \end{array}$$
$`(5.5)`$
with singularities at the zeros of the Hermite polynomials. When $`n=0`$, equation (2.3) takes the form
$$\partial _t\rho =\omega \sigma _0^2\partial _x^2\rho +\omega x\partial _x\rho +\omega \rho ,$$
$`(5.6)`$
and the fundamental solution turns out to be the Ornstein–Uhlenbeck transition probability density
$$p_0(x,t|x_0,t_0)=\frac{1}{\sigma (t)\sqrt{2\pi }}\mathrm{e}^{-[x-\alpha (t)]^2/2\sigma ^2(t)},\qquad (t\geq t_0),$$
$`(5.7)`$
where we have introduced the notation
$$\alpha (t)=x_0\mathrm{e}^{-\omega (t-t_0)},\qquad \sigma ^2(t)=\sigma _0^2\left[1-\mathrm{e}^{-2\omega (t-t_0)}\right],\qquad (t\geq t_0).$$
$`(5.8)`$
The stationary Markov process associated to the transition probability density (5.7) is selected by the initial, invariant probability density
$$\rho _0(x)=\frac{1}{\sigma _0\sqrt{2\pi }}\mathrm{e}^{-x^2/2\sigma _0^2},$$
$`(5.9)`$
which is also the asymptotic probability density for every other initial condition when the evolution is ruled by equation (5.6) (see ), so that the invariant distribution also plays the role of the limit distribution. Since this invariant probability density coincides with the quantum one $`\varphi _0^2=|\psi _0|^2`$, the process associated by stochastic mechanics to the ground state of the harmonic oscillator is nothing but the stationary Ornstein–Uhlenbeck process.
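This identification can be checked by direct simulation. A minimal sketch (ours; the values of $`\omega `$, $`D`$ and the integration step are illustrative) integrates the forward stochastic equation $`dX=v_{(+)}(X)dt+\sqrt{2D}\,dW`$, whose Fokker–Planck equation is (2.3) with constant $`D`$, and compares the long–time sample variance with the quantum value $`\sigma _0^2=D/\omega `$.

```python
import numpy as np

rng = np.random.default_rng(0)
omega, D = 1.0, 0.5                 # illustrative values (our assumption)
dt, n_steps, n_paths = 1e-3, 20_000, 5_000

# Euler-Maruyama for dX = -omega X dt + sqrt(2D) dW, all paths starting at x = 0
X = np.zeros(n_paths)
kick = np.sqrt(2.0 * D * dt)
for _ in range(n_steps):
    X += -omega * X * dt + kick * rng.standard_normal(n_paths)

# Stationary (= quantum ground-state) density is N(0, sigma0^2), sigma0^2 = D/omega
print(X.var(), D / omega)           # the two numbers should agree to a percent or so
```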
For $`n\geq 1`$ the solutions of (2.3) are determined in the following way. As discussed in the previous section, one has to solve the eigenvalue problem (4.13), which can now be written as
$$-\frac{\hbar ^2}{2m}G^{\prime \prime }(x)+\left(\frac{m}{2}\omega ^2x^2-\hbar \omega \frac{2n+1}{2}\right)G(x)=\hbar \lambda G(x),$$
$`(5.10)`$
in every interval $`[x_k,x_{k+1}]`$ between two subsequent singularities of the forward velocity fields $`v_{(+)n}`$. The boundary conditions at the end points of these intervals, deduced from (4.14) through (4.5), are
$$[\varphi _nG^{\prime }-\varphi _n^{\prime }G]_{x_k,x_{k+1}}=0.$$
$`(5.11)`$
Recalling that $`\varphi _n`$ (but not $`\varphi _n^{\prime }`$) vanishes at $`x_k,x_{k+1}`$, the conditions to impose are
$$G(x_k)=G(x_{k+1})=0,$$
$`(5.12)`$
where it is understood that for $`x_0`$ and $`x_{n+1}`$ we mean, respectively
$$\lim _{x\to -\infty }G(x)=0,\qquad \lim _{x\to +\infty }G(x)=0.$$
$`(5.13)`$
It is also useful at this point to state the eigenvalue problem in adimensional form by using the reduced eigenvalue $`\mu =\lambda /\omega `$, and the adimensional variable $`x/\sigma _0`$ which, by a slight abuse of notation, will be still denoted by $`x`$. In this way the equation (5.10) with the conditions (5.12) becomes
$$\begin{array}{cc}\hfill y^{\prime \prime }(x)-\left(\frac{x^2}{4}-\frac{2n+1}{2}-\mu \right)y(x)& =0,\hfill \\ \hfill y(x_k)=y(x_{k+1})& =0.\hfill \end{array}$$
$`(5.14)`$
If $`\mu _m`$ and $`y_m(x)`$ are the eigenvalues and eigenfunctions of (5.14), the general solution of the corresponding Fokker–Planck equation (2.3) will be (reverting to dimensional variables)
$$\rho (x,t)=\sum _{m=0}^{\infty }c_m\mathrm{e}^{-\mu _m\omega t}\varphi _n(x)y_m\left(\frac{x}{\sigma _0}\right).$$
$`(5.15)`$
The values of the coefficients $`c_m`$ will be fixed by the initial conditions and by the obvious requirements that $`\rho (x,t)`$ must be non negative and normalized during the whole time evolution. Two linearly independent solutions of (5.14) are
$$y^{(1)}=\mathrm{e}^{-x^2/4}M\left(-\frac{\mu +n}{2},\frac{1}{2};\frac{x^2}{2}\right),\qquad y^{(2)}=x\mathrm{e}^{-x^2/4}M\left(-\frac{\mu +n-1}{2},\frac{3}{2};\frac{x^2}{2}\right),$$
$`(5.16)`$
where $`M(a,b;z)`$ are the confluent hypergeometric functions. The complete specification of the solutions obviously requires the knowledge of all the eigenvalues $`\mu _m`$.
We consider first the instance $`n=1`$ ($`x_0=-\infty `$, $`x_1=0`$ and $`x_2=+\infty `$), which can be completely solved. In this case equation (5.14) must be solved separately for $`x\leq 0`$ and for $`x\geq 0`$ with the boundary conditions $`y(0)=0`$ and
$$\lim _{x\to -\infty }y(x)=\lim _{x\to +\infty }y(x)=0.$$
$`(5.17)`$
A long calculation shows that the transition probability density is now
$$p(x,t|x_0,t_0)=\frac{x}{\alpha (t)}\frac{\mathrm{e}^{-[x-\alpha (t)]^2/2\sigma ^2(t)}-\mathrm{e}^{-[x+\alpha (t)]^2/2\sigma ^2(t)}}{\sigma (t)\sqrt{2\pi }},$$
$`(5.18)`$
where $`\alpha (t)`$ and $`\sigma ^2(t)`$ are defined in (5.8). It must be remarked, however, that (5.18) must be considered as restricted to $`x\geq 0`$ when $`x_0>0`$ and to $`x\leq 0`$ when $`x_0<0`$, and that only on these intervals is it suitably normalized. In order to take both these possibilities into account we can introduce the Heaviside function $`\mathrm{\Theta }(x)`$, so that for every $`x_0\neq 0`$ we will have
$$p_1(x,t|x_0,t_0)=\mathrm{\Theta }(xx_0)\frac{x}{\alpha (t)}\frac{\mathrm{e}^{-[x-\alpha (t)]^2/2\sigma ^2(t)}-\mathrm{e}^{-[x+\alpha (t)]^2/2\sigma ^2(t)}}{\sigma (t)\sqrt{2\pi }}.$$
$`(5.19)`$
From equation (4.8) we can deduce the evolution of every other initial probability density. In particular it can be shown that, with $`\rho _1(x)=\varphi _1^2(x)`$,
$$\lim _{t\to +\infty }p_1(x,t|x_0,t_0)=2\mathrm{\Theta }(xx_0)\frac{x^2\mathrm{e}^{-x^2/2\sigma _0^2}}{\sigma _0^3\sqrt{2\pi }}=2\mathrm{\Theta }(xx_0)\rho _1(x).$$
$`(5.20)`$
Hence, if $`\rho (x,t_0^+)=\rho _{in}(x)`$ is the initial probability density, we have for $`t>t_0`$
$$\begin{array}{cc}\hfill \lim _{t\to +\infty }\rho (x,t)& =\lim _{t\to +\infty }\int _{-\infty }^{+\infty }p(x,t|y,t_0)\rho _{in}(y)\,dy\hfill \\ & =2\varphi _1^2(x)\int _{-\infty }^{+\infty }\mathrm{\Theta }(xy)\rho _{in}(y)\,dy=\mathrm{\Gamma }(ϵ;x)\rho _1(x),\hfill \end{array}$$
$`(5.21)`$
where we have defined the function
$$\mathrm{\Gamma }(ϵ;x)=ϵ\mathrm{\Theta }(x)+(2-ϵ)\mathrm{\Theta }(-x);\qquad ϵ=2\int _0^{+\infty }\rho _{in}(y)\,dy.$$
$`(5.22)`$
When $`ϵ=1`$ (a symmetric initial probability, equally shared between the two real semi–axes) we have $`\mathrm{\Gamma }(1;x)=1`$ and the asymptotic probability density coincides with the quantum stationary density $`\rho _1(x)=\varphi _1^2(x)`$. If on the other hand $`ϵ\neq 1`$, the asymptotic probability density has the same shape as $`\varphi _1^2(x)`$ but with different weights on the two semiaxes.
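Both the normalization of (5.19) on the half–line containing $`x_0`$ and the limit (5.20) can be verified numerically; a minimal sketch (ours, in illustrative units $`\omega =\sigma _0=1`$):

```python
import numpy as np
from scipy.integrate import quad

omega, s0 = 1.0, 1.0                       # illustrative units (our assumption)

def p1(x, t, x0):
    # Eq. (5.19) written out for x0 > 0, so the process lives on x >= 0
    a = x0 * np.exp(-omega * t)
    s2 = s0**2 * (1.0 - np.exp(-2.0 * omega * t))
    g = lambda u: np.exp(-u**2 / (2.0 * s2))
    return (x / a) * (g(x - a) - g(x + a)) / np.sqrt(2.0 * np.pi * s2)

x0 = 0.7
for t in (0.1, 1.0, 10.0):
    print(t, quad(lambda x: p1(x, t, x0), 0.0, np.inf)[0])   # stays ~1 at all t

# Late-time limit, eq. (5.20): p1 -> 2 rho_1 on the positive semiaxis
rho1 = lambda x: x**2 * np.exp(-x**2 / (2 * s0**2)) / (s0**3 * np.sqrt(2 * np.pi))
print(p1(1.3, 10.0, x0), 2.0 * rho1(1.3))                    # should agree closely
```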
If we consider the higher excited states, the Sturm–Liouville problem (5.14) must be solved numerically in each sub–interval. For instance, in the case $`n=2`$ we have $`x_0=-\infty `$, $`x_1=-1`$, $`x_2=1`$ and $`x_3=+\infty `$. Considering in particular the sub–interval $`[-1,1]`$, it can be shown that beyond $`\mu _0=0`$ the first few eigenvalues are determined as the first possible values such that
$$M\left(-\frac{\mu +1}{2},\frac{3}{2};\frac{1}{2}\right)=0.$$
$`(5.23)`$
This gives $`\mu _1\approx 7.44`$, $`\mu _2\approx 37.06`$, $`\mu _3\approx 86.41`$.
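These eigenvalues are immediately reproducible with standard special–function libraries; a minimal sketch (ours) scans condition (5.23) for sign changes and refines the roots:

```python
import numpy as np
from scipy.optimize import brentq
from scipy.special import hyp1f1

# Eq. (5.23): the eigenvalues mu beyond mu_0 = 0 solve M(-(mu+1)/2, 3/2; 1/2) = 0
def f(mu):
    return hyp1f1(-(mu + 1.0) / 2.0, 1.5, 0.5)

grid = np.linspace(0.1, 100.0, 4001)
roots = [brentq(f, a, b) for a, b in zip(grid[:-1], grid[1:])
         if np.sign(f(a)) != np.sign(f(b))]
print(roots)        # should reproduce ~7.44, ~37.06, ~86.41
```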
6. Controlled evolutions
In this Section we move on to implement the program outlined in the introduction, that is, to exploit the transition probabilities of Nelson stochastic mechanics to model controlled quantum evolutions toward arbitrarily assigned final quantum states. We start by observing that to every solution $`\rho (x,t)`$ of the Fokker–Planck equation (3.1), with a given $`v_{(+)}(x,t)`$ and constant diffusion coefficient (2.11), we can always associate the wave function of a quantum system. To this end, it is sufficient to introduce a suitable time–dependent potential.
Let us take a solution $`\rho (x,t)`$ of the Fokker–Planck equation (3.1), with a given $`v_{(+)}(x,t)`$ and a constant diffusion coefficient $`D`$: introduce the functions $`R(x,t)`$ and $`W(x,t)`$ from
$$\rho (x,t)=R^2(x,t),\qquad v_{(+)}(x,t)=\partial _xW(x,t),$$
$`(6.1)`$
and recall from (2.12) that the relation
$$mv_{(+)}=\partial _xS+\hbar \frac{\partial _xR}{R}=\partial _xS+\frac{\hbar }{2}\frac{\partial _x\rho }{\rho }=\partial _x\left(S+\frac{\hbar }{2}\mathrm{ln}\stackrel{~}{\rho }\right)$$
$`(6.2)`$
must hold, where $`\stackrel{~}{\rho }`$ is an adimensional function (argument of a logarithm) obtained from the probability density $`\rho `$ by means of a suitable and arbitrary dimensional multiplicative constant. If we now impose that the function $`S(x,t)`$ must be the phase of a wave function as in (2.8), we immediately obtain from (6.1) and (6.2) the equation
$$S(x,t)=mW(x,t)-\frac{\hbar }{2}\mathrm{ln}\stackrel{~}{\rho }(x,t)-\theta (t),$$
$`(6.3)`$
which allows one to determine $`S`$ from $`\rho `$ and $`v_{(+)}`$ (namely $`W`$) up to an arbitrary additive function of time $`\theta (t)`$. However, in order that the wave function (2.8) with $`R`$ and $`S`$ given above be a solution of a Schrödinger problem in quantum mechanics, we must also make sure that the Hamilton–Jacobi–Madelung equation (2.9) is satisfied. Since $`S`$ and $`R`$ are now fixed, equation (2.9) must be considered as a relation (constraint) defining the controlling potential $`V_c`$, which, after straightforward calculations, yields
$$V_c(x,t)=\frac{\hbar ^2}{4m}\partial _x^2\mathrm{ln}\stackrel{~}{\rho }+\frac{\hbar }{2}\left(\partial _t\mathrm{ln}\stackrel{~}{\rho }+v_{(+)}\partial _x\mathrm{ln}\stackrel{~}{\rho }\right)-\frac{mv_{(+)}^2}{2}-m\partial _tW+\dot{\theta }.$$
$`(6.4)`$
Of course, if we start with a quantum wave function $`\psi (x,t)`$ associated to a given time–independent potential $`V(x)`$, and we select as a solution of (2.3) exactly $`\rho =|\psi |^2`$, then formula (6.4) always yields back the given potential, as it should. This can be seen explicitly (to become familiar with this kind of approach) in the examples of the ground state and the first excited state of the harmonic oscillator potential (5.1), by choosing in equation (6.4) respectively $`\theta (t)=\hbar \omega t/2`$ and $`\theta (t)=3\hbar \omega t/2`$, which amounts to suitably fixing the zero of the potential energy.
On the other hand the nonstationary fundamental solution (5.7) associated to the velocity field $`v_{(+)0}(x)`$ of (5.5) for the case $`n=0`$ (we put $`t_0=0`$ to simplify the notation) does not correspond to a quantum wave function of the harmonic oscillator whatsoever. However it is easy to show that, by choosing
$$\dot{\theta }(t)=\frac{\hbar \omega }{2}\left(\frac{2\sigma _0^2}{\sigma ^2(t)}-1\right)=\frac{\hbar \omega }{2}\frac{1}{\mathrm{tanh}\omega t}\to \frac{\hbar \omega }{2},\qquad (t\to +\infty ),$$
$`(6.5)`$
and the time–dependent controlling potential
$$V_c(x,t)=\frac{\hbar \omega }{2}\left[\frac{x-\alpha (t)}{\sigma (t)}\right]^2\frac{\sigma _0^2}{\sigma ^2(t)}-\frac{m\omega ^2x^2}{2}\to \frac{m\omega ^2x^2}{2},\qquad (t\to +\infty ),$$
$`(6.6)`$
with $`\alpha (t)`$, $`\sigma (t)`$ and $`\sigma _0`$ defined in equations (5.8) and (5.3), we can define a quantum state, i.e. a wave function $`\psi _c(x,t)`$, solution of the Schrödinger equation in the potential (6.6). At the same time $`\psi _c`$ is associated to the transition probability density (5.7), which is its modulus squared. Of course the fact that for $`t\to +\infty `$ we recover the harmonic potential is connected to the already remarked fact that the usual quantum probability density is also the limit distribution for every initial condition, and in particular for the evolution (5.7). In the case $`n=1`$, with $`v_{(+)1}(x)`$ as given by equation (5.5) and the transition probability density (5.19), we define
$$T(x)=\frac{x}{\mathrm{tanh}x},$$
$`(6.7)`$
and then we choose
$$\dot{\theta }(t)=\frac{\hbar \omega }{2}\left(\frac{4\sigma _0^2}{\sigma ^2(t)}-\frac{2\sigma _0^2\alpha ^2(t)}{\sigma ^4(t)}-1\right)\to \frac{3}{2}\hbar \omega ,\qquad (t\to +\infty ),$$
$`(6.8)`$
so that we have the following time–dependent controlling potential (for every $`x0`$):
$$\begin{array}{cc}\hfill V_c(x,t)& =\frac{m\omega ^2x^2}{2}\left(\frac{2\sigma _0^4}{\sigma ^4}-1\right)+\hbar \omega \left[1-\frac{\sigma _0^2}{\sigma ^2}T\left(\frac{x\alpha }{\sigma ^2}\right)\right]-\frac{\hbar ^2}{4mx^2}\left[1-T\left(\frac{x\alpha }{\sigma ^2}\right)\right]\hfill \\ & \to \frac{m\omega ^2x^2}{2},\qquad (t\to +\infty ).\hfill \end{array}$$
$`(6.9)`$
The limit $`t\to +\infty `$ must obviously be understood in a physical sense, i.e. for times much longer than $`\lambda _1^{-1}`$, the largest characteristic decay time in the expansion (3.12). In this particular case $`\lambda _1=\omega `$. Here too the asymptotic potential is the usual one of the harmonic oscillator, but it must be considered separately on the positive and on the negative $`x`$ semiaxis, since at the point $`x=0`$ a singular behaviour shows up when $`t\to 0`$. This means that, even if asymptotically we recover the right potential, it will be associated with a new boundary condition at $`x=0`$, since the system must be confined to one of the two semiaxes.
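The relaxation of the controlling potential toward the harmonic one is easy to display; a minimal sketch for the $`n=0`$ case (6.6) (ours, with illustrative units $`\hbar =m=\omega =1`$ and starting point $`x_0=1`$):

```python
import numpy as np

hbar = m = omega = 1.0                  # illustrative units (our assumption)
s0sq = hbar / (2.0 * m * omega)
x0 = 1.0                                # illustrative starting point
x = np.linspace(-3.0, 3.0, 7)

def Vc(x, t):
    # Eq. (6.6): controlling potential attached to the transition density (5.7)
    a = x0 * np.exp(-omega * t)
    s2 = s0sq * (1.0 - np.exp(-2.0 * omega * t))
    return 0.5 * hbar * omega * ((x - a) ** 2 / s2) * (s0sq / s2) \
        - 0.5 * m * omega**2 * x**2

for t in (0.5, 2.0, 8.0):
    print(t, np.max(np.abs(Vc(x, t) - 0.5 * m * omega**2 * x**2)))
# the deviation from the harmonic potential should decay to zero as t grows
```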
7. Modelling transitions
Given any pair $`(\rho ,v_{(+)})`$ associated to a Fokker–Planck equation, the possibility of promoting it to a solution of a Schrödinger problem by a suitable controlling potential $`V_c(x,t)`$ makes it possible to model quantum evolutions driving, for instance, the probability density of a given quantum stationary state to another (decays and excitations). Moreover, an immediate generalization of this scheme might open the way to modelling evolutions from a given, arbitrary quantum state to an eigenfunction of a given observable. Besides other applications, this could become a starting point for building simple models of the measurement process, where one tries to describe the wave packet collapse dynamically. As a first example let us consider the transition between the invariant probability densities associated to the ground and the first excited state of the harmonic oscillator potential (5.1):
$$\begin{array}{cc}\hfill \rho _0(x)& =\varphi _0^2(x)=\frac{1}{\sigma _0\sqrt{2\pi }}\mathrm{e}^{-x^2/2\sigma _0^2},\hfill \\ \hfill \rho _1(x)& =\varphi _1^2(x)=\frac{x^2}{\sigma _0^3\sqrt{2\pi }}\mathrm{e}^{-x^2/2\sigma _0^2}.\hfill \end{array}$$
$`(7.1)`$
If we choose to describe the decay $`\varphi _1\to \varphi _0`$ we may exploit the Chapman–Kolmogorov equation (4.8) with the transition probability density (5.7) and the initial probability density $`\rho _1(x)`$. An elementary integration shows that in this case the resulting evolution takes the form ($`t_0=0`$)
$$\rho _{10}(x,t)=\beta ^2(t)\rho _0(x)+\gamma ^2(t)\rho _1(x),$$
$`(7.2)`$
where
$$\beta ^2(t)=1-\mathrm{e}^{-2\omega t},\qquad \gamma (t)=\mathrm{e}^{-\omega t}.$$
$`(7.3)`$
Recalling $`v_{(+)0}(x)`$ as given in (5.5) and the evolving probability density (7.2), and inserting them in equation (6.4) we obtain the following form of the controlling potential:
$$V_c(x,t)=\frac{m\omega ^2x^2}{2}-2\hbar \omega U(x/\sigma _0;\beta /\gamma ),$$
$`(7.4)`$
where
$$U(x;b)=\frac{x^4+b^2x^2-b^2}{(b^2+x^2)^2}.$$
$`(7.5)`$
The parameter
$$b^2(t)=\frac{\beta ^2(t)}{\gamma ^2(t)}=\mathrm{e}^{2\omega t}-1$$
$`(7.6)`$
is such that $`b^2(0^+)=0`$ and $`b^2(+\infty )=+\infty `$. Thus $`U`$ goes to zero as $`t\to +\infty `$ for any $`x`$, while as $`t\to 0^+`$ it equals 1 for every $`x`$, except for a negative singularity at $`x=0`$. As a consequence, while for $`t\to +\infty `$ the controlling potential (7.4) simply tends to the potential (5.1), for $`t\to 0^+`$ it presents an inessential shift of $`2\hbar \omega `$ in the zero level, and a deep negative singularity at $`x=0`$.
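A minimal numerical check of the decay (7.2)–(7.6) (ours, in illustrative units $`\omega =\sigma _0=1`$): the evolving density stays normalized ($`\beta ^2+\gamma ^2=1`$), and the correction $`U(x;b)`$ flattens out as $`b`$ grows.

```python
import numpy as np
from scipy.integrate import quad

omega, s0 = 1.0, 1.0                        # illustrative units (our assumption)
rho0 = lambda x: np.exp(-x**2 / (2 * s0**2)) / (s0 * np.sqrt(2 * np.pi))
rho1 = lambda x: x**2 * rho0(x) / s0**2

def rho_10(x, t):
    # Eq. (7.2)-(7.3): beta^2 = 1 - e^{-2wt}, gamma^2 = e^{-2wt}
    g2 = np.exp(-2.0 * omega * t)
    return (1.0 - g2) * rho0(x) + g2 * rho1(x)

def U(x, b):
    # Eq. (7.5): shape of the correction to the harmonic potential in (7.4)
    return (x**4 + b**2 * x**2 - b**2) / (b**2 + x**2) ** 2

for t in (0.1, 1.0, 5.0):
    print(t, quad(lambda x: rho_10(x, t), -np.inf, np.inf)[0])   # ~1 at all t
print(U(0.0, 0.1), U(0.0, 10.0))    # deep negative spike early, ~0 at late times
```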
The singular behaviour of the controlling potential at the initial time of the evolution is a problem connected to the proper definition of the phase function $`S`$. In fact, from (6.3) we have:
$$S(x,t)=-\frac{\hbar }{2}\mathrm{ln}\left[\beta ^2(t)+\frac{x^2}{\sigma _0^2}\gamma ^2(t)\right]-\frac{\hbar \omega }{2}t,$$
$`(7.7)`$
so that in particular we have
$$S(x,0^+)=-\frac{\hbar }{2}\mathrm{ln}\frac{x^2}{\sigma _0^2}.$$
$`(7.8)`$
We would instead have expected the phase to be initially independent of $`x`$, as for every stationary wave function. This means that in the constructed evolution $`S(x,t)`$ presents a discontinuous behaviour for $`t\to 0^+`$. The problem arises here from the fact that we initially have a stationary state characterized by a probability density $`\rho _1(x)`$ and a velocity field $`v_{(+)1}(x)`$, and then suddenly, in order to activate the decay, we impose on the same $`\rho _1`$ a different velocity field $`v_{(+)0}(x)`$ which drags it toward a new stationary $`\rho _0(x)`$. This discontinuous change from $`v_{(+)1}`$ to $`v_{(+)0}`$ is of course responsible for the noted discontinuous change in the phase of the wave function. We have therefore modelled a transition which starts with a sudden, discontinuous kick. To construct a transition that evolves smoothly also for $`t\to 0^+`$ we should allow a continuous and smooth modification of the initial velocity field into the final one. This requirement compels us to consider a new class of Fokker–Planck equations with time–dependent forward velocity fields $`v_{(+)}(x,t)`$. In particular, to achieve the proposed smooth controlled decay between two stationary states of the harmonic oscillator, we should solve an evolution equation with a continuous velocity field $`v_{(+)}(x,t)`$ which evolves smoothly from $`v_{(+)1}(x)`$ to $`v_{(+)0}(x)`$. Clearly, the smoothing procedure can be realized in several different ways, and the selection must be dictated by the actual physical requirements and outputs one is interested in. A suitable smoothing for our transitions which leads to manageable equations still has to be found; however, in the following Section we will study a problem in which the smoothness of the evolution is a priori granted.
8. Smooth transitions: coherent and squeezed wave packets
As anticipated at the end of the previous Section, we will now consider an instance of controlled evolution that does not require an extra smoothing procedure for the driving velocity field, i.e. the transition between pairs of coherent wave packets. In particular we will consider both the transition from a coherent oscillating packet (coherent state) to the ground state of the same harmonic oscillator, and a dynamical procedure for squeezing a coherent wave packet.
To this end we will recall a simple result which indicates how to find the solutions of a particular class of evolution equations (2.3) which includes the situation of our proposed examples. If the velocity field of the evolution equation (2.3) has the linear form
$$v_{(+)}(x,t)=A(t)+B(t)x$$
$`(8.1)`$
with $`A(t)`$ and $`B(t)`$ continuous functions of time, then there are always solutions of the Fokker-Planck equation in the normal form $`𝒩(\mu (t),\nu (t))`$, where $`\mu (t)`$ and $`\nu (t)`$ are solutions of the differential equations
$$\dot{\mu }(t)-B(t)\mu (t)=A(t);\qquad \dot{\nu }(t)-2B(t)\nu (t)=2D$$
$`(8.2)`$
with suitable initial conditions. The first case that we consider is the coherent wave packet with a certain initial displacement $`a`$:
$$\psi (x,t)=\left(\frac{1}{2\pi \sigma _0^2}\right)^{1/4}\mathrm{exp}\left[-\frac{(x-a\mathrm{cos}\omega t)^2}{4\sigma _0^2}-i\left(\frac{4ax\mathrm{sin}\omega t-a^2\mathrm{sin}2\omega t}{8\sigma _0^2}+\frac{\omega t}{2}\right)\right],$$
$`(8.3)`$
whose forward velocity field reads
$$v_{(+)}(x,t)=a\omega (\mathrm{cos}\omega t-\mathrm{sin}\omega t)-\omega x.$$
$`(8.4)`$
The field (8.4) is of the required form (8.1) with $`A(t)=a\omega (\mathrm{cos}\omega t-\mathrm{sin}\omega t)`$ and $`B(t)=-\omega `$, while the configurational probability density is
$$\rho (x,t)=|\psi (x,t)|^2=\rho _0(x-a\mathrm{cos}\omega t),$$
$`(8.5)`$
where $`\rho _0`$ is that of the ground state of the harmonic oscillator given by (7.1). It is easy to show that when $`B(t)=-\omega `$, as in the case of the wave packets we are considering, there are coherent solutions of (2.3) with $`\nu (t)=\sigma _0^2`$ of the form $`𝒩(\mu (t),\sigma _0^2)`$, i.e. of the form
$$\rho (x,t)=\rho _0\left(x-\mu (t)\right).$$
$`(8.6)`$
Now the time evolution of such coherent solutions can be determined in one step, without implementing the two–step procedure of first calculating the transition probability density and then, through the Chapman-Kolmogorov equation, the evolution of an arbitrary initial probability density. On the other hand if we compare (5.5) and (8.4) we see that the difference between $`v_{(+)0}`$ and $`v_{(+)}`$ consists in the first, time–dependent term of the latter; hence it is natural to consider the problem of solving the evolution equation (2.3) with a velocity field of the type
$$\begin{array}{cc}\hfill v_{(+)}(x,t)& =A(t)-\omega x,\hfill \\ \hfill A(t)& =a\omega (\mathrm{cos}\omega t-\mathrm{sin}\omega t)F(t),\hfill \end{array}$$
$`(8.7)`$
where $`F(t)`$ is an arbitrary function varying smoothly between 1 and 0, or vice versa. In this case the evolution equation (2.3) still has coherent solutions of the form (8.6), with a $`\mu (t)`$ that depends on our choice of $`F(t)`$ through equation (8.2).
A completely smooth transition from the coherent, oscillating wave function (8.3) to the ground state $`\varphi _0`$ (5.4) of the harmonic oscillator can now be achieved, for example, by means of the following choice of the function $`F(t)`$:
$$F(t)=1-\left(1-\mathrm{e}^{-\mathrm{\Omega }t}\right)^N=\sum _{k=1}^N(-1)^{k+1}\binom{N}{k}\mathrm{e}^{-\omega _kt},$$
$`(8.8)`$
where
$$\mathrm{\Omega }=\frac{\mathrm{ln}N}{\tau },\qquad \omega _k=k\mathrm{\Omega };\qquad \tau >0,\;N\geq 2.$$
$`(8.9)`$
In fact, a function $`F(t)`$ of this form goes monotonically from $`F(0)=1`$ to $`F(+\infty )=0`$ with an inflection point at $`\tau `$ (which can be considered as the arbitrary instant of the transition), where its derivative $`F^{\prime }(\tau )`$ is negative and grows, in absolute value, logarithmically with $`N`$. The condition $`N\geq 2`$ on the exponent also guarantees that $`F^{\prime }(0)=0`$, and hence that the controlling potential $`V_c(x,t)`$ given in equation (6.4) will start continuously at $`t=0`$ from the harmonic oscillator potential (5.1), and asymptotically come back to it for $`t\to +\infty `$. Finally, the phase function $`S(x,t)`$ will also change continuously from that of $`\psi `$ given in (8.3) to that of the harmonic oscillator ground state $`\psi _0`$. A long calculation yields the explicit form of the controlling potential:
$$V_c(x,t)=m\omega ^2\frac{x^2}{2}-m\omega ax\sum _{k=1}^N(-1)^{k+1}\binom{N}{k}\left[U_k(t)\omega _k\mathrm{e}^{-\omega _kt}-W_k\omega \mathrm{e}^{-\omega t}\right],$$
$`(8.10)`$
where
$$\begin{array}{cc}\hfill U_k(t)& =\mathrm{sin}\omega t+\frac{2\omega ^2\mathrm{sin}\omega t-\omega _k^2\mathrm{cos}\omega t}{(\omega _k-\omega )^2+\omega ^2},\hfill \\ \hfill W_k& =1+\frac{2\omega ^2-\omega _k^2}{(\omega _k-\omega )^2+\omega ^2}=\sqrt{2}U_k\left(\frac{\pi }{4\omega }\right).\hfill \end{array}$$
$`(8.11)`$
The parameters $`\tau `$ and $`N`$, apart from the constraints (8.9), are free and can be fixed by the particular form of the transition that we want to implement, according to the specific physical situations we are interested in. Finally we remark that, in a harmonic oscillator, the transition between a coherent, oscillating wave packet and the ground state is a transition from a (Poisson) superposition of all the energy eigenstates to just one energy eigenstate: an outcome which is similar to that of an energy measurement, but with the important difference that here the result (the energy eigenstate) is deterministically controlled by a time–dependent potential. The controlled transition that we have constructed does not produce mixtures, but pure states (eigenstates), and may be considered a dynamical model for one of the branches of a measurement leading to a selected eigenvalue and eigenstate.
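The properties claimed for $`F(t)`$ — $`F(0)=1`$, $`F^{\prime }(0)=0`$ for $`N\geq 2`$, and an inflection at $`t=\tau `$ — are easy to verify; a minimal sketch (ours, with illustrative $`N=4`$, $`\tau =1`$):

```python
import numpy as np

def F(t, N=4, tau=1.0):
    # Eq. (8.8)-(8.9): F(t) = 1 - (1 - e^{-Omega t})^N with Omega = ln(N)/tau
    Om = np.log(N) / tau
    return 1.0 - (1.0 - np.exp(-Om * t)) ** N

t = np.linspace(0.0, 5.0, 5001)
f = F(t)
fp = np.gradient(f, t)              # F'
fpp = np.gradient(fp, t)            # F''

print(f[0], fp[0])                  # F(0) = 1 and F'(0) ~ 0 (requires N >= 2)
flex = np.where(np.diff(np.sign(fpp[2:-2])) != 0)[0][0] + 2
print(t[flex])                      # the sign change of F'' should sit near tau
```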
Until now we have considered transitions between Gaussian wave packets with constant width. However, it is also of great interest to discuss the case of controlling potentials able to produce a wave–packet evolution with variable width: a kind of controlled squeezing of the wave packet. This could be very useful in such instances as the shaping of Gaussian outputs in molecular reactions, or in the design of focusing devices for beams in particle accelerators, in which the width of the bunch must be properly squeezed. We will now discuss a simple case, which also shows that, in the particular conditions chosen, it is possible to avoid the integration of the differential equations (8.2).
Let us remember that when the forward velocity field has the form (8.1) the Fokker-Planck equation (2.3) always possesses Gaussian solutions of the form
$$\rho (x,t)=\frac{\mathrm{e}^{-[x-\mu (t)]^2/2\nu (t)}}{\sqrt{2\pi \nu (t)}},$$
$`(8.12)`$
if $`\mu (t)`$ and $`\nu (t)`$ are solutions of (8.2). We plan now to describe evolutions of the quantum state (2.8) such that I) both $`V_c(x,t)`$ and $`S(x,t)`$ be continuous and regular at every instant, and II) the variance $`\nu (t)`$ satisfy the relations
$$\nu (-\infty )=\sigma _0^2,\qquad \nu (+\infty )=\sigma _1^2.$$
$`(8.13)`$
In practice this means that, if for example we require for the sake of simplicity $`\mu (t)=0`$ at every time, we will describe a transition from the ground state of a harmonic oscillator with frequency $`\omega _0=D/\sigma _0^2`$ to the ground state of another harmonic oscillator with frequency $`\omega _1=D/\sigma _1^2`$. It is worth remarking here that this very simple transition cannot be achieved by means of an arbitrary time–dependent potential $`V_c(x,t)`$ that merely goes from $`m\omega _0^2x^2/2`$ for $`t\to -\infty `$ to $`m\omega _1^2x^2/2`$ for $`t\to +\infty `$. The intermediate evolution, indeed, when not suitably designed, would introduce components of every other energy eigenstate of the final harmonic oscillator, which will not, in general, asymptotically disappear.
Let us recall here that the relevant quantities are the phase function
$$S(x,t)=mW(x,t)-mD\mathrm{ln}\stackrel{~}{\rho }(x,t)-\theta (t)$$
$`(8.14)`$
(where $`\theta (t)`$ is arbitrary and, from (6.1) and (8.1), $`W(x,t)=A(t)x+B(t)x^2/2`$), and the controlling potential
$$V_c(x,t)=mD^2\partial _x^2\mathrm{ln}\stackrel{~}{\rho }+mD\left(\partial _t\mathrm{ln}\stackrel{~}{\rho }+v_{(+)}\partial _x\mathrm{ln}\stackrel{~}{\rho }\right)-\frac{mv_{(+)}^2}{2}-m\partial _tW+\dot{\theta }.$$
$`(8.15)`$
Both these functions are determined from the knowledge of the forward velocity field $`v_{(+)}(x,t)`$ and of the adimensional density $`\stackrel{~}{\rho }(x,t)=\sigma _0\rho (x,t)`$. However, in this coherent evolution it will not be necessary to integrate the differential equations (8.2) to obtain an explicit form of $`S`$ and $`V_c`$. Indeed, since $`A(t)`$, $`B(t)`$ and $`\mathrm{ln}\rho (x,t)`$ can be expressed through (8.2) in terms of $`\mu (t)`$, $`\nu (t)`$, $`\dot{\mu }(t)`$, $`\dot{\nu }(t)`$ and $`D`$, it is a straightforward matter to show that the phase is of the general form
$$S(x,t)=\frac{m}{2}\left[\mathrm{\Omega }(t)x^2-2U(t)x+\mathrm{\Delta }(t)\right],$$
$`(8.16)`$
with
$$\begin{array}{cc}\hfill \mathrm{\Omega }(t)& =\frac{\dot{\nu }(t)}{2\nu (t)},\hfill \\ \hfill U(t)& =\frac{\mu (t)\dot{\nu }(t)-2\nu (t)\dot{\mu }(t)}{2\nu (t)},\hfill \\ \hfill \mathrm{\Delta }(t)& =D\frac{\mu ^2(t)}{\nu (t)}+D\mathrm{ln}\frac{2\pi \nu (t)}{\sigma _0^2}-\frac{2\theta (t)}{m},\hfill \end{array}$$
$`(8.17)`$
while the controlling potential reads
$$V_c(x,t)=\frac{m}{2}\left[\omega ^2(t)x^2-2a(t)x+c(t)\right],$$
$`(8.18)`$
where
$$\begin{array}{cc}\hfill \omega ^2(t)& =\frac{4D^2-2\nu (t)\ddot{\nu }(t)+\dot{\nu }^2(t)}{4\nu ^2(t)},\hfill \\ \hfill a(t)& =\frac{4D^2\mu (t)+4\nu ^2(t)\ddot{\mu }(t)-2\mu (t)\nu (t)\ddot{\nu }(t)+\mu (t)\dot{\nu }^2(t)}{4\nu ^2(t)},\hfill \\ \hfill c(t)& =\frac{8D^2\mu ^2(t)-4D\nu (t)\dot{\nu }(t)-8D^2\nu (t)-\left(2\nu (t)\dot{\mu }(t)-\mu (t)\dot{\nu }(t)+2D\mu (t)\right)^2}{4\nu ^2(t)}+\frac{2\dot{\theta }(t)}{m}.\hfill \end{array}$$
$`(8.19)`$
We can simplify our notation by imposing that $`\mu (t)=0`$ (and hence $`\dot{\mu }(t)=\ddot{\mu }(t)=0`$) for every $`t`$, obtaining
$$S(x,t)=\frac{m}{2}\left[\mathrm{\Omega }(t)x^2+\mathrm{\Delta }(t)\right],$$
$`(8.20)`$
with
$$\begin{array}{cc}\hfill \mathrm{\Omega }(t)& =\frac{\dot{\nu }(t)}{2\nu (t)},\hfill \\ \hfill \mathrm{\Delta }(t)& =D\mathrm{ln}\frac{2\pi \nu (t)}{\sigma _0^2}-\frac{2\theta (t)}{m}.\hfill \end{array}$$
$`(8.21)`$
The controlling potential too reduces to
$$V_c(x,t)=\frac{m}{2}\left[\omega ^2(t)x^2+c(t)\right],$$
$`(8.22)`$
where
$$\begin{array}{cc}\hfill \omega ^2(t)& =\frac{4D^2-2\nu (t)\ddot{\nu }(t)+\dot{\nu }^2(t)}{4\nu ^2(t)},\hfill \\ \hfill c(t)& =\frac{2\dot{\theta }(t)}{m}-D\frac{\nu (t)\dot{\nu }(t)+2D\nu (t)}{\nu ^2(t)}.\hfill \end{array}$$
$`(8.23)`$
Hence the evolution is completely defined, through the four functions $`\mathrm{\Omega }(t)`$, $`\mathrm{\Delta }(t)`$, $`\omega ^2(t)`$ and $`c(t)`$, by $`\theta (t)`$ and $`\nu (t)`$. It is expedient in particular to choose
$$\theta (t)=\frac{mD}{2}\mathrm{ln}\frac{2\pi \nu (t)}{\sigma _0^2}+\frac{mD^2t}{\nu (t)}.$$
$`(8.24)`$
In this way
$$\mathrm{\Delta }(t)=-\frac{2D^2t}{\nu (t)},$$
$`(8.25)`$
because from (8.13) we have $`\dot{\nu }(\pm \mathrm{})=0`$ so that (see (8.20)):
$$\begin{array}{cc}\hfill S(x,t)& \simeq -\frac{mD^2t}{\sigma _0^2},\qquad t\to -\infty ,\hfill \\ \hfill S(x,t)& \simeq -\frac{mD^2t}{\sigma _1^2},\qquad t\to +\infty .\hfill \end{array}$$
$`(8.26)`$
This was to be expected from the fact that $`mD^2/\sigma _0^2=\hbar \omega _0/2`$ and $`mD^2/\sigma _1^2=\hbar \omega _1/2`$ are the energy eigenvalues of the ground states of the two harmonic oscillators. Moreover, from the choice (8.24) it also follows that $`c(\pm \infty )=0`$, so that the controlling potential (8.22) shows no extra asymptotic terms with respect to the initial and final harmonic potentials.
In order to completely specify the controlled evolution we are now left with the determination of the form of $`\nu (t)`$. With $`b=\sigma _1/\sigma _0`$, take
$$\nu (t)=\sigma _0^2\left(\frac{b+\mathrm{e}^{-t/\tau }}{1+\mathrm{e}^{-t/\tau }}\right)^2,\qquad (\tau >0),$$
$`(8.27)`$
so that the transition happens around the instant $`t=0`$, with $`\tau `$ controlling its speed. We thus obtain the explicit expressions for the four functions (8.21) and (8.23):
$$\begin{array}{cc}\hfill \mathrm{\Omega }(t)& =\frac{b-1}{\tau }\frac{\mathrm{e}^{-t/\tau }}{(b+\mathrm{e}^{-t/\tau })(1+\mathrm{e}^{-t/\tau })},\hfill \\ \hfill \mathrm{\Delta }(t)& =-\frac{2D^2t}{\sigma _0^2}\left(\frac{1+\mathrm{e}^{-t/\tau }}{b+\mathrm{e}^{-t/\tau }}\right)^2,\hfill \\ \hfill \omega ^2(t)& =\frac{D^2}{\sigma _0^4}\left(\frac{1+\mathrm{e}^{-t/\tau }}{b+\mathrm{e}^{-t/\tau }}\right)^4+\frac{b-1}{\tau ^2}\frac{\mathrm{e}^{-t/\tau }(1-\mathrm{e}^{-t/\tau })}{(1+\mathrm{e}^{-t/\tau })^2(b+\mathrm{e}^{-t/\tau })},\hfill \\ \hfill c(t)& =-\frac{4D^2(b-1)}{\sigma _0^2}\frac{\mathrm{e}^{-t/\tau }(1+\mathrm{e}^{-t/\tau })}{(b+\mathrm{e}^{-t/\tau })^3}\frac{t}{\tau }.\hfill \end{array}$$
$`(8.28)`$
Their form is displayed in Figures 1–4, where to fix an example we have chosen the values $`\tau =1`$, $`b=2`$, $`\omega _0=1`$ and $`\sigma _0=1`$. As can be seen, the behaviour of the time–dependent parameters of the potential is not trivial even for the very simple squeezing of a static Gaussian wave packet from a given variance to another. How to follow this time dependence precisely and in a stable way will be the subject of a forthcoming paper, as discussed in the next section.
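The asymptotics of the squeezing protocol can be confirmed directly from (8.27)–(8.28); a minimal sketch (ours, using the same illustrative values as the figures, for which $`D=\omega _0\sigma _0^2=1`$):

```python
import numpy as np

D, s0, b, tau = 1.0, 1.0, 2.0, 1.0      # the figures' values; D = omega_0 * s0^2
s1 = b * s0

def nu(t):
    # Eq. (8.27)
    u = np.exp(-t / tau)
    return s0**2 * ((b + u) / (1.0 + u)) ** 2

def omega_sq(t):
    # Eq. (8.28), third line
    u = np.exp(-t / tau)
    return (D**2 / s0**4) * ((1.0 + u) / (b + u)) ** 4 \
        + ((b - 1.0) / tau**2) * u * (1.0 - u) / ((1.0 + u) ** 2 * (b + u))

for t in (-20.0, 20.0):
    print(t, nu(t), omega_sq(t))
# expect nu -> s0^2 and omega^2 -> D^2/s0^4 = omega_0^2 as t -> -infinity,
# and    nu -> s1^2 and omega^2 -> D^2/s1^4 = omega_1^2 as t -> +infinity
```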
9. Conclusions and outlook
We have shown how to treat the typical inverse problem in quantum control, i.e. that of determining a controlling potential for a given quantum evolution, in the framework of Nelson stochastic mechanics. In this way we have been able to determine the general characteristics of controlled evolutions between assigned initial and final quantum states. The techniques of solution and the relation between the transition probabilities, phase functions and controlling potentials have been discussed on general grounds. Detailed, explicit calculations have also been shown in the paradigmatic test arena provided by the harmonic oscillator.
Further extensions of the method outlined in the present paper are currently under study. One immediate application to be faced is the generalization of the analysis performed for the harmonic oscillator to anharmonic systems. The difficulty to be faced on the way toward this aim is that one is in general forced to deal with approximate quantum wave functions, as in the case of the quartic oscillator. Therefore the controlled evolution must be supplemented by a suitable feedback mechanism ensuring that the error initially made in choosing a certain initial approximate state does not grow during the controlled time evolution. One extremely interesting application would be the description of a controlled evolution driving initial approximate quantum states of anharmonic systems to stable wave packets generalizations of the coherent states of the harmonic oscillator . Besides the obvious interest in several areas of quantum phenomenology, the above is also of great potential interest in discussing the control and the reduction of aberrations in quantum–like systems, i.e. deviations from the harmonic evolution that are detected in systems like charged beams in particle accelerators.
Another very interesting future line of research that has been left virtually unexplored in the present paper is the introduction of optimization procedures. We have barely touched upon this problem when discussing the smoothing of the controlled transitions. Optimization of suitable functionals, chosen according to the kind of physical evolution one needs or desires to manufacture, would provide a powerful criterion of selection among the different possible smoothed evolutions. Instances of functionals to be optimized during the controlled dynamics that come naturally to mind are the uncertainty products of conjugate observables (to be optimized to a relative minimum under the constraint of Schrödinger dynamics ), or the relative entropy between the initial and final states. But many more can be imagined and devised, according to the nature of the physical problem considered.
One last, but important consideration is in order. When we implement a controlled evolution by means of a suitable controlling potential we must also bear in mind that in practice small deviations away from the designed potential and from the desired wave function are always possible. In general such deviations are not subsequently reabsorbed but rather tend to drag the state away from the required evolution. Hence to really control these quantum evolutions it will be very important to study their stability under small deviations and perturbations: this is of crucial importance from the standpoint of confronting the formal, theoretical scheme with the practical applications. Work is currently in progress in all the above mentioned extensions of the present research, and we plan to soon report on it.
REFERENCES
Cufaro Petroni N and Guerra F 1995 Found. Phys. 25 297
Cufaro Petroni N 1995 Asymptotic behaviour of densities for Nelson processes, in Quantum communications and measurement, ed V P Belavkin et al (New York: Plenum Press)
Cufaro Petroni N, De Martino S and De Siena S 1997 Non equilibrium densities of Nelson processes, in New perspectives in the physics of mesoscopic systems, ed S De Martino et al (Singapore: World Scientific)
Nelson E 1966 Phys. Rev. 150 1079
Nelson E 1967 Dynamical Theories of Brownian Motion (Princeton: Princeton U.P.)
Nelson E 1985 Quantum Fluctuations (Princeton: Princeton U.P.)
Guerra F 1981 Phys. Rep. 77 263
Blanchard Ph, Combe Ph and Zheng W 1987 Mathematical and physical aspects of stochastic mechanics (Berlin: Springer–Verlag)
Guerra F and Morato L M 1983 Phys. Rev. D 27 1774
Guerra F and Marra R 1983 Phys. Rev. D 28 1916
Guerra F and Marra R 1984 Phys. Rev. D 29 1647
Morato L M 1985 Phys. Rev. D 31 1982
Loffredo M I and Morato L M 1989 J. Math. Phys. 30 354
Bohm D and Vigier J P 1954 Phys. Rev. 96 208
Cufaro Petroni N, De Martino S and De Siena S 1998 Phys. Lett. A 245 1
Cufaro Petroni N 1989 Phys. Lett. A 141 370
Cufaro Petroni N 1991 Phys. Lett. A 160 107
Cufaro Petroni N and Vigier J P 1992 Found. Phys. 22 1
Cufaro Petroni N, De Martino S, De Siena S and Illuminati F 1998 A stochastic model for the semiclassical collective dynamics of charged beams in particle accelerators, in 15th ICFA Advanced Beam Dynamics Workshop (Monterey, California, US)
Cufaro Petroni N, De Martino S, De Siena S, Fedele R, Illuminati F and Tzenov S 1998 Stochastic control of beam dynamics, in EPAC’98 Conference (Stockholm, Sweden)
Madelung E 1926 Z. Physik 40 332
de Broglie L 1926 C. R. Acad. Sci. Paris 183 447
Bohm D 1952 Phys. Rev. 85 166, 180
Cufaro Petroni N, Dewdney C, Holland P, Kyprianidis T and Vigier J P 1985 Phys. Rev. D 32 1375
Guerra F 1997 The problem of the physical interpretation of Nelson stochastic mechanics as a model for quantum mechanics, in New perspectives in the physics of mesoscopic systems ed S De Martino et al (Singapore: World Scientific)
Risken H 1989 The Fokker-Planck equation (Berlin: Springer)
Tricomi F 1948 Equazioni differenziali (Torino: Einaudi)
Tricomi F 1985 Integral equations (New York: Dover)
Albeverio S and Høgh-Krohn R 1974 J. Math. Phys. 15 1745
De Martino S, De Siena S and Illuminati F 1997 J. Phys. A: Math. Gen. 30 4117
Illuminati F and Viola L 1995 J. Phys. A: Math. Gen. 28 2953
FIGURE CAPTIONS
Figure 1. The parameter $`\mathrm{\Omega }(t)`$ in the phase function $`S(x,t)`$.
Figure 2. The parameter $`\mathrm{\Delta }(t)`$ in the phase function $`S(x,t)`$.
Figure 3. The parameter $`\omega ^2(t)`$ in the potential $`V_c(x,t)`$.
Figure 4. The parameter $`c(t)`$ in the potential $`V_c(x,t)`$. |
no-problem/9910/astro-ph9910121.html | ar5iv | text | # STIS Longslit Spectroscopy of the Narrow-Line Region of NGC 4151. II. Physical Conditions along Position Angle 221oBased on observations made with the NASA/ESA Hubble Space Telescope. STScI is operated by the Association of Universities for Research in Astronomy, Inc. under the NASA contract NAS5-26555.
## 1 Introduction
NGC 4151 is one of the nearest (z=0.0033) Seyfert galaxies, and among the most extensively studied. Based on its strong broad (FWHM $`\sim `$ 5000 km s<sup>-1</sup>) permitted lines and our direct line of sight to the non-thermal continuum source, NGC 4151 is generally classified as a Seyfert 1 galaxy, although its continuum and broad emission lines are extremely variable, and at one time the broad component nearly disappeared (Penston and Perez 1984), at which point the spectrum of NGC 4151 resembled that of a Seyfert 2 galaxy. It has an extended emission-line region (i.e. narrow-line region, NLR) which is often cited as an example of the biconical morphology resulting from collimation of the ionizing radiation (cf. Schmitt & Kinney 1996). A linear radio structure (“jet”) extends 3$`^{\prime \prime }.`$5 southwest of the nucleus (Wilson & Ulvestad 1982) and may be the result of the interaction of radially accelerated energetic particles with the interstellar magnetic field of NGC 4151. There have been suggestions, most recently by Wilson & Raymond (1999), that the interaction of the jet with interstellar gas could be the source of the narrow-line emission. Much of the gas in the NLR appears to be radially outflowing from the nucleus (Hutchings et al. 1998), although there is no apparent correlation between the kinematics of the emission-line gas and the radio structure (Kaiser et al. 1999), which suggests another source for the radial outflow, such as a wind emanating from the inner regions of the nucleus (cf. Krolik & Begelman 1984).
One of the most interesting spectral characteristics of NGC 4151 is the presence of absorption by atomic gas, seen in the UV (cf. Weymann et al. 1997) and X-ray (cf. Holt et al. 1980). The variability of the absorption features suggests that the absorbing material must lie close to the nucleus, within the inner few parsecs for the UV absorber (Espey et al. 1998) and within a parsec for the X-ray absorber (Yaqoob, Warwick, & Pounds 1989). Although the absorbers can only be detected along our line of sight to the nucleus, their effect on the gas further out should be apparent (Kraemer et al. 1999) if they have a large covering factor, as suggested by Crenshaw et al. (1999). In fact, Alexander et al. (1999) have demonstrated that the narrow emission lines of NGC 4151 could be the result of photoionization by an absorbed continuum. However, the heterogeneous nature of their data, and the lack of spatially resolved spectra, prevented them from constraining the spectral energy distribution (SED) of the ionizing continuum, other than by noting the apparent paucity of photons just above the He II Lyman limit (54.4 eV).
We have obtained low-dispersion long-slit data, at a position angle PA $`=`$ 221<sup>o</sup>, using HST/STIS. From these data, coupled with detailed photoionization models, we are able to determine the physical conditions of the NLR gas, including any radial variations. As we will show in the following sections, at this position angle there is no evidence for sources of ionization other than photoionization by the nuclear continuum. The NLR does indeed appear to be ionized by an absorbed continuum, as proposed by Alexander et al. (1999). Furthermore, based on our modeling, we can place much tighter constraints on the SED of the ionizing radiation. Finally, these data provide further evidence that the NLR gas is outflowing radially from the nucleus of NGC 4151.
## 2 Observations and Analysis
The details of the observations and data reduction are described in Nelson et al. (1999, hereafter Paper I). To summarize, we used the G140L grating with the far-UV MAMA detector and the G230LB, G430L, and G750L gratings with the CCD detector, to cover the entire 1150 – 10,270 Å region, at a resolving power of $`\lambda `$/$`\mathrm{\Delta }\lambda `$ $`=`$ 500 – 1000. We used a narrow slit width of 0$`^{\prime \prime }.`$1 to avoid degradation of the spectral resolution. We extracted fluxes in bins with lengths of 0$`^{\prime \prime }.`$2 in the inner 1<sup>′′</sup> and 0$`^{\prime \prime }.`$4 further out to maximize the signal-to-noise ratios of the He II lines (see below) and yet isolate the emission-line knots identified in our earlier paper (Hutchings et al. 1998).
Our measurement techniques follow those used in previous papers (cf., Kraemer, Ruiz, & Crenshaw 1998). We obtained fluxes for nearly all of the emission lines by direct integration over a local baseline (see Paper I). For the blended lines of H$`\alpha `$ and \[N II\] $`\lambda \lambda `$6548, 6584, and \[S II\] $`\lambda \lambda `$6716, 6731, we used the \[O III\] $`\lambda `$5007 line as a template. We determined the reddening from the observed He II $`\lambda `$1640/$`\lambda `$4686 ratios, assuming an intrinsic ratio of 7.2 (expected from NLR temperatures and densities), and the Galactic reddening curve of Savage & Mathis (1979). We determined the errors in the observed ratios (relative to H$`\beta `$) from the sum in quadrature of photon noise and different reasonable continuum placements. Errors in the dereddened ratios include an additional contribution (added in quadrature) from the uncertainty in reddening (propagated from errors in the He II ratios); the latter dominates the errors in the dereddened UV lines.
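For concreteness, the dereddening step can be written out as a short calculation (a sketch of ours: the intrinsic ratio 7.2 is from the text, but the extinction-curve coefficients below are approximate stand-ins for the Savage & Mathis values and should be replaced by numbers read from the published curve).

```python
import numpy as np

R_INT = 7.2     # intrinsic He II 1640/4686 ratio adopted in the text
K1640 = 8.0     # A(1640 A)/E(B-V): approximate placeholder value
K4686 = 3.7     # A(4686 A)/E(B-V): approximate placeholder value

def ebv_from_heii(ratio_obs):
    """E(B-V) implied by an observed He II 1640/4686 flux ratio."""
    return 2.5 * np.log10(R_INT / ratio_obs) / (K1640 - K4686)

print(ebv_from_heii(4.0))   # e.g., an observed ratio of 4.0 gives E(B-V) ~ 0.15
```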
Tables 1 and 2 give the dereddened ratios and errors in each bin, along with the reddening and its uncertainty, and the total H$`\beta `$ flux in that bin (the observed ratios are included in Paper I). For positions that include a knot of emission that we identified in an earlier paper (Hutchings et al. 1998), we list that knot. We note that the sensitivity of the G230LB grating is very low at the short wavelength end, and thus we could not detect the N III\] $`\lambda `$1750 line. The errors in the lines longward of 7000 Å are probably underestimates since this region of the G750L spectrum suffers from fringing (Plait & Bohlin 1998).
## 3 General Trends in the Line Ratios
Figure 1 shows various trends in the reddening and line ratios as a function of projected distance along the slit. The central position of each bin is plotted along the horizontal axis, with the convention that negative numbers represent the blueshifted (southwest) side. From the first plot, we see that there is significant reddening at many positions, whereas a few positions have essentially no reddening. The reddening does not appear to vary in a uniform fashion over this region, which suggests that it is not due to an external screen, but is associated with the emission-line knots. If we look at the inner region ($`\pm `$ 1<sup>′′</sup>), where the errors are smaller, there may be a trend, in that the reddening seems to drop from blueshifted to redshifted side.
The dereddened L$`\alpha `$/H$`\beta `$ plot shows a very interesting trend: this ratio increases dramatically from the blueshifted to redshifted side. Since there is no correlation between the reddening and the L$`\alpha `$/H$`\beta `$ ratio, this trend cannot be explained by a systematic error in determining the reddening. In the inner region, higher values of reddening on the redshifted side (see the previous plot), would only make the L$`\alpha `$/H$`\beta `$ trend steeper.
The L$`\alpha `$/H$`\beta `$ trend can be explained by radial outflow of optically thick clouds that contain dust. In such a cloud, resonance-line photons (and L$`\alpha `$ in particular) are selectively destroyed by multiple scatterings and eventual absorption by dust (cf. Cohen, Harrington, & Hess 1984). The observer sees a much higher L$`\alpha `$/H$`\beta `$ ratio when looking at the illuminated face of a cloud (as opposed to the back side), since escape of the L$`\alpha `$ photons is much easier in this direction. Thus, we expect higher L$`\alpha `$/H$`\beta `$ ratios for clouds on the far side of the nucleus, which in this case corresponds to redshifted clouds, indicating radial outflow (see Capriotti, Foltz, & Byard 1979). Although the UV absorption lines are blue-shifted (Weymann et al. 1997), indicating that components of gas in our line-of-sight are outflowing, the suppression of L$`\alpha `$ is the strongest evidence that the emission-line gas in the inner NLR of NGC 4151 is outflowing radially (for further discussion of the NLR kinematics, see Hutchings et al. 1998; Kaiser et al. 1999; Paper I).
The \[O III\]/H$`\beta `$ plot verifies a trend that we first observed in our slitless data (Kaiser et al. 1999). In the inner NLR, this ratio decreases with increasing projected distance from the nucleus. Since the ionization parameter is proportional to n<sub>H</sub><sup>-1</sup>r<sup>-2</sup> (where n<sub>H</sub> is the atomic hydrogen density and r is the distance from the nucleus), we suggested that n<sub>H</sub> decreases roughly as r<sup>-1</sup> in the inner NLR, and more steeply with increasing distance (Kaiser et al. 1999). However, we will show in the discussion of the model results that the closest radial points are best fit by assuming a nearly constant ionization parameter, which indicates that n<sub>H</sub> decreases more steeply, as r<sup>-1.6 to -1.7</sup>.
In the last plot, we show the H$`\gamma `$/H$`\beta `$ ratio as a function of projected distance. The ratio of these H recombination lines should show no trend, since it only has a slight dependence on temperature and density. In fact, we see that there is no observed trend, and are therefore confident that there are no large systematic errors in the ratios.
## 4 Photoionization Models
The basic methodology which we employ and the details of our photoionization code have been described in our previous publications (cf. Kraemer 1985; Kraemer et al. 1994). The models assume plane-parallel geometry and emission-line photon escape out either the illuminated face or the back-end of the slab. In the latter case, the back surface of the slab was assumed to be the point where the model was truncated (that is we ignored any effect from an additional neutral envelope). In practice, we used the back-end values for the three positions with a dereddened L$`\alpha `$/H$`\beta `$ $`<`$ 5. Most of the other line ratios are not affected by the choice of front- or back-end escape, due to the low column densities for these models (see Section 5.1).
The photoionization models are parameterized in terms of the density of atomic hydrogen (n<sub>H</sub>) and the dimensionless ionization parameter at the illuminated face of a cloud:
$$U=\frac{1}{4\pi D^2n_Hc}_{\nu _0}^{\mathrm{}}\frac{L_\nu }{h\nu }𝑑\nu ,$$
(1)
where L<sub>ν</sub> is the frequency dependent luminosity of the ionizing continuum, D is the distance between the cloud and the ionizing source, and h$`\nu _0`$ = 13.6 eV.
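For concreteness, Eq. (1) can be evaluated numerically; the sketch below uses the power-law index and 13.6 eV – 1 keV luminosity adopted in Section 4.1, while the cloud distance and density are hypothetical.

```python
import numpy as np

H, C, EV = 6.626e-27, 3.0e10, 1.602e-12   # cgs: Planck h, c, erg per eV
PC = 3.086e18                             # cm per parsec
NU0 = 13.6 * EV / H                       # Lyman-limit frequency (Hz)

def ionization_parameter(l_nu, nu, d_pc, n_h):
    """Numerical form of Eq. (1): l_nu [erg/s/Hz] on grid nu [Hz],
    cloud at distance d_pc [pc] with hydrogen density n_h [cm^-3]."""
    mask = nu >= NU0
    q = np.trapz(l_nu[mask] / (H * nu[mask]), nu[mask])   # photons/s
    return q / (4.0 * np.pi * (d_pc * PC) ** 2 * n_h * C)

# Hypothetical L_nu ~ nu^-1.4 above the Lyman limit, normalized to
# ~1.2e43 erg/s between 13.6 eV and 1 keV (values from Section 4.1).
nu = np.logspace(np.log10(NU0), np.log10(1000 * EV / H), 4000)
shape = (nu / NU0) ** -1.4
l_nu = 1.2e43 / np.trapz(shape, nu) * shape

print(ionization_parameter(l_nu, nu, d_pc=30.0, n_h=2.0e4))  # ~10^-2.5
```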
### 4.1 The Ionizing Continuum
We have assumed in these simple models that the gas is photoionized by radiation from the central AGN. As has often been demonstrated (cf., Binette, Robinson, & Courvoisier 1988), the physical conditions in the NLR, and, hence, the resulting emission-line spectrum, depend on both the intensity of the radiation field and its SED. Along our line-of-sight, the continuum radiation from NGC 4151 shows the effects of a complex system of absorbers, which is interior to the NLR and can drastically alter the SED seen by the NLR. Kraemer et al. (1999) have demonstrated that, for typical NLR conditions (U $`=`$ 10<sup>-2.5</sup>, n<sub>H</sub> $``$ 10<sup>5</sup> cm<sup>-3</sup>), the ratio of \[O III\] $`\lambda `$4363 to He II $`\lambda `$4686 increases as the depth of the absorption near the He II Lyman edge increases. From the observed ratios of these lines (see Tables 1 and 2), it is clear that \[O III\] $`\lambda `$4363 is relatively strong. Therefore, our approach was to estimate the intrinsic (unabsorbed) SED as simply as possible, and then include the effects of absorption to replicate the ionization state and electron temperature of the NLR gas. We constrained the physical characteristics of the absorbers to resemble those observed along our line-of-sight. The SED was then held fixed as an input to the models. Although Alexander et al. (1999) determined the general characteristics of the SED from emission-line diagnostics, they did not attempt to model the effects of realistic absorbers.
In order to approximate the intrinsic SED, we used the observed fluxes at energies where the effects of the absorber are not severe, specifically at energies lower than the Lyman limit and above several keV. In the 1 – 4 keV band, the continuum radiation from NGC 4151 has been modified by a large absorbing column of gas (Barr et al. 1977; Mushotzky, Holt, & Serlemitsos 1978), which may have multiple components (Weaver et al. 1994). For the purposes of these models, we assumed that the absorption at 5 keV is negligible and that the X-ray continuum down to 1 keV can be approximated by a power-law (F<sub>ν</sub> $``$ ν<sup>-α</sup>) with spectral index $`\alpha `$ $``$ 0.5 (Weaver et al. 1994). For the 5 keV flux, we have used the value from ASCA observations given in Edelson et al. (1996).
The far-UV continuum of NGC 4151 has been observed by HUT on both the Astro-1 (Kriss et al. 1992) and Astro-2 (Kriss et al. 1995) missions and with the Berkeley spectrometer during the ORFEUS-SPAS II mission (Espey et al. 1998). Between the two HUT observations, the continuum flux at 1455 Å increased by a factor of $``$ 5 (Kriss et al. 1995), and somewhat more near the Lyman limit (the ORFEUS results are similar to those from Astro-2). Note, however, that the continuum irradiating any parcel of NLR gas is time-averaged over the light crossing time to that parcel. Thus, variations in the nuclear continuum flux on time scales of a few years will be insignificant, since our measurements are sampling at least 30 years of continuum variations. To be conservative, we assumed a reddening-corrected flux at the Lyman limit that is the average of the two HUT values.
As noted above, the X-ray continuum can be approximated by a hard power-law, the extrapolation of which would lie far below the observed flux near the Lyman limit. Since many Seyfert spectra appear to steepen in the soft X-ray (cf., Arnaud et al. 1985; Turner et al. 1999), we assume that the hard power-law extends down to 1 keV, below which the index becomes steeper, continuing down to the Lyman limit, and then flattens to the canonical value ($`\alpha `$ $`=`$ 1) below 13.6 eV. The ionizing continuum is thus expressed as F<sub>ν</sub> $`=`$ Kν<sup>-α</sup>, where
$$\alpha =1.0,\quad h\nu <13.6\mathrm{eV}$$
(2)
$$\alpha =1.4,\quad 13.6\mathrm{eV}\le h\nu <1000\mathrm{eV}$$
(3)
$$\alpha =0.5,\quad h\nu \ge 1000\mathrm{eV}$$
(4)
The total luminosity between 13.6 eV and 1 keV is $``$ 1.2 x 10<sup>43</sup> erg s<sup>-1</sup>. Although Schulz & Komossa (1993), based on their models of the extended ($``$ 500 pc) emission-line region, have argued for the existence of a “Big Blue Bump” in the EUV continuum of NGC 4151, Alexander et al. (1999) have demonstrated that the NLR shows no such effects. Furthermore, as we will show, this simple power-law fit provides sufficient flux in ionizing radiation to power the NLR, even after including the effects of absorption near the He II Lyman limit. Hence, there is no reason to include a blue bump.
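A minimal numerical version of this broken power law, with the indices of Eqs. (2)–(4) made continuous at the breaks and the normalization K fixed by the quoted luminosity, might look like the following (the grid and normalization scheme are implementation choices, not values from the fits):

```python
import numpy as np

H, EV = 6.626e-27, 1.602e-12        # erg s, erg per eV

def sed_shape(e_ev):
    """Piecewise power-law F_nu with the indices of Eqs. (2)-(4),
    continuous at the 13.6 eV and 1 keV breaks (arbitrary units)."""
    e = np.asarray(e_ev, dtype=float)
    f = np.where(e < 13.6, (e / 13.6) ** -1.0, (e / 13.6) ** -1.4)
    f = np.where(e >= 1000.0,
                 (1000.0 / 13.6) ** -1.4 * (e / 1000.0) ** -0.5, f)
    return f

# Fix K from the quoted 13.6 eV -- 1 keV luminosity of ~1.2e43 erg/s.
e = np.logspace(np.log10(13.6), 3.0, 4000)   # eV
nu = e * EV / H                              # Hz
K = 1.2e43 / np.trapz(sed_shape(e), nu)      # = L_nu at the Lyman limit
```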
There have been several attempts to constrain the physical properties of the X-ray absorber using photoionization models (Yaqoob, Warwick, & Pounds 1989; Weaver et al. 1994; Warwick, Done, & Smith 1995) and, while it is likely to consist of multiple components, there is general agreement that U is in the range 0.2 – 1.1, with column density N<sub>H</sub> $``$ 10<sup>23</sup> cm<sup>-2</sup>, where N<sub>H</sub> is the sum of both ionized and neutral hydrogen. One effect of the X-ray absorber is to produce deep O VII and O VIII absorption edges (739 eV and 871 eV, respectively). Absorption of continuum photons at these energies results in a lower electron temperature in the NLR gas, particularly in the partially ionized, X-ray heated envelope behind the H<sup>+</sup>/H<sup>0</sup> transition zone. In addition to the large column of X-ray absorbing material, there is a component of lower column, lower ionization gas, which is the source of the UV resonance line absorption which has been observed with IUE (Bromage et al. 1985) and HST (Weymann et al. 1997). Kriss (1998) has modeled several different kinematic components of the UV absorber with N<sub>H</sub> $``$ 10<sup>19.5</sup> cm<sup>-2</sup> and U $``$ 10<sup>-3.0</sup>. Such an absorber is optically thick at the He II Lyman limit and would produce the effects on emission-line diagnostics discussed in Kraemer et al. (1999).
For our models, we used a single X-ray absorber and a single UV absorber along each of the two lines-of-sight (SW and NE). For the X-ray absorber we obtained the best fit to the average NLR conditions by assuming U $`=`$ 1.0 and N<sub>H</sub> $`=`$ 3.2 x 10<sup>22</sup> cm<sup>-2</sup>. For the UV absorber, we used U $`=`$ 10<sup>-3.0</sup> and N<sub>H</sub> $`=`$ 2.8 x 10<sup>19</sup> cm<sup>-2</sup> and 5.6 x 10<sup>19</sup> cm<sup>-2</sup> for the southwest (SW) and northeast (NE) sides, respectively. The different column densities were chosen to fit the differences in the relative strengths of the \[O III\] $`\lambda `$4363 and He II $`\lambda `$4686 lines on each side of the nucleus. The unabsorbed and transmitted SEDs are shown in Figures 2 and 3. Note that we have assumed that the UV absorber is external to the X-ray absorber, although the NLR models are not particularly sensitive to the relative locations of the absorbers.
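To illustrate how such absorbers reshape the transmitted continuum, the following schematic applies edge-like photoelectric attenuation to the intrinsic SED; the edge optical depths are illustrative placeholders, not the values produced by our photoionization models, and a real calculation must treat the ionized absorbers self-consistently as described above.

```python
import numpy as np

def transmission(e_ev, edges):
    """Schematic transmission: each edge contributes
    tau ~ tau_edge * (E/E_edge)^-3 above its threshold energy."""
    e = np.asarray(e_ev, dtype=float)
    tau = np.zeros_like(e)
    for e_edge, tau_edge in edges:
        above = e >= e_edge
        tau[above] += tau_edge * (e[above] / e_edge) ** -3
    return np.exp(-tau)

uv_absorber = [(54.4, 2.0)]                   # optically thick He II edge
xray_absorber = [(739.0, 1.5), (871.0, 1.0)]  # O VII / O VIII edges

e = np.logspace(1.0, 3.5, 500)                # 10 eV to ~3 keV
t_total = transmission(e, uv_absorber) * transmission(e, xray_absorber)
```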
### 4.2 Model Input Parameters
For the sake of simplicity, we have assumed that the measured distances are the true radial distances on the NE side of the nucleus (note that we have assumed H<sub>0</sub> = 50 km s<sup>-1</sup> Mpc<sup>-1</sup> for this analysis, which places NGC 4151 at a distance of 20 Mpc, and thus 1$`^{\prime \prime }.`$0 corresponds to 100 pc; this was done so that our results could be compared more easily with previous studies of the NLR). In our model, the smaller column of the UV absorber towards the SW permits more ionizing photons to reach the NLR. Since we do not see strong evidence for higher ionization in the SW, we have scaled the SW distances by a factor of 1.2. Interestingly, on the SW side we are seeing some clouds which may have a larger component of velocity in our line-of-sight (Kaiser et al. 1999), which indicates a larger projection effect and supports the scaling of observed radial distances.
Although the UV semi-forbidden emission lines from doubly ionized carbon, nitrogen, and oxygen can be used to determine the relative abundances of each element (Netzer 1997), the poor signal-to-noise ratio of our long-slit data between 1650 Å and 1800 Å makes reliable measurement of the strengths of O III\] $`\lambda `$1663 and N III\] $`\lambda `$1750 impossible. However, the relative strengths of the optical forbidden lines show no evidence of peculiar abundances. Therefore, we have assumed roughly solar abundances for these models (cf. Grevesse & Anders 1989). The abundances, relative to H by number, are He=0.1, C=3.4x10<sup>-4</sup>, O=6.8x10<sup>-4</sup>, N=1.2x10<sup>-4</sup>, Ne=1.1x10<sup>-4</sup>, S=1.5x10<sup>-5</sup>, Si=3.1x10<sup>-5</sup>, Mg=3.3x10<sup>-5</sup>, and Fe=4.0x10<sup>-5</sup>. However, as we will discuss, there is evidence for enhancement of iron close to the nucleus.
As we discussed in Section 3, the L$`\alpha `$/H$`\beta `$ ratio is strong evidence that there is cosmic dust mixed in with the emission-line gas. However, both C II\] $`\lambda `$2328 and Mg II $`\lambda `$2800 are relatively strong lines (about equal to H$`\beta `$, after correcting for reddening), which would indicate that a substantial fraction of these elements remains in the gas phase. Therefore, we have assumed that the amounts of silicate and carbonaceous dust grains are 50% and 20% of the Galactic values, respectively, with depletions of elements onto dust grains scaled accordingly, as per Seab & Shull (1983).
At all of the radial locations sampled, the \[O II\] $`\lambda `$3727 and \[S II\] $`\lambda \lambda `$6716, 6731 lines are among the strongest optical emission lines. Since the critical electron density for \[O II\] $`\lambda `$3727 is n<sub>e</sub> $`=`$ 4.5 x 10<sup>3</sup> cm<sup>-3</sup> (de Robertis & Osterbrock 1984), a substantial fraction of the emission-line gas must be at densities n<sub>H</sub> $``$ a few times 10<sup>4</sup> cm<sup>-3</sup>, although some of this emission may be due to projection of low density gas at larger radial distances. The average ratio of \[S II\] $`\lambda `$6716/$`\lambda `$6731 is near unity, which is consistent with these lines arising in the partially ionized envelope associated with the \[O II\] clouds, and, hence, where n<sub>e</sub> $`<`$ n<sub>H</sub>.
The spectra at each radial point show emission lines from a wide range in ionization state. For example, emission from O<sup>0</sup> through O<sup>+3</sup> is detected at every location, and, in several cases, N<sup>0</sup> through N<sup>+4</sup> are seen. It is plausible that the wide range in physical conditions indicated by the emission-line spectra is the result of local density inhomogeneities in the NLR gas. To model this effect, we have assumed that the gas in which the \[O II\] $`\lambda `$3727 emission arises is optically thick (radiation-bounded) and the principal source of hydrogen recombination radiation. To this we added a second component of less dense, optically thin (matter-bounded) gas, which is the source of the high ionization lines, such as C IV $`\lambda `$1550, \[Ne V\] $`\lambda \lambda `$3346, 3426, and most of the He II emission. The idea that high excitation lines arise in a tenuous component distributed through the NLR of NGC 4151 has been discussed by Korista & Ferland (1989), although in our models the matter-bounded component is co-located with, and has the same radial dependence in physical conditions as, the denser component. We determined the relative contributions of these two components by giving approximately equal weight to the predicted vs. observed fits for the following line ratios: \[O III\] $`\lambda `$5007/\[O II\] $`\lambda `$3727, He II $`\lambda `$4686/H$`\beta `$, and C IV $`\lambda `$1550/H$`\beta `$. The first is sensitive to the ionization parameter, the second to the SED and the ratio of matter-bounded to radiation-bounded gas, and the last is a tracer of the high-ionization component (although sensitive to the effects of dust). In one case, the red-shifted core (0$`^{\prime \prime }.`$1 – 0$`^{\prime \prime }.`$3 NE), the C III\] $`\lambda `$1909/H$`\beta `$ ratio was included, since the spectrum suggests the presence of an additional high density component in which the oxygen lines might be collisionally suppressed. Since there are no obvious constraints on the size of the matter-bounded component, other than the extraction length along the slit, we truncated the model such that its physical extent was approximately 10 times that of its radiation-bounded companion. We have not considered the case where the high density gas is embedded within the low density gas, although this is certainly possible. Thus, in our models, each component sees the ionizing source directly.
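Schematically, the composite line ratios follow from an H$`\beta `$-weighted sum over the components; a minimal sketch, with purely hypothetical per-component predictions, is:

```python
def composite_ratio(ratios, hbeta_fractions):
    """Hbeta-weighted combination of per-component line/Hbeta ratios."""
    return sum(f * r for f, r in zip(hbeta_fractions, ratios))

# Hypothetical C IV/Hbeta: 0.5 (radiation-bounded), 30 (matter-bounded),
# with the matter-bounded gas supplying 20% of the total Hbeta flux.
print(composite_ratio([0.5, 30.0], [0.8, 0.2]))   # -> 6.4
```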
The input parameters for the models are given in Table 3. As noted above, the density of the dominant (radiation-bounded) component is $``$ 2 x 10<sup>4</sup> cm<sup>-3</sup> at the innermost points, and falls off monotonically with radial distance. We have held U roughly constant for this component within the inner 1<sup>′′</sup> ($``$ 100 pc), allowing it to drop for the outermost points. The ionization parameter for the matter-bounded component was allowed to vary to produce the best fit to the observed emission-line ratios when its contribution was added to that of the radiation-bounded component; however, its density is also a decreasing function of radial distance. The component-weighted model densities as a function of radial distance are plotted in Figure 4, along with power law fits, showing that the density falls off as n<sub>H</sub> $``$ r<sup>-s</sup>, where s $`=`$ 1.6 to 1.7. The one exception to the two-component scheme is position 0$`^{\prime \prime }.`$1 – 0$`^{\prime \prime }.`$3 NE, for which three components were required; this might be expected, since projection effects are likely to be greatest at small projected radial distances.
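The power-law fits in Figure 4 amount to a least-squares fit in log-log space; a sketch with hypothetical densities standing in for the tabulated values:

```python
import numpy as np

# Hypothetical component-weighted densities vs. radial distance.
r_pc = np.array([4.0, 20.0, 40.0, 60.0, 100.0, 180.0])
n_h = np.array([2.0e4, 1.5e3, 5.0e2, 2.5e2, 1.1e2, 4.0e1])

# Fit n_H = A * r^-s as a straight line in log-log space.
slope, intercept = np.polyfit(np.log10(r_pc), np.log10(n_h), 1)
print(f"n_H ~ r^-{-slope:.2f}")   # s ~ 1.6 for these example points
```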
## 5 Model Results
The model results are shown graphically in Figures 5 and 6. As a measure of the goodness of the fit, we have included dotted lines to mark a factor of two divergence between the model prediction and the observed line strength. Given the simplicity of the models, the fact that the large majority of the emission lines are well fit validates our basic assumptions, i.e., that photoionization is the dominant mechanism for ionizing the NLR gas, and that the densities, dust fraction and abundances used for the models are approximately correct. Also, our estimated central source luminosity is sufficient to power the NLR, which implies that 1) there is no evidence for anisotropy of the continuum radiation, and 2) the average luminosity of the central source has been roughly the same for several hundred years. Furthermore, by fitting lines from ions with a wide range of ionization potential (e.g., 7.6 eV for Mg II; 97.0 eV for Ne V), the model predictions confirm that the SED of the ionizing continuum is correct, including the assumptions regarding the effects of the UV and X-ray absorbers.
### 5.1 Fit to the Observations
As expected, we have achieved good simultaneous fits for the strengths of C IV $`\lambda `$1550, \[O III\] $`\lambda `$5007, \[O II\] $`\lambda `$3727, and He II $`\lambda `$4686, relative to H$`\beta `$. Even though we weighted the contributions of the individual components based on these lines, the quality of the fit indicates that we have made a good approximation to the range of physical conditions at each point. Also, the composite models give good predictions for high ionization lines, such as O IV\] $`\lambda `$1402 and \[Ne V\] $`\lambda `$3426, and lower ionization lines, including those formed primarily in the H<sup>+</sup> zone, such as \[Ne III\] $`\lambda \lambda `$3869, 3968 and \[N II\] $`\lambda \lambda `$6548, 6584, and those formed in the H<sup>0</sup> zone, such as \[O I\] $`\lambda \lambda `$6300, 6364, \[N I\] $`\lambda \lambda `$5198, 5200, and \[S II\] $`\lambda \lambda `$6716, 6731. Not only does the quality of the fit further validate our simple two-component scheme, but the fact that lines from the neutral envelopes are well fit also supports the assumption that much of the NLR gas is radiation-bounded.
Our central hypothesis was that the NLR is irradiated by an ionizing continuum which has been absorbed by intervening gas close to the nucleus. As noted above, the fits to SED-sensitive lines, such as He II $`\lambda `$4686, and to lines from a wide range of ionization states indicate that the overall SED is approximately correct. The electron temperature is also sensitive to assumptions about the SED. For the models of the bins NE of the nucleus, the predicted ratio of \[O III\] $`\lambda `$4363/\[O III\] $`\lambda `$5007 is, on average, within a factor of $``$ 1.5 of that observed, which indicates a reasonably accurate prediction for the electron temperature. The relative strength of \[O III\] $`\lambda `$4363 is somewhat underpredicted in the models for the bins SW of the nucleus. We take this as an indication that we somewhat overestimated the absorption at energies greater than 100 eV, which is primarily due to the X-ray absorber (see Figure 3). Since the strength of \[O I\] $`\lambda `$6300 is sensitive to the electron temperature in the H<sup>0</sup> zone, the underprediction of \[O I\] $`\lambda `$6300 in the SW is additional evidence that we have overestimated the continuum absorption in the X-ray. Nevertheless, the main point is that the relative suppression of the He II line strength in gas with a fairly high state of excitation indeed results from ionization by a continuum which has a paucity of photons near the He II Lyman limit, as suggested by Alexander et al. (1999) and Kraemer et al. (1999), and it appears that much of the NLR in NGC 4151 sees the same continuum.
In Figure 7 we compare the observed ratio of \[S II\] $`\lambda `$6716/$`\lambda `$6731 to that predicted by the models. The observed ratio shows a general increase with radial distance, also seen in the model results, indicating a fall-off in electron density (Osterbrock 1989). The models overpredict the value of the $`\lambda `$6716/$`\lambda `$6731 ratio for several bins, particularly those closest to the nucleus, which is evidence that the actual densities are greater than those assumed for the models, although the differences are generally a factor of a few. This may result from our assumption of constant density for each component, since it is certainly plausible that the inner regions of the clouds are somewhat denser than the hotter, ionized outer parts, whether or not the clouds are pressure confined. Regarding pressure confinement, the two components are not in pressure equilibrium with one another, since the densities generally differ by factors of $``$ 30 (see Table 3), while the temperatures differ by factors of $``$ 2 (see Table 4). The simplest explanation is that the clouds are not fully pressure confined and are expanding as they traverse the NLR, which may be a natural explanation for the drop in density as a function of radial distance.
As noted in Section 3, the dereddened L$`\alpha `$/H$`\beta `$ ratios for some regions show the effects of resonance scattering and subsequent destruction of the L$`\alpha `$ photons by dust mixed in with the emission-line gas. Kraemer & Harrington (1986) discussed the importance of this process in the NLR of the Seyfert 2 galaxy Mrk 3, but with the current data we also gain insight into the geometry of the NLR of NGC 4151. As we have discussed, to calculate the effects of looking at the back-ends of the clouds, we have made the simple assumption that their physical size is no larger than that defined by our models. Based on our predictions for the L$`\alpha `$/H$`\beta `$ ratio for bins 0$`^{\prime \prime }.`$9 – 1$`^{\prime \prime }.`$1 SW, 1$`^{\prime \prime }.`$1 – 1$`^{\prime \prime }.`$5 SW, and 1$`^{\prime \prime }.`$9 – 2$`^{\prime \prime }.`$3 SW (see Figure 5), there is no evidence that these L$`\alpha `$ photons have a longer path length prior to escape. However, back-end escape greatly underpredicts the resonance lines formed in the neutral envelope of the radiation-bounded component, specifically Mg II $`\lambda `$2800 and C II $`\lambda `$1335. In each case, the model predictions of the relative strengths of these lines are within the errors if we calculate the escape out the front-ends of the clouds. Therefore, either the clouds are not as physically thick as we have assumed or there is a sufficient velocity gradient across the cloud that our assumptions about line profile and frequency shift (see Kraemer 1985) are no longer correct. In any case, based on our results for L$`\alpha `$, the photon path length, dust fraction internal to the clouds, and viewing aspect that we have assumed are all approximately correct. Also, there are several other clouds (bins 0$`^{\prime \prime }.`$5 – 0$`^{\prime \prime }.`$7 SW, 1$`^{\prime \prime }.`$5 – 1$`^{\prime \prime }.`$9 SW, 0$`^{\prime \prime }.`$3 – 0$`^{\prime \prime }.`$5 NE, and 0$`^{\prime \prime }.`$9 – 1$`^{\prime \prime }.`$1 NE) for which there is some suppression of L$`\alpha `$ (see Tables 1 and 2), although it is less extreme than in the cases we have addressed. This can be attributed to either an oblique viewing angle through the cloud or a back-end view through a smaller column of radiation-bounded gas, although, based on the strengths of the collisionally excited lines formed in the neutral envelopes, there is no evidence for the latter. As noted, the facts that the suppression of L$`\alpha `$ is much greater in the SW and is due to the effects of internal dust support our previous conclusions regarding the narrow-line kinematics (Hutchings et al. 1998), specifically that the narrow-line clouds in the inner $``$ 200 pc are in radial outflow from the nucleus.
In Table 3 we list the total column densities for the component models, which tend to be less than 2 x 10<sup>21</sup> cm<sup>-2</sup>. The actual column of hydrogen which line photons traverse while escaping the clouds is typically much less, since the emission lines are formed throughout the cloud, rather than strictly at either the surface or the deepest point. On the other hand, the average reddening over the inner 3<sup>′′</sup>, E<sub>B-V</sub> $``$ 0.18, indicates that we are observing the emission lines through columns of at least 2.5 x 10<sup>21</sup> cm<sup>-2</sup>, given the dust fraction assumed and the relationship between hydrogen column density and reddening determined for the Milky Way (Shull & Van Steenberg 1985). Therefore, the emission lines are more reddened than can be accounted for by the dust in the clouds in which they are formed (which is why we compared our model predictions to the dereddened line ratios in the first place). Instead, we are either viewing the clouds through a nonuniform external screen, since the reddening varies, or there are multiple dusty clouds along our line-of-sight in many cases, such that the farther ones are reddened by the dust in those nearer. There is another compelling reason to believe we are seeing a superposition of clouds, which will be discussed in Section 5.2.
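The column implied by the reddening can be checked with the Galactic gas-to-dust ratio; the scaling for the reduced dust fraction adopted in the models is an assumption of this sketch.

```python
N_H_PER_EBV = 5.2e21      # cm^-2 mag^-1, Galactic value (assumed)
DUST_FRACTION = 0.4       # rough average of the adopted grain depletions

ebv = 0.18                # average reddening over the inner 3 arcsec
n_h_min = N_H_PER_EBV * ebv / DUST_FRACTION
print(f"N_H >~ {n_h_min:.1e} cm^-2")   # ~2.3e21, comparable to the text
```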
We have assumed solar abundances for these models and, for all but one of the regions modeled, there are no discrepancies between the model predictions and the observations that could be attributable to differences in the abundances. The one exception is the blue-shifted core (0$`^{\prime \prime }.`$1 – 0$`^{\prime \prime }.`$3 SW), whose spectrum shows relatively strong lines of \[Fe VII\] ($`\lambda `$6087, $`\lambda `$5721, $`\lambda `$3760, and $`\lambda `$3588). The model underpredicts these lines by a factor of $``$ 5, although the fit for \[Ne V\] $`\lambda \lambda `$3426, 3346, which should arise in similar conditions, is good. We achieved a reasonable fit to the \[Fe VII\] lines by running a test model in which the iron abundance was increased by a factor of 2 and no iron was depleted onto grains. Therefore, we suggest that the fraction of iron in the gas phase is higher in this region, perhaps due to liberation of iron from dust grains combined with a higher iron abundance. There is evidence for iron enhancement in the nucleus of NGC 1068 (Netzer & Turner 1997; Kraemer et al. 1998), so it is not surprising that a similar effect is seen in the inner NLR of NGC 4151.
### 5.2 Open Issues
Although the match between the model predictions and the observations is generally good, there are several apparent discrepancies. First of all, the predictions for N V $`\lambda `$1240 are generally low. This line is somewhat suppressed due to the resonance scattering and dust destruction process discussed previously. If the dust is not uniformly distributed in the more tenuous components, we might expect to see less suppression of N V $`\lambda `$1240 than of the other strong resonance lines, for example C IV $`\lambda `$1550, since the N<sup>+4</sup> zone in these models does not fully overlap the C<sup>+3</sup> zone and is nearer the illuminated surface of the cloud. Another possible explanation is that the N V line is enhanced through scattering of continuum radiation by N<sup>+4</sup> ions (cf. Hamann & Korista 1996), an effect which is not included in our photoionization code. This effect is particularly important if there is significant turbulence in the clouds. Finally, the N V $`\lambda `$1240 line may arise in gas that is not modeled in our simple two-component scheme. For example, it is possible that there is an even more tenuous, more highly ionized component (U $``$ 1.0) filling the narrow-line region. If the narrow-line clouds are driven outward by a wind originating in the inner nucleus (cf. Krolik & Begelman 1986), it may be that the N V emission arises in the wind itself. The models also underpredict the C II $`\lambda `$1335 strength, which is most likely due to enhancement by resonance scattering of continuum emission, particularly since the models predict columns of C<sup>+</sup> for the radiation-bounded components of typically $``$ 10<sup>18</sup> cm<sup>-2</sup> for the clouds within 100 pc of the nucleus. Heckman et al. (1997) have found evidence for this effect in the spectrum of the Seyfert 2 galaxy I Zw 92 (Mrk 477). The line may also be affected by the way in which the reddening correction is determined, as we discuss below.
As can be seen from the ratios of the model/observed line strengths plotted in Figures 5 and 6, lines in the near UV band (1700 – 3000 Å) show what may be a systematic trend. There are a large number of regions for which C III\] $`\lambda `$1909, C II\] $`\lambda `$2326 and \[Ne IV\] $`\lambda `$2423 are underpredicted, particularly on the SW side of the nucleus. Since these lines arise in the same physical conditions as several well-fit optical lines (e.g. \[O III\] $`\lambda `$5007, \[N II\] $`\lambda \lambda `$6548, 6584, etc.), it is unlikely that their underprediction indicates the presence of another component of gas. The fact that the problem is worse in the SW points to the reddening correction as the cause, since we are observing those clouds through a larger column of dust, much of it internal to the emission-line clouds. However, the problem appears to be specific to the near UV band, since there are no apparent systematic discrepancies in the far UV spectra (1200 – 1700 Å). If the strength of the 2200 Å feature in the interstellar medium of NGC 4151 is less than that in the Milky Way, we may have overestimated the intrinsic strengths of the C II\] $`\lambda `$2326 and \[Ne IV\] $`\lambda `$2423 lines. Another explanation is required for the discrepant C III\] $`\lambda `$1909 predictions. It is interesting to note that those regions with the worst fit for the C III\] line (bins 0$`^{\prime \prime }.`$5 – 0$`^{\prime \prime }.`$7 NE, 0$`^{\prime \prime }.`$3 – 0$`^{\prime \prime }.`$5 SW, 0$`^{\prime \prime }.`$7 – 0$`^{\prime \prime }.`$9 SW, and 0$`^{\prime \prime }.`$9 – 1$`^{\prime \prime }.`$1 SW) have reddening-corrected H$`\alpha `$/H$`\beta `$ ratios somewhat below Case B ($``$ 2.9; Osterbrock 1974), which implies that these lines have been overcorrected for extinction. The problem may be the manner in which the reddening is determined, specifically from the ratio of He II $`\lambda `$1640/He II $`\lambda `$4686. We have assumed that the He II lines are viewed through the same column of dust as the other emission lines. This may not be a robust assumption, since most of the He II emission comes from the matter-bounded components. Unfortunately, the physical sizes and geometry of the emitting regions are not well constrained by our simple models, so it is not possible to quantify this effect. In any case, it is likely that the combined effects of the uncertainty in the extinction curve in NGC 4151 and different viewing angles through inhomogeneous emission-line regions account for these discrepancies.
In Table 4, we list the model predictions for the H$`\beta `$ flux emitted at the illuminated face of the cloud. We estimate the size of the emitting area, also given in Table 4, by dividing the observed H$`\beta `$ luminosity by the emitted flux. For the matter-bounded clouds, this area tends to be larger than the projected area of the bin, which suggests 1) clouds with larger depths in the line of sight, or 2) multiple clouds along the line of sight. We also estimate the depths of the emitting regions by dividing these areas by the slit width (10 pc). Within the inner 100 pc, the depths are not greater than the apparent radial distance, and are therefore reasonable in our estimation. At larger radial distances, the emitting area for the matter-bounded component becomes quite large, and would imply that we are seeing the sum of emission from up to several hundreds of parsecs into the galaxy. Note, however, that we arbitrarily truncated the size of the matter-bounded components. Increasing their column densities would result in a decrease in the size of the required emitting area. Therefore, the model predictions may be an overestimate of the depth into the galaxy.
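The area and depth estimates of Table 4 reduce to simple arithmetic; a sketch with hypothetical numbers:

```python
PC = 3.086e18                  # cm per parsec

l_hbeta = 1.0e39               # hypothetical observed Hbeta luminosity, erg/s
f_hbeta = 2.0e-2               # hypothetical model emitted flux, erg/s/cm^2
slit_width_pc = 10.0           # slit width, as quoted in the text

area = l_hbeta / f_hbeta                      # required emitting area, cm^2
depth_pc = area / (slit_width_pc * PC) / PC   # implied depth along the slit
print(f"area = {area:.1e} cm^2, depth ~ {depth_pc:.0f} pc")
```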
One minor problem with the models is that they generally overpredict the strengths of the \[S III\] $`\lambda `$9069 and $`\lambda `$9532 lines. Osterbrock, Tran, & Veilleux (1992) discussed this problem in the context of the spectral properties of a sample of Seyfert galaxies and suggested that the observed weakness of these lines may indicate a relative underabundance of sulfur. However, accurate dielectronic recombination rates have not yet been calculated for sulfur (Ali et al. 1991) and, as a result, the model predictions may not be reliable (in fact, it is not clear what rates were used by Stasinska (1984), whose results were used by Osterbrock et al.). For this dataset, the problem with the \[S III\] lines may be instrumental. Using the G750L grating with the STIS CCD at these wavelengths produces an instrumentally generated interference, or “fringing”, pattern (Plait & Bohlin 1998). In our data, the fringing, coupled with the method for subtracting the scattered nuclear spectrum, appears to have caused a certain amount of destructive interference: we estimate that the observed strengths of the \[S III\] lines have been underestimated by as much as a factor of two at some locations. Given this, the apparent overprediction of these lines is not a serious problem. The fringing effects can be corrected using a contemporaneous CCD flat-field exposure; we will have this in our subsequent STIS observations of NGC 4151, and will re-evaluate the \[S III\] problem at that time.
## 6 Implications for the Absorbers
We have shown that the NLR of NGC 4151, at least along one position angle, is ionized by an absorbed continuum, and, thus, there are important implications regarding the distribution of gas in the inner nuclear regions of the galaxy. We should note that we have not placed stringent constraints on the location of the absorber. Specifically, we have not yet discussed the possibility that the absorption could be due to the narrow-line clouds themselves. However, there are good reasons to reject such a model. We obtained a good fit to the emission-line ratios at each radial point assuming that much of the gas is optically thick to the ionizing radiation, at both the He II and hydrogen Lyman limits, unlike the gas that is producing the UV absorption, which is optically thick only at the He II Lyman limit. The one location where there may be evidence for a component with physical conditions similar to the UV absorber is the red-shifted core (0$`^{\prime \prime }.`$1 – 0$`^{\prime \prime }.`$3 NE), specifically the high density, matter-bounded component (see Table 3). However, at the radial distance of this component ($``$ 4 pc from the nucleus) the covering factor is $`<<`$ 0.01, assuming an ionization cone with an opening angle of 75<sup>o</sup> (Evans et al. 1993), so there could be no effect on clouds further out. It is possible that the X-ray absorber extends further into the NLR, and we have argued that there may be a component of high ionization, tenuous gas that contributes some of the N V emission. However, we estimate from U and the size of the emitting regions that the column densities of such a component could not be much greater than a few x 10<sup>21</sup> cm<sup>-2</sup>, and are therefore insufficient to produce the depth of the absorption features in the X-ray required for our models. Furthermore, we do not see any trends with radial distance, such as weaker neutral lines or \[O III\] $`\lambda `$4363 emission, that would indicate a drop in electron temperature, and thus an increasing column of X-ray absorber. Thus, it is most likely that both the UV and X-ray absorbers lie within a few parsecs of the nucleus, as is apparently the case for the absorbers along our line-of-sight.
Based on the fraction of Seyferts found to possess UV and X-ray absorption, the absorber must have a covering factor of 0.5 – 1.0 (Crenshaw et al. 1999). Our analysis suggests that, along the position angle of these data, the absorber covers the lines of sight to the NLR, which is in agreement with the average NLR conditions in NGC 4151 discussed by Alexander et al. (1999). Unfortunately, we cannot determine the distribution of NLR gas into the plane of the galaxy from the current dataset, as we noted in Section 5.1, although there are likely to be large projection effects. We have medium resolution ($`\lambda `$/$`\mathrm{\Delta }\lambda `$ $``$ 10,000) slitted observations planned with HST/STIS which may provide better constraints on the distribution of the narrow-line gas and help resolve this question.
## 7 Conclusions
We have examined the physical conditions, along PA 221<sup>o</sup>, in the narrow-line gas in the inner $``$ 250 pc of the Seyfert 1 galaxy NGC 4151, using low resolution, long slit data obtained with HST/STIS. We have measured the emission-line fluxes at contiguous radial points along the slit, performed reddening corrections based on the He II $`\lambda `$1640/$`\lambda `$4686 ratio, and compared the results with the predictions of detailed photoionization models. The main results are as follows:
1. The conditions in the NLR gas are the result of photoionization by continuum radiation emitted by the central active nucleus. If additional excitation mechanisms are present, such as shocks or interaction between the radio jet and the interstellar gas, the effects must be quite subtle, since the heating and ionization of the gas can be successfully modeled by photoionization effects. Note, however, that this position angle does not lie directly along the radio jet (see Kaiser et al. 1999, and references therein).
2. It is clear that we are also seeing the effects of dust embedded in the clouds. Specifically, the L$`\alpha `$/H$`\beta `$ ratio on the SW side of the nucleus is much lower than predicted by the combination of recombination and collisional excitation, and this is almost certainly due to line transfer effects within dusty gas. The simplest interpretation is that the clouds on the SW are viewed out the back-end, rather than from the illuminated face. We used the photoionization models to explore this effect and have demonstrated that the column densities and dust fraction that we assumed can produce the observed suppression of L$`\alpha `$. This is further proof that the clouds in the inner NLR are outflowing radially from the nucleus, with the SW side approaching us and the NE side receding from us, as discussed in our previous papers (Hutchings et al. 1998; Kaiser et al. 1999).
Although our model predictions indicate that the fraction of dust within the emission-line gas is roughly constant across the NLR, the amount of reddening varies. This could be due to variations in the column density of an external dust screen, but it can also be the result of the stacking of NLR clouds along our line-of-sight. We prefer the latter explanation, since there is evidence, based on the predicted H$`\beta `$ fluxes, that we are seeing the superposition of clouds.
3. We have confirmed the earlier results of Kraemer et al. (1999) and Alexander et al. (1999), which indicated that the ionizing continuum which irradiates the NLR in NGC 4151 has been absorbed by an intervening layer of gas close to the nucleus. With these models we have put much more realistic constraints on the SED of the transmitted continuum. The fact that the NLR sees an absorbed continuum implies that the covering factor of the absorbing material is large, as Crenshaw et al. (1999) have suggested.
S.B.K. and D.M.C. acknowledge support from NASA grant NAG 5-4103.
# The Energy Dependence of the Aperiodic Variability for Cygnus X-1, GX 339-4, GRS 1758-258 & 1E 1740.7-2942
## 1 Introduction
The galactic black-hole candidates (GBHCs) are known to exhibit rapid variability on time scales of hours down to milliseconds. This rapid aperiodic and quasiperiodic variability provides a diagnostic tool for understanding the nature of the high-energy emission in these objects (for reviews see van der Klis (1995); Cui (1998); Liang (1998)). The power density spectrum (PDS) of the variation is one of the widely studied properties. Generally the PDS of GBHCs consists of a flat plateau at low frequencies (red noise) up to a break frequency $`f_1`$ and a power-law, typically f<sup>-1</sup>, for $`f>f_1`$. In several sources, a second break is observed at frequency $`f_2`$, beyond which the PDS turns into a steeper power-law, typically f<sup>-β</sup> with $`\beta `$ between 1.5 and 2. Hereafter, we will refer to this structure using a double broken power-law model (although it can generally be equally well described using two zero-centered Lorentzians). Superposed on the above shape, some GBHCs show “peaked noise” or QPOs in the power-law section of the PDS. The QPOs, which may be related to dynamic processes in the accretion disks, have received widespread attention. In this paper, however, we focus on the energy dependence of the overall PDS.
Several X-ray emission models of the GBHCs have suggested that the PDS should be energy dependent. Hua et al. (1997) have proposed that Comptonization in an extended hot, thermal corona may be responsible for the PDS shape and the hard time lags of GBHCs. Böttcher & Liang (1998) studied the pure Comptonization model in great detail for different source geometries. They assumed that the X-ray emission is generated by the upscattering of soft photons, which are injected from either a central or an external source. They showed that the Comptonization model with a central soft photon source generally has a steeper PDS at higher energies, but this steepening trend is less significant when the density gradient of the Comptonizing cloud increases. However, if the soft photon source is external, the steepening trend is less significant with a lower density gradient. To avoid the pure Comptonization model's unrealistic requirement of a huge (e.g. $`10^3`$$`10^4`$ $`R_g`$) Comptonizing corona, Böttcher & Liang (1999) proposed that the soft photons may be injected from a cool blob spiraling into the central object. In this scenario, their simulations show that the PDS becomes flatter with increasing energies. In the magnetic flare model of Poutanen & Fabian (1999), a pulse-avalanche model is invoked in order to reproduce the overall shape of the observed PDS of Cygnus X-1. In this model, the general shape does not change with energy except that the second break frequency is shifted to lower frequencies for higher energy bands (Poutanen & Fabian 1999; J. Poutanen 1999, personal communication).
Given these different model predictions, it is important to determine from the available data how the PDS shape and integrated rms amplitude evolve with energy. A flattening PDS, as suggested by the spiraling blob model, was observed in Cygnus X–1 (Cui et al. (1997); Nowak et al. (1998)). Smith & Liang (1999) reported that there were no significant PDS shape changes among the energy bands for an RXTE observation of GX 339–4, and that the rms amplitude decreased with increasing photon energy. GRS 1758–258, on the other hand, had the highest rms amplitude in the medium energy band (Lin et al. (1999)). These different behaviors may be intrinsic to the sources, and thus different emission mechanisms or viewing geometries would be necessary for different GBHCs in spite of their spectral similarities. To address this issue, we have systematically analyzed the RXTE archival data for four hard X-ray sources in their hard/low state: Cygnus X–1, GX 339–4, GRS 1758–258 and 1E 1740.7–2942. In section 2, we discuss how we select and reduce the data. In section 3, we show the analysis results. In section 4, we compare the results to various model predictions. In section 5, we discuss the results.
## 2 Data Selection and Reduction
The data were selected from the $`RXTE`$ archive over the period 1996 – 1997. We chose only observations that were long enough and had data formats suitable for an accurate energy-dependent analysis. In all cases, the sources were in the hard/low state, as determined by careful spectral analyses. The dates of the observations are listed in Table 1. In order to study the energy dependence of the variability, we divided the PCA effective energy range into three energy bands, the ranges of which are approximately: 2 – 5 keV (L), 5 – 10 keV (M), and 10 – 40 keV (H). Some of the archival data formats make the energy ranges slightly different from these values. The counts have been summed into time bins of 46.875 ms, and the power density spectra generated on intervals of 384 seconds. We subtracted from the data the background count rate estimated from the model L7/240 or (Q6+activation+X-ray). The contributions from the galactic diffuse emission, whose contamination of the X-ray flux of GRS 1758-258 and 1E 1740.7–2942 is significant, have also been subtracted from the light curves of the two sources, as described in Lin et al. (1999); see also Main et al. (1999).
The power density spectrum for each band was generated using the FTOOLS analysis package with the white noise subtracted. For comparative purposes, we calculated the PDS over the frequency range of 0.002 – 10 Hz for all the sources even though some of them could be analyzed up to higher frequencies. We checked that we did not introduce any artificial effects by limiting the frequency range. For each observation, the final power density spectra have been averaged over all the observation segments. We calculated the H/L and M/L band ratios to investigate how the PDS shapes change with energy (the error of the PDS ratio is the propagated error). For each band we also integrated the PDS over the frequency range of 0.002 – 10 Hz, and took the square root to calculate the overall integrated rms amplitude.
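For reference, the core of this procedure (fractional rms normalization, Poisson-noise subtraction, and integration of the PDS) can be sketched as follows; the Miyamoto-style normalization and pure-Poisson noise level are assumptions of the sketch, not a description of the exact FTOOLS implementation.

```python
import numpy as np

DT = 0.046875             # time bin (s), as in the text
SEG = 384.0               # segment length (s)
N = int(SEG / DT)         # 8192 bins per segment

def average_pds(segments, mean_rate):
    """Average (rms/mean)^2 Hz^-1 normalized PDS over equal-length
    segments, subtracting a pure Poisson noise level of 2/mean_rate."""
    freqs = np.fft.rfftfreq(N, DT)[1:]           # drop the DC bin
    power = np.zeros_like(freqs)
    for seg in segments:                         # seg: counts per bin
        ft = np.fft.rfft(seg - seg.mean())[1:]
        power += 2.0 * DT * np.abs(ft) ** 2 / (N * seg.mean() ** 2)
    return freqs, power / len(segments) - 2.0 / mean_rate

def integrated_rms(freqs, power, f_lo=0.002, f_hi=10.0):
    """Square root of the PDS integrated over the analysis band."""
    band = (freqs >= f_lo) & (freqs <= f_hi)
    return np.sqrt(np.trapz(power[band], freqs[band]))
```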
## 3 Analysis Results
### 3.1 Power Spectrum Shape
In Figure 1, we show an example of the energy dependent PDS and their ratios for one observation of Cygnus X-1. Similar results are seen in all seven observations of Cygnus X-1. The PDS ratios between the high- and low- energy bands significantly increase above 3 Hz. Below 3 Hz, the PDS ratios are consistent with constant values though there appears to be a slowly increasing trend. Moreover, we found that the two PDS ratios $`\mathrm{PDS}_\mathrm{H}/\mathrm{PDS}_\mathrm{L}`$ and $`\mathrm{PDS}_\mathrm{M}/\mathrm{PDS}_\mathrm{L}`$ start to increase roughly at the same frequency (3 Hz). Therefore, in terms of the double broken power law model, $`f_1`$ and $`f_2`$ are energy independent and $`\beta `$ decreases with energy. The decrease in $`\beta `$ means that the PDS shape becomes flatter at higher energies above $`f_2`$. This flattening trend in Cygnus X-1 has been previously reported (van der Klis (1995); Cui et al. (1997); Nowak et al. (1998)). The flat distribution of the PDS ratio over the frequency range between $`f_1`$ and $`f_2`$ indicates that the first power-law index, which is usually close to 1, indeed has little energy dependence.
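These statements are easy to visualize with the double broken power-law model itself; in the sketch below the two bands share $`f_1`$ and $`f_2`$ and differ only in $`\beta `$ (the parameter values are hypothetical), so the band ratio is flat below $`f_2`$ and rises above it, as observed.

```python
import numpy as np

def double_broken_pl(f, p0, f1, f2, beta):
    """Flat below f1, ~f^-1 between f1 and f2, ~f^-beta above f2."""
    f = np.asarray(f, dtype=float)
    p = np.full_like(f, p0)
    mid = (f > f1) & (f <= f2)
    hi = f > f2
    p[mid] = p0 * (f[mid] / f1) ** -1.0
    p[hi] = p0 * (f2 / f1) ** -1.0 * (f[hi] / f2) ** -beta
    return p

f = np.logspace(-2.7, 1.0, 300)
ratio = (double_broken_pl(f, 1.0, 0.2, 3.0, 1.6) /
         double_broken_pl(f, 1.0, 0.2, 3.0, 2.0))
# ratio == 1 below 3 Hz and rises as (f/3 Hz)^0.4 above it
```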
No significant trends are seen for the other three sources. In all six observations of GX 339–4, we found the PDS ratios to be consistent with constants over the frequency range of 0.002 – 10 Hz (e.g. Figure 2), and we did not see any significant increases beyond 10 Hz either. Constant PDS ratios mean that the PDS shapes are energy independent, and thus $`f_1`$, $`f_2`$ and $`\beta `$ of the double broken power law model have little energy dependence. Constant ratios were also found in all six observations of GRS 1758–258 (e.g. Figure 3), although detailed model fittings indicate that QPOs in the PDS are energy dependent (Lin et al. (1999)). In three out of the five observations of 1E 1740.7–2942 the PDS ratios are also consistent with constants (e.g. Figure 4). In the other two observations of 1E 1740.7–2942, we found several large flares in the light curves. These flares, which have significant energy dependence, may be due to the nearby burst source previously detected by $`RXTE`$ (Strohmayer et al. (1997)). The burst source is located $`0.8^{}`$ away from the PCA pointing position. By excluding these flares from the timing analysis, we found that the PDS ratios in these two observations are also consistent with constants over 0.002 – 10 Hz. A complete analysis of the flares found in our 1997 Nov 11–12 observation of 1E 1740.7–2942 will be described in a separate paper (Lin et al. (2000)), along with a full spectral analysis of 1E 1740.7–2942.
### 3.2 Overall Variability
The behaviors of the overall variability, represented by the integrated rms amplitude over the frequency range of 0.002 – 10 Hz, are significantly different among the four sources (Figure 5). 1E 1740.7–2942 has the lowest variability in the low energy band and has a general trend of increasing variability with energy except for the observation made in 1996 (MJD 50158), which has the highest variability in the medium band. GRS 1758–258 has the highest variability in the medium energy band and the lowest variability in the high band. GX 339–4 has a generally decreasing variability with increasing energy, except for two observations in which the medium energy band has slightly higher variability than the low band. Among the seven observations of Cygnus X–1, we found four cases with decreasing variability, one with constant variability and two with increasing variability, and the pattern appears to depend on the overall variability of the source.
Previously Cygnus X–1 and GX 339–4 have been reported to have little energy dependence (van der Klis (1995)). This does not contradict our results. We have one observation of Cygnus X–1 (on MJD 50725) in which the overall variability is energy independent, and two other observations in which the integrated rms amplitude changes by less than 0.012 from the low to high energy bands. Similarly, in one observation of GX 339–4 (on MJD 50636), we see a change of less than 0.028 in the integrated rms amplitude over the three energy bands. Such small effects would have been indistinguishable in the early measurements (e.g. Nolan et al. (1981); Maejima et al. (1984)).
When one considers that the spectra of the four sources are similar in the hard/low state, it seems surprising that the overall variability behaviors are so different. It is interesting to note that only one spectral property, the hydrogen column density due to the ISM and any material local to the source, is significantly different among the sources. The typical hydrogen column density for each source is shown in Figure 5. We see a tendency for the hydrogen column density to anti-correlate with the variability of the source, especially in the low energy band. One possible effect of the absorption on the X-ray variation is that it might diminish the variability of the X-ray flux by absorbing and scattering the X-ray photons. 1E 1740.7–2942, for example, has the highest value for $`N_H`$, so the diminishing effects on its X-ray flux can be significant up to 10 keV. This could make the lower two bands less variable if, for example, bright flares emitted from one region in the system are more absorbed than the general persistent emission.
By comparing the rms amplitudes with the related $`RXTE`$ all sky monitor (ASM) (2 – 12 keV) count rates, we found a tendency towards an anti-correlation between the overall variability and the X-ray flux (Table 1 and Figure 6). Due to the closeness of their values and the large statistical errors, the rms amplitudes and ASM count rates of the observations on MJD 50470 and 50517 for GRS 1758-258 have been combined, as have those of the observations on MJD 50158–61 and 50764 for 1E 1740.7-2942. This anti-correlation is consistent with previous timing results for Cygnus X–1, GX 339–4 and GS 2023+338 (van der Klis (1995); Nolan et al. (1981); Miyamoto et al. (1992)), and can also be identified in the long-term monitoring of 1E 1740.7–2942 and GRS 1758–258 (see Figure 2 in Main et al. 1999). Again, this anti-correlation may imply that the flaring and non-flaring emission regions are physically separated. With an increase in the non-flaring emission, the relative variation in the flux would decrease. We also compared the rms amplitude with the related BATSE Earth Occultation flux (20 – 120 keV), but no significant correlations were found.
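Given the small number of observations per source, a rank statistic is a natural way to quantify such a trend; a minimal sketch, with hypothetical (rms, ASM rate) pairs standing in for the values of Figure 6:

```python
import numpy as np

def spearman_rho(x, y):
    """Spearman rank correlation for a small sample (no ties assumed)."""
    rx = np.argsort(np.argsort(x)).astype(float)
    ry = np.argsort(np.argsort(y)).astype(float)
    rx -= rx.mean(); ry -= ry.mean()
    return (rx * ry).sum() / np.sqrt((rx ** 2).sum() * (ry ** 2).sum())

rms = np.array([0.42, 0.38, 0.35, 0.30, 0.26, 0.22])   # hypothetical
asm = np.array([2.0, 3.5, 5.0, 20.0, 45.0, 80.0])      # hypothetical
print(spearman_rho(rms, asm))   # negative -> anti-correlation
```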
## 4 Comparison with Aperiodic Variation Models
Since the discovery of rapid aperiodic X-ray intensity variations in Cygnus X–1 by Oda et al. (1971), various models of the aperiodic variations have been proposed. The early models were mostly phenomenological, such as shot models, but several more physical models have recently been proposed.
### 4.1 Shot Models
The shot models, originally proposed by Terrell (1972), are good at mimicking the observed light curves by superposing individual shots with properly chosen shot profiles. However, it is not clear how the shots are generated in the accretion disk (van der Klis (1995); Cui (1998)). Takeuchi, Mineshige & Negoro (1995) proposed a shot generation mechanism based on self-organized criticality (SOC). In the SOC model, the mass accretion takes place in the form of avalanches only when the local density exceeds some critical value. The model can produce the 1/f-like power density spectrum and predicts a positive correlation between the dissipated energy and the duration of the shot. The photon energy in each shot is determined by the local temperature of the disk where the shot is generated. Therefore, the SOC model alone does not make any predictions about the energy dependence of the PDS shape. Takeuchi & Mineshige (1997) further developed a more realistic SOC model to include the dynamic processes of advection-dominated accretion disks. The viscosity parameter $`\alpha `$ is switched to a higher value when the surface density exceeds a critical value. Under this prescribed critical condition and the assumption of a uniform disk temperature, they showed that the improved SOC model can generate light curves and PDS similar to the observed ones. For a uniform temperature disk, the time profile of the thermal bremsstrahlung emission is energy independent, and thus the PDS shape is also energy independent. To explain the energy dependent effects we have shown here, it will be necessary to expand this SOC model to the case of a multi-temperature disk. Such a multi-temperature disk is also needed to explain the X-ray energy spectra, which have power-law photon indices higher than 1.
Like the SOC model, the wave propagation model (Manmoto et al. (1996)) also requires an assumption for the temperature profile in the disk. The wave is generated by disturbances such as magnetic flares at the outer part of the disk and propagates into the event horizon without much damping. The wave model can account for the time profiles of individual X-ray shots, and predicts that the energy of the emitted photons increases as the wave propagates toward the horizon, because the local temperature of the disk increases with decreasing radius. This can explain the hard lags of the X-ray emission. The time profile of the X-ray flux in each energy band is determined by the velocity of the wave and the temperature profile of the disk. If the radial velocity is constant and the temperature distribution is linear along the radial direction, the time profiles of the shots are the same for all the energy bands, and thus the PDS shape should be energy independent. This is consistent with our analysis results for GRS 1758–258, 1E 1740.7–2942, and GX 339–4. A more realistic version of the model will be required to explain the energy dependent behavior of Cygnus X–1.
### 4.2 Comptonization Models
The hard/low state spectrum of GBHCs can best be interpreted as the unsaturated Comptonization of soft photons by hot thermal electrons (Katz (1976); Liang (1998)). However, measurements of the Fourier-frequency dependence of the time lags between the signals in different X-ray energy channels indicate that this pure Comptonization model is not applicable to a moderate-sized uniform plasma (Miyamoto et al. 1988, 1993). Kazanas et al. (1997) and Hua et al. (1997) found that Comptonization in a hot, inhomogeneous corona with a density profile n(r) $``$ r<sup>-1</sup> can fit the combined spectral and timing properties of Cygnus X-1, although the hot corona has to extend out to $``$ 1 light second from the black hole. Böttcher & Liang (1998) studied the isolated effects of Comptonization on the rapid aperiodic variability of black hole X-ray binaries in great detail, investigating different source geometries. In the case of a central soft photon source, corresponding to an accretion-disk corona geometry with a covering fraction of the corona close to unity, the PDS was generally found to become steeper for increasing photon energies. This trend becomes weaker as the density gradient of the hot, thermal corona becomes steeper. If the soft photon source is external to the hot corona, which corresponds to an ADAF-like geometry (Narayan & Yi (1994); Chen et al. (1995); Esin, McClintock & Narayan (1997); Luo & Liang (1998)) with a cool outer accretion disk or a radiation-pressure dominated, hot inner accretion disk solution (Shapiro, Lightman & Eardley (1976)), only a weak steepening of the PDS is predicted. This steepening trend is due to the averaging effect of Compton scatterings. Low energy photons, which have not gone through many up-scatterings, keep most of the variation of the soft photon input, while high energy photons, having been stored in the corona for a longer time $`t`$, reflect the average of the soft photon input over a time $`t`$, and thus become less variable at high frequencies. For the same reason, the overall rms amplitude of the variability is expected to decrease with increasing photon energy. These predictions are not consistent with the data presented here. No cases of a steepening PDS shape were found. Therefore, the pure Compton scattering model does not fully explain the emission mechanism of GBHCs, even for an inhomogeneous corona.
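The averaging effect can be illustrated with a toy calculation: smoothing a variable soft-photon input with a broader escape-time kernel (standing in for the longer storage time of high-energy photons) suppresses the high-frequency power. This is only a schematic of the effect, not the transfer functions of Böttcher & Liang (1998).

```python
import numpy as np

rng = np.random.default_rng(0)
dt, n = 0.01, 2 ** 15
soft = rng.standard_normal(n)      # white-noise soft-photon input

def escape_smoothed(signal, t_esc):
    """Convolve with an exponential escape-time kernel of mean t_esc."""
    t = np.arange(0.0, 10.0 * t_esc, dt)
    kernel = np.exp(-t / t_esc)
    return np.convolve(signal, kernel / kernel.sum(), mode="same")

def pds(signal):
    ft = np.fft.rfft(signal - signal.mean())
    return np.fft.rfftfreq(n, dt)[1:], np.abs(ft[1:]) ** 2

f, p_low = pds(escape_smoothed(soft, t_esc=0.05))   # few scatterings
f, p_high = pds(escape_smoothed(soft, t_esc=0.5))   # many scatterings
# p_high/p_low falls off above ~1/(2 pi * 0.5 s): steeper high-energy PDS
```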
To overcome the problems with the pure Comptonization model, Böttcher & Liang (1999) developed a drifting blob model. It was motivated by the inhomogeneous accretion equilibrium found by Krolik (1998) and assumes that dense blobs of cool material are emitting soft photons and spiraling inward through a hot, optically thin inner accretion flow, which could be an ADAF or a Shapiro-Lightman-Eardley type hot inner disk. In this model, the hard time lags are due to the radial drifting time that the dense blobs spend in traveling from the cool to the hot parts of the accretion flow. The model does not require a big Comptonization corona to account for the hard time lags, and thus the photon diffusion time is negligible. The photon flux resulting from a single drifting blob has significantly different time profiles in different energy bands. The low energy photon flux persists for the whole drifting period because most of the soft photons being steadily emitted by the drifting blob emerge from the corona without being upscattered to higher energy. On the other hand, most of the high energy photons emerge only at later times, when the blob drifts to the inner hot part of the disk. Therefore, the high energy shots are much narrower than the low energy shots, and thus have a broader band in frequency space. Consequently, the PDS is predicted to become flatter with increasing photon energy. This is consistent with the observations of Cygnus X–1, which has flatter power spectra at higher energies. In order to reproduce the complete PDS of individual objects, additional assumptions about the production of these inward-drifting blobs are required. In the simplest picture, the overall rms amplitude of the variability is expected to increase with increasing photon energy. Since we have shown here that this is not always the case, a more detailed model may be required.
Another Compton scattering related model is the magnetic flare model proposed by Poutanen & Fabian (1999), which involves active regions of a patchy corona above an optically thick accretion disk. Driven by hydrodynamic and radiation pressure, the active regions move away from the disk and are heated by magnetic energy. As the active regions move outward, they become hotter because there is less Compton cooling. The feedback radiation from the active regions also heats up the disk regions that provide the soft photons for the Compton scatterings in the active regions. These heating processes make the observed photon energy increase as the flares proceed, and thus the low energy photon flux reaches a maximum earlier than the high energy flux. The light curves at different energies are self-similar but with higher characteristic time scales for higher energy photons. Consequently, the power density spectra at different energies have similar shapes, with an energy-independent $`\beta `$ and a decreasing $`f_2`$ at higher energies. The first break of the power law, $`f_1`$, is determined by the avalanche processes, and thus it is energy independent. The PDS ratios would then decrease at frequencies above $`f_1`$. However, we do not see this effect in any of our observations, implying that this version of the magnetic flare model is not supported.
## 5 Discussion
Our analysis shows that the classical black hole candidate Cygnus X–1 has significantly different temporal properties from 1E 1740.7–2942, GRS 1758–258 and GX 339–4, in spite of the similarities in their energy spectra. The different energy dependence of the PDS shape may indicate that the emission mechanism in Cygnus X–1 is different from the other three GBHCs. If this is true, it will be necessary to develop accretion disk models that can accommodate this difference. The drifting blob model, for example, could be the dominant mechanism for Cygnus X-1, while other mechanisms such as the wave propagation model may be more appropriate for the other three sources. However, these models do not exclude each other. Each of the models may describe just one aspect of the emission process. The SOC and wave models, for example, could serve as the blob-generation mechanisms for the drifting blob model. Combining these models would certainly make them more realistic and robust.
The analysis of the overall variability is complicated by the fact that the X-ray light curves may consist of two components: the shot component and the persistent component (Takeuchi & Mineshige 1997), which may have different origins and spectra. It is difficult to separate these two components spectrally and temporally. The integrated rms amplitude reflects the total variability of the source emission, and it is hard to compare it directly with models that usually simulate only the shot processes. The SOC model is an exception. It generates shots using processes that occur in the steady accretion flow and, therefore, links the two components together. However, the energy dependence of the variability is still an open question for the SOC model.
The reason for the anticorrelation found between the integrated rms amplitude and the X-ray flux may also lie in the superposition of the shot and persistent components. If the radiation, for example, is dominated by the persistent component, any increase in the persistent flux would increase the total flux and thus reduce the relative variability, i.e., the integrated rms amplitude. If the emitting regions of the two components are physically separate, and there is locally more absorbing material around the shot-emitting regions, then high absorption would mean less variability. This would explain the anticorrelation between the rms amplitude and the hydrogen column density.
This work was supported by NASA grant NAG 5-3824 at Rice University. We thank Wei Cui and Juri Poutanen for their helpful discussions, David Smith for his assistance with the background determination for 1E 1740.7–2942 and GRS 1758–258, Jim Lochner and Gail Rohrbach of the $`RXTE`$ team for their help with the data processing, and the anonymous referee for invaluable comments and suggestions.
no-problem/9910/cs9910015.html | ar5iv | text | # PIPE: Personalizing Recommendations via Partial Evaluation
## 1 Introduction
Personalization of web content constitutes one of the fastest growing segments of the Internet economy today. It helps to retain customers, reduces information overload, and enables mass customization in E-commerce.
Two main approaches have been proposed to conduct personalization. The simplest are web search engines and information filtering schemes, which use content-based techniques to alleviate information overload. They, however, harness only a small fraction of the indexable web (one study estimates this to be $`<30\%`$), and still require users to sift through a multitude of results to determine relevant selections. The low coverage of search engines is attributed to at least two reasons: (i) a majority of web pages are dynamically generated (and hence not directly accessible via hyperlinks), and (ii) lack of sophisticated conceptual models for web information retrieval. At the other end of the spectrum, collaborative filtering techniques mine user access patterns, web logs, preferences, and profiles to precisely tailor the content provided (“Since you liked ‘Sense and Sensibility,’ you might be interested in ‘Pride and Prejudice’ too”) at specific sites. As businesses race to provide comprehensive experiences to web visitors, various combinations of these two approaches are used. This has spawned a multimillion dollar industry (NetPerceptions, Imana etc.) that provides custom-built personalization solutions for individual client specifications. See the web portal www.personalization.com for information on all aspects of this industry.
In this article, a customizable methodology called PIPE is presented that can be used to design personalization systems for a specific application (involving a collection of sites). PIPE allows the incorporation of both content-based and collaborative filtering techniques. It supports information integration, varying levels of input by web visitors, and facilitates ease of construction by a skilled systems engineer. For example, a designer wishing to construct a personalization facility for ‘wines’ can model various web resources (pertaining to this domain) using PIPE and create a facility that customizes content for visitors, based on wine preferences and attributes.
The rest of the article is organized as follows. Section 2 introduces partial evaluation — the key concept behind the methodology of PIPE. Sections 3 and 4 develop this idea further and present various schemes that extend it to large domains. Section 5 presents two case studies implemented using this framework. The evaluation of both these implementations are undertaken next, in Section 6. Section 7 discusses various aspects of the PIPE framework, its evaluation, and how it relates to other approaches to personalization.
## 2 Personalization is Partial Evaluation
The central contribution of this article is to model personalization by the programmatic notion of partial evaluation. Partial evaluation is a technique to specialize programs, given incomplete information about their input. The methodology presented here models a web site as a program (which abstracts the underlying schema of organization), partially evaluates the program with respect to user input, and recreates a personalized web site from the specialized program.
The input to a partial evaluator is a program and (some) static information about its arguments. Its output is a specialized version of this program (typically in the same language), which uses the static information to ‘pre-compile’ as many operations as possible. A simple example is how the C function pow can be specialized to create a new function, say pow2, that computes the square of an integer. Consider, for example, the definition of a power function shown in the left part of Fig. 1 (grossly simplified for presentation purposes). If we knew that a particular user will utilize it only for computing squares of integers, we could specialize it (for that user) to produce the pow2 function. Thus, pow2 is obtained automatically (not by a human programmer) from pow by precomputing all expressions that involve exponent, by unfolding the for-loop, and by various other compiler transformations such as copy propagation and forward substitution. Its benefit is obvious when we consider a higher-level loop that would invoke pow repeatedly for computing, say, the squares of all integers from 1 through 100. Partial evaluation is now used in a wide variety of applications (scientific computing, database systems etc.) to achieve speedup in highly parameterized environments. Automatic program specializers are available for C, FORTRAN, PROLOG, LISP, and several other important languages. The interested reader is referred to the literature for a good introduction. While the traditional motivation for using partial evaluation is to achieve speedup and/or remove interpretation overhead, it can also be viewed as a technique to simplify program presentation, by removing inapplicable, unnecessary, and ‘uninteresting’ information (based on user criteria) from a program.
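Fig. 1 is not reproduced in this version; the following is our self-contained rendering of the standard example (renamed pow_int here only to avoid colliding with the C library's pow):

```c
/* Generic power function (grossly simplified, as in the text). */
int pow_int(int base, int exponent)
{
    int result = 1;
    for (int i = 0; i < exponent; i++)
        result *= base;
    return result;
}

/* Residual function after partial evaluation with exponent = 2:
   the loop is unfolded and all exponent handling is precomputed. */
int pow2(int base)
{
    return base * base;
}
```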
Example 1: We now present a simple example to illustrate how a web site can be abstracted as a program for partial evaluation. Consider a congressional web site, organized in a hierarchical fashion, that provides information about US Senators, Representatives, their party, precinct, and state affiliations (Fig. 2). Later examples will remove this restriction to hierarchical sites. The individual web pages in Fig. 2 are denoted by the nodes (circles), and the links are assumed to be tagged via some labeling mechanism. Such labels can be obtained from the text anchoring the hyperlinks via ‘\<a href\>s’ in the web pages, or from XML tags. A web crawler employing a depth-first search can then be used to obtain a program that models the links in a way that the interpretation of the program refers to the organization of information in the web sources. For example, the data in Fig. 2 produces the program (the line numbers are shown for ease of reference):
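The listing itself did not survive the conversion to this format. The following C-style sketch is a hedged reconstruction from Fig. 2, without the original line numbers; emit() is a hypothetical stand-in for “output this set of pages,” and all names beyond Senators, Representatives, Dem, Rep, Ind, CA, and NY are illustrative:

```c
void emit(const char *pages);            /* hypothetical output routine */

void congress(int Senators, int Representatives,
              int Dem, int Rep, int Ind, int CA, int NY)
{
    if (Senators) {
        if (Dem) {
            if (CA)      emit("home pages of Democratic CA Senators");
            else if (NY) emit("home pages of Democratic NY Senators");
            /* ... other states ... */
        } else if (Rep) {
            /* same state-level structure */
        } else if (Ind) {
            /* same state-level structure */
        }
    } else if (Representatives) {
        /* same party/state structure for Representatives */
    }
}
```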
where the link labels are represented as program variables. The mutually-exclusive dichotomies of links at individual nodes (e.g. ‘A Democrat cannot be a Republican’) are modeled by else ifs (this issue is addressed in more detail in Section 7). Notice that while the program only models the organization of the web site, other textual information at each of the internal nodes can be stored/indexed alongside by associating augmented data structures with the program variables. Furthermore, at the ‘leaves’ (i.e., the innermost sections of the program), variable assignments corresponding to the individual URLs of the Senator/Representative home pages can be stored.
Assume that a user is interested in personalizing the web site to provide information only about ‘Democratic Senators.’ This is easily achieved by partially evaluating the above program with respect to the variables Senators and Dem (setting them to $`1`$). This produces the simplified program:
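(Again, the residual listing is not reproduced in the source at hand; under the same assumptions as above it would reduce to roughly:)

```c
/* Residue after partial evaluation with Senators = Dem = 1 */
if (CA)      emit("home pages of Democratic CA Senators");
else if (NY) emit("home pages of Democratic NY Senators");
/* ... other states ... */
```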
which can be used to recreate web pages, thus yielding personalized web content (shown by the circular region in Fig. 2). The flexibility of this approach is that it allows personalization even when variable values for certain level(s) are available, but not for level(s) higher in the hierarchy. For example, if the user desires information about a NY politician (but is unsure whether he/she is a Senator or Representative or a Democrat/Republican/Independent), then a partially evaluated output (with respect to NY and setting other variables such as CA to zero) will simplify the lower levels of the hierarchy, yielding:
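(A hedged sketch of this residual program, under the same assumptions:)

```c
/* Residue after partial evaluation with NY = 1, other states = 0 */
if (Senators) {
    if (Dem)      emit("home pages of Democratic NY Senators");
    else if (Rep) emit("home pages of Republican NY Senators");
    else if (Ind) emit("home pages of Independent NY Senators");
} else if (Representatives) {
    /* same party structure for NY Representatives */
}
```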
The approach is thus responsive to varying levels of user input, ranging from no information (wherein the original program will be reproduced in its entirety) to choices that completely determine the end web page(s).
## 3 Mining Semi-Structured Data
The simplistic approach presented in Example 1 will be infeasible for realistic web sites, which are not strictly hierarchical, and best abstracted by ‘semi-structured’ data models, a term that has come to denote implicit, loose, irregular, and constantly evolving schema of information. To scale this methodology for semi-structured data, we propose the application of data mining techniques that extract compressed schema from web sites.
Various graph-based models have been proposed to model semi-structured data which, again, use directed labeled arcs to model the connection between web pages and between web sites. The data mining techniques that operate on such models obey the ‘approximation model’ of data mining. That is, they start with an accurate (exact) model of the data and deliberately introduce approximations, in the hope of finding some latent/hidden structure to the data. We illustrate the basic idea using the approximation model of Nestorov et al., which treats web pages as atomic objects, and models the links between web pages as relations between the atomic objects.
Example 2: Consider the hypothetical web site depicted in the top left part of Fig. 3. The individual web pages are denoted by M1, M2 etc., the pre-leaf nodes by P1, P2 etc., while the other internal nodes are represented as S1, S2 etc. Notice again that the links are assumed to be tagged via some labeling mechanism. The first step in extracting structure from the web site is to type the data, i.e., to determine the minimum number of entities needed to model the web schema. For example, the S2 node in the top left part of Fig. 3 can be typed as:
S2(Y) :- S1(X), link(X,Y,’a’), P1(Z), link(Y,Z,’e’)
which indicates that it is reachable from S1 (using the a tag), and has a link to P1 (via the label e). Such a typing (expressed in the form of a logic program) might not yield any compression of the original data, so various approximations and simplifications are employed to reduce its size before partial evaluation. We first identify commonalities due to encountering the same page multiple times (top right of Fig. 3). This is easily achieved by using a hash indexed by page URL in the web crawler. Next, the algorithm of Nestorov et al. uses program-theoretic techniques to find the minimal set of types necessary to accurately represent the original data. For example, P1 and P2 have the same input and output labels (to the same page), and can be compressed into a single type P1,2, by computing the greatest fixed-point of the logic program. And finally, allowing one type to be expressed as the superposition of multiple other types helps further reduce the size of the logic program. In this case, P3 can be subsumed by a combination of P1,2 and P4. The end-result of this process (see bottom right of Fig. 3) is a succinct schema that can be used for personalization. The cost of the mining algorithm is double-quadratic in the size of the web site (pre-leaf nodes). For web sites that are purely hierarchical and that do not contain cycles, more simplifications are available that enable efficient implementations of the mining algorithm.
## 4 Information Integration
The methodology described thus far is content-based, works at the level of web site organization, and does not mine/model the textual content or formatting within individual pages, beyond associating them with the appropriate nodes in the graph. For example, if a web page has two links L1 and L2 to other pages, the text anchoring L1 (upto the start of L2) is associated with the node corresponding to L1 in the graph, and so on. While data mining techniques are available that deal with textual information, we do not employ them in our study and implementations. We thus restrict our studies to web sites where most of the information content to be personalized is found at the ‘leaves’ of the structure tree. In addition, the ideas presented above are restricted to a single site. It is well understood that to provide compelling personalization scenarios, information needs to be integrated from multiple web sites and other sources of information, such as recommender systems and topic-specific cross-indices. Recommender systems, as introduced in Section 1, make selections of artifacts by mining profiles of customer choices and buying patterns. Topic-specific indices provide ontologies and taxonomies by cross-referencing information from multiple sites (e.g. the Yahoo taxonomy).
Example 3: Consider personalizing stock quotes for potential investors. The Yahoo! Finance Cross-Index at \[quote.yahoo.com\] provides a ticker symbol lookup for stock charts, financial statistics, and links to company profiles. It is easy to model and personalize this site by the techniques presented in previous sections. The program obtained could be partially evaluated with respect to ticker symbol to yield specialized information. However, what if the user does not know the ticker symbol, but has access to only the company name? What if he/she desires to index based on recommendations from an online brokerage? The key issue thus is to provide support for information integration from multiple web resources. This entails the inherent inconsistencies and uncertainties in representing and modeling textual labels. The online brokerage might refer to its recommendations by company name (e.g. ‘Microsoft’), while the Yahoo! cross-index uses the ticker symbol (‘MSFT’). More seriously, financial terms carry with them the twin idiosyncrasies of synonymy and polysemy. For example, ‘Investments’ referred to in one web site might be listed as ‘Ventures’ in another, which might mean something completely different in a non-financial setting.
We now outline possible solutions to these issues. The choices made by an individual recommender system can be modeled as statements in a program that abstract the control flow of the selection algorithm. For example, in the financial setting above, a special function can be written that takes as input the current user profile and returns a ticker symbol recommendation. This function can be called from a main() routine, which can then use the resulting ticker symbol to set variables for the program obtained by mining the Finance Cross-Index. In addition, the issue of synonymy can be addressed by introducing additional assertions such as:
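(The original assertions were lost in conversion; in the C-style notation of the earlier examples they amount to label-equating statements, echoing the synonyms discussed in this section:)

```c
/* Hypothetical label-reconciliation assertions */
Ventures = Investments;   /* one site's 'Ventures' is another's 'Investments' */
MSFT     = Microsoft;     /* ticker symbol versus company name */
```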
at the beginning of the composite program, thus abstracting the task models underlying the application. This is the most domain-specific part of the methodology and cannot be easily automated (this aspect is discussed in more detail later in the paper). The literature on information integration proposes various solutions to this problem, notably wrappers and mediator-based schemes.
## 5 Case Studies
The above three aspects of (i) partial evaluation, (ii) mining semi-structured data, and (iii) information integration form the basis of the PIPE methodology for personalization. PIPE serves as a customizable framework that can compose individual content-based and collaborative engines to form full-fledged personalization systems in a specific domain. To the best of the author’s knowledge, there exists no comparable methodology for designing personalization systems, though similar architectures are available for other aspects of information capture and access. We now summarize the various steps of this methodology.
The first step is to identify the different ‘starting points’ for personalization — a domain specific consideration. The schema in these various sites should be modeled by labeled graphs involving semi-structured data. The second step is to extract typing rules from each of the site structures by the mining algorithm. The third step is to merge the diverse schema into a composite program, taking care to ensure that entities referred to in different ways by individual web sources are correctly merged together. We refer to the information space represented by the composite program as a ‘recommendation space.’ These steps constitute the off-line aspect of the methodology and need be performed only once for a specific implementation. The final step is the online aspect of partially evaluating the composite program and reconstructing the original information from the specialized program. We present empirical evidence for the effectiveness of this approach by application to two diverse domains:
* Creating personalized web pages for scientists and engineers who are trying to locate software on the Internet, and
* Delivering web-based tourist information for visitors to the Blacksburg Electronic Village (http://www.bev.net) (BEV).
The common denominator among these applications is their strong emphasis on multiple information resources (needed to achieve the desired effect), the heterogeneity of the sources, and the desire to provide integrated support for interesting personalization scenarios.
### 5.1 Personalizing Content for Scientists and Engineers
The PIPE methodology has been used in the context of creating personalized recommendations about mathematical and scientific software on the web. One of the main research issues here is understanding the fundamental processes by which knowledge about scientific problems is created, validated and communicated. Designing a personalization system for this domain involves at least three different sources:
* Web-based software repositories: In our study, we chose Netlib (http://www.netlib.org), a repository maintained by the AT & T Bell Labs, the University of Tennessee, and the Oak Ridge National Laboratory. Netlib provides access to thousands of pieces of software. Much of this software is organized in the form of FORTRAN libraries. For example, the QUADPACK library provides software routines for the domain of numerical quadrature (the task of determining the areas under curves, and the volumes bounded by surfaces).
* Individual Recommender Systems: These systems take a problem description and identify a good algorithm that satisfies user-specified performance constraints (such as error, time etc.). For example, the GAUSS recommender system successfully selects algorithms for numerical quadrature. The issue of how to identify good algorithm recommendations is a very complex and domain-specific one, and is not covered here. We refer the reader to the literature for more details on this problem and a promising approach. It may be noted that GAUSS uses collaborative filtering to correlate variations in algorithm performance to specific characteristics of the problem input. The patterns mined by GAUSS are relational rules that model the connection between algorithms and the problems they are best suited to.
* Cross-Indices of Software: The GAMS (Guide to Available Mathematical Software) system (http://gams.nist.gov) provides a web-based index for locating and identifying algorithms for scientific computing. GAMS indexes nearly $`10,000`$ algorithms for most areas of scientific software. While providing access to four different Internet repositories, GAMS’s main contribution to mathematical software lies in the tree-structured taxonomy of mathematical and software problems used to classify software modules. This taxonomy extends to seven levels and provides a convenient interface to home in on appropriate modules. Fig. 4 shows three screen shots from a GAMS session. The top left part of Fig. 4 depicts the root GAMS node, where the user is expected to make a selection about the type of problem to be solved (arithmetic/linear algebra/quadrature etc.). The user selects the ‘H’ category and then proceeds to make further selections. The class H2 corresponds to numerical quadrature, H2a corresponds to one-dimensional numerical quadrature, and so on. The H2a1 node is shown in the top right part of Fig. 4. Continuing in this manner, the ‘leaves’ are arrived at (bottom part of Fig. 4), where there still exist several choices of algorithms for a specific problem. For some domains, it can be shown that there are nearly 1 million software modules that are potentially interesting and significantly different from one another! Recommenders for scientific software exist (as listed above) but they work only in specific isolated subtrees of the GAMS hierarchy (like the GAUSS system).
We present an implementation of PIPE for personalizing recommendations about quadrature software (GAMS category H2a). In the absence of PIPE, scientists typically use GAUSS to obtain a recommendation for a quadrature algorithm, then manually navigate the GAMS taxonomy starting from the root, looking for the right category containing an implementation for the recommended algorithm, and finally browse the Netlib site to download the source code and documentation for the recommendation. It is clear that any one of these resources does not provide enough information for personalization.
Experimental Setup: Schema was first extracted from the Netlib site and personalized for the input (QUADPACK=1) (which provides the algorithms for quadrature). The tree-building algorithm was written in Perl using the navigation capabilities of the lynx web browser. Perl hashes, keyed by page URL, helped identify commonalities in tree-building. The mining algorithm did not yield any compression of the original Netlib schema for QUADPACK, since it is a strict two-level hierarchy. A portion of the schema (simplified for presentation) obtained from Netlib is shown below:
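(The excerpt was lost in conversion; a hedged sketch follows, with routine names taken from the public QUADPACK library; the labels actually mined from Netlib may differ:)

```c
/* Fragment of the two-level QUADPACK schema */
if (QUADPACK) {
    if (qng)       { /* non-adaptive, general integrand       */ }
    else if (qag)  { /* adaptive, general integrand           */ }
    else if (qags) { /* adaptive, end-point singularities     */ }
    else if (qawo) { /* adaptive, oscillatory weight function */ }
    /* ... remaining routines ... */
}
```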
Next, tree-building and data mining were conducted for the GAMS website rooted at the H2a node (one-dimensional numerical quadrature). Notice that while links in GAMS (and most web sites) are not typed, we interpret the text anchoring the ‘\<a href\>’s in the web pages as the label when following the associated link. Furthermore, the labels for certain links (typically to software modules) are so long that they cannot be listed intelligibly on the originating page. In such cases, the label is suffixed with “…” and continued on the page pointed to (see bottom part of Fig. 4). For the purposes of personalization, we cannot ignore the continuation of the label as it may contain important keywords that describe the module. The compressions arising from mining GAMS schema were of two main flavors: (i) reductions due to factoring common nodes at the pre-leaf level (typically module sets), and (ii) reductions arising from links that violate the tree taxonomy. Overall, $`80`$ internal nodes in the H2a tree were reduced to $`74`$ nodes (after tree-building) and later, to $`69`$ nodes (after data mining and collapsing multiple roles). Thus, a compression of $`14\%`$ was observed for the H2a GAMS subtree. The schema at this stage is given by:
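(Only the labels Quadrature_Problem and Automatic_Accuracy are attested in the text; the intermediate labels in this hedged fragment are guesses at the H2a subtree:)

```c
if (Quadrature_Problem) {                 /* GAMS class H2 */
    if (One_Dimensional) {                /* class H2a     */
        if (Finite_Interval) {            /* class H2a1    */
            if (Automatic_Accuracy) {
                /* ... module sets at the leaves ... */
            }
        }
        /* ... other interval types ... */
    }
}
```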
where Quadrature\_Problem, Automatic\_Accuracy are the link labels at the GAMS site. Finally, the recommendation rules from GAUSS are already in programmatic form (they take a vector of problem features and performance criteria as input and make a recommendation for an algorithm):
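(The rules themselves are not reproduced in the source at hand; schematically, and with illustrative feature and routine names:)

```c
/* Hedged sketch of a GAUSS rule; all names are illustrative. */
const char *recommend(int Int, int Finite, int Oscillatory, int Singular)
{
    if (Int && Finite && Oscillatory && !Singular)
        return "QAWO";     /* oscillatory integrand on a finite interval */
    if (Int && Finite && Singular)
        return "QAGS";     /* end-point singularities                    */
    /* ... further feature combinations ... */
    return "QAG";          /* general-purpose fallback                   */
}
```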
The above three schemas (programs) were merged taking into account the inconsistencies in the labeling of the three web sources. For example, Int in GAUSS is referred to as Quadrature\_Problem in GAMS, Finite in GAUSS is cross-referenced as Finite\_Interval in GAMS, and so on. The composite program was represented in the CLIPS programming language, which provides procedural, rule-based, and object-oriented paradigms for representation. The final program is structured as:
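(The structure sketch did not survive conversion. The following hedged CLIPS fragment illustrates the layering described in the text: synonymy assertions feeding GAUSS rules, which feed GAMS taxonomy rules, which bind a Netlib URL; every rule body and name here is illustrative:)

```clips
(defrule reconcile-labels                ; synonymy assertions
  (feature Int) => (assert (feature Quadrature_Problem)))

(defrule gauss-recommendation            ; GAUSS rules (collaborative part)
  (feature Quadrature_Problem) (feature Finite_Interval) (feature Oscillatory)
  => (assert (algorithm QAWO)))

(defrule gams-taxonomy                   ; GAMS navigation rules
  (algorithm QAWO) => (assert (gams-class H2a)))

(defrule netlib-location                 ; Netlib URL assignment
  (gams-class H2a)
  => (assert (url "http://www.netlib.org/quadpack/")))
```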
which models the control flow that gets partially evaluated. The end-user interface for the personalization system is shown in Fig. 5. As shown, the user provides the input problem in self-describing mathematical terms. The implementation of PIPE first parses the input to obtain as many symbolic features as possible (it is to be noted that for this domain, this involves some sophisticated mathematical reasoning). For instance, in the example problem shown in Fig. 5, simple parsing reveals that the problem is a quadrature problem (from the presence of the Int operator) and that it is a one-dimensional problem (from the range restriction). In addition, further mathematical reasoning reveals the presence of an oscillatory integrand on a finite domain. For more details on how this is achieved automatically, we refer the reader to the literature.
Partially evaluating the CLIPS program above with this information by setting the appropriate feature values to $`1`$ starts a cascading effect of program simplification, removing nearly $`95\%`$ of the original information. The recommendation rules from GAUSS get partially evaluated, in turn navigating the GAMS taxonomy rules, in turn narrowing down on the Netlib URL for the selected algorithm. In this case, the evaluation is actually a complete evaluation, since the user has provided enough information to zoom in on a final leaf. The resulting program is then parsed to determine the individual program variables that are set at the end of this process. These are then used to produce the output shown in Fig. 5 that includes (i) the algorithm, (ii) the GAMS annotation, and (iii) the Netlib annotation indicating the resource from which it can be downloaded. The program segment producing the output shown in Fig. 5 is given by:
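(The segment itself was not preserved; a hedged CLIPS-style reconstruction, with the routine name QAWO and all literal strings purely illustrative:)

```clips
(deffacts final-selection                ; fully evaluated residue
  (Algorithm         QAWO)
  (GAMS_annotation   "H2a: automatic one-dimensional quadrature, oscillatory integrand")
  (Netlib_annotation "download from http://www.netlib.org/quadpack/"))
```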
Notice the use of the GAMS\_annotation variable accompanying the node that was evaluated, which provides information about the documentation for the algorithm. This implementation of PIPE should not be confused with a service run by Wolfram, Inc. (www.integrals.com) that evaluates quadrature problems symbolically.
### 5.2 Personalizing the BEV
The BEV (http://www.bev.net) provides a community resource for the New River Valley in Southwestern Virginia, USA, where nearly $`70\%`$ of the population use the Internet actively. In its seventh year, BEV offers a wide array of services — information pertaining to arts, religion, sports, education, tourism, travel, museums, health etc. An implementation of PIPE was designed for this facility where the goal is to direct tourists to appropriate resources in the town of Blacksburg. A first plan was to use two resources — the BEV web site (and various other pages that it links to) and the Blacksburg Community Directory (an offshoot of the BEV site). We experienced early problems with this approach due to ambiguities in the descriptions of the BEV entities. Assume that a user is querying for ‘art galleries.’ Blacksburg boasts nearly $`25`$ galleries, only $`9`$ of which describe themselves as ‘galleries.’ Others register their organizations as ‘showrooms,’ ‘centers,’ or ‘museums’ with the BEV site.
Experimental Setup: To overcome these problems, we introduced a third personalization source — TOPIC, a computational distillation of basic keywords and topics used in the BEV site. The basic idea is to use orthogonal decompositions (such as Singular Value Decompositions of the term-document matrix, or Lanczos decompositions) to geometrically model semantic relationships. These approximations identify hidden structures in word usage, thus enabling searches that go beyond simple keyword matching. The exact computational algorithm is beyond the scope of this article, but we refer the reader to the literature on this approach. The goal of TOPIC is to produce rules similar to the ‘Microsoft/MSFT’ matching that can be used to model recurrent low-dimensional subspaces in the BEV site(s). The mining algorithm was applied to the BEV site but it did not yield as much compression of the original data as in the previous example. One reason for this could be the lack of global ‘controls’ in the construction of web pages by the BEV users and administrators. However, partial evaluation did yield very effective results, as shown in the sample query of Fig. 6. In this case, the evaluation is truly partial, since it reproduces a collection of subtrees pertaining to ‘coffee.’ Notice that the second result depicted in Fig. 6 is a ‘false positive,’ since TOPIC associates the word ‘coffee’ with ‘cafe’ whereas none of the resources identified in ‘Blacksburg to Go’ are related to coffee shops. A more detailed evaluation of both these case studies follows next.
## 6 Evaluation of PIPE
The implementations of PIPE described above involve substantially different methodologies for evaluation. Consider the first case study, a domain of importance and immediate relevance to the computational scientist. Scientists and engineers would be experts at building models in their particular domain, but novices at understanding the intricacies of these mathematical models and the software systems required to solve them. In other words, their ratings/feedback cannot be used to evaluate PIPE’s recommendations. In addition, there does not exist any comparable comprehensive facility as described in this article. This implementation of PIPE is hence a novel application of personalization technology. To characterize the results, we used a benchmark set of problems described in the literature and applied (ran the algorithms, effectively) the recommendations to see if they indeed satisfied the user’s constraints. In addition, we ensured that the web links from GAMS and Netlib were properly associated with all the recommendations.
First, all selections made by this implementation of PIPE were ‘valid’ (a selection is considered ‘invalid’ if the algorithm is inappropriate for the given problem, or if a wrong page from GAMS/Netlib was indexed). In addition, we recorded the accuracy of the final selections (a selection is accurate if the selected algorithm does result in solutions satisfying the requested criteria). The best algorithm was selected for $`87\%`$ of the cases, and the second best algorithm for $`7\%`$ of the cases. An acceptable choice was made for $`3\%`$ of the cases (which was not first or second best) and a wrong selection was made for only $`3\%`$ of the cases. It is to be mentioned that the ‘mistakes’ arise from the uncertainties in the GAUSS collaborative filtering system, and not as a result of PIPE’s methodology. Thus, it is seen that a suitable recommendation is made most of the time.
The evaluation of the second implementation of PIPE is more tricky and serves to illustrate the true potential of our methodology. We adopted the following approach. $`10`$ people were randomly selected and requested to identify $`10`$ queries (each) that might be pertinent to a Blacksburg visitor (‘hiking,’ ‘mountains,’ ‘trails’ etc.). We selected the $`10`$ most frequently cited queries as test cases for PIPE’s implementation. Stopword elimination (discarding terms like ‘of’ and ‘the’ in queries) and stemming (‘hikers’ was mapped to ‘hiking,’ for instance) were first applied to standardize these queries. These queries were then provided to $`25`$ Blacksburg residents who were asked to enumerate the answers (from their point of view) before personalization was conducted for these queries. The results obtained were then provided as feedback and they were asked if they would like to change their original answers or if they thought the results were deficient in any respect. Each user voted on a scale of 1–5 the mismatch between the results and any ‘expected’ answers (where a $`1`$ indicates that he/she is completely satisfied with the results), for each of the queries. We now summarize our results.
First, all votes were in the range 1–2, with the exception of $`32`$ votes with the value $`3`$ (more on this later). For each of the queries, we then conducted a distribution-free test (the Kruskal-Wallis test) where the hypothesis tested was that the results were unanimous versus the alternative that they are not all equal. All $`10`$ hypotheses were accepted at the $`95\%`$ level, indicating conclusively that the results were very close to the expected answers. The $`32`$ votes with the value $`3`$ were spread over seven people who were less effusive with their ratings than others. This is a standard problem with rating-based collaborative filtering; one way to overcome this is to replace absolute ranks by ‘relative ranks’ so that they can be captured by certain two-way statistical tests (such as the Friedman, Kendall, and Babington-Smith test). Another approach to overcome effusivity of ratings (or the lack thereof) has also been proposed in the literature. In addition, one of the queries had consistently lower ratings by nearly all $`25`$ participants. This was ‘Trails,’ which failed to reproduce two of the most popular trails in the vicinity of Blacksburg. Not surprisingly, there were no web pages containing information about these trails in the considered collection. The results also fared well when compared with the traditional web search facilities available in the BEV site. For example, the standard BEV search engine produced no results for the query ‘coffee shops’ (or coffee).
## 7 Discussion and Comparisons with Other Approaches
The effectiveness of the methodology presented above relies on several factors, which are outlined below.
* The PIPE methodology allows programmatic composition to design full-fledged systems and hence, comparisons with individual recommender systems or other specialization facilities for particular domains are not strictly valid. With this in mind, one of its main advantages arises from integrating the design of personalization systems with the task model(s) underlying the assumed interaction scenario. It is this property that allows the designer to view the personalization system as a composition of individual subsystems, using a programming metaphor. PIPE is hence restricted to those domains that are most amenable to such decomposition and analysis techniques. More amorphous domains such as personalizing social networks in an organizational setting might not fit this framework.
* The above implementations assume that (i) the link labels represent choices made by a navigator, and (ii) it is possible to ascertain the values for such labels (program variables) from user input. In both cases this is conveniently achieved, since the GAMS/GAUSS and BEV sites serve as ontologies that help guide the personalization process. While the notion of partial evaluation works for any site even in the absence of ontologies (as shown in Example 1), personalization will only be as effective as the ease with which the link labels could be determined or supplied by the user. For example, if a medical informatics site is organized according to scientific names of diseases and ailments, personalizing for ‘headaches’ will require a parallel ontology or cross-index that maps everyday words into scientific nomenclature. In addition, the BEV case study shows that for certain domains it is acceptable (or even desirable) to be less strict in variable assignments, thus yielding more false positives. For other domains, personalization might pose more stringent demands.
* Personalizing textual content within web pages is not currently addressed in the PIPE methodology, which relies on the accuracy of the links to point to appropriate information. Combining textual mining techniques such as those in the literature with the partial evaluation concept is an interesting research issue for future investigation. Other approaches to content-based personalization arise from the database management and information filtering communities. While languages like WebSQL, WebOQL, and Florid provide simple ‘web database’ lookups, they are not directly suited for personalization purposes since they accept queries in only a limited form, and are more attuned to structure-querying across known levels. However, such systems can be used for program creation and associating augmented data structures with the program that is partially evaluated. In addition, algorithms have been recently proposed that extract structure at the level of a single page. These techniques infer Document Type Definitions (DTDs) and page schemas from example web pages. The integration of link-based analyses and such content-based schemes in a programmatic context also deserves exploration.
* In typical web sites, the links are either mutually exclusive (e.g. in Example 1 and the GAMS case study), or are inclusive (e.g. the BEV case study). These are currently modeled by the presence or lack of else ifs in our programs (respectively). This has the advantage of supporting both disjunctions and conjunctions in the personalization queries. Automating this aspect in a web crawler requires more study, e.g. via meta-data or via explicit user direction.
* Partial evaluation, in general, is a costly operation because of the need to unroll loops and complex control structures in programs. However, such features are almost always absent in the kinds of studies considered here. Even links that point back to higher levels of the hierarchy do not cause code blowups since they are factored by the mining process. As a result, the cost of partial evaluation is not a severe bottleneck. The most expansive implementation of PIPE is the first case study, which involved sites that have tens of thousands of web pages. Further analyses are needed, however, to characterize the scale-up with respect to an ever greater number of web pages.
* It is instructive to characterize how much of the encouraging results is due to partial evaluation itself, as opposed to the careful creation of a tightly integrated implementation. In the absence of handcrafting, partial evaluation will still lead to personalization, but such a scenario is highly unrealistic. Partially evaluating the program mined from a beverages web site with respect to ‘Coke’ might not yield any results if the link label says ‘Coca-Cola.’ Approaches to alleviate this problem include the design of public ontologies and meta-data standards for commercial domains of expertise. Until such ontologies become prevalent on the web, this problem will not permit general solutions. In the absence of partial evaluation, though, a more complicated strategy has to be in place to ensure that the personalization system handles all possible types of queries, spanning all combinations of levels of the link labels.
* Domain-specific techniques for making individual recommendations or choices are not part of the PIPE framework per se but will nevertheless form a critical aspect of any successful personalization system. Various techniques have been proposed for making recommendations; we refer the reader to the literature. The current methodology thus allows the designer to choose the rating mechanism (value-neutrality). In the first case study, this is the GAUSS recommender system. A different system, trained on qualitatively different examples, might be appropriate in another situation.
* Little attention has been devoted toward automating the determination of appropriate ‘starting points’ for personalization. In the general case, this should involve a systematic way of finding authoritative sources. A good starting point is the literature on link-based authority ranking, which uses linear-algebraic matrix transformations to determine the most ‘cited’ resources (via hyperlinks) for a given topic.
* The integrated methodology of PIPE is similar in spirit to various other systems, most notably Levy’s WebStrudel (a web site management system). Such systems typically take a specification of a web site as input (graphs, rules etc.) and define the structure of the site in terms of the underlying data model. Currently, our implementations customize HTML content using the text manipulation capabilities provided in Perl. Programmatic reconstruction of web pages is a possible future extension of this work, and systems like WebStrudel can also eliminate the need for restructuring when more web sources or additional rating mechanisms are introduced.
* In conclusion, the remark of Rus and Subramanian is especially pertinent:
> “Whether users of the information superhighway prefer to build their own ‘hot rods’ [by methodologies like PIPE], or take ‘public transportation’ [web search engines] that serves all uniformly is an empirical question and will be judged by history.”
Acknowledgements: The author acknowledges the input of Sammy Perugini, Akash Rai, Mary Beth Rosson and the nearly $`40`$ volunteers who helped in the evaluation of the second study. Feedback from several anonymous referees helped clarify the presentation and improve the article.
no-problem/9910/cond-mat9910211.html | ar5iv | text | # Talk presented at the SCCS 99, Saint-Malo (France), Sept 4--11, 1999 Dynamics of the 2D two-component plasma near the Kosterlitz-Thouless transition
* Abstract. We study the dynamics of a classical, two-component plasma in two dimensions, in the vicinity of the Kosterlitz-Thouless (KT) transition where the system passes from a dielectric low-temperature phase (consisting of bound pairs) to a conducting phase. We use two “complementary” analytical approaches and compare to simulations. The conventional, “intuitive” approach is built on the KT picture of independently relaxing, bound pairs. A more formal approach, working with Mori projected dynamic correlation functions, avoids to assume the pair picture from the start. We discuss successes and failures of both approaches, and suggest a way to combine the advantages of both.
1. INTRODUCTION
The two-component plasma in two dimensions (2D) is the generic model for topological excitations in 2D systems with a U(1) order parameter symmetry, usually called vortices. Prominent examples of such systems are the “2D superfluids”, superconducting or $`{}^{4}`$He films and Josephson junction arrays (JJA’s); others are the $`XY`$ model of 2D planar magnets, and — with some modifications — 2D melting. The 2D plasma undergoes the well-known Kosterlitz-Thouless (KT) transition at a finite temperature $`T_{\mathrm{KT}}`$.
The dynamic behavior close to this transition is of both principal and practical interest. It is conveniently described by a frequency dependent, complex dielectric function $`ϵ(\omega )`$, related to the dynamic correlation function $`\mathrm{\Phi }_\rho (𝒌,\omega )`$ of the charge density $`\rho (𝒓,t)`$ of the plasma:
$$\frac{1}{ϵ(\omega )}=1-\frac{2\pi }{k_\mathrm{B}Tk^2}\left[q^2\overline{n}S_\rho (𝒌)+\mathrm{i}\omega \mathrm{\Phi }_\rho (𝒌,\omega )\right]|_{𝒌\to \mathrm{𝟎}},$$ (1)
$$\mathrm{\Phi }_\rho (𝒌,\omega )=\int _0^{\mathrm{\infty }}\mathrm{d}t\,\mathrm{e}^{\mathrm{i}\omega t}\int \mathrm{d}^2𝒓\,\mathrm{e}^{\mathrm{i}𝒌\cdot 𝒓}\langle \rho (𝒓,t)\rho (\mathrm{𝟎},0)\rangle .$$ (2)
Here $`\overline{n}`$ is the mean number density of the particles with charge $`\pm q`$, and $`S_\rho (𝒌)=\mathrm{\Phi }_\rho (𝒌,t=0)/q^2\overline{n}`$ is the static charge structure factor, determining the zero frequency limit of $`ϵ(\omega )`$. As usual, $`ϵ(\omega )`$ is related to the dynamic conductivity $`\sigma (\omega )`$ by $`ϵ(\omega )=1-2\pi \sigma (\omega )/\mathrm{i}\omega `$.
Recent numerical simulations by Jonsson and Minnhagen (JM) of the 2D $`XY`$ model with Ginzburg-Landau dynamics have shed new light onto the critical dynamics of the associated vortex plasma. Using an appropriate definition for the vortex density $`\rho (𝒓)`$, JM have computed $`\mathrm{\Phi }_\rho `$ and extracted the complex dielectric function $`ϵ(\omega )`$ using (1). It shows the following interesting features:
* $`1/ϵ(\omega )`$ approximately has a scaling form with a characteristic frequency $`\omega _{\mathrm{ch}}`$ going to zero as $`T\to T_{\mathrm{KT}}`$ (“critical slowing down”) from both sides.
* For low frequencies, $`\mathrm{Re}[1/ϵ(\omega )-1/ϵ(0)]\propto \omega `$ over a relatively large $`\omega `$ interval, contrary to a simple “Drude” behavior ($`\propto \omega ^2`$). This corresponds to a non-analyticity $`\sigma (\omega )\propto \mathrm{ln}|\omega |`$ at low $`\omega `$.
JM obtain very good fits of their data by a phenomenological expression proposed earlier by Minnhagen. Experiments on JJA’s seem indeed to confirm the non-analytic frequency dependence of $`ϵ(\omega )`$, and various theoretical attempts have been made in order to explain this finding. The critical slowing down has, to our knowledge, not been theoretically addressed so far.
2. AHNS PAIR RELAXATION PICTURE
The “standard” dynamic theory of the KT transition was developed in 1978 by Ambegaokar et al. (AHNS), who built up the dielectric function $`ϵ(\omega )`$ additively from contributions of pairs of any size and of free particles. We use a slight modification of this idea here, defining by
$$\frac{1}{ϵ_\mathrm{b}(\omega )}=1+\int _{r_0}^{\xi }\mathrm{d}r\,\frac{\mathrm{d}[1/\stackrel{~}{ϵ}(r)]}{\mathrm{d}r}\,\frac{1}{1-\mathrm{i}\omega \tau (r)},\qquad \tau (r)\equiv \frac{r^2\stackrel{~}{ϵ}(r)\mathrm{\Gamma }}{3.5q^2}$$
(3)
a dielectric function of bound pairs, and setting $`ϵ(\omega )=ϵ_\mathrm{b}(\omega )-2\pi q^2n_\mathrm{f}/\mathrm{i}\omega \mathrm{\Gamma }`$. The scale dependent dielectric constant $`\stackrel{~}{ϵ}(r)`$ is determined by the KT flow equations, and the factor $`1/[1-\mathrm{i}\omega \tau (r)]`$ in the integral describes the relaxation of individual pairs of size $`r`$. $`r_0`$ is a minimum length of the order of the vortex core size, $`\xi `$ is the KT screening length, and $`n_\mathrm{f}\propto \xi ^{-2}`$ the density of free particles. To determine $`\stackrel{~}{ϵ}(r)`$, we use Minnhagen’s version of the KT flow equations for the scale dependent “reduced temperature” $`t(r)=1-q^2/4\stackrel{~}{ϵ}(r)k_\mathrm{B}T`$ and fugacity $`y(r)`$. Their solutions are known analytically, in the regions $`T<T_{\mathrm{KT}}`$ and $`T>T_{\mathrm{KT}}`$ separately. The integral (3) is done numerically.
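For orientation (Minnhagen's formulation differs in the precise coefficients, which we suppress here), the flow equations have the familiar KT structure in terms of $`t(r)`$ and $`y(r)`$, with $`l=\mathrm{ln}(r/r_0)`$:

$$\frac{\mathrm{d}t}{\mathrm{d}l}\propto y^2,\qquad \frac{\mathrm{d}y}{\mathrm{d}l}=2ty,$$

so that below $`T_{\mathrm{KT}}`$ the fugacity flows to zero (all charges bound in pairs), while above $`T_{\mathrm{KT}}`$ it grows beyond the scale $`\xi `$ (free charges appear).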
We find that the frequency at which $`\mathrm{Im}[1/ϵ(\omega )]`$ has its maximum is more or less temperature independent below $`T_{\mathrm{KT}}`$, whereas above it varies rapidly like $`\xi (T)^{-2}`$. Both above and sufficiently far below $`T_{\mathrm{KT}}`$, the low-$`\omega `$ behavior of $`\mathrm{Re}[1/ϵ]`$ is closer to linear than a simple Drude form. However, above $`T_{\mathrm{KT}}`$ there finally is a crossover to “Drude” $`\propto \omega ^2`$ behavior as $`\omega \to 0`$, and below $`T_{\mathrm{KT}}`$ both real and imaginary part of $`1/ϵ(\omega )-1/ϵ(0)`$ behave as $`\propto \omega ^\alpha `$ at low $`\omega `$, with an exponent $`\alpha =\frac{q^2}{2ϵ(0)k_\mathrm{B}T}-2`$ which goes to zero as $`T\to T_{\mathrm{KT}}`$. If normalized by a factor of $`\stackrel{~}{ϵ}(\xi )`$, our data above $`T_{\mathrm{KT}}`$ display an almost perfect scaling behavior with a characteristic frequency $`\omega _{\mathrm{ch}}(\xi )\simeq 10q^2/\xi ^2\stackrel{~}{ϵ}(\xi )\mathrm{\Gamma }`$, over an extremely wide range in $`\xi `$ (Fig. 1 L). We find no comparable scaling below $`T_{\mathrm{KT}}`$, but if $`\omega _{\mathrm{ch}}`$ is defined here by the condition $`\mathrm{Re}[1/ϵ(\omega _{\mathrm{ch}})-1/ϵ(0)]=\frac{\pi }{2}\mathrm{Im}[1/ϵ(\omega _{\mathrm{ch}})]`$ (as done by JM), it decreases rapidly as $`T\to T_{\mathrm{KT}}`$ from both sides. In Fig. 1 R we have plotted the corresponding temperature dependences of $`\omega _{\mathrm{ch}}`$ both above and below $`T_{\mathrm{KT}}`$, which is in good qualitative agreement with Fig. 6 of JM.
3. CALCULATIONS USING MORI’S TECHNIQUE
Mori’s technique for evaluating dynamic correlation functions yields for the charge density correlator at small wave numbers $`k`$:
$$\mathrm{\Phi }_\rho (𝒌,\omega )\simeq \frac{q^2\overline{n}S_\rho (𝒌)}{-\mathrm{i}\omega +\frac{k_\mathrm{B}Tk^2}{S_\rho (𝒌)\gamma _\rho (\omega )}}.$$
(4)
The “memory function” (or “random force” correlation function) $`\gamma _\rho (\omega )`$ obeys
$$\gamma _\rho (\omega )\simeq \mathrm{\Gamma }+\frac{q^2}{2k_\mathrm{B}T\overline{n}V^3}\sum _{𝒌𝒍}𝒌\cdot 𝒍\,U_𝒌U_𝒍\int _0^{\mathrm{\infty }}\mathrm{d}t\,\mathrm{e}^{\mathrm{i}\omega t}\langle \delta n_𝒌(t)\rho _𝒌(t)\,\delta n_𝒍(0)\rho _𝒍(0)\rangle .$$
(5)
where $`\delta n_𝒌(t)`$ denotes the deviation of the local number density from its mean value, and $`V`$ is the system volume. $`\gamma _\rho (\omega )`$ generalizes the bare friction constant $`\mathrm{\Gamma }`$, adding a contribution due to the particle interaction.
In the spirit of current practice in the theory of liquid dynamics we factorize, as a first step, the combined correlator into a product of a number and a charge correlation function:
$$\langle \delta n_𝒌(t)\rho _𝒌(t)\,\delta n_𝒍(0)\rho _𝒍(0)\rangle \simeq V^2\delta _{𝒌,-𝒍}\,\mathrm{\Phi }_n(𝒌,t)\mathrm{\Phi }_\rho (𝒌,t).$$
(6)
Using $`\mathrm{\Phi }_n(𝒌,\omega )\simeq \overline{n}/(-\mathrm{i}\omega +\frac{k_\mathrm{B}Tk^2}{\mathrm{\Gamma }})`$ and $`S_\rho (𝒌)\simeq k^2/(k^2+b^2)`$ (the length $`b`$ is a measure of the interparticle distance) we obtain the implicit equation
$$\widehat{\gamma }(\widehat{\omega })\simeq 1+\frac{A}{1+\mathrm{i}\widehat{\omega }}\mathrm{ln}\frac{1+\widehat{\gamma }(\widehat{\omega })}{1-\mathrm{i}\widehat{\omega }\widehat{\gamma }(\widehat{\omega })},$$
(7)
determining the function $`\widehat{\gamma }(\widehat{\omega })=\gamma _\rho (\omega )/\mathrm{\Gamma }`$ which depends on $`\widehat{\omega }=\omega /\omega _{\mathrm{ch},1}`$. The characteristic frequency $`\omega _{\mathrm{ch},1}=k_\mathrm{B}T/b^2\mathrm{\Gamma }`$ reflects the spatial density of particles, and $`A=\frac{\pi q^4\overline{n}b^2}{2(k_\mathrm{B}T)^2}`$ is a dimensionless parameter of order unity.
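Equation (7) is easily solved numerically at each frequency. A minimal fixed-point iteration in C99 (illustrative only: the starting value, tolerance, and iteration cap are our choices, and convergence is not guaranteed for all $`A`$ and $`\widehat{\omega }`$):

    #include <complex.h>

    /* Solve Eq. (7) for gamma-hat at reduced frequency w, coupling A. */
    double complex gamma_hat(double w, double A)
    {
        double complex g = 1.0;            /* bare-friction starting value */
        for (int it = 0; it < 500; it++) {
            double complex rhs = 1.0 + A / (1.0 + I * w)
                                     * clog((1.0 + g) / (1.0 - I * w * g));
            if (cabs(rhs - g) < 1e-12)
                return rhs;                /* converged */
            g = rhs;
        }
        return g;                          /* best estimate */
    }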
Fig. 2 L shows real and imaginary parts of the solution of (7) above $`T_{\mathrm{KT}}`$, where $`\sigma (\omega )\propto \gamma _\rho (\omega )^{-1}`$. $`\mathrm{Re}\widehat{\gamma }`$ shows the expected (see Sec. 1) logarithmic $`\omega `$ dependence in some region, but has a finite limit for $`\omega \to 0`$. As seen in Fig. 2 R, our mode decoupling approach yields a dielectric function somewhat closer to the phenomenology of JM than a pure Drude form. However, the critical slowing down is missing, since the factorization approximation (6) underestimates effects of charge binding into pairs on length scales $`<\xi `$.
A simple — and admittedly ad hoc — way to build pairing into the dynamic friction function consists in adding to the r.h.s. of (6) a slowly decaying term $`\propto \mathrm{e}^{-t/\tau _{\mathrm{esc}}}`$, corresponding to a contribution $`\gamma _{\mathrm{pairs}}(\omega )=\gamma _0/[\tau _{\mathrm{esc}}^{-1}-\mathrm{i}\omega ]`$ to $`\gamma _\rho (\omega )`$ and describing the random force correlations of pairs that unbind after a typical time $`\tau _{\mathrm{esc}}`$. We interpret $`\tau _{\mathrm{esc}}`$ as the “thermal escape time” out of the logarithmic pair potential, screened at a distance $`\xi `$:
$$\tau _{\mathrm{esc}}(\xi )\simeq \frac{r_0^2\mathrm{\Gamma }}{q^2}\left(\frac{\xi }{r_0}\right)^z,\qquad z=\frac{q^2}{k_\mathrm{B}T}.$$
(8)
The dynamic exponent $`z\simeq 4.7`$ in the vicinity of the KT transition. Studying the correlations in an ensemble of independent pairs with average squared size $`\mathrm{\Delta }r^2\simeq 4b^2`$, the amplitude of $`\gamma _{\mathrm{pairs}}`$ can be estimated as $`\gamma _0\simeq q^4/k_\mathrm{B}T\mathrm{\Delta }r^2`$. The new term $`\gamma _{\mathrm{pairs}}`$ dominates $`\gamma _\rho `$ for large $`\xi `$ and low $`\omega `$, and so the static limit of the solution diverges as $`\gamma _\rho (\omega =0)\propto \tau _{\mathrm{esc}}(\xi )\propto \xi ^z`$. As a consequence, both the new characteristic frequency $`\omega _{\mathrm{ch},2}=\tau _{\mathrm{esc}}^{-1}`$ and the static conductivity $`\sigma (0)=q^2\overline{n}/\gamma _\rho (0)`$ of the plasma vanish smoothly as $`\xi (T)^{-z}`$. The prediction $`z\simeq 4.7`$ is in striking contrast to the scaling with $`z=2`$ in Fig. 1 L obtained on the basis of AHNS theory, and to the usual expectation that $`\sigma (0)\simeq q^2n_\mathrm{f}/\mathrm{\Gamma }\propto \xi ^{-2}`$. We note, however, that recent dynamical scaling analyses for different realizations of 2D superfluids actually do suggest anomalously large dynamical exponents in the range $`z\simeq 4.5`$–$`5.9`$.
4. CONCLUSIONS
We have studied here the critical dynamics of a 2D, two-component plasma in the vicinity of the KT phase transition, using a modification of AHNS theory and a Mori-type approach. The AHNS approach is quite successful as a convenient phenomenological interpretation of simulation data, but conceptually unsatisfying in that pairs are put in “by hand”. The Mori approach is more fundamental, starting from microscopic equations of motion, but the correlations it predicts (within a mode decoupling approximation) are not strong enough to imply critical slowing down. We have proposed a way to cure this deficiency by introducing additional, long-lived random force correlations, describing temporary pairing. A virtue of this extension is that it contains two frequency scales, one finite and the other vanishing at $`T_{\mathrm{KT}}`$, in agreement with the numerical findings.
Acknowledgments: The authors are grateful to Petter Minnhagen for helpful conversations. D. B. thanks the organizers of the SCCS 99 for financial support.
Freezing of Simple Liquid Metals
## 1 Introduction
A microscopic description of freezing/melting phenomena presents a formidable challenge to present-day condensed matter theory . Among the simple liquids, liquid metals have received limited attention in this field, stemming from the relative complexity of their interatomic potentials and thermodynamic properties . Previous related studies include those of Stroud and Ashcroft , who addressed the melting of simple metals using variational principles; Iglói et al. , who studied the freezing of liquid aluminum and magnesium by means of density-functional (DF) methods; and Moriarty and coworkers , who calculated the structural stabilities of third-period simple metals using first-principles techniques.
In this contribution, we report on a new study of the freezing phenomena of simple metals that exploits recent advances in thermodynamic perturbation theory for solids. In line with Ref. , and most modern theories of freezing, our approach incorporates classical DF theory in calculating the Helmholtz free energy. Although formally rigorous, DF theory is limited in practice by insufficient knowledge of the free energy functional for all but hard-body systems. To circumvent this limitation, we apply Weeks-Chandler-Andersen (WCA) perturbation theory , which splits the effective interatomic pair potential into reference and perturbative parts and maps the reference system onto an effective hard-sphere (HS) system. For the inhomogeneous solid phase, DF theory provides an adequate approximation for the HS free energy, while an accurate model of the HS radial distribution function yields directly the first-order perturbation free energy. The homogeneous liquid phase is treated within the same perturbative framework, using essentially exact expressions for the HS free energy and pair distribution function. This treatment of the liquid and solid by the same theoretical approach constitutes an important requirement for a consistent description of phase coexistence.
The next section outlines the theoretical methods used to calculate effective pair potentials and free energies of simple metals. In Sec. 3 we present and discuss predictions of the theory for freezing transitions and relative stabilities of competing crystal structures. Finally, in Sec. 4 we summarize and conclude.
## 2 Theoretical Methods
Restricting attention to the simple metals, we construct effective density ($`\rho `$)-dependent interatomic pair potentials, $`\varphi (r;\rho )`$, and a volume-dependent, but structure insensitive, contribution to the binding energy, $`E_V(\rho )`$, via pseudopotential theory, assuming linear response of the conduction electrons to the ions. In practice we use empty-core pseudopotentials and the Ichimaru-Utsumi electron dielectric function, which are known to give reliable results for bulk liquid properties . Taking $`\varphi (r;\rho )`$ and $`E_V(\rho )`$ as the basic microscopic input, we proceed to calculate the thermodynamics of the liquid and solid phases by means of WCA perturbation theory. Thus, we separate $`\varphi (r;\rho )`$ at its first minimum into a short-range, purely repulsive, reference part, $`\varphi _\mathrm{o}(r)`$, and a long-range, weakly oscillating, perturbative part, $`\varphi _\mathrm{p}(r;\rho )`$, and map the reference system onto an effective HS system with a HS diameter prescribed by the well-known WCA criterion. This separation allows the ionic contribution to the Helmholtz free energy to be calculated within the coupling-constant formalism . To first order in $`\varphi _\mathrm{p}(r;\rho )`$, the resulting general (inhomogeneous) expression for the total Helmholtz free energy reads
$$F[\rho (𝐫)]=F_{\mathrm{HS}}[\rho (𝐫)]+\frac{2\pi N^2}{V}\int _0^{\mathrm{\infty }}dr^{\prime }\,r^{\prime 2}\,g_{\mathrm{HS}}(r^{\prime };[\rho (𝐫)])\,\varphi _\mathrm{p}(r^{\prime };\rho )+E_V(\rho ),$$
(1)
where $`N`$ is the number of particles, $`V`$ the volume, and $`F_{\mathrm{HS}}[\rho (𝐫)]`$ and $`g_{\mathrm{HS}}(r;[\rho (𝐫)])`$ the free energy and radial distribution function (RDF), respectively, of the HS reference system, both functionals of the equilibrium one-particle number density $`\rho (𝐫)`$. The RDF of the solid is defined as an orientational and translational average of the two-particle density . For the liquid we use the homogeneous version of Eq. (1), taking for the HS free energy and RDF the Carnahan-Starling and Verlet-Weis parametrizations of computer simulation data .
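As a concrete illustration of the WCA separation described above, the following Python sketch splits a pair potential at its first minimum and evaluates an effective diameter. A Lennard-Jones form stands in for the pseudopotential-derived $`\varphi (r;\rho )`$, and the simpler Barker-Henderson integral stands in for the full WCA prescription, which additionally requires the HS radial distribution function (see Sec. 3).

```python
import numpy as np

# WCA split of a generic pair potential at its first minimum. The potential
# below is a Lennard-Jones stand-in, NOT the paper's pseudopotential result.
beta = 1.0                                   # 1/kT in reduced units (assumed)
phi = lambda r: 4.0 * (r**-12 - r**-6)       # stand-in pair potential

r = np.linspace(0.8, 3.0, 4001)
r_min = r[np.argmin(phi(r))]                 # location of the first minimum

def phi_ref(r):                              # short-range, purely repulsive part
    return np.where(r < r_min, phi(r) - phi(r_min), 0.0)

def phi_pert(r):                             # long-range perturbative part
    return np.where(r < r_min, phi(r_min), phi(r))

# Barker-Henderson diameter, d = int_0^{r_min} [1 - exp(-beta*phi_ref(r))] dr,
# used here as a simple proxy for the WCA diameter.
rr = np.linspace(1e-6, r_min, 2001)
d = np.trapz(1.0 - np.exp(-beta * phi_ref(rr)), rr)
print(f"r_min = {r_min:.4f},  effective HS diameter d = {d:.4f}")
```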
To calculate the free energy of the HS solid we invoke classical DF theory . The DF approach is based on the existence of a functional $`\mathcal{F}[\rho (𝐫)]`$ of the density $`\rho (𝐫)`$ that satisfies a variational principle, according to which $`\mathcal{F}[\rho (𝐫)]`$ is minimized – for a given average density and external potential – by the equilibrium density, its minimum value equaling the Helmholtz free energy $`F`$. In practice, $`\mathcal{F}[\rho (𝐫)]`$ is split into an (exactly-known) ideal-gas contribution $`\mathcal{F}_{\mathrm{id}}[\rho (𝐫)]`$ and an excess contribution $`\mathcal{F}_{\mathrm{ex}}[\rho (𝐫)]`$, the latter depending entirely upon internal interactions. Here we approximate $`\mathcal{F}_{\mathrm{ex}}[\rho (𝐫)]`$ by the modified weighted-density approximation (MWDA) . This approximation maps the excess free energy per particle of the solid onto that of a corresponding uniform fluid of effective density $`\widehat{\rho }`$, according to
$$\mathcal{F}_{\mathrm{ex}}^{\mathrm{MWDA}}[\rho (𝐫)]=Nf_{\mathrm{HS}}(\widehat{\rho }),$$
(2)
where the effective density, defined as
$$\widehat{\rho }=\frac{1}{N}\int d𝐫\int d𝐫^{\prime }\,\rho (𝐫)\rho (𝐫^{\prime })\,w(|𝐫-𝐫^{\prime }|;\widehat{\rho }),$$
(3)
is a self-consistently determined weighted average of $`\rho (𝐫)`$. The weight function, $`w(r)`$, is specified by normalization and by the requirement that $`\mathcal{F}_{\mathrm{ex}}^{\mathrm{MWDA}}[\rho (𝐫)]`$ generates the exact two-particle direct correlation function in the uniform limit (see Ref. for details).
Practical calculation of $`\mathcal{F}_{\mathrm{HS}}[\rho (𝐫)]`$ and $`g_{\mathrm{HS}}(r;[\rho (𝐫)])`$ requires specifying the solid density, i.e., the coordinates of the lattice sites and the shape of the density distribution about these sites. Here we consider the hcp, fcc, and bcc crystals with the density distribution modeled by the usual Gaussian ansatz, introducing a parameter $`\alpha `$ determining the width of the distribution. This parametrization allows the ideal contribution to the free energy functional to be accurately approximated by $`\mathcal{F}_{\mathrm{id}}/N=(3/2)k_\mathrm{B}T\mathrm{ln}(\alpha \mathrm{\Lambda }^2)-5/2`$, where $`\mathrm{\Lambda }`$ is the thermal de Broglie wavelength, and yields the HS free energy as the minimum with respect to $`\alpha `$ of the approximate functional $`\mathcal{F}_{\mathrm{HS}}[\rho (𝐫)]=\mathcal{F}_{\mathrm{id}}[\rho (𝐫)]+\mathcal{F}_{\mathrm{ex}}^{\mathrm{MWDA}}[\rho (𝐫)]`$.
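The variational step just described amounts to a one-dimensional minimization over the Gaussian width. A minimal Python sketch follows; the ideal term is the formula quoted above (in units of $`k_\mathrm{B}T`$), while the excess term is a schematic smooth model, not the MWDA, inserted only so the example is self-contained.

```python
import numpy as np

# Minimize F_HS/N = f_id(alpha) + f_ex(alpha) over the Gaussian width alpha.
kT, Lam = 1.0, 1.0                     # temperature, de Broglie wavelength (assumed units)

def f_id(alpha):                       # ideal part, as quoted in the text
    return 1.5 * kT * np.log(alpha * Lam**2) - 2.5 * kT

def f_ex_model(alpha):                 # schematic excess term (NOT the MWDA):
    return 10.0 * kT / np.sqrt(alpha)  # penalizes wide, strongly overlapping Gaussians

alphas = np.linspace(1.0, 200.0, 200000)
f = f_id(alphas) + f_ex_model(alphas)
i = np.argmin(f)
print(f"alpha* = {alphas[i]:.2f},  F_HS/N = {f[i]:.4f}")
```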
For the RDF appearing in the perturbation contribution to the free energy \[Eq. (1)\] we adopt the approach of Rascón et al. . Expressing $`g_{\mathrm{HS}}(r;[\rho (𝐫)])`$ as a sum over coordination shells, this scheme approximates the distributions of the second and higher coordination shells in simple mean-field fashion, but parametrizes the first-shell distribution in such a way as to incorporate nearest-neighbour correlations. The first-peak parameters are determined by imposing sum rules, e.g., the virial theorem and coordination number, so as to specify the contact ($`r=d`$) value and shape (area and first moment) of the first peak (for details see Ref. ). For a given solid structure, the lattice distances, the coordination number, the $`\alpha `$ that minimizes $`\mathcal{F}_{\mathrm{HS}}[\rho (𝐫)]`$, and the HS pressure, derived directly from $`F_{\mathrm{HS}}[\rho (𝐫)]`$, then combine to determine $`g_{\mathrm{HS}}(r;[\rho (𝐫)])`$ and hence the perturbation free energy.
Following the above procedure, we obtain the ionic free energy, $`F_{\mathrm{ion}}`$, which is the only structure-dependent contribution to the total free energy, and which suffices for assessing relative stabilities of different solid structures. To fully describe freezing/melting, however, it is essential to consider the total free energy, $`F=F_{\mathrm{ion}}+E_\mathrm{V}`$, where $`E_\mathrm{V}`$ is an additional contribution from the conduction electrons. The electronic free energy (or volume energy), $`E_\mathrm{V}`$, although independent of structure, depends on the average density and thus influences the densities of coexisting phases. Here we take for $`E_\mathrm{V}`$ the usual expression following from second-order perturbation theory and consistent with the effective pair potential.
## 3 Results and Discussion
Figure 1 illustrates the density dependence of the effective HS diameter, $`d`$, for three different crystal structures of Al near its melting point. According to the WCA prescription, $`d`$ depends both on the reference pair potential and on the first peak of the HS pair distribution function. Since the first peak is identical for hcp and fcc crystals, $`d`$ is the same for these two close-packed structures. Differences in the first peak for the bcc crystal result in a smaller diameter for that more open structure. Because of the sensitivity of $`F_{\mathrm{HS}}`$ to the HS packing fraction, these structural variations in $`d`$, though only of the order of 1 %, can amount to significant variations in the free energy. In passing, we note that the widely-used Barker-Henderson prescription for $`d`$ neglects such structural dependencies and that only knowledge of the HS RDF permits consistent application of the more accurate WCA prescription .
Predicted structural free energy differences for Na, Mg, and Al are displayed in Fig. 2. Evidently the theory correctly predicts the observed equilibrium structures for these three metals, namely bcc-Na, hcp-Mg, and fcc-Al, although the density at crossover from fcc\- to bcc-Na is somewhat overestimated. Compared with predictions of first-principles approaches, our results agree qualitatively well. For Mg and Al the stability trends of Fig. 2 match those from generalized pseudopotential theory (GPT) and the linear muffin-tin orbitals (LMTO) method . As well, for Na we predict the same trend as GPT, while LMTO fails for this case. Compared with available experimental data, our structural free energy differences are in somewhat better quantitative agreement than those of either GPT or LMTO. For example, for Mg and Al we predict differences between hcp and fcc free energies of $`1.6`$ and $`3.2`$ mRy, respectively, compared with $`0.6`$ and $`1.7`$ mRy from GPT, $`0.7`$ and $`1.6`$ mRy from LMTO, and $`1.5`$ and $`4.2`$ mRy from experiment (extrapolations of thermochemical alloy data, as quoted in Ref. ). It should be emphasized, however, that while the first-principles methods calculate ground-state ($`T=0`$) energies, thermodynamic perturbation theory yields free energies of finite-temperature systems. We note further that application of perturbation theory is restricted to those structures for which the HS system is at least metastable, i.e., for which $`\mathcal{F}_{\mathrm{HS}}[\rho (𝐫)]`$ has a local minimum. This excludes, e.g., the diamond structure of Si, the HS diamond crystal being unstable.
Finally, we investigate the freezing/melting transition by comparing predicted free energies for the liquid and solid phases. Figure 3 exemplifies the comparison for Al near its observed melting temperature. Crossing of the ionic free energy curves for the liquid and fcc phases implies a distinct transition near the observed equilibrium solid density. Similar comparisons for Na and Mg also demonstrate liquid-solid phase transitions in qualitative agreement with observation. In this respect, the effective-pair-potential approach reasonably describes the many-body interacting system. Note that the liquid-solid crossover density serves as a useful upper (lower) bound on the liquid (solid) density at coexistence. Adding the volume-energy contribution – identical for the two phases – while not affecting the crossover density, does alter the shape of the curves. As a result, the densities of coexisting liquid and solid phases, as determined by a Maxwell common tangent construction, depend on $`E_\mathrm{V}`$. An accurate volume energy is known to be essential for calculating realistic pressures and bulk moduli of metals from pseudopotential theory at the level of effective pair interactions . Similarly, we find the liquid-solid coexistence analysis to be sensitive to the form of $`E_\mathrm{V}`$, particularly its curvature with respect to $`\rho `$. In fact, the sensitivity is such as to preclude here a reliable determination of coexistence densities. Given the reasonable predictions for ionic free energies, this apparent limitation of the theory may reflect a need to go beyond the linear response approximation to include higher-order response effects in the volume energy (see also , p. 42ff). The relative significance of such effects, as well as distinctions between the real-space representation of $`E_\mathrm{V}`$ used here and the reciprocal-space representation of Ref. , are interesting issues deserving of further attention.
## 4 Conclusions
Summarizing, working within an effective-pair-potential framework, we have implemented a consistent form of thermodynamic perturbation theory, one that is of comparable accuracy for both the liquid and solid phases, and used it to study structural stabilities and freezing behaviour of third-period simple metals. Predictions for structural free energy differences are at least as accurate as those from two first-principles methods and are in generally good qualitative agreement with experiment. On the basis of ionic free energies, freezing/melting transitions are predicted near the observed densities. As the densities of coexisting liquid and solid phases are found to depend sensitively on the electronic free energy, a complete coexistence analysis has not been attempted pending more accurate knowledge of this part of the free energy. The computationally practical approach demonstrated here should prove useful in future studies of the freezing of liquid metal mixtures, as well as in guiding more sophisticated ab initio simulations.
Acknowledgements
ARD acknowledges the Forschungszentrum Jülich for use of its computing facilities. GK and JH acknowledge the support of the Österreichische Forschungsfonds under Proj. Nos. P11194 and P13062 and of the Oesterreichische Nationalbank under Proj. No. 6241.
Effective Action for an $`SU(2)`$ Gauge Model with a Vortex
## I Introduction
The phenomenon of confinement in non-Abelian gauge theories has a satisfactory explanation in a number of models (for a review, see, e.g., ). According to one of them, the color electric flux of the quark anti-quark pair is squeezed into a flux tube and this leads to a linear rise of the quark interaction potential energy. The so called projection techniques, developed since the early papers of ’t Hooft and Mandelstam , provided convenient tools, such as the maximal Abelian gauge, for further studies of this mechanism on the lattice, which brought numerical evidence that color electric flux tubes are formed due to the dual Meissner effect generated by the condensation of color monopoles (see also and references therein) – the phenomenon dual to the well known formation of strings with a magnetic flux in them in the theory of superconductivity (Abrikosov-Nielsen-Olesen strings ). Another explanation of confinement is based on the method of maximal center gauges , which displayed the center dominance and led to the center vortex picture of confinement. In this picture, the existence of $`Z(N_c)`$ vortices, whose distribution in space time fluctuates sufficiently randomly, provides for the so called area decay law for the Wilson loop expectation value, which implies a linear static quark potential (for the latest references see, e.g. ).
There are many questions left concerning the dynamics of vortices, as well as their origin. A field theoretical description of them was started as early as 1978 , and in the well known “spaghetti vacuum” picture , which has recently been developed in the framework of the theory of 4D surfaces and strings in a number of papers , , . Recently, the continuum analogue of the maximum center gauge was constructed and discussed in . At the same time, study of simple models that allow for exact solutions may shed light on the complicated general problem of vortex dynamics.
In the present letter, we consider the one-loop contribution of gauge field fluctuations about a pure gauge configuration with a gauge field vortex in a 4-dimensional space of nontrivial topology, $`S^1\times R^3`$, i.e., with a cylinder. This configuration resembles the simplest imitation of the Aharonov-Bohm effect with a string and a non-zero magnetic flux in it. We demonstrate that our result, with an evident reinterpretation, confirms the conclusions of the recent work of D. Diakonov , who investigated potential energy of vortices in the 4- and 3-dimensional gauge theories.
## II The effective action
We consider the generating functional of the gluodynamic model for the gauge group $`SU(2)`$ in the 4-dimensional space time $`S^1\times R^3`$ with a cylinder
$`Z[\overline{A},j]={\displaystyle \int da_\mu ^a\,d\chi \,d\overline{\chi }\,\mathrm{exp}\left[-S_4\right]\int da_\mu ^a\,\mathrm{exp}\left[-S_2\right]},`$ (1)
where the action $`S`$ of the gauge field is split into two parts,
$`S=S_4+S_2={\displaystyle \int d^4x(L_4+j^{a\mu }a_\mu ^a)}+{\displaystyle \int d^2x(L_2+j^{a\mu }a_\mu ^a)},`$ (2)
$`S_4`$ for the 4-dimensional space time $`S^1\times R^3`$ and $`S_2`$ for the 2-dimensional cylinder $`S^1\times R^1`$, whose presence, in the same way as a deleted point in the 2-dimensional plane, attributes a nontrivial topology to this space configuration. The Lagrangian of the gauge field $`A_\mu `$ in the external (background) field has the familiar form
$`L_4={\displaystyle \frac{1}{4}}(F_{\mu \nu }^a)^2+{\displaystyle \frac{1}{2\xi }}(\overline{D}_\mu ^{ab}a^{b\mu })^2+\overline{\chi }_a(-\overline{D}^2)_{ab}\chi _b.`$ (3)
Here, $`F_{\mu \nu }^a=\partial _\mu A_\nu ^a-\partial _\nu A_\mu ^a-ig(T^a)^{bc}A_\mu ^bA_\nu ^c`$, $`\overline{D}_\mu ^{ab}=\delta ^{ab}\partial _\mu -ig(T^c)^{ab}\overline{A}_\mu ^c`$, $`T^a`$ are the $`SU(2)`$-group generators taken in the adjoint representation, $`A_\mu ^a=\overline{A}_\mu ^a+a_\mu ^a`$, where $`\overline{A}_\mu ^a`$ is the background field, $`a_\mu ^a`$ are quantum fluctuations of the gluon field about the background, and $`\chi `$ and $`\overline{\chi }`$ are ghost fields.
We expand the Lagrangian and keep the terms that are quadratic in gluon fluctuations, which corresponds to the one-loop approximation. After this, the path integral becomes Gaussian and for the effective action $`\mathrm{\Gamma }`$, related to $`Z`$ by $`Z=\mathrm{exp}(-\mathrm{\Gamma })`$, we obtain $`\mathrm{\Gamma }^{(1)}=\mathrm{\Gamma }_4^{(1)}+\mathrm{\Gamma }_2^{(1)}`$, where
$`\mathrm{\Gamma }_4^{(1)}[\overline{A}]={\displaystyle \frac{1}{2}}\mathrm{Tr}\mathrm{ln}[\overline{\mathrm{\Theta }}_{\mu \nu }^{ab}]-\mathrm{Tr}\mathrm{ln}[(-\overline{D}^2)^{ab}],\mu ,\nu \in S^1\times R^3,`$ (4)
(here the first term corresponds to the gluon contribution and the second one is the ghost contribution), and
$`\mathrm{\Gamma }_2^{(1)}[\overline{A}]={\displaystyle \frac{1}{2}}\mathrm{Tr}\mathrm{ln}[\overline{\mathrm{\Theta }}_{\mu \nu }^{ab}],\mu ,\nu \in S^1\times R^1.`$ (5)
The operator $`\mathrm{\Theta }`$ in (4) is defined as
$`\overline{\mathrm{\Theta }}_{\mu \nu }^{ab}=-g_{\mu \nu }(\overline{D}_\lambda \overline{D}^\lambda )^{ab}+2ig(T^c)^{ab}\overline{F}_{\mu \nu }^c+(1-1/\xi )(\overline{D}_\mu \overline{D}_\nu )^{ab},`$ (6)
where $`T^a`$ are the $`SU(2)`$-group generators taken in the adjoint representation. We choose the gauge $`\xi =1`$, which makes the third term in (6) vanish. We take the external field potential to be Abelian-like
$`\overline{A}_\mu ^a=n^a\overline{A}_\mu ,\overline{F}_{\mu \nu }^a=n^a\overline{F}_{\mu \nu },`$ (7)
where $`n^a`$ is a unit vector pointing in a certain direction in the color space.
Let $`\nu ^a(a=1,2,3)`$ denote the eigenvalues of the color matrix $`(n^cT^c)`$, then we have $`|\nu ^a|=(1,1,0)`$. Finally, for the operator (6) we obtain:
$`\overline{\mathrm{\Theta }}_{\mu \nu }^{ab}=-g_{\mu \nu }([\partial _\lambda -ig(n^cT^c)\overline{A}_\lambda ]^2)^{ab}+ig(n^cT^c)^{ab}\overline{F}_{\mu \nu }.`$ (8)
Following the ideas of , in order to model the vortex configuration in continuum Yang-Mills theory, we take the external field potential to be pure gauge on the surface of the cylinder $`\rho =const=R`$, i.e.,
$$\overline{A}_\mu =\frac{i}{g}\omega \partial _\mu \omega ^{-1}.$$
As a simple nonsingular configuration (corresponding to a “thick vortex” ), we take the gauge field potential in the $`S^1`$ circle $`\rho =\mathrm{const}=R`$, in the polar coordinates, in the form
$`A_\mu =\{\begin{array}{cc}\overline{A}_\theta =\mathrm{\Phi }/(2\pi gR),\mu \in S^1(\mu =\theta )& \\ 0,\mu \in 𝐑^3(\mu =2,3,4),& \end{array}`$ (11)
with $`\mathrm{\Phi }/g=\mathrm{const}`$ as the flux through the cylinder. As is well known, in this situation we can introduce the 2-dimensional vector in the $`\rho \theta `$ plane
$$G_\mu =\frac{1}{2\pi }\epsilon _{\mu \nu }A_\nu .$$
Then
$$2\pi r_\mu G_\mu =\frac{i}{g}\omega \frac{\partial }{\partial \theta }\omega ^{-1},$$
and we have
$`g{\displaystyle \int _0^{2\pi }}𝑑\theta \,r_\mu G_\mu ={\displaystyle \frac{i}{2\pi }}{\displaystyle \int _0^{2\pi }}𝑑\theta \,\omega {\displaystyle \frac{\partial }{\partial \theta }}\omega ^{-1}=n,`$ (12)
i.e., the topological charge corresponding to mapping from the circle around the cylinder surface to the $`U(1)`$ group space (Pontryagin index). In our example (11), $`\mathrm{\Phi }=2\pi n`$.
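As a quick sanity check of this Pontryagin index, the winding-number integral can be evaluated numerically for $`\omega =\mathrm{e}^{in\theta }`$; the short Python sketch below (illustrative only) reproduces $`n`$.

```python
import numpy as np

# Numerical check of (i/(2*pi)) * int_0^{2pi} dtheta  omega d/dtheta omega^{-1} = n
# for the pure-gauge configuration omega = exp(i*n*theta).
n = 3
theta = np.linspace(0.0, 2.0 * np.pi, 20001)
omega = np.exp(1j * n * theta)
omega_inv = np.exp(-1j * n * theta)
integrand = omega * np.gradient(omega_inv, theta)   # omega * d(omega^-1)/dtheta
charge = (1j / (2.0 * np.pi)) * np.trapz(integrand, theta)
print(f"winding number = {charge.real:.6f}  (expected {n})")
```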
## III Effective potential
The spectrum of operator (8) after diagonalization in the 4-dimensional case
$`\overline{\mathrm{\Theta }}_\mu ^a=g_{\mu \mu }\mathrm{\Delta }^a`$ (13)
is given by the eigenvalues of the operator
$`\mathrm{\Delta }^a\equiv -(\partial _\lambda -ig\nu ^a\overline{A}_\lambda )^2,`$ (14)
which are evident in this case:
$`{\displaystyle \frac{1}{\rho ^2}}(l-\nu ^ax)^2+𝐤^2\equiv \mathrm{\Lambda }_\mu ^a(𝐤^2,l),l\in 𝐙,\mu =\theta ,2,3,4,`$ (15)
where $`x=Rg\overline{A}_\theta =\mathrm{\Phi }/(2\pi )`$. The same eigenvalue $`\mathrm{\Lambda }^a(𝐤^2,l)=\mathrm{\Lambda }_\mu ^a(𝐤^2,l)`$ is obviously obtained for the corresponding ghost operator in (4).
After all the transformations described above we arrive at the final formula for the 4-dimensional effective action
$`\mathrm{\Gamma }_4^{(1)}[\overline{A}]={\displaystyle \frac{\mathrm{\Omega }}{2\pi RL^3}}\sum _a|\nu ^a|\sum _{l=-\mathrm{\infty }}^{\mathrm{\infty }}\sum _𝐤\left({\displaystyle \frac{1}{2}}\sum _{\mu =\theta ,2,3,4}\mathrm{log}[\mathrm{\Lambda }_\mu ^a(𝐤^2,l)]-\mathrm{log}[\mathrm{\Lambda }^a(𝐤^2,l)]\right),`$ (16)
where $`\mathrm{\Omega }\equiv \int d^4x=2\pi R\int 𝑑x_2𝑑x_3𝑑x_4`$ is the 4-volume of the space time.
In what follows we use the “proper time” representation
$`\mathrm{log}A=-{\displaystyle \int _0^{\mathrm{\infty }}}{\displaystyle \frac{ds}{s}}\mathrm{exp}(-sA),\mathrm{Re}A>0,`$ (17)
which is valid once the subtraction is performed at the lower integration limit. Next we transform summation over $`l`$ in (16) with the help of the identity
$`{\displaystyle \sum _{l=-\mathrm{\infty }}^{+\mathrm{\infty }}}\mathrm{exp}\left[-{\displaystyle \frac{s}{R^2}}(l+x)^2\right]={\displaystyle \frac{\sqrt{\pi }R}{\sqrt{s}}}\left[1+2{\displaystyle \sum _{l=1}^{\mathrm{\infty }}}\mathrm{exp}(-{\displaystyle \frac{\pi ^2R^2l^2}{s}})\mathrm{cos}(2\pi xl)\right].`$ (18)
After performing the necessary operations in (16) we obtain the 4-dimensional effective potential:
$`v_4={\displaystyle \frac{\mathrm{\Gamma }^{(1)}[\overline{A}]}{\mathrm{\Omega }}}=-{\displaystyle \frac{1}{2\pi ^2}}{\displaystyle \int _0^{\mathrm{\infty }}}{\displaystyle \frac{ds}{s^3}}{\displaystyle \sum _{l=1}^{\mathrm{\infty }}}\mathrm{exp}(-{\displaystyle \frac{\pi ^2R^2l^2}{s}})\mathrm{cos}(2\pi xl).`$ (19)
The final integration over $`s`$ is easily performed and we find:
$`v_4=-{\displaystyle \frac{4}{\pi ^2(2\pi R)^4}}{\displaystyle \sum _{l=1}^{\mathrm{\infty }}}{\displaystyle \frac{\mathrm{cos}(2\pi xl)}{l^4}}={\displaystyle \frac{4\pi ^2}{3(2\pi R)^4}}B_4(x).`$ (20)
This formula, after replacements $`2\pi R`$ by $`1/T`$, and $`A_\theta `$ by $`A_0`$, corresponds to the well known finite temperature ($`T\ne 0`$) result for the free energy of gluons in the background of a constant $`A_0`$ potential (see also, , ). Similar calculations for the contribution of the 2-dimensional cylinder give the result
$`v_2=-{\displaystyle \frac{2}{\pi (2\pi R)^2}}{\displaystyle \sum _{l=1}^{\mathrm{\infty }}}{\displaystyle \frac{\mathrm{cos}(2\pi xl)}{l^2}}=-{\displaystyle \frac{2\pi }{(2\pi R)^2}}B_2(x).`$ (21)
Here, and in (20), we used the Bernoulli polynomials defined according to
$$\sum _{l=1}^{\mathrm{\infty }}\frac{\mathrm{cos}(2\pi lx)}{l^{2n}}=(-1)^{n-1}\frac{1}{2}\frac{(2\pi )^{2n}}{(2n)!}B_{2n}(x).$$
In particular
$$B_2(x)=x^2-x+1/6,B_4(x)=x^4-2x^3+x^2-\frac{1}{30}.$$
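These resummations are easy to verify numerically; the Python snippet below (a quick check, not part of the derivation) confirms the $`n=2`$ case used in (20).

```python
import numpy as np

# Check: sum_{l>=1} cos(2*pi*l*x)/l^4 = -(pi^4/3) * B_4(x), i.e. the n=2 case
# of the Bernoulli-polynomial identity above.
def B4(x):
    return x**4 - 2*x**3 + x**2 - 1.0/30.0

x = 0.3
l = np.arange(1, 200001)
lhs = np.sum(np.cos(2*np.pi*l*x) / l**4)
rhs = -(np.pi**4 / 3.0) * B4(x)
print(f"sum = {lhs:.10f},  -(pi^4/3)*B4(x) = {rhs:.10f}")
```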
Concerning the results obtained, it should be remarked that, first, the Bernoulli polynomials depend on the argument defined modulo 1, and hence the effective potential conserves the $`Z_2`$ symmetry, characteristic for vortices, and, second, an evident renormalization of the result has been performed. After summation of the two contributions is performed we obtain
$`v=v_4+{\displaystyle \frac{1}{3\pi R^2}}v_2.`$ (22)
The second term for the 2-dimensional cylinder has been averaged over three possible orientations of the coordinate axes with respect to the cylinder in the 4-dimensional space time and over the area of its cross section. The final result, with the terms independent of $`gA_\theta `$ having been omitted, has the simple form
$`v={\displaystyle \frac{1}{12\pi ^2R^4}}x(2-x)(1-x^2),x={\displaystyle \frac{\mathrm{\Phi }}{2\pi }}.`$ (23)
The result obtained coincides formally with that derived in for the following configuration of the 4-dimensional space time: a plane with a deleted point at the origin, and the potential $`A_\mu `$ with a singularity at a string going through the origin. This coincidence has a simple explanation: in both cases we have the same general topological situation, defined by the Pontryagin index (12). It should be mentioned, that in our case the background potential was defined on the cylinder and had no singularity, while in the latter case, it was defined on a plane with a singularity at the origin, and hence the complete basis with Bessel functions had to be employed in calculating the functional trace.
As in , the potential has minima (equal to zero) at $`x=0`$ and $`x=1`$, and, due to its periodic properties, at all integer values of $`x`$, though the potential has a jump of its derivative at these points. The values of the flux that provide a minimum for the effective potential correspond to $`\pm 1`$ values of the elementary Wilson loop that goes along the contour $`C`$ encircling the cylinder:
$`<W(C)>={\displaystyle \frac{1}{2}}<Tr\text{P}\mathrm{exp}ig{\displaystyle \oint _C}A_\mu 𝑑x_\mu >=\mathrm{cos}(\pi x)`$ (24)
The same situation has already been reported in . As was argued in recent publications (see, e.g., , ), the value $`W(C)=-1`$ characterizes the vortices that pierce the area of the large Wilson loop with Gaussian distribution, thus providing for the area law and for the linear confining potential.
## IV Acknowledgements
The author gratefully acknowledges the hospitality extended to him by the theory group of the Institut für Theoretische Physik, Universität Tübingen during his stay there. This work has been supported in part by the DFG grant 436 RUS 113/477/0.
Unexpected Behavior of the Local Compressibility Near the B=0 Metal-Insulator Transition
## Abstract
We have measured the local electronic compressibility of a two-dimensional hole gas as it crosses the B=0 Metal-Insulator Transition. In the metallic phase, the compressibility follows the mean-field Hartree-Fock (HF) theory and is found to be spatially homogeneous. In the insulating phase it deviates by more than an order of magnitude from the HF predictions and is spatially inhomogeneous. The crossover density between the two types of behavior agrees quantitatively with the transport critical density, suggesting that the system undergoes a thermodynamic change at the transition.
The recent discovery of a Two-Dimensional (2D) Metal-Insulator Transition (MIT) has raised the fundamental question concerning the existence of metallic 2D systems. In contrast to the scaling theory of localization, which predicts that only an insulating phase can exist in 2D, there is compelling evidence for metallic-like behavior in a growing number of 2D systems. To date, the vast majority of this evidence comes from transport measurements. Clearly, if a true zero-temperature phase transition exists, it may reveal itself also in the thermodynamic properties of the 2D gas, such as the electronic compressibility, $`\kappa ^{-1}=n^2\frac{\delta \mu }{\delta n}`$ (where $`\mu `$ is the chemical potential and $`n`$ is the carrier density). For example, a crossover to a gapped insulator would result in a vanishing compressibility at the transition, whereas a crossover to an Anderson insulator would involve a continuous evolution of the compressibility across the transition.
The compressibility or $`\frac{\delta \mu }{\delta n}`$ of a system reflects how its chemical potential varies with density. For non-interacting electrons it simply amounts to the single-particle Density of States (DOS), which in 2D is density independent ($`\frac{\delta \mu }{\delta n}=\frac{\pi \mathrm{\hbar }^2}{m}`$). This picture, however, changes drastically when interactions are included. Exchange and correlation effects weaken the repulsion between the electrons, thereby reducing the energy cost, thus leading to negative and singular corrections to $`\frac{\delta \mu }{\delta n}`$. Within the Hartree-Fock (HF) theory, which includes both the DOS and exchange terms, one gets:
$$\frac{\delta \mu }{\delta n}=\frac{\pi \mathrm{\hbar }^2}{m}-\left(\frac{2}{\pi }\right)^{\frac{1}{2}}\frac{e^2}{4\pi \epsilon }n^{-1/2},$$
(1)
with the compressibility becoming negative at low enough densities. Measurements of the macroscopic compressibility of 2D electron/hole gases, in the metallic regime, have indeed confirmed this behavior.
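To get a feel for the scale at which Eq. (1) changes sign, here is a hedged numerical illustration in Python; the effective mass and dielectric constant are nominal GaAs values assumed for the example only.

```python
import numpy as np

# Find the density n* where dmu/dn of Eq. (1) changes sign (SI units).
hbar = 1.0546e-34            # J s
e = 1.602e-19                # C
eps0 = 8.854e-12             # F/m
m = 0.38 * 9.109e-31         # light-hole effective mass in GaAs (nominal)
eps = 12.9 * eps0            # GaAs dielectric constant (assumed)

def dmu_dn(n):               # Eq. (1); n in m^-2
    return np.pi * hbar**2 / m - np.sqrt(2/np.pi) * e**2 / (4*np.pi*eps) * n**-0.5

lo, hi = 1e12, 1e18          # bracket in m^-2: dmu/dn < 0 at lo, > 0 at hi
for _ in range(80):          # bisection on a logarithmic grid
    mid = np.sqrt(lo * hi)
    lo, hi = (mid, hi) if dmu_dn(mid) < 0 else (lo, mid)
print(f"dmu/dn changes sign near n ~ {lo:.2e} m^-2 = {lo * 1e-4:.2e} cm^-2")
```

Below this density the exchange term dominates and the compressibility is negative.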
In this work, we have expanded the study of $`\mu (n)`$ into the MIT regime. Our measurements utilize several Single Electron Transistors (SETs), situated directly above a Two-Dimensional Hole Gas (2DHG). This technique allows us to determine the local behavior of $`\mu (n)`$ as well as its spatial variations. Simultaneous macroscopic transport measurements were conducted to ensure a precise determination of the MIT critical density in the same sample. Our measurements indicate a clear thermodynamic change in $`\mu (n)`$ at the MIT. In addition, we find that the behavior of $`\mu (n)`$ in the metallic phase follows the HF model and is spatially homogeneous. The insulating phase, on the other hand, is found to be spatially inhomogeneous. In this regime, $`\frac{\delta \mu }{\delta n}`$ deviates by more than an order of magnitude from the predictions of the HF model.
The samples are inverted GaAs/AlGaAs heterostructures. A layer of AlGaAs, $`2000`$Å thick, separates the 2DHG from a $`p^+`$ layer that serves as a back gate, and is separately contacted. A GaAs spacer, $`500`$Å thick, separates the 2DHG from the doped GaAs layer situated just under the surface (see inset in Fig. 1). The mobility of the 2DHGs ranges from $`7\times 10^4cm^2/V\mathrm{sec}`$ to $`1.2\times 10^5cm^2/V\mathrm{sec}`$ at $`n=5\times 10^{10}cm^{-2}`$ and $`T=4.2K`$. Biasing the back gate enables us to vary the density of holes by two orders of magnitude, from $`2\times 10^{11}cm^{-2}`$ to $`2\times 10^9cm^{-2}`$. Our samples show a clear MIT. A typical plot of the resistivity, $`\rho `$, vs. density at various temperatures is shown in Fig. 1. A clear crossing point between metallic and insulating behavior is observed. The critical density and resistivity, determined from the low temperature crossing point, is $`n_c\simeq 2.1\times 10^{10}cm^{-2}`$ and $`\rho _c\simeq 0.9\frac{h}{e^2}`$. Similar samples were previously used to study the transport properties of the MIT in GaAs.
A local measurement of $`\mu `$ requires the ability to measure electrostatic potentials with high sensitivity and good spatial resolution. It has been previously shown that both can be achieved using an SET. We have therefore deposited several aluminum SETs on top of an etched Hall bar mesa (inset in Fig. 1). This configuration allows us to study simultaneously the local thermodynamic behavior and the macroscopic transport properties and, thereby, trace possible correlation between them. Our technique relies on the fact that $`\mu `$ of the 2DHG changes as its density is varied. At equilibrium, the Fermi energy is constant across the sample, and therefore, a change in $`\mu `$ induces a change in the electrostatic potential, which is directly measured by the SET. The size of the SET and its distance from the 2DHG determine its spatial resolution, estimated to be $`0.1\times 0.5\mu m^2`$ . The measured voltage sensitivity of the SET is $`10\mu V`$. Similar technique has been previously employed to image the local compressibility of a 2DEG in the quantum Hall regime.
To demonstrate the strength of our technique, we first employ it to study the magnetic field ($`B`$) dependence of $`\mu `$ deep in the metallic regime ($`n=1.96\times 10^{11}cm^{-2}`$, see Fig. 2). Theoretically, the formation of well-separated Landau levels at high magnetic fields is expected to induce sharp saw-tooth like oscillations in $`\mu `$. The sharp drops are at integer filling factors ($`\nu `$), and the slopes obey $`\frac{\delta \mu }{\delta B}=\left(j+\frac{1}{2}\right)\frac{\mathrm{\hbar }e}{m^{\ast }}\pm g^{\ast }\mu _B`$, where $`m^{\ast },g^{\ast }`$ are the effective mass and g-factor, $`j`$ is the Landau level number, and the $`\pm `$ relate to the two spin directions. We find good quantitative agreement between this model and the measurement using a single value for $`m^{\ast }=0.34m_e`$ and $`g^{\ast }=2.7`$. The measured value of $`m^{\ast }`$ is also in good agreement with the known effective mass of light holes in GaAs, $`m^{\ast }=0.38m_e`$. The position of the jumps allows us to determine the local density of the 2DHG under the SET: $`n_l=1.82\times 10^{11}cm^{-2}`$, which deviates by only 7% from the density measured macroscopically by transport techniques. This small deviation demonstrates that the area beneath the SET is hardly disturbed by its presence.
The main focus of this work is the density dependence of $`\mu `$ across the $`B=0`$ MIT. Both the theory and previous macroscopic measurements show that at low densities $`\mu `$ should increase monotonically as $`n`$ is decreased (Eq. 1). Unexpectedly, our local measurements demonstrate quite a different behavior. Several typical traces of $`\mu `$ vs. $`n`$ are shown in Fig. 3. Fig. 3a shows the behavior in the metallic regime and Fig. 3b shows the behavior across the MIT and in the insulating regime. Instead of the expected monotonic dependence, $`\mu `$ exhibits a rich structure of oscillations. All the oscillations, including the fine-structure seen on the left side of Fig. 3b, are completely reproducible and do not depend on the measurement rate or sweeping direction. Two distinct types of oscillations are observed: In the metallic regime (Fig. 3a) we observe long saw-tooth oscillations with a typical period in density of $`(1-2)\times 10^{10}cm^{-2}`$. Superimposed on them and starting in close proximity to the MIT, a new set of rapid oscillations emerges (Fig. 3b). Their typical period is an order of magnitude smaller: $`(1-2)\times 10^9cm^{-2}`$, and their amplitude grows continuously from their point of appearance to lower densities. Similar behavior has been measured on seven different SETs placed on five separate Hall bars from two different wafers. Although the specific pattern of oscillations varies between these Hall bars and changes also after thermal cycling, they all have the same general characteristics. The qualitative change in the oscillation pattern of $`\mu (n)`$ near the critical density suggests that the system experiences a thermodynamic change at the MIT.
Out of the two types of oscillations, the easier ones to explain are those on the metallic side. In Fig. 3a we compare the measured oscillations with the expected monotonic behavior of the HF model. In contrast to the model, the measured $`\mu `$ has a density independent average, which suggests that some kind of screening mechanism is present in the system. While the negative slopes of $`\mu (n)`$ (black symbols) follow the HF theory, there are apparent additional drops (some of which are extremely sharp) between them (gray symbols). This saw-tooth profile is reminiscent of the behavior of the chemical potential of a quantum dot as a function of its density and suggests the existence of discrete charging events in the dopant layer, situated between the 2DHG and the SET. Thus, the measured $`\mu `$ of the 2DHG varies undisturbed along the negative slopes, until a certain bias is built up between the 2DHG and the SET that makes the charging of an intermediate localized state energetically favorable. This causes a screening charge to pop in, which results in a sharp drop in the measured electrostatic potential, after which $`\mu `$ continues to vary undisturbed until the next screening event occurs. Because screening occurs only at discrete points we can reconstruct the underlying $`\mu (n)`$ from the unscreened segments in the measurement. We do this in Fig. 4, by collecting the slopes of these segments from the saw-tooth profiles of five different SETs placed on three separate Hall bars. Each single point in this graph represents the slope of a well-defined segment, like the ones emphasized in Fig. 3a. On the metallic side all the data collapses onto a single curve (the dashed line in Fig. 4 is a guide to the eye). The prediction of the HF model is depicted by the solid line in Fig. 4. The finite width of the 2DHG adds a positive contribution to the compressibility as shown by the dotted line. It is evident that both theories describe the data to within a factor of two over more than an order of magnitude in density, throughout the metallic regime. The fact that different Hall bars and different SETs (namely different locations within the same Hall bar) produce the same dependence suggests that the metallic phase is homogeneous in space.
The sudden appearance of rapid oscillations as the system crosses to the insulating side already signifies that something dramatic happens at the transition. Assuming that this new set of oscillations is caused by the same mechanism, namely screening by traps, we can proceed and extract its slopes and add them to the same plot of $`\delta \mu /\delta n`$ (see Fig. 4). Unlike in the metallic phase, where the system clearly has a negative $`\delta \mu /\delta n`$, namely negative compressibility, in the insulating phase we a priori do not know the sign of the compressibility. We, therefore, plot in the left side of Fig. 4 both negative and positive slopes, all of which deviate considerably from the expected $`n^{-1/2}`$ power law. The deviation becomes greater than an order of magnitude at our lowest density. We see that $`\delta \mu /\delta n`$ starts deviating from the theoretical curve in close proximity to the transport-measured critical density, signifying a change in the screening properties of the 2DHG at the transition. Another intriguing result is the non-universal behavior of the slopes on the insulating side. It should be emphasized that each slope is determined from a complete segment in the $`\mu \left(n\right)`$ curve (see inset to Fig. 3b) and is, therefore, determined very accurately. The fluctuations in the slopes, seen even in a measurement done using a single SET, are completely reproducible and suggest that mesoscopic effects are present. Furthermore, the average behavior of $`\delta \mu /\delta n`$ in this fluctuating regime is seen to be position dependent. An example of that is shown in the inset in Fig. 4, where we plot $`\delta \mu /\delta n`$ measured by two separate SETs on the same Hall bar. Such dependence on position indicates that once the system crosses into the insulating phase it ceases to be spatially homogeneous.
The charge traps have a clear signature in our measurements, emphasizing their important role in the thermodynamic ground state of the system. It was recently suggested by Altshuler and Maslov that a gas that is in electro-chemical equilibrium with charge traps might show metallic-like behavior of the resistivity. This behavior is a result of a temperature-dependent scattering mechanism evoked by the coupling to the trap system. Although it is clear from our measurements that charging of the traps takes place as the density of holes is varied, we are currently unable to determine experimentally whether this coupling is responsible for the metallic behavior seen in our samples.
The large values of $`\delta \mu /\delta n`$ in the insulating phase are intriguing. Currently there are no theoretical predictions for a large negative $`\delta \mu /\delta n`$, which in turn would lead to a sharp rise in $`\mu `$ as the density is reduced. However, we would like to dwell on several scenarios in which the measured $`\delta \mu /\delta n`$ becomes positive at the transition and increasingly large as the density is further reduced. The first scenario assumes the formation of voids in the 2DHG. Such voids are unable to screen the back gate voltage and, hence, would result in added large and positive contributions to $`\delta \mu /\delta n`$. In this scenario, the size and shape of the voids, as a function of decreasing density, are responsible for the large and seemingly random set of positive slopes observed. A second scenario involves the formation of isolated puddles in the 2DHG. These puddles screen the back gate discretely and, hence, would produce large and positive slopes of $`\mu \left(n\right)`$. It should be noted that both scenarios suggest a percolative like transition and are in accord with our observation that the 2DHG is inhomogeneous in the insulating phase.
In conclusion, we have measured the dependence of $`\mu (n)`$ across the MIT. Our measurements clearly indicate a thermodynamic change in the screening properties of the 2DHG at the transition. We find the metallic phase to be spatially homogeneous and to behave according to the predictions of the HF model. The insulating phase, on the other hand, deviates significantly from the HF predictions and is spatially inhomogeneous.
We benefited greatly from discussions with A. M. Finkel’stein, Y. Hanein, Y. Meir, D. Shahar, A. Stern and C. M. Varma. This work was supported by the MINERVA foundation, Germany.
## 1 Introduction
Recently the two-dimensional $`q`$-state random bond Potts model with $`q>4`$ has attracted considerable interest, because it serves as a paradigm for examining the effect of quenched randomness on a first-order phase transition . Since in this case the randomness couples to the local energy density, a theorem by Aizenman and Wehr , along with related analytical work , suggests that the transition should become continuous, as has indeed been verified by subsequent numerical studies . Unfortunately, analytical results have been scarce, except in the limit $`q\to \mathrm{\infty }`$ where properties of a particular tricritical point were related to those of the zero-temperature fixed point of the random field Ising model in $`d=2+\epsilon `$ dimensions . From the conjectured phase diagram it is however known that this fixed point is not the analytical continuation of the line of random fixed points found for finite $`q>2`$ . Namely the latter (henceforth referred to as the $`q\to \mathrm{\infty }`$ limit of the model) is rather believed to be associated with a subtle percolation-like limit , the exact properties of which have not yet been fully elucidated.
In the present publication we seek to gain further knowledge of this $`q\to \mathrm{\infty }`$ limit by producing numerical results along the afore-mentioned line of critical fixed points for very large values of $`q`$. Since cross-over effects to the pure and percolative limits of the model have been shown to be important , special attention must be paid to the parametrisation of the critical line. Generalising a recently developed transfer matrix technique , in which the Potts model is treated through its loop representation , we were able to explicitly trace out this line, and as a by-product obtain very precise values of the central charge. Based on our numerical results for the $`q=8^k`$ state model with $`k=1,2,\mathrm{\dots },6`$ we find compelling evidence that
$$c(q)=\frac{1}{2}\mathrm{log}_2(q)+𝒪(1).$$
(1.1)
Although this behaviour of the central charge is reminiscent of the Ising-like features of the tricritical fixed point discussed above, we shall soon see that from the point of view of the magnetic exponent the $`q\to \mathrm{\infty }`$ limit is most definitely not in the Ising universality class. Note also that our precision allows us, for the first time, to convincingly distinguish the numerically computed central charge from its analytically known value in the percolation limit .
With the numerically obtained parametrisation of the critical disorder strength at hand we then proceed to measure the corresponding magnetic bulk scaling dimension $`x_1`$ as a function of $`q`$. The most suitable technique is here that of conventional Monte Carlo simulations. Our results lend credibility to the belief that $`x_1(q)`$ saturates as $`q\mathrm{}`$. Based on results for the $`q=8^k`$ state model with $`k=1,2,3`$ we propose the limiting value
$$x_1(q)\to 0.192\pm 0.002\text{ for }q\to \mathrm{\infty },$$
(1.2)
in agreement with the one reported in Ref. . The fact that Eq. (1.2) does not coincide with any known scaling dimension of standard percolation is remarkable, and calls for further analytical investigations of the $`q\to \mathrm{\infty }`$ limit.
After explaining the loop model transfer matrices in Section 2, we state our results for the critical line and the central charge in Section 3. The Monte Carlo method and the resulting values of the magnetic scaling dimension are presented in Section 4, and we conclude with a discussion.
## 2 Loop model transfer matrices
The partition function of the random bond Potts model can be written as
$$Z=\sum _{\{\sigma \}}\prod _{\langle ij\rangle }\mathrm{e}^{K_{ij}\delta _{\sigma _i,\sigma _j}},$$
(2.1)
where the summation is over the $`q`$ discrete values of each spin and the product runs over all nearest-neighbour bonds on the square lattice. The $`K_{ij}`$ are the reduced coupling constants, which for the moment may be drawn from an arbitrary distribution. By the standard Kasteleyn-Fortuin transformation , Eq. (2.1) can be recast as a random cluster model
$$Z=\sum _{\{𝒢\}}q^{C(𝒢)}\prod _{\langle ij\rangle \in 𝒢}\left(\mathrm{e}^{K_{ij}}-1\right),$$
(2.2)
where $`𝒢`$ is a bond percolation graph with $`C(𝒢)`$ independent clusters. Note that $`q`$ now enters only as a (continuous) parameter, and since the non-locality of the clusters does not obstruct the construction of a transfer matrix the interesting regime of $`q\ge 4`$ becomes readily accessible, provided that one can take into account the randomness in the couplings .
In an analogous fashion we can adapt the even more efficient loop model representation to the random case. Indeed, trading the clusters for their surrounding loops on the medial lattice , Eq. (2.2) is turned into
$$Z=q^{N/2}\sum _{\{𝒢\}}q^{L(𝒢)/2}\prod _{\langle ij\rangle \in 𝒢}\left(\frac{\mathrm{e}^{K_{ij}}-1}{\sqrt{q}}\right),$$
(2.3)
where $`N`$ is the total number of spins, and configuration $`𝒢`$ encompasses $`L(𝒢)`$ loops. The strip width $`L`$ is measured in terms of the number of ‘dangling’ loop segments, and must be even by definition of the medial lattice .
A pleasant feature of the random bond Potts model is that the critical temperature is known exactly by self-duality . Employing for simplicity the bimodal distribution
$$P(K_{ij})=\frac{1}{2}\left[\delta (K_{ij}-K_1)+\delta (K_{ij}-K_2)\right],$$
(2.4)
and choosing the parametrisation $`s_{ij}\equiv (\mathrm{e}^{K_{ij}}-1)/\sqrt{q}`$, the self-duality criterion takes the simple form
$$s_1s_2=1.$$
(2.5)
To fully identify the critical point the only free parameter is then the strength of the disorder, which can be measured in terms of $`R\equiv K_1/K_2>1`$ or $`s\equiv s_1>1`$.
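For concreteness, the following small Python sketch generates the pair of self-dual couplings from given $`(q,s)`$ and verifies Eq. (2.5):

```python
import numpy as np

# Self-dual random-bond couplings: s_ij = (exp(K_ij) - 1)/sqrt(q), with s_1 s_2 = 1.
q, s = 8.0, 4.0                              # illustrative disorder strength
K1 = np.log(1.0 + s * np.sqrt(q))            # strong bond
K2 = np.log(1.0 + np.sqrt(q) / s)            # weak bond
s1 = (np.exp(K1) - 1.0) / np.sqrt(q)
s2 = (np.exp(K2) - 1.0) / np.sqrt(q)
print(f"K1={K1:.4f}  K2={K2:.4f}  R=K1/K2={K1/K2:.4f}  s1*s2={s1*s2:.6f}")
```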
## 3 Central charge
In Ref. we showed that Zamolodchikov’s $`c`$-theorem is a powerful tool for numerically identifying the fixed points of a pure system. The idea is simple: From the leading eigenvalue of the transfer matrix, specific free energies $`f_0(L)`$ can be computed as a function of the strip width $`L`$. Effective central charges $`c(L)`$ are then obtained by fitting data for two consecutive strip widths according to
$$f_0(L)=f_0(\mathrm{\infty })-\frac{\pi c}{6L^2}+\mathrm{\dots }.$$
(3.1)
By tuning the free parameter $`s`$ of the system, local extrema $`c(L,s_{\ast }(L))`$ are sought for, and finally the fixed point is identified by extrapolation: $`s_{\ast }=s_{\ast }(L\to \mathrm{\infty })`$.
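The two-point version of this fit is just a linear solve; the Python sketch below (with synthetic data in place of the measured $`f_0(L)`$) shows the bookkeeping used to produce the $`c(L)`$ quoted in Table 1.

```python
import numpy as np

# Effective central charge from Eq. (3.1) fitted to two consecutive widths:
# f0(L) = f0(inf) - pi*c/(6 L^2)  =>  c = 6 (f0(L2)-f0(L1)) / (pi (1/L1^2 - 1/L2^2)).
def c_two_point(L1, f1, L2, f2):
    return 6.0 * (f2 - f1) / (np.pi * (1.0/L1**2 - 1.0/L2**2))

c_true, f_inf = 1.53, -2.0                   # synthetic input for illustration
for L1, L2 in [(4, 6), (6, 8), (8, 10), (10, 12)]:
    f1 = f_inf - np.pi * c_true / (6.0 * L1**2)
    f2 = f_inf - np.pi * c_true / (6.0 * L2**2)
    print(f"L = {L1},{L2}:  c(L) = {c_two_point(L1, f1, L2, f2):.6f}")
```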
In principle this strategy can also be employed for a disordered system, provided that error bars are carefully kept under control. Now $`f_0(L)`$ is related to the largest Lyapunov exponent of a product of $`M\to \mathrm{\infty }`$ random transfer matrices , and its statistical error vanishes as $`M^{-1/2}`$ by the central limit theorem. Thus, for large enough $`M`$ any desired precision on $`f_0(L)`$ can be achieved.
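The standard bookkeeping for such a Lyapunov exponent, accumulating log-norms while periodically renormalizing the vector to avoid overflow, is sketched below in Python; a random $`2\times 2`$ matrix stands in for the much larger loop-model transfer matrix.

```python
import numpy as np

# Largest Lyapunov exponent of a product of M random matrices.
rng = np.random.default_rng(0)
v = np.ones(2) / np.sqrt(2.0)
log_norm, M = 0.0, 100000

for _ in range(M):
    T = np.array([[1.0 + rng.random(), rng.random()],
                  [rng.random(),       1.0 + rng.random()]])  # stand-in transfer matrix
    v = T @ v
    nrm = np.linalg.norm(v)
    log_norm += np.log(nrm)      # accumulate before renormalizing
    v /= nrm                     # renormalize to keep v finite

print(f"largest Lyapunov exponent ~ {log_norm / M:.6f}")
```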
An important observation is that for larger and larger $`L`$, the $`c(L)`$ found from Eq. (3.1) become increasingly sensitive to errors in $`f_0(L)`$. Therefore $`M`$ must be chosen in accordance with the largest strip width $`L_{\mathrm{max}}`$ used in the simulations. For the system at hand we found that 4 significant digits in $`c(L)`$ were needed for a reasonable precise identification of $`s_{}(L)`$, and with $`L_{\mathrm{max}}=12`$ this in turn implies that the $`f_0(L)`$ must be determined with 6 significant digits. We were thus led to choose $`M=10^8`$ for $`q=8`$, and $`M=10^9`$ for larger values of $`q`$.<sup>4</sup><sup>4</sup>4Incidentally, improving our results to $`L_{\mathrm{max}}=14`$ would require augmenting $`M`$ by at least a factor of 100 (apart from the increased size of the transfer matrices), and since several months of computations were spent on the present project this hardly seems possible in a foreseeable future.
Data collection was done by dividing the strip into $`M/l`$ patches of length $`l=10^5`$ lattice spacings, and for each patch the couplings were randomly generated from a canonical ensemble, i.e., the distribution (2.4) was restricted to produce an equal number of strong and weak bonds.
In the right part of Table 1 we show the resulting two-point fits (3.1) in the $`q=8`$ state model, as a function of $`s`$. The left part of the table provides analogous three-point fits, obtained by including a non-universal $`1/L^4`$ correction in Eq. (3.1). In all cases the error bars are believed to affect only the last digit reported. The two-point fits give clear evidence of a maximum in the central charge, and we estimate its location as $`s_{\ast }=6.5\pm 2.0`$. The corresponding central charge is estimated from the three-point fits, as these are known to converge faster in the $`L\to \mathrm{\infty }`$ limit , and we arrive at $`c=1.530\pm 0.001`$. To appreciate the precision of this result, we mention that the numerical values of $`c(q=8)`$ first reported were $`1.50\pm 0.05`$ and $`1.517\pm 0.025`$ .
Table 2 summarises our results for other values of $`q`$. Two remarkable features are apparent. First, $`s_{\ast }\propto q^w`$ is well fitted by a power law with $`w=0.31\pm 0.02`$. This gives valuable information on how the $`q\to \mathrm{\infty }`$ limit of the model is approached, and implies that the ratio of the coupling constants $`R\equiv K_1/K_2=\mathrm{log}(1+s\sqrt{q})/\mathrm{log}(1+\sqrt{q}/s)`$ is a non-monotonic function of $`q`$ that tends to the finite limiting value $`\frac{1+2w}{1-2w}=4.3\pm 0.6`$ as $`q\to \mathrm{\infty }`$. Second, the central charge seems to fulfil the relation (1.1) as stated in the Introduction.
## 4 Magnetic scaling dimension
In this section we explain the Monte Carlo method used for obtaining values of the magnetic scaling exponent. Simulations were performed on square lattices of size $`L\times L`$ with periodic boundary conditions, with $`L`$ ranging from 4 to $`L_{\mathrm{max}}=128`$ for $`q=8,64`$ and $`L_{\mathrm{max}}=64`$ for $`q=512`$.
We employed the Wolff cluster algorithm . The first part of the simulations was to determine the autocorrelation times $`\tau `$, which were found to increase with the lattice size and also with $`q`$. For the largest simulated lattices, we determined $`\tau `$ as follows: $`88\pm 4`$ cluster updates for $`q=8`$ and $`L=128`$, $`3000\pm 215`$ for $`q=64`$ and $`L=128`$, and $`31000\pm 3000`$ for $`q=512`$ and $`L=64`$. This rapid increase of $`\tau `$ with $`q`$ explains why we simulate only up to $`L=64`$ for the largest $`q`$.
Next, we measure the magnetisation, defined for each disorder sample $`x`$ by
$$m_x=\frac{q\rho -1}{q-1},$$
(4.1)
where $`\rho =\langle \mathrm{max}(N_1,N_2,\mathrm{\dots },N_q)\rangle /L^2`$ and $`N_\sigma `$ is the number of Potts spins taking the value $`\sigma `$. Here $`\langle \mathrm{\cdots }\rangle `$ denotes the thermal average. Then the magnetisation $`m(L)`$ is obtained by averaging over $`10^5`$ disorder configurations for $`q=8`$, and $`10^4`$ configurations for $`q=64`$ and $`512`$. For each disorder sample, $`100\times \tau `$ updates were dedicated to the thermalisation, and a further $`100\times \tau `$ to the magnetisation measurement. Error bars were computed from the disorder fluctuations (it can easily be checked that the contribution from thermal fluctuations is negligible), and the strength of the disorder was chosen as indicated in Table 2.
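For reference, a single Wolff update for the random-bond Potts model is sketched below in Python (a toy example, not the production code): a cluster of same-state spins is grown by activating each connecting bond with probability $`1-\mathrm{e}^{-K_{ij}}`$ and is then flipped to a different, randomly chosen Potts state; the instantaneous magnetisation follows Eq. (4.1).

```python
import numpy as np

# One Wolff cluster update for the random-bond Potts model on an L x L
# periodic lattice; bond ij is activated with p_ij = 1 - exp(-K_ij).
rng = np.random.default_rng(1)
L, q = 16, 8
K1, K2 = 2.0, 0.5                              # illustrative bimodal couplings
spins = rng.integers(q, size=(L, L))
Kx = rng.choice([K1, K2], size=(L, L))         # bond to the right neighbor
Ky = rng.choice([K1, K2], size=(L, L))         # bond to the lower neighbor

def wolff_update(spins):
    i0, j0 = rng.integers(L, size=2)
    old = spins[i0, j0]
    new = (old + rng.integers(1, q)) % q       # any state different from old
    stack, in_cluster = [(i0, j0)], {(i0, j0)}
    while stack:
        i, j = stack.pop()
        spins[i, j] = new
        for (ni, nj), K in (((i, (j + 1) % L), Kx[i, j]),
                            ((i, (j - 1) % L), Kx[i, (j - 1) % L]),
                            (((i + 1) % L, j), Ky[i, j]),
                            (((i - 1) % L, j), Ky[(i - 1) % L, j])):
            if (ni, nj) not in in_cluster and spins[ni, nj] == old \
               and rng.random() < 1.0 - np.exp(-K):
                in_cluster.add((ni, nj))
                stack.append((ni, nj))

wolff_update(spins)
rho = np.bincount(spins.ravel(), minlength=q).max() / L**2
print(f"instantaneous m_x = {(q * rho - 1) / (q - 1):.4f}")
```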
From a fit to $`m(L)\propto L^{-x_1}`$, we obtain for the magnetic scaling dimension:
$$x_1=\{\begin{array}{ccc}0.1535(10)\hfill & \text{for}\hfill & q=8\hfill \\ 0.172(2)\hfill & \text{for}\hfill & q=64\hfill \\ 0.180(3)\hfill & \text{for}\hfill & q=512\hfill \end{array}$$
(4.2)
We see that the magnetic exponent seems to saturate as we increase $`q`$. In view of the result (1.1) for the central charge we expect the asymptotic behaviour should involve $`\mathrm{log}(q)`$ rather than $`q`$ itself, and indeed the data are well fitted by
$$x_1(q)=a+b/\mathrm{log}(q)$$
(4.3)
with $`a=0.192(2)`$ and $`b=-0.080(4)`$. Thus, based on the form (4.3) we are led to propose the limiting value (1.2) of $`x_1`$ given in the Introduction.
## 5 Discussion
It is useful to juxtapose our findings on the large-$`q`$ behaviour of the critical line with the phase diagram proposed in Ref. . In that work the disorder strength was parametrised through $`s=q^w`$ with $`w>0`$, and the limit $`w\to \mathrm{\infty }`$ was identified with classical percolation on top of the strong bonds. Actually it is easily seen from Eq. (2.2) that directly at $`q=\mathrm{\infty }`$ this percolation scenario holds true whenever $`w>1/2`$, and assuming that the line of critical fixed points is described by a monotonic function $`w_{\ast }(q)`$ it can thus be confined to the region $`w\le 1/2`$. With this slight reinterpretation, Ref. argues that at $`q=\mathrm{\infty }`$ the critical point is located in the limit $`w\to 1/2`$. Indeed, since for $`q=\mathrm{\infty }`$ any initial $`w\le 1/2`$ will be driven to larger values due to the mapping to the random field Ising model, this is nothing but the usual assumption of “no intervening fixed points”.
However, this seems at odds with the results of Table 2, where we found that for $`q\gg 4`$ the critical line, when measured in terms of $`w`$, saturates at $`w=0.31\pm 0.02`$. Unless our numerical method is flawed by some gross systematic error, it is thus a priori difficult to see how this can be reconciled with the above result of $`w_{\ast }(q=\mathrm{\infty })=1/2`$. A possible explanation is that the limits $`q\to \mathrm{\infty }`$ and $`w\to 1/2`$ are highly non-commuting. This is witnessed by the jump in the central charge, which in the percolation limit ($`w=\mathrm{\infty }`$ and $`q<\mathrm{\infty }`$) reads
$$c_{\mathrm{perc}}=\frac{5\sqrt{3}}{4\pi }\mathrm{ln}(q)\simeq 0.47769\mathrm{log}_2(q),$$
(5.1)
to be contrasted with our numerical result (1.1).
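For reference, the numerical prefactor quoted in Eq. (5.1) follows from converting the natural logarithm to base 2:

$$\frac{5\sqrt{3}}{4\pi }\mathrm{ln}2=\frac{8.6603\times 0.69315}{12.566}\simeq 0.47769.$$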
Acknowledgments
We are grateful to J. Cardy for some very helpful comments. |
# Depletion of density of states near Fermi energy induced by disorder and electron correlation in alloys
## I Introduction
Two decades ago Altshuler and Aronov interpreted tunneling data on some disordered systems as showing an electron-electron interaction effect which changes the density of states (DOS) around the Fermi level. Their model predicted a reduction of the DOS in three-dimensional solids given by the relation
$`\delta \nu ={\displaystyle \frac{\lambda }{4\sqrt{2}\pi ^2}}{\displaystyle \frac{\sqrt{|ϵ|}}{(\mathrm{\hbar }D)^{3/2}}},`$
where $`\delta \nu `$ is the change in the density of states, $`ϵ`$ the energy measured from the Fermi level, $`D`$ the diffusion coefficient, which is inversely proportional to the disorder-driven resistivity $`\rho `$, and $`\lambda `$ the electron-electron interaction strength. Recently this picture was applied to the interpretation of the photoemission spectra of La(NiMn)O<sub>3</sub> and La(NiFe)O<sub>3</sub>, which undergo a metal-insulator transition (MIT). A reduction of the DOS at the Fermi energy ($`E_F`$) was observed as Mn or Fe ions were substituted, and it was interpreted as due to the combined disorder and electron correlation effect, following Altshuler and Aronov’s theory. However, such a reduction of the DOS had not previously been observed for typical disordered metallic alloy systems, and this is the subject of the present investigation.
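For orientation, the predicted dip is easy to evaluate numerically; a sketch (the SI unit choices and the function name are ours):

```python
import numpy as np

HBAR = 1.054571817e-34   # J s

def aa_dos_change(eps, D, lam):
    """Altshuler-Aronov 3D correction: delta_nu grows as sqrt(|eps|) away from
    the Fermi level, i.e. the DOS is depleted at E_F itself.

    eps : energy measured from the Fermi level (J)
    D   : diffusion coefficient (m^2/s), inversely proportional to rho
    lam : dimensionless electron-electron interaction strength
    """
    return lam / (4.0 * np.sqrt(2.0) * np.pi**2) * np.sqrt(np.abs(eps)) / (HBAR * D)**1.5
```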
Among metallic alloys, the most promising candidates for showing both disorder and electron correlation effects are the transition-metal alloys having 9 $`d`$ electrons. It is well known that the residual resistivity of many disordered alloy systems such as Cu-Au and Pd-Pt shows simple Nordheim behavior, but most binary alloys in which one component is a noble metal and the other a transition metal with approximately 9 $`d`$ valence electrons, i.e. Ni, Pd, or Pt (henceforth designated $`d`$-9 transition metals), deviate strongly from the Nordheim behavior. For example, Pt-rich Cu-Pt and Pd-rich Ag-Pd alloys have an ill-defined Fermi surface, especially around the $`X`$ point, due to the broadening of the Bloch spectral function at and near the Fermi vector k<sub>F</sub>; as a result, an asymmetric residual resistivity curve with values as high as 85 $`\mu \mathrm{\Omega }`$ cm has been found both theoretically and experimentally. It is thus expected that, owing to the strong disorder at $`E_F`$ in alloys of noble metals with $`d`$-9 transition metals and to the electron-electron interaction between Pt, Pd, or Ni $`d`$ electrons at the Fermi level, one may observe a reduction of the DOS at $`E_F`$, although the dip size might be smaller than in Ref. due to the screened Coulomb interaction between valence electrons.
In order to see such an effect, we compared the angle-integrated photoemission spectra of Cu-Pt, Cu-Pd, Cu-Ni, and Pt-Pd alloys near $`E_F`$ with those of pure $`d`$-9 transition metals, which have an almost smooth DOS near $`E_F`$. We observed some depletion of the DOS at $`E_F`$ in Cu<sub>25</sub>Pt<sub>75</sub>, and possibly in Cu<sub>50</sub>Pt<sub>50</sub> and Cu<sub>25</sub>Ni<sub>75</sub>, whose residual resistivity values are almost 3 times higher than those of the high-Cu-concentration alloys due to a strong disorder effect on some portion of the Fermi surface, which reduces the electron mean free path. This, along with the absence of depletion for Pd<sub>50</sub>Pt<sub>50</sub>, whose electron mean free path is expected to be much longer due to the sharp Bloch spectral function at $`E_F`$ (Ref.), is a clear signature that Altshuler and Aronov’s theory is applicable to ordinary disordered transition-metal alloys as well as to transition-metal oxides showing an MIT.
## II Experimental
The polycrystalline Cu-Pt, Cu-Pd, Cu-Ni, and Pd-Pt alloys were made by arc melting of 99.99% Cu, 99.9% Pt, 99.95% Pd, and 99.997% Ni, and then annealed at a temperature of 950 °C to ensure the disordered phase, which was confirmed by x-ray diffraction. The data were taken with a VG Microtech CLAM 4 multi-channeltron electron energy analyzer with a resolution of 40 meV full width at half maximum (FWHM), and the base pressure was $`1.5\times 10^{-10}`$ torr. The photon source was the unmonochromatized He I line ($`h\nu `$ = 21.2 eV). The samples were cooled down to 95 K with liquid nitrogen, and the surfaces were scraped in situ at that temperature. In order to prevent contamination during the experiment, further scraping was done frequently.
## III Results and Discussion
Figure 1 shows the photoemission spectrum of disordered Cu<sub>25</sub>Pt<sub>75</sub> along with that of Pt near $`E_F`$. We can see that the lineshapes of Cu<sub>25</sub>Pt<sub>75</sub> and of Pt around $`E_F`$ are not quite the same, which indicates a detailed difference in the DOS. Since the difference in spectral lineshape between Cu<sub>25</sub>Pt<sub>75</sub> and Pt is quite small, we tested several possibilities, besides a difference in the DOS, which might induce such a change. Apparently, this kind of experimental lineshape difference cannot be attributed to Fermi-Dirac distribution functions at slightly different temperatures. If it really were, the spectrum of the specimen at the lower temperature should have less spectral intensity at $`E>E_F`$ and more at $`E<E_F`$, and this difference should be symmetric about $`E_F`$, which is not the case in the figure. We even tried to fit the alloy spectrum with broadened Pt spectra to take account of a possible temperature difference, but the lineshapes did not match at all. Repeating the measurements with a new batch of samples also confirmed this behavior. One might also think that the Fermi-level alignment could introduce considerable uncertainty into the comparison of the two spectra, but when we shifted one of the spectra being compared by more than 2 meV, which was beyond the experimental error, the alignment was clearly unacceptable.
In order to see whether there really is some difference in the DOS near the Fermi level, we divided the spectrum of the alloy by that of pure Pt, assuming that the Pt DOS near $`E_F`$ does not have any sharp structure. The absence of sharp structures in the pure Pt DOS was confirmed by fitting the Pt spectrum with a linearly varying DOS, which is a reasonably good approximation within the energy region of interest. This approximate DOS was multiplied by the Fermi-Dirac distribution and convolved with a Gaussian curve corresponding to the instrumental resolution. The fit was found to be essentially perfect, as expected, and the same was true for the pure Cu spectrum.
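A sketch of that fitting model (the 95 K temperature and 40 meV FWHM are taken from the text; the grid conventions and names are ours):

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

KB = 8.617333e-5   # Boltzmann constant in eV/K

def model_spectrum(E, a, b, T=95.0, fwhm=0.040):
    """Linear DOS times the Fermi-Dirac function, Gaussian-broadened.

    E : uniform energy grid in eV measured from the Fermi level (E > 0 above E_F).
    """
    dos = a + b * E                               # linearly varying DOS near E_F
    occupation = 1.0 / (np.exp(E / (KB * T)) + 1.0)
    sigma_bins = (fwhm / 2.3548) / (E[1] - E[0])  # instrumental FWHM -> sigma, in bins
    return gaussian_filter1d(dos * occupation, sigma_bins)
```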
The divided spectrum clearly reveals the presence of a dip at $`E_F`$ which extends to almost 50 meV above and below the Fermi level. The dip is very small, with a relative depth of only 6% of the divided spectral weight near the edges of the dip. The slope in the divided spectra seems to be caused by the detailed band structures of the alloys and of pure Pt.
This dip structure could not be removed by any of the considered systematic effects, such as misalignment of the Fermi level, a temperature difference, inclusion of the He I satellite lines, or the background treatment. Each of these factors has its own effect when we simulate a curve, but none of them could explain the dip. For instance, a smooth decrease or increase of the divided spectral weight above the Fermi level is observed when there is a slight difference in temperature. If the temperature at which the pure Pt spectrum was taken were higher by only 1 K, a large decrease of the divided spectrum above $`E_F`$ would be observed. However, the presence of a dip near $`E_F`$ is indisputable, since the shape of the divided spectrum below $`E_F`$ did not change much with a temperature difference of about 1 K. We also checked the possibility of a misalignment of the Fermi level as an explanation for the dip, but it was found to give only a smooth change in the divided spectrum, not affecting the presence of the dip.
One might think that the depletion of the DOS at the Fermi level could be understood as a real structure in the calculated DOS within a one-electron picture. However, the coherent-potential-approximation (CPA) calculation, which gives a good description of the one-electron DOS of alloys, does not show any hint of a sharp dip at the Fermi level. It is also well known that when $`d`$-9 transition metals form alloys with other metals, the DOS near $`E_F`$ is lowered in some cases, but this happens only for low concentrations of the $`d`$-9 transition metal, not for high concentrations. Furthermore, any sharp structure near $`E_F`$, if it existed, should manifest itself more prominently in the ordered phase than in the disordered phase, because of the strong broadening of the Bloch spectral function due to disorder in the latter. To the best of our knowledge, there have been no reports of a sharp structure in high-resolution photoemission spectra of pure metals. Therefore, we believe this kind of reduction of the DOS at the Fermi level can only be interpreted as a result of the electron-electron interaction in a disordered system.
In Fig. 2, we show divided spectra of Cu-Pt alloys with Pt concentrations of 25, 50, and 75 at.% to look for a possible sharp change of the DOS near $`E_F`$ as a function of composition. From the figure, it is apparent that no such structure at $`E_F`$ exists in the Cu-rich alloy, and it can safely be argued that the dips at $`E_F`$ for Cu<sub>25</sub>Pt<sub>75</sub> and Cu<sub>50</sub>Pt<sub>50</sub> cannot be understood within a one-electron picture. For Cu-Pt alloys, the residual resistivity has its maximum value at 60 at.% Pt, and the reduction of the DOS near $`E_F`$ should be greatest around that concentration if we assume a constant $`\lambda `$ over the whole concentration range. This is indeed in agreement with the experiment, where the depletion of the DOS is observed only for Cu<sub>25</sub>Pt<sub>75</sub> and Cu<sub>50</sub>Pt<sub>50</sub>, the latter having a dip with only a 3% reduction.
To observe a similar effect, we have also studied Cu-Ni alloys. It is generally believed that the electron-electron interaction between $`d`$ electrons is stronger for Ni 3$`d`$ than for Pt 5$`d`$ because of the more localized wavefunctions. Indeed, in the spectra of Cu<sub>25</sub>Ni<sub>75</sub> and of Ni and the divided spectrum shown in the upper panel of Fig. 3, the reduction of the DOS is clearly visible. On the other hand, the divided spectrum of Cu<sub>50</sub>Ni<sub>50</sub> in the lower panel does not show such behavior. Although the values of $`\rho _0`$ are almost the same for these two compositions, the electron correlation effect of Ni 3$`d`$ must be smaller for Cu<sub>50</sub>Ni<sub>50</sub> than for Cu<sub>25</sub>Ni<sub>75</sub>, because it is known that the partial DOS of Ni 3$`d`$ at $`E_F`$ diminishes as Ni is diluted. Therefore the absence of a dip for the Cu<sub>50</sub>Ni<sub>50</sub> alloy can be explained within the context of Altshuler and Aronov’s model.
We also investigated the spectra of Cu-Pd alloys with 25, 50, 75, and 90 at.% Pd. In this case, we could not see any reduction of the DOS, as shown for the representative Cu<sub>25</sub>Pd<sub>75</sub> case in the upper panel of Fig. 4. This is probably due to the smaller residual resistivity $`\rho _0`$ than in Cu-Pt alloys and the weaker electron-electron interaction than in Cu-Ni alloys; the dip at $`E\simeq 50`$ meV seems to be a result of the deviation of the Pd DOS itself near $`E_F`$ from the almost linearly decreasing behavior within the energy region studied.
The absence of a dip for Cu-Pd alloys actually argues against another possible explanation of the depletion of the DOS near $`E_F`$, namely a disorder-induced non-uniform lifetime broadening of the Bloch functions. If the broadening is present only around the Fermi level, one may expect a depletion of the DOS at the Fermi level, because some of the spectral weight will be piled up on the lower- and higher-energy sides. Were this true, this kind of depletion should be observed in Cu-Pd alloys as well as in Cu-Pt and Cu-Ni alloys, because the broadening of the Bloch spectral function near k<sub>F</sub> must be more or less similar to that in Cu-Ni and Cu-Pt alloys when the $`d`$-9 transition-metal concentration is between 50 and 80%. In fact, the similarity of the Bloch spectral functions can easily be seen in the CPA calculation results for Ag-Pd, Cu-Pt, and Cu-Ni, which is expected on the grounds that they have almost the same composition dependence of $`\rho _0`$. However, we do not see any prominent structure at all for Cu-Pd alloys, in contrast to this scenario.
An ill-defined Fermi surface does not by itself imply that the electrons at $`E_F`$ have a relatively short mean free path, because electronic states with short lifetimes slightly below or above the Fermi level can also cause it. Pd-Pt alloys show such behavior. In this case, although the Fermi surface cannot be defined clearly because of the short lifetime of the electrons just below and above the Fermi level, the spectral function becomes very sharp as the wavevector k crosses k<sub>F</sub>. This results in an ordinary Nordheim behavior of $`\rho _0`$. If such non-uniform lifetime broadening were to cause some change in the DOS, we would expect some increase of the DOS at $`E_F`$ and a small decrease just above and below $`E_F`$ in the experimental spectra. However, as shown in the lower panel of Fig. 4, we could not see such structure for Pd<sub>50</sub>Pt<sub>50</sub>, with its small $`\rho _0`$ value. This shows that both disorder and electron correlation effects are necessary to observe a dip at $`E_F`$, in agreement with Altshuler and Aronov’s model. For Cu-Pt alloys, the spectral function has a finite lifetime even at k<sub>F</sub>, which accounts for the relatively high $`\rho _0`$ values, and this is the reason we could observe the disorder- and electron-interaction-induced depletion of the DOS.
The existence of the electron-electron interaction in the Cu alloys with different $`d`$-9 transition metals observed here has an important implication for the understanding of their electronic structures. Transition metals with 9 $`d`$ electrons in their outermost shell have strong correlation effects, and it is now well known that the so-called 6 eV satellite in the photoemission spectrum of Ni (Ref.) can only be understood with the inclusion of many-body interactions. For Pd, it was suggested that the satellites of the core levels and even of the valence band must arise for the same reason. From our results, it is quite likely that even pure Pt metal has some electron correlation effect, although this interaction might be largely screened by the electrons at k<sub>F</sub>, whose band is more dispersive than that of Ni 3$`d`$. In fact, the presence of satellite structure in the Pt 4$`f`$ core-level spectra suggests a rather strong electron-electron interaction.
## IV Conclusion
To conclude, we have studied the photoemission spectra of transition-metal alloys and pure elements. There are dips at $`E_F`$ for some alloys with high $`\rho _0`$ values when one of the two elements has a non-negligible electron correlation effect at $`E_F`$. The dip size at $`E_F`$ and its width in this work are smaller than those observed for a material with pseudo-gaps, and much smaller than those observed near the MIT of transition-metal compounds, but their existence is clearly established. We have argued that substitutional disorder in alloys causes a depletion of the DOS at $`E_F`$ when the electron-electron interaction is present, and this could be qualitatively interpreted within the context of Altshuler and Aronov’s model.
###### Acknowledgements.
T.-U. N. wishes to acknowledge the financial support of Hanyang University, Korea, made in the program year of 1998. This work was also supported in part by the Korean Science and Engineering Foundation through Center for Strongly Correlated Materials Research (CSCMR) at Seoul National University. |
# Simulated coevolution in a mutating ecology
## Abstract
The bit-string Penna model is used to simulate the competition between an asexual parthenogenetic and a sexual population sharing the same environment. A newborn of either population can mutate and become part of the other with some probability. In a stable environment the sexual population soon dies out. When an infestation by rapidly mutating, genetically coupled parasites is introduced, however, sexual reproduction prevails, as predicted by the so-called Red Queen hypothesis for the evolution of sex.
The question of why sexual reproduction prevails among an overwhelming majority of species has resisted a century of investigation. It is clear that its appearance is ancestral for metazoan, multicellular animals. Recombination probably originated some three thousand million years ago, and eukaryotic sex one thousand million years ago. But the mechanisms through which simple haploid organisms mutated into diploid sexual forms (the origins of meiosis and the haploid cycle) remain one of the great puzzles of evolution theory . We can speculate about these origins, but cannot test our speculations. In contrast, selection must be acting today to maintain sex and recombination. We have to concentrate on maintenance rather than origins, because only thus can we have any hope of testing our ideas.
From a theoretical point of view, the selective advantages of a sexual population over a simple asexual one are well understood, and could be established through a variety of approaches. The bases for these advantages derive first from the covering up, by complementation, of deleterious genes, then from the ability to recombine genetic material, which haploid asexual reproduction lacks. But serious difficulties arise when sexual reproduction is compared with meiotic parthenogenesis, which is a kind of diploid asexual regime that also involves genetic recombination. In this case, the two-fold advantage of not having to produce males could give parthenogenetic populations the upper hand against a competing sexual variety, since recombination is present in both.
The theoretical problem thus posed to evolutionary biologists is indeed very difficult. The observation of this competition in natural habitats is not feasible, in general. One must rely on very indirect and often questionable data. It should be pointed out that this is a feature shared by many of the most important problems in the theory of evolution. This situation is certainly one of the main reasons for the recent boost of activity in physics directed towards biology, since the same methods and approaches could once again prove fruitful . In particular, physicists have pioneered the use of techniques, derived from the availability of powerful low-cost computers, to compensate for the lack of experimentation and to complement it . Computer simulations of natural systems can provide much insight into their fundamental mechanisms, and can be used to put to the test theoretical ideas that could otherwise be viewed as too vague to deserve the status of scientific knowledge. The scientific literature of this decade is strong testimony to the success of this approach in various and apparently disconnected fields; for a recent, although partial, survey of nonphysical applications I direct the reader to Ref. . It is against this background that this work is being reported.
Investigations of evolutionary problems by physicists have in fact boomed in the last few years. Where biological aging is concerned, this boom can be traced back to the introduction of Penna’s bit-string model , which was quickly adopted as a reference in most studies, as a sort of “Ising model” of the field. Its simplicity and early successes in reproducing observed features of real populations, such as the Gompertz law , the Azbel phenomenology and the catastrophic senescence of semelparous animals , unleashed a burst of efforts towards its application to a number of different phenomena. In some simple cases, analytical solutions could even be provided, shedding some light on the simulation results; recent reviews can be found in Ref. , and an extensive list of references in Ref. .
Other models have been examined in the recent literature. Of particular interest is the Redfield model , which assumes a constant population and does not allow mutational meltdown.
The Penna model has also been used to address the problem in question here, namely, the reasons for the maintenance of sexual reproduction . Simulations with this model showed that a genetic catastrophe could eliminate a parthenogenetic population, whereas sexually reproducing species survived. This result had the merit of pointing out a measurable effect of the greater genetic diversity created by sex, but the occurrence in nature of catastrophes of that kind seems rather unlikely. Its introduction in the simulation is likely to be regarded as too artificial to be convincing. Another drawback should also be mentioned. All previous comparisons between sexual and asexual reproduction have come from simulations where each population evolved by itself, as if each one lived in a separate environment. This is biologically improbable, and a simultaneous study of both regimes, sharing the same resources, is needed. This has already been done in the context of the Redfield model. There, it was shown that sexual reproduction has a short-term advantage over the haploid asexual regime if the female mutation rate is high enough .
The present work represents a step in the direction of endowing the artificial world in which the populations evolve in Penna model simulations with more realistic features. Its purpose is again to verify measurable effects of sexual genetic diversity when compared to parthenogenetic reproduction. The simulations here reported are based on the recent field work of biologist Lively, in which he observed the effect of parasitic infestation of a freshwater snail’s (Potamopyrgus antipodarum) natural habitat on its dominant reproductive regime . This observation can be considered an illustration of one of the theories in the debate over the reasons for the prevalence of sex, the so-called “Red Queen” hypothesis. This is a variation of the idea that sex serves to assemble beneficial mutations, and so creates a well-adapted lineage in the face of a rapidly changing environment. Because parasites adapt to the most common host genotype, evolution will favor hosts with a rare combination of resistant genes. Thus, the Red Queen predicts that selection will favor the ability to generate diversity and rare genotypes, exactly the abilities conferred by sex and recombination.
I describe in what follows the model used in the simulations, the Penna bit-string model with recombination, and the representation of a parasitic infestation in its context. The genome of each (diploid) organism is represented by two computer words. In each word, a bit set to 1 at a position (“locus”) corresponds to a deleterious mutation; a “perfect” strand would be composed solely of zeros. The effect of this mutation may be felt at all ages equal to or above the numerical order of that locus in the word. As an example, a bit set to one at the second position of one of the bit-strings means that a harmful effect may become present in the life history of the organism to which it corresponds after it has lived for two periods (“years”). The diploid character of the genome is related to the effectiveness of the mutations. A mutation at a position of one of the strands is felt as harmful either because of homozygosis or because of dominance. For the former, a mutation must be present in both strings at the same position to be effective. The concept of dominance, on the other hand, relates to loci in the genome at which a mutation in just one strand is enough to make it affect the organism’s life. The life span of an individual is controlled by the number of effective mutations active at any instant in time. This number must remain smaller than a specified threshold to keep the individual alive; it dies as soon as this limit is reached.
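These rules translate directly into bit operations; a minimal sketch (the word length and the name of the dominance mask are illustrative choices of ours):

```python
GENOME_BITS = 32   # one "computer word" per strand

def effective_mutations(g1, g2, dominant_mask, age):
    """Harmful mutations already active at a given age for a diploid genome.

    g1, g2        : integers; bit i set marks a deleterious allele acting from age i+1
    dominant_mask : loci where a mutation in a single strand is already expressed
    """
    expressed = (g1 & g2) | ((g1 | g2) & dominant_mask)   # homozygous or dominant loci
    return bin(expressed & ((1 << age) - 1)).count("1")   # count only loci read so far

def alive(g1, g2, dominant_mask, age, threshold):
    return effective_mutations(g1, g2, dominant_mask, age) < threshold
```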
Reproduction is modeled by the introduction of new genomes into the population. Each female becomes reproductive after having reached a minimum age, after which she generates a fixed number of offspring at the completion of each period of life. The meiotic cycle is represented by the generation of a single-stranded cell out of the diploid genome. To do so, each string of the parent genome is cut at a randomly selected position, the same for both strings, and the left part of one is combined with the right part of the other, thus generating two new combinations of the original genes. The selection of one of these completes the formation of the haploid gamete coming from the mother.
The difference between sexual and parthenogenetic reproduction appears at this stage. For the first, a male is selected in the population and undergoes the same meiotic cycle, generating a second haploid gamete out of his genome. The two gametes, one from each parent, are now combined to form the genome of the offspring. Each of its strands was formed out of a different set of genes.
For parthenogenesis, all genetic information of the offspring comes from a single parent. Its gamete is cloned, composing a homozygous genome for the offspring. For both regimes, the next stage of the reproduction process is the introduction of $`m`$ independent mutations in the newly generated genetic strands. In this kind of model it is normal to consider only the possibility of harmful mutations, because of their overwhelming majority in nature. For sexual populations, the gender of the newborn is then randomly selected, with equal probability for each sex.
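Putting the two regimes side by side (a sketch reusing the conventions above; the random-number plumbing is ours):

```python
import random

GENOME_BITS = 32   # as above

def gamete(g1, g2):
    """Cut both strands at the same random locus and rejoin the left part of
    one with the right part of the other; return one of the two products."""
    cut = random.randrange(1, GENOME_BITS)
    low = (1 << cut) - 1
    high = ((1 << GENOME_BITS) - 1) ^ low
    return random.choice(((g1 & low) | (g2 & high),
                          (g2 & low) | (g1 & high)))

def mutate(g, m):
    """Add m deleterious mutations at random loci (bits can only be set)."""
    for _ in range(m):
        g |= 1 << random.randrange(GENOME_BITS)
    return g

def sexual_offspring(mother, father, m):
    return (mutate(gamete(*mother), m), mutate(gamete(*father), m))

def parthenogenetic_offspring(mother, m):
    s = gamete(*mother)                      # cloned before the new mutations
    return (mutate(s, m), mutate(s, m))      # each strand then mutates independently
```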
A last ingredient of the model is a logistic factor, called the Verhulst factor, which accounts for the maximum carrying capacity of the environment for this particular (group of) species. It introduces a mean-field probability of death for an individual, coming from nongenetic causes, and for computer simulations has the benefit of limiting the size of populations to be dealt with.
The passage of time is represented by the reading of a new locus in the genome of each individual in the population(s), and the increase of its age by 1. After having accounted for the selection pressure of a limiting number of effective (harmful) mutations and the random action of the Verhulst dagger, females that have reached the minimum age for reproduction generate a number of offspring. The simulation runs for a prespecified number of time steps, at the end of which averages are taken over the population(s). A more detailed description of the standard Penna model can be found in Ref. , together with a sample computer code that implements its logic.
The extension of the original Penna model to simulate the coevolution of populations is rather straightforward. In this Rapid Communication, I focus on the coevolution of different varieties of the same species, sharing the same ecological range. This implies that the maximum carrying capacity relates to the total population, summing up all varieties. I use this simple extension to study the effect of introducing a small probability $`p`$ for a mutation to transform offspring of one variety into the other. This implies an extra stage for the reproduction logic. After a newborn genome is generated from the sexual population, it mutates into a parthenogenetic female and becomes part of the asexual population with probability $`p`$. Accordingly, the offspring of an asexual female can mutate to the sexual form with the same probability $`p`$; if it does, a gender is randomly chosen for it.
Further extension is needed to simulate the conditions of a parasitic infestation. I chose a very simple strategy to reproduce Lively’s observation. Parasites are represented by a memory genetic bank of a fixed size $`M`$. At each time step, every female of the species establishes contact with a fixed number $`P`$ of elements of the parasite population. If the female’s genome is completely matched by the genome already memorized by the parasite, she loses her ability to procreate. This simulates the action of the parasite Microphallus on the freshwater snail: this trematode renders the snail sterile by eating its gonads. The parasite memory bank has, on the other hand, a dynamic evolution. If an element of the parasite population contacts the same genome a certain number of times $`n`$ in a row, this particular genetic configuration is “learned” by the parasite, making it active against this genome until a new pattern is found repeatedly. Note that the same genome can be present in a number of different females, so the action of the parasite is not restricted to being randomly “chosen” by the same female the required number of times. Rather, the effectiveness of this simulated parasitic infestation is an indirect measure of the genetic variability (actually, of the lack of this variability) within the female population of each variety.
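One way to encode these rules (a sketch; $`M`$, $`n`$ and $`P`$ are the parameters of the text, while the class layout is an assumption of ours):

```python
import random

class ParasiteBank:
    """M parasites; each memorizes a genome it has contacted n times in a row."""
    def __init__(self, M, n):
        self.n = n
        self.memory = [None] * M   # genome each parasite is currently active against
        self.last = [None] * M     # genome seen at the previous contact
        self.streak = [0] * M

    def contact(self, genome):
        """One contact with a random parasite; True means the female is sterilized."""
        k = random.randrange(len(self.memory))
        if genome == self.last[k]:
            self.streak[k] += 1
        else:
            self.last[k], self.streak[k] = genome, 1
        if self.streak[k] >= self.n:
            self.memory[k] = genome          # a new pattern has been "learned"
        return self.memory[k] == genome

# per time step, a female keeps her fertility only if none of her P contacts match:
# fertile = not any(bank.contact(her_genome) for _ in range(P))
```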
It must be remarked that this is not the only possible choice for the dynamic evolution of the parasite population. Nonetheless, it captures the essential features of parasitism; in particular, parasites that match frequently occurring female genomes will spread through the population, showing that the above dynamical rules succeed in imposing an effective handicap on the most common genotypes.
All the species, the two varieties of the snail and the parasites, evolve as fast as they can, but none gets far ahead; hence the theory’s name, after the Red Queen’s remark to Alice in Through the Looking-Glass: “It takes all the running you can do, to keep in the same place.” The extra constraint that the two varieties of snail are competing for the same resources of the environment may eventually cause one of them to collapse. And Lively’s observations match this expectation: environments with few parasites tend to have mostly asexual snail populations, while a larger number of parasites is correlated with a predominant sexual variety. This pattern strongly suggests that the parasites prevent the elimination of sexual populations.
The two-fold short-term disadvantage of sexual reproduction is the object of the first result shown. The initial population is composed of a single sexual variety in which each reproductive female gives birth to a parthenogenetic female with a small probability $`p=0.0001`$. The minimum age for reproduction $`R=10`$ and the birth rate per female $`b=1`$ are fixed, and do not suffer the effects of mutations. Figure 1 shows the time evolution of the total population of each variety. In this run, the environment is kept fixed; there is no parasite infestation. The burden of having to produce males in the population is responsible for the quick establishment of a dominant asexual variety. The inset shows the time evolution of the fraction of asexual females in the total population once a parthenogenetic lineage appears for the first time. A simple argument that underlies this result goes as follows : suppose there are $`A`$ asexual females and $`S`$ sexual ones (and also $`S`$ males) that have already reached reproduction age in a generation. In the next one, there will be $`bA`$ and $`bS/2`$ of each variety; the factor of 2 in the latter arises because males and females are born with equal probability. The fraction of parthenogenetic females in the total population increases from $`\frac{A}{A+2S}`$ to $`\frac{A}{A+S}`$; when $`A`$ is small, this amounts to a doubling in each generation. This reasoning presupposes an unbounded increase in the total population and no overlap between generations, which are not realistic assumptions in general. Instead, one should expect a smaller growth factor. The simulation clearly shows this exponential effect in a semilog plot, albeit with a factor close to 1.05.
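The bookkeeping behind this argument is easily iterated (a sketch of the naive census, not of the full simulation):

```python
def asexual_fraction(A, S, generations, b=2.0):
    """Naive census: bA asexual females versus bS/2 sexual females per generation."""
    for _ in range(generations):
        yield A / (A + 2.0 * S)      # asexual females over the whole population
        A, S = b * A, b * S / 2.0    # the ratio A/S doubles each generation, for any b

print(list(asexual_fraction(1.0, 1000.0, 5)))   # ~0.0005, 0.001, 0.002, 0.004, 0.008
```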
The results described in what follows were obtained from runs where the basic parameters of the model for the infestation had values $`M=1000`$ and $`n=2`$. The use of different values does not, however, change any of the conclusions, which are essentially qualitative.
Figure 2 shows the effect of the parasite infestation. This infestation appears at time step $`10000`$, and is simulated by an exposure of each female to $`P=60`$ genomic imprints of the parasite bank before trying to give birth. After having almost vanished, and being kept alive solely by the infrequent back-mutations, the sexual variety becomes rapidly more abundant as soon as the infestation is unleashed, and eventually dominates. In accordance with the Red Queen hypothesis, the greater diversity of genomes that sexual reproduction engenders here shows one of its measurable consequences.
The density of the parasite infestation, measured by the number $`P`$ of exposures to the parasite bank, drives a transition between dominant reproductive regimes. This transition can be measured through the fraction of the sexual population in the total population, which acts as an order parameter for the transition. Figure 3 shows this fraction as a function of parasite exposure after a stationary population has been reached. The sudden jump in the order parameter signals a first-order transition. This claim is further supported by the observation that some runs near the transition, differing only in the sequence of pseudorandom numbers used, had unusually long relaxation times. These long relaxations can be understood as related to metastable states, typical of first-order transitions.
This paper reports on a simulation of the coevolution of sexual and parthenogenetic varieties of the same species, competing for the same resources, in the framework of the Penna bit-string model. The two-fold disadvantage of having to produce males shows its deadly effect on the sexual population in a static environment. On the other hand, the introduction into the model of an infestation of rapidly mutating parasites, tailored to reproduce recent observations of existing species, can reverse the outcome of this competition. This result is in complete accordance with those observations, and acts as support for the Red Queen hypothesis for sex maintenance in the natural world. A transition between dominant reproductive regimes observed in nature, conjectured to be of first order, could be simulated. The selective pressure of a mutating ecology is enough to enhance the genetic diversity promoted by sexual reproduction, giving it the upper hand against competing asexual populations. The results reported show that very simple simulational models can in fact be explored as testing grounds for theories in biology where observational evidence is lacking or insufficient.
I am greatly indebted to D. Stauffer’s quest for eternal youth, in the form of the challenges that he constantly proposes, and to my collaborator S.M. de Oliveira for drawing my attention to the possibilities of Penna’s model. My work is supported by DOE Grant No. DE-FG03-95ER14499.
## Abstract
We are on the verge of the first precision testing of the inflationary cosmology as a model for the origin of structure in the Universe. I review the key predictions of inflation which can be used as observational tests, in the sense of allowing inflation to be falsified. The most important prediction of this type is that the perturbations will cross inside the Hubble radius entirely in their growing mode, though nongaussianity can also provide critical tests. Spatial flatness and tensor perturbations may offer strong support to inflation, but cannot be used to exclude it. Finally, I discuss the extent to which observations will distinguish between inflation models, should the paradigm survive these key tests, in particular describing a technique for reconstruction of the inflaton potential which does not require the slow-roll approximation.
## 1 What does inflation predict?
Cosmological inflation is widely perceived as an excellent paradigm within which one can explain both the global properties of the Universe and the irregularities which give rise to structures within it. Despite this, it remains fair to say that as yet the inflationary paradigm has been confronted with only a few observational challenges, which it has comfortably surmounted. In years to come, it will face many more, and the purpose of this article is to discuss which of these tests are likely to be the most stringent.
In doing so, it is worthwhile to separate out the two key roles that inflation plays in modern cosmology. The first, which led to its introduction, is in setting the ‘initial conditions’ for the global Universe, by arranging a large homogeneous Universe devoid of unwanted relics such as monopoles. In terms of these global properties, it now seems unlikely that any new observations will undermine the inflationary picture, and, as Linde has argued , if it is to be supplanted that is likely to be because of the advent of a superior theory, rather than of superior observations. Accordingly, I will have little to say on this topic.
The second role, which is potentially much more fruitful as a probe of high-energy physics, is that inflation provides a theory for the origin of perturbations in the Universe (for reviews, see Refs. ). As these perturbations are believed to evolve into all the observed structures in the present Universe, including the existence and clustering of galaxies and the anisotropies in the cosmic background radiation, this proposal is subject to a wide variety of observational tests. Thus far, these tests have been rather qualitative in nature, but in the near future inflation as a theory of the origin of structure in the Universe will face precision testing.
The challenge facing cosmologists is therefore to address two questions:
* Is inflation right?
* If so, which version of inflation is right?
Unfortunately, in science one never gets to prove that a theory is correct, merely that it is the best available explanation. The way to convince the community that a theory is indeed the best explanation is if that theory can repeatedly pass new observational tests. In that regard, it is important to be as clear as possible concerning what these tests might be.
## 2 The predictions of inflation
The essence of testing inflation can be condensed into a single sentence, namely
> The simplest models of inflation predict power-law spectra of gaussian adiabatic scalar and tensor perturbations in their growing mode in a spatially-flat Universe.
This sentence contains 6 key predictions of the inflationary paradigm, which I’ve underlined, but also one crucial word, ‘simplest’. The trouble is that inflation is a paradigm rather than a model, and has many different realizations which can lead to a range of different predictions. From a straw poll of cosmologists, everyone agreed that there were at least several tens of different models, and I’d say there certainly aren’t as many as a thousand, so a reasonable first guess is that at present there are around one hundred different models on the market, all consistent (at least more or less) with present observational data.
A valuable scientific theory is one which has sufficient predictive power that it can be subjected to observational tests which are capable of falsifying it. When the model survives such a test, it strengthens our view that the model is correct; in Bayesian terms, its likelihood is increased relative to models which are less capable of matching the data. It is useful to think of these models at three different levels:
* Specific models of inflation. These are readily testable. For example, there will be a specific prediction for the spectral index $`n`$ of the density perturbations, and this will be measured to high accuracy.
* Classes of models. This means models sharing some common property, for example that the perturbations are gaussian. An entire class of models can be excluded by evidence against that shared property.
* The inflationary paradigm. Testing the paradigm as a whole requires a property which is robust amongst all models. As we’ll see, the most striking such property is that the perturbations should be in their growing mode, which leads to the distinctive signature of oscillations in the microwave anisotropy power spectra.
Finally, note that since one can never completely rule out a small inflationary component added on to some rival structure formation model (e.g. a combined cosmic strings and inflation model ), in practice we are initially testing the paradigm of inflation as the sole origin of structure in the Universe.
It can also be helpful to make the admittedly rather narrow distinction between tests and supporting evidence . A test arises when there is a prediction which, if contradicted by observations, rules out the model, or at least greatly reduces its likelihood relative to a rival model. In this sense, the geometry of the Universe is not a test of inflation, because there exist inflation models predicting whatever geometry might be measured (including open and closed ones), and there is no rival regarded as giving a better explanation for any particular possible observation. By contrast, the oscillations in the microwave anisotropy power spectra (both temperature and polarization) do give rise to a test, as we will shortly see.
Supporting evidence arises with observational confirmation of a prediction which is regarded as characteristic, but which is not generic. A good example would be the observation of tensor perturbations with wavelengths exceeding the Hubble length, for which inflation would be by far the best available explanation; they do not give rise to a test because if they are not observed, then there are plentiful inflation models where such perturbations are predicted to be below any anticipated observational sensitivity.
## 3 Testing the predictions
### 3.1 Spatial flatness
Of the listed properties, spatial flatness is the only one which refers to the global properties of the Universe. (Inflation is also responsible for solving the horizon problem, ensuring a Universe close to homogeneity, but this is no longer a useful test as it is already observationally verified to high accuracy through the near isotropy of the cosmic microwave background.) It is particularly pertinent because of the original strong statements that spatial flatness was an inevitable prediction of inflation, later retracted with the discovery of a class of models — the open inflation models — which cunningly utilize quantum tunnelling to generate homogeneous open Universes. In the recent ‘tunnelling from nothing’ instanton models of Hawking and Turok , any observed curvature has the interesting interpretation of being a relic from the initial formation of the Universe which managed to survive the inflationary epoch.
If we convince ourselves that, to a high degree of accuracy, the Universe is spatially flat, that will strengthen the likelihood that the simplest models of inflation are correct. However, an accurate measurement of the curvature is not a test of the full inflationary paradigm, because whatever the outcome of such a measurement there do remain inflation models which make that prediction. This point has recently been stressed by Peebles . The likelihood will have shifted to favour some inflation models at the expense of others, but the total likelihood of inflation will be unchanged. (Indeed, the only existing alternative to inflation in explaining spatial flatness is the variable-speed-of-light (VSL) theories , which may be able to solve the problem without inflation, though at the cost of abandoning Lorentz invariance. There are no available alternatives at all to inflation in explaining an open Universe, so one might say that observation of negative curvature modestly improves the likelihood of inflation amongst known theories, by eliminating the VSL theories from consideration.) Only if a rival class of theories can be invented, which predict say a negative-curvature Universe in a way regarded as more compelling than the open inflation models, will measurements of the curvature acquire the power to test the inflationary paradigm.
I should also mention that the standard definition of inflation — a period where the scale factor $`a(t)`$ undergoes accelerated expansion — is a rather general one, and in particular any classical solution to the flatness problem using general relativity must involve inflation. This follows directly from writing the Friedmann equation as
$$|\mathrm{\Omega }-1|=\frac{|k|}{\dot{a}^2}.$$
(1)
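Indeed, differentiating Eq. (1) for an expanding Universe ($`\dot{a}>0`$) makes the equivalence explicit:

$$\frac{d}{dt}|\mathrm{\Omega }-1|=-\frac{2|k|\ddot{a}}{\dot{a}^3},$$

so that $`|\mathrm{\Omega }-1|`$ decreases, driving the Universe towards flatness, if and only if the expansion accelerates.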
An example is the pre big bang cosmology , which is now viewed as a novel type of inflation model rather than a separate idea. This makes it hard to devise alternative solutions to the flatness problem; open inflation models use quantum tunnelling but in fact still require classical inflation after the tunnelling, and presently the only existing alternative is the variable-speed-of-light theories which violate general relativity.
Before continuing on to the properties of perturbations in the Universe, there’s a final point worth bearing in mind concerning inflation as a theory of the global Universe. As I’ve said, there now seems little prospect that any observations will come along which might rule out the model. But it is interesting that while that is true now, it was not true when inflation was first devised. An example is the question of the topology of the Universe. We now know that if there is any non-trivial topology to the Universe, the identification scale is at least of order the Hubble radius, and I expect that that can be consistent with inflation (though I am unaware of any detailed investigation of the issue). However, from observations available in 1981 it was perfectly possible that the identification scale could have been much much smaller. Since inflation will stretch the topological identification scale, that would have set an upper limit on the amount of inflation strong enough to prevent it from solving the horizon and flatness problems. The prediction of no small-scale topological identification has proven a successful one. Another example of a test that could have excluded inflation, but didn’t, is the now-observed absence of a global rotation of our observable Universe .
### 3.2 Growing-mode perturbations
This is the key prediction of inflation as a theory of the origin of structure; inflation generically predicts oscillations in the temperature and polarization angular power spectra. If oscillations are not seen, then inflation cannot be the sole origin of structure.
The reason this prediction is so generic is because inflation creates the perturbations during the early history of the Universe, and they then evolve passively until they enter the horizon in the recent past. The perturbations obey second-order differential equations which possess growing and decaying mode solutions, and by the time the perturbations enter the horizon the growing mode has become completely dominant. That means the solution depends only on one parameter, the amplitude of the growing mode; in particular, the derivative of the perturbation is a known function of the amplitude. The solution inside the horizon is oscillatory before decoupling, and this fixes the temporal phase of the perturbations; all perturbations of a given wavenumber oscillate together and in particular at any given time there are scales on which the perturbation vanishes. Projected onto the microwave sky, this leads to the familiar peak structure seen in predicted anisotropy spectra, though if one wants to be pedantic the troughs are if anything more significant than the peaks.
The importance of the peak structure in distinguishing inflation from rivals such as defect theories was stressed by Albrecht and collaborators and by Hu & White . The prediction is a powerful one; in particular it still holds if the inflationary perturbations are partly or completely isocurvature, and if they are nongaussian.
I stress that while inflation inevitably leads to the oscillations, I am not saying that inflation is the only way to obtain oscillations. For example, there are known active-source models which give an oscillatory structure , though the favourite cosmic string model is believed not to. If observed, oscillations would support inflation but cannot prove it. However I might mention in passing that it is quite easy to prove that the existence of adiabatic perturbations on scales much larger than the Hubble radius would imply one of three possibilities: the perturbations were there as initial conditions, causality/Lorentz invariance is violated, or a period of inflation occurred in the past.
#### Against designer inflation
At this point it is worth saying something against ‘designer’ models of inflation which aim to match observations through the insertion of features in the spectra, by putting features in the inflationary potential. This idea first arose in considering the matter power spectrum , which is a featureless curve and so quite amenable to the insertion of peaks and troughs. However, the idea is much more problematic in the context of the microwave anisotropy spectra, which themselves contain sharp oscillatory features. One might contemplate inserting oscillations into the initial power spectrum in such a way as to ‘cancel out’ the oscillatory structure, but there are three levels of argument against this:
1. It’s a silly idea, because the physics during inflation has no idea where the peaks might appear at decoupling, and for the idea to be useful they have to match to very high accuracy. That argument is good enough for me, though perhaps not for everyone, so …
2. Even if you wanted to do it you probably cannot. Barrow and myself found that the required oscillations were so sharp as to be inconsistent with inflation taking place, at least in single-field inflation models. However, a watertight case remains to be made on this point, and it is not clear how one could extend that claim to multi-field models, so perhaps the most pertinent argument is …
3. Even if you managed to cancel out the oscillations in the temperature power spectrum, the polarization spectra have oscillations which are out of phase with the temperature spectrum, and so those oscillations will be enhanced .
### 3.3 Gaussianity and adiabaticity
While the simplest models of inflation predict gaussian adiabatic perturbations, many models are known which violate either or both of these conditions. Consequently there is no critical test of inflation which can be simply stated. Nevertheless, it is clear that these could lead to tests of the inflationary paradigm. For example, as far as inflation is concerned, there is good nongaussianity and bad nongaussianity. For example, if line discontinuities are seen in the microwave background, it would be futile to try and explain them using inflation rather than cosmic strings. On the other hand, nongaussianity with a chi-squared distribution is very easy to generate in inflation models; one only has to arrange that the leading contribution to the density comes from the square of a scalar field perturbation. Indeed, in isocurvature inflation models, it appears at least as easy to arrange chi-squared statistics as it is to arrange gaussian ones .
Inflation may also be able to explain nongaussian perturbations of a ‘bubbly’ nature, by attributing the bubbles to a phase transition bringing inflation to an end. The simplest models of this type have already been excluded, but more complicated ones may still be viable.
### 3.4 Tensor and vector perturbations
Gravitational wave perturbations, also known as tensor perturbations, are inevitably produced at some level by inflation, but the level depends on the model under consideration and it is perfectly possible, and perhaps even likely , that the level is too small to be detected by currently envisaged experiments. This prevents them acting as a test.
In standard inflation models, the gravitational waves are directly observable only through the microwave background anisotropies they induce. Assuming Einstein gravity, the Hubble parameter always decreases during inflation, which leads to a spectrum that decreases with decreasing scale; the upper limit set by these anisotropies places the amplitude on short scales orders of magnitude below planned detectors (and probably well below the stochastic background from astrophysical sources). The exception is the pre big bang class of models (implemented in extensions of Einstein gravity), where the gravitational wave spectrum rises sharply towards short scales and is potentially visible in laser interferometer experiments.
As well as their direct effect on the microwave background, gravitational waves manifest themselves as a deficit of short-scale power in the density perturbation spectrum of COBE-normalized models. Presently the combination of large-scale structure data with COBE gives the strongest upper limit on the fractional contribution $`r`$ on COBE scales, at $`r\lesssim 0.5`$ . There is no evidence to favour the tensors, but this constraint is fairly weak. Eventually the Planck satellite is expected to be able to detect (at 95% confidence) a contribution above $`r\simeq 0.1`$ , and may perhaps do better if there is early reionization and/or the foreground contamination turns out to be readily modellable. Conceivably, high-precision observations of the polarization of the microwave background might improve this further.
The verdict, therefore, is that if a tensor component is seen, corresponding to gravitational waves on scales bigger than the Hubble radius at decoupling, that is extremely powerful support for the inflationary paradigm. This would be stronger yet if the observed spectrum could be shown to satisfy an equation known as the consistency equation to some reasonable accuracy; this relates the tensor spectral index to the relative amplitude of tensors and scalars, and signifies the common origin of the two spectra from a single inflationary potential $`V(\varphi )`$ . However, the tensor perturbations do not provide a test for inflation in the formal sense, since no damage is inflicted upon the inflationary paradigm if they are not detected.
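For definiteness, to first order in slow roll and in the common normalization where the tensor-to-scalar ratio is $`r=16ϵ`$ (which need not coincide with the fractional contribution $`r`$ quoted above), the consistency equation reads

$$n_{\mathrm{T}}=-\frac{r}{8},$$

so that a joint measurement of the tensor amplitude and tilt tests a single relation rather than introducing a new free parameter.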
While known inflationary models generate both scalar and tensor modes, it appears extremely hard to generate large-scale vector modes. There are two obstacles. The first is that massless vector fields are conformally invariant, which means that perturbations are not excited by expansion; this has to be evaded either by introducing a mass (which suppresses the effect of perturbations) or an explicit coupling breaking conformal invariance . The second obstacle is that vector perturbations die off rapidly as the Universe expands, and to survive until horizon entry their initial value would have to be considerably in excess of the linear regime. In consequence, a significant prediction of inflation is the absence of large-scale vector perturbations. If they are seen, it seems likely to be impossible to make them with inflation alone, though I am not aware of a cast-iron proof. By contrast, topological defect models generically excite vector perturbations.
## 4 Discriminating inflationary models
### 4.1 Power-law behaviour
If the inflationary paradigm survives the tests above, it will be time to decide which of the existing inflation models actually fits the data. In most models, to a good approximation the density perturbations are given by a power-law and the gravitational waves are at best marginally detectable by Planck. Accurate measures of these two quantities have the potential to exclude nearly all existing inflation models.
At present, the spectral index is very loosely constrained; in general the limits are probably around $`0.8<n<1.3`$, though if specific assumptions are made (e.g. critical matter density or significant gravitational waves) this can tighten. As it happens, this entire viable range is fairly well populated by inflation models, which means that any increase in observational sensitivity has the power to exclude a significant fraction of them.
A benchmark for future accuracy is the Planck satellite; recently a detailed analysis, including estimates of foreground removal efficiency, concluded that it would reach a 1-sigma accuracy on $`n`$ of around $`\pm 0.01`$ . By contrast, the 1-sigma error from the MAP satellite is predicted to be in the range $`0.05`$ to $`0.1`$, which in itself may not significantly impact on inflationary models, though it may be powerful in combination with probes such as the power spectrum from the Sloan Digital Sky Survey.
### 4.2 Deviations from the power-law
The power-law approximation to the spectra, as derived in Ref. , is particularly good at the moment because the available observations are not very accurate. In most models the spectra are indistinguishable from power laws even at Planck accuracy, but there are exceptions, and if deviations are observable they correspond to extra available information on the inflationary spectrum . One such class consists of models in which features have been deliberately inserted into the potential in order to generate sharp features in the power spectrum, such as the broken scale-invariance models.
However, even without a specific feature, it may be possible to see deviations, if the slow-roll approximation is not particularly good. There is actually modest theoretical prejudice in favour of this, because in supergravity models the inflaton is expected to receive corrections to its mass which are large enough to threaten slow-roll . Specifically, the slow-roll parameter $`\eta `$, which is supposed to be small, receives a contribution of
$$\eta =1+\text{`something'},$$
(2)
where the ‘something’ is model dependent. It is clear that if slow-roll is to be very good, $`\eta 1`$, then the ‘something’ has to cancel the ‘1’ to quite high accuracy, and there is no theoretical motivation saying it should.
If we accept that, then we conclude that $`n`$ should not be extremely close to one (which would exacerbate the need for cancellation), and also that the deviation from scale-invariance, $`dn/d\mathrm{ln}k`$, which is given by the slow-roll parameters, might be large enough to be measurable . A specific example where this is indeed the case is the running mass model .
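To make the connection concrete, the standard lowest-order slow-roll expressions can be quoted (in the common convention where $`\epsilon `$, $`\eta `$ and $`\xi ^2`$ are built from $`V^{\prime }/V`$, $`V^{\prime \prime }/V`$ and $`V^{\prime }V^{\prime \prime \prime }/V^2`$ respectively; the precise numerical coefficients below are an assumption of this presentation rather than taken from the text):

$$n-1=2\eta -6\epsilon ,\qquad \frac{dn}{d\mathrm{ln}k}=16\epsilon \eta -24\epsilon ^2-2\xi ^2,$$

so an $`\eta `$ of order unity feeds directly into both the tilt and its running.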
### 4.3 Reconstruction without slow-roll
Eventually, in order to get the best possible constraints on inflation one will want to circumvent the slow-roll approximation completely, and this can be done by computing the power spectra (first the scalar and tensor spectra, and from them the induced microwave anisotropies) entirely numerically. Such an approach was recently described by Grivell and Liddle , and represents the optimal way to obtain constraints on inflation from the data (though at present it has only been implemented for single-field models).
In this approach, rather than estimating quantities such as the spectral index $`n`$ from the observations, one directly estimates the potential, in some parametrization such as the coefficients of a Taylor series. An example is shown in Figure 1, which shows a test case of a $`\lambda \varphi ^4`$ potential as it might be reconstructed by the Planck satellite — see Ref. for full details. Twenty different reconstructions are shown (corresponding to different realizations of the random observational errors), whereas in the real world we would get only one of these. We see considerable variation, which arises because the overall amplitude can only be fixed by detection of the tensor component, which is quite marginal in this model. However, there are functions of the potential which are quite well determined. The lower panel shows $`V^{\prime }/V^{3/2}`$ (where the prime is a derivative with respect to the field), which is given to an accuracy of a few percent on the scales where the observations are most efficient. This particular combination is favoured because it is the combination which (at least in the slow-roll approximation) gives the density perturbation spectrum.
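The reason this combination is singled out can be seen from the slow-roll amplitude of the density perturbations, which up to an order-unity numerical factor (the factor depends on convention and is not taken from the text) reads

$$\delta _H\propto \frac{V^{3/2}}{m_{\mathrm{Pl}}^3V^{\prime }},$$

evaluated as each scale leaves the horizon; it is therefore $`V^{\prime }/V^{3/2}`$, rather than $`V`$ and $`V^{\prime }`$ separately, that the observed spectrum pins down most directly.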
No doubt, when first confronted with quality data people will aim to determine $`n`$, $`r`$, and so on along with the cosmological parameters such as the density and Hubble parameter. However, if we become convinced that the inflationary explanation is a good one, this direct reconstruction approach takes maximum advantage of the data in constraining the inflaton potential.
## 5 Outlook
While the present situation is extremely rosy for inflation, which stands as the favoured model for the origin of structure, there is a sense in which the present is the worst time to be considering inflation models. A quick survey of the literature suggests that there are perhaps of order 100 viable models of inflation, the most there has ever been. At the original Inner Space, Outer Space meeting in 1984, there were only a handful. It’s true that some models devised in those 15 years have been excluded, such as the extended inflation models , but model builders have for the most part had quite a free hand operating within the given constraints.
Further, this is likely to be close to the largest number of viable models there will ever be, because observations are at the threshold of significantly impacting on this collection. Experiments such as Boomerang, VSA and MAP are capable of ruling out inflation completely, by one of the methods outlined in this article. If inflation survives, they will have significantly reduced the number of models, and then a few years later Planck should eliminate most of the rest. Hopefully, by the time of Inner Space, Outer Space III in 2014, we will be back once more to a handful.
## Acknowledgments
Thanks to Ed Copeland, Ian Grivell, Rocky Kolb and Jim Lidsey for collaborations which underlie much of the discussion in this article, and to Anthony Lasenby for stressing to me the importance of the absence of vector perturbations as a prediction of known inflationary models. I thank the Royal Society for financial support. |
# QUATERNIONIC SPIN

Much of this material was presented in an invited talk entitled Choosing a Preferred Complex Subalgebra of the Octonions given at the 5th International Conference on Clifford Algebras and their Applications in Mathematical Physics in Ixtapa, MÉXICO, in June 1999.
## 1 INTRODUCTION
We recently outlined a new dimensional reduction scheme . We show here in detail that applying this mechanism to the 10-dimensional massless Dirac equation on Majorana-Weyl spinors leads to a quaternionic description of the full 4-dimensional (free) Dirac equation which treats both massive and massless particles on an equal footing. Furthermore, there are naturally 3 such descriptions, each of which corresponds to a generation of leptons with the correct number of spin/helicity states.
The massive Dirac equation is usually formulated in the context of 4-component Dirac spinors. The 4 degrees of freedom correspond to the choice of spin (up or down) and the choice of particle or antiparticle. Similarly, 2-component Penrose spinors, which can be thought of as the square roots of null vectors, correspond to massless objects, such as photons. In Section 2 we set the stage by reviewing these standard properties of the chiral description of the momentum-space Dirac equation.
Penrose spinors are usually thought of as Weyl projections of Dirac spinors; a Dirac spinor contains twice the information of a single Penrose spinor. As an alternative to doubling the number of (complex) components, however, we double the dimension of the underlying division algebra, from the complex numbers ℂ to the quaternions ℍ. The anticommutativity of the quaternions then enables us to package two complex representations of opposite chirality into the (now quaternionic) 2-component formalism. In Section 3 we show how to replace the usual 4-component complex Dirac description with an equivalent 2-component quaternionic Penrose description, and further discuss how this puts the massive and massless Dirac equations on an equal footing.
We then consider in Section 4 the massless Dirac equation on Majorana-Weyl spinors (in momentum space) in 10 dimensions, which can be nicely described in terms of 2-component spinors over the octonions $`𝕆`$, the only other normed division algebra besides ℝ, ℂ, and ℍ.
The final, and most important, ingredient in our approach is the dimensional reduction scheme introduced in . In Sections 5.1 and 5.2 we describe how the choice of a preferred octonionic unit, or equivalently of a preferred complex subalgebra $`ℂ\subset 𝕆`$, naturally reduces 10 spacetime dimensions to 4, and further allows us to use the standard representation of the Lorentz group $`SO(3,1)`$ as $`SL(2,ℂ)\subset SL(2,𝕆)`$. Putting this all together, we show in Section 5.3 that the quaternionic spin/helicity eigenstates correspond precisely to the particle spectrum of a generation of leptons, consisting of 1 massive and 1 massless particle and their antiparticles.
In Section 5.4, we discuss the remarkable fact that the quaternionic spin eigenstates are in fact simultaneous eigenstates of all 3 spin operators, although the other two eigenvalues are not real. Finally, in Section 6 we discuss our results, in particular that, in a natural sense, there are precisely 3 such quaternionic subalgebras of the octonions, which we interpret as generations.
There is a long history of trying to use the quaternions in 4-dimensional quantum mechanics; see the comprehensive treatment in and references therein. Our approach is different in that we use the additional degrees of freedom to repackage existing information, rather than increasing the size of the underlying space of scalars. Ultimately, this leads us to work in more than 4 spacetime dimensions.
We also note a relatively unknown paper by Dirac which, much to our surprise, contains the precursors of several of the key ideas presented here.
The octonions were first introduced into quantum mechanics by Jordan . There has in fact been much recent interest in using the octonions in (higher-dimensional) field theory; excellent modern treatments can be found in .
After much of this work was completed, we became aware of the recent work of Schücking et al. , who also use a quaternionic formalism to describe a single generation of leptons. They further speculate that extending the formalism to the octonions would yield a description of a single generation of quarks as well. Although the language is strikingly similar, our approach differs fundamentally from theirs in its description of momentum. Ultimately, this hinges on our interpretation of the obvious $`SU(2)`$ as spin, whereas Schücking and coworkers interpret it as isospin.
## 2 COMPLEX FORMALISM
The standard Weyl representation of the gamma matrices in signature $`(+,-,-,-)`$ is
$$\gamma _t=\left(\begin{array}{cc}0& I\\ I& 0\end{array}\right)\qquad \gamma _a=\left(\begin{array}{cc}0& \sigma _a\\ -\sigma _a& 0\end{array}\right)$$
(1)
where $`\sigma _a`$ for $`a=x,y,z`$ denote the usual Pauli matrices (for later compatibility with our octonion conventions we use $`\ell `$ rather than $`i`$ to denote the complex unit)

$$\sigma _x=\left(\begin{array}{cc}0& 1\\ 1& 0\end{array}\right)\qquad \sigma _y=\left(\begin{array}{cc}0& -\ell \\ \ell & 0\end{array}\right)\qquad \sigma _z=\left(\begin{array}{cc}1& 0\\ 0& -1\end{array}\right)$$
(2)
and $`I`$ is the $`2\times 2`$ identity matrix.
The original formulation of the Dirac equation involves the even part of the Clifford algebra, historically written in terms of the matrices $`\alpha _a=\gamma _t\gamma _a`$ and $`\beta =\gamma _t`$. Explicitly, the momentum-space Dirac equation in this signature can be written as
$$(\gamma _t\gamma _\alpha p^\alpha -m\gamma _t)\mathrm{\Psi }=0$$
(3)
where $`\alpha =t,x,y,z`$ and $`\mathrm{\Psi }`$ is a 4-component complex (Dirac) spinor.
Writing $`\mathrm{\Psi }`$ in terms of two 2-component complex Weyl (or Penrose) spinors $`\theta `$ and $`\eta `$ as
$$\mathrm{\Psi }=\left(\begin{array}{c}\theta \\ \\ \eta \end{array}\right)$$
(4)
and expanding (3) leads to
$$\left(\begin{array}{cc}p^tI-p^a\sigma _a& -m\\ -m& p^tI+p^a\sigma _a\end{array}\right)\left(\begin{array}{c}\theta \\ \eta \end{array}\right)=0$$
(5)
This leads us to identify the momentum 4-vector with the $`2\times 2`$ Hermitian matrix
$$𝐩=p^\alpha \sigma _\alpha =\left(\begin{array}{cc}p^t+p^z& p^x-\ell p^y\\ p^x+\ell p^y& p^t-p^z\end{array}\right)$$
(6)
where we have set $`\sigma _t=I`$, which reduces (5) to the two equations
$`\stackrel{~}{𝐩}\theta +m\eta `$ $`=`$ $`0`$ (7)

$`-m\theta +𝐩\eta `$ $`=`$ $`0`$ (8)
where the tilde denotes trace-reversal. Explicitly,
$$\stackrel{~}{𝐩}=𝐩-\mathrm{tr}(𝐩)I$$
(9)
which reverses the sign of $`p^t`$, so that $`\stackrel{~}{𝐩}`$ can be identified, up to sign, with the 1-form dual to $`𝐩`$. This interpretation is strengthened by noting that
$$\stackrel{~}{𝐩}𝐩=-det(𝐩)I=-p_\alpha p^\alpha I=-m^2I$$
(10)
where the identification of the norm of $`p^\alpha `$ with $`m`$ is just the compatibility condition between (7) and (8).
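With the signs as reconstructed above, this compatibility is a one-line check (our verification): applying $`𝐩`$ to (7) and then using (8),

$$𝐩\stackrel{~}{𝐩}\theta =-m𝐩\eta =-m^2\theta ,$$

in agreement with (10).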
## 3 QUATERNIONIC FORMALISM
The quaternions ℍ are the associative, noncommutative, normed division algebra over the reals. The quaternions are spanned by the identity element $`1`$ and three imaginary units, usually denoted $`i`$, $`j`$, $`k:=ij`$. Quaternionic conjugation, denoted with a bar, is given by reversing the sign of each imaginary unit. Each imaginary unit squares to $`-1`$, and they anticommute with each other; the full multiplication table then follows using associativity. (The use of $`\vec{\imath }`$, $`\vec{\jmath }`$, $`\vec{k}`$ for Cartesian basis vectors originates with the quaternions, which were introduced by Hamilton as an early step towards vectors . Making the obvious identification of vectors $`\vec{v}`$, $`\vec{w}`$ with imaginary quaternions $`v`$, $`w`$, the real part of the quaternionic product $`vw`$ is (minus) the dot product $`\vec{v}\cdot \vec{w}`$, while the imaginary part is the cross product $`\vec{v}\times \vec{w}`$.) However, in order to avoid conflict with our subsequent conventions for the octonions, we will instead label our quaternionic basis $`\ell `$, $`k`$, $`\ell k`$. The imaginary unit $`\ell `$ will play the role of the complex unit $`i`$, and, as we will see later, $`k`$ will label this particular quaternionic subalgebra of the octonions. In terms of the Cayley-Dickson process , we have
$$ℍ=ℂ+ℂk=(ℝ+ℝ\ell )+(ℝ+ℝ\ell )k$$
(11)
As vector spaces, $`ℍ=ℂ^2`$, which allows us to identify $`ℍ^2`$ with $`ℂ^4`$ in several different ways. We choose the identification
$$\left(\begin{array}{c}A\\ B\\ C\\ D\end{array}\right)\leftrightarrow \left(\begin{array}{c}C-kB\\ D+kA\end{array}\right)$$
(12)
with $`A,B,C,D\in ℂ`$. Equivalently, we can write this identification in terms of the Weyl (Penrose) spinors $`\theta `$ and $`\eta `$ as
$$\mathrm{\Psi }=\left(\begin{array}{c}\theta \\ \eta \end{array}\right)\leftrightarrow \eta +\sigma _k\theta $$
(13)
where we have introduced the generalized Pauli matrix
$$\sigma _k=\left(\begin{array}{cc}0& -k\\ k& 0\end{array}\right)$$
(14)
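Two properties of $`\sigma _k`$ drive everything that follows, and are worth recording explicitly (our check, using $`k\ell =-\ell k`$): it squares to the identity, and it anticommutes with each of the complex Pauli matrices,

$$\sigma _k^2=I,\qquad \sigma _k\sigma _a=-\sigma _a\sigma _k\quad (a=x,y,z).$$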
Since (13) is clearly a vector space isomorphism, there is also an isomorphism relating the linear maps on these spaces. We can use the induced isomorphism to rewrite the Dirac equation (3) in 2-component quaternionic language. Direct computation yields the correspondences
$$\gamma _t\gamma _a=\left(\begin{array}{cc}-\sigma _a& 0\\ 0& \sigma _a\end{array}\right)\leftrightarrow \sigma _a$$
(15)
and
$$\gamma _t\leftrightarrow \sigma _k$$
(16)
and of course also
$$\gamma _t\gamma _t\leftrightarrow \sigma _t$$
(17)
since the left-hand-side is the $`4\times 4`$ identity matrix and the right-hand-side is the $`2\times 2`$ identity matrix. Direct translation of (3) now leads to the quaternionic Dirac equation
$$(𝐩-m\sigma _k)(\eta +\sigma _k\theta )=0$$
(18)
Working backwards, we can separate this into an equation not involving $`k`$, which is precisely (8), and an equation involving $`k`$, which is
$$𝐩\sigma _k\theta -m\sigma _k\eta =0$$
(19)
Multiplying this equation on the left by $`\sigma _k`$, and using the remarkable identity
$$\sigma _k𝐩\sigma _k=-\stackrel{~}{𝐩}$$
(20)
reduces (19) to (7), as expected.
So far, we have done nothing more or less than rewrite the usual momentum-space Dirac equation in 2-component quaternionic language. However, the appearance of the term $`m\sigma _k`$ suggests a way to put in the mass term on the same footing as the other terms, which we now exploit. Multiplying (18) on the left by $`\sigma _k`$ and using (20) brings the Dirac equation to the form
$$(\stackrel{~}{𝐩}+m\sigma _k)\psi =0$$
(21)
where we have introduced the 2-component quaternionic spinor
$$\psi =\sigma _k(\eta +\sigma _k\theta )=\theta +\sigma _k\eta $$
(22)
When written out in full, (21) takes the form
$$\left(\begin{array}{cc}-p^t+p^z& p^x-\ell p^y-km\\ p^x+\ell p^y+km& -p^t-p^z\end{array}\right)\psi =0$$
(23)
This clearly suggests viewing the mass as an additional spacelike component of a higher-dimensional vector. Furthermore, since the matrix multiplying $`\psi `$ has determinant zero, this higher-dimensional vector is null. We thus appear to have reduced the massive Dirac equation in 4 dimensions to the massless Dirac, or Weyl, equation in higher dimensions, thus putting the massive and massless cases on an equal footing. This expectation is indeed correct, as we show in the next section in the more general octonionic setting.
## 4 OCTONIONIC FORMALISM
### 4.1 Octonionic Penrose Spinors
The octonions $`𝕆`$ are the nonassociative, noncommutative, normed division algebra over the reals. The octonions are spanned by the identity element $`1`$ and seven imaginary units, which we label as $`\{i,j,k,k\ell ,j\ell ,i\ell ,\ell \}`$. Each imaginary unit squares to $`-1`$

$$i^2=j^2=k^2=\mathrm{\cdots }=\ell ^2=-1$$
(24)
and the full multiplication table can be conveniently encoded in the 7-point projective plane, as shown in Figure 1; each line is to be thought of as a circle. The octonionic units can be grouped into (the imaginary parts of) quaternionic subalgebras in 7 different ways, corresponding to the 7 lines in the figure; these will be referred to as quaternionic triples. Within each triple, the arrows give the orientation, so that e.g.
$$ij=k=-ji$$
(25)
Any three imaginary basis units which do not lie in such a triple anti-associate. Note that any two octonions automatically lie in (at least one) quaternionic triple, so that expressions containing only two independent imaginary octonionic directions do associate. Octonionic conjugation is given by reversing the sign of the imaginary basis units, and the norm is just
$$|p|=\sqrt{p\overline{p}}$$
(26)
which satisfies the defining property of a normed division algebra, namely
$$|pq|=|p||q|$$
(27)
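The composition property (27) is easy to check numerically by building ℍ and 𝕆 as nested pairs via the Cayley-Dickson doubling mentioned in Section 3. The sketch below is ours, not from the paper, and the doubling formula used is one standard convention among several:

```python
# Cayley-Dickson doubling: quaternions as pairs of complex numbers and
# octonions as pairs of quaternions, with
#   (a,b)(c,d) = (ac - conj(d) b, da + b conj(c)),  conj((a,b)) = (conj(a), -b).
import random

def conj(x):
    if isinstance(x, tuple):
        return (conj(x[0]), neg(x[1]))
    return x.conjugate()          # base case: Python complex

def neg(x):
    return (neg(x[0]), neg(x[1])) if isinstance(x, tuple) else -x

def add(x, y):
    if isinstance(x, tuple):
        return (add(x[0], y[0]), add(x[1], y[1]))
    return x + y

def mul(x, y):
    if isinstance(x, tuple):
        (a, b), (c, d) = x, y
        return (add(mul(a, c), neg(mul(conj(d), b))),
                add(mul(d, a), mul(b, conj(c))))
    return x * y

def norm2(x):                     # squared norm: sum over all components
    return norm2(x[0]) + norm2(x[1]) if isinstance(x, tuple) else abs(x)**2

def rand_oct():                   # octonion = pair of quaternions
    z = lambda: complex(random.uniform(-1, 1), random.uniform(-1, 1))
    return ((z(), z()), (z(), z()))

p, q = rand_oct(), rand_oct()
assert abs(norm2(mul(p, q)) - norm2(p) * norm2(q)) < 1e-10
print("|pq|^2 = |p|^2 |q|^2 holds numerically")
```

The same check fails one doubling later (for the sedenions), which is the algebraic reason the octonions are the last normed division algebra.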
We follow in representing real $`(9+1)`$-dimensional Minkowski space in terms of $`2\times 2`$ Hermitian octonionic matrices. (A number of authors, such as , have used this approach to describe supersymmetric theories in 10 dimensions. Fairlie & Manogue and Manogue & Sudbery described solutions of the superstring equations of motion using octonionic parameters, and Schray described the superparticle. A more extensive bibliography appears in .) In analogy with the complex case, a vector field $`q^\mu `$ with $`\mu =0,\mathrm{\dots },9`$ can be thought of under this representation as a matrix
$$Q=\left(\begin{array}{cc}q^+& \overline{q}\\ q& q^-\end{array}\right)$$
(28)
where $`q^\pm =q^0\pm q^9`$ are the components of $`q^\mu `$ in 2 null directions and $`q=q^1+q^2i+\mathrm{\dots }+q^8\ell \in 𝕆`$ is an octonion representing the transverse spatial coordinates. Following , we define
$$\stackrel{~}{Q}=Q-\mathrm{tr}(Q)I$$
(29)
Furthermore, since $`Q`$ satisfies its characteristic polynomial, we have
$$Q\stackrel{~}{Q}=\stackrel{~}{Q}Q=Q^2-\mathrm{tr}(Q)Q=-det(Q)I=-g_{\mu \nu }q^\mu q^\nu I$$
(30)
where $`g_{\mu \nu }`$ is the Minkowski metric (with signature $`(+,-,\mathrm{\dots },-)`$). We can therefore identify the tilde operation, up to sign, with the metric dual, so that $`\stackrel{~}{Q}`$ represents the covariant vector field $`q_\mu `$.
Just as in the complex case (compare (1)), this can be thought of (up to associativity issues) as a Weyl representation of the underlying Clifford algebra $`𝒞l(9,1)`$ in terms of $`4\times 4`$ gamma matrices of the form
$$q^\mu \gamma _\mu =\left(\begin{array}{cc}0& Q\\ -\stackrel{~}{Q}& 0\end{array}\right)$$
(31)
which are now octonionic. It is readily checked that
$$\gamma _\mu \gamma _\nu +\gamma _\nu \gamma _\mu =2g_{\mu \nu }$$
(32)
as desired.
In this language, a Majorana spinor $`\mathrm{\Psi }=\left(\begin{array}{c}\psi \\ \\ \chi \end{array}\right)`$ is a 4-component octonionic column, whose chiral projections are the Majorana-Weyl spinors $`\left(\begin{array}{c}\psi \\ \\ 0\end{array}\right)`$ and $`\left(\begin{array}{c}0\\ \\ \chi \end{array}\right)`$, which can be identified with the 2-component octonionic columns $`\psi `$ and $`\chi `$, which in turn can be thought of as generalized Penrose spinors. Writing
$$\gamma _\mu =\left(\begin{array}{cc}0& \sigma _\mu \\ -\stackrel{~}{\sigma }_\mu & 0\end{array}\right)$$
(33)
or equivalently
$$Q=q^\mu \sigma _\mu =q_\mu \sigma ^\mu $$
(34)
defines the octonionic Pauli matrices $`\sigma _\mu `$. The matrices $`\sigma _a`$, with $`a=1,\mathrm{\dots },9`$, are the natural generalization of the ordinary Pauli matrices to the octonions, and $`\sigma _0=I`$. In analogy with our treatment of the complex case, we have
$$\gamma _0\gamma _\mu =\left(\begin{array}{cc}-\stackrel{~}{\sigma }_\mu & 0\\ 0& \sigma _\mu \end{array}\right)$$
(35)
For completeness, we record some useful relationships. The adjoint $`\overline{\mathrm{\Psi }}`$ of the Majorana spinor $`\mathrm{\Psi }`$ is given as usual by
$$\overline{\mathrm{\Psi }}=\mathrm{\Psi }^{\dagger }\gamma _0$$
(36)
since
$$\gamma _\mu ^{\dagger }\gamma _0=\gamma _0\gamma _\mu $$
(37)
Given a Majorana spinor $`\mathrm{\Psi }=\left(\begin{array}{c}\psi \\ \chi \end{array}\right)`$, we can construct a real vector (we assume here that the components of our spinors are commuting, as we believe that the anticommuting nature of fermions may be carried by the octonionic units themselves; an analogous result for anticommuting spinors was obtained in both of )
$$q^\mu [\mathrm{\Psi }]=\mathrm{Re}(\mathrm{\Psi }^{\dagger }\gamma ^0\gamma ^\mu \mathrm{\Psi })$$
(38)
corresponding in traditional language to $`\overline{\mathrm{\Psi }}\gamma ^\mu \mathrm{\Psi }`$. We can further identify this with a $`2\times 2`$ matrix $`Q[\mathrm{\Psi }]`$ as in (31) above. Direct computation using the cyclic property of the trace, e.g. for octonionic columns $`\mathrm{\Psi }_1`$, $`\mathrm{\Psi }_2`$, and octonionic matrices $`\gamma `$
$$\mathrm{Re}(\mathrm{\Psi }_1^{\dagger }\gamma \mathrm{\Psi }_2)=\mathrm{Re}\left(\mathrm{tr}(\mathrm{\Psi }_1^{\dagger }\gamma \mathrm{\Psi }_2)\right)=\mathrm{Re}\left(\mathrm{tr}(\mathrm{\Psi }_2\mathrm{\Psi }_1^{\dagger }\gamma )\right)$$
(39)
shows that
$$Q[\mathrm{\Psi }]=2\psi \psi ^{\dagger }-2\stackrel{~}{\chi \chi ^{\dagger }}$$
(40)
### 4.2 Octonionic Dirac Equation
The momentum-space massless Dirac equation (Weyl equation) in 10 dimensions can be written in the form
$$\gamma _0\gamma _\mu p^\mu \mathrm{\Psi }=0$$
(41)
Choosing $`\mathrm{\Psi }=\left(\begin{array}{c}\psi \\ \\ 0\end{array}\right)`$ to be a Majorana-Weyl spinor, and using (35) and (34), (41) finally takes the form
$$\stackrel{~}{P}\psi =0$$
(42)
which is the octonionic Weyl equation. In matrix notation, it is straightforward to show that the momentum $`p^\mu `$ of a solution of the Weyl equation must be null: (42) says that the $`2\times 2`$ Hermitian matrix $`P`$ has $`0`$ as one of its eigenvalues (it is not true in general that the determinant of an $`n\times n`$ Hermitian octonionic matrix is the product of its (real) eigenvalues, unless $`n=2`$; however, see also ), which forces $`det(P)=0`$, which is
$$\stackrel{~}{P}P=0$$
(43)
which in turn is precisely the condition that $`p^\mu `$ be null.
Equations (43) and (42) are algebraically the same as the octonionic versions of two of the superstring equations of motion, as discussed in , and are also the octonionic superparticle equations . As implied by those references, (43) implies the existence of a 2-component spinor $`\theta `$ such that
$$P=\pm \theta \theta ^{\dagger }$$
(44)
where the sign corresponds to the time orientation of $`P`$, and the general solution of (42) is
$$\psi =\theta \xi $$
(45)
where $`\xi \in 𝕆`$ is arbitrary. The components of $`\theta `$ lie in the complex subalgebra of $`𝕆`$ determined by $`P`$, so that (the components of) $`\theta `$ and $`\xi `$ (and hence also $`P`$) belong to a quaternionic subalgebra of $`𝕆`$. Thus, for solutions (45), the Weyl equation (41) itself becomes quaternionic.
Furthermore, it follows immediately from (45) that
$$\psi \psi ^{\dagger }=\pm |\xi |^2P$$
(46)
Comparing this with (40), we see that the vector constructed from $`\psi `$ is proportional to $`P`$, or in more traditional language
$$\overline{\mathrm{\Psi }}\gamma ^\mu \mathrm{\Psi }\propto p^\mu $$
(47)
which can be interpreted as the requirement that the Pauli-Lubanski spin vector be proportional to the momentum for a massless particle.
## 5 DIMENSIONAL REDUCTION AND SPIN
### 5.1 Choosing a Preferred Complex Subalgebra
The description in the preceding section of 10-dimensional Minkowski space in terms of Hermitian octonionic matrices is a direct generalization of the usual description of ordinary (4-dimensional) Minkowski space in terms of complex Hermitian matrices. If we fix a complex subalgebra $`ℂ\subset 𝕆`$, then we single out a 4-dimensional Minkowski subspace of 10-dimensional Minkowski space. The projection of a 10-dimensional null vector onto this subspace is a causal 4-dimensional vector, which is null if and only if the original vector was already contained in the subspace, and timelike otherwise. The time orientation of the projected vector is the same as that of the original, and the induced mass is given by the norm of the remaining 6 components. Furthermore, the ordinary Lorentz group $`SO(3,1)`$ clearly sits inside the Lorentz group $`SO(9,1)`$ via the identification of their double-covers, the spin groups $`\text{Spin}(d,1)`$, namely (the last equality below is more usually discussed at the Lie algebra level; Manogue & Schray gave an explicit representation using this language of the finite Lorentz transformations in 10 spacetime dimensions, and for further discussion of the notation $`SL(2,𝕆)`$, see also )
$$\text{Spin}(3,1)=SL(2,ℂ)\subset SL(2,𝕆)=\text{Spin}(9,1)$$
(48)
Therefore, all it takes to break 10 spacetime dimensions to 4 is to choose a preferred octonionic unit to play the role of the complex unit. We choose $`\ell `$ rather than $`i`$ to fill this role, preferring to save $`i`$, $`j`$, $`k`$ for a (distinguished) quaternionic triple. The projection $`\pi `$ from $`𝕆`$ to $`ℂ`$ is given by
$$\pi (q)=\frac{1}{2}(q+\ell q\overline{\ell })$$
(49)
and we thus obtain a preferred $`SL(2,ℂ)`$ subgroup of $`SL(2,𝕆)`$, corresponding to the “physical” Lorentz group.
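It is straightforward to verify (our check) that (49) is precisely the orthogonal projection onto the preferred complex subalgebra: writing $`q=a+b`$ with $`a\in ℂ`$ and $`b`$ anticommuting with $`\ell `$, and using $`\overline{\ell }=-\ell `$ and $`\ell \overline{\ell }=1`$,

$$\ell a\overline{\ell }=a\ell \overline{\ell }=a,\qquad \ell b\overline{\ell }=-b\ell \overline{\ell }=-b,$$

so that $`\pi (a+b)=a`$.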
### 5.2 Spin
Since we now have a preferred 4-d Lorentz group, we can use its rotation subgroup $`SU(2)\subset SL(2,ℂ)`$ to define spin. However, care must be taken when constructing the Lie algebra $`su(2)`$, due to the lack of commutativity.
Under the usual action of $`M\in SU(2)`$ on a Hermitian matrix $`Q`$ (thought of as a spacetime vector via (34)), namely
$$Q\mapsto MQM^{\dagger }$$
(50)
we can identify the basis rotations as usual as
$$R_z=\left(\begin{array}{cc}e^{-\ell \frac{\varphi }{2}}& 0\\ 0& e^{\ell \frac{\varphi }{2}}\end{array}\right)\qquad R_y=\left(\begin{array}{cc}\mathrm{cos}\frac{\varphi }{2}& -\mathrm{sin}\frac{\varphi }{2}\\ \mathrm{sin}\frac{\varphi }{2}& \mathrm{cos}\frac{\varphi }{2}\end{array}\right)\qquad R_x=\left(\begin{array}{cc}\mathrm{cos}\frac{\varphi }{2}& -\ell \mathrm{sin}\frac{\varphi }{2}\\ -\ell \mathrm{sin}\frac{\varphi }{2}& \mathrm{cos}\frac{\varphi }{2}\end{array}\right)$$
(51)
corresponding to rotations by the angle $`\varphi `$ about the $`z`$, $`y`$, and $`x`$ axes, respectively.
The infinitesimal generators of the Lie algebra $`su(2)`$ are obtained by differentiating these group elements, via
$$L_a=\frac{dR_a}{d\varphi }|_{\varphi =0}$$
(52)
where as before $`a=x,y,z`$. For reasons which will become apparent, we have not multiplied these generators by $`\ell `$ to obtain Hermitian matrices. We have instead
$$2L_z=\left(\begin{array}{cc}-\ell & 0\\ 0& \ell \end{array}\right)\qquad 2L_y=\left(\begin{array}{cc}0& -1\\ 1& 0\end{array}\right)\qquad 2L_x=\left(\begin{array}{cc}0& -\ell \\ -\ell & 0\end{array}\right)$$
(53)
which satisfy the commutation relations
$$[L_a,L_b]=ϵ_{abc}L_c$$
(54)
where $`ϵ`$ is completely antisymmetric and
$$ϵ_{xyz}=1$$
(55)
Spin eigenstates are usually obtained as eigenvectors of the Hermitian matrix $`\ell L_z`$, with real eigenvalues. Here we must be careful to multiply by $`\ell `$ in the correct place. We define

$$\widehat{L}_z\psi :=L_z\psi \ell $$
(56)
which is well-defined by alternativity, so that
$$\widehat{L}_z=\ell _R\circ L_z$$
(57)
where the operator $`\ell _R`$ denotes right multiplication by $`\ell `$ and where $`\circ `$ denotes composition. The operators $`\widehat{L}_a`$ are self-adjoint with respect to the inner product
$$\langle \psi ,\chi \rangle =\pi \left(\psi ^{\dagger }\chi \right)$$
(58)
We therefore consider the eigenvalue problem
$$\widehat{L}_z\psi =\psi \lambda $$
(59)
with $`\lambda `$. It is straightforward to show that the real eigenvalues are
$$\lambda _\pm =\pm \frac{1}{2}$$
(60)
as expected. However, the form of the eigenvectors is a bit more surprising:
$$\psi _+=\left(\begin{array}{c}A\\ kD\end{array}\right)\psi _{}=\left(\begin{array}{c}kB\\ C\end{array}\right)$$
(61)
where $`A,B,C,D`$ are any elements of the preferred complex subalgebra, and $`k`$ is any imaginary octonionic unit orthogonal to $`\ell `$, so that $`k`$ and $`\ell `$ anticommute. Thus, the components of spin eigenstates are contained in the quaternionic subalgebra $`ℍ\subset 𝕆`$ which is generated by $`\ell `$ and $`k`$.
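It is instructive to verify the eigenvalue directly for $`\psi _+`$ (our computation, using $`k\ell =-\ell k`$, $`\ell ^2=-1`$, and $`2L_z`$ as given above):

$$2\widehat{L}_z\left(\begin{array}{c}A\\ kD\end{array}\right)=\left(\begin{array}{c}-\ell A\\ \ell kD\end{array}\right)\ell =\left(\begin{array}{c}-\ell A\ell \\ \ell k\ell D\end{array}\right)=\left(\begin{array}{c}A\\ kD\end{array}\right),$$

since $`A`$ and $`D`$ commute with $`\ell `$ while $`k`$ anticommutes with it.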
Therefore, if we wish to consider spin eigenstates, $`\ell `$ must be in the quaternionic subalgebra $`ℍ`$ defined by the solution. We can further assume without loss of generality that $`ℍ`$ takes the form given in (11). Thus, the only possible nonzero components of $`p_\mu `$ are $`p_t=p_0`$, $`p_x=p_1`$, $`p_k=p_4`$, $`p_{k\ell }=p_5`$, $`p_y=p_8`$, and $`p_z=p_9`$, corresponding to the gamma matrices with components in $`ℍ`$. We can further assume (via a rotation in the $`(k,k\ell )`$-plane if necessary) that $`p_5=0`$, so that
$$P=\pi (P)+m\sigma ^k$$
(62)
where
$$\pi (P)=p_\alpha \sigma ^\alpha \equiv 𝐩$$
(63)
with $`\alpha =0,1,8,9`$ (or equivalently $`\alpha =t,x,y,z`$) is complex, and corresponds to the 4-dimensional momentum of the particle, with squared mass
$$m^2=p_\alpha p^\alpha =det(\pi (P))$$
(64)
Inserting (62) into (42), we recover precisely (21), and we see that we have come full circle: Solutions of the octonionic Weyl equation (41) are described precisely by the quaternionic formalism of Section 3, and the dimensional reduction scheme determines the mass term.
### 5.3 Particles
For each solution $`\psi `$ of (45), the momentum is proportional to $`\psi \psi ^{\dagger }`$ by (46). Up to an overall factor, we can therefore read off the components of the 4-dimensional momentum $`p_\alpha `$ directly from $`\pi (\psi \psi ^{\dagger })`$. We can use a Lorentz transformation to bring a massive particle to rest, or to orient the momentum of a massless particle to be in the $`z`$-direction.
If $`m\ne 0`$, we can distinguish particles from antiparticles by the sign of the term involving $`m`$, which is the coefficient of $`\sigma _k`$ in $`P`$. Equivalently, we have the particle/antiparticle projections (at rest)
$$\mathrm{\Pi }_\pm =\frac{1}{2}\left(\sigma _t\pm \sigma _k\right)$$
(65)
If $`m=0`$, however, we can only distinguish particles from antiparticles in momentum space by the sign of $`p^0`$, as usual; this is the same as the sign in (46). Similarly, in this language, the chiral projection operator is constructed from
$$\mathrm{{\rm Y}}^5=\sigma ^t\sigma ^x\sigma ^y\sigma ^z=\left(\begin{array}{cc}-\ell & 0\\ 0& -\ell \end{array}\right)$$
(66)
However, as with spin, we must multiply by $`\ell `$ in the correct place, obtaining
$$\widehat{\mathrm{{\rm Y}}}^5=\ell _R\circ \mathrm{{\rm Y}}^5$$
(67)
As a result, even though $`\mathrm{{\rm Y}}^5`$ is a multiple of the identity, $`\widehat{\mathrm{{\rm Y}}}^5`$ is not, and the operators $`\frac{1}{2}(\sigma _t\pm \widehat{\mathrm{{\rm Y}}}^5)`$ project $`ℍ^2`$ into the Weyl subspaces $`ℂ^2\oplus ℂ^2k`$ as desired.
Combining the spin and particle information, over the quaternionic subalgebra $`ℍ\subset 𝕆`$ determined by $`k`$ and $`\ell `$, we thus find 1 massive spin-$`\frac{1}{2}`$ particle at rest, with 2 spin states, namely
$$e_{\uparrow }=\left(\begin{array}{c}1\\ k\end{array}\right)\qquad e_{\downarrow }=\left(\begin{array}{c}-k\\ 1\end{array}\right)$$
(68)
whose antiparticle is obtained by replacing $`k`$ by $`-k`$ (and changing the sign in (46)). We also find 1 massless spin-$`\frac{1}{2}`$ particle involving $`k`$ moving in the $`z`$-direction, with a single helicity state,
$$\nu _z=\left(\begin{array}{c}0\\ k\end{array}\right)$$
(69)
which corresponds, as usual, to both a particle and its antiparticle. It is important to note that
$$\nu _{-z}=\left(\begin{array}{c}k\\ 0\end{array}\right)$$
(70)
corresponds to a massless particle with the same helicity moving in the opposite direction, not to a different particle with the opposite helicity. Each of the above states may be multiplied (on the right) by an arbitrary complex number.
There is also a single complex massless spin-$`\frac{1}{2}`$ particle, with the opposite helicity, which is given in momentum space by
$$\text{Ø}_z=\left(\begin{array}{c}0\\ 1\end{array}\right)$$
(71)
As with the other massless momentum space states, this describes both a particle and an antiparticle. Alone among the particles, this one does not contain $`k`$, and hence does not depend on the choice of identification of a particular quaternionic subalgebra $`ℍ`$ satisfying $`ℂ\subset ℍ\subset 𝕆`$.
### 5.4 Spin Operators
We saw in the previous section that the spin up particle state $`e_{\uparrow }`$ is a simultaneous eigenvector of the spin operator and particle projections, that is

$$2\widehat{L}_ze_{\uparrow }=e_{\uparrow }=\mathrm{\Pi }_+e_{\uparrow }$$
(72)
Remarkably, $`e_{\uparrow }`$ is also an eigenvector of the remaining spin operators, namely

$$2\widehat{L}_xe_{\uparrow }=-e_{\uparrow }k\qquad 2\widehat{L}_ye_{\uparrow }=-e_{\uparrow }k\ell $$
(73)
although the eigenvalues are not real. Similar statements hold for the corresponding spin down and antiparticle states, although with different eigenvalues.
We find it illuminating to consider the equivalent right eigenvalue problem
$$L_z\psi =\psi \lambda $$
(74)
for the non-Hermitian operator $`L_z`$. The operator $`L_z`$ admits imaginary eigenvalues $`\pm \ell /2`$, which correspond to the usual spin eigenstates. But $`L_z`$ also admits other imaginary eigenvalues! These correspond precisely to the eigenvalues of $`\widehat{L}_z`$ which are not real, and in fact not in $`ℂ`$. We emphasize that the spin operators are self-adjoint (with respect to (58)). However, over the octonions it is not true that all the eigenvalues of Hermitian matrices are real ; the case of self-adjoint operators is similar.
How does it affect the traditional interpretation of quantum mechanics to have simultaneous eigenstates of all 3 spin operators? The essential feature which permits this is that only one of the eigenvalues is real, and only real eigenvalues correspond to observables. Thus, from this point of view, the reason that the spin operators fail to commute is not that they do not admit simultaneous eigenstates, but rather that their eigenvalues fail to commute!
Furthermore, while (right) multiplication by a (complex) phase does not change any of the real eigenvalues, the nonreal eigenvalues do depend on the phase, since the phase doesn’t commute with the eigenvalue! Does this allow one in principle to determine the exact spin orientation, even if no corresponding measurement exists?
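A minimal example of this phase dependence, computed by us with the conventions above: from (73), $`\widehat{L}_xe_{\uparrow }=e_{\uparrow }(-\frac{1}{2}k)`$, and right multiplying the eigenvector by a complex phase gives

$$\widehat{L}_x\left(e_{\uparrow }e^{\ell \alpha }\right)=\left(e_{\uparrow }e^{\ell \alpha }\right)\left(-\frac{1}{2}ke^{2\ell \alpha }\right),$$

since $`e^{-\ell \alpha }ke^{\ell \alpha }=ke^{2\ell \alpha }`$. The real eigenvalues $`\pm \frac{1}{2}`$ of $`\widehat{L}_z`$ are untouched by the phase, but the nonreal eigenvalue of $`\widehat{L}_x`$ rotates in the $`(k,k\ell )`$-plane.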
## 6 DISCUSSION
We have shown how the massless Dirac equation in 10 dimensions reduces to the (massive and massless) Dirac equation in 4 dimensions when a preferred octonionic unit is chosen.
The quaternionic Dirac equation discussed in Section 3 describes 1 massive particle with 2 spin states, 1 massless particle with only 1 helicity, and their antiparticles. We identify this set of particles with a generation of leptons.
Furthermore, as can be seen from Figure 1, there is room in the octonions for exactly 3 such quaternionic descriptions which have only their complex part in common, corresponding to replacing $`k`$ in turn by $`i`$ and $`j`$. We identify these 3 quaternionic spaces as describing 3 generations of leptons.
There is, however, one additional massless particle/antiparticle pair, given by (71). Being purely complex, it does not belong to any generation, and it has the opposite helicity from the other massless particles. We do not currently have a physical interpretation for this additional particle; if this theory is to correspond to nature, then this additional particle must for some reason not interact much with anything else.
Note that the mass appears in this theory as an overall scale, which can be thought of as the length scale associated with the corresponding quaternionic direction. In particular, antiparticles must have the same mass as the corresponding particles. This suggests that the only free parameters in this theory are 3 length scales, corresponding to the masses in each generation.
The theory presented here can be elegantly rewritten in terms of Jordan matrices, i.e. $`3\times 3`$ octonionic Hermitian matrices, along the lines of the approach to the superparticle presented in . This approach, which is briefly described in , demonstrates that the theory is invariant under a much bigger group than the Lorentz group, namely the exceptional group $`E_6`$ (actually, $`E_7`$, since only conformal transformations are involved). We therefore believe it may be possible to extend the theory so as to include quarks and color.
Finally, as noted in , we have worked only in momentum space, and have discussed only free particles. Perhaps our most intriguing result is the observation that the introduction of position space would require a preferred complex unit in the Fourier transform. Similarly, a description of interactions based on minimal coupling would again involve a preferred complex unit. Therefore, it does not appear to be possible to use the formalism presented here to give a full, interacting, 10-dimensional theory in which all 10 spacetime dimensions are on an equal footing. We view this as a tantalizing hint that not only interactions, but even 4-dimensional spacetime itself, may arise as a consequence of the symmetry breaking from 10 dimensions to 4!
ACKNOWLEDGMENTS
It is a pleasure to thank David Griffiths, Phil Siemens, Tony Sudbery, and Pat Welch for comments on earlier versions of this work, Paul Davies for moral support, and Reed College for hospitality. |
# Critical self-organization of astrophysical shocks
## 1. Introduction
It is now generally recognized that most of the observed gamma radiation derives in one way or another from accelerated particles. Radio and x-ray spectral components from a variety of astrophysical objects are believed to have a similar origin. High energy neutrinos, whose detection is on the program of existing and planned water/ice detectors, must also be related to the ultra-high energy cosmic rays (UHECR). Their origin is, in turn, a mystery, and the huge Auger detector complex is now being built to elucidate it (Blandford (1999); Cronin (1999)).
There has been essential progress in our understanding of how accelerated particles produce the radiation detected. The models concentrate on the synchrotron and inverse Compton emission for electrons and on the $`\gamma `$ ray and neutrino production in $`pp`$ and $`p\gamma `$ reactions for protons. However, the primary spectrum of accelerated particles remains a stumbling block making predictions of otherwise similar models so different (e.g., Mannheim et al., (1999); Waxman & Bahcall (1999)).
The “standard” mechanism of particle acceleration, capable of producing nonthermal power-law spectra extending over many decades in energy is the I-order Fermi or diffusive shock acceleration. It was originally suggested to explain the origin of galactic cosmic rays (CRs). For the purposes of the high-energy radiation and UHECR it is usually adopted as an axiom, and mostly only in its simplest, test particle (TP) or linear realization. In particular, it is assumed that any strong nonrelativistic shock routinely produces a $`E^{-2}`$ spectrum of protons and/or electrons. In fact this spectrum arises from a simple formula $`F\propto E^{-(r+2)/(r-1)}`$ where $`r`$ is the shock compression (for $`r=4`$). However, it is valid only if the shock thickness is much smaller than the particle mean free path. This, in turn, is true only if the energy content of accelerated particles is small compared to the shock energy (inefficient acceleration) so that the shock structure is maintained by the thermal, not by the high energy particles. Otherwise, the accelerated particles create the shock structure on their own and if so, then obviously on a scale that is larger or of the order of their mean free path, thus making the above formula invalid. Therefore, the TP regime requires a very low CR number density (the rate of injection $`\nu `$ into the acceleration process), which appears to be impossible in the parameter range of interest. It has been inferred from observations (e.g., Lee (1982)), simulations (e.g., Bennet & Ellison (1995)), and theory (e.g., Malkov (1998)) that the CR number density $`n_{\mathrm{CR}}`$ at a strong shock must be $`\sim 10^{-3}`$ of the background density $`n_1`$ upstream. It is important to emphasize here that when the actual injection rate $`\nu `$ exceeds the critical value (denote it $`\nu _2`$), the test particle ($`E^{-2}`$) solution simply does not exist. Simple measures, such as calculating corrections, are intrinsically inadequate. What happens is that two other solutions with considerably higher efficiencies branch off at a somewhat lower injection rate $`\nu _1<\nu _2`$, one of which disappears again at $`\nu =\nu _2`$, together with the test particle solution.
Thus, it seems to be difficult to put an accelerating shock into a regime in which the CR energy production rate (acceleration efficiency) could be gradually adjusted by changing parameters. It is either too low (TP regime) or it is close to unity. Note that this situation is quite suggestive of that occurring in phase transitions or bifurcations.
Generally, neither of those extreme regimes provide an adequate description of particle spectra and related emission. Nevertheless, we argue in this Letter that despite this apparent lack of regulation ability, shocks must be still capable of self-regulation and self-organization. The transition region between the two acceleration regimes (critical region) is very narrow in control parameters like $`\nu `$. On the other hand, the self-regulation can work efficiently only when the parameters are within this region. This requirement determines them, and also resolves the question of the mechanism of self-regulation. The above consideration is similar to the concept of self-organized criticality (SOC) (e.g., Bak, Tang & Wiesenfeld (1987), Hwa & Kardar (1992), Diamond & Hahm (1995)).
## 2. Formulation of the problem
We use the diffusion-convection equation (e.g., Drury (1983)) for describing the distribution of high energy particles (CRs). We assume that the gaseous discontinuity (also called the subshock) is located at $`x=0`$ and the shock propagates in the positive $`x`$- direction. Thus, the flow velocity in the shock frame can be represented as $`V(x)=-u(x)`$ where the (positive) flow speed $`u(x)`$ jumps from $`u_2\equiv u(0-)`$ downstream to $`u_0\equiv u(0+)>u_2`$ across the subshock and then gradually increases up to $`u_1\equiv u(+\mathrm{\infty })\ge u_0`$.
$$u\frac{\partial f}{\partial x}+\kappa (p)\frac{\partial ^2f}{\partial x^2}=\frac{1}{3}\frac{du}{dx}p\frac{\partial f}{\partial p},$$
(1)
where $`f(x,p)`$ is the isotropic (in the local fluid frame) part of the particle distribution. This is assumed to vanish far upstream ($`f\to 0`$, $`x\to \mathrm{\infty }`$), while the only bounded solution downstream is obviously $`f(x,p)=f_0(p)\equiv f(0,p)`$. The most plausible assumption about the cosmic ray diffusivity $`\kappa (p)`$ is that of the Bohm type, i.e., $`\kappa (p)=Kp^2/\sqrt{1+p^2}`$ (the particle momentum $`p`$ is normalized to $`mc`$). In other words $`\kappa `$ scales as the gyroradius, $`\kappa \propto r_\mathrm{g}(p)`$. The reference diffusivity $`K`$ depends on the $`\delta B/B`$ level of the MHD turbulence that scatters the particles in pitch angle. The minimum value for $`K`$ would be $`K\sim mc^3/eB`$ if $`\delta B\sim B`$. Note that this plain parameterization of this important quantity is perhaps the most serious incompleteness of the theory, which will be discussed later.
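The quoted momentum dependence is just the Bohm scaling written in the normalized momentum (our restatement): with $`p`$ in units of $`mc`$, the particle speed is $`v=cp/\sqrt{1+p^2}`$ and the gyroradius grows as $`r_\mathrm{g}\propto p`$, so that

$$\kappa \propto r_\mathrm{g}v\propto \frac{p^2}{\sqrt{1+p^2}},$$

interpolating between $`\kappa \propto p^2`$ for nonrelativistic and $`\kappa \propto p`$ for ultrarelativistic particles.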
To include the backreaction of accelerated particles on the plasma flow three further equations are needed. First, one simply writes the conservation of the momentum flux in the smooth part of the shock transition ($`x>0`$, the so-called CR-precursor)
$$P_\mathrm{c}+\rho u^2=\rho _1u_1^2,x>0$$
(2)
where $`P_\mathrm{c}`$ is the pressure of the high energy particles
$$P_\mathrm{c}(x)=\frac{4\pi }{3}mc^2\int _{p_0}^{p_1}\frac{p^4dp}{\sqrt{p^2+1}}f(p,x)$$
(3)
It is assumed here that there are no particles with momenta $`p>p_1`$ (they leave the shock vicinity because there are no MHD waves with sufficiently long wave length $`\lambda `$, since the cyclotron resonance requires $`p\propto \lambda `$). The momentum region $`0<p<p_0`$ cannot be described by equation (1) and the behavior of $`f(p)`$ at $`p\simeq p_0`$ is described by the injection parameters $`p_0`$ and $`f(p_0)`$ (Malkov (1997), \[M97\]). The plasma density $`\rho (x)`$ can be eliminated from equation (2) by using the continuity equation $`\rho u=\rho _1u_1`$. Finally, the subshock strength $`r_\mathrm{s}`$ can be expressed through the Mach number $`M`$ at $`x=\mathrm{\infty }`$ (e.g., Landau & Lifshitz )
$$r_\mathrm{s}\equiv \frac{u_0}{u_2}=\frac{\gamma +1}{\gamma -1+2R^{\gamma +1}M^{-2}}$$
(4)
where the precursor compression $`R\equiv u_1/u_0`$ and $`\gamma `$ is the adiabatic index of the plasma.
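Equation (4) is simply the Rankine-Hugoniot relation evaluated with the Mach number of the subshock rather than of the shock as a whole (our restatement): the gas entering the subshock has been adiabatically precompressed by the factor $`R`$, so its sound speed satisfies $`c_0^2=c_1^2R^{\gamma -1}`$ while $`u_0=u_1/R`$, giving

$$M_0^2=\frac{u_0^2}{c_0^2}=\frac{M^2}{R^{\gamma +1}},$$

and inserting $`M_0`$ into $`r_\mathrm{s}=(\gamma +1)M_0^2/[(\gamma -1)M_0^2+2]`$ reproduces (4).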
The system of equations (1,2,4) describes in a self-consistent manner the particle spectrum and the flow structure. An efficient way to solve it is to reduce this system to one integral equation (M97). A key dependent variable is an integral transform of the flow profile $`u(x)`$ with a kernel suggested by an asymptotic solution of the system (1)-(2) which has the form
$$f(x,p)=f_0(p)\mathrm{exp}\left[-\frac{q}{3\kappa }\mathrm{\Psi }\right]$$
where
$$\mathrm{\Psi }=\int _0^xu(x^{\prime })𝑑x^{\prime }$$
and the spectral index downstream $`q(p)=-d\mathrm{ln}f_0/d\mathrm{ln}p`$. The integral transform is as follows
$$U(p)=\frac{1}{u_1}\int _0^{\mathrm{\infty }}\mathrm{exp}\left[-\frac{q(p)}{3\kappa (p)}\mathrm{\Psi }\right]𝑑u(\mathrm{\Psi })$$
(5)
and it is related to $`q(p)`$ through the following formula
$$q(p)=\frac{d\mathrm{ln}U}{d\mathrm{ln}p}+\frac{3}{r_\mathrm{s}RU(p)}+3$$
(6)
Thus, once $`U(p)`$ is found both the flow profile and the particle distribution can be determined by inverting transform (5) and integrating equation (6). Now, using the linearity of equation (2) ($`\rho u=const`$), we derive the integral equation for $`U`$ by applying the transformation (5) to the $`x`$ derivative of equation (2) (M97). The result reads
$`U(t)`$ $`=`$ $`{\displaystyle \frac{r_\mathrm{s}-1}{Rr_\mathrm{s}}}+{\displaystyle \frac{\nu }{Kp_0}}{\displaystyle \int _{t_0}^{t_1}}𝑑t^{\prime }\left[{\displaystyle \frac{1}{\kappa (t^{\prime })}}+{\displaystyle \frac{q(t^{\prime })}{\kappa (t)q(t)}}\right]^{-1}`$ (7)
$`\times `$ $`{\displaystyle \frac{U(t_0)}{U(t^{\prime })}}\mathrm{exp}\left[-{\displaystyle \frac{3}{Rr_\mathrm{s}}}{\displaystyle \int _{t_0}^{t^{\prime }}}{\displaystyle \frac{dt^{\prime \prime }}{U(t^{\prime \prime })}}\right]`$
where $`t=\mathrm{ln}p`$, $`t_{0,1}=\mathrm{ln}p_{0,1}`$. Here the injection parameter
$$\nu =\frac{4\pi }{3}\frac{mc^2}{\rho _1u_1^2}p_0^4f_0(p_0)$$
(8)
is related to $`R`$ by means of the following equation
$`\nu `$ $`=`$ $`Kp_0\left(1-R^{-1}\right)`$ (9)
$`\times `$ $`\left\{{\displaystyle \int _{t_0}^{t_1}}\kappa (t)𝑑t{\displaystyle \frac{U(t_0)}{U(t)}}\mathrm{exp}\left[-{\displaystyle \frac{3}{Rr_\mathrm{s}}}{\displaystyle \int _{t_0}^{t}}{\displaystyle \frac{dt^{\prime }}{U(t^{\prime })}}\right]\right\}^{-1}`$
The equations (4,7,9) form a closed system that can be easily solved numerically. We analyze the results in the next section.
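As an illustration of how such a solution can be organized, the following sketch (ours, not the authors' code; the grid size, damping factor, and crude quadratures are arbitrary choices) iterates the reconstructed equation (7) to a fixed point on a logarithmic momentum grid:

```python
import numpy as np

def solve_U(R, r_s, nu, K=1.0, p0=1e-2, p1=1e5, n=200, iters=100):
    """Fixed-point iteration for the integral equation (7); illustrative only."""
    t = np.linspace(np.log(p0), np.log(p1), n)   # t = ln p
    dt = t[1] - t[0]
    p = np.exp(t)
    kappa = K * p**2 / np.sqrt(1.0 + p**2)       # Bohm-type diffusivity
    U = np.full(n, (r_s - 1.0) / (R * r_s))      # start from the subshock term
    for _ in range(iters):
        q = np.gradient(np.log(U), dt) + 3.0 / (r_s * R * U) + 3.0
        # inner integral exp[-(3/(R r_s)) \int_{t0}^{t'} dt''/U], cumulative in t'
        expo = np.exp(-(3.0 / (R * r_s)) * np.cumsum(1.0 / U) * dt)
        Unew = np.empty(n)
        for i in range(n):                       # outer variable t
            kernel = 1.0 / (1.0 / kappa + q / (kappa[i] * q[i]))
            Unew[i] = ((r_s - 1.0) / (R * r_s)
                       + nu / (K * p0) * np.sum(kernel * (U[0] / U) * expo) * dt)
        U = 0.5 * U + 0.5 * Unew                 # damped update for stability
    return t, U
```

Scanning such solutions in $`R`$ while updating $`r_\mathrm{s}(R)`$ through (4) traces out the $`\nu (R)`$ curve discussed below.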
## 3. Mechanisms of critical self-organization
The critical nature of this acceleration process is best seen in variables $`R,\nu `$. The quantity $`R-1`$ is a measure of shock modification produced by CRs, in fact $`(R-1)/R=P_\mathrm{c}(0)/\rho _1u_1^2`$ (eq.) and may be regarded as an order parameter. The injection rate $`\nu `$ characterizes the CR density at the shock front and can be tentatively treated as a control parameter. It is convenient to plot the function $`\nu (R)`$ instead of $`R(\nu )`$ (using equation ), since $`R(\nu )`$ is not always a single-valued function, as shown in the corresponding figure.
The injection rate $`\nu `$ at the subshock should be calculated given $`r_\mathrm{s}(R)`$ (M97) with the self-consistent determination of the flow compression $`R`$ on the basis of the $`R(\nu )`$ dependence obtained. However, in view of its critical character, this solution can be physically meaningful only in regimes far from criticality, i.e., when $`R\simeq 1`$ (test particle regime) or $`R\gg 1`$ (efficient acceleration). But it is difficult to see how this system could stably evolve remaining in one of these two regimes. Indeed, if $`\nu `$ is subcritical it will inevitably become supercritical when $`p_1`$ is sufficiently high. Once this happens, however, the strong subshock reduction (equation ) will reduce $`\nu `$ and drive the system back to the critical regime (see figure).
The maximum momentum $`p_1`$ is subject to self-regulation as well. Indeed, when $`R\gg 1`$, the generation and propagation of Alfven waves is characterized by strong inclination of the characteristics of the wave transport equation towards larger wavenumbers $`k`$ on the $`(k,x)`$ plane due to wave compression. Thus, considering particles with $`p\simeq p_1`$ inside the precursor, one sees that they are in resonance with waves that must have been excited by particles with $`p>p_1`$ further upstream; but there are no particles with $`p>p_1`$. Therefore, the required waves can be excited only locally by the same particles with $`p\simeq p_1`$, which substantially diminishes the amplitude of waves that are in resonance with particles from the interval $`p_1/R<p<p_1`$. (The left inequality arises from the resonance condition $`kcp\simeq eB/mc`$ and the frequency conservation along the characteristics $`ku(x)=const`$.) This will worsen the confinement of these particles to the shock front. The quantitative study of this process is the subject of current research. What can be inferred from the figure now is that the decrease of $`p_1`$ straightens and raises the curve $`\nu (R)`$, so that it returns to monotonic behaviour. However, once the actual injection becomes subcritical (and thus $`R\simeq 1`$), $`p_1`$ will grow again, restoring the two extrema on the curve $`\nu (R)`$.
The above dilemma is quite typical for dynamical systems that are close to criticality or marginal stability. A natural way to resolve it consists in collapsing the extrema into an inflection point so that a self-organized critical (SOC) acceleration regime is established, being determined by the conditions $`\nu ^{\prime }(R^{*})=\nu ^{\prime \prime }(R^{*})=0`$. These conditions not only determine unique critical values $`R^{*}`$ and $`\nu ^{*}\equiv \nu (R^{*})`$ but also yield the maximum momentum $`p_1`$ as a function of $`M`$, as shown in the corresponding figure. A few particle spectra that develop in the SOC states for different Mach numbers are shown there as well, along with the asymptotic $`M=\mathrm{\infty }`$ non-SOC spectrum. The latter can be calculated in a closed form (M97). Note that the hardening of the spectra in about the last decade below the cut-off is entirely due to the abrupt cut-off itself.
## 4. Discussion
The detailed microphysics behind the SOC is extremely complex and must include the self-consistent turbulence evolution and particle acceleration with their strong backreaction on the shock structure. We have simplified it and argued that the most important dynamical components, the bulk plasma flow and the high-energy particles, must be in a balance that constitutes a certain equipartition of the shock energy between the two. This was done by identifying the factors that prevent either of them from prevailing alone.
The above situation is similar to that in e.g., a simple sandpile paradigm of the SOC. It is impossible (in fact unnecessary) to describe the individual grain dynamics, but it is clear that when the critical macroscopic characteristic of the system (the slope of the sandpile) becomes too steep due to the action of external factors, like tilting of the entire system or addition of sand at the top, the sandpile relaxes bringing the slope to its critical magnitude.
## 5. Possible feedback from observations
Perhaps the most significant observational aspect is the particle spectrum. Although the conversion of detected radiation spectra into the primary particle spectra is ambiguous, in some cases it may be compared with the theory. The most striking prediction is that in shocks with very high maximum particle energy, the spectra must be harder than $`q=4`$ (or 2 in the normalization $`f(E)dE`$). This is because of a very low injection requirement for efficient acceleration in such shocks (see figure). If we (conservatively) set $`n_{\mathrm{CR}}/n_1\sim 10^{-3}`$, then $`\nu \sim 10^{-3}cp_0/mu_1^2`$ which may easily exceed $`\nu _2`$ (local maximum on the $`\nu (R)`$ curve) already for $`p_1\sim 10^4`$–$`10^5`$, putting the acceleration into a strongly nonlinear regime. This should have important observational consequences.
First, the particle energy is concentrated at the upper cut-off instead of being evenly distributed over the logarithmic energy bands as in the test particle $`E^{-2}`$ solution. This makes the upper bounds on CR generated neutrino fluxes (see e.g., Bahcall & Waxman (1999) and Mannheim et al., (1999) and references therein) rather ambiguous. Indeed, the UHECR spectrum is normalized to the observed one at $`E_{\mathrm{norm}}\simeq 10^{19}`$ eV while being obscured by the galactic background at the energies $`E\lesssim 10^{18}`$ eV. Thus, CR spectra harder than $`E^{-2}`$ imply that the upper limit on the neutrino fluxes e.g., derived by Waxman & Bahcall (1999) should be even lower than the $`E^{-2}`$ spectrum implies for the energies $`E_\nu <5\times 10^{17}`$ eV ($`E_\nu /E_{\mathrm{CR}}\simeq 0.03`$ is a typical energy relation). According to the same logic, it should be increased for higher energies. Note that the upper cut-offs in individual shocks contributing to the UHECR must still be much higher than $`E_{\mathrm{norm}}`$ to validate our simplified handling of particle losses. On the other hand if the sources with $`E_{\mathrm{max}}<E_{\mathrm{norm}}`$ contribute significantly, the measurements at $`E_{\mathrm{norm}}`$ tell us nothing about their normalizations and the upper bound on neutrino fluxes may be increased up to the level dictated by the CR observations in the lower energy range, as suggested by Mannheim et al., (1999). This scenario is supported by the figure, provided that there are many strong shocks in the ensemble. The observed steep CR power-law spectrum is then essentially a superposition of flatter or even non-power-law spectra from individual sources properly distributed in $`E_{\mathrm{max}}`$.
To summarize our conclusions, the main factor that should determine the particle primary spectra, and thus the neutrino flux, is how the accelerating shocks are distributed in cut-off momenta, which in a SOC state means in Mach numbers. There is no universal spectral form for individual shocks at the current state of the theory (except a not quite representative case of $`M\mathrm{}`$). Therefore, one should understand particle losses mechanisms, since they determine the shock structure and thus the spectra, directly and through $`E_{\mathrm{max}}`$. These mechanisms are inseparable from the dynamics of strong compressible MHD turbulence generated by those same particles. The further progress in its study will improve our understanding of the acceleration process and related radiation.
This work was supported by the U.S. DOE under Grant No. FG03-88ER53275. We also acknowledge helpful discussions with F. A. Aharonian and J. P. Rachen.
# A Superfluid Hydrodynamic Model for the Enhanced Moments of Inertia of Molecules in Liquid <sup>4</sup>He
## Abstract
We present a superfluid hydrodynamic model for the increase in moment of inertia, $`\mathrm{\Delta }I`$, of molecules rotating in liquid <sup>4</sup>He. The static inhomogeneous He density around each molecule (calculated using the Orsay–Paris liquid <sup>4</sup>He density functional) is assumed to adiabatically follow the rotation of the molecule. We find that the $`\mathrm{\Delta }I`$ values created by the inviscid and irrotational flow are in good agreement with the observed increases for several molecules \[OCS, (HCN)<sub>2</sub>, HCCCN, and HCCCH<sub>3</sub>\]. For HCN and HCCH, our model substantially overestimates $`\mathrm{\Delta }I`$. This is likely to result from a (partial) breakdown of the adiabatic following approximation.
The spectroscopy of atoms and molecules dissolved in He nanodroplets provides both a new way to study microscopic dynamics of this unique quantum fluid , and a very cold matrix (0.4 K ) to create and study novel species . Recent experiments have demonstrated that even heavy and anisotropic molecules display rotationally resolved vibrational spectra with a structure reflecting the gas phase symmetry of the molecule. However, the rotational constants required to reproduce the spectra are often substantially reduced from those of the isolated molecule. For example, the $`\nu _3`$ vibrational band of SF<sub>6</sub> dissolved in He nanodroplets (first observed by Goyal et al. and later rotationally resolved and analyzed by Hartmann et al. ) indicates that the effective moment of inertia, $`I_{\mathrm{eff}}`$, in liquid <sup>4</sup>He is 2.8 times that of the isolated molecule. The same qualitative behavior has been found for a wide range of other molecules . In an elegant recent experiment, it has been demonstrated that the rotational structure of OCS broadens and collapses in pure <sup>3</sup>He droplets, and is recovered when $`\sim 60`$ <sup>4</sup>He atoms are co-dissolved in the <sup>3</sup>He . The association of the weakly damped, unhindered rotation with the Bose symmetry of <sup>4</sup>He suggests that this phenomenon is a manifestation of superfluidity, and has been called the microscopic Andronikashvili experiment .
A theory able to reproduce the observed increase, $`\mathrm{\Delta }I`$, in molecular moments of inertia would be of interest for at least two reasons. First, the enhanced inertia provides a window into the dynamics of the liquid. Second, the ability to predict the rotational constants would further improve the utility of He nanodroplet isolation spectroscopy for the characterization of novel chemical species.
The first model proposed to explain the observed $`\mathrm{\Delta }I`$ assumed that a certain number of He atoms, trapped in the interaction potential of the solute, rotate rigidly with the latter . In the case of SF<sub>6</sub>, 8 He atoms trapped in the octahedral global potential minima would create a rigidly rotating ‘supermolecule’ that would have approximately the observed $`I_{\mathrm{eff}}`$. In the case of OCS, putting a six He atom ‘donut’ in the potential well around the molecule also reproduces the observed $`I_{\mathrm{eff}}`$ . Recent Diffusion Monte Carlo (DMC) calculations have predicted that the effective rotational constant of SF<sub>6</sub>–He<sub>N</sub> monotonically decreases from that of the isolated molecule to the large cluster limit, reached at $`N=8`$, and remains essentially constant for $`N=8`$–$`20`$. The supermolecule model has been recently extended to consider the rigid rotation of a ‘normal fluid fraction’ of the He density which is claimed to be significant only in the first solvation layer , based on Path Integral Monte Carlo calculations of Kwon et al. which show a molecule-induced reduction of the superfluid fraction. These calculations have been recently used to propose a definition of a spatially dependent normal fluid fraction which reproduces the observed $`I_{\mathrm{eff}}`$ of solvated SF<sub>6</sub> .
The limitations of the supermolecule model are made clear by the $`\mathrm{\Delta }I`$ observed for HCN in He droplets \[8, a\], which is only $`\sim 5\%`$ of the $`\mathrm{\Delta }I`$ observed upon formation of a gas phase He–HCN van der Waals complex . Furthermore, it has previously been recognized that in principle there is also a superfluid hydrodynamic contribution, $`I_\mathrm{h}`$, to $`\mathrm{\Delta }I`$ . Previous estimates, based upon a classical treatment of the rotation of an ellipsoid in a fluid of uniform density, found that $`I_\mathrm{h}`$ is only a small fraction of the observed $`\mathrm{\Delta }I`$, at least for heavy rotors such as OCS . In this report, we show that if the spatial variation of the He solvation density around the solute molecule is taken into account, the calculated $`I_\mathrm{h}`$ is instead rather large and agrees well with the experimental data. We compare our calculations with the experimental results available in the literature (OCS , HCN \[8, a\]) and with results recently obtained in our laboratory for HCCCH<sub>3</sub> and HCCCN, and in the laboratory of R. E. Miller for (HCN)<sub>2</sub> \[8, b\] and HCCH \[8, c\].
We first calculate the ground state He density, $`\rho `$, around a static solute molecule. The molecule is then considered to undergo classical rotation, slowly enough that the helium ground state density adiabatically follows the molecular rotation. The kinetic energy associated with the He flow (assumed inviscid and irrotational) is used to calculate $`I_\mathrm{h}`$.
The main input of our hydrodynamic model is the ground state density of He around the solute. DMC calculations can provide this density with a minimum of assumptions beyond the interaction potentials , but are computationally expensive. The Density Functional method, which is a good compromise between accuracy and computational cost, consists of numerically minimizing the total energy of the many-body system in the form of a semi-empirical functional of the He density: $`E=\int d\mathbf{r}\mathcal{E}[\rho (\mathbf{r})]`$. The energy density $`\mathcal{E}`$ contains an effective non-local interaction with a few parameters fixed to reproduce known properties of bulk liquid He. The functional used here is the one termed Orsay–Paris , which was shown to accurately reproduce the static properties of pure and doped He clusters . The need to treat axially symmetric molecules implies moving from one- to two-dimensional equations. The new routines have been extensively tested against previously calculated spherically symmetric systems. The minimization of the energy is carried out by mapping the density distribution on a grid of points and propagating it in imaginary time, starting from a trial distribution.
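The imaginary-time relaxation itself is simple to demonstrate. Below is a minimal 1-D sketch (a single particle in a harmonic well, with $`\mathrm{}=m=1`$), illustrating the grid-based propagation described above; the Orsay–Paris functional replaces the fixed potential by an effective density-dependent one, but the relaxation mechanism is the same:

```python
import numpy as np

# Imaginary-time propagation on a grid: d(psi)/d(tau) = -H psi drives any
# trial distribution toward the ground state; the norm is restored by hand.
x = np.linspace(-8, 8, 401)
dx = x[1] - x[0]
V = 0.5 * x**2                             # toy potential (harmonic well)
psi = np.exp(-((x - 2.0) ** 2))            # arbitrary trial distribution
dtau = 0.001

for _ in range(20000):
    lap = (np.roll(psi, 1) - 2 * psi + np.roll(psi, -1)) / dx**2
    psi += dtau * (0.5 * lap - V * psi)    # one imaginary-time step
    psi /= np.sqrt(np.sum(psi**2) * dx)    # keep the norm fixed

lap = (np.roll(psi, 1) - 2 * psi + np.roll(psi, -1)) / dx**2
E = np.sum(psi * (-0.5 * lap + V * psi)) * dx
print("ground-state energy:", E)           # converges to 0.5
```

Excited components decay as $`e^{(E_nE_0)\tau }`$, so after a few inverse level spacings only the ground-state distribution survives on the grid.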
The density functional also contains the interaction between the He and the impurity molecule. The interaction, assumed pairwise, is treated as a static external potential, since the molecules considered here are expected to have negligible zero point motion. Existing potentials for He-HCN , He-HCCH , and He-OCS have been used without modifications. The He-(HCN)<sub>2</sub> potential was generated as the superposition of the potentials due to two HCN molecules whose centers of mass are separated by 4.44 $`\mathrm{\AA }`$ (the equilibrium distance for the gas phase dimer ). The repulsive part of the He-HCCCN potential has been taken from ; the attractive part from the He-HCN and He-HCCH potentials, using the concepts of distributed interaction and transferability . The He-HCCH and He-CH<sub>4</sub> interactions were used to generate the potential between He and HCCCH<sub>3</sub>, treating the latter molecule as cylindrically symmetric. Full details on all the potentials used are available from the authors, and will be published separately .
Once the helium density profiles are calculated, the molecules are assumed to rotate about an axis perpendicular to their symmetry axis with angular velocity $`\omega `$. We assume that the He density adiabatically follows this rotation, which allows us to calculate the laboratory-frame time-dependent density at each point in the liquid. This assumption is only valid if at each point the velocity of the fluid, $`v(𝐫)`$, is less than a critical velocity, $`v_c`$. If $`v_c`$ is taken to be the velocity of sound, this is true for all our molecules at the temperature of the droplet, 0.4 K. A further justification of our assumption is the fact that no critical value of angular momentum is experimentally observed for a wide class of molecules (i.e. for a wide range of fluid velocities).
The second assumption that we make is that the He behaves entirely as a superfluid undergoing irrotational flow. The assumption that the motion is irrotational implies that $`\mathbf{v}(\mathbf{r})`$ can be written as the gradient of a scalar potential: $`\mathbf{v}=-\mathbf{\nabla }\varphi `$ (the dependence of $`\rho ,\mathbf{v},\varphi `$ on $`\mathbf{r}`$ will be implicit from now on), where $`\varphi `$ is known as the velocity potential. These assumptions lead to the following hydrodynamic equation for the velocity potential :
$$\mathbf{\nabla }\cdot (\rho \mathbf{\nabla }\varphi )=\frac{\partial \rho }{\partial t}=-(\mathbf{\nabla }\rho )\cdot (\mathbf{\omega }\times \mathbf{r}).$$
(1)
The first equality is just the continuity equation, while the second reflects the statement that the density is time-independent in the rotating frame. We select our axis system with $`z`$ along the symmetry axis of the molecule, and assume that rotation takes place around the $`x`$ axis with angular velocity $`\mathbf{\omega }=\omega \widehat{\mathbf{x}}`$. In order to better exploit the symmetry of the problem, we have used elliptical coordinates $`\xi ,\theta ,\phi `$, where $`x=f\sqrt{\xi ^2-1}\mathrm{sin}(\theta )\mathrm{cos}(\phi )`$, $`y=f\sqrt{\xi ^2-1}\mathrm{sin}(\theta )\mathrm{sin}(\phi )`$, and $`z=f\xi \mathrm{cos}(\theta )`$. The surfaces of constant $`\xi `$ are ellipses of rotation with foci at $`z=\pm f`$. Two such surfaces limit the region where Eq. (1) is solved. The inner boundary excludes the volume occupied by the impurity, and is chosen as the largest ellipse contained in the region where $`\rho <0.005\rho _0`$ ($`\rho _0=0.0218\mathrm{\AA }^{-3}`$ is the bulk liquid density). Von Neumann boundary conditions $`\widehat{\mathbf{n}}\cdot \mathbf{\nabla }\varphi =-\widehat{\mathbf{n}}\cdot (\mathbf{\omega }\times \mathbf{r})`$ ensure that the normal component of the velocity matches the normal component of the motion of the boundary . For the outer boundary, any ellipse large enough that the motion of the outside fluid is negligible can be chosen (with Dirichlet boundary conditions $`\varphi =0`$). These boundary conditions result in a unique solution to the hydrodynamic equations. Other solutions exist if we do not require the fluid to be irrotational, but it is known that these are higher in energy , and will include any solutions that have some portion (a “normal component” or a He “snowball”) of the He density rigidly rotating with the molecule.
Given the solution, $`\varphi `$, to the hydrodynamic equation, we can calculate the kinetic energy, $`K_\mathrm{h}`$, in the motion of the fluid by the following:
$`K_\mathrm{h}`$ $`=`$ $`\frac{1}{2}I_\mathrm{h}\omega ^2=\frac{1}{2}m_{\mathrm{He}}\int \rho (\mathbf{\nabla }\varphi )\cdot (\mathbf{\nabla }\varphi )dV`$ (2)
$`K_\mathrm{h}`$ $`=`$ $`\frac{1}{2}m_{\mathrm{He}}\left[-\int \varphi \left(\frac{\partial \rho }{\partial t}\right)dV+\oint \rho \varphi (\mathbf{\nabla }\varphi )\cdot d\mathbf{S}\right].`$ (3)
Eq. (2) follows directly from the definition of the kinetic energy; Eq. (3) is derived from Eq. (2) using standard vector identities and assuming that $`\varphi `$ is a solution of Eq. (1). $`d\mathbf{S}`$ is defined as positive when pointing out of the region of the fluid. $`I_\mathrm{h}`$ is the hydrodynamic contribution to the moment of inertia for rotation about the $`x`$ axis, and $`m_{\mathrm{He}}`$ is the atomic mass of helium. Both $`\partial \rho /\partial t`$ and $`\varphi `$ are proportional to $`\omega `$, thus the above definition of $`I_\mathrm{h}`$ is independent of $`\omega `$. The total kinetic energy of rotation will include the contribution from the molecule, $`K_\mathrm{m}=\frac{1}{2}I_\mathrm{m}\omega ^2`$, where $`I_\mathrm{m}`$ is the moment of inertia of the free molecule. We can also calculate the net angular momentum created by the motion of the He fluid: $`\mathbf{J}_\mathrm{h}=m_{\mathrm{He}}\int \rho \mathbf{r}\times (-\mathbf{\nabla }\varphi )dV`$. By use of standard vector identities and Eq. (1), this definition can be shown to lead to $`\mathbf{J}_\mathrm{h}=I_\mathrm{h}\mathbf{\omega }`$. The total angular momentum is the sum of that of the He fluid and that of the rotating molecule, and the total moment of inertia is the sum of the moment of inertia of the molecule and that due to the hydrodynamic motion of the superfluid. The local shape of the velocity field $`\mathbf{v}(\mathbf{r})`$ can be rather complex due to the presence of strong inhomogeneities in the density distribution.
We calculate $`I_\mathrm{h}`$ by solving the hydrodynamic equation, Eq. (1), for $`\varphi `$, assuming unit angular velocity rotation around the $`x`$ axis. It is computationally convenient to solve a slightly transformed version of Eq. (1), where the smoother function $`\mathrm{ln}\rho (\mathbf{r})`$ appears instead of $`\rho (\mathbf{r})`$:
$$\mathbf{\nabla }^2\varphi +\left(\mathbf{\nabla }\mathrm{ln}\rho \right)\cdot \left(\mathbf{\nabla }\varphi +\widehat{\mathbf{x}}\times \mathbf{r}\right)=0.$$
(4)
Eq. (4) is solved, subject to the boundary conditions, by converting it to a set of finite-difference equations on a grid of points in our elliptical coordinate system and using the Gauss-Seidel relaxation method . Both Eq. (2) and Eq. (3) are then evaluated by simple numerical quadrature, and are found to give the same value of $`I_\mathrm{h}`$ within a few percent. We also carefully tested the convergence of $`I_\mathrm{h}`$ with grid size.
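A stripped-down version of this relaxation scheme, on a 2-D Cartesian toy density rather than the elliptical grid (and with Jacobi sweeps instead of the faster Gauss-Seidel ordering, for compactness), illustrates both the solver for Eq. (4) and the quadrature of Eq. (2). All numbers here are illustrative, and no inner (molecular) boundary is excluded:

```python
import numpy as np

# Toy geometry: density bumps along the molecular (z) axis, rotation about x
# with unit angular velocity; for r in the (y,z) plane, x_hat x r = (-z, +y).
n, L = 81, 8.0
ax = np.linspace(-L / 2, L / 2, n)
h = ax[1] - ax[0]
Y, Z = np.meshgrid(ax, ax, indexing="ij")

rho = 1.0 + 4.0 * (np.exp(-(Y**2 + (Z - 2)**2)) + np.exp(-(Y**2 + (Z + 2)**2)))
dlr_y, dlr_z = np.gradient(np.log(rho), h)

phi = np.zeros((n, n))                       # Dirichlet phi = 0 on the outer box
for _ in range(4000):                        # Jacobi relaxation of Eq. (4)
    dph_y, dph_z = np.gradient(phi, h)
    g = dlr_y * (dph_y - Z) + dlr_z * (dph_z + Y)
    phi[1:-1, 1:-1] = 0.25 * (phi[2:, 1:-1] + phi[:-2, 1:-1] +
                              phi[1:-1, 2:] + phi[1:-1, :-2] +
                              h * h * g[1:-1, 1:-1])

dph_y, dph_z = np.gradient(phi, h)
I_h = np.sum(rho * (dph_y**2 + dph_z**2)) * h * h   # Eq. (2), unit omega, m = 1
I_rigid = np.sum(rho * (Y**2 + Z**2)) * h * h       # rigid-body reference
print("hydrodynamic/rigid moment of inertia: %.3f" % (I_h / I_rigid))
```

For a rotationally invariant density the source term vanishes and $`I_\mathrm{h}=0`$; the anisotropic bumps drive a nonzero, but sub-rigid, hydrodynamic inertia, which is the qualitative behavior exploited in the text.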
As an example of the density distribution and velocity field, Fig. 1 shows our results for the OCS molecule in a cluster of 300 He atoms. On the left we give the contour plot of the He density near the molecule. One clearly sees the complex structure which results from the tendency of He atoms to sit near local minima of the impurity-He potential. The highest peak, at $`(y,z)=(-3.6,-1.2)`$, corresponds to a ring of atoms perpendicular to the axis of the molecule. The integral of the density within this structure gives 6.5 atoms, and indeed 7–8 is the number of He atoms one expects to fit into such a ring by close-packing. On the right side of the same figure we plot the current density, $`\rho \mathbf{v}`$. We find that most of the kinetic energy density, $`\frac{1}{2}\rho \mathbf{v}^2`$, comes from the first solvation layer, the outer part of the cluster giving a negligible effect.
In Table I our results are compared with the existing experimental values for several molecules in He nanodroplets. There is overall good agreement between the predicted and observed enhancements of the effective moment of inertia. From a quantitative viewpoint, one notices that the predicted moments of inertia tend to overestimate the experimental values. In the case of the lightest rotors (HCN and HCCH) the large discrepancy suggests the breakdown of the assumption of adiabatic following, as recently predicted . In that paper the importance of He exchange is pointed out; it is also shown that the interplay of the rotational constant with the potential anisotropy determines the extent to which the anisotropic He solvation density can adiabatically follow the rotation of the molecule. When the rotational constant of SF<sub>6</sub> is arbitrarily increased in the calculation by a factor of 10, the He density in the molecule-fixed frame becomes much more isotropic and the solvation-induced $`\mathrm{\Delta }I`$ decreases by a factor of 20 . We have recently obtained experimental evidence that $`\mathrm{\Delta }I`$ is larger for DCN than for HCN, which we believe to be direct experimental evidence for this effect . It is interesting to note that for these light (i.e. fast spinning) rotors the maximum of $`\mathbf{v}(\mathbf{r})`$ approaches the bulk <sup>4</sup>He sound velocity.
The overestimate of the moments of inertia for the other molecules likely reflects the uncertainties in the calculated $`\rho (\mathbf{r})`$. We should remark here that while, by construction, the Orsay-Paris functional prevents $`\overline{\rho }`$ (the density averaged over an atomic volume) from becoming much larger than $`\rho _0`$, the functional was not constructed to deal with density gradients as high as those found in the first solvation layer. We observed that small changes in the form of the He density within the deep potential well of those molecules produce significant variations of the predicted moments of inertia, limiting the accuracy of the final results to 20–30%. This uncertainty does not affect the main result emerging from Table I: the hydrodynamic contribution to the moment of inertia of these systems, instead of being negligible, is rather large and can explain the observed rotational constants.
One could object that the density values found at the minima of the He-molecule interaction potential (e.g. $`\sim 11\rho _0`$ for OCS) are too high to be treated as those of a liquid, and should be interpreted as localized He atoms rigidly rotating with the molecule; it has been proposed that the He density distribution around the OCS-He<sub>6</sub> supermolecule is only weakly anisotropic and thus can rotate without generating a significant hydrodynamic contribution . We have calculated this density distribution, and found that it is still strongly anisotropic, leading to a hydrodynamic moment of inertia of over 400 $`\mathrm{u}\mathrm{\AA }^2`$. When combined with the moment of inertia of the OCS-He<sub>6</sub> supermolecule, this gives a total effective moment of inertia of over 650 $`\mathrm{u}\mathrm{\AA }^2`$, dramatically larger than the experimental value (230 $`\mathrm{u}\mathrm{\AA }^2`$).
In summary, the spatial dependence of the He density, which is caused by the molecule-He interaction, results in a hydrodynamic contribution to the moment of inertia more than an order of magnitude larger (in the case of the heavier rotors) than that predicted for the rotation of a reasonably sized ellipsoid in He of uniform bulk liquid density. Furthermore, the present calculations suggest that the effective moments of inertia of molecules in He nanodroplets (and likely also bulk He) can be quantitatively predicted by assuming irrotational flow of a spatially inhomogeneous superfluid.
We are pleased to acknowledge useful discussions and/or the sharing of unpublished information with D. Farrelly, Y. Kwon, E. Lee, R. E. Miller, K. Nauta, L. Pitaevskii, and K. B. Whaley. The work was supported by the National Science Foundation.
# Microscopic calculation of the inclusive electron scattering structure function in <sup>16</sup>O
## Abstract
We calculate the charge form factor and the longitudinal structure function for <sup>16</sup>O and compare with the available experimental data, up to a momentum transfer of 4 fm<sup>-1</sup>. The ground state correlations are generated using the coupled cluster \[$`\mathrm{exp}(𝐒)`$\] method, together with the realistic v$`18`$ $`NN`$ interaction and the Urbana IX three-nucleon interaction. Center-of-mass corrections are dealt with by adding a center-of-mass Hamiltonian to the usual internal Hamiltonian, and by means of a many-body expansion for the computation of the observables measured in the center-of-mass system.
One of the fundamental problems in nuclear physics is to develop a complete understanding of how nuclear structure arises from the underlying interaction between nucleons. This in turn should help us develop a complete understanding of the electromagnetic structure of the nucleus, as revealed by the wealth of high-quality data that electron scattering experiments have provided for the past 30 years. The interplay of nuclear correlations, meson-exchange current or charge densities, relativistic effects in nuclei, and the importance of three- or more-body interactions relative to the dominant two-body interaction in nuclei await a detailed assessment. Unfortunately, solutions of the many-body Schrödinger equation with realistic interactions have proven very difficult to obtain. Only in recent years has progress been made, and first results of microscopic calculations of ground states and low-lying excited states for nuclei with $`A\le 7`$ have been reported . These calculations have been obtained using the Green’s function Monte Carlo method, but this approach, just like the Faddeev or the Correlated Hyperspherical Harmonics methods successfully used for the $`A=3,4`$-body systems, is limited in the number of nucleons it can treat. To date, only the Variational Monte Carlo method has enjoyed success in solving the many-body problem for medium nuclei, but those results still leave room for improvement.
We are using the $`exp(𝐒)`$ coupled-cluster expansion to calculate the ground state of <sup>16</sup>O. Our approach is very similar to the standard approach, first developed by the Bochum group , and has been outlined recently in . The idea behind the coupled-cluster expansion formalism relies on the ability to expand the model nuclear wave function in the many-body Hilbert space in terms of two Abelian subalgebras of multiconfigurational creation operators and their Hermitian-adjoint destruction operators. The expansion coefficients then carry the interpretation of nuclear correlations. The fact that we make no artificial separation between “short-range” and “long-range” correlations is one particular strength of this many-body method.
The derivation of the explicit equations is quite tedious, but requires only standard techniques. For a closed-shell nuclear system, the total Hamiltonian is given as
$$𝐇=\sum _iT_i+\sum _{i<j}V_{ij}+\sum _{i<j<k}V_{ijk}^{tni}.$$
(1)
The Hamiltonian includes a nonrelativistic one-body kinetic energy, a two-nucleon potential, and a supplemental three-nucleon potential. We have chosen the Argonne $`v`$18 potential as the most realistic nucleon-nucleon interaction available today. The Argonne $`v`$18 model provides an accurate fit to both $`pp`$ and $`np`$ scattering data up to 350 MeV, with a $`\chi ^2`$/datum near one. The introduction of charge-independence breaking in the strong force is the key element in obtaining this high performance. However, the two-body part of this interaction results in over-binding and too large a saturation density in nuclear matter. Therefore, the $`NN`$ potential is supplemented by a three-nucleon interaction (part of the Urbana family ), which includes a long-range two-pion exchange and a short-range phenomenological component. The Urbana-IX potential is adjusted to reproduce the binding energy of <sup>3</sup>H and to give a reasonable saturation density in nuclear matter when used with Argonne $`v`$18 .
We are searching for the correlated ground state of the Hamiltonian $`H`$, which we denote by $`|\stackrel{~}{0}\rangle `$. The ansatz for the many-body wave function $`|\stackrel{~}{0}\rangle `$ is defined as the result of the cluster correlation operator, $`𝐒^{\dagger }`$, acting on the reference state of the many-body system, the uncorrelated ground state $`|0\rangle `$:
$`|\stackrel{~}{0}\rangle =e^{𝐒^{\dagger }}|0\rangle .`$
For a number-conserving Fermi system, the standard choice for $`|0\rangle `$ is the single-particle shell-model (Slater determinant) state formed from an antisymmetrized product of single-particle wave functions. The cluster correlation operator is defined in terms of its expansion in ph-creation operators ($`𝐎_0^{\dagger }=\mathrm{𝟏}`$, $`𝐎_1^{\dagger }=𝐚_{p_1}^{\dagger }𝐚_{h_1}`$, $`𝐎_2^{\dagger }=𝐚_{p_1}^{\dagger }𝐚_{p_2}^{\dagger }𝐚_{h_2}𝐚_{h_1}`$) as
$`𝐒^{\dagger }=\sum _{n=0}^{\infty }\frac{1}{n!}S_n𝐎_n^{\dagger }.`$
The problem of solving for the many-body wave function $`|\stackrel{~}{0}\rangle `$ and the ground-state energy, $`E`$, is now reduced to the problem of solving for the amplitudes $`S_n`$. This implies solving a set of nonlinear equations, which may be obtained using a variational principle. We construct a variation $`\delta |\stackrel{~}{0}\rangle `$ orthogonal to the correlated ground state as
$`\delta |\stackrel{~}{0}\rangle =e^{𝐒^{\dagger }}𝐎_n^{\dagger }|0\rangle ,`$
and require that the Hamiltonian between the ground state and such a variation vanishes. As a result, we obtain an equation for the ground-state energy eigenvalue $`E`$ in terms of the cluster correlation coefficients, $`\{S_n\}`$, and a set of formally exact coupled nonlinear equations for these coefficients:
$`E=\langle 0|e^{-𝐒^{\dagger }}𝐇e^{𝐒^{\dagger }}|0\rangle ,`$
$`0=\langle 0|e^{-𝐒^{\dagger }}𝐇e^{𝐒^{\dagger }}𝐎_n^{\dagger }|0\rangle .`$
Then the computation breaks down into two steps: In the first step, the G-matrix interaction is calculated inside the nucleus, including all the corrections. This results in amplitudes for the 2$`p`$2$`h`$ correlations, which are implicitly corrected for the presence of 3$`p`$3$`h`$ and 4$`p`$4$`h`$ correlations. In the second step the mean field is calculated from these correlations, and the single-particle Hamiltonian is solved to give mean-field eigenfunctions and single-particle energies. These two steps are iterated until a stable solution is obtained. Calculations are carried out entirely in configuration space, where a $`50\hbar \omega `$ space is used. The general approach, when the Hamiltonian includes only up to two-body operators, has been presented in . The results we report here have been obtained by taking into account the three-nucleon interaction via a density-dependent approach. The details of this approach will be presented elsewhere .
Once the correlated ground state $`|\stackrel{~}{0}\rangle `$ is obtained, we can calculate the expectation value of an arbitrary operator $`A`$ as
$`\overline{a}=\langle 0|e^{-𝐒^{\dagger }}Ae^{𝐒^{\dagger }}\stackrel{~}{𝐒}^{\dagger }|0\rangle ,`$
where $`\stackrel{~}{𝐒}^{\dagger }`$ is also defined by its decomposition in terms of ph-creation operators:
$`\stackrel{~}{𝐒}^{\dagger }=\sum _n\frac{1}{n!}\stackrel{~}{S}_n𝐎_n^{\dagger }.`$
The amplitudes $`\stackrel{~}{S}_n`$ are obtained in terms of the $`S_n`$ amplitudes in an iterative fashion.
Note that the correlated ground state $`|\stackrel{~}{0}`$ is not translationally invariant since it depends on the $`3A`$ coordinates of the nucleons in the laboratory frame. Therefore, in practice one has to take special care of correcting for the effects of the center-of-mass motion. This is done in several steps: First, the Hamiltonian (1) is replaced by the internal Hamiltonian
$`H_{int}=H-T_{CM},`$
which is now entirely written in the center-of-mass frame by removing the center-of-mass kinetic energy, $`T_{CM}=P_{CM}^2/(2mA)`$, with $`m`$ the nucleon mass. Both the two- and three-nucleon interactions are given in terms of the relative distances between nucleons, so in this respect no corrections are needed. Secondly, a many-body expansion has been devised in order to carry out the necessary corrections required by the calculation of observables, which are measured experimentally in the center-of-mass frame. This procedure is based on the assumption that we can neglect the correlations between the center-of-mass and relative coordinates degrees of freedom, and a factorization of the correlated ground state $`|\stackrel{~}{0}`$ into components which depend only on the center-of-mass and the relative coordinates, respectively, is possible. We also assume that, indeed, the correlated ground state $`|\stackrel{~}{0}`$ provides a good description of the internal structure of the nucleus. Finally, in order to ensure such a separation, a supplemental center-of-mass Hamiltonian is added
$`H_{CM}=\beta _{CM}\left[T_{CM}+{\displaystyle \frac{1}{2}}(mA)\mathrm{\Omega }^2R_{CM}^2\right],`$
which has the role of constraining the center-of-mass component of the ground-state wave function . We choose the values of the parameters $`\beta _{CM}`$ and $`\mathrm{\Omega }^2`$ such that they correspond to a value domain for which the binding energy
$`E=\langle H_{int}^{\prime }\rangle =\langle H_{int}+H_{CM}\rangle -\langle H_{CM}\rangle `$
is relatively insensitive to the choice of the $`\beta _{CM}`$ and $`\mathrm{\Omega }^2`$ values . When leaving out the center-of-mass Hamiltonian, the calculated binding energy of <sup>16</sup>O is equal to 7.54 MeV/nucleon, which is thought to be a reasonable value, given the uncertainties related to the three-nucleon interaction.
Figure 1 shows the theoretical result for the charge form factor in <sup>16</sup>O. In the one-body Born-approximation picture, the charge form factor is given as
$`F_L(q)=\langle \stackrel{~}{0}|\sum _kf_k(q^2)e^{i\vec{q}\cdot \vec{r}_k}|\stackrel{~}{0}\rangle ,`$
with $`f_k(q^2)`$ the nucleon form factor, which takes into account the finite size of the nucleon $`k`$ . We also take into account the model-independent part of the “Helsinki meson-exchange model” , namely the contributions from the $`\pi `$\- and $`\rho `$-exchange “seagull” diagrams, with the pion- and $`\rho `$-meson propagators replaced by the Fourier transforms of the isospin-dependent spin-spin and tensor components of the $`v`$18 $`NN`$ interaction. This substitution is required in order for the exchange current operator to satisfy the continuity equation together with the interaction model. The contributions of the $`\pi `$\- and $`\rho `$-exchange charge density give a measurable correction only for $`q>2`$ fm<sup>-1</sup>.
In order to generate the form factor depicted in Fig. 1, we have first used the procedures of , keeping the contributions that can be written in terms of the one- and two-body densities. Then we take the Fourier transform in order to produce the theoretical charge density. Using this theoretical charge density, we generate the charge form factor in a Distorted Wave Born Approximation picture , in order to take into account the distortions due to the interaction of the electron with the Coulomb field. This last step results in smoothing out the sharp diffraction minima usually seen in the calculated charge form factor . The agreement with the experiment is reasonably good over the whole range of $`q`$ spanned by the available experimental data .
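For reference, the plane-wave (first Born) step relating a spherical charge density to a form factor, $`F(q)=(4\pi /Z)\int j_0(qr)\rho _{ch}(r)r^2dr`$, looks as follows. Here a two-parameter Fermi density with illustrative parameters stands in for the computed <sup>16</sup>O density, and the distorted-wave (DWBA) correction applied in the text is not included:

```python
import numpy as np

# Born-approximation charge form factor from a spherical charge density.
Z, c, a = 8.0, 2.6, 0.45                  # illustrative 16O-like parameters (fm)
r = np.linspace(1e-4, 12.0, 2400)
dr = r[1] - r[0]
rho = 1.0 / (1.0 + np.exp((r - c) / a))   # two-parameter Fermi shape
rho *= Z / (4 * np.pi * np.sum(rho * r**2) * dr)   # normalize to Z protons

q = np.linspace(0.05, 4.0, 200)           # momentum transfer, fm^-1
j0 = lambda x: np.sin(x) / x              # spherical Bessel function
F = np.array([4 * np.pi * np.sum(j0(qi * r) * rho * r**2) * dr
              for qi in q]) / Z

for qq in (0.05, 1.0, 2.0):
    print("F(q = %.2f fm^-1) = %+.4f" % (qq, F[np.argmin(abs(q - qq))]))
```

The sharp zeros of this Born-level $`F(q)`$ are what the Coulomb distortion fills in, which is why the DWBA step smooths the diffraction minima as described above.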
A second electron scattering observable that we would like to compare with is the longitudinal structure function $`S_L(q)`$, sometimes called the Coulomb sum rule, which is sensitive to the short-range correlations induced by the repulsive core of the $`NN`$ interaction . The Coulomb sum rule, $`S_L(q)`$, represents the total integrated strength of the longitudinal response function measured in inclusive electron scattering. In the nonrelativistic limit , we have
$`S_L(q)=1+\rho _{LL}(q)-\frac{1}{Z}|\langle \stackrel{~}{0}|\rho (q)|\stackrel{~}{0}\rangle |^2,`$
where $`\rho (q)`$ is the nuclear charge operator
$`\rho (q)=\frac{1}{2}\sum _i^Ae^{i\vec{q}\cdot \vec{r}_i}(1+\tau _{z,i}),`$
and $`\rho _{LL}(q)`$ is the longitudinal-longitudinal distribution function
$`\rho _{LL}(q)=\int d\vec{r}_1d\vec{r}_2j_0(q|\vec{r}_1-\vec{r}_2|)\rho ^{(p,p)}(\vec{r}_1,\vec{r}_2).`$
Here $`\rho ^{(p,p)}(\stackrel{}{r}_1,\stackrel{}{r}_2)`$ is the proton-proton two-body density
$`\rho ^{(p,p)}(\vec{r}_1,\vec{r}_2)`$
$`=\frac{1}{4}\sum _{i\ne j}\langle \stackrel{~}{0}|\delta (\vec{r}_1-\vec{r}_i)\delta (\vec{r}_2-\vec{r}_j)(1+\tau _{z,i})(1+\tau _{z,j})|\stackrel{~}{0}\rangle ,`$
normalized as
$`\int d\vec{r}_1d\vec{r}_2\rho ^{(p,p)}(\vec{r}_1,\vec{r}_2)=Z-1.`$
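These definitions can be checked in a limit where everything is known in closed form: independent protons drawn from an isotropic Gaussian one-body density, so that $`\rho ^{(p,p)}`$ factorizes and carries no short-range correlation hole. In that case $`S_L(q)`$ must reduce to $`1-|F(q)|^2`$ with $`F(q)=\mathrm{exp}(-q^2\sigma ^2/2)`$. A Monte-Carlo sketch, with $`\sigma `$ and the sample size as illustrative assumptions:

```python
import numpy as np

# Uncorrelated-limit check of the Coulomb sum: rho^(pp) factorizes, so
# rho_LL(q) = (Z-1) <j0(q|r1-r2|)> and S_L(q) -> 1 - F(q)^2 exactly.
rng = np.random.default_rng(1)
Z, sigma, nsamp = 8, 1.8, 200_000          # sigma in fm, toy density only
r1 = rng.normal(0.0, sigma, (nsamp, 3))
r2 = rng.normal(0.0, sigma, (nsamp, 3))
d = np.linalg.norm(r1 - r2, axis=1)

for q in (0.2, 0.8, 1.6, 3.0):             # fm^-1
    rho_LL = (Z - 1) * np.mean(np.sin(q * d) / (q * d))
    F = np.exp(-0.5 * (q * sigma) ** 2)
    S_L = 1.0 + rho_LL - Z * F**2          # (1/Z)|<rho(q)>|^2 = Z F^2
    print(f"q = {q:.1f} fm^-1:  S_L = {S_L:+.3f}  (closed form {1 - F**2:+.3f})")
```

Note how $`S_L`$ vanishes at $`q\to 0`$ and saturates at unity (per proton) at large $`q`$; short-range correlations from the repulsive $`NN`$ core modify the approach to this saturation, which is the sensitivity exploited in the text.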
In light nuclei reasonable agreement between theory and experiment is obtained for the Coulomb sum rule . In heavier nuclei, however, the experimental situation is much more controversial, since a certain lack of strength has been reported, and because of the inherent difficulty of separating the longitudinal and transverse contributions in the cross section due to the distortion effects of the electron waves in the nuclear Coulomb field. Figure 2 shows the calculated Coulomb sum in <sup>16</sup>O. Since no experimental data are available for <sup>16</sup>O, we compare the results of the present calculation with the <sup>12</sup>C experimental data from , supplemented with an estimate for the contributions from large $`\omega `$. The large error bars on the experimental data are largely due to systematic uncertainties associated with the tail contribution . Preliminary theoretical results for <sup>12</sup>C are also shown and appear to follow closely the theoretical curve for <sup>16</sup>O.
This calculation represents the most detailed calculation available today, using the coupled cluster expansion, for a nuclear system with $`A>8`$. This also represents a contribution to the on-going effort of carrying out microscopic calculations that directly produce nuclear shell structure from realistic nuclear interactions. Similar calculations for other closed-shell nuclei in the $`p`$\- and $`sd`$-shell are currently under way.
This work was supported in part by the U.S. Department of Energy (DE-FG02-87ER-40371). The work of B.M. was also supported in part by the U.S. Department of Energy under contract number DE-FG05-87ER40361 (Joint Institute for Heavy Ion Research), and DE-AC05-96OR22464 with Lockheed Martin Energy Research Corp. (Oak Ridge National Laboratory). The calculations were carried out on a dual-processor 500 MHz Pentium II PC in the Nuclear Physics Group at the University of New Hampshire. Some of the preliminary calculations have also run on a 180 MHz R10000 Silicon Graphics Workstation in the Computational and Theoretical Physics Section at ORNL. The authors gratefully acknowledge useful conversations with John Dawson and David Dean.
# Gamma-ray line emission from OB associations and young open clusters
## Evolutionary Synthesis Model
Our evolutionary synthesis model is based on the multi-wavelength code described in Mas-Hesse & Kunth (1991) and Cerviño & Mas-Hesse (1994), enhanced by the inclusion of nucleosynthesis yields. In summary, the evolution of each individual star in a stellar population is followed using Geneva evolutionary tracks with enhanced mass-loss rates (Meynet et al. 1994). In our present implementation, and similar to the above references, stellar Lyman continuum luminosities are taken from Mihalas (1972) and Kurucz (1979). Note, however, that modern atmosphere models including the effects of line blanketing and stellar winds predict enhanced ionising fluxes with respect to these models, hence our predicted ionising luminosities should be considered as preliminary and are possibly somewhat too low (we are currently implementing in our code the CoStar atmosphere models of Schaerer & de Koter (1997), which consistently treat the stellar structure and atmosphere and include line blanketing and stellar winds). At the end of stellar evolution, stars initially more massive than $`M_{\mathrm{WR}}=25`$ $`M_{\odot }`$ are exploded as Type Ib supernovae, while stars of initial mass between $`8M_{\odot }`$ and $`M_{\mathrm{WR}}`$ are assumed to explode as Type II SNe. Nucleosynthesis yields have been taken from Meynet et al. (1997) for the pre-supernova evolution and from Woosley & Weaver (1995) and Woosley, Langer & Weaver (1995) for Type II and Type Ib supernova explosions, respectively. Note that Type II SN yields have only been published for stars without mass loss and Type Ib yields have only been calculated for pure helium stars. In order to obtain consistent nucleosynthesis yields for Type II supernovae we followed the suggestion of Maeder (1992) and linked the explosive nucleosynthesis models of Woosley & Weaver (1995) to the Geneva tracks via the core mass at the beginning of carbon burning. For Type Ib SNe we used the core mass at the beginning of He core burning to link evolutionary tracks to nucleosynthesis calculations.
Evolutionary synthesis models were calculated using a stochastic initial mass function, where random masses were assigned to individual stars following a Salpeter initial mass spectrum ($`d\mathrm{log}\xi /d\mathrm{log}M=-1.35`$) until the number of stars in a given mass interval reproduced the observed population. Typical results for a rich OB association (51 stars within 15–40 $`M_{\odot }`$) are shown in Fig. 1. <sup>26</sup>Al production turns on at about 1–2 Myr, when the isotope starts to be expelled by stellar winds into the interstellar medium. Stellar <sup>26</sup>Al production reaches its maximum around 3 Myr, when the most massive stars enter the Wolf-Rayet phase. Explosive nucleosynthesis sets in around 4.5 Myr for Type Ib and around 7 Myr for Type II supernovae, leading to a second peak in the <sup>26</sup>Al yield around 8 Myr. After this peak, a slightly declining <sup>26</sup>Al yield is maintained by Type II explosions until the last Type II SN explodes, around 20 Myr. Afterwards, the exponential radioactive decay quickly removes the remaining <sup>26</sup>Al nuclei from the ISM.
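The stochastic-IMF step can be written compactly with inverse-transform sampling. Only the Salpeter slope and the 51-star normalization in the 15–40 $`M_{\odot }`$ bin come from the text; the lower and upper mass limits below are illustrative assumptions:

```python
import numpy as np

# One random realization of an association: draw Salpeter masses,
# dN/dM ~ M^-2.35, until the 15-40 Msun bin holds (at least) 51 stars.
rng = np.random.default_rng(0)

def salpeter_masses(n, m_lo=2.0, m_hi=120.0, alpha=2.35):
    """Inverse-transform sampling of a power-law mass spectrum."""
    u = rng.random(n)
    a = 1.0 - alpha                       # = -1.35
    return (m_lo**a + u * (m_hi**a - m_lo**a)) ** (1.0 / a)

stars = np.empty(0)
while np.count_nonzero((stars >= 15) & (stars <= 40)) < 51:
    stars = np.append(stars, salpeter_masses(1000))

print("total stars drawn:", stars.size)
print("stars above 40 Msun (potential WR/SN progenitors):",
      np.count_nonzero(stars > 40))
```

Repeating such draws gives an ensemble of equally plausible populations, which is how the spread from the unknown number of already-exploded stars is quantified below.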
We also calculated the time-dependent ionising luminosity ($`\lambda <912`$ Å) of the population, from which we derived the equivalent O7V star <sup>26</sup>Al yield $`Y_{26}^{\mathrm{O7V}}`$. This quantity measures the amount of <sup>26</sup>Al ejected per ionising photon, normalised to the ionising power of an O7V star. The analysis of COMPTEL 1.809 MeV data suggests a galaxywide constant value of $`Y_{26}^{\mathrm{O7V}}=(1.0\pm 0.3)`$×10<sup>-4</sup> $`M_{\odot }`$ (Knödlseder 1999). Interestingly, the COMPTEL value is only reproduced for a very young population, during a quite short age interval (2.5–5 Myr). For younger populations too little <sup>26</sup>Al is produced with respect to the ionising luminosity, leading to much lower equivalent yields. For older populations the ionising luminosity drops rapidly, resulting in much higher $`Y_{26}^{\mathrm{O7V}}`$ values. Thus, the equivalent O7V star <sup>26</sup>Al yield is a quite sensitive measure of the population age. In particular, the measurement of $`Y_{26}^{\mathrm{O7V}}`$ for individual OB associations or young open clusters provides a powerful tool to identify the dominant <sup>26</sup>Al progenitors.
## Application to the Cygnus region
We applied our evolutionary synthesis model to the Cygnus region, from which prominent 1.809 MeV line emission has been detected by COMPTEL (del Rio et al. 1996). In that work the authors modelled <sup>26</sup>Al nucleosynthesis in Cygnus by estimating the contribution from individual Wolf-Rayet stars and supernova remnants that are observed in this region. This approach, however, suffers from considerable uncertainties due to the poorly known distances to these objects. In this work we performed a complete census of OB associations and young open clusters in the Cygnus region. Individual association or cluster distances have been estimated by the method of spectroscopic parallaxes; ages have been determined by isochrone fitting. Distance and age uncertainties have also been estimated and were incorporated in the analysis by means of a Bayesian method. The richness of each association or cluster was estimated by building H-R diagrams for member stars and counting the number of stars within mass intervals that are probably not affected by incompleteness or evolutionary effects. In total we included 6 OB associations and 19 young open clusters in our analysis, which house 94 O-type and 13 Wolf-Rayet stars.
For each OB association or cluster, 100 independent evolutionary synthesis models were calculated that differ by the actual stellar population that has been realised by the random sampling procedure. In this way we include the uncertainties about the unknown number of massive stars in the associations that have already disappeared in supernova explosions. From these samples the actual age and distance uncertainties are eliminated by marginalisation, leading to a probability distribution for all quantities of interest. Note that in this approach an age uncertainty is equivalent to an age spread in the cluster formation, hence the possibility of non-instantaneous star formation has been taken into account. The results for all individual associations have been combined by marginalisation to predictions for the entire Cygnus region.
The predicted equivalent O7V star <sup>26</sup>Al yield amounts to (0.3–1.2)×10<sup>-4</sup> $`M_{\odot }`$ and is compatible with the COMPTEL observation, pointing to an extremely young population as the origin of the <sup>26</sup>Al. Indeed, while ∼90% of the <sup>26</sup>Al is produced in our model by stellar nucleosynthesis (during the main sequence and the subsequent Wolf-Rayet phase), only ∼10% may be attributed to explosive nucleosynthesis, mainly in Type Ib SN events. This is also reflected in the low <sup>60</sup>Fe yields ((0–7)×10<sup>-8</sup> ph cm<sup>-2</sup> s<sup>-1</sup>) that are predicted by our model, since <sup>60</sup>Fe is assumed to be produced only in supernovae.
However, in absolute quantities, our model underestimates the free-free intensity in Cygnus by about a factor of 3, while the total <sup>26</sup>Al flux is low by a factor of 5. This points towards a possible incompleteness of our massive star census, which has been based on surveys of OB associations and young open clusters in Cygnus available in the literature. Indeed, while we identify only 95 OB stars in Cyg OB2, Reddish et al. (1966) estimated 400 OB members in this association, indicating only ∼25% completeness of our census. Taking ∼25% as a typical completeness fraction for our OB association census and assuming that the young open cluster census is complete, we obtain a free-free intensity of 0.25 mK and an <sup>26</sup>Al flux of 4.3×10<sup>-5</sup> ph cm<sup>-2</sup> s<sup>-1</sup> – values that are in fairly good agreement with the observations (0.26 mK from DMR microwave data and (7.9±2.4)×10<sup>-5</sup> ph cm<sup>-2</sup> s<sup>-1</sup> from COMPTEL 1.8 MeV observations; see Plüschke et al., these proceedings). However, we do not predict any noticeable amount of <sup>60</sup>Fe for the Cygnus region – a prediction which hopefully will soon be verified by the INTEGRAL observatory. We would like to stress that INTEGRAL has the potential to partially resolve some OB associations and young open clusters in the nearby Cygnus region, and thus may provide important new insights into massive star nucleosynthesis in this area.
# 1 Plan of the poster
OLD ISOLATED NEUTRON STARS
SERGEI B. POPOV
Sternberg Astronomical Institute, Moscow
polar@xray.sai.msu.su polar@sai.msu.ru
Abstract
In this poster I briefly review several articles on astrophysics of old isolated neutron stars,
which were published in 1994-99 by my co-authors and myself.
Acknowledgments
I want to thank all my co-authors: prof. V.M. Lipunov, dr. M.E. Prokhorov, prof. M. Colpi, dr. D. Yu. Konenkov, prof. A. Treves, dr. R. Turolla.
Simple Model Of Accretion Onto Isolated Neutron Star With Synchrotron Cooling
Popov S.B., Astron. Circ. N1556 pp. 1-2, 1994
Here I modeled accretion onto an isolated neutron star (INS) from the interstellar medium in the case of spherical symmetry, for different values of the magnetic field strength, ambient gas density, and NS mass. I tried to verify the idea that if the corotation radius, $`R_{co}`$, is less than the Alfven radius, $`R_A`$, a shell will form around the INS, $`R_A`$ will decrease to $`R_{co}`$, and a periodic X-ray source will appear (see Treves et al., 1993, A&A 269, 319).
The dependence of $`R_A`$ on $`t`$ in our model roughly coincides with the analytic formula from Treves et al. (1993): $`R_A\propto t^{-1/2}`$ (for some parameter values $`R_A`$ decreased faster).
In the figure I show the growth of the envelope density (the curves are plotted for different moments of time: higher density corresponds to later times) and the decrease of the Alfven radius with time.
Periodic sources with $`P`$ from several minutes to several months can appear.
Spin and its evolution for isolated neutron stars: the Spindown theorem
Lipunov V.M. & Popov S.B., AZh 72, N5, pp. 711-716, 1995 (astro-ph/9609185)
A possible scenario of the spin evolution of isolated neutron stars is considered.
The new points of our consideration are (all points, including the Spindown theorem, are formulated for constant field!):
–we give additional arguments for the relatively short time scale of the Ejector stage ($`10^7`$–$`10^8`$ yrs for small velocities of NSs).
–we propose a specific SPINDOWN THEOREM and give some arguments for its validity. This theorem states that the Propeller stage is always shorter than the Ejector stage (for a constant magnetic field).
–we consider the evolution of the spin period of a NS at the Accretor stage and predict that its period without field decay is $`5\times 10^2`$ sec, so that INSs can be observed as pulsating X-ray sources.
–we consider the new idea of stochastic acceleration of very old NSs due to accretion of the turbulent ISM. A specific equilibrium period can be reached.
– accreting INSs can be spun up or spun down with equal probability.
The figure illustrates the magneto-rotational evolution of an isolated neutron star on the $`P`$–$`y`$ diagram. Gravimagnetic parameter: $`y=\frac{\dot{M}}{\mu ^2}`$.
RX J0720.4–3125 as a Possible Example of the Magnetic Field Decay of Neutron Stars
Konenkov D.Yu. & Popov S.B., PAZh 23, pp. 569-575, 1997 (astro-ph/9707318)
Popov S.B. & Konenkov D.Yu., Radiofizika 41, pp. 28-35, 1998 (astro-ph/9812482)
We studied the possible evolution of the spin period and the magnetic field of the X-ray source RX J0720.4–3125, assuming this source to be an isolated neutron star accreting from the interstellar medium. The magnetic field of the source is estimated to be $`10^6`$–$`10^9`$ G, and it is difficult to explain the observed spin period of 8.38 s without invoking the hypothesis of magnetic field decay. We used the model of ohmic decay of the crustal magnetic field. Estimates of the accretion rate ($`10^{-14}`$–$`10^{-16}M_{\odot }/\mathrm{yr}`$), the velocity of the source relative to the interstellar medium (10–50 km/s), and the neutron star age ($`2\times 10^9`$–$`10^{10}`$ yrs) are obtained.
We also made a new estimate of the equilibrium period for an accreting INS in the ISM:
$$P_{eq}=960k_t^{1/3}\mu _{30}^{2/3}I_{45}^{1/3}\rho _{-24}^{-2/3}v_{\mathrm{\infty },6}^{13/3}v_{t,6}^{-2/3}M_{1.4}^{-8/3}\mathrm{sec}$$
The period $`P_{eq}`$ corresponds to the NS rms rotation rate obtained from the solution of the corresponding Fokker–Planck equation. In reality, the rotational period of an INS fluctuates around this value. We take into account the three-dimensional character of the turbulence, i.e. the fact that a vortex can be oriented not only in the equatorial plane but also at any angle to this plane. In this case, diffusion occurs in the three-dimensional space of angular velocities.
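A direct transcription of this scaling law into code, with the standard subscript normalizations assumed here ($`\mu `$ in 10<sup>30</sup> G cm<sup>3</sup>, $`I`$ in 10<sup>45</sup> g cm<sup>2</sup>, $`\rho `$ in 10<sup>-24</sup> g cm<sup>-3</sup>, velocities in units of 10 km/s, $`M`$ in 1.4 $`M_{\odot }`$, $`k_t`$ a dimensionless turbulence parameter):

```python
def p_eq(k_t=1.0, mu30=1.0, i45=1.0, rho24=1.0, v_inf6=1.0, v_t6=1.0, m14=1.0):
    """Equilibrium period (s) from the scaling law above; each argument is
    the corresponding quantity in its normalization units."""
    return (960.0 * k_t**(1/3) * mu30**(2/3) * i45**(1/3)
            * rho24**(-2/3) * v_inf6**(13/3) * v_t6**(-2/3) * m14**(-8/3))

# A slow star (v_inf = 20 km/s), standard field, typical ISM density:
print("P_eq = %.0f s" % p_eq(v_inf6=2.0))
```

The steep $`v_{\mathrm{\infty }}^{13/3}`$ dependence is the dominant factor: even a modest change in the star's space velocity shifts the equilibrium period by more than an order of magnitude.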
Spatial distribution of the accretion luminosity of isolated neutron stars and black holes in the Galaxy
Popov S.B. & Prokhorov M.E., A&A 331, pp. 535-540, 1998 (astro-ph/9705236)
Popov S.B. & Prokhorov M.E., astro-ph/9606126, 1996
We present here a computer model of the spatial distribution of the luminosity, produced by old isolated neutron stars (NS) and black holes (BH) accreting from the interstellar medium.
We solved numerically the system of differential equations of motion in the Galactic potential. The density in our model is constant in time. We assumed that the birthrate of NS and BH is proportional to the square of the local density. Stars were assumed to be born in the Galactic plane (Z=0) with circular velocities plus additional isotropic kick velocities. Kick velocities were taken to be equal for NS and BH. It is possible, however, that BH have lower kick velocities because of their higher masses.
We used masses $`M_{NS}=1.4M_{\odot }`$ for NS and $`M_{BH}=10M_{\odot }`$ for BH. The radii, $`R_{lib}`$, where the energy is liberated, were assumed to be equal to 10 km for NS and 90 km (i.e. $`3R_g`$, $`R_g=2GM/c^2`$) for BH.
For each star we computed the exact trajectory and the accretion luminosity. The accretion luminosity was calculated using Bondi's formula. Calculations used a grid with a cell size of 100 pc in the R-direction and 10 pc in the Z-direction.
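A minimal sketch of the two ingredients, orbit integration in an axisymmetric potential and the Bondi luminosity along the track, is given below. A single Miyamoto-Nagai disc stands in for the full Galactic potential model, and all parameter values (disc mass and scales, kick, ISM density, sound speed) are illustrative:

```python
import numpy as np

G = 4.30e-6                          # kpc (km/s)^2 / Msun
M_d, a_d, b_d = 1.0e11, 6.5, 0.26    # toy Miyamoto-Nagai disc (Msun, kpc)

def acc(pos):
    """Acceleration in the Miyamoto-Nagai potential (kpc, km/s units)."""
    x, y, z = pos
    s2 = x * x + y * y
    B = a_d + np.sqrt(z * z + b_d * b_d)
    D = (s2 + B * B) ** 1.5
    az = -G * M_d * z * B / (D * np.sqrt(z * z + b_d * b_d))
    return np.array([-G * M_d * x / D, -G * M_d * y / D, az])

def bondi_luminosity(v_kms, n_cm3=1.0, M_ns=1.4):
    """L = G M mdot / R_ns, mdot = 4 pi (GM)^2 rho / (v^2 + cs^2)^(3/2)."""
    GM = 6.674e-8 * M_ns * 1.989e33                   # cgs
    rho = n_cm3 * 1.67e-24
    v = np.sqrt((v_kms * 1e5) ** 2 + (10e5) ** 2)     # add cs ~ 10 km/s
    mdot = 4 * np.pi * GM**2 * rho / v**3
    return GM * mdot / 1.0e6                          # erg/s for R_ns = 10 km

pos = np.array([8.0, 0.0, 0.0])                       # kpc
vel = np.array([0.0, 200.0, 80.0])                    # rotation + kick, km/s
dt = 1e-4                                             # ~0.1 Myr per step
for _ in range(20000):                                # ~2 Gyr, leapfrog
    vel += 0.5 * dt * acc(pos)
    pos += dt * vel
    vel += 0.5 * dt * acc(pos)
print("final R, z (kpc): %.2f, %.2f" % (np.hypot(pos[0], pos[1]), pos[2]))
print("L_acc at v = 80 km/s: %.1e erg/s" % bondi_luminosity(80.0))
```

Binning such luminosities over a large sample of trajectories is what produces the spatial luminosity distribution discussed next.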
As expected, BH give higher luminosity than NS, as they are more massive. But if the total number of BH is significantly lower than the number of NS, their contribution to the luminosity can be less than the contribution of NS. The total accretion luminosity of the Galaxy for $`N_{NS}=10^9`$ and $`N_{BH}=10^8`$ is about $`10^{39}`$–$`10^{40}`$ erg/s. For a characteristic velocity of 200 km/s the maximum of the distribution is situated approximately at 5.0 kpc for NS and at 4.8 kpc for BH. For NS with a characteristic velocity of 400 km/s the maximum is located at 5.5 kpc, and for BH at 5.0 kpc. This result is also expected because of the high masses of BH.
The toroidal structure of the luminosity distribution of NS and BH is an interesting and important feature of the Galactic potential. As one can expect, for low characteristic kick velocities and for BH we have a higher luminosity.
As we made very general assumptions, we argue that such a distribution is not unique to our Galaxy: all spiral galaxies can have such a distribution of the accretion luminosity associated with accreting NS and BH.
Nature of the compact X-ray source in supernova remnant RCW103
Popov S.B., Astron. Astroph. Trans. 17, pp. 35-40, 1998 (astro-ph/9708044; astro-ph/9806354)
Here I briefly discuss the nature of the compact X-ray source in the center of the supernova remnant RCW 103. Several models, based on the accretion onto a compact object such as a neutron star or a black hole (isolated or binary), were analyzed.
I showed that it is more likely that the central X-ray source is an accreting neutron star than an accreting black hole. I also argue that models of a disrupted binary system consisting of an old accreting neutron star and a new one observed as a 69-ms X-ray and radio pulsar are most favored.
Population synthesis of old neutron stars in the Galaxy: no field decay
Popov S.B., Colpi M., Treves A., Turolla R., Lipunov V.M., Prokhorov M.E., ApJ 530 (20 Feb), 2000 (astro-ph/9910114)
Isolated neutron stars (NSs) are expected to be as many as $`10^8`$–$`10^9`$, about 1% of the total stellar content of the Galaxy. Young NSs, active as pulsars, comprise only a tiny fraction ($`10^{-3}`$–$`10^{-4}`$) of the entire population, and about 1,000 have been detected in radio surveys.
The paucity of old isolated accreting neutron stars in ROSAT observations is used to derive a lower limit on the mean velocity of neutron stars at birth. The secular evolution of the population is simulated following the paths of a statistical sample of stars for different values of the initial kick velocity, drawn from an isotropic Gaussian distribution with mean velocity $`0\le V\le 550`$ $`\mathrm{km}\mathrm{s}^{-1}`$. The spin-down, induced by dipole losses and the interaction with the ambient medium, is tracked together with the dynamical evolution in the Galactic potential, allowing for the determination of the fraction of stars which are, at present, in each of the four possible stages: Ejector, Propeller, Accretor, and Georotator. Taking from the ROSAT All Sky Survey an upper limit of $`\sim 10`$ accreting neutron stars within $`\sim 140`$ pc from the Sun, we infer a lower bound for the mean kick velocity, $`\langle V\rangle \gtrsim 200`$–$`300`$ $`\mathrm{km}\mathrm{s}^{-1}`$.
Present results, moreover, constrain the fraction of low velocity stars, which could have escaped pulsar statistics, to $`\lesssim 1\%`$.
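The velocity bookkeeping behind this bound can be illustrated directly: draw kicks from an isotropic Gaussian of a given mean speed and count the slow stars. The 50 km/s cut below is an illustrative stand-in for the full Ejector/Propeller/Accretor accounting:

```python
import numpy as np

# Maxwellian speeds from isotropic Gaussian velocity components; for 3-D
# components of width sigma, the mean speed is <v> = sigma * sqrt(8/pi).
rng = np.random.default_rng(2)

def slow_fraction(mean_speed, v_cut=50.0, n=1_000_000):
    sigma = mean_speed / np.sqrt(8.0 / np.pi)
    v = np.linalg.norm(rng.normal(0.0, sigma, (n, 3)), axis=1)
    return np.mean(v < v_cut)            # would-be accretor candidates

for mean in (100.0, 200.0, 300.0, 400.0):
    print(f"<V> = {mean:3.0f} km/s: fraction below 50 km/s = "
          f"{slow_fraction(mean):.4f}")
```

For mean kicks of a few hundred km/s the slow tail drops to a fraction of a percent, which is why a large mean kick is compatible with the small number of ROSAT accretor candidates.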
Population synthesis of old neutron stars in the Galaxy: with field decay
Popov S.B., Colpi M., Treves A., Turolla R., Lipunov V.M., Prokhorov M.E., ApJ 530 (Feb 20), 2000 (astro-ph/9910114)
The time evolution of the magnetic field in isolated NSs is still a very controversial issue, and no firm conclusion has been established as yet. A strong point is that radio pulsar observations seem to rule out fast decay with typical times less than $`\sim 10`$ Myr, but this does not exclude the possibility that $`B`$ decays over much longer timescales ($`t_d\sim 10^9`$–$`10^{10}`$ yr).
The same conclusion as on the previous page is reached for both a constant field ($`B\sim 10^{12}`$ G) and a magnetic field decaying exponentially with a timescale of $`\sim 10^9`$ yr.
We refer here only to a very simplified picture of the field decay, in which $`B(t)=B(0)\mathrm{exp}(-t/t_d)`$. Calculations have been performed for $`t_d=1.1\times 10^9`$ yr, $`t_d=2.2\times 10^9`$ yr, and $`\mu _{30}(0)=1`$. Results are shown in the figure.
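For concreteness, the two field histories used on these pages, a pure exponential and (on the next page) an exponential frozen at a bottom value $`\mu _b`$, can be tabulated as follows; the parameter values are taken from the ranges quoted in the text:

```python
import numpy as np

def mu_exponential(t_yr, mu0_30=1.0, t_d=2.2e9):
    """Pure exponential decay, mu in units of 1e30 G cm^3."""
    return mu0_30 * np.exp(-t_yr / t_d)

def mu_with_bottom(t_yr, mu0_30=1.0, t_d=1e8, mu_b_30=1e-2):
    """Decay frozen at a bottom value mu_b (here 1e28 G cm^3)."""
    return np.maximum(mu0_30 * np.exp(-t_yr / t_d), mu_b_30)

for t in (1e8, 1e9, 1e10):
    print(f"t = {t:.0e} yr:  mu/mu0 = {mu_exponential(t):.3f} (t_d = 2.2e9 yr),"
          f"  mu = {mu_with_bottom(t):.1e} x 1e30 G cm^3 (t_d = 1e8 yr)")
```

The contrast is the essential point: with a long $`t_d`$ the field, and hence the magnetospheric braking, survives for gigayears, whereas a short $`t_d`$ with a low bottom field freezes the star at a weak moment early on.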
As expected, the number of Propellers is significantly increased with respect to the non-decaying case, while Ejectors are now less abundant. Georotators are still very rare. The fraction of Accretors is approximately the same for the two values of $`t_d`$ and, at least for low mean velocities, is comparable to that of the non-decaying field, while at larger speeds it seems to be somewhat higher. This shows that the fraction of Accretors depends to some extent on how the magnetic field decays. By contrast, a fast and progressive decay of $`B`$ would lead to an overabundance of Accretors, because this situation is similar to “turning off” the magnetic field, i.e., quenching any magnetospheric effect on the infalling matter.
Summarizing, we can conclude that, although both the initial distribution and the subsequent evolution of the magnetic field strongly influences the NS census and should be accounted for, the lower bound on the average kick derived from ROSAT surveys is not very sensitive to $`B`$, at least for not too extreme values of $`t_d`$ and $`\mu (0)`$, within this model.
ROSAT X-ray sources and exponential field decay in isolated neutron stars
Popov S.B. & Prokhorov M.E., astro-ph/9908212, 1999
Many astrophysical manifestations of neutron stars (NSs) are determined by their periods and magnetic fields. Magnetic field decay in NSs is a matter of controversy. Many models of magnetic field decay have been proposed.
The influence of exponential magnetic field decay on the spin evolution of isolated neutron stars is studied. The ROSAT observations of several X-ray sources, which may be accreting old isolated neutron stars, are used to constrain the exponential decay parameters. Even if none of the present candidates turns out to be an accreting object, the possibility of constraining magnetic field decay models with future observations of isolated accreting NSs should be addressed.
We show that the range of the minimum magnetic moment, $`\mu _b`$, and the characteristic decay time, $`t_d`$, with $`10^{28}\lesssim \mu _b\lesssim 10^{29.5}\mathrm{G}\mathrm{cm}^3`$ and $`10^7\lesssim t_d\lesssim 10^8\mathrm{yrs}`$, is excluded, assuming the standard initial magnetic moment, $`\mu _0=10^{30}\mathrm{G}\mathrm{cm}^3`$. For these parameters, neutron stars would never reach the stage of accretion from the interstellar medium, even for a low space velocity of the stars and a high density of the ambient plasma. The range of excluded parameters increases for lower values of $`\mu _0`$.
In fact the limits obtained are conservative, and the true excluded region should be even larger, because we did not take into account that NSs can spend a significant amount of time (in the case with field decay) at the Propeller stage.
We conclude that the existence of several old isolated accreting NSs observed by ROSAT (if this is the correct interpretation of the observations) can put important bounds on models of magnetic field decay for isolated NSs (without the influence of accretion, which can stimulate field decay). These models should explain the observation of $`\sim 10`$ accreting isolated NSs in the solar vicinity. Here we cannot fully discuss the relation between decay parameters and X-ray observations of isolated NSs without detailed calculations. What we showed is that this connection should be taken into account; we gave some illustrations of it, and further investigations in this field would be desirable.
no-problem/9910/astro-ph9910176.html | ar5iv | text | # Einstein frame or Jordan frame ?
## 1 Introduction
Scalar–tensor theories of gravity, of which Brans–Dicke theory is the prototype, are competitors to Einstein’s theory of general relativity for the description of classical gravity. Renewed interest in scalar–tensor theories of gravity is motivated by the extended and hyperxtended inflationary scenarios of the early universe. Additional motivation arises from the presence of a fundamental Brans–Dicke–like field in most high energy physics theories unifying gravity with the other interactions (the string dilaton, the supergravity partner of spin $`1/2`$ particles, etc.).
It is well known since the original Brans–Dicke paper that two formulations of scalar–tensor theories are possible; the version in the so–called Jordan conformal frame commonly presented in the textbooks (e.g. ), and the less known version based on the Einstein conformal frame, which is related to the former one by a conformal transformation and a redefinition of the gravitational scalar field present in the theory. The possibility of two formulations related by a conformal transformation exists also for Kaluza–Klein theories and higher derivative theories of gravity (see for reviews). The problem of whether the two formulations of a scalar–tensor theory in the two conformal frames are equivalent or not has been the issue of lively debates, which are not yet settled, and often is the source of confusion in the technical literature. While many authors support the point of view that the two conformal frames are equivalent, or even that physics at the energy scale of classical gravity and classical matter is always conformally invariant, other authors support the opposite point of view, and others again are not aware of the problem (see the “classification of authors” in ). The issue is important in principle and in practice, since there are many applications of scalar–tensor theories and of conformal transformation techniques to the physics of the early universe and to astrophysics. The theoretical predictions to be compared with the observations (in cosmology, the existence of inflationary solutions of the field equations, and the spectral index of density perturbations) crucially depend on the conformal frame adopted to perform the calculations.
In addition, if the two formulations of a scalar–tensor theory are not equivalent, the problem arises of whether one of the two is physically preferred, and which one has to be compared with experiments and astronomical observations. Are both conformal versions of the same theory viable, and good candidates for the description of classical gravity ? Unfortunately many authors neglect these problems, and the issue is not discussed in the textbooks explaining scalar–tensor theories. On the other hand, it emerges from the work of several authors, in different contexts (starting with Refs. on Kaluza–Klein and Brans–Dicke theories, and summarized in ), that
1. the formulations of a scalar–tensor theory in the two conformal frames are physically inequivalent.
2. The Jordan frame formulation of a scalar–tensor theory is not viable because the energy density of the gravitational scalar field present in the theory is not bounded from below (violation of the weak energy condition ). The system therefore is unstable and decays toward a lower and lower energy state ad infinitum ).
3. The Einstein frame formulation of scalar–tensor theories is free from the problem 2). However, in the Einstein frame there is a violation of the equivalence principle due to the anomalous coupling of the scalar field to ordinary matter (this violation is small and compatible with the available tests of the equivalence principle ; it is indeed regarded as an important low energy manifestation of compactified theories , ).
It is clear that property 2) is not acceptable for a viable classical theory of gravity (a quantum system, on the contrary, may have states with negative energy density ). A classical theory must have a ground state that is stable against small perturbations.
In spite of this compelling argument, there is a tendency to ignore the problem, which results in a uninterrupted flow of papers performing computations in the Jordan frame. The use of the latter is also implicitely supported by most textbooks on gravitational theories. Perhaps this is due to reluctance in accepting a violation of the equivalence principle, on philosophical and aesthetic grounds, or perhaps it is due to the fact that the best discussions of this subject are rather mathematical than physical in character, and not well known. In this paper, we present a straightforward argument in favor of the Einstein frame, in the hope to help settling the issue.
In Sec. 2 we recall the relevant formulas. In Sec. 3 we present a simple argument based on scalar–tensor gravitational waves, and section 4 contains a discussion and the conclusions.
## 2 Conformal frames
The textbook formulation of scalar–tensor theories of gravity is the one in the Jordan conformal frame, in which the action takes the form<sup>1</sup><sup>1</sup>1The metric signature is – + + +, the Riemann tensor is given in terms of the Christoffel symbols by $`R_{\mu \nu \rho }^{}{}_{}{}^{\sigma }=\mathrm{\Gamma }_{\mu \rho ,\nu }^\sigma \mathrm{\Gamma }_{\nu \rho ,\mu }^\sigma +\mathrm{\Gamma }_{\mu \rho }^\alpha \mathrm{\Gamma }_{\alpha \nu }^\sigma \mathrm{\Gamma }_{\nu \rho }^\alpha \mathrm{\Gamma }_{\alpha \mu }^\sigma `$, the Ricci tensor is $`R_{\mu \rho }R_{\mu \nu \rho }^{}{}_{}{}^{\nu }`$, and $`R=g^{\alpha \beta }R_{\alpha \beta }`$. $`_\mu `$ is the covariant derivative operator, $`\mathrm{}g^{\mu \nu }_\mu _\nu `$, $`\eta _{\mu \nu }=`$diag$`(1,1,1,1)`$, and we use units in which the speed of light and Newton’s constant assume the value unity.
$$S=\frac{1}{16\pi }d^4x\sqrt{g}\left[f(\varphi )R\frac{\omega (\varphi )}{\varphi }g^{\alpha \beta }_\alpha \varphi _\beta \varphi +\mathrm{\Lambda }(\varphi )\right]+d^4x\sqrt{g}_{matter},$$
(2.1)
where $`_{matter}`$ is the Lagrangian density of ordinary matter, and the couplings $`f(\varphi )`$, $`\omega (\varphi )`$ are regular functions of the scalar field $`\varphi `$. Although our discussion applies to the generalized theories described by the action (2.1), for simplicity, we will restrict ourselves to Brans–Dicke theory, in which $`\omega `$ and $`\mathrm{\Lambda }`$ are constants and we will omit the non–gravitational part of the action, which is irrelevant for our purposes. We further assume that $`\mathrm{\Lambda }=0`$; the field equations then reduce to
$$R_{\mu \nu }\frac{1}{2}g_{\mu \nu }R=\frac{\omega }{\varphi ^2}\left(_\mu \varphi _\nu \varphi \frac{1}{2}g_{\mu \nu }^\alpha \varphi _\alpha \varphi \right)+\frac{1}{\varphi }\left(_\mu _\nu \varphi g_{\mu \nu }\mathrm{}\varphi \right),$$
(2.2)
$$\mathrm{}\varphi +\frac{\varphi R}{2\omega }=0.$$
(2.3)
It is well known since the original Brans–Dicke paper that another formulation of the theory is possible: the conformal transformation
$$g_{\mu \nu }\stackrel{~}{g}_{\mu \nu }=\varphi g_{\mu \nu },$$
(2.4)
and the scalar field redefinition
$$\varphi \stackrel{~}{\varphi }=\frac{\left(2\omega +3\right)^{1/2}}{\varphi }𝑑\varphi ,$$
(2.5)
(where $`\omega >3/2`$), recast the theory in the so–called Einstein conformal frame<sup>2</sup><sup>2</sup>2Also called “Pauli frame” in Refs. ), in which the gravitational part of the action becomes that of Einstein gravity plus a non self–interacting scalar field<sup>3</sup><sup>3</sup>3If the Lagrangian density $`_{matter}`$ of ordinary matter is included in the original action, it will appear multiplied by a factor $`\mathrm{exp}\left(\alpha \stackrel{~}{\varphi }\right)`$ in the action (2.6); this anomalous coupling is responsible for a violation of the equivalence principle in the Einstein frame , ).,
$$S=d^4x\sqrt{\stackrel{~}{g}}\left[\frac{\stackrel{~}{R}}{16\pi }\frac{1}{2}\stackrel{~}{g}^{\mu \nu }\stackrel{~}{}_\mu \stackrel{~}{\varphi }\stackrel{~}{}_\nu \stackrel{~}{\varphi }\right].$$
(2.6)
The field equations are the usual Einstein equations with the scalar field as a source,
$$\stackrel{~}{R}_{\mu \nu }\frac{1}{2}\stackrel{~}{g}_{\mu \nu }\stackrel{~}{R}=8\pi \left(\stackrel{~}{}_\mu \stackrel{~}{\varphi }\stackrel{~}{}_\nu \stackrel{~}{\varphi }\frac{1}{2}\stackrel{~}{g}_{\mu \nu }\stackrel{~}{}^\alpha \stackrel{~}{\varphi }\stackrel{~}{}_\alpha \stackrel{~}{\varphi }\right),$$
(2.7)
$$\stackrel{~}{\mathrm{}}\stackrel{~}{\varphi }=0.$$
(2.8)
It has been pointed out (see for reviews) that the Jordan frame formulation of Brans–Dicke theory is not viable because the sign of the kinetic term for the scalar field is not positive definite, and hence the theory does not have a stable ground state. The system decays toward lower and lower energy states without a lower bound. On the contrary, the Einstein frame version of the theory possesses the desired stability property. These features were first discovered in Kaluza–Klein and Brans–Dicke theory , and later ( and references therein) in scalar–tensor and non–linear theories of gravity with Lagrangian density of the form $`=f(\varphi ,R)`$. Despite this difficulty with the energy, the textbooks still present the Jordan frame version of the theory without mention of its Einstein frame counterpart. The technical literature is also haunted by confusion on this topic, expecially in cosmological applications . Many authors perform calculations in both conformal frames, while others support the use of the Jordan frame, or even claim that the two frames are physically equivalent. The issue of the conformal frame may appear a purely technical one, but it is indeed very important, in principle, and because the physical predictions of a classical theory of gravity, or of an inflationary cosmological scenario, are deeply affected by the choice of the conformal frame. Here, we study the violation of the weak energy condition by classical gravitational waves. It appears very hard to argue with the energy argument that leads to the choice of the Einstein frame ; moreover, the entire realm of classical<sup>4</sup><sup>4</sup>4Quantum states can violate the weak energy condition ; in this paper, we restrict to classical gravitational theories. physics is not conformally invariant. The literature on the topic is rather mathematical and abstract, and can be easily missed by the physically–minded reader. In the next section we propose a physical illustration of how the weak energy condition is violated in the Jordan frame, but not in the Einstein frame.
## 3 Gravitational waves in the Jordan and in the Einstein frame
We begin by considering gravitational waves in the Jordan frame version of Brans–Dicke theory. In a locally freely falling frame, the metric and the scalar field are decomposed as follows
$$g_{\mu \nu }=\eta _{\mu \nu }+h_{\mu \nu },$$
(3.1)
$$\varphi =\varphi _0+\phi ,$$
(3.2)
where $`\eta _{\mu \nu }`$ is the Minkowski metric, $`\varphi _0`$ is constant, and the wave–like perturbations $`h_{\mu \nu }`$, $`\phi /\varphi _0`$ have the same order of magnitude,
$$\text{O}\left(\frac{\phi }{\varphi _0}\right)=\text{O}\left(h_{\mu \nu }\right)=\text{O}\left(ϵ\right)$$
(3.3)
in terms of a smallness parameter $`ϵ`$. The linearized field equations in the Jordan frame
$$R_{\mu \nu }=\frac{_\mu _\nu \phi }{\varphi _0},$$
(3.4)
$$\mathrm{}\phi =0,$$
(3.5)
allow the expansion of $`\phi `$ in monochromatic plane waves:
$$\phi =\phi _0\mathrm{cos}\left(k_\alpha x^\alpha \right),$$
(3.6)
where $`\phi _0`$ is constant and $`\eta _{\mu \nu }k^\mu k^\nu =0`$. Now note that, for any timelike vector $`\xi ^\mu `$, the quantity $`T_{\mu \nu }\xi ^\mu \xi ^\nu `$ (which represents the energy density of the waves as seen by an observer with four–velocity $`\xi ^\mu `$) is given, to the lowest order, by
$$T_{\mu \nu }\xi ^\mu \xi ^\nu =\left(k_\mu \xi ^\mu \right)^2\frac{\phi }{\varphi _0}.$$
(3.7)
This quantity oscillates, changing sign with the frequency of $`\phi `$ and therefore violating the weak energy condition . In addition, the energy density is not quadratic in the first derivatives of the field, and this implies that the energy density of the scalar field $`\phi `$ is of order O($`ϵ`$), while the contribution of the tensor modes $`h_{\mu \nu }`$ is only of order O($`ϵ^2`$) (and is given by the Isaacson effective stress–energy tensor $`T_{\mu \nu }^{(eff)}[h_{\alpha \beta }]`$). The Jordan frame formulation of Brans–Dicke theory somehow discriminates between scalar and tensor modes. From an experimental point of view, this fact has important consequences for the amplification induced by scalar–tensor gravitational waves on the light propagating through them, and for ongoing VLBI observations . If the Jordan frame formulation of scalar–tensor theories was the physical one, astronomical observations could potentially detect the time–dependent amplification induced by gravitational waves in a light beam, which is of order $`ϵ`$ . If instead the Einstein frame formulation of scalar–tensor theories is physical (which is the case, as as we shall see in the following), then the amplification effect is of order $`ϵ^2`$, and therefore undetectable .
We now turn our attention to gravitational waves in the Einstein frame version of Brans–Dicke theory. The metric and scalar field decompositions
$$\stackrel{~}{g}_{\mu \nu }=\eta _{\mu \nu }+\stackrel{~}{h}_{\mu \nu },$$
(3.8)
$$\stackrel{~}{\varphi }=\stackrel{~}{\varphi }_0+\stackrel{~}{\phi },$$
(3.9)
where $`\stackrel{~}{\varphi }_0`$ is constant and O$`(\stackrel{~}{h}_{\mu \nu })=`$O$`(\stackrel{~}{\phi }/\stackrel{~}{\varphi }_0)=`$O$`(ϵ)`$, lead to the equations
$$\stackrel{~}{R}_{\mu \nu }\frac{1}{2}\stackrel{~}{g}_{\mu \nu }\stackrel{~}{R}=8\pi \left(\stackrel{~}{T}_{\mu \nu }[\stackrel{~}{\phi }]+\stackrel{~}{T}_{\mu \nu }^{(eff)}[\stackrel{~}{h}_{\mu \nu }]\right),$$
(3.10)
$$\stackrel{~}{\mathrm{}}\stackrel{~}{\phi }=0.$$
(3.11)
Here $`\stackrel{~}{T}_{\mu \nu }[\stackrel{~}{\phi }]=_\mu \stackrel{~}{\phi }_\nu \stackrel{~}{\phi }\eta _{\mu \nu }^\alpha \stackrel{~}{\phi }_\alpha \stackrel{~}{\phi }/2`$. Again, we consider plane monochromatic waves
$$\stackrel{~}{\phi }=\stackrel{~}{\phi }_0\mathrm{cos}\left(l_\alpha x^\alpha \right),$$
(3.12)
where $`\stackrel{~}{\phi }_0`$ is a constant and $`\eta _{\mu \nu }l^\mu l^\nu =0`$. The energy density measured by an observer with timelike four–velocity $`\xi ^\mu `$ in the Einstein frame is
$$\stackrel{~}{T}_{\mu \nu }\xi ^\mu \xi ^\nu =\left[l_\mu \xi ^\mu \stackrel{~}{\phi }_0\mathrm{sin}\left(l_\alpha x^\alpha \right)\right]^2+\stackrel{~}{T}_{\mu \nu }^{(eff)}[\stackrel{~}{h}_{\alpha \beta }]\xi ^\mu \xi ^\nu ,$$
(3.13)
which is positive definite. The contributions of the scalar and tensor modes to the total energy density have the same order of magnitude O$`(ϵ^2)`$, and are both quadratic in the first derivatives of the fields. The weak energy condition is satisfied in the Einstein, but not in the Jordan frame; physically reasonable matter in the classical domain is expected to satisfy the energy conditions .
Let us return for a moment to the Jordan frame: analogously to eq. (3.7), the energy–momentum 4–current density of scalar gravitational waves in the Jordan frame is
$$T_{0\mu }=k_\mu (k_\nu \xi ^\nu )\frac{\phi }{\varphi _0}.$$
(3.14)
In the Jordan frame, the energy density and current of spin 0 gravitational waves average to zero on time intervals much longer than the period of the waves. However, this is not a solution to the problem, since one can conceive of scalar gravitational waves with very long period. For example, gravitational waves from astronomical binary systems have periods ranging from hours to months (waves from $`\mu `$–Sco, e.g., have period $`310^5`$ s). The violation of the weak energy condition over such macroscopic time scales is unphysical.
## 4 Discussion and conclusions
The violation of the weak energy condition by scalar–tensor theories formulated in the Jordan conformal frame makes them unviable descriptions of classical gravity. Due to the fact that scalar dilatonic fields are ubiquitous in superstring and supergravity theories, there is a point in considering Brans–Dicke theory (and its scalar–tensor generalizations) as toy models for string theories (e.g. ), and in this case our considerations should be reanalyzed, because negative energy states are not forbidden at the quantum level . However, this context is quite limited, and differs from the usual classical studies of scalar–tensor theories.
The reluctance of the gravitational physics community in accepting the energy argument in favor of the Einstein frame is perhaps due to the fact that it was formulated in a rather abstract way. The example illustrated in the present paper shows, in a straightforward way, the violation of the weak energy condition by wave–like gravitational fields in Brans–Dicke theory formulated in the Jordan frame, and the viability of the Einstein frame counterpart of the same theory. The example is not academic, since a infrared catastrophe for scalar gravitational waves would have many observational consequences. One example studied in the astronomical literature consists of the amplification effect induced by scalar–tensor gravitational waves on a light beam, which differs in the Jordan and in the Einstein frame .
The argument discussed in this paper for Brans–Dicke theory can be easily generalized to other scalar–tensor theories. Our conclusions agree with, and are complementary to, those of Refs. , although our approach is different. It has also been pointed out that the Einstein frame variables $`(\stackrel{~}{g}_{\mu \nu },\stackrel{~}{\varphi })`$, but not the Jordan frame variables $`(g_{\mu \nu },\varphi )`$, are appropriate for the formulation of the Cauchy problem .
The example presented in this paper agrees with recent studies of the gravitational collapse to black holes in Brans–Dicke theory . The noncanonical form of the stress–energy tensor of the Jordan frame Brans–Dicke scalar is responsible for the violation of the null energy condition ($`R_{\alpha \beta }n^\alpha n^\beta 0`$ for all null vectors $`n^\alpha `$). This causes a decrease in time of the area of the black hole horizon , contrarily to the behaviour predicted by black hole thermodynamics in general relativity. In our example, the null energy condition is satisfied, but there are still pathologies due to the violation of the weak energy condition. Within the classical context, scalar–tensor theories must be formulated in the Einstein conformal frame, not in the Jordan one.
## Acknowledgments
This work was partially supported by EEC grants numbers PSS\* 0992 and CT1\*–CT94–0004, and by OLAM, Fondation pour la Recherche Fondamentale, Brussels. |
no-problem/9910/cond-mat9910059.html | ar5iv | text | # 1 Replica symmetric density of states below 𝑇_𝑓 showing at a small central peak emerge (and disappear at lowest 𝑇, left figure) for a chemical potential 𝜇=0.39𝐽 just below the (0RPSB-) gap energy 𝐽√2/𝜋 and a large one for 𝜇=0.8𝐽 in the right figure
PSEUDOGAPS AND CHARGE BAND IN THE PARISI SOLUTION OF INSULATING AND SUPERCONDUCTING ELECTRONIC SPIN GLASSES AT ARBITRARY FILLINGS
Reinhold Oppermann and Heiko Feldmann
Abstract. We report progress in understanding the fermionic Ising spin glass with arbitrary filling. A crossover from a magnetically disordered single band phase via two intermediate bands just below the freezing temperature to a 3-band structure at still lower temperatures - beyond an almost random field instability - is shown to emerge in the magnetic phase. An attempt is made to explain the exact solution in terms of a quantum Parisi phase. A central nonmagnetic band is found and seen to become sharply separated at $`T=0`$ by gaps from upper and lower magnetic bands. The gap sizes tend towards zero as the number of replica symmetry breaking steps increases towards infinity. In an extended model, the competition between local pairing superconductivity and spin glass order is discussed.
1. INTRODUCTION
The fermionic Ising spin glass has many interesting and important features. To mention a few:
i) It is an insulating limit of a large class of itinerant models that involve frustrated magnetic order,
ii) its classical part coincides at $`T=0`$ with the $`S=1`$ Ghatak-Sherrington model , whose nonmagnetic $`S=0`$-section differs slightly at finite temperatures,
iii) its definition on the Fock space allows quantum-dynamical fermionic correlations in addition to the static spin- and charge-correlations;
iv) the existence of low energy excitations in this sector of quantum-dynamical correlations was seen to be connected with full (infinite-step) replica permutation symmetry breaking of the Parisi type . The fascinating aspect was that infinite breaking of discrete symmetries may lead to soft modes commonly known from weak breaking of continuous symmetries;
v) its Onsager reaction field led to complications in the mean field theory comparable to those encountered in the infinite-dimensional Hubbard model ,
vi) the strong coupling of charge and spin fluctuations causes a specific influence of frustrated magnetic interactions on electronic transport including superconductivity .
2. MAGNETIC AND NONMAGNETIC BANDS IN THE INSULATING MODEL
In order to obtain conclusions about the band structure of the model we first solved analytically and numerically a set of coupled selfconsistent equations for magnetic parameters. The number of these quantities increases towards infinity for the full Parisi solution. This turned out not to be a completely unsolvable problem. At half filling we obtained the ratio between density of states at the gap edge and gap width as an invariant under the number of steps which break replica permutation symmtery (RPSB). Recognizing the gap widths to be given by the nonequilibrium susceptibility, which is known to approach zero as $`T0`$, we concluded a pseudogap as the exact solution. As for the Hubbard model, doping leads to serious complications in the fermionic spin glass. Away from half-filling the fermionic Ising spin glass must accomodate doubly occupied sites (even at $`T=0`$ and positive chemical potential).
As Fig.1 indicates, the lowest order calculation with a fermionic excitation gap $`E_{g0}=\sqrt{2/\pi }`$ keeps $`\nu =1`$ for $`\mu <1/\sqrt{2\pi }`$. The regions with and without a central nonmagnetic band are separated by the line $``$, shown in Fig.2. Since $`\mu =0.39`$ is chosen here very close to $`\frac{1}{2}E_{g0}`$ and since the line $``$ includes a small area below $`\frac{1}{2}E_{g0}`$ at finite temperatures, before it turns around to meet $`E_{g0}/2`$ at $`T=0`$, a certain kind of reentrance is observed. Decreasing the temperature one observes the growth of a small central peak, which looses however its weight in favour of the magnetic bands as $`T0`$.
Picking one representation from our numerous calculations for $`\mu `$ larger than $`E_{g0}/2`$, where the system is no more half-filled, we selected the one for $`\mu =0.8`$, a value still in the magnetic regime but not far from the discontinuous transition into the paramagnetic phase. For all $`\mu >1/\sqrt{2\pi }`$ the central peak grows until it reaches its maximum height at zero temperature.
This band becomes completely separated from the magnetic bands to its left and right, when $`T`$ reaches zero. The area under the central band is equal to $`\nu 1`$, which is the deviation from half-filling ($`\nu =_\sigma (\widehat{n}_{}+\widehat{n}_{})`$=1).
We have also calculated numerically and analytically the 1-step replica permutation symmetry broken solutions, which will be published elsewhere. In this improved approximation a similar scenario takes place, only with the gap energy depressed to the much smaller value of $`E_{g1}/2.119`$. For chemical potentials larger than this value the system is no more half filled. In the range $`E_{g1}<2\mu <E_{g0}`$ the central peak is already present, only its growth is confined to lower temperatures. This will continue for more and more RPSB-steps and, finally, the central peak will exist everywhere except at half-filling. The exact solution for the nonmagnetic charge band at $`\mathrm{}`$-RPSB maintains features of its replica symmetric Gaussian predecessor.
The special line L shown in Figure 2 encloses the 3-band low $`T`$ domain and its definition resembles an instability condition. However, offdiagonal elements $`Q^{ab}`$ exist (ie finite saddle point solutions in the infinite-range model) and stabilize the system against random field criticality. It is the Almeida Thouless eigenvalues which decide stability. The ubiquituous instability against RPSB does not allow exact conclusions on other potential instabilities. This renders the task extremely difficult and has led to delay in attempts to solve related problems. The possibility of vector replica symmetry breaking must be seriously considered.
3. COMPETITION BETWEEN LOCAL SUPERCONDUCTING AND SPIN GLASS ORDER
This type of problem was studied by several groups, who focused on models which couple a localized spin system with randomly frozen magnetic moments to another species of mobile and eventually superconducting fermions. We considered in particular the case of a single fermion species that is exposed to the abovementioned random magnetic interaction and to an attractive interaction. Following the experience with negative U Hubbard models for example, arbitrarily small transport processes are known to delocalize the preformed pairs and turn the system superconducting.
Many details of our analysis were published in . The new insight gained from the selforganized 3-band structure in the low temperature regime of the purely random magnetic system appears relevant for local pairing and superconductivity too. The central nonmagnetic band meets a magnetic band at the Fermi level. For any finite order of RPSB-steps they were separated by finite gaps, but finally these become pseudogaps. The phase diagram shown in Fig.2 reveals the increasing importance of RPSB for the competition between local pairing and spin glass order as the temperature decreases. The presence of smaller and smaller order parameters in higher RPSB order seems to allow superconductivity to regain more pieces of the phase diagram. Very unlike the competition between ferromagnetism and spin glass order for incomplete frustrated interactions, the SC-SG interface does not tend towards an infinitely steep line. Coexistence between singlet local pairing and spin glass order parameters was not found, but the vicinity of the phases in ($`\mu ,T`$) led to special pairbreaking behaviour.
One major goal for the future is of course to find as many features as possible of the quantum Parisi solution for the Ising type models. The relation between quantum dynamics and replica symmetry breaking must also be analysed in systems which are strongly quantum dynamical in spin- and charge correlations (superconducting order parameter adds this type of quantum dynamics too, but amounts only to shifts of the first order transition line between superconductor and spin glass).
Acknowledgments
This work was supported by project Op28/5-1 and by the SFB 410 at the university of Würzburg.
References
* S. K. Ghatak and D. Sherrington, J. Phys. C 10, 3149 (1977).
* R. Oppermann and B. Rosenow, Europhys. Lett. 41, 525 (1998).
* R. Oppermann and B. Rosenow, Phys. Rev. Lett. 80, 4767 (1998).
* H. Feldmann and R.Oppermann, Eur. Phys. J. B 10, 429 (1999).
* V. Dotsenko and M. Mézard, J. Phys. A 30, 3363 (1997). |
no-problem/9910/astro-ph9910220.html | ar5iv | text | # The Nature of Lyman Break Galaxies in Cosmological Hydrodynamic Simulations
## 1. Introduction
The detection of large numbers of high-redshift galaxies using the Lyman break technique has greatly furthered our understanding of early galaxy formation. A variety of arguments, from clustering to semi-analytic modeling to N-body simulations, suggest that these Lyman break galaxies (LBGs) form in highly biased, rare density peaks in the early universe. However, the nature of these galaxies remains controversial. Are they the most massive galaxy contained in these peaks, having quiescently formed stars for some time? Or are they smaller galaxies residing in large potential wells that are undergoing an short-lived merger-induced starburst? The key to answering this question is to determine the mass of the underlying galaxy. This may be done observationally or by modeling processes of galaxy formation. So far, only N-body and semi-analytic techniques have been applied, and the results vary, depending primarily on what is assumed for merger-induced starbursts. In principle, hydrodynamic simulations of galaxy formation including star formation, together with population synthesis models, can directly address these questions within a given cosmology. That is what we investigate in these proceedings.
## 2. Simulation and Analysis
We simulate a random 11.111$`h^1`$Mpc cube in a $`\mathrm{\Lambda }`$CDM universe, with $`\mathrm{\Omega }_m=0.4`$, $`\mathrm{\Omega }_\mathrm{\Lambda }=0.6`$, $`H_0=65`$, $`n=0.95`$, and $`\mathrm{\Omega }_b=0.02h^2`$. We use Parallel TreeSPH to advance $`128^3`$ gas and $`128^3`$ dark matter particles from $`z=49`$ to $`z=3`$. Our spatial resolution is $`1.7h^1`$ comoving kpc (equivalent Plummer softening), implying that at $`z=3`$ our physical resolution is $`640`$pc. Our mass resolution is $`m_{SPH}=1.3\times 10^7M_{}`$ and $`m_{dark}=1\times 10^8M_{}`$. Using a 60-particle criterion for our simulated galaxy completeness limit implies that we are resolving most galaxies with $`M_{baryonic}>8\times 10^8M_{}`$.
We include star formation and thermal feedback. At $`z=3`$, we identify galaxies using Spline Kernel Interpolative DENMAX (SKID), and compile a list of star formation events in each galaxy. Since gas is gradually converted into stars in each SPH particle, a given particle can have up to 20 star formation events. We treat each event as an instantaneous single-burst population using Bruzual & Charlot’s GISSEL98, assuming a Scalo IMF with $`Z=0.4Z_{}`$<sup>1</sup><sup>1</sup>1Using a Salpeter or Miller-Scalo IMF results in more LBGs. However, dust plays a larger role in determining galaxy properties.. We sum the spectra for all events in a galaxy to produce its rest-frame spectrum at $`z=3`$. We apply a correction for dust absorption using a galactic extinction law with $`A_V=1.0`$. We then redshift the spectra to $`z=0`$ and apply $`U_nGR`$ filter functions to obtain the observed broad-band colors for our simulated galaxy population. Note that no K-correction is necessary since we redshift the spectrum prior to applying the filters.
## 3. The Simulated Lyman Break Galaxy Population
Our simulation produces 1238 galaxies at $`z=3`$. Figure 1 shows the luminosity functions $`\mathrm{\Phi }`$ in $`R`$ (solid histogram), $`G`$ (dotted line) and $`U_n`$ (dashed line) of these galaxies. Note that $`\mathrm{\Phi }(U_n)`$ is shown without any attenuation due to HI along the line of sight. The left and right panels show $`\mathrm{\Phi }`$ without and with dust, respectively. The turnover above $`R>28`$ is likely due to resolution effects, while the lack of galaxies with $`R<24`$ is due to our small volume. Between these values, our luminosity function (with dust) is in rough agreement with the observed $`R`$-band luminosity function , shown as the solid curve down to $`R=27`$ (the current observational limit), although somewhat steeper. In reality, there is probably a range of dust extinctions, and this will tend to flatten $`\mathrm{\Phi }`$.
Figure 1: Luminosity function of high-redshift galaxies in $`U_n`$, $`G`$ and $`R`$, without (left panel) and with (right panel) dust.
The number of Lyman break galaxies expected for this cosmology and volume is $`7`$ , though this number could be higher due to source confusion . With dust included, we produce 7 galaxies with $`R<25.5`$, of which 6 satisfy the LBG color selection, in reasonable agreement with observations. Without dust, there are 38. Not surprisingly, the number density of simulated LBGs is highly sensitive to the amount of dust included, and undoubtedly to the type and distribution of dust as well.
Figure 2: $`(U_n+2)G`$ vs. $`GR`$ of simulated galaxies, without (left panel) and with (right panel) dust. Triangles have $`R<25.5`$, dots have $`R>25.5`$.
Color selection is at the heart of the Lyman break technique. In Figure 2 we show $`U_nG`$ vs. $`GR`$ plots of our simulated galaxies, with an arbitrary two magnitudes of extinction added to $`U_n`$ to crudely mimic intervening HI absorption. Triangles represent galaxies with $`R<25.5`$, and dots are the remaining galaxies. The Lyman break color selection is up and to the left of the dashed boundary. Left and right panels show without and with dust, respectively. Dust moves galaxies to higher $`GR`$, and somewhat higher $`U_nG`$. Most galaxies at $`z=3`$ fall within the color selection, but significantly more dust would move the bright galaxies outside the $`GR<1.2`$ criterion.
Figure 3: Stellar mass vs. $`R`$, without dust (left panel) and with dust (right panel). We now investigate the mass of simulated LBGs. Figure 3 shows the stellar mass vs. $`R`$-band magnitude. The horizontal line demarcates $`R=25.5`$, the magnitude limit of the observed LBG sample. While there is some scatter, the clear trend is that the brightest objects are also the most massive ones. The scatter increases to smaller masses, and is slightly larger in $`G`$ and $`U_n`$, but our simulations indicate that LBGs are the most massive galaxies at $`z=3`$.
Figure 4: Star formation rate per unit stellar mass as a function of time (age of the universe), in three different $`R`$-band magnitude ranges. Figure 4 shows the evolution of the star formation rate per solar mass of stars for three galaxy samples: $`R<25.5`$ (left panel), $`25.5<R<27.5`$ (middle panel), and $`27.5<R<29.5`$ (right panel). The brightest galaxies have been forming stars the longest, typically for over a Gyr by $`z=3`$. Thus they contain a significant older stellar population. Fainter (and smaller) galaxies have formed the bulk of their stars more recently.
## 4. Conclusions
Our simulation roughly reproduces the number density and luminosity function of LBGs for a reasonable value of dust extinction. It suggests that LBGs are the most massive objects at $`z3`$, and that they contain a significant older stellar population.
While this simulation puts forth a consistent picture for the nature of LBGs, we cannot rule out the aforementioned alternative scenario that LBGs are smaller starbursting galaxies. The reason is that starburst regions are typically a few hundred parsecs across, and therefore below our resolution. The star formation rate in our simulations is tied primarily to the local density (using a Schmidt Law) which is limited by resolution. Thus we do not effectively mimic “starbursts” as would occur in a much higher resolution merger simulation. Conversely, some semi-analytic models insert such starburst behavior explicitly, so it is not surprising that they obtain different results.
At present, it is not feasible to run simulations of sufficient resolution to resolve starbursts while still modeling a random cosmological volume. Furthermore, starbursts are likely to be governed by many other physical processes that we are only crudely modeling at present, such as feedback and ionization. Thus we cannot rule out the possibility that smaller starbursting systems also contribute to the observed Lyman break galaxy population. Nevertheless, our simulation with reasonable physical parameters is able to reproduce the basic observed properties of this population without including such objects.
### Acknowledgments.
Thanks to Kurt Adelberger for the $`U_nGR`$ filter response functions. Thanks to Stephane Charlot for GISSEL98. Thanks to Rachel Somerville and Tsafrir Kolatt for helpful comments. The simulation was run on the NCSA Origin 2000.
## References
Adelberger, K. L., Steidel, C. C., Giavalisco, M., Dickinson, M., Pettini, M., & Kellogg, M. 1998, ApJ, 505, 18
Baugh, C. M., Cole, S., Frenk, C. S., & Lacey, C. G. 1998, ApJ, 498, 504
Wechsler, R. H., Gross, M., Primack, J. R., Blumenthal, G. R., & Dekel, A. 1998, ApJ, 506, 19
Steidel, C. C., Adelberger, K. L., Dickinson, M., Giavalisco, M., Pettini, M., & Kellogg, M. 1998, ApJ, 492, 428
Lowenthal, J. D., Koo, D., Gúzman, R., Gallego, J., Phillips, A. C., Faber, S. M., Vogt, N., Illingworth, G. D., & Gronwall, C. 1997, ApJ, 481, 673
Pettini, M., Kellogg, M., Steidel, C. C., Dickinson, M., Adelberger, K. L., & Giavalisco, M. 1998, ApJ, 508, 539
Kolatt, T. S., Bullock, J. S., Somerville, R. S., Sigad, Y., Jonsson, P., Kravtsov, A. V., Klypin, A. A., Primack, J. R., Faber, S. M., Dekel, A. 1999, ApJ, 523, L109
Weinberg, D. H., Davé, R., Gardner, J. P., Hernquist, L., & Katz, N. 1999 in ”Photometric Redshifts and High Redshift Galaxies”, eds. R. Weymann, L. Storrie-Lombardi, M. Sawicki & R. Brunner, (SF: ASP Conf Series)
Katz, N., Weinberg D.H., & Hernquist, L. 1996, ApJS, 105, 19
Bruzual, G. A., & Charlot, S. 1993, ApJ, 405, 538
Cardelli, J. A., Clayton, G. C., & Mathis, J. S. 1993, ApJ, 345, 245
Steidel, C. C., Pettini, M. & Hamilton, D. 1995, AJ, 110, 2519
Steidel, C. C., Adelberger, K. L., Giavalisco, M., Dickinson, M., & Pettini, M. 1999, ApJ, 519, 1
Mihos, J. C., & Hernquist, L. 1996, ApJ, 464, 641
Cotter, G., Haynes, T., Baker, J. C., Jones, M. E., Saunders, S. 1999, astro-ph/9910059 |
no-problem/9910/quant-ph9910113.html | ar5iv | text | # High-Temperature Expansions of Bures and Fisher Information Priors
## Abstract
For certain infinite and finite-dimensional thermal systems, we obtain — incorporating quantum-theoretic considerations into Bayesian thermostatistical investigations of Lavenda — high-temperature expansions over inverse temperature $`\beta `$ induced by volume elements (“quantum Jeffreys’ priors) of Bures metrics. Similarly to Lavenda’s results based on volume elements (Jeffreys’ priors) of (classical) Fisher information metrics, we find that in the limit $`\beta 0`$ the quantum-theoretic priors either conform to Jeffreys’ rule for variables over $`[0,\mathrm{}]`$, by being proportional to $`1/\beta `$, or to the Bayes-Laplace principle of insufficient reason, by being constant. Whether a system adheres to one rule or to the other appears to depend upon its number of degrees of freedom.
In this communication — initially motivated by the investigation “Bayesian Approach to Thermostatistics” of Lavenda (cf. ) — we examine certain prior distributions $`\omega (\beta )`$ over the inverse temperature parameter $`\beta `$ that have recently been presented in the literature . These distributions are derived from “quantum Jeffreys’ priors”, that is, the volume elements $`\text{d}V_{Bures}`$ of the Bures/minimal monotone metric , for various finite and infinite-dimensional convex sets of density matrices. We find that some, but not all, of these derived priors satisfy — in the high-temperature limit, $`\beta 0`$ — Jeffreys’ choice of “$`\omega (\beta )1/\beta `$, which is invariant to transformations of the form $`\zeta =\beta ^n`$, since $`d\beta /\beta `$ and $`d\beta ^n/\beta ^n`$ are always proportional. This would not be true if the uniform distribution were used. Jeffreys cited the measurement of the charge of an electron, where some methods give $`e`$ while others $`e^2`$, and certainly $`\text{d}e`$ and $`\text{d}e^2`$ are not proportional” . (Along these lines, let us emphasize for the purposes of this study the obvious assertion that $`\beta ^1T`$, where $`T`$ is the temperature.)
Lavenda \[1, sec. 4\] analyzed three models in particular: (1) an ideal monatomic gas having the logarithm of its partition function $`\frac{3}{2}\mathrm{log}\beta `$; (2) the harmonic oscillator with frequency $`\nu `$; and (3) a Fermi oscillator with two levels. He determined that in the high-temperature limit the first two of these yielded priors $`\omega (\beta )`$ proportional to $`1/\beta `$, while the third gave a constant prior. Quite similarly to this set of findings of Lavenda, all the prior distributions that we will examine below are either proportional in the high-temperature limit to $`1/\beta `$ or to a constant. It is interesting to observe that one of the infinite-dimensional systems we study — the displaced thermal states — has the same prior distribution, based on the quantum Jeffreys’ prior, as that obtained for the Fermi oscillator by Lavenda, in his different analytical (classical) framework. Also, when we attempt to apply the procedure of Lavenda to these states, as well as to the displaced squeezed thermal states , we find different high-temperature behavior (that is, of the $`1/\beta `$ type) than when we rely upon the quantum Jeffreys’ priors. But, for the squeezed thermal states , the behavior using the two different (quantum and classical) approaches near $`\beta `$ is $`1/\beta `$ in nature.
The term “quantum Jeffrey prior” was first employed in . There, relying upon the innovative study of Twamley — the first to explicitly determine the Bures metric in an infinite-dimensional setting — a simple product (independence) form
$$\text{d}V_{Bures}=\upsilon (r)\omega (\beta )\text{d}r\text{d}\beta \text{d}\theta ,$$
(1)
was obtained for the squeezed thermal states
$$\rho (\beta ,r,\theta )=S(r,\theta )T(\beta )S^{}(r,\theta )/Z(\beta ).$$
(2)
Here, $`S(r,\theta )`$ is the one-photon squeeze operator, and
$$Z(\beta )=(2\mathrm{sinh}\frac{\beta }{4})^1$$
(3)
is a normalization factor (partition function) chosen so that $`\text{Tr}\rho =1`$. In the form (1) (which we note is independent of the unitary parameter $`\theta `$), $`\upsilon (r)=\mathrm{sinh}2r`$, and of more immediate interest to the investigation here,
$$\omega (\beta )=\frac{\mathrm{cosh}\frac{\beta }{4}\mathrm{coth}\frac{\beta }{4}\text{sech}\frac{\beta }{2}}{8}.$$
(4)
A series expansion in the vicinity of $`\beta =0`$ yields
$$\omega (\beta )=\frac{1}{2\beta }\frac{7\beta }{192}+\frac{667\beta ^3}{184320}+O[\beta ]^5$$
(5)
Near $`\beta =0`$, the first term predominates, so we discern that in the high-temperature limit the $`\beta `$-dependent part (5) of the quantum Jeffreys’ prior (1), in fact, satisfies the Jeffreys/Lavenda desideratum for a prior distribution over $`\beta [0,\mathrm{}]`$ of being proportional to $`1/\beta `$.
Subsequent to , Paraoanu and Scutaru studied the case of the displaced thermal states. They found the quantum Jeffreys’ prior to be of the simple form
$$\text{d}V_{Bures}=\frac{\text{sech}\frac{\beta }{2}\text{d}p\text{d}q\text{d}\beta }{8},$$
(6)
where the variables $`p`$ and $`q`$ denote the displacements in momentum and position. Now, it is of interest to note, that unlike (5), this volume element can be normalized over the full infinite range $`\beta [0,\mathrm{}]`$ to a proper prior probability distribution function, $`\omega (\beta )=\frac{\text{sech}\frac{\beta }{2}}{\pi }`$. (The mean of $`\beta `$ for this distribution is the product of Catalan’s constant, which is approximately 0.95966, and $`\frac{8}{\pi }`$, while the second moment of $`\beta `$ is $`\pi ^2`$.) Now, expanding around $`\beta =0`$, we have
$$\omega (\beta )=\frac{\text{sech}\frac{\beta }{2}}{\pi }=\frac{1}{\pi }\frac{\beta ^2}{8\pi }+\frac{5\beta ^4}{384\pi }+O[\beta ]^6.$$
(7)
So, near $`\beta =0`$, the prior behaves as a uniform distribution, not fulfilling the Jeffreys/Lavenda desideratum. (“This, however, is precisely the Bayes-Laplace rule, which Jeffreys considers as an unacceptable representation of the ignorance concerning the value of the parameter” .) The thermostatistical characteristics of this model for displaced thermal states is essentially fully equivalent to those found by Lavenda for a Fermi oscillator with two levels: 0 and $`ϵ_0`$ (cf. \[15, eq. (3.5.11\]). (“As we have seen, the \[Jeffreys’\] invariance property also holds for Bose particles in the high-temperature limit. However, the same is not true for Fermi particles” . We have determined that this latter behavior also holds generically — in the classical framework of Lavenda — for the $`SU_q(2)`$ fermion model, relying upon its grand partition function \[16, eq. (23)\].)
Kwek, Oh and Wang — making use of the Baker-Campbell-Hausdorff formula for quadratic operators — then, extended these studies to the displaced squeezed thermal states. They obtained the volume element \[8, eq. (15)\],
$$\text{d}V_{Bures}=(\frac{1}{2}\mathrm{cosh}^2\frac{\beta }{4}\text{sech}^{\frac{3}{2}}\frac{\beta }{2})\sqrt{4\mathrm{cosh}^2(2r)\mathrm{sin}^2(2r)}\text{d}p\text{d}q\text{d}r\text{d}\beta \upsilon (r)\omega (\beta )\text{d}p\text{d}q\text{d}r\text{d}\beta .$$
(8)
Now,
$$\omega (\beta )=\frac{1}{2}\mathrm{cosh}^2\frac{\beta }{4}\text{sech}^{\frac{3}{2}}\frac{\beta }{2}=\frac{1}{2}\frac{\beta ^2}{16}+\frac{23\beta ^4}{3072}+O[\beta ]^6.$$
(9)
So, similarly to (7) and unlike (5), this univariate prior behaves uniformly in the immediate vicinity of $`\beta =0`$. (The difference between (5) and (9), in this respect, is easily evident in Fig. 2 of .) Kwek, Oh and Wang noted that “whereas the marginal probability distribution for the undisplaced squeezed state diverges as $`\beta 0`$ or at high temperature, in the case of the displaced squeezed state, the marginal probability distribution goes to a finite value. The result is reminiscent of a similar situation in chi-square distribution curves in which the probability density function diverges at one degree of freedom, but not with higher degrees of freedom. This analogy seems to indicate that the change in the marginal probability density function in terms of inverse temperature stems from an increased degree of freedom associated with the displacement of the squeezed states” \[8, p. 6617\]. This line of argument suggests that perhaps a relation can be established between the degrees of freedom of a system (in particular, the three instances analyzed above) and whether or not the associated prior $`\omega (\beta )`$ fulfills in the limit $`\beta 0`$ the Jeffreys/Lavenda desideratum or the Bayes-Laplace rule (or conceivably neither).
The three scenarios — squeezed thermal states, displaced thermal states, and displaced squeezed thermal states — examined above all pertain to infinite-dimensional (continuous variable) quantum systems. We now turn our attention to the cases of spin-$`\frac{1}{2}`$ and spin-1 (that is, two and three-level) systems. Here, the quantum Jeffreys’ priors, that is the volume elements of the associated Bures metrics, are not typically parameterized in terms of inverse temperature parameters. So we can not immediately study the high-temperature limit but must have recourse to a somewhat more indirect, but quite standard argument. That is, we compute the one-dimensional (univariate) marginal distributions of the (multivariate) quantum Jeffreys’ priors , which we interpret as densities-of-state or structure functions, $`\mathrm{\Omega }(ϵ)`$. Then, applying Boltzmann factors and normalizing by the resulting partition functions $`Z(\beta )`$, we determine the corresponding canonical Gibbs distributions, $`\mathrm{\Omega }^{}(ϵ|\beta )=\text{exp}[\beta ϵ\mathrm{log}Z(\beta )]\mathrm{\Omega }(ϵ)`$. (We also note that Lavenda \[1, eq. (29a)\] considers, as well, the different “structure function” $`\mathrm{\Omega }(\beta )=\omega (\beta )/Z(\beta )`$, and the possibility of taking its Laplace transform to obtain a moment-generating function, $`Y(ϵ)`$.) We use the contention of Lavenda (relying upon the asymptotic equivalence between the maximum-likelihood estimate of $`\beta `$ and its average value) that the implied prior (Bayes) distribution over $`\beta `$ should be taken to be
$$\omega (\beta )\sqrt{var(ϵ)}=\sqrt{\frac{^2}{\beta ^2}\mathrm{log}Z(\beta )},$$
(10)
where $`var(ϵ)`$ is the variance of the energy — that is, $`(ϵϵ)^2`$. This is nothing other than the application to the canonical distribution of the Bayesian/Jeffreys procedure for constructing reparameterization-invariant priors. This consists of taking the prior to be proportional to the volume element of the (classical) Fisher information metric .
For spin-$`\frac{1}{2}`$ systems, relying upon the Bures/minimal monotone metric, one finds that \[21, eq. (12)\]
$$Z(\beta )=\frac{2I_1(\beta )}{\beta },$$
(11)
where $`I_n(\beta )`$ denotes the modified (hyperbolic) Bessel function of the first kind. Now, in this case,
$$\omega (\beta )=\sqrt{\frac{^2\mathrm{log}Z(\beta )}{\beta ^2}}=\frac{1}{2}\frac{\beta ^2}{32}+\frac{7\beta ^4}{3072}+O[\beta ]^6,$$
(12)
so, again, the Jeffreys/Lavenda desideratum of being proportional to $`1/\beta `$ is not satisfied, but rather the prior behaves uniformly in the vicinity of $`\beta =0`$. (“One method very common in statistical mechanics is the use of a high-temperature expansion: as $`T\mathrm{}`$ one tries to expand the partition function as a series in powers of some parameter $`\kappa (T)`$ such that $`\kappa (T)0`$ as $`T\mathrm{}`$\[22, p. 8\]. In these studies, we expand not the partition function per se, but the square root of the second derivative with respect to $`\beta `$ of its logarithm.)
Use of the maximal monotone metric (which is based on the left logarithmic derivative ), in this case, rather than the Bures/minimal one (based on the symmetric logarithmic derivative), yields \[5, eq. (24)\]
$$Z(\beta )=(\frac{\pi }{2\beta })^{\frac{1}{2}}I_{\frac{1}{2}}(\beta )=\frac{\mathrm{sinh}\beta }{\beta },$$
(13)
leading to the arguably theoretically preferable Langevin function ,
$$\frac{\mathrm{log}Z(\beta )}{\beta }=ϵ=\text{coth}\beta \frac{1}{\beta }.$$
(14)
Nevertheless, the high-temperature behavior of the implied prior $`\omega (\beta )`$ — again based on the relation (10) — remains that of a constant near the origin, that is,
$$\omega (\beta )\frac{1}{\sqrt{3}}\frac{\beta ^2}{10\sqrt{3}}+\frac{137\beta ^4}{12600\sqrt{3}}+O[\beta ]^6.$$
(15)
In , we studied certain three-level systems of the form
$$\rho =\frac{1}{2}\left(\begin{array}{ccc}v+z& 0& xiy\\ 0& 22v& 0\\ x+iy& 0& vz\end{array}\right),$$
(16)
which are one-parameter ($`v`$) extensions, in which the middle level has become accessible, of the two-level systems,
$$\rho =\frac{1}{2}\left(\begin{array}{cc}1+z& xiy\\ x+iy& 1z\end{array}\right).$$
(17)
(We note that the full convex set of spin-1 density matrices is eight-dimensional in character .) The univariate marginal probability distribution over $`v`$, obtained by integrating over the variables $`x`$, $`y`$ and $`z`$ in the normalized quadrivariate Bures volume element
$$p(v,x,y,z)=\frac{3}{4\pi ^2v(1v)^{\frac{1}{2}}(v^2x^2y^2z^2)^{\frac{1}{2}}}$$
(18)
is \[28, eq. (19)\]
$$\stackrel{~}{p}(v)=\frac{3v}{4\sqrt{1v}},0v1.$$
(19)
We interpreted (19) as a density-of-states or structure function. We, then, determined \[5, eq. (42)\] the associated partition function
$$Z(\beta )=\frac{3e^\beta ((1+2\beta )\sqrt{\pi }\text{erfi}(\sqrt{\beta })2\sqrt{\beta }e^\beta )}{8\beta ^{\frac{3}{2}}}$$
(20)
(here $`\text{ erfi}(z)`$ represents the imaginary error function, that is $`\frac{\text{erf}(iz)}{i}`$) by applying the Boltzmann factor $`e^{\beta v}=e^{\beta H}`$ to (19), where
$$H=\frac{1}{2}\left(\begin{array}{ccc}1& 0& 0\\ 0& 0& 0\\ 0& 0& 1\end{array}\right).$$
(21)
This leads — via the argument of Lavenda again, based on the relation (10) — to
$$\omega (\beta )\frac{1}{\beta }\frac{119\beta }{40}+\frac{1891\beta ^2}{140}+O[\beta ]^3,$$
(22)
thus, satisfying the Jeffreys/Lavenda desideratum. Since spin-$`\frac{1}{2}`$ particles are fermions and spin-1 particles are bosons, these results conform to Lavenda’s assertion that priors associated with bosons satisfy the Jeffreys’ rule, while fermions do not. We also note, somewhat in line with the discussion of Kwek, Oh and Wang , quoted above, that our spin-$`\frac{1}{2}`$ example has an underlying three degrees of freedom, while the spin-1 case has one more.
For the spin-$`\frac{1}{2}`$ systems (17), the trivariate (normalized) quantum Jeffreys’ prior is
$$p(x,y,z)=\frac{1}{\pi ^2(1x^2y^2z^2)^{\frac{1}{2}}}.$$
(23)
The univariate marginal probability distributions are of the form
$$\stackrel{~}{p}(z)=\frac{2(1z^2)^{\frac{1}{2}}}{\pi }$$
(24)
Interpreting (24) as a density-of-states function, and using as the Hamiltonian,
$$H=\left(\begin{array}{cc}1& 0\\ 0& 1\end{array}\right),$$
(25)
one arrives at the partition function (11).
Now, let us seek to apply the method for generating priors over $`\beta `$ of Lavenda directly to the three infinite-dimensional scenarios (squeezed thermal states, displaced thermal states and displaced squeezed thermal states) first considered above, by taking for the partition function $`Z(\beta )`$ in (10), the normalization factor that renders the trace unity, so that one obtains a (properly normalized) density matrix. For the squeezed thermal states, substituting (3) into (10), we have
$$\omega (\beta )=\frac{1}{\beta }\frac{\beta }{96}+\frac{7\beta ^2}{92160}+O[\beta ]^4,$$
(26)
thus conforming to Jeffreys’ rule — under which $`\mathrm{log}\beta `$, not $`\beta `$ itself, is distributed uniformly. For the other two types of infinite-dimensional thermal states considered above, the result must be the same as well, because $`Z(\beta )`$ takes the same form in them as (3), since the displacement and squeeze operators are unitary \[18, p. 4187\]. Contrastingly, based on the volume elements (quantum Jeffreys’ priors) of the associated Bures metrics, as we have noted in the first part of this letter, the prior (4) over $`\beta `$ for the squeezed thermal states does conform to the Jeffreys/Lavenda desideratum in the high-temperature limit, but the priors for the other two, (6) and (9), follow the Bayes-Laplace principle of insufficient reason. (One might then be puzzled by why, despite the unitarity of the squeeze and displacement operators, these three priors take different forms (cf. ).)
It would, of course, be of interest to study invariance properties of prior distributions over the inverse temperature parameter $`\beta `$ for additional physical scenarios, both in relation to quantum Jeffreys’ priors and the (Fisher information) scheme of Lavenda for obtaining such distributions, and to elucidate further any underlying governing principles.
Let us note the assertion of Frieden and his associates that many physical laws have a Fisher information-theoretic basis . In particular, Frieden, Plastino, Plastino and Soffer have “shown that the Legendre-transform structure of thermodynamics can be replicated without any changes if one replaces the entropy $`S`$ by Fisher’s information measure $`I`$. Also, Grover’s quantum search algorithm has been demonstrated to be determined by a condition for minimizing Fisher information . In influential work, Voiculescu has developed analogues of the entropy and Fisher information measure for random variables in the context of free probability theory. (Three different models of free probability theory are provided by convolution operators on free groups, creation and annihilation operators on the Fock space of Boltzmann statistics, and random matrices in the large $`N`$-limit.)
In concluding, let us observe that Braunstein and Caves derived the Bures distance between two density operators by optimizing the Fisher information distance (obtained using the Cramér-Rao bound on the variance of any estimator) over arbitrary generalized quantum measurements, not just ones described by one-dimensional orthogonal projectors. Of course, the volume elements (Jeffreys’ priors and quantum Jeffreys’ priors) of the Fisher information and Bures metrics have been the basis for the thermostatistical investigation here.
###### Acknowledgements.
I would like to express appreciation to the Institute for Theoretical Physics for computational support in this research and to K. Życzkowski for his sustained interest in my work. |
no-problem/9910/astro-ph9910348.html | ar5iv | text | # Minor-Axis Stellar Velocity Gradients in Disk and Polar-Ring Galaxies
## 1. Introduction
Kinematical decoupling between two components of a galaxy suggests the occurence of a second event. In disk galaxy it is usually observed as a counterrotation of the stellar disk with respect to the gaseous component and/or with respect to a second stellar disk. We discuss here three cases of kinematical orthogonal decoupling of the innermost region of the spheroidal component with respect to the disk or ring component in two Sa spirals and in an elliptical galaxy with a polar ring.
## 2. The Case Galaxies
### 2.1. NGC 4698
NGC 4698 is classified Sa by Sandage & Tammann (1981) and Sab(s) by de Vaucouleurs et al. (1991, RC3). Sandage & Bedke (1994, CAG) in The Carnegie Atlas of Galaxies presented NGC 4698 in the Sa section as an example of the early-to-intermediate Sa type. They describe the galaxy as characterized by a large central E-like bulge in which there is no evidence of recent star formation or spiral structure. The spiral arms are tightly wound and become prominent only in the outer parts of the disk.
As discussed by Bertola et al. (1999), this galaxy is characterized by a remarkable geometric decoupling between bulge and disk, whose apparent major axes appear oriented in an orthogonal way at a simple visual inspection of the galaxy images (e.g., Panels 78, 79 and 87 in CAG). The $`R`$-band isophotal map of the Sa galaxy NGC 4698 shows that the inner region of the bulge structure is elongated perpendicularly to the major axis of the disk (Fig. 1), while the outer structure is perpendicular to the disk if a parametric bulge-disk decomposition is adopted. The bulge tends to become rounder, but never elongated along the disk major axis, using a non-parametric decomposition.
The stellar velocity curve measured along the major axis of NGC 4698 is characterized by a central plateau; indeed the stars show zero rotation for $`|r|\lesssim 8^{\prime \prime }`$ (Fig. 2). At larger radii the observed stellar rotation increases from zero to an approximately constant value of about 200 $`\mathrm{km}\mathrm{s}^{-1}`$ for $`|r|>50^{\prime \prime }`$ up to the farthest observed radius at about $`80^{\prime \prime }`$. We measured the minor-axis stellar kinematics out to about $`20^{\prime \prime }`$ on both sides of the galaxy. In the nucleus the stellar rotation velocity increases to about 30 $`\mathrm{km}\mathrm{s}^{-1}`$ at $`|r|\simeq 2^{\prime \prime }`$. It then decreases between $`2^{\prime \prime }`$ and $`6^{\prime \prime }`$ and is characterized by an almost zero value beyond $`6^{\prime \prime }`$. The ionized-gas velocity field, which is presented here for the first time (Fig. 2), is characterized by a velocity gradient along the major axis higher than that of the stars. Along the minor axis the gas velocities closely match those of the stars. This suggests that this gas is associated with the stars giving rise to the minor-axis velocity gradient. These observations point to the presence in NGC 4698 of two gaseous and stellar components characterized by an orthogonal geometrical and kinematical decoupling.
### 2.2. NGC 4672
NGC 4672 is a highly inclined early-type disk galaxy classified Sa(s) pec sp in the RC3. It is characterized by an intricate dust pattern crossing the bulge near its center. As for NGC 4698, the bulge of NGC 4672 appears elongated in an orthogonal way with respect to the disk, as shown by its $`R`$-band isophotal map (Fig. 1). Whitmore et al. (1990) considered NGC 4672 a possible candidate for an S0 galaxy with a polar ring. However, as discussed in more detail by Sarzi et al. (1999, in this volume), there are a number of indications that NGC 4672 is a spiral galaxy.
The major-axis stellar velocity curve is characterized by a central plateau of zero rotation (Fig. 2). The minor-axis stellar velocity curve shows a steep gradient in the nucleus ($`|r|\lesssim 2^{\prime \prime }`$), rising to a maximum of about 80 $`\mathrm{km}\mathrm{s}^{-1}`$. At larger radii the velocity tends to drop to zero. The major-axis ionized-gas velocity curve (extending to about $`60^{\prime \prime }`$ from the center on both sides of the galaxy) is radially asymmetric. No significant gas rotation is detected along the minor axis for $`|r|<6^{\prime \prime }`$.
Also in this case a kinematical orthogonal decoupling between the inner stellar component and the disk is present.
### 2.3. AM 2020-504
This galaxy is considered the prototype of ellipticals with polar rings. It consists of two distinct structures: a mostly gaseous ring and a spheroidal stellar component (E4), with their major axes perpendicular to each other. It has been extensively studied by Whitmore et al. (1990) and by Arnaboldi et al. (1993a,b).
The most characteristic feature of the stellar kinematics of AM 2020-504 is the presence of a velocity gradient along the major axis of the spheroidal component, and consequently perpendicular to the ring major axis. This gradient has been observed by Whitmore et al. (1990) and Arnaboldi et al. (1993a) and confirmed by us (Fig. 2). The high-resolution spectrum by Arnaboldi et al. (1993a) shows a rise of the velocity up to about $`130`$ $`\mathrm{km}\mathrm{s}^{-1}`$ at $`r\simeq 4^{\prime \prime }`$ followed by a decline to zero velocity outside $`10^{\prime \prime }`$. This, together with the zero velocity we observed within $`|r|<3^{\prime \prime }`$ along the ring minor axis, indicates a rotation around the minor axis. Our velocity curve of the gas along the disk major axis is in agreement with that of Whitmore et al. (1990). The warped model for the gaseous component discussed by Arnaboldi et al. (1993a) predicts a velocity gradient along the minor axis of the ring, which is not shown by our data.
## 3. Discussion
In spite of their morphological differences, the three galaxies described in the previous section share two common characteristics:
* The major axis of the disk (or ring) component forms an angle of $`90^{}`$ with the major axis of the bulge (or elliptical) component. This orthogonal geometrical decoupling is quite uncommon among spiral galaxies.
* The stellar kinematics along the disk minor axis indicates the presence of a kinematically isolated core, which is rotating perpendicularly with respect to the disk (or ring) component.
At this point we ask ourselves whether these galaxies also share similar formation processes. Arnaboldi et al. (1993a; see also Sparke 1986) suggest that in AM 2020-504 the material forming the ring has been accreted into polar orbits by an oblate elliptical. The decoupled core may represent material which settled down in the symmetry plane of the oblate spheroidal galaxy at the beginning of the acquisition process, and subsequently turned into stars.
If this mechanism also produced the kinematically isolated stellar cores in the two spirals NGC 4698 and NGC 4672, then the orthogonal geometrical decoupling observed in these galaxies results from the fact that the disk moves in polar orbits around the central oblate spheroid. No velocity gradient along the disk minor axis and no geometrical decoupling are expected if the acquisition process produces a disk settled on the equatorial plane of the spheroidal component. The rarity of bulges sticking out perpendicularly from their disks suggests that equatorial acquisition is the most common case.
The above considerations lead us to face a scenario in which the disk of a spiral galaxy might have been formed as a second event by accretion around a pre-existing bare spheroid.
## References
Arnaboldi, M., Capaccioli, M., Cappellaro, E., et al. 1993a, A&A, 267, 21
Arnaboldi, M., Capaccioli, M., Barbaro, G., Buson, L.M. & Longo, G. 1993b, A&A 268, 103
Bertola, F., Corsini, E.M., Vega Beltrán, J.C., et al. 1999, ApJ, 519, L127
Sandage, A. & Bedke, J. 1994, The Carnegie Atlas of Galaxies (Washington: Carnegie Institution, Flintridge Foundation) (CAG)
Sandage, A. & Tammann, G.A. 1981, A Revised Shapley-Ames Catalog of Bright Galaxies (Washington: Carnegie Institution)
Sparke, L. 1986, MNRAS, 219, 657
Whitmore, B.C., Lucas, R.A., McElroy, D.B., et al. 1990, AJ, 100, 1489
Vortex Matter in Mesoscopic Superconducting Disks and Rings
## 1 Introduction
Modern fabrication technology has made it possible to construct superconducting samples of micro- and submicrometer dimensions which are comparable to the important length scales in the superconductor: the coherence length ($`\xi `$) and the penetration length ($`\lambda `$). In such samples classical finite-size effects are important, which leads to flux confinement. The dimensions of the sample are considered to be sufficiently large that quantum confinement of the single-electron states is negligible and the superconducting gap is not altered by the confinement. This is the regime in which the Ginzburg-Landau (GL) theory is expected to describe the superconducting state.
The size and shape of such samples strongly influence the superconducting properties. Whereas bulk superconductivity exists at low magnetic field (either the Meissner state for $`H<H_{c1}`$ in type-I and type-II superconductors or the Abrikosov vortex state for $`H_{c1}<H<H_{c2}`$ in type-II superconductors), surface superconductivity survives in infinitely large bounded samples up to the third critical field $`H_{c3}\simeq 1.695H_{c2}`$. For mesoscopic samples of different shape, whose characteristic sizes are comparable to the coherence length $`\xi `$, recent experiments have demonstrated the Little-Parks-type oscillatory behavior of the phase boundary between the normal and superconducting state in the $`H`$–$`T`$ plane, where $`T`$ and $`H`$ are the critical temperature and the applied magnetic field, respectively.
In the present paper we study superconducting disks of finite height with a hole in the center. Previous calculations mainly concentrated on circular infinitely long superconducting wires and hollow cylinders, in which case there are no demagnetization effects. Superconducting disks and rings of finite thickness have been studied much less extensively theoretically, although they may often correspond more closely to experimentally fabricated systems.
The giant vortex state in cylinders was studied in Ref. and in Ref. for infinitely thin disks and rings. Recently, Venegas and Sardella used the London theory to study the vortex configurations in a cylinder for up to 18 vortices. They found that those vortices form a ring structure very similar to that of classical confined charged particles. These results are analogous to those of Refs. where the image method was used to determine the vortex configuration. The transition from the giant vortex to the multi-vortex state in a thin superconducting disk was studied in Ref. . Square-shaped loops were investigated by Fomin et al. in order to explain the Little-Parks oscillations found by Moshchalkov et al. and to study the effect of leads. In this case the magnetic field was taken uniform along the z-direction and demagnetization effects were disregarded. Loops of nonzero width were studied by Bruyndoncx et al. within the linearized GL theory. The magnetic field was taken uniform and equal to the applied field, which is valid for very thin rings. They studied the $`T_c(H)`$ phase boundary and in particular the 2D-3D dimensional crossover regime. A superconducting film with a circular hole was studied by Bezryadin et al. This is the limiting case of the current ring structure for wide rings. Or, conversely, it is the antidot version of the disk system studied in Refs. .
Most previous calculations are restricted to the London limit (i.e. extreme type-II systems) or do not consider the demagnetization effect. Our approach is not limited to such situations and we calculate the superconducting wavefunction and the magnetic field fully self-consistently for a finite 3D system, i.e. for a disk and ring with finite width and thickness. The details of our numerical approach can be found in Ref. and therefore will not be repeated. Here we investigate the flux expulsion in the Meissner state, the giant $`\leftrightarrow `$ multi-vortex transition in fat rings, and the absence of flux quantization in rings of finite height and nonzero width. In contrast to most earlier work (see e.g. ) we are interested in the superconducting state deep inside the $`T_c(H)`$ phase diagram where the magnetic field in and around the superconductor is no longer homogeneous. The transition from disk to thin loop structures is also investigated.
## 2 Meissner state
Meissner and Ochsenfeld (1933) found that if a superconductor is cooled in a magnetic field to below the transition temperature, then at the transition the lines of induction are pushed out. The Meissner effect shows that a bulk superconductor behaves as if inside the specimen the magnetic field is zero, and consequently it is an ideal diamagnet. The magnetization is $`M=-H_{applied}/4\pi `$. This result is shown by the dashed curve in Fig. 1 which refers to the right axis.
Recently, Geim et al. investigated superconducting disks of different sizes. Fig. 1 shows the experimental results (symbols) for an Al disk of radius $`R\simeq 0.5\mu m`$ and thickness $`d\simeq 0.15\mu m`$. Our numerical results, which include a full self-consistent solution of the two non-linear GL equations, are given by the solid curve, which refers to the right axis (we took $`\lambda (0)=0.07\mu m`$, $`\xi (0)=0.25\mu m`$ and $`\kappa =0.28`$). There is very good qualitative agreement, but quantitatively the theoretical results differ by a factor of $`50.5`$. The latter can be understood as due to a detector effect. Experimentally the magnetization is measured using a Hall bar. Consequently, the magnetic field is averaged over the Hall cross, and it is the magnetization resulting from the field expelled from this Hall cross which is plotted, while theoretically we plotted the magnetization resulting from the field expelled from the disk. This is illustrated in Fig. 2 where we show a contour plot of the magnetic field profile in the plane of the middle of the disk and a cut through it (see inset of Fig. 2). The results are for a disk of radius $`R=0.8\mu m`$ for a magnetic field such that $`L=5`$, which implies that a giant vortex with five flux quanta is centered in the middle of the disk, leading to an enhanced magnetic field near the center of the disk and expulsion of the field in a ring-like region near the edge of the sample. The demagnetization effect is clearly visible, which leads to a strong enhancement of the field near the outer boundary of the disk. The theoretical result in Fig. 1 corresponds to an averaging of the magnetic field over the disk region, while the Hall detector averages the field over a much larger area, which brings the average field much closer to the applied field. The inset of Fig. 2 clearly shows regions with $`H<H_{applied}`$ which correspond to diamagnetic response and regions with $`H>H_{applied}`$ which correspond to paramagnetic response. The averaging over the detector area (if the width of the detector $`W>R`$) adds additional averaging over paramagnetic regions to the detector output.
Using a Hall bar of width $`W=2.5\mu m`$ separated by a distance $`h=0.15\mu m`$ from the superconducting disk results in an expelled field which is a factor $`50.44`$ smaller than the expelled field of the disk, and brings our theoretical results in Fig. 1 into quantitative agreement with the experimental results. This averaging over the Hall cross scales the results but does not change the shape of the curve as long as $`L`$ is kept fixed. For different $`L`$ this scale factor is slightly different because it leads to different magnetic field distributions.
Notice also that the magnetization as a function of the magnetic field is linear only over a small magnetic field range, i.e. $`H<20`$ G, and the slope (see dashed curve in Fig. 1) is a factor 2.5 smaller than expected for an ideal diamagnet. This clearly indicates that for such small and thin disks there is a substantial penetration of the magnetic field into the disk. This is illustrated in Fig. 3 where the magnetic field lines are shown for a superconducting disk of radius $`R=0.8\mu m`$ in the $`L=0`$ state, i.e. the Meissner state. This strong penetration of the magnetic field inside the disk is also responsible for the highly nonlinear magnetization curve for $`H\gtrsim 20`$ G.
## 3 Giant-vortex state versus multi-vortex state
Next, we generalize our system to a disk with thickness $`d`$ and radius $`R_o`$ containing a hole in the center with radius $`R_i`$. The system under consideration is circular symmetric and therefore, if the GL equations were linear, the Cooper pair wavefunction could have been written as $`\mathrm{\Psi }(\vec{\rho })=F(\rho )\mathrm{exp}(iL\varphi )`$. We found that even for the nonlinear problem such a circular symmetric state, also called giant vortex state, still has the lowest energy when the confinement effects are dominant. This is the case when $`R_o`$ is small or $`R_o/R_i\to 1`$, or in the case of large magnetic fields when there is only surface superconductivity. This is illustrated in the phase diagram shown in Fig. 4 for a thin superconducting ring of outer radius $`R_o/\xi =4`$ and width $`R_o-R_i`$. The equilibrium regions for the giant vortex states with different angular momentum $`L`$ are separated by the solid curves. Notice that with decreasing width, i.e. increasing inner radius $`R_i`$: 1) the superconducting state survives up to large magnetic fields. This is a consequence of the enhanced superconductivity due to surface superconductivity which in the limit $`R_i\to R_o`$ leads to $`H_{c3}\to \mathrm{\infty }`$. 2) The L-transitions occur for $`\mathrm{\Phi }=(L+1/2)\mathrm{\Phi }_o`$ in the limit $`R_i\to R_o`$, where $`\mathrm{\Phi }_o=hc/2e`$ is the flux quantum.
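The half-integer matching fluxes in this narrow-ring limit follow from a standard one-dimensional argument: in a strictly 1D ring the kinetic energy of the state with winding number $`L`$ scales as

$$E_L\propto \left(L-\frac{\mathrm{\Phi }}{\mathrm{\Phi }_o}\right)^2,$$

so adjacent winding numbers are degenerate, $`E_L=E_{L+1}`$, exactly at $`\mathrm{\Phi }=(L+1/2)\mathrm{\Phi }_o`$.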
For large type-II systems one expects the giant-vortex state to break up into the Abrikosov triangular lattice of single vortices. The nonlinear term in the GL theory is responsible for this symmetry breaking. Such a multi-vortex state is the lowest energy state for the shaded areas in Fig. 4. Notice that, as compared to the disk case, the presence of the hole in the center stabilizes the multi-vortex state, except for large $`R_i`$, where the confinement effect starts to dominate and imposes the symmetry of the edge of the system on the superconducting state. Near the normal/superconducting boundary only surface superconductivity survives and the symmetry of the superconducting state is determined by the symmetry of the surface, which leads to the giant vortex state. The same holds for the Meissner state (i.e. $`L=0`$). For $`L=1`$ there is one flux quantum in the center of the system and there is no distinction between the giant and the multi-vortex states. Only for $`L\ge 2`$ can we have broken symmetry configurations.
The transition from the multi-vortex state to the giant-vortex state with increasing magnetic field is illustrated in Fig. 5 where we plot the superconducting electron density for a disk with radius $`R_o/\xi =4`$ containing a hole in the center with radius $`R_i/\xi =0.4`$ and for a magnetic field range such that the vorticity is $`L=4`$. First, we can clearly discriminate three vortices arranged in a triangle with one vortex in the center piercing through the hole (solid circle) of the disk. Increasing the magnetic field drives the vortices closer to the center and the single vortices start to overlap. Finally, for sufficiently large magnetic fields the four vortices form one circular symmetric giant-vortex state with winding number $`L=4`$.
If the hole is sufficiently large more than one flux quantum can pierce through this hole for sufficiently large magnetic fields. This is illustrated in Fig. 6 for a ring of $`R_o/\xi =4,R_i/\xi =1`$ with an external magnetic field such that the vorticity is $`L=6`$. There are four vortices in the superconducting material arranged on the edge of a square and two piercing through the hole. The latter conclusion can be easily verified by considering a contour plot of the phase (right contour plot in Fig. 6). Notice that encircling the hole the phase of the superconducting wavefunction changes by $`2\times 2\pi `$ while encircling the outer edge of the ring it changes by $`6\times 2\pi `$. In this plot the location of the vortices is also clearly visible. Encircling a single vortex changes the phase by $`2\pi `$.
## 4 Is the flux through the hole of the ring quantized?
It is often mentioned that the flux through a loop is quantized into integer multiples of the flux quantum $`\mathrm{\Phi }_o=hc/2e`$. In order to check the validity of this assertion we plot in Fig. 7(a) the flux through the hole of a wide ring of thickness $`d/\xi =0.1`$ with outer radius $`R_o/\xi =2`$ and inner radius $`R_i/\xi =1`$. From this figure it is clear that the flux through the hole of such a ring is not quantized. Note that depending on the magnetic field strength there is compression or expulsion of magnetic field in the hole region (compare the solid curve with the dashed line, which represents the flux $`\mathrm{\Phi }=\pi R_i^2H`$). The absence of quantization was also found earlier for hollow cylinders when the thickness of the cylinder wall is smaller than the penetration length of the magnetic field. In order to understand this apparent breakdown of flux quantization let us turn to the derivation of the flux quantization condition. Inserting the wavefunction $`\mathrm{\Psi }=|\mathrm{\Psi }|\mathrm{exp}(i\varphi )`$ into the current operator we obtain (under the assumption that the spatial variation of the superconducting density is weak)
$$\vec{j}=\frac{e\hbar }{m}|\mathrm{\Psi }|^2\left(\vec{\nabla }\varphi -\frac{2e}{\hbar c}\vec{A}\right),$$
(1)
which after integrating over a closed contour C inside the superconductor leads to
$$\oint _C\left(\frac{mc}{2e^2|\mathrm{\Psi }|^2}\vec{j}+\vec{A}\right)\cdot d\vec{l}=L\mathrm{\Phi }_o.$$
(2)
When the contour C is chosen along a path such that the superconducting current is zero we obtain
$$L\mathrm{\Phi }_o=\oint _C\vec{A}\cdot d\vec{l}=\int \mathrm{rot}\,\vec{A}\cdot d\vec{S}=\int \vec{H}\cdot d\vec{S}=\mathrm{\Phi },$$
(3)
which tells us that the flux through the area encircled by C is quantized. In our wide superconducting ring the current is non zero at the inner boundary of the ring and consequently the flux through the hole does not have to be quantized. In fact the flux will be $`\mathrm{\Phi }=L\mathrm{\Phi }_o+\mathrm{\Phi }_j`$ where
$$\mathrm{\Phi }_j=-\mathrm{\Phi }_o\frac{m}{he}\oint _C\frac{\vec{j}}{|\mathrm{\Psi }|^2}\cdot d\vec{l},$$
(4)
which depends on the size of the current at the surface of the inner ring.
The radius of the considered ring in Fig. 7 is sufficiently small that we have a circular symmetric vortex state and consequently the current has only an azimuthal component. The radial variation of this current is plotted in Fig. 8(a) for three different values of the winding number $`L`$ for a fixed magnetic field. For $`L\ne 0`$ the current in such a wide ring reaches zero at some radial position $`\rho ^{\ast }`$. As shown in Fig. 8(b) the corresponding flux through a surface with radius $`\rho ^{\ast }`$ is exactly quantized into $`L\mathrm{\Phi }_o`$ as required by the above quantization condition. The current at the inner edge of the ring is clearly nonzero and therefore it is easy to understand why the flux through the hole is not an integer multiple of the flux quantum. The effective radius $`\rho ^{\ast }`$ is plotted in Fig. 7(b) and is an oscillating function of the magnetic field. The dashed curves in the figure correspond to the values for the metastable states. Notice that $`\rho ^{\ast }`$ oscillates around the value $`\sqrt{R_oR_i}=\sqrt{2}=1.41`$ which can be obtained within the London theory. At the $`L\to L+1`$ transition this radius exhibits a jump which moves it closer towards the outer radius, while for fixed $`L`$ the effective radius moves closer to the inner radius with increasing $`H`$.
Next we consider the magnetic field range, $`\mathrm{\Delta }H`$, needed to increase $`L`$ by one, i.e. the distance between two consecutive jumps in the magnetization. The corresponding flux $`\mathrm{\Delta }\varphi =\pi R_o^2\mathrm{\Delta }H`$ is shown in Fig. 9. For small $`R_o`$ (see Fig. 9(a)) this value is almost independent of $`L`$ for fixed $`R_i`$. Its value is approximately given by an increase of the flux through a circular area with radius $`\sqrt{R_oR_i}`$ by one flux quantum (dashed horizontal curves in Fig. 9). For larger $`R_o`$ (see Fig. 9(b)) this relation is much more complicated and the distance between the jumps in the magnetic field is a strong function of $`L`$, except for $`R_i\to R_o`$ where it is again determined by the flux quantization through an area with radius $`\rho ^{\ast }=\sqrt{R_iR_o}`$.
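As a rough numerical illustration of this rule (a sketch; the value of $`\xi `$ is an assumption, borrowed from the Al parameters quoted in Sec. 2):

```python
import math

phi0 = 2.07e-7               # flux quantum hc/2e in G cm^2
xi = 0.25e-4                 # coherence length in cm (assumed, as for the Al disk above)
Ri, Ro = 1.0 * xi, 2.0 * xi  # the ring of Fig. 7
# one extra flux quantum through the effective area of radius sqrt(Ri*Ro)
dH = phi0 / (math.pi * Ri * Ro)
print(f"expected spacing of magnetization jumps: {dH:.1f} G")
```

For these parameters the spacing comes out of order fifty gauss, illustrating how strongly the jump spacing depends on the ring size.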
## 5 Conclusion
Mesoscopic superconducting disks of nonzero thickness containing a hole in the center were studied theoretically by solving numerically the coupled non-linear GL equations. In thin structures there is a substantial penetration of the magnetic field into the superconductor leading to a large (non-linear) demagnetization effect. The hole in the disk enhances superconductivity and for small holes it stabilizes the multi-vortex state. The flux through the hole of the disk is not quantized. But for rings with a narrow width an increase of the applied magnetic field by one flux quantum through a region with radius $`\sqrt{R_iR_o}`$ increases the winding number $`L`$ by one unit.
Acknowledgement We acknowledge discussions with A. Geim and V. Moshchalkov. This work was supported by the Flemish Science Foundation (FWO-Vl) and IUAP-VI. FMP is a research director with the FWO-Vl.
Reflection and noise in Cygnus X–1
## 1 Introduction
Comptonization of soft seed photons in a hot, optically thin, electron cloud located in the vicinity of a compact object is thought to form the hard X–ray radiation in the low spectral state of Cyg X–1 (Sunyaev & Truemper (1979)). For a thermal distribution of the electrons with temperature $`T_\mathrm{e}`$, the spectrum of the Comptonized radiation is close to a power law at energies sufficiently lower than $`3kT_\mathrm{e}`$ (Sunyaev & Titarchuk (1980)). The slope of the Comptonized spectrum is governed by the ratio of the energy deposited into the electrons and the influx of the soft radiation into the Comptonization region; the lower the ratio the steeper the Comptonized spectrum (e.g. Sunyaev & Titarchuk (1989), Haardt & Maraschi (1993), Gilfanov et al. (1995)).
The deviations from a single slope power law observed in the spectra of X–ray binaries in the $`5`$–$`30`$ keV energy band are mainly due to the reprocessing (e.g. reflection) of the primary Comptonized radiation by a cold medium located in the vicinity of the primary source. A plausible candidate for the reflecting medium is the optically thick accretion disk surrounding the inner region occupied by hot optically thin plasma. The main observables of the emission, reprocessed by a cold neutral medium, are the fluorescent K<sub>α</sub> line of iron at 6.4 keV, the iron K-edge at 7.1 keV and a broad hump at $`20`$–$`30`$ keV (Basko et al. (1974), George & Fabian (1991)). These signatures are indeed observed in the spectra of X–ray binaries. Their particular shape and relative amplitude, however, depend on the geometry of the primary source and the reprocessing medium and the abundance of heavy elements, and are affected by effects such as ionization and proper motion (e.g. Keplerian motion in the disk) of the reflector and general relativity effects in the vicinity of the compact object. The amplitude of the reflection features is proportional to the solid angle $`\mathrm{\Omega }_{\mathrm{refl}}`$ subtended by the reflector. An accurate estimate of $`\mathrm{\Omega }_{\mathrm{refl}}`$ is complicated and the result is strongly dependent on the details of the spectral model. However, $`\mathrm{\Omega }_{\mathrm{refl}}`$ is known to vary rather strongly from source to source and, for a given source, from epoch to epoch (e.g. Done & Zycki (1999), Gierlinski et al. (1999)).
Based on the analysis of a large sample of Seyfert AGNs and several X–ray binaries Zdziarski et al. (1999) recently found a correlation between the amount of reflection and the slope of the underlying power law. They concluded that the existence of such a correlation implies a dominant role of the reflecting medium as the source of seed photons to the Comptonization region.
The power density spectra of X–ray binaries in the low state (see van der Klis (1995) for a review) are dominated by a band limited noise component which is relatively flat below a break frequency $`\nu _{\mathrm{br}}`$ and approximately follows a power law with a slope of $`-1`$ or steeper above $`\nu _{\mathrm{br}}`$. Superimposed on the band limited noise is a broad bump, sometimes having the appearance of a QPO, whose frequency, $`\nu _{\mathrm{QPO}}`$, is an order of magnitude higher than the break frequency of the band limited noise. Although a number of theoretical models were proposed to explain the power spectra of X–ray binaries (e.g. Alpar et al. (1992), Nowak & Wagoner (1991), Ipser (1996)), the nature of the characteristic noise frequencies is still unclear. Despite the fact that the characteristic noise frequencies $`\nu _{\mathrm{br}}`$ and $`\nu _{\mathrm{QPO}}`$ vary from source to source and from epoch to epoch, they follow a rather tight correlation, $`\nu _{\mathrm{QPO}}\propto \nu _{\mathrm{br}}^\alpha `$, in a wide range of source types and luminosities (Wijnands & van der Klis (1999), Psaltis et al. (1999)).
Analyzing the GRANAT/SIGMA data Kuznetsov et al. (1996, 1997) found a correlation between the rms of aperiodic variability in a broad frequency range and the hardness of the energy spectrum above 35 keV for Cyg X–1 and GRO J0422+32 (X–ray Nova Persei). Crary et al. (1996) came to similar conclusions based on the data of CGRO/BATSE observations of Cyg X–1.
We present below the results of a systematic analysis of the RXTE observations of Cyg X–1 performed from 1996 to 1998, aimed at searching for a relation between characteristic noise frequencies and spectral properties.
## 2 Observations and data analysis
We used the publicly available data of Cyg X–1 observations with the Proportional Counter Array aboard the Rossi X-ray Timing Explorer performed between Feb. 1996 and Feb. 1998 during the low (hard) spectral state of the source. In total our sample contained 26 observations randomly chosen from proposals 10235, 10236, 10238, 20175 and 30157 (Table 1). The energy and power density spectra were averaged for each individual observation. The 4-20 keV flux from Cyg X–1 varied from $`7.2\times 10^{-9}`$ to $`1.8\times 10^{-8}`$ erg/sec/cm<sup>2</sup>, which corresponds to a luminosity range of $`5.4\times 10^{36}`$–$`1.3\times 10^{37}`$ erg/sec assuming a 2.5 kpc distance.
The power spectra were calculated in the 2–16 keV energy band and the $`0.002`$–$`32`$ Hz frequency range following the standard X–ray timing technique and normalized to units of squared relative rms. The white noise level was corrected for dead time effects following Vikhlinin et al. (1994) and Zhang et al. (1995).
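In outline, such an rms-normalized, segment-averaged power spectrum can be computed as in the following Python sketch (the Poisson noise is subtracted here at its ideal Leahy value of 2; the dead-time-modified level of Vikhlinin et al. (1994) and Zhang et al. (1995) used in the paper is not implemented):

```python
import numpy as np

def rms_pds(counts, dt, nseg):
    """Segment-averaged, rms-normalized power density spectrum.

    counts : 1-D array of binned counts
    dt     : time bin size in seconds
    nseg   : number of bins per segment (power of 2)
    Returns frequencies [Hz] and noise-subtracted power in (rms/mean)^2/Hz.
    """
    nsegs = len(counts) // nseg
    pds = np.zeros(nseg // 2)
    for s in range(nsegs):
        x = counts[s*nseg:(s+1)*nseg].astype(float)
        nph = x.sum()                       # photons in this segment
        rate = nph / (nseg * dt)            # mean count rate
        X = np.fft.rfft(x)[1:]              # drop the DC term
        leahy = 2.0 * np.abs(X)**2 / nph    # Leahy normalization
        pds += (leahy - 2.0) / rate         # rms normalization, Poisson level removed
    freq = np.fft.rfftfreq(nseg, dt)[1:]
    return freq, pds / nsegs
```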
The energy spectra were extracted from the “Standard Mode 2” data and ARF and RMF were constructed using standard RXTE FTOOLS v.4.2 tasks. We assumed a 0.5% systematic error in the spectral fitting. The “Q6” model was used for the background calculation. We used XSPEC v.10.0 (Arnaud (1996)) for the spectral fitting.
## 3 Results
Several power density and counts spectra of Cyg X–1 observed at different epochs are shown in Fig.1. The power density is plotted in units of frequency$`\times `$power, presenting the squared fractional rms at a given frequency per decade of frequency. This way of representing the power spectra clearly characterizes the relative contribution of variations at different frequencies to the total observed rms. Most of the power of aperiodic variations below $`30`$ Hz is approximately equally divided between two broad, $`\mathrm{\Delta }\nu /\nu \sim 1`$ “humps” separated in frequency by about a decade (the left panel in Fig.1). The lower frequency “hump” corresponds to what is usually referred to as band limited noise (e.g. van der Klis (1995)). The second “hump” is sometimes called a “QPO”. Both frequencies are equally important quantitative characteristics of the aperiodic variability – most of the apparently observed variability, indeed, occurs roughly at these two characteristic frequencies (Fig.1). Despite a factor of $`10`$–$`15`$ change of the characteristic noise frequencies in our sample, the high frequency part of the power spectrum, above $`8`$ Hz, remains unchanged (cf. Belloni & Hasinger (1990)). However, the low and intermediate frequency parts, responsible for most of the observed variability, change in an approximately self–similar manner which can be described as a logarithmic shift along the frequency axis (Fig.2). This is a manifestation of the fundamental correlation between the break and the QPO frequencies found by Wijnands & van der Klis (1999) and Psaltis et al. (1999) for a broad range of source types.
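This representation is convenient because the total fractional variance satisfies

$$\mathrm{rms}^2=\int P(\nu )\,d\nu =\int \nu P(\nu )\,d\mathrm{ln}\nu ,$$

so $`\nu P(\nu )`$ directly measures the contribution to the squared rms per logarithmic frequency interval.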
The counts spectra change in accordance with the change of the noise frequency (cf. left and right panels in Fig.1). The increase of characteristic noise frequency is accompanied by the general steepening of the energy spectrum and an increase of the relative amplitude of the reflection features.
In order to quantify this effect we fit the energy spectra in our sample with a simple model consisting of a power law with a superimposed reflected continuum (pexrav model in XSPEC) and an intrinsically narrow line at 6.4 keV. The binary system inclination was fixed at $`i=50\mathrm{°}`$ (Sowers et al. (1998), Done & Zycki (1999); see, however, Gies & Bolton (1986)), the iron abundance was fixed at the solar value of $`A_{\mathrm{Fe}}=3.3\times 10^{-5}`$ and the low energy absorption at $`N_\mathrm{H}=6\times 10^{21}`$ cm<sup>-2</sup>. Effects of ionization were not included. In order to approximately include in the model the smearing of the reflection features due to motion in the accretion flow, the reflection continuum and line were convolved with a Gaussian, whose width was a free parameter of the fit. The spectra were fitted in the 4–20 keV energy range. For the spectrum of 07/04/96 (observation 10238–01–04–00) an additional soft component (XSPEC diskbb model) was included in the model. This observation occurred shortly before the soft 1996 state of the source. The best–fit parameters are listed in Table 1. The accuracy of the absolute values of the best–fit parameters is discussed in the next section.
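In present-day XSPEC notation, a setup of this type could be approximated as follows (illustrative only — the Gaussian smearing used here was applied to the reflection continuum and line alone, whereas the standard gsmooth convolution smears the entire model):

```
XSPEC> model phabs*gsmooth(pexrav + gaussian)
```

with nH frozen at 0.6 (in units of 10<sup>22</sup> cm<sup>-2</sup>), cosIncl at cos 50° ≈ 0.64, and the line energy at 6.4 keV.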
The model describes all the spectra in our sample with an accuracy better than $`1\%`$ and reduced $`\chi _\mathrm{r}^2`$ in the range 0.5-1.1. The quality of the fit decreases somewhat with the increase of the best fit value of the reflection scaling factor. The model, however, fails to reproduce the exact shape of the line and, especially, of the edge. Nearly the same pattern of systematic deviations of the data from the model was found for all spectra in the $`6`$–$`12`$ keV energy range. Inclusion in the model of the effect of ionization and/or account for the exact shape of the relativistic smearing, assuming Keplerian motion in the disk, does not change significantly the pattern of residuals.
We found a fairly good correlation between the photon index of the underlying power law and the reflection scaling factor $`R\equiv \mathrm{\Omega }/2\pi `$ roughly characterizing the solid angle subtended by the reflecting medium (Fig.3). Other parameters change in an anticipated way, indicating that the spectral model includes the most physically important features. In particular, the width of the Gaussian used to model the smearing of the reflection spectrum and the equivalent width of the 6.4 keV line increase as the reflection scaling factor increases, the equivalent width being roughly proportional to $`R`$.
In order to relate spectral properties with characteristic noise frequency we fit the power density spectra with a template, allowing a logarithmic shift of the template spectrum along the frequency axis and its renormalization. Only the low and intermediate frequency part of the power spectra, up to the second “hump”, was used in the fitting (cf. Fig.2). The best fit values of the two parameters of the fit, the frequency shift and the normalization, were searched for using the $`\chi ^2`$ minimization technique. In order to compute the $`\chi ^2`$, the template spectrum, logarithmically shifted along the frequency axis, was rebinned to the original frequency bins using linear interpolation. We chose the average power spectrum for observations 10238–01–05–00 and 10238–01–05–000 as a template. The best–fit values of the frequency scaling factor obtained in such a way are listed in Table 1. Fig.4 shows the frequency scaling factor plotted versus reflection amplitude (top) and photon index of the underlying power law (bottom).
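In outline, this fitting procedure can be transcribed as the following Python sketch (a grid search over trial shifts; function and variable names are illustrative, not the authors' code):

```python
import numpy as np

def fit_shift(freq, pds, err, f_t, p_t, shifts):
    """Find the frequency scaling factor and normalization that best
    match a template PDS to the data (linear interpolation in log f)."""
    best = (np.inf, None, None)
    for s in shifts:                       # s > 1 moves features to higher frequency
        m = np.interp(np.log(freq / s), np.log(f_t), p_t,
                      left=np.nan, right=np.nan)   # template evaluated at freq/s
        ok = np.isfinite(m)
        # optimal renormalization A minimizing chi^2 at this trial shift
        A = np.sum(pds[ok]*m[ok]/err[ok]**2) / np.sum((m[ok]/err[ok])**2)
        chi2 = np.sum(((pds[ok] - A*m[ok]) / err[ok])**2)
        if chi2 < best[0]:
            best = (chi2, s, A)
    return best   # (chi2, frequency scaling factor, normalization)
```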
The change of the template power spectrum does not affect the correlations shown in Fig.4. More sophisticated spectral models including the effect of ionization and/or the exact shape of the relativistic smearing, assuming Keplerian motion in the disk, change the particular values of the best fit parameter but do not remove the general trends in Figs. 3 and 4. The same is true for a reflection model based on the results of our Monte–Carlo calculations of the reflected spectrum (an isotropic source above an optically thick slab with solar abundance of heavy elements; ionization effects included) in which the equivalent width of the K–$`\alpha `$ line is linked to the amplitude of the absorption K–edge and of the reflected continuum. More sophisticated spectral models affect mostly the value of the reflection scaling factor, increasing the scatter of the points along the horizontal axis in Fig.3 and in the top panel in Fig.4. The scatter of the values of photon index, on the other hand, almost does not change.
## 4 Discussion
We found a correlation between the noise frequency and spectral parameters, in particular, the amount of reflection and the slope of the underlying power law. The increase of the noise frequency is accompanied by the steepening of the spectrum of the primary radiation and the increase of the amount of reflection.
The correlation between the spectral parameters – amount of reflection and the slope of the primary emission – is the same as recently found by Zdziarski et al. (1999) for a large sample of Seyfert AGNs and several X–ray binaries. The existence of such a correlation hints at a close relation between the solid angle subtended by the reflecting medium and the influx of the soft photons to the Comptonization region. More specifically, it suggests that the reflecting medium gives a dominant contribution to the influx of the soft photons to the Comptonization region. The geometry, commonly discussed in application to the low spectral state of X–ray binaries, involves a hot quasi-spherical Comptonization region near the compact object surrounded by an optically thick accretion disk. In such a geometry it is natural to expect that a decrease of the inner radius of the disk would result in an increase of the solid angle subtended by the reflector (disk), and an increase of the energy flux of the soft photons to the Comptonization region. The correlated behavior of the noise frequency and spectral parameters suggests that a decrease of the inner radius of the disk also leads to an increase of the noise frequency.
In the soft state the inner boundary of the optically thick disk is likely to shift closer to the compact object, $`R_\mathrm{d}\sim 3R_\mathrm{g}`$ (cf. $`R_\mathrm{d}\sim 15`$–$`100R_\mathrm{g}`$ in the hard state, Done & Zycki (1999), Gilfanov et al. (1998)). Correspondingly, one might expect that the soft state spectra should have a stronger reflected component. An accurate estimate of the spectral parameters in the soft state is a complicated task and is beyond the scope of this paper. However, in order to qualitatively check this hypothesis we analyzed a set of RXTE observations of Cyg X–1 in the soft spectral state (May–August 1996). The spectral model was identical to the one used for the analysis of the low state data with the addition of a disk component (diskbb model in XSPEC); the energy range was 3–20 keV. We found that the correlation between the slope of the primary emission and the amount of reflection continues smoothly into the soft state (Fig.5), but the best–fit values of the reflection scaling factor are too high and, in particular, considerably exceed unity. However the qualitative conclusion that the amount of reflection increases from the low to the high spectral state is evident from the comparison of typical low and high state spectra (Fig.6). The results of Done & Zycki (1999) and Gierlinski et al. (1999) based on more realistic spectral models also confirm the existence of such a trend – $`\mathrm{\Omega }/2\pi \sim 0.1`$ and $`\mathrm{\Omega }/2\pi \sim 0.6`$–$`0.7`$ for the low and the high state respectively.
The spectral model is obviously oversimplified. Therefore the best fit values do not necessarily represent the exact values of the physically meaningful parameters. Particularly subject to the uncertainties due to the choice of the spectral model are the reflection scaling factor $`R\equiv \mathrm{\Omega }/2\pi `$ and the equivalent width of the iron line. Our estimates of the reflection scaling factor for the low state are systematically higher than those typically obtained using the more elaborate models, $`\mathrm{\Omega }/2\pi \sim 0.1`$–$`0.2`$ (e.g. Done & Zycki (1999)). Moreover, the best–fit values of $`R`$ for the high spectral state exceed unity considerably, which is implausible in the usually adopted geometry of the accretion flow. More realistic models, however, impose stringent requirements on the quality and energy range of the data in order to eliminate the degeneracy of the parameters. We therefore chose a model including the most physically important features and satisfactorily describing the data, and on the other hand, having a minimal number of free, especially, mutually dependent parameters. Although the absolute values of the best–fit parameters obtained with such a model should be treated with caution, the model correctly ranks the spectra according to the importance of the reprocessed component. In order to demonstrate this we plotted in Fig.7 the ratio of several counts spectra in the low and the high state with different best–fit values of $`R`$ to the spectrum with the lowest value of reflection in our sample. Fig.7 clearly shows that the spectra having higher best-fit values of $`R`$ have more pronounced reflection signatures – the fluorescent K<sub>α</sub> line of iron at $`6`$–$`7`$ keV and broad smeared iron K-edge at $`7`$–$`10`$ keV. Similarly we used a simple way of quantifying the characteristic noise frequency in terms of a logarithmic shift of a template spectrum along the frequency axis.
Recently, Revnivtsev et al. (1999) applied a frequency resolved spectral analysis to the data of Cyg X–1 observations. They showed that energy spectra corresponding to the shorter time scales ($`0.1`$–$`1`$ sec) exhibit less reflection than those of the longer time scales. Interpretation of the frequency resolved spectra is not straightforward and requires some a priori assumptions. We shall assume below that the different time scale variations occur in geometrically distinct regions of the accretion flow and the spectral shape does not change during a variability cycle on a given time scale. Under these assumptions the frequency resolved spectra can be treated as representing the energy spectra of the events occurring on the different time scales. We reanalyzed the data from Revnivtsev et al. (1999) using the spectral model described in the previous section. We found that the frequency resolved spectra follow the same trend as the averaged energy spectra (Fig.8), thus confirming the existence of an intimate relation between the slope of primary radiation and the amount of reflection. Secondly, energy spectra of the longer time scale ($`0.01`$–$`5`$ Hz) variations, giving the dominant contribution to the observed rms, are considerably softer and contain more reflection than the averaged energy spectrum. Such behavior hints at the non-uniformity of the conditions in the Comptonization region. Higher frequency variations are associated with a (presumably inner) part of the Comptonization region having a smaller solid angle, subtended by the disk, and a larger ratio of the energy deposited into the electrons to the flux of soft seed photons from the disk.
## 5 Conclusions
We analyzed a number of RXTE/PCA observations of Cyg X-1 from 1996–1998.
1. We found a tight correlation between characteristic noise frequency and spectral parameters – the slope of primary Comptonized radiation and the amount of reflection in the low spectral state (Fig.3, 4). We argue that the simultaneous increase of the noise frequency, the amount of reflection and the steepening of the spectrum of the Comptonized radiation are caused by a decrease of the inner radius of the optically thick accretion disk.
2. The soft state spectra have larger reflection than the low state spectra and obey the same correlation between the slope of the Comptonized radiation and the amount of reflection (Fig.5).
3. A similar correlation between the slope of the primary radiation and the amount of reflection was found for the frequency resolved spectra. The energy spectra at the lower frequencies (below several Hz), responsible for most of the apparently observed aperiodic variability, are considerably steeper and contain a larger amount of reflection than the spectra of the higher frequencies and, most importantly, than the average spectrum. We suggest that this reflects non-uniformity and/or non-stationarity of the conditions in the Comptonization region.
###### Acknowledgements.
The authors are grateful to R.Sunyaev for stimulating discussions and valuable comments on the manuscript. M.Revnivtsev acknowledges the hospitality of the Max–Planck Institute for Astrophysics and partial support by RFBR grant 96–15–96343 and INTAS grant 93–3364–ext. This research has made use of data obtained through the High Energy Astrophysics Science Archive Research Center Online Service, provided by the NASA/Goddard Space Flight Center.
Figure 1: Low-energy part of the eigenvalue spectrum of the PHM [periodic boundary conditions, SMA]. Eigenvalues in (c) and (d) are two-fold degenerate.
Local mode behaviour in quasi-1D CDW systems
H. Fehske<sup>a</sup>, G. Wellein<sup>b</sup>, H. Büttner<sup>a</sup>, A. R. Bishop<sup>c</sup>, and M. I. Salkola<sup>d</sup>
<sup>a</sup>Physikalisches Institut, Universität Bayreuth, D-95440 Bayreuth, Germany
<sup>b</sup>Regionales Rechenzentrum Erlangen, Universität Erlangen, 91058 Erlangen, Germany
<sup>c</sup>MSB262, Los Alamos National Laboratory, Los Alamos, NM 87545, USA
<sup>d</sup>Superconductor Technologies Inc., Santa Barbara, CA 93111, USA
Abstract
We analyze numerically the ground-state and spectral properties of the three-quarter filled Peierls-Hubbard Hamiltonian. Various charge- and spin-ordered states are identified. In the strong-coupling regime, we find clear signatures of local lattice distortions accompanied by intrinsic multi-phonon localization. The results are discussed in relation to recent experiments on MX chain [-PtCl-] complexes. In particular we are able to reproduce the observed red shift of the overtone resonance Raman spectrum.
Keywords: charge density wave, localized lattice distortions, polarons, MX-chain compounds
Inspired by the recent observation of intrinsically localized vibrational modes in halide-bridged transition metal \[-PtCl-\] complexes , we study strong coupling effects between electronic and lattice degrees of freedom on the basis of a two-band, 3/4-filled Peierls-Hubbard model (PHM)
$$\mathcal{H}=-t\sum_{\langle i,j\rangle \sigma }c_{i\sigma }^{\dagger }c_{j\sigma }+\sum_{i\sigma }\epsilon _in_{i\sigma }+\sum_iU_in_{i\uparrow }n_{i\downarrow }+\lambda _I(b_I+b_I^{\dagger })(n_1+n_3-n_2-n_4)+\hbar \omega _Ib_I^{\dagger }b_I+\lambda _R(b_R+b_R^{\dagger })(n_2-n_4)+\hbar \omega _Rb_R^{\dagger }b_R.$$
(1)
In (1), the $`c_{i\sigma }^{[\dagger ]}`$ are fermion operators, $`n_{i\sigma }=c_{i\sigma }^{\dagger }c_{i\sigma }`$, and $`b_R^{[\dagger ]}`$ and $`b_I^{[\dagger ]}`$ are the boson operators for the Raman active (R) and infrared (I) optical phonon modes with bare phonon frequencies $`\omega _R`$ and $`\omega _I`$. With regard to the quasi-1D charge density wave (CDW) system $`\{[\mathrm{Pt}(\mathrm{en})_2][\mathrm{Pt}(\mathrm{en})_2\mathrm{Cl}_2](\mathrm{ClO}_4)_4\}`$ (en= ethylenediamine), subsequently abbreviated as PtCl, the Pt (Cl) atoms are denoted by the site index $`i=2,\mathrm{\hspace{0.17em}4}`$ ($`i=1,\mathrm{\hspace{0.17em}3}`$). The CDW state is built up by alternating nominal $`\mathrm{Pt}^{+4}`$ and $`\mathrm{Pt}^{+2}`$ sites with a corresponding distortion of Cl<sup>-</sup>-ions towards $`\mathrm{Pt}^{+4}`$. To model the situation of a 3/4-filled charge transfer insulator within a single-mode approach (SMA), we only include the R-($`\nu _1`$)-mode with $`\hbar \omega _R=0.05`$, and parametrize the site energies by $`\mathrm{\Delta }=(\epsilon _{\mathrm{Pt}}-\epsilon _{\mathrm{Cl}})/t=1.2`$ and the Hubbard repulsions by $`U_{\mathrm{Pt}}=0.8`$ and $`U_{\mathrm{Cl}}=0`$ (all energies are measured in units of $`t`$). Alternatively, we employ a more realistic double-mode approach (DMA), using $`\hbar \omega _I=0.06`$ for the I-($`\nu _2`$)-mode.
The ground-state and spectral properties of the four-site PHM are determined by finite-lattice Lanczos diagonalization that preserves the full dynamics of the phonons .
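A minimal brute-force transcription of such a calculation is sketched below in Python (this is not the authors' Lanczos implementation: it builds the four-site PHM of Eq. (1) in the single-mode approach with Jordan-Wigner fermion operators and a truncated phonon basis; the values of lam_R and the cutoff Nph, and the choice of zero point eps_Cl = 0, are illustrative assumptions):

```python
import numpy as np
from scipy.sparse import csr_matrix, identity, kron
from scipy.sparse.linalg import eigsh

# one-mode building blocks (basis |0>, |1> per spin-orbital)
a  = csr_matrix(np.array([[0.0, 1.0], [0.0, 0.0]]))  # fermion annihilation
Z  = csr_matrix(np.diag([1.0, -1.0]))                # Jordan-Wigner string factor
I2 = identity(2, format='csr')

M = 8                                    # 4 sites x 2 spins
def c_op(m):
    """Annihilation operator on spin-orbital m via Jordan-Wigner."""
    ops = [Z]*m + [a] + [I2]*(M - m - 1)
    out = ops[0]
    for o in ops[1:]:
        out = kron(out, o, format='csr')
    return out

c = [c_op(m) for m in range(M)]
n = [op.getH() @ op for op in c]
mode = lambda site, s: 2*(site - 1) + s  # sites 1..4, spin s = 0 (up), 1 (down)

t, Delta, U_Pt = 1.0, 1.2, 0.8           # parameters quoted in the text
lam_R, w_R = 0.1, 0.05                   # lam_R is the scan parameter (assumed value)
Nph = 12                                 # phonon cutoff (assumption)

Hf = csr_matrix((2**M, 2**M))
for i, j in [(1, 2), (2, 3), (3, 4), (4, 1)]:        # 4-site ring, PBC
    for s in (0, 1):
        hop = c[mode(i, s)].getH() @ c[mode(j, s)]
        Hf = Hf - t*(hop + hop.getH())
for site in (2, 4):                                   # Pt sites (eps_Cl = 0 chosen)
    Hf = Hf + Delta*(n[mode(site, 0)] + n[mode(site, 1)])
    Hf = Hf + U_Pt*(n[mode(site, 0)] @ n[mode(site, 1)])

b = csr_matrix(np.diag(np.sqrt(np.arange(1.0, Nph)), 1))  # truncated boson
X = b + b.getH()
coup = n[mode(2, 0)] + n[mode(2, 1)] - n[mode(4, 0)] - n[mode(4, 1)]

H = (kron(Hf, identity(Nph, format='csr'))
     + lam_R*kron(coup, X)
     + w_R*kron(identity(2**M, format='csr'), b.getH() @ b))

# restrict to 3/4 filling: 6 electrons in 8 spin-orbitals
occ = np.array([bin(f).count('1') for f in range(2**M)])
keep = np.where(np.repeat(occ, Nph) == 6)[0]
Hs = H.tocsr()[keep][:, keep]

print(np.sort(eigsh(Hs, k=6, which='SA', return_eigenvectors=False)))
```

The restricted sector is only 336-dimensional, so the low-lying spectrum is obtained in a fraction of a second; the dedicated phonon treatment of the original work is needed once larger clusters or both phonon modes are included.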
Figure 1 shows the variation of the lowest energy states as a function of the electron-phonon coupling. In the weak-coupling region, the ground-state is basically a zero-phonon state (cf. Fig. 2 a) and the peaks in Fig. 1 (a) correspond to multiples of the fundamental phonon frequency $`\omega _R^{(1)}`$. As $`\lambda _R`$ increases a strong mixing of electron and phonon degrees of freedom takes place, and finally the lowest states with total momentum $`K=0`$ and $`K=\pi `$ become nearly degenerate. In this limit a non-linear lattice potential stabilizing so-called intrinsically localized vibrational modes (ILMs) is self-consistently generated.
A prominent feature of these bound states is their strong anharmonic red shift, resulting from the attractive interaction of Raman phonon quanta located at the same PtCl<sub>2</sub> unit. The calculated red shift, $`r_n=[n\omega _R^{(1)}-\omega _R^{(n)}]/\omega _R^{(1)}`$ with $`\omega _R^{(n)}=E_n-E_0`$, of the (doublet) overtones shown in Fig. 1 (c) is successfully compared to experimental data probed by resonance Raman scattering (see Tab. 1). To elucidate the different nature of the ground state in the weak- and strong-coupling regimes, several characteristic quantities are listed in Tab. 2. Obviously the self-localization transition is accompanied by significant changes in the spin and charge correlations. As can be seen from the weight of the $`m`$-phonon state in the ground state, $`|c_0^m|^2`$, depicted in Fig. 2, the appearance of the corresponding charge- and spin-ordered configuration is related to large occupation numbers of the localized vibrational (R) mode.
To summarize, the numerical results obtained for the PHM provide strong evidence for the existence of a dynamical spatial localization of vibrational energy (ILM) in MX solids, due to high intrinsic nonlinearity from strong electron lattice coupling.
1. B. I. Swanson et al., Phys. Rev. Lett. 82 (1999) 3288.
2. S. P. Love et al., Phys. Rev. B 47 (1993) 11107.
3. B. Bäuml et al., Phys. Rev. B 58 (1998) 3663.
User Manual for DUSTY
## 1 Introduction
The code DUSTY was developed at the University of Kentucky by Željko Ivezić, Maia Nenkova and Moshe Elitzur for a commonly encountered astrophysical problem: radiation from some source (star, galactic nucleus, etc.) viewed after processing by a dusty region. The original radiation is scattered, absorbed and reemitted by the dust, and the emerging processed spectrum often provides the only available information about the embedded object. DUSTY can handle both planar and centrally-heated spherical density distributions. The solution is obtained through an integral equation for the spectral energy density, introduced in . The number of independent input model parameters is minimized by fully implementing the scaling properties of the radiative transfer problem, and the spatial temperature profile is found from radiative equilibrium at every point in the dusty region.
On a Convex Exemplar machine, the solution for spherical geometry is produced in a minute or less for visual optical depth $`\tau _V`$ up to $`\sim `$ 10, increasing to 5–10 min for $`\tau _V`$ higher than 100. In extreme cases ($`\tau _V\sim 1000`$) the run time may reach 30 min or more. Run times for the slab case are typically five times shorter. All run times are approximately twice as long on a 300 MHz Pentium PC.
The purpose of this manual is to help users get quickly acquainted with the code. Following a short description of the installation procedure (§2), the input and output are described in full for the spherical case in §3 and §4. All changes pertaining to the plane-parallel case are described separately in §5. Finally, §6 describes user control of DUSTY itself.
This new version of DUSTY is significantly faster than its previous public release. Because of the addition of many features, the structure of the input has changed and old input files will not run on the current version.
## 2 Installation
The FORTRAN source dusty.f along with additional files, including five sample input files, comes in a single compressed file dusty.tar.gz. This file and its unpacking instructions are available at DUSTY's homepage at http://www.pa.uky.edu/~moshe/dusty. Alternatively, anonymous ftp gradj.pa.uky.edu, cd dusty/distribution and retrieve all files and sub-directories.
DUSTY was developed on a Pentium PC and has been run also on a variety of Unix workstations. It is written in standard FORTRAN 77, and producing the executable file is rather straightforward. For example, on a Unix machine
f77 dusty.f -o dusty
If the compilation is successful you can immediately proceed to run DUSTY without any further action. It should produce the output files sphere1.out and slab1.out, printed in appendices C and D, respectively. On a 300 MHz Pentium PC with DUSTY compiled by Visual FORTRAN under Windows, these files are produced in just under 2 minutes. Execution times under Linux are roughly three times longer; the Linux/FORTRAN implementation on the Digital alpha machine appears to be especially poor, and the execution may be as much as ten times longer. Execution times on SUN workstations vary greatly with the model: about 1:30 min on an Enterprise 3000, 3 min on SPARC Ultra 1 and 6 min on SPARC20. These run-times should provide an indication of what to expect on your machine. If DUSTY compiles properly but the execution seems to be going nowhere and the output is not produced in the expected time, in all likelihood the problem reflects an insufficient amount of machine memory. As a first measure, try to close all programs with heavy demand on system resources, such as ghostview and Netscape, before running DUSTY. If this does not help, the problem may be alleviated by reducing DUSTY's memory requirements. Section 6.1 describes how to do that.
## 3 Input
A single DUSTY run can process an unlimited number of models. To accomplish this, DUSTY's input is always the master input file dusty.inp, which lists the names of the actual input files for all models. (dusty.inp must be kept in the same directory as the DUSTY executable.) These filenames must have the form fname.inp, where fname is arbitrary and can include a full path, so that a single run may produce output models in different directories. In dusty.inp, each input filename must be listed on a separate line, with the implied extension .inp omitted. Since FORTRAN requires termination of input records with a carriage return, make sure you press the “Enter” key after every filename you enter, especially if it is in the last line of dusty.inp. Empty lines are ignored, as is all text following the ‘%’ sign (as in TeX). This enables you to enter comments and conveniently switch on and off the running of any particular model. The sample dusty.inp, supplied with the program, points to the five actual input files sphereN.inp (N = 1–3) and slabM.inp (M = 1, 2). Only sphere1 and slab1 will be executed, since the others are commented out, providing samples of DUSTY's simplest possible input and output. Once they have been successfully run you may wish to remove the ‘%’ signs from the other entries, which demonstrate more elaborate input and output, and check the running of a full sequence. Your output can be verified against the corresponding sample output files accessible on DUSTY's homepage.
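For example, a dusty.inp steering two of the supplied models plus one hypothetical model in a subdirectory could read:

```
  sphere1                % runs sphere1.inp, output to sphere1.out
% sphere2                % commented out -- this model is skipped
  models/slab1           % paths are allowed; output goes to models/slab1.out
```

(Remember the terminating carriage return after the last entry.)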
Each model is characterized by properties of the radiation source and the dusty region, and DUSTY produces a set of up to 999 solutions for all the optical depths specified in the input. The output file for fname.inp is fname.out, containing a summary of the run and a table of the main output results. Additional output files containing more detailed tables of radiative and radial properties may be optionally produced.
The input file has a free format; text and empty lines can be entered arbitrarily. All lines that start with the ‘\*’ sign are echoed in the output and can be used to print out notes and comments. This option can also be useful when the program fails for some mysterious reason and you want to compare its output with an exact copy of the input line as it was read in before processing by DUSTY. The occurrence of relevant numerical input, which is entered in standard FORTRAN conventions, is flagged by the equal sign ‘=’. The only restrictions are that all required input entries must be specified, and in the correct order; the most likely source of an input error is failure to comply with these requirements. Recall, also, that FORTRAN requires a carriage return termination of the file’s last line if it contains relevant input. Single entries are always preceded by the equal sign, ‘=’, and terminated by a blank, which can optionally be preceded by a punctuation mark. For example: T = 10,000 K as well as Temperature = 1.E4 degrees and simply = 10000.00 are all equivalent, legal input entries (note that comma separations of long numbers are permitted). Some input is entered as a list, in which case the first member is preceded by ‘=’ and each subsequent member must be preceded by a blank (an optional punctuation mark can be entered before the blank for additional separation); for example, Temperatures = 1E4, 2E4 30,000. Because of the special role of ‘=’ as a flag for input entry, care must be taken not to introduce any ‘=’ except when required. All text following the ‘%’ sign is ignored (as in TeX), and this can be used to comment out material that includes ‘=’ signs. For example, different options for the same physical property may require a different number of input entries. By commenting out with ‘%’, all options may be retained in the input file with only the relevant one switched on, as illustrated below.
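* Both spectral options of §3.1 below can be kept in the input file with only the black-body entry active (a schematic illustration of the commenting convention, using entries described later in this section):
```
Spectrum = 1; Number of BB = 1; T = 10,000 K
% Spectrum = 2
% Temperature = 2500 K
% SiO absorption depth = 10 percents
```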
The input contains three types of data: physical parameters, numerical accuracy parameters, and flags for optional output files. The physical parameters include characteristics of the external radiation, properties of the dust grains, and the envelope density distribution. A detailed description of the program input follows, including bulleted examples. Each example contains a brief explanation, followed by sample text typeset in typewriter font as it would appear in the input file. The sample input files sphereN.inp and slabM.inp, supplied with DUSTY, are heavily commented to ease initial use and can serve as templates.
### 3.1 External Radiation
In the spherical case, DUSTY assumes that the external radiation comes from a point source at the center of the density distribution. Thanks to scale invariance, the only relevant property of the external radiation under these circumstances is its spectral shape. Six different flag-selected input options are available. The first three involve entry in analytical form:
1. A combination of up to 10 black bodies, each described by a Planck function of a given temperature. Following the spectrum flag, the number of black bodies is specified, followed by a list of the temperatures. When more than one black body is specified, the temperature list must be followed by a list of the fractional contributions of the different components to the total luminosity.
* A single black body:
```
Spectrum = 1
Number of BB = 1
Temperature = 10,000 K
```
This could also be entered on a single line as
```
type = 1, N = 1, T = 1E4
```
* Two black bodies, e.g. a binary system, with the first one contributing 80% of the total luminosity; note that the distance between the stars must be sufficiently small that the assumption of a central point source remains valid:
```
Spectrum = 1
Number of BB = 2
Temperatures = 10,000, 2,500 K
Luminosities = 4, 1
```
2. Engelke-Marengo function. This expression improves upon the black-body description of cool star emission by incorporating empirical corrections for the main atmospheric effects. Engelke found that changing the temperature argument of the Planck function from $`T`$ to $`0.738T[1+79450/(\lambda T)]^{0.182}`$, where $`T`$ is in K and $`\lambda `$ is wavelength in $`\mu `$m, adequately accounts for the spectral effect of H<sup>-</sup>. Massimo Marengo devised an additional empirical correction for molecular SiO absorption around 8 $`\mu `$m, and has kindly made his results available to DUSTY. The selection of this combined Engelke-Marengo function requires as input the temperature and the relative (to the continuum) SiO absorption depth in %.
* Stellar spectrum parametrized with Engelke–Marengo function:
```
Spectrum = 2
Temperature = 2500 K
SiO absorption depth = 10 percents
```
3. Broken power law of the form:
$$\lambda F_\lambda \propto \begin{cases}0&\lambda \le \lambda (1)\\ \lambda ^{-k(1)}&\lambda (1)<\lambda \le \lambda (2)\\ \lambda ^{-k(2)}&\lambda (2)<\lambda \le \lambda (3)\\ \vdots &\\ \lambda ^{-k(N)}&\lambda (N)<\lambda \le \lambda (N+1)\\ 0&\lambda (N+1)<\lambda \end{cases}$$
In this case, after the option selection the number $`N`$ is entered, followed by a list of the break points $`\lambda (i)`$, $`i=1\ldots N+1`$, in $`\mu `$m and a list of the power indices $`k(i)`$, $`i=1\ldots N`$. The wavelengths $`\lambda (i)`$ must be listed in increasing order.
* A flat spectrum confined to the range 0.1–1.0 $`\mu `$m:
```
Spectrum = 3
N = 1
lambda = 0.1, 1 micron
k = 0
```
All spectral points entered outside the range covered by DUSTY’s wavelength grid are ignored. If the input spectrum does not cover the entire wavelength range, all undefined points are assumed zero.
The other three options are for entry in numerical form as a separate user-supplied input file which lists either (4) $`\lambda F_\lambda `$ (= $`\nu F_\nu `$) or (5) $`F_\lambda `$ or (6) $`F_\nu `$ vs $`\lambda `$. Here $`\lambda `$ is wavelength in $`\mu `$m and $`\nu `$ the corresponding frequency, and $`F_\lambda `$ or $`F_\nu `$ is the external flux density in arbitrary units.
4. Stellar spectrum tabulated in a file. The filename for the input spectrum must be entered separately in the line following the numerical flag. This input file must have a three-line header of arbitrary text followed by a two-column tabulation of $`\lambda `$ and $`\lambda F_\lambda `$, where $`\lambda `$ is in $`\mu `$m and $`\lambda F_\lambda `$ is in arbitrary units. The number of entry data points is limited to a maximum of 10,000 but is otherwise arbitrary. The tabulation must be ordered in wavelength but the order can be either ascending or descending. If the shortest tabulated wavelength is longer than 0.01 $`\mu `$m, the external flux is assumed to vanish at all shorter wavelengths. If the longest tabulated wavelength is shorter than 3.6 cm, DUSTY will extrapolate the rest of the spectrum with a Rayleigh-Jeans tail.
* Spectrum tabulated in file quasar.dat:
```
Spectrum = 4
quasar.dat
```
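* The first lines of such a file might look as follows (a schematic sketch; the numbers are illustrative and are not the actual contents of quasar.dat):
```
header line 1: broad-band spectrum of a quasar
header line 2: columns are lambda (micron), lambda*F(lambda)
header line 3: arbitrary flux units
0.1    0.80
1.0    1.00
10.0   0.55
```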
5. Stellar spectrum read from a file as in the previous option, but $`F_\lambda `$ is specified (in arbitrary units) instead of $`\lambda F_\lambda `$.
* Kurucz model atmosphere tabulated in file kurucz10.dat:
```
Spectrum = 5
kurucz10.dat
```
6. Stellar spectrum read from a file as in the previous option, but $`F_\nu `$ is specified (in arbitrary units) instead of $`F_\lambda `$.
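* Spectrum tabulated as $`F_\nu `$ in a file, following the same pattern as the previous two options (the filename fnu-source.dat is hypothetical):
```
Spectrum = 6
fnu-source.dat
```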
In the last three entry options, the filename for the input spectrum must be entered separately in the line following the numerical flag. Optionally, you may separate the flag line and the filename line by an arbitrary number of lines that are either empty or commented out (starting with ‘%’). The files quasar.dat and kurucz10.dat are distributed with DUSTY.
### 3.2 Dust Properties
Dust optical properties are described by the dust absorption and scattering cross-sections, which depend on the grain size and material. Currently DUSTY supports only single-type grains, i.e., a single size and chemical composition. Grain mixtures can still be treated approximately, simulated by a single-type grain constructed from an appropriate average. This approximation will be removed in future releases, which will provide full treatment of grain mixtures.
#### 3.2.1 Chemical Composition
DUSTY contains data for the optical properties of six common grain types. In models that utilize these standard properties, the only input required is the fractional abundance of the relevant grains. In addition, optical properties for other grains can be supplied by the user. In this case, the user can either specify the absorption and scattering coefficients directly or have DUSTY calculate them from a user-supplied index of refraction. The various alternatives are selected by a flag, as follows:
1. DUSTY contains data for six common grain types: ‘warm’ and ‘cold’ silicates from Ossenkopf et al. (Sil-Ow and Sil-Oc); silicates and graphite grains from Draine and Lee (Sil-DL and grf-DL); amorphous carbon from Hanner (amC-Hn); and SiC from Pégourié (SiC-Pg). Fractional number abundances must be entered for all these grain types, in the order listed.
* Mixture containing only dust grains with built-in data for optical properties:
```
optical properties index = 1
Abundances for supported grain types, standard ISM mixture:
Sil-Ow Sil-Oc Sil-DL grf-DL amC-Hn SiC-Pg
x = 0.00 0.00 0.53 0.47 0.00 0.00
```
The overall abundance normalization is arbitrary. In this example, the silicate and graphite abundances could have been entered equivalently as 53 and 47, respectively.
2. With this option, the user can introduce up to ten additional grain types on top of those built in. First, the abundances of the six built-in grain types are entered as in the previous option. Next, the number ($`\le 10`$) of additional grain types is entered, followed by the names of the data files, listed separately one per line, that contain the relevant optical properties. These properties are specified by the index of refraction, and DUSTY calculates the absorption and scattering coefficients using Mie theory. Each data file must start with seven header lines (arbitrary text) followed by a three-column tabulation of wavelength in $`\mu `$m and the real (n) and imaginary (k) parts of the index of refraction. The number of table entries is arbitrary, up to a maximum of 10,000. The tabulation must be ordered in wavelength but the order can be either ascending or descending. DUSTY will linearly interpolate the data for n and k to its built-in wavelength grid. If the supplied data do not fully cover DUSTY’s wavelength range, the refraction index will be assumed constant in the unspecified range, with a value equal to the corresponding end point of the user tabulation. The file list should be followed by a list of abundances, entered in the same order as the names of the corresponding data files.
* Draine & Lee graphite grains with three additional grain types whose n and k are provided by the user in the data files amC-zb1.nk, amC-zb2.nk and amC-zb3.nk, distributed with DUSTY. These files tabulate the most recent properties for amorphous carbon by Zubko et al.:
```
Optical properties index = 2
Abundances of built-in grain types:
Sil-Ow Sil-Oc Sil-DL grf-DL amC-Hn SiC-Pg
x = 0.00 0.00 0.00 0.22 0.00 0.00
Number of additional components = 3, properties listed in files
amC-zb1.nk
amC-zb2.nk
amC-zb3.nk
Abundances for these components = 0.45, 0.10, .23
```
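Each such .nk file follows the layout sketched below (a schematic sketch; the values shown are illustrative only, not the actual contents of the distributed files):
```
header line 1: amorphous carbon optical constants
header line 2: columns are lambda (micron), n, k
header lines 3-7: more arbitrary text
...
0.10   2.43   1.20
1.00   2.15   0.52
100.   1.95   0.08
```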
3. This option is similar to the previous one, except that the absorption and scattering coefficients are tabulated instead of the complex index of refraction, so that the full optical properties are directly specified and no further calculation is performed by DUSTY. The data filename is listed in the line following the option flag. This file must start with a three-line header of arbitrary text followed by a three-column tabulation of wavelength in $`\mu `$m and the absorption ($`\sigma _{\mathrm{abs}}`$) and scattering ($`\sigma _{\mathrm{sca}}`$) cross sections. Units for $`\sigma _{\mathrm{abs}}`$ and $`\sigma _{\mathrm{sca}}`$ are arbitrary; only their spectral variation is relevant. The number of entries is arbitrary, with a maximum of 10,000. The handling of the wavelength grid is the same as in the previous option.
* Absorption and scattering cross sections from the file ism-stnd.dat, supplied with DUSTY, listing the optical properties for the standard interstellar dust mixture:
```
Optical properties index = 3; cross-sections entered in file
ism-stnd.dat
```
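* The layout of such a cross-section file, sketched with illustrative values (not the actual contents of ism-stnd.dat):
```
header line 1: standard ISM dust mixture
header line 2: columns are lambda (micron), sigma_abs, sigma_sca
header line 3: cross sections in arbitrary units
0.55   1.0E-21   6.5E-22
2.2    1.1E-22   9.0E-24
10.0   8.0E-22   3.0E-25
```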
DUSTY’s distribution includes a library of data files with the complex refractive indices of various compounds of common interest. This library is described in appendix E.
#### 3.2.2 Grain Size Distribution
The grain size distribution must be specified only when the previous option was set to 1 or 2. When the dust cross sections are read from a file (previous option set at 3), the present option is skipped.
DUSTY recognizes two distribution functions for grain sizes $`n(a)`$ — the MRN power-law with sharp boundaries
$$n(a)\propto a^{-q}\qquad \text{for}\quad a_{\mathrm{min}}\le a\le a_{\mathrm{max}}$$
(1)
and its modification by Kim, Martin and Hendry, which replaces the upper cutoff with a smooth exponential falloff
$$n(a)\propto a^{-q}\,e^{-a/a_0}\qquad \text{for}\quad a\ge a_{\mathrm{min}}$$
(2)
DUSTY contains the standard MRN parameters $`q`$ = 3.5, $`a_{\mathrm{min}}`$ = 0.005 $`\mu `$m and $`a_{\mathrm{max}}`$ = 0.25 $`\mu `$m as a built-in option. In addition, the user may select different cutoffs, as well as a different power index, for both distributions.
1. This is the standard MRN distribution. No input required other than the option flag.
2. Modified MRN distribution. The option flag is followed by a list of the power index $`q`$, the lower limit $`a_{\mathrm{min}}`$ and the upper limit $`a_{\mathrm{max}}`$ in $`\mu `$m.
* The standard MRN distribution can be entered with this option as:
```
Size distribution = 2
q = 3.5, a(min) = 0.005 micron, a(max) = 0.25 micron
```
* Single size grains with $`a`$ = 0.05 $`\mu `$m:
```
Size distribution = 2
q = 0 (it is irrelevant in this case)
a(min) = 0.05 micron
a(max) = 0.05 micron
```
3. KMH distribution. The option flag is followed by a list of the power index $`q`$, lower limit $`a_{\mathrm{min}}`$ and the characteristic size $`a_0`$ in $`\mu `$m.
* Size distribution for grains in the dusty envelope around IRC+10216 as obtained by Jura and verified in Ivezić & Elitzur:
```
Size distribution = 3
q = 3.5, a(min) = 0.005 micron, a0 = 0.2 micron
```
#### 3.2.3 Dust Temperature on Inner Boundary
The next input entry is the dust temperature $`T_1`$ (in K) on the shell inner boundary. This is the only dimensional input required by the dust radiative transfer problem. $`T_1`$ uniquely determines $`F_{e1}`$, the external flux entering the shell, which is listed in DUSTY’s output (see §4.1). In principle, different types of grains can have different temperatures at the same location. However, DUSTY currently treats mixtures as single-type grains whose properties average the actual mix. Therefore, only one temperature is specified.
### 3.3 Density Distribution
In spherical geometry, the density distribution is specified in terms of the scaled radius
$$y=\frac{r}{r_1}$$
where $`r_1`$ is the shell inner radius. This quantity is irrelevant to the radiative transfer problem and is therefore never entered. ($`r_1`$ scales with the luminosity $`L`$ as $`L^{1/2}`$ when all other parameters are held fixed; the explicit relation is provided as part of DUSTY’s output, see §4.1.) The density distribution is described by the dimensionless profile $`\eta (y)`$, which DUSTY normalizes according to $`\int \eta \,dy=1`$. Note that the shell inner boundary is always $`y=1`$. Its outer boundary in terms of scaled radii is the shell relative thickness, and is specified as part of the definition of $`\eta `$.
DUSTY provides three methods for entering the spherical density distribution: prescribed analytical forms, hydrodynamic calculation of winds driven by radiation pressure on dust particles, and numerical tabulation in a file.
#### 3.3.1 Analytical Profiles
DUSTY can handle three types of analytical profiles: piecewise power-law, exponential, and an analytic approximation for radiatively driven winds. The last option is described in the next subsection on winds.
1. Piecewise power law:
$$\eta (y)\propto \begin{cases}y^{-p(1)}&1\le y<y(1)\\ y^{-p(2)}&y(1)\le y<y(2)\\ y^{-p(3)}&y(2)\le y<y(3)\\ \vdots &\\ y^{-p(N)}&y(N-1)\le y\le y(N)\end{cases}$$
After the option selection, the number $`N`$ is entered, followed by a list of the break points $`y(i)`$, $`i=1\ldots N`$, and a list of the power indices $`p(i)`$, $`i=1\ldots N`$. The list must be ascending in $`y`$. Examples:
* Density falling off as $`y^{-2}`$ in the entire shell, as in a steady-state wind with constant velocity. The shell extends to 1000 times its inner radius:
```
density type = 1; N = 1; Y = 1.e3; p = 2
```
* Three consecutive shells in which the density fall-off softens from $`y^{-2}`$ to a constant distribution as the radius increases by successive factors of 10:
```
density type = 1
N = 3
transition radii = 10 100 1000
power indices = 2 1 0
```
2. Exponentially decreasing density distribution
$$\eta \propto \exp \left(-\sigma \,\frac{y-1}{Y-1}\right)$$
(3)
where $`Y`$ is the shell’s outer boundary and $`\sigma `$ determines the fall-off rate. Following the option flag, the user enters $`Y`$ and $`\sigma `$.
* Exponential fall-off of the density to $`e^{-4}`$ of its inner value at the shell’s outer boundary $`Y=100`$:
```
density type = 2; Y = 100; sigma = 4
```
#### 3.3.2 Radiatively Driven Winds
The density distribution options 3 and 4 are offered for the modeling of objects such as AGB stars, where the envelope expansion is driven by radiation pressure on the dust grains. DUSTY can compute the wind structure by solving the hydrodynamics equations, including dust drift and the star’s gravitational attraction, as a set coupled to radiative transfer. This solution is triggered with density type = 3, while density type = 4 utilizes an analytic approximation for the dust density profile which is appropriate in most cases and offers the advantage of a much shorter run time.
3. An exact calculation of the density structure from a full dynamics calculation. The calculation is performed for a typical wind in which the final expansion velocity exceeds 5 $`\mathrm{km}\,\mathrm{s}^{-1}`$, but is otherwise arbitrary. The only input parameter that needs to be specified is the shell thickness $`Y=r_{\mathrm{out}}/r_1`$.
* Numerical solution for radiatively driven winds, extending to a distance $`10^4`$ times the inner radius:
```
density type = 3; Y = 1.e4
```
The steepness of the density profile near the wind origin increases with optical depth, and with it the numerical difficulty. DUSTY handles the full dynamics calculation for models that have $`\tau _V`$ $`<`$ 1,000, corresponding to $`\dot{M}\lesssim 4\times 10^{-4}\,M_\odot \,\mathrm{yr}^{-1}`$.
4. When the variation of the flux-averaged opacity with radial distance is negligible, the problem can be solved analytically. In the limit of negligible drift, the analytic solution takes the form
$$\eta \propto \frac{1}{y^2}\left[\frac{y}{y-1+(v_1/v_e)^2}\right]^{1/2}$$
(4)
This density profile provides an excellent approximation under all circumstances to the actual results of detailed numerical calculations (previous option). The ratio of initial to final velocity, $`ϵ_v=v_1/v_e`$, is practically irrelevant as long as $`ϵ_v`$ $`<`$ 0.2. The selection density type = 4 invokes this analytical solution with the default value $`ϵ_v=0.2`$. As for the previous option, the only input parameter that needs to be specified in this case is the outer boundary $`Y`$.
* Analytical approximation for radiatively driven winds, the shell relative thickness is $`Y=10^4`$:
```
density type = 4; Y = 1.e4
```
Run times for this option are typically 2–3 times shorter and it can handle larger optical depths than the previous one. Although this option suffices for the majority of cases of interest, for detailed final fitting you may wish to switch to the former.
#### 3.3.3 Tabulated Profiles
Arbitrary density profiles can be entered in tabulated form in a file. The tabulation could be imported from another dynamical calculation (e.g., star formation), and DUSTY would produce the corresponding IR spectrum.
5. The input filename must be entered separately in the line following the numerical flag. This input file must consist of a three-line header of arbitrary text, followed by a two-column tabulation of radius and density, ordered in increasing radius. The inner radius (first entry) corresponds to the dust temperature $`T_1`$, entered previously (§3.2.3). Otherwise, the units of both radius and density are arbitrary; DUSTY will transform both to dimensionless variables. The number of entry data points is limited to a maximum of 1,000 but is otherwise arbitrary. DUSTY will transform the table to its own radial grid, with typically 20–30 points.
* Density profile tabulated in the file collapse.dat:
```
density type = 5; profile supplied in the file:
collapse.dat
```
This file is supplied with DUSTY and contains a tabulation of the profile $`\eta \propto y^{-3/2}`$, corresponding to steady-state accretion onto a central mass.
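* A density-profile file of this kind might look as follows (a schematic sketch; the numbers illustrate $`\eta \propto y^{-3/2}`$ and are not the actual contents of collapse.dat):
```
header line 1: steady-state accretion profile
header line 2: columns are radius and density
header line 3: arbitrary units
1.0    1.000
2.0    0.354
4.0    0.125
10.0   0.032
```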
In all cases, care must be taken that $`\eta `$ not become so small that roundoff errors cause spline oscillations and decrease accuracy. To avoid such problems, DUSTY will stop execution with a warning message whenever $`\eta `$ dips below $`10^{-12}`$ or its dynamic range exceeds $`10^{12}`$. This is particularly pertinent for very steep density profiles, where the outer boundary should be chosen with care.
### 3.4 Optical Depth
For a given set of the parameters specified above, DUSTY will generate up to 999 models with different overall optical depths. The list of optical depths can be specified in two different ways. DUSTY can generate a grid of optical depths spaced either linearly or logarithmically between two end-points specified in the input. Alternatively, an arbitrary list can be entered in a file.
1. Optical depths covering a specified range in linear steps: Following the option selection, the fiducial wavelength $`\lambda _0`$ (in $`\mu `$m) of optical depth $`\tau _0`$ is entered. The $`\tau _0`$ grid is then specified by its two ends and the number of points ($`\le 999`$).
* Models with 2.2 $`\mu `$m optical depths including all the integers from 1 to 100:
```
tau grid = 1
lambda0 = 2.2 micron
tau(min) = 1; tau(max) = 100
number of models = 100
```
2. Same as the previous option, only the $`\tau _0`$ range is covered in logarithmic steps:
* Three models with visual optical depth $`\tau _V`$ = 0.1, 1 and 10:
```
tau grid = 2
lambda0 = 0.55 micron
tau(min) = 0.1; tau(max) = 10
number of models = 3
```
3. Optical depths list entered in a file: The file name is entered on a single line after the option selection. The (arbitrary) header text of the supplied file must end with the fiducial wavelength $`\lambda _0`$, preceded by the equal sign, ‘=’. The list of optical depths, one per line up to a maximum of 999 entries, is entered next in arbitrary order. DUSTY will sort and run it in increasing $`\tau _0`$.
* Optical depths from the file taugrid.txt, supplied with the DUSTY distribution:
```
tau grid = 3; grid supplied in file:
taugrid.dat
```
The file taugrid.dat is used in the sample input files slab2.inp and sphere3.inp.
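* A tau-grid file of this form might look like the following (a schematic sketch; the contents of the distributed taugrid.dat may differ):
```
Sample grid of optical depths; the header must end
with the fiducial wavelength: lambda0 = 0.55
10.
0.1
1.
```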
### 3.5 Numerical Accuracy and Internal Bounds
The numerical accuracy and convergence of DUSTY’s calculations are controlled by the next input parameter, $`q_{\mathrm{acc}}`$. The accuracy is closely related to the set of spatial and wavelength grids employed by DUSTY. The wavelength grid can be modified by users to meet their specific needs (see §6.2) and does not change during execution. The spatial grids are automatically generated and refined until the fractional error of flux conservation at every grid point is less than $`q_{\mathrm{acc}}`$. Whenever DUSTY also calculates the density profile $`\eta `$, the numerical accuracy of that calculation is likewise controlled by $`q_{\mathrm{acc}}`$.
The recommended value is $`q_{\mathrm{acc}}=0.05`$, entered in all the sample input files. The accuracy level that can be achieved is related to the number of spatial grid points and the model’s overall optical depth. When $`\tau _V`$ $`<`$ 100, fewer than 30 points will usually produce a flux error of $`<`$ 1% already in the first iteration. However, as $`\tau _V`$ increases, the solution accuracy decreases if the grid is unchanged, and finer grids are required to maintain a constant level of accuracy. This is done automatically by DUSTY. The maximum number of grid points is bound by DUSTY’s array dimensions, which are controlled by the parameter npY, whose default value is 40. This internal limit suffices to ensure convergence at the 5% level for most models with $`\tau _V`$ $`<`$ 1000. (Convergence and execution speed can also be affected by the spectral shape of the input radiation: a hard spectrum heavily weighted toward short wavelengths, where the opacity is high, can have an effect similar to large $`\tau _V`$.) If higher levels of accuracy or larger $`\tau _V`$ are needed, DUSTY’s internal limits on array sizes must be expanded by increasing npY, as described in §6.1.
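* Since any text before the ‘=’ sign is arbitrary (§3), the accuracy entry may be labeled freely; for example:
```
accuracy = 0.05
```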
### 3.6 Output Control
The final input entries control DUSTY’s output. The first is a flag that sets the level of DUSTY’s verbosity during execution. With verbose = 1, DUSTY will output to the screen a minimal progress report of its execution. With verbose = 2 you get a more detailed report that allows tracing in case of execution problems. verbose = 0 suppresses all messages. The messages are printed to the standard output device with the FORTRAN statement write(\*). If you suspect that your system may not handle this properly, choose verbose = 0.
All other output and its control is explained in the next section. Note again that this section describes only the output for spherical models. All changes necessitated by the planar geometry are described separately in §5.2.
## 4 Output
A typical DUSTY run generates an enormous amount of information, and the volume of output can easily get out of hand. To avoid that, DUSTY’s default output is a single file that lists only minimal information about the run, as described next. All other output is optional and fully controlled by the user. §4.2 describes the optional output and its control.
### 4.1 Default Output
DUSTY always produces the output file fname.out for each model input fname.inp. In addition to a summary of the input parameters, the default output file tabulates global properties for each of the optical depths covered in the run. The table’s left column lists the sequential number ### of the model with the fiducial optical depth tau0 listed in the next column. Subsequent columns list quantities calculated by DUSTY for that tau0:
* F1 – the bolometric flux, in $`\mathrm{W}\,\mathrm{m}^{-2}`$, at the inner radius $`y=1`$. Only the external source contributes to F1 since the diffuse flux vanishes there under the point-source assumption. Note that F1 is independent of the overall luminosity, being fully determined by the scaled solution. The bolometric flux emerging from the spherical distribution is F1/$`Y^2`$.
Any measure of the shell dimension is irrelevant to the radiative transfer problem and thus not part of DUSTY’s calculations. Still, the shell size can be of considerable interest in many applications. For convenience, the next three output items list different measures of the shell size expressed in terms of redundant quantities such as the luminosity:
* r1(cm) – the shell inner radius where the dust temperature is T1, specified in the input (§3.2.3). This radius scales in proportion to $`L^{1/2}`$, where $`L`$ is the luminosity. The tabulated value corresponds to $`L=10^4\,L_\odot `$.
* r1/rc – where rc is the radius of the central source. This quantity scales in proportion to $`(T_e/T_1)^2`$, where $`T_e`$ is the external radiation effective temperature. The listed value is for $`T_e=10,000`$ K with two exceptions: when the spectral shape of the external radiation is the Planck or Engelke-Marengo function, the arguments of those functions are used for $`T_e`$.
* theta1 – the angular size, in arcsec, of the shell inner diameter. This angle depends on the observer’s position and scales in proportion to $`F_{\mathrm{obs}}^{1/2}`$, where $`F_{\mathrm{obs}}`$ is the observed bolometric flux. The tabulated value corresponds to $`F_{\mathrm{obs}}=10^{-6}\,\mathrm{W}\,\mathrm{m}^{-2}`$.
* Td(Y) – the dust temperature, in K, at the envelope’s outer edge.
* err – the numerical accuracy, in %, achieved in the run. Specifically, if $`r`$ is the ratio of smallest to largest bolometric fluxes in the shell, after accounting for radial dilution, then the error is $`(1-r)/(1+r)`$. Errors smaller than 1% are listed as zero.
When the density distribution is derived from a hydrodynamics calculation for AGB winds (§3.3.2), three more columns are added to fname.out listing the derived mass-loss rate, terminal outflow velocity and an upper bound on the stellar mass. These quantities possess general scaling properties in terms of the luminosity $`L`$, the gas-to-dust mass ratio $`r_{\mathrm{gd}}`$ and the dust grain bulk density $`\rho _s`$. The tabulations are for $`L=10^4\,L_\odot `$, $`r_{\mathrm{gd}}=200`$ and $`\rho _s=3\,\mathrm{g}\,\mathrm{cm}^{-3}`$, and their scaling properties are:
* Mdot – the mass loss rate in $`M_\odot \,\mathrm{yr}^{-1}`$, scales in proportion to $`L^{3/4}(r_{\mathrm{gd}}\rho _s)^{1/2}`$. This quantity has an inherent uncertainty of about 30% because varying the gravitational correction from 0 up to 50% has no discernible effect on the observed spectrum.
* Ve – the terminal outflow velocity in $`\mathrm{km}\,\mathrm{s}^{-1}`$, scales in proportion to $`L^{1/4}(r_{\mathrm{gd}}\rho _s)^{-1/2}`$. The provided solutions apply only if this velocity exceeds 5 $`\mathrm{km}\,\mathrm{s}^{-1}`$. Ve is subject to the same inherent uncertainty as Mdot.
* M$`>`$ – an upper limit in $`M_\odot `$ on the stellar mass $`M`$, scales in proportion to $`L/(r_{\mathrm{gd}}\rho _s)`$. The effect of gravity is negligible as long as $`M`$ is less than 0.5\*M$`>`$, and the density profile is then practically independent of $`M`$.
There is a slight complication with these tabulations when the dust optical properties are entered using optical properties = 3 (§3.2.1). With this option, the scattering and absorption cross sections are entered in a file, tabulated in arbitrary units since only their spectral shape is relevant for the solution of the radiative transfer problem. However, the conversion to mass-loss rate also requires the grain size, and this quantity is not specified when optical properties = 3 is used. DUSTY assumes that the entered values correspond to $`\sigma /V`$, the cross section per grain volume in $`\mu \mathrm{m}^{-1}`$. If that is not the case, replace $`r_{\mathrm{gd}}`$ with $`r_{\mathrm{gd}}V/\sigma `$ in the above scaling relations.
Finally, DUSTY assumes that the external radiation originates in a central point source. This assumption can be tested with published expressions for the central source angular size and occultation effect; from these it follows that the error introduced by the point-source assumption is no worse than 6% whenever
$$T_e>2\times \mathrm{max}[T_1,(F_{e1}/\sigma )^{1/4}].$$
(5)
Thanks to scaling, $`T_e`$ need not be specified and is entirely arbitrary as far as DUSTY is concerned. However, compliance with the point-source assumption implies that the output is meaningful only for sources whose effective temperature obeys eq. 5. For assistance with this requirement, fname.out lists the lower bound on $`T_e`$ obtained from this relation for optically thin sources. Since $`F_{e1}`$ decreases with optical depth, the listed bound ensures compliance for all the models in the series. However, in optically thick cases $`F_{e1}`$ may become so small that the listed bound greatly exceeds the actual limit from eq. 5. In those cases, the true bounds can be obtained, if desired, from eq. 5 and the model’s tabulated F1 (note again that with the point-source assumption, F1 = $`F_{e1}`$).
Black-body emission provides an absolute upper bound on the intensity of any thermal source. Therefore, input radiation whose spectral shape is the Planck function at temperature $`T`$ is subject to the limit $`T_e\ge T`$ even though $`T_e`$ is arbitrary in principle. In such cases $`T`$ must comply with eq. 5; otherwise DUSTY’s output is suspect and in fact could be meaningless. DUSTY issues a stern warning after the tabulation line of any model whose input spectral shape is the Planck or Engelke-Marengo function with a temperature that violates eq. 5.
### 4.2 Optional Output
In addition to the default output, the user can obtain numerous tabulations of spectra, imaging profiles and radial distributions of various quantities of interest for each of the optical depths included in the run. This additional output is controlled through flags entered at the end of the input file fname.inp that turn on and off the optional tabulations. Setting all flags to 0, as in sphere1.inp and slab1.inp, suppresses all optional tabulations and results in minimal output. A non-zero output flag triggers the production of corresponding output, occasionally requiring additional input. Further user control is provided by the value of the output flag. When a certain flag is set to 1, the corresponding output is listed in a single file that contains the tabulations for all the optical depth solutions. Setting the flag to 2 splits the output, when appropriate, tabulating the solution for each optical depth in its own separate file. This may make it more convenient for plotting purposes, for example, at the price of many small files. A few flags can also be set to 3, splitting the output even further.
Each of the following subsections describes in detail the optional tabulations triggered by one of the output flags and any additional input it may require. Appendix A summarizes all the output flags and the corresponding output files they trigger, and can be used for quick reference.
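As an illustration, the tail end of an input file might look like this. This is only a schematic sketch: it assumes the flags appear in the order of the subsections that follow, the labels are arbitrary since only the values after ‘=’ matter, and the commented sample input files show the exact layout:
```
verbose = 1
spectral properties = 1
detailed spectra = 2
images = 1
number of wavelengths = 2
wavelengths = 2.2, 10 micron
visibilities = 1
radial profiles = 0
run-time messages = 0
```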
#### 4.2.1 Properties of Emerging Spectra
Setting the first optional flag to 1 outputs a variety of spectral properties for all the model solutions to the file fname.spp. The tabulation has four header lines and starts with the model sequential number ###. The following columns list the corresponding tau0 and the scaling parameter $`\mathrm{\Psi }`$ for the model. The subsequent columns list fluxes $`f(\lambda )=\lambda F_\lambda /F`$, where $`F=\int F_\lambda \,d\lambda `$, for various wavelengths of interest:
* fV – relative emerging flux at 0.55 $`\mu `$m.
* fK – relative emerging flux at 2.2 $`\mu `$m.
* f12 – relative emerging flux at 12 $`\mu `$m, convolved with the IRAS filter for this wavelength.
Next are the IRAS colors, defined for wavelengths $`\lambda _1`$ and $`\lambda _2`$ in $`\mu `$m as:
$$[\lambda _2][\lambda _1]=\mathrm{log}\frac{\lambda _2f(\lambda _2)}{\lambda _1f(\lambda _1)}=\mathrm{log}\frac{F_\nu (\lambda _2)}{F_\nu (\lambda _1)}$$
(6)
Columns 5–7 list, in this order, C21 = $`[25][12]`$, C31 = $`[60][12]`$ and C43 = $`[100][60]`$. They are followed by tabulations of:
* b8-13 – the IRAS-defined spectral slope $`\beta _{813}`$ between 8 and 13 $`\mu `$m:
$$\beta _{8\mathrm{-}13}=4.74\,\log \frac{f(13)}{f(8)}-1.0$$
* b14-22 – the IRAS-defined spectral slope $`\beta _{1422}`$ between 14 and 22 $`\mu `$m:
$$\beta _{14\mathrm{-}22}=5.09\,\log \frac{f(22)}{f(14)}-1.0$$
* B9.8 – the relative strength of the 9.8 $`\mu `$m feature defined as
$$B_{9.8}=\mathrm{ln}\frac{f(9.8)}{f_c(9.8)},$$
where $`f_c(9.8)`$ is the continuum-interpolated flux across the feature.
* B11.3 – the relative strength of the 11.3 $`\mu `$m feature defined as above for B9.8.
* R9.8-18 – the ratio of the fluxes at 9.8 $`\mu `$m and 18 $`\mu `$m, $`f(9.8)/f(18)`$.
#### 4.2.2 Detailed spectra for each model
The next output flag triggers listing of detailed spectra for each model in the run. Setting this flag to 1 produces tables for the emerging spectra of all models in the single output file fname.stb. Setting the flag to 2 places each table in its own separate file, where file fname.s### contains the tabulation for model number ### in the optical depth sequence listed in the default output file (§4.1).
In addition to the emerging spectrum, the table for each model lists separately the contributions of various components to the overall flux, the spectral shape of the input radiation, and the wavelength dependence of the total optical depth. The following quantities are tabulated:
* lambda – the wavelength in $`\mu `$m
* fTot – the spectral shape of the total emerging flux $`f(\lambda )=\lambda F_\lambda /\int F_\lambda \,d\lambda `$. Values smaller than $`10^{-20}`$ are listed as 0.
* xAtt – fractional contribution of the attenuated input radiation to fTot
* xDs – fractional contribution of the scattered radiation to fTot
* xDe – fractional contribution of the dust emission to fTot
* fInp – the spectral shape of the input (unattenuated) radiation
* tauT – overall optical depth at wavelength lambda
* albedo – the albedo at wavelength lambda
#### 4.2.3 Images at specified wavelengths
The surface brightness is a luminosity-independent, self-similar distribution in $`b/r_1`$, the impact parameter scaled by the envelope inner radius (fig. 1); note that $`r_1`$ is listed in the default output file (§4.1) for a source luminosity of $`10^4\,L_\odot `$. DUSTY can produce maps of the surface brightness at up to 20 wavelengths, specified in the input file. Setting the option flag to 1 produces imaging tabulations for all the models of the run in the single output file fname.itb, while setting the flag to 2 puts the table for model number ### in its own separate file fname.i###.
Following the option selection flag, the number ($`\le 20`$) of desired wavelengths is entered first, followed by a list of these wavelengths in $`\mu `$m.
* Example of additional input data required in fname.inp for imaging output:
```
imaging tables (all models in one file) = 1
number of wavelengths = 8
wavelengths = 0.55, 1.0, 2.2, 4, 10, 50, 100, 1000 micron
```
Whenever a specified wavelength is not part of DUSTY’s grid, the corresponding image is obtained by linear interpolation from the neighboring wavelengths in the grid. If the nearest wavelengths are not sufficiently close, the interpolation errors can be substantial. For accurate modeling, all wavelengths specified for imaging should be part of the grid, modifying it if necessary (see §6.2).
Each map is tabulated with a single header line as follows:
* b $`=b/r_1`$, where $`b`$ is the impact parameter.
* t(b) $`=\tau (b)/\tau (0)`$, where $`\tau (b)`$ is the overall optical depth along a path with impact parameter b. Note that $`\tau (0)`$ is simply the overall radial optical depth tauT, listed in the file fname.s### (§4.2.2), and that t(b) doubles its value across the shell once the impact parameter exceeds the stellar radius.
* The intensity, in Jy arcsec<sup>-2</sup>, at each of the wavelengths listed in the header line.
A typical image contains a narrow central spike of width $`b_c=2r_c/r_1`$, where $`r_c`$ is the radius of the central source. Since this feature is unresolved in most observations, it is usually of limited interest. This spike is the only feature of the emerging intensity that depends on the effective temperature $`T_e`$ of the central source, which is irrelevant to DUSTY’s calculations. The width of the spike scales in proportion to $`T_e^{-2}`$, its height in proportion to $`T_e^4`$. The listed value is for $`T_e=10,000`$ K with two exceptions: when the spectral shape of the external radiation is the Planck or Engelke-Marengo function, the arguments of those functions are used for $`T_e`$.
#### 4.2.4 Visibilities
Visibility is the two-dimensional spatial Fourier transform of the surface brightness distribution. Since the surface brightness is a self-similar function of $`b/r_1`$, the visibility is a self-similar function of $`q\theta _1`$, where $`q`$ is the spatial frequency, $`\theta _1=2r_1/D`$ and $`D`$ is the distance to the source; note that $`\theta _1`$ is listed in the default output file (§4.1) for the location where $`F_{\mathrm{obs}}=10^{-6}\,\mathrm{W}\,\mathrm{m}^{-2}`$.
When imaging tables are produced, DUSTY can calculate from them the corresponding visibility functions. The only required input is the flag triggering this option; if images are not requested in the first place, this entry is skipped. When the visibility option flag is different from zero, it must be the same as the one for imaging. Setting both flags to 1 adds visibility tables for all models to the single file fname.itb. Setting the flags to 2 puts the imaging and visibility tables of each model in the separate file fname.i###; setting them to 3 further splits the output by putting each visibility table in the separate, additional file fname.v###.
Each visibility table starts with a single header line, which lists the specified wavelengths in the order they were entered. The first column lists the dimensionless scaled spatial frequency q = $`q\theta _1`$ and is followed by the visibility tabulation for the various wavelengths.
#### 4.2.5 Radial profiles for each model
The next option flag triggers tabulations of the radial profiles of the density, optical depth and dust temperature. Setting the flag to 1 produces tabulations for all the models of the run in the single output file fname.rtb, setting the flag to 2 places the table for model number ### in its own separate file fname.r###. The tabulated quantities are:
* y – dimensionless radius
* eta – the dimensionless, normalized radial density profile (§3.3)
* t – radial profile of the optical depth variation. At any wavelength $`\lambda `$, the optical depth at radius $`y`$ measured from the inner boundary is t\*tauT, where tauT is the overall optical depth at that wavelength, tabulated in the file fname.s### (§4.2.2).
* tauF – radial profile of the flux-averaged optical depth
* epsilon – the fraction of grain heating due to the contribution of the envelope to the radiation field.
* Td – radial profile of the dust temperature
* rg – radial profile of the ratio of radiation pressure to gravitational force, where both forces are per unit volume:
$$\frac{\mathcal{F}_{\mathrm{rad}}}{\mathcal{F}_{\mathrm{grav}}}=\frac{3L}{16\pi GMc\,r_{gd}}\,\frac{\sum _i n_{d,i}\,a_i^2\int Q_{i,\lambda }\,f_\lambda \,d\lambda }{\sum _i n_{d,i}\,\rho _{s,i}\,a_i^3}$$
(7)
Here $`f_\lambda =F_\lambda /\int F_\lambda \,d\lambda `$ is the local spectral shape, $`\rho _{s,i}`$ is the material solid density and $`n_{d,i}`$ the number density of grains with size $`a_i`$. The gas-to-dust ratio, $`r_{gd}`$, appears because the gas is collisionally coupled to the dust. The tabulated value is for $`\rho _s`$ = 3 g cm<sup>-3</sup>, $`L/M=10^4\,L_\odot /M_\odot `$ and $`r_{gd}=200`$. In the case of radiatively driven winds $`r_{gd}`$ varies in the envelope because of the dust drift, and this effect is properly accounted for in the solution. When the dust optical properties are entered using optical properties = 3, grain sizes are not specified (§3.2.1). This case is handled as described in the last paragraph of §4.1.
In the case of dynamical calculation with density type = 3 for AGB stars (§3.3.2), the following additional profiles are tabulated:
* u – the dimensionless radial velocity profile normalized to the terminal velocity Ve, which is tabulated for the corresponding overall optical depth in the file fname.out (§4.1).
* drift – the radial variation of $`v_\mathrm{g}/v_\mathrm{d}`$, the velocity ratio of the gas and dust components of the envelope. This quantity measures the relative decrease in dust opacity due to dust drift.
#### 4.2.6 Detailed Run-time messages
In case of an error, the default output file issues a warning. Optionally, additional, more detailed run-time error messages can be produced and might prove useful in tracing the program’s progress in case of a failure. Setting the corresponding flag to 1 produces messages for all the models in the single output file fname.mtb, setting the flag to 2 puts the messages for model number ### in its own separate file fname.m###.
## 5 Slab Geometry
DUSTY offers the option of calculating radiative transfer through a plane-parallel dusty slab. The slab is always illuminated from the left; additional illumination from the right is optional.
As long as the surfaces of equal density are parallel to the slab boundaries, the density profile is irrelevant: location in the slab is uniquely specified by the optical depth from the left surface. Unlike the spherical case, there is no reference to spatial variables, since the problem can be solved fully in optical-depth space. The other major difference involves the bolometric flux $`F`$. In the spherical case the diffuse flux vanishes at $`y=1`$ and $`F=F_{e1}/y^2`$, where $`F_{e1}`$ is the external bolometric flux at the shell inner boundary. In contrast, $`F`$ is constant in the slab and the diffuse flux does not vanish on either face. Therefore $`F/F_{e1}`$, where $`F_{e1}`$ is the bolometric flux of the left-side source at slab entry, is another unknown variable determined by the solution.
The slab geometry is selected by specifying density type = 0. The dust properties are entered as in the spherical case, with the dust temperature specified on the slab left surface instead of the shell inner boundary. The range of optical depths, too, is chosen as in the spherical case. The only changes from the spherical case involve the external radiation and the output.
### 5.1 Illuminating Source(s)
External radiation is incident from the left side. The presence of an optional right-side source is specified by a non-zero value of R, the ratio of the right-side bolometric flux at slab entry to that of the left-side source. Each input radiation is characterized by its spectral shape, which is entered exactly as in the spherical case (§3.1), and by its angular distribution. The only angular distributions that do not break the planar symmetry involve parallel rays, falling at some incident angle, and isotropic radiation (see figure 2). The parallel-rays distribution is specified by the cosine ($`>0.05`$) of the illumination angle; the isotropic distribution is selected by setting this input parameter to $`-1`$. Since oblique angles effectively increase the slab optical depth, run-times will increase with incidence angle.
* Slab geometry with illumination by parallel rays normal to the left surface. In this case, the spectral shape of the source is entered as in the spherical case. The only change from the spherical input is that the density profile is replaced by the following:
```
density type = 0
cos(angle) = 1.0 (spectral shape entered previously)
R = 0 (no source on the right)
```
Two-sided illumination is specified by a non-zero R, where $`0<R\le 1`$. The properties of the right-side source are specified following the input for R.
* Slab illuminated from both sides. The left-side radiation has isotropic distribution whose spectral shape has been entered previously. The right-side source has a bolometric flux half that of the left-side source and a black-body spectral shape with temperature 3,000 K. It illuminates the slab with parallel rays incident at an angle of $`60^{}`$ from normal. The density profile is replaced by the following:
```
density type = 0
cos(angle) = -1.0 (spectral shape entered previously)
R = 0.5
Properties of the right-side source:
cos(angle) = 0.5
Spectrum = 1; N = 1; Tbb = 3000 K
```
### 5.2 Slab Output
The output-control flags are identical to those in the spherical case and the output files are analogous, except for some changes dictated by the different geometry.
#### 5.2.1 Default Output
In the default fname.out, the first two columns are the same as in the spherical case (§4.1) and are followed by:
* Fe1 – bolometric flux, in $`\mathrm{W}\,\mathrm{m}^{-2}`$, of the left-side source at the slab left boundary.
* f1 $`=F/F_{e1}`$, where $`F`$ is the overall bolometric flux. Values at and below the internal accuracy of DUSTY’s flux computation, $`10^{-3}`$ when $`q_{\mathrm{acc}}`$ = 0.05, are listed as zero.
* r1(cm) – the distance at which a point source with luminosity $`10^4\,L_\odot `$ produces the bolometric flux $`F_{e1}`$.
* Td(K) – the dust temperature at the slab right boundary.
* Te(L) – the effective temperature, in K, obtained from $`F_{e1}=\sigma \,\mathrm{Te}^4`$. When the slab is also illuminated from the right, a column is added next for Te(R), the effective temperature obtained similarly from the right-side flux.
* err – the flux conservation error, defined as in the spherical case.
#### 5.2.2 Spectral Profiles
Unlike the spherical case, the slab optional spectral files list properties of the half-fluxes emerging from both sides of the slab, calculated over the forward and backward hemispheres perpendicular to the slab faces. The magnitudes of the bolometric half-fluxes on the slab right and left faces can be obtained from tabulated quantities via
$$F_{\mathrm{right}}=(R+f_1)\,F_{e1},\qquad F_{\mathrm{left}}=(1-f_1)\,F_{e1}.$$
The right-emerging radiation replaces the spherical output in fname.spp, fname.stb and fname.s###; analogous tables for the left-emerging radiation are simply added to the appropriate output files. Setting the relevant selection flags to 3 places these additional tables in their own separate files — fname.zpp for spectral properties and fname.z### for the detailed spectra of model number ### in the optical depth sequence.
Similar to the fTot column of the spherical case, the spectral shape of the right-emerging half-flux is printed in column fRight. It consists of three components whose fractional contributions are listed next, as in the spherical case: xAtt for the left-source attenuated radiation, xDs and xDe for the diffuse scattered and emitted components, respectively. Subsequent columns are as in the spherical case. The tables for the spectral shape of the left-emerging half-flux fLeft are analogous.
#### 5.2.3 Spatial Profiles
The output for spatial profiles is similar to the spherical case. The radial distance y and density profile eta are removed. The relative distance in optical depth from the left boundary, t, becomes the running variable, and the tabulations of tauF, epsilon and Td are the same (see §4.2.5). The tabulation for $`\mathcal{F}_{\mathrm{rad}}/\mathcal{F}_{\mathrm{grav}}`$ is dropped, replaced by three components of the overall bolometric flux: febol is the local net bolometric flux of the external radiation; fRbol and fLbol are, respectively, the rightward and leftward half-fluxes of the local diffuse radiation. All components are normalized by Fe1, so that the flux conservation relation is febol + fRbol - fLbol = f1 everywhere in the slab. Note that fRbol vanishes on the slab left face, fLbol on the right face.
DUSTY’s distribution contains two sample input files, slab1.inp and slab2.inp, which can be used as templates for the slab geometry. The output generated with slab1.inp is shown in appendix D.
## 6 User Control of DUSTY
DUSTY allows the user to control some of its inner workings by tinkering with the code statements that define the spatial and spectral grids. The appropriate statements were placed in the file userpar.inc, separate from the main dusty.f, and are embedded during compilation by the FORTRAN statement INCLUDE (userpar.inc must always stay in the same directory as the source code). After modifying statements in userpar.inc, DUSTY must be recompiled to enable the changes.
### 6.1 Array Sizes for Spatial Grid
The maximum size of DUSTY’s spatial grid is bound by array dimensions. These are controlled by the parameter npY which sets the limit on the number of radial points. The default value of 40 must be decreased when DUSTY is run on machines that lack sufficient memory (see §1) and increased when DUSTY fails to achieve the prescribed accuracy (see §3.5). This parameter is defined in userpar.inc via
```
PARAMETER (npY=40)
```
To modify npY simply open userpar.inc, change the number 40 to the desired value, save your change and recompile. That’s all. Every other modification follows a similar procedure. Since DUSTY’s memory requirements vary roughly as the second power of npY, the maximum value that can be accommodated on any given machine is determined by the system memory.
The parameter npY also defines the size npP of the grid used in angular integrations. In the case of planar geometry DUSTY uses analytic expressions for these integrations. Since this grid then becomes redundant, npP can be set to unity, allowing a larger maximum npY. The procedure is described in userpar.inc.
### 6.2 Wavelength Grid
DUSTY’s wavelength grid is used both in the internal calculations and for the output of all wavelength-dependent quantities. The number of grid points is set in userpar.inc by the parameter npL; the grid itself is read from the file lambda\_grid.dat, which must always stay in the same directory as the DUSTY executable. This file starts with an arbitrary number of text lines; the beginning of the wavelength list is signaled by an entry for the number of grid points. This number must equal both the npL entered in userpar.inc and the actual number of entries in the list.
The grid supplied with DUSTY contains 105 points from 0.01 to $`3.6\times 10^4`$ $`\mu `$m. The short-wavelength boundary ensures adequate coverage of input radiation from, for example, an O star, which peaks near 0.1 $`\mu `$m; potential effects of such hard radiation on the grain material are not included in DUSTY. The long-wavelength end ensures adequate coverage at all wavelengths where dust emission is potentially significant. Wavelengths can be added and removed provided the following rules are obeyed:
1. Wavelengths are specified in $`\mu `$m.
2. The shortest wavelength must be $`\le 0.01`$ $`\mu `$m, the longest $`\ge 3.6\times 10^4`$ $`\mu `$m.
3. The ratio of each consecutive pair of wavelengths must be $`\le `$ 1.5.
The order of entries is arbitrary; DUSTY sorts them in increasing wavelength, and the sorted list is used for all internal calculations and output. This provides a simple, convenient method for increasing the resolution in selected spectral regions: just add points at the end of the supplied grid until the desired resolution is attained, as sketched below. Make sure you update both entries of npL and recompile DUSTY.
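* For instance, to sharpen the resolution around the 10 $`\mu `$m silicate feature one could append four wavelengths at the end of the list and update the point count from 105 to 109 (a schematic sketch; the header layout of the distributed lambda\_grid.dat may differ):
```
arbitrary header text
number of grid points = 109
0.01
...   (the original 105 entries)
3.6E+04
9.5
9.7
9.9
10.1
```
The same change must then be made in userpar.inc, i.e., PARAMETER (npL=109), before recompiling.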
In practice, tinkering with the wavelength grid should be reserved for adding spectral features. Specifying the optical properties of the grains at a resolution coarser than that of the wavelength grid defeats the purpose of adding grid points. The optical properties of grains supported by DUSTY are listed on the default wavelength grid. Therefore, modeling of very narrow features requires both the entry of a finer grid in lambda\_grid.dat and the input of user-supplied optical properties (see §3.2.1) defined on that same grid.
## Appendices
## Appendix A Output Summary
DUSTY’s default output is the file fname.out, described in §4.1. Additional output is optionally produced through selection flags, summarized in the following table. The second column lists the section number where a detailed description of the corresponding output is provided.
## Appendix B Pitfalls, Real and Imaginary
This appendix provides a central depository of potential programming and numerical problems. Some were already mentioned in the text and are repeated here for completeness.
* FORTRAN requires termination of input records with a carriage return. Make sure you press the “Enter” key whenever you enter a filename in the last line of dusty.inp.
* In preparing input files, the following two rules must be carefully observed: (1) all required input entries must be specified, and in the correct order; (2) the equal sign, ‘=’, must be entered only as a flag to numerical input. When either rule is violated and DUSTY reaches the end of the input file while looking for additional input, you will obtain the error message:
```
****TERMINATED. EOF reached by RDINP while looking for input.
*** Last line read:
```
This message is a clear sign that the input is out of order.
* Linux apparently makes heavier demands on machine resources than Windows. On any particular PC, DUSTY may execute properly under Windows but fail under Linux for the same value of npY, dictating a smaller npY.
* DUSTY’s execution under the Solaris operating system occasionally gives the following warning message:
```
Note: IEEE floating-point exception flags raised:
Inexact; Underflow;
See the Numerical Computation Guide, ieee_flags(3M)
```
This ominous-looking message is also triggered on Solaris by other applications and is not unique to DUSTY. Its cause is not yet clear, and it is not issued on other platforms. In spite of the warning, the code performs fine and produces results identical to those obtained on machines that do not issue it.
* CRAY J90 machines have specific requirements on FORTRAN programs which prevent DUSTY from running in its present form. If you plan to run DUSTY on this platform, you will have to introduce some changes in the source code, such as replacing all DOUBLE PRECISION statements with REAL*4.
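As an illustration of rule (2) from the input-file item above, consider the following two input lines (the entry names are made up for illustration and are not actual dusty.inp keywords):

```
   source temperature = 2500      correct: '=' flags the number 2500
   this run assumes L = const     wrong: RDINP takes the stray '='
                                  as a flag for numerical input
```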
## Appendix C Sample Output File: sphere1.out
```
===========================
Output from program Dusty
Version: 2.0
===========================
INPUT parameters from file:
sphere1.inp
* ----------------------------------------------------------------------
* NOTES:
* This is a simple version of an input file producing a minimal output.
* ----------------------------------------------------------------------
Central source spectrum described by a black body
with temperature: 2500 K
--------------------------------------------
Abundances for supported grains:
Sil-Ow Sil-Oc Sil-DL grf-DL amC-Hn SiC-Pg
1.000 0.000 0.000 0.000 0.000 0.000
MRN size distribution:
Power q: 3.5
Minimal size: 5.00E-03 microns
Maximal size: 2.50E-01 microns
--------------------------------------------
Dust temperature on the inner boundary: 800 K
--------------------------------------------
Density described by 1/r**k with k = 2.0
Relative thickness: 1.000E+03
--------------------------------------------
Optical depth at 5.5E-01 microns: 1.00E+00
Required accuracy: 5%
--------------------------------------------
====================================================
For compliance with the point-source assumption, the
following results should only be applied to sources
whose effective temperature exceeds 1737 K.
====================================================
RESULTS:
--------
### tau0 F1(W/m2) r1(cm) r1/rc theta1 Td(Y) err
### 1 2 3 4 5 6 7
==========================================================
1 1.00E+00 2.88E+04 3.26E+14 8.78E+00 2.43E+00 44 0
==========================================================
(1) Optical depth at 5.5E-01 microns
(2) Bolometric flux at the inner radius
(3) Inner radius for L=1E4 Lsun
(4) Ratio of the inner to the stellar radius
(5) Angular size (in arcsec) when Fbol=1E-6 W/m2
(6) Dust temperature at the outer edge (in K)
(7) Maximum error in flux conservation (%)
=================================================
Everything is OK for all models
========== THE END ==============================
```
## Appendix D Sample Output File: slab1.out
```
===========================
Output from program Dusty
Version: 2.0
===========================
INPUT parameters from file:
slab1.inp
* ----------------------------------------------------------------------
* NOTES:
* This is a simple version of an input file for calculation in
* planar geometry with single source illumination.
* ----------------------------------------------------------------------
Left-side source spectrum described by a black body
with temperature: 2500 K
--------------------------------------------
Abundances for supported grains:
Sil-Ow Sil-Oc Sil-DL grf-DL amC-Hn SiC-Pg
1.000 0.000 0.000 0.000 0.000 0.000
MRN size distribution:
Power q: 3.5
Minimal size: 5.00E-03 microns
Maximal size: 2.50E-01 microns
--------------------------------------------
Dust temperature on the slab left boundary: 800 K
--------------------------------------------
Calculation in planar geometry:
cos of left illumination angle = 1.000E+00
R = 0.000E+00
--------------------------------------------
Optical depth at 5.5E-01 microns: 1.00E+00
Required accuracy: 5%
--------------------------------------------
RESULTS:
--------
### tau0 Fe1(W/m2) f1 r1(cm) Td(K) Te(L) err
### 1 2 3 4 5 6 7
==========================================================
1 1.00E+00 2.59E+04 9.33E-01 3.43E+14 755 8.22E+02 0
==========================================================
(1) Optical depth at 5.5E-01 microns
(2) Bol.flux of the left-side source at the slab left boundary
(3) f1=F/Fe1, where F is the overall bol.flux in the slab
(4) Position of the left slab boundary for L=1E4 Lsun
(5) Dust temperature at the right slab face
(6) Effective temperature of the left source (in K)
(7) Maximum error in flux conservation (%)
=================================================
Everything is OK for all models
========== THE END ==============================
```
## Appendix E Library of Optical Constants
DUSTY’s distribution includes a library of data files with the complex refractive indices of various compounds of interest. The files are standardized to the format DUSTY accepts. Included are the optical constants for the seven built-in dust types as well as other frequently encountered astronomical dust components. This library will be updated continuously at the DUSTY site. The following table lists all the files currently supplied.